https://arxiv.org/abs/0710.2387
Characterizing arbitrarily slow convergence in the method of alternating projections
In 1997, Bauschke, Borwein, and Lewis stated a trichotomy theorem that characterizes when the convergence of the method of alternating projections can be arbitrarily slow. However, there are two errors in their proof of this theorem. In this note, we show that although one of the errors is critical, the theorem itself is correct. We give a different proof that uses the multiplicative form of the spectral theorem, and we show that the theorem holds in any real or complex Hilbert space, not just in a real Hilbert space.
\section{Introduction}\label{S: intro} For the notation and basic Hilbert space results necessary to read this paper, the book \cite{deu;01} is a good source, especially Chapter 9. Let $H$ be a (real or complex) Hilbert space with inner product $\langle x, y \rangle$ and norm $\|x\|=\sqrt{\langle x, x\rangle}$. If $M$ is any closed (linear) subspace of $H$, let $P_M$ denote the orthogonal projection onto $M$. That is, $P_M: H \to M$ is defined by \[ \|x-P_M(x)\|=\inf_{y\in M}\|x-y\|. \] Let $M_1$ and $M_2$ be closed subspaces in $H$ and $M:=M_1\cap M_2$. It is well-known that $P_{M_1}P_{M_2}=P_M$ if and only if $P_{M_1}$ and $P_{M_2}$ commute: $P_{M_1}P_{M_2}=P_{M_2}P_{M_1}$. Von Neumann established the following result which yields an interesting analogue in the non-commuting case. \begin{thm}\label{vNH}{\rm\textbf{(von Neumann \cite{vN;50})}} For each $x\in H$, there holds \begin{equation}\label{vNH;eq1} \lim_{n\to \infty} \|(P_{M_2}P_{M_{1}})^n(x)-P_M(x)\|=0. \end{equation} \end{thm} The method of constructing the sequence $(P_{M_2}P_{M_{1}})^n(x)$ by alternately projecting onto one subspace and then the other is called the \emph{method of alternating projections}. While von Neumann's theorem shows that the sequence of iterates $(P_{M_2}P_{M_{1}})^n(x)$ \emph{always} converges to $P_M(x)$ for every $x$, it does not say anything about the speed or rate of convergence. To say something about this, we will use the notion of angle between subspaces. Recall that the (Friedrichs) \textbf{angle} between the subspaces $M_1$ and $M_2$ is defined to be the angle in $[0, \pi/2]$ whose cosine is given by \[ c(M_1, M_2):=\sup\{|\langle x, y\rangle | \mid x\in M_1\cap M^\perp\cap B_H, \; y\in M_2\cap M^\perp\cap B_H \}, \] where $B_H:=\{x\in H \mid \|x\| \le 1\}$ is the unit ball in $H$. It is easy to see that $0\le c(M_1, M_2) \le 1$. 
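As a quick numerical illustration (not part of the original argument), consider two planes in $\mathbb{R}^3$ meeting along a common line at angle $t$. The alternating products $(P_{M_2}P_{M_1})^n x$ then converge to $P_M x$ as von Neumann's theorem predicts. The sketch below assumes NumPy; the angle $t=0.3$ and the starting point are arbitrary choices of ours.

```python
import numpy as np

t = 0.3                                      # angle between the planes along their common line
u = np.array([0.0, np.cos(t), np.sin(t)])
e1 = np.array([1.0, 0.0, 0.0])

P1 = np.diag([1.0, 1.0, 0.0])                # projection onto M1 = span{e1, e2}
P2 = np.outer(e1, e1) + np.outer(u, u)       # projection onto M2 = span{e1, u}
PM = np.outer(e1, e1)                        # projection onto M = M1 ∩ M2 = span{e1}

x = np.array([0.7, -1.2, 2.0])
y = x.copy()
for _ in range(25):
    y = P2 @ (P1 @ y)                        # one step of alternating projections
err = np.linalg.norm(y - PM @ x)

# the error contracts by cos(t)^2 per iteration: small after 25 steps, but nonzero
assert 0.0 < err < 0.2
```

Shrinking $t$ toward $0$ (i.e., letting the cosine of the angle approach $1$) makes the contraction factor approach $1$ and the convergence visibly slower, previewing the role the angle plays below.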
\begin{thm}\label{AKW}{\rm\textbf{(Aronszajn \cite{aro;50})}} For each $x\in H$ and $n\ge 1$, we have \begin{equation}\label{AKW;eq1} \|(P_{M_2}P_{M_1})^n(x)-P_M(x)\| \le c(M_1, M_2)^{2n-1}\|x\|. \end{equation} \end{thm} Kayalar and Weinert \cite{kw;88} showed that the constant in Aronszajn's theorem is the smallest possible independent of $x$. More precisely, they proved that \begin{equation}\label{kw:eq1} \|(P_{M_2}P_{M_1})^n-P_M\|=c(M_1, M_2)^{2n-1} \mbox{\quad for each $n\in {\mathbb N}$. } \end{equation} The usefulness of the bound in (\ref{AKW;eq1}) depends on knowing when the cosine of the angle between $M_1$ and $M_2$ is less than one, i.e., when the angle is positive. A useful characterization of when this happens is the following. \begin{lem}\label{DS} $c(M_1, M_2)<1$ if and only if $M_1+M_2$ is closed. \end{lem} This lemma is a consequence of results of Deutsch \cite{deu;84} and Simonic, whose result appeared in \cite[Lemma 4.10]{bb;93} (see also \cite[Theorem 9.35, p. 222]{deu;01}). Recall that a sequence $(x_n)$ is said to converge to $x$ \textbf{linearly} provided there exist $\alpha<1$ and a constant $c$ such that \[ \|x_n-x\| \le c\alpha^n \mbox{\quad for each $n\ge 1$}. \] In this case, we say that the rate of convergence is $\alpha$. Using Lemma \ref{DS} and Theorem \ref{AKW}, we see that there is \emph{linear convergence} for the method of alternating projections whenever the sum of the subspaces is closed. What can be said when the sum is not closed? Franchetti and Light \cite{fl;86} gave the first example of a Hilbert space and two closed subspaces whose sum is not closed such that, given any sequence of reals decreasing to zero, there exists a point in the space for which the convergence in von Neumann's theorem is at least as slow as this sequence of reals. 
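The sharpness identity (\ref{kw:eq1}) of Kayalar and Weinert can be checked directly in the simplest nontrivial case: two lines in $\mathbb{R}^2$ at angle $t$, where $M=\{0\}$, $P_M=0$, and $c(M_1, M_2)=\cos t$. A minimal sketch (assuming NumPy; the angle $t=0.4$ is an arbitrary choice of ours):

```python
import numpy as np
from numpy.linalg import matrix_power, norm

t = 0.4                                      # angle between the two lines
a = np.array([1.0, 0.0])                     # M1 = span{a}
b = np.array([np.cos(t), np.sin(t)])         # M2 = span{b}, so M1 ∩ M2 = {0} and P_M = 0
PA, PB = np.outer(a, a), np.outer(b, b)

c = np.cos(t)                                # here c(M1, M2) = cos(t)
for n in (1, 2, 5, 10):
    op_norm = norm(matrix_power(PB @ PA, n), 2)      # spectral norm of (P_B P_A)^n
    assert abs(op_norm - c ** (2 * n - 1)) < 1e-10   # the operator norm equals c^(2n-1) exactly
```

The exponent $2n-1$ (rather than $2n$) reflects the fact that the last half-step of the $n$-th iteration projects only once.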
But this still left open the question of whether such a construction could be made in \emph{any} Hilbert space whenever $M_1$ and $M_2$ were \emph{any} closed subspaces whose sum was not closed. In their study of the method of alternating projections, Bauschke, Borwein, and Lewis \cite{bbl;97} stated the following dichotomy. (Actually, they stated their result as a trichotomy since they were considering the more general setting of closed \emph{affine} sets, i.e., translates of subspaces, rather than subspaces. In this situation, unlike the subspace case, one must also consider the possibility that the intersection of the affine sets is empty. However, when the intersection is nonempty, the affine sets case easily reduces to the subspace case by a simple translation.) Roughly speaking, it states that in the method of alternating projections, either there is linear convergence for each starting point, or there exist starting points for which the convergence is arbitrarily slow. \begin{thm}\label{BBL} {\rm (dichotomy)} Let $M_1$ and $M_2$ be closed subspaces in a Hilbert space $H$ and $M=M_1\cap M_2$. Then exactly one of the following alternatives holds. \begin{enumerate} \item[{\rm(1)}] $M_1+M_2$ is closed. Then for each $x\in H$, the sequence $(P_{M_2}P_{M_1})^n(x)$ converges linearly to $P_M(x)$ with a rate $[c(M_1, M_2)]^2$. \item[{\rm(2)}] $M_1+M_2$ is not closed. Then for each $x\in H$, the sequence $(P_{M_2}P_{M_1})^n(x)$ converges to $P_M(x)$. But convergence is ``arbitrarily slow'' in the following sense: for each sequence $(\lambda_n)$ of positive real numbers with $1>\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \to 0$, there exists a point $x_\lambda \in H$ such that \[ \|(P_{M_2}P_{M_1})^n(x_\lambda)-P_M(x_\lambda)\| \ge \lambda_n\mbox{\quad for all $n$.} \] \end{enumerate} \end{thm} \noindent \textbf{Remark} Clearly, the first statement of Theorem \ref{BBL} is an immediate consequence of Theorem \ref{AKW} and Lemma \ref{DS}. 
Thus we need only verify the second statement. We will do this in Section \ref{S: BBL} below. \section{Multiplicative form of the spectral theorem} The main fact that we will use in the proof of Theorem \ref{BBL} is the multiplicative form of the spectral theorem (see Halmos \cite{hal;63} or Reed-Simon \cite[Corollary on p. 227]{rs;72}). Recall that a bounded linear operator $U:H_1\to H_2$ between Hilbert spaces $H_1$ and $H_2$ is called \emph{unitary} if $U$ is invertible and $U^*=U^{-1}$. It follows that a unitary operator is isometric: $\|Ux\|=\|x\|$ for each $x\in H_1$. Since the inverse of a unitary operator is unitary, it too is isometric. (We will use these facts in a few places below without explicit mention.) \begin{thm}\label{RS}{\rm\textbf {(Spectral Theorem; multiplicative form)}} Let $H$ be a (real or complex) Hilbert space, and let $T$ be a self-adjoint bounded linear operator on $H$. Then there exist a finite measure space $(\Omega, \mu )$, a bounded real-valued function $F$ on $\Omega$, and a unitary map $U: H\to L_2(\Omega, \mu)$ such that \begin{equation}\label{RS;eq1} UTU^{-1}f=F\cdot f \mbox{\quad for all $f\in L_2(\Omega, \mu)$.} \end{equation} Defining $D: L_2(\Omega, \mu) \to L_2(\Omega, \mu)$ to be the operator ``multiplication by $F$'', $(Df)(t):=F(t)f(t)$, this can be expressed in operator notation as \begin{equation}\label{RS;eq2} UTU^{-1}=D. \end{equation} \end{thm} Actually, in both \cite{hal;63} and \cite{rs;72}, the theorem is stated for a \emph{complex} Hilbert space only, and \cite{rs;72} even assumes separability. However, it is easy to check that each of the tools used in the proof in \cite{hal;63}, for example, has a corresponding real space analogue. \medskip\noindent \textbf{Acknowledgements} We are greatly indebted to Joel Anderson, Nigel Higson, and Barry Simon for personally transmitting some very useful comments to us related to the multiplicative form of the spectral theorem. 
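In finite dimensions, Theorem \ref{RS} reduces to the eigendecomposition of a symmetric matrix: $\Omega$ is a finite set with counting measure, $U$ is the change of basis to an eigenbasis, and $F$ is the list of eigenvalues. A hedged sketch of this special case (assuming NumPy; the $4\times 4$ matrix is a random example of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
T = (A + A.T) / 2                    # a self-adjoint operator on R^4

w, V = np.linalg.eigh(T)             # T = V diag(w) V^T with V orthogonal
U = V.T                              # plays the role of the unitary map onto L2 of a 4-point space
D = U @ T @ U.T                      # U T U^{-1}

# U T U^{-1} acts as multiplication by the real-valued function F = w
assert np.allclose(D, np.diag(w), atol=1e-12)
```

The content of the theorem is that this diagonalization survives in infinite dimensions, provided "diagonal matrix" is replaced by "multiplication operator on some $L_2(\Omega, \mu)$".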
A self-adjoint operator $T$ on $H$ is called \emph{positive} if $\langle Tx, x\rangle \ge 0$ for each $x\in H$. A simple, but important, example of a positive operator is the orthogonal projection $P_S$ onto any closed subspace $S \subset H$ (see, e.g., \cite[p. 79]{deu;01}). \begin{cor}\label{cor to RS} Assume the hypothesis of Theorem \ref{RS}. If $T$ is also positive, then the bounded real-valued function $F$ of Theorem \ref{RS} is also nonnegative a.e.$(\mu)$. \end{cor} \emph{Proof. } Let $f\in L_2(\Omega, \mu)$ be arbitrary and $y=U^{-1}f$. Since $T$ is positive, we have that \begin{eqnarray*} \int _{\Omega}F|f|^2d\mu&=& \langle Ff, f\rangle =\langle Df, f\rangle= \langle UTU^{-1}f, f\rangle\\ &=&\langle TU^{-1}f, U^*f\rangle =\langle Ty, y\rangle \ge 0. \end{eqnarray*} Briefly, $\int _{\Omega}F|f|^2d\mu \ge 0$ for each $f\in L_2(\Omega, \mu)$. We readily deduce that $F\ge 0$ a.e.($\mu$). $\blacksquare$ \section{Proof of Theorem \ref{BBL} }\label{S: BBL} In this section we will prove the second statement of Theorem \ref{BBL}. Our proof is along the same general lines as in \cite{bbl;97} in that we proceed by a series of small steps that are each easily digested. However, there are subtle errors in steps 2 and 3 of \cite{bbl;97} (see Section \ref{S:Errors} for the details). We will avoid these errors by using Theorem \ref{RS} and following a somewhat different path. \textbf{Proof of the second statement in Theorem \ref{BBL}.} Suppose $M_1+M_2$ is not closed, and let $(\lambda_n)$ be a sequence with $1>\lambda_1\ge \lambda_2 \ge \cdots \ge \lambda_n>0$, and $\lambda_n \to 0$. By Lemma \ref{DS}, $c(M_1, M_2)=1$. Let \begin{equation}\label{eq:AB} A=M_1\cap M^\perp \mbox{\quad and \quad} B=M_2\cap M^\perp. \end{equation} Note that $A$ and $B$ are closed subspaces with $A\cap B=\{0\}$. Clearly, \begin{equation}\label{eq:cAB} c(A, B)=c(M_1, M_2)=1 \end{equation} and hence, by Lemma \ref{DS} again, $A+B$ is not closed. 
Since $c(A, B)=\|P_BP_A\|$ by \cite{deu;84} (see also \cite[Lemma 9.5(7), p. 197]{deu;01}), it follows that $\|P_BP_A\|=1$. \begin{lem} \label{T facts} The operator $T:=P_AP_BP_A$ is a bounded self-adjoint linear operator on $H$ which is positive and $\|T\|=1$. Hence there exists a finite measure space $(\Omega, \mu)$, a nonnegative bounded function $F$ on $\Omega$, and a unitary operator $U: H\to L_2:=L_2(\Omega, \mu)$ such that \begin{equation}\label{eq1} UTU^{-1}=D, \end{equation} where $D: L_2 \to L_2$ is defined by $Df:=Ff$ for each $f\in L_2$. \end{lem} \emph{Proof of Lemma \ref{T facts}}. By Corollary \ref{cor to RS}, it suffices to verify the first statement of the lemma. Clearly, $T$ is self-adjoint and bounded. Moreover, using \cite[Corollary 5.17]{dhII;06}, $\|T\|=\|P_AP_BP_A\|=\|P_BP_A\|^2=1$. Fix any $x\in H$ and set $y=P_Ax$. Since $P_B$ is positive, we have that \[ \langle Tx, x \rangle=\langle P_AP_BP_Ax, x \rangle= \langle P_BP_Ax, P_Ax \rangle= \langle P_By, y\rangle \ge 0. \] This shows that $T$ is positive on $H$ and completes the proof of Lemma \ref{T facts}. \medskip For each $k\in {\mathbb N}\!:=\{1, 2, \dots\}$, let $s_k$ be the largest integer such that $s_k\lambda_k<1$. Then the following claim is clear. \medskip \textbf{Claim 1.} \textsl{$s_k\lambda_k<1 \le (s_k+1)\lambda_k$ for all $k\in {\mathbb N}$, $s_1\le s_2\le s_3\le \cdots$, and each $s_k$ occurs only finitely often.} \medskip Next let $(t_n)$ be the strictly increasing sequence of integers with \begin{equation} \{t_1, t_2, \dots \}=\{s_1, s_2, \dots \}. \end{equation} Note that since $(t_n)$ is a subsequence of $(n)$, it follows that \begin{equation} \sum_1^\infty \frac1{t_n^2} < \infty. \end{equation} For each $n\in{\mathbb N}$, we define \begin{equation} k_0(n): =\min \{ k\mid s_k=t_n \} \mbox{\quad and\quad } k_1(n):=\max \{ k \mid s_k=t_n\}. 
\end{equation} It is clear that $k_0(n) \to \infty$, $k_1(n) \to \infty$, and \begin{equation}\label{eq1;claim 1} s_{k_0(n)-1}=t_{n-1}<t_n=s_{k_0(n)}=s_{k_0(n)+1}=\cdots=s_{k_1(n)} < t_{n+1}=s_{k_1(n)+1}. \end{equation} Set \begin{equation}\label{eq2;claim 1} \alpha_n:=(\lambda_{k_0(n)}t_n)^{\frac1{2k_1(n)}} \mbox{\quad for each $n\in {\mathbb N}$}. \end{equation} \medskip \textbf{Claim 2.} \textsl{ For each $n\in {\mathbb N}$, \begin{equation}\label{eq1;claim 2} 1> \lambda_{k_0(n)}s_{k_0(n)}=\lambda_{k_0(n)}t_n \ge 1-\lambda_{k_0(n)}, \end{equation} \begin{equation}\label{eq2;claim 2} 0< \alpha_n < 1, \mbox{\quad and \quad} \alpha_n \to 1. \end{equation}} To see this, note that by definition, $\lambda_{k_0(n)}t_n=\lambda_{k_0(n)}s_{k_0(n)} < 1$, and $1\le \lambda_{k_0(n)}(s_{k_0(n)}+1)$. But the latter inequality implies that $1-\lambda_{k_0(n)} \le \lambda_{k_0(n)}s_{k_0(n)}=\lambda_{k_0(n)}t_n$. Also, $\lambda_{k_0(n)}t_n < 1$ implies that $\alpha_n < 1$. Since $\lambda_{k_0(n)} \to 0$, relation (\ref{eq1;claim 2}) implies that $\lambda_{k_0(n)}t_n \to 1$. This, along with $k_1(n) \to \infty$, shows that $\alpha_n \to 1$, which completes the proof of Claim 2. We note that the first two claims follow exactly as in the proof given in \cite{bbl;97}. However, at this point our approach will deviate significantly from that of \cite{bbl;97}. \medskip \textbf{Claim 3.} $\mu\{F^{-1}([1, \infty))\} =0$. \medskip To see this, let $S:=F^{-1}[1, \infty)$ and $y=U^{-1}(\chi_S)$, where $\chi_S$ denotes the characteristic function of $S$: $\chi_S(t)=1$ if $t\in S$ and $0$ otherwise. We must show that $\mu(S)=0$. Since \begin{equation}\label{first eq; claim 3} \|y\|= \|U^{-1}(\chi_S)\|=\|\chi_S\|=\left(\int_S1d\mu\right)^{1/2}=[\mu(S)]^{1/2}, \end{equation} it suffices to show that $y=0$. 
Using (\ref{first eq; claim 3}), we have \begin{eqnarray} \|Ty\|&=& \|U^{-1}DUy\|=\|U^{-1}D( \chi_{S})\| = \|U^{-1}(F\chi_S)\|=\|F\chi_S\| \nonumber \\ &=&\left[\int_SF^2d\mu\right]^\frac12 \ge \left[\int_S1d\mu\right]^\frac12=\|y\|. \label{eq1;claim 3} \end{eqnarray} This shows that $\|Ty\|\ge \|y\|$. But since $T=P_AP_BP_A$ is the product of norm one operators, $\|Ty\|\le \|y\|$. Thus $\|Ty\|=\|y\|$. We deduce that \begin{equation}\label{eq2;claim 3} \|y\|=\|P_AP_BP_Ay\| \le \|P_BP_Ay\| \le \|P_Ay\| \le \|y\|. \end{equation} Thus we must have equality holding throughout the string of inequalities (\ref{eq2;claim 3}). It follows (see, e.g., \cite[Theorem 5.8(2), p. 76]{deu;01}) that $y\in A\cap B=\{0\}$ and hence $y=0$. This proves Claim 3. \medskip \textbf{Claim 4.} \textsl{For each} $\varepsilon>0$, $\mu\{F^{-1}((1-\varepsilon, 1))\}>0$. \medskip If not, there exists $\varepsilon>0$ such that $\mu\{F^{-1}((1-\varepsilon, 1))\}=0$. Choose any $ y \in H$ and set $g=Uy$. Then, using Claim 3, we have that \begin{eqnarray*} \|Ty\|^2&=&\|U^{-1}DUy\|^2=\|DUy\|^2=\|Dg\|^2=\int|Fg|^2d\mu =\int F^2|g|^2 d\mu\\ &=&\int_{F^{-1}([0, 1-\varepsilon])}F^2|g|^2d\mu +\int_{F^{-1}((1-\varepsilon, 1))}F^2|g|^2d\mu +\int_{F^{-1}([1, \infty))}F^2|g|^2d\mu \\ &\le&(1-\varepsilon)^2\int_{F^{-1}([0, 1-\varepsilon])}|g|^2d\mu +0 +0 \le (1-\varepsilon)^2\int |g|^2d\mu\\ &=& (1-\varepsilon)^2\|g\|^2=(1-\varepsilon)^2\|Uy\|^2=(1-\varepsilon)^2\|y\|^2. \end{eqnarray*} Briefly, $\|Ty\| \le (1-\varepsilon)\|y\|$ for each $y\in H$. It follows that $\|T\| \le 1-\varepsilon$, which (by Lemma \ref{T facts}) contradicts $\|T\|=1$. This proves Claim 4. \medskip \textbf{Claim 5.} \textsl{For each $\varepsilon>0$, there exists $\varepsilon_1 \in (0, \varepsilon)$ such that} \[ \mu\{ F^{-1}((1-\varepsilon, 1-\varepsilon_1))\} >0. 
\] To verify this, we use Claim 4 and the countable additivity of $\mu$ to obtain \begin{eqnarray*} 0&<& \mu\{F^{-1}((1-\varepsilon, 1))\}=\mu\left\{\bigcup_{i=1}^\infty F^{-1}\left(\left(1-\frac{\varepsilon}i, 1-\frac{\varepsilon}{i+1}\right]\right)\right\}\\ &=&\sum_{i=1}^\infty \mu\left\{F^{-1}\left(\left(1-\frac\varepsilon{i}, 1-\frac\varepsilon{i+1}\right]\right)\right\}. \end{eqnarray*} Thus there exists an integer $i$ such that $\mu\left\{F^{-1}\left((1-\frac{\varepsilon}{i}, 1-\frac{\varepsilon}{i+1}]\right)\right\} >0$. Let $\varepsilon_1=\frac{\varepsilon}{i+2}$. Then $\varepsilon_1 \in (0, \varepsilon)$ and \[ \mu\left\{ F^{-1}\left((1-\varepsilon, 1-\varepsilon_1)\right)\right\} \ge \mu\left\{F^{-1}\left((1-\frac\varepsilon{i}, 1-\varepsilon_1)\right)\right\} > 0. \] This proves Claim 5. \medskip \textbf{Claim 6.} \textsl{There exists a sequence of reals $(\beta_n)\subset (0, 1)$ such that} $\alpha_n^2\le \beta_n< \beta_{n+1}<1$ and $\mu\{F^{-1}\left([\beta_n, \beta_{n+1})\right)\}>0$ \textsl{for each $n\in {\mathbb N}$}. \medskip We prove Claim 6 by induction. For $n=1$, take $\beta_1=\alpha_1^2$. Then $\beta_1 < 1$. Assume next that $\beta_1, \dots, \beta_m$ have been chosen so that $\beta_1<\beta_2< \cdots <\beta_m < 1$, $\beta_k \ge \alpha_k^2$ for $k=1, 2, \dots, m$, and $\mu\{F^{-1}\left( [\beta_k, \beta_{k+1})\right)\}> 0$ for $k=1, 2, \dots, m-1$. Let $\varepsilon:=\min\{ 1-\alpha^2_{m+1}, 1-\beta_m\}$. Then $\varepsilon>0$ and Claim 5 implies the existence of $\varepsilon_1\in (0, \varepsilon)$ such that $\mu\{F^{-1}\left((1-\varepsilon, 1-\varepsilon_1)\right)\}>0$. Let $\beta_{m+1}:=1-\varepsilon_1$. Then $\beta_{m+1}> 1-\varepsilon \ge \beta_m$. Also, $\beta_{m+1} > 1-\varepsilon \ge \alpha^2_{m+1}$. Finally, $\mu\{F^{-1}\left( [\beta_m, \beta_{m+1})\right) \} \ge \mu\{F^{-1}\left( [1-\varepsilon, 1-\varepsilon_1)\right)\} >0$. This completes the induction step and hence the proof. 
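The combinatorial bookkeeping in Claims 1 and 2 is easy to test numerically for a concrete sequence. The sketch below is an illustration only; the particular choice $\lambda_k = 1/(2+\lfloor k/2\rfloor)$ is ours. It computes $s_k$, $t_n$, $k_0(n)$, $k_1(n)$, and $\alpha_n$, and verifies the claimed inequalities:

```python
import math

K = 40
lam = [1 / (2 + k // 2) for k in range(1, K + 1)]   # 1 > lam_1 >= lam_2 >= ... -> 0
s = [math.ceil(1 / l) - 1 for l in lam]             # s_k: largest integer with s_k * lam_k < 1

# Claim 1: s_k lam_k < 1 <= (s_k + 1) lam_k, and s_1 <= s_2 <= ...
assert all(sk * l < 1 <= (sk + 1) * l for sk, l in zip(s, lam))
assert all(s[i] <= s[i + 1] for i in range(K - 1))

t = sorted(set(s))                                  # t_1 < t_2 < ... enumerates {s_1, s_2, ...}
alphas = []
for tn in t:
    k0 = min(k for k in range(K) if s[k] == tn) + 1     # k_0(n), 1-based
    k1 = max(k for k in range(K) if s[k] == tn) + 1     # k_1(n), 1-based
    a = (lam[k0 - 1] * tn) ** (1 / (2 * k1))            # alpha_n
    assert 0 < a < 1                                    # Claim 2
    alphas.append(a)
assert alphas[-1] > 0.999                           # alpha_n -> 1
```

For this sequence every value $s_k$ occurs exactly twice (finitely often, as Claim 1 requires), and the computed $\alpha_n$ climb toward $1$, in line with Claim 2.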
\begin{defn} \label{orthog vectors} With $\beta_n$ given as in Claim 6, for each $n\in {\mathbb N}$, let $S_n:=F^{-1}( [\beta_n, \beta_{n+1}))$ and define the vector $e_n\in H$ by \[ e_n:=\frac1{\sqrt{\mu(S_n)}}U^{-1}(\chi_{S_n}). \] \end{defn} Note that $$Ue_n=\frac1{\sqrt{\mu(S_n)}}\chi _{S_n}.$$ \textbf{Claim 7.} $\|e_n\|=1$ \textsl{for each} $n\in {\mathbb N}$. \medskip This follows from \[ \|e_n\|=\|Ue_n\|=\frac1{\sqrt{\mu(S_n)}}\|\chi_{S_n}\|=1. \] \medskip It is convenient to list next a few basic and easily verified facts concerning powers of $T$ and $D$. \medskip \textbf{Claim 8.}\textsl{\begin{enumerate} \item[{\rm (1)}] $T^k=(U^{-1}DU)^k=U^{-1}D^kU$. \item[{\rm (2)}] $D^kf=F^kf$ for all $f\in L_2(\Omega, \mu)$. \item[{\rm (3)}] If $f, g \in L_2(\Omega, \mu)$ and $f(t)g(t)=0$ for $\mu$ almost all $t$, then $\langle D^jf, D^kg\rangle =0$ for every $ j, k \in {\mathbb N}\cup \{0\}$. \end{enumerate}} \medskip \textbf{Claim 9.} \textsl{For all integers $j, k \in {\mathbb N}\cup\{0\}$ and $m, n \in {\mathbb N}$ with $m\ne n$, we have \[ \langle T^je_m, T^ke_n\rangle =0. \] } \medskip To verify this, let $f_r:=Ue_r=\frac1{\sqrt{\mu (S_r)}}\chi_{S_r}$ for each $r\in {\mathbb N}$. Then $ \chi_{S_n}\chi_{S_m}=\chi_{S_n\cap S_m}=0 $ since $S_n\cap S_m=\emptyset$. Thus $f_nf_m=0$. Using statements (1) and (3) of Claim 8, we get that \[ \langle T^je_m, T^ke_n\rangle=\langle U^{-1}D^jUe_m, U^{-1}D^kUe_n\rangle=\langle D^jf_m, D^kf_n\rangle=0. \] \medskip \textbf{Claim 10.} $\beta_{n+1}^k \ge \|T^ke_n\| \ge \beta_n^k\ge \alpha_n^{2k}$ \textsl{for all} $k, n \in {\mathbb N}$. \medskip The last inequality follows from Claim 6. Next observe that \begin{eqnarray*} \|T^ke_n\|^2&=&\|U^{-1}D^kUe_n\|^2=\left\|D^k(\frac{\chi_{S_n}}{\sqrt{\mu (S_n)}})\right\|^2=\int \left[D^k(\frac{\chi_{S_n}}{\sqrt{\mu (S_n)}})\right]^2d\mu \\ &=&\frac1{\mu(S_n)}\int F^{2k}\chi_{S_n}^2d\mu=\frac1{\mu(S_n)}\int_{S_n}F^{2k}d\mu. 
\end{eqnarray*} Also, by the definition of $S_n$ (in Definition \ref{orthog vectors}), it is clear that \[ \beta_n^{2k} \le \frac1{\mu(S_n)}\int_{S_n}F^{2k}d\mu \le \beta_{n+1}^{2k}. \] Taking square roots completes the proof of Claim 10. \medskip Now we can define the element which will converge slower than the sequence $(\lambda_n)$. \begin{defn}\label{the slow guy} Set \[ x_\lambda:=\sum_1^\infty\frac1{t_n} e_n. \] \end{defn} Since $\sum_1^\infty 1/t^2_n \le \sum_1^\infty 1/n^2 < \infty$ and $\|e_n\|=1$, it follows that $x_\lambda$ is a well-defined element of $H$. \medskip \textbf{Claim 11.} $\|T^kx_\lambda\| \ge \alpha_n^{2k}/t_n$ \textsl{for all $n, k\in {\mathbb N}$.} \medskip We deduce \begin{eqnarray*} \|T^kx_\lambda\|^2&=&\langle T^kx_\lambda, T^kx_\lambda\rangle=\left\langle T^k\left(\sum_ne_n/t_n\right), T^k\left(\sum_me_m/t_m\right) \right\rangle \\ &=&\sum_n\frac 1{t_n} \sum_m\frac1{t_m} \langle T^ke_n, T^ke_m\rangle\\ &=&\sum_n\frac1{t_n^2}\|T^ke_n\|^2 \mbox{\quad (by Claim 9)}\\ &\ge& \frac1{t_n^2}\|T^ke_n\|^2 \mbox{\quad for each $n$}\\ &\ge & \frac{\alpha_n^{4k}}{t^2_n} \mbox{\quad (by Claim 10)}. \end{eqnarray*} Thus $\|T^kx_\lambda\| \ge {\alpha_n^{2k}}/{t_n}$ as claimed. \medskip \textbf{Claim 12.} $\|(P_BP_A)^kx_\lambda\| \ge \lambda_k \mbox{\quad \textsl{for each} $k\in {\mathbb N}$.}$ \medskip Fix any $k\in {\mathbb N}$ and choose $n\in {\mathbb N}$ such that $k_0(n) \le k \le k_1(n)$. Using Claim 11, we get that \begin{equation} \|(P_BP_A)^kx_\lambda\| \ge \|P_A(P_BP_A)^kx_\lambda\|=\|T^kx_\lambda\|\ge \frac{\alpha_n^{2k}}{t_n} \ge \frac{\alpha_n^{2k_1(n)}}{t_n}=\lambda_{k_0(n)}\ge \lambda_k, \nonumber \end{equation} which proves Claim 12. \medskip \textbf{Claim 13.} \textsl{For each $k\in {\mathbb N}$}, $(P_{M_2}P_{M_1})^k-P_M=(P_BP_A)^k$. \medskip Using the facts that $M=M_1\cap M_2$, $P_{M^\perp}=I-P_M$, and $P_{M^\perp}$ is idempotent and commutes with both $P_{M_1}$ and $P_{M_2}$ (see, e.g., \cite[p. 
194]{deu;01}), we get that $P_{M_i}P_{M^\perp}=P_{M_i\cap M^\perp}$ for $i=1, 2$ and \begin{eqnarray*} (P_{M_2}P_{M_1})^k-P_M&=&(P_{M_2}P_{M_1})^k(I-P_M)=(P_{M_2}P_{M_1})^kP_{M^\perp}\\ &=&(P_{M_2}P_{M^\perp}P_{M_1}P_{M^\perp})^k=(P_{M_2\cap M^\perp}P_{M_1\cap M^\perp})^k\\ &=&(P_BP_A)^k, \end{eqnarray*} which proves Claim 13. Combining Claims 12 and 13, we immediately obtain \medskip \textbf{Claim 14.} $\|(P_{M_2}P_{M_1})^k(x_\lambda)-P_M(x_\lambda)\| \ge \lambda_k $\ \textsl{ for each} $k\in {\mathbb N}$. \medskip This completes the proof of the second statement of Theorem \ref{BBL}. \section{Two errors in \cite{bbl;97}}\label{S:Errors} In this section, we point out two errors in \cite{bbl;97}. We shall use the notation of \cite{bbl;97}. (Note that this is the same as the notation of the present paper except that here we have used $M_1, M_2$ instead of $C_1, C_2$.) \medskip \textbf{First error.} The proof of the Claim in Step~2 of the proof of Theorem~5.7.16 in \cite{bbl;97} has a mistake. The Claim itself is correct; only the proof given there is incorrect. Specifically, we inductively construct $(e_n')$ and $(f_n')$ in $A$ and $B$, respectively. Let $E$ and $F$ be the finite-dimensional spaces as in the proof. Let $(a_n)$ in $A$ and $(b_n)$ in $B$ be as in the proof: \begin{equation} \label{eq:eins} \|a_n\|=1=\|b_n\|\quad\text{and}\quad \langle a_n, b_n\rangle \to 1, \end{equation} and $a_n\to 0$ weakly and $b_n\to 0$ weakly. Because $E+F$ is \emph{finite-dimensional}, the sum $A^\bot + (E+F)$ is closed. Hence $\{A^\bot,E+F\}$ is regular (by \cite[Proposition~5.16]{bb;96}) and so is $\{A^{\bot\bot},(E+F)^\bot\} = \{A,E^\bot \cap F^\bot\}$ (again by \cite[Proposition~5.16]{bb;96}). This means the following by definition of regularity. \textbf{Observation.}\textsl{ If $(z_n)$ is a bounded sequence with \\ $\max\big\{d(z_n,A), d(z_n,E^\bot\cap F^\bot)\big\}\to 0$, then $d(z_n,A \cap E^\bot\cap F^\bot)\to 0$. 
(And analogously when $A$ is replaced by $B$.)} Now back to the proof of the Claim. This time, $P_{E+F}$ is a compact operator. (In \cite{bbl;97}, $P_E$ and $P_F$ were considered, which is not sufficient.) Since $a_n \to 0$ weakly and $b_n\to 0$ weakly, we deduce that \begin{equation} P_{E+F}a_n \to 0\quad\text{and}\quad P_{E+F}b_n \to 0. \end{equation} Since $(E+F)^\bot = E^\bot \cap F^\bot$, this implies \begin{equation} a_n-P_{E^\bot \cap F^\bot}a_n \to 0 \quad\text{and}\quad b_n-P_{E^\bot \cap F^\bot}b_n \to 0; \end{equation} equivalently, \begin{equation} d(a_n,E^\bot \cap F^\bot)\to 0 \quad\text{and}\quad d(b_n,E^\bot \cap F^\bot)\to 0. \end{equation} The above Observation now implies $d(a_n,A\cap E^\bot \cap F^\bot)\to 0$ and $d(b_n,B\cap E^\bot \cap F^\bot)\to 0$; equivalently, \begin{equation} a_n-P_{A \cap E^\bot \cap F^\bot}a_n \to 0 \quad\text{and}\quad b_n-P_{B \cap E^\bot \cap F^\bot}b_n \to 0. \end{equation} In view of \eqref{eq:eins}, we deduce that \begin{equation} \langle P_{A \cap E^\bot \cap F^\bot}a_n, P_{B \cap E^\bot \cap F^\bot}b_n\rangle \to 1. \end{equation} Thus, for all $n$ sufficiently large, we have $\|P_{A \cap E^\bot \cap F^\bot}a_n\|\leq 1$, $\|P_{B \cap E^\bot \cap F^\bot}b_n\|\leq 1$, $P_{A \cap E^\bot \cap F^\bot}a_n\in A \cap E^\bot \cap F^\bot$, $P_{B \cap E^\bot \cap F^\bot}b_n\in B \cap E^\bot \cap F^\bot$, and \\ $\langle P_{A \cap E^\bot \cap F^\bot}a_n, P_{B \cap E^\bot \cap F^\bot}b_n \rangle$ is as close to $1$ (from below) as we like. Then for $n$ sufficiently large, we can take $e_{m+1}'=P_{A\cap E^\perp\cap F^\perp}a_n$ and $f_{m+1}'=P_{B\cap E^\perp\cap F^\perp}b_n$. \medskip \textbf{Second error.} The second error is on the third line on page 32 of \cite{bbl;97}, where it is claimed that \begin{equation} \label{e:falsch} C_1 = (C_1\cap C_2)\oplus E \oplus (A \cap E^\bot \cap F^\bot),\; C_2 = (C_1\cap C_2)\oplus F \oplus (B \cap E^\bot \cap F^\bot). 
\end{equation} Unfortunately, only \begin{equation*} C_1 = (C_1\cap C_2)\oplus E \oplus (A\cap E^\bot),\quad C_2 = (C_1\cap C_2)\oplus F \oplus (B\cap F^\bot) \end{equation*} is true. This invalidates the rest of the proof in \cite{bbl;97}. Here is a counterexample to \eqref{e:falsch}. Let $\{u_n \mid n\in {\mathbb N}\}$ be an orthonormal basis of a separable Hilbert space. Set \begin{equation}\nonumber C_1 := \ensuremath{\overline{\operatorname{span}}} \{ u_{2n}+\tfrac{1}{n}u_{2n-1}\mid n\in {\mathbb N}\} \quad\text{and}\quad C_2 := \ensuremath{\overline{\operatorname{span}}} \{ u_{2n}+\tfrac{1}{n}u_{2n+1}\mid n\in {\mathbb N} \}. \end{equation} Then \begin{equation} C_1 \cap C_2 =\{0\}. \end{equation} (Sketch: the spanning vectors are orthogonal. Normalize and use Fourier expansions. Equate coefficients, compare odd and even ones. Deduce that they are all equal; thus they must be equal to $0$.) Hence $A=C_1$ and $B=C_2$. Set \begin{equation} e_n = e_n' = \rho_n\big(u_{4n} + \tfrac{1}{2n}u_{4n-1}\big)\quad\text{and}\quad f_n = f_n' =\rho_n \big(u_{4n} + \tfrac{1}{2n}u_{4n+1}\big), \end{equation} where $\rho_n:=(1+\tfrac{1}{4n^2})^{-1/2}$. Since $\langle {e_n'}, {f_n'} \rangle= (1+\tfrac{1}{4n^2})^{-1}$, the sequences $(e_n')$ and $(f_n')$ are as in the Claim of Step~2, and the sequences $(e_n)$ and $(f_n)$ are as in Step~3. Set \begin{equation} E = \ensuremath{\overline{\operatorname{span}}}\{e_n \mid n\in {\mathbb N}\} \quad\text{and}\quad F = \ensuremath{\overline{\operatorname{span}}}\{f_n \mid n\in{\mathbb N}\}. \end{equation} Then \begin{equation} \overline{E+F} = \ensuremath{\overline{\operatorname{span}}}\{2nu_{4n}+u_{4n-1}, \; 2nu_{4n}+u_{4n+1} \mid n\in {\mathbb N} \} \end{equation} is a subspace of $\ensuremath{\overline{\operatorname{span}}}\{u_{4n-1},u_{4n},u_{4n+1}\mid n\in {\mathbb N} \}$. Thus $\{u_1,u_2,u_6,u_{10},\ldots\}\subset (E+F)^\bot$. 
Since the orthogonal complement of $\ensuremath{\overline{\operatorname{span}}}\{2nu_{4n}+u_{4n-1},2nu_{4n}+u_{4n+1} \mid n\in {\mathbb N}\}$ in $\ensuremath{\overline{\operatorname{span}}}\{u_{4n-1},u_{4n},u_{4n+1} \mid n\in {\mathbb N}\}$ is $\ensuremath{\overline{\operatorname{span}}}\{-2nu_{4n-1}+u_{4n} - 2nu_{4n+1}\mid n\in {\mathbb N} \}$, we obtain \begin{eqnarray} \label{e:EbotFbot} E^\bot \cap F^\bot &=&(E+F)^\bot \\ &=& \ensuremath{\overline{\operatorname{span}}}\{u_1,\; u_{4n-2},\; -2nu_{4n-1}+u_{4n} - 2nu_{4n+1}\mid n\in {\mathbb N} \}. \nonumber \end{eqnarray} Consider the vector $x := u_6 + \tfrac{1}{3}u_5$. Then $x$ belongs to $C_1 = A$. Since $E \subset \ensuremath{\overline{\operatorname{span}}}\{u_{4n-1},u_{4n} \mid n\in {\mathbb N} \}$, it follows that $x\in E^\bot$ and hence $P_Ex = 0$. Now consider the first term in the false statement \eqref{e:falsch}, which in our present situation becomes \begin{equation} \label{e:falsch2} A = E \oplus (A \cap E^\bot \cap F^\bot). \end{equation} Since $P_Ex=0$, this would imply that $x$ lies in $A \cap E^\bot \cap F^\bot$. While it is true that $x\in A \cap E^\bot$, it is \emph{not} true that $x$ belongs to $E^\bot \cap F^\bot$. This can be verified using relation \eqref{e:EbotFbot}. \bibliographystyle{plain}
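The counterexample can also be confirmed numerically by truncating the orthonormal basis (a sketch of ours, not from the original; the identification $u_i \mapsto$ standard basis vector and the truncation level $N=30$ are arbitrary): $x = u_6 + \frac13 u_5$ is a spanning vector of $C_1$, is orthogonal to every $e_n$, yet has nonzero inner product with $f_1$, so $x \notin E^\bot \cap F^\bot$.

```python
import numpy as np

N = 30
I = np.eye(N)
def u(i):                               # u(i) models the basis vector u_i, 1 <= i <= N
    return I[i - 1]

def e(n):                               # e_n = rho_n (u_{4n} + u_{4n-1} / (2n))
    rho = (1 + 1 / (4 * n * n)) ** -0.5
    return rho * (u(4 * n) + u(4 * n - 1) / (2 * n))

def f(n):                               # f_n = rho_n (u_{4n} + u_{4n+1} / (2n))
    rho = (1 + 1 / (4 * n * n)) ** -0.5
    return rho * (u(4 * n) + u(4 * n + 1) / (2 * n))

x = u(6) + u(5) / 3                     # the spanning vector of C_1 for n = 3

assert all(abs(x @ e(n)) < 1e-15 for n in range(1, 7))   # x is orthogonal to E
assert abs(x @ f(1)) > 0.1                               # but <x, f_1> = rho_1 / 6 != 0
```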
https://arxiv.org/abs/1808.03230
Does Hamiltonian Monte Carlo mix faster than a random walk on multimodal densities?
Hamiltonian Monte Carlo (HMC) is a very popular and generic collection of Markov chain Monte Carlo (MCMC) algorithms. One explanation for the popularity of HMC algorithms is their excellent performance as the dimension $d$ of the target becomes large: under conditions that are satisfied for many common statistical models, optimally-tuned HMC algorithms have a running time that scales like $d^{0.25}$. In stark contrast, the running time of the usual Random-Walk Metropolis (RWM) algorithm, optimally tuned, scales like $d$. This superior scaling of the HMC algorithm with dimension is attributed to the fact that it, unlike RWM, incorporates the gradient information in the proposal distribution. In this paper, we investigate a different scaling question: does HMC beat RWM for highly $\textit{multimodal}$ targets? We find that the answer is often $\textit{no}$. We compute the spectral gaps for both the algorithms for a specific class of multimodal target densities, and show that they are identical. The key reason is that, within one mode, the gradient is effectively ignorant about other modes, thus negating the advantage the HMC algorithm enjoys in unimodal targets. We also give heuristic arguments suggesting that the above observation may hold quite generally. Our main tool for answering this question is a novel simple formula for the conductance of HMC using Liouville's theorem. This result allows us to compute the spectral gap of HMC algorithms, for both the classical HMC with isotropic momentum and the recent Riemannian HMC, for multimodal targets.
\section{Introduction} Markov chain Monte Carlo (MCMC) algorithms, and in particular the Hamiltonian Monte Carlo (HMC) algorithms, are workhorses in many scientific fields including physics \cite{Hybrid_MCMC}, statistics and machine learning \cite{NUTS,Riemannian_HMC2, MCMC_Application_Machine_Learning,welling2011bayesian, mangoubi2018convex}, and molecular biology \cite{MCMC_Application_molecular_biology}. An important question practitioners face is how to choose an algorithm for a \textit{particular} problem. Are there any general principles that allow users to prefer one algorithm to another for \textit{generic} but \textit{simple} problems? One of the most fruitful branches of MCMC theory has focused on analyzing MCMC algorithms by studying their \textit{scaling limits.} The idea is to study how an MCMC algorithm's speed depends on some underlying parameter, often the dimension $d$ of the target distribution. Results from the theory of scaling limits suggest that HMC often has a running time of $O(d^{0.25})$ \cite{beskos2013optimal} whereas the Random-Walk Metropolis (RWM) algorithm has a much longer running time of $O(d)$ steps under very broad conditions \cite{roberts1997weak,mps12,bed07}. These results and other evidence, both empirical and theoretical (see \textit{e.g.,} \cite{betancourt2017conceptual,mangoubi2017rapidp1,rabee2018HMCcoup, mangoubi2016rapid}), have led to the widespread understanding that HMC is superior to RWM for a wide variety of problems. In this paper, we investigate the extent to which this superiority holds in another natural scaling regime: highly multimodal target distributions. There is no a priori reason to believe that algorithms superior in one regime are also superior in other regimes. To give a simple example, consider the mixture of Gaussians \begin{equs} \label{eqn:gausmix} \frac{1}{2} \mathcal{N}(-1,\sigma^2) + \frac{1}{2} \mathcal{N}(1,\sigma^2). \end{equs} A natural regime to study is when $\sigma \rightarrow 0$. 
Let us tune both algorithms to optimize their performance within a single mode. To this end, for the RWM algorithm, results of \cite{roberts1997weak} (and also simple scale-invariance of the mixture components) imply that the proposal variance must be $O(\sigma^2)$. For the classical HMC algorithm \cite{Hybrid_MCMC} with isotropic momentum, we set the integration length to be $O(\sigma)$; this is the first time we would expect the algorithm to take a U-turn (and is also suggested by scale-invariance of the mixture components). For the above example, with these tuning parameters, Theorem \ref{ThmHmcMultimodal} of this paper and Theorem 3 of our companion paper \cite{mangoubi2018simple} imply that the spectral gaps of HMC and RWM \emph{both} decay exactly like $e^{-\frac{1}{2} \sigma^{-2}}$ as $\sigma \rightarrow 0$. Of course, we expect the spectral gap to go to 0 quickly in both cases, just as it does in the high-dimensional scaling regime of \cite{roberts1997weak,beskos2013optimal}. The interesting fact is that they go to 0 at the \textit{same rate}, so that the asymptotic performance of the two algorithms is the same. For this example, if we instead choose the metric to be the inverse of the Fisher information, as in the Riemannian HMC of \cite{Riemannian_HMC2}, the spectral gap still decays like $e^{-\frac{1}{2} \sigma^{-2}}$ as $\sigma \rightarrow 0$. Thus, our main conclusion for this example is that HMC is \emph{not} better than RWM. Finally, the above results also hold for the high-dimensional analog of \eqref{eqn:gausmix}; see Section \ref{sec:highdim}. It is natural to ask if this comparison is a result of bad tuning. In fact, this is not the case. Tuning the integration length of HMC cannot improve its relative performance.
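The claim that the within-mode RWM proposal scale should be $O(\sigma)$ reflects an exact scale invariance of the Gaussian components: an RWM chain targeting $\mathcal{N}(0,\sigma^2)$ with proposal standard deviation $c\sigma$ behaves identically, up to rescaling, for every $\sigma$. The sketch below demonstrates this invariance with a shared random seed; the constant $2.4$ and the run length are illustrative choices, not values from the paper:

```python
import math
import random

def rwm_acceptance_rate(sigma, prop_scale, n_steps=20000, seed=0):
    """Acceptance rate of RWM targeting N(0, sigma^2) with the given
    proposal standard deviation, started at the mode."""
    rng = random.Random(seed)
    log_pi = lambda x: -x * x / (2 * sigma * sigma)
    x, accepted = 0.0, 0
    for _ in range(n_steps):
        prop = x + prop_scale * rng.gauss(0.0, 1.0)
        # Metropolis accept/reject in log space.
        if rng.random() < math.exp(min(0.0, log_pi(prop) - log_pi(x))):
            x, accepted = prop, accepted + 1
    return accepted / n_steps

# With the proposal scale tied to sigma (here 2.4 * sigma), the chain in
# units of sigma is the same process, so the acceptance statistics agree.
acc_wide = rwm_acceptance_rate(sigma=1.0, prop_scale=2.4)
acc_narrow = rwm_acceptance_rate(sigma=0.01, prop_scale=0.024)
```

In particular, no fixed proposal scale works for all $\sigma$: only a scale proportional to $\sigma$ keeps the within-mode acceptance rate in a useful range.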
The natural tuning for HMC used above is essentially optimal (in the sense of maximizing the effective sample size per unit computation; see Section \ref{sec:RemChoiceInt} for a more detailed discussion of tuning parameters and optimality). We give a further discussion of scaling results, as well as related open questions, in Section \ref{SecCompMulti}. We highlight here one fact from Section \ref{SecCompMulti} that is particularly interesting for our simple target distribution \eqref{eqn:gausmix}: considering other tuning parameters makes HMC look even worse than RWM. More precisely, the computational cost of the HMC algorithm is \textit{always} at least on the order of $e^{\frac{1}{2} \sigma^{-2}}$ even for much longer integration times, while well-chosen tuning parameters can make RWM much more efficient (again detailed in Section \ref{sec:RemChoiceInt} and Section \ref{SecCompMulti}). Readers and practitioners familiar with the MCMC literature may be surprised at the tone of the above paragraph, since HMC is generally viewed as having much better performance than many older and simpler random walk based MCMC algorithms \cite{betancourt2017conceptual,roberts1997weak,beskos2013optimal,mangoubi2017concave,rabee2018HMCcoup}. However, the main heuristics justifying the dominance of HMC in the high-dimensional regime do not apply in our highly-multimodal regime. In particular, HMC can use knowledge of the gradient of the log-likelihood to make larger moves than RWM in the high-dimensional regime, but the gradient cannot detect multimodality. \subsection{Conductance Results} One of the main tools we use in our calculations is \textit{Cheeger's inequality}. This well-known tool provides a coarse estimate of the efficiency of an MCMC algorithm in terms of a geometric quantity called the \textit{conductance} (see \cite{cheeger1970lower, lawler1988bounds} and a survey of variants \cite{montenegro2006mathematical}).
It is useful primarily because it can be easier to estimate than more precise measures of efficiency. Unfortunately, the conductance can still be difficult to compute. One of the main contributions of this paper is a simple exact formula for the conductance of both HMC and RHMC (see Theorem \ref{thm:1}). This formula relates the conductance of a set $S$ to the integral of a function, which we interpret as its ``crossing rate,'' over the boundary $\partial S$ of $S$. For the `standard' HMC algorithm that uses isotropic momentum, Theorem \ref{thm:1} (see Corollary \ref{cor:1}) yields the following bound on the conductance of a set $S$: \begin{equs} \mathrm{Conductance}(S) \leq \frac{1}{2}T \, \frac{\int_{\partial S} \pi(q) \mathrm{d}q}{\pi(S)}, \end{equs} where $T$ is the integration length, the integral is over the boundary $\partial S$ of the set $S$, and $\pi$ is the target density. In particular, the above formula shows that the conductance can increase \emph{at most} linearly with the integration time for the HMC algorithm. This is also true for RHMC. In Section \ref{sec:RemChoiceInt}, we use this observation to show that changing the integration length cannot vastly improve the performance of the HMC algorithm in multimodal regimes. Markov chains do not typically have conductance formulas that can easily be expressed in terms of surface integrals; our new formula is thus much easier to use than most. Having a simple formula for HMC is particularly useful because HMC algorithms are some of the most widely-used \cite{Riemannian_HMC2, cheung2009bayesian, mehlig1992hybrid} modern MCMC algorithms, but their theoretical properties are not yet well-understood (though again see \cite{holmes2014curvature, livingstone2016geometric, mangoubi2017concave, rabee2018HMCcoup}). Understanding the conductance of HMC algorithms can help us understand when these algorithms perform poorly, and often suggests what has gone wrong.
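As a sanity check on this bound, take $d=1$, target $\pi = \mathcal{N}(0,1)$, and $S = (-\infty, 0)$: the boundary integral is just the density at $0$, so the bound equals $\frac{1}{2}T\,\varphi(0)/\pi(S) = T/\sqrt{2\pi}$, growing linearly in the integration length $T$ exactly as claimed. A quick numeric check of this arithmetic (an illustration, not from the paper):

```python
import math

def hmc_conductance_bound(T, boundary_integral, pi_S):
    """The upper bound (1/2) * T * (boundary integral of pi) / pi(S),
    specialized to a set whose boundary integral is already known."""
    return 0.5 * T * boundary_integral / pi_S

# Target N(0,1), S = (-infty, 0): the boundary is the single point 0,
# so the boundary integral is phi(0) and pi(S) = 1/2.
phi0 = 1.0 / math.sqrt(2.0 * math.pi)
bounds = [hmc_conductance_bound(T, phi0, 0.5) for T in (1.0, 2.0, 4.0)]
```

Doubling $T$ doubles the bound, which is the at-most-linear growth in the integration time referred to above.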
As with \textit{e.g.,} \cite{holmes2014curvature}, we begin our quantitative study of mixing times by studying ``idealized'' versions of HMC. That is, we ignore the error introduced by solving Hamilton's equations with a numerical integrator such as the leapfrog integrator and instead assume that we can solve Hamilton's ODEs exactly. We expect similar qualitative conclusions to hold for appropriate practical implementations of HMC, and there is substantial work on relating ``ideal'' Monte Carlo schemes to their numerical implementations (see \textit{e.g.,} \cite{durmus2016sampling2}, \cite{livingstone2016geometric}, \cite{mangoubi2017rapidp1}, \cite{mangoubi2017rapidp2}, \cite{lee2017convergence}, \cite{mangoubi2018dimensionally}). \subsection*{When can HMC beat RWM for multimodal targets?} \begin{figure} \begin{tikzpicture}[>=latex] \coordinate (A) at (0,0) ; \coordinate (B) at (0,-5) ; \coordinate (C) at (4,0) ; \coordinate (D) at (4,-5) ; \filldraw[fill=gray!20] (A) to[out=-100,in=100] (A) (C) to[out=-80,in=80] coordinate[pos=0.05] (auxru) coordinate[pos=0.95] (auxrl) (D) to[out=60,in=-60] (C); \node[circle,draw = black, fill = gray!20,inner sep=0.5cm] at (1.8,-2.5) (circle) {}; \end{tikzpicture} \caption{We see a small deep mode next to a `banana' shaped mode. HMC can often explore long, skinny modes much more quickly than RWM. Thus, it is possible to tune the length of the banana-shaped mode, in relation to the distance between the centers of the banana and the circle, so that the time for RWM to mix on the long mode is much larger than the time to escape the small mode, while the time for HMC to mix on the long mode is much smaller than the time to escape the small mode.
In this case, the HMC algorithm can exhibit metastability while the RWM algorithm does not; we would expect HMC to mix more quickly than RWM in this situation.} \label{FigBanana} \end{figure} We envision that there may be some classes of highly multimodal target densities where the HMC algorithm is superior to random walk based algorithms. One such example is shown in Figure \ref{FigBanana}. The basic idea is that HMC can often explore long, skinny modes much more quickly than RWM. If we consider an example with several modes, at least one of which is very long and skinny, the running time for RWM may be determined by the time it takes to traverse the long and skinny mode, whereas the running time for HMC will be determined by the time it takes to travel between the modes. This `energy-entropy' competition could lead to different performances of the two algorithms, and certainly merits further study. In particular, the HMC algorithm in this example could exhibit `metastability' while the RWM algorithm does not. In our companion paper \cite{mangoubi2018simple}, we derive simple conditions for checking the metastability of commonly used MCMC algorithms. It is also natural to ask if any of the popular HMC variants can substantially improve the performance of HMC on multimodal targets. We believe that this is an important research question, but there are many HMC variants and a careful survey of their performance is beyond the scope of the present paper; instead we give a quick summary in relation to our results. Our main results do not apply as stated to the popular NUTS algorithm \cite{NUTS}, though we expect very similar bounds to be true. Our main conductance bounds, Theorem \ref{thm:1} and Corollary \ref{cor:1}, do apply as stated to the Riemannian HMC (RHMC) algorithm of \cite{Riemannian_HMC2}. 
For the toy example \eqref{eqn:gausmix}, these bounds can be used to show that the RHMC algorithm with its ``usual'' tuning (the inverse of the Fisher information, as suggested in \cite{Riemannian_HMC2}) does not offer substantial improvements on the RWM algorithm with the ``usual'' tuning discussed above. On the other hand, it does appear possible to use extremely unusual tuning parameters for RHMC to substantially improve performance, just as this is possible for RWM (see Section \ref{SecHmcImp} for a detailed discussion of this tuning for RWM). As with RWM, there does not seem to be any obvious way to ``guess'' these good tuning parameters without unrealistically good knowledge of the locations of the modes, and so it is not clear if the existence of good tuning parameters has any practical importance. \subsection{Guide to Paper} In Section \ref{SecAlgDefs}, we review basic notation and the definitions of the MCMC algorithms that we study in this paper: ``ideal'' versions of the standard and Riemannian HMC algorithms, as well as a simple Metropolis-Hastings algorithm for comparison. We then prove bounds on the conductance of HMC in Section \ref{sec:HMC}. We illustrate the uses of our results by analyzing the performance of HMC for two simple but important examples in Section \ref{SecAppl}: a mixture of Gaussians and a highly degenerate Gaussian. Finally, we discuss the consequences of our work and give related conjectures in Section \ref{SecCompMulti}. The longer proofs are deferred to the appendices. \subsection{Basic Notation} We review the standard ``big-O'' notation, which is used throughout the paper. For two nonnegative functions or sequences $f,g$, we write $f = O(g)$ as shorthand for the statement: there exist constants $0 < C_{1},C_{2} < \infty$ so that for all $x > C_{1}$, we have $f(x) \leq C_{2} \, g(x)$. We write $f = \Omega(g)$ for $g = O(f)$, and we write $f = \Theta(g)$ if both $f= O(g)$ and $g=O(f)$.
Relatedly, we write $f = o(g)$ as shorthand for the statement: $\lim_{x \rightarrow \infty} \frac{f(x)}{g(x)} = 0$. We write $f = \tilde{O}(g)$ if there exist constants $0 < C_{1},C_{2}, C_{3} < \infty$ so that for all $x > C_{1}$, we have $f(x) \leq C_{2} \, g(x) \log(x)^{C_{3}}$, and write $f = \tilde{\Omega}(g)$ for $g = \tilde{O}(f)$. Finally, we say that a function $f$ is ``bounded by a polynomial'' if there exists $0 < c < \infty$ so that $f(x) = O(x^{c})$. \section{Algorithms and Notation} \label{SecAlgDefs} In this section we review two important Hamiltonian Monte Carlo algorithms, as well as the commonly-used Random Walk Metropolis algorithm. We also review some important definitions for MCMC algorithms, including careful definitions of the conductance, the spectral gap, and the Cheeger inequality. \subsection{Basic Notation} Throughout the remainder of the paper, we denote by $\pi$ the smooth density function of a probability distribution on $\mathbb{R}^{d}$. We denote by $\mathcal{L}(X)$ the distribution of a random variable $X$. Similarly, if $\mu$ is a probability measure, we write ``$X \sim \mu$'' for ``$X$ has distribution $\mu$.'' Throughout, we will generically let $Q \sim \pi$ and $P \sim \mathcal{N}(0, \mathrm{Id})$ be independent random variables, where $\mathrm{Id}$ is the $d$-dimensional identity matrix. \subsection{Random Walk Metropolis} The Random Walk Metropolis (RWM) algorithm (Algorithm \ref{alg:RWM}) is the most basic commonly-used MCMC algorithm. At each step $i$ of the Markov chain, the RWM algorithm proposes a candidate next step $\hat{x}_{i+1}$ at a random direction and distance from the current position $x_i$. The step is accepted with probability $\mathrm{min}\{\frac{\pi(\hat{x}_{i+1})}{\pi(x_{i})}, 1\}$, in which case we set $x_{i+1} = \hat{x}_{i+1}$. Otherwise, we say that the step is rejected and we set $x_{i+1} = x_{i}$, so that the algorithm stays at its current position until the next time step.
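The accept/reject loop just described can be written in a few lines; the sketch below is a plain illustration of this recipe for a user-supplied log-density (the target, tuning parameter, and run length are placeholder choices):

```python
import math
import random

def random_walk_metropolis(log_pi, x0, eps, n_steps, seed=0):
    """Run RWM with isotropic Gaussian proposals of scale eps.
    log_pi: log of the (possibly unnormalized) target density on R^d."""
    rng = random.Random(seed)
    x = list(x0)
    chain = [list(x)]
    for _ in range(n_steps):
        prop = [xi + eps * rng.gauss(0.0, 1.0) for xi in x]
        # Accept with probability min(1, pi(prop) / pi(x)).
        if rng.random() < math.exp(min(0.0, log_pi(prop) - log_pi(x))):
            x = prop
        chain.append(list(x))
    return chain

# Example: standard normal target in d = 1.
std_normal = lambda x: -0.5 * sum(xi * xi for xi in x)
chain = random_walk_metropolis(std_normal, [0.0], eps=2.4, n_steps=20000)
mean = sum(x[0] for x in chain) / len(chain)
```

Rejected proposals repeat the current state, which is exactly the "stays at its current position" behavior described above.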
\begin{algorithm}[H] \caption{Random Walk Metropolis \cite{metropolis1953equation}}\label{alg:RWM} \flushleft \textbf{Input:} Tuning parameter $\epsilon > 0$, starting point $x_0$, target density $\pi: \mathbb{R}^{d} \rightarrow [0, \infty)$. \\ \textbf{Output:} Markov chain $x_1, x_2, \ldots$, with stationary distribution $\pi$. \\ \begin{algorithmic}[1] \FOR{$i = 1,2,\ldots$,} \STATE Sample $p_{i} \sim \mathcal{N}(0,\epsilon^{2})^{d}$ and $U \sim \mathrm{Unif}([0,1])$. \STATE Set $\hat{x}_{i+1} = x_{i} + p_{i}$. \IF{$U < \mathrm{min}\{\frac{\pi(\hat{x}_{i+1})}{\pi(x_i)}, 1\},$} \STATE Set $x_{i+1} = \hat{x}_{i+1}$. \ELSE \STATE Set $x_{i+1} = x_{i}$. \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Hamiltonian Monte Carlo algorithms} Hamiltonian Monte Carlo (HMC) algorithms \cite{Hybrid_MCMC} seek to avoid the quadratic slowdowns associated with diffusive ``random walk'' behavior. They do so by adding a notion of ``momentum'' to the random walk, which encourages the underlying Markov chain to propose longer steps without incurring a large chance of rejection. For this reason HMC algorithms work especially well in high dimensions: concentration of the posterior measure $\pi$ causes most other MCMC algorithms to propose either very small steps or larger steps that are rejected with high probability (see \textit{e.g.,} \cite{roberts2001optimal,beskos2013optimal} for discussion of these heuristics). In this section we review two commonly-used HMC algorithms. Both algorithms are based on Hamiltonian dynamics, which we review here. Fix a smooth function $H \, : \, \mathbb{R}^{2d} \mapsto \mathbb{R}^{+}$, called the \textit{Hamiltonian function}, and starting points $p_{0},q_{0} \in \mathbb{R}^{d}$.
We then define \textit{Hamilton's equations} to be the pair of differential equations \begin{equs} \label{EqHamEq} \frac{d}{dt} p(t) = - \frac{\partial H(p,q)}{\partial q}, \quad \frac{d}{dt} q(t) = \frac{\partial H(p,q)}{\partial p}, \end{equs} with initial condition $q(0) = q_{0}$, $p(0) = p_{0}$. Throughout the paper, we denote by $\gamma_{p,q}(t) = q(t)$ the first component of the solution to these equations with these initial conditions, so that \textit{e.g.}, $\gamma_{p,q}'(t) = p(t)$ for the ``standard'' choice of Hamiltonian in Equation \eqref{EqStandardHam}. The associated Hamiltonian function $H$ will always be clear from the context. The solutions to Hamilton's equations have a number of special properties. Two of the most important are \textit{conservation of energy} and the existence of an \textit{invariant distribution} (a consequence of conservation of volume): \begin{enumerate} \item \textbf{Conservation of Energy:} For $\gamma_{p,q}$ as above, we have \begin{equs} \label{EqConsEn} \frac{d}{dt} H(\gamma_{p,q}'(t), \gamma_{p,q}(t)) \equiv 0. \end{equs} \item \textbf{Invariant distribution:} Assume that $\int_{p,q} e^{-H(p,q)} dq dp = 1$, and write $\mu_{H}$ for the probability measure with density $e^{-H(p,q)}$. If we sample $(P,Q) \sim \mu_{H}$, then we also have \begin{equs} \label{EqConsVol} \mathcal{L}(\gamma_{P,Q}'(t), \gamma_{P,Q}(t)) = \mu_{H} \end{equs} for all $t \in \mathbb{R}^{+}$. This fact is a restatement of Liouville's theorem in probabilistic language. \end{enumerate} We refer the reader to \cite{lanczos1949variational} for proofs of these facts. The second fact motivates the standard choice of Hamiltonian \begin{equs} \label{EqStandardHam} H(p,q) = -\log(\pi(q)) + \frac{1}{2} \| p \|^{2}. \end{equs} With this choice, if $Q \sim \pi$ and $P \sim \mathcal{N}(0,\mathrm{Id})$ are independent, then $\gamma_{P,Q}(t) \sim \pi$ for all $t \in \mathbb{R}$.
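For the standard Hamiltonian with a Gaussian target $\pi = \mathcal{N}(0,1)$ in $d = 1$, $H(p,q) = \frac{1}{2}q^2 + \frac{1}{2}p^2$ (up to an additive constant), and Hamilton's equations have the closed-form solution $\gamma_{p,q}(t) = q\cos t + p\sin t$, $\gamma_{p,q}'(t) = p\cos t - q\sin t$. Conservation of energy \eqref{EqConsEn} can then be checked directly along the exact flow (a self-contained illustration):

```python
import math

def flow(p0, q0, t):
    """Exact solution of Hamilton's equations for H = q^2/2 + p^2/2,
    i.e. the harmonic-oscillator flow (a rotation in phase space)."""
    q_t = q0 * math.cos(t) + p0 * math.sin(t)
    p_t = p0 * math.cos(t) - q0 * math.sin(t)
    return p_t, q_t

def energy(p, q):
    return 0.5 * (p * p + q * q)

p0, q0 = 0.7, -1.3
# Energy along the trajectory should be constant up to rounding error.
energies = [energy(*flow(p0, q0, t / 10.0)) for t in range(100)]
drift = max(abs(e - energy(p0, q0)) for e in energies)
```

The flow is a rigid rotation of the $(p,q)$ plane, which also makes the invariance of $\mu_H = \mathcal{N}(0,\mathrm{Id}_2)$ under the flow geometrically transparent.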
This choice leads to the isotropic-momentum HMC algorithm (Algorithm \ref{alg:isotropic_HMC}), developed in \cite{mehlig1992hybrid}: \begin{algorithm}[H] \caption{Isotropic-Momentum HMC (idealized symplectic integrator) \cite{mehlig1992hybrid}} \label{alg:isotropic_HMC} \flushleft \textbf{Input:} Starting point $q_0 \in \mathbb{R}^{d}$, integration time $T \in \mathbb{R}^{+}$, smooth target density $\pi: \mathbb{R}^d \rightarrow [0, \infty)$. \\ \textbf{Output:} Markov chain $q_1, q_2, \ldots$, with stationary distribution $\pi$. \\ \begin{algorithmic}[1] \STATE Define $H(p,q) := -\mathrm{log}(\pi(q)) + \frac{1}{2} \| p \|^{2}$. \FOR{$i = 1,2,\ldots$,} \STATE Sample $p_i \sim \mathcal{N}(0,1)^d$. \STATE Set $q_{i+1} = \gamma_{p_{i}, q_{i}}(T)$. \ENDFOR \end{algorithmic} \end{algorithm} Riemannian Manifold HMC seeks to take longer steps by choosing initial momenta from a multivariate Gaussian distribution that agrees with the local geometry of the posterior density $\pi$. This is achieved by using trajectories that evolve according to Hamiltonian dynamics on a Riemannian manifold, with metric defined by some positive definite matrix $G(q)$. For the special case of isotropic-momentum HMC, we have $G(q) = I_d$. Alternatively, one may choose $G(q)$ to be a regularization of the Hessian of the potential $U(q) := -\log(\pi(q))$, which acts as a local pre-conditioner for $U$ \cite{betancourt2013general}. This can result in much faster mixing for distributions that are not close to isotropic; for example, it avoids the bad performance of Algorithm \ref{alg:isotropic_HMC} on the nearly-degenerate Gaussian that is observed in Theorem \ref{ThmHmcDegenerate}.
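In practice the exact flow $\gamma_{p,q}(T)$ in Algorithm \ref{alg:isotropic_HMC} is replaced by a symplectic integrator. The sketch below uses a leapfrog integrator with a Metropolis correction for the integration error; it is a standard numerical stand-in for the idealized algorithm, not the exact-flow chain analyzed in this paper, and the step size and iteration counts are illustrative:

```python
import math
import random

def leapfrog(grad_log_pi, q, p, step, n_sub):
    """Leapfrog integration of dq/dt = p, dp/dt = grad log pi(q)."""
    p = p + 0.5 * step * grad_log_pi(q)
    for _ in range(n_sub - 1):
        q = q + step * p
        p = p + step * grad_log_pi(q)
    q = q + step * p
    p = p + 0.5 * step * grad_log_pi(q)
    return q, p

def hmc(log_pi, grad_log_pi, q0, T, step, n_iters, seed=0):
    rng = random.Random(seed)
    n_sub = max(1, round(T / step))
    q, chain = q0, [q0]
    for _ in range(n_iters):
        p = rng.gauss(0.0, 1.0)             # fresh isotropic momentum
        q_new, p_new = leapfrog(grad_log_pi, q, p, step, n_sub)
        # Metropolis correction for the integrator's energy error.
        log_acc = (log_pi(q_new) - 0.5 * p_new * p_new) \
                  - (log_pi(q) - 0.5 * p * p)
        if rng.random() < math.exp(min(0.0, log_acc)):
            q = q_new
        chain.append(q)
    return chain

# Standard normal target in d = 1.
chain = hmc(lambda q: -0.5 * q * q, lambda q: -q, q0=0.0,
            T=1.0, step=0.1, n_iters=5000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

With the exact flow the accept step would be superfluous, since energy conservation \eqref{EqConsEn} makes the acceptance probability identically one.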
The algorithm is identical to Algorithm \ref{alg:isotropic_HMC}, except for the choice of the Hamiltonian $H$: \begin{algorithm}[H] \caption{Riemannian Manifold HMC (idealized symplectic integrator) \cite{Riemannian_HMC1, Riemannian_HMC2}} \label{alg:Riemannian_algorithm} \flushleft \textbf{Input:} Starting point $q_0 \in \mathbb{R}^{d}$, integration time $T \in \mathbb{R}^{+}$, smooth target density $\pi: \mathbb{R}^d \rightarrow [0, \infty)$. \\ \textbf{Input:} Positive definite matrix $G(q)$ defining a Riemannian metric at every $q\in \mathbb{R}^d$. \\ \textbf{Output:} Markov chain $q_1, q_2, \ldots$, with stationary distribution $\pi$. \\ \begin{algorithmic}[1] \STATE Define $H(p,q) := -\mathrm{log}(\pi(q)) + \, \frac{1}{2}\log\left((2\pi)^d\det(G(q))\right) + \frac{1}{2} p^\top G^{-1}(q) \, p$ \FOR{$i = 1,2,\ldots$,} \STATE Sample $p_i \sim \mathcal{N}(0, G^{-1}(q_i))$. \STATE Set $q_{i+1} = \gamma_{p_{i}, q_{i}}(T)$. \ENDFOR \end{algorithmic} \end{algorithm} We will usually make the following additional assumption on the Riemannian metric matrix $G(q)$ in this case: \begin{assumption} \label{AssumptionBoundFIM} Let $\lambda_{0}(q), \lambda_{1}(q)$ be the smallest and largest singular values of $G(q)$. Assume that for any compact set $A \subset \mathbb{R}^{d}$ we have \begin{equs} \sup_{q \in A} \max \left( \frac{1}{\lambda_{0}(q)}, \, \lambda_{1}(q) \right) < \infty. \end{equs} \end{assumption} Assumption \ref{AssumptionBoundFIM} says that the singular values of the metric $G$ are bounded away from $0$ and $\infty$ on compact sets. This is satisfied whenever $G$ is continuous and positive definite. More effort is required when $G$ is allowed to be positive semi-definite; we do not study this case. \subsection{Cheeger's inequality and the spectral gap} We recall the basic definitions used to measure the efficiency of MCMC algorithms. Let $L$ be a reversible transition kernel with unique stationary distribution $\mu$ on $\mathbb{R}^{d}$.
We view $L$ as an operator from $L_{2}(\mu)$ to itself: \begin{equs} (L f)(x) = \int_{y \in \mathbb{R}^{d}} L(x,dy) f(y). \end{equs} The constant function is always an eigenfunction of this operator, with eigenvalue 1. We define the space $W^{\perp} = \{ f \in L_{2}(\mu) \, : \, \int_{x} f(x) \mu(dx) = 0\}$ of functions that are orthogonal to the constant function, and denote by $L^{\perp}$ the restriction of the operator $L$ to the space $W^{\perp}$. We then define the \textit{spectral gap} $\rho$ of $L$ by the formula \begin{equs} \rho = \rho(L) \equiv 1 - \sup \{ |\lambda| \, : \, \lambda \in \mathrm{Spectrum}(L^{\perp}) \}, \end{equs} where $\mathrm{Spectrum}$ refers to the usual spectrum of an operator. If $L^{\perp}$ has an eigenvalue $\lambda_2$ of largest modulus, then $\rho = 1-| \lambda_2 |$. Geometric ergodicity of HMC algorithms was proved under very general conditions in \cite{livingstone2016geometric, Nawaf}, implying the existence of a non-zero spectral gap under those conditions \cite{roberts1997geometric}. Cheeger's inequality \cite{cheeger1970lower,lawler1988bounds} provides bounds on the spectral gap in terms of the ability of $L$ to move from any set to its complement in a single step. This ability is measured by the conductance $\Phi(L)$, which is defined by the pair of equations \begin{equs} \Phi(L) &= \inf_{S \in \mathcal{A} \, : \, 0 < \mu(S) < \frac{1}{2}} \Phi(L,S) \\ \Phi(L,S) &= \frac{ \int_{x} \mathbbm{1}\{x \in S\} L(x,S^{c}) \mu(dx)}{\mu(S) }, \end{equs} where $\mathcal{A}= \mathcal{A}(\mathbb{R}^{d})$ denotes the usual collection of Lebesgue-measurable subsets of $\mathbb{R}^{d}$. Cheeger's inequality for Markov chains, first proved in \cite{lawler1988bounds}, gives: \begin{equation} \label{IneqCheegPoin} \frac{\Phi(L)^2}{2} \leq \rho(L) \leq 2 \Phi(L). \end{equation} \section{Conductance of Hamiltonian Monte Carlo} \label{sec:HMC} We derive equations for the conductance of HMC.
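As a concrete sanity check on the definitions above, the following toy computation evaluates the conductance and spectral gap of a small reversible chain (a finite-state illustration, not an HMC chain) and verifies both sides of Inequality \eqref{IneqCheegPoin}:

```python
import math
from itertools import combinations

# A small reversible chain: lazy random walk on the path {0, 1, 2}.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
mu = [0.25, 0.5, 0.25]   # stationary distribution (detailed balance holds)

def conductance(P, mu):
    """Phi = inf over sets S with 0 < mu(S) < 1/2 of flow(S) / mu(S)."""
    states = range(len(mu))
    best = float("inf")
    for r in (1, 2):
        for S in combinations(states, r):
            mS = sum(mu[i] for i in S)
            if not (0 < mS < 0.5):
                continue
            flow = sum(mu[i] * P[i][j]
                       for i in S for j in states if j not in S)
            best = min(best, flow / mS)
    return best

def spectral_gap_3x3(P):
    """Gap of a 3x3 stochastic matrix: 1 is always an eigenvalue; the
    other two solve x^2 - (trace - 1) x + det = 0."""
    tr = P[0][0] + P[1][1] + P[2][2]
    det = (P[0][0] * (P[1][1] * P[2][2] - P[1][2] * P[2][1])
           - P[0][1] * (P[1][0] * P[2][2] - P[1][2] * P[2][0])
           + P[0][2] * (P[1][0] * P[2][1] - P[1][1] * P[2][0]))
    b, c = tr - 1.0, det
    disc = math.sqrt(b * b - 4 * c)
    lam2 = max(abs((b + disc) / 2), abs((b - disc) / 2))
    return 1.0 - lam2

phi = conductance(P, mu)
gap = spectral_gap_3x3(P)
```

For this chain both quantities equal $1/2$, so the chain sits at the upper (linear) end of Cheeger's inequality rather than the quadratic lower end.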
Our main result, Theorem \ref{thm:1}, is based on the following two heuristics about HMC started at stationarity: \begin{enumerate} \item The conductance of a Markov chain measures the probability that the chain will jump from some set $S$ to its complement in a single step. In the special case of HMC, this is roughly equivalent to the probability that the path $\gamma_{P,Q}$ will cross the boundary $\partial S$ an odd number of times. \item By Liouville's theorem and reversibility, the \textit{rate} $\frac{d}{dt} | \gamma_{P,Q}([0,t]) \cap (\partial S)|$ at which the path $\gamma_{P,Q}$ intersects $\partial S$ is constant. \end{enumerate} Theorem \ref{thm:1} describes the conductance of a set purely in terms of this constant ``crossing rate'' and the probability that the total number of crossings will be odd. Unfortunately, defining ``crossings'' in a precise way requires a substantial amount of additional notation, which we give in Section \ref{SubsecNotationManifolds}. Readers primarily interested in upper bounds on the conductance may skip to Corollary \ref{cor:1}, which gives an easier-to-state upper bound on the conductance that is often close to sharp. Although we introduce some restrictions on the set $S$ before giving our formula for the conductance of a \textit{particular set} $S$, we later prove that it is possible to compute the conductance of \textit{the entire Markov chain} exactly using only the sets that satisfy these assumptions; see Lemma \ref{LemmSuffLoc} for details. Thus, our formula can in fact be used to compute the conductance exactly. \subsection{Notation Related to Manifolds and Intersections} \label{SubsecNotationManifolds} Fix a set $S \subset \mathbb{R}^{d}$ whose topological boundary $\partial S$ is an embedded $(d-1)$-dimensional manifold. Fix also an integration time $T \in \mathbb{R}^{+}$.
In this special case, we define the \textit{number of intersections} $N_{\partial S}$ as follows: \begin{defn} \label{DefNumInt} For fixed $T > 0$, define the set $\mathcal{G}_{S} = \mathcal{G}_{S}(T) \subset \mathbb{R}^{2d}$ to be all pairs $(p,q) \in \mathbb{R}^{2d}$ satisfying: \begin{enumerate} \item The function $\gamma_{p,q} \, : \, \mathbb{R} \mapsto \mathbb{R}^{d}$ is \textit{transverse} to the manifold $\partial S$. \item The set $\{ t \in [0,T] \, : \, \gamma_{p,q}(t) \in \partial S\}$ is finite. \end{enumerate} For $(p,q) \in \mathcal{G}_{S}$, define \begin{equs} N_{\partial S} = N_{\partial S}(p,q) \equiv |\{ t \in [0,T] \, : \, \gamma_{p,q}(t) \in \partial S\}|. \end{equs} \end{defn} For a precise definition of a \textit{transverse} intersection, see \textit{e.g.,} \cite{guillemin2010differential}. In this paper, we are primarily interested in the following property of transverse intersections. Let $\partial S \subset \mathbb{R}^{d}$ be a closed surface that partitions $\mathbb{R}^{d}$ into two open pieces $P_{1}'$, $P_{2}'$ with closures $P_{1},P_{2}$, and let $\gamma \, : \, [0,1] \mapsto \mathbb{R}^{d}$ be a curve that is transverse to $\partial S$ with $\gamma(0), \gamma(1) \notin \partial S$. Assume that $|\gamma([0,1]) \cap \partial S| < \infty$. Then $\gamma(0)$ and $\gamma(1)$ are in the same piece of $\mathbb{R}^{d}$ if and only if $|\gamma([0,1]) \cap \partial S|$ is even. In other words: if all intersections are transverse, we can tell which side of the surface a path will end on by simply counting the number of intersections. This is not true for non-transverse intersections. For example, consider the intersection of the circle given by $x^{2} + y^{2} = 1$ and the curve $\gamma(t)=(2t - 1, 1)$; they intersect at exactly one point, but the curve never enters the inside of the circle. See also Figure \ref{FigTransverse}. \begin{figure}[H] \includegraphics[scale=0.4]{TransverseNot} \caption{A boundary $\partial S$ is shown in black.
The intersection of $\partial S$ with the blue curve is transverse; the intersection with the red curve is not.\label{FigTransverse}} \end{figure} We next show that this definition of $N_{\partial S}(p,q)$ can easily be extended to all $(p,q) \in \mathbb{R}^{2d}$, under modest assumptions about $S$. We need the following simple notation: \begin{itemize} \item For $q \in \partial S$, define $\eta(q)$ to be the unit normal vector of $\partial S$ at $q$ that points away from the interior of $S$, and for $p \in \mathbb{R}^{d}$ define \begin{equs} p_{q} = \langle G^{-1}(q) p, \eta(q) \rangle \end{equs} to be the component of $p$ in the direction orthogonal to $\partial S$ at $q$. \item For $q \in \partial S$, define the set \begin{equs} \mathcal{P}^{+}_{S}(q) = \{ p \in \mathbb{R}^{d} \, : \, p_{q} > 0 \} \end{equs} to be the half-space of momentum vectors pointing away from $S$ at $q\in \partial S$. \end{itemize} For $x \in \partial S$, denote by $\mathcal{T}_{x}$ the tangent space of $\partial S$ at $x$. We view this tangent space as being embedded in the same copy of $\mathbb{R}^{d}$ as $S$ and being based at the point $x$ (so that, for example, it will generally not include 0). Denote by $\mathbbm{Proj}_{x} \, : \, \mathbb{R}^{d} \mapsto \mathcal{T}_{x}$ the projection map from $\mathbb{R}^{d}$ to the tangent space. We assume: \begin{assumption} [Locally Well-Behaved Manifold] \label{DefLocallyWellBehaved} We say that $S$ is \textit{locally well-behaved} if, for every compact set $A \subset \mathbb{R}^{d}$, there exist constants $0 < \epsilon, C < \infty$ so that: \begin{enumerate} \item For every ball $A' \subset A$ of diameter less than $\epsilon$, the set $A' \cap \partial S$ has a single component, and \item For every $x, y \in (A \cap \partial S)$, \begin{equs} \| y - \mathbbm{Proj}_{x}(y) \| \leq C \|x - \mathbbm{Proj}_{x}(y) \|^{2} \end{equs} and \begin{equs} \| \eta(x) - \eta(y) \| \leq C \|x-y \|.
\end{equs} \end{enumerate} \end{assumption} \begin{remark} We have not tried to provide the weakest-possible assumptions in Assumption \ref{DefLocallyWellBehaved}. Since Lemma \ref{LemmSuffLoc} implies that the conductance of most realistic HMC chains can be computed entirely in terms of sets that satisfy Assumption \ref{DefLocallyWellBehaved}, these assumptions are weak enough for our purposes. Here, we explain why something like Assumption \ref{DefLocallyWellBehaved} is needed at all. The goal is to rule out the possibility that solutions to Hamilton's equations will pass through $\partial S$ very many times over very short intervals. To give a simple pathological example, we wish to avoid sets such as \begin{equs} \label{BadSetSDef} S_{\infty} = \cup_{n \in \mathbb{N}} [0,1] \times [(4n+1)^{-2}, (4n)^{-2}]. \end{equs} The problem with $S_{\infty}$ is that the boundary $\partial S_{\infty}$ contains a countably infinite union of parallel lines within the compact set $[0,1]^{2}$, and so arbitrarily short Hamiltonian paths can cross $\partial S_{\infty}$ arbitrarily (or even infinitely) often. \end{remark} We next note that $N_{\partial S}$ can be defined almost everywhere on $\mathbb{R}^{2d}$: \begin{lemma} \label{LemCountingPossible} Set notation as above. Then the set $\mathcal{G}_{S}^{c}(T)$ has measure 0 for any $T \in \mathbb{R}^{+}$. \end{lemma} \begin{proof} The proof is deferred to Appendix \ref{SecCountingPossible}. \end{proof} By this lemma, for all fixed $T \in \mathbb{R}^{+}$ we can define a measurable function $N_{\partial S} \, : \, \mathbb{R}^{2d} \mapsto \mathbb{N}$ that agrees with Definition \ref{DefNumInt} for almost every value of $(p,q) \in \mathbb{R}^{2d}$. Throughout the rest of the paper, we will always assume that all intersections are transverse. By this lemma, the complement of the set $\cap_{n \in \mathbb{N}} \mathcal{G}_{S}(n)$ has measure 0, and so this assumption will not influence the results of any calculations.
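The pathology of the set $S_{\infty}$ in \eqref{BadSetSDef} can be made concrete: a straight path moving down the segment $\{0.5\} \times [\epsilon, 1]$ crosses $\partial S_{\infty}$ twice for every band $[(4n+1)^{-2}, (4n)^{-2}]$ it fully traverses, so the crossing count blows up as $\epsilon \rightarrow 0$. A short count illustrating this (the specific $\epsilon$ values are arbitrary):

```python
def crossings_above(eps):
    """Count boundary lines of S_inf = union_n [0,1] x [(4n+1)^-2, (4n)^-2]
    that the straight path q(t) = (0.5, t), t in [eps, 1], crosses."""
    count, n = 0, 1
    while (4 * n) ** -2 >= eps:       # top edge of the n-th band
        count += 1
        if (4 * n + 1) ** -2 >= eps:  # bottom edge of the n-th band
            count += 1
        n += 1
    return count

# The number of crossings grows without bound as the path is extended
# toward y = 0.
counts = [crossings_above(e) for e in (1e-2, 1e-4, 1e-6)]
```

No finite integration time avoids this for paths that reach the accumulation point, which is exactly what Assumption \ref{DefLocallyWellBehaved} rules out.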
Next, we define two families of tilted measures on $\mathbb{R}^{2d}$: \begin{defn} [Tilted Measures] Let $\tilde{\mathbb{P}}$ be the probability measure on $\mathbb{R}^{2d}$ with density \begin{equs} \tilde{\mathbb{P}}(p,q) \propto e^{-H(p,q)} \, \mathbbm{1} \{ N_{\partial S}(p,q) \geq 1 \}. \end{equs} Define $\mathbb{Q}$ to be the probability measure on $\mathbb{R}^{2d}$ with density \begin{equs} \mathbb{Q}(p,q) \propto \tilde{\mathbb{P}}(p,q) \cdot N_{\partial S}(p,q). \end{equs} \end{defn} For a set $A \subset \mathbb{R}^{d}$ and a constant $c > 0$, define the $c$-thickening of $A$ to be \begin{equs} A_{c} = \{x \in \mathbb{R}^{d} \, : \, \inf_{a \in A} \, \|x-a\| \leq c\}. \end{equs} \subsection{Main Conductance Formula} Our main theorem is: \begin{thm} \label{thm:1} Let $T \in \mathbb{R}^{+}$, let $\pi(q)$ be any smooth probability density on $\mathbb{R}^d$, let $K$ and $H$ be the transition kernel and Hamiltonian of either Algorithm \ref{alg:isotropic_HMC} or Algorithm \ref{alg:Riemannian_algorithm} with these parameters, and let $\mu_{H}(p,q) \propto e^{-H(p,q)}$ be the associated Hamiltonian measure. Let $S \subset \mathbb{R}^{d}$ be any subset whose boundary $\partial S$ is a smooth manifold satisfying Assumption \ref{DefLocallyWellBehaved}. Finally, in the case of Algorithm \ref{alg:Riemannian_algorithm}, let Assumption \ref{AssumptionBoundFIM} also hold. Then the conductance of $K$ satisfies \begin{equs}\label{eq:HMC_traditional_Cheeger} \Phi(K,S) = \Phi^+ \cdot \mathbb{E}\bigg[\frac{1}{N_{\partial S}(P,Q)} \cdot \mathbbm{1}\{N_{\partial S}(P,Q) \, \mathrm{odd}\}\bigg] \bigg/ \pi(S), \end{equs} where the expectation is taken with respect to the random variables $(P,Q) \sim \mathbb{Q}$ and the total positive flux $\Phi^+$ is given by \begin{equs} \label{EqPosFluxConc1} \Phi^+ = \frac{1}{2} T \cdot \int_{\partial S} \int_{\mathbb{R}^d} \mu_{H}(p,q) \cdot |p_q| \mathrm{d}p\mathrm{d}q.
\end{equs} In the case of Algorithm \ref{alg:isotropic_HMC}, this formula for $\Phi^+$ reduces to the simpler expression \begin{equs} \label{EqPosFluxConc2} \Phi^+ = \frac{1}{2}T \cdot \int_{\partial S} \pi(q) \mathrm{d}q. \end{equs} \end{thm} \begin{proof} The proof of this result is given in Appendix \ref{SecPfThm1}. \end{proof} Note that, in Equations \eqref{EqPosFluxConc1} and \eqref{EqPosFluxConc2}, the integral over $\partial S$ is taken with respect to the volume measure on the $(d-1)$-dimensional manifold $\partial S$, not with respect to the Lebesgue measure on the $d$-dimensional space $\mathbb{R}^{d}$. In many applications, it is more important to get a good bound on the conductance than to compute it directly. Observing that $\mathbb{E}_\mathbb{Q}\bigg[\frac{1}{N_{\partial S}} \cdot \mathbbm{1}\{N_{\partial S} \, \mathrm{odd}\}\bigg] \leq 1$, we have the following useful consequence of Theorem \ref{thm:1}: \begin{cor}[Simple Conductance Bound] \label{cor:1} Set notation as in Theorem \ref{thm:1}. Then the conductance of the Isotropic-Momentum HMC algorithm is bounded by \begin{equs} \Phi(K,S) \leq \frac{1}{2}T \, \frac{\int_{\partial S} \pi(q) \mathrm{d}q}{\pi(S)}. \end{equs} \end{cor} An immediate consequence of this bound is that the time it takes energy-conserving Hamiltonian Markov chains to find sub-Gaussian modes grows exponentially with both the dimension and the distance between modes. Numerical simulations for various two-mode densities that approximate Gaussian mixture models (Figure \ref{figure:simulation}) suggest that this upper bound is nearly tight in many cases where $T$ is not too large. \subsection{Removing Assumption \ref{DefLocallyWellBehaved}} Theorem \ref{thm:1} only applies to sets that satisfy Assumption \ref{DefLocallyWellBehaved}.
Fortunately, it is not necessary to consider any other sets to compute the conductance of typical HMC Markov chains: \begin{lemma} [Sufficiency of Compact Locally-Good Manifolds] \label{LemmSuffLoc} Let $\pi$ be a distribution with twice-differentiable density, let $T > 0$, and let $K_{T}$ be the transition kernel given by Algorithm \ref{alg:isotropic_HMC} with these parameters. Let $\mathfrak{B}$ be the Lebesgue measurable subsets of $\mathbb{R}^{d}$, and let $\mathfrak{A} \subset \mathfrak{B}$ be the subsets that also satisfy Assumption \ref{DefLocallyWellBehaved}. Then \begin{equs} \label{IneqSuffLoc1} \inf_{S \in \mathfrak{A}} \Phi(K_{T},S) = \inf_{S \in \mathfrak{B}} \Phi(K_{T},S). \end{equs} \end{lemma} \begin{proof} The proof is deferred to Appendix \ref{SecLemmSuffLoc}. \end{proof} \section{Examples and Applications} \label{SecAppl} We illustrate our conductance bounds with the following examples: \begin{enumerate} \item In Section \ref{SubsecAppMix}, we use Theorem \ref{thm:1} to compute the Cheeger constant of an HMC algorithm targeting a mixture of Gaussians. We also compute the spectral gap of this Markov chain, showing that it is roughly equal to the Cheeger constant. In particular, the \textit{upper} bound of Inequality \eqref{IneqCheegPoin} is close to sharp in this example. \item In Section \ref{SubsecAppDeg}, we use Theorem \ref{thm:1} to compute the Cheeger constant of an HMC algorithm targeting a multivariate Gaussian with nearly-singular covariance matrix. We also compute the spectral gap of this Markov chain, showing that it is roughly equal to the \textit{square} of the Cheeger constant. In particular, the \textit{lower} bound of Inequality \eqref{IneqCheegPoin} is close to sharp. \item In Section \ref{SubsecAppNum}, we numerically compute the spectral gap and Cheeger constant of an HMC algorithm to illustrate Theorem \ref{thm:1}.
\end{enumerate} \subsection{Application: HMC Targeting Mixture of Gaussians} \label{SubsecAppMix} For $\sigma > 0$, define the mixture distribution \begin{equs} \label{EqDefMixDistBasic} \pi_{\sigma} = \frac{1}{2} \mathcal{N}(-1,\sigma^2) + \frac{1}{2} \mathcal{N}(1,\sigma^2) \end{equs} and denote its density by $f_{\sigma}$. Let $K_{\sigma}$ be the transition kernel of Algorithm \ref{alg:isotropic_HMC} with target distribution $\pi = \pi_{\sigma}$ and time-step $T = T_{\sigma} \equiv \sigma$. Denote by $\lambda_{\sigma}$ the relaxation time of $K_{\sigma}$, and denote by $\Phi_{\sigma} = \Phi(K_{\sigma}, (-\infty,0))$ the Cheeger constant associated with kernel $K_{\sigma}$ and set $(-\infty,0)$. For fixed $x \in (-\infty,0)$, let $\{X_{t}^{(\sigma)}\}_{t \in \mathbb{N}}$ be a Markov chain with transition kernel $K_{\sigma}$ and initial point $X_{1}^{(\sigma)} = x$. Define the hitting time \begin{equs} \label{EqDefTauSigmaX} \tau_{x}^{(\sigma)} = \inf \{ t > 0 \, : \, X_{t}^{(\sigma)} \notin (-\infty,0)\}. \end{equs} We show: \begin{thm} [HMC for Multimodal Distributions] \label{ThmHmcMultimodal} The Cheeger constant $\Phi_{\sigma}$ satisfies \begin{equs} \label{EqMultiCheegAsym} \lim_{\sigma \rightarrow 0} (-2 \sigma^{2}) \, \log(\Phi_{\sigma}) = 1. \end{equs} Furthermore, for all $\epsilon > 0$ and fixed $x \in (-\infty,0)$, the hitting time $\tau_{x}^{(\sigma)}$ satisfies \begin{equs} \label{EqMultiCheegHitting} \lim_{\sigma \rightarrow 0} \mathbb{P}[\frac{\log(\tau_{x}^{(\sigma)})}{\log(\Phi_{\sigma})} < 1 + \epsilon] = 1 \end{equs} and the relaxation time satisfies \begin{equs} \label{IneqRelMulti} \lim_{\sigma \rightarrow 0} \frac{\log(\lambda_{\sigma})}{\log(\Phi_{\sigma})} = \lim_{\sigma \rightarrow 0} \frac{\log(\lambda_{\sigma})}{\log(\Phi(K_{\sigma}))} = 1. \end{equs} \end{thm} We defer the proof to Appendix \ref{AppSubsecPfHmcMulti}.
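As an informal numerical check of the asymptotics \eqref{EqMultiCheegAsym} (this sketch is ours, not part of the proofs), one can evaluate the simple upper bound of Corollary \ref{cor:1} for $\pi_{\sigma}$ with $S = (-\infty,0)$ and $T_{\sigma} = \sigma$. By symmetry $\pi_{\sigma}(S) = \tfrac{1}{2}$, so the bound is $\tfrac{1}{2}\,\sigma\,\pi_{\sigma}(0)/\pi_{\sigma}(S)$, and $-2\sigma^{2}$ times its logarithm should approach $1$:

```python
import math

def mixture_density(q, sigma):
    """Density of pi_sigma = 0.5*N(-1, sigma^2) + 0.5*N(1, sigma^2) at the point q."""
    phi = lambda x: math.exp(-x * x / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
    return 0.5 * phi(q + 1.0) + 0.5 * phi(q - 1.0)

def conductance_upper_bound(sigma, T):
    """Corollary bound (1/2) * T * pi_sigma(0) / pi_sigma((-inf,0)); pi_sigma(S) = 1/2 by symmetry."""
    return 0.5 * T * mixture_density(0.0, sigma) / 0.5

# -2 sigma^2 log(bound) should approach 1 as sigma -> 0, matching the Cheeger asymptotics
scaled_logs = [-2.0 * s ** 2 * math.log(conductance_upper_bound(s, T=s)) for s in (0.2, 0.1, 0.05)]
```

The scaled logarithms decrease monotonically toward $1$ as $\sigma$ shrinks, matching \eqref{EqMultiCheegAsym} at the level of the upper bound.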
Note that this result exactly matches the spectral gap of the optimally-tuned random-walk Metropolis algorithm with the same target distribution (see Theorem 3 of our companion paper \cite{mangoubi2018simple}). We also observe that this result implies the Cheeger constant $\Phi(K_{\sigma})$ of $K_{\sigma}$ is close to the bottleneck ratio $\Phi_{\sigma} = \Phi(K_{\sigma}, (-\infty,0))$ associated with the set $(-\infty,0)$, at least for $\sigma$ very small. The set $(-\infty,0)$ is of course a natural guess for the set with the ``worst'' conductance, though we do not know of any simple argument that would prove something like this. In fact, deriving simple criteria to verify this fact was the motivation behind our companion paper \cite{mangoubi2018simple}. \subsection{High Dimensional Analog} \label{sec:highdim} Almost all of the work in the proof of Theorem \ref{ThmHmcMultimodal} is checking that, in fact, the spectral gap is not much \textit{smaller} than the natural guess and upper bound $\Phi(K_{\sigma}, (-\infty,0))$; computing a sharp upper bound on $\Phi(K_{\sigma}, (-\infty,0))$ itself is straightforward using Theorem \ref{thm:1}. This remains true in higher-dimensional examples. For example, consider the mixture distribution \begin{equs} \pi_{\sigma} = \frac{1}{2} \mathcal{N}((-1,0,\ldots,0),\sigma^2 \mathrm{Id}) + \frac{1}{2}\mathcal{N}((1,0,\ldots,0),\sigma^2 \mathrm{Id}) \end{equs} on $\mathbb{R}^{d}$. It is natural to guess that, for the HMC algorithm with typical time step $T_{\sigma} = O(\sigma)$, the set $\{ x \in \mathbb{R}^{d} \, : \, x[1] < 0 \}$ has (nearly) the worst conductance. Theorem \ref{thm:1} can be used to get a very good estimate of this conductance, using the same calculation as at the start of the proof of Theorem \ref{ThmHmcMultimodal}. \subsection{Choice of Integration Time and Computational Cost}\label{sec:RemChoiceInt} It is natural to ask: why do we study the choice $T_{\sigma} = \sigma$, rather than some other choice of integration time?
The simplest answer is that it is impossible to substantially improve the performance of the algorithm by choosing a larger integration time. To make this statement precise, we note that the computational cost of running a single step of the HMC algorithm is approximately proportional to the integration time $T$. Thus, the effective computational cost of a sample from a target distribution $\pi$ is roughly the ratio of the integration time to the spectral gap (this is a standard way to compare the efficiency of MCMC algorithms with widely differing costs per step; see \cite{bornn2017use,sherlock2017pseudo} and references therein). It is this effective computational cost that is bounded below: we always have \begin{equs} \limsup_{\sigma \rightarrow 0} 2 \sigma^{2} (\log(\lambda_{\sigma}) - \log(T_{\sigma})) \leq 1, \end{equs} for all $T_{\sigma} \geq \sigma$ (see the discussion in Section \ref{SecHmcImp} for a proof of this fact). Thus, $T_{\sigma} = \sigma$ is essentially the optimal choice of $T_{\sigma}$ in this case. Having said this, we find this simple answer slightly misleading. In practice, one never knows the optimal value of $T_{\sigma}$. Instead, one often tunes an MCMC algorithm according to some ``Goldilocks principle.''\footnote{The first use of the Goldilocks analogy in this context is due to Jeff Rosenthal.} For HMC, one chooses $T_{\sigma}$ so that the probability of an HMC trajectory making a ``U-turn'' (in the sense of \cite{NUTS}) is not too close to 0 and not too close to 1. In this example, this means choosing $T_{\sigma} = \Theta(\sigma)$. Note that this heuristic is very similar to the ``Goldilocks principle'' for tuning RWM: one should choose the standard deviation of the proposal distribution so that the probability of rejecting a proposal is not too close to 0 and not too close to 1.
This popular tuning choice is exactly the one that we study in our companion paper \cite{mangoubi2018simple}, even though it is \textit{not} close to optimal in that context. \subsection{Application: HMC Targeting Degenerate Multivariate Gaussian} \label{SubsecAppDeg} This section is motivated by the study of the standard HMC Algorithm \ref{alg:isotropic_HMC} with target distribution of the form $\mathcal{N}(0, M_{\sigma})$, where the 2-dimensional covariance matrix $M_{\sigma}$ is given by: \begin{equs} M_{\sigma}= \left[ {\begin{array}{cc} 1 & 0 \\ 0 & \sigma^{2} \\ \end{array} } \right]. \end{equs} We next observe that the target distribution $\mathcal{N}(0,M_{\sigma})$ is special: if $\{X_{t}\}_{t \in \mathbb{N}}$ is a Markov chain drawn from Algorithm \ref{alg:isotropic_HMC} targeting $\mathcal{N}(0,M_{\sigma})$, then the coordinate sequences $\{X_{t}[1]\}_{t \in \mathbb{N}}$ and $\{X_{t}[2]\}_{t \in \mathbb{N}}$ are each Markov chains as well. Furthermore, they both evolve independently, and they evolve according to Algorithm \ref{alg:isotropic_HMC}, with $\{X_{t}[1]\}_{t \in \mathbb{N}}$ targeting $\mathcal{N}(0,1)$ and $\{X_{t}[2]\}_{t \in \mathbb{N}}$ targeting $\mathcal{N}(0,\sigma^{2})$. Thus, rather than analyzing the full chain $\{X_{t}\}_{t \in \mathbb{N}}$, to calculate the spectral gap it is enough to analyze the slower-mixing marginal chain $\{X_{t}[1]\}_{t \in \mathbb{N}}$. Denote by $K_{\sigma}$ the transition kernel associated with target distribution $\mathcal{N}(0,1)$ and integration time $T_{\sigma}$ satisfying $T_{\sigma} = o(1)$. Denote by $\Phi_{\sigma}$ the Cheeger constant associated with this transition kernel and the set $(-\infty,0)$; denote by $\rho_{\sigma}$ the spectral gap of $K_{\sigma}$. We have: \begin{thm} \label{ThmHmcDegenerate} The Cheeger constant $\Phi_{\sigma}$ satisfies \begin{equs} \label{EqMultiCheegDeg} \lim_{\sigma \rightarrow 0} \frac{\log(\Phi_{\sigma})}{\log(T_{\sigma})} \leq 1.
\end{equs} Furthermore, the spectral gap satisfies \begin{equs} \label{IneqRelDeg} \limsup_{\sigma \rightarrow 0} \frac{\log(\rho_{\sigma})}{\log(T_{\sigma})} \geq \frac{1}{2}. \end{equs} \end{thm} The proof is deferred to the appendix. Note that we do not give a bound for the hitting time in this example. This is not an accident: the target distribution is unimodal, and so the HMC algorithm does not exhibit metastability, and the fluctuations of the hitting time remain large relative to the relaxation time as $\sigma$ goes to 0. \begin{remark} This example is studied in Example 2.1 of \cite{rabee2018HMCcoup}. We observe that our results are sharper in a few ways (we allow larger integration times; we obtain the correct order of the dependence of the spectral gap on $\sigma$), but they are also the product of exact computations rather than of general theorems. We mention that our own previous work \cite{mangoubi2017concave} and that of \cite{holmes2014curvature} gives similar (non-sharp) estimates to \cite{rabee2018HMCcoup} in this example. We consider finding general bounds that give sharp answers in this prototypical case to be an interesting problem. \end{remark} \subsection{Numerical Simulation: Spectral Gap of Hamiltonian Monte Carlo} \label{SubsecAppNum} In this section we plot the spectral gap (Figure \ref{figure:simulation}) of an Isotropic-Momentum HMC algorithm sampling a two-mode density; the plot was generated by numerically diagonalizing an analytical solution for the transition matrix of the HMC Markov chain. The results of the calculation agree closely with the upper bounds on the spectral gap given by applying Corollary \ref{cor:1} with Inequality \eqref{IneqCheegPoin}.
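The diagonalization recipe just described can be sketched generically. In the snippet below (ours, not taken from the paper's code; a nearest-neighbour Metropolis chain on a grid stands in for the HMC transition matrix purely to keep the example short), we build a small reversible transition matrix and read off the spectral gap $1 - \lambda_{2}$:

```python
import numpy as np

def metropolis_matrix(log_density, grid):
    """Transition matrix of a nearest-neighbour Metropolis chain on a 1-d grid."""
    n = len(grid)
    P = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                # propose each neighbour with probability 1/2, accept with the Metropolis ratio
                P[i, j] = 0.5 * min(1.0, np.exp(log_density(grid[j]) - log_density(grid[i])))
        P[i, i] = 1.0 - P[i].sum()  # remaining mass stays put
    return P

def spectral_gap(P):
    """Spectral gap 1 - lambda_2; the chain is reversible, so the spectrum is real."""
    eigs = np.sort(np.linalg.eigvals(P).real)[::-1]
    return 1.0 - eigs[1]

grid = np.linspace(-4.0, 4.0, 81)
gap = spectral_gap(metropolis_matrix(lambda q: -0.5 * q * q, grid))  # standard Gaussian target
```

For the actual figure, the transition matrix would instead be the (analytically available) discretized HMC kernel; the diagonalization step is identical.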
In this simulation we computed the spectral gap for the Isotropic-Momentum HMC algorithm with stationary distributions of the form $\pi_{a}(q) = \frac{1}{2F_{\mathcal{N}(0,1)}(a)}\mathrm{max}(f_{\mathcal{N}(0,1)}(q-a),f_{\mathcal{N}(0,1)}(q+a))$, where $f_{\mathcal{N}(0,1)}$ and $F_{\mathcal{N}(0,1)}$ denote the PDF and CDF of the standard normal distribution. Note that $\pi_{a}(q)$ approximates the Gaussian mixture model $\tilde{\pi}_{a}(q) = \frac{1}{2}f_{\mathcal{N}(0,1)}(q-a) + \frac{1}{2}f_{\mathcal{N}(0,1)}(q+a)$; indeed $\lim_{a \rightarrow \infty} \| \pi_{a} - \tilde{\pi}_{a} \|_{\mathrm{TV}} = 0$. As suggested by the formula for the conductance in Theorem \ref{thm:1}, the spectral gap is bounded above by a linear function of $T$, and in fact increases approximately linearly with $T$ when $a=0$ for $T\leq \frac{\pi}{2}$. Note that, for fixed small $a$, the spectral gap $(1 - \lambda_{2})$ looks like a periodic function of $T$. This is due to the fact that the trajectories themselves are very close to periodic with period $\geq\frac{\pi}{2}$, meaning that the expectation term in Equation \eqref{eq:HMC_traditional_Cheeger} also varies (approximately) periodically with $T$. The exponential decay in $a^2$ is explained by considering the set $S = (-\infty,0)$ and noting that $\int_{\partial S} \pi_{a}(q) \mathrm{d}q = \pi_{a}(0) = \frac{f_{\mathcal{N}(0,1)}(a)}{2F_{\mathcal{N}(0,1)}(a)}$, so the corresponding term in Corollary \ref{cor:1} of Theorem \ref{thm:1} decreases exponentially in $a^2$. \begin{figure}[H] \includegraphics[scale=0.3]{Spectral_gap} \caption{The spectral gap for the Isotropic-Momentum HMC algorithm with stationary distribution $\pi(q) = \frac{1}{2F_{\mathcal{N}(0,1)}(a)}\mathrm{max}(f_{\mathcal{N}(0,1)}(q-a),f_{\mathcal{N}(0,1)}(q+a))$, for different inter-modal distances $2a$ and different Hamiltonian trajectory times $T$.
The results agree closely with the bound in Theorem \ref{thm:1}.\label{figure:simulation}} \end{figure} \section{Discussion and Open Problems} \label{SecCompMulti} We give some consequences of Theorem \ref{thm:1}, and discuss open questions related to the performance of HMC. \subsection{Bounds on HMC Improvements} \label{SecHmcImp} One immediate consequence of Theorem \ref{thm:1} and Lemma \ref{LemmSuffLoc} is that it is not possible to dramatically improve the performance of Algorithm \ref{alg:isotropic_HMC} by tuning the trajectory integration time $T$. We give a quick discussion of this fact and some related open problems. Fix a probability distribution with twice-differentiable density $\pi$ and denote by $K_{T}$ the transition kernel defined by Algorithm \ref{alg:isotropic_HMC} with parameters $\pi$ and $T > 0$. Define \begin{equs} \Phi_{0}(\pi,S) &= \frac{1}{2} \frac{\int_{\partial S} \pi(q) \mathrm{d}q }{\pi(S)}\\ \Phi_{0}(\pi) &= \inf_{S \in \mathfrak{A}} \Phi_{0}(\pi,S), \\ \end{equs} where $\mathfrak{A}$ is the collection of sets satisfying Assumption \ref{DefLocallyWellBehaved}, as in Lemma \ref{LemmSuffLoc}. By Corollary \ref{cor:1} of Theorem \ref{thm:1}, combined with Lemma \ref{LemmSuffLoc}, we have the upper bound \begin{equs} \label{IneqLinIncCond} T^{-1} \Phi(K_{T}) \leq \Phi_{0}(\pi), \end{equs} so that $\Phi(K_{T})$ is bounded by a linear function of the integration time $T$. This observation is already in stark contrast to RWM as given in Algorithm \ref{alg:RWM}, whose tuning parameter $\epsilon > 0$ can have very large impacts on performance. As an illustration of this large change in performance, consider the family of transition kernels $\{ Q_{\sigma} \}_{0<\sigma <1}$ given by Algorithm \ref{alg:RWM} with target distribution $\pi_{\sigma}$ given in Equation \eqref{EqDefMixDistBasic} and tuning parameter $\epsilon \equiv 10$ for all $\sigma$.
It is a straightforward exercise\footnote{For example, one could apply Theorem 5 of \cite{rosenthal1995minorization} with ``small set'' $(-5,5)$ and ``Lyapunov function'' $V(x) = e^{\|x\|}$ to obtain a bound on the rate of convergence of the walk to stationarity. One could then apply Theorem 2.1 of \cite{roberts1997geometric} to convert this bound on the convergence rate to a bound on the spectral gap.} to check that $\log(\rho(Q_{\sigma})^{-1}) = O(\log(\sigma^{-1}))$ for $\sigma$ small. Comparing this with the result of Theorem 3 of \cite{mangoubi2018simple}, we see that changing the tuning parameter $\epsilon$ of Algorithm \ref{alg:RWM} by a factor on the order of $\sigma$ will vastly decrease the size of the log of the relaxation time, from $\Theta(\sigma^{-2})$ to $O(\log(\sigma^{-1}))$. This dramatic performance improvement does not have obvious practical applications, since choosing a good tuning parameter for multimodal targets requires a more detailed understanding of the modes than a user will typically have. Our point is just that such dramatic improvements are possible for RWM with multimodal targets, while they are not possible for HMC. We interpret the linear bound \eqref{IneqLinIncCond} in terms of computational cost. Roughly speaking, the computational cost of a step of Algorithm \ref{alg:isotropic_HMC} is proportional to the integration time $T$. Paraphrasing Inequality \eqref{IneqLinIncCond}, it is impossible to improve the cost-normalized conductance of Algorithm \ref{alg:isotropic_HMC} by increasing the integration time $T$. This immediately implies, via Cheeger's inequality \eqref{IneqCheegPoin}, that the spectral gap is bounded by a quadratic in $T$. We point out that this behaviour is very similar to that of ``lifted'' Markov chains (see \cite{chen1999lifting} for a definition). Roughly speaking, like HMC, ``lifted'' Markov chains attempt to combat the diffusive behaviour of RWM by adding an abstract notion of ``momentum.''
The central idea behind the non-improvement theorems for lifted chains in \cite{chen1999lifting,ramanan2017bounds} is that it is impossible to increase the conductance of a Markov chain by ``lifting'' it. By Cheeger's inequality, this gives an upper bound on the best-possible improvement due to lifting. We close this discussion with an open question. We have shown that the conductance \textit{of a given set} can increase at most \textit{linearly} in $T$. Via Cheeger's inequality \eqref{IneqCheegPoin}, this suggests that the spectral gap should satisfy a similar \textit{quadratic} inequality in $T$. To make this rigorous, it would be sufficient to show: \textbf{Open Problem 1:} Prove the limit \begin{equs} \label{EqLimOpenQuestion} \lim_{T \rightarrow 0} \inf_{S \in \mathfrak{A}} T^{-1} \Phi(K_{T},S) = \inf_{S \in \mathfrak{A}} \Phi_{0}(\pi,S). \end{equs} Note that Equality \eqref{IneqE4Small} already implies that, for fixed $S \in \mathfrak{A}$, \begin{equs} \lim_{T \rightarrow 0} T^{-1} \Phi(K_{T},S) = \Phi_{0}(\pi,S). \end{equs} \subsection{Multimodal Targets} In Theorem \ref{ThmHmcMultimodal} and Theorem 3 of the companion paper \cite{mangoubi2018simple}, we showed that Algorithm \ref{alg:isotropic_HMC} has very similar performance to Algorithm \ref{alg:RWM} for a strongly multimodal example. However, our comparison is not direct: we prove that the two algorithms have similar spectral gaps by laboriously computing the spectral gaps of both algorithms. We suspect that this behaviour is quite general, and propose the following informal problem: \textbf{Open Question 2:} Let $\pi_{\sigma}$ be a mixture distribution of the form \begin{equs} \pi_{\sigma}(x) \propto \sum_{i=1}^{k} \mu_{i} f_{i}\Big(\frac{x-a_{i}}{\sigma}\Big), \end{equs} where $f_{1},\ldots,f_{k}$ are the densities of probability distributions, $a_{1},\ldots,a_{k}$ are distinct centers, and $\mu_{1},\ldots,\mu_{k} \geq 0$ are fixed weights that sum to $\sum_{i=1}^{k} \mu_{i} = 1$.
Let $K_{\sigma}$, $Q_{\sigma}$ be the transition kernels of Algorithms \ref{alg:isotropic_HMC} and \ref{alg:RWM} respectively, with target distribution $\pi_{\sigma}$, integration time $T_{\sigma} \propto \sigma$ and standard deviation $\epsilon_{\sigma} \propto \sigma$. We wish to know: what are sufficient conditions on $\{ \mu_{i}\}_{i=1}^{k}$ so that \begin{equs} \label{LimConj2Strong} \lim_{\sigma \rightarrow 0} \frac{\log(\rho(K_{\sigma}))}{\log(\rho(Q_{\sigma}))} \geq 1? \end{equs} Note that the constant ``1'' in the limit \eqref{LimConj2Strong} is important. We expect all algorithms to perform poorly for multimodal targets, and thus for the limit \eqref{LimConj2Strong} to be strictly greater than 0. A limit of ``1'' would suggest that HMC exhibits \textit{no} asymptotic improvement over RWM; a limit of \textit{e.g.} $\frac{1}{2}$ would suggest a quadratic improvement over RWM, which is substantial. We conjecture that Inequality \eqref{LimConj2Strong} holds as long as the level sets of $f_{i}$ are fairly ball-like; for example, if the sets $S_{i,C} \equiv \{x \, : \, f_{i}(x) \geq C\}$ are all convex, with the ratio of the inner and outer radii \begin{equs} r_{\mathrm{inner}}(i,C) &\equiv \sup\{r \, : \, \exists \, x \in S_{i,C} \text{ s.t. } B_{r}(x) \subset S_{i,C} \} \\ r_{\mathrm{outer}}(i,C) &\equiv \inf\{r \, : \, \exists \, x \in S_{i,C} \text{ s.t. } B_{r}(x) \supset S_{i,C} \} \\ \end{equs} uniformly bounded. We mention that we do not expect this to hold in complete generality. For example, if the level sets $S_{i,C}$ of $\pi_{\sigma}$ for $0 <C \ll e^{\sigma^{-1}}$ look like the elongated shape of Figure \ref{FigBanana}, we do not expect Inequality \eqref{LimConj2Strong} to hold. More generally, we are interested in determining which ``momentum-based'' algorithms satisfy the limit \eqref{LimConj2Strong}. We note that not all similar chains satisfy this equality. In particular, lifted Markov chains can replace the ``1'' with a ``$\frac{1}{2}$'' for realistic examples (see \textit{e.g.} \cite{bierkens2017piecewise}).
It would be valuable to determine if other generic algorithms, such as Algorithm \ref{alg:Riemannian_algorithm} or \cite{bierkens2016zig}, can also achieve this. Finally, we suggest a more straightforward question. The level sets of the multimodal densities studied in this paper are \textit{completely disconnected} below some (fairly large) threshold. Thus, the multimodality is essentially invisible to the gradient of the log-likelihood within each mode. Can HMC, and especially Riemannian Manifold HMC under an appropriate implementable choice of Riemannian metric, improve on MH for multimodal examples in which the level sets are \textit{connected}, and multimodality is induced by these level sets being merely \textit{very narrow}? Note that this is only possible in two or more dimensions. \section*{Acknowledgement} We would like to thank Neil Shephard for asking us the title question. NSP thanks Gareth Roberts and Andrew Stuart for introducing him to the optimal scaling of MCMC algorithms, and Jeff Rosenthal for references. Part of this work was done when OM was a Ph.D. student at MIT working on this project under the supervision of NSP at Harvard University, and later when he was a postdoctoral researcher with AS at the University of Ottawa. We thank these institutions for their hospitality. \bibliographystyle{alpha}
https://arxiv.org/abs/1410.7791
Hölder stability for Serrin's overdetermined problem
In a bounded domain $\Omega$, we consider a positive solution of the problem $\Delta u+f(u)=0$ in $\Omega$, $u=0$ on $\partial\Omega$, where $f:\mathbb{R}\to\mathbb{R}$ is a locally Lipschitz continuous function. Under sufficient conditions on $\Omega$ (for instance, if $\Omega$ is convex), we show that $\partial\Omega$ is contained in a spherical annulus of radii $r_i<r_e$, where $r_e-r_i\leq C\,[u_\nu]_{\partial\Omega}^\alpha$ for some constants $C>0$ and $\alpha\in (0,1]$. Here, $[u_\nu]_{\partial\Omega}$ is the Lipschitz seminorm on $\partial\Omega$ of the normal derivative of $u$. This result improves to Hölder stability the logarithmic estimate obtained in [1] for Serrin's overdetermined problem. It also extends to a large class of semilinear equations the Hölder estimate obtained in [6] for the case of torsional rigidity ($f\equiv 1$) by means of integral identities. The proof hinges on ideas contained in [1] and uses Carleson-type estimates and improved Harnack inequalities in cones.
\section[Introduction]{Introduction} Serrin's overdetermined problem has been the object of many investigations. In its classical form, it involves a sufficiently smooth bounded domain $\Omega$ in ${\mathbb R^N}$ and a classical solution of the set of equations: \begin{eqnarray} &\Delta u + f(u) = 0 \ \mbox{ and } \ u\ge 0 \ \mbox{ in } \ \Omega, \ u=0 \ \mbox{ on } \ \partial \Omega, \label{serrin1} \\ &u_\nu=c \ \mbox{ on } \ \partial \Omega. \label{serrin2} \end{eqnarray} Here, $f:[0,+\infty)\to\mathbb R$ is a locally Lipschitz continuous function, $u_\nu$ denotes the {\it inward} normal derivative of $u$ and $c$ is a positive constant. Under these assumptions, Serrin \cite{Se} proved that $\Omega$ must be a ball and that $u$ must be radially symmetric. For his proof, he adapted and improved a method created by Aleksandrov to prove his {\it Soap Bubble Theorem} (see \cite{Al}). In fact, the radial symmetry of $\Omega$ and $u$ is obtained by the so-called {\it method of moving planes}, based upon the observation that the Euclidean ball is the only bounded domain that is symmetric with respect to every hyperplane passing through its center of mass. There are other interesting proofs of this symmetry result, based on integral identities, which generally need severe restrictions on $f$ (see for instance \cite{We}, \cite{PS}, \cite{BNST}). \par Overdetermined problems like \eqref{serrin1}-\eqref{serrin2} arise in many physical and geometric situations; they can be seen as a prototype of inverse problems and often emerge in free boundary and shape optimization problems (see \cite{He}). \par Despite the intense research that has been devoted to them for almost five decades, there are still many open problems. An important one --- the focus of this paper --- concerns the stability of the radial configuration.
\par The first contribution in this direction is \cite{ABR}, where it is proved that if one assumes that $u_\nu$ is \emph{almost} constant on $\partial \Omega$, then there exist two concentric balls $B_{r_i}$ and $B_{r_e}$, with \begin{equation} \label{concentric balls} B_{r_i} \subset \Omega \subset B_{r_e} \end{equation} and such that $r_e-r_i$ can be bounded in terms of some measure of the deviation of $u_\nu$ from being a constant. More precisely, the following estimate is proved in \cite{ABR}: \begin{equation} \label{stability ABR} r_e-r_i\le C\,\left|\log \|u_\nu-d\|_{C^1(\partial \Omega)} \right|^{-1/N}, \end{equation} where $d$ is some given constant, provided $\| u_\nu-d\|_{C^1(\partial \Omega)}$ is sufficiently small; here, $C$ is a constant depending on $N$, the regularity of $\partial\Omega$, the diameter of $\Omega$ and the Lipschitz constant of $f$. The proof is based on a quantitative study of the method of moving planes and works for a general locally Lipschitz non-linearity $f$. \par In the case of the {\it torsional rigidity} problem, that is when $f\equiv 1$, \eqref{stability ABR} was improved in \cite{BNST2} (see also \cite{BNST3} for Monge-Amp\`ere equations). Indeed, the authors replace the logarithmic dependence on the right-hand side of \eqref{stability ABR} by a power law of H\"older type. Furthermore, they also give a stability estimate in terms of the $L^1$-norm of the deviation instead of its $C^1$-norm.
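\par As a quick consistency check of estimates like \eqref{stability ABR} (our remark, not taken from the cited works), consider the radially symmetric torsion case, where the normal derivative is exactly constant and the two radii must coincide:

```latex
% Torsion problem (f \equiv 1) on the ball B_R:
%   \Delta u + 1 = 0 in B_R, \quad u = 0 on \partial B_R,
% is solved explicitly by
\[
u(x) = \frac{R^{2}-|x|^{2}}{2N},
\qquad
u_\nu \equiv \frac{R}{N} \ \text{ on } \ \partial B_R,
\]
% so every deviation of u_\nu from a constant vanishes, and any stability
% estimate bounding r_e - r_i by such a deviation correctly forces r_e = r_i.
```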
In what follows, we denote by $d_\Omega$ the {\it diameter} of $\Omega$, $r_\Omega$ the radius of the optimal interior touching sphere to $\partial\Omega$ (see Section \ref{sec:MovPlanes} for its precise definition), and use the following notation: \begin{equation} \label{seminorm normal derivative} \big[u_\nu \big]_{\pa \Om} = \sup_{\substack{x,y \in \partial \Omega\\ \ x \neq y}} \frac{|u_\nu(x) - u_\nu(y)|}{|x-y|}. \end{equation} \begin{theorem} \label{th:stability-convex} Let $\Omega \subset \mathbb R^N$ be a convex domain with boundary of class $C^{2,\alpha}$ and let $f$ be a locally Lipschitz continuous function such that $f(0)\ge 0$. Let $u\in C^{2,\alpha}(\overline{\Omega})$ be a solution of \eqref{serrin1}. \par There exist two positive numbers $\varepsilon$, $C$, that depend on \begin{eqnarray*} \displaystyle N, f, d_\Omega, r_\Omega, \mbox{the $C^{2,\alpha}$-regularity of } \Omega, \max_{\overline{\Omega}}u, \min_{\partial\Omega} u_\nu, \end{eqnarray*} and a number $\tau\in (0,1)$ such that, if $[u_\nu]_{\partial \Omega} \leq \varepsilon$, then \begin{equation*} B_{r_i} \subset \Omega \subset B_{r_e}, \end{equation*} where $B_{r_i}$ and $B_{r_e}$ are two concentric balls and their radii satisfy \begin{equation} \label{stability-improved} r_e-r_i \leq C\,[u_\nu]_{\partial \Omega}^{\tau}. \end{equation} \end{theorem} The number $\tau$ can be explicitly determined (see Theorem \ref{stability serrin}, for details). Notice that the semi-norm \eqref{seminorm normal derivative} can be bounded in terms of the deviation used in \eqref{stability ABR}, if we choose $d$ as the minimum of $u_\nu$ on $\partial\Omega$. \par Therefore, \eqref{stability-improved} significantly improves the estimates of \cite{ABR} and \cite{BNST2}, since it enhances the stability from logarithmic, as in \eqref{stability ABR}, to that of H\"older type proved in \cite{BNST2}, but for {\it any locally Lipschitz non-linearity $f$} (that is, not only for the case $f\equiv 1$). 
\par The assumption on the convexity of $\Omega$ can be slightly relaxed (see Theorem \ref{stability serrin}), by requiring that $\Omega$ be what we call a $(\mathcal{C},\theta)${\it -domain}. Roughly speaking, we require that every maximal cap that comes about in employing the method of moving planes has a boundary with a Lipschitz constant bounded uniformly with respect to the direction of mirror reflection chosen (see Section \ref{sec:MovPlanes} for details). \par As already mentioned, our results and the ones in \cite{BNST2} are obtained by using very different techniques. The starting point of \cite{BNST2} is the proof of symmetry given by the same authors in \cite{BNST}, which makes use of information, such as Newton's inequalities and the Pohozaev identity, that is in a sense more global and avoids the extensive use of maximum principles needed to employ the method of moving planes. On the other hand, the method in \cite{BNST2} is more restrictive, since it seems to be suitable only for $f$ constant. \par Instead, based on Serrin's original proof, our approach is more flexible and allows us to treat a {\it general Lipschitz continuous non-linearity $f$}; in fact, our study relies on quantitative versions of the maximum principle, such as (global) Harnack-type inequalities. Thus, the quality of the relevant stability estimate is affected by that of the Harnack inequality we employ. This is the reason why, at this stage, even if we can consider a general non-linearity, we need to put a restriction on the type of domain under study. In particular, for convex and $(\mathcal{C},\theta)$-domains, we can use improved Harnack-type inequalities in every cap which is generated by the method of moving planes.
\par Besides those in \cite{ABR}, the techniques used in this paper are inspired by those employed in \cite{CMS2} (see also \cite{CMS}), where a quantitative study of the radially symmetric configuration was carried out for a related problem --- the {\it parallel surface problem} --- motivated by the remark, made in \cite{MSaihp} (see also \cite{MS1}, \cite{MSm2as}), that {\it time-invariant level surfaces} of solutions of certain nonlinear non-degenerate {\it fast diffusion} equations are parallel surfaces. \par As in \cite{ABR} and \cite{CMS2}, our approach consists in fixing a direction and defining an approximate set $X(\delta)$ --- built upon the so-called maximal cap (see Section \ref{sec:MovPlanes}) and its mirror-symmetric image in that direction --- which fits $\Omega$ increasingly well as the parameter $\delta$ tends to $0$. This approximation process is controlled in terms of $\big[u_\nu \big]_{\pa \Om}$ and does not depend on the particular direction chosen. \par The application of Harnack's inequality and Carleson estimates in the maximal cap plays a crucial role in obtaining the new stability estimates. Since we are assuming that every maximal cap has Lipschitz regularity, the improvement in \eqref{stability-improved} is obtained by a refinement of Harnack's inequality in suitable cones. \par The paper is organized as follows. In Section \ref{sec:MovPlanes}, we recall the notation and preliminaries necessary to use the method of moving planes. There, we also discuss the definition of $(\mathcal{C},\theta)$-domains and present our main result, Theorem \ref{stability serrin}, of which Theorem \ref{th:stability-convex} is a straightforward corollary. In Section \ref{sec:enhanced harnack} --- the core of the paper --- we begin the proof of Theorem \ref{stability serrin} by producing the necessary enhanced Harnack estimates. Finally, in Section \ref{sec:stability}, we complete the proof of \eqref{stability-improved}.
\setcounter{equation}{0} \section{Remarks on the method of moving planes and\\ statement of the main result} \label{sec:MovPlanes} We consider a bounded domain $\Omega$ of class $C^{2,\alpha}$, $0<\alpha \leq 1$. In what follows we will often use the notation $C(N, \Omega, d_\Omega, r_\Omega, f, \dots)$ to denote a constant that depends on the relevant parameters; in particular, the dependence on $\Omega$ is meant to be only on the $C^{2,\alpha}$ regularity of $\partial\Omega$, as explained in \cite[Remark 1]{ABR}. The assumed regularity of $\Omega$ implies that $\Omega$ satisfies a {\it uniform interior sphere condition}; for $x \in \partial \Omega$, we let $r(x)$ be the radius of the maximal ball $B \subset \Omega$ with $x \in \partial B$ and set $$ r_\Omega = \min_{x \in \partial \Omega} r(x). $$ \par In the sequel, it will be useful to consider the {\it parallel set} of $\Omega$: \begin{equation} \label{Om de} \Omega(\delta) = \{x \in \Omega:\ \mbox{\rm dist}(x,\partial \Omega)>\delta\}. \end{equation} If $0<\delta<r_\Omega$, $\partial\Omega(\delta)$ is $C^{2,\alpha}$-smooth; also, $\Omega(\delta)$ satisfies a uniform interior sphere condition with optimal radius $r_\Omega-\delta$. 
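As a simple illustration of these definitions (not used in the sequel), one may keep in mind the model case of a ball, where every quantity introduced above can be computed explicitly.

```latex
% Illustration only: the case of a ball, Omega = B_R(x_0).
% Here d_Omega = 2R and r(x) = R at every boundary point, so that r_Omega = R;
% moreover, for 0 < delta < R, the parallel set is the concentric ball
\[
\Omega(\delta) = B_{R-\delta}(x_0),
\]
% which indeed satisfies a uniform interior sphere condition with
% optimal radius r_Omega - delta = R - delta.
```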
\par For a unit vector $\omega\in{\mathbb R^N}$ and a parameter $\mu\in\mathbb R$, we define the following objects: \begin{equation} \label{definitions} \begin{array}{lll} &\pi_{\mu}=\{ x\in{\mathbb R^N}: x\cdot\omega=\mu\}\ &\mbox{a hyperplane orthogonal to $\omega,$}\\ & \mathcal{H}_\mu=\{ x\in{\mathbb R^N}: x\cdot\omega>\mu\}\ &\mbox{the half-space \emph{on the right} of $\pi_\mu$,}\\ &\Omega_{\mu}=\{x\in \Omega: x\cdot\omega>\mu\}\ &\mbox{the right-hand cap of $\Omega$},\\ &x^{\mu}=x-2(x\cdot\omega-\mu)\,\omega\ &\mbox{the reflected image of $x$ in $\pi_{\mu},$}\\ &\Omega^{\mu}=\{x\in{\mathbb R^N}:x^{\mu}\in \Omega_{\mu}\}\ &\mbox{the reflected cap in $\pi_{\mu}$.} \end{array} \end{equation} Set $\Lambda=\sup\{x\cdot\omega: x\in \Omega\}$, the {\it extent} of $\Omega$ in direction $\omega$; if $\mu<\Lambda$ is close to $\Lambda$, the reflected cap $\Omega^\mu$ is contained in $\Omega$ (see \cite{Fr}). Set \begin{equation} \label{lambda def} \lambda=\inf\{\mu: \Omega^{\mu'}\subset \Omega \mbox{ for all } \mu'\in(\mu,\Lambda)\}. \end{equation} Then at least one of the following cases occurs (see \cite{Se}, \cite{Fr}): \begin{enumerate} \item[(S1)] $\Omega^{\lambda}$ becomes tangent to $\partial \Omega$ at some point $P^\lambda\in\partial \Omega^\lambda \setminus\pi_{\lambda}$, which is the reflected image of a point $P \in \partial \Omega_\lambda \setminus\pi_{\lambda}$; \item[(S2)] $\pi_{\lambda}$ is orthogonal to $\partial \Omega$ at some point $Q\in\partial \Omega\cap\pi_{\lambda}.$ \end{enumerate} The cap $\Omega_\lambda$ will be called the {\it maximal cap}.
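Again for illustration only, in the model case of a ball both tangency cases occur simultaneously; the following sketch makes the definitions above concrete.

```latex
% Illustration only: Omega = B_R(x_0) and an arbitrary unit vector omega.
% The extent and the critical value are
\[
\Lambda = x_0\cdot\omega + R, \qquad \lambda = x_0\cdot\omega;
\]
% the maximal cap Omega_lambda is the open half-ball
% {x in B_R(x_0) : x . omega > lambda}, the reflected cap Omega^lambda is the
% opposite half-ball, and both (S1) and (S2) occur: Omega^lambda touches
% partial Omega at every point of its spherical part, while pi_lambda meets
% partial Omega orthogonally along the "equator" (partial B_R(x_0)) \cap pi_lambda,
% consistently with the symmetry of the ball.
```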
\par As customary in the method of moving planes, we define the function \begin{equation}\label{w def} w(x)= u(x^\lambda)-u(x),\quad x\in \Omega_\lambda; \end{equation} $w$ satisfies the equation \begin{equation} \label{eq w semilinear} \Delta w+c(x)\,w=0 \ \mbox{ in } \Omega_\lambda, \end{equation} where for $x\in\Omega_\lambda$ \begin{equation*} \label{defc} c(x)=\left\{ \begin{array}{lll} \displaystyle\frac{f(u(x^\lambda))-f(u(x))}{u(x^\lambda)-u(x)} &\mbox{ if } u(x^\lambda)\not= u(x),\\ \displaystyle 0 &\mbox{ if } u(x^\lambda) = u(x). \end{array} \right. \end{equation*} Notice that $c(x)$ is bounded by the {\it Lipschitz constant} $\frak{L}_f$ of $f$ in the interval $[0,\max\limits_{\overline{\Omega}}u].$ \par All the improved estimates in Section \ref{sec:enhanced harnack} concern $w$. As proved in \cite{Se} and refined in \cite{BNV} (see also \cite{Fr}), since $w\ge 0$ on $\partial\Omega_\lambda$, we can assume that $w\ge 0$ in $\Omega_\lambda$. Hence, a standard application of the strong maximum principle to the inequality $\Delta w-c^-(x)\, w\le 0$ with $c^-(x)=\max[-c(x),0]$ shows that either $w=0$ in $\Omega_\lambda$ (and $\Omega$ and $u$ are symmetric about $\pi_\lambda$) or $$ w>0 \ \mbox{ in } \ \Omega_\lambda. $$ The following lemma ensures that the maximal cap always contains a half ball tangent to $\partial\Omega$ at either point $P$ or $Q$. \begin{lm} \label{lemma ball ABR} Let $P$ and $Q$ be as in case (S1) and (S2), respectively. Let $B_\rho(p)\subset\Omega$ be a ball with $0<\rho\le r_\Omega$ and such that $P\in\partial B_\rho(p)$ or $Q\in\partial B_\rho(p)$. \par Then, $p\in \overline{\Omega}_\lambda$ and $B_\rho(p) \cap \mathcal{H}_\lambda \subset \Omega_\lambda$. \end{lm} \begin{proof} The assertion is trivial for case (S2). \par If case (S1) occurs, without loss of generality, we can assume that $\omega=e_1=(1, 0,\dots, 0)$ and $\lambda=0$. 
Since (S1) holds, the point $P^\lambda$ lies on $\partial\Omega$ and cannot fall inside $B_\rho(p)$, since $P\in \partial B_\rho (p)$ and $B_\rho (p)\subset\Omega$. Thus, $|p-P^\lambda|\geq \rho=|p-P|$; since $P^\lambda=(-P_1, P_2, \dots, P_N)$, this gives $|p_1+P_1| \geq |p_1-P_1|$, which implies that $p_1 \geq 0$, since $P_1>0$. \end{proof} As mentioned in the Introduction, the convexity assumption of Theorem \ref{th:stability-convex} can be relaxed; for this purpose, we introduce the class of $(\mathcal{C}, \theta)$-domains that, roughly speaking, have the property that every maximal cap $\Omega_\lambda$ has a boundary with a Lipschitz constant bounded by a number which does not depend on the direction of reflection chosen. To this aim, for a fixed $0<t<1/2$, by formula \eqref{Om de}, we define the set \begin{equation}\label{G def} G= \Omega(t\,r_\Omega); \end{equation} we know that $G$ is connected (see \cite[pp. 923--924]{ABR}). Thus, we say that $\Omega$ is a {\it $(\mathcal{C}, \theta)$-domain} if there exists $\theta > 0$ such that for any direction $\omega$ and for any $x\in \Omega_\lambda\setminus\overline{G}_\lambda$ there exists $\zeta \in \partial G_\lambda \setminus \pi_\lambda$ such that $x$ and $\zeta$ belong to the axis of a (finite) right spherical cone $\mathcal{C} \subset \Omega_\lambda$ with vertex at $x$ and aperture $2\theta$. \par This property is not easy to check, since it relies on the knowledge of the position of the critical hyperplane (see Fig.~\ref{fig1}). However, the class of $(\mathcal{C}, \theta)$-domains is not empty, as shown by the following proposition.
\begin{figure}[h] \centering \subfigure[A $(\mathcal{C}, \theta)$-domain.]{\includegraphics[width=0.5\textwidth]{ok_Cth.png}} \hspace{2em} \subfigure[Not a $(\mathcal{C}, \theta)$-domain.]{\includegraphics[width=0.4\textwidth]{no_Cth.png}} \caption{Every maximal cap of a $(\mathcal{C}, \theta)$-domain has Lipschitz boundary.} \label{fig1} \end{figure} \begin{prop} \label{proposition cone} Any convex domain $\Omega$ of class $C^1$ satisfying the uniform interior sphere condition with minimal radius $r_\Omega$ is a $(\mathcal{C}, \theta)$-domain with \begin{equation} \label{theta} \theta\ge \arctan \frac{(1-t)\,r_\Omega}{2\,d_\Omega}, \end{equation} where $t$ is the parameter appearing in \eqref{G def}. \end{prop} \begin{proof} Since $\Omega$ is of class $C^1$, the method of moving planes can be applied (see \cite{Fr}). As already observed, $G$ satisfies a uniform interior sphere condition with optimal radius $(1-t)\,r_\Omega$; also, \begin{equation*} \Omega=\{ x+y: x\in G,\, |y|<t\, r_\Omega\}, \end{equation*} and $G$, $G_\lambda$ and $\Omega_\lambda$ are all convex, since $\Omega$ is convex. From Lemma \ref{lemma ball ABR} we have that $\Omega_\lambda$ contains a half-ball of radius $r_\Omega$ with center $p\in \overline{\Omega}_\lambda$, and $G_\lambda$ contains the half-ball of radius $(1-t)\,r_\Omega$ centered at $p$. The maximal ball $B$ contained in this half-ball has radius $\overline{r}=(1-t)\,r_\Omega /2$ and is contained in $G_\lambda$; we denote its center by $y$. \par Let $x$ be any point in $\Omega_\lambda \setminus \overline{G}_\lambda$ and let $\mathcal{C}$ be the (finite) right circular cone with vertex at $x$ and based on the $(N-1)$-dimensional ball of radius $\overline{r}$ obtained by intersecting $B$ with the hyperplane perpendicular to the vector $x-y$ and passing through $y$. Since $\Omega_\lambda$ is convex, $\mathcal{C} \subset \Omega_\lambda$.
Moreover, since $B\subset G_\lambda$, the axis of $\mathcal{C}$ intersects $\partial G_\lambda$ at a point $\zeta \notin\pi_\lambda$. \par It is clear that the aperture $2\theta$ of $\mathcal{C}$ is such that $ \theta \geq \arctan(\overline{r}/d_\Omega), $ and hence \eqref{theta} holds. \end{proof} \begin{corollary} Let $\Omega$ be a convex domain with boundary of class $C^1$. \par Then any maximal cap $\Omega_\lambda$ has a Lipschitz continuous boundary $\partial\Omega_\lambda$ with Lipschitz constant bounded by $2d_\Omega / r_\Omega$. \end{corollary} \par We now state our main result. \begin{theorem} \label{stability serrin} Let $\Omega \subset \mathbb R^N$ be a $(\mathcal{C}, \theta)$-domain with boundary of class $C^{2,\alpha}$ and let $f$ be a locally Lipschitz continuous function with $f(0)\ge 0$. \par There exist two positive constants $\varepsilon$ and $C$, that depend on $$ N, \, f, \, d_\Omega, \, r_\Omega, \, \mbox{the $C^{2,\alpha}$-regularity of } \Omega, \, \max_{\overline{\Omega}} u, \, \min_{\partial\Omega} u_\nu, $$ and a number $\tau\in (0,1)$ such that, if $[u_\nu]_{\partial \Omega} \leq \varepsilon$, then \begin{equation*} B_{r_i} \subset \Omega \subset B_{r_e}, \end{equation*} where $B_{r_i}$ and $B_{r_e}$ are two concentric balls whose radii satisfy \eqref{stability-improved}: \begin{equation*} r_e-r_i \leq C\,[u_\nu]_{\partial \Omega}^\tau. \end{equation*} \end{theorem} \begin{remark} (a) As will be clear from the proof, it holds that $\tau=1/(1+\gamma)$, where $\gamma$ is given by \eqref{gamma}; $\gamma$ depends on the half-aperture $\theta$ of the cone $\mathcal{C}$ in the definition of $(\mathcal{C},\theta)$-domain and on the Harnack constant for equation \eqref{eq w semilinear}. \par (b) It is clear that Theorem \ref{th:stability-convex} is an easy corollary of this theorem, by Proposition \ref{proposition cone}. \par (c) As explained in \cite{ABR}, in Theorem \ref{stability serrin} some precautions are in order.
The dependence of $\varepsilon$ and $C$ on a lower bound for $u_\nu$ is needed in case $f(0)=0$. In fact, if $f(0)>0$, a comparison argument in an interior ball touching $\partial\Omega$ shows that the minimum of $u_\nu$ on $\partial \Omega$ can be bounded from below by $f(0)$ times a constant that only depends on $N, \Omega, \| u\|_\infty$, and $f$. When $f(0)=0$, instead, the first inequality in \eqref{estimate-distance} below does not hold and the constant $\varepsilon$ (and hence $C$) must depend on a lower bound for $u_\nu$. This fact can be seen by considering any (positive) multiple of the first Dirichlet eigenfunction $\phi_1$ for $-\Delta$: in fact, for any $n\in\mathbb N$ the function $\phi_1/n$ satisfies \eqref{serrin1} with $f(u)=\lambda_1\,u$, where $\lambda_1$ is the first Dirichlet eigenvalue; although $(\phi_1/n)_\nu\to 0$ on $\partial\Omega$ as $n\to\infty$, one cannot expect to derive any information on the shape of $\Omega$. \par The question whether Theorem \ref{stability serrin} or \cite[Theorem 1]{ABR} still holds if $f(0)<0$ remains open (see \cite[Section 5.1]{ABR}). \end{remark} \par The proof of Theorem \ref{stability serrin} will be carried out in Sections \ref{sec:enhanced harnack} and \ref{sec:stability}. Since it is quite long and complex, for the reader's convenience, we give an outline consisting of three main steps. \begin{itemize} \item[(I)] We first improve \cite[Proposition 1]{ABR}. We fix a direction $\omega$ and apply the procedure of moving planes. For a sufficiently small $\delta>0$, we consider the set $\Omega(\delta)$ in \eqref{Om de} and show that there exists a connected component $\Sigma_\delta$ of $\Omega(\delta) \cap \mathcal{H}_\lambda$ such that $$ \|w\|_{L^\infty(\Sigma_\delta)} \leq \frac{C}{\delta^\gamma} \big[u_\nu \big]_{\pa \Om}\, , $$ where $C$ and $\gamma$ do not depend on $\delta$ and $\omega$ (see Proposition \ref{prop:new-prop-1} below).
\item[(II)] Let $X(\delta)$ be the union of $\Sigma_\delta$ and its reflected image in the critical hyperplane $\pi_\lambda$. Since $u$ grows linearly near $\partial \Omega$, the smallness of $w$ in $\Sigma_\delta$ implies that $X(\delta)$ fits $\Omega$ well (see Lemma \ref{lemma 34 ABR} and Proposition \ref{prop 5 ABR} below). This step gives the approximate symmetry of $\Omega$ in the direction $\omega$. \item[(III)] Since the arguments in (I) and (II) do not depend on the chosen direction, we apply them for $N$ (mutually) orthogonal directions and obtain a point $\mathcal{O}$ as the intersection of the corresponding $N$ critical hyperplanes. The point $\mathcal{O}$ can be chosen as an approximate center of symmetry. In fact, we are able to define two balls centered at $\mathcal{O}$ such that Theorem \ref{stability serrin} holds (see the proof of Theorem \ref{stability serrin} in Section \ref{sec:stability} below). \end{itemize} We notice that once one proves the first step, (I), the remaining ones, (II) and (III), follow from a well-established argument as in \cite{ABR} and \cite{CMS2}. \setcounter{equation}{0} \section{Harnack's inequality in a cone} \label{sec:enhanced harnack} In this section, we prove some results based on a Harnack-type inequality in a cone, which will be used in Section \ref{sec:stability} to prove step (I) (in particular Proposition \ref{prop:new-prop-1}). Let $0<a<1$ be fixed. It is well-known (see \cite[Theorem 8.20]{GT}) that a nonnegative solution $w$ of \eqref{eq w semilinear} satisfies the following {\it Harnack's inequality} \begin{equation} \label{harnack} \sup_{B_{ar}} w \leq \frak{H}_a \inf_{B_{ar}} w , \end{equation} for any ball $B_r \subset \Omega_\lambda$; the Harnack constant $\frak{H}_a$ can be bounded by $C^{\sqrt{N} + \sqrt{r\,\| c\|_\infty}}$, where $C$ is a constant only depending on $N$ and $a$ (see \cite{GT}).
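We shall use \eqref{harnack} through the following elementary chaining mechanism; what follows is only a sketch, for an arbitrary chain of overlapping balls.

```latex
% Sketch: suppose B_{r_0}(p_0), ..., B_{r_n}(p_n) are balls contained in
% Omega_lambda such that consecutive reduced balls overlap, i.e.
% B_{a r_i}(p_i) and B_{a r_{i+1}}(p_{i+1}) have a common point for
% i = 0,...,n-1. Applying \eqref{harnack} in each ball and evaluating w at a
% point of each overlap yields
\[
\sup_{B_{a r_n}(p_n)} w \;\le\; \frak{H}_a^{\,n+1}\, \inf_{B_{a r_0}(p_0)} w .
\]
% In Lemma \ref{lemma harnack cone} below, the chain is placed inside a cone,
% so that the number n of balls needed to connect two points on the axis grows
% only logarithmically in the ratio of their distances from the vertex; this
% turns the exponential factor above into the polynomial weights of
% \eqref{Harnack x e y}.
```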
For instance, if $c(x) \equiv 0$, by the explicit Poisson representation formula for harmonic functions, we have that \begin{equation}\label{Harnack Harmonic} \sup_{B_{ar}} w \leq \left( \frac{1+a}{1-a} \right)^N \inf_{B_{ar}} w , \end{equation} for any $B_r \subset \Omega_\lambda$ (see \cite{GT}). The following lemma is an application of Harnack's inequality to a Harnack chain of balls contained in a cone. The result is well-known (see \cite{Mo}, \cite{JK}, \cite{Ke}); however, since we are interested in a quantitative version of it, we provide a proof. \begin{lm} \label{lemma harnack cone} Fix a number $a\in (0,1)$. Pick $z \in \partial \Omega \cap \overline{\mathcal{H}}_\lambda$ and let $\mathcal{C}$ be any right spherical cone contained in $\Omega_\lambda$ and with vertex at $z$; let $2\theta$ be the aperture of $\mathcal{C}$. \par Let $w$ be given by \eqref{w def} and pick any two points $x$ and $\zeta$ on the axis of $\mathcal{C}$ such that $$ |x-z|<|\zeta-z|. $$ \par Then we have that \begin{equation}\label{Harnack x e y} \frac{|x-z|^\gamma}{K}\, w (x) \leq w(\zeta)\, \leq \frac{K}{|x-z|^\gamma}\,w (x), \end{equation} where \begin{eqnarray} && K=\frak{H}_a \,\left[\frac{|\zeta-z|(1-a\sin\theta)}{1-a}\right]^\gamma,\label{K} \\ && \gamma = \log_\beta \frak{H}_a, \ \ \beta= \frac{1+a\sin\theta}{1-a\sin\theta},\label{gamma} \end{eqnarray} and $\frak{H}_a$ is given by \eqref{harnack}. \end{lm} \begin{proof} Here we prove the second inequality in \eqref{Harnack x e y}; the first one can be proved similarly. \par Let $\ell$ be the unit vector defining the axis of $\mathcal{C}$ through $z$, that is, for instance, \begin{equation*} \ell=\frac{x-z}{|x-z|}.
\end{equation*} We now construct a chain of balls $B_{r_i}(p_i)$, $i=0, 1, \dots, n$, joining $x$ to $\zeta$ with the following specifications: \begin{itemize} \item[(i)] the centers $p_0, p_1,\dots, p_n$ belong to the axis of $\mathcal{C}$; \item[(ii)] $p_0=x$, $r_0=|x-z|$ and $\zeta\in B_{r_n}(p_n)$; \item[(iii)] the balls $B_{r_1}(p_1), \dots, B_{r_n}(p_n)$ are all contained in $\mathcal{C}$ and tangent to the lateral surface of $\mathcal{C}$; \item[(iv)] the radii $r_1,\dots, r_n$ are chosen such that the balls $B_{ar_0}(p_0), \dots, B_{ar_n}(p_n)$ are pairwise disjoint and \begin{equation*} \overline{B_{ar_i}(p_i)} \cap \overline{B_{ar_{i+1}}(p_{i+1})} = \{p_i + a r_i \ell \},\ i=0,\ldots,n-1. \end{equation*} \end{itemize} \par A calculation shows that (i)-(iv) determine the $r_i$'s and $p_i$'s as \begin{equation} \label{ri&pi} r_i=r_0\,\frac{(1-a) \sin\theta}{1-a\sin\theta}\,\beta^i, \quad p_i=z+r_0\,\frac{1-a}{1-a\sin\theta}\,\beta^i\,\ell , \ \ i=1, \dots, n. \end{equation} \par Since $|p_{n-1}-z|\le |\zeta-z|$, from the second formula in \eqref{ri&pi} we obtain a bound for $n$: \begin{equation} \label{bound-n} n\le 1+\frac{\log_{\frak{H}_a}\left(\frac{|\zeta-z|}{r_0}\frac{1-a\sin\theta}{1-a}\right)}{\log_{\frak{H}_a}\beta}\,. \end{equation} \par As usual, the application of Harnack's inequality \eqref{harnack} to each ball $B_{r_i}(p_i)$ of the chain gives that $ w(\zeta)\le\frak{H}_a^n\,w(x); $ since $\frak{H}_a^{\log_{\frak{H}_a}(s)/\log_{\frak{H}_a}\beta}=s^{\log_\beta \frak{H}_a}=s^\gamma$ for every $s>0$, \eqref{bound-n} yields $$ w(\zeta)\le \frak{H}_a\left[\frac{|\zeta-z|}{r_0}\,\frac{1-a\sin\theta}{1-a}\right]^\gamma w(x)=\frac{K}{|x-z|^\gamma}\,w(x), $$ which is the second inequality in \eqref{Harnack x e y}. \end{proof} \begin{remark} We notice that when $c(x) \equiv 0$ (and hence $\Delta w=0$) we have that \begin{equation} \label{gamma-torsion} \gamma= N \frac{\log \frac{1+a}{1-a}}{\log\frac{1+a \sin\theta}{1-a \sin\theta}}. \end{equation} We observe that $\gamma \geq N$ and equality holds only if $\theta=\pi/2$.
\end{remark} \setcounter{equation}{0} \section{Stability for Serrin's problem: the proof} \label{sec:stability} In this section, thanks to the preparatory lemmas of Section \ref{sec:enhanced harnack} and by following the outline announced in Section \ref{sec:MovPlanes}, we shall bring the proof of Theorem \ref{stability serrin} to an end. \par As already mentioned, the crucial step in the proof is item (I) in the outline; it is the analog of \cite[Proposition 1]{ABR}. We modify and improve the procedure used in \cite{ABR} at two salient points: (i) the definition of the approximating symmetric set $X(\delta)$; (ii) the bound on the smallness of the function $w$ in \eqref{w def} in terms of the parameter $\delta$ and the semi-norm $\big[u_\nu \big]_{\pa \Om}$. We draw the reader's attention to the fact that, while (i) is due to a refinement, based on Carleson-type estimates, of the technique used in \cite{ABR}, for (ii) the assumption that $\Omega$ is a $(\mathcal{C}, \theta)$-domain is necessary --- and, so far, we are not able to remove it so as to treat the case of a general bounded domain with $C^{2,\alpha}$-smooth boundary. More precisely, we want to apply Lemma \ref{lemma harnack cone} to every maximal cap that comes about in the method of moving planes; roughly speaking, this requires that every such cap have a Lipschitz boundary, which is essentially our definition of $(\mathcal{C}, \theta)$-domain and is certainly fulfilled if $\Omega$ is convex, as shown in Proposition \ref{proposition cone}. In other words, we are so far unable to exclude (even generically) that the situation depicted in Figure \ref{fig1}~(b) may occur. \par We now start the procedure. For a fixed direction $\omega$, we let $\lambda$ be the number defined in \eqref{lambda def}; we then consider the connected component $\Sigma$ of $\Omega_\lambda$ which intersects the interior touching ball at the point $P$, if case (S1) occurs, or at the point $Q$, if (S2) occurs.
\par For $\delta >0$, we consider the set $\Omega(\delta)$ defined in \eqref{Om de} and define $\Sigma_\delta$ as the connected component of $ \Omega(\delta) \cap \mathcal{H}_\lambda $ contained in $\Sigma$ --- notice that our definition of $\Sigma_\delta$ differs from that in \cite{ABR}. In addition, we fix the domain $G$ in \eqref{G def} by choosing $t=1/32$ and we set $a=1/2$ in Lemma \ref{lemma harnack cone}. \begin{prop} \label{prop:new-prop-1} Let $\Omega$ be a $(\mathcal{C}, \theta)$-domain. \par Then, there is a constant $C=C(N, \Omega, d_\Omega, r_\Omega, f, \max_{\overline{\Omega}} u)$ such that \begin{equation} \label{w Linfty Sigma} \|w\|_{L^\infty(\Sigma_\delta)} \leq C\, \delta^{-\gamma} [u_\nu]_{\partial \Omega} \ \mbox{ for } \ 0<\delta \le r_\Omega/32, \end{equation} where $\gamma$ is the number defined in \eqref{gamma}. \end{prop} \begin{proof} Let $\widetilde{\Sigma}=\{x\in \Sigma:\ \mbox{\rm dist}(x,\partial \Sigma)> r_\Omega/64\}$. We apply \cite[Proposition 1]{ABR} with $\delta=r_\Omega/64$ and obtain that \begin{equation*} \|w\|_{L^\infty(\widetilde{\Sigma})} \leq \widetilde{C}\, [u_\nu]_{\partial \Omega}, \end{equation*} for some constant $\widetilde{C}=\widetilde{C}(N, \Omega, d_\Omega, r_\Omega, f, \| u\|_\infty)$. By using an argument analogous to the ones in the proofs of \cite[Lemmas 3.2 and 4.2]{CMS2} --- which are based on the boundary Harnack inequality --- we can extend the previous estimate to $G_\lambda$: \begin{equation} \label{w Gm serrin} \|w\|_{L^\infty(G_\lambda)} \leq M\,\widetilde{C}\, [u_\nu]_{\partial \Omega}, \end{equation} where $M$ is the constant appearing in the boundary Harnack inequality (see Theorem 1.3 in \cite{BCN}). It is important to notice that the bound in \eqref{w Gm serrin} does not depend on $\delta$. Let $x$ be any point in $\Sigma_\delta \setminus \overline{G}_\lambda$, with $\delta \le r_\Omega/32$.
Since $\Omega$ is a $(\mathcal{C}, \theta)$-domain, we can apply Lemma \ref{lemma harnack cone} by choosing as $\zeta$ the point $\zeta\in \partial G_\lambda\setminus\pi_\lambda$ appearing in the definition of $(\mathcal{C}, \theta)$-domain, and we obtain that \begin{equation*} w(x) \leq K\, \delta^{-\gamma} w(\zeta); \end{equation*} \eqref{w Gm serrin} then yields \eqref{w Linfty Sigma} with $C=\widetilde{C} KM$. \end{proof} \begin{remark} The statement of Proposition \ref{prop:new-prop-1} differs from that of \cite[Proposition 1]{ABR} in three aspects: (i) the set $\Sigma_\delta$ we consider extends up to the hyperplane $\pi_\lambda$; (ii) the dependence on $\delta$ in \eqref{w Linfty Sigma} greatly improves that of \cite[Eq. (8)]{ABR}, which blows up exponentially as $\delta\to 0$; (iii) the seminorm $[u_\nu]_{\partial\Omega}$ replaces the norm $\| u_\nu-d\|_{C^1(\partial\Omega)}$. \end{remark} Now, we define a symmetric open set as \begin{equation} \label{X def} X(\delta)=\mbox{the interior of} \ \,\Sigma_\delta \cup \Sigma_\delta^{\lambda} \cup (\partial \Sigma_\delta \cap \pi_\lambda); \end{equation} we want to show that $X(\delta)$ \emph{fits $\Omega$ well}; how well is the main point of this paper. \par The main idea is to combine the estimate \eqref{w Linfty Sigma} on the smallness of $w$ with the fact that $u$ grows linearly near $\partial \Omega$; indeed, as shown in \cite[Proposition 4]{ABR}, we know that \begin{equation} \label{estimate-distance} \underline{K}\,\mbox{\rm dist}(x,\partial\Omega)\le u(x)\le \overline{K}\,\mbox{\rm dist}(x,\partial\Omega) \ \mbox{ for all} \ x\in\Omega, \end{equation} where $$ 1/\underline{K}, \overline{K}\le C=C(\Omega,d_\Omega, \max_{\overline{\Omega}}u,f,\min_{\partial\Omega} u_\nu). $$ In the following lemma we give our version of \cite[Eq. (34)]{ABR}.
\begin{lm} \label{lemma 34 ABR} For $0<\sigma, \delta \leq r_\Omega/16$ let $\Omega(\sigma)$ and $X(\delta)$ be the sets defined in \eqref{Om de} and \eqref{X def}, respectively. Let $\gamma$ and $C$ be given by \eqref{gamma} and \eqref{w Linfty Sigma}, respectively. \par If \begin{equation} \label{K si geq} \underline{K}\,\sigma > C \delta^{-\gamma} [u_\nu]_{\partial \Omega} + \overline{K}\,\delta, \end{equation} then we have that \begin{equation} \label{Om sig subset X de} \Omega(\sigma) \subset X(\delta) \subset \Omega. \end{equation} \end{lm} \begin{proof} We have that $X(\delta) \subset \Omega$ by construction. \par To show the first inclusion in \eqref{Om sig subset X de} we proceed by contradiction. Since the maximal cap $\Omega_\lambda$ contains a ball of radius $r_\Omega/4$, $X(\delta)$ intersects $\Omega(\sigma)$. Assume that there exists a point $y \in \Omega(\sigma) \setminus X(\delta)$ and let $x$ be any point in $X(\delta) \cap \Omega(\sigma)$. Since $\Omega(\sigma)$ is connected, $x$ is joined to $y$ by a path contained in $\Omega(\sigma)$. Let $z$ be the first point on this path which falls outside $X(\delta)$. It is clear that $z \in \partial X(\delta) \cap \Omega(\sigma)$. We now consider two cases. \par If $z\cdot \omega < \lambda$ then the reflection $z^\lambda$ of $z$ in $\pi_\lambda$ is such that $z^\lambda \in \partial \Sigma_\delta$ and $\mbox{\rm dist}(z^\lambda,\partial \Omega) =\delta$. Since $u(z)=w(z^\lambda)+u(z^\lambda)$, from \eqref{w Linfty Sigma} and \eqref{estimate-distance} we have that \begin{equation} \label{34 by contrad} \underline{K}\,\sigma\le u(z) \leq C \delta^{-\gamma} [u_\nu]_{\partial \Omega} + \overline{K}\,\delta, \end{equation} a contradiction. \par If $z\cdot \omega \geq \lambda$, then $z \in \partial \Sigma_\delta$ and $\mbox{\rm dist}(z,\partial \Omega) =\delta$. Hence, from \eqref{estimate-distance} we obtain that $u(z) \leq \overline{K}\,\delta$ and \eqref{34 by contrad} holds as well.
Since $z \in \Omega(\sigma)$, from \eqref{estimate-distance} we have that $u(z) \geq \underline{K}\, \sigma$, which contradicts \eqref{K si geq} on account of \eqref{34 by contrad}. \end{proof} We draw the reader's attention to the differences in \eqref{K si geq} compared to \cite[Eq. (34)]{ABR}: (i) thanks to Lemma \ref{lemma harnack cone}, the term $\delta^{-\gamma}$ replaces one in \cite[Eq. (34)]{ABR} that blows up exponentially; (ii) due to the different definition of the symmetrized set $X(\delta)$ (denoted by $X_\delta$ in \cite{ABR}), we simplify the last summand in \eqref{K si geq}. \begin{prop} \label{prop 5 ABR} Let $\gamma$ be given by \eqref{gamma}. There exist positive numbers $C$ and $\varepsilon$, depending on $\Omega,d_\Omega, r_\Omega, \max_{\overline{\Omega}}u,f,\min_{\partial\Omega} u_\nu$, and $\sigma,\, \delta>0$ such that \eqref{Om sig subset X de} holds with \begin{equation} \label{de leq si} \delta < \sigma < C\, [u_\nu]_{\partial \Omega}^\frac{1}{\gamma+1}, \end{equation} provided that $[u_\nu]_{\partial \Omega}\le\varepsilon$. \end{prop} \begin{proof} We must choose $\delta, \sigma\le r_\Omega/16$ that satisfy \eqref{K si geq}. If we let \begin{equation*} \delta = \left(\frac{C}{\overline{K}}\right)^{\frac{1}{\gamma+1}} [u_\nu]_{\partial \Omega}^{\frac{1}{\gamma+1}} \ \mbox{ and } \ \sigma = \frac{4\overline{K}}{\underline{K}} \delta, \end{equation*} since $\overline{K}\,\delta=C\,\delta^{-\gamma}\,[u_\nu]_{\partial\Omega}$, then \eqref{K si geq} holds provided that $\sigma\le r_\Omega/16$, that is for \begin{equation*} \varepsilon \leq \frac{\overline{K}}{C} \left( \frac{r_\Omega \underline{K}}{64\,\overline{K}} \right)^{\gamma +1}. \end{equation*} The conclusion then follows from Lemma \ref{lemma 34 ABR}. \end{proof} \par We are now ready to prove the main result of this paper.
\begin{proof}[Proof of Theorem \ref{stability serrin}] The proof is analogous to that of \cite[Theorem 1]{ABR}; here, we use \eqref{de leq si} in place of \cite[Eq. (33)]{ABR}. \par Indeed, for any fixed direction $\omega\in{\mathbb S}^{N-1}$, Proposition \ref{prop 5 ABR} implies that, if $\sigma$ satisfies \eqref{de leq si}, then for any $x \in \partial \Omega$ there exists $y_\omega\in \partial \Omega$ such that \begin{equation} \label{x lam y} |x^\lambda - y_\omega | \leq 2 \sigma, \end{equation} where $x^\lambda$ is the reflection of $x$ in the critical hyperplane $\pi_\lambda$ with $\lambda$ defined by \eqref{lambda def} (see also \cite[Corollary 2]{ABR}). \par We now choose $N$ orthogonal directions, say $e_1,\ldots,e_N$, denote by $\pi_1,\ldots,\pi_N$ the corresponding critical hyperplanes and place the origin $0$ of ${\mathbb R^N}$ at the (unique) point common to all the $\pi_j$'s. If we denote by $\mathcal{R}_j(x)$ the reflection of a point $x$ in $\pi_j$, we have that $$ -x= (\mathcal{R}_{N} \circ \cdots \circ \mathcal{R}_{1})(x); $$ also, if we set $y_0=x$, $y_1=y_{e_1}$ and, for $j=2,\dots, N$, $y_j$ as the point in $\partial\Omega$ such that $|\mathcal{R}_j(y_{j-1})-y_j|\le 2 \sigma$ determined by \eqref{x lam y}, we obtain that \begin{multline*} |-x-y_N|\le \\ |(\mathcal{R}_{N} \circ \cdots \circ \mathcal{R}_{1})(x)-\mathcal{R}_{N}(y_{N-1})|+ |\mathcal{R}_{N}(y_{N-1})-y_N|\le \\ |(\mathcal{R}_{N-1} \circ \cdots \circ \mathcal{R}_{1})(x)-y_{N-1}|+2 \sigma\le \\ |(\mathcal{R}_{N-2} \circ \cdots \circ \mathcal{R}_{1})(x)-y_{N-2}|+4 \sigma\le \cdots \le 2N\sigma. \end{multline*} \par Thus, we have shown that there exists $y=y_N\in\partial\Omega$ such that \begin{equation*} \label{Rx lam y} |x +y | \leq 2 N \sigma.
\end{equation*} This fact and \eqref{x lam y} imply that $0$ is an approximate center of symmetry for $\Omega$, in the sense of \cite[Proposition 6]{ABR}, that is, for any direction $\omega$, the critical hyperplane $\pi$ in the direction $\omega$ satisfies \begin{equation} \label{bound prop 6 ABR} \mbox{\rm dist}(0,\pi) \leq 4N\,[1+d_\Omega]\,\sigma. \end{equation} \par By letting \begin{equation*} r_i=\min_{x \in \partial \Omega} |x| \quad \mbox{and} \quad r_e=\max_{x \in \partial \Omega} |x|, \end{equation*} we obtain \eqref{stability-improved} thanks to \eqref{bound prop 6 ABR}, \cite[Proposition 7]{ABR} and \eqref{de leq si}. \end{proof} \par Notice that Theorem \ref{stability serrin} was proved by fixing $a=1/2$ in Lemma \ref{lemma harnack cone}. However, an analog of Theorem \ref{stability serrin} for any fixed $a \in (0,1)$ can be proved by the same arguments, and the number $\gamma$ in \eqref{gamma} and the exponent in \eqref{stability-improved} may be optimized in terms of $a$. It is clear that the constants $\varepsilon$ and $C$ will change accordingly (the smaller $\gamma$ is, the larger $C$ and the smaller $\varepsilon$ will be). By following this plan, in case $\Omega$ is convex and $u$ is a solution of the torsional rigidity problem --- i.e. when $f(u)=1$ in \eqref{serrin1} --- we are able to give more explicit formulas and partially compare our results to those in \cite{BNST2}. \begin{corollary} Let $\Omega$ be a bounded convex domain with boundary of class $C^{2,\alpha}$. Let $u$ be a solution of $$ \Delta u = -1 \ \mbox{ in } \ \Omega, \quad u=0 \ \mbox{ on } \ \partial\Omega.
$$ \par Then, for any $\eta>0$ there exist constants $C$ and $\varepsilon$, depending on $N, \Omega, d_\Omega, r_\Omega$ and $\eta$, such that \begin{equation} \label{stability convex torsion} r_e-r_i \leq C\,[u_\nu]_{\partial \Omega}^{\frac{1}{\tau+\eta}}, \end{equation} provided that $[u_\nu]_{\partial \Omega} \leq \varepsilon$, where \begin{equation} \label{tau} \tau = 1+N\,\sqrt{1+\left[\frac{2d_\Omega}{r_\Omega}\right]^2}. \end{equation} \end{corollary} \begin{proof} We recall that, in the case at hand, $\gamma$ is given by \eqref{gamma-torsion} and is increasing as $a$ grows; thus, the optimal exponent should be sought in the limit $a\to 0^+$. Therefore, for a $(\mathcal{C},\theta)$-domain, the exponent in \eqref{stability-improved} behaves as $$ \frac1{1+\gamma}= \frac{1}{1+N/\sin\theta+ o(1)} \ \mbox{ as } \ a\to 0^+. $$ The conclusion then follows from Proposition \ref{proposition cone}. \par We finally notice that, in this case, the dependence of the constants $C$ and $\varepsilon$ appearing in Theorem \ref{stability serrin} on the quantities $\|u\|_\infty$ and $\min_{\partial\Omega} u_\nu$ can be removed, since these quantities can be bounded in terms of the $C^2$ regularity of $\Omega$ by standard barrier techniques. \end{proof} \begin{remark} In \cite[Theorem 2]{BNST2} the authors prove an estimate similar to \eqref{stability convex torsion}. When $\Omega$ is convex, our estimate improves that in \cite[Theorem 2]{BNST2} if $$ \frac{d_\Omega}{r_\Omega} \leq \sqrt{\frac{2N^2+N-5/2}{N}}. $$ \end{remark} \section*{Acknowledgements} The authors wish to thank Paolo Salani for the useful discussions we had together. The authors have been supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
The paper was completed while the author was visiting \lq\lq The Institute for Computational Engineering and Sciences\rq\rq (ICES) of The University of Texas at Austin, and he wishes to thank the Institute for hospitality and support. The author has been also supported by the NSF-DMS Grant 1361122 and the Firb project 2013 ``Geometrical and Qualitative aspects of PDE''.
https://arxiv.org/abs/1101.5638
Lower Bound for Convex Hull Area and Universal Cover Problems
In this paper, we provide a lower bound for the area of the convex hull of a set of points and a rectangle in the plane. We then apply this estimate to establish a lower bound for a universal cover problem. We show that a convex universal cover for a unit-length curve has area at least 0.232239. In addition, we show that a convex universal cover for a unit closed curve has area at least 0.0879873.
\section{Introduction} One of the classical open problems in discrete geometry is Moser's Worm Problem, which originally asked for ``the smallest cover for any unit-length curve''. In other words, the question asks for a minimal universal cover for any curve of unit length -- also called a unit worm. Although it is not clearly stated in the original problem, in this report we will only concern ourselves with convex covers. In 1979, Laidacker and Poole \cite{popo} proved that such a minimal cover exists. However, finding this minimal cover turns out to be much more difficult. Instead, there have been attempts to estimate the area of this minimal cover. In 1974, Gerriets and Poole \cite{scho} constructed a rhombus-shaped cover with an area of 0.28870, thus establishing the first upper bound for the problem. Later, Norwood and Poole \cite{tree} improved the upper bound to 0.2738, while Wetzel \cite{pi} conjectured the upper bound of 0.2345. On the other hand, the lower bound for the problem has not been as extensively studied. In 1973, Wetzel \cite{cam} showed that any cover has area at least 0.2194, exploiting the fact that such a cover must contain both a unit segment and the \textit{broadworm} \cite{mansur}. This lower bound has only recently been improved to 0.227498 \cite{maco}, using the following facts: \begin{itemize} \item Any convex universal cover must contain a unit segment, an equilateral triangle of side length $\frac{1}{2}$ and a square of side length $\frac{1}{3}$. \item The minimum area of the convex hull of these three objects provides a lower bound for Moser's Worm Problem. \end{itemize} In this paper, we generalize these ideas by considering an arbitrary rectangle instead of the square. In Section 2, we provide a lower bound for the convex hull area of a set of points and a rectangle, and then apply the technique to a universal cover problem. As a result, in Section 3, we improve the lower bound for Moser's problem to 0.232239.
In Section 4, we consider a variation of Moser's problem: we estimate the area of a universal cover for any unit closed curve. Only partial results were established by Wetzel in 1973, when he showed that a translational cover for any unit closed curve must have area between 0.155 and 0.159 \cite{sec}. We are able to show that a convex universal cover must have area at least 0.0879873. \section{Estimating the area of the convex hull of points and rectangles} Let $\mathcal{P}$ be a polygon with vertices $K_1,\, \dots,\, K_n$; we denote by $\mu(\mathcal{P})=\mu(K_1,\, \dots,\, K_n)$ the area of the convex hull of $\mathcal{P}$. We also denote by $\mu(\mathcal{P}_1,\,\mathcal{P}_2,\,\dots,\,\mathcal{P}_m)$ the area of the convex hull of $\mathcal{P}_1 \cup \dots \cup \mathcal{P}_m$, where the $\mathcal{P}_i$ are sets in the plane. Next, we define \begin{defin} Given segments $AB$ and $DC$, the \emph{height} of $AB$ with respect to $DC$ is the length of the perpendicular segment from $A$ (or $B$) to the line parallel to $DC$ passing through the other point. We denote this height by $h_{DC}(AB)$ (Figure \ref{Example}). \end{defin} \begin{figure}[htbp] \begin{center} \includegraphics[height = 1.5in]{Lemma1_Example.eps} \caption{The height of $AB$ with respect to $DC$ is $h_{DC}(AB)$.} \label{Example} \end{center} \end{figure} First, we provide a lower bound for the convex hull area of any four points in the plane. \begin{lem} Let $E$, $F$, $P$, $Q$ be points in $\mathbb{R}^2$. Then $\mu(EFPQ)\geq\frac{1}{2}|EF|\,h_{EF}(PQ)$. \label{R} \end{lem} \begin{proof} Without loss of generality, we can rotate the figure so that $EF$ is horizontal and $P$ is above $EF$. We can also relabel the points to ensure that $P$ lies above $Q$, $E$, and $F$. Let $d_1$ be the distance from the point $P$ to $EF$, and $d_2$ the distance from $Q$ to $EF$. Note that the convex hull of $EFPQ$ always contains the triangle $EFP$, and so $\mu(EFPQ)\geq\mu(EFP)$.
If $Q$ also lies above $EF$, then it is clear that $d_1\geq h_{EF}(PQ)$ (Figure \ref{lemmacase2}) and we have $\mu(EFPQ)\geq\mu(EPF) = \frac{1}{2}|EF| d_1\geq \frac{1}{2}|EF|\, h_{EF}(PQ)$. \begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{Lemma1_Case2.eps} \caption{The case where both $P$ and $Q$ lie above $EF$.} \label{lemmacase2} \end{center} \end{figure} Otherwise, if $Q$ lies below $EF$, we notice that $EFPQ$ contains both triangles $EPF$ and $EQF$, and the two triangles do not intersect except on $EF$. Hence, $\mu(EFPQ)\geq\mu(EPF)+\mu(EQF)=\frac{1}{2}|EF|\,(d_1+d_2)=\frac{1}{2}|EF|\,h_{EF}(PQ)$, and the equality holds when $EFPQ$ is convex (Figure \ref{lemmacase1}). \begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{Lemma1_Case1.eps} \caption{The case where $EF$ lies between $P$ and $Q$.} \label{lemmacase1} \end{center} \end{figure} \begin{flushright} $ \Box $ \end{flushright} \end{proof} The next proposition provides a lower bound for the convex hull area of a rectangle and four arbitrary points in the same plane. \begin{prop} Let $ABCD$ be a rectangle and $E,\, F,\, P,\, Q$ be points in the same plane. Assume that $h_{BC}(EF)>|AB|$ and $h_{AB}(PQ)>|BC|$. Then \begin{equation} \mu(ABCDEFPQ)\geq \frac{1}{2}(h_{BC}(EF)-|AB|)\,|BC|+\frac{1}{2}(h_{AB}(PQ)-|BC|)\,|AB|+|AB||BC|. \label{main} \end{equation} \end{prop} \begin{proof} Without loss of generality, we can assume that the slope of $AB$ is finite and non-negative. Let $\mathcal{V}$ be the strip between the extensions of $BC$ and $AD$, and let $\mathcal{W}$ be the strip between the extensions of $AB$ and $CD$ (Figure \ref{strips}). To eliminate redundant cases on the position of the four points $E,\, F,\, P,\, Q$ relative to the strips $\mathcal{V}$ and $\mathcal{W}$, we note that by reflecting across the perpendicular bisector of $BC$ and re-labeling $P$ and $Q$ as necessary, we can ensure that $Q$ lies under both $\mathcal{W}$ and $P$.
Specifically, if $Q$ initially lies above $\mathcal{W}$, so does $P$, and the reflection will bring both points below $\mathcal{W}$. Otherwise, if $Q$ initially lies inside $\mathcal{W}$, from the assumption that $h_{AB}(PQ)>|BC|$, $P$ must lie above $\mathcal{W}$, and the reflection and re-labeling of $P$ and $Q$ will bring $Q$ below both $\mathcal{W}$ and $P$, as desired. Similarly, we can ensure that $E$ lies to the left of $\mathcal{V}$ by reflecting across the perpendicular bisector of $AB$ and re-labeling the points $E$ and $F$. Moreover, because the reflection across the perpendicular bisector of $AB$ does not affect the relative positions of $P$, $Q$, and $\mathcal{W}$, and vice versa, we can obtain both conditions simultaneously. Next, we consider four cases according to whether $P$ lies above $\mathcal{W}$ and whether $F$ lies to the right of $\mathcal{V}$. \textbf{Case 1:} $P$ lies above $\mathcal{W}$ and $F$ lies on the right of $\mathcal{V}$. \begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{Case1_1.eps} \caption{Case with no intersection between triangles.} \label{case11} \end{center} \end{figure} If no pair of triangles $BEC$, $CQD$, $DFA$, $APB$ intersects (Figure \ref{case11}), then we directly obtain the following lower bound for the convex hull area: \begin{align*} \mu(ABCDEFPQ) &\geq \mu(BEC) + \mu(DFA) + \mu(APB) + \mu(CQD) + \mu(ABCD) \\ &= \frac{1}{2}|BC|\,h_1+\frac{1}{2}|AD|\,h_2+\frac{1}{2}|AB|\,g_1+\frac{1}{2}|CD|\,g_2+|AB||BC|, \end{align*} where $h_1$, $h_2$, $g_1$, and $g_2$ are the respective heights of triangles $BEC$, $DFA$, $APB$, and $CQD$. This can also be written as \begin{align*} \mu(ABCDEFPQ) &\geq \frac{1}{2}(h_1+h_2)\,|BC|+\frac{1}{2}(g_1+g_2)\,|AB|+|AB||BC| \\ &= \text{R.H.S. of inequality \ref{main}} \end{align*} and the equality holds when $ABCDEFPQ$ is convex.
When some triangles intersect, because of the conditions on the positions of $E,\, F,\, P$, and $Q$, triangle $BEC$ cannot intersect $DFA$, triangle $APB$ cannot intersect $CQD$, and no three triangles can have nonempty intersection. We pick the case where $DFA\cap CQD\neq \emptyset$ as a representative example and subsequently show that, in general, we can always find a set of disjoint triangles lying inside the convex hull of $ABCDEFPQ$ whose total area is greater than or equal to the sum of the areas of triangles $BEC$, $APB$, $DFA$, and $CQD$. For the two triangles $DFA$ and $CQD$ to intersect, both $F$ and $Q$ must lie below $\mathcal{W}$ and to the right of $\mathcal{V}$ (Figure \ref{caseinter}). We then draw a line $\mathcal{L}_1$ passing through $F$ parallel to $AD$ and a line $\mathcal{L}_2$ passing through $F$ parallel to $CD$. It is clear that $Q$ must lie either above $\mathcal{L}_2$ or to the right of $\mathcal{L}_1$. \begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{Case1_1inter.eps} \caption{The case where $ADF$ and $CDQ$ intersect.} \label{caseinter} \end{center} \end{figure} Hence, we see that either $\mu(CFD) > \mu(CQD)$ or $\mu(DQA) > \mu(DFA)$, and obtain the bound $\max\{\mu(DQA) + \mu(CQD),\ \mu(DFA) + \mu(CFD)\} \geq \mu(CQD) + \mu(DFA)$. Note that none of these triangles can intersect $BEC$, $BPC$, $APB$, or $AEB$ because $P$ lies above $\mathcal{W}$ and $E$ lies to the left of $\mathcal{V}$. If triangles $BEC$ and $APB$ also intersect, we can independently apply the same consideration as above and derive a similar bound. For the sake of simplicity, consider the case where $BEC$ and $APB$ are disjoint and compute the following lower bound for the convex hull area \begin{eqnarray*} \mu(ABCDEFPQ) &\geq &\mu(ABCD)+\mu(APB)+\mu(BEC) \\ & &+\max\{\mu(DQA) + \mu(CQD),\ \mu(DFA) + \mu(CFD)\} \\ &\geq &\mu(ABCD)+\mu(APB)+\mu(BEC)+\mu(CQD)+\mu(DFA) \\ &= &\text{R.H.S.
of inequality \ref{main}} \end{eqnarray*} and thus inequality \ref{main} holds when $P$ lies above $\mathcal{W}$ and $F$ lies on the right of $\mathcal{V}$. \textbf{Case 2:} $F$ lies on the right of $\mathcal{V}$ but $P$ does not lie above $\mathcal{W}$. We now ignore triangle $APB$ and consider only $BEC$, $DFA$, and $CQD$. If $CQD$ intersects $BEC$ or $DFA$, we can use the same argument from Case 1 to show that we can always find a set of disjoint triangles lying inside the convex hull of $ABCDEFPQ$ whose total area is greater than or equal to the sum of the areas of triangles $BEC$, $DFA$, and $CQD$. Moreover, since $P$ does not lie above $\mathcal{W}$, the height of $CQD$ with base $CD$ is greater than or equal to $h_{AB}(PQ) - |BC|$, and so we have \begin{eqnarray*} \mu(ABCDEFPQ) &\geq &\mu(ABCD)+\mu(BEC)+\mu(DFA)+\mu(CQD) \\ &\geq &\mu(ABCD)+\mu(BEC)+\mu(DFA) \\ & &+\frac{1}{2}(h_{AB}(PQ) - |BC|)\,|AB| \\ &= &\text{R.H.S. of inequality \ref{main}} \end{eqnarray*} \begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{Case1_2pq.eps} \caption{Case 2.} \label{case12} \end{center} \end{figure} \textbf{Case 3:} $P$ lies above $\mathcal{W}$ but $F$ does not lie on the right of $\mathcal{V}$. This case is analogous to Case 2. \textbf{Case 4:} $P$ does not lie above $\mathcal{W}$ and $F$ does not lie on the right of $\mathcal{V}$. Here we conveniently consider only triangles $BEC$ and $CQD$. By a similar argument from Case 1, we can resolve the case where $BEC$ and $CQD$ intersect. Using a similar argument from Case 2, we also know that the height of $BEC$ is greater than or equal to $h_{BC}(EF) - |AB|$. Hence, \begin{eqnarray*} \mu(ABCDEFPQ) &\geq &\mu(ABCD)+\mu(BEC)+\mu(CQD) \\ &\geq &\mu(ABCD)+\frac{1}{2}(h_{BC}(EF) - |AB|)\,|BC| \\ & &+\frac{1}{2}(h_{AB}(PQ) - |BC|)\,|AB| \\ &= &\text{R.H.S.
of inequality \ref{main}} \end{eqnarray*} \begin{flushright} $ \Box $ \end{flushright} \end{proof} \begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{Case2_2.eps} \caption{In Case 4 we take only $Q$ and $E$ into consideration.} \label{case22} \end{center} \end{figure} Now that we have proved some basic results, we are ready to proceed to the main problem. \section{Lower bound for a universal cover of curves of unit length} \subsection{Basic Figures} Consider a unit segment $\mathcal{L}$ with endpoints $E$ and $F$, a V-shaped unit worm $\mathcal{T}$ with vertices $QPR$ and side length $\frac{1}{2}$, and a U-shaped right-angle unit worm $\mathcal{R}$ with vertices $ABCD$. To maximize the area of the convex hull of $ABCD$, we let $AB = CD = \frac{1}{2}$ and $BC = AD = \frac{1}{4}$. Another unit worm that we consider is the well-known \textit{unit broadworm} \cite{broad}, denoted by $\mathcal{B}$, which was introduced by Schaer in 1968 \cite{mansur} as the broadest unit worm; its minimum width in any direction is $b_0$, approximately 0.4389. We start by introducing parameters to describe the positioning of $\mathcal{L}$, $\mathcal{T}$ and $\mathcal{R}$. By rotation, we can assume that $\mathcal{L}$ is horizontal. Let $O_1 = (x_1,\,y_1)$ be the centroid of the rectangle $\mathcal{R}$. We can always pick the vertex for $A$ so that $A=(x_1,\,y_1)+\frac{\sqrt{5}}{8}(\cos{\alpha},\,\sin{\alpha})$ and $\theta_0 \leq \alpha < \theta_0 + \pi$, where $\theta_0 = \arctan{\frac{1}{2}}$, and then label the rest of the vertices $B$, $C$, and $D$ going counterclockwise. Furthermore, for each configuration of $\mathcal{R}$, the value of $\alpha$ is uniquely defined and we denote it by $\alpha(\mathcal{R})$. Regarding vectors as complex numbers, we see that \begin{eqnarray*} \arg\left(\overrightarrow{C A}\right) &=& \alpha \\ \arg\left(\overrightarrow{C D}\right) &=& \alpha - \theta_0.
\end{eqnarray*} \begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{coordinates.eps} \caption{We construct the rectangle by rotating the vector.} \label{coord1} \end{center} \end{figure} Let $O_2 = (x_2,y_2)$ be the centroid of the triangle $\mathcal{T}$. We can similarly pick the vertex for $P$ so that $P=(x_2,y_2)+\frac{\sqrt{3}}{6}(\cos{\beta},\sin{\beta})$ and $\frac{\pi}{6} \leq \beta < \frac{5\pi}{6}$, and then label the rest of the vertices $Q$ and $R$ going counterclockwise. Furthermore, for each configuration of $\mathcal{T}$, the value of $\beta$ is uniquely defined and we denote it by $\beta(\mathcal{T})$. Again, regarding vectors as complex numbers, we have \begin{eqnarray*} \arg\left(\overrightarrow{QP}\right) &=& \beta - \frac{\pi}{6} \\ \arg\left(\overrightarrow{RP}\right) &=& \beta + \frac{\pi}{6}. \end{eqnarray*} Let $\sigma$ be the point reflection across the origin and $\tau$ the reflection across the $y$-axis. Both transformations keep the segment $\mathcal{L}$ horizontal. We notice that \begin{eqnarray*} \alpha\left(\sigma(\mathcal{R})\right) &=& \alpha(\mathcal{R}) \\ \beta\left(\sigma(\mathcal{T})\right) &=& \begin{cases} \beta(\mathcal{T}) + \frac{\pi}{3} \ \ \mbox{when} \ \ \frac{\pi}{6} \leq \beta(\mathcal{T}) < \frac{\pi}{2} \\ \beta(\mathcal{T}) - \frac{\pi}{3} \ \ \mbox{when} \ \ \frac{\pi}{2} \leq \beta(\mathcal{T}) < \frac{5\pi}{6} \end{cases} \\ \alpha\left(\tau(\mathcal{R})\right) &=& \pi - \alpha(\mathcal{R}) \\ \beta\left(\tau(\mathcal{T})\right) &=& \pi - \beta(\mathcal{T}). \end{eqnarray*} Hence, with a suitable combination of $\sigma$ and $\tau$, we can ensure that $\theta_0 \leq \alpha(\mathcal{R}) \leq \theta_0 + \frac{\pi}{2}$ and $\frac{\pi}{3} \leq \beta(\mathcal{T}) \leq \frac{2\pi}{3}$.
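As a quick numerical sanity check of these parametrizations (a sketch; the values below are just the centroid-to-vertex distances stated above), the half-diagonal of the $\frac{1}{2}\times\frac{1}{4}$ rectangle is indeed $\frac{\sqrt{5}}{8}$, the diagonal makes the angle $\theta_0=\arctan\frac{1}{2}$ with the long side, and the circumradius of the equilateral triangle of side $\frac{1}{2}$ is $\frac{\sqrt{3}}{6}$:

```python
import math

# Rectangle R has sides |AB| = |CD| = 1/2 and |BC| = |AD| = 1/4, so the
# centroid-to-vertex distance is half of its diagonal.
half_diag = math.hypot(1 / 2, 1 / 4) / 2
assert math.isclose(half_diag, math.sqrt(5) / 8)

# theta_0 = arctan(1/2) is the angle the diagonal makes with the long side.
theta0 = math.atan2(1 / 4, 1 / 2)
assert math.isclose(theta0, math.atan(1 / 2))

# Equilateral triangle T of side 1/2: the centroid-to-vertex distance
# (circumradius) equals (1/2)/sqrt(3) = sqrt(3)/6.
circumradius = (1 / 2) / math.sqrt(3)
assert math.isclose(circumradius, math.sqrt(3) / 6)
print(half_diag, circumradius)  # 0.2795..., 0.2886...
```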
\begin{figure}[htbp] \begin{center} \includegraphics[height = 2in]{coordinates2.eps} \caption{By rotating the vector $\Lambda A$ we can construct the whole triangle.} \label{coord2} \end{center} \end{figure} \subsection{Area Estimation} Here we use the inequalities from Section 2 to bound the area of the convex hull of $\mathcal{L}$, $\mathcal{B}$, $\mathcal{T}$ and $\mathcal{R}$. For simplicity, we shall refer to $\alpha(\mathcal{R})$ and $\beta(\mathcal{T})$ simply as $\alpha$ and $\beta$, respectively. The following lemmas provide basic lower bounds on the area of the convex hull of configurations involving the line segment $\mathcal{L}$. \begin{lem} $ \mu(\mathcal{L},\, \mathcal{R})\geq \frac{\sqrt{5}}{8} \sin{\alpha} $. \label{lemmaLR} \end{lem} \begin{proof} From Lemma \ref{R}, and because $EF$ is horizontal and $\arg\left(\overrightarrow{C A}\right) = \alpha$, it follows that $\mu(\mathcal{L},\,\mathcal{R}) = \mu(ABCDEF) \geq \mu(ACEF) \geq \frac{1}{2}|EF|\, h_{EF}(AC) = \frac{\sqrt{5}}{8} \sin{\alpha}$. \begin{flushright} $ \Box $ \end{flushright} \end{proof} \begin{lem} $ \mu(\mathcal{L},\,\mathcal{T})\geq \frac{1}{4} \max\left\{ \sin{(\beta - \frac{\pi}{6})},\,\sin{(\beta + \frac{\pi}{6})}\right\}. $ \end{lem} \begin{proof} Note that $\mu(\mathcal{L}, \mathcal{T}) \geq \max\left\{ \mu(EFPQ),\, \mu(EFPR) \right\}$, and we also know that $\mu(EFPQ) \geq \frac{1}{4} \sin{(\beta - \frac{\pi}{6})}$ and $\mu(EFPR) \geq \frac{1}{4} \sin{(\beta + \frac{\pi}{6})}$. \begin{flushright} $ \Box $ \end{flushright} \end{proof} Next, we provide a lower bound for the area of the convex hull of configurations involving $\mathcal{L}$, $\mathcal{T}$ and $\mathcal{R}$ together. \begin{prop} $\mu(\mathcal{L},\,\mathcal{T},\,\mathcal{R}) \geq \frac{1}{8}\left(\sin(\alpha - \theta_0 + \frac{\pi}{2})+ \sin(\beta - \alpha + \theta_0 + \frac{\pi}{6})\right).
$ \end{prop} \begin{proof} First notice that $\mu(\mathcal{L},\,\mathcal{T},\,\mathcal{R}) \geq \mu(ABCDEFPR)$; then we apply inequality \ref{main} to obtain $$\mu(ABCDEFPR) \geq \frac{1}{2}\left(h_{BC}(EF)\, |BC| + h_{AB}(PR)\, |AB|\right).$$ Since $\arg\left(\overrightarrow{B C}\right) = \alpha - \theta_0 + \frac{\pi}{2}$, we have $h_{BC}(EF) = \sin{(\alpha - \theta_0 + \frac{\pi}{2})}$. Similarly, we obtain $h_{AB}(PR) = \frac{1}{2} \sin{(\beta - \alpha + \theta_0 + \frac{\pi}{6})}$, and therefore get the desired result. \begin{flushright} $ \Box $ \end{flushright} \end{proof} Lastly, we replace $\mathcal{T}$ with the broadworm $\mathcal{B}$ and denote its breadth by $b_0$. \begin{prop} $\mu(\mathcal{L},\,\mathcal{R},\,\mathcal{B}) \geq \frac{1}{4}\left(\frac{1}{2}\sin(\alpha - \theta_0 + \frac{\pi}{2})+b_0\right) $. \label{propLRB} \end{prop} \begin{proof} By the definition of breadth, there exist points $S$ and $T$ on $\mathcal{B}$ such that $h_{AB}(ST)\geq b_0$. Again, we apply inequality \ref{main} to obtain $$\mu(\mathcal{L},\,\mathcal{R},\,\mathcal{B}) \geq \mu(ABCDEFST) \geq \frac{1}{8}\sin(\alpha - \theta_0 + \frac{\pi}{2}) + \frac{b_0}{4}$$ and complete the proof. \begin{flushright} $ \Box $ \end{flushright} \end{proof} Now we can combine all the lower bounds together to try to minimize $\mu(\mathcal{L},\,\mathcal{R},\,\mathcal{T},\,\mathcal{B})$.
We define the following functions: \begin{itemize} \item {$p(\alpha)=\frac{\sqrt{5}}{8} \sin{\alpha} $} \item {$q(\beta)= \frac{1}{4} \max\left\{ \sin{(\beta - \frac{\pi}{6})},\, \sin{(\beta + \frac{\pi}{6})}\right\} $} \item {$f(\alpha,\beta)= \frac{1}{8}\left(\sin(\alpha - \theta_0 + \frac{\pi}{2})+ \sin(\beta - \alpha + \theta_0 + \frac{\pi}{6})\right)$} \item {$g(\alpha)= \frac{1}{4}\left(\frac{1}{2}\sin(\alpha - \theta_0 + \frac{\pi}{2})+b_0\right)$} \item {$F(\alpha,\, \beta) = \max\left\{p(\alpha),\, q(\beta),\,f(\alpha,\beta),\, g(\alpha)\right\}$} \end{itemize} It follows immediately that $\mu(\mathcal{L},\,\mathcal{R},\, \mathcal{T},\,\mathcal{B}) \geq F(\alpha,\, \beta)$, and we will find a lower bound for $F(\alpha,\, \beta)$ on the domain $\theta_0 \leq \alpha \leq \theta_0 + \frac{\pi}{2}$ and $\frac{\pi}{3} \leq \beta \leq \frac{2\pi}{3}$. \begin{prop} $F(\alpha,\,\beta)\geq 0.232239$ on the domain $[\theta_0, \theta_0 + \frac{\pi}{2}] \times [\frac{\pi}{3} , \frac{2\pi}{3}]$. \label{result} \end{prop} \begin{proof} We consider the possible values of $F(\alpha, \beta)$ in four cases. \noindent\textbf{Case 1:} $0.980693572 < \alpha \leq \theta_0 + \frac{\pi}{2}$. Clearly, we have $\frac{\sqrt{5}}{8}\sin{\alpha} > 0.23223900008$. \newline\newline \noindent\textbf{Case 2:} $\theta_0 \leq \alpha < 0.663720973$. We have $\frac{1}{4}\left(\frac{1}{2}\sin(\alpha - \theta_0 + \frac{\pi}{2})+b_0\right) > 0.232239000003$. \newline\newline \noindent\textbf{Case 3:} $\frac{\pi}{3} \leq \beta < \frac{\pi}{2} - 0.1443850667\,$ or $\,\frac{\pi}{2} + 0.1443850667 < \beta \leq \frac{2\pi}{3}$. We have $\frac{1}{4}\sin(\beta + \frac{\pi}{6}) > 0.232239000012$ when $\frac{\pi}{3} \leq \beta < \frac{\pi}{2} - 0.1443850667$, and $\frac{1}{4}\sin(\beta - \frac{\pi}{6}) > 0.232239000012$ when $\frac{\pi}{2} + 0.1443850667 < \beta \leq \frac{2\pi}{3}$.
\newline\newline \noindent\textbf{Case 4:} $0.663720972 \leq \alpha \leq 0.980693573\,$ and $\,\frac{\pi}{2} - 0.1443850668 \leq \beta \leq \frac{\pi}{2} + 0.1443850668$. Consider $f(\alpha,\,\beta)$ on this domain, which is a product of closed and bounded intervals. We can check that there is no local minimum except possibly at the corners, and compute \begin{align*} f(0.663720972,\,\pi/2 - 0.1443850668) &= 0.245506 \\ f(0.663720972,\,\pi/2 + 0.1443850668) &= 0.234071 \\ f(0.980693573,\,\pi/2 - 0.1443850668) &= 0.232475 \\ f(0.980693573,\,\pi/2 + 0.1443850668) &= 0.232239210 \end{align*} Thus, $f(\alpha,\,\beta) \geq 0.232239210$ on this domain. Therefore, $F(\alpha,\,\beta)= \max\left\{p(\alpha), q(\beta),f(\alpha,\,\beta),g(\alpha)\right\} \geq 0.232239$. \begin{flushright} $ \Box $ \end{flushright} \end{proof} Hence, we have established a new lower bound for Moser's problem. \section{Lower bound for a universal cover of unit closed curves} We now consider a universal cover for any unit closed curve. Denote the segment of length $\frac{1}{2}$, the circle with unit circumference, and a square of side length $\frac{1}{4}$ by $\mathcal{L}$, $\mathcal{C}$, and $\mathcal{R}$, respectively. We parameterize the orientations of $\mathcal{L}$ and $\mathcal{R}$ in essentially the same way as before. The circle $\mathcal{C}$ imitates the role of the broadworm, as $\mathcal{C}$ has width $\frac{1}{\pi}$ in every direction. Additionally, we can assume that $\frac{\pi}{4} \leq \alpha \leq \frac{\pi}{2}$ because of the symmetry of the square. Then we have the following \begin{prop} $i)$ $\mu(\mathcal{L},\mathcal{R}) \geq \frac{\sqrt{2}}{16} \sin(\alpha)\,$ and $\,ii)$ $\mu(\mathcal{L},\mathcal{R},\mathcal{C}) \geq \frac{1}{8}\left(\frac{1}{2}\sin(\alpha + \frac{\pi}{4}) + \frac{1}{\pi}\right)$. \end{prop} \begin{proof} The proof is exactly the same as in Lemma \ref{lemmaLR} and Proposition \ref{propLRB}.
\end{proof} We now define $$G(\alpha) = \max \left\{ \frac{\sqrt{2}}{16} \sin(\alpha),\, \frac{1}{8}\left(\frac{1}{2}\sin(\alpha + \frac{\pi}{4}) + \frac{1}{\pi}\right) \right\}.$$ Then we have the following. \begin{prop} $G(\alpha) \geq 0.0879873$ on the domain $[\frac{\pi}{4},\,\frac{\pi}{2}]$. \end{prop} \begin{proof} We divide the domain into two parts. \noindent\textbf{Case 1:} When $1.4755040221 < \alpha \leq \frac{\pi}{2}$, we see that $\frac{\sqrt{2}}{16} \sin(\alpha) > 0.08798734$. \noindent\textbf{Case 2:} When $\frac{\pi}{4} \leq \alpha < 1.4755040222$, we have $\frac{1}{8}\left(\frac{1}{2}\sin(\alpha + \frac{\pi}{4}) + \frac{1}{\pi}\right) > 0.08798739$. \end{proof} Hence, we establish a lower bound for a universal cover of unit closed curves. \noindent\textit{Remark}: In this case, adding an equilateral triangle of side $\frac{1}{3}$ does not improve the lower bound.
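The two case analyses above (for $F$ in Section 3 and for $G$ in Section 4) can be double-checked numerically. The following sketch is only a sanity check, not a proof: it assumes the approximate broadworm breadth $b_0 = 0.4389$ and samples the domains on a finite grid, so the reported minima only bound the functions at the sampled points.

```python
import math

theta0 = math.atan(1 / 2)
b0 = 0.4389  # approximate breadth of Schaer's broadworm

def F(a, b):
    # maximum of the four lower-bound functions p, q, f, g from Section 3
    p = math.sqrt(5) / 8 * math.sin(a)
    q = max(math.sin(b - math.pi / 6), math.sin(b + math.pi / 6)) / 4
    f = (math.sin(a - theta0 + math.pi / 2)
         + math.sin(b - a + theta0 + math.pi / 6)) / 8
    g = (math.sin(a - theta0 + math.pi / 2) / 2 + b0) / 4
    return max(p, q, f, g)

def G(a):
    # maximum of the two lower-bound functions from Section 4
    return max(math.sqrt(2) / 16 * math.sin(a),
               (math.sin(a + math.pi / 4) / 2 + 1 / math.pi) / 8)

n = 300
alphas = [theta0 + (math.pi / 2) * i / n for i in range(n + 1)]
betas = [math.pi / 3 + (math.pi / 3) * j / n for j in range(n + 1)]
min_F = min(F(a, b) for a in alphas for b in betas)
min_G = min(G(math.pi / 4 + (math.pi / 4) * i / n) for i in range(n + 1))
print(min_F, min_G)  # the sampled minima stay above 0.2322 and 0.0879
```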
https://arxiv.org/abs/1711.04504
Tilings with noncongruent triangles
We solve a problem of R. Nandakumar by proving that there is no tiling of the plane with pairwise noncongruent triangles of equal area and equal perimeter. We also show that no convex polygon with more than three sides can be tiled with finitely many triangles such that no pair of them share a full side.
\section{Introduction} In his blog, R. Nandakumar~\cite{Na06, Na14} raised several interesting questions on tilings. They have triggered a lot of research in geometry and topology. In particular, he and Ramana Rao~\cite{NaR12} conjectured that for every natural number $n$, any plane convex body can be partitioned into $n$ convex pieces of equal area and perimeter. After some preliminary results~\cite{BBS10, KaHA14} indicating that the problem is closely related to questions in equivariant topology, the conjecture was settled in the affirmative by Blagojevi\'c and Ziegler~\cite{BlZ14} in the special case where $n$ is a prime power. All other cases, including the case $n=6$, are open. \smallskip Nandakumar also asked the following interesting variant of the last question. A family of closed triangles that together cover the whole plane is said to form a {\em tiling} if no two of its members share an interior point. \medskip \noindent{\bf Problem 1.} (Nandakumar~\cite{Na14}) {\em Is it possible to tile the plane with pairwise noncongruent triangles of the same area and perimeter?} \medskip The aim of this note is to show that the answer to this question is in the negative. \medskip \noindent{\bf Theorem 2.} {\em There is no tiling of the plane with pairwise noncongruent triangles of the same area and the same perimeter.} \medskip We start with a trivial observation. \medskip \noindent{\bf Observation 3.} {\em If two triangles of the same area and perimeter share a side, then they are congruent.} \medskip \noindent Indeed, if two triangles, $xyz$ and $xyz'$, have the same perimeter, then $z$ and $z'$ must lie on a common ellipse with foci $x$ and $y$. On the other hand, if their areas are also the same, the distances of $z$ and $z'$ from the line $xy$ must be equal. Consequently, if $xyz$ and $xyz'$ do not coincide, one can obtain one from the other by a reflection through the line $xy$, or the midpoint of the segment $xy$, or the orthogonal bisector of this segment.
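The symmetry behind Observation 3 can be illustrated numerically (a sketch with an arbitrarily chosen base length and apex height; it only illustrates the invariance under the three reflections, not the full uniqueness argument):

```python
import math

def perimeter(c, t, h):
    """Perimeter of the triangle with base (0,0)-(c,0) and apex (t, h)."""
    return c + math.hypot(t, h) + math.hypot(t - c, h)

def area(c, t, h):
    """Area of the same triangle: half base times height."""
    return 0.5 * c * abs(h)

c, h = 1.0, 0.6
for t in (0.1, 0.35, 0.8):
    # Reflecting the apex in the orthogonal bisector x = c/2, or in the
    # line through the base (h -> -h), preserves both quantities.
    assert math.isclose(perimeter(c, t, h), perimeter(c, c - t, h))
    assert math.isclose(perimeter(c, t, h), perimeter(c, t, -h))
    assert math.isclose(area(c, t, h), area(c, c - t, -h))
print("reflections preserve area and perimeter")
```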
Therefore, in order to prove Theorem~2, it is sufficient to establish the following result. \medskip \noindent{\bf Theorem 4.} {\em Let $\mathcal T$ be a tiling of the plane with triangles of unit perimeter, each of which has area at least $\epsilon>0$. Then there are two triangles in $\mathcal T$ that share a side.} \medskip In the periodic tiling depicted in Fig.~1, no two triangles share a side. Therefore, Theorem~4 does not hold without the assumption that all triangles have the same perimeter. \begin{figure}\centering \begin{tikzpicture}[scale=1.4] \foreach \x in {1,...,8} \foreach \y in {2,...,4} \filldraw[color=Red!80,thick,draw=black] ({1.5*\x},{\y}) -- ({1.5*\x+0.5},{\y+1}) -- ({1.5*\x+1},{\y}) -- ({1.5*\x},{\y}); \foreach \x in {1,...,7} \foreach \y in {1,...,4} \filldraw[color=Red!80,thick,draw=black] ({1.5*\x+0.75},{\y+0.5}) -- ({1.5*\x+1.25},{\y+1.5}) -- ({1.5*\x+1.75},{\y+0.5}) -- ({1.5*\x+0.75},{\y+0.5}); \end{tikzpicture} \caption{Periodic tiling with no two triangles sharing a side.} \end{figure} Another way of enforcing that two triangles of a tiling share a side is by imposing a condition on the lengths of the sides of the participating triangles. We say that a tiling is {\em locally finite} if any bounded region intersects only a finite number of its members. \medskip \noindent{\bf Theorem 5.} {\em Let $\mathcal T$ be a locally finite tiling of the plane with triangles, and suppose that the lengths of their sides belong to the interval $[1,2)$. Then there are two triangles in $\mathcal T$ that share a side.} \medskip By properly scaling the tiling depicted in Fig.~1, we obtain an example showing that Theorem~5 does not remain true if we replace the interval $[1,2)$ by its closure, $[1,2]$. One of the main ideas of the proof of Theorem 2, in a simplified form, can be used to obtain a result on triangular tilings of a convex $k$-gon with $k>3$. \medskip \noindent{\bf Theorem 6.} {\em Let $k\ge 4$.
In any tiling of a convex $k$-gon with finitely many triangles, there are two triangles that share an edge.} \medskip We note that, while we were revising this paper, Roman Karasev and Pavel Kozhevnikov pointed out to us that Theorem~6, in the case $k=4$, was once posed at a Russian mathematics olympiad and was published in Kvant \cite{Kv}. \begin{figure} \centering \begin{tikzpicture}[scale=0.12] \foreach \x in {4,...,8} \draw[thick] ({114.8-10.6*\x}:{(3/2)^\x}) -- ({234.8-10.6*\x}:{(3/2)^\x}) -- ({-5.2-10.6*\x}:{(3/2)^\x}) -- ({114.8-10.6*\x}:{(3/2)^\x}); \foreach \x in {5,...,8} \draw[thick] ({234.8-10.6*\x}:{(3/2)^\x}) -- ({234.8-10.6*(\x-1)}:{(3/2)^(\x-1)}); \foreach \x in {5,...,8} \draw[thick] ({114.8-10.6*\x}:{(3/2)^\x}) -- ({114.8-10.6*(\x-1)}:{(3/2)^(\x-1)}); \foreach \x in {5,...,8} \draw[thick] ({-5.2-10.6*\x}:{(3/2)^\x}) -- ({-5.2-10.6*(\x-1)}:{(3/2)^(\x-1)}); \end{tikzpicture} \caption{Tiling a triangle with triangles not sharing a side. A triangle is recursively subdivided into $4$ pieces.} \label{fig3} \end{figure} The tiling depicted in Figure~\ref{fig3} shows that Theorem 6 is false for $k=3$. The tiling in Figure~\ref{figsquare} shows that Theorem 6 does not hold for infinite tilings. 
\begin{figure}\centering \begin{tikzpicture}[scale=0.6] \draw[thick] (0,0) -- (5,5)-- (10,0)--(5,-5)--cycle; \draw[thick] (0,0) -- (10,0); \draw[thick,red] (7.5,0) -- (7.5,-2.5)--(2.5,-2.5)--(2.5,0); \draw[ultra thick] (6.25,0) -- (7.5,-1.25)--(5,-2.5)--(2.5,-1.25)--(3.75,0); \draw[thick,red] (5.625,0) -- (6.875,-0.625)--(6.25,-1.875)--(3.75,-1.875)--(3.125,-0.625)--(4.375,0); \node[fill=white, circle, inner sep = -0pt, minimum size=0pt] at (5,-1.3) {.}; \node[fill=white, circle, inner sep = -0pt, minimum size=0pt] at (5,-1.45) {.}; \node[fill=white, circle, inner sep = -0pt, minimum size=0pt] at (5,-1.6) {.}; \node[fill=black, circle, inner sep = -0pt, minimum size=6pt] at (5,0) {}; \end{tikzpicture} \caption{Tiling of a square with infinitely many triangles, no pair of which share a side.}\label{figsquare} \end{figure} The rest of this note is organized as follows. In Section~2, we establish Theorem~6. The proof of Theorem~4 (which implies Theorem 2) is similar, but more involved. It requires careful asymptotic estimates of certain parameters and is presented in a separate section (Section~3). The proof of Theorem~5 is given in Section~4. \smallskip For many other problems on tilings, consult~\cite{BMP05} and~\cite{Na14}. \section{Tiling a convex polygon---Proof of Theorem 6} Consider a convex $k$-gon $P$ which is tiled with $t$ triangles such that no two of them share a side. In what follows, by a {\em side} we will always mean a side of one of the $t$ triangles, so that the total number of sides is $3t$. We use the term {\em vertex} for the vertices of the triangles (including the vertices of $P$). Define a simple graph $G$ on this vertex set by connecting two vertices with an edge if the segment between them belongs to the side of a triangle and contains no other vertex. Obviously, $G$ is a connected planar graph with $f(G)=t+1$ faces. 
The number of vertices of $G$ is $v(G)=v_{\rm bd}+v_{\rm int}$, where $v_{\rm bd}$ and $v_{\rm int}$ denote the number of vertices lying on the boundary of $P$ and in the interior of $P$, respectively, and $v_{\rm bd}\ge k$. By Euler's polyhedral formula, the number of edges of $G$ satisfies $$e(G)=v(G)+f(G)-2=v_{\rm bd}+v_{\rm int}+t-1.$$ The number of edges along the boundary of $P$ is $v_{\rm bd}$, and every edge in the interior of $P$ belongs to the side of precisely two triangles. Denote by $v^*_{\rm int}\le v_{\rm int}$ the number of vertices in the interior of $P$ that subdivide (that is, lie in the interior of) a side of a triangle. In fact, each such vertex subdivides precisely one side. Double counting the edges, we obtain $$2e(G)=3t+v^*_{\rm int}+v_{\rm bd}.$$ Comparing the last two equations gives \begin{equation}\label{eq1} v_{\rm bd}+2v_{\rm int}-v^*_{\rm int}=t+2. \end{equation} A {\em minimal} segment that can be decomposed into sides of triangles in two different ways is called a {\em stretch}. The total number of triangles whose sides participate in such a decomposition (that is, lie along the stretch) is called the {\em size} of the stretch. Since no two triangles share a side, the size of every stretch is at least 3. If a stretch has size $s$, then there are precisely $s-2\ge {s\over 3}$ vertices in its interior that {\em subdivide} a side. Observe that each of the $v^*_{\rm int}$ vertices in the interior of $P$ that subdivide a side lies in the interior of precisely one stretch. 
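The algebra behind (\ref{eq1}) can be double-checked mechanically. The following plain-Python sketch (our own check, not part of the paper; we write {\tt v\_star} for $v^*_{\rm int}$) eliminates $e(G)$ between Euler's formula and the double count, and verifies the resulting linear identity at random integer points, which the variables need not correspond to an actual tiling for.

```python
import random

# Euler:        e(G) = v(G) + f(G) - 2, with v(G) = v_bd + v_int, f(G) = t + 1.
# Double count: 2*e(G) = 3*t + v_star + v_bd.
# Eliminating e(G) must give relation (1): v_bd + 2*v_int - v_star = t + 2.
for _ in range(1000):
    v_bd, v_int, v_star, t = (random.randint(0, 10**6) for _ in range(4))
    e = (v_bd + v_int) + (t + 1) - 2                 # Euler's polyhedral formula
    diff = 2 * e - (3 * t + v_star + v_bd)           # mismatch of the two counts
    # ... equals the residual of relation (1), for all values of the variables:
    assert diff == (v_bd + 2 * v_int - v_star) - (t + 2)
```

For a single triangle ($t=1$, $v_{\rm bd}=3$, $v_{\rm int}=v^*_{\rm int}=0$) both counts give $2e(G)=6$ and relation (1) reads $3=1+2$.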
\begin{figure}\centering \begin{tikzpicture}[scale=3] \draw[thick] (0,2) -- (4,2) -- (4,0) -- (0,0) -- (0,2); \draw[thick] (4,2) -- (2.5,0.5); \draw[thick] (4,0) -- (3,1); \draw[ultra thick, red] (0,0) -- (2.5,0.5); \draw[thick] (2.5,0.5) -- (3.34,0.67); \draw[thick] (0,2) -- (1.5,0.3); \draw[thick] (4,0) -- (2.5,0.5); \draw[thick] (4,2) -- (2,0.4) -- (0,2); \draw[thick] (1,1.2) -- (3.2,1.38) -- (0,2); \node[fill=black, circle, inner sep = 0pt, minimum size=7pt] at (2.5,0.5) {}; \node[fill=Grey, circle, inner sep = 0pt, minimum size=7pt] at (2,0.4) {}; \node[fill=Grey, circle,inner sep = 0pt, minimum size=7pt] at (1.5,0.3) {}; \end{tikzpicture} \caption{Tiling of a rectangle with triangles. The thick segment is a stretch of size $4$ with two subdividing vertices marked by grey dots. The only interior vertex which is not a subdividing vertex is marked by a black dot. } \end{figure} Let $\Sigma$ denote the sum of the sizes of all stretches. Since every side of a triangle, except for the $v_{\rm bd}$ sides that lie on the boundary of $P$, contributes precisely one to this sum (and those on the boundary contribute zero, due to convexity), we have that $\Sigma=3t-v_{\rm bd}$. Along each stretch, the number of subdividing vertices is at least one-third of the size of the stretch. Therefore, the total number of subdividing vertices is at least ${\Sigma\over 3}$. That is, $$v^*_{\rm int}\ge {3t-v_{\rm bd}\over 3} = t-{v_{\rm bd}\over 3},$$ whence $$t\le v^*_{\rm int} + {v_{\rm bd}\over 3}.$$ Plugging this inequality into (\ref{eq1}), we obtain that $$v_{\rm bd}+3(v_{\rm int}-v^*_{\rm int})\le 3.$$ This can hold only if $v_{\rm bd}=3$ and $v_{\rm int}=v^*_{\rm int}$. In particular, $k\le v_{\rm bd}$ must also be equal to $3$. This completes the proof of Theorem~6. 
$\Box$ \medskip The above proof also shows that if $P$ is a triangle ($k=3$), then any tiling of $P$ with triangles such that no two triangles share a side satisfies that {\bf (i)} there is no vertex subdividing any side of $P$, {\bf (ii)} every vertex in the interior of $P$ must subdivide a side, and {\bf (iii)} the size of every stretch is exactly $3$. \noindent Despite all these limitations, for any $k>1$, a triangle can be cut into $3k+1$ smaller triangles in several different ways without having two pieces that share a side. For example, this can be achieved by recursively subdividing one of the triangles into $4$ pieces; see, e.g., Fig.~\ref{fig3}. \section{Equal perimeter tilings---Proof of Theorem 4} Here we extend the ideas of the proof of Theorem~6 given in the previous section from a finite tiling of a polygon to tilings of the whole plane. We start with a quick overview. Assume for a contradiction that there is a tiling with no pair of triangles sharing a side which satisfies the conditions in Theorem~4. We want to apply the computation described in the previous section to a large but bounded portion of this tiling. In general, we will not be able to apply Theorem~6 directly, because there is no guarantee that there exists a large part of the tiling whose union is a convex polygon. For this reason, as a ``boundary effect'', some small error terms will creep into our calculations. Nevertheless, we will be able to conclude that, analogously to condition (iii) at the end of the last section, {\em most} stretches in the tiling have size $3$. Out of the three sides along a stretch of size 3, one is as long as the other two combined. We call the former side {\em long}, while the other two sides along the same stretch are called {\em short}. Of course, there may be a short side in one stretch that is much longer than a long side in another. Overall, there are twice as many short sides as long sides, but their total lengths are exactly the same. 
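This bookkeeping can be illustrated by a tiny simulation (the stretch lengths and split points below are random and purely illustrative): each stretch of size $3$ contributes one long side and two short sides whose lengths add up to the long one, so the short sides outnumber the long ones two to one while the two total lengths coincide.

```python
import random

random.seed(0)
long_sides, short_sides = [], []
for _ in range(500):                      # simulate 500 stretches of size 3
    length = random.uniform(0.1, 0.5)     # the long side spans the whole stretch
    split = random.uniform(0.01, length - 0.01)
    long_sides.append(length)
    short_sides += [split, length - split]    # the two short sides

assert len(short_sides) == 2 * len(long_sides)
assert abs(sum(short_sides) - sum(long_sides)) < 1e-9
```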
We deduce a strong triangle inequality for the short and long sides of a single triangle. Summing up these inequalities for all triangles, and combining the obtained bound with the above facts, we will arrive at the desired contradiction. Next, we spell out the details of the proof. \medskip \noindent{\bf Proof of Theorem~4}. Assume for a contradiction that there is a tiling ${\mathcal T}_{\infty}$ of the plane satisfying the conditions in Theorem~4, but not having two triangles that share a side. Since every triangle has area at least $\epsilon>0$ and unit perimeter, the tiling must be locally finite. The perimeter of each triangle is $1$, which implies that the areas of the triangles are also bounded from above, by the constant $\delta=\sqrt3/36$. Moreover, each side of a triangle has to be longer than $\epsilon'=4\epsilon$, for otherwise its area would be smaller than $\epsilon$. At the end of the proof, we will use the following strong triangle inequality: the total length of any two sides of a triangle in our tiling exceeds the length of the third side by at least some fixed $\epsilon''>0$. The existence of such $\epsilon''$ follows from a compactness argument. For the rest of the proof, we think of $\epsilon$, $\delta$, $\epsilon'$, and $\epsilon''$ as fixed positive constants. Their exact values are not relevant for our argument. \smallskip Let $r>0$ be a sufficiently large number to be specified later, and choose an open circular disk $D(r)$ of radius $r$. Let ${\mathcal T}_0$ denote the family of all triangles $T\in {\mathcal T}_{\infty}$ that have a nonempty intersection with $D(r)$. The union of these triangles is a connected set, but not necessarily {\em simply} connected. Filling up the possible holes with other triangles in ${\mathcal T}_{\infty}$, we turn this set into a simply connected polygonal region $P$. Let ${\mathcal T}$ denote the family of all triangles $T\in {\mathcal T}_{\infty}$ that belong to $P$. The union of these triangles is $P$. 
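The constants $\delta$ and $\epsilon'$ can be sanity-checked numerically via Heron's formula (the sampling scheme below is ours, purely illustrative): for perimeter $1$, the area never exceeds $\sqrt3/36$, the equilateral value, and is always below a quarter of each side length, which is exactly the bound forcing every side to be longer than $4\epsilon$.

```python
import math, random

random.seed(1)
DELTA = math.sqrt(3) / 36      # maximal area of a triangle with perimeter 1

for _ in range(10000):
    # rejection-sample side lengths a, b, c with a + b + c = 1
    a, b = random.uniform(0, 0.5), random.uniform(0, 0.5)
    c = 1 - a - b
    if not (0 < c < 0.5):      # triangle inequality <=> every side < 1/2
        continue
    s = 0.5                    # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    assert area <= DELTA + 1e-12          # bounded by the equilateral case
    for side in (a, b, c):
        assert area <= side / 4 + 1e-12   # hence side >= 4 * area
```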
Let ${\mathcal T}'\subset{\mathcal T}_{\infty}$ denote the family of triangles that touch the boundary of $P$ from outside. Since the diameter of every triangle is at most $1/2$, all members of ${\mathcal T}'$ lie in the annulus $D(r+1)\setminus D(r)$, where $D(r+1)$ is the disk of radius $r+1$ concentric with $D(r)$. Using the above lower and upper bounds on the areas of the triangles, and denoting the number of triangles in $\mathcal T$ (and ${\mathcal T}'$) by $t$ (and $t'$, respectively), we have \begin{equation}\label{1} \begin{split} t &= |\mathcal T| \ge \frac{r^2\pi}\delta=\Omega(r^2),\\ t' &= |{\mathcal T}'| \le \frac{(r+1)^2\pi-r^2\pi}\epsilon=O(r). \end{split} \end{equation} Here we use the asymptotic notation with respect to the choice of $r$. \smallskip When speaking of {\em vertices} and {\em sides}, we mean the vertices and sides of the triangles in $\mathcal T$. As no two triangles share a side, the total number of sides is $3t$. Define a simple graph $G$ on the set of vertices by connecting two vertices by an {\em edge} if the segment between them belongs to a side and contains no other vertex. Obviously, $G$ is a connected planar graph with $f(G)=t+1$ faces, including the exterior face. Let $v(G)$ and $e(G)$ denote the number of vertices and the number of edges in $G$, respectively. By Euler's polyhedral formula, we have \begin{equation}\label{3} e(G)=v(G)+f(G)-2=v(G)+t-1. \end{equation} Let us call an edge a {\em boundary edge} if it belongs to the boundary of the polygon $P$. A boundary edge is said to be {\em full} if it is the full side of a triangle in $\mathcal T$, and {\em partial} if it is a proper part of a side. (Note that partial boundary edges did not show up in the proof of Theorem~6, because there $P$ was a {\em convex} polygon.) Let $e_{\rm full}$ and $e_{\rm part}$ stand for the number of full and partial boundary edges, respectively. 
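To see the two estimates in (\ref{1}) at work: the upper bound on $t'$ is linear in $r$ while the lower bound on $t$ is quadratic, so the ratio of the two bounds tends to $0$. A quick numerical illustration (the value of $\epsilon$ is an arbitrary sample):

```python
import math

eps = 0.001                    # illustrative sample value of epsilon
delta = math.sqrt(3) / 36      # maximal area for unit perimeter

ratios = []
for r in (10, 100, 1000, 10000):
    t_lower = r**2 * math.pi / delta                    # t  = Omega(r^2)
    tprime_upper = ((r + 1)**2 - r**2) * math.pi / eps  # t' = O(r)
    ratios.append(tprime_upper / t_lower)

# the boundary term becomes negligible as r grows
assert all(x > y for x, y in zip(ratios, ratios[1:]))
```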
Note that all boundary edges are contained in the union of the boundaries of the triangles in ${\mathcal T}'$. The length of any side is at least $\epsilon'$. Thus, in view of (\ref{1}), the number of full boundary edges satisfies the inequality \begin{equation}\label{4} e_{\rm full}\le\frac{t'}{\epsilon'}=O(r). \end{equation} To bound the number of partial boundary edges, it is enough to observe that every partial boundary edge contains a vertex of a triangle $T\in{\mathcal T}'$, and any vertex of $T$ belongs to at most one partial boundary edge. Hence, we have \begin{equation}\label{5} e_{\rm part}\le3t'=O(r). \end{equation} A {\em stretch} is a {\em minimal} segment that can be decomposed into {\em sides} (of triangles in $\mathcal T$) and {\em partial boundary edges} in two different ways. (Note that in the proof of Theorem~6 we did not have to include boundary edges in the definition of a stretch, because in that setting all stretches lay in the interior of $P$.) The number of sides belonging to (contained in) a stretch is called the {\em size} of the stretch. Observe that sides contained in the boundary of $P$ (full boundary edges) do not belong to any stretch, but every other side belongs to a unique one. Thus, the total size of the stretches is equal to $3t-e_{\rm full}$. We call a stretch {\em improper} if it contains a partial boundary edge. Otherwise, it is called a {\em proper} stretch. The size of a proper stretch is at least $3$, the size of an improper stretch is at least $2$. We call a proper stretch of size $3$ {\em tight}, and denote the number of tight stretches by $\sigma_{\rm tight}$. A stretch is said to be {\em loose} if it is not tight. Let $L_{\rm loose}$ denote the total size of all loose stretches. 
\begin{figure}\centering \begin{tikzpicture}[scale=0.6] \draw[thick,dashed] (-3,-1) -- (6,2) -- (-3,8) -- (0,0); \draw[thick,dashed] (-8,2) -- (-2,1) -- (-3,8); \draw[thick] (-3,8) -- (-6,3); \draw[thick,dashed] (-3,-1) -- (-1.05,2.9); \draw[thick] (-8,2) -- (-2.5,4.75); \draw[ultra thick,red] (-6,1.66) -- (-0.8,-3); \draw[thick] (-0.8,-3)-- (-8,2); \draw[ultra thick,red] (-1.9,-2) -- (10,4); \draw[thick] (2,-2) --(-1.9,-2); \draw[thick] (-0.8,-3) -- (1,-2); \draw[thick] (2,-0.05)--(2,-3)-- (10,4); \draw[thick,dashed] (0,6)--(15,3); \draw[thick] (15,3) -- (9,9)--(0,6); \draw[thick] (6,0.5)--(15,3); \draw[thick] (-3,8)--(6,8); \node[fill=white, circle, inner sep = 0pt] at (-2.7,2.7) {s}; \node[fill=white, circle, inner sep = 0pt] at (-3,5.7) {s}; \node[fill=white, circle, inner sep = 0pt] at (-2.07,3.5) {l}; \node[fill=white, circle, inner sep = 0pt] at (-1.6,2.4) {s}; \node[fill=white, circle, inner sep = 0pt] at (-1.8,4.2) {s}; \node[fill=white, circle, inner sep = 0pt] at (-0.6,3) {l}; \node[fill=white, circle, inner sep = 0pt] at (-0.7,1) {s}; \node[fill=white, circle, inner sep = 0pt] at (-1.7,0.8) {l}; \node[fill=white, circle, inner sep = 0pt] at (-1.4,-0.1) {s}; \node[fill=white, circle, inner sep = 0pt] at (-4.7,2) {l}; \node[fill=white, circle, inner sep = 0pt] at (-3.8,1) {s}; \node[fill=white, circle, inner sep = 0pt] at (-6.6,1.4) {s}; \node[fill=white, circle, inner sep = 0pt] at (-2.9,0.1) {s}; \node[fill=white, circle, inner sep = 0pt] at (2,1) {s}; \node[fill=white, circle, inner sep = 0pt] at (0,-0.5) {l}; \node[fill=white, circle, inner sep = 0pt] at (1,4.8) {l}; \node[fill=white, circle, inner sep = 0pt] at (4,3.8) {s}; \node[fill=white, circle, inner sep = 0pt] at (-0.8,7) {s}; \node[fill=white, circle, inner sep = 0pt] at (8,5) {l}; \node[fill=white, circle, inner sep = 0pt] at (6,4.3) {s}; \node[fill=white, circle, inner sep = 0pt] at (12,3.1) {s}; \node[fill=white, circle, inner sep = 0pt] at (5.5,-1.5) {partial}; \node[fill=white, circle, 
inner sep = 0pt] at (11,1.1) {full}; \end{tikzpicture} \caption{Part of a tiling without two triangles sharing an edge. The short and long parts of each tight stretch (dashed) are marked by s and l, respectively. Loose stretches are marked thick and red. One partial and one full boundary edge are marked.} \end{figure} Summing up the sizes of the tight and loose stretches, we obtain $$3t-e_{\rm full}=3\sigma_{\rm tight}+L_{\rm loose},$$ whence \begin{equation}\label{6} t=\sigma_{\rm tight}+{L_{\rm loose}\over 3} + {e_{\rm full}\over 3}. \end{equation} A vertex is called {\em subdividing} if it is the interior point of a side. Let $v^*$ denote the number of subdividing vertices. A subdividing vertex is an interior point of a single side and, hence, of a single stretch. As in the proof of Theorem~6, the number of subdividing vertices in the interior of a proper stretch of size $s$ is exactly $s-2$. The number of subdividing vertices of an improper stretch of size $s$ is at least $s-1$. Thus, any loose stretch of size $s$ has at least $s/2$ subdividing vertices. Hence, the total number of subdividing vertices satisfies \begin{equation}\label{7} v^*\ge \sigma_{\rm tight}+{L_{\rm loose}\over 2}. \end{equation} The sum of the number of edges on all faces of $G$ is equal to $2e(G)$. The $t$ triangles have $3t+v^*$ edges in total, while the infinite face has $e_{\rm full}+e_{\rm part}$ edges (namely, the boundary edges). Hence, \begin{equation}\label{8} 2e(G)=3t+v^*+e_{\rm full}+e_{\rm part}. \end{equation} Comparing this with (\ref{3}) and using $v^*\le v(G)$, we get $2v^*+2t-2\le 2e(G)=3t+v^*+e_{\rm full}+e_{\rm part},$ so that $$v^*\le t+e_{\rm full}+e_{\rm part}+2.$$ If we plug into this inequality the estimates~(\ref{6}) and~(\ref{7}), we conclude that \[L_{\rm loose}\le8e_{\rm full}+6e_{\rm part}+12.\] Taking~(\ref{4}) and~(\ref{5}) into account, this implies \begin{equation}\label{9} L_{\rm loose}=O(r). 
\end{equation} Since the number of sides on loose stretches as well as the number of sides that are full boundary edges is $O(r)$, we should concentrate on the tight stretches containing most of the $3t=\Omega(r^2)$ sides. We call the longest side on any tight stretch {\em long} and the other two sides on the same tight stretch {\em short}. The sides not belonging to any tight stretch are neither long nor short. \smallskip Along each tight stretch, the length of the long side is equal to the sum of the lengths of the two short sides. Therefore, the total length of all short sides is the same as the total length of all long sides. Define the quantity $W$ as \begin{equation*} \begin{aligned} W=&\left({\frac23}-\epsilon''\right)(\# \textrm{long sides})-{\frac13}(\# \textrm{short sides})\\ &+(\textrm{total length of short sides})-(\textrm{total length of long sides}). \end{aligned} \end{equation*} Since each tight stretch has one long and two short sides, we have $\# \textrm{long sides}=\sigma_{\rm tight}$ and $\# \textrm{short sides}=2\sigma_{\rm tight}$. The last two terms of $W$ cancel out. Thus, \begin{equation}\label{10} W=-\epsilon''\sigma_{\rm tight}. \end{equation} On the other hand, we can compute $W$ by adding up the contributions of the triangles $T\in {\mathcal T}$. We classify the triangles in $\mathcal T$ as follows. If a triangle has a side that is neither short nor long, we call it {\em exceptional}. If $T$ is not exceptional and it has $i$ long sides, we say that $T$ is of {\em type} $i$, for $i=0,1,2,3$. If $T$ is of \smallskip \noindent{\bf type 0}, its contribution to $W$ is $0$: the total length of its short sides is the perimeter of $T$, that is, $1$, which is cancelled by the term $\frac13(\# \textrm{short sides of }T)=1$. \noindent{\bf type 1}, its contribution to $W$ is the total length of its two short sides minus the length of its long side minus $\epsilon''$, which is nonnegative, according to the definition of $\epsilon''$. 
\noindent{\bf type 2}, its contribution to $W$ is $2(\textrm{length of its short side})-2\epsilon''\ge2\epsilon'-2\epsilon''\ge0$. \noindent{\bf type 3}, its contribution to $W$ is equal to $1-3\epsilon''>0$. \smallskip The only triangles whose contribution to $W$ can be negative are the exceptional ones. The contribution of each exceptional triangle is at least $-\frac23$. The number of these triangles in $\mathcal T$ is at most $L_{\rm loose}+e_{\rm full}$, so by (\ref{4}) and (\ref{9}) we obtain $-W=O(r)$. Comparing this bound to (\ref{10}), we have \[\sigma_{\rm tight}=O(r).\] Plugging (\ref{4}), (\ref{9}), and this bound into~(\ref{6}), we conclude that $t=O(r)$, which contradicts~(\ref{1}). The contradiction proves the theorem. $\Box$ \section{Triangles with similar sides---Proof of Theorem 5} Theorem 5 is an easy corollary of the following result. \medskip \noindent{\bf Theorem 7.} {\em In any locally finite tiling of the plane with triangles, there is a triangle with a side that is the union of one or more sides of other triangles.} \medskip \noindent{\bf Proof.} Assume we have a tiling $\mathcal T$ contradicting the statement of the theorem, and let $pst$ be any triangle in $\mathcal T$. As $pt$ cannot be obtained as the union of sides of other triangles, $p$ or $t$ must be an interior point of a side of another triangle along the line $pt$. Similar statements are true for the other two sides of the triangle $pst$, but no vertex can be an interior point of more than one side in the tiling. Thus, there are three triangles along the lines $pt$, $ts$, and $sp$, that contain $p$, $t$, and $s$, respectively, in this {\em cyclic} order. Similar statements hold for all triangles in the tiling. \smallskip We have seen that there is a unique triangle $T\in \mathcal T$ that has $p$ as an interior point of one of its sides. Assume without loss of generality that this side of $T$ belongs to the line $\ell$ containing $pt$. 
Notice that there is precisely one other triangle in $\mathcal T$, different from $pst$, whose vertex is $p$, and this triangle must also have a side contained in $\ell$. (Indeed, if there were a triangle in $\mathcal T$ with $p$ as a vertex that does not have a side contained in $\ell$, it would not satisfy the cyclic property described in the previous paragraph.) In the same way, for any other vertex $p'$ of the tiling, there are precisely two triangles with this vertex, and there is a line that contains one side of each. Let $pqr$ and $pst$ be the two triangles containing $p$ as a vertex, where $p$, $r$, and $s$ are collinear; see Figure~\ref{fig5v2}. Since $p$ is an interior point of a side along $\ell$, we conclude that $q$ must be an interior point of a side along the line $qr$ and $t$ must be an interior point of a side along the line $st$. Let $quv$ be the other triangle with vertex $q$ along $\ell$, where $v$ lies on the line $qr$. As $q$ is not an interior point of a side along the line $\ell$, $u$ must be an interior point of a side along $\ell$: either of the side $qp$ or of $pt$ (as the line $\ell$ is ``blocked'' by another triangle at $t$). Finally, consider the other triangle of the tiling that has $u$ as a vertex. It must have another vertex $w$ along $\ell$ (in fact, in the segment $ut$, because $\ell$ is ``blocked'' at $t$) and a third vertex $z$ on the ray emanating from $u$ toward $v$ (see Figure~\ref{fig5v2}). Now $u$ is an interior point of a side along $\ell$, so $w$ must be an interior point of a side along the line $wz$. This is possible only if $w=t$ and $z$ lies on the line $st$, because this is the first line ``blocking'' $\ell$. But the ray from $u$ toward $v$ (containing $z$) does not intersect the line $st$ (which is also supposed to contain $z$). This contradiction proves the theorem. 
$\Box$ \begin{figure}\centering \begin{tikzpicture}[scale=1] \draw[thick,dotted] (-4,0) -- (2.5,0); \draw[thick] (-2,0) -- (1,0)-- (-4,2)--cycle; \draw[thick] (-3.5,1.5) -- (-3.5,0)-- (-2,0); \draw[thick,dotted] (-4,2) -- (2.25,-0.5); \draw[thick] (-3.5,0) -- (-3.5,-2)-- (-1,0); \draw[thick,dotted] (-1,0) -- (-4.75,-3); \draw[thick] (-3.5,-2) -- (-4,-2.4)-- (0.5,0); \node[fill=white, circle, inner sep = -0pt, minimum size=0pt] at (2.5,0) {$\ell$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (-2,0) {$p$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (-4,2) {$s$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (1,0) {$t$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (-3.5,1.5) {$r$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (-3.5,0) {$q$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (-3.5,-2) {$v$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (-1,0) {$u$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (0.5,0) {$w$}; \node[fill=Grey!30, circle, inner sep = -0pt, minimum size=0pt] at (-4,-2.4) {$z$}; \end{tikzpicture} \caption{Illustration for the proof of Theorem 7.}\label{fig5v2} \end{figure} \medskip Two triangles of a locally finite triangular tiling $\mathcal T$ are called {\em neighbors} if they share a boundary segment. It follows from the proof of the last theorem that for any triangle $T\in \mathcal T$, either $T$ itself, or one of its neighbors, second neighbors, or third neighbors has a side that is the union of one or more sides of other triangles.
\section{Introduction} The Hodge theory of the complement of projective hypersurfaces have received a lot of attention, see for instance Griffiths \cite{G} in the smooth case, Dimca-Saito \cite{DS} and Sernesi \cite{Se} in the singular case. In this paper we consider the case of plane curves and continue the study initiated by Dimca-Sticlaru \cite{DSt} in the nodal case and the author \cite{A} in the case of plane curves with ordinary singularities of multiplicity up to $3$. In the second section we compute the Hodge-Deligne polynomial of a plane curve $C$, the irreducible case in Proposition \ref{propirred} and the reducible case in Proposition \ref{propred}. Using this we determine the Hodge-Deligne polynomial of $U=\mathbb{P}^2 \setminus C$ and then we deduce in Theorem \ref{mhnU} the dimensions of the graded quotients of $H^2(U)$ with respect to the Hodge filtration. In section three we consider the case of arrangements of curves having ordinary singularities and intersecting transversely at smooth points and obtain a formula in Theorem \ref{mhnU2} generalizing the formulas obtained in \cite{DSt} and in \cite{A} (for this type of curves). In fact, the results in \cite{A} show that this formula holds in the more general case of plane curves with ordinary singularities of multiplicity up to $3$ (without assuming transverse intersection). In the forth section we show that the case of plane curves with ordinary singularities of multiplicity up to $4$ (without assuming transverse intersection) is definitely more complicated and the formula in Theorem \ref{mhnU2} has to be replaced by the formula in Theorem \ref{mhnUformult4} containing a correction term coming from triple points on one component through which another component of $C$ passes. 
In the final section we give some applications, we hope of general interest, expressing the difference between the Hodge filtration and the pole order filtration on $H^2(U,\mathbb{C})$ in terms of numerical invariants easy to compute in given situations, see Theorem \ref{F=P} and its corollaries. One example involving a free divisor concludes this note. \section{Hodge Theory of plane curve complements} For the general theory of mixed Hodge structures we refer to \cite{De} and \cite{Voisin}. Recall the definition of the Hodge-Deligne polynomial of a quasi-projective complex variety $X$ $$P(X)(u,v)=\sum_{p,q} E^{p,q}(X)u^pv^q$$ where $E^{p,q}(X)=\sum_s (-1)^sh^{p,q}(H_c^s(X))$, with $h^{p,q}(H^s_c(X)) =\dim Gr_F^p Gr_{p+q}^WH^s_c(X,\mathbb{C})$, the mixed Hodge numbers of $H_c^s(X)$. This polynomial is additive with respect to constructible partitions, i.e. $P(X)=P(X\setminus Y)+P(Y)$ for a closed subvariety $Y$ of $X$. In this section we determine $P(C)$ for a (reduced) plane curve $C$. Suppose first that the curve $C$ is irreducible, of degree $N$. Denote by $a_k$, $k=1, ... ,p$ the singular points of $C$, and let $r(C,a_k)$ be the number of irreducible branches of the germ $(C,a_k)$. Let $\nu: \tilde{C} \rightarrow C$ be the normalization mapping. Using the normalization map $\nu$ and the additivity of the Hodge-Deligne polynomial, it follows that, $$ P(C) =P(C\backslash(C)_{sing})+P((C)_{sing})=P(\tilde{C}\backslash (\cup_k \nu^{-1}(a_k))+p=$$ $$=P(\tilde{C})-\sum_k P( \nu^{-1}(a_k))+p = uv-gu-gv+1-\sum_k (r(C,a_k)-1). $$ Indeed, it is known that for the smooth curve $\tilde C$, the genus $g=g(\tilde C)$ is exactly the Hodge number $h^{1,0}(\tilde C)=h^{0,1}(\tilde C)$. 
Moreover, it is known that one has the formula \begin{equation}\label{genus-delta} g=\frac{(N-1)(N-2)}{2} -\sum_k\delta(C,a_k), \end{equation} relating the genus, the degree and the local singularities of $C$, and the $\delta$-invariants can be computed using the formula \begin{equation}\label{Milnor} 2\delta(C,a_k)=\mu(C,a_k)+r(C,a_k)-1, \end{equation} where $\mu(C,a_k)$ is the Milnor number of the singularity $(C,a_k)$. For both formulas above, see Milnor, p. 85. This proves the following result. \begin{prop}\label{propirred} With the above notation and assumptions, we have the following for an irreducible plane curve $C \subset \mathbb{P}^2$. \begin{enumerate}[(i)] \item The Hodge-Deligne polynomial of $C$ is given by $$P(C)(u,v)=uv-gu-gv+1-\sum_k (r(C,a_k)-1), $$ with $g$ given by the formula \eqref{genus-delta}. \item $H^0(C)=\mathbb{C}$ is pure of type $(0,0)$. \item $H^2(C)=\mathbb{C}$ is pure of type $(1,1)$. \item The mixed Hodge numbers of the MHS on $H^1(C)$ are given by $$h^{0,0}(H^1(C))=\sum_k (r(C,a_k)-1), \ \ h^{1,0}(H^1(C))= h^{0,1}(H^1(C))= g.$$ In particular, one has the following formulas for the first Betti number of $C$. $$b_1(C)= \sum_k (r(C,a_k)-1) +2g= (N-1)(N-2)-\sum_k \mu(C,a_k).$$ \end{enumerate} \end{prop} Now we consider the case of a curve $C$ having several irreducible components. More precisely, let $C=\bigcup_{j=1,r}C_j$ be the decomposition of $C$ as a union of irreducible components $C_j$, let $\nu_j: \tilde{C}_j \rightarrow C_j$ be the normalization mappings and set $g_j=g(\tilde{C}_j)$. Suppose that the curve $C_j$ has degree $N_j$, denote by $a_k^j$ for $k=1,...,p_j$ be the singular points of $C_j$ and let $r(C_j,a_k^j)$ be the number of branches of the germ $(C_j,a_k^j)$. Then the formulas \eqref{genus-delta} and \eqref{Milnor} can be applied to each irreducible curve $C_j$, as well as Proposition \ref{propirred}. Let $A$ be the union of the singular sets of the curves $C_j$. 
Let $B$ be the set of points in $C$ sitting on at least two distinct components $C_i$ and $C_j$. For $b \in B$, let $n(b)$ be the number of irreducible components $C_j$ passing through $b$. By definition, $n(b) \geq 2$. Moreover, note that the sets $A$ and $B$ are not disjoint in general, and their union is precisely the singular set of $C$. Using the additivity of Hodge-Deligne polynomials we get $$ P(C)=P(C_1\cup\cdots\cup C_r) =\sum_{j=1}^r P(C_j)+(-1)^{l-1}\sum_{0\leq i_1<\cdots<i_l\leq r}P(C_{i_1}\cap\cdots \cap C_{i_l}).$$ The first sum is easy to determine using Proposition \ref{propirred}. $$\sum_{j=1}^r P(C_j) =ruv - \left(\sum_{j=1}^r g_j\right)u - \left(\sum_{j=1}^r g_j\right)v+r - \sum_{j,k}( (r(C_j,a_k^j)-1).$$ Consider now the alternated sum, where $l \geq 2$. The only points of $C$ that give a contribution to this sum are the points in $B$. Now, for a point $b \in B$, its contribution to the alternated sum is clearly given by $$c(b)=-{n(b) \choose 2}+ {n(b) \choose 3} -...+(-1)^{n(b)-1}{n(b) \choose n(b)} =-n(b)+1. $$ \begin{prop}\label{propred} With the above notation and assumptions, we have the following for a reducible plane curve $C=\bigcup_{j=1,r}C_j$. \begin{enumerate}[(i)] \item The Hodge-Deligne polynomial of $C$ is given by $$P(C)(u,v)=ruv - \left(\sum_{j=1}^r g_j\right)u - \left(\sum_{j=1}^r g_j\right)v+r - \sum_{j,k}( (r(C_j,a_k^j)-1)- \sum_{b\in B}(n(b)-1). $$ with $g_j$ given by the formula \eqref{genus-delta}. \item $H^0(C)=\mathbb{C}$ is pure of type $(0,0)$. \item $H^2(C)=\mathbb{C}^r$ is pure of type $(1,1)$. \item The mixed Hodge numbers of the MHS on $H^1(C)$ are given by $$h^{0,0}(H^1(C))= \sum_{j,k}( (r(C_j,a_k^j)-1)+ \sum_{b\in B}(n(b)-1) -r+1, $$ $$ h^{1,0}(H^1(C))= h^{0,1}(H^1(C))= \sum_jg_j.$$ In particular, one has the following formula for the first Betti number of $C$. 
$$b_1(C)= \sum_{j,k} (r(C_j,a_k^j)-1)+ \sum_{b\in B}(n(b)-1) -r+1 +2 \sum_jg_j.$$ \end{enumerate} \end{prop} Note that a point in the intersection $A \cap B$ will give a contribution to the last two sums in the above formula for $P(C)$. \begin{ex}\label{exnodes} Suppose $C$ is a nodal curve. Then for each singularity $a_k^j\in A$ one has $a_k^j \notin B$ (otherwise we get worse singularities than nodes) and $r(C_j,a_k^j)=2$. Moreover, each point $b \in B$ satisfies $n(b)=2$. It follows that in this case we get $$P(C)(u,v)=ruv - \left(\sum_{j=1}^r g_j\right)u - \left(\sum_{j=1}^r g_j\right)v+r -n_2, $$ with $n_2$ the number of nodes of $C$. More precisely, in this case we have $n_2=n_2'+n_2''$, where $n_2'$ (resp. $n_2''$) is the number of nodes of $C$ in $A$ (resp. in $B$) and one clearly has $$n_2'=S_1:=\sum_{j,k} (r(C_j,a_k^j)-1), \ \ n_2''=S_2:=\sum_{b\in B}(n(b)-1).$$ \end{ex} \begin{ex}\label{exnodes+triple} Suppose $C$ has only nodes and ordinary triple points as singularities. Then let $n_3$ be the number of triple points and note that we can write as above $n_3=n_3'+n_3''$, where $n_3'$ (resp. $n_3''$) is the number of triple points of $C$ in $A_0=A\setminus B$ (resp. in $B$). For a point $a \in A_0$, the contribution to the sum $S_1$ is $2$, while the contribution to the sum $S_2$ is $0$. A point $b \in B$ can be of two types. The first type, corresponding to the partition $3=1+1+1$, is when $b$ is the intersection of three components $C_j$, all smooth at $b$. The contribution of such a point $b$ is $0$ to the sum $S_1$ and $2$ to the sum $S_2$. The second type, corresponding to the partition $3=2+1$, is when $b$ is the intersection of two components, say $C_i$ and $C_j$, such that $C_i$ has a node at $b$, and $C_j$ is smooth at $b$. The contribution of such a point $b$ is $1$ to the sum $S_1$ and $1$ to the sum $S_2$. It follows that the contribution of any triple point to the sum $S_1+S_2$ is equal to $2$. 
Since the double points in $C$ can be treated exactly as in Example \ref{exnodes}, this yields the following. $$P(C)(u,v)=ruv - \left(\sum_{j=1}^r g_j\right)u - \left(\sum_{j=1}^r g_j\right)v+r -n_2 -2n_3. $$ When all the triple points in $B$ are of the first type, we obviously have the following additional relations $$S_1=n_2'+2n_3', \ \ S_2=n_2''+2n_3''.$$ \end{ex} \begin{ex}\label{exnodes+triple+4fold} Suppose $C$ has only ordinary points of multiplicity 2, 3 and 4 as singularities. Then let $n_4$ be the number of points of multiplicity 4 and note that we can write as above $n_4=n_4'+n_4''$, where $n_4'$ (resp. $n_4''$) is the number of points of multiplicity 4 of $C$ in $A_0=A\setminus B$ (resp. in $B$). For a point $a \in A_0$ of multiplicity $4$, the contribution to the sum $S_1$ is $3$, while the contribution to the sum $S_2$ is $0$. A point $b \in B$ can be of 4 types. The first type, corresponding to the partition $4=1+1+1+1$, is when $b$ is the intersection of 4 components $C_j$, all smooth at $b$. The contribution of such a point $b$ is $0$ to the sum $S_1$ and $3$ to the sum $S_2$. The second type, corresponding to the partition $4=2+1+1$, is when $b$ is the intersection of 3 components, say $C_i$, $C_j$ and $C_k$, such that $C_i$ has a node at $b$, and $C_j$ and $C_k$ are smooth at $b$. The contribution of such a point $b$ is $1$ to the sum $S_1$ and $2$ to the sum $S_2$. The third type, corresponding to the partition $4=2+2$, is when $b$ is the intersection of 2 components, say $C_i$ and $C_k$, such that $C_i$ and $C_k$ each have a node at $b$. The contribution of such a point $b$ is $2$ to the sum $S_1$ and $1$ to the sum $S_2$. The fourth type, corresponding to the partition $4=3+1$, is when $b$ is the intersection of 2 components, say $C_i$ and $C_k$, such that $C_i$ has a triple point at $b$, and $C_k$ is smooth at $b$. The contribution of such a point $b$ is $2$ to the sum $S_1$ and $1$ to the sum $S_2$. 
It follows that the contribution of any point of multiplicity 4 to the sum $S_1+S_2$ is equal to $3$. Since the double and triple points in $C$ can be treated exactly as in Example \ref{exnodes+triple}, this yields the following. $$P(C)(u,v)=ruv - \left(\sum_{j=1}^r g_j\right)u - \left(\sum_{j=1}^r g_j\right)v+r -n_2 -2n_3-3n_4. $$ When all the points of multiplicity 4 in $B$ are of the first type, we obviously have the following additional relations $$S_1=n_2'+2n_3'+3n_4' , \ \ S_2=n_2''+2n_3''+3n_4''.$$ \end{ex} Let us now look at the cohomology of the smooth surface $U=\mathbb{P}^2 \setminus C$. By additivity we get $P(U)=P(\mathbb{P}^2)-P(C)$, where $P(\mathbb{P}^2)=u^2v^2+uv+1$. This yields the following consequence. \begin{cor}\label{PU} $$P(U)(u,v)=u^2v^2-(r-1)uv + \left(\sum_{j=1}^r g_j\right)u + \left(\sum_{j=1}^r g_j\right)v-(r-1) + $$ $$+\sum_{j,k} (r(C_j,a_k^j)-1)+ \sum_{b\in B}(n(b)-1).$$ \end{cor} The contribution of $H^4_c(U,\mathbb{C})$ to $P(U)$ is the term $u^2v^2$, and that of $H^3_c(U,\mathbb{C})$ is the term $-(r-1)uv$. Moreover, the dimension $\dim Gr^1_FH^2(U,\mathbb{C})$ is the number of independent classes of type $(1,2)$, which correspond to classes of type $(1,0)$ in $H^2_c(U)$, and hence to the terms in $u$ in $P(U)$. For both statements see the proof of Theorem 2.1 in \cite{A}. This proves the following result. \begin{thm} \label{mhnU} $$\dim Gr^1_FH^2(U,\mathbb{C})=\sum_{j=1}^r g_j $$ and $$\dim Gr^2_FH^2(U,\mathbb{C})=\sum_{j=1}^r g_j + \sum_{j,k} (r(C_j,a_k^j)-1)+ \sum_{b\in B}(n(b)-1) -r+1. $$ In particular, all the components $C_j$ of the curve $C$ are rational if and only if $H^2(U)$ is pure of type $(2,2)$. \end{thm} \begin{ex} \label{exnodes+triple+4fold2} Suppose $C$ has only ordinary points of multiplicity 2, 3 and 4 as singularities. 
Then let $n_k$ be the number of points of multiplicity $k$, for $k=2,3,4$. Then using Example \ref{exnodes+triple+4fold}, we get the formula $$\dim Gr^2_FH^2(U,\mathbb{C})=\sum_{j=1}^r g_j -r+1+ n_2+2n_3+3n_4. $$ \end{ex} \section{Arrangements of transversely intersecting curves} Recall that $C=\bigcup_{j=1,r}C_j$ is the decomposition of $C$ as a union of irreducible components $C_j$, and the curve $C_j$ has degree $N_j$. In this section we assume that each curve $C_j$ has only ordinary multiple points as singularities and let $n_k(C_j)$ denote the number of ordinary points on $C_j$ of multiplicity $k$. We also assume that the intersection of any two distinct components $C_i$ and $C_j$ is transverse, i.e. the points in $C_i \cap C_j$ are nodes of the curve $C_i \cup C_j$. This implies in particular that $A \cap B=\emptyset$. The formulas \eqref{genus-delta} and \eqref{Milnor} yield the equality \begin{equation}\label{genus-delta2} g_j=\frac{(N_j-1)(N_j-2)}{2} -\frac{1}{2} \sum_k\left( \mu(C_j,a_k^j)+r(C_j,a_k^j)-1\right). \end{equation} Using this, Theorem \ref{mhnU} gives the formula $$ \dim Gr^2_FH^2(U,\mathbb{C})=\sum_{j=1}^r\frac{(N_j-1)(N_j-2)}{2} -\frac{1}{2} \sum_{j,k}\left( \mu(C_j,a_k^j)-r(C_j,a_k^j)+1\right) +$$ $$+\sum_{b\in B}(n(b)-1) -r+1. $$ If $a_k^j$ is an ordinary $m$-multiple point on the curve $C_j$, one has $\mu(C_j,a_k^j)=(m-1)^2$ and hence $$\mu(C_j,a_k^j)-r(C_j,a_k^j)+1=(m-1)(m-2).$$ If we denote by $n_m'$ (resp. $n_m''$) the number of $m$-multiple points of $C$ coming from just one component $C_j$ (resp. from the intersection of several components $C_j$), we see that we have $$\sum_{j,k}\left( \mu(C_j,a_k^j)-r(C_j,a_k^j)+1\right)= \sum_m (m-1)(m-2)n_m'.$$ This equality explains the contribution of the points in $A$. Now let $b \in B$ such that $n(b)=m$. The number of such points is precisely $n_m''$. It follows that $$\sum_{b\in B}(n(b)-1)=\sum_m(m-1)n_m''.$$ Let $1 \leq i<j\leq r$ and consider the intersection $C_i \cap C_j$. 
It contains exactly $N_iN_j$ points, since $C_i$ and $C_j$ intersect transversely. The sum $S=\sum_{1 \leq i<j\leq r}N_iN_j$ represents the number of all such intersection points. Note that a point $b\in B$ is counted in this sum exactly ${n(b) \choose 2}$ times. This yields the following formula $$2S=\sum_m m(m-1)n_m''.$$ These formulas give the following result. \begin{thm} \label{mhnU2} With the above assumptions and notation, one has $$\dim Gr^2_FH^2(U,\mathbb{C})=\frac{(N-1)(N-2)}{2} -\sum_m{m-1 \choose 2}n_m,$$ with $n_m=n_m'+n_m''$ the number of ordinary $m$-tuple points of $C$. \end{thm} The following consequence of Theorem \ref{mhnU} and Theorem \ref{mhnU2} applies in particular to any projective line arrangement. \begin{cor}\label{linearr} Assume that $C=\bigcup_{j=1,r}C_j$ is the decomposition of $C$ as a union of irreducible components $C_j$, with each curve $C_j$ having only ordinary multiple points as singularities and being rational, i.e. $g_j=0$. If the intersection of any two distinct components $C_i$ and $C_j$ is transverse, i.e. the points in $C_i \cap C_j$ are nodes of the curve $C_i \cup C_j$, then one has $$\dim H^2(U,\mathbb{C})=\frac{(N-1)(N-2)}{2} -\sum_m{m-1 \choose 2}n_m,$$ with $n_m$ the number of ordinary $m$-tuple points of $C$. \end{cor} \section{Curves with ordinary singularities of multiplicity $\leq 4$} Let $C\subset \mathbb{P}^2$ be a curve of degree $N$ having only ordinary singular points of multiplicity at most $4$. Set $U=\mathbb{P}^2\setminus C$, and let $C=\cup_{j=1}^rC_j$ be the decomposition of $C$ into irreducible components. Then, \begin{eqnarray*} P(C)&=&\sum_{j=1}^r P(C_j)-\sum_{1\leq i <j\leq r} P(C_i\cap C_j) + \sum_{1\leq i <j<k\leq r} P(C_i\cap C_j\cap C_k)\\ &-& \sum_{1\leq i <j<k<l\leq r} P(C_i\cap C_j\cap C_k \cap C_l). 
\end{eqnarray*} Let $a_m^j$ denote the number of singular points of multiplicity $m$ that belong to the component $C_j$ (note that a point can be singular on two components, being a node on each of them).\\ Denote by $b_3^k$ (respectively $b_4^k$) the number of triple points (respectively points of multiplicity 4) of $C$ that are intersection of exactly $k$ components, for $k=2,3$ (respectively $k=3,4$). Let $b_4^2$ (respectively $\tilde{b_4^2}$) be the number of singular points $p$ of multiplicity $4$ in $C$ representing the intersection of exactly 2 components, such that one of them has a triple point at $p$ (respectively each of them has a node at $p$). Then one has $$\sum_{1\leq i <j\leq r} P(C_i\cap C_j)=\sum_{1\leq i <j\leq r} N_iN_j - b^2_3-3\tilde{b_4^2}-2b_4^2-2b_4^3.$$ Indeed, a point of type $b_3^2$ (resp. $b_4^2$, resp. $\tilde{b_4^2}$) occurs only in one intersection $C_i \cap C_j$, and has multiplicity 2 (resp. 3, resp. 4) in this intersection. A point of type $b_4^3$ occurs in 3 intersections $C_i \cap C_j$ with multiplicities $1,2,2$, and this accounts for the correction term $-2b_4^3$. Then one has $$\sum_{1\leq i <j<k\leq r} P(C_i\cap C_j\cap C_k)= b^3_3+b_4^3+{4\choose 3}b_4^4,$$ and $$\sum_{1\leq i <j<k<l\leq r} P(C_i\cap C_j\cap C_k \cap C_l)=b^4_4.$$ Hence, by Proposition \ref{propirred}, we get the following. \begin{eqnarray*} P(C)&=&ruv-(\sum_{j=1}^r g_j) u-(\sum_{j=1}^r g_j) v+r-\sum_{j=1}^r (a_2^j+2a_3^j+3a^j_4)-\sum_{1\leq i<j\leq r} N_iN_j\\ &+&b_3^2+3\tilde{b^2_4}+2b^2_4+3b_4^3+b^3_3+3b^4_4. \end{eqnarray*} Therefore, as above, we obtain \begin{eqnarray*} P(U)&=&u^2v^2-(r-1)uv+1-r+(\sum_{j=1}^r g_j) u+(\sum_{j=1}^r g_j) v+\sum_{j=1}^r (a_2^j+3a_3^j+6a^j_4)\\ &-&\sum_{j=1}^r(a_3^j+3a_4^j)+\sum_{1\leq i<j\leq r} N_iN_j-b_3^2-3\tilde{b^2_4}-2b^2_4-3b_4^3-b^3_3-3b^4_4. 
\end{eqnarray*} Finally, we get \begin{eqnarray*} \dim Gr^2_F H^2(U)&=&\sum_{j=1}^r (g_j+a_2^j+3a_3^j+6a^j_4-1)+\sum_{1\leq i<j\leq r} N_iN_j+1-(\sum_{j=1}^r a_3^j+b_3^2+b^3_3) \\ &-&3(\sum_{j=1}^r a_4^j+\tilde{b^2_4}+b^2_4+b_4^3+b^4_4)+{b^2_4}\\ &=&\frac{(N-1)(N-2)}{2}-n_3-3n_4+{b_4^2}, \end{eqnarray*} with $n_m$ the number of ordinary $m$-tuple points of $C$. \begin{thm} \label{mhnUformult4} Let $C\subset \mathbb{P}^2$ be a curve of degree $N$ having only ordinary singular points of multiplicity at most $4$. If $U=\mathbb{P}^2\setminus C$, then one has $$\dim Gr^2_FH^2(U,\mathbb{C})=\frac{(N-1)(N-2)}{2} -\sum_{m=3,4}{m-1 \choose 2}n_m +b_4^2,$$ with $n_m$ the number of ordinary $m$-tuple points of $C$ and $b_4^2$ the number of singular points $p$ of $C$ which are smooth on one component $C_i$ of $C$ and have multiplicity $3$ on the other component $C_j$ of $C$ passing through $p$. \end{thm} \section{Pole order filtration versus Hodge filtration for plane curve complements} For any hypersurface $V$ in a projective space $\mathbb{P}^n$, the cohomology groups $H^*(U,\mathbb{C})$ of the complement $U =\mathbb{P}^n \setminus V$ have a pole order filtration $P^k$, see for instance \cite{DSt2}, and it is known by the work of P. Deligne, A. Dimca \cite{DD} and M. Saito \cite{MS} that one has $$F^kH^m(U,\mathbb{C}) \subset P^kH^m(U,\mathbb{C})$$ for any $k$ and any $m$. For $m=0$ and $m=1$, the above inclusions are in fact equalities (the case $m=0$ is obvious and the case $m=1$ follows from the equality $F^1H^1(U,\mathbb{C})=H^1(U,\mathbb{C})$). For $m=2$, we have again $F^kH^2(U,\mathbb{C}) = P^kH^2(U,\mathbb{C})$ for $k=0,1$ for obvious reasons, but one may get a strict inclusion $$F^2H^2(U,\mathbb{C}) \ne P^2H^2(U,\mathbb{C})$$ already in the case when $V=C$ is a plane curve, see \cite{DS}, Remark 2.5 or \cite{Dbk}. However, producing such examples of plane curves was until now rather complicated. 
We give below a numerical condition which tells us exactly when the above strict inclusion holds. We first need to recall some basic definitions. Let $S=\oplus_rS_r=\mathbb{C}[x,y,z]$ be the graded ring of polynomials with complex coefficients, where $S_r$ is the vector space of homogeneous polynomials of $S$ of degree $r$. For a homogeneous polynomial $f$ of degree $N$, define the Jacobian ideal of $f$ to be the ideal $J_f$ generated in $S$ by the partial derivatives $f_x,f_y,f_z$ of $f$ with respect to $x$, $y$ and $z$. The graded \textit{Milnor algebra} of $f$ is given by $$M(f)=\oplus_rM(f)_r=S/J_f.$$ Note that the dimensions $\dim M(f)_r$ can be easily computed in a given situation using computer software, e.g. Singular. Now we can state the main result of this section. \begin{thm} \label{F=P} Let $C:f=0$ be a reduced curve of degree $N$ in $\mathbb{P}^2$ having only weighted homogeneous singularities and let $C_i$ for $i=1,...,r$ be the irreducible components of $C$. If $U=\mathbb{P}^2\setminus C$, then $$\dim P^2H^2(U,\mathbb{C})-\dim F^2H^2(U,\mathbb{C}) =\tau(C) + \sum_{i=1,r}g_i-\dim M(f)_{2N-3},$$ where $\tau(C)$ is the global Tjurina number of $C$ (that is, the sum of the Tjurina numbers of all the singularities of $C$) and $g_i$ is the genus of the normalization of $C_i$ for $i=1,...,r$. \end{thm} In particular, we get the following result, which yields a new proof for Theorem 1.3 in \cite{DSt}. \begin{cor}\label{rational} If a reduced plane curve has only nodes as singularities, then one has $$\dim M(f)_{2N-3}=\tau(C) + \sum_{i=1,r}g_i.$$ \end{cor} \proof Indeed, it is known that for a nodal curve one has the equality $F^2H^2(U,\mathbb{C}) = P^2H^2(U,\mathbb{C})$, see \cite{De} or \cite{MS}. \endproof Note that we have the following obvious consequence of Theorem \ref{mhnU}. 
\begin{cor}\label{stabilization} For a reduced plane curve $C$ one has $$\dim P^2H^2(U,\mathbb{C})-\dim F^2H^2(U,\mathbb{C}) \leq \sum_{i=1,r}g_i.$$ \end{cor} \proof Indeed, Theorem \ref{mhnU} can be restated as $$\dim H^2(U,\mathbb{C})-\dim F^2H^2(U,\mathbb{C}) = \sum_{i=1,r}g_i,$$ in view of the equality $F^1H^2(U,\mathbb{C})=H^2(U,\mathbb{C})$, see \cite{Dbk}, proof of Corollary 1.32, page 185. \endproof \begin{rk}\label{rkrational} If a reduced plane curve $C$ has only rational irreducible components, i.e. $g_i=0$ for all $i$, then the above inequality implies $F^2H^2(U,\mathbb{C}) = P^2H^2(U,\mathbb{C})$. This result can be regarded as an improvement of a part of Remark 2.5 in \cite{DS}, where the result is claimed only for curves with nodes and cusps as singularities. \end{rk} The above discussion also implies the following result, which can be regarded as a generalization of Theorem 4.1 (A) in \cite{A}. \begin{cor}\label{rational2} If a reduced plane curve $C:f=0$ has only weighted homogeneous singularities, then one has $$0\leq \dim M(f)_{2N-3}-\tau(C) \leq \sum_{i=1,r}g_i.$$ In particular, if in addition the curve $C$ has only rational irreducible components, then one has $$\dim M(f)_{2N-3}=\tau(C) .$$ \end{cor} Now we give the proof of Theorem \ref{F=P}. Corollary 1.3 in \cite{DSt2} implies that $$\dim P^2H^2(U,\mathbb{C})= \dim H^2(U,\mathbb{C})+\tau(C)-\dim M(f)_{2N-3}.$$ On the other hand, Theorem \ref{mhnU} and the equality $F^1H^2(U,\mathbb{C})= H^2(U,\mathbb{C})$ yield $$\dim F^2H^2(U,\mathbb{C})= \dim H^2(U,\mathbb{C})-\sum_{i=1,r}g_i,$$ which clearly completes the proof of Theorem \ref{F=P}. \begin{ex}\label{free} In this example we present a free divisor $C:f=0$, whose irreducible components consist of 12 lines and one elliptic curve, and where $F^2H^2(U,\mathbb{C}) \ne P^2H^2(U,\mathbb{C})$. 
Let $f=xyz(x^3+y^3+z^3)[(x^3+y^3+z^3)^3-27x^3y^3z^3].$ If we consider the pencil of cubic curves $(x^3+y^3+z^3, xyz)$, then the curve $C$ contains all the singular fibers of this pencil, and this accounts for the 12 lines given by $$xyz[(x^3+y^3+z^3)^3-27x^3y^3z^3]=0,$$ and the elliptic curve (hence of genus 1) given by $x^3+y^3+z^3=0$. That $C$ is a free divisor follows from \cite{JV}, or from a direct computation using Singular which shows that $I=J_f$, where $I$ is the saturation of the Jacobian ideal $J_f$; see Remark 4.7 in \cite{DSe}. The direct computation by Singular also yields $\tau(C)=156$ and $\dim M(f)_{2N-3}=\dim M(f)_{27}=156$. Moreover, applying Corollary 1.5 in \cite{DSt3}, we see via a Singular computation that all singularities of the curve $C$ are weighted homogeneous. Alternatively, there are 12 nodes, 3 in each of the 4 singular fibers of the pencil (which are triangles), and the 9 base points of the pencil, each an ordinary point of multiplicity 5. Each of the 12 lines contains exactly 3 of these base points, and they are exactly the intersection of the elliptic curve with the line. This description implies that there are no other singularities, in accord with $$12+9 \times 16=156= \tau(C),$$ since a node has Tjurina number $1$ and an ordinary point of multiplicity $5$ has Tjurina number $16$. It follows from Theorem \ref{F=P} that $\dim P^2H^2(U,\mathbb{C})-\dim F^2H^2(U,\mathbb{C}) =1.$ Hence the presence of a single irrational component of $C$ leads to $F^2H^2(U,\mathbb{C}) \ne P^2H^2(U,\mathbb{C})$. \end{ex} \bigskip \small \textbf{Acknowledgment:} I gratefully acknowledge the support of the Lebanese National Council for Scientific Research, without which the present study could not have been completed.
https://arxiv.org/abs/1711.04909
An Optimal Convergence Rate for the Gaussian Regularized Shannon Sampling Series
We consider the reconstruction of a bandlimited function from its finite localized sample data. Truncating the classical Shannon sampling series results in an unsatisfactory convergence rate due to the slow decay of the sinc function. To overcome this drawback, a simple and highly effective method, called the Gaussian regularization of the Shannon series, was proposed in engineering and has received remarkable attention. It works by multiplying the sinc function in the Shannon series with a regularized Gaussian function. Recently, it was proved that the upper error bound of this method can achieve a convergence rate of the order $O(\frac{1}{\sqrt{n}}\exp(-\frac{\pi-\delta}{2}n))$, where $0<\delta<\pi$ is the bandwidth and $n$ the number of sample data. The convergence rate is by far the best convergence rate among all regularized methods for the Shannon sampling series. The main objective of this article is to present the theoretical justification and numerical verification that the convergence rate is optimal when $0<\delta<\pi/2$ by estimating the lower error bound of the truncated Gaussian regularized Shannon sampling series.
\section{Introduction} The classical Shannon sampling theorem \cite{Jerri, Kotelnikov33,Nyquist28, Shannon,Unser,Whittaker} states that any bandlimited function with bandwidth $\pi$ can be completely reconstructed by its infinite samples at integers. In practice, however, we can only sum over finite sample data ``near'' $t$ to approximate the function value at $t$. Truncating the classical Shannon sampling series \cite{Helms,Jagerman} results in a convergence rate of the order $O(\frac{1}{\sqrt{n}})$ due to the slow decay of the sinc function, where $n$ denotes the number of samples. Moreover, this convergence rate for the truncated Shannon series is optimal in the worst case scenario (see, e.g., Lemma 1.1, \cite{Micchelli}). A useful way to significantly improve the convergence rate is by oversampling. Led by this idea, three regularization methods \cite{Jagerman,Micchelli,Qian} for Shannon's sampling series were proposed to reconstruct a bandlimited function $f$ with bandwidth $0<\delta<\pi$ from its finite oversampling data $\{f(j):j=-n+1,-n+2,\dots,n\}$ with an exponentially decaying approximation error. They work by multiplying the sinc function in the Shannon series with a rapidly-decaying regularization function, namely a power of a sinc function \cite{Jagerman}, a spline function \cite{Micchelli}, or a Gaussian function \cite{LinZhang17,Qian,Qianthesis}. To be precise and to state the purpose of the paper, the {\it truncated Gaussian regularized Shannon sampling series} proposed by Wei \cite{Wei98} is defined as \begin{equation}\label{GR} (S_{n,r}f)(t):=\sum_{j=-n+1}^n f(j)\,{\rm sinc}\,(t-j)e^{-\frac{(t-j)^2}{2r^2}},\ t\in(0,1),\ f\in{\cal B}_\delta, \end{equation} where $r>0$, $n\ge2$, $0<\delta<\pi$, and $\,{\rm sinc}\,(x):=\sin(\pi x)/(\pi x)$, $x\in\mathbb R$. 
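As a quick illustration of (\ref{GR}), the following Python sketch (our own, not part of the paper; the test function, sample sizes and evaluation point are arbitrary choices) evaluates $S_{n,r}f$ for $f(t)=\sin(\delta t)/(\delta t)\in{\cal B}_\delta$ with the choice $r=\sqrt{(n-1)/(\pi-\delta)}$, and records the error at $t=1/2$ for increasing $n$.

```python
import math

def sinc(x):
    # sinc(x) = sin(pi x) / (pi x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def S_nr(f, t, n, r):
    # truncated Gaussian regularized Shannon series: samples at j = -n+1, ..., n
    return sum(f(j) * sinc(t - j) * math.exp(-(t - j) ** 2 / (2 * r * r))
               for j in range(-n + 1, n + 1))

delta = 1.0                                                    # bandwidth, 0 < delta < pi
f = lambda t: math.sin(delta * t) / (delta * t) if t else 1.0  # a function in B_delta
errors = []
for n in (5, 10, 20):
    r = math.sqrt((n - 1) / (math.pi - delta))  # the choice of variance analyzed below
    errors.append(abs(f(0.5) - S_nr(f, 0.5, n, r)))
# the errors decrease rapidly, roughly like exp(-(pi - delta) n / 2)
```

The observed decay is consistent with the exponential rate discussed in the sequel.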
The {\it Paley-Wiener space} ${\cal B}_\delta$ is defined as $$ {\cal B}_\delta:=\Big\{f\in L^2(\mathbb R)\cap C(\mathbb R): \,{\rm supp}\,\hat{f}\subseteq [-\delta,\delta]\Big\} $$ with the norm $\|f\|_{{\cal B}_\delta}:=\|f\|_{L^2(\mathbb R)}=\|\hat{f}\|_{L^2(\mathbb R)}$. Denote by $\,{\rm supp}\, h$ the support of a function $h$. In this note, for each $f\in L^1(\mathbb R)$, its {\it Fourier transform} \cite{Debnath15} takes the form: $$ \hat{f}(\xi):=\frac{1}{\sqrt{2\pi}} \int_{\mathbb R} f(x)e^{-ix\xi }dx,\ \xi\in\mathbb R. $$ We can extend the Fourier transform to $L^2(\mathbb R)$ by the standard approximation process. Given a bandlimited function with bandwidth $0<\delta<\pi$, it was proved in \cite{Qian} that the upper error bound of the truncated Gaussian regularized Shannon sampling series (\ref{GR}) achieves a convergence rate of the order $O(\sqrt{n}\exp(-\frac{\pi-\delta}2n))$ after optimizing over the variance $r$ of the Gaussian function as done in (\cite{Micchelli}, pp. 106-107). Recently, the paper \cite{LinZhang17} provided a better estimate for the second error term, namely $E_2f$ in (\ref{E1E2eq}) or Equation (2.5) in \cite{Qian}, and hence improved the convergence rate for (\ref{GR}) to the order \begin{equation}\label{rate} O\Big(\frac{1}{\sqrt{n}}\exp(-\frac{\pi-\delta}{2}n)\Big). \end{equation} The latter rate (\ref{rate}) is by far the best convergence rate among all regularized methods \cite{Jagerman,LinZhang17,Micchelli,Qian} for the Shannon sampling series. Due to its simplicity and high accuracy, the Gaussian regularized Shannon sampling series has been widely applied to scientific and engineering computations. We notice that more than a hundred such papers have appeared (see \url{http://www.math.msu.edu/~wei/pub-sec.html} for the list, and \cite{Wei98,Wei00,Wei000,Wei08,Zhao} for comments and discussion). 
Furthermore, many generalizations of Shannon's sampling theorem have been established (see, e.g., \cite{Aldroubi,Chen15,Chen1,Grochenig,Marks,SongSun07,SunZhou02,Vaidyanathan} and references therein). Numerical experiments in \cite{LinZhang17} indicated that the convergence rate (\ref{rate}) should be optimal. We mathematically certify this indication by estimating the lower error bound of (\ref{GR}). In addition, we are able to justify that the selection of the variance $r=\sqrt{\frac{n-1}{\pi-\delta}}$ in (\ref{GR}) is optimal. We emphasize that it is the first lower error bound estimation for (\ref{GR}) in the literature. More importantly, the lower bound is of exponential decay. For any $x>0$, we denote by $\lceil x\rceil$ the smallest integer greater than or equal to $x$. We are now ready to present our main results. \begin{theorem}\label{Theorem} Let $0<\delta<\frac{\pi}{2}$ and $0<\varepsilon<1$. If there exists $\frac{2}{\sqrt{\varepsilon(2+\varepsilon)}(\pi-\delta)}\le r\le \sqrt{\frac{n-1}{\pi-\delta}}$ such that $C_{r,\delta,\varepsilon}>0$, then the lower error bound for the truncated Gaussian regularized Shannon sampling series (\ref{GR}) is $$ \sup_{\|f\|_{{\cal B}_\delta}\le 1}\sup_{t\in(0,1)}\big|f(t)-(S_{n,r}f)(t)\big| \ge \frac{1}{\pi\sqrt{2\delta}} \Big[C_{r,\delta,\varepsilon} \frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{r^3}-\frac{2\sqrt{2} r^2 e^{-\frac{(n-1)^2}{2r^2}}}{n(n+\frac12) (n-1)\sqrt{\pi}}\Big], $$ where $n\ge \lceil \frac{4}{\varepsilon(2+\varepsilon)(\pi-\delta)}+1\rceil$ is sufficiently large and \begin{equation}\label{Crdelta} C_{r,\delta,\varepsilon}:=\sin\Big(\frac{\delta}{2}\Big)\Big[\frac{4}{(2+\varepsilon)(\pi+\delta) \delta}\Big(\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-2\pi\delta r^2}}{\pi+\delta}\Big) -\frac{2}{(2\pi-\delta)(\pi-\delta)^2} \Big]. \end{equation} \end{theorem} We should make some comments on Theorem \ref{Theorem} here. 
To begin with, one sees that $e^{-\frac{(\pi-\delta)^2r^2}{2}}$ is decreasing on $(0,\infty)$ with respect to the variable $r$. The condition $0<r\le \sqrt{\frac{n-1}{\pi-\delta}}$ guarantees $e^{-\frac{(\pi-\delta)^2r^2}{2}}\ge e^{-\frac{(n-1)^2}{2r^2}}$. Second, the parameter $0<\varepsilon<1$ can be arbitrarily small and will be formally introduced in (\ref{Millseq2}). Third, an easy computation shows that $\sqrt{\frac{n-1}{\pi-\delta}}\ge \frac{2}{\sqrt{\varepsilon(2+\varepsilon)}(\pi-\delta)}$ implies $n\ge \frac{4}{\varepsilon(2+\varepsilon)(\pi-\delta)}+1$. Finally, $0<\delta<\frac{\pi}{2}$ in Theorem \ref{Theorem} is a necessary condition such that $C_{r,\delta,\varepsilon}>0$. It is straightforward to see that $$ 0<\frac{4}{(2+\varepsilon)(\pi+\delta) \delta}\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{2}{(2\pi-\delta)(\pi-\delta)^2}<\frac{1}{(\pi+\delta) \delta}\frac{2}{\pi-\delta}-\frac{2}{(2\pi-\delta)(\pi-\delta)^2}, $$ namely, $\frac{1}{(\pi+\delta)\delta}>\frac{1}{(2\pi-\delta)(\pi-\delta)}$, which implies $\delta<\frac{\pi}{2}$. It was proved in \cite{LinZhang17,Micchelli} that the best convergence rate (\ref{rate}) for the upper error bound of (\ref{GR}) can be obtained when $r=\sqrt{\frac{n-1}{\pi-\delta}}$. To facilitate a comparison with (\ref{rate}), we obtain a lower error bound of the order $O\big(\frac{1}{n\sqrt{n}}\exp(-\frac{\pi-\delta}{2}n)\big)$. \begin{corollary}\label{Corollary} Let $r:=\sqrt{\frac{n-1}{\pi-\delta}}$ in Theorem \ref{Theorem} and $0<\varepsilon<1$. 
If $C_{r,\delta,\varepsilon}>0$, then \begin{equation}\label{remarkeq1} \sup_{\|f\|_{{\cal B}_\delta}\le 1}\sup_{t\in(0,1)}\big|f(t)-(S_{n,r}f)(t)\big|\ge \Big[C_{r,\delta,\varepsilon}(\pi-\delta)\sqrt{\pi-\delta}-\frac{2\sqrt{2}}{(\pi-\delta)\sqrt{\pi}} \frac{(n-1)\sqrt{n-1}}{n(n+\frac12)}\Big] \frac{e^{-\frac{(\pi-\delta)(n-1)}{2}}}{\pi\sqrt{2\delta}(n-1)\sqrt{n-1}}, \end{equation} where \begin{equation}\label{n} n\ge \max\Big\{2, \Big\lceil \frac{4}{\varepsilon(2+\varepsilon)(\pi-\delta)}+1\Big\rceil, \Big\lceil \frac{8}{\pi C_{r,\delta,\varepsilon}^2(\pi-\delta)^5}-1 \Big\rceil \Big\}. \end{equation} \end{corollary} Some remarks should be made on Corollary \ref{Corollary}. First of all, we have $\frac{(n-1)\sqrt{n-1}}{n(n+\frac12)}<\frac{1}{\sqrt{n+1}}$ for all $n\ge2$. As a result, the condition $n\ge \frac{8}{\pi C_{r,\delta,\varepsilon}^2(\pi-\delta)^5}-1$ guarantees $$ C_{r,\delta,\varepsilon}(\pi-\delta)\sqrt{\pi-\delta}-\frac{2\sqrt{2}}{(\pi-\delta)\sqrt{\pi}} \frac{(n-1)\sqrt{n-1}}{n(n+\frac12)}>C_{r,\delta,\varepsilon}(\pi-\delta)\sqrt{\pi-\delta}-\frac{2\sqrt{2}}{(\pi-\delta)\sqrt{\pi}}\frac{1}{\sqrt{n+1}}\ge0. $$ Secondly, note that $\sqrt{\frac{n-1}{\pi-\delta}}\ge \frac{1}{\sqrt{\pi-\delta}}$ for all $n\ge2$. Clearly, a {\it sufficient condition} for $\delta$ such that $C_{r,\delta,\varepsilon}>0$ is $$ \frac{4}{(2+\varepsilon)(\pi+\delta) \delta}\Big(\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-\frac{2\pi\delta}{\pi-\delta}}}{\pi+\delta}\Big)-\frac{2}{(2\pi-\delta)(\pi-\delta)^2}>0. $$ Numerical results show that $\frac{1}{200}\pi\le \delta\le \frac{49}{100}\pi$ is a sufficient condition for $C_{r,\delta,\varepsilon}>0$ when $\varepsilon=1/20$, where $C_{r,\delta,\varepsilon}$ is defined by (\ref{Crdelta}). Finally, we should mention that $C_{r,\delta,\varepsilon}<0$ when the bandwidth $\delta$ is sufficiently close to $0$ or to $\frac{\pi}{2}$. Now, we are in a position to present the upper error bound for (\ref{GR}). 
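The positivity claim for $C_{r,\delta,\varepsilon}$ is easy to reproduce; the Python sketch below is our own verification (the grid resolution is an arbitrary choice). It implements (\ref{Crdelta}) with $r^2=1/(\pi-\delta)$, the smallest value allowed for $n\ge2$, and scans the interval $[\frac{1}{200}\pi,\frac{49}{100}\pi]$ for $\varepsilon=1/20$.

```python
import math

def C_const(r, delta, eps):
    # the constant C_{r, delta, eps} from the theorem above
    return math.sin(delta / 2) * (
        4 / ((2 + eps) * (math.pi + delta) * delta)
        * (2 / ((2 + eps) * (math.pi - delta))
           - math.exp(-2 * math.pi * delta * r * r) / (math.pi + delta))
        - 2 / ((2 * math.pi - delta) * (math.pi - delta) ** 2)
    )

eps = 1 / 20
# delta on a grid covering [pi/200, 49 pi/100], with r^2 = 1/(pi - delta)
grid = [math.pi * k / 1000 for k in range(5, 491)]
all_positive = all(C_const(1 / math.sqrt(math.pi - d), d, eps) > 0 for d in grid)
```

On this grid `all_positive` holds, matching the numerical claim above; near the endpoints the margin of positivity is quite small.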
It is worth pointing out that an additional error term was lost in \cite{Qian}. The error term $\frac{1}{\sqrt{2\pi}}\|\mathbb E_1\hat{f}\|_{L^1(\mathbb R)}$ instead of $\frac{1}{\sqrt{2\pi}}\|\widehat{E_1f}\|_{L^1(\mathbb R)}$ was actually estimated in (\cite{Qian}, p. 1173), where $\widehat{E_1f}=\mathbb E_1\hat{f}+\mathbb E_2\hat{f}$ (see (\ref{E1E2eq0}) and (\ref{E1bE1bE2})). Hence, the problem amounts to estimating the additional term $\frac{1}{\sqrt{2\pi}}\|\mathbb E_2\hat{f}\|_{L^1(\mathbb R)}$ (see (\ref{upperboundeq1})). We will see that the missing term $\mathbb E_2 \hat{f}$ has the same convergence rate as $\mathbb E_1 \hat{f}$ (see (\ref{upperboundeq1}) and (\ref{upperboundeq2})). As a result, the new convergence rate for the first error term $E_1f$ in (\ref{E1E2eq}) remains the same as that of $\mathbb E_1\hat{f}$. More precisely, we obtain the following upper error bound estimate. \begin{theorem}\label{Theorem2} Let $0<\delta<\pi$, $r>0$ and $n\ge2$. Then the upper error bound for the truncated Gaussian regularized Shannon sampling series (\ref{GR}) is $$ \sup_{\|f\|_{{\cal B}_\delta}\le 1}\sup_{t\in(0,1)}\big|f(t)-(S_{n,r}f)(t)\big|\le \Big[\frac{1+\big(1+\frac{1}{2\pi(3\pi-\delta)r^2}\big)e^{-2\pi(2\pi-\delta)r^2}}{\sqrt{2(\pi-\delta)}r}+\sqrt{2\delta}\Big]\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{\pi (\pi-\delta)r} +\frac{r e^{-\frac{(n-1)^2}{2r^2}}}{\pi n\sqrt{n-1}}. $$ \end{theorem} We should again discuss the special case: $r:=\sqrt{\frac{n-1}{\pi-\delta}}$. In this case, note that $r^2=\frac{n-1}{\pi-\delta}\ge \frac{1}{\pi-\delta}$ for all $n\ge2$ and $$ 1+\Big(1+\frac{1}{2\pi(3\pi-\delta)r^2}\Big)e^{-2\pi(2\pi-\delta)r^2}\le 1+\Big(1+\frac{\pi-\delta}{2\pi(3\pi-\delta)}\Big)e^{-\frac{2\pi(2\pi-\delta)}{\pi-\delta}}\le 1+(1+\frac{1}{6\pi})e^{-4\pi}. $$ \begin{corollary} Let $r=\sqrt{\frac{n-1}{\pi-\delta}}$ in Theorem \ref{Theorem2}. 
Then \begin{equation}\label{remarkeq2} \sup_{\|f\|_{{\cal B}_\delta}\le 1}\sup_{t\in(0,1)}\big|f(t)-(S_{n,r}f)(t)\big|\le\Big(\sqrt{2\delta}+\frac{\sqrt{n-1}}{n}+\frac{1+(1+\frac{1}{6\pi})e^{-4\pi}}{\sqrt{2(n-1)}}\Big)\frac{e^{-\frac{(\pi-\delta)(n-1)}{2}}}{\pi\sqrt{(\pi-\delta)(n-1)}}. \end{equation} \end{corollary} By Theorems \ref{Theorem} and \ref{Theorem2}, one sees that the convergence rate of the order $O\big(\frac{1}{r}\exp(-\frac{(\pi-\delta)^2r^2}{2})\big)$ for the truncated Gaussian regularized Shannon sampling series is {\it optimal} when $\frac{2}{\sqrt{\varepsilon(2+\varepsilon)}(\pi-\delta)}\le r\le \sqrt{\frac{n-1}{\pi-\delta}}$ and $C_{r,\delta,\varepsilon}>0$. Recall that $e^{-\frac{(\pi-\delta)^2r^2}{2}}$ is decreasing in $r$ on $(0,\infty)$. Thus, when $r= \sqrt{\frac{n-1}{\pi-\delta}}$ and $C_{r,\delta,\varepsilon}>0$, by (\ref{remarkeq1}) and (\ref{remarkeq2}), we conclude that the convergence rate (\ref{rate}) is optimal. In other words, it turns out that the selection $r=\sqrt{\frac{n-1}{\pi-\delta}}$ is optimal in Theorem \ref{Theorem2} among all $\frac{2}{\sqrt{\varepsilon(2+\varepsilon)}(\pi-\delta)}\le r\le \sqrt{\frac{n-1}{\pi-\delta}}$, where $0<\varepsilon<1$ and $n$ is given by (\ref{n}). The paper is organized as follows. We shall exploit Fourier analysis techniques to prove Theorems \ref{Theorem} and \ref{Theorem2} in the following section. In the last section, numerical experiments are presented to demonstrate our theoretical results. Specifically, the lower error bound, the reconstruction error, and the upper error bound of the truncated Gaussian regularized Shannon sampling series (\ref{GR}) will be directly compared. \section{Proof of Theorems \ref{Theorem} and \ref{Theorem2}} This section is devoted to proving Theorems \ref{Theorem} and \ref{Theorem2}. Assume $0<\delta<\pi$, $0<\varepsilon<1$, and $r>0$. 
For any subset $E$ of $\mathbb R$, denote by ${\bf 1}_{E}$ the characteristic function of $E$, namely, ${\bf 1}_{E}(x)=1$ if $x\in E$ and $0$ otherwise. Denote by $\mathbb Z$ the set of all integers and $\mathbb N$ the set of all positive integers. We first deal with Theorem \ref{Theorem}. To do so, we divide the error $f-S_{n,r}f$ into two terms. Specifically, for every $f\in{\cal B}_\delta$, $0<\delta<\pi$, we set \begin{equation}\label{E1E2eq} f(t)-(S_{n,r}f)(t):=(E_1 f)(t)+(E_2f)(t),\ t\in(0,1), \end{equation} where the first error term $E_1f$ and the second error term $E_2f$ are defined by \begin{equation}\label{E1E2eq0} (E_1f)(t):=f(t)-\sum_{j\in\mathbb Z} f(j)\,{\rm sinc}\,(t-j)e^{-\frac{(t-j)^2}{2r^2}}\mbox{ and } (E_2f)(t):=\sum_{j\notin(-n,n]} f(j)\,{\rm sinc}\,(t-j)e^{-\frac{(t-j)^2}{2r^2}}, \end{equation} respectively. It follows that \begin{equation}\label{E1E2} \sup_{\|f\|_{{\cal B}_\delta}\le 1}\sup_{t\in(0,1)}\big|f(t)-(S_{n,r}f)(t)\big|\ge \big|(E_1 f_0)(t_0)+(E_2f_0)(t_0)\big|\ge |(E_1 f_0)(t_0)|-|(E_2f_0)(t_0)|, \end{equation} where $f_0\in{\cal B}_\delta$ with $\|f_0\|_{{\cal B}_\delta}\le 1$ and the point $t_0\in(0,1)$ is appropriately chosen. Throughout this note, we always take \begin{equation}\label{f0} f_0(t):=\frac{1}{\sqrt{\pi\delta}}\frac{\sin((t-\frac12)\delta)}{(t-\frac12)}, \ t\in\mathbb R, \end{equation} and choose $t_0\in (0,1)$ in (\ref{E1E2}) such that the value of $|E_1 f_0|$ at $t_0$ is not less than its average over $(0,1)$, that is, \begin{equation}\label{t0} |(E_1 f_0)(t_0)|\ge \|E_1 f_0\|_{L^1((0,1))}. \end{equation} Clearly, $f_0\in{\cal B}_\delta$, $\|f_0\|_{{\cal B}_\delta}=1$, and its Fourier transform is $\widehat{f_0}(\xi)=\frac{1}{\sqrt{2\delta}}{\bf 1}_{[-\delta,\delta]}(\xi) e^{-i \xi/2}$, $\xi\in\mathbb R$.
Therefore, our task reduces to giving a ``big'' lower bound for $|(E_1 f_0)(t_0)|$ and a ``small'' upper bound for $|(E_2 f_0)(t_0)|$, where $|(E_1 f_0)(t_0)|$ and $|(E_2 f_0)(t_0)|$ are defined by (\ref{E1E2}). For the sake of clarity, two lemmas are needed. \begin{lemma}\label{Lemma1} Let $0<\delta<\pi$ and $0<\varepsilon<1$. If there exists $r\ge \frac{2}{\sqrt{\varepsilon(2+\varepsilon)}(\pi-\delta)}$ such that $\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-2\pi\delta r^2}}{\pi+\delta}>0$, then $$ \int_{-\delta}^{\delta} \Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big)d\xi >\frac{2\sqrt{2}}{(2+\varepsilon)(\pi+\delta) \sqrt{\pi}}\Big(\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-2\pi\delta r^2}}{\pi+\delta}\Big)\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{r^3}. $$ \end{lemma} \begin{proof} By the Mills ratio bounds \cite{Pollak} \begin{equation}\label{Millseq} \frac{e^{-x^2}}{x+\sqrt{x^2+2}}\le \int_x^{\infty} e^{-\tau^2}d\tau\le \frac{e^{-x^2}}{x+\sqrt{x^2+\frac{4}{\pi}}},\ x>0, \end{equation} we obtain \begin{equation}\label{Millseq1} \int_x^{\infty} e^{-\tau^2}d\tau<\frac{e^{-x^2}}{2x},\ x>0. \end{equation} By (\ref{Millseq}), for any $0<\varepsilon<1$, we have \begin{equation}\label{Millseq2} \frac{e^{-x^2}}{(2+\varepsilon)x}< \int_x^{\infty} e^{-\tau^2}d\tau\mbox{ for all }x\ge\sqrt{\frac{2}{\varepsilon(2+\varepsilon)}}. \end{equation} Clearly, the parameter $\varepsilon$ serves to improve the accuracy of the inequality (\ref{Millseq2}).
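The three Gaussian tail inequalities (\ref{Millseq})--(\ref{Millseq2}) are easy to check numerically, since $\int_x^{\infty} e^{-\tau^2}d\tau=\frac{\sqrt{\pi}}{2}\,{\rm erfc}(x)$. The following short script (our own sanity check, not part of the proof) verifies them at a few sample points, taking $\varepsilon=1/2$ in (\ref{Millseq2}).

```python
import math

def gauss_tail(x):
    # int_x^infty e^{-tau^2} d tau = (sqrt(pi)/2) * erfc(x)
    return 0.5 * math.sqrt(math.pi) * math.erfc(x)

# (Millseq): Mills' ratio lower and upper bounds, and (Millseq1)
for x in [0.1, 0.5, 1.0, 2.0, 5.0]:
    lower = math.exp(-x * x) / (x + math.sqrt(x * x + 2.0))
    upper = math.exp(-x * x) / (x + math.sqrt(x * x + 4.0 / math.pi))
    assert lower <= gauss_tail(x) <= upper
    assert gauss_tail(x) < math.exp(-x * x) / (2.0 * x)

# (Millseq2): valid from the threshold x0 = sqrt(2 / (eps (2 + eps))) on
eps = 0.5
x0 = math.sqrt(2.0 / (eps * (2.0 + eps)))
for x in [x0, 2.0, 3.0]:
    assert math.exp(-x * x) / ((2.0 + eps) * x) < gauss_tail(x)
```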
Observe that $$ \begin{array}{ll} \displaystyle{1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau } &\displaystyle{=\frac{1}{\sqrt{\pi}}\int_{\mathbb R} e^{-\tau^2}d\tau-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau }\\ &\displaystyle{= \frac{1}{\sqrt{\pi}} \Big[\int_{\frac{(\xi+\pi)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau + \int^{\frac{(\xi-\pi)r}{\sqrt{2}}}_{-\infty} e^{-\tau^2}d\tau \Big]}\\ &\displaystyle{= \frac{1}{\sqrt{\pi}} \Big[\int_{\frac{(\xi+\pi)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau + \int_{\frac{(\pi-\xi)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau \Big].} \end{array} $$ By (\ref{Millseq2}), for any $\xi\in[-\delta,\delta]$ and $r\ge \frac{2}{\sqrt{\varepsilon(2+\varepsilon)}(\pi-\delta)}$ (i.e., $\frac{(\pi-\delta)r}{\sqrt{2}}\ge\sqrt{\frac{2}{\varepsilon(2+\varepsilon)}}$), we have $$ 1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau > \frac{1}{(2+\varepsilon)\sqrt{\pi}} \Big[\frac{e^{-\frac{(\pi+\xi)^2r^2}{2}}}{(\pi+\xi)r/\sqrt{2}}+\frac{e^{-\frac{(\pi-\xi)^2r^2}{2}}}{(\pi-\xi)r/\sqrt{2}}\Big] :=\frac{\sqrt{2}}{(2+\varepsilon) r\sqrt{\pi}}G(\xi), $$ where $$ G(\xi):=\frac{e^{-\frac{(\pi+\xi)^2r^2}{2}}}{\pi+\xi}+\frac{e^{-\frac{(\pi-\xi)^2r^2}{2}}}{\pi-\xi},\ \xi\in[-\delta,\delta]. $$ We compute $$ \begin{array}{ll} \displaystyle{ \int_{-\delta}^{\delta} G(\xi)d\xi=2\int_{-\delta}^{\delta}\frac{e^{-\frac{(\pi+\xi)^2r^2}{2}}}{\pi+\xi}d\xi } &\displaystyle{ >\frac{2}{\pi+\delta}\int_{-\delta}^{\delta}e^{-\frac{(\pi+\xi)^2r^2}{2}}d\xi }\\ &\displaystyle{=\frac{2\sqrt{2}}{(\pi+\delta)r}\int_{\frac{(\pi-\delta)r}{\sqrt{2}}}^{\frac{(\pi+\delta)r}{\sqrt{2}}} e^{-\tau^2}d\tau }\\ &\displaystyle{=\frac{2\sqrt{2}}{(\pi+\delta)r}\Big[\int_{\frac{(\pi-\delta)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau-\int_{\frac{(\pi+\delta)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau\Big]. 
}\\ \end{array} $$ Combining (\ref{Millseq1}) with (\ref{Millseq2}), we obtain $$ \begin{array}{ll} \displaystyle{\int_{-\delta}^{\delta} G(\xi)d\xi} &\displaystyle{>\frac{2\sqrt{2}}{(\pi+\delta)r}\Big(\frac{\sqrt{2}}{(2+\varepsilon)(\pi-\delta)r}e^{-\frac{(\pi-\delta)^2r^2}{2}}-\frac{1}{\sqrt{2}(\pi+\delta)r}e^{-\frac{(\pi+\delta)^2r^2}{2}}\Big) }\\ &\displaystyle{=\frac{2}{(\pi+\delta)r^2}\Big(\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-2\pi\delta r^2}}{\pi+\delta}\Big)e^{-\frac{(\pi-\delta)^2r^2}{2}}.}\\ \end{array} $$ The proof is hence complete. \end{proof} \begin{lemma}\label{lemma2} Let $0<\delta<\pi$ and $r>0$. It holds $$ \sum_{k\in\mathbb N}\int_{-\delta+2k\pi}^{\delta+2k\pi} e^{-\frac{(\xi-\pi)^2r^2}{2}} d\xi<\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{(\pi-\delta)r^2}. $$ \end{lemma} \begin{proof} By a change of variables $\tau=\frac{(\xi-\pi)r}{\sqrt{2}}$, we have by (\ref{Millseq1}) $$ \sum_{k\in\mathbb N}\int_{-\delta+2k\pi}^{\delta+2k\pi} e^{-\frac{(\xi-\pi)^2r^2}{2}} d\xi =\frac{\sqrt{2}}{r}\sum_{k\in\mathbb N}\int_{\frac{[(2k-1)\pi-\delta]r}{\sqrt{2}}}^{\frac{[(2k-1)\pi+\delta]r}{\sqrt{2}}} e^{-\tau^2} d\tau <\frac{\sqrt{2}}{r}\int_{\frac{(\pi-\delta)r}{\sqrt{2}}}^{\infty} e^{-\tau^2} d\tau <\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{(\pi-\delta)r^2}, $$ which completes the proof. \end{proof} The convolution of two functions $f,g$ on $\mathbb R$ is given as $$ (f*g)(x):=\frac{1}{\sqrt{2\pi}}\int_{\mathbb R} f(x-t)g(t)dt,\ x\in\mathbb R, $$ whenever the integral is well-defined (see, e.g., \cite{Debnath15}, pp. 39--41). Now, we are ready to estimate the lower bound of $|(E_1f_0)(t_0)|$ mentioned at the beginning of this section. \begin{theorem}\label{Theorem1} Let $0<\delta<\frac{\pi}{2}$ and $0<\varepsilon<1$. 
If there exists $r\ge \frac{2}{\sqrt{\varepsilon(2+\varepsilon)}(\pi-\delta)}$ such that $C_{r,\delta,\varepsilon}>0$, then $$ |(E_1f_0)(t_0)|\ge \frac{C_{r,\delta,\varepsilon}}{\pi\sqrt{2\delta}} \frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{r^3}, $$ where $C_{r,\delta,\varepsilon}$ is given by (\ref{Crdelta}). \end{theorem} \begin{proof} Let $f\in{\cal B}_\delta$, $0<\delta<\pi$. Then \begin{equation}\label{hatf} \hat{f}(\xi)=\mathring{\hat{f}}(\xi):=\frac{1}{\sqrt{2\pi}}\sum_{j\in\mathbb Z}f(j)e^{-ij\xi},\ \xi\in[-\delta,\delta], \end{equation} where $f(j)=\frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi} \hat{f}(\xi)e^{ij\xi}d\xi$, $j\in\mathbb Z$. Note that $\widehat{g\cdot h}=\hat{g}*\hat{h}$ for any $g,h\in L^2(\mathbb R)$, and for any $a>0$, $\widehat{e^{-a x^2}}(\xi)=\frac{1}{\sqrt{2a}}e^{-\frac{\xi^2}{4a}}$ and $\widehat{{\bf 1}_{[-a,a]}}(\xi)=\sqrt{\frac{2}{\pi}}\frac{\sin(a\xi)}{\xi}$, $\xi\in\mathbb R$. By (\ref{E1E2eq0}), we have $$ \begin{array}{ll} \displaystyle{\widehat{E_1f}(\xi) } & \displaystyle{ =\hat{f}(\xi)-\sum_{j\in\mathbb Z}f(j)\Big[\Big(\,{\rm sinc}\,(\cdot)e^{-\frac{\cdot^2}{2r^2}}\Big)(t-j)\Big]^{\hat{\,}}(\xi) }\\ & \displaystyle{=\hat{f}(\xi)-\sum_{j\in\mathbb Z}f(j)e^{-ij\xi} \Big(\,{\rm sinc}\,(\cdot)e^{-\frac{\cdot^2}{2r^2}}\Big)^{\hat{\,}}(\xi) } \\ & \displaystyle{=\hat{f}(\xi)-\sum_{j\in\mathbb Z}f(j)e^{-ij\xi} \Big(\widehat{\,{\rm sinc}\,(\cdot)} * \widehat{e^{-\frac{\cdot^2}{2r^2}}}\Big)(\xi) } \\ & \displaystyle{=\hat{f}(\xi)-\sum_{j\in\mathbb Z}f(j)e^{-ij\xi} \Big(\frac{1}{\sqrt{2\pi}}{\bf 1}_{[-\pi,\pi]}(\xi)* re^{-\frac{r^2\xi^2}{2}} \Big)}\\ & \displaystyle{=\hat{f}(\xi)-\sum_{j\in\mathbb Z}f(j)e^{-ij\xi}\frac{1}{2\pi}\int_{\xi-\pi}^{\xi+\pi} re^{-\frac{r^2\eta^2}{2}}d \eta,\ \xi\in\mathbb R.} \end{array} $$ Note that $\,{\rm supp}\,\hat{f}\subseteq[-\delta,\delta]$, and $\mathring{\hat{f}}$ in (\ref{hatf}) is a $2\pi$-periodic function on $\mathbb R$ with $\,{\rm supp}\, \mathring{\hat{f}}=\cup_{k\in\mathbb 
Z}[-\delta+2k\pi,\delta+2k\pi]$. By (\ref{hatf}), we obtain \begin{equation}\label{E1bE1bE2} \begin{array}{ll} \widehat{E_1f}(\xi) &\displaystyle{=\hat{f}(\xi)-\mathring{\hat{f}}(\xi)\frac{1}{\sqrt{2\pi}}\int_{\xi-\pi}^{\xi+\pi} re^{-\frac{r^2\eta^2}{2}}d \eta}\\ &\displaystyle{=\hat{f}(\xi)-\mathring{\hat{f}}(\xi)\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau}\\ &\displaystyle{=\hat{f}(\xi)\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) +\mathring{\hat{f}}(\xi){\bf 1}_{\mathbb R\setminus[-\pi,\pi]}(\xi)\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau}\\ &\displaystyle{:=(\mathbb E_1\hat{f})(\xi)+(\mathbb E_2\hat{f})(\xi),\ \xi\in\mathbb R,} \end{array} \end{equation} where \begin{equation}\label{bE1bE2} \begin{array}{ll} &\displaystyle{(\mathbb E_1\hat{f})(\xi):=\hat{f}(\xi)\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big), }\\ &\displaystyle{ (\mathbb E_2\hat{f})(\xi):=\frac{1}{\sqrt{\pi}}\sum_{k\in\mathbb Z\setminus\{0\}}\hat{f}(\xi-2k\pi){\bf 1}_{[-\delta+2k\pi,\delta+2k\pi]}(\xi)\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau.}\\ \end{array} \end{equation} By (\ref{f0}) and (\ref{t0}), we get $$ |(E_1f_0)(t_0)|\ge \|E_1f_0\|_{L^1((0,1))}=\int_{\mathbb R} \big|(E_1f_0)(x){\bf 1}_{(0,1)}(x)\big|dx\ge \Big|\int_{\mathbb R} (E_1f_0)(x){\bf 1}_{(0,1)}(x)dx\Big|. 
$$ By the Plancherel theorem (see, e.g., Theorem 2.5.6 in \cite{Debnath15}) and (\ref{bE1bE2}), we have \begin{equation}\label{E1} \begin{array}{ll} \displaystyle{|(E_1f_0)(t_0)| } &\displaystyle{\ge \Big|\int_{\mathbb R} \widehat{E_1f_0}(\xi)\overline{\widehat{{\bf 1}_{(0,1)}}}(\xi)d\xi \Big| }\\ &\displaystyle{=\Big|\int_{\mathbb R} (\mathbb E_1 \widehat{f_0})(\xi)\widehat{{\bf 1}_{(0,1)}}(-\xi) d\xi+\int_{\mathbb R} (\mathbb E_2\widehat{f_0})(\xi)\widehat{{\bf 1}_{(0,1)}}(-\xi) d\xi \Big| }\\ &\displaystyle{\ge \Big|\int_{-\delta}^{\delta}\widehat{f_0}(\xi)\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big)\widehat{{\bf 1}_{(0,1)}}(-\xi) d\xi}\\ &\displaystyle{\quad +\frac{1}{\sqrt{\pi}}\sum_{k\in\mathbb Z\setminus\{0\}}\int_{-\delta+2k\pi}^{\delta+2k\pi}\widehat{f_0}(\xi-2k\pi)\Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big)\widehat{{\bf 1}_{(0,1)}}(-\xi) d\xi\Big|.}\\ \end{array} \end{equation} Note that $\widehat{{\bf 1}_{(0,1)}}(\xi)=\frac{1}{\sqrt{2\pi}} \frac{e^{-i\xi}-1}{-i\xi}$ and $\widehat{f_0}(\xi)=\frac{1}{\sqrt{2\delta}}{\bf 1}_{[-\delta,\delta]}(\xi) e^{-i \xi/2}$, $\xi\in\mathbb R$. 
By (\ref{Millseq1}) and (\ref{Millseq2}), we have {\small$$ \begin{array}{ll} &\displaystyle{|(E_1f_0)(t_0)|}\\ &\displaystyle{\ge \frac{1}{2\sqrt{\pi\delta}}\Big| \int_{-\delta}^{\delta}e^{-i \frac{\xi}{2}}\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) \frac{e^{i\xi}-1}{i\xi} d\xi +\frac{1}{\sqrt{\pi}}\sum_{k\in\mathbb Z\setminus\{0\}}\int_{-\delta+2k\pi}^{\delta+2k\pi}e^{ik\xi}e^{-i \frac{\xi}{2}}\Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) \frac{e^{i\xi}-1}{i\xi} d\xi\Big|}\\ &\displaystyle{= \frac{1}{2\sqrt{\pi\delta}}\Big| \int_{-\delta}^{\delta}\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big)\frac{\sin(\xi/2)}{\xi/2} d\xi +\frac{1}{\sqrt{\pi}}\sum_{k\in\mathbb Z\setminus\{0\}}(-1)^k\int_{-\delta+2k\pi}^{\delta+2k\pi}\Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) \frac{\sin(\xi/2)}{\xi/2} d\xi\Big|. }\\ \end{array} $$} Observe that the integrand $$ \Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) \frac{\sin(\xi/2)}{\xi/2},\ \xi\in\mathbb R, $$ is an even function on $\cup_{k\in\mathbb Z\setminus\{0\}}[-\delta+2k\pi,\delta+2k\pi]$, and $(-1)^k=(-1)^{-k}$ for all $k\in\mathbb N$. 
Thus, $$ \begin{array}{ll} &\displaystyle{|(E_1f_0)(t_0)|}\\ &\displaystyle{\ge \frac{1}{2\sqrt{\pi\delta}}\Big| \int_{-\delta}^{\delta}\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big)\frac{\sin(\xi/2)}{\xi/2} d\xi +\frac{4}{\sqrt{\pi}}\sum_{k\in\mathbb N}(-1)^k\int_{-\delta+2k\pi}^{\delta+2k\pi}\Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) \frac{\sin(\xi/2)}{\xi} d\xi\Big| }\\ &\displaystyle{\ge \frac{1}{2\sqrt{\pi\delta}} \Big[\int_{-\delta}^{\delta}\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big)\frac{\sin(\xi/2)}{\xi/2} d\xi -\frac{4}{\sqrt{\pi}}\sum_{k\in\mathbb N} \int_{-\delta+2k\pi}^{\delta+2k\pi}\Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) \frac{|\sin(\xi/2)|}{\xi} d\xi\Big]. } \end{array} $$ Notice that $ \frac{\sin(\delta/2)}{\delta/2}\le \frac{\sin(\xi/2)}{\xi/2}\le 1$ for all $\xi\in[-\delta,\delta]$ and $0\le \frac{|\sin(\xi/2)|}{\xi}\le \frac{\sin(\delta/2)}{2\pi-\delta}$ for all $\xi\in \cup_{k\in\mathbb N}[-\delta+2k\pi,\delta+2k\pi]$. 
As a result, we compute $$ \begin{array}{ll} &\displaystyle{|(E_1f_0)(t_0)|}\\ &\displaystyle{\ge \frac{1}{2\sqrt{\pi\delta}}\Big[ \frac{2\sin(\delta/2)}{\delta} \int_{-\delta}^{\delta}\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) d\xi -\frac{4\sin(\delta/2)}{\sqrt{\pi}(2\pi-\delta)}\sum_{k\in\mathbb N}\int_{-\delta+2k\pi}^{\delta+2k\pi}\Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau\Big) d\xi\Big]}\\ &\displaystyle{\ge \frac{\sin(\delta/2)}{\sqrt{\pi\delta}}\Big[ \frac{1}{\delta} \int_{-\delta}^{\delta}\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) d\xi -\frac{2}{\sqrt{\pi}(2\pi-\delta)}\sum_{k\in\mathbb N}\int_{-\delta+2k\pi}^{\delta+2k\pi}\frac{e^{-\frac{(\xi-\pi)^2r^2}{2}}}{\sqrt{2}(\xi-\pi)r} d\xi\Big]}\\ &\displaystyle{\ge \frac{\sin(\delta/2)}{\sqrt{\pi\delta}}\Big[ \frac{1}{\delta} \int_{-\delta}^{\delta}\Big(1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big) d\xi -\frac{2}{\sqrt{2\pi}(2\pi-\delta)(\pi-\delta)r} \sum_{k\in\mathbb N}\int_{2k\pi-\delta}^{2k\pi+\delta} e^{-\frac{(\xi-\pi)^2r^2}{2}} d\xi\Big].}\\ \end{array} $$ By Lemmas \ref{Lemma1} and \ref{lemma2}, we have $$ \begin{array}{ll} &\displaystyle{|(E_1f_0)(t_0)|}\\ &\displaystyle{\ge \frac{\sin(\delta/2)}{\sqrt{\pi\delta}}\Big[\frac{1}{\delta}\frac{2\sqrt{2}}{(2+\varepsilon)(\pi+\delta) \sqrt{\pi}}\Big(\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-2\pi\delta r^2}}{\pi+\delta}\Big)\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{r^3} -\frac{2}{\sqrt{2\pi}(2\pi-\delta)(\pi-\delta)r}\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{(\pi-\delta)r^2}\Big]}\\ &\displaystyle{\ge \frac{\sin(\delta/2)}{\pi\sqrt{\delta}}\Big[\frac{2\sqrt{2}}{(2+\varepsilon)(\pi+\delta) \delta}\Big(\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-2\pi\delta r^2}}{\pi+\delta}\Big) -\frac{2}{\sqrt{2}(2\pi-\delta)(\pi-\delta)^2} \Big] 
\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{r^3} }\\ &\displaystyle{\ge \frac{\sin(\delta/2)}{\pi\sqrt{2\delta}}\Big[\frac{4}{(2+\varepsilon)(\pi+\delta) \delta}\Big(\frac{2}{(2+\varepsilon)(\pi-\delta)}-\frac{e^{-2\pi\delta r^2}}{\pi+\delta}\Big) -\frac{2}{(2\pi-\delta)(\pi-\delta)^2} \Big] \frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{r^3},}\\ \end{array} $$ which completes the proof. \end{proof} \vspace{1cm} {\bf Proof of Theorem \ref{Theorem}:} By (\ref{f0}), for any $n\ge2$ and $t\in(0,1)$, we estimate $$ |(E_2f_0)(t)|\le \frac{1}{\sqrt{\pi\delta}}\sum_{j\notin(-n,n]} \Big|\frac{\sin((j-\frac12)\delta)}{(j-\frac12)} \,{\rm sinc}\,(t-j) \Big| e^{-\frac{(t-j)^2}{2r^2}} < \frac{1}{\pi n(n+\frac12)\sqrt{\pi\delta}}\sum_{j\notin(-n,n]} e^{-\frac{(t-j)^2}{2r^2}}. $$ By (\ref{Millseq1}), for any $n\ge2$ and $t\in(0,1)$, we arrive at $$ \sum_{j\notin(-n,n]} e^{-\frac{(t-j)^2}{2r^2}}\le 2\sum_{j=n}^{\infty}e^{-\frac{j^2}{2r^2}}=2\sqrt{2}r\sum_{j=n}^{\infty}e^{-\frac{j^2}{2r^2}}\frac{1}{\sqrt{2}r} <2\sqrt{2}r\int_{\frac{n-1}{\sqrt{2}r}}^{\infty}e^{-\tau^2}d\tau <\frac{2r^2}{n-1}e^{-\frac{(n-1)^2}{2r^2}}. $$ Thus, \begin{equation}\label{E2} |(E_2f_0)(t_0)|<\frac{2r^2}{\pi n(n+\frac12) (n-1)\sqrt{\pi\delta}}e^{-\frac{(n-1)^2}{2r^2}}, \ n\ge2. \end{equation} Combining (\ref{E1E2}), (\ref{f0}), (\ref{t0}), (\ref{E2}), and Theorem \ref{Theorem1}, it follows easily that $$ \begin{array}{ll} \displaystyle{\sup_{\|f\|_{{\cal B}_\delta}\le 1}\sup_{t\in(0,1)} |f(t)-S_{n,r}f(t)|} &\displaystyle{\ge |(E_1f_0)(t_0)|- |(E_2f_0)(t_0)| }\\ &\displaystyle{\ge \frac{C_{r,\delta,\varepsilon}}{\pi\sqrt{2\delta}} \frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{r^3}-\frac{2r^2 e^{-\frac{(n-1)^2}{2r^2}}}{\pi n(n+\frac12) (n-1)\sqrt{\pi\delta}}, }\\ \end{array} $$ where $C_{r,\delta,\varepsilon}$ is given by (\ref{Crdelta}). The proof is complete. \hspace{6.5cm} $\square$ \vspace{1cm} At the end of this section, we turn to proving Theorem \ref{Theorem2}. 
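Before doing so, we note that the elementary tail estimate used above for $E_2f_0$, namely $\sum_{j\notin(-n,n]} e^{-\frac{(t-j)^2}{2r^2}}<\frac{2r^2}{n-1}e^{-\frac{(n-1)^2}{2r^2}}$, can be confirmed numerically. The sketch below is our own check (the choice $\delta=\pi/4$ and $r=\sqrt{(n-1)/(\pi-\delta)}$ anticipates the experiments of the last section); it compares a truncated version of the sum with the bound.

```python
import math

def lhs(n, r, t, jmax=500):
    # truncated version of sum over j <= -n and j >= n + 1
    s = sum(math.exp(-((t - j) ** 2) / (2.0 * r * r)) for j in range(n + 1, jmax))
    s += sum(math.exp(-((t - j) ** 2) / (2.0 * r * r)) for j in range(-jmax, -n + 1))
    return s

def rhs(n, r):
    # the closed-form bound derived via (Millseq1)
    return 2.0 * r * r / (n - 1) * math.exp(-((n - 1) ** 2) / (2.0 * r * r))

for n in [2, 7, 15]:
    r = math.sqrt((n - 1) / (math.pi - math.pi / 4))
    for t in [0.1, 0.5, 0.9]:
        assert lhs(n, r, t) < rhs(n, r)
```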
\\ {\bf Proof of Theorem \ref{Theorem2}: } Let $f\in{\cal B}_\delta$ and $\|f\|_{{\cal B}_\delta}\le 1$. Using (\ref{bE1bE2}) and the Cauchy-Schwartz inequality, we have $$ \begin{array}{ll} \displaystyle{\frac{1}{\sqrt{2\pi}}\|\mathbb E_2\hat{f}\|_{L^1(\mathbb R)}} &\displaystyle{\le \frac{1}{\sqrt{2}\pi} \sum_{0\ne k\in\mathbb Z} \int_{-\delta+2k\pi}^{\delta+2k\pi}|\hat{f}(\xi-2k\pi)|\Big(\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau \Big) d\xi }\\ &\displaystyle{\le \frac{\sqrt{2}}{\pi} \sum_{k\in\mathbb N} \Big[\int_{-\delta+2k\pi}^{\delta+2k\pi} \Big( \int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}d\tau\Big)^2 d\xi\Big]^{1/2} }\\ &\displaystyle{\le \frac{\sqrt{2}}{\pi} \sum_{k\in\mathbb N} \Big[\int_{-\delta+2k\pi}^{\delta+2k\pi} \Big( \int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau\Big)^2 d\xi\Big]^{1/2}. } \end{array} $$ By (\ref{Millseq1}), we obtain $$ \begin{array}{ll} \displaystyle{\frac{1}{\sqrt{2\pi}}\|\mathbb E_2\hat{f}\|_{L^1(\mathbb R)}} &\displaystyle{\le \frac{\sqrt{2}}{\pi} \sum_{k\in\mathbb N} \Big[\int_{-\delta+2k\pi}^{\delta+2k\pi} \Big(\frac{e^{-\frac{(\xi-\pi)^2r^2}{2}}}{\sqrt{2}(\xi-\pi)r}\Big)^2 d\xi\Big]^{1/2}}\\ &\displaystyle{\le \frac{1}{\pi(\pi-\delta)r} \sum_{k\in\mathbb N} \Big[\int_{-\delta+2k\pi}^{\delta+2k\pi} e^{-(\xi-\pi)^2r^2} d\xi\Big]^{1/2}}\\ &\displaystyle{= \frac{1}{\pi(\pi-\delta)r} \sum_{k\in\mathbb N} \Big[\frac{1}{r} \int_{[(2k-1)\pi-\delta]r}^{[(2k-1)\pi+\delta]r} e^{-\tau^2} d\tau\Big]^{1/2}}\\ &\displaystyle{\le \frac{1}{\pi(\pi-\delta)r} \sum_{k\in\mathbb N} \Big[\frac{1}{r} \int_{[(2k-1)\pi-\delta]r}^{\infty} e^{-\tau^2} d\tau\Big]^{1/2}}\\ &\displaystyle{\le \frac{1}{\pi(\pi-\delta)r} \sum_{k\in\mathbb N} \Big[\frac{1}{r}\frac{1}{2[(2k-1)\pi-\delta]r}e^{-[(2k-1)\pi-\delta]^2r^2}\Big]^{1/2}}\\ &\displaystyle{\le \frac{1}{\pi(\pi-\delta)\sqrt{2(\pi-\delta)}r^2} \sum_{k\in\mathbb N} e^{-[(2k-1)\pi-\delta]^2r^2/2}.}\\ \end{array} $$ By 
(\ref{Millseq1}), we compute $$ \sum_{k=3}^{\infty}e^{-\frac{[(2k-1)\pi-\delta]^2r^2}{2}}=\frac{1}{\sqrt{2}\pi r}\sum_{k=3}^{\infty}e^{-\frac{[(2k-1)\pi-\delta]^2r^2}{2}}\sqrt{2}\pi r\le \frac{1}{\sqrt{2}\pi r}\int_{\frac{(3\pi-\delta)r}{\sqrt{2}}}^{\infty} e^{-\tau^2}d\tau \le \frac{e^{-\frac{(3\pi-\delta)^2r^2}{2}}}{2\pi(3\pi-\delta)r^2}. $$ It follows that $$ \begin{array}{ll} \displaystyle{\sum_{k\in\mathbb N} e^{-\frac{[(2k-1)\pi-\delta]^2r^2}{2}} } &\displaystyle{=e^{-\frac{(\pi-\delta)^2r^2}{2}}+e^{-\frac{(3\pi-\delta)^2r^2}{2}}+\sum_{k=3}^{\infty}e^{-\frac{[(2k-1)\pi-\delta]^2r^2}{2}} }\\ &\displaystyle{\le \Big[1+\big(1+\frac{1}{2\pi(3\pi-\delta)r^2}\big)e^{-2\pi(2\pi-\delta)r^2}\Big] e^{-\frac{(\pi-\delta)^2r^2}{2}}. }\\ \end{array} $$ Thus, we obtain \begin{equation}\label{upperboundeq1} \frac{1}{\sqrt{2\pi}}\|\mathbb E_2\hat{f}\|_{L^1(\mathbb R)}\le \frac{1+\big(1+\frac{1}{2\pi(3\pi-\delta)r^2}\big)e^{-2\pi(2\pi-\delta)r^2} }{\pi(\pi-\delta)\sqrt{2(\pi-\delta)}r^2}e^{-\frac{(\pi-\delta)^2r^2}{2}}. \end{equation} By (2.12) in \cite{LinZhang17}, we have \begin{equation}\label{upperboundeq2} \frac{1}{\sqrt{2\pi}}\|\mathbb E_1\hat{f}\|_{L^1(\mathbb R)}\le \frac{\sqrt{2\delta} }{\pi(\pi-\delta)r}e^{-\frac{(\pi-\delta)^2r^2}{2}}. \end{equation} By (2.13) in \cite{LinZhang17}, we have \begin{equation}\label{upperboundeq3} |(E_2f)(t)|\le \frac{r e^{-\frac{(n-1)^2}{2r^2}}}{\pi n\sqrt{n-1}},\ t\in(0,1). \end{equation} Since $\widehat{E_1f}\in L^1(\mathbb R)$, the Fourier inversion formula yields $$ |(E_1f)(t)+(E_2f)(t)|\le |(E_1f)(t)|+|(E_2f)(t)|\le \frac{1}{\sqrt{2\pi}}\|\widehat{E_1f}\|_{L^1(\mathbb R)}+|(E_2f)(t)|,\ t\in(0,1).
$$ Combining (\ref{bE1bE2}), (\ref{upperboundeq1}), (\ref{upperboundeq2}), and (\ref{upperboundeq3}), it follows immediately that for each $t\in(0,1)$ $$ \begin{array}{ll} \displaystyle{ |(E_1f)(t)+(E_2f)(t)| } &\displaystyle{\le \frac{1}{\sqrt{2\pi}}\|\mathbb E_1\hat{f}+\mathbb E_2\hat{f}\|_{L^1(\mathbb R)} +|(E_2f)(t)| }\\ &\displaystyle{\le \frac{1}{\sqrt{2\pi}}\|\mathbb E_1\hat{f}\|_{L^1(\mathbb R)}+\frac{1}{\sqrt{2\pi}}\|\mathbb E_2\hat{f}\|_{L^1(\mathbb R)}+|(E_2f)(t)| }\\ &\displaystyle{\le \Big[\frac{1+\big(1+\frac{1}{2\pi(3\pi-\delta)r^2}\big)e^{-2\pi(2\pi-\delta)r^2}}{\sqrt{2(\pi-\delta)}r}+\sqrt{2\delta}\Big]\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{\pi (\pi-\delta)r} +\frac{r e^{-\frac{(n-1)^2}{2r^2}}}{\pi n\sqrt{n-1}}. }\\ \end{array} $$ The proof is hence complete. \hspace{11cm} $\square$ \section{Numerical experiments} We shall present numerical experiments to demonstrate that the lower error bound estimation in Theorem \ref{Theorem} for the truncated Gaussian regularized Shannon sampling series (\ref{GR}) is optimal. To this end, we need to recall the lower error bound (\ref{remarkeq1}) and the upper error bound (\ref{remarkeq2}). In what follows, we let $\delta=\frac{\pi}{4}$, $\varepsilon=\frac{1}{7}$, $n\ge 7$ and $r=\sqrt{\frac{n-1}{\pi-\delta}}$. Remember that $C_{r,\delta,\varepsilon}$ and $f_0$ are given by (\ref{Crdelta}) and (\ref{f0}), respectively. An easy computation shows $C_{r,\delta,\varepsilon}\ge 0.0666687$ for all $n\ge7$. We can easily verify that $n\ge 7$ satisfies the condition (\ref{n}). We shall reconstruct the function values of $f_0$ on $(0,1)$ from $\{f_0(j):j=-n+1,-n+2,\dots, n\}$ by the truncated Gaussian regularized Shannon sampling series $$ (S_{n,r} f_0)(t)=\sum_{j=-n+1}^n f_0(j)\,{\rm sinc}\,(t-j)e^{-\frac{(t-j)^2(\pi-\delta)}{2(n-1)}},\ t\in(0,1). $$ The error of reconstruction is measured by $$ E(f_0-S_{n,r}f_0):=\max_{1\le j\le 99}\big| (f_0-S_{n,r}f_0)\big(\frac{j}{100}\big)\big|. 
$$ This reconstruction error is to be compared with both the lower error bound (\ref{remarkeq1}) denoted by $$ E_{\delta,n}:=\Big[0.0666687(\pi-\delta)\sqrt{\pi-\delta}-\frac{2\sqrt{2}}{(\pi-\delta)\sqrt{\pi}} \frac{(n-1)\sqrt{n-1}}{n(n+\frac12)}\Big] \frac{e^{-\frac{(\pi-\delta)(n-1)}{2}}}{\pi\sqrt{2\delta}(n-1)\sqrt{n-1}},\ n\ge7 $$ and the upper error bound (\ref{remarkeq2}) denoted by $$ {\bf E}_{\delta,n}:=\Big(\sqrt{2\delta}+\frac{\sqrt{n-1}}{n}+\frac{1+(1+\frac{1}{6\pi})e^{-4\pi}}{\sqrt{2(n-1)}}\Big)\frac{e^{-\frac{(\pi-\delta)(n-1)}{2}}}{\pi\sqrt{(\pi-\delta)(n-1)}}, n\ge2. $$ The above three errors, namely the lower error bound $E_{\delta,n}$, the reconstruction error $E(f_0-S_{n,r}f_0)$ and the upper error bound ${\bf E}_{\delta,n}$, for $n=7,9,\dots,23,25$ are listed in Table \ref{Tab1}. We also plot $\log E_{\delta,n}$, $\log E(f_0-S_{n,r}f_0)$, and $\log {\bf E}_{\delta,n}$ for $n=7,8,\dots,24,25$ in Figure \ref{fig1}. To sum up, we proved that the truncated Gaussian regularized Shannon sampling formula converges exponentially rapidly to the bandlimited function and our lower error bound is optimal. 
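The reconstruction error in Table \ref{Tab1} can be reproduced with a few lines of code. The sketch below is our own implementation of the truncated Gaussian regularized Shannon sampling series applied to $f_0$, with $\delta=\pi/4$ and $r=\sqrt{(n-1)/(\pi-\delta)}$ (the helper names are ours); it computes $E(f_0-S_{n,r}f_0)$ for $n=7$ and checks that it lies between the two error bounds.

```python
import math

DELTA = math.pi / 4

def f0(t):
    u = t - 0.5
    if abs(u) < 1e-12:
        return math.sqrt(DELTA / math.pi)  # limiting value at t = 1/2
    return math.sin(DELTA * u) / (u * math.sqrt(math.pi * DELTA))

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def S_nr(n, t):
    r2 = (n - 1) / (math.pi - DELTA)  # r^2 for the choice r = sqrt((n-1)/(pi-delta))
    return sum(f0(j) * sinc(t - j) * math.exp(-(t - j) ** 2 / (2.0 * r2))
               for j in range(-n + 1, n + 1))

def recon_error(n):
    # max over the grid j/100, j = 1, ..., 99, as in the text
    return max(abs(f0(j / 100.0) - S_nr(n, j / 100.0)) for j in range(1, 100))

err = recon_error(7)
# Table 1 reports an error of roughly 1.6e-05 for n = 7, between the two bounds
assert 1e-7 < err < 1e-3
```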
\begin{table}[htbp] \centering \begin{tabular}{|c|c|c|c|} \hline & $E_{\delta,n}$ &$E(f_0-S_{n,r}f_0)$ &${\bf E}_{\delta,n}$ \\ \hline n=7 & 7.5816e-07 & 1.6125e-05 &1.3637e-04 \\ \hline n=9 & 5.6056e-08 & 1.0218e-06 & 1.0754e-05 \\ \hline n=11 & 4.4118e-09 & 7.1272e-08 &8.8497e-07 \\ \hline n=13 & 3.5746e-10 & 5.2752e-09 & 7.4813e-08 \\ \hline n=15 & 2.9493e-11 &4.0037e-10 &6.4423e-09 \\ \hline n=17 & 2.4661e-12 & 3.1085e-11 &5.6227e-10 \\ \hline n=19 &2.0841e-13 &2.4961e-12 &4.9577e-11 \\ \hline n=21 &1.7768e-14 &2.0497e-13 &4.4065e-12 \\ \hline n=23& 1.5261e-15 & 1.6963e-14 & 3.9420e-13 \\ \hline n=25 & 1.3193e-16 &1.4843e-15 & 3.5451e-14 \\ \hline \end{tabular} \caption{Three errors when $\delta=\frac{\pi}{4}$.} \label{Tab1} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=7in]{Errors}\\ \caption{Comparison of $\log (E_{\delta,n})$, $\log E(f_0-S_{n,r}f_0)$, and $\log({\bf E}_{\delta,n})$ when $\delta=\frac{\pi}{4}$.}\label{fig1} \end{figure} {\small \bibliographystyle{amsplain}
https://arxiv.org/abs/2110.02264
Generic Generalized Diagonal Matrices
Generalized diagonal matrices are matrices that have two ladders of entries that are zero in the upper right and bottom left corners. The minors of generic generalized diagonal matrices have square-free initial ideals. We give a description of the facets of their Stanley-Reisner complex. With this description, we characterize the configuration of ladders that yield Cohen-Macaulay ideals. In the special case where both ladders are triangles, we show that the corresponding complex is vertex decomposable. Also in this case, we compute the height and multiplicity of the ideals.
\section{Introduction} The initial ideals of determinantal ideals are square-free, hence a combinatorial approach, using Stanley-Reisner complexes, can be applied to study them. Herzog and Trung in \cite{Herzog1992} gave a description of the Stanley-Reisner complexes of generic determinantal ideals. They use this description to compute the Hilbert multiplicity of these ideals. We introduce a more general class of matrices, generalized diagonal matrices, for which the methods in \cite{Herzog1992} are applicable. Generalized diagonal matrices are matrices with two ladders of zeros in the bottom left and top right corners. We are able to extend the results of \cite{Herzog1992} to generalized diagonal matrices. Furthermore, as our main result, we classify the shapes of zeros which yield an ideal that is Cohen-Macaulay. \begin{Def} A \textit{generalized diagonal} (GD) matrix is an $n \times m$ matrix with two ladders of zeros, $L_1$ and $L_2$, in the bottom left and top right, respectively. The ladders are described as follows. Let $c_1\geq c_2 \geq ...\geq c_s$ be a non-increasing sequence of positive integers with $s < m$ and $c_1 < n$; then $L_1$ consists of the last $c_i$ entries in column $i$. Similarly, let $d_t \leq d_{t+1} \leq ... \leq d_m$ be a non-decreasing sequence of positive integers with $t > 1$ and $d_m < n$; then $L_2$ consists of the first $d_i$ entries in column $i$. We allow $L_1$ or $L_2$ to be empty. \end{Def} In section \ref{Sec-Groebner-Basis-GD} we will compute the initial ideals of the ideals of minors of generic GD matrices. In section \ref{Sec-Complex} we analyze their Stanley-Reisner complex. We give a description of the facets of these complexes, cf. \cite{Herzog1992}, \cite{Miller2005}, and \cite{Conca2003}. Additionally, these complexes can also be realized as complexes associated to certain posets and have been studied in \cite[Section 7]{Bjorner1980}.
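To make the ladder description concrete, the following small script (a hypothetical helper of our own, not from the text) builds the $0/1$ support pattern of a GD matrix from the ladder data $c_1\geq\dots\geq c_s$ and $d_t\leq\dots\leq d_m$, with columns indexed from $1$ as above.

```python
def gd_support(n, m, c, d_start, d):
    """Return an n x m 0/1 matrix in which 0 marks the entries forced to zero.

    c[i] is c_{i+1}: the last c[i] entries of column i+1 (1-based) lie in L1.
    d[i] is d_{d_start+i}: the first d[i] entries of column d_start+i lie in L2.
    """
    A = [[1] * m for _ in range(n)]
    for i, ci in enumerate(c):            # bottom-left ladder L1
        for row in range(n - ci, n):
            A[row][i] = 0
    for i, di in enumerate(d):            # top-right ladder L2
        col = d_start - 1 + i
        for row in range(di):
            A[row][col] = 0
    return A

# a 4 x 5 GD matrix with c = (2, 1) and d = (d_4, d_5) = (1, 2)
A = gd_support(4, 5, [2, 1], 4, [1, 2])
```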
\\ A special case of a generic GD matrix is one where there are triangles of zeros in the two corners. This is precisely when $c_1 = t_1, c_2 = t_1 - 1, ..., c_{t_1} = 1$ for some $t_1 \geq 0$ and $d_{m-t_2+1} = 1, d_{m-t_2+2} = 2,..., d_{m} = t_2$ for some $t_2 \geq 0$. These matrices are determined completely by their size and the parameters, $t_1$ and $t_2$, the sizes of the triangles of zeros. The Stanley-Reisner complex of the initial ideals of their minors behaves nicely. In section \ref{Sec-Dim-Purity}, using the description of the facets of these complexes, we are able to give an explicit formula for the height of the $r \times r$ minors in terms of $n,m,t$, and $r$. In section \ref{Sec-CM} we show that these ideals are Cohen-Macaulay. Finally in section \ref{Sec-Multiplicity} we give a formula for the multiplicity of these ideals. Two more special cases occur when $t_1 = t_2 = 0$ and $t_1 = n-1 = m-1, t_2 = 0$. These parameters describe generic matrices and generic upper triangular square matrices, respectively. Hence we recover the results in \cite{Herzog1992}. \section{Generic GD Matrices}\label{Sec-Groebner-Basis-GD} \begin{Not} \label{Notation-Matrix} Throughout, $X = (x_{ij})$ denotes a generic $n \times m$ GD matrix with ladders of zeros $L_1$ and $L_2$. $I = I_r(X)$, the ideal of $r \times r$ minors of $X$, and $R = k[X]$. We order the non-zero entries of $X$ lexicographically, i.e., $x_{ij} > x_{kl}$ if $i < k$, or $i = k$ and $j < l$. We then extend this order to a lexicographic order on the monomials of $R$. \\ For convenience, after determining the shape of $L_1$ and $L_2$, we mean the non-zero entries of $X$ when we refer to entries of $X$. \end{Not} As is standard practice, to deduce properties about $I$, we first compute $\textrm{in}(I)$ and determine a Gröbner basis for it. After finding $\textrm{in}(I)$, we describe the facets of its Stanley-Reisner complex. 
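Under the lexicographic order of Notation \ref{Notation-Matrix}, the initial term of a nonzero minor is the product of its main-diagonal entries. For $2\times 2$ minors this can be confirmed by brute force, as in the following sketch (our own illustration, not code from the paper), which encodes the variable $x_{ij}$ as the pair $(i,j)$.

```python
from itertools import combinations

def lex_leading(m1, m2):
    # x_{ij} > x_{kl} iff (i, j) < (k, l) as tuples.  For square-free
    # monomials of equal degree, the lex-larger monomial is the one whose
    # variable list, sorted from largest variable down, is tuple-lex smaller.
    return m1 if sorted(m1) < sorted(m2) else m2

n, m = 4, 5
for i1, i2 in combinations(range(n), 2):
    for j1, j2 in combinations(range(m), 2):
        diag = [(i1, j1), (i2, j2)]   # main-diagonal monomial of the minor
        anti = [(i1, j2), (i2, j1)]   # antidiagonal monomial
        assert lex_leading(diag, anti) == diag
```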
Loosely speaking, the facets are unions of stair-shaped paths starting next to $L_2$ and ending next to $L_1$.\\ We first need a lemma that describes how initial ideals in a polynomial ring and in a quotient by some of its variables interact. \begin{Lemma}\label{Lemma-GB-Lift} Let $S$ be a polynomial ring over a field and let $T$ be an ideal generated by indeterminates of $S$. Let $R = S/T$ and let ``$-$'' denote images in $R$. Fix a monomial ordering on $S$ and consider the induced monomial ordering on $R$. Suppose $\{g_1,...,g_k\}$ is a Gröbner basis for a homogeneous ideal $I$ of $S$ such that $\overline{{\normalfont \textrm{in}}(g_i)} = {\normalfont \textrm{in}}(\overline{g_i})$ for all $i$ and ${\normalfont \textrm{in}}(I + T) = {\normalfont \textrm{in}}(I) + T$. Then $\{\overline{g_1},...,\overline{g_k}\}$ is a Gröbner basis for $\overline{I}$. \end{Lemma} \begin{proof} By the assumption $\overline{\textrm{in}(g_i)} = \textrm{in}(\overline{g_i})$, we have $\big( \;\overline{\textrm{in}(g_i)}\; \big) = (\textrm{in}(\overline{g_i}))$, so it suffices to show that $ \big( \;\overline{\textrm{in}(g_i)}\; \big) = \textrm{in}(\overline{I})$. Again by that assumption, since $(\textrm{in}(\overline{g_i})) \subset \textrm{in}(\overline{I})$, we have $\big(\;\overline{\textrm{in}(g_i)}\;\big)\subset \textrm{in}(\overline{I})$. Equality follows if their Hilbert functions are the same. We compare the Hilbert functions as follows: $$\textrm{HF}_{R/\textrm{in}(\overline{I})} = \textrm{HF}_{R/\overline{I}} = \textrm{HF}_{S/I+T} = \textrm{HF}_{S/\textrm{in}(I + T)} $$ $$= \textrm{HF}_{S/\textrm{in}(I) + T} = \textrm{HF}_{S/ ( \textrm{in}(g_i) ) + T} = \textrm{HF}_{R/ \left( \; \overline{\textrm{in}(g_i)} \; \right)}.$$ \end{proof} \begin{Prop}\label{GB-I} The minors of $X$ form a Gröbner basis for $I_r(X)$. \end{Prop} \begin{proof} Let $X'$ denote the fully generic $n \times m$ matrix, where all of the entries are non-zero. Let $S = k[X']$, $I' = I_r(X')$, and $T = (L_1 \cup L_2)$, the ideal generated by the entries of $X'$ in the positions of $L_1 \cup L_2$.
Then we identify $R$ with $S/T$ and we have $I = I_r(X) = (I' + T)/T$. Further, we may choose an ordering on $S$ so that the ordering on $R$ is induced from that of $S$. Let $I = (\delta_1,...,\delta_s)$, where the $\delta_i$ are the $r \times r$ minors of $X$. Let ${\delta_i}'$ be the lift of $\delta_i$ to $S$, i.e., the determinant of the corresponding submatrix of $X'$. By a well-known result, $\{{\delta_1}',...,{\delta_s}'\}$ forms a Gröbner basis for $I'$ \cite[5.3]{Conca2003}, \cite[2.4]{Herzog1992}. \\ We check that $\{{\delta_1}',...,{\delta_s}'\}$ satisfies the conditions of Lemma \ref{Lemma-GB-Lift}. By \cite[1.9]{Gorla2007}, $\textrm{in}(I' + T) = \textrm{in}(I') + T$ holds if for any minor ${\delta_i}' \in I'$ and any entry $x \in T$, there is an element $f \in I' \cap T$ such that $\textrm{in}(f) = \textrm{lcm}(x, \textrm{in}({\delta_i}'))$. Pick ${\delta_i}' \in I'$ and $x \in T$. Now either $\textrm{lcm}(x, \textrm{in}({\delta_i}')) = x \textrm{in}({\delta_i}')$ or $\textrm{lcm}(x, \textrm{in}({\delta_i}')) = \textrm{in}({\delta_i}')$, depending on whether or not $x$ is along the main diagonal of ${\delta_i}'$. In the former case, take $f = x{\delta_i}'$. In the latter case, the submatrix corresponding to ${\delta_i}'$, or its transpose, is of the correct form to apply Lemma \ref{Lemma-Div-Det}, hence ${\delta_i}' \in (x) \subset T$ and we take $f = {\delta_i}'$. \\ We are done once we check that $ \textrm{in}(\delta_i) = \overline{\textrm{in}({\delta_i}')}$, where ``$-$'' denotes the image in $R$. Let $d_i,{d_i}'$ be the submatrices of $X,X'$ corresponding to $\delta_i,{\delta_i}'$, respectively. If $\delta_i \neq 0$, then by Lemma \ref{Lemma-Div-Det}, the main diagonal of ${d_i}'$ does not meet $T$. Hence the main diagonal of $d_i$ is the image of the main diagonal of ${d_i}'$, thus $ \textrm{in}(\delta_i) = \overline{\textrm{in}({\delta_i}')}$. If $\delta_i = 0$, then ${\delta_i}'\in T$; since $T$ is a monomial ideal, we have that each term of ${\delta_i}'$ is in $T$.
In particular, $ \textrm{in}({\delta_i}')\in T$, so $\overline{\textrm{in}({\delta_i}')}=0$. \end{proof} \begin{Lemma}\label{Lemma-Div-Det} Let $Y$ be an $n\times n$ matrix of the form $\begin{bmatrix} * & * \\ A & * \end{bmatrix}$, where $A$ is an $(n-i+1)\times i$ submatrix. Then $\det Y \in I_1(A)$. In particular, if $A$ is a submatrix of the form $\begin{bmatrix} 0 & x \\ 0 & 0 \end{bmatrix}$, where $x \neq 0$, then $x$ divides $\det Y$. \end{Lemma} \begin{proof} Take an $(n-i)$-order Laplace expansion of the determinant of $Y$ along its last $n-i$ rows. Observe that in this expansion, all but one of the $(n-i)\times (n-i)$ minors lie in $I_1(A)$, and the remaining term corresponds to an $i \times i$ minor whose last row is the first row of $A$. \end{proof} \begin{Rmk}\label{Rmk-GB} The fact that the minors form a Gröbner basis is the cornerstone for the deductions in the following section. From the proof, we see that the placement of the zeros in a generic GD matrix ensures that the diagonal terms of minors of $X'$ either survive in $X$ or correspond to a minor that is zero. If we attempt to generalize to sparse matrices, this approach breaks down, as we cannot control the image in $X$ of the main diagonal terms of minors in $X'$. In \cite{Giusti1982}, the authors computed the height and classified the Cohen-Macaulayness of maximal minors of sparse matrices. In the case that a sparse matrix is, up to row and column operations, a GD matrix, we recover \cite[1.6.2]{Giusti1982} and extend their results to arbitrary size minors. \end{Rmk} \section{The Stanley-Reisner Complex of $I_r(X)$}\label{Sec-Complex} We first establish some notation and definitions for standard operations on simplicial complexes. \begin{Def} Let $\Delta$ be a simplicial complex.
\begin{enumerate} \item Let $F$ be a face of $\Delta$; then the \textit{deletion} of $\Delta$ at $F$ is the subcomplex $${\normalfont \textrm{Del}}_F(\Delta) = \{ G \in \Delta : G \cap F = \emptyset\}.$$ \item Let $F$ be a face of $\Delta$; then the \textit{link} of $\Delta$ at $F$ is the subcomplex $${\normalfont \textrm{Link}}_F(\Delta) = \{ G \in \Delta : G \cap F = \emptyset\ \text{ and } G \cup F \in \Delta\}.$$ \item Let $\Sigma$ be a complex whose vertex set is disjoint from the vertex set of $\Delta$. Then the \textit{join} of $\Sigma$ with $\Delta$ is the following complex, $$\Delta * \Sigma = \{G \cup F \; : \; G \in \Delta, F \in \Sigma \}.$$ \end{enumerate} \end{Def} Next, we recall the Stanley-Reisner correspondence between square-free monomial ideals and simplicial complexes. \begin{Def} $($\cite{Herzog2011}$)$ Given a square-free monomial ideal $\mathbf{a}\subset k[x_1,...,x_n]$, let $\Delta_\mathbf{a}$ be the simplicial complex on the vertex set $x_1,...,x_n$ whose faces correspond to the square-free monomials which are not in $\mathbf{a}$. We call $\Delta_{\mathbf{a}}$ the \textit{Stanley-Reisner complex} of $\mathbf{a}$. \end{Def} By Proposition \ref{GB-I}, the initial ideal of $I$ is a square-free monomial ideal, hence we introduce the following notation. \begin{Not} For a generic GD matrix $X$, we set $\Delta_{I_r(X)}=\Delta_I:=\Delta_{\textrm{in}(I)}$, the Stanley-Reisner complex of ${\normalfont \textrm{in}}(I)$. We will label the vertices of $\Delta_I$ using the entries of $X$. \end{Not} We will abuse notation and not distinguish between $x_{ij}$ as an entry of $X$ and as a vertex of $\Delta_I$; the meaning of $x_{ij}$ will be clear from context. \begin{Def} A \textit{$k$-diagonal} is a set of entries of $X$ that form the main diagonal of some $k \times k$ submatrix of $X$. A \textit{$k$-chain} $x_1 < x_2 < ... < x_k$ is an ordered $k$-diagonal using the ordering on $R$.
\end{Def} Note that a face $C$ on the vertex set $\{x_{ij}\}$ is a face of $\Delta_I$ if and only if the monomial support of $C$ is not in $\textrm{in}(I)$. That is to say, a face $C$ of $\Delta_I$ corresponds to a set of entries of $X$ whose product is not divisible by the leading term of any $r\times r$ minor of $X$. This is precisely the condition that $C$ does not contain any $r$-diagonals. This leads to the following definition. \begin{Def}[F-Condition] \label{Def-Fk} Let $k$ be a positive integer; we say that a set of entries $C$ satisfies condition $F_k$ if no subset of $C$ is a $k$-diagonal. Note that if $C$ satisfies condition $F_k$, then $C$ also satisfies condition $F_{k+1}$. The faces of $\Delta_I$ correspond to exactly those sets that satisfy condition $F_k$ for some $k \leq r$. \end{Def} Next, we describe a technical process, which yields a particular subset of $C$, that allows for induction on condition $F_k$. Loosely speaking, this process scrapes off the top perimeter of $C$. If $C$ satisfies condition $F_k$, then after scraping, what remains will satisfy condition $F_{k-1}$. \begin{Def} \label{Def-Order} We define a new ordering on the entries of $X$ in the following way: $$x \succ y \textrm{ iff } \textrm{col}(x) < \textrm{col}(y), \textrm{ or } \textrm{col}(x) = \textrm{col}(y) \textrm{ and row}(x) > \textrm{row}(y).$$ If $C$ is empty, then set $S(C) = C$. Otherwise, suppose $C$ is not empty; we inductively construct a subset $S(C) = \{y_1,...,y_s\} \subset C$ as follows. Choose $y_1 = \textrm{max}_{\succ}\{y \in C \}$. After choosing $y_i$, let $$B(y_i) = \{ y\neq 0 : \textrm{row}(y) \leq \textrm{row}(y_i) \textrm{ and col}(y) \geq \textrm{col}(y_i) \} \setminus \{y_i\}. $$ If $B(y_i)\cap C = \emptyset$ stop; otherwise choose $y_{i+1} = \textrm{max}_{\succ} \{ y \in B(y_i)\cap C\}$. Notice that, by construction, the elements of $S(C)$ are ordered in the following way: $y_1 \succ y_2 \succ ... \succ y_s$. We also have $B(y_1) \supset B(y_2) \supset ... \supset B(y_s)$, and each element in $B(y_i)$ is smaller than $y_i$. \end{Def} For example, if $C=\{a,\cdots, m\}$ consists of the entries in the positions indicated below, then $S(C)=\{j,k,l,h,c,d,a,b\}$ (bold entries). \[\begin{bmatrix} & & & & \mathbf{a}& \mathbf{b}\\ & & & \mathbf{c}& \mathbf{d}& e\\ & & & & f&g \\ & & &\mathbf{h} &i & \\ \phantom{x_5} &\mathbf{j} & \mathbf{k} & \mathbf{l} &m & \end{bmatrix}.\] We will refer to the above process as \textit{scraping}. Next, we prove some statements about $S(C)$. \begin{Prop}\label{Prop-Scrape} Let $C$ be a set of entries of $X$. \begin{enumerate} \item $S(C)$ satisfies condition $F_2$ and is the top left perimeter of $C$ in the following sense. For a non-zero entry $x$, if for some $y \in S(C)$, we have ${\normalfont \textrm{row}}(x) \leq {\normalfont \textrm{row}}(y)$ and ${\normalfont \textrm{col}}(x) \leq {\normalfont \textrm{col}}(y)$, then $x \notin C \setminus S(C)$. In particular, this property says that $S(C)$ is closed under ``going up in a column'' or ``going left in a row''. That is, for any $y \in S(C)$ and any $x \in C$, if ${\normalfont \textrm{row}}(x) \leq {\normalfont \textrm{row}}(y)$ and ${\normalfont \textrm{col}}(x) = {\normalfont \textrm{col}}(y)$, then $x \in S(C)$, and if ${\normalfont \textrm{row}}(x) = {\normalfont \textrm{row}}(y)$ and ${\normalfont \textrm{col}}(x) \leq {\normalfont \textrm{col}}(y)$, then $x \in S(C)$. \item For $k \geq 2$, if $C$ satisfies condition $F_k$, then $C \setminus S(C)$ satisfies condition $F_{k-1}$. Conversely, if $C$ does not satisfy condition $F_k$, then $C \setminus S(C)$ also does not satisfy condition $F_{k-1}$. \end{enumerate} \end{Prop} \begin{proof} (1) It is evident from the construction of $S(C)$ that it satisfies condition $F_2$, as each $B(y_i)$ consists entirely of entries weakly above and to the right of $y_i$. \\ Now we show the statement about $S(C)$ being the top left perimeter of $C$. Let $x$ be an entry satisfying the property in the proposition.
We may assume $x \notin S(C)$, and we will show that then $x \notin C$. Consider the set $E = \{ y \in C : y \succ x \}$. We may assume $E \cap S(C)$ is not empty. For if it is empty, then $y_1$, the maximal element of $C$, is not in $E$, which means $x \succ y_1$, hence $x \notin C$. Next, we will show that for any $y \in E \cap S(C)$, we have $x \in B(y)$. Let $z \in S(C)$ be minimal with respect to $\succ$ such that ${\normalfont \textrm{row}}(x) \leq {\normalfont \textrm{row}}(z)$ and ${\normalfont \textrm{col}}(x) \leq {\normalfont \textrm{col}}(z)$. By minimality of $z$, we have $y \succeq z$, and since $z \in S(C)$, we have ${\normalfont \textrm{row}}(z) \leq {\normalfont \textrm{row}}(y)$. Hence ${\normalfont \textrm{row}}(x) \leq {\normalfont \textrm{row}}(y)$. As $y \in E$, we have that $x \in B(y)$. \\ Next, let $y_t$ be the minimal element of $E \cap S(C)$. From above, we know $x \in B(y_t)$. If $y_t$ is the last element of $S(C)$, then $x \notin C$, as $B(y_t) \cap C = \emptyset$. Hence we may assume there is a $y_{t+1} \in S(C)$. By minimality of $y_t$, we have $x \succ y_{t+1}$. Finally, assume that $x \in C$; then $x \in B(y_t)\cap C$, but then $ y_{t+1} \succeq x$, so that $x = y_{t+1}$, which is a contradiction. The in particular follows immediately. \\ (2) Suppose $C\setminus S(C)$ does not satisfy condition $F_{k-1}$; then there is a $(k-1)$-chain $x_1 < ... < x_{k-1} := x$ in $C\setminus S(C)$. We want to find an element $w \in S(C)$ such that ${\normalfont \textrm{col}}(w) < {\normalfont \textrm{col}}(x)$ and ${\normalfont \textrm{row}}(w) < {\normalfont \textrm{row}}(x)$. This would yield a $k$-chain $x_1 <... < x_{k-1} < w$ in $C$, which would be a contradiction. Consider $E = \{ y \in S(C) : y \succ x\}$, and let $y_t$ be the minimal element of $E$. By $(1)$, it must be that ${\normalfont \textrm{col}}(y_t) < {\normalfont \textrm{col}}(x)$, otherwise $x \in S(C)$.
If ${\normalfont \textrm{row}}(y_t) < {\normalfont \textrm{row}}(x)$, we are done. Hence assume ${\normalfont \textrm{row}}(y_t) \geq {\normalfont \textrm{row}}(x)$; we will see that this leads to a contradiction. Notice that $x \in B(y_t)\cap C$, as $x \notin S(C)$, hence we must have that $y_{t+1} \succ x$; but since $y_{t} \succ y_{t+1}$, this contradicts the minimality of $y_t$.\\ For the converse, let $x_1< ... <x_k$ be a $k$-chain in $C$. Then, since $S(C)$ satisfies condition $F_2$, at most one $x_i$ can lie in $S(C)$, hence $C \setminus S(C)$ contains at least a $(k-1)$-chain. \end{proof} \begin{Def} Let $C$ be a set of non-zero entries of $X$; we say that $C$ is a \textit{stair} if it satisfies condition $F_2$. We now use stairs to define a covering condition on sets of non-zero entries. For $k\geq 1$, we say that $C$ is a \textit{$k$-stair} if we can write $C = \bigcup_{j=1}^k S_j$, where each $S_j$ is a stair, and $k$ is as small as possible. For convenience, for $k \leq 0$, a $k$-stair is the empty set. Note that a non-empty set of non-zero entries of $X$ is a $k$-stair for exactly one $k$. We say that $C$ is a maximal $k$-stair if it is maximal with respect to inclusion and being a $k$-stair.\end{Def} Returning to the example before Proposition \ref{Prop-Scrape}, $C=\{a,\cdots, m\}$ is a non-maximal $2$-stair. Clearly $C$ does not satisfy $F_2$ (consider, for instance, $\{a,e\}\subset C$), so we need to demonstrate that two stairs are sufficient to cover $C$. Take $S_1=S(C)=\{j,k,l,h,c,d,a,b\}$ and $S_2=\{m,i,f,g,e\}$. Note that $S_1$ and $S_2$ are not unique choices. Moreover, $C\cup \{x_1,\cdots, x_5\}$ is a maximal $2$-stair containing $C$. \[\begin{bmatrix} & & & & a& b\\ & & & c& d& e\\ & & & x_1 & f&g \\ x_2 & x_3 & x_4 &h &i & \\ x_5 &j & k & l &m & \end{bmatrix}.\] \begin{Prop} \label{Prop-Stairs} Let $C$ be a set of entries of $X$. \begin{enumerate} \item If $C$ is a $k$-stair, then $C$ satisfies condition $F_{k+1}$.
\item If $C$ satisfies condition $F_k$, then $C$ is contained in a $j$-stair for some $j < k$. In particular, if $C$ maximally satisfies condition $F_{k+1}$ but does not satisfy condition $F_{k}$, then $C$ is a maximal $k$-stair. \item If $C$ is a maximal $k$-stair, then $C \setminus S(C)$ is a maximal $(k-1)$-stair in the submatrix obtained by removing the first row and column. In particular, if $C$ is a non-empty maximal $k$-stair, then $C$ maximally satisfies condition $F_{k+1}$ but does not satisfy condition $F_k$. \item All faces of $\Delta_I$ are contained in a $j$-stair for some $j < r$. In particular, $C$ is a facet of $\Delta_I$ if and only if $C$ is a maximal $(r-1)$-stair. \item If $C$ is a stair, then $S(C) = C$. \item If $C$ is a non-empty maximal $k$-stair, then $S(C)$ is a maximal stair. \end{enumerate} \end{Prop} \begin{proof} (1) Choose $k+1$ entries of $C$, and write $C$ as a union of $k$ stairs. By the pigeonhole principle, at least two of the chosen entries are contained in a common stair $S$. Since $S$ satisfies condition $F_2$, these two entries cannot form a $2$-diagonal, so the chosen entries cannot form a $(k+1)$-diagonal. \\ (2) We proceed by induction on $k$. The base case, when $k=2$, follows from the definition of a stair.\\ For the inductive step, suppose $C$ satisfies condition $F_k$ for some $k > 2$. We apply Proposition \ref{Prop-Scrape} to obtain subsets $S(C)\subset C$ and $C'= C\setminus S(C)$, where $S(C)$ satisfies condition $F_2$ and $C'$ satisfies condition $F_{k-1}$. By induction, $C'$ is contained in $S$, a $j$-stair for some $j < k-1$, and $S(C)$ is a stair. Hence $C = C' \cup S(C)$ is contained in $S \cup S(C) $, an $l$-stair for some $l < k$. \\ The in particular follows from the statement we just proved and $(1)$. \\ (3) We apply Proposition \ref{Prop-Scrape} to write $C = S(C) \sqcup C'$, where $S(C)$ satisfies condition $F_2$ and $C' = C \setminus S(C)$ satisfies condition $F_{k}$. Also, by Proposition \ref{Prop-Scrape} (1), $C'$ does not meet the first row or column.
Thus, we can consider $C'$ in $Y$, the submatrix obtained by removing the first row and column. Since $C'$ satisfies condition $F_{k}$, by (2), it is contained in a maximal $j$-stair, $E$, of $Y$ for some $j < k$. Back in $X$, we have $C \subset S(C) \cup E$. Now $S(C) \cup E$ satisfies condition $F_{k+1}$: otherwise, if it contains a $(k+1)$-diagonal, then at most one of the entries in the diagonal can be in $S(C)$, which implies that $E$ contains at least a $k$-diagonal, a contradiction. As $C$ is maximal, we have $C = S(C) \cup E$. Since $S(C)$ is a stair, $E$ must be a $(k-1)$-stair, otherwise $C$ could be written as a union of fewer than $k$ stairs.\\ It remains to show that, back in $X$, $E \cap S(C) = \emptyset$, as then we would have $E = C \setminus S(C)$. Suppose not, and let $z = x_{ab}$ be the minimal element of $E \cap S(C)$ with respect to $\succ$. Since $E$ is contained in $Y$, there exists an entry $w = x_{(a-1)(b-1)}$ directly up and to the left from $z$. By Proposition \ref{Prop-Scrape}, $S(C)$ is the top perimeter of $C$, hence $w \notin C$. We will show that $A = C \cup \{w\}$ satisfies condition $F_{k+1}$; then, using $(2)$, this will contradict the assumption that $C$ is a maximal $k$-stair. Suppose not; then $A$ contains a $(k+1)$-chain, $x_1 < ... < x_{k+1}$. Now this $(k+1)$-chain must contain $w$, otherwise it would be a $(k+1)$-chain completely contained in $C$, which contradicts $C$ satisfying condition $F_{k+1}$. As $w$ is above $S(C)$, we must have $x_{k+1} = w$. Now, we can replace $x_{k}$ with any element in $C$ whose row and column indices are greater than those of $w$ and less than or equal to those of $x_{k}$, and preserve the chain. But $z$ is such an element, hence we may assume $x_{k} = z$. Now, as $S(C)$ satisfies condition $F_2$, the entries $x_1,...,x_{k-1}$ must be in $E$. But then $x_1 < ... <x_{k-1}$ is a $(k-1)$-chain in $E$, which is a contradiction, as $E$ satisfies condition $F_{k-1}$.
\\ For the in particular, from $(1)$ we only need to show that $C$ does not satisfy condition $F_k$. But this follows from induction and Proposition \ref{Prop-Scrape} (2). \\ (4) Follows immediately from (2) and (3). \\ (5) Suppose $C \setminus S(C)$ is not empty; then by Proposition \ref{Prop-Scrape}, $C \setminus S(C)$ satisfies condition $F_{1}$, i.e., it contains no entries, a contradiction. \\ (6) Let $E$ be a maximal stair containing $S(C)$. We first show that $E \subset C$. By (3), $C \setminus S(C)$ is a $(k-1)$-stair considered in $X$. Let $C' = E \cup (C \setminus S(C))$; clearly $C \subset C'$. Now $C'$ is a union of $k$ stairs; for contradiction, suppose it is a $j$-stair for some $j < k$. Then $C'$ would satisfy condition $F_k$, but this would imply that $C$ satisfies condition $F_k$, which contradicts (3). Hence $C'$ is a $k$-stair. As $C$ is a maximal $k$-stair, $C = C'$ and hence $E \subset C$.\\ Next, we show that $E = S(C)$. For contradiction, suppose $E \setminus S(C)$ is not empty. Using the notation of the scraping process, let $S(C) = \{y_1,...,y_u\}$, and since $S(E) = E$ by (5), we may also write $E = \{z_1,...,z_s\}$. Let $z_l$ be the maximal element in $E \setminus S(C)$; we claim that $z_l \notin C$, which leads to a contradiction. Now, after ordering $S(C) \cup \{z_l\}$ using $\succ$, there are two edge cases, depending on whether $z_l$ is the maximal or minimal element. If $z_l$ is the maximal element, then $z_l \succ y_1$, and since $y_1$ is the maximal element of $C$, the claim follows. If $z_l$ is the minimal element, then by construction of $S(C)$, we have $B(y_u)\cap C = \emptyset$. Since $z_l \in B(y_u)$, the claim follows.\\ Next, assume we are outside the two edge cases; in particular, $l \neq 1$ and $l \neq s$. By maximality of $z_l$, we have that $z_{l-1}$ is in $S(C)$. Write $z_{l-1} = y_t$ for some $y_t \in S(C)$. Then $z_l \in B(z_{l-1}) = B(y_t)$. Assume for contradiction that $z_l \in C$. Then $B(y_t) \cap C \neq \emptyset$.
By construction of $S(C)$, there is a $y_{t+1} \in S(C)$ with $y_{t+1} \succeq z_l$. Now $y_{t+1} \in B(z_{l-1}) \cap E$, but $z_l$ is the maximal element in $B(z_{l-1}) \cap E$, hence $z_l = y_{t+1}$. This contradicts $z_l \notin S(C)$.\end{proof} The scraping process and Proposition \ref{Prop-Stairs} (5) justify the following description of a maximal stair. Start with an entry $x$ for which $B(x)$ is empty. Then at each step pick an entry either directly below or directly to the left of the previous entry, and proceed until there are no more entries below or to the left. In other words, maximal stairs are formed by maximal paths winding downwards and leftwards. This description is inverse to the scraping process, cf. \cite[Section 3]{Herzog1992}. \\ We now give a more precise description of what the facets look like. \begin{Not} Let $X$ be a matrix. Set $U_k(X)$ to be the set of non-zero entries in the $k \times k$ triangle in the upper right corner of $X$, i.e., $U_k(X) = \{x_{ij} \neq 0 : j - i \geq m-k \}$. Set $D_k(X)$ to be the set of non-zero entries in the $k \times k$ triangle in the lower left corner of $X$, i.e., $D_k(X) = \{x_{ij} \neq 0: i - j \geq n-k \}$. Note that $U_k(X)$ or $D_k(X)$ may be empty. If the matrix $X$ is clear from context, we omit $X$ and just write $U_k$ and $D_k$. \end{Not} \begin{Prop} \label{Prop-Contain-Triangle} Let $C$ be a set of entries that satisfies condition $F_k$; then $C \cup U_{k-1} \cup D_{k-1}$ satisfies condition $F_k$. In particular, if $C$ is a facet, then $C$ contains $U_{r-1}$ and $D_{r-1}$. \end{Prop} \begin{proof} This is clear, as any entry in $U_{k-1} \cup D_{k-1}$ cannot be part of any $k$-diagonal. The in particular is also clear, as $C$ is maximal with respect to satisfying condition $F_r$.
\end{proof} This proposition shows that if we let $X'$ be the matrix obtained from $X$ by zeroing the entries in $U_{r-1}$ and $D_{r-1}$, then $\Delta_I=\Delta_{I_r(X')} * (U_{r-1}\cup D_{r-1})$. Since $U_{r-1}\cup D_{r-1}$ spans a simplex, the join is the same as an iterated cone over the vertices of $U_{r-1}\cup D_{r-1}$. Hence, if we are concerned with any property that is preserved by taking cones, such as purity, we may replace $X$ with $X'$.\\ A maximal $k$-stair $C$ can be described as the triangles $U_{k-1}$ and $D_{k-1}$ along with $k$ disjoint tendrils going from $U_{k-1}$ to $D_{k-1}$. These tendrils come from a particular stair decomposition of $C$. This is the content of the next proposition. \begin{Prop} \label{Prop-Tendrils} Let $C$ be a maximal $k$-stair. Then there exists a stair decomposition $C = \bigcup_{j=1}^{k} S_j$ such that the sets $T_j = S_j \setminus (U_{k-1} \cup D_{k-1})$ are pairwise disjoint. \end{Prop} \begin{proof} We proceed by induction on $k$. The base case is $k=1$ and is immediate. For the induction step, set $S_1 = S(C)$. Then by Proposition \ref{Prop-Stairs} (3), $C\setminus S(C)$ is a maximal $(k-1)$-stair in the matrix $Y$, obtained by deleting the first row and column. Hence, by induction, there is a stair decomposition $C \setminus S(C) = \bigcup_{j=2}^{k} S_j$ where the $T_j$'s are pairwise disjoint. Hence $C =\bigcup_{j=1}^{k} S_j$ gives the desired stair decomposition. \end{proof} We will call the sets $T_j$ the \textit{tendrils} of a $k$-stair. Notice that the proof shows that a stair decomposition of a $k$-stair can be obtained by repeated scraping. The facets of $\Delta_I$ can be described as a union of $r-1$ disjoint tendrils along with $U_{r-1}$ and $D_{r-1}$. \\ \section{Dimension and Purity of $\Delta_I$}\label{Sec-Dim-Purity} Now that we have a description of the facets of $\Delta_I$, we can characterize when $\Delta_I$ is pure based on the shape of the ladders of zeros, $L_1$ and $L_2$, in $X$.
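On small examples, purity can be checked directly from the combinatorial description: the faces of $\Delta_I$ are exactly the sets of non-zero entries containing no $r$-diagonal, and the facets are the maximal ones among them. The following brute-force sketch (function names are ours; positions are 0-based; feasible only for very small matrices) enumerates the facets of $\Delta_I$ from a 0/1 support pattern:

```python
from itertools import combinations

def longest_diagonal(entries):
    """Length of the longest k-diagonal (rows and columns strictly
    increasing) contained in a set of (row, col) positions."""
    pts = sorted(entries)
    best = {}
    for p in pts:
        best[p] = 1 + max((best[q] for q in best
                           if q[0] < p[0] and q[1] < p[1]), default=0)
    return max(best.values(), default=0)

def facets(A, r):
    """Brute-force the facets of Delta_I for a support pattern A: the
    maximal sets of nonzero entries containing no r-diagonal, i.e., the
    maximal sets satisfying condition F_r."""
    cells = [(i, j) for i, row in enumerate(A)
             for j, v in enumerate(row) if v == 1]
    faces = [set(c) for k in range(len(cells) + 1)
             for c in combinations(cells, k)
             if longest_diagonal(c) < r]
    return [F for F in faces
            if not any(longest_diagonal(F | {c}) < r
                       for c in cells if c not in F)]
```

For the fully generic $2 \times 2$ matrix and $r = 2$, this yields the two maximal stairs $\{(0,0),(0,1),(1,0)\}$ and $\{(0,1),(1,0),(1,1)\}$, both with three vertices, so $\Delta_I$ is pure in that case.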
In the case that $L_1$ and $L_2$ are triangles, there is a simple formula for the dimension of $\Delta_I$. The formula is in terms of the height of the ideal of minors of a completely generic matrix, and a correction term corresponding to heights of ideals of minors of generic symmetric matrices. \\ Before proceeding, we set up some definitions. \begin{Def} A GD matrix is \textit{unpinched} if there is no contiguous $2 \times 2$ submatrix of the form $$\begin{bmatrix} * & 0 \\ 0 & * \end{bmatrix}.$$ Any GD matrix can be decomposed as a block diagonal matrix where the diagonal blocks are unpinched GD matrices. \\ We call a non-zero entry $x$ a \textit{corner} of $L_2$ if $B(x)$ (see \ref{Def-Order}) is empty. We call a non-zero entry $x$ a corner of $L_1$ if, after transposing $X$, $B(x)$ is empty. \end{Def} \begin{Prop}\label{Prop-Unpinched-Purity} Let $X$ be an unpinched generic GD matrix. If $\Delta_I$ is pure, then the corners of $L_1$ are on the same diagonal or $L_1$ is contained in an $(r-1) \times (r-1)$ triangle, and the corners of $L_2$ are on the same diagonal or $L_2$ is contained in an $(r-1) \times (r-1)$ triangle. The converse holds when $r=2$, even if $X$ is pinched. \end{Prop} \begin{proof} We may assume $L_1$ and $L_2$ contain triangles of size $(r-1)\times (r-1)$, since doing so does not change any of the assumptions, by Proposition \ref{Prop-Contain-Triangle}. \\ We first deal with the $r=2$ case. In this case, we will also show that the converse holds. When $r=2$, the facets of $\Delta_I$ are maximal stairs. The size of any maximal stair starting at $(a,b)$ and ending at $(c,d)$, where $(c,d) \succ (a,b)$, only depends on its end points and is equal to $$c - a + b - d + 1.$$ Assume $\Delta_I$ is pure. Let $x = (x_1,x_2)$ and $y = (y_1,y_2)$ be adjacent corners of $L_1$. Since $X$ is unpinched, there exists a corner $z = (z_1,z_2)$ of $L_2$ such that there exist two maximal stairs $S$ and $T$ that both start at $z$ and end at $x$ and $y$, respectively.
As $\Delta_I$ is pure, the sizes of $S$ and $T$ are equal, hence $$x_1 - z_1 + z_2 - x_2 + 1 = y_1 - z_1 + z_2 - y_2 + 1.$$ This shows $x_1 - x_2 = y_1 - y_2$, and hence they are along the same diagonal. For $L_2$, we can repeat the same calculation, where we pick two maximal stairs $S$ and $T$ that end at the same point but start at two adjacent corners of $L_2$. \\ Conversely, assume that the corners of $L_1$ and $L_2$ are on the same diagonal. Any maximal stair must start at a corner of $L_2$ and end at a corner of $L_1$. But for any corner $(x_1,x_2)$ of $L_1$, we have, by assumption, that $x_1 - x_2$ is constant. Similarly, $y_1 - y_2$ is constant for $(y_1,y_2)$ a corner of $L_2$. Finally, the size of any maximal stair is a function of $x_1 - x_2$ and $y_1 - y_2$, hence all maximal stairs have the same size. Notice that this argument does not require $X$ to be unpinched. \\ Now assume $r > 2$. We first show that $L_1$ and $L_2$ are not rectangles, i.e., they have more than two corners each. We induct on $r$. For $r=3$, suppose $\Delta_I$ is pure and, for contradiction, assume that $L_1$ is a rectangle. Let $E$ be the set of all non-zero entries in the first row and column of $X$. Notice $E$ is a face of $\Delta_I$. We have that ${\normalfont \textrm{Link}}_{E}(\Delta_I)=\Delta_{J}$, where $J$ is the ideal of $(r-1)$-minors of $Y$, the submatrix obtained by deleting the first row and column. Since $\Delta_I$ is pure, we have that $\Delta_J$ is pure. Hence, from the $r=2$ case, we have that $L_1$ is a square after removing its first column, so $L_1$ is a $k\times (k+1)$ rectangle for some $k$. Similarly, let $E'$ be the set of all non-zero entries in the last row and column of $X$; then $E'$ is also a face of $\Delta_I$. We get that ${\normalfont \textrm{Link}}_{E'}(\Delta_I)=\Delta_{J'}$, where $J'$ is the ideal of $(r-1)$-minors of the submatrix $Y'$, obtained from deleting the last row and column.
By the same reasoning as before, we get that $L_1$ is an $(l+1)\times l$ rectangle for some $l$, contradicting the previously shown form of $L_1$.\\ For the inductive step, note that if $L_1$ is a rectangle, then either $L_1$ minus the first column or $L_1$ minus the last row is a rectangle, hence $\Delta_J$ or $\Delta_{J'}$ is not pure by the inductive hypothesis.\\ Repeat this argument with $L_2$ to get that $L_2$ cannot be a rectangle.\\ Now we compare $L_1$ in $Y$ and in $Y'$. Since $L_1$ has more than two corners, $L_1$ considered in $Y$ shares at least one corner with $L_1$ considered in $Y'$. Hence, by the inductive hypothesis, all of the corners of $L_1$ are along the same diagonal when considered in $X$. The same argument holds for $L_2$. \end{proof} \begin{Prop}\label{Prop-Pure-DisjointTendril} If all tendrils of all facets of $\Delta_I$ are of the same dimension, then $\Delta_I$ is pure. Conversely, if $I \neq 0$, $X$ is unpinched, and $\Delta_I$ is pure, then all tendrils of all facets of $\Delta_I$ are of the same dimension. \end{Prop} \begin{proof} The first statement is immediate from Proposition \ref{Prop-Tendrils}. \\ For the converse, we proceed by induction on $r$. We may assume $L_1$ and $L_2$ contain triangles of size $(r-1)\times (r-1)$. The base case, $r=2$, follows from Proposition \ref{Prop-Unpinched-Purity}. For the inductive step, let $C$ be a facet of $\Delta_I$. Then, as usual, we can write $C = S(C) \cup (C \setminus S(C))$, where $C \setminus S(C)$ is a facet of $\Delta_{I_{r-1}(Y)}$, with $Y$ the submatrix obtained by removing the first row and column. By induction, the tendrils of $C \setminus S(C)$ all have the same dimension. By the proof of Proposition \ref{Prop-Tendrils}, the tendrils of $C$ are the tendril coming from $S(C)$ along with the tendrils of $C \setminus S(C)$. \\ Now let $D$ be the last row and column and let $E$ be the union of the first $r-1$ rows and columns along with $D$.
It is clear that $E$ is a facet of $\Delta_I$ and that $D$ is a tendril of $E$. By the same argument as above, $D$ has the same dimension as any tendril of $E \setminus S(E)$, and hence it also has the same dimension as any tendril of $C \setminus S(C)$. By construction, both $D$ and $S(C)$ start at corner positions of $L_2$ and end at corner positions of $L_1$. Hence, by Proposition \ref{Prop-Unpinched-Purity}, since the corners of $L_1$ and $L_2$ are on the same diagonal, we have that $D$ and $S(C)$ have the same dimension. \end{proof} The converse is false if $X$ is pinched, as shown by considering the $3 \times 3$ minors of the following matrix: $$\begin{bmatrix} x_{11} & x_{12} & 0 \\ 0 & x_{22} & 0 \\ 0 & 0 & x_{33} \end{bmatrix}.$$ The maximal $2$-stairs are $$\{x_{11},x_{12},x_{22}\},\{x_{11},x_{12},x_{33}\}, \textrm{ and } \{x_{12},x_{22},x_{33}\}.$$ Now consider $\{x_{11},x_{12},x_{22}\}$. Its two tendrils, obtained by repeatedly scraping, are $\{x_{11},x_{12}\}$ and $\{x_{22}\}$. The two tendrils clearly do not have the same dimension. \begin{Prop} \label{Prop-Purity} Set $X'$ to be the matrix obtained from $X$ by zeroing $U_{r-1}$ and $D_{r-1}$, and let $\Delta'$ be the complex associated to $I_r(X')$. Let $A_i$ be the diagonal blocks of an unpinched decomposition of $X'$. Assume $I \neq 0$. \begin{enumerate} \item If $\Delta'$ is pure, then $\Delta_{I_2(X')}$ is pure. \item $\Delta_{I_2(X')}$ is pure if and only if the $\Delta_{I_2(A_i)}$ are pure of the same dimension. \end{enumerate} \end{Prop} \begin{proof} (1) If $X'$ is unpinched, then the statement follows by Proposition \ref{Prop-Unpinched-Purity}. Hence we may assume $X'$ is pinched. We induct on $r$; the base case $r=2$ is trivial. Now suppose $r > 2$. As before, let $E$ and $F$ be the first row and column and the last row and column, respectively. Set $Y$ and $Z$ to be the submatrices obtained by deleting $E$ and $F$, respectively.
Then we can consider ${\normalfont \textrm{Link}}_{E}(\Delta_I)$ and ${\normalfont \textrm{Link}}_{F}(\Delta_I)$, which correspond to the complexes associated to the $(r-1)$-minors of $Y$ and $Z$, respectively. By induction, $\Delta_{I_2(Y)}$ and $\Delta_{I_2(Z)}$ are pure. Finally, notice that any stair in $X'$ is completely contained in $Y$ or $Z$, as otherwise $X'$ would be unpinched. Hence every stair in $X'$ has the same dimension, so that $\Delta_{I_2(X')}$ is pure. \\ (2) Any stair in $\Delta_{I_2(X')}$ corresponds to a stair in some $\Delta_{I_2(A_i)}$. In fact, $\Delta_{I_2(X')} = \bigsqcup \Delta_{I_2(A_i)}$. The statement follows immediately. \end{proof} We finish this section by working out a formula for the dimension of $\Delta_I$, and thus the height of $I$, in the special case where $L_1$ and $L_2$ are triangles. In the next section, we will show that $L_1$ and $L_2$ being triangles is equivalent to $\Delta_I$ being Cohen-Macaulay. \\ Recall that if $Y$ is an $n \times m$ generic matrix, then $$\textrm{ht} \; I_r(Y) = (n - r + 1)(m - r + 1) =: \textrm{EN}(n,m,r),$$ as shown in \cite{EagonNorthcott}. Also, we define $$\textrm{ENS}(n,r) := \binom{n-r+2}{2},$$ which is the height of $I_r(Y)$ when $Y$ is a generic symmetric matrix \cite{Jozefiak}. \begin{Th} \label{Th-Triangle-Dim} Let $L_1$ be a triangle of size $t_1$ and $L_2$ be a triangle of size $t_2$. Then $\Delta_I$ is pure with $$\dim \Delta_I = \dim R - {\normalfont \textrm{EN}}(n,m,r) + {\normalfont \textrm{ENS}}(t_1,r) + {\normalfont \textrm{ENS}}(t_2,r) - 1,$$ and thus $${\normalfont \textrm{ht}} \; I = {\normalfont \textrm{EN}}(n,m,r) - {\normalfont \textrm{ENS}}(t_1,r) - {\normalfont \textrm{ENS}}(t_2,r).$$ \end{Th} \begin{proof} It suffices to show that the dimension of each facet is $$\dim R - {\normalfont \textrm{EN}}(n,m,r) + {\normalfont \textrm{ENS}}(t_1,r) + {\normalfont \textrm{ENS}}(t_2,r) - 1.$$ A facet of $\Delta_I$ is an $(r-1)$-stair, so we count the size of these stairs.
\\ We induct on $r$. The base case is $r=2$. By Proposition \ref{Prop-Unpinched-Purity}, $\Delta_I$ is pure. Hence we need only count the size of a specific maximal stair, namely the first row and column. One computes that the size of the first row and column is $$n + m - 1 - t_1 - t_2 = \dim R - {\normalfont \textrm{EN}}(n,m,1) + {\normalfont \textrm{ENS}}(t_1,1) + {\normalfont \textrm{ENS}}(t_2,1).$$ For the inductive step, we apply Proposition \ref{Prop-Stairs} (3) to write an $(r-1)$-stair $C$ as $C = S(C) \sqcup C'$, where $C'$ is a maximal $(r-2)$-stair in the submatrix obtained by removing the first row and column. $S(C)$ is a maximal stair by Proposition \ref{Prop-Stairs} (6), hence it is the same size as the first row and column. Let $R'$ be the ambient ring of the submatrix; then notice that $\dim R = |S(C)| + \dim R'$. Now we apply the inductive hypothesis to get \begin{align*} |C| = |S(C)| + |C'| &= |S(C)| + \dim R' - {\normalfont \textrm{EN}}(n-1,m-1,r-1) + {\normalfont \textrm{ENS}}(t_1-1,r-1) + {\normalfont \textrm{ENS}}(t_2-1,r-1) \\ &= \dim R - {\normalfont \textrm{EN}}(n,m,r) + {\normalfont \textrm{ENS}}(t_1,r) + {\normalfont \textrm{ENS}}(t_2,r). \end{align*} \end{proof} \section{Cohen-Macaulayness of $I_r(X)$}\label{Sec-CM} In this section, we will characterize the configurations of zeros of a generic GD matrix $X$ for which $I = I_r(X)$ is Cohen-Macaulay. We say that a complex is Cohen-Macaulay if its corresponding ideal is Cohen-Macaulay. As we have shown in Section 2, under the lexicographical ordering, $\textrm{in}(I)$ is square-free, hence we may apply the following. \begin{Th}\label{Th-ICM-iff-InICM}$($\cite[1.3]{Conca2020}$)$ Let $J$ be a homogeneous ideal such that $\normalfont{\textrm{in}}(J)$ is square-free. Then $J$ is Cohen-Macaulay if and only if $\normalfont{\textrm{in}}(J)$ is Cohen-Macaulay, if and only if $\Delta_J$ is Cohen-Macaulay.
\end{Th} It turns out that $\Delta_I$ is Cohen-Macaulay if and only if the ladders of zeros, $L_1$ and $L_2$, are triangles, or $L_1$ and $L_2$ are small, i.e., contained in triangles of size $(r-1)\times(r-1)$. We will show later that these configurations are sufficient for $\Delta_I$ to be not only Cohen-Macaulay, but also vertex decomposable. In light of Theorem \ref{Th-ICM-iff-InICM}, this characterizes the configurations for which $I$ is Cohen-Macaulay. \\ Before proceeding, we recall Reisner's Criterion for when a complex is Cohen-Macaulay. \begin{Th}\label{Th-Reisner-Criterion}$(${\cite[8.1.6]{Herzog2011}}$)$ Let $\Delta$ be a complex over a field $k$. Then $\Delta$ is Cohen-Macaulay if and only if for every face $F \in \Delta$ and for every $i < \dim {\normalfont \textrm{Link}}_{F}(\Delta)$, we have $$\widetilde{H}_i({\normalfont \textrm{Link}}_{F}(\Delta); k) = 0.$$ \end{Th} The following are some useful applications of Reisner's Criterion. \begin{Cor}\label{Cor-Reisner-Cor} Let $\Delta$ be a complex. Then \begin{enumerate} \item If $\Delta$ is Cohen-Macaulay, then for every face $F \in \Delta$, ${\normalfont \textrm{Link}}_F(\Delta)$ is Cohen-Macaulay. \item If $F$ is a simplex whose vertex set is disjoint from that of $\Delta$, then $\Delta * F$ is Cohen-Macaulay if and only if $\Delta$ is Cohen-Macaulay. \end{enumerate} \end{Cor} We now provide necessary conditions for $\Delta_I$ to be Cohen-Macaulay. \begin{Prop}\label{Prop-Nec-CM} Suppose $I \neq 0$ and $\Delta_I$ is Cohen-Macaulay, and let $X'$ be the matrix obtained by zeroing $U_{r-1}$ and $D_{r-1}$. Then \begin{enumerate} \item $X'$ is unpinched or $X'$ is a diagonal matrix. \item $L_1$ is a triangle or contained in a triangle of size $(r-1)\times(r-1)$, and $L_2$ is a triangle or contained in a triangle of size $(r-1)\times(r-1)$. \end{enumerate} \end{Prop} \begin{proof} Let $\Delta'$ be the complex for $X'$.
By Proposition \ref{Prop-Contain-Triangle}, we have that $\Delta_I = \Delta' * (U_{r-1}\cup D_{r-1})$, hence $\Delta'$ is Cohen-Macaulay by Corollary \ref{Cor-Reisner-Cor} (2). \\ $(1)$ Suppose $r=2$. Let $A_i$ be the diagonal blocks in the unpinched decomposition of $X'$. We have that $\Delta' = \bigsqcup \Delta_{I_2(A_i)}$. If $\dim \Delta' > 0$, then since $\Delta'$ is connected, there is only one $A_i$. If $\dim \Delta' = 0$, then $\dim \Delta_{I_2(A_i)} = 0$, hence $X'$ is a diagonal matrix. Now for $r > 2$, we let $E$ be the first row and first column and consider $\Delta'' = {\normalfont \textrm{Link}}_{E}(\Delta')$. $\Delta''$ is the complex of the $(r-1)$-minors of a submatrix $Y$, obtained by deleting the first row and column of $X'$. By Corollary \ref{Cor-Reisner-Cor}, $\Delta''$ is Cohen-Macaulay, hence by induction, $Y$ is unpinched or a diagonal matrix. \\ Suppose $Y$ is a diagonal matrix. As $r > 2$ and $I \neq 0$, $Y$ is at least a $2 \times 2$ matrix; this implies that $X'$ has a $1 \times 1$ block in its unpinched decomposition. Now by Proposition \ref{Prop-Purity}, $\Delta_{I_2(X')}$ is pure, hence if one block of $X'$ is $1 \times 1$ then all blocks of $X'$ are $1 \times 1$.\\ Now suppose $Y$ is unpinched. If $X'$ is pinched, then $X'$ has block form $$\begin{bmatrix} A & 0 \\ 0 & Y \end{bmatrix},$$ where $A$ is a $1 \times 1$ matrix. But by the same reasoning as above, this would mean that all of the blocks of $X'$ are $1 \times 1$, which would imply that $Y$ is a diagonal matrix. But since $Y$ is at least $2 \times 2$, $Y$ would then be pinched, a contradiction.\\ $(2)$ We may replace $X$ by $X'$ and $\Delta$ by $\Delta'$ to assume $L_1$ and $L_2$ contain triangles of size $(r-1)\times(r-1)$. We will show that $L_1$ and $L_2$ are triangles. We are done if $X'$ is a diagonal matrix, hence by $(1)$, we may assume $X'$ is unpinched.\\ We proceed by induction on $r$. The base case is $r=2$. For contradiction, assume one of $L_1$ or $L_2$ is not a triangle.
We may transpose the matrix to assume $L_1$ is not a triangle. Using Theorem \ref{Th-Reisner-Criterion}, it is sufficient to find a face $F$ such that ${\normalfont \textrm{Link}}_{F}(\Delta)$ is disconnected of dimension greater than $0$. This is equivalent to finding a face $F$ which is properly contained in exactly two facets $F_1$ and $F_2$ such that $F_1 \cap F_2 = F$, and $\dim F_1 \setminus F > 0$ or $\dim F_2 \setminus F > 0$. By $(1)$, $X$ is unpinched, and as $L_1$ is not a triangle, the following submatrix must appear along the border of $L_1$: $$ \begin{bmatrix} * & * & * \\ 0 & 0 & * \\ 0 & 0 & * \end{bmatrix}.$$ We construct $F$ by taking any path starting at a corner of $L_2$ and ending at the top right entry of the submatrix. By the remark after Proposition \ref{Prop-Stairs}, $F$ satisfies condition $F_2$, and it can be completed into a maximal stair by adding entries either to the left of, or below, the top right entry. Hence there are exactly two ways to complete $F$ into a stair, since if we add the entry to the left, then we must add all entries to the left; similarly with the entry below. This yields two facets $F_1$ and $F_2$ containing $F$. It is clear that the dimension after deleting $F$ from either of them is at least $1$. \\ For the inductive step, let $E$ be the face consisting of the first row and column of $X$, and set $\Delta' = {\normalfont \textrm{Link}}_{E}(\Delta_I)$. As before, $\Delta'$ is the complex associated to the $(r-1)$-minors of a submatrix $Y$, obtained from $X$ by deleting the first row and first column. By induction, both blocks of zeros of $Y$ are triangles. But now we are done, as we can repeat the argument with $E$ being the last row and last column. \end{proof} Before we proceed, we provide a definition.
\begin{Def}\label{Def-Vert-Decomp} (\cite[16.41]{Miller2005}) A complex $\Delta$ is \textit{vertex decomposable} if $\Delta = \{ \emptyset \}$ or it is pure and has a vertex $v$ such that ${\normalfont \textrm{Del}}_v(\Delta)$ and ${\normalfont \textrm{Link}}_v(\Delta)$ are vertex decomposable. \end{Def} Next, we need a lemma about vertex decomposability and joins of complexes. \begin{Lemma} \label{Lemma-Cone-Over-VD}$($\cite[2.4]{Provan1980}$)$ Let $\Delta$ and $\Sigma$ be complexes. Then $\Delta$ and $\Sigma$ are vertex decomposable if and only if $\Sigma * \Delta$ is vertex decomposable. \end{Lemma} Note that the definition of vertex decomposability in \cite{Provan1980} is not the same as our definition. However, they have been shown to be equivalent; see \cite{Provan1977}. Observe that simplices are vertex decomposable. Hence we will apply the above lemma, quite often, with $\Sigma$ a simplex. \\ Proposition \ref{Prop-Contain-Triangle} tells us that $\Delta_I = (U_{r-1}\cup D_{r-1}) * {\normalfont \textrm{Del}}_{U_{r-1}\cup D_{r-1}}(\Delta_I)$. If we zero the entries in $U_{r-1}$ and $D_{r-1}$, then we obtain a generic GD matrix $Y$, and we see that $\Delta_{I_r(Y)} = {\normalfont \textrm{Del}}_{U_{r-1}\cup D_{r-1}}(\Delta_I)$. As we have seen from Lemma \ref{Lemma-Cone-Over-VD}, forming a cone with a simplex preserves vertex decomposability. Hence in the context of vertex decomposability, we may employ the above lemma to assume that $L_1$ and $L_2$ contain triangles of size $(r-1)\times(r-1)$.\\ By \cite{Miller2005}, for pure simplicial complexes, $$ \textrm{vertex decomposable} \Rightarrow \textrm{shellable} \Rightarrow \textrm{Cohen-Macaulay}. $$ When $X$ is a generic matrix, $\Delta_I$ has already been shown, by numerous people, to be shellable \cite{Bjorner1980}, \cite{Conca2003}, \cite{Herzog1992}.
As pure shellable complexes are Cohen-Macaulay \cite[13.45]{Miller2005}, the converse to Proposition \ref{Prop-Nec-CM} is already known in the setting where $L_1$ and $L_2$ are empty. When $L_1$ and $L_2$ are triangles, our complexes can be realized as a special case of higher order complexes, which were shown to be shellable in \cite[Section 7]{Bjorner1980}. \\ We will show that $\Delta_I$ is, in fact, vertex decomposable. We are unaware of any other proof that $\Delta_I$ is vertex decomposable when $L_1$ and $L_2$ are triangles, empty or otherwise.\\ The following theorem is a strong converse to Proposition \ref{Prop-Nec-CM}. \begin{Th}\label{Th-Triangle-VD}Suppose $L_1$ is a triangle or contained in an $(r-1)\times(r-1)$ triangle, and $L_2$ is also a triangle or contained in an $(r-1)\times(r-1)$ triangle. Then $\Delta_I$ is vertex decomposable. \end{Th} Before we proceed with the proof, we need to establish some notation and terminology. We will show that $\Delta_I$ is vertex decomposable by describing a recursive algorithm for choosing vertices that satisfy Definition \ref{Def-Vert-Decomp}. We will choose the vertices in such a way that we build a ladder in the top left corner. Let $L_1$ and $L_2$ be triangles of size $t_1$ and $t_2$, respectively. We say that $\mathcal{L}$ is a ladder of entries if, for a non-increasing sequence of non-negative integers $a_1,...,a_{m - t_2}$ with $a_1 \leq n - t_1$, $\mathcal{L}$ consists of the first $a_k$ entries in the $k$-th column. We allow $\mathcal{L}$ to be empty. We will augment a ladder $\mathcal{L}$ with additional data: two subsets $\textbf{D}$ and $\textbf{L}$ that form a partition of $\mathcal{L}$.
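To make the ladder-of-entries definition concrete, here is a minimal Python sketch; the function name and the representation of a ladder as a set of $1$-indexed (row, column) pairs are our own illustration, not notation from the paper.

```python
# Hedged sketch: entries of a "ladder of entries" determined by a
# non-increasing sequence a_1, ..., a_k, as in the definition above.
def ladder_entries(a):
    """Return the ladder consisting of the first a[k] entries of column k+1,
    encoded as 1-indexed (row, column) pairs."""
    assert all(a[k] >= a[k + 1] for k in range(len(a) - 1)), \
        "the sequence must be non-increasing"
    return {(i, k + 1) for k, a_k in enumerate(a) for i in range(1, a_k + 1)}

# The sequence (3, 2, 2, 0) gives three entries in column 1, two entries
# in each of columns 2 and 3, and none in column 4 -- seven entries total.
L = ladder_entries([3, 2, 2, 0])
```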
\\ Now given a non-empty set of non-zero entries $\mathbf{A} = \{x_1,...,x_k\}$ of $X$ and a subcomplex $\Delta$ of $\Delta_I$, we set ${\normalfont \textrm{Link}}(\mathbf{A}, \Delta)$ to be the iterated link along $\mathbf{A}$, i.e., $${\normalfont \textrm{Link}}(\mathbf{A}, \Delta) = {\normalfont \textrm{Link}}_{x_k}({\normalfont \textrm{Link}}_{x_{k-1}}( ... {\normalfont \textrm{Link}}_{x_1}(\Delta))).$$ Notice that by \cite[1.1]{Provan1980}, ${\normalfont \textrm{Link}}(\mathbf{A}, \Delta)$ does not depend on the ordering of the entries of $\mathbf{A}$ and hence is well defined. For convenience, we set ${\normalfont \textrm{Link}}(\emptyset, \Delta) = \Delta$. Similarly, we define ${\normalfont \textrm{Del}}(\mathbf{A}, \Delta)$, and set ${\normalfont \textrm{Del}}(\emptyset, \Delta) = \Delta$. Now given $(\mathcal{L}, \mathbf{D}, \mathbf{L})$, we define $\Delta_I(\mathcal{L}) = \Delta_I(\mathcal{L}, \mathbf{D}, \mathbf{L}) = {\normalfont \textrm{Link}}(\mathbf{L}, {\normalfont \textrm{Del}}(\mathbf{D},\Delta_I))$. We note that by \cite[1.1]{Provan1980}, $\Delta_I(\mathcal{L})$ is the same as the iterated deletion and link along $\mathcal{L}$ in any order. If the underlying ideal of minors $I$ is understood, we will drop it as a subscript and write $\Delta(\mathcal{L})$. Before we prove the theorem, we need the following lemma. \begin{Lemma} Suppose $L_1$ is a triangle or contained in an $(r-1)\times(r-1)$ triangle, and $L_2$ is also a triangle or contained in an $(r-1)\times(r-1)$ triangle. Given $(\mathcal{L}, \mathbf{D}, \mathbf{L})$ with $\mathbf{D} = \mathcal{L}$, $\Delta(\mathcal{L})$ is pure of the same dimension as $\Delta_I$. \end{Lemma} \begin{proof} By Lemma \ref{Lemma-Cone-Over-VD} and Proposition \ref{Prop-Contain-Triangle}, we may assume $L_1$ and $L_2$ are triangles of size at least $(r-1)\times(r-1)$. We induct on $k = |\mathcal{L}|$. If $k = 0$, then $\Delta(\mathcal{L}) = \Delta_I$ is pure by Theorem \ref{Th-Triangle-Dim}. \\ Now suppose $k > 0$.
Let $v=x_{ab}$ be an entry of $\mathcal{L}$ such that $x_{a+1,b} \notin \mathcal{L}$ and $x_{a,b+1} \notin \mathcal{L}$. In other words, $v$ is an inner corner of $\mathcal{L}$. Consider $\mathcal{L'} = \mathcal{L} \setminus \{v\}$. By induction, $\Delta(\mathcal{L'})$ is pure; furthermore, since $\dim \Delta(\mathcal{L'}) = \dim \Delta_I$, we have that the facets of $\Delta(\mathcal{L'})$ are the facets of $\Delta$ that do not meet $\mathcal{L'}$. \\ It suffices to show that a facet of $\Delta(\mathcal{L})$ is a facet of $\Delta(\mathcal{L}')$. Let $F$ be a facet of $\Delta(\mathcal{L})$. Then $F$ is a facet of $\Delta(\mathcal{L}')$ or $F \cup \{v\}$ is a facet of $\Delta(\mathcal{L}')$. We may assume the latter case, as otherwise we are done. We will show that this case leads to a contradiction by constructing a face $G$ of $\Delta(\mathcal{L})$ so that $F \subsetneq G$. By the choice of $v$, we have that $\{x_{a+1,b+1}, ..., x_{a+r-1, b+r-1} \}$ lies completely outside of $\mathcal{L}$. As $F \cup \{v\}$ is a facet of $\Delta_I$, we may consider its tendrils $T_1,...,T_{r-1}$. Since there are $r-1$ tendrils and they are disjoint, there must be an element of $\{x_{ab}, ..., x_{a+r-1, b+r-1}\}$ outside of $F \cup \{v\}$. Let $x_{a+l,b+l}$ be such an element with the smallest row index, and for $j = 1,...,l$, relabel the tendrils so that $T_j$ contains $x_{a+j-1,b+j-1}$. Now set $G = F \cup \{ x_{a+l,b+l}\}$. We will be done once we show that $G$ is a face of $\Delta_I$; since $G$ does not meet $\mathcal{L}$, it must then also be a face of $\Delta(\mathcal{L})$. \\ We will show that $G$ satisfies condition $F_r$. Suppose not; then $G$ contains an $r$-chain $x_1<...<x_r$. We may assume the chain contains $x_{a+l,b+l}$. Otherwise, the chain would be in $F$, but $F$, considered as a face of $\Delta_I$, satisfies condition $F_r$. Now removing $x_{a+l,b+l}$ from this chain yields an $(r-1)$-chain in $F \subset F \cup \{v\}$.
Since the tendrils of $F \cup \{v\}$ are disjoint and satisfy condition $F_2$, this forces each tendril to contain a unique element of this $(r-1)$-chain. Hence $T_1,...,T_{l}$ contain an $l$-chain, which we relabel as $x_1,...,x_{l}$. \\ Notice that for $j=1,...,l$, either $x_j < x_{a+l,b+l}$ or $x_{a+l,b+l} < x_j$. The latter case leads to a contradiction, as then $x_{a+j-1,b+j-1} < x_j$ is a $2$-chain in $T_j$. Hence $x_j < x_{a+l,b+l}$ for all $j$. Now $x_1,...,x_l,x_{a+l,b+l}$ forms an $(l+1)$-chain, but the shape of $\mathcal{L}$ and the choice of $x_{ab}$ make this impossible. \end{proof} Since the link of any pure complex is still pure, we immediately arrive at the following corollary. \begin{Cor}\label{Cor-Link-Del-Pure} Suppose $L_1$ is a triangle or contained in an $(r-1)\times(r-1)$ triangle, and $L_2$ is also a triangle or contained in an $(r-1)\times(r-1)$ triangle. Given $(\mathcal{L}, \mathbf{D}, \mathbf{L})$ where $\mathbf{D}$ is a ladder shape, $\Delta(\mathcal{L})$ is pure. \end{Cor} We establish some terminology before proceeding. We say that a non-zero entry $x_{ij}$ is a corner of a ladder $\mathcal{L}$ if $x_{ij} \notin \mathcal{L}$, ($x_{i-1,j} \in \mathcal{L}$ or $i = 1$), and ($x_{i,j-1} \in \mathcal{L}$ or $j=1$). Notice that each row can contain at most one corner of $\mathcal{L}$, hence we may order the corners of $\mathcal{L}$ in ascending order by their row index. Also notice that if $x$ is a corner of $\mathcal{L}$, then $\mathcal{L} \cup \{x\}$ is also a ladder.\\ Since we plan to prove Theorem \ref{Th-Triangle-VD} inductively, we strengthen the statement and prove the stronger statement below; Theorem \ref{Th-Triangle-VD} will follow. \begin{Th} Let $X$ be a generic $n \times m$ GD matrix. Suppose $L_1$ is a triangle or contained in an $(r-1)\times(r-1)$ triangle, and $L_2$ is also a triangle or contained in an $(r-1)\times(r-1)$ triangle.
Let $(\mathcal{L}, \mathbf{D}, \mathbf{L})$ be a ladder such that $\mathbf{D}$ is a ladder shape and $\mathbf{L}$ consists of the first few consecutive corners of $\mathbf{D}$. Then $\Delta(\mathcal{L})$ is vertex decomposable. \end{Th} \begin{proof} We induct on $n+m$. The base case $n + m = 2$ is clear. Now suppose $n + m > 2$. Given a ladder $(\mathcal{L}, \mathbf{D}, \mathbf{L})$ in the form of the statement, we will describe a recursive algorithm for picking a vertex $v$ that satisfies Definition \ref{Def-Vert-Decomp}. \\ If $\mathbf{L}$ equals the set of all corners of $\mathbf{D}$, we stop. Otherwise, choose $v$ as the first corner of $\mathbf{D}$ not in $\mathbf{L}$. Now $(\mathcal{L} \cup \{v\}, \mathbf{D}\cup \{v\}, \mathbf{L})$ and $(\mathcal{L} \cup \{v\}, \mathbf{D}, \mathbf{L} \cup \{v \})$ are ladders satisfying the statement of the theorem, hence we may repeat the process with the two ladder shapes. \\ It is clear that the algorithm must terminate, since there are only finitely many corner positions for any given ladder and the size of any ladder is bounded. Now in each step of the algorithm, $(\mathcal{L}, \mathbf{D}, \mathbf{L})$ satisfies the conditions of Corollary \ref{Cor-Link-Del-Pure}, hence $\Delta(\mathcal{L})$ is pure. It remains to show that the terminating ladder shapes are vertex decomposable. \\ Let $\mathcal{L}$ be a terminating ladder shape, i.e., all of the corners of $\mathbf{D}$ are in $\mathbf{L}$. If all of the non-zero entries in the first row are in $\mathbf{D}$, then $\Delta(\mathcal{L})$ is equal to $\Delta_{I'}(\mathcal{L}')$ for some ladder $\mathcal{L}'$ in the matrix obtained by removing the first row, where $I'$ is the ideal of $r \times r$ minors of that matrix. Hence by induction, $\Delta(\mathcal{L})$ is vertex decomposable. A similar argument applies if all of the non-zero entries in the first column are in $\mathbf{D}$.
\\ Now we may assume that not all of the non-zero entries in the first row and first column are in $\mathbf{D}$. Then $\mathbf{D}$ has a corner in the first row and a corner in the first column, which by assumption will both be in $\mathbf{L}$. Now every facet of $\Delta(\mathcal{L})$ comes from a facet of $\Delta_I$ that contains all of $\mathbf{L}$ and does not meet $\mathbf{D}$. Since $\mathbf{L}$ consists of all of the corners of a ladder shape, for any facet of $\Delta( \mathcal{L})$, there is a uniquely determined tendril, $T$, that goes through $\mathbf{L}$. Furthermore, $T$ is obtained by scraping. Hence after removing $T$, what remains can be considered as a facet of the complex associated to a submatrix $Y$, obtained by removing the first row. Hence $\Delta(\mathcal{L})$ is a cone over the complex of some ladder in $Y$. More precisely, consider $(\mathcal{L'} = (\mathcal{L} \cup T) \setminus \{\textrm{first row} \}, \mathbf{D}' = (\mathbf{D} \cup T) \setminus \{\textrm{first row} \}, \emptyset)$. Then $\Delta(\mathcal{L}) = (T\setminus\mathbf{L}) * \Delta_{I'}(\mathcal{L}')$, where $\mathcal{L}'$ is a ladder for the ideal $I'$ of $(r-1)$-minors of $Y$. Hence $\Delta(\mathcal{L})$ is vertex decomposable by induction and Lemma \ref{Lemma-Cone-Over-VD}. \end{proof} Theorem \ref{Th-Triangle-VD} follows, since by Proposition \ref{Prop-Contain-Triangle} and Lemma \ref{Lemma-Cone-Over-VD} we may assume $L_1$ and $L_2$ are triangles of size at least $(r-1)\times(r-1)$. Now, combining Proposition \ref{Prop-Nec-CM} and Theorem \ref{Th-Triangle-VD}, we obtain the following. \begin{Cor} $\Delta_I$ is Cohen-Macaulay if and only if $L_1$ is a triangle or contained in an $(r-1)\times(r-1)$ triangle, and $L_2$ is a triangle or contained in an $(r-1)\times(r-1)$ triangle. \end{Cor} \section{Multiplicity of $I_r(X)$}\label{Sec-Multiplicity} The $f$-vector of a simplicial complex is the vector whose $j$-th component is the number of faces of that complex of dimension $j$.
With the above description of $\Delta_I$, by \cite[6.2.1]{Herzog2011} one can compute the multiplicity of $I$ by counting the number of maximal $(r-1)$-stairs. We demonstrate the computation of $f_d$, where $d = \dim \Delta_I$, in the case that the ladders of zeros, $L_1$ and $L_2$, are triangles of size $t_1$ and $t_2$, respectively, cf. \cite[3.5]{Herzog1992}. If $L_1$ and $L_2$ are not triangles, the computation becomes more complicated. \begin{Def} (\cite[Section 2.7]{Stanley2011}) A \textit{lattice path} $P=(v_0,...,v_k)$ in $\mathbb{N}^2$ with steps $(-1,0),(0,1)$ is a collection of points such that $v_i-v_{i-1}= (-1,0) \text{ or } (0,1)$.\\ An \textit{$n$-path} $\textbf{P}$ of type $(\alpha,\beta,\gamma, \delta)\in (\mathbb{N}^n)^4$ is a collection of paths $\textbf{P}=(P_1,...,P_n)$ such that $P_i$ is a path from $(\beta_i,\gamma_i)$ to $(\alpha_i,\delta_i)$. $\textbf{P}$ is said to be non-intersecting if $P_i\cap P_j=\emptyset$ for $i\neq j$. \end{Def} \begin{Th}\label{Th-pathcount} $($\cite[2.7.1]{Stanley2011}$)$ Let $(\alpha,\beta,\gamma, \delta)\in (\mathbb{N}^n)^4$ be such that for all $w\in S_n$, the set of non-intersecting $n$-paths of type $(w(\alpha),\beta,\gamma, w(\delta))$ is empty unless $w$ is the identity. Then the number of non-intersecting $n$-paths of type $(\alpha,\beta,\gamma, \delta)$ is \[\det [h_{i,j}],\] where $h_{i,j}$ counts the number of paths from $ (\beta_i,\gamma_i)$ to $ (\alpha_j,\delta_j)$. \end{Th} Write $(\alpha,\beta,\gamma, \delta)=(A,B)\in ((\mathbb{N}^n)^2)^2$, where $A=(\alpha,\delta)$ and $B=(\beta,\gamma)$. Then it becomes clear that the condition on permutations informally translates to: any non-trivial permutation of $A$ would force any $n$-path of type $(A,B)$ to be intersecting. \begin{Not} Let $\Gamma(A,B)$ denote the number of non-intersecting $n$-paths of type $(A,B)=(\alpha,\beta,\gamma, \delta)$, and let $h(A_j,B_i)$ denote the number of paths from $B_i$ to $A_j$.
\end{Not} Notice that \[h(A_j,B_i)= \begin{cases} {{ \alpha_j-\beta_i+\gamma_i-\delta_j }\choose {\alpha_j-\beta_i}} ={{ \alpha_j-\beta_i+\gamma_i-\delta_j }\choose {\gamma_i-\delta_j}} & \alpha_j-\beta_i\geq 0 \text{ and }\gamma_i-\delta_j \geq 0,\\ 0 & \textrm{otherwise}. \end{cases} \] \begin{Not} Given an ordered collection $V=(v_1,...,v_n) \subset \mathbb{N}^2$ and a set $N=\{k_1,...,k_m\}$ with $1\leq k_1 < k_2<...<k_m\leq n$, write $V_N:=(v_{k_1},...,v_{k_m})$. \end{Not} \begin{Lemma} Let $L_1$ and $L_2$ be triangles of size $t_1$ and $t_2$, respectively, and let $d=\dim \Delta_I$. Set $T_1=\max\{r-2,t_1\}$ and $T_2=\max\{r-2, t_2\}$. Then \[ f_d(\Delta_I)=\sum_{\substack{{1\leq a_1<...<a_{r-1}\leq T_1+1}\\{1\leq b_1<...<b_{r-1}\leq T_2+1} }} \det\left[\begin{cases} {{n+m-T_1-T_2-2} \choose { m-1-T_2+b_j-a_i } } & T_2-m+1\leq b_j-a_i \leq n-1-T_1 \\ 0 & \normalfont{\textrm{otherwise } } \end{cases} \right] . \] \end{Lemma} \begin{proof} Set $\Delta=\Delta_I$. Joining a complex with a simplex does not change the top value of the $f$-vector, only its index. Hence by Proposition \ref{Prop-Contain-Triangle}, we may replace $\Delta$ by a complex $\Delta'$, where $\Delta'$ is the complex for a generic $n\times m$ GD matrix with $L_1$ a triangle of size $T_1$ and $L_2$ a triangle of size $T_2$. In this setting, it follows from Theorem \ref{Th-Triangle-Dim} that a maximal $(r-1)$-stair is a disjoint union of $r-1$ maximal stairs, hence an $(r-1)$-stair is an $(r-1)$-path.\\ Set $A_a=(n-T_1-1+a,a)$ for $1\leq a \leq T_1+1$, and set $B_b=(b,m-T_2-1+b)$ for $1\leq b \leq T_2 +1$.
These are the corners of $L_1$ and $L_2$, respectively.\\ Then each facet of $\Delta'$ corresponds to an $(r-1)$-path of type $(A_J,B_K)$, where $J=\{a_1,...,a_{r-1}\}$ with $1\leq a_1 <a_2<...<a_{r-1}\leq T_1 +1$, and $K=\{b_1,...,b_{r-1}\}$ with $1\leq b_1 <b_2<...<b_{r-1}\leq T_2 +1$, hence \[f_d=\sum_{\substack{1\leq a_1<...<a_{r-1}\leq T_1+1\\1\leq b_1<...<b_{r-1}\leq T_2+1 }} \Gamma(A_J,B_K) .\] Now it is clear that $(A_J,B_K)$ satisfies the assumption of Theorem \ref{Th-pathcount}, hence $\Gamma(A_J,B_K)=\det[h((A_J)_i,(B_K)_j)]=\det[h( (n-T_1-1+a_i,a_i),(b_j,m-T_2-1+b_j))]$. By the above formula, we have that \[ h( (n-T_1-1+a_i,a_i),(b_j,m-T_2-1+b_j))\] \[ =\begin{cases} {{(n-T_1-1+a_i-b_j)+(m-T_2-1+b_j-a_i)} \choose { m-T_2-1+b_j-a_i} } & T_2-m+1\leq b_j-a_i \leq n-1-T_1\\ 0 & \textrm{otherwise } \end{cases} \] \[ =\begin{cases} {{n+m-T_1-T_2-2} \choose { m-1-T_2+b_j-a_i } } & T_2-m+1\leq b_j-a_i \leq n-1-T_1 \\ 0 & \text{otherwise } \end{cases} .\] \end{proof} \begin{Rmk} In \cite{Herzog1992}, since the authors are only concerned with generic matrices and do not need to avoid zeros, they choose their starting points and ending points to be the last $r-1$ entries on the first row and column, respectively. Specializing our proof to the generic case will yield a different matrix with the same determinant. This is an artifact of the different counting methods, as can be seen in the example below. \\ Let $m=n=4$, $t_1=t_2=0$, and $r=3$. That is to say, we are considering the $3 \times 3$ minors of a generic $4\times 4$ matrix.
Then we have that \[ \renewcommand\arraystretch{1.2} f_d=\det \begin{bmatrix} \binom{4}{2} & \binom{4}{3} \\ \binom{4}{3} & \binom{4}{2} \end{bmatrix}= \det \begin{bmatrix} 6 & 4\\ 4 & 6 \end{bmatrix} =20,\] compared to the formula in \cite[3.5]{Herzog1992}: \[ \renewcommand\arraystretch{1.2} f_d=\det \begin{bmatrix} \binom{6}{3} & \binom{5}{3} \\ \binom{5}{2} & \binom{4}{2} \end{bmatrix}= \det \begin{bmatrix} 20 & 10\\ 10 & 6 \end{bmatrix} =20.\] \end{Rmk} \bibliographystyle{amsalpha}
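The two determinant evaluations in the remark above are easy to check by machine. The following Python snippet is our own verification sketch (not part of the paper); it builds both matrices of binomial coefficients and confirms that each has determinant $f_d = 20$.

```python
from math import comb

def det2(M):
    # determinant of a 2x2 integer matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# Matrix from our lattice-path count (m = n = 4, t_1 = t_2 = 0, r = 3).
ours = [[comb(4, 2), comb(4, 3)],
        [comb(4, 3), comb(4, 2)]]       # [[6, 4], [4, 6]]

# Matrix from the formula of [Herzog1992, 3.5] for the generic case.
generic = [[comb(6, 3), comb(5, 3)],
           [comb(5, 2), comb(4, 2)]]    # [[20, 10], [10, 6]]

print(det2(ours), det2(generic))        # 20 20
```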
https://arxiv.org/abs/math/0610498
Bounds on changes in Ritz values for a perturbed invariant subspace of a Hermitian matrix
The Rayleigh-Ritz method is widely used for eigenvalue approximation. Given a matrix $X$ with columns that form an orthonormal basis for a subspace $\X$, and a Hermitian matrix $A$, the eigenvalues of $X^HAX$ are called Ritz values of $A$ with respect to $\X$. If the subspace $\X$ is $A$-invariant then the Ritz values are some of the eigenvalues of $A$. If the $A$-invariant subspace $\X$ is perturbed to give rise to another subspace $\Y$, then the vector of absolute values of changes in Ritz values of $A$ represents the absolute eigenvalue approximation error using $\Y$. We bound the error in terms of principal angles between $\X$ and $\Y$. We capitalize on ideas from a recent paper [DOI:https://doi.org/10.1137/060649070] by A. Knyazev and M. Argentati, where the vector of absolute values of differences between Ritz values for subspaces $\X$ and $\Y$ was weakly (sub-)majorized by a constant times the sine of the vector of principal angles between $\X$ and $\Y$, the constant being the spread of the spectrum of $A$. In that result no assumption was made on either subspace being $A$-invariant. It was conjectured there that if one of the trial subspaces is $A$-invariant then an analogous weak majorization bound should only involve terms of the order of sine squared. Here we confirm this conjecture. Specifically we prove that the absolute eigenvalue error is weakly majorized by a constant times the sine squared of the vector of principal angles between the subspaces $\X$ and $\Y$, where the constant is proportional to the spread of the spectrum of $A$. For many practical cases we show that the proportionality factor is simply one, and that this bound is sharp. For the general case we can only prove the result with a slightly larger constant, which we believe is artificial.
\section{Introduction} \label{sec:1intro} Eigenvalue problems appear in many applications. For example eigenvalues represent the frequencies of vibration in mechanical vibrations, while the energy levels of a system are the eigenvalues of the Hamiltonian in quantum mechanics. Eigenvalue problems are used today in these and many other applications, including spectral data clustering and internet search engines. Eigenvalues cannot be computed exactly except in some trivial cases, so numerical approximation is required. Eigenvalue \emph{a posteriori} and \emph{a priori} error bounds describe the eigenvalue approximation quality, and this is a classical and important topic in matrix analysis. A posteriori bounds are based on information readily computable, e.g., the eigenvector residuals, and are necessary, e.g., for adaptive numerical methods for eigenvalue approximation. A priori bounds are given in terms of theoretical properties, and can be very useful in assessing relative performance of algorithms. The widely used Rayleigh-Ritz method is well known for its ability to generate high quality approximations to eigenvalues of Hermitian matrices. It is the basis for many numerical procedures for computing eigenvalues, such as finite element methods and the Lanczos eigenproblem iteration. Eigenvalue error bounds for the Rayleigh-Ritz method are important since they provide estimates and predictions of the quality of eigenvalue approximations, and can be used, e.g., to predict the number of iterations needed in the Lanczos method for computing some eigenvalues to within a given accuracy. There is a vast literature on Rayleigh-Ritz eigenvalue methods and error bounds, see, e.g.,\ \cite[Chapter 4]{kvzrs}, \cite[Chapters 10--13]{Par80}, and \cite[Chapters 3-5]{MR0400004}. We contribute to this traditional area of research with a new twist---using weak majorization. 
Majorization is a classical technique that can be used to formulate and prove a great variety of inequalities in a concise and elegant way. It is widely used in matrix analysis, e.g.,\ to bound perturbations of eigenvalues via Lidskii's beautiful theorem \cite{Lid50}. In the context of Rayleigh-Ritz eigenvalue error bounds, weak majorization was introduced in the celebrated work of Davis and Kahan \cite{DavKah} to bound eigenvalue errors \emph{a posteriori}. In the present paper we propose and prove what appear to be the first theorems based on weak majorization for \emph{a priori} Rayleigh-Ritz eigenvalue error bounds. Our results provide a theoretical foundation that can be applied in a number of situations, e.g.,\ for finite element methods \cite{d} and for block Lanczos iterations such as in \cite{GolU77}; see \cite{ka07}. We use several well-known majorization results found, e.g.,\ in \cite{Bha97,Horn2,MarO79}. We give references throughout the paper for the concepts we introduce. For a more thorough background and reference list, see \cite{KnyA06}. The rest of the paper is organized as follows. Section \ref{sec:Prereq} contains all necessary definitions and basic facts on majorization that we need for our eigenvalue and singular value bounds. Section \ref{sec:intro} is the main part of the paper, where we motivate and formulate our conjectures and theorems. Section \ref{sec:pertbds} contains all our proofs. In section \ref{sec:diss} we show that our main results are sharp; we also discuss our proofs, and the possibility that our bound for the most general case might be slightly improved. \section{Definitions and Prerequisites} \label{sec:Prereq} We introduce the definitions and tools we need, together with some mild motivation. We do not provide proofs for the results in this section---instead we refer the reader to the relevant literature.
\subsection{Notation}\label{sec:notation} For a real vector $x=[x_1,\ldots, x_n]^T$, we use $x^{\downarrow}\equiv[x_1^{\downarrow},\ldots, x_n^{\downarrow}]^T$ to denote $x$ with its elements rearranged in descending order, while $x^{\uparrow}\equiv[x_1^{\uparrow},\ldots, x_n^{\uparrow}]^T$ denotes $x$ with its elements rearranged in ascending order. We use $|x|$ to denote the vector $x$ with the absolute value of its components. We use the `$\leq$' symbol to compare real vectors component-wise. For real vectors $x$ and $y$ the expression $x\prec y$ means that $x$ is majorized by $y$, while $x{\ \prec}_{w}\ y$ means that $x$ is weakly (sub-)majorized by $y$, see section~\ref{sec:maj}. We consider the Euclidean space $\C^n$ of column vectors equipped with the standard scalar product $x^Hy$ and the norm $\|x\|=\sqrt{x^Hx}$. We use the same notation $\|A\|$ for the induced matrix norm of a complex matrix $A \in \C^{n\times n}$. $\mathcal{X}=\Ra(X)\subset \C^n$ means the subspace $\mathcal{X}$ is equal to the range of the matrix $X$ with $n$ rows. The unit matrix is $I$ and the zero matrix (not necessarily square) is $0$, while $e=[1,\ldots,1]^T$. We use $\mathcal{H}(n)$ to denote the set of $n\times n$ Hermitian matrices and $\mathcal{U}(n)$ to denote the set of $n\times n$ unitary matrices in the set $\C^{n\times n}$ of all $n\times n$ complex matrices. We write $\lambda(A)=\lambda^{\downarrow}(A)$ for the vector of eigenvalues of $A \in \mathcal{H}(n)$ arranged in descending order, and we write $s(B)= s^{\downarrow}(B)$ for the vector of singular values of $B$ arranged in descending order. Individual eigenvalues and singular values are denoted by $\lambda_i(A)$ and $s_i(B)$, respectively, so, e.g.,\ ${\rm{spr}}(A)=\lambda_1(A)-\lambda_n(A)$ and $s_1(B)=\|B\|$. Let subspaces $\mathcal{X}$ and $\mathcal{Y}\subseteq \C^n$ have the same dimension, with orthonormal bases given by the columns of the matrices $X$ and $Y$ respectively. 
We denote the vector of principal angles between $\mathcal{X}$ and $\mathcal{Y}$ arranged in descending order by $\theta(\mathcal{X},\mathcal{Y})=\theta^{\downarrow}(\mathcal{X},\mathcal{Y})$, and define it using $\cos \theta (\mathcal{X},\mathcal{Y})=s^{\uparrow}(X^HY)$; see, e.g.,\ \cite{BjoG73}, \cite[\S12.4.3]{GolL89}. \subsection{Majorization and Weak Majorization} \label{sec:maj} We now briefly define the concepts of majorization and weak majorization, which are comparison relations between two real vectors. For detailed information we refer the reader to \cite{Bha97,Horn2,MarO79}. We say that $x\in\R^n$ is weakly (sub-)majorized by $y\in\R^n$, written $x{\ \prec}_{w}\ y$, if \begin{equation} \sum_{i=1}^k x_i^{\downarrow} \leq \sum_{i=1}^k y_i^{\downarrow},\qquad 1\leq k\leq n, \label{eq:wm} \end{equation} while $x$ is (strongly) majorized by $y$, written $x\prec y$, if (\ref{eq:wm}) holds together with \begin{equation} \sum_{i=1}^n x_i = \sum_{i=1}^n y_i. \label{eq:maj} \end{equation} Our final results in the paper are weak majorization bounds of the form $x\prec_w y$ with $x\geq 0$. On the one hand, we can see from (\ref{eq:wm}) that $x\leq y \Rightarrow x{\ \prec}_{w}\ y$, i.e., the inequality implies weak majorization. In our case the advantage of using weak majorization is that the inequality $x\leq y$ (the values of $x$ and $y$ become apparent later) is simply false, while the weak majorization bound $x\prec_w y$ does hold. On the other hand, a weak majorization bound $x\prec_w y$ implies $\max(x)\leq \max(y)$. So if the bound $\max(x)\leq\max(y)$ is already known, but it is also known that $x\leq y$ does not hold, it makes sense to conjecture and to try to prove $x\prec_w y$. Strong `$\prec$' and weak `$\prec_w$' majorization relations share only some properties with the usual inequality `$\leq$' relation, so one should deal with them carefully.
For example `$\prec$' and `$\prec_w$' are reflexive and transitive, but $x\prec y$ and $y\prec x$ do not imply $x=y$; e.g., \cite[Remark II.1.2]{Bha97}. Similarly $x\prec y$ does not imply the intuitive $x+z\prec y+z$, as is seen in the example $x=(0,0,0)$, $y=(2,-1,-1)$, $z=(-2,0,0)$. So we must be particularly careful of the ordering when we \emph{combine} results. Thus it can be seen from (\ref{eq:wm}) and (\ref{eq:maj}) that: $x + u \prec x^{\downarrow} + u^{\downarrow}$, e.g., \cite[Corollary II.4.3]{Bha97}, and \begin{equation} \{x \prec_w y\}\ \&\ \{u \prec_w v\}\ \& \ \cdots \ \ \Rightarrow \ \ x + u + \cdots \ \prec \ x^{\downarrow} + u^{\downarrow} + \cdots \ \prec_w \ y^{\downarrow} + v^{\downarrow} + \cdots, \label{eq:gen} \end{equation} where this also holds with `$\prec_w$' replaced by `$\prec$'. Some of the other basic majorization and related results we use are fairly obvious: \begin{align} &A\in \mathcal{H}(n) \Rightarrow |\lambda(\pm A)|^{\downarrow} = s(A); \label{eq:evsv}\\ &|x\pm y|{\ \prec}_{w}\ |x|^{\downarrow} + |y|^{\downarrow},\quad\mbox{since from (\ref{eq:gen})}\quad |x\pm y|\leq |x| + |y| \prec |x|^{\downarrow} + |y|^{\downarrow}; \label{eq:absd}\\ &x\prec y \Rightarrow |x|{\ \prec}_{w}\ |y|, \quad\mbox{see, e.g., \cite[Example II.3.5]{Bha97}.} \label{eq:abs} \end{align} Arithmetic operations, e.g.,\ the sum and the product, on vectors used in majorization are performed component-wise. In the subsequent Theorems \ref{thm:product} and \ref{thm:norm} for rectangular matrices we may need to operate with nonnegative vectors of different lengths. A standard agreement in this case is to add zeroes at the end of the shorter vector to match the sizes needed for component-wise arithmetic operations and comparisons. We also use this agreement in later proofs. 
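The definitions (\ref{eq:wm})--(\ref{eq:maj}) and the counterexample above are easy to check numerically. The following sketch (our illustration in numpy, not part of the original text; the helper names are ours) implements the partial-sum tests directly:

```python
import numpy as np

def weakly_majorizes(y, x):
    """True if x is weakly (sub-)majorized by y, i.e. x prec_w y:
    every partial sum of the descending rearrangement of x is bounded
    by the corresponding partial sum for y."""
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

def majorizes(y, x):
    """Strong majorization x prec y: weak majorization plus equal sums."""
    return weakly_majorizes(y, x) and abs(float(np.sum(x)) - float(np.sum(y))) < 1e-12

# The counterexample from the text: x prec y does NOT imply x+z prec y+z.
x = np.array([0.0, 0.0, 0.0])
y = np.array([2.0, -1.0, -1.0])
z = np.array([-2.0, 0.0, 0.0])
print(majorizes(y, x))          # True:  x prec y holds
print(majorizes(y + z, x + z))  # False: (x+z) prec (y+z) fails
```

Both printed values match the discussion: the partial sums of $y+z$ fall below those of $x+z$ already at $k=2$.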
Many inequality relations between eigenvalues and singular values are succinctly expressed as majorization or weak majorization relations; and a beautiful example is \begin{theorem}\label{thm:Lid} \textup{(Lidskii \cite{Lid50}, see also, e.g.,\ \cite[p. 69]{Bha97})}. Let $A$ and $B \in \mathcal{H}(n)$. The eigenvalues of $A$, $B$, and $A-B$ satisfy $\lambda(A)-\lambda(B)\prec \lambda(A-B)$. \end{theorem} Recall here that $\lambda(A)-\lambda(B)=\lambda^{\downarrow}(A)-\lambda^{\downarrow}(B)$. Note that the equivalent of (\ref{eq:maj}) holds here using ${\rm trace}(A)=\sum_i \lambda_i(A)$. % We will use the following corollary: \begin{corollary} \label{cor:sum} \textup{(E.g.,\ \cite[Chapter 9, G.1.d]{MarO79}, \cite[Corollary 3.4.3]{Horn2}).} If $A$ and $B \in \C^{n\times n}$ then $s(A \pm B){\ \prec}_{w}\ s(A)+s(B)$. \end{corollary} This corollary also follows from a weaker statement than Lidskii's theorem, e.g.,\ \cite[Exercises II.1.14, II.1.15]{Bha97}. By using (\ref{eq:gen}) we can see that Corollary \ref{cor:sum} extends to the case of three or more matrices, because all vectors $s(A), s(B), \ldots$ are nonincreasing. We also use results for the singular values of a product of matrices: \begin{theorem}\label{thm:product} \textup{(E.g.,\ \cite[Theorem 3.3.14]{Horn2}).} $s(AB){\ \prec}_{w}\ s(A)s(B)$ for arbitrary, possibly rectangular, matrices $A$ and $B$ such that $AB$ exists. \end{theorem} \begin{theorem}\label{thm:norm} \textup{(E.g.,\ \cite[Theorem 3.3.16]{Horn2}, \cite[Problem III.6.2]{Bha97}).}\\ $s(AB)\leq \|A\|s(B)$ and $s(AB)\leq\|B\|s(A)$ for arbitrary, possibly rectangular, matrices $A$ and $B$ such that $AB$ exists. \end{theorem} \section{Motivation and Main Results} \label{sec:intro} The Rayleigh-Ritz method for approximating eigenvalues of a Hermitian matrix $A$ finds the eigenvalues of $X^HAX$, where the columns of the matrix $X$ form an orthonormal basis for a subspace $\mathcal{X}$. Here $\mathcal{X}$ is called a trial subspace. 
The eigenvalues of $X^HAX$ do not depend on the particular choice of basis and are called Ritz values of $A$ with respect to $\mathcal{X}$. If $\mathcal{X}$ is one-dimensional and spanned by the unit vector $x$, there is only one Ritz value---namely the Rayleigh quotient $x^HAx$. When the trial subspace $\mathcal{X}$ is perturbed to become the subspace $\mathcal{Y}$, it is useful to know how the Ritz values of $A$ vary. For one-dimensional $\mathcal{X}$ and $\mathcal{Y}$, spanned by unit vectors $x$ and $y$ respectively, the following result appears in, e.g., \cite[Theorem 1]{ka03}: \begin{equation} |x^HAx-y^HAy|\leq {\rm{spr}}(A)\sin {\theta}(x,y). \label{eq:knya1} \end{equation} Here and below $\theta(x,y)$ is the acute angle between the two unit vectors $x$ and $y$ defined by $\theta(x,y)=\mathrm{arccos} {|x^Hy|}\in [0,\pi/2]$. It is well known that every eigenvector is a stationary point of the Rayleigh quotient (considered as a function of a vector)---i.e., in the vicinity of an eigenvector the Rayleigh quotient changes very slowly. The classic result that motivates this paper is the following: the Rayleigh quotient approximates an eigenvalue of a Hermitian matrix with accuracy proportional to the \emph{square} of the eigenvector approximation error. The following simple bound, e.g.,\ \cite[Theorem 4]{ka03}, demonstrates this: \begin{equation} |x^HAx-y^HAy|\leq {\rm{spr}}(A)\sin^2 \theta(x,y), \label{eq:knya2} \end{equation} where we assume that one of the unit vectors $x$ or $y$ is an eigenvector of $A$. To give a thorough background to our results we re-derive this important basic bound. Let $Ax=x\lambda$; then $x^HAx=\lambda$, so $|x^HAx-y^HAy|=|y^H(A-\lambda I)y|$. We now substitute the orthogonal decomposition $y=u+v$, where $u\in\mathrm{span}\{x\}$ and $v\in(\mathrm{span}\{x\})^\perp$. Thus $(A-\lambda I)u=0$, and since $A-\lambda I$ is Hermitian also $u^H(A-\lambda I)v=\left((A-\lambda I)u\right)^Hv=0$; together with $\|v\|=\sin\theta(x,y)$ this results in $|y^H(A-\lambda I)y|=|v^H(A-\lambda I)v|\leq \|A-\lambda I\| \|v\|^2= \|A-\lambda I\|\sin^2\theta(x,y)$.
But $\|A-\lambda I\|\leq{\rm{spr}}(A)$, giving (\ref{eq:knya2}). Let us now discuss some generalizations of (\ref{eq:knya1}) and (\ref{eq:knya2}) for subspaces $\mathcal{X}$ and $\mathcal{Y}$ of dimensions higher than one, with $\dim{\mathcal{X}}=\dim{\mathcal{Y}}$. Let $X$ and $Y$ be two matrices whose columns form orthonormal bases for $\mathcal{X}$ and $\mathcal{Y}$ respectively, and suppose that the Ritz values of $A$ with respect to $\mathcal{X}$ and $\mathcal{Y}$ are arranged in descending (more precisely ``nonincreasing'') order. To generalize (\ref{eq:knya1}) and (\ref{eq:knya2}) we replace the usual notion of angles between vectors by a more general one of principal angles between subspaces, and replace the inequality symbol by the weak (sub-)majorization symbol `$\prec_w$'. Let $\lambda(A)$ denote the vector of descending eigenvalues $\lambda_i(A)$ of a Hermitian matrix $A$, $s(B)$ the vector of descending singular values of a matrix $B$, and $\theta (\mathcal{X},\mathcal{Y})$ the vector of descending principal angles $\theta_i(\mathcal{X},\mathcal{Y})$ between the subspaces $\mathcal{X}$ and $\mathcal{Y}$, defined such that the vectors $\cos \theta (\mathcal{X},\mathcal{Y})$ and $s(X^HY)$ are the same, except for the reversed order, see, e.g., \cite{BjoG73}, \cite[Section 12.4.3]{GolL89}. A recent paper \cite{KnyA06} generalizes (\ref{eq:knya1}) to: \begin{equation} |\lambda(X^HAX)-\lambda(Y^HAY)|{\ \prec}_{w}\ {\rm{spr}}(A)\sin \theta (\mathcal{X},\mathcal{Y}). \label{eq:knya} \end{equation} The weak majorization bound (\ref{eq:knya}) implies, e.g.,\ a bound for its largest term: \begin{equation} \max_i |\lambda_i(X^HAX)-\lambda_i(Y^HAY)| \leq {\rm{spr}}(A)\,{\rm{gap}}(\mathcal{X},\mathcal{Y}), \label{eq:knyamax} \end{equation} where ${\rm{gap}}(\mathcal{X},\mathcal{Y})= \max_i \{\sin \theta_i(\mathcal{X},\mathcal{Y})\}$ in this case, e.g.,\ \cite{ka02,KnyA06}. 
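As a quick numerical sanity check (our illustration, not part of the original development; the random data and tolerance are ours), the following numpy sketch verifies both the general one-dimensional bound (\ref{eq:knya1}) and, for an exact eigenvector $x$, the squared bound (\ref{eq:knya2}):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                        # a random real symmetric (Hermitian) A
w, V = np.linalg.eigh(A)                 # ascending eigenvalues and eigenvectors
spr = w[-1] - w[0]                       # spr(A) = lambda_1(A) - lambda_n(A)

x = V[:, 2]                              # an exact unit eigenvector of A
y = x + 0.1 * rng.standard_normal(n)     # a nearby unit vector
y /= np.linalg.norm(y)
theta = np.arccos(min(1.0, abs(x @ y)))  # acute angle between x and y

err = abs(x @ A @ x - y @ A @ y)
print(err <= spr * np.sin(theta))        # bound (eq:knya1): no invariance needed
print(err <= spr * np.sin(theta) ** 2)   # bound (eq:knya2): x is an eigenvector
```

Both checks print \texttt{True}, as the re-derivation above guarantees.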
Both bounds (\ref{eq:knya}) and (\ref{eq:knyamax}) generalize (\ref{eq:knya1}) to multidimensional subspaces, but no assumption of $A$-invariance is made in either case. What is the bound that generalizes (\ref{eq:knya2}), assuming that one of the subspaces $\mathcal{X}$ or $\mathcal{Y}$ is $A$-invariant? A natural conjecture, made in \cite{KnyA06}, is that such a bound could be obtained in terms of $\sin^2\theta (\mathcal{X},\mathcal{Y})$. No majorization result of this kind is known, but simpler results---for the largest error only---are available; e.g.,\ the following important bound is proved in \cite{k86}, reproduced in \cite[Theorem 2, p. 477]{d}, and \cite[Theorem 2.4]{MR2206452}, with a different proof suggested in \cite[Theorem 2.2.3, p. 56]{k}; for an English translation of the latter see \cite[Theorem 2.3, p. 383]{ks}. We present here a slightly modified formulation to make it consistent with (\ref{eq:knyamax}): if $\mathcal{X}$ or $\mathcal{Y}$ is $A$-invariant and corresponds to a contiguous set of the extreme, i.e., largest or smallest, eigenvalues of $A$, then \begin{equation} \max_i |\lambda_i(X^HAX)-\lambda_i(Y^HAY)| \leq {\rm{spr}}(A)\,{\rm{gap}}^2(\mathcal{X},\mathcal{Y}). \label{eq:knyamax1} \end{equation} Bound (\ref{eq:knyamax1}) generalizes (\ref{eq:knya2}), but does not take advantage of majorization. Comparing (\ref{eq:knya}) and (\ref{eq:knyamax}) with (\ref{eq:knya1}), and (\ref{eq:knyamax1}) with (\ref{eq:knya2}), we make an educated guess for the general case where the invariant subspace is not necessarily associated with a contiguous set of extreme eigenvalues: \begin{conj} \label{conj:main} Let the subspaces $\mathcal{X}$ and $\mathcal{Y}$ have the same dimension, with orthonormal bases given by the columns of the matrices $X$ and $Y$ respectively. Let the matrix $A$ be Hermitian, and $\mathcal{X}$ or $\mathcal{Y}$ be $A$-invariant. 
Then \begin{equation} |\lambda(X^HAX)-\lambda(Y^HAY)|{\ \prec}_{w}\ {\rm{spr}}(A)\sin^2 \theta (\mathcal{X},\mathcal{Y}). \label{eq:knyamain} \end{equation} \end{conj} We emphasize that the bound (\ref{eq:knyamain}) involves the sine \emph{squared} and, since convergence analyses are of particular interest for small angles, this is a great improvement over (\ref{eq:knya}). This is just as we would hope, since one of the subspaces is $A$-invariant in (\ref{eq:knyamain}). The exact $A$-invariance assumption is equivalent to the subspace being spanned by some exact eigenvectors of $A$, and Conjecture \ref{conj:main} is an a~priori Rayleigh-Ritz eigenvalue error bound which can be used to examine how the subspaces $\mathcal{Y}$ of an iterative eigenproblem algorithm approach an ideal $A$-invariant subspace $\mathcal{X}$. As we mentioned in the introduction, eigenvalue error bounds are important in many applications. We refer the reader to the follow-up paper \cite{ka07} where we extend some results of this paper to Hilbert spaces, and discuss in detail applications to finite element methods and subspace iterations. The implications of the weak majorization inequality (\ref{eq:knyamain}) in Conjecture \ref{conj:main} may not be obvious to every reader. The weak majorization bound (\ref{eq:knyamain}) directly implies \[ \sum_{i=1}^j |\lambda_i(X^HAX)-\lambda_i(Y^HAY)|^\downarrow \leq {\rm{spr}}(A) \sum_{i=1}^j \sin^2(\theta_i(\mathcal{X},\mathcal{Y}))^\downarrow ,\quad \ j=1,\ldots,k, \] see (\ref{eq:wm}), where $k=\dim{\mathcal{X}}=\dim{\mathcal{Y}}$. For example for $j=k$ we obtain \[ \sum_{i=1}^k |\lambda_i(X^HAX)-\lambda_i(Y^HAY)| \leq {\rm{spr}}(A) \sum_{i=1}^k \sin^2(\theta_i(\mathcal{X},\mathcal{Y})), \] and for $j=1$ we get (\ref{eq:knyamax1}). 
Moreover, for real vectors $x$ and $y$ the weak majorization $x \prec_w y$ is equivalent to the inequality $\sum _{i=1}^n \phi(x_i) \leq \sum_{i=1}^n \phi(y_i)$ holding for any continuous nondecreasing convex real valued function $\phi$, see, e.g.,\ \cite[Statement 4.B.2]{MarO79}. If for example we take $\phi(t)=t^p$ with $p \geq 1$, the bound (\ref{eq:knyamain}) also implies \[ \left(\sum_{i=1}^k |\lambda_i(X^HAX)-\lambda_i(Y^HAY)|^p\right)^{\frac{1}{p}} \leq {\rm{spr}}(A) \left(\sum_{i=1}^k \sin^{2p}(\theta_i(\mathcal{X},\mathcal{Y}))\right)^{\frac{1}{p}}. \] We have not proven that Conjecture \ref{conj:main} holds in \emph{all} circumstances, and indeed it might not (but we suspect it does). But we have proven it \emph{always} holds if we multiply the bound by $1.5$. In section \ref{sec:pertbds} we also show that Conjecture \ref{conj:main} does hold in some very useful circumstances: \begin{theorem} \label{thm:mainex} The bound (\ref{eq:knyamain}) of Conjecture \ref{conj:main} holds if, in addition to the assumptions of Conjecture \ref{conj:main}, either or both of the following conditions hold: \begin{description} \item[(a)] The $A$-invariant subspace $\mathcal{X}$ or $\mathcal{Y}$ corresponds to a contiguous set of the largest (or smallest) eigenvalues of $A$. \item[(b)] All the eigenvalues of $A$ corresponding to the $A$-invariant subspace $\mathcal{X}$ or $\mathcal{Y}$ lie between (and possibly including) one extreme eigenvalue of $A$ and the midpoint $[\lambda_1(A)\!+\!\lambda_n(A)]/2$ of $A$'s spectrum. \end{description} \end{theorem} This does not cover all known cases where (\ref{eq:knyamain}) holds, but it does cover many practical cases. For example in approximating the eigenvalues of a Hermitian matrix, perhaps using Lanczos' eigenvalue algorithm, e.g., \cite[\S 9]{GolL89}, we are often interested in just one end of the spectrum. 
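The case covered by part \textbf{(a)} of Theorem \ref{thm:mainex} can be illustrated numerically. In the following numpy sketch (our illustration; the random construction and tolerance are ours), $\mathcal{X}$ is the invariant subspace of the $k$ largest eigenvalues, and the partial sums of (\ref{eq:wm}) confirm the weak majorization (\ref{eq:knyamain}):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3
lam = np.sort(rng.standard_normal(n))[::-1]       # eigenvalues of A, descending
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(lam) @ Q.T                        # real symmetric A
X = Q[:, :k]            # A-invariant subspace for the k largest eigenvalues
Y, _ = np.linalg.qr(X + 0.1 * rng.standard_normal((n, k)))  # perturbed subspace

spr = lam[0] - lam[-1]
cos = np.clip(np.linalg.svd(X.T @ Y, compute_uv=False), 0.0, 1.0)
sin2 = np.sort(1.0 - cos ** 2)[::-1]              # sin^2 of principal angles, desc.

ritz_X = np.sort(np.linalg.eigvalsh(X.T @ A @ X))[::-1]
ritz_Y = np.sort(np.linalg.eigvalsh(Y.T @ A @ Y))[::-1]
err = np.sort(np.abs(ritz_X - ritz_Y))[::-1]

# Weak majorization |lambda(X^HAX)-lambda(Y^HAY)| prec_w spr(A) sin^2 theta:
print(np.all(np.cumsum(err) <= np.cumsum(spr * sin2) + 1e-12))
```

The check prints \texttt{True} for this construction, as part \textbf{(a)} of the theorem predicts.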
In section \ref{sec:pertbds} we also show a weaker result \emph{always} holds: \begin{theorem} \label{thm:mainin} Under the assumptions of Conjecture \ref{conj:main} we have \begin{equation} |\lambda(X^HAX)-\lambda(Y^HAY)| {\ \prec}_{w}\ {\rm{spr}}(A) \left[{e} -\cos \theta (\mathcal{X},\mathcal{Y})+\frac{1}{2} \sin^2\theta (\mathcal{X},\mathcal{Y})\right]. \label{eq:bound} \end{equation} \end{theorem} Here and below we use `${e}$' to indicate a vector of ones. Note that the individual elements for both vectors ${e}-\cos\theta(\mathcal{X},\mathcal{Y})$ and $\sin^2\theta(\mathcal{X},\mathcal{Y})$ are decreasing, since both functions $1 \!-\!\cos \theta$ and $\sin^2\theta$ are monotonically increasing within $[0,\pi/2]$, and the vector $\theta(\mathcal{X},\mathcal{Y})$ is chosen to be decreasing. % We now deduce two simple corollaries of Theorem~\ref{thm:mainin}. Using elementary trigonometry, for $\theta \in [0,\pi/2]$: \begin{align*} 2-2\cos{\theta}&=2-2\cos{\theta}-(1-\cos{\theta})^2+ (1-\cos{\theta})^2 \nonumber\\ &=\sin^2{\theta}+(1-\cos{\theta})^2 = \sin^2{\theta}+{\sin^4{\theta}}/{(1+\cos{\theta})^2} \nonumber\\ &\leq \sin^2{\theta}+\sin^4{\theta}. \nonumber \end{align*} We first conclude that bound (\ref{eq:bound}) is slightly worse than bound (\ref{eq:knyamain}) from Conjecture \ref{conj:main}; and second, we immediately obtain from (\ref{eq:bound}): \begin{corollary} Under the assumptions of Conjecture \ref{conj:main}, we have \begin{align} |\lambda(X^HAX)-\lambda(Y^HAY)| & {\ \prec}_{w}\ {\rm{spr}}(A) \left[\sin^2 \theta (\mathcal{X},\mathcal{Y})+\frac{1}{2} \sin^4\theta (\mathcal{X},\mathcal{Y})\right] \label{eq:bound1a} \\ & \leq \ \ \ \frac{3}{2} {\rm{spr}}(A) \sin^2 \theta (\mathcal{X},\mathcal{Y}). 
\label{eq:bound1} \end{align} \end{corollary} Extending the above trigonometric relation we see that $$ 2-2\cos{\theta} =\sin^2\theta\left(1+\frac{\sin^2\theta}{(1+\cos\theta)^2}\right) =\frac{2\sin^2\theta}{1+\cos\theta} =\frac{\sin^2{\theta}}{\cos^2({\theta/2})} \leq \tan^2{\theta}, $$ for $\theta \in [0,\pi/2]$; and with $\sin^2{\theta}\leq \tan^2{\theta}$, bound (\ref{eq:bound}) implies another corollary: \begin{corollary} Under the assumptions of Conjecture \ref{conj:main}, we have \begin{equation} |\lambda(X^HAX)-\lambda(Y^HAY)| {\ \prec}_{w}\ {\rm{spr}}(A) \tan^2\theta (\mathcal{X},\mathcal{Y}). \label{eq:boundtan} \end{equation} \end{corollary} We give an example in section \ref{sec:diss} demonstrating that the conjectured bound (\ref{eq:knyamain}) cannot be any tighter. Our numerical tests suggest that Conjecture \ref{conj:main} holds, i.e., that bound (\ref{eq:bound}) can probably be improved to (\ref{eq:knyamain}). However we show in section \ref{sec:diss} that already the first step in our proof of Theorem \ref{thm:mainin} does not allow us to prove the better bound (\ref{eq:knyamain}), so a completely different approach is apparently needed to support Conjecture \ref{conj:main} in all cases---see section \ref{sec:diss} for more thoughts on this. Conjecture \ref{conj:main} turns out to be easy to formulate, but hard to prove in its generality. We believe that the present publication, which proves Conjecture \ref{conj:main} in several practically interesting particular cases and provides slightly weaker bounds (\ref{eq:bound})--(\ref{eq:boundtan}) for the general case, is important since it serves as a theoretical foundation for our future work on applications, e.g.,\ \cite{ka07}. It is also novel---we know of no other case where majorization is used for \emph{a priori} Rayleigh-Ritz error bounds. 
The only somewhat related result known to us is the pioneering work of \cite{DavKah}, where majorization is applied to bound eigenvalue errors \emph{a posteriori}. \section{Proofs}\label{sec:pertbds} We have all the tools needed to prove our main results Theorem \ref{thm:mainex} and Theorem \ref{thm:mainin}. At first both proofs develop along the same lines; later they split. By the assumptions in the theorems, $\mathcal{X}$ and $\mathcal{Y}$ are two subspaces of $\C^n$ of the same dimension $k$, and are the column ranges of matrices $X$ and $Y$ with orthonormal columns that are arbitrary up to unitary transformations of their columns. Using the singular value decomposition we choose such a pair of matrices $X$ and $Y$ with orthonormal columns so that $C\equiv X^HY$ is real, square and diagonal, with the diagonal entries in increasing order. Thus by the definition of angles between subspaces, \begin{equation} C=\diag\left(s^{\uparrow}(X^HY)\right)=\diag\left(\cos\theta (\mathcal{X},\mathcal{Y})\right). \label{eq:ce} \end{equation} We arbitrarily complete $X$ and $Y$ to unitary matrices $[X,X_\perp]$, and $[Y,Y_\perp]\in\mathcal{U}(n)$ and consider the $2\times2$ partition of their unitary product $[X,X_\perp]^H[Y,Y_\perp]$. By construction of $X$ and $Y$, its $k\times k$ upper left block is $C$. We denote its $(n\!-\!k)\times k$ lower left block by $S\equiv(X_\perp)^H Y$. Since $[X,X_\perp]^H[Y,Y_\perp]$ is unitary, the entries $C$ and $S$ of its first block column satisfy $C^2+S^HS=I$. 
So $\lambda(S^H S)=\lambda(I\!-\!C^2)= {e} \!-\!\cos^2 \theta(\mathcal{X},\mathcal{Y}) = \sin^2 \theta(\mathcal{X},\mathcal{Y}),$ where ${e}$ is the vector of ones, and so the vectors of singular values $s(C)$ and $s(S)$ are closely connected and we derive from this that \begin{equation} \sin\theta(\mathcal{X},\mathcal{Y})=[s(S),0,\ldots,0], \label{eq:s} \end{equation} where $\max\{2k\!-\!n,0\}$ zeroes are added on the right-hand side to match the number $k$ of angles in the vector $\theta(\mathcal{X},\mathcal{Y})$ with the number $\min\{k,n\!-\!k\}$ of singular values in the vector $s(S)$. Both theorems assume that either $\mathcal{X}$ or $\mathcal{Y}$ is $A$-invariant, so without loss of generality let $\mathcal{X}$ be $A$-invariant. Then since $[X,X_\perp]$ is unitary: $$ [X,X_\perp]^HA\ [X,X_\perp]=\diag(A_{11},A_{22}), \mbox{ and } A=[X,X_\perp]\diag(A_{11},A_{22})[X,X_\perp]^H. $$ Here $X^HAX=A_{11}\in\mathcal{H}(k)$ and $(X_\perp)^HAX_\perp=A_{22}\in{\mathcal{H}(n-k)}$. We can now use $Y^H[X,X_\perp]=[C^H,S^H]=[C,S^H]$ to show that \begin{align} Y^HAY&= Y^H \left([X,X_\perp]\diag(A_{11},A_{22})[X,X_\perp]^H\right)Y = C A_{11} C + S^H A_{22} S. \label{eq:yay} \end{align} The expression we want to bound in Theorems \ref{thm:mainex} and \ref{thm:mainin} now takes the form \begin{align} \lambda(X^HAX)-\lambda(Y^HAY) &=\lambda(A_{11})-\lambda(CA_{11}C+S^HA_{22}S) \nonumber\\ =\lambda(A_{11})-&\lambda(CA_{11}C)+\lambda(CA_{11}C)-\lambda(CA_{11}C+S^HA_{22}S) \nonumber\\ \prec [\lambda(A_{11})-&\lambda(CA_{11}C)]^{\downarrow}+\lambda(-S^HA_{22}S), \label{eq:ident} \end{align} where this last line used Lidskii's Theorem \ref{thm:Lid} with (\ref{eq:gen}). See the discussion following (\ref{eq:ex}) for more about this choice. Next (\ref{eq:evsv}), Theorems \ref{thm:product} and \ref{thm:norm}, and (\ref{eq:s}) give \begin{equation} \left|\lambda(-S^HA_{22}S)\right|^{\downarrow} = s(S^HA_{22}S) {\ \prec}_{w}\ \|A_{22}\|\sin^2\theta (\mathcal{X},\mathcal{Y}). 
\label{eq:bd22} \end{equation} At this point the proofs split. Each proof will use a different majorization of $\lambda(A_{11})-\lambda(CA_{11}C)$ in (\ref{eq:ident}), but both will use (\ref{eq:bd22}). We first establish Theorem \ref{thm:mainex}. Neither (\ref{eq:knyamain}) nor (\ref{eq:bound}) is altered by replacing $A$ by $\pm A\!+\!\alpha I$ where $\alpha$ is an arbitrary real constant, and so we can make the new $A_{11}$ nonnegative definite in each of the parts \textbf{(a)} and \textbf{(b)} of Theorem~\ref{thm:mainex} by choosing the appropriate sign and the shift $\alpha$. \begin{proof}[of Theorem \ref{thm:mainex}] The starting point of the proof is (\ref{eq:ident}), but now we assume $A_{11}$ is nonnegative definite and so has a nonnegative definite square root $\sqrt{A_{11}}$. We deal with $\lambda(A_{11})-\lambda(CA_{11}C)$ first. For arbitrary square matrices $F$ and $G$ we have $\lambda(FG)=\lambda(GF)$. Taking $F=C\sqrt{A_{11}}$ and $G= \sqrt{A_{11}} C$, we get $\lambda(CA_{11}C)=\lambda(\sqrt{A_{11}}C^2\sqrt{A_{11}})$. Using this and Lidskii's Theorem \ref{thm:Lid} we see that \begin{align*} \lambda(A_{11})-\lambda(CA_{11}C) &=\lambda(A_{11})-\lambda\left(\sqrt{A_{11}}C^2\sqrt{A_{11}}\right)\\ &\prec \lambda\left(\sqrt{A_{11}}\sqrt{A_{11}}-\sqrt{A_{11}}C^2\sqrt{A_{11}}\right)\nonumber\\ &= \lambda\left(\sqrt{A_{11}}\left(I-C^2\right)\sqrt{A_{11}}\right) = \lambda\left(\sqrt{A_{11}}S^HS\sqrt{A_{11}}\right), \end{align*} since $C^2+S^HS=I$. Then using (\ref{eq:abs}) with Theorem \ref{thm:norm} (twice), and (\ref{eq:s}) we obtain $$ \left| \lambda(A_{11})-\lambda(CA_{11}C) \right| {\ \prec}_{w}\ s\left(\sqrt{A_{11}}S^HS\sqrt{A_{11}}\right) \leq \|A_{11}\|\sin^2\theta(\mathcal{X},\mathcal{Y}). 
$$ Apply (\ref{eq:abs}) to (\ref{eq:ident}); then (\ref{eq:absd}), (\ref{eq:gen}), and (\ref{eq:bd22}) with the above bound give \begin{align} \left|\lambda(X^HAX)-\lambda(Y^HAY)\right| &{\ \prec}_{w}\ \left| [\lambda(A_{11})-\lambda(CA_{11}C)]^{\downarrow}+ \lambda(-S^HA_{22}S)\right| \nonumber\\ &{\ \prec}_{w}\ \left| \lambda(A_{11})-\lambda(CA_{11}C)\right|^{\downarrow}+ \left| \lambda(-S^HA_{22}S)\right|^{\downarrow} \nonumber\\ &{\ \prec}_{w}\ (\|A_{11}\|+\|A_{22}\|)\sin^2\theta(\mathcal{X},\mathcal{Y}). \label{eq:normsum} \end{align} Here this proof splits, and we first prove part \textbf{(a)} of Theorem~\ref{thm:mainex}. By assumption the invariant subspace $\mathcal{X}$ corresponds to a contiguous set of the largest (or smallest) eigenvalues of $A$. Here we present the proof for the case of the largest eigenvalues. The case of the smallest eigenvalues follows immediately by substituting $-A$ for $A$. We replace $A$ with $A+\alpha I$ where $\alpha$ is chosen as the constant real shift that makes the new $A_{11}$ positive semidefinite (nonnegative definite and singular), so that $\sqrt{A_{11}}$ exists. Since $\dim \mathcal{X} = k$, and the invariant subspace $\mathcal{X}$ corresponds to a contiguous set of the largest eigenvalues of $A$, $\alpha= -\lambda_k(A)$. After the shift $\lambda_k(A)$ becomes zero, the eigenvalues of the block $A_{11}$ become nonnegative with $\lambda_1(A)$ being the largest in absolute value, while the eigenvalues of the block $A_{22}$ become nonpositive with $ \|A_{22}\| = -\lambda_n(A). $ Thus $ \|A_{11}\|+\|A_{22}\| = \lambda_1(A) -\lambda_n(A) = {\rm{spr}}(A). $ Using this together with (\ref{eq:normsum}) gives (\ref{eq:knyamain}), completing the proof of part \textbf{(a)}. For part \textbf{(b)} of Theorem~\ref{thm:mainex} we prove the case where the eigenvalues of $A_{11}$ lie in the top half of the spectrum of $A$, the remaining case is proven by substituting $-A$ for $A$. 
Choose the shift so that for the new $A$, $\lambda_1(A)=-\lambda_n(A)$, ensuring with the assumptions that $A_{11}$ is nonnegative definite and that $\|A_{11}\|\leq{\rm{spr}}(A)/2$ and $\|A_{22}\|\leq{\rm{spr}}(A)/2$, so that (\ref{eq:normsum}) again leads to (\ref{eq:knyamain}). \end{proof} In fact whenever we can choose the sign and shift in $\pm A+\alpha I$ so that this new $A$ has $A_{11}$ nonnegative definite with $\|A_{11}\|+\|A_{22}\|\leq {\rm{spr}}(A)$, then (\ref{eq:knyamain}) will be satisfied. We return again to (\ref{eq:ident}) and (\ref{eq:bd22}) to establish Theorem \ref{thm:mainin}. \begin{proof}[of Theorem \ref{thm:mainin}] Applying Lidskii's Theorem \ref{thm:Lid} with (\ref{eq:gen}) to (\ref{eq:ident}) gives \begin{align} \lambda(X^HAX)-\lambda(Y^HAY) &\prec [\lambda(A_{11})-\lambda(CA_{11}C)]^{\downarrow}+\lambda(-S^HA_{22}S) \nonumber\\ &\prec \lambda(A_{11}-CA_{11}C)+\lambda(-S^HA_{22}S). \label{eq:ident2} \end{align} In order to bound this we will use the identity \begin{equation} A_{11}-CA_{11}C=(I\!-\!C)A_{11}+CA_{11}(I\!-\!C), \label{eq:ident3} \end{equation} together with the following results obtained using (\ref{eq:ce}) with Theorems \ref{thm:norm} and \ref{thm:product}: \begin{align} s((I-C)A_{11})&\leq \|A_{11}\|s(I-C) =\|A_{11}\|({e}-\cos\theta(\mathcal{X},\mathcal{Y})), \label{eq:step3term1}\\ s(CA_{11}(I-C)) &{\ \prec}_{w}\ s(C)s(A_{11}(I-C))\leq s(A_{11}(I-C)) \nonumber\\ &\leq \|A_{11}\|s(I-C)=\|A_{11}\|({e}-\cos\theta(\mathcal{X},\mathcal{Y})). \label{eq:step3term2} \end{align} Discarding the first $C$ in $s(CA_{11}(I-C))$ is no real loss, see section~\ref{sec:diss}. 
Using (\ref{eq:ident3}), and applying (\ref{eq:evsv}), Corollary \ref{cor:sum}, and (\ref{eq:gen}) with (\ref{eq:step3term1}) and (\ref{eq:step3term2}), we obtain \begin{align} \left|\lambda(A_{11}-CA_{11}C)\right|^{\downarrow} &= s((I-C)A_{11}+CA_{11}(I-C)) \nonumber\\ &{\ \prec}_{w}\ s((I-C)A_{11})+s(CA_{11}(I-C)) \nonumber\\ &{\ \prec}_{w}\ 2\|A_{11}\|({e}-\cos\theta(\mathcal{X},\mathcal{Y})). \label{eq:step3} \end{align} Now apply (\ref{eq:abs}) to (\ref{eq:ident2}), followed by (\ref{eq:absd}), and use (\ref{eq:step3}) and (\ref{eq:bd22}) with (\ref{eq:gen}), together with $\|A_{11}\|, \|A_{22}\|\leq \|A\|$, to obtain: \begin{align} \left|\lambda(X^HAX)-\lambda(Y^HAY)\right| &{\ \prec}_{w}\ \left| \lambda(A_{11}-CA_{11}C)+\lambda(-S^HA_{22}S)\right| \nonumber\\ &{\ \prec}_{w}\ \left|\lambda(A_{11}-CA_{11}C)\right|^{\downarrow}+\left|\lambda(-S^HA_{22}S)\right|^{\downarrow} \nonumber\\ &{\ \prec}_{w}\ \|A\|\left[2({e} - \cos{\theta(\mathcal{X},\mathcal{Y})})+ \sin^2{\theta(\mathcal{X},\mathcal{Y})}\right]. \label{eq:step4} \end{align} Our final step is to replace $\|A\|$ by an expression involving ${\rm{spr}}(A)$. Observe here that the difference between Ritz values is invariant under \emph{any} shift $A \to A+\alpha I$, $\alpha \in \R$. So we shift $A$ so as to minimize $\|A\|$; the minimum is attained when $0$ is exactly in the middle of the spectrum, in which case $\|A\|={\rm{spr}}(A)/2$. Combining this observation with (\ref{eq:step4}) completes the proof of (\ref{eq:bound}). \end{proof} \section{Discussion}\label{sec:diss} The following example shows that the conjectured bound (\ref{eq:knyamain}) cannot be improved as a general result. Let $n=2m$ and let an arbitrary set of $m$ angles $\theta_i$ be given, where $\pi/2 \geq\theta_1\geq \ldots\geq\theta_{m}\geq 0$.
Let $C=\diag(\cos(\theta_1),\ldots,\cos(\theta_{m}))$, $X=[I,0]^H$, $Y=\left[C,\sqrt{I-C^2}\right]^H$, and $ A=\left[\begin{array}{cc} I & 0\\ 0 & -I \end{array}\right], $ where all unit matrices $I$ are of size $m$, so that $X$ and $Y$ are $n\times m$ and $A$ is $n\times n$. Then the $\theta_i$ become the principal angles between the pair of $k=m$ dimensional subspaces $\mathcal{X}\equiv\Ra(X)$ and $\mathcal{Y}\equiv\Ra(Y)$. Moreover the Ritz values are the eigenvalues of $X^HAX=I$ and $Y^HAY=2C^2-I$, and so $\left|\lambda(X^HAX)-\lambda(Y^HAY)\right|^{\downarrow}= 2\sin^2 \theta (\mathcal{X},\mathcal{Y})$. In this example ${\rm{spr}}(A) = 1 - (-1) = 2$, so (\ref{eq:knyamain}) turns into an equality. Asymptotically, when all of the angles are small, bounds (\ref{eq:knyamain}), (\ref{eq:bound}), (\ref{eq:bound1a}) and (\ref{eq:boundtan}) are all equivalent. Moreover, our numerical tests support Conjecture \ref{conj:main} in all cases. In practical terms, from the point of view of a numerical analyst, we are perhaps done. However, it would be pleasing to know whether Conjecture \ref{conj:main} holds theoretically in its generality, since bound (\ref{eq:knyamain}) is more elegant and cannot be improved as a general result. One important thing we know is that our approach of starting with Theorem \ref{thm:Lid} to deduce (\ref{eq:ident}) (used in the proof of (\ref{eq:bound})) cannot reduce bound (\ref{eq:bound}) to bound (\ref{eq:knyamain}) in general, no matter how we modify the rest of the proof. This can be seen from the following example in $\C^4$.
Let $A=\diag(A_{11},A_{22})$ and $C=X^HY$, $S=X_\perp^HY$ be as in \begin{equation} A=\left[ \begin{array}{cc|cc} 0&1&0&0 \\ 1&0&0&0\\ \hline 0&0&1&0\\ 0&0&0&1\\ \end{array} \right], \ \ [X,X_\perp]=I_4, \ \ [Y,Y_\perp]=\left[ \begin{array}{cc|cc} 0&0&-1&0 \\ 0&1&0&0\\ \hline 1&0&0&0\\ 0&0&0&1\\ \end{array} \right], \label{eq:ex} \end{equation} where $I_4$ is the $4\times 4$ unit matrix, so that $X,X_\perp,Y,Y_\perp$ all have two columns. Here $[X,X_\perp]$ and $[Y,Y_\perp]$ are chosen as in our proofs, so that $[X,X_\perp]^H[Y,Y_\perp] = [Y,Y_\perp]$, and we see that $\theta (\mathcal{X},\mathcal{Y}) = [\pi/2,0]^T$, $CA_{11}C=0$, $S^HA_{22}S=\diag(1,0)$. Here the largest and smallest eigenvalues of $A$ are $\pm 1$, so ${\rm{spr}}(A)=2$. Hence by direct calculation $$ X^HAX = A_{11}= \left[\begin{array} {cc} 0 & 1 \\ 1 & 0 \end{array} \right],\quad Y^HAY =CA_{11}C+S^HA_{22}S=S^HA_{22}S= \left[\begin{array} {cc} 1 & 0 \\ 0 & 0 \end{array} \right], $$ $$ \left|\lambda(X^HAX)-\lambda(Y^HAY)\right| = \left|\begin{bmatrix} 1\\ -1 \end{bmatrix} - \begin{bmatrix} 1\\ 0 \end{bmatrix}\right| = \begin{bmatrix} 0\\ 1 \end{bmatrix} {\ \prec}_{w}\ \begin{bmatrix} 2\\ 0 \end{bmatrix} ={\rm{spr}}(A)\sin^2{\theta (\mathcal{X},\mathcal{Y})}, $$ so example (\ref{eq:ex}) \emph{does} satisfy (\ref{eq:knyamain}). Let us now attempt to use (\ref{eq:ident}) for (\ref{eq:ex}). The right hand side of (\ref{eq:ident}) is $$ a\equiv [\lambda(A_{11})-\lambda(CA_{11}C)]^{\downarrow}+\lambda(-S^HA_{22}S)= \begin{bmatrix} 1 \\-1\end{bmatrix}-\begin{bmatrix} 0\\ 0\end{bmatrix}+ \begin{bmatrix} 0 \\ -1\end{bmatrix}=\begin{bmatrix} 1 \\-2 \end{bmatrix}, $$ where it is \emph{not} true that $|a|{\ \prec}_{w}\ {\rm{spr}}(A)\sin^2{\theta(\mathcal{X},\mathcal{Y})}$. That is, the absolute value of the right-hand side of (\ref{eq:ident}) is not always weakly majorized by ${\rm{spr}}(A)\sin^2{\theta(\mathcal{X},\mathcal{Y})}$, so we cannot obtain a general proof of (\ref{eq:knyamain}) starting from the majorization in (\ref{eq:ident}).
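The arithmetic in this example is easy to reproduce numerically. The following sketch (our own NumPy code; the helper \verb|weakly_majorized| is not from any library) recomputes the Ritz values, the vector $a$, and both majorization checks:

```python
import numpy as np

def weakly_majorized(u, v):
    """u prec_w v: compare partial sums of the decreasing rearrangements."""
    u, v = np.sort(u)[::-1], np.sort(v)[::-1]
    return bool(np.all(np.cumsum(u) <= np.cumsum(v) + 1e-12))

lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]   # eigenvalues, decreasing

# Example (eq:ex): A = diag(A11, A22), [X, X_perp] = I_4, Y as displayed above
A11 = np.array([[0.0, 1.0], [1.0, 0.0]])
A22 = np.eye(2)
A = np.block([[A11, np.zeros((2, 2))], [np.zeros((2, 2)), A22]])
X, X_perp = np.eye(4)[:, :2], np.eye(4)[:, 2:]
Y = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 0.0]])

C, S = X.T @ Y, X_perp.T @ Y
diff = np.abs(lam(X.T @ A @ X) - lam(Y.T @ A @ Y))     # = [0, 1]
bound = 2.0 * np.array([1.0, 0.0])   # spr(A) * sin^2(theta), theta = [pi/2, 0]
print(weakly_majorized(diff, bound))                   # True: (eq:knyamain) holds

# Right-hand side of (eq:ident): a = [lam(A11)-lam(C A11 C)]^dec + lam(-S^H A22 S)
a = np.sort(lam(A11) - lam(C @ A11 @ C))[::-1] + lam(-S.T @ A22 @ S)
print(a)                                               # [ 1. -2.]
print(weakly_majorized(np.abs(a), bound))              # False
```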
Example (\ref{eq:ex}) can tell us even more. For \emph{any} matrix $M=M^H$ we have the following generalization of (\ref{eq:ident2}) ($M=CA_{11}C$ in (\ref{eq:ident}) and (\ref{eq:ident2})): \begin{align*} \lambda(X^HAX)-\lambda(Y^HAY) &=\lambda(X^HAX)-\lambda(M)+\lambda(M)-\lambda(Y^HAY)\\ &\prec \lambda(X^HAX-M)+\lambda(M-Y^HAY)\equiv \tilde{a}. \end{align*} It might be thought that if, e.g.,\ $X^HAX$ is indefinite, some such $M$ could be chosen to minimize $\tilde{a}$ and to prove (\ref{eq:knyamain}). But in example (\ref{eq:ex}) it can be shown that there is \emph{no} real symmetric $M$ giving $\tilde{a}$ satisfying the desired bound $|\tilde{a}|\!{\ \prec}_{w}\ \! {\rm{spr}}(A)\sin^2{\theta (\mathcal{X},\mathcal{Y})}$. In particular $M\!=\!Y^H\!AY$ will not give this bound, as the reader can check via (\ref{eq:ex}). That is, using $\lambda(X^HAX)-\lambda(Y^HAY) \prec \lambda(A_{11}-CA_{11}C-S^HA_{22}S)$ in place of (\ref{eq:ident}) will still not give (\ref{eq:knyamain}) via our approach. So on the one hand we cannot improve bound (\ref{eq:bound}) to give (\ref{eq:knyamain}) except possibly by considering a different approach to our present way of using Lidskii's Theorem \ref{thm:Lid} or equivalent in the first step, see (\ref{eq:ident}) and (\ref{eq:ident2}). On the other hand our numerical tests suggest that the tighter bound (\ref{eq:knyamain}) holds. Thus if we are to prove (\ref{eq:knyamain}) for widely spread interior eigenvalues, we appear to need an approach more sophisticated than our particular application of Lidskii's theorem in the first step. An essentially equivalent first step was used in \cite[Theorem 10]{ka03} in an earlier attempt to prove (\ref{eq:knya}), where it led to an artificial multiplier $\sqrt{2}$ in the right-hand side of (\ref{eq:knya}). 
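The reader's check suggested above, that $M=Y^HAY$ does not give the desired bound in example (\ref{eq:ex}), can be scripted as follows (our own NumPy sketch, reusing the matrices computed in that example):

```python
import numpy as np

def weakly_majorized(u, v):
    """u prec_w v for real vectors: partial sums of decreasing rearrangements."""
    u, v = np.sort(u)[::-1], np.sort(v)[::-1]
    return bool(np.all(np.cumsum(u) <= np.cumsum(v) + 1e-12))

lam = lambda M: np.sort(np.linalg.eigvalsh(M))[::-1]   # eigenvalues, decreasing

# From example (eq:ex): X^H A X, Y^H A Y, and the bound spr(A) sin^2(theta)
XAX = np.array([[0.0, 1.0], [1.0, 0.0]])
YAY = np.array([[1.0, 0.0], [0.0, 0.0]])
bound = 2.0 * np.array([1.0, 0.0])   # angles theta = [pi/2, 0]

# Choose M = Y^H A Y, so a_tilde = lam(X^H A X - M) + lam(M - Y^H A Y)
M = YAY
a_tilde = lam(XAX - M) + lam(M - YAY)   # second term is lam(0) = [0, 0]

print(a_tilde)                                   # approx [ 0.618, -1.618]
print(weakly_majorized(np.abs(a_tilde), bound))  # False: this M fails too
```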
The subsequent paper \cite{KnyA06} used an unusual technique to extend an arbitrary Hermitian operator to an orthogonal projector in a higher dimensional space, preserving its Ritz values, to prove (\ref{eq:knya}) as it is stated, without the multiplier $\sqrt{2}$. Perhaps the same technique might shed light here, and help us to establish Conjecture \ref{conj:main}, but this currently remains an open question. \section*{Conclusions} We clarify a conjecture of Knyazev and Argentati \cite{KnyA06} on a bound for the absolute difference between Ritz values of a Hermitian matrix $A$ for two trial subspaces, one of which is $A$-invariant. We prove the conjecture for the cases where \textbf{(a)}: the $A$-invariant subspace corresponds to a contiguous set of the largest (or smallest) eigenvalues of $A$, and \textbf{(b)}: the eigenvalues of $A$ corresponding to the $A$-invariant subspace all lie in the top (or the bottom) half of the spectrum of $A$. We prove a slightly weaker bound for general invariant subspaces. We believe that the conjecture holds, i.e., that this weaker bound can be improved, and this is supported by our numerical tests, but the proof of the conjecture in its generality (if it is true) may require an unorthodox approach, perhaps one such as that used in \cite{KnyA06}. These results are useful in practice, and for example are applicable to the analysis of routines which use the Rayleigh-Ritz method, such as some Krylov subspace methods. We refer the reader to the subsequent paper \cite{ka07}, where we extend some results of this paper to Hilbert spaces, and discuss in detail their application to finite element methods and subspace iterations.
https://arxiv.org/abs/math/0610498
Bounds on changes in Ritz values for a perturbed invariant subspace of a Hermitian matrix
The Rayleigh-Ritz method is widely used for eigenvalue approximation. Given a matrix $X$ with columns that form an orthonormal basis for a subspace $\mathcal{X}$, and a Hermitian matrix $A$, the eigenvalues of $X^HAX$ are called Ritz values of $A$ with respect to $\mathcal{X}$. If the subspace $\mathcal{X}$ is $A$-invariant then the Ritz values are some of the eigenvalues of $A$. If the $A$-invariant subspace $\mathcal{X}$ is perturbed to give rise to another subspace $\mathcal{Y}$, then the vector of absolute values of changes in Ritz values of $A$ represents the absolute eigenvalue approximation error using $\mathcal{Y}$. We bound the error in terms of principal angles between $\mathcal{X}$ and $\mathcal{Y}$. We capitalize on ideas from a recent paper [DOI:10.1137/060649070] by A. Knyazev and M. Argentati, where the vector of absolute values of differences between Ritz values for subspaces $\mathcal{X}$ and $\mathcal{Y}$ was weakly (sub-)majorized by a constant times the sine of the vector of principal angles between $\mathcal{X}$ and $\mathcal{Y}$, the constant being the spread of the spectrum of $A$. In that result no assumption was made on either subspace being $A$-invariant. It was conjectured there that if one of the trial subspaces is $A$-invariant then an analogous weak majorization bound should only involve terms of the order of sine squared. Here we confirm this conjecture. Specifically we prove that the absolute eigenvalue error is weakly majorized by a constant times the sine squared of the vector of principal angles between the subspaces $\mathcal{X}$ and $\mathcal{Y}$, where the constant is proportional to the spread of the spectrum of $A$. For many practical cases we show that the proportionality factor is simply one, and that this bound is sharp. For the general case we can only prove the result with a slightly larger constant, which we believe is artificial.
https://arxiv.org/abs/0906.2809
Sandpile groups and spanning trees of directed line graphs
We generalize a theorem of Knuth relating the oriented spanning trees of a directed graph G and its directed line graph LG. The sandpile group is an abelian group associated to a directed graph, whose order is the number of oriented spanning trees rooted at a fixed vertex. In the case when G is regular of degree k, we show that the sandpile group of G is isomorphic to the quotient of the sandpile group of LG by its k-torsion subgroup. As a corollary we compute the sandpile groups of two families of graphs widely studied in computer science, the de Bruijn graphs and Kautz graphs.
\section{Introduction} Let $G=(V,E)$ be a finite directed graph, which may have loops and multiple edges. Each edge $e \in E$ is directed from its source vertex $\mathtt{s}(e)$ to its target vertex $\mathtt{t}(e)$. The \emph{directed line graph} $\line G = (E,E_2)$ has as vertices the edges of $G$, and as edges the set \[ E_2 = \{ (e_1,e_2) \in E\times E \,|\, \mathtt{s}(e_2)=\mathtt{t}(e_1) \}. \] For example, if $G$ has just one vertex and~$n$ loops, then $\line G$ is the complete directed graph on $n$ vertices (which includes a loop at each vertex). If $G$ has two vertices and no loops, then $\line G$ is a bidirected complete bipartite graph. An \emph{oriented spanning tree} of $G$ is a subgraph containing all of the vertices of~$G$, having no directed cycles, in which one vertex, the \emph{root}, has outdegree~$0$, and every other vertex has outdegree~$1$. The number~$\kappa(G)$ of oriented spanning trees of~$G$ is sometimes called the \emph{complexity} of~$G$. Our first result relates the numbers $\kappa(\line G)$ and $\kappa(G)$. Let $\{x_e\}_{e\in E}$ and $\{x_v\}_{v\in V}$ be indeterminates, and consider the polynomials \pagebreak \[ \kappa^{edge}(G,\mathbf{x}) = \sum_T \prod_{e \in T} x_e \] \[ \kappa^{vertex}(G,\mathbf{x}) = \sum_T \prod_{e \in T} x_{\mathtt{t}(e)}. \] The sums are over all oriented spanning trees~$T$ of~$G$. Write \[ \mathrm{indeg}(v) = \# \{e \in E \,|\, \mathtt{t}(e)=v\} \] \[ \mathrm{outdeg}(v) = \# \{e \in E \,|\, \mathtt{s}(e)=v\} \] for the indegree and outdegree of vertex~$v$ in~$G$. We say that~$v$ is a \emph{source} if $\mathrm{indeg}(v)=0$. \begin{theorem} \label{thm:weightedtreeenum} Let $G=(V,E)$ be a finite directed graph with no sources. Then \begin{equation} \label{weightedtreeenum} \kappa^{vertex}(\line G,\mathbf{x}) = \kappa^{edge}(G,\mathbf{x}) \prod_{v \in V} \left( \sum_{\mathtt{s}(e)=v} x_e \right)^{\mbox{\em \scriptsize indeg}(v)-1}. 
\end{equation} \end{theorem} Note that since the vertex set of $\line G$ coincides with the edge set of $G$, both sides of (\ref{weightedtreeenum}) are polynomials in the same set of variables $\{x_e\}_{e\in E}$. Setting all $x_e=1$ yields the product formula \begin{equation} \label{outtothein} \kappa(\line G) = \kappa(G) \prod_{v \in V} \mathrm{outdeg}(v)^{\mathrm{indeg}(v)-1} \end{equation} due in a slightly different form to Knuth \cite{Knuth67}. Special cases of (\ref{outtothein}) include Cayley's formula $n^{n-1}$ for the number of rooted spanning trees of the complete graph $K_n$, as well as the formula $(m+n)m^{n-1}n^{m-1}$ for the number of rooted spanning trees of the complete bipartite graph $K_{m,n}$. These are respectively the cases that~$G$ has just one vertex with~$n$ loops, or~$G$ has just two vertices~$a$ and~$b$ with ~$m$ edges directed from~$a$ to~$b$ and~$n$ edges directed from~$b$ to~$a$. Suppose now that $G$ is \emph{strongly connected}, that is, for any~$v,w \in V$ there are directed paths in~$G$ from~$v$ to~$w$ and from~$w$ to~$v$. Then associated to any vertex~$v_*$ of~$G$ is an abelian group $K(G,v_*)$, the \emph{sandpile~group}, whose order is the number of oriented spanning trees of $G$ rooted at~$v_*$. Its definition and basic properties are reviewed in section~\ref{sandpilegroups}. Other common names for this group are the critical group, Picard group, Jacobian, and group of components. In the case when $G$ is \emph{Eulerian} (that is, $\mathrm{indeg}(v)=\mathrm{outdeg}(v)$ for all vertices $v$) the groups $K(G,v_*)$ and $K(G,v'_*)$ are isomorphic for any $v_*,v'_* \in V$, and we often denote the sandpile group just by $K(G)$. When $G$ is Eulerian, we show that there is a natural map from the sandpile group of $\line G$ to the sandpile group of $G$, descending from the $\Z$-linear map \[ \phi : \Z^E \to \Z^V \] which sends $e \mapsto \mathtt{t}(e)$. Let $k$ be a positive integer. 
We say that $G$ is \emph{balanced $k$-regular} if $\mathrm{indeg}(v)=\mathrm{outdeg}(v)=k$ for every vertex~$v$. \begin{theorem} \label{mainsequence} Let $G=(V,E)$ be a strongly connected Eulerian directed graph, fix $e_* \in E$ and let $v_* = \mathtt{t}(e_*)$. The map $\phi$ descends to a surjective group homomorphism \[ \bar{\phi}: \K(\line G,e_*) \to \K(G,v_*). \] Moreover, if $G$ is balanced $k$-regular, then $\ker(\bar{\phi})$ is the $k$-torsion subgroup of $\K(\line G,e_*)$. \end{theorem} This result extends to directed graphs some of the recent work of Berget, Manion, Maxwell, Potechin and Reiner~\cite{linegraphs} on undirected line graphs. If $G=(V,E)$ is an undirected graph, the (undirected) line graph $\mbox{line}(G)$ of $G$ has vertex set $E$ and edge set \[ \{ \{e,e' \} \,|\, e,e' \in E, ~e \cap e' \neq \emptyset \}. \] The results of~\cite{linegraphs} relate the sandpile groups of $G$ and $\mbox{line}(G)$. The undirected case is considerably more subtle, because although there is still a natural map $\K (\uline G) \to \K (G)$ when $G$ is regular, this map may fail to be surjective. A particularly interesting family of directed line graphs are the \emph{de~Bruijn graphs} $DB_n$, defined recursively by \[ DB_n = \line (DB_{n-1}), \qquad n\geq 1, \] where $DB_0$ is the graph with just one vertex and two loops. The~$2^n$ vertices of $DB_n$ can be identified with binary words $b_1 \ldots b_n$ of length~$n$; two such sequences~$b$ and~$b'$ are joined by a directed edge $(b,b')$ if and only if $b'_i=b_{i+1}$ for all $i=1,\ldots,n-1$. Using Theorem~\ref{mainsequence}, we obtain the full structure of the sandpile groups of the de Bruijn graphs. \begin{theorem} \label{deBruijn} \[ \K(DB_n) = \bigoplus_{j=1}^{n-1} \,(\Z/2^j\Z)^{2^{n-1-j}}. 
\] \end{theorem} Closely related to the de Bruijn graphs are the \emph{Kautz graphs}, defined by \[ \mathrm{Kautz}_1 = (\{1,2,3\}, \{(1,2),(1,3),(2,1),(2,3),(3,1),(3,2)\}) \] and \[ \mathrm{Kautz}_n = \line (\mathrm{Kautz}_{n-1}), \qquad n \geq 2. \] The Kautz graphs are useful in network design because they have close to the maximum possible number of vertices given their diameter and degree~\cite{FYM84} and because they contain many short vertex-disjoint paths between any pair of vertices~\cite{DLH93}. The following result gives the sandpile group of $\mathrm{Kautz}_n$. \begin{theorem} \label{Kautz} \[ \K(\mbox{\em Kautz}_n) = (\Z/3\Z) \oplus (\Z/2^{n-1}\Z)^2 \oplus \bigoplus_{j=1}^{n-2} (\Z/2^j\Z)^{3\cdot 2^{n-2-j}}. \] \end{theorem} The remainder of the paper is organized as follows. In section~\ref{spanningtrees}, we prove Theorem~\ref{thm:weightedtreeenum} and state a variant enumerating spanning trees with a fixed root. Section~\ref{sandpilegroups} begins by defining the sandpile group, and moves on from there to the proof of Theorem~\ref{mainsequence}. In section~\ref{iteratedlinegraphs} we enumerate spanning trees of iterated line digraphs. Huaxiao, Fuji and Qiongxiang \cite{iterated} prove that for a balanced $k$-regular directed graph~$G$ on~$N$ vertices, \[ \kappa(\line^n G) = \kappa(G) k^{(k^n-1)N}. \] Theorem~\ref{iterated} generalizes this formula to an arbitrary directed graph~$G$ having no sources. This section also contains the proofs of Theorems~\ref{deBruijn} and~\ref{Kautz}. Lastly, in section~\ref{concluding} we pose two questions for future study. \section{Spanning Trees} \label{spanningtrees} Let $G=(V,E)$ be a finite directed graph, loops and multiple edges allowed. We denote its vertices by $v,w,\ldots$ and edges by $e,f,\ldots$. Each edge $e \in E$ is directed from its source $\mathtt{s}(e)$ to its target $\mathtt{t}(e)$. 
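Before proceeding, the objects just introduced are easy to experiment with in code. The sketch below (our own helper functions, not part of the paper) builds $\line G$ from an edge list, counts oriented spanning trees via principal minors of the Laplacian, and confirms the product formula (\ref{outtothein}) on the two-vertex example from the introduction, whose line graph is the bidirected $K_{2,3}$:

```python
import numpy as np

def count_trees(n, edges, root):
    """Oriented spanning trees of a digraph rooted at `root`, via the
    principal minor of the Laplacian (rooted matrix-tree theorem).
    `edges` is a list of (source, target) pairs; loops contribute nothing."""
    L = np.zeros((n, n))
    for s, t in edges:
        if s != t:
            L[s][s] += 1
            L[s][t] -= 1
    keep = [v for v in range(n) if v != root]
    return round(np.linalg.det(L[np.ix_(keep, keep)]))

def line_graph(edges):
    """Directed line graph: vertices are edge indices of G,
    with an edge i -> j whenever t(e_i) = s(e_j)."""
    return [(i, j) for i, (_, t) in enumerate(edges)
                   for j, (s, _) in enumerate(edges) if t == s]

# G: two vertices, m=2 edges a->b and n=3 edges b->a
edges = [(0, 1), (0, 1), (1, 0), (1, 0), (1, 0)]
kappa_G = sum(count_trees(2, edges, r) for r in range(2))   # = m + n = 5
LG = line_graph(edges)
kappa_LG = sum(count_trees(len(edges), LG, r) for r in range(len(edges)))

# Formula (outtothein): kappa(line G) = kappa(G) * prod outdeg(v)^(indeg(v)-1)
print(kappa_G, kappa_LG)                                    # 5 60
assert kappa_LG == kappa_G * 2 ** (3 - 1) * 3 ** (2 - 1)    # 5 * 4 * 3 = 60
```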
In this section we prove Theorem~\ref{thm:weightedtreeenum} relating the spanning trees of~$G$ and~$\line G$, and discuss some interesting special cases. If~$k$ is a field, we write~$k^V$ and~$k^E$ for the $k$-vector spaces with bases indexed by~$V$ and~$E$ respectively. We think of the elements of~$k^V$ or~$k^E$ as formal $k$-linear combinations of vertices or of edges. Consider the field of rational functions $\Q(\mathbf{x}) = \Q((x_e)_{e\in E}, (x_v)_{v \in V})$. The \emph{edge-weighted Laplacian} and \emph{vertex-weighted Laplacian} of $G$ are the $\Q(\mathbf{x})$-linear transformations \[ \Delta^{edge}, \Delta^{vertex} : \Q(\mathbf{x})^V \to \Q(\mathbf{x})^V \] sending \[ \begin{split} \Delta^{edge}(v) &= \sum_{\mathtt{s}(e)=v} x_e (\mathtt{t}(e) - v); \\ \Delta^{vertex}(v) &= \sum_{\mathtt{s}(e)=v} x_{\mathtt{t}(e)} (\mathtt{t}(e) - v). \end{split} \] The sums are over all edges~$e\in E$ such that $\mathtt{s}(e)=v$. We will use the following form of the matrix-tree theorem for directed graphs. Here $[t]\,p(t)$ denotes the coefficient of $t$ in the polynomial $p(t)$. \begin{theorem}[Matrix-Tree Theorem] \[ \kappa^{edge}(G,\mathbf{x}) = [t] \det(t \cdot \mbox{\em Id} - \Delta^{edge}). \] \[ \kappa^{vertex}(G,\mathbf{x}) = [t] \det(t \cdot \mbox{\em Id} - \Delta^{vertex}). \] \end{theorem} For a proof, see for example \cite[Theorem~2]{CL96} for the vertex-weighted version, and \cite{Chaiken82} for the edge-weighted version. \begin{proof}[Proof of Theorem~\ref{thm:weightedtreeenum}] Consider the $V\times E$ matrix \[ A_{ve} = \begin{cases} 1, & v=\mathtt{t}(e) \\ 0, & \mbox{else}. \end{cases} \] and the $E\times V$ matrix \[ B_{ev} = \begin{cases} x_e, & v=\mathtt{s}(e) \\ 0, & \mbox{else}. \end{cases} \] Let $\Delta$ be the edge-weighted Laplacian of $G$, and let $\Delta^\line$ be the vertex-weighted Laplacian of $\line G$.
Then \[ \Delta = AB - D \] and \begin{equation} \label{biglaplacian} \Delta^{\line} = BA - D^\line \end{equation} where $D$ and $D^\line$ are the diagonal matrices with diagonal entries \[ D_{vv} = \sum_{\mathtt{s}(f)=v} x_f, \qquad v \in V \] and \[ D^{\line}_{ee} = \sum_{\mathtt{s}(f)=\mathtt{t}(e)} x_f, \qquad e \in E. \] Since $AD^\line = DA$, we have \begin{equation} \label{intertwining} A\Delta^\line = A(BA-D^\line) = ABA - DA = (AB-D)A = \Delta A. \end{equation} In particular, $\Delta^\line(\ker(A)) \subset \ker(A)$, so the vector space decomposition \[ \Q(\mathbf{x})^E = \ker(A) \oplus \ker(A)^\perp \] exhibits $\Delta^\line$ in block triangular form. Hence the characteristic polynomial $\chi(t)$ of $\Delta^\line$ factors as \[ \chi(t) = \chi_1(t) \chi_2(t) \] where $\chi_1$ and $\chi_2$ are respectively the characteristic polynomials of $\Delta^\line|_{\ker(A)}$ and $\Delta^\line|_{\ker(A)^\perp}$. By hypothesis, $G$ has no sources, so $A$ has full rank. In particular, $AA^T$ is invertible. Hence the restriction $A|_{\ker(A)^\perp}$ is an isomorphism of $\ker(A)^\perp = \mathrm{Im}(A^T)$ onto $\Q(\mathbf{x})^V$. By (\ref{intertwining}) it follows that $\Delta^\line|_{\ker(A)^\perp}$ and~$\Delta$ have the same characteristic polynomial \[ \chi_2(t) = \det(t\cdot\mathrm{Id} - \Delta). \] Since the rows of $\Delta$ sum to zero, $\chi_2(t)$ has no constant term. By the matrix-tree theorem, \begin{equation*} \label{coefficientoft} \begin{split} \kappa^{vertex}(\line G,\mathbf{x}) = [t]\chi(t) &= \chi_1(0) \cdot [t] \chi_2(t) \\ &= \det \left(-\Delta^\line|_{\ker(A)}\right) \cdot \kappa^{edge}(G,\mathbf{x}). \end{split} \end{equation*} It remains to find the determinant of $-\Delta^\line|_{\ker(A)}$. For each vertex $v \in V$, fix an edge $e_0(v)$ with $\mathtt{t}(e_0(v))=v$. Then a basis for $\ker(A)$ is given by the vectors \[ \alpha_e = e - e_0(v), \qquad v\in V,~e \in E,~\mathtt{t}(e)=v,~e\neq e_0(v). 
\] By (\ref{biglaplacian}) we have \[ \Delta^\line \alpha_e = -\left(\sum_{\mathtt{s}(f)=\mathtt{t}(e)} x_f \right) \alpha_e \] so the vectors $\alpha_e$ form an eigenbasis for $\Delta^\line|_{\ker(A)}$. As each eigenvalue $-\sum_{\mathtt{s}(f)=v} x_f$ occurs with multiplicity $\mathrm{indeg}(v)-1$, we conclude that \[ \det \left(-\Delta^\line|_{\ker(A)}\right) = \prod_{v \in V} \left( \sum_{\mathtt{s}(f)=v} x_f \right)^{\mathrm{indeg}(v)-1}. \qed \] \renewcommand{\qedsymbol}{} \end{proof} We remark that the idea of using the incidence matrices $A$ and $B$ to relate the adjacency matrices of $G$ and $\line G$ has appeared before. See, for example, Yan and Zhang \cite[Proposition~1.4]{YZ03}, who in turn cite Lin and Zhang \cite{LZ83} and Liu \cite{Liu96}. Theorem~\ref{thm:weightedtreeenum} enumerates all oriented spanning trees of $\line G$, while in many applications one wants to enumerate spanning trees with a fixed root. Given a vertex $v_* \in V$, let \[ \kappa^{edge}(G,v_*,\mathbf{x}) = \sum_{\mathrm{root}(T)=v_*} \, \prod_{e \in T} x_e \] and \[ \kappa^{vertex}(G,v_*,\mathbf{x}) = \sum_{\mathrm{root}(T)=v_*} \, \prod_{e \in T} x_{\mathtt{t}(e)}. \] We will use the following variant of the matrix-tree theorem; see \cite{Chaiken82} and \cite[Theorem~5.6.4]{Stanley}. \begin{theorem}[Matrix-Tree Theorem, rooted version] \label{thm:rootedmatrixtree} Let $\Delta^{edge}_0$ and $\Delta^{vertex}_0$ be the submatrices of $\Delta^{edge}$ and $\Delta^{vertex}$ omitting row and column~$v_*$. Then \[ \kappa^{edge}(G,v_*,\mathbf{x}) = \det (-\Delta^{edge}_0). \] \[ \kappa^{vertex}(G,v_*,\mathbf{x}) = \det (-\Delta^{vertex}_0). \] \end{theorem} The following variant of Theorem~\ref{thm:weightedtreeenum} enumerates spanning trees of~$\line G$ with a fixed root $e_*$ in terms of spanning trees of~$G$ with root $w_*=\mathtt{s}(e_*)$. \begin{theorem} \label{thm:rootedtreeenum} Let $G=(V,E)$ be a finite directed graph, and let $e_*=(w_*,v_*)$ be an edge of~$G$. 
If $\mbox{\em indeg}(v) \geq 1$ for all vertices $v \in V$, and $\mbox{\em indeg}(v_*) \geq 2$, then \begin{equation*} \frac{\kappa^{vertex}(\line G,e_*,\mathbf{x})}{x_{e_*} \kappa^{edge}(G,w_*,\mathbf{x})} = \left( \sum_{\mathtt{s}(e)=v_*} x_e \right)^{\mbox{\em \scriptsize indeg}(v_*)-2} \prod_{v \neq v_*} \left( \sum_{\mathtt{s}(e)=v} x_e \right)^{\mbox{\em \scriptsize indeg}(v)-1}. \end{equation*} \end{theorem} \begin{proof} The proof is analogous to that of Theorem~\ref{thm:weightedtreeenum}, except that it uses reduced incidence matrices \[ A_0 : \Q(\mathbf{x})^{E-\{e_*\}} \to \Q(\mathbf{x})^V \] and \[ B_0 : \Q(\mathbf{x})^V \to \Q(\mathbf{x})^{E-\{e_*\}}. \] The edge-weighted Laplacian of the graph $G\setminus e_* = (V,E-\{e_*\})$ is given by \[ \Delta_{G\setminus e_*} = A_0 B_0 - D + M \] where the matrix~$M$ has a single nonzero entry~$x_{e_*}$ in row and column~$w_*$. Expanding $\det(D - A_0 B_0)$ along column~$w_*$ we find \[ \det (D - A_0 B_0) = \det(-\Delta_{G\setminus e_*}) + x_{e_*} \det(-\Delta_0) \] where $\Delta_0$ is the submatrix of the edge-weighted Laplacian of~$G$ omitting the row and column~$w_*$. By Theorem~\ref{thm:rootedmatrixtree} we have $\det (-\Delta_0) = \kappa^{edge}(G,w_*,\mathbf{x})$. Since the rows of $\Delta_{G\setminus e_*}$ sum to zero, it follows that \[ \det (D - A_0 B_0) = x_{e_*} \kappa^{edge}(G,w_*,\mathbf{x}). \] The submatrix $\Delta_0^\line$ of the vertex-weighted Laplacian of $\line G$ omitting the row and column~$e_*$ equals $B_0 A_0 - D_0^\line$, where $D_0^\line$ is the submatrix of $D^\line$ omitting row and column~$e_*$. Since $A_0 D_0^\line = D A_0$, we have \[ A_0 \Delta_0^\line = A_0 (B_0 A_0 - D_0^\line) = A_0 B_0 A_0 - DA_0 = (A_0 B_0-D) A_0 \] hence $\Delta_0^\line(\ker(A_0)) \subset \ker(A_0)$.
Now by Theorem~\ref{thm:rootedmatrixtree}, \begin{align*} \kappa^{vertex}(\line G, e_*, \mathbf{x}) &= \det \left(-\Delta_0^\line \right) \\ &= \det \left(-\Delta_0^\line|_{\ker(A_0)} \right) \det \left(-\Delta_0^\line|_{\ker(A_0)^\perp} \right). \end{align*} By hypothesis, the graph $G\setminus e_*$ has no sources, so $A_0$ has full rank. The rest of the proof proceeds as before, giving \[ \det \left(-\Delta_0^\line|_{\ker(A_0)^\perp} \right) = \det(D - A_0 B_0) = x_{e_*} \kappa^{edge}(G,w_*,\mathbf{x}) \] and \[ \det \left(-\Delta_0^\line|_{\ker(A_0)} \right) = \left( \sum_{\mathtt{s}(e)=v_*} x_e \right)^{\mathrm{indeg}(v_*)-2} \prod_{v \neq v_*} \left( \sum_{\mathtt{s}(e)=v} x_e \right)^{\mathrm{indeg}(v)-1}. \qed \] \renewcommand{\qedsymbol}{} \end{proof} Setting all $x_e=1$ in Theorem~\ref{thm:rootedtreeenum} yields the enumeration \begin{equation} \label{rootedproductformula} \kappa(\line G, e_*) = \frac{\kappa(G,w_*)}{\mathrm{outdeg}(v_*)} \pi(G) \end{equation} where $\kappa(G,w_*)$ is the number of oriented spanning trees of~$G$ rooted at~$w_*$, and \[ \pi(G) = \prod_{v \in V} \mathrm{outdeg}(v)^{\mathrm{indeg}(v)-1}. \] It is interesting to compare this formula to the theorem of Knuth~\cite{Knuth67}, which in our notation reads \begin{equation} \label{knuthproduct} \kappa(\line G, e_*) = \left( \kappa(G,v_*) - \frac{1}{\mathrm{outdeg}(v_*)}\sum_{\substack{\mathtt{t}(e)=v_* \\ e\neq e_*}} \kappa(G,\mathtt{s}(e)) \right) \pi(G). \end{equation} To see directly why the right sides of (\ref{rootedproductformula}) and (\ref{knuthproduct}) are equal, we define a \emph{unicycle} to be a spanning subgraph of~$G$ which contains a unique directed cycle, and in which every vertex has outdegree~$1$. If vertex $v_*$ is on the unique cycle of a unicycle~$U$, we say that $U$ goes through $v_*$. \begin{lemma} \label{unicycles} \[ \kappa^{edge}(G,v_*,\mathbf{x}) \sum_{\mathtt{s}(e)=v_*} x_e = \sum_{\mathtt{t}(e)=v_*} \kappa^{edge}(G,\mathtt{s}(e),\mathbf{x}) \,x_e. 
\] \end{lemma} \begin{proof} Removing~$e$ gives a bijection from unicycles containing a fixed edge~$e$ to spanning trees rooted at~$\mathtt{s}(e)$. If~$U$ is a unicycle through~$v_*$, then the cycle of~$U$ contains a unique edge~$e$ with~$\mathtt{s}(e)=v_*$ and a unique edge~$e'$ with~$\mathtt{t}(e')=v_*$, so both sides are equal to \[ \sum_{U} \prod_{e \in U} x_e \] where the sum is over all unicycles $U$ through $v_*$. \end{proof} Setting all $x_e=1$ in Lemma~\ref{unicycles} yields \[ \kappa(G,v_*) \,\mathrm{outdeg}(v_*) = \sum_{\mathtt{t}(e)=v_*} \kappa(G,\mathtt{s}(e)). \] Hence the factor appearing in front of $\pi(G)$ in Knuth's formula (\ref{knuthproduct}) is equal to $\kappa(G,w_*)/\mathrm{outdeg}(v_*)$. We conclude this section by discussing some special cases and interesting examples of Theorem~\ref{thm:weightedtreeenum}. \subsection{Deletion and contraction} Fix an edge $e \in E$ which is not a loop, i.e., $\mathtt{s}(e)\neq \mathtt{t}(e)$. Let \[ G \setminus e = (V,E-\{e\}) \] be the graph obtained by deleting~$e$ from~$G$. While there is more than one sensible way to define contraction for directed graphs, the following definition is natural from the point of view of oriented spanning trees. Let $G/e$ be the graph obtained from~$G$ by first deleting all edges~$f$ with $\mathtt{s}(f)=\mathtt{s}(e)$, and then identifying the vertices~$\mathtt{s}(e)$ and~$\mathtt{t}(e)$. Formally, $G/e = (V/e, E/e)$, where \[ V/e = V - \{\mathtt{s}(e),\mathtt{t}(e)\} \cup \{e\} \] and \[ E/e = E - \{f | \mathtt{s}(f)=\mathtt{s}(e)\}. \] The source and target maps for $G/e$ are given by $p \circ \mathtt{s} \circ i$ and $p \circ \mathtt{t} \circ i$, where $i : E/e \to E$ is inclusion, and $p : V \to V/e$ is given by $p(\mathtt{s}(e))=p(\mathtt{t}(e))=e$, and $p(v)=v$ for $v \neq \mathtt{s}(e),\mathtt{t}(e)$. With these definitions, the spanning tree enumerator $\kappa^{edge}$ satisfies the following deletion-contraction recurrence. 
\begin{lemma} \label{deletioncontraction} Let $G$ be a finite directed graph, and let~$e$ be a non-loop edge of~$G$. Then \[ \kappa^{edge}(G,\mathbf{x}) = \kappa^{edge}(G \setminus e, \mathbf{x}) + x_e \kappa^{edge}(G/e, \mathbf{x}). \] \end{lemma} \begin{proof} Oriented spanning trees of $G \setminus e$ are in bijection with oriented spanning trees of~$G$ that do not contain the edge~$e$. With the above definition of $G/e$, one easily checks that the map $T \mapsto T \cup \{e\}$ defines a bijection from oriented spanning trees of $G/e$ to oriented spanning trees of $G$ that contain the edge $e$. \end{proof} Suppose now that we set $x_f=1$ for all $f\neq e$. The coefficient of $x_e^\ell$ in $\kappa^{vertex}(\line G,\mathbf{x})$ then counts the number of oriented spanning trees~$T$ of~$\line G$ with $\mathrm{indeg}_T(e)=\ell$. If $v=\mathtt{s}(e)$ has indegree~$k$ and outdegree~$m$, then by Theorem~\ref{thm:weightedtreeenum} and Lemma~\ref{deletioncontraction}, this number is given by the coefficient of $x_e^\ell$ in the product \[ \left[ \kappa(G\setminus e) + x_e \kappa(G/e) \right] (m-1+x_e)^{k-1} \prod_{w \neq v} \mathrm{outdeg}(w)^{\mathrm{indeg}(w)-1}. \] Using the binomial theorem, we obtain the following. \begin{prop} Let $G=(V,E)$ be a finite directed graph with no sources. Fix a non-loop edge $e \in E$ and an integer $\ell \geq 0$. The number of oriented spanning trees~$T$ of~$\line G$ satisfying $\mathrm{indeg}_T(e)=\ell$ is given by \begin{align*} \prod_{w\neq v} \mathrm{outdeg}(w)^{\mathrm{indeg}(w)-1} \left( \binom{k-1}{\ell} \kappa(G \setminus e) (m-1)^{k-1-\ell} \,+ \right. \\ + \left. \binom{k-1}{\ell-1} \kappa(G/e) (m-1)^{k-\ell} \right) \end{align*} where $v=\mathtt{s}(e)$, $k = \mathrm{indeg}(v)$ and $m=\mathrm{outdeg}(v)$.
\end{prop} \subsection{Complete graph} Taking $G$ to be the graph with one vertex and $n$ loops, so that $\line G$ is the complete directed graph $\vec{K}_n$ on~$n$ vertices (including a loop at each vertex), we obtain from Theorem~\ref{thm:weightedtreeenum} the classical formula \[ \kappa^{vertex}(\vec{K}_n) = (x_1 + \ldots + x_n)^{n-1}. \] For a generalization to forests, see \cite[Theorem 5.3.4]{Stanley}. Note that oriented spanning trees of $\vec{K}_n$ are in bijection with rooted spanning trees of the complete undirected graph $K_n$, by forgetting orientation. \subsection{Complete bipartite graph} Taking $G$ to have two vertices, $a$ and $b$, with $m$ edges directed from $a$ to $b$ and $n$ edges directed from $b$ to $a$, we obtain from Theorem~\ref{thm:weightedtreeenum} \[ \begin{split} \kappa^{vertex}(\vec{K}_{m,n}) = (x_1 + \ldots + x_m + y_1 + \ldots + y_n) \,\times \quad \\ \times\, (x_1 + \ldots + x_m)^{n-1} (y_1 + \ldots + y_n)^{m-1}. \end{split} \] where $\vec{K}_{m,n} = \line G$ is the bidirected complete bipartite graph on $m+n$ vertices. The variables $x_1,\ldots,x_m$ correspond to vertices in the first part, and $y_1,\ldots,y_n$ correspond to vertices in the second part. As with the complete graph, oriented spanning trees of~$\vec{K}_{m,n}$ are in bijection with rooted spanning trees of the undirected complete bipartite graph $K_{m,n}$ by forgetting orientation. 
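Specializing all variables to $1$, both classical counts can be confirmed numerically. The sketch below (our own helper, applying the rooted matrix-tree theorem; variable names are ours) checks Cayley's $n^{n-1}$ and the bipartite count $(p+q)\,p^{q-1}q^{p-1}$ for part sizes $p$ and $q$:

```python
import numpy as np

def rooted_trees(nv, edges, root):
    """Rooted matrix-tree theorem: oriented spanning trees rooted at `root`
    equal the principal minor of the Laplacian omitting `root`."""
    L = np.zeros((nv, nv))
    for s, t in edges:
        if s != t:                     # loops contribute nothing
            L[s][s] += 1
            L[s][t] -= 1
    keep = [v for v in range(nv) if v != root]
    return round(np.linalg.det(L[np.ix_(keep, keep)]))

# Complete digraph on n vertices (with loops): n^(n-1) rooted trees in total
n = 5
Kn = [(v, w) for v in range(n) for w in range(n)]
cayley = sum(rooted_trees(n, Kn, r) for r in range(n))
print(cayley == n ** (n - 1))                           # True

# Bidirected complete bipartite graph, parts of sizes p and q:
# (p+q) * p^(q-1) * q^(p-1) rooted trees in total
p, q = 3, 4
Kpq = [(a, b) for a in range(p) for b in range(p, p + q)]
Kpq += [(b, a) for (a, b) in Kpq]
total = sum(rooted_trees(p + q, Kpq, r) for r in range(p + q))
print(total == (p + q) * p ** (q - 1) * q ** (p - 1))   # True
```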
\subsection{De Bruijn graphs} The spanning tree enumerators for the first few de Bruijn graphs are \begin{align*} \kappa^{vertex}(DB_1) = x_0 + x_1; \end{align*} \begin{align*} \kappa^{vertex}(DB_2) = (x_{00}+x_{01})(x_{10}+x_{11})(x_{01} + x_{10}); \end{align*} \begin{align*} \kappa^{vertex}(DB_3) = (x_{000}+x_{001}) (x_{010} + x_{011}) (x_{100} + x_{101}) (x_{110} + x_{111}) \times \\ \times \big(x_{011}x_{110}x_{100} + x_{010}x_{110}x_{100} + x_{110}x_{101}x_{001} + x_{110}x_{100}x_{001} \, + \\ +\, x_{100}x_{001}x_{011} + x_{101}x_{001}x_{011} + x_{001}x_{010}x_{110} + x_{001}x_{011}x_{110}\big). \end{align*} \section{Sandpile Groups} \label{sandpilegroups} Let $G=(V,E)$ be a strongly connected finite directed graph, loops and multiple edges allowed. Consider the free abelian group $\Z^V$ generated by the vertices of $G$; we think of its elements as formal linear combinations of vertices with integer coefficients. For $v \in V$ let \[ \Delta_v = \sum_{\mathtt{s}(e)=v} (\mathtt{t}(e) - v) \in \Z^V \] where the sum is over all edges $e \in E$ such that $\mathtt{s}(e)=v$. Fixing a vertex~$v_* \in V$, let $L_V$ be the subgroup of $\Z^V$ generated by $v_*$ and $\{\Delta_v\}_{v\neq v_*}$. The \emph{sandpile group} $\K(G,v_*)$ is defined as the quotient group \[ \K(G,v_*) = \Z^V / L_V. \] The $V \times V$ integer matrix whose column vectors are $\{\Delta_v\}_{v \in V}$ is called the \emph{Laplacian} of $G$. By Theorem~\ref{thm:rootedmatrixtree}, its principal minor omitting the row and column corresponding to~$v_*$ counts the number $\kappa(G,v_*)$ of oriented spanning trees of~$G$ rooted at $v_*$. Since this minor is also the index of $L_V$ in $\Z^V$, we have \[ \# \K(G,v_*) = \kappa(G,v_*). \] Recall that $G$ is \emph{Eulerian} if $\mathrm{indeg}(v)=\mathrm{outdeg}(v)$ for every vertex $v$. If~$G$ is Eulerian, then the groups $\K(G,v_*)$ and $\K(G,v'_*)$ are isomorphic for any vertices $v_*$ and $v'_*$ \cite[Lemma~4.12]{HLMPPW}. 
In this case we usually denote the sandpile group just by~$\K(G)$. The sandpile group arose independently in several fields, including arithmetic geometry~\cite{Lor89,Lor91}, statistical physics~\cite{Dhar} and algebraic combinatorics~\cite{Biggs}. Often it is defined for an undirected graph $G$; to translate this definition into the present setting of directed graphs, replace each undirected edge by a pair of directed edges oriented in opposite directions. Sandpiles on directed graphs were first studied in \cite{Speer}. For a survey of the basic properties of sandpile groups of directed graphs and their proofs, see \cite{HLMPPW}. The goal of this section is to relate the sandpile groups of an Eulerian graph $G$ and its directed line graph $\line G$. To that end, let $\Z^E$ be the free abelian group generated by the edges of $G$. For $e \in E$ let \[ \Delta_e = \sum_{\mathtt{s}(f)=\mathtt{t}(e)} (f-e) \in \Z^E. \] Fix an edge $e_* \in E$, and let $v_*=\mathtt{t}(e_*)$. Let $L_E \subset \Z^E$ be the subgroup generated by $e_*$ and $\{\Delta_e\}_{e \neq e_*}$. Then the sandpile group associated to~$\line G$ and $e_*$ is \[ \K(\line G, e_*) = \Z^E/L_E. \] Note that $\line G$ may not be Eulerian even when $G$ is Eulerian. For example, if $G$ is a bidirected graph (i.e., a directed graph obtained by replacing each edge of an undirected graph by a pair of oppositely oriented directed edges) then $G$ is Eulerian, but $\line G$ is not Eulerian unless all vertices of~$G$ have the same degree. We will work with maps $\phi$ and $\psi$ relating the sandpile groups of~$G$ and~$\line G$. These maps are analogous to the incidence matrices~$A$ and~$B$ from section~\ref{spanningtrees}, except that now we work over~$\Z$ instead of the field~$\Q(\mathbf{x})$. \begin{lemma} \label{phidescends} Let $\phi : \Z^E \to \Z^V$ be the $\Z$-linear map sending $e \mapsto \mathtt{t}(e)$. 
If $G$ is Eulerian, then $\phi$ descends to a surjective group homomorphism \[ \bar{\phi} : \K(\line G, e_*) \to \K(G, v_*). \] \end{lemma} \begin{proof} To show that $\phi$ descends, it suffices to show that $\phi(L_E) \subset L_V$. For any $e \in E$, we have \begin{align*} \phi(\Delta_e) &= \sum_{\mathtt{s}(f)=\mathtt{t}(e)} (\mathtt{t}(f)-\mathtt{t}(e)) = \Delta_{\mathtt{t}(e)}. \end{align*} The right side lies in $L_V$ by definition if $\mathtt{t}(e)\neq v_*$. Moreover, since $G$ is Eulerian, \[ \sum_{v \in V} \Delta_v = \sum_{e \in E} (\mathtt{t}(e)-\mathtt{s}(e)) = \sum_{v \in V} (\mathrm{indeg}(v)-\mathrm{outdeg}(v)) v = 0, \] so $\Delta_{v_*} = - \sum_{v\neq v_*} \Delta_v$ also lies in $L_V$. Finally, $\phi(e_*) = v_* \in L_V$, and hence $\phi(L_E) \subset L_V$. Since $G$ is strongly connected, every vertex has at least one incoming edge, so $\phi$ is surjective, and hence $\bar{\phi}$ is surjective. \end{proof} Let $k$ be a positive integer. We say that $G$ is \emph{balanced $k$-regular} if $\mathrm{indeg}(v)=\mathrm{outdeg}(v)=k$ for every vertex~$v$. Note that any balanced $k$-regular graph is Eulerian, and if $G$ is balanced $k$-regular, then its directed line graph $\line G$ is also balanced $k$-regular. In particular, this implies \[ \sum_{e \in E} \Delta_e = 0 \] so that $\Delta_{e_*} \in L_E$. Now consider the $\Z$-linear map \[ \psi : \Z^V \to \Z^E \] sending $v \mapsto \sum_{\mathtt{s}(e)=v} e$. For a group $\Gamma$, write $k\Gamma = \{ kg \mid g \in \Gamma\}$. \begin{lemma} \label{multiplesofk} If $G$ is balanced $k$-regular, then $\psi$ descends to a group isomorphism \[ \bar{\psi} : \K(G) \xrightarrow{\simeq} k\,\K(\line G). 
\] \end{lemma} \begin{proof} We have \[ \psi(v_*) = \Delta_{e_*} + k e_* \in L_E \] and for any vertex $v \in V$, \begin{align*} \psi(\Delta_v) &= \sum_{\mathtt{s}(e)=v} \psi(\mathtt{t}(e)) - k\psi(v) \\ &= \sum_{\mathtt{s}(e)=v} \sum_{\mathtt{s}(f)=\mathtt{t}(e)} f - k\sum_{\mathtt{s}(g)=v} g \\ &= \sum_{\mathtt{s}(e)=v} \left( \sum_{\mathtt{s}(f)=\mathtt{t}(e)} f - ke \right) \\ &= \sum_{\mathtt{s}(e)=v} \Delta_e. \end{align*} Since $\line G$ is Eulerian, the right side lies in $L_E$. Hence $\psi(L_V) \subset L_E$, and $\psi$ descends to a group homomorphism \[ \bar{\psi} : \K(G) \to \K(\line G). \] If $v$ is any vertex of $G$, and $e$ is any edge with $\mathtt{t}(e)=v$, then \[ \psi(v) = ke + \Delta_{e}, \] so the image of $\bar{\psi}$ is $k\,\K(\line G)$. To complete the proof it suffices to show that $\psi^{-1}(L_E) \subset L_V$, so that $\bar{\psi}$ is injective. If $k=1$ then $\K(G)$ is the trivial group, so there is nothing to prove. Assume now that $k\geq 2$. Given $\eta \in \Z^V$ with $\psi(\eta) \in L_E$, write \[ \psi(\eta) = \sum_{e\in E} b_e \Delta_e + b_*e_* \] for some coefficients $b_e, b_* \in \Z$. Then \begin{align*} \psi(\eta) - b_*e_* &= \sum_{e\in E} b_e \left( \sum_{\mathtt{s}(f)=\mathtt{t}(e)} f - ke \right) \\ &= \sum_{f \in E} \left( \sum_{\mathtt{t}(e)=\mathtt{s}(f)} b_e \right) f - \sum_{e \in E} kb_e e \\ &= \sum_{f \in E} \left( \sum_{\mathtt{t}(e)=\mathtt{s}(f)} b_e - kb_f \right) f. \end{align*} Now writing $\eta = \sum_{v \in V} a_v v$, so that $\psi(\eta) = \sum_{f \in E} a_{\mathtt{s}(f)} f$, equating coefficients of $f$ gives \begin{equation} \label{equatingedgecoefs} kb_f = \sum_{\mathtt{t}(e)=\mathtt{s}(f)} b_e - a_{\mathtt{s}(f)}, \qquad f\neq e_*. \end{equation} Note that the right side depends only on $\mathtt{s}(f)$. For $v \in V$, let \[ F(v) = \frac{1}{k} \sum_{\mathtt{t}(e)=v} b_e - \frac{1}{k} a_v. \] Then $b_f = F(\mathtt{s}(f))$ for all edges $f \neq e_*$. 
Since $k \geq 2$, for any $v \in V$ there exists an edge $f \neq e_*$ with $\mathtt{s}(f)=v$. Moreover if $v \neq v_*$ and $\mathtt{t}(e)=v$, then $e\neq e_*$. From (\ref{equatingedgecoefs}) we obtain \[ a_v = \sum_{\mathtt{t}(e)=v} b_e - kb_f = \sum_{\mathtt{t}(e)=v} F(\mathtt{s}(e)) - k F(v), \qquad v\neq v_*. \] Hence \begin{align*} \eta - a_{v_*}v_* = \sum_{v \neq v_*} a_v v &= \sum_{e\in E, ~\mathtt{t}(e)\neq v_*} F(\mathtt{s}(e)) \mathtt{t}(e) - \sum_{v\neq v_*} k F(v) v \\ &= \sum_{v \in V} F(v) \left( \sum_{\mathtt{s}(e)=v, ~\mathtt{t}(e)\neq v_*} \mathtt{t}(e) - k v \right) + kF(v_*)v_* \\ &= \sum_{v \in V} F(v) \Delta_v + \left(kF(v_*) - \sum_{\mathtt{t}(e)=v_*} F(\mathtt{s}(e)) \right) v_*. \end{align*} The right side lies in $L_V$, so $\eta \in L_V$, completing the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{mainsequence}] If $G$ is Eulerian, then $\phi$ descends to a surjective homomorphism of sandpile groups by Lemma~\ref{phidescends}. If $G$ is balanced $k$-regular, then $\bar{\psi}$ is injective by Lemma~\ref{multiplesofk}, so \[ \ker(\bar{\phi}) = \ker(\bar{\psi} \circ \bar{\phi}). \] Moreover for any edge $e \in E$ \[ (\psi \circ \phi)(e) = \sum_{\mathtt{s}(f)=\mathtt{t}(e)} f = ke + \Delta_e. \] Hence $\bar{\psi} \circ \bar{\phi}$ is multiplication by $k$, and $\ker(\bar{\phi})$ is the $k$-torsion subgroup of $\K(\line G)$. \end{proof} \section{Iterated Line Graphs} \label{iteratedlinegraphs} Let $G=(V,E)$ be a finite directed graph, loops and multiple edges allowed. The \emph{iterated line digraph} $\line^n G = (E_n,E_{n+1})$ has as vertices the set \[ E_n = \{ (e_1,\ldots,e_n) \in E^n \,|\, \mathtt{s}(e_{i+1})=\mathtt{t}(e_i), ~i=1,\ldots,n-1 \} \] of directed paths of $n$ edges in $G$. The edge set of $\line^n G$ is $E_{n+1}$, and the incidence is defined by \begin{align*} \mathtt{s}(e_1,\ldots,e_{n+1}) &= (e_1,\ldots,e_n); \\ \mathtt{t}(e_1,\ldots,e_{n+1}) &= (e_2,\ldots,e_{n+1}). 
\end{align*} (We also set $E_0=V$, and $\line^0 G = G$.) For example, the de~Bruijn graph~$DB_n$ is~$\line^n(DB_0)$, where~$DB_0$ is the graph with one vertex and two loops. Our next result relates the number of spanning trees of $G$ and $\line^n G$. Given a vertex $v \in V$, let \[ p(n,v) = \# \{ (e_1,\ldots,e_n)\in E_n \,|\, \mathtt{t}(e_n)=v \} \] be the number of directed paths of $n$ edges in $G$ ending at vertex $v$. \begin{theorem} \label{iterated} Let $G=(V,E)$ be a finite directed graph with no sources. Then \[ \kappa(\line^n G) = \kappa(G) \prod_{v \in V} \mbox{\em outdeg}(v)^{p(n,v)-1}. \] \end{theorem} \begin{proof} For any $j \geq 0$, by Theorem~\ref{thm:weightedtreeenum} applied to $\line^{j} G$ with all edge weights~$1$, \begin{align*} \frac{\kappa(\line^{j+1} G)}{\kappa(\line^{j} G)} &= \prod_{(e_1,\ldots,e_{j}) \in E_{j}} \mathrm{outdeg}(\mathtt{t}(e_{j}))^{\mathrm{indeg}(\mathtt{s}(e_1))-1} \\ &= \prod_{v \in V} \mathrm{outdeg}(v)^{p(j+1,v)-p(j,v)}. \end{align*} Taking the product over $j=0,\ldots,n-1$ yields the result. \end{proof} When $G$ is balanced $k$-regular, we have $p(n,v)=k^n$ for all vertices~$v$, so we obtain as a special case of Theorem~\ref{iterated} the result of Huaxiao, Fuji and Qiongxiang \cite[Theorem~1]{iterated} \[ \kappa(\line^n G) = \kappa(G) k^{(k^n -1)\#V}. \] In particular, taking $G=DB_0$ yields the classical formula \[ \kappa(DB_n) = 2^{2^n-1}. \] Since $DB_n$ is Eulerian, the number $\kappa(DB_n,v_*)$ of oriented spanning trees rooted at~$v_*$ does not depend on $v_*$, so \begin{equation} \label{twotothenminusnminusone} \kappa(DB_n,v_*)= 2^{-n}\kappa(DB_n) = 2^{2^n-n-1}. \end{equation} This familiar number counts \emph{de Bruijn sequences} of order~$n+1$ (Eulerian tours of $DB_n$) up to cyclic equivalence. 
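These two counts are easy to verify by machine for small $n$. The Python sketch below (an illustration added here, not part of the original text) builds $DB_n$ directly and evaluates the reduced-Laplacian minors exactly, checking $\kappa(DB_n,v_*)=2^{2^n-n-1}$ for every root $v_*$ and $\kappa(DB_n)=2^{2^n-1}$ for $n\leq 3$.

```python
from fractions import Fraction
from itertools import product

def de_bruijn(n):
    """DB_n: vertices are the n-bit words, with edges w -> w[1:] + b."""
    verts = [''.join(bits) for bits in product('01', repeat=n)]
    return verts, {v: [v[1:] + b for b in '01'] for v in verts}

def rooted_trees(verts, edges, root):
    """Oriented spanning trees rooted at `root`, computed as the
    principal minor of the directed Laplacian (matrix-tree theorem)."""
    idx = [v for v in verts if v != root]
    pos = {v: i for i, v in enumerate(idx)}
    k = len(idx)
    M = [[Fraction(0)] * k for _ in range(k)]
    for v in idx:
        for w in edges[v]:
            if w == v:
                continue                      # loops never lie in a tree
            M[pos[v]][pos[v]] += 1
            if w != root:
                M[pos[v]][pos[w]] -= 1
    sign, d = 1, Fraction(1)                  # exact Gaussian elimination
    for i in range(k):
        p = next((r for r in range(i, k) if M[r][i]), None)
        if p is None:
            return 0
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        for r in range(i + 1, k):
            f = M[r][i] / M[i][i]
            for c in range(i, k):
                M[r][c] -= f * M[i][c]
        d *= M[i][i]
    return int(sign * d)

for n in (1, 2, 3):
    verts, edges = de_bruijn(n)
    per_root = [rooted_trees(verts, edges, v) for v in verts]
    assert all(t == 2 ** (2 ** n - n - 1) for t in per_root)  # rooted count
    assert sum(per_root) == 2 ** (2 ** n - 1)                 # kappa(DB_n)
```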
De Bruijn sequences are in bijection with oriented spanning trees of $DB_n$ rooted at a fixed vertex~$v_*$; for more on the connection between spanning trees and Eulerian tours, see \cite{EdB51} and \cite[section 5.6]{Stanley}. Perhaps less familiar is the situation when $G$ is not regular. As an example, consider the graph \[ G = (\{0,1\},\{(0,0),(0,1),(1,0)\}). \] The vertices of its iterated line graph $\line^n G$ are binary words of length $n+1$ containing no two consecutive~$1$'s. The number of such words is the Fibonacci number~$F_{n+3}$, and the number of words ending in~$0$ is~$F_{n+2}$. By Theorem~\ref{iterated}, the number of oriented spanning trees of~$\line^n G$ is \[ \kappa(\line^n G) = 2 \cdot 2^{p(n,0)-1} = 2^{F_{n+2}}. \] Next we turn to the proofs of Theorems~\ref{deBruijn} and~\ref{Kautz}. If~$a$ and~$b$ are positive integers, we write $\Z_b^a$ for the group $(\Z/b\Z) \oplus \ldots \oplus (\Z/b\Z)$ with~$a$ summands. \begin{proof}[Proof of Theorem~\ref{deBruijn}] Induct on $n$; the base case $n=1$ is trivial, since $\#\K(DB_1) = 2^{2^1-1-1} = 1$. From (\ref{twotothenminusnminusone}) we have \[ \# \K(DB_n) = 2^{2^n -n-1} \] hence \[ \K(DB_n) = \Z_2^{a_1} \oplus \Z_4^{a_2} \oplus \Z_8^{a_3} \oplus \ldots \oplus \Z_{2^m}^{a_m} \] for some nonnegative integers $m$ and $a_1, \ldots, a_m$ satisfying \begin{equation} \label{totalexponent} \sum_{j=1}^m j a_j = 2^n - n -1. \end{equation} By Lemma~\ref{multiplesofk} and the inductive hypothesis, \begin{align*} \Z_2^{a_2} \oplus \Z_4^{a_3} \oplus \ldots \oplus \Z_{2^{m-1}}^{a_m} &\simeq 2 \K(DB_n) \\ &\simeq \K(DB_{n-1}) \\ &\simeq \Z_2^{2^{n-3}} \oplus \Z_4^{2^{n-4}} \oplus \ldots \oplus \Z_{2^{n-2}}. \end{align*} Hence $m=n-1$ and \[ a_2 = 2^{n-3}, a_3 = 2^{n-4}, \ldots, a_{n-1}=1. \] Solving (\ref{totalexponent}) for $a_1$ now yields $a_1=2^{n-2}$. 
\end{proof} For $p$ prime, by carrying out the same argument on a general balanced $p$-regular directed graph~$G$ on~$N$ vertices, we find that \[ \K(\line^n G) \simeq \tilde{K} \oplus \bigoplus_{j=1}^{n-1} (\Z_{p^j})^{p^{n-1-j}(p-1)^2 N} \oplus (\Z_{p^n})^{(p-1)N-r-1} \oplus \bigoplus_{j=1}^{m} (\Z_{p^{n+j}})^{a_j} \] where \[ \mathrm{Sylow}_p(\K(G)) = (\Z_p)^{a_1} \oplus \ldots \oplus (\Z_{p^m})^{a_m}; \] \[ \tilde{K} = \K(G) / \mathrm{Sylow}_p(\K(G)); \] \[ r = a_1 + \ldots + a_m. \] In particular, taking $G=\mathrm{Kautz}_1$ with $p=2$, we have $\K(G)=\tilde{K}=\Z_3$, and we arrive at Theorem~\ref{Kautz}. \section{Concluding Remarks} \label{concluding} Theorem~\ref{mainsequence} describes a map from the sandpile group $\K(\line G, e_*)$ to the group $\K(G,v_*)$ when $G$ is an Eulerian directed graph and $e_*=(w_*,v_*)$ is an edge of~$G$. There is also a suggestive numerical relationship between the orders of the sandpile groups $\K(\line G, e_*)$ and $\K(G,w_*)$, which holds even when $G$ is not Eulerian: by equation (\ref{rootedproductformula}) we have \[ \kappa(G,w_*) \,|\, \kappa(\line G, e_*) \] whenever $G$ satisfies the hypothesis of Theorem~\ref{thm:rootedtreeenum}. This observation leads us to ask whether $\K(G,w_*)$ can be expressed as a subgroup or quotient group of $\K(\line G, e_*)$. The area of spanning trees, Eulerian tours, and sandpile groups is full of simple enumerative results with no known bijective proofs. To give just one example, the number of de Bruijn sequences of order~$n$ (Eulerian tours of~$DB_{n-1}$) with distinguished starting edge is~$2^{2^{n-1}}$. Richard Stanley has posed the problem of finding a bijection between ordered pairs of such sequences and all~$2^{2^n}$ binary words of length~$2^n$. This problem and a number of others could be solved by giving a bijective proof of Theorem~\ref{thm:weightedtreeenum}.
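The group structure in Theorem~\ref{deBruijn}, and not just the group order, can be checked by computer for small $n$: the invariant factors of the sandpile group are read off from the Smith normal form of the reduced Laplacian over $\Z$. The following Python sketch (an illustration added here, not part of the original text) verifies $\K(DB_2)\simeq\Z_2$ and $\K(DB_3)\simeq\Z_2^2\oplus\Z_4$.

```python
from itertools import product

def reduced_laplacian(n, root=None):
    """Reduced Laplacian of DB_n (delete the row and column of `root`);
    its cokernel is the sandpile group K(DB_n)."""
    root = root or '0' * n
    verts = [''.join(b) for b in product('01', repeat=n)]
    idx = [v for v in verts if v != root]
    pos = {v: i for i, v in enumerate(idx)}
    M = [[0] * len(idx) for _ in idx]
    for v in idx:
        for b in '01':
            w = v[1:] + b
            if w == v:
                continue                        # loops cancel in the Laplacian
            M[pos[v]][pos[v]] += 1
            if w != root:
                M[pos[v]][pos[w]] -= 1
    return M

def invariant_factors(M):
    """Nontrivial diagonal entries of the Smith normal form over Z,
    computed with elementary row/column operations."""
    A = [row[:] for row in M]
    n, m = len(A), len(A[0])
    facs, t = [], 0
    while t < min(n, m):
        piv = min(((abs(A[i][j]), i, j) for i in range(t, n)
                   for j in range(t, m) if A[i][j]), default=None)
        if piv is None:
            break                               # trailing block is zero
        while True:
            _, pi, pj = min((abs(A[i][j]), i, j) for i in range(t, n)
                            for j in range(t, m) if A[i][j])
            A[t], A[pi] = A[pi], A[t]           # move pivot to (t, t)
            for row in A:
                row[t], row[pj] = row[pj], row[t]
            p = A[t][t]
            for i in range(t + 1, n):           # clear column t
                q = A[i][t] // p
                for j in range(t, m):
                    A[i][j] -= q * A[t][j]
            for j in range(t + 1, m):           # clear row t
                q = A[t][j] // p
                for i in range(t, n):
                    A[i][j] -= q * A[i][t]
            if any(A[i][t] for i in range(t + 1, n)) or \
               any(A[t][j] for j in range(t + 1, m)):
                continue                        # nonzero remainders: retry
            bad = next(((i, j) for i in range(t + 1, n)
                        for j in range(t + 1, m) if A[i][j] % p), None)
            if bad is None:
                break
            for j in range(t, m):               # force divisibility by pivot
                A[t][j] += A[bad[0]][j]
        facs.append(abs(A[t][t]))
        t += 1
    return [d for d in facs if d > 1]

assert invariant_factors(reduced_laplacian(2)) == [2]        # K(DB_2) = Z_2
assert invariant_factors(reduced_laplacian(3)) == [2, 2, 4]  # Z_2^2 + Z_4
```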
https://arxiv.org/abs/0906.2809
Sandpile groups and spanning trees of directed line graphs
We generalize a theorem of Knuth relating the oriented spanning trees of a directed graph G and its directed line graph LG. The sandpile group is an abelian group associated to a directed graph, whose order is the number of oriented spanning trees rooted at a fixed vertex. In the case when G is regular of degree k, we show that the sandpile group of G is isomorphic to the quotient of the sandpile group of LG by its k-torsion subgroup. As a corollary we compute the sandpile groups of two families of graphs widely studied in computer science, the de Bruijn graphs and Kautz graphs.
https://arxiv.org/abs/1311.5718
On floors and ceilings of the k-Catalan arrangement
The set of dominant regions of the $k$-Catalan arrangement of a crystallographic root system $\Phi$ is a well-studied object enumerated by the Fuß-Catalan number $Cat^{(k)}(\Phi)$. It is natural to refine this enumeration by considering floors and ceilings of dominant regions. A conjecture of Armstrong states that counting dominant regions by their number of floors of a certain height gives the same distribution as counting dominant regions by their number of ceilings of the same height. We prove this conjecture using a bijection that provides even more refined enumerative information.
\section{Introduction} Let $\Phi$ be a crystallographic root system of rank $n$ with simple system $S$, positive system $\Phi^+$, and ambient vector space $V$. For background on root systems see \cite{humphreys90reflection}. For $k$ a positive integer, we define the \emph{$k$-Catalan arrangement} of $\Phi$ as the hyperplane arrangement given by the hyperplanes $H_{\alpha}^r=\{x\in V\mid\langle x,\alpha\rangle=r\}$ for $\alpha\in\Phi$ and $r\in\{0,1,\ldots,k\}$. The complement of this arrangement falls apart into connected components which we call the \emph{regions} of the arrangement. Those regions $R$ that have $\langle x,\alpha\rangle>0$ for all $\alpha\in\Phi^+$ and all $x\in R$ we call \emph{dominant}. The number of dominant regions of the $k$-Catalan arrangement equals the Fu{\ss}-Catalan number $Cat^{(k)}(\Phi)$ \cite{athanasiadis04generalized} of $\Phi$. This number remains somewhat mysterious, in the sense that it also counts other objects in combinatorics, like the set of $k$-divisible noncrossing partitions $NC^{(k)}(\Phi)$ of $\Phi$ \cite[Theorem 3.5.3]{armstrong09generalized} and the number of facets of the $k$-generalised cluster complex $\Delta^{(k)}(\Phi)$ of $\Phi$ \cite[Proposition 8.4]{fomin05generalized}, but no uniform proof of this fact is known; that is, every known proof appeals to the classification of irreducible crystallographic root systems.\\ \\ For a dominant region $R$ of the $k$-Catalan arrangement, we call those hyperplanes that support a facet of $R$ the \emph{walls} of $R$. Those walls of $R$ which do not contain the origin and have the origin on the same side as $R$ we call the \emph{ceilings} of $R$. The walls of $R$ that do not contain the origin and separate $R$ from the origin are called its \emph{floors}. 
We say a hyperplane is of \emph{height} $r$ if it is of the form $H_{\alpha}^r$ for $\alpha\in\Phi^+$.\\ \\ One reason why floors and ceilings of dominant regions are interesting is that they give a more refined enumeration of the dominant regions of the $k$-Catalan arrangement of $\Phi$ that corresponds to refined enumerations of other objects counted by the Fu{\ss}-Catalan number $Cat^{(k)}(\Phi)$. More precisely, the number of dominant regions in the $k$-Catalan arrangement of $\Phi$ that have exactly $j$ floors of height $k$ equals the Fu{\ss}-Narayana number $Nar^{(k)}(\Phi,j)$ \cite[Proposition 5.1]{athanasiadis05refinement} \cite[Theorem 1]{thiel13hf}, which also counts the number of $k$-divisible noncrossing partitions of $\Phi$ of rank $j$ \cite[Definition 3.5.4]{armstrong09generalized}, as well as equalling the $(n-j)$-th entry of the $h$-vector of the $k$-generalised cluster complex $\Delta^{(k)}(\Phi)$ \cite[Theorem 10.2]{fomin05generalized}. Similarly, the number of bounded dominant regions of the $k$-Catalan arrangement of $\Phi$ that have exactly $j$ ceilings of height $k$ equals the $(n-j)$-th entry of the $h$-vector of the positive part of $\Delta^{(k)}(\Phi)$ \cite[Conjecture 1.2]{athanasiadis06cluster} \cite[Corollary 5]{thiel13hf}.\\ \\ For the special case where $\Phi$ is of type $A_{n-1}$, more is known. For example, there is an explicit bijection between the set of dominant regions of the $k$-Catalan arrangement of $\Phi$ and the set of facets of the cluster complex of $\Phi$ \cite{fishel13facets}. There is also an enumeration of those dominant regions that have a fixed hyperplane as a floor \cite{fishel13fixed}. 
In contrast to those results, all results in this paper are stated and proven uniformly for all crystallographic root systems without appeal to the classification.\\ \\ If $M$ is any set of hyperplanes of the $k$-Catalan arrangement, let $U(M)$ be the set of dominant regions $R$ of the $k$-Catalan arrangement such that all hyperplanes in $M$ are floors of $R$. Similarly, let $L(M)$ be the set of dominant regions $R'$ of the $k$-Catalan arrangement such that all hyperplanes in $M$ are ceilings of $R'$. Use the standard notation $[n]:=\{1,2,\ldots,n\}$. Then we have the following theorem. \begin{theorem}\label{bij} For any set $M=\{H_{\alpha_1}^{i_1},H_{\alpha_2}^{i_2},\ldots,H_{\alpha_m}^{i_m}\}$ of $m$ hyperplanes with $i_j\in [k]$ and $\alpha_j\in\Phi^+$ for all $j\in[m]$, there is an explicit bijection $\Theta$ from $U(M)$ to $L(M)$. \end{theorem} See Figure \ref{theta} for an example. From this theorem, we obtain some enumerative corollaries. In particular, let $fl_r(l)$ be the number of dominant regions in the $k$-Catalan arrangement that have exactly $l$ floors of height $r$, and let $cl_r(l)$ be the number of dominant regions that have exactly $l$ ceilings of height $r$ \cite[Definition 5.1.23]{armstrong09generalized}. We deduce the following conjecture of Armstrong. \begin{figure} \begin{center} \resizebox*{12.4cm}{!}{\includegraphics{b2shi2bij3}}\\ \end{center} \caption{The bijection $\Theta$ for the 2-Catalan arrangement of the root system of type $B_2$, for $M=\{H_{\alpha_2}^1\}$ and for $M=\{H_{\alpha_1}^1,H_{\alpha_2}^2\}$. The dominant chamber is shaded in grey.}\label{theta} \end{figure} \begin{corollary}[\protect{\cite[Conjecture 5.1.24]{armstrong09generalized}}]\label{arm} We have $fl_r(l)=cl_r(l)$ for all $1\leq r\leq k$ and $0\leq l\leq n$. \end{corollary} Specialising to the $k=1$ case, we also give a geometric interpretation in terms of dominant regions of the Catalan arrangement of the Panyushev complement on ideals in the root poset of $\Phi$. 
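To illustrate Corollary \ref{arm} in the smallest nontrivial case, the following worked example (added here for illustration; the region-by-region counts are easily checked by hand) treats $\Phi$ of type $A_2$ with $k=1$.

```latex
% Illustration (not in the original text): Corollary \ref{arm} for
% \Phi of type A_2 and k=1, with simple roots \alpha, \beta and
% highest root \theta = \alpha + \beta.
Write $a=\langle x,\alpha\rangle$ and $b=\langle x,\beta\rangle$. The
hyperplanes $H_{\alpha}^1$, $H_{\beta}^1$ and $H_{\theta}^1$ cut the
dominant chamber into the $Cat^{(1)}(\Phi)=5$ dominant regions
\[
\{a+b<1\},\quad \{a+b>1,\ a<1,\ b<1\},\quad \{a>1,\ b<1\},\quad
\{a<1,\ b>1\},\quad \{a>1,\ b>1\},
\]
whose numbers of floors of height $1$ are $0,1,1,1,2$ and whose numbers
of ceilings of height $1$ are $1,2,1,1,0$ respectively. Both statistics
have distribution $(1,3,1)$, so $fl_1(l)=cl_1(l)$ for $l=0,1,2$, in
agreement with the Narayana numbers.
```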
\section{Definitions} For this section and the next one, suppose that $\Phi$ is irreducible. Define the \emph{affine Coxeter arrangement} of $\Phi$ as the union of all hyperplanes of the form $H_{\alpha}^r=\{x\in V\mid\langle x,\alpha\rangle=r\}$ for $\alpha\in\Phi$ and $r\in\mathbb{Z}$. Then the complement of this falls apart into connected components, all of which are congruent open $n$-simplices, called \emph{alcoves}. The \emph{affine Weyl group} $W_a$ generated by all the reflections through hyperplanes of the form $H_{\alpha}^r$ for $\alpha\in\Phi$ and $r\in\mathbb{Z}$ is a Coxeter group, with generating set $S_a=\{s_0,s_1,\ldots,s_n\}$, where $s_1,\ldots,s_n$ are the reflections in the hyperplanes orthogonal to the simple roots of $\Phi$ and $s_0$ is the reflection in $H^1_{\tilde{\alpha}}$, where $\tilde{\alpha}$ is the highest root of $\Phi$.\\ \\ The group $W_a$ acts simply transitively on the alcoves, so if we define the \emph{fundamental alcove} as $$A_{\circ}=\{x\in V\mid \langle x,\alpha_i\rangle >0\text{ for all }\alpha_i\in S, \langle x,\tilde{\alpha}\rangle<1\}\text{,}$$ then every alcove $A$ can be written as $w(A_{\circ})$ for a unique $w\in W_a$.\\ \\ Clearly any alcove is contained in exactly one region $R$ of the $k$-Catalan arrangement of $\Phi$. For any alcove $A$ in the affine Coxeter arrangement of $\Phi$ and $\alpha\in\Phi^+$, there exists a unique integer $r$ with $r-1<\langle x,\alpha\rangle<r$ for all $x\in A$. We denote this integer by $r(A,\alpha)$.\\ \\ Suppose that for each $\alpha\in\Phi^+$ we are given a positive integer $r_{\alpha}$. The following is due to Shi \cite[Theorem 5.2]{shi87alcoves}. \begin{lemma}[\protect{\cite[Lemma 2.3]{athanasiadis06cluster}}]\label{aff} There is an alcove $A$ with $r(A,\alpha)=r_{\alpha}$ for all $\alpha\in\Phi^+$ if and only if $r_{\alpha}+r_{\beta}-1\leq r_{\alpha+\beta}\leq r_{\alpha}+r_{\beta}$ whenever $\alpha,\beta,\alpha+\beta\in\Phi^+$. 
\end{lemma} Define a partial order on $\Phi^+$ by $$\alpha\leq\beta\text{ if and only if } \beta-\alpha\in \langle S\rangle_{\mathbb{N}}\text{,}$$ that is, $\beta\geq\alpha$ if and only if $\beta-\alpha$ can be written as a linear combination of simple roots with nonnegative integer coefficients. The set of positive roots $\Phi^+$ with this partial order is called the \emph{root poset}. A subset $I\subseteq\Phi^+$ is called an \emph{ideal} if for all $\alpha\in I$ and $\beta\leq\alpha$, also $\beta\in I$. A subset $J\subseteq\Phi^+$ is called an \emph{order filter} if for all $\alpha\in J$ and $\beta\geq\alpha$, also $\beta\in J$.\\ \\ Suppose $\mathcal{I}=(I_1,I_2,\ldots,I_k)$ is an ascending (multi)chain of $k$ ideals in the root poset of $\Phi$, that is $I_1\subseteq I_2\subseteq\ldots\subseteq I_k$. Setting $J_i=\Phi^+\backslash I_i$ for $i\in[k]$ and $\mathcal{J}=(J_1,J_2,\ldots,J_k)$ gives us the corresponding descending chain of order filters. That is, we have $J_1\supseteq J_2\supseteq\ldots\supseteq J_k$. The ascending chain of ideals $\mathcal{I}$ and the corresponding descending chain of order filters $\mathcal{J}$ are both called \emph{geometric} if the following conditions are satisfied simultaneously. \begin{enumerate} \item $(I_i+I_j)\cap\Phi^+\subseteq I_{i+j}\text{ for all }i,j\in\{0,1,\ldots,k\}\text{ with }i+j\leq k\text{, and}$ \item $(J_i+J_j)\cap\Phi^+\subseteq J_{i+j}\text{ for all }i,j\in\{0,1,\ldots,k\}\text{.}$ \end{enumerate} Here we set $I_0=\varnothing$, $J_0=\Phi^+$, and $J_i=J_k$ for $i>k$. We call $\mathcal{I}$ and $\mathcal{J}$ \emph{positive} if $S\subseteq I_k$, or equivalently $S\cap J_k=\varnothing$.\\ \\ Let $R$ be a dominant region of the $k$-Catalan arrangement of $\Phi$. 
Define $\theta(R)=(I_1,I_2,\ldots,I_k)$ and $\phi(R)=(J_1,J_2,\ldots,J_k)$, where $$I_i=\{\alpha\in\Phi^+\mid\langle x,\alpha\rangle<i\text{ for all }x\in R\}\text{ and}$$ $$J_i=\{\alpha\in\Phi^+\mid\langle x,\alpha\rangle>i\text{ for all }x\in R\}\text{,}$$ for $i\in\{0,1,\ldots,k\}$. It is not difficult to verify that $\theta(R)$ is a geometric chain of ideals and that $\phi(R)$ is the corresponding geometric chain of order filters.\\ \\ For a geometric chain of ideals $\mathcal{I}=(I_1,I_2,\ldots,I_k)$, and $\alpha\in\Phi^+$, we define $$r_{\alpha}(\mathcal{I})=\text{min}\{r_1+r_2+\ldots+r_m\mid\alpha=\alpha_1+\alpha_2+\ldots+\alpha_m\text{ and }\alpha_i\in I_{r_i}\text{ for all }i\in[m]\}\text{,}$$ where we set $r_{\alpha}(\mathcal{I})=\infty$ if $\alpha$ cannot be written as a linear combination of elements in $I_k$. So $r_{\alpha}(\mathcal{I})<\infty$ for all $\alpha\in\Phi^+$ if and only if $\mathcal{I}$ is positive.\\ \\ For a geometric chain of order filters $\mathcal{J}=(J_1,J_2,\ldots,J_k)$, and $\alpha\in\Phi^+$, we define $$k_{\alpha}(\mathcal{J})=\text{max}\{k_1+k_2+\ldots+k_m\mid\alpha=\alpha_1+\alpha_2+\ldots+\alpha_m\text{ and }\alpha_i\in J_{k_i}\text{ for all }i\in[m]\}\text{,}$$ where $k_i\in\{0,1,\ldots,k\}$ for all $i\in[m]$.\\ \\ It turns out that $\phi$ is a bijection from the set of dominant regions of the $k$-Catalan arrangement of $\Phi$ to the set of geometric chains of $k$ order filters in the root poset of $\Phi$ \cite[Theorem 3.6]{athanasiadis05refinement}. Its inverse $\psi$ is the map sending a geometric chain of order filters $\mathcal{J}$ to the region $R$ of the $k$-Catalan arrangement containing the alcove $A$ with $r(A,\alpha)=k_{\alpha}(\mathcal{J})+1$ for all $\alpha\in\Phi^+$. This alcove $A$ is called the \emph{minimal alcove} of $R$. 
Its floors are exactly the floors of $R$ \cite[Theorem 3.11]{athanasiadis05refinement}.\\ \\ Thus the map $\theta$ is a bijection from dominant regions $R$ of the $k$-Catalan arrangement to geometric chains of ideals $\mathcal{I}$. It restricts to a bijection between bounded dominant regions of the $k$-Catalan arrangement and positive geometric chains of ideals. The inverse of this restriction maps a positive geometric chain of ideals $\mathcal{I}$ to the bounded dominant region $R$ in the $k$-Catalan arrangement containing the alcove $B$ with $r(B,\alpha)=r_{\alpha}(\mathcal{I})$ for all $\alpha\in\Phi^+$ \cite[Theorem 3.6]{athanasiadis06cluster}. This alcove $B$ is called the \emph{maximal alcove} of $R$. Its ceilings are exactly the ceilings of $R$ \cite[Theorem 3.11]{athanasiadis06cluster}.\\ \\ We call $\alpha\in\Phi^+$ a \emph{rank $r$ indecomposable element} \cite[Definition 3.8]{athanasiadis05refinement} of a geometric chain of order filters $\mathcal{J}=(J_1,J_2,\ldots,J_k)$ if $\alpha\in J_r$ and \begin{enumerate} \item $k_{\alpha}(\mathcal{J})=r$, \item $\alpha\notin J_i+J_j\text{ for }i+j=r$ and \item if $k_{\alpha+\beta}(\mathcal{J})=t\leq k$ for some $\beta\in\Phi^+$ then $\beta\in J_{t-r}$. \end{enumerate} \vphantom{Fnord} We have that $H^r_{\alpha}$ is a floor of $R$ if and only if $\alpha$ is a rank $r$ indecomposable element of the geometric chain of order filters $\mathcal{J}=\phi(R)$ \cite[Theorem 3.11]{athanasiadis05refinement}.\\ \\ We call $\alpha\in\Phi^+$ a \emph{rank $r$ indecomposable element} \cite[Definition 3.8]{athanasiadis06cluster} of a geometric chain of ideals $\mathcal{I}=(I_1,I_2,\ldots,I_k)$ if $\alpha\in I_r$ and \begin{enumerate} \item $r_{\alpha}(\mathcal{I})=r$, \item $\alpha\notin I_i+I_j\text{ for }i+j=r$ and \item if $r_{\alpha+\beta}(\mathcal{I})=t\leq k$ for some $\beta\in\Phi^+$ then $\beta\in I_{t-r}$. 
\end{enumerate} \vphantom{Fnord} We will soon see that $H^r_{\alpha}$ is a ceiling of $R$ if and only if $\alpha$ is a rank $r$ indecomposable element of the geometric chain of ideals $\mathcal{I}=\theta(R)$. \section{Lemmas} Our aim for this rather technical section is to prove the following theorem. \begin{theorem}\label{ind=ceil} Let $R$ be a dominant region in the $k$-Catalan arrangement of $\Phi$, $\mathcal{I}=\theta(R)$ and $\alpha\in\Phi^+$. Then $R$ contains an alcove $B$ such that for all $r\in[k]$ the following are equivalent: \begin{enumerate} \item $H_{\alpha}^r$ is a ceiling of $R$, \item $\alpha$ is a rank $r$ indecomposable element of $\mathcal{I}$, and \item $H_{\alpha}^r$ is a ceiling of $B$. \end{enumerate} \end{theorem} It is already known that Theorem \ref{ind=ceil} holds for bounded dominant regions \cite[Theorem 3.11]{athanasiadis06cluster}. In that case, we may take the alcove $B$ to be the maximal alcove of the bounded region $R$.\\ \\ Our approach to proving Theorem \ref{ind=ceil} is to note that when a region $R$ of the $k$-Catalan arrangement is subdivided into regions of the $(k+1)$-Catalan arrangement by hyperplanes of the form $H_{\alpha}^{k+1}$ for $\alpha\in\Phi^+$, at least one of the resulting regions is bounded. We find a region $\underline{R}$ of the $(k+1)$-Catalan arrangement which, among the bounded regions of the $(k+1)$-Catalan arrangement that are contained in $R$, is the one furthest away from the origin. We call the maximal alcove $B$ of $\underline{R}$ the \emph{pseudomaximal} alcove of $R$. It equals the maximal alcove of $R$ if $R$ is bounded. The alcove $B\subseteq R$ will be seen to satisfy the assertion of Theorem \ref{ind=ceil}. Instead of working directly with the dominant regions of the $k$- and $(k+1)$-Catalan arrangements, we usually phrase our results in terms of the corresponding geometric chains of ideals. 
\begin{figure}[h] \begin{center} \includegraphics{b2shi24}\\ \end{center} \caption{The dominant regions of the 2-Catalan arrangement of the root system of type $B_2$ together with their pseudomaximal alcove, shaded in grey.} \end{figure} We require the following lemmas: \begin{lemma}[\protect{\cite[Lemma 2.1 (ii)]{athanasiadis05refinement}}]\label{2.1} If $\alpha_1,\alpha_2,\ldots,\alpha_r\in\Phi$ and $\alpha_1+\alpha_2+\ldots+\alpha_r=\alpha\in\Phi$, then $\alpha_1=\alpha$ or there exists $i$ with $2\leq i\leq r$ such that $\alpha_1+\alpha_i\in\Phi\cup\{0\}$. \end{lemma} \begin{lemma}[\protect{\cite[Lemma 3.2]{athanasiadis06cluster}}]\label{inI} For $\alpha\in\Phi^+$ and $r_{\alpha}(\mathcal{I})=r\leq k$, we have that $\alpha\in I_{r}$. \end{lemma} \begin{lemma}[\protect{\cite[Lemma 3.10]{athanasiadis06cluster}}]\label{rind} Suppose $\alpha$ is an indecomposable element of $\mathcal{I}$. Then \begin{enumerate} \item $r_{\alpha}(\mathcal{I})=r_{\beta}(\mathcal{I})+r_{\gamma}(\mathcal{I})-1$ if $\alpha=\beta+\gamma$ for $\beta,\gamma\in\Phi^+$ and \item $r_{\alpha}(\mathcal{I})+r_{\beta}(\mathcal{I})=r_{\alpha+\beta}(\mathcal{I})$ if $\beta,\alpha+\beta\in\Phi^+$. \end{enumerate} \end{lemma} \begin{lemma}\label{ideal} If $\alpha,\beta,\gamma\in\Phi^+$, $\beta+\gamma\in\Phi^+$ and $\alpha\leq\beta+\gamma$, then $\alpha\leq\beta$ or $\alpha\leq\gamma$ or $\alpha=\beta'+\gamma'$ with $\beta',\gamma'\in\Phi^+$, $\beta'\leq\beta$ and $\gamma'\leq\gamma$. \begin{proof} Let $\alpha=\beta+\gamma-\sum_{j\in J}\alpha_j$ with $\alpha_j\in S$ for all $j\in J$. We proceed by induction on $|J|$. If $|J|=0$, we are done. If $|J|=1$, we have that $\alpha=-\alpha_i+\beta+\gamma$ for some $\alpha_i\in S$. Thus by Lemma \ref{2.1}, we have either $\alpha=-\alpha_i$ (a contradiction), or $\beta'=\beta-\alpha_i\in\Phi\cup\{0\}$ or $\gamma'=\gamma-\alpha_i\in\Phi\cup\{0\}$. Notice that if $\beta'\neq0$, then $\beta'\in\Phi^+$, and similarly for $\gamma'$. 
So if $\beta'\in\Phi^+$ we may write $\alpha=\beta'+\gamma$ and otherwise we have $\gamma'\in\Phi^+$ and thus $\alpha=\beta+\gamma'$ as required.\\ \\ If $|J|>1$, we have $\alpha+\sum_{j\in J}\alpha_j=\beta+\gamma$, so by Lemma \ref{2.1}, either $\alpha=\beta+\gamma$, so we are done, or $\alpha+\alpha_j\in\Phi\cup\{0\}$ for some $j\in J$. In the latter case we even have $\alpha+\alpha_j\in\Phi^+$. By induction hypothesis, $\alpha+\alpha_j\leq\beta$ or $\alpha+\alpha_j\leq\gamma$ or $\alpha+\alpha_j=\beta'+\gamma'$ with $\beta',\gamma'\in\Phi^+$, $\beta'\leq\beta$ and $\gamma'\leq\gamma$. In the first two cases, we are done. In the latter case, we have $\alpha=-\alpha_j+\beta'+\gamma'$, so we proceed as in the $|J|=1$ case. \end{proof} \end{lemma} We are now ready to define the bounded dominant region $\underline{R}$ of the $(k+1)$-Catalan arrangement in terms of the corresponding geometric chain of $k+1$ ideals $\underline{\mathcal{I}}$. For a geometric chain of ideals $\mathcal{I}=(I_1,I_2,\ldots,I_k)$, let $\underline{I}_i=I_i$ for all $i\in [k]$ and let $\underline{I}_{k+1}=\bigcup_{i+j=k+1}((I_i+I_j)\cap\Phi^+)\cup I_k\cup S$. By Lemma \ref{ideal}, $\underline{I}_{k+1}$ is an ideal. Define $\underline{\mathcal{I}}=(\underline{I}_1,\ldots,\underline{I}_{k+1})$. \begin{lemma}\label{Ibar} If $\mathcal{I}=(I_1,I_2,\ldots,I_k)$ is a geometric chain of $k$ ideals in the root poset of $\Phi$, then $\underline{\mathcal{I}}$ is a positive geometric chain of $k+1$ ideals. The bounded dominant region $\underline{R}=\theta^{-1}(\underline{\mathcal{I}})$ of the $(k+1)$-Catalan arrangement of $\Phi$ is contained in the region $R=\theta^{-1}(\mathcal{I})$ of the $k$-Catalan arrangement. \begin{proof} By construction, $\underline{\mathcal{I}}$ is an ascending chain of ideals. If $i+j\leq k$, we have that $(\underline{I}_i+\underline{I}_j)\cap\Phi^+=(I_i+I_j)\cap\Phi^+\subseteq I_{i+j}=\underline{I}_{i+j}$ as $\mathcal{I}$ is geometric. 
If $i+j=k+1$ with $i,j\neq0$ (otherwise the result is trivial) we have that $(\underline{I}_i+\underline{I}_j)\cap\Phi^+=(I_i+I_j)\cap\Phi^+\subseteq\bigcup_{i+j=k+1}((I_i+I_j)\cap\Phi^+)\subseteq\underline{I}_{i+j}$.\\ \\ Let $\mathcal{J}=(J_1,J_2,\ldots,J_k)$ be the geometric chain of order filters corresponding to the geometric chain of ideals $\mathcal{I}$. Define $\underline{\mathcal{J}}$ similarly. We need to verify that $(\underline{J}_i+\underline{J}_j)\cap\Phi^+\subseteq \underline{J}_{i+j}$ for all $i,j\in[k+1]$.\\ \\ Suppose first that $i+j\leq k$. Then $(\underline{J}_i+\underline{J}_j)\cap\Phi^+=(J_i+J_j)\cap\Phi^+\subseteq J_{i+j}=\underline{J}_{i+j}$ since $\mathcal{J}$ is geometric.\\ \\ Suppose next that $i+j=k+1$. Take any region $R'$ of the $(k+1)$-Catalan arrangement that is contained in $R$. Let $\theta(R')=\mathcal{I}'=(I_1',I_2',\ldots,I_{k+1}')$ be the geometric chain of ideals corresponding to $R'$ and let $\mathcal{J}'=(J_1',J_2',\ldots,J_{k+1}')$ be the corresponding geometric chain of order filters. Then $R$ and $R'$ are on the same side of each hyperplane of the $k$-Catalan arrangement. Thus $I_l'=I_l$ and $J_l'=J_l$ for $l\in[k]$. Thus we have $\underline{I}_{k+1}=\bigcup_{i+j=k+1}((I_i+I_j)\cap\Phi^+)\cup I_k\cup S=\bigcup_{i+j=k+1}((I'_i+I'_j)\cap\Phi^+)\cup I'_k\cup S\subseteq I_{k+1}'\cup S$ since $\mathcal{I}'$ is geometric. Since $\mathcal{J}'$ is geometric, we have $(\underline{J}_i+\underline{J}_j)\cap\Phi^+=(J_i'+J_j')\cap\Phi^+\subseteq J_{i+j}'=J_{k+1}'$. The sum of two positive roots is never a simple root, so we even have $(\underline{J}_i+\underline{J}_j)\cap\Phi^+\subseteq J_{k+1}'\backslash{S}$. But $J_{k+1}'\backslash{S}\subseteq \underline{J}_{k+1}$, as $\underline{I}_{k+1}\subseteq I'_{k+1}\cup S$. 
Thus $(\underline{J}_i+\underline{J}_j)\cap\Phi^+\subseteq\underline{J}_{i+j}$.\\ \\ Lastly, in the case where $i+j>k+1$, we have $\underline{J}_j\subseteq\underline{J}_{k+1-i}$, so that $(\underline{J}_i+\underline{J}_j)\cap\Phi^+\subseteq(\underline{J}_i+\underline{J}_{k+1-i})\cap\Phi^+\subseteq\underline{J}_{k+1}=\underline{J}_{i+j}$.\\ \\ Thus the chain of ideals $\underline{\mathcal{I}}$ is geometric. It is also clearly positive, so $\underline{R}=\theta^{-1}(\underline{\mathcal{I}})$ is bounded. Since $\underline{I}_i=I_i$ for $i\in[k]$, $\underline{R}$ and $R$ are on the same side of each hyperplane of the $k$-Catalan arrangement, so $\underline{R}$ is contained in $R$. \end{proof} \end{lemma} For a geometric chain of $k$ ideals $\mathcal{I}=(I_1,I_2,\ldots,I_k)$, define $\mathsf{supp}(\mathcal{I})=I_k\cap S$. In particular, $\mathsf{supp}(\mathcal{I})=S$ if and only if $\mathcal{I}$ is positive. \begin{lemma}\label{r=r3} If $\alpha\in\langle \mathsf{supp}(\mathcal{I})\rangle_{\mathbb{N}}$, then $r_{\alpha}(\underline{\mathcal{I}})=r_{\alpha}(\mathcal{I})$. In particular, if $r_{\alpha}(\underline{\mathcal{I}})\leq k$, then $r_{\alpha}(\underline{\mathcal{I}})=r_{\alpha}(\mathcal{I})$. \begin{proof} First note that $\alpha\in\langle \mathsf{supp}(\mathcal{I})\rangle_{\mathbb{N}}$ implies that $r_{\alpha}(\mathcal{I})<\infty$. So we may write $\alpha=\alpha_1+\alpha_2+\ldots+\alpha_m$ with $\alpha_i\in I_{r_i}$ for $i\in[m]$ and $r_1+r_2+\ldots+r_m=r_{\alpha}(\mathcal{I})$. Since $\alpha_i\in I_{r_i}=\underline{I}_{r_i}$ this implies that $r_{\alpha}(\underline{\mathcal{I}})\leq r_{\alpha}(\mathcal{I})$.\\ \\ We may write $\alpha=\alpha_1+\alpha_2+\ldots+\alpha_m$ with $\alpha_i\in \underline{I}_{r_i}$ for $i\in[m]$ and $r_1+r_2+\ldots+r_m=r_{\alpha}(\underline{\mathcal{I}})$. We wish to show that $r_{\alpha}(\mathcal{I})\leq r_{\alpha}(\underline{\mathcal{I}})$.
Thus we seek to write $\alpha=\alpha_1'+\alpha_2'+\ldots+\alpha_l'$ with $\alpha_i'\in I_{r_i'}$ for $i\in[l]$ and $r_1'+r_2'+\ldots+r_l'=r_{\alpha}(\underline{\mathcal{I}})$. If $r_p=k+1$ for some $p\in[m]$, then $\alpha_p\in\underline{I}_{k+1}=\bigcup_{i+j=k+1}((I_i+I_j)\cap\Phi^+)\cup I_k\cup S$. If $\alpha_p\in I_k=\underline{I}_k$, we get a contradiction with the minimality of $r_{\alpha}(\underline{\mathcal{I}})$. If $\alpha_p\in S$, then since $\alpha_p\in\langle \mathsf{supp}(\mathcal{I})\rangle_{\mathbb{N}}$, we have that $\alpha_p\in\mathsf{supp}(\mathcal{I})\subseteq I_k$, again a contradiction. So $\alpha_p\in\bigcup_{i+j=k+1}((I_i+I_j)\cap\Phi^+)$. Thus we may write $\alpha_p=\beta_p+\beta_p'$, where $\beta_p\in I_i$ and $\beta_p'\in I_j$ for some $i,j$ with $i+j=k+1$. So in the sum $\alpha=\alpha_1+\alpha_2+\ldots+\alpha_m$, replace each $\alpha_p$ for which $r_p=k+1$ by $\beta_p+\beta_p'$ to obtain (after renaming) $\alpha=\alpha_1'+\alpha_2'+\ldots+\alpha_l'$ with $\alpha_i'\in I_{r_i'}$ for $i\in[l]$ and $r_1'+r_2'+\ldots+r_l'=r_{\alpha}(\underline{\mathcal{I}})$, as required.\\ \\ If $r_{\alpha}(\underline{\mathcal{I}})=r\leq k$, then $\alpha\in I_r\subseteq I_k$ by Lemma \ref{inI}, so $\alpha\in\langle \mathsf{supp}(\mathcal{I})\rangle_{\mathbb{N}}$ and thus $r_{\alpha}(\underline{\mathcal{I}})=r_{\alpha}(\mathcal{I})$. \end{proof} \end{lemma} For $R$ a dominant region of the $k$-Catalan arrangement, define the \emph{pseudomaximal} alcove of $R$ to be the maximal alcove of $\underline{R}$. This term is justified by the following proposition. \begin{proposition} If $R$ is a bounded dominant region of the $k$-Catalan arrangement, its pseudomaximal alcove is equal to its maximal alcove. \begin{proof} Let $A$ and $B$ be the maximal and pseudomaximal alcoves of $R$ respectively. If $\mathcal{I}=\theta(R)$, then $r(\alpha,A)=r_{\alpha}(\mathcal{I})$ for all $\alpha\in\Phi^+$.
Since $B$ is the maximal alcove of $\underline{R}$, we have $r(\alpha,B)=r_{\alpha}(\underline{\mathcal{I}})$ for all $\alpha\in\Phi^+$. Now $\mathcal{I}$ is positive since $R$ is bounded, so $\mathsf{supp}(\mathcal{I})=S$. Thus $r_{\alpha}(\mathcal{I})=r_{\alpha}(\underline{\mathcal{I}})$ for all $\alpha\in\Phi^+$ by Lemma \ref{r=r3}. So $r(\alpha,A)=r(\alpha,B)$ for all $\alpha\in\Phi^+$ and therefore $A=B$. \end{proof} \end{proposition} \begin{lemma}\label{max} Let $R$ be a dominant region of the $k$-Catalan arrangement of $\Phi$, let $B$ be its pseudomaximal alcove, let $\alpha\in\Phi^+$ and let $t\leq k$ be a positive integer. If $\langle x_0,\alpha\rangle>t$ for some $x_0\in R$, then $\langle x,\alpha\rangle>t$ for all $x\in B$. \begin{proof} Let $\mathcal{I}=\theta(R)$. Since $r(\alpha,B)=r_{\alpha}(\underline{\mathcal{I}})$ for all $\alpha\in\Phi^+$, it suffices to show that $r_{\alpha}(\underline{\mathcal{I}})>t$. If $r_{\alpha}(\underline{\mathcal{I}})>k$ this is immediate, so we may assume that $r_{\alpha}(\underline{\mathcal{I}})\leq k$. Thus we have $r_{\alpha}(\underline{\mathcal{I}})=r_{\alpha}(\mathcal{I})$ by Lemma \ref{r=r3}. Write $\alpha=\alpha_1+\alpha_2+\ldots+\alpha_m$, with $\alpha_i\in I_{r_i}$ for all $i\in[m]$ and $r_1+r_2+\ldots+r_m=r_{\alpha}(\mathcal{I})$. Then $\langle x,\alpha_i\rangle<r_i$ for all $i\in[m]$ and $x\in R$, so $\langle x,\alpha\rangle<r_{\alpha}(\mathcal{I})$ for all $x\in R$. So if $\langle x_0,\alpha\rangle>t$ for some $x_0\in R$, then $r_{\alpha}(\mathcal{I})>\langle x_0,\alpha\rangle>t$, so $r_{\alpha}(\underline{\mathcal{I}})=r_{\alpha}(\mathcal{I})>t$. \end{proof} \end{lemma} \begin{lemma}\label{ind} If $\alpha$ is a rank $r$ indecomposable element of $\mathcal{I}$, then $\alpha$ is a rank $r$ indecomposable element of $\underline{\mathcal{I}}$. \begin{proof} Let $\alpha$ be a rank $r$ indecomposable element of $\mathcal{I}$.
Then $\alpha\in I_r=\underline{I}_r$, and $r_{\alpha}(\underline{\mathcal{I}})=r_{\alpha}(\mathcal{I})=r$ by Lemma \ref{r=r3}. We have that $\alpha\notin I_i+I_j=\underline{I}_i+\underline{I}_j$ for $i+j=r$. Now let $\beta\in\Phi^+$ be such that $\alpha+\beta\in\Phi^+$. If $r_{\alpha+\beta}(\underline{\mathcal{I}})=t\leq k+1$, then $\alpha+\beta\in \underline{I}_t$ by Lemma \ref{inI}. So if $t\leq k$, we have $r_{\alpha+\beta}(\mathcal{I})=r_{\alpha+\beta}(\underline{\mathcal{I}})$ by Lemma \ref{r=r3}. If $t=k+1$, then $\alpha+\beta\in I_k$ or $\alpha+\beta\in\bigcup_{i+j=k+1}((I_i+I_j)\cap\Phi^+)$, since $\alpha+\beta\notin S$. Either way, $\alpha+\beta\in \langle I_k\rangle_{\mathbb{N}}$ so $r_{\alpha+\beta}(\mathcal{I})=r_{\alpha+\beta}(\underline{\mathcal{I}})$ by Lemma \ref{r=r3}. Thus we have $r_{\alpha}(\mathcal{I})+r_{\beta}(\mathcal{I})=r_{\alpha+\beta}(\mathcal{I})=r_{\alpha+\beta}(\underline{\mathcal{I}})=t$ using Lemma \ref{rind}. So $r_{\beta}(\mathcal{I})=t-r_{\alpha}(\mathcal{I})=t-r$, so $\beta\in I_{t-r}=\underline{I}_{t-r}$ by Lemma \ref{inI}. Thus $\alpha$ is a rank $r$ indecomposable element of $\underline{\mathcal{I}}$. \end{proof} \end{lemma} \begin{lemma}\label{ceil} If $\alpha\in\Phi^+$ and $H_{\alpha}^r$ is a ceiling of a dominant region $R$ of the $k$-Catalan arrangement, then $\alpha$ is a rank $r$ indecomposable element of $\mathcal{I}=\theta(R)$. \begin{proof} Since the origin and $R$ are on the same side of $H_{\alpha}^r$, we have that $\langle x,\alpha\rangle<r$ for all $x\in R$, so $\alpha\in I_r$ and thus $r_{\alpha}(\mathcal{I})\leq r$. But if $r_{\alpha}(\mathcal{I})=i<r$, then $\alpha\in I_i$ by Lemma \ref{inI}, so $\langle x,\alpha\rangle<i\leq r-1$ for all $x\in R$. So $H_{\alpha}^r$ is not a wall of $R$, a contradiction.
Thus $r_{\alpha}(\mathcal{I})=r$.\\ \\ If $\alpha=\beta+\gamma$ for $\beta\in I_i$ and $\gamma\in I_j$ with $i+j=r$, then the fact that $\langle x,\alpha\rangle<r$ for all $x\in R$ is a consequence of $\langle x,\beta\rangle<i$ and $\langle x,\gamma\rangle<j$ for all $x\in R$, so $H_{\alpha}^r$ does not support a facet of $R$. So $\alpha\notin I_i+I_j$ for $i+j=r$.\\ \\ If $\beta\in\Phi^+$ is such that $\alpha+\beta\in\Phi^+$ and $r_{\alpha+\beta}(\mathcal{I})=t\leq k$, then $\alpha+\beta\in I_t$ by Lemma \ref{inI}, so $\langle x,\alpha+\beta\rangle<t$ for all $x$ in $R$. If also $\langle x,\beta\rangle>t-r$ for all $x\in R$, then $\langle x,\alpha\rangle<r$ for all $x\in R$ is a consequence of these, so $H_{\alpha}^r$ does not support a facet of $R$. So $\langle x,\beta\rangle<t-r$ for all $x\in R$, so $\beta\in I_{t-r}$.\\ \\ Thus $\alpha$ is a rank $r$ indecomposable element of $\mathcal{I}$. \end{proof} \end{lemma} \begin{proof}[Proof of Theorem \ref{ind=ceil}] We take $B$ to be the pseudomaximal alcove of $R$, that is, the maximal alcove of $\underline{R}$. We will show that (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (1).\\ \\ The statement that (1) $\Rightarrow$ (2) is Lemma \ref{ceil}.\\ \\ For (2) $\Rightarrow$ (3), suppose $\alpha$ is a rank $r$ indecomposable element of $\mathcal{I}$. Then by Lemma \ref{ind}, $\alpha$ is also a rank $r$ indecomposable element of $\underline{\mathcal{I}}$. So by Lemma \ref{rind}, we have $r_{\alpha}(\underline{\mathcal{I}})=r_{\beta}(\underline{\mathcal{I}})+r_{\gamma}(\underline{\mathcal{I}})-1$ if $\alpha=\beta+\gamma$ for $\beta,\gamma\in\Phi^+$, and also $r_{\alpha}(\underline{\mathcal{I}})+r_{\beta}(\underline{\mathcal{I}})=r_{\alpha+\beta}(\underline{\mathcal{I}})$ if $\beta,\alpha+\beta\in\Phi^+$. Thus there exists an alcove $B'$ with $r(\beta,B')=r_{\beta}(\underline{\mathcal{I}})$ for $\beta\neq\alpha$ and $r(\alpha,B')=r_{\alpha}(\underline{\mathcal{I}})+1$ by Lemma \ref{aff}.
Since $r(\beta,B)=r_{\beta}(\underline{\mathcal{I}})$ for all $\beta\in\Phi^+$, this means that $B'$ and $B$ are on the same side of each hyperplane of the affine Coxeter arrangement, except for $H_{\alpha}^{r_{\alpha}(\mathcal{I})}=H_{\alpha}^r$. Thus $H_{\alpha}^r$ is a wall of $B$. Since $H_{\alpha}^r$ does not separate $B$ from the origin, it is a ceiling of $B$. \\ \\ For (3) $\Rightarrow$ (1), suppose $H_{\alpha}^r$ is a ceiling of $B$. Let $B'$ be the alcove which is the reflection of $B$ in the hyperplane $H_{\alpha}^r$. Then $\langle x,\alpha\rangle>r$ for all $x\in B'$, so by Lemma \ref{max} the alcove $B'$ is not contained in $R$. Thus $H_{\alpha}^r$ is a wall of $R$. It does not separate $R$ from the origin, so it is a ceiling of $R$. This completes the proof. \end{proof} \section{Proof of Theorem \ref{bij}} We are now in a position to prove Theorem \ref{bij}. \begin{proof}[Proof of Theorem \ref{bij}] Let us first suppose that $\Phi$ is an irreducible crystallographic root system of rank $n$. For $m=0$, the statement is immediate. Suppose that $0<m\leq n$.\\ \\ To define the bijection $\Theta$, let $R\in U(M)$ and let $A$ be the minimal alcove of $R$. The reflections $s^{i_1}_{\alpha_1},\ldots,s^{i_m}_{\alpha_m}$ in the hyperplanes $H^{i_1}_{\alpha_1},\ldots,H^{i_m}_{\alpha_m}$ are reflections in facets of the alcove $A=w(A_{\circ})$, so the set $S'=\{s^{i_1}_{\alpha_1},\ldots,s^{i_m}_{\alpha_m}\}$ equals $wJw^{-1}$ for some $J\subset S_a$ and $w\in W_a$. Thus the reflection group $W'$ generated by $S'$ is a proper parabolic subgroup of $W_a$. In particular, it is finite.
With respect to the finite reflection group $W'$, the alcove $A$ is contained in the dominant Weyl chamber, that is the set $$C=\{x\in V\mid\langle x,\alpha_j\rangle>i_j\text{ for all }j\in[m]\}\text{.}$$ So if $w_0'$ is the longest element of $W'$ with respect to the generating set $S'$, the alcove $A'=w_0'(A)$ is contained in the Weyl chamber $$w_0'(C)=\{x\in V\mid\langle x,\alpha_j\rangle<i_j\text{ for all }j\in[m]\}$$ of $W'$, so it is on the other side of all the hyperplanes $H^{i_1}_{\alpha_1},\ldots,H^{i_m}_{\alpha_m}$. $A'$ is an alcove, so it is contained in some region $R'$. Set $\Theta(R)=R'$. \begin{figure}[h] \begin{center} \includegraphics{b2shi2bij4} \end{center} \caption{The bijection $\Theta$ for the 2-Catalan arrangement of the root system of type $B_2$ with $M=\{H_{\alpha_2}^1,H_{2\alpha_1+\alpha_2}^2\}$.} \end{figure} \begin{claim}\label{inlm} The region $R'$ is dominant and all hyperplanes in $M$ are ceilings of $R'$, that is $R'\in L(M)$, so $\Theta$ is well-defined. \begin{proof} The origin is contained in the Weyl chamber $w_0'(C)$ of $W'$. Thus no reflection in $W'$ fixes the origin. We can write $A'=w_0'(A)$ as $t_r\cdots t_1(A)$ where $t_i\in W'$ is a reflection in a facet of $t_{i-1}\cdots t_1(A)$ for all $i\in[r]$. In fact, if $w_0'=s_1'\cdots s_r'$ with $s_i'\in S'$ for all $i\in[r]$ is a reduced expression for $w_0'$ in $W'$, we can take $t_i=s_1'\cdots s_{i-1}'s_i's_{i-1}'\cdots s_1'$. So $t_i\cdots t_1(A)$ and $t_{i-1}\cdots t_1(A)$ are on the same side of every hyperplane in the affine Coxeter arrangement of $\Phi$ except for the reflecting hyperplane of $t_i$. Since $t_i$ does not fix the origin, if $t_{i-1}\cdots t_1(A)$ is dominant, then so is $t_{i}\cdots t_1(A)$. Thus by induction on $i$, the alcove $A'$ is dominant, so $R'$ is dominant.\\ \\ Consider the Coxeter arrangement of $W'$, which is the hyperplane arrangement given by the reflecting hyperplanes of all the reflections in $W'$. 
The action of $W'$ on $V$ restricts to an action on the set of these hyperplanes. Since $H^{i_1}_{\alpha_1},\ldots,H^{i_m}_{\alpha_m}$ support facets of $A$, $w_0'(H^{i_1}_{\alpha_1}),\ldots,w_0'(H^{i_m}_{\alpha_m})$ support facets of $A'=w_0'(A)$. Now the set $\{w_0'(H^{i_1}_{\alpha_1}),\ldots,w_0'(H^{i_m}_{\alpha_m})\}$ is the set of walls of $w_0'(C)$ in the Coxeter arrangement of $W'$, so it equals the set $M=\{H^{i_1}_{\alpha_1},\ldots,H^{i_m}_{\alpha_m}\}$. Since all hyperplanes in $M$ are floors of $A$, and $A'$ is on the other side of each of them, they are all ceilings of $A'$. Thus they are ceilings of $R'$. \end{proof} \end{claim} We show that $\Theta$ is a bijection by exhibiting its inverse $\Psi$, a map from $L(M)$ to $U(M)$. Suppose $R'\in L(M)$. Let $B$ be the alcove in $R'$ given by Theorem \ref{ind=ceil}. Let $R''$ be the region that contains $B'=w_0'(B)$. Similarly to the proof of Claim \ref{inlm}, we have that $R''\in U(M)$. So let $\Psi(R')=R''$. \begin{claim} The maps $\Theta$ and $\Psi$ are inverse to each other, so $\Theta$ is a bijection. \begin{proof} Suppose $R\in U(M)$, $R'=\Theta(R)$ and $R''=\Psi(R')$. Use the same notation as above for the alcoves $A,A',B$ and $B'$. Suppose for contradiction that $R''\neq R$. Then there is a hyperplane $H=H^r_{\alpha}$ of the $k$-Catalan arrangement that separates $R$ and $R''$. So $H$ separates $A$ and $B'$. Now $A$ and $B'$ are in the dominant Weyl chamber of $W'$, so they are on the same side of each reflecting hyperplane of $W'$. Thus $H$ is not a reflecting hyperplane of $W'$. Now we may write $A'$ as $t_r\cdots t_1(A)$, where $t_i\in W'$ is a reflection in a facet of $t_{i-1}\cdots t_1(A)$ for all $i\in[r]$. So $t_i\cdots t_1(A)$ and $t_{i-1}\cdots t_1(A)$ are on the same side of every hyperplane in the affine Coxeter arrangement, except for the reflecting hyperplane of $t_i$, which cannot be $H$. Thus by induction on $i$, the alcove $A'$ is on the same side of $H$ as $A$. 
Similarly $B$ is on the same side of $H$ as $B'$. So $A'$ and $B$ are on different sides of $H$, a contradiction, as they are contained in the same region, namely $R'$. Thus $\Psi(\Theta(R))=R''=R$, so $\Psi\circ\Theta=id$. Similarly $\Theta\circ\Psi=id$, so $\Theta$ and $\Psi$ are inverse to each other, so $\Theta$ is a bijection. \end{proof} \end{claim} For any dominant alcove, at least one of its $n+1$ facets must either be a floor or contain the origin, and at least one must be a ceiling. So it has at most $n$ ceilings and at most $n$ floors. So any dominant region $R$ of the $k$-Catalan arrangement has at most $n$ ceilings and at most $n$ floors. Thus if $m>n$, both $U(M)$ and $L(M)$ are empty. This completes the proof in the case where $\Phi$ is irreducible.\\ \\ Now suppose $\Phi$ is reducible, say $\Phi=\Phi_1\amalg\Phi_2$ with $\Phi_1\perp\Phi_2$. So $V=V_1\oplus V_2$ with $V_1=\langle\Phi_1\rangle$ and $V_2=\langle\Phi_2\rangle$, and $V_1\perp V_2$. Then the regions of the $k$-Catalan arrangement of $\Phi$ are precisely the sets of the form $R_1\oplus R_2$ where $R_i$ is a region of the $k$-Catalan arrangement of $\Phi_i$ for $i=1,2$. The region $R_1\oplus R_2$ is dominant if and only if $R_1$ and $R_2$ are both dominant. A hyperplane $H_{\alpha}^r$ is a floor of $R_1\oplus R_2$ if and only if $H_{\alpha}^r$ is a floor of $R_i$ for some $i=1,2$. The same holds for ceilings. Say $M=M_1\amalg M_2$ with $H_{\alpha_j}^{i_j}\in M_i$ if $\alpha_j\in\Phi_i$ for $j\in[m]$ and $i=1,2$. Assume the theorem holds for $\Phi_1$ and $\Phi_2$, giving us bijections $\Theta_1$ and $\Theta_2$ for $\Phi_1$ together with $M_1$ and $\Phi_2$ together with $M_2$ respectively. Then $\Theta(R_1\oplus R_2)=\Theta_1(R_1)\oplus \Theta_2(R_2)$ gives the required bijection for $\Phi$ together with $M$. This completes the proof by induction on the number of irreducible components of $\Phi$. \end{proof} \section{Corollaries} We deduce some enumerative corollaries of Theorem \ref{bij}. 
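Before stating these corollaries formally, here is a small numerical sanity check of the kind of equidistribution they assert, in the simplest case $k=1$ and type $B_2$. It uses the fact, recalled in the final section, that for $k=1$ the ceilings of a dominant region correspond to the maximal elements of the associated order ideal and its floors to the minimal elements of the complementary order filter; all coordinates and helper names are ad hoc.

```python
from itertools import combinations
from collections import Counter

# Positive roots of B2 as coefficient vectors in the simple roots.
pos_roots = [(1, 0), (0, 1), (1, 1), (1, 2)]

def leq(a, b):
    # Root poset order: coordinatewise comparison in this basis.
    return b[0] >= a[0] and b[1] >= a[1]

def ideals():
    # Enumerate all order ideals (downward-closed subsets) of the poset.
    for r in range(len(pos_roots) + 1):
        for c in combinations(pos_roots, r):
            s = set(c)
            if all(a in s for b in s for a in pos_roots if leq(a, b)):
                yield s

def maximal(s):
    return [a for a in s if not any(leq(a, b) and a != b for b in s)]

def minimal(s):
    return [a for a in s if not any(leq(b, a) and a != b for b in s)]

# Distribution of the number of ceilings (maximal elements of the ideal)
# versus floors (minimal elements of the complementary order filter).
ceil_dist = Counter(len(maximal(s)) for s in ideals())
floor_dist = Counter(len(minimal(set(pos_roots) - s)) for s in ideals())
print(ceil_dist == floor_dist)  # True
```

The two distributions agree, as the corollaries predict.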
For any set $M$ of hyperplanes of the $k$-Catalan arrangement, let $U_=(M)$ be the set of dominant regions $R$ of the $k$-Catalan arrangement such that the floors of $R$ are exactly the hyperplanes in $M$, and let $L_=(M)$ be the set of dominant regions $R'$ of the $k$-Catalan arrangement such that the ceilings of $R'$ are exactly the hyperplanes in $M$. \begin{corollary}\label{inex} For any set $M=\{H_{\alpha_1}^{i_1},H_{\alpha_2}^{i_2},\ldots,H_{\alpha_m}^{i_m}\}$ of $m$ hyperplanes with $i_j\in [k]$ and $\alpha_j\in\Phi^+$ for all $j\in[m]$, we have that $|U_=(M)|=|L_=(M)|$. \begin{proof} This follows from Theorem \ref{bij} by an application of the Principle of Inclusion and Exclusion. \end{proof} \end{corollary} \begin{corollary}\label{sum} For any tuple $(a_1,a_2,\ldots,a_k)$ of nonnegative integers, the number of dominant regions $R$ that have exactly $a_j$ floors of height $j$ for all $j\in[k]$ is the same as the number of dominant regions $R'$ that have exactly $a_j$ ceilings of height $j$ for all $j\in[k]$. \begin{proof} Sum Corollary \ref{inex} over all sets $M$ containing exactly $a_j$ hyperplanes of height $j$ for all $j\in[k]$. \end{proof} \end{corollary} \begin{proof}[Proof of Corollary \ref{arm}] Set $a_r=l$ and sum Corollary \ref{sum} over all choices of $a_j$ for all $j\neq r$. \end{proof} \section{The Panyushev complement} In the special case where $k=1$, a geometric chain of ideals $\mathcal{I}$ is simply the single ideal $I_1$; similarly, a geometric chain of order filters $\mathcal{J}$ is just the single order filter $J_1$. The indecomposable elements of an ideal $I$ are then just its maximal elements \cite[Lemma 3.9]{athanasiadis06cluster}. The indecomposable elements of an order filter $J$ are just its minimal elements \cite[Lemma 3.9]{athanasiadis05refinement}, \cite[Lemma 1]{thiel13hf}.\\ \\ There is a natural bijection between ideals and antichains of any poset that sends an ideal to the set of its maximal elements.
Similarly, there is a natural bijection between order filters and antichains that sends an order filter to the set of its minimal elements.\\ \\ So for an ideal $I$ in the root poset of $\Phi$, we define the Panyushev complement $\mathbf{Pan}(I)$ as the ideal generated by the minimal elements of the order filter $J=\Phi^+\backslash I$. From the above considerations, this is a bijection from the set of order ideals of the root poset of $\Phi$ to itself.\\ \\ For a region $R$ of the Catalan arrangement, let $$CL(R)=\{\alpha\in\Phi^+\mid H_{\alpha}^1\text{ is a ceiling of $R$}\}\text{ and}$$ $$FL(R)=\{\alpha\in\Phi^+\mid H_{\alpha}^1\text{ is a floor of $R$}\}\text{.}$$ Since a region $R$ in the Catalan arrangement corresponds to a unique ideal $I=\theta(R)$, which corresponds uniquely to the set of its maximal elements, which equals $CL(R)$ by Theorem \ref{ind=ceil}, the map $CL:R\mapsto CL(R)$ gives a bijection from the set of dominant regions in the Catalan arrangement to the set of antichains in the root poset. That the same holds for the map $FL:R\mapsto FL(R)$ follows from an analogous argument that can already be deduced from \cite[Theorem 3.11]{athanasiadis05refinement}. \begin{figure}[h] \begin{center} \includegraphics{pan4}\\ \end{center} \caption{The action of $\theta^{-1}\circ\mathbf{Pan}\circ\theta=CL^{-1}\circ FL$ on the dominant regions of the Catalan arrangement of the root system of type $B_2$.} \end{figure} \begin{theorem}\label{pan} For an ideal $I$ in the root poset of $\Phi$, the region $\theta^{-1}(\mathbf{Pan}(I))$ is the unique region of the Catalan arrangement of $\Phi$ whose ceilings are exactly the floors of the region $\theta^{-1}(I)$. \begin{proof} The set $CL(\theta^{-1}(\mathbf{Pan}(I)))$ is the set of maximal elements of $\mathbf{Pan}(I)$, which equals the set of minimal elements of $J=\Phi^+\backslash I$, which equals $FL(\theta^{-1}(I))$. 
Since $CL$ is a bijection, $\theta^{-1}(\mathbf{Pan}(I))$ is the only region $R'$ with $CL(R')=FL(\theta^{-1}(I))$. \end{proof} \end{theorem} We could rephrase Theorem \ref{pan} as $\mathbf{Pan}=\theta\circ CL^{-1}\circ FL\circ\theta^{-1}$. The fact that the Panyushev complement has a natural interpretation in terms of the dominant regions of the Catalan arrangement may serve to explain why it seems to be of particular interest for root posets. \section{Acknowledgements} I thank Myrto Kallipoliti, Henri M{\"u}hle, Vivien Ripoll and Eleni Tzanaki for helpful comments and suggestions. I also wish to express my gratitude to the anonymous referee for a very careful reading of the manuscript. \bibliographystyle{alpha}
https://arxiv.org/abs/0710.2216
The dying rabbit problem revisited
In this paper we study a generalization of the Fibonacci sequence in which rabbits are mortal and take more than two months to become mature. In particular we give a general recurrence relation for these sequences (improving the work in the paper Hoggatt, V. E., Jr.; Lind, D. A. "The dying rabbit problem". Fibonacci Quart. 7 1969 no. 5, 482--487) and we calculate explicitly their general term (extending the work in the paper Miles, E. P., Jr. Generalized Fibonacci numbers and associated matrices. Amer. Math. Monthly 67 1960 745--752). In passing, and as a technical requirement, we also study the behavior of the positive real roots of the characteristic polynomial of the considered sequences.
\section{Introduction} Fibonacci numbers arose in the answer to a problem proposed by Leonardo of Pisa, who asked for the number of rabbits at the $n^{th}$ month if there is one pair of rabbits at the $0^{th}$ month which becomes mature one month later and breeds another pair in each of the succeeding months, and if these new pairs breed again in the second month following birth. It can easily be proved by induction that the number of pairs of rabbits at the $n^{th}$ month is given by $f_{n}$, with $f_n$ satisfying the recurrence relation: $$f_0=f_1=1.$$ $$f_n=f_{n-1}+f_{n-2},\ \forall n\geq2.$$ It is not our aim here to survey the many properties of these numbers (see \cite{VAJ} for a good account of them); nevertheless, we recall that if $r_1<r_2$ are the roots of the polynomial $g(x)=x^2-x-1$, then: \begin{equation}\label{k2}\displaystyle{f_n=\frac{r_1^n}{r_1-r_2}+\frac{r_2^n}{r_2-r_1}}.\end{equation} In \cite{MIL} the $k$-generalized Fibonacci numbers $f_n^{(k)}$ are defined as follows: $$f_n^{(k)}=1,\ \forall 0\leq n\leq k-1.$$ $$f_n^{(k)}=\sum_{i=1}^k f_{n-i}^{(k)},\ \forall n\geq k.$$ In that paper Miles proves, among other results, that if $r_{1},\dots,r_{k}$ are the (distinct) roots of $g_k(x)=x^k-x^{k-1}-\dots-x-1$, then: \begin{equation}f_n^{(k)}=\sum_{i=1}^k\left(\prod_{\substack{i\neq j\\ 1\leq j\leq k}}(r_i-r_j)^{-1}\right)r_i^n,\end{equation} which, of course, reduces to (\ref{k2}) if we set $k=2$. Later, in \cite{HOG}, Hoggatt and Lind consider the so-called ``dying rabbit problem'', previously introduced in \cite{AL1} and studied in \cite{AL2} and \cite{COH}, which consists in letting rabbits die. In that paper Hoggatt and Lind consider a pair of rabbits at the $0^{th}$ time point which produces $B_n$ pairs at the $n^{th}$ time point and which dies at the $m^{th}$ time point after birth.
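The $k$-generalized recurrence just quoted is easy to experiment with numerically; the following sketch (the function name is ad hoc) computes $f_n^{(k)}$ directly from the definition, and for $k=2$ reproduces the Fibonacci numbers with $f_0=f_1=1$.

```python
def k_fib(k, n_terms):
    """First n_terms of Miles's k-generalized Fibonacci sequence:
    f_n = 1 for 0 <= n <= k-1, and every later term is the sum of
    the k preceding terms."""
    f = [1] * min(k, n_terms)
    while len(f) < n_terms:
        f.append(sum(f[-k:]))
    return f

print(k_fib(2, 8))  # [1, 1, 2, 3, 5, 8, 13, 21]
print(k_fib(3, 8))  # [1, 1, 1, 3, 5, 9, 17, 31]
```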
Then if $T_n$ is the total number of live pairs of rabbits at the $n^{th}$ time point, and defining $\displaystyle{B(x)=\sum_{n\geq0}B_nx^n}$, $D(x)=x^m$ and $\displaystyle{T(x)=\sum_{n\geq 0}T_nx^n}$, they prove that $$T(x)=\frac{1-D(x)}{(1-x)(1-B(x))},$$ and they use this formula to find recurrence relations for $T_n$ in various cases. The goal of this paper is to take a new look at the dying rabbit problem. In the second section we study a family of polynomials, focusing on the behavior of their positive roots. Although motivated by technical requirements, this study turns out to be of intrinsic interest. In the third section we will find a general recurrence relation for the sequence arising in this problem (which is given in \cite{HOG} only for some particular cases) and we will deduce an explicit formula (which also generalizes the work by Miles) for the total number of live pairs at the $n^{th}$ time point. Finally, in an appendix, we give a procedure written in $\textrm{Maple}^{\circledR}$ to calculate terms of the considered sequences. \section{A family of polynomials and their roots} Given natural numbers $h,k\geq1$ we define the following polynomial: $$g_{k,h}(x)=x^{k+h-1}-x^{k-1}-\dots-x-1.$$ In this section we study the behavior of the roots of this polynomial in terms of $k$ and $h$. In particular we will be interested in the unique positive real root of $g_{k,h}(x)$. \begin{prop} $g_{k,h}(x)$ has a unique positive real root $\alpha_{k,h}$ which lies in $[1,2)$. \end{prop} \begin{proof} Just apply Descartes' rule of signs together with Bolzano's theorem. \end{proof} \begin{rem} Note that $\alpha_{1,h}=1$ for all $h\geq 1$. \end{rem} As a consequence of the previous proposition we have the following lemma. The proof is elementary and we omit it. \begin{lem} Let $h,k\geq1$ and $0\leq y\in\mathbb{R}$. Then $y>\alpha_{k,h}$ if and only if $g_{k,h}(y)>0$.
\end{lem} Now, for every fixed $h$ we consider the sequence $\{\alpha_{k,h}\}_{k\geq1}$. Similarly, for every fixed $k$ we have a sequence $\{\alpha_{k,h}\}_{h\geq1}$. The following proposition studies the monotony of these sequences. \begin{prop} Let $h,k\geq 1$. Then: \begin{enumerate} \item $\alpha_{k,h}<\alpha_{k+1,h}$. \item $\alpha_{k,h}\geq\alpha_{k,h+1}$, and the equality holds if and only if $k=1$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item By definition we know that $g_{k+1,h}(\alpha_{k+1,h})=0$. Now, we have that $\displaystyle{\alpha_{k+1,h}^{k+h-1}=\frac{\alpha_{k+1,h}^{k+h}}{\alpha_{k+1,h}}=\frac{\alpha_{k+1,h}^k+\alpha_{k+1,h}^{k-1}+\cdots+\alpha_{k+1,h}+1}{\alpha_{k+1,h}}}>\alpha_{k+1,h}^{k-1}+\cdots+\alpha_{k+1,h}+1$, so $g_{k,h}(\alpha_{k+1,h})>0$ and the result follows from the previous lemma. \item Again by definition $g_{k,h}(\alpha_{k,h})=0$ and we can write $g_{k,h+1}(\alpha_{k,h})=\alpha_{k,h}^{k+h}-\alpha_{k,h}^{k-1}-\cdots-\alpha_{k,h}-1=\alpha_{k,h}^{k+h}-\alpha_{k,h}^{k+h-1}=\alpha_{k,h}^{k+h-1}(\alpha_{k,h}-1)\geq0$, with the equality holding if and only if $k=1$. An application of the lemma completes the proof. \end{enumerate} \end{proof} Now, as we have seen that both sequences $\{\alpha_{k,h}\}_{k\geq1}$ and $\{\alpha_{k,h}\}_{h\geq1}$ are monotonic, and as they are clearly bounded, they must be convergent. The next result is devoted to calculate their limits. \begin{prop} \begin{enumerate} \item $\displaystyle{\lim_{k\rightarrow\infty}\alpha_{k,h}=\alpha_h}$ for all $h\geq 1$, where $\alpha_h$ is the unique positive root of the polynomial $p_h(x)=x^h-x^{h-1}-1$. \item $\displaystyle{\lim_{h\rightarrow\infty}\alpha_{k,h}=1}$ for all $k\geq1$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Let us fix $h\geq1$. 
Then for any $k\geq2$ we have $\alpha_{k,h}^{k+h-1}=1+\alpha_{k,h}+\cdots+\alpha_{k,h}^{k-1}=\displaystyle{\frac{\alpha_{k,h}^k-1}{\alpha_{k,h}-1}}$ and thus $\alpha_{k,h}^h-\alpha_{k,h}^{h-1}-1=\displaystyle{\frac{-1}{\alpha_{k,h}^k}}$. Now, as we know that $\displaystyle{\alpha_h=\lim_{k\rightarrow\infty}\alpha_{k,h}>1}$, it is enough to take limits in the previous equality to obtain the result. \item Let us now fix $k\geq1$. Then for any $h\geq2$ we have $\alpha_{k,h}^{k+h-1}=1+\alpha_{k,h}+\cdots+\alpha_{k,h}^{k-1}$, so we take logarithms on both sides to obtain the equality $\displaystyle{\log\alpha_{k,h}=\frac{\log(1+\alpha_{k,h}+\cdots+\alpha_{k,h}^{k-1})}{k+h-1}}$. Finally, if we set $\displaystyle{\beta_k=\lim_{h\rightarrow\infty}\alpha_{k,h}}$ and take limits in the previous expression, we arrive at $\log\beta_k=0$ for every $k\geq1$ and the proof is complete. \end{enumerate} \end{proof} The previous propositions can be summarized in the following diagram: $$\begin{array}{ccccccccccccccc} \alpha_{1,1}&<&\alpha_{2,1}& < & \alpha_{3,1} & < & \alpha_{4,1} & < & \dots & < & \alpha_{k,1} & < & \dots & \rightarrow & \alpha_1 \\ \shortparallel & & \vee & & \vee & & \vee & & & & \vee & & & & \vee\\ \alpha_{1,2}&<&\alpha_{2,2}& < & \alpha_{3,2} & < & \alpha_{4,2} & < & \dots & < & \alpha_{k,2} & < & \dots & \rightarrow & \alpha_2 \\ \shortparallel& & \vee & & \vee & & \vee & & & & \vee & & & & \vee\\ \alpha_{1,3}& <& \alpha_{2,3}& < & \alpha_{3,3} & < & \alpha_{4,3} & < & \dots & < & \alpha_{k,3} & < & \dots & \rightarrow & \alpha_3 \\ \shortparallel & & \vee & & \vee & & \vee & & & & \vee& & & & \vee\\ \alpha_{1,4} &<&\alpha_{2,4}& < & \alpha_{3,4} & < & \alpha_{4,4} & < & \dots & < & \alpha_{k,4} & < & \dots & \rightarrow & \alpha_4 \\ \shortparallel & & \vee & & \vee & & \vee & & & & \vee & & & & \vee\\ \alpha_{1,5} &<&\alpha_{2,5}& < & \alpha_{3,5} & < & \alpha_{4,5} & < & \dots & < & \alpha_{k,5} & < & \dots & \rightarrow & \alpha_5 \\ \shortparallel & & \vee&
& \vee & & \vee & & & & \vee & & & & \vee \\ \vdots & & \vdots& &\vdots & & \vdots & & \ddots & & \vdots & & \ddots & &\vdots \\ \shortparallel & & \vee & &\vee & & \vee & & & & \vee & & & & \vee \\ \alpha_{1,h} &<&\alpha_{2,h}& < & \alpha_{3,h} & < & \alpha_{4,h} & < & \dots & < & \alpha_{k,h} & < & \dots & \rightarrow & \alpha_h \\ \shortparallel & & \vee & & \vee & & \vee & & & & \vee & & & & \vee\\ \vdots & & \vdots & &\vdots & & \vdots & & & & \vdots & & & &\vdots \\ \shortparallel & & \downarrow & & \downarrow & & \downarrow & & & & \downarrow & & & & \downarrow\\ 1 & = & 1 & = & 1 & = & 1 & =&\dots & = & 1 & = & \dots & \rightarrow & 1 \end{array}$$ where $\alpha_h$ is the unique positive root of $p_h(x)=x^h-x^{h-1}-1$ and every inequality is strict. Before we go on, we introduce a result by Cauchy (see \cite{CAU}) which will be useful in what follows. This result gives a bound on the modulus of the roots of a polynomial with complex coefficients. We present it without proof. \begin{teor} Let $f(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_1z+a_0$ be a polynomial with complex coefficients ($a_i\neq0$ for at least one $i$) and let $Z[f(z)]$ denote the set of its complex roots. Then $Z[f(z)]\subset\{z\in\mathbb{C}\ |\ |z|\leq\eta\}$, where $\eta$ is the unique positive root of the real polynomial $\widetilde{f}(x)=x^n-|a_{n-1}|x^{n-1}-\cdots-|a_1|x-|a_0|$. \end{teor} \begin{coro} $Z[g_{k,h}(x)]\subset\{z\in\mathbb{C}\ |\ |z|\leq\alpha_{k,h}\}$. \end{coro} \begin{proof} Since $\widetilde{g}_{k,h}(x)=g_{k,h}(x)$, it is enough to apply the previous theorem. \end{proof} Now we can refine the previous corollary in the following way. \begin{prop} If $g_{k,h}(w)=0$ then $|w|\leq\alpha_{k,h}$ and, moreover, $|w|=\alpha_{k,h}$ if and only if $w=\alpha_{k,h}$. In other words, $\alpha_{k,h}$ is the largest root of $g_{k,h}(x)$ in modulus. \end{prop} \begin{proof} In what follows we will put $\alpha=\alpha_{k,h}$. Let $w$ be a root of $g_{k,h}(x)$ such that $|w|=\alpha$.
As $g_{k,h}(w)=0$ we have that $w^{k-1}(w^h-1)=w^{k-2}+\cdots+w+1$ and, in particular, $|w^{k-1}|^2|w^h-1|^2|w-1|^2=|w^{k-1}-1|^2$. If we put $w=\alpha\cos\theta+i\alpha\sin\theta$ we obtain the equality $\alpha^{2k-2}\left((\alpha^h\cos h\theta-1)^2+\alpha^{2h}\sin^2h\theta\right)\left((\alpha\cos\theta-1)^2+\alpha^2\sin^2\theta\right)=\left((\alpha^{k-1}\cos(k-1)\theta-1)^2+\alpha^{2k-2}\sin^2(k-1)\theta\right)$ and, expanding, we get $\alpha^{2k-2}(\alpha^{2h}+1-2\alpha^h\cos h\theta)(\alpha^2+1-2\alpha\cos\theta)=\alpha^{2k-2}+1-2\alpha^{k-1}\cos(k-1)\theta$. Let us now define polynomials $p(x)$ and $q(x)$ by: $$p(x)=\sum_{i=0}^{\left[\frac{h}{2}\right]}\binom{h}{2i}x^{h-2i}(x^2-1)^{i},\quad q(x)=\sum_{i=0}^{\left[\frac{k-1}{2}\right]}\binom{k-1}{2i}x^{k-1-2i}(x^2-1)^{i}$$ Then it is easy to see that $\cos h\theta=p(\cos\theta)$ and $\cos(k-1)\theta=q(\cos\theta)$. Moreover, $p(1)=q(1)=1$ and if we define $r(x)$ to be $$r(x)=\alpha^{2k-2}(\alpha^{2h}+1-2\alpha^hp(x))(\alpha^2+1-2\alpha x)-\left(\alpha^{2k-2}+1-2\alpha^{k-1}q(x)\right)$$ then $g_{k,h}(\alpha)=0$ implies $r(1)=0$. One can also check that $r(x)$ has no roots in $(-1,1)$. Thus, if $w\in\mathbb{C}$ is a root of $g_{k,h}(x)$ with $|w|=\alpha$ it must be real and, since $g_{k,h}(-\alpha)\neq0$, we conclude that $$Z[g_{k,h}(x)]\setminus\{\alpha_{k,h}\}\subset\{z\in\mathbb{C}\ |\ |z|<\alpha_{k,h}\}$$ and the proof is complete. \end{proof} We will finish this section with the following proposition, which will be of great technical importance in the next section. \begin{prop} All the roots of $g_{k,h}$ are distinct. \end{prop} \begin{proof} We will show that $g_{k,h}(x)$ and $g'_{k,h}(x)$ have no common root. To do so, it suffices to prove that $\textrm{g.c.d.}(g_{k,h},g'_{k,h})$ is a constant, which follows by applying Euclid's algorithm repeatedly (multiplying by appropriate constants at each step if necessary).
\end{proof} \section{The dying rabbit sequence} As we mentioned in the introduction, we are interested in generalizing the Fibonacci sequence by considering that rabbits become mature $h$ months after their birth and that they die $k$ months after reaching maturity. We will denote by $C_n^{(k,h)}$ the number of pairs of rabbits at the $n^{th}$ month. Obviously we have: $$C_0^{(k,h)}=\dots=C_{h-1}^{(k,h)}=1.$$ Now let us denote by $C_n^{(h)}$ the recurrence sequence defined by: $$C_0^{(h)}=\dots=C_{h-1}^{(h)}=1,\ C_n^{(h)}=C_{n-1}^{(h)}+C_{n-h}^{(h)}\ \forall n\geq h$$ then an easy induction argument shows that: $$C_n^{(k,h)}=\begin{cases} C_n^{(h)}, & \text{if $0\leq n \leq k+h-2$} \\ C_{n-h}^{(k,h)}+C_{n-h-1}^{(k,h)}+\cdots+C_{n-k-h+1}^{(k,h)}, & \text{if $n > k+h-2$} \\ \end{cases}$$ \begin{exa} (See the appendix) \begin{itemize} \item If $k=3$ and $h=2$, the first terms of $C_n^{(3,2)}$ are: $$1,1,2,3,4,6,9,13,19,28,41,\dots$$ \item If $k=7$ and $h=4$, then the first terms of $C_n^{(7,4)}$ are: $$1,1,1,1,2,3,4,5,7,10,13,17,23,32,\dots$$ \end{itemize} \end{exa} \begin{rem} The characteristic polynomial of the recurrence sequence $C_n^{(k,h)}$ is precisely the polynomial $g_{k,h}(x)$ studied in the previous section. \end{rem} If we denote by $r_1,r_2,\dots,r_{k+h-1}$ the (distinct) complex roots of $g_{k,h}(x)$, it follows from Section 2 and from well-known facts from the theory of recurrence sequences that there exist constants $a_1,a_2,\dots,a_{k+h-1}$ such that: $$C_n^{(k,h)}=a_1r_1^n+a_2r_2^n+\cdots+a_{k+h-1}r_{k+h-1}^n$$ where we can suppose that $r_1=\alpha_{k,h}$.
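Before computing the constants $a_i$ explicitly, the recurrence itself can be checked against the example sequences above. Here is a short sketch in Python rather than Maple (the helper name \texttt{rabbits} is ours) implementing the two regimes of the recurrence:

```python
def rabbits(k, h, t):
    """First t+1 terms of C_n^{(k,h)}: pairs mature after h months
    and die k months after maturing."""
    c = [1] * h                        # C_0 = ... = C_{h-1} = 1
    for n in range(h, k + h - 1):      # C_n^{(h)} regime, n <= k+h-2
        c.append(c[n - 1] + c[n - h])
    for n in range(k + h - 1, t + 1):  # full recurrence, n > k+h-2
        c.append(sum(c[n - k - h + 1 : n - h + 1]))
    return c[: t + 1]

print(rabbits(3, 2, 10))   # [1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41]
```

Both example sequences of the text are reproduced exactly.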
In particular we can calculate those constants by solving the system of linear equations given by: \begin{equation}\label{eq}\sum_{i=1}^{k+h-1}a_ir_i^l=C_l^{(h)},\quad 0\leq l\leq k+h-2,\end{equation} which can be written in matrix form: $$\begin{pmatrix} 1 & 1 & \cdots & 1\\ r_1 & r_2 & \cdots & r_{k+h-1}\\ \vdots & \vdots & \ddots & \vdots\\ r_1^{k+h-2} & r_2^{k+h-2} & \cdots & r_{k+h-1}^{k+h-2}\end{pmatrix}\begin{pmatrix}a_1\\ a_2\\ \vdots\\ a_{k+h-1}\end{pmatrix}=\begin{pmatrix} C_0^{(h)}\\ C_1^{(h)}\\ \vdots\\ C_{k+h-2}^{(h)}\end{pmatrix}$$ and which has a unique solution because all the $r_i$ are distinct. To solve this system of equations we will use Cramer's rule. Recall that if we put $$V=\begin{vmatrix} 1 & 1 & \cdots & 1\\ r_1 & r_2 & \cdots & r_{k+h-1}\\ \vdots & \vdots & \ddots & \vdots\\ r_1^{k+h-2} & r_2^{k+h-2} & \cdots & r_{k+h-1}^{k+h-2}\end{vmatrix}=\prod_{k+h-1\geq i>j\geq1} (r_i-r_j)$$ $$D_n=\begin{vmatrix}1 & \dots & 1 & C_0 & 1 & \dots & 1\\ r_1 & \dots & r_{n-1} & C_1 & r_{n+1} & \dots & r_{k+h-1}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ r_1^{k+h-2} & \dots & r_{n-1}^{k+h-2} & C_{k+h-2} & r_{n+1}^{k+h-2} & \dots & r_{k+h-1}^{k+h-2}\end{vmatrix}$$ then $a_n=\displaystyle{\frac{D_n}{V}}$. So it is enough to find $D_n$ for $n=1,\dots,k+h-1$. We will work out the case $n=1$ completely, the other cases being analogous. Also note that we have replaced the values $C_j^{(h)}$ ($j=0,\dots,k+h-2$) by arbitrary constants $C_j$ ($j=0,\dots,k+h-2$), as their value is of no importance when solving (\ref{eq}) formally. To calculate $D_1$ we first need the following generalization of the Vandermonde determinant, which can be found in \cite{ERN} (Lemma 2.1).
\begin{lem} If $e_n$ is the $n^{th}$ elementary symmetric polynomial in the variables $\{x_1,\dots,x_n\}$, then: $$\begin{vmatrix} 1 &\dots & 1 & 1\\ x_1 & \dots & x_{n-1} & x_n \\ \vdots & \ddots & \vdots & \vdots\\ \widehat{x_1^l} & \dots & \widehat{x_{n-1}^l} & \widehat{x_n^l}\\ \vdots & \ddots & \vdots & \vdots\\ x_1^n & \dots & x_{n-1}^n & x_n^n\end{vmatrix}=\left(\prod_{n\geq i>j\geq 1}(x_i-x_j)\right)e_{n-l}(x_1,\dots,x_n).$$ \end{lem} Now, if we apply this lemma and expand the determinant $D_1$ along its first column, we obtain: $$D_1=\sum_{l=0}^{k+h-2}(-1)^lC_l\left(\prod_{k+h-1\geq i>j\geq 2}(r_i-r_j)\right)e_{k+h-2-l}(r_2,\dots, r_{k+h-1})$$ and, consequently: \begin{equation}\label{a1}a_1=\frac{D_1}{V}=\frac{1}{\displaystyle{\prod_{k+h-1\geq i\geq2}(r_i-r_1)}}\sum_{l=0}^{k+h-2}(-1)^lC_le_{k+h-2-l}(r_2,\dots, r_{k+h-1}).\end{equation} So, we are now interested in calculating the values $e_j(r_2,\dots,r_{k+h-1})$ for $0\leq j\leq k+h-2$. From Vieta's formulas, and taking into account that $r_1,\dots,r_{k+h-1}$ are the roots of $g_{k,h}(x)$, we have that: $$e_0(r_1,\dots,r_{k+h-1})=1.$$ $$e_1(r_1,\dots,r_{k+h-1})=\dots=e_{h-1}(r_1,\dots,r_{k+h-1})=0.$$ $$e_s(r_1,\dots,r_{k+h-1})=(-1)^{s+1}\quad \forall h\leq s\leq k+h-1.$$ On the other hand, the following lemma is easy to prove. \begin{lem} $\displaystyle{e_t(x_2,\dots,x_n)=\sum_{i=1}^{n-t}(-1)^{i+1}\frac{e_{t+i}(x_1,\dots,x_n)}{x_1^i}\quad \forall 0\leq t<n}$.
\end{lem} Putting this together we obtain: $$e_0(r_2,\dots,r_{k+h-1})=1.$$ $$e_s(r_2,\dots,r_{k+h-1})=(-1)^{s}\sum_{i=1}^k\frac{1}{r_1^{i+h-1-s}}\quad \forall 1\leq s\leq h-1.$$ $$e_s(r_2,\dots,r_{k+h-1})=(-1)^{s}\sum_{i=1}^{k+h-1-s}\frac{1}{r_1^i}\quad \forall h\leq s\leq k+h-2.$$ and, summing the geometric series: $$e_0(r_2,\dots,r_{k+h-1})=1.$$ $$e_s(r_2,\dots,r_{k+h-1})=(-1)^{s}\frac{r_1^k-1}{r_1^{k+h-1-s}(r_1-1)}\quad \forall 1\leq s\leq h-1.$$ $$e_s(r_2,\dots,r_{k+h-1})=(-1)^{s}\frac{r_1^{k+h-1-s}-1}{r_1^{k+h-1-s}(r_1-1)}\quad \forall h\leq s\leq k+h-2.$$ Finally, if we substitute in (\ref{a1}) we get: $$a_1=\frac{(-1)^{k+h}}{\displaystyle{\prod_{k+h-1\geq i\geq2}(r_i-r_1)}}\left[\sum_{l=0}^{k-2}C_l\frac{r_1^{l+1}-1}{r_1^{l+1}(r_1-1)}+\sum_{l=k-1}^{k+h-3}C_l\frac{r_1^k-1}{r_1^{l+1}(r_1-1)}+C_{k+h-2}\right].$$ Reasoning in a similar way and taking into account the symmetry of the $e_s$, we can calculate $a_n$ for every $1\leq n\leq k+h-1$. In fact: $$a_n=\frac{(-1)^{k+h+n-1}}{\displaystyle{\prod_{i>n}(r_i-r_n)}\prod_{n>j}(r_n-r_j)}\left[\sum_{l=0}^{k-2}C_l\frac{r_n^{l+1}-1}{r_n^{l+1}(r_n-1)}+\sum_{l=k-1}^{k+h-3}C_l\frac{r_n^k-1}{r_n^{l+1}(r_n-1)}+C_{k+h-2}\right].$$ \begin{rem} Observe that the previous considerations are only valid for $k,h\geq2$. For the case $h=1$ we refer to Miles' paper \cite{MIL}, and the case $k=1$ is trivial. \end{rem} \begin{rem} It is interesting to observe that $a_1\neq 0$. As a consequence, and recalling that $|r_i|<|r_1|$ for all $i\geq2$, we have that $\displaystyle{\lim_{n\rightarrow\infty}\frac{C_{n+1}^{(k,h)}}{C_n^{(k,h)}}=r_1=\alpha_{k,h}}$. This generalizes the fact that $\displaystyle{\lim_{n\rightarrow\infty}\frac{f_{n+1}}{f_n}=\Phi}$, where $f_n$ is the $n^{th}$ Fibonacci number and $\Phi$ is the golden ratio (note that $\alpha_{2,1}=\Phi$). The previous expression for $a_n$ also generalizes the work by Miles just by setting $h=1$.
\end{rem} \begin{exa} (Padovan sequence)\newline Recall that the so-called Padovan sequence is defined by $$P_0=P_1=P_2=1.$$ $$P_n=P_{n-2}+P_{n-3},\quad \forall n\geq3.$$ Thus, it is clear that in our notation $P_n=C_n^{(2,2)}$ with the initial conditions $C_0=C_1=C_2=1$. So we can apply our previous results to obtain that: $$P_n=\frac{r_1^2+r_1+1}{2r_1+3}r_1^n+\frac{r_2^2+r_2+1}{2r_2+3}r_2^n+\frac{r_3^2+r_3+1}{2r_3+3}r_3^n,$$ which was already known to hold.\newline If we keep the same recurrence relation but replace the initial conditions by $P_0=3,P_1=0,P_2=2$ we obtain the so-called Perrin sequence, whose general term can again be calculated with our formulas to obtain: $$P_n=r_1^n+r_2^n+r_3^n.$$ Finally, if we keep our original initial conditions, that is, $C_0^{(2,2)}=C_1^{(2,2)}=1, C_2^{(2,2)}=2$, then the general term of our Padovan-Perrin like sequence turns out to be: $$C_n^{(2,2)}=\frac{(r_1+1)^2}{2r_1+3}r_1^n+\frac{(r_2+1)^2}{2r_2+3}r_2^{n}+\frac{(r_3+1)^2}{2r_3+3}r_3^{n}.$$ \end{exa} \section{Appendix} In this appendix we give a short and easy procedure, written in $\textrm{Maple}^{\circledR}$, which calculates any number of terms of $C_n^{(k,h)}$. It goes as follows: \begin{verbatim} dr:=proc(k,h,t) local i; for i from 0 by 1 to h-1 do c(i):=1 end do; for i from h by 1 to k+h-2 do c(i):=c(i-1)+c(i-h); end do; for i from k+h-1 to t do c(i):=sum(c(n), n=i-k-h+1..i-h); end do; print(seq(c(n),n=0..t)); end proc: \end{verbatim}
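As a numerical sanity check of the Padovan and Perrin formulas above (sketched in Python rather than Maple), the three roots of $x^3-x-1$ can be built in closed form: the real root is the plastic number $\rho$, and since the three roots sum to $0$ and have product $1$, the complex pair is $-\rho/2\pm i\sqrt{1/\rho-\rho^2/4}$:

```python
# real root of x^3 - x - 1 (the plastic number) via fixed-point iteration
rho = 1.3
for _ in range(100):
    rho = (1 + rho) ** (1 / 3)

im = (1 / rho - rho ** 2 / 4) ** 0.5
roots = [rho, -rho / 2 + im * 1j, -rho / 2 - im * 1j]

# Perrin: P_n = r1^n + r2^n + r3^n
perrin = [round(sum(r ** n for r in roots).real) for n in range(10)]

# Padovan: P_n = sum (r^2 + r + 1)/(2r + 3) * r^n over the roots
padovan = [round(sum((r * r + r + 1) / (2 * r + 3) * r ** n
                     for r in roots).real) for n in range(10)]

print(perrin)    # [3, 0, 2, 3, 2, 5, 5, 7, 10, 12]
print(padovan)   # [1, 1, 1, 2, 2, 3, 4, 5, 7, 9]
```

Both outputs agree with the recurrences $P_n=P_{n-2}+P_{n-3}$ started from $(3,0,2)$ and $(1,1,1)$ respectively.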
https://arxiv.org/abs/0710.2216
The dying rabbit problem revisited
In this paper we study a generalization of the Fibonacci sequence in which rabbits are mortal and take more than two months to become mature. In particular we give a general recurrence relation for these sequences (improving the work in Hoggatt, V. E., Jr.; Lind, D. A., "The dying rabbit problem", Fibonacci Quart. 7 (1969), no. 5, 482--487) and we calculate explicitly their general term (extending the work in Miles, E. P., Jr., "Generalized Fibonacci numbers and associated matrices", Amer. Math. Monthly 67 (1960), 745--752). In passing, and as a technical requirement, we also study the behavior of the positive real roots of the characteristic polynomial of the considered sequences.
https://arxiv.org/abs/1810.08742
Some remarks on the correspondence between elliptic curves and four points in the Riemann sphere
In this paper we relate some classical normal forms for complex elliptic curves in terms of 4-point sets in the Riemann sphere. Our main result is an alternative proof that every elliptic curve is isomorphic as a Riemann surface to one in the Hesse normal form. In this setting, we give an alternative proof of the equivalence between the Edwards and the Jacobi normal forms. Also, we give a geometric construction of the cross ratios for 4-point sets in general position.
\section*{Introduction} A complex elliptic curve is by definition a compact Riemann surface of genus 1. By the \emph{uniformization theorem}, every elliptic curve is conformally equivalent to an algebraic curve given by a cubic polynomial in the form \begin{equation}\label{Weierstrass-intro} E\colon \ y^2=4x^3-g_2x-g_3, \quad \text{with}\ \Delta=g_2^3-27g_3^2\neq 0, \end{equation} \noindent which is called the \emph{Weierstrass normal form}. For computational or geometric purposes it may be necessary to find a Weierstrass normal form for an elliptic curve which was given by another equation. At best, we could predict the right changes of variables in order to transform such an equation into a Weierstrass normal form, but in general this is a difficult process. \par A different method to find the normal form (\ref{Weierstrass-intro}) for a given elliptic curve, avoiding any change of variables, requires a degree 2 meromorphic function on the elliptic curve, which by a classical theorem always exists and in many cases is not difficult to compute. By the Riemann-Hurwitz formula, this map has four branch points and, by composing with a Möbius transformation, we can assume that one point is at $\infty$ and the other points are three complex numbers $\{z_{1},z_{2},z_{3}\}$. Let $C$ be the centroid of the points $z_{i}$ and set $e_{i}=z_{i}-C$. With these points we are able to find the constants $g_{2}$ and $g_{3}$ of (\ref{Weierstrass-intro}) as the coefficients of the polynomial \begin{equation} 4(x-e_{1})(x-e_{2})(x-e_{3}). \end{equation} \par This motivates the study of other classical normal forms in this setting. We are interested in relating the branch points of degree 2 meromorphic functions on elliptic curves with the 4-point configurations derived from a given normal form. This relation naturally leads us to the study of the cross ratio of 4-point sets.
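The recipe just described is easy to carry out numerically. The Python sketch below (the function name \texttt{weierstrass\_from\_branch\_points} is ours) centers the three finite branch points at their centroid and reads off $g_2$ and $g_3$ from $4(x-e_1)(x-e_2)(x-e_3)=4x^3-g_2x-g_3$:

```python
import cmath

def weierstrass_from_branch_points(z1, z2, z3):
    """g_2, g_3 for y^2 = 4x^3 - g_2 x - g_3, from the three finite
    branch points of a degree-2 map (the fourth branch point is at infinity)."""
    c = (z1 + z2 + z3) / 3                 # centroid
    e1, e2, e3 = z1 - c, z2 - c, z3 - c    # e_1 + e_2 + e_3 = 0
    g2 = -4 * (e1 * e2 + e1 * e3 + e2 * e3)
    g3 = 4 * e1 * e2 * e3
    return g2, g3

# example: the equilateral configuration at the cube roots of 1/4
w = cmath.exp(2j * cmath.pi / 3)
c0 = 0.25 ** (1 / 3)
g2, g3 = weierstrass_from_branch_points(c0, c0 * w, c0 * w**2)
# g2 is (numerically) 0 and g3 is 1, recovering y^2 = 4x^3 - 1
```

Since the $e_i$ sum to zero, the quadratic term of $4\prod(x-e_i)$ vanishes, which is exactly why the centered polynomial has the Weierstrass shape.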
To illustrate the above discussion, see Example \ref{example-Fermat} and Example \ref{example-Steinmetz}. \par Our main result is an alternative proof that every elliptic curve is isomorphic as a Riemann surface to one in the Hesse normal form (see Theorem \ref{Theorem-Hesse}). In this setting, we give an alternative proof of the equivalence between the Edwards and the Jacobi normal forms (see Theorem \ref{Jacobi-Edwards-equivalence}). We give a geometric construction of the cross ratios for 4-point sets in general position in terms of the angles of certain curvilinear triangles (see Figure \ref{curvilinear-triangles} and Figure \ref{three-complex-points}). We do not know whether this construction was known earlier. \par This paper is organized as follows: in Section \ref{Section-correspondence} we explain briefly the correspondence between elliptic curves and 4-point sets in the Riemann sphere. We also give some remarks on the cross ratio of 4-point sets. In Section \ref{Section-Legendre-Weierstrass} we describe the configuration associated to the Legendre and the Weierstrass normal forms. In Section \ref{Section-Jacobi-Edwards} we describe the configuration of points for the Jacobi and the Edwards normal forms and how they are related. Finally, in Section \ref{Section-Hesse} we prove the main theorem. \section{Correspondence between $\mathcal{M}$ and four points}\label{Section-correspondence} Recall that an elliptic curve is a Riemann surface of genus one. We denote by $\mathcal{M}$ the \emph{moduli of elliptic curves}, i.e., the set of isomorphism classes of elliptic curves, where by isomorphism we mean a biholomorphism between Riemann surfaces. Every elliptic curve admits a meromorphic function of degree 2, and by the Riemann-Hurwitz formula this map has four critical values. Reciprocally, given a 4-point set in the Riemann sphere there is a degree 2 meromorphic function with critical values at those points.
The above correspondence is summarized in the following theorem (see \cite[\S 6.3.2]{Donaldson}): \begin{theorem}\label{the-correspondence} There is a bijection between $\mathcal{M}$ and 4-point sets on the Riemann sphere modulo Möbius transformations. \end{theorem} \par \noindent \textit{Remark.} In this paper we assume that the complex plane $\mathbb{C}$ has the canonical orientation; the positive orientation is the counterclockwise orientation. \par Now we will discuss a geometric interpretation of the cross ratio of 4-point sets when one of the points is at $\infty$ and the other three are noncollinear complex numbers. \subsection{The cross ratio and three points in the complex plane.}\label{triangle-shapes} Recall that the cross ratio $\chi(z_1,z_2,z_3,z_4)$ of four distinct and ordered points $(z_1,z_2,z_3,z_4)$ in the Riemann sphere is the value $\mu(z_4)$, where $\mu$ is the unique Möbius transformation which sends the ordered triple $(z_1,z_2,z_3)$ to $(0,1,\infty)$. When the four points are complex numbers the cross ratio becomes: \begin{equation}\label{cross-ratio} \chi(z_1,z_2,z_3,z_4)=\frac{(z_4-z_1)(z_2-z_3)}{(z_1-z_2)(z_3-z_4)}. \end{equation} \par We begin with this elementary remark: \begin{lemma}\label{elementary-remark} Suppose that $\{z_1,z_2,z_3, \infty\}$ and $\{w_1,w_2,w_3,\infty\}$ are two 4-point sets in the Riemann sphere which are Möbius equivalent; then there exists an affine transformation $az+b$ that maps one set to the other. \end{lemma} \begin{proof} Suppose that $\mu(z)$ is the Möbius transformation which sends $\{z_1,z_2,z_3, \infty\}$ to $\{w_1,w_2,w_3,\infty\}$. If $\mu$ fixes $\infty$ there is nothing to do. If $\mu(\infty)=w_{i_0}$, we can take the Möbius transformation $T$ that interchanges $\infty$ with $w_{i_0}$ and interchanges the other two points. Such a transformation exists because permutations of this kind do not change the cross ratio. Then the affine transformation is $T\circ \mu$.
\end{proof} Lemma \ref{elementary-remark} has interesting consequences. First, since we can send any point to $\infty$ by a Möbius transformation, the following corollary is immediate from Lemma \ref{elementary-remark} and Theorem \ref{the-correspondence}: \begin{corollary}\label{correspondence-triangles} There is a bijection between $\mathcal{M}$ and the sets of three points on the complex plane $\mathbb{C}$ modulo complex affine transformations. \end{corollary} \par \noindent \emph{Remark.} In what follows we assume that our three points are not collinear; we also suppose that the triangle formed by these points is positively oriented.\\ \par We say that two sets of three points in the complex plane are \emph{equivalent} if there is an affine map $az+b$ which maps one set to the other. Recall that, by definition, two triangles are similar if they have the same angles. Moreover, two (oriented) triangles are similar if and only if there is an affine transformation $az+b$, with $a,b\in \mathbb{C}$ and $a\neq 0$, which maps one triangle to the other. It follows from Corollary \ref{correspondence-triangles} that two sets of three points are equivalent if and only if their (oriented) triangles are similar. \par From the above discussion we obtain the following proposition: \begin{proposition} Let $z_{1}$, $z_{2}$, $z_{3}$ be three noncollinear points in the complex plane. The number $\lambda$ is a cross ratio for the 4-point set $\{z_1,z_2,z_3,\infty\}$ if and only if the oriented triangle $\{0,1,\lambda\}$ is similar to the oriented triangle $\{z_1,z_2,z_3\}$. Therefore there are generically six equivalent cross ratios for each 4-point set $\{z_1,z_2,z_3,\infty\}$ (see Figure \ref{similar-triangles}).
\end{proposition} \begin{figure} \begin{center} \includegraphics[width=10cm]{cross-ratios.pdf} \caption{Six similar triangles representing equivalent cross ratios.} \label{similar-triangles} \end{center} \end{figure} \par We will see in Section \ref{Section-Hesse} that for not collinear points the only exception is when the oriented triangle $\{z_1,z_2,z_3\}$ is equilateral. If $\lambda$ is a cross ratio for some 4-point set, the other values are given by (\ref{equivalent-cross-ratios}). \par The proof of Lemma \ref{elementary-remark} tells us how to construct a cross ratio for a particular order: \begin{proposition} Let $z_{1}$, $z_{2}$ $z_{3}$ be three non collinear points in the complex plane forming an oriented triangle with respective angles $\alpha,\beta,\gamma$. The cross ratios $\lambda_1=\chi(z_1,z_2,z_3,\infty)$ and $\lambda_2=\chi(z_1,z_2,\infty,z_3)$ can be obtained by the geometric construction illustrated in Figure \ref{cross-ratio-ordered}. \end{proposition} \begin{figure} \begin{center} \includegraphics[width=12cm]{cross-ratio-ordered.pdf} \caption{Geometric construction of $\lambda_1=\chi(z_1,z_2,z_3,\infty)$ and $\lambda_2=\chi(z_1,z_2,\infty,z_3)$.} \label{cross-ratio-ordered} \end{center} \end{figure} \par In order to find geometrically the cross ratio for a 4-point set in general position in the complex plane we need to find the angles of the curvilinear triangles of we call the \emph{shape} of a 4-point set, which we describe bellow. \subsection{The 4-point shape.}\label{shape-four-points} In what follows we assume that our four points are in general position, i.e. they not lie in a same circle. \par Let $\{z_1,z_2,z_3,z_4\}$ be a 4-point set in the complex plane, for each three points there are exactly one circle passing through them, then there are four circles associated to these points. 
Consider the curvilinear triangles formed by arcs of these circles whose vertices are in each set of three points of $\{z_1,z_2,z_3,z_4\}$ (see Figure \ref{curvilinear-triangles}); we suppose that they are oriented (with the positive orientation of $\mathbb{C}$). \begin{figure} \begin{center} \includegraphics[width=12cm]{curvilinear-triangles.pdf} \caption{Curvilinear triangles of four points in the complex plane.} \label{curvilinear-triangles} \end{center} \end{figure} These four curvilinear triangles have the same angles: to see this, observe, as in Lemma \ref{elementary-remark}, that permutations of type 2+2, i.e., products of two transpositions, do not change the cross ratio; hence there exists a Möbius transformation which permutes the four points, interchanging a pair of points $z_i$ and $z_j$ and interchanging the remaining two. These transformations form a group isomorphic to the Klein four-group. As Möbius transformations map circles to circles conformally, preserving the orientation, this group acts transitively on the four triangles. \par It also follows from this argument that two equivalent 4-point sets in the complex plane have oriented curvilinear triangles with the same angles. \par If we allow \emph{generalized circles} the same is true when one point is $\infty$ and the others are noncollinear complex numbers; in this case one curvilinear triangle is a Euclidean triangle, while the other triangles have an arc and two lines as their sides (see Figure \ref{three-complex-points}). The Euclidean triangle is positively oriented, which induces an orientation on the other triangles. \par Conversely, if their oriented curvilinear triangles have the same angles they are equivalent; we can see this by sending one point to $\infty$ and reducing to the previous case of Euclidean triangles. So we can define the \emph{shape} of a 4-point set in the Riemann sphere as the oriented curvilinear triangles obtained in the above construction.
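The invariance of the cross ratio under the $2+2$ permutations used in the argument above can be confirmed directly from formula (\ref{cross-ratio}); a quick Python check (the function name \texttt{chi} is ours):

```python
def chi(z1, z2, z3, z4):
    """Cross ratio chi(z1, z2, z3, z4), as defined in the text."""
    return (z4 - z1) * (z2 - z3) / ((z1 - z2) * (z3 - z4))

p1, p2, p3, p4 = 0.5 + 0.1j, 2.0 + 0j, -1j, 3.0 + 1j
lam = chi(p1, p2, p3, p4)

# the three nontrivial 2+2 permutations (Klein four-group) leave chi fixed
assert abs(chi(p2, p1, p4, p3) - lam) < 1e-12   # (12)(34)
assert abs(chi(p3, p4, p1, p2) - lam) < 1e-12   # (13)(24)
assert abs(chi(p4, p3, p2, p1) - lam) < 1e-12   # (14)(23)

# the cross ratio is also invariant under any Mobius transformation
m = lambda z: (2 * z + 1) / (z + 4)
assert abs(chi(m(p1), m(p2), m(p3), m(p4)) - lam) < 1e-9
```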
Then we have the following theorem: \begin{theorem}\label{shape-theorem} Two 4-point sets in general position in the Riemann sphere are Möbius equivalent if and only if their shapes have the same angles (see Figure \ref{curvilinear-triangles} and Figure \ref{three-complex-points}). \end{theorem} \begin{figure} \begin{center} \includegraphics[width=10cm]{three-complex-points.pdf} \caption{Curvilinear triangles of four points in the Riemann sphere with one point at $\infty$.} \label{three-complex-points} \end{center} \end{figure} Then the cross ratios of four points in the complex plane can be found geometrically as in Subsection \ref{triangle-shapes}; we illustrate the construction for two of them: \begin{proposition} Let $\{z_1,z_2,z_3,z_4\}$ be four points in the complex plane in general position. Consider the oriented curvilinear triangle with vertices in $z_{1}$, $z_{2}$ and $z_{3}$ with respective angles $\alpha,\beta,\gamma$. The cross ratios $\lambda_1=\chi(z_1,z_2,z_3,z_4)$ and $\lambda_2=\chi(z_1,z_2,z_4,z_3)$ can be obtained by the geometric construction illustrated in Figure \ref{cross-ratio-ordered-finite}. \end{proposition} \begin{figure} \begin{center} \includegraphics[width=12cm]{cross-ratio-ordered-finite.pdf} \caption{Geometric construction of $\lambda_1=\chi(z_1,z_2,z_3,z_4)$ and $\lambda_2=\chi(z_1,z_2,z_4,z_3)$.} \label{cross-ratio-ordered-finite} \end{center} \end{figure} \section{Legendre and Weierstrass normal form.}\label{Section-Legendre-Weierstrass} Recall that if $p(x)$ is a cubic polynomial with complex coefficients and distinct roots, the Riemann surface $E\colon\ y^2=p(x)$ has the projection to the first coordinate as a meromorphic function of degree two which ramifies exactly at the roots of $p(x)$ and at $\infty$ (see \cite[8.10]{Forster}); hence, by Corollary \ref{correspondence-triangles}, every elliptic curve is isomorphic to an elliptic curve of that form.
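For a 4-point set $\{z_1,z_2,z_3,\infty\}$ the formula (\ref{cross-ratio}) degenerates, in the limit $z_4\to\infty$, to $\chi(z_1,z_2,z_3,\infty)=(z_2-z_3)/(z_2-z_1)$, which is visibly unchanged by affine maps $az+b$; this is the numerical shadow of Corollary \ref{correspondence-triangles}. A small Python check (the name \texttt{chi\_inf} is ours):

```python
def chi_inf(z1, z2, z3):
    """chi(z1, z2, z3, infinity): limit of the cross-ratio formula as z4 -> oo."""
    return (z2 - z3) / (z2 - z1)

pts = [0j, 2 + 0j, 2j]
a, b = 1.5 - 0.5j, 2 + 3j                 # an arbitrary affine map az + b
moved = [a * z + b for z in pts]

# affine maps fix infinity, so this cross ratio is unchanged
assert abs(chi_inf(*pts) - chi_inf(*moved)) < 1e-12
```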
By the discussion in the previous section, two such elliptic curves $E_1\colon \ y^2=p(x)$, $E_2\colon \ y^2=q(x)$ are isomorphic if and only if the roots of $p(x)$ and $q(x)$ are equivalent or, equivalently, if their oriented triangles have the same angles (for nondegenerate triangles). \par Given the four points we can also take one of their cross ratios $\lambda$, so we can write the elliptic curve as \begin{equation} E\colon \quad y^2=x(x-1)(x-\lambda),\quad \lambda\in\mathbb{C}-\{0,1\}. \end{equation} This is called a Legendre normal form of an elliptic curve. Two such elliptic curves $E_1\colon \ y^2=x(x-1)(x-\lambda_1)$, $E_2\colon \ y^2=x(x-1)(x-\lambda_2)$ are isomorphic if and only if $\lambda_1$ and $\lambda_2$ are equivalent; we explained in Subsection \ref{triangle-shapes} how to read this equivalence geometrically: they are equivalent if and only if the oriented triangles with vertices at $0,1,\lambda_{1}$ and $0,1,\lambda_{2}$ are similar. \par Another normal form is obtained by considering the centroid $C$ of three points $\{z_1,z_2,z_3\}$ in the complex plane to obtain three points $e_i=z_i-C$ whose centroid is at 0, so the polynomial $4(x-e_1)(x-e_2)(x-e_3)$ has the form $4x^3-g_2x-g_3$; the elliptic curve is then given by: \begin{equation} E\colon\quad y^2=4x^3-g_2x-g_3, \quad \Delta=g_2^3-27g_3^2\neq 0, \end{equation} where $\Delta=g_2^3-27g_3^2$ is the discriminant of the cubic polynomial. Recall that $\Delta\neq 0$ if and only if the cubic polynomial has distinct roots. An elliptic curve of this kind is said to be in Weierstrass normal form. Thus we can interpret geometrically the Weierstrass normal form as all the sets of three points in the complex plane whose centroid is at $0$. As is well known, the centroid can be found geometrically as the intersection of the three medians. As affine transformations preserve centroids, we have that two elliptic curves in Weierstrass normal form are isomorphic if and only if the roots of the polynomials differ by a complex factor.
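Concretely, a Legendre parameter for the branch points $\{r_1,r_2,r_3,\infty\}$ (up to the usual six-fold ambiguity) is obtained from the affine map sending $r_1\mapsto 0$ and $r_2\mapsto 1$; the Python sketch below (the name \texttt{legendre\_lambda} is ours) also checks that $\lambda$ depends only on the affine class of the roots:

```python
def legendre_lambda(r1, r2, r3):
    """Image of r3 under the affine map sending r1 -> 0 and r2 -> 1,
    i.e. one Legendre parameter for the branch points {r1, r2, r3, oo}."""
    return (r3 - r1) / (r2 - r1)

roots = [1 + 1j, 3 + 0j, 2 + 4j]
lam = legendre_lambda(*roots)

# applying an affine map az + b to the roots leaves lambda unchanged,
# since both differences are scaled by a and the shifts b cancel
a, b = 0.7 + 0.2j, -5 + 1j
assert abs(legendre_lambda(*[a * r + b for r in roots]) - lam) < 1e-12
```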
\subsection{Some examples} To illustrate the above discussion we give a couple of examples. First we compute a Weierstrass and a Legendre normal form of the Fermat cubic in this setting: \begin{example}\label{example-Fermat} The Fermat cubic is an elliptic curve given by $x^3+y^3=1$. We can take, for example, the meromorphic function $x+y$; it has degree 2 with critical values $0$ and the cube roots of $4$. Sending $0$ to $\infty$ by $1/z$, we obtain that the Fermat cubic corresponds to the equilateral triangle with vertices at the cube roots of $1/4$. In this case the centroid is already at the origin, so the Weierstrass normal form is $y^2=4x^3-1$. Denote by $\rho$ the cube root of unity $\exp(2\pi i/3)$. To obtain a cross ratio, hang the equilateral triangle on the interval $[0,1]$; the vertices are $\{0,1,-\rho\}$. So the Legendre normal form is $y^2=x(x-1)(x+\rho)$. \end{example} \par For the next example we need elliptic curves given by quartic equations. Given a quartic polynomial $q(x)$ with complex coefficients and distinct roots, the Riemann surface $E\colon \ y^2=q(x)$ has the projection to the first coordinate as a meromorphic function of degree 2 whose critical values are exactly the roots of $q(x)$ (see \cite[8.10]{Forster}). Note that the curve $E$ does not have a smooth projective closure in $\mathbb{CP}^2$. \begin{example}\label{example-Steinmetz} In \cite[\S 2.3.5]{Steinmetz} the following family of algebraic curves is considered: $F\colon\quad x^n+y^m=1\quad (n\geq m\geq 2)$. For the cases $(4,2)$, $(3,3)$ and $(3,2)$ these are the elliptic curves: \begin{equation*} x^4+y^2=1,\quad x^3+y^3=1,\quad x^3+y^2=1. \end{equation*} With our method we can see that the last two are isomorphic to each other and not isomorphic to the first. To see this, consider the projection to the first coordinate on the curve $(n,m)=(4,2)$: it ramifies at the fourth roots of $1$ and, as these lie on a common circle, the cross ratio is real.
The second case was calculated in the previous example; it corresponds to an equilateral triangle. In the third case the projection to the first coordinate ramifies at the cubic roots of unity and $\infty$. By the previous discussion, the claim follows. \end{example} \section{The Jacobi and Edwards normal form}\label{Section-Jacobi-Edwards} Recall that every 4-point set in the real line can be mapped by some Möbius function to four points symmetric with respect to zero, $\{a,-1/a,-a,1/a\}$. In fact, this can be done with every four points in the complex plane. Indeed, for $a\in \mathbb{C}-\{0\}$ such that $a^4\neq 1$, the cross ratio \begin{equation} \varphi(a)=\chi\left(a,-\frac{1}{a},-a,\frac{1}{a}\right)= \left(\frac{1-a^2}{1+a^2}\right)^2 \end{equation} \noindent is well defined, and as a function of $a$ it is meromorphic on the whole Riemann sphere; therefore it takes every value in $\widehat{\mathbb{C}}-\{0,1,\infty\}$ as $a$ ranges over the points with $a^4\neq 1$ and $a\neq 0$. Then every elliptic curve is isomorphic to one in the form \begin{equation}\label{jacobi2} y^2=(x^2-a^2)\left(a^2x^2-1\right),\quad \text{with}\ a^{4}\neq 1 \ \text{and}\ a\neq 0. \end{equation} The projection to the first coordinate is a meromorphic function of degree 2 whose critical values are $\{a,-1/a,-a,1/a\}$. Note that (\ref{jacobi2}) can be reduced to the curve: \begin{equation}\label{jacobi-form} y^2=(x^2-1)\left(k^2x^2-1\right),\quad \text{with}\ k^{2}\neq 1\ \text{and}\ k\neq 0, \end{equation} \noindent where $k=a^2$; this can be achieved by applying the transformation $\mu(z)=z/a$ to the points $\{a,-1/a,-a,1/a\}$. The equation (\ref{jacobi-form}) is called a Jacobi normal form of an elliptic curve. Therefore every elliptic curve is isomorphic to one elliptic curve in the Jacobi normal form. In fact this elliptic curve is isomorphic to the normal form defined by Edwards, see \cite{Edwards}.
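The identity $\varphi(a)=\left((1-a^2)/(1+a^2)\right)^2$ can be verified numerically. The paper's cross-ratio convention is fixed by its formula (\ref{cross-ratio}), which lies outside this excerpt; the sketch below assumes the common convention $\chi(z_1,z_2;z_3,z_4)=\frac{(z_1-z_3)(z_2-z_4)}{(z_1-z_4)(z_2-z_3)}$, under which the ordering $(a,-a,1/a,-1/a)$ of the four symmetric points reproduces the stated value (any reordering yields one of the six equivalent cross ratios).

```python
# Check that the four symmetric points {a, -1/a, -a, 1/a} have cross ratio
# ((1-a^2)/(1+a^2))^2.  Convention and ordering are assumptions (see text).
def chi(z1, z2, z3, z4):
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

a = 1.7 + 0.3j                      # any a with a != 0 and a**4 != 1
value = chi(a, -a, 1/a, -1/a)       # ordering chosen to match the formula
expected = ((1 - a**2) / (1 + a**2))**2

assert abs(value - expected) < 1e-12
```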
The Edwards normal form is by definition: \begin{equation}\label{edwards-form} x^2+y^2=a^2+a^2x^2y^2,\quad \text{with}\ a^{4}\neq 1 \ \text{and}\ a\neq 0, \end{equation} in this case the projection to the first coordinate is of degree 2 and ramifies exactly at $\{a,-1/a,-a,1/a\}$ (see Figure \ref{symmetric-points}). \begin{figure} \begin{center} \includegraphics[width=10cm]{symmetric-points.pdf} \caption{A configuration of points for the Edwards normal form with $a=1.34023+1.032 i$.} \label{symmetric-points} \end{center} \end{figure} \par We summarize the previous discussion in the following theorem: \begin{theorem}\label{Jacobi-Edwards-equivalence} Every elliptic curve $X$ is isomorphic to one elliptic curve in the form \begin{equation*} y^2=(x^2-a^2)\left(a^2x^2-1\right),\quad \text{with}\ a^{4}\neq 1 \ \text{and}\ a\neq 0, \end{equation*} \noindent which is isomorphic to the elliptic curve in the Edwards normal form (\ref{edwards-form}) and isomorphic to the elliptic curve in the Jacobi normal form (\ref{jacobi-form}) with $k=a^{2}$. \end{theorem} For a different approach to the above equivalence, consult \cite{Edwards}. \section{The $J$ invariant and the Hesse normal form}\label{Section-Hesse} We say that $\lambda$ and $\lambda'\in \widehat{\mathbb{C}}-\{0,1,\infty\}$ are equivalent if $\{0,1,\lambda,\infty\}$ and $\{0,1,\lambda',\infty\}$ are Möbius equivalent; this occurs if and only if there is a Möbius transformation $\mu$ that permutes $\{0,1,\infty\}$ and satisfies $\mu(\lambda)=\lambda'$. Computing these cross ratios, we obtain that the points equivalent to $\lambda$ are: \begin{equation}\label{equivalent-cross-ratios} \lambda,\quad \frac{1}{\lambda},\quad 1-\lambda,\quad \frac{1}{1-\lambda},\quad \frac{\lambda}{\lambda-1},\quad \frac{\lambda-1}{\lambda}.
\end{equation} Define the rational function \begin{equation}\label{j-invariant} J(\lambda)=\frac{(\lambda^2-\lambda+1)^3}{\lambda^2(\lambda-1)^2}, \end{equation} \noindent this is a rational map of degree 6 called the $J$ invariant; it is an invariant because $\lambda$ is equivalent to $\lambda'$ if and only if $J(\lambda)=J(\lambda')$ (our definition of $J$ follows \cite[\S 6.3.2]{Donaldson}; it differs from the definition of other authors by some factor). Then the $J$ invariant is a well-defined function on $\mathcal{M}$ (viewed as four points on the sphere modulo Möbius transformations); the value of $J$ at a set of four points $\{z_1, z_2, z_3, z_4 \}$ is $J(\lambda)$, where $\{0,1,\lambda,\infty\}$ is a 4-point set equivalent to $\{z_1, z_2, z_3, z_4 \}$. \par In the following diagram we represent the critical values and the critical points of $J$; the weights of the arrows represent the multiplicities of the critical points: \begin{equation}\label{branching-diagram} \xymatrix{ 0\ar^2[rd] & 1\ar^2[d] & \infty\ar_2[dl] & 2\ar^2[rd] & \frac{1}{2}\ar^2[d] & -1\ar_2[ld] & -\rho\ar^3[d] & -\rho^2\ar_3[ld] \\ & \infty & & & \frac{3^3}{4} & & 0 & } \end{equation} \noindent where $\rho$ is the cubic root $\exp(2\pi i/3)$. Note that the Fermat cubic corresponds to a critical point of $J$, namely an equilateral triangle. The $J$ invariant can be factorized into the following rational functions (see \cite[p. 93]{Donaldson}): \begin{equation}\label{decomposition} \widehat{\mathbb{C}}\xrightarrow{\frac{\rho z+\rho^2}{z+\rho^2}} \widehat{\mathbb{C}}\xrightarrow{z^3+\frac{1}{z^3}} \widehat{\mathbb{C}} \xrightarrow{-\frac{3^3}{z-2}} \widehat{\mathbb{C}}, \end{equation} \par \noindent (the constant $-3^3$ in the last function in (\ref{decomposition}) differs from the constant that appears in \cite[p. 93]{Donaldson}, which seems to contain a mistake).
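The invariance of $J$ under the six substitutions, its critical values, and the factorization (\ref{decomposition}) can all be checked numerically at sample points; the sketch below does this with the formulas exactly as displayed above (the sample point $\lambda=0.3+1.1i$ is an arbitrary choice).

```python
import cmath

def J(t):  # the J invariant, in the normalization used above
    return (t*t - t + 1)**3 / (t*t * (t - 1)**2)

lam = 0.3 + 1.1j                    # a generic sample point
orbit = [lam, 1/lam, 1 - lam, 1/(1 - lam), lam/(lam - 1), (lam - 1)/lam]
assert all(abs(J(t) - J(lam)) < 1e-9 for t in orbit)

# critical values: 2, 1/2, -1 map to 27/4, and -rho, -rho^2 map to 0
rho = cmath.exp(2j * cmath.pi / 3)
assert all(abs(J(t) - 27/4) < 1e-9 for t in (2, 0.5, -1))
assert all(abs(J(t)) < 1e-9 for t in (-rho, -rho**2))

# the factorization of J as a composition of three rational maps
f1 = lambda z: (rho*z + rho**2) / (z + rho**2)
f2 = lambda z: z**3 + 1/z**3
f3 = lambda z: -27 / (z - 2)
assert abs(f3(f2(f1(lam))) - J(lam)) < 1e-9
```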
\par Consider the elliptic curve in the form \begin{equation} x^3+y^3+1=3kxy, \quad k\in \mathbb{C}\ \text{with}\ k^3\neq 1, \end{equation} called the Hesse normal form. As in Example \ref{example-Fermat} we take the function $x+y$; this is also of degree two, with critical values at $-k$ and at the roots of the cubic polynomial \begin{equation} z^3-3kz^2+4. \end{equation} \noindent This polynomial has discriminant $\Delta = 4^23^3(k^3-1)$, so by our hypothesis it has three distinct roots. So we can take the value of the $J$ invariant at the set of critical values, obtaining a function $\varphi(k)$ depending on $k$ and well defined on $\widehat{\mathbb{C}}-\{1,\rho,\rho^2,\infty\}$. Since the roots of a polynomial depend continuously on the coefficients, $\varphi(k)$ is a continuous function on $\widehat{\mathbb{C}}-\{1,\rho,\rho^2,\infty\}$. It is also analytic on $\mathbb{C}-\{0,1,\rho,\rho^2\}$, as we can see by looking at the general formulas for the roots of cubic equations (taking locally suitable branches of the roots) and using formula (\ref{cross-ratio}) with $z_4=-k$: \begin{equation}\label{z-nu} z_\nu=k-\rho^\nu\alpha-\frac{1}{\rho^\nu}\frac{k^2}{\alpha},\quad \nu=1,2,3, \quad \text{where}\ \alpha= \sqrt[3]{2-k^3+2i\sqrt{k^3-1}}. \end{equation} \par Let us analyse what happens at $\infty$ and at the cubic roots of 1. If $k_n$ is a sequence such that $k_n\to \rho^\nu$, then, again because the roots depend continuously on the coefficients, for each $n$ we can choose a labelling of the roots of $z^3-3k_nz^2+4$ such that the cross ratios tend to $\infty$, and then $\varphi(k_n)\to \infty$. \par On the other hand, if $k_n\to \infty$, we claim that $\varphi(k_n)\to \infty$. Here is a sketch of the proof: considering the equation (\ref{z-nu}), we have that $\alpha/k$ tends to $-1,-\rho$ or $-\rho^2$ as $k\to \infty$, depending on the choice of the branch of the cubic root in $\alpha$.
With this, it is straightforward to check that $\chi(z_1,z_2,z_3,-k)$ tends to $0,1$ or $\infty$ as $k\to \infty$, according to the choice of the branch. Then, if we take $k=k_n$ in (\ref{z-nu}) for each $n$, the cross ratios $\chi(z_1,z_2,z_3,-k_n)$ accumulate on a subset of $\{0, 1,\infty\}$ as $k_n\to \infty$. In any case $J$ tends to $\infty$. \par Then $\varphi(k)$ extends to a continuous function on the whole Riemann sphere which is analytic outside a finite set, and hence a rational function, whose poles are $\infty$ and the cubic roots of 1. \par Hence, for every $\lambda \in \widehat{\mathbb{C}}-\{0,1,\infty\}$ there exists $k\in \mathbb{C}$ with $k^3\neq 1$ such that $\varphi(k)=J(\lambda)$; then $\lambda$ is equivalent to a cross ratio of the set formed by $-k$ and the roots of $z^3-3k z^2+4$. \par We summarize the previous discussion in the following theorem: \begin{theorem}\label{Theorem-Hesse} Every elliptic curve $X$ is isomorphic to one elliptic curve given in the Hesse normal form: \begin{equation}\label{hesse-normal-form} x^3+y^3+1=3kxy, \quad k\in \mathbb{C}\ \text{with}\ k^3\neq 1, \end{equation} \noindent which is isomorphic to the elliptic curve in the form \begin{equation}\label{new-form} y^2=(x+k)(x^3-3kx^2+4),\quad k\in \mathbb{C}\ \text{with}\ k^3\neq 1. \end{equation} \end{theorem} \begin{figure} \begin{center} \includegraphics[width=12cm]{configuration-Hesse.pdf} \caption{A configuration of points for the Hesse normal form with $k=-2.39882+1.27189 i$.} \label{configuration-Hesse} \end{center} \end{figure} For different approaches consult \cite{Bonifant-Milnor}, \cite{Frium} and \cite{Popescu}. According to the formula obtained in \cite{Popescu} or in \cite{Bonifant-Milnor}, the rational function $\varphi(k)$ should be: \begin{equation} \varphi(k)=\frac{27}{4}\left(\frac{k(k^3+8)}{4(k^3-1)}\right)^3. \end{equation} \FloatBarrier
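The closed formula for $\varphi(k)$ quoted from \cite{Popescu} and \cite{Bonifant-Milnor} can be tested numerically against the definition of $\varphi$: solve $z^3-3kz^2+4=0$, adjoin $-k$, take a cross ratio of the four points and evaluate $J$. The sketch below does this for sample values of $k$; the Cardano solver and the cross-ratio convention are our own choices (any ordering of the four points yields one of the six equivalent cross ratios, so $J$ is well defined on the set).

```python
import cmath

def cubic_roots(b, c, d):
    """Roots of z^3 + b z^2 + c z + d via Cardano's formula."""
    p = c - b*b/3
    q = 2*b**3/27 - b*c/3 + d
    s = cmath.sqrt((q/2)**2 + (p/3)**3)
    u = (-q/2 + s)**(1/3)
    if abs(u) < 1e-12:                    # degenerate branch: use the other sign
        u = (-q/2 - s)**(1/3)
    w = cmath.exp(2j*cmath.pi/3)
    return [u*w**m - p/(3*u*w**m) - b/3 for m in range(3)]

def chi(z1, z2, z3, z4):                  # one fixed cross-ratio convention
    return ((z1 - z3)*(z2 - z4)) / ((z1 - z4)*(z2 - z3))

def J(t):
    return (t*t - t + 1)**3 / (t*t*(t - 1)**2)

def phi_direct(k):                        # J of {-k} joined with the cubic's roots
    r = cubic_roots(-3*k, 0, 4)
    return J(chi(r[0], r[1], r[2], -k))

def phi_formula(k):                       # the quoted closed formula
    return 27/4 * (k*(k**3 + 8) / (4*(k**3 - 1)))**3

for k in (2, 0.5 + 1.5j):
    assert abs(phi_direct(k) - phi_formula(k)) < 1e-8 * (1 + abs(phi_formula(k)))
```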
\bigskip
\noindent arXiv:1810.08742 [math.CV] (2018). \emph{Some remarks on the correspondence between elliptic curves and four points in the Riemann sphere}. \url{https://arxiv.org/abs/1810.08742}

\noindent\textbf{Abstract.} In this paper we relate some classical normal forms for complex elliptic curves in terms of 4-point sets in the Riemann sphere. Our main result is an alternative proof that every elliptic curve is isomorphic as a Riemann surface to one in the Hesse normal form. In this setting, we give an alternative proof of the equivalence between the Edwards and the Jacobi normal forms. Also, we give a geometric construction of the cross ratios for 4-point sets in general position.
\bigskip
\noindent\url{https://arxiv.org/abs/1712.00887}

\noindent\textbf{Regularity of Edge Ideals and Their Powers}

\noindent\textbf{Abstract.} We survey recent studies on the Castelnuovo-Mumford regularity of edge ideals of graphs and their powers. Our focus is on bounds and exact values of $\text{reg}\, I(G)$ and the asymptotic linear function $\text{reg}\, I(G)^q$, for $q \geq 1$, in terms of combinatorial data of the given graph $G$.
\section{Introduction} Monomial ideals are classical objects that live at the crossroads of three areas in mathematics: algebra, combinatorics and topology. Investigating monomial ideals has led to many important results in these areas. The construction of edge ideals of (hyper)graphs has resurrected much interest in, and generated a large amount of work on, this class of ideals (cf. \cite{HVTsurvey, MV, V} and references therein). In this paper, we survey recent works on the Castelnuovo-Mumford regularity of edge ideals of graphs and their powers. Castelnuovo-Mumford regularity is an important algebraic invariant which, roughly speaking, measures the complexity of ideals and modules. Restricting to the class of edge ideals of graphs, our focus is on studies that relate this algebraic invariant to the combinatorial data of given graphs. Our interest in powers of edge ideals is driven by the celebrated result of Cutkosky, Herzog and Trung \cite{CHT}, and, independently, Kodiyalam \cite{Ko}, that for any homogeneous ideal $I$ in a standard graded $k$-algebra $R$, the regularity of $I^q$ is asymptotically a linear function in $q$; that is, there exist constants $a$ and $b$ such that for all $q \gg 0$, $\reg I^q = aq+b$. Generally, the problem of finding the exact linear form $aq+b$ and the smallest value $q_0$ such that $\reg I^q = aq+b$ for all $q \ge q_0$ has proved to be very difficult. We thus focus our attention also on the problem of understanding the linear form $aq+b$ and the value $q_0$ for edge ideals via combinatorial data of given graphs. The ongoing research program in which algebraic invariants and properties of edge ideals and their powers are investigated through combinatorial structures of corresponding graphs has produced many exciting results and, at the same time, opened many further interesting questions and conjectures.
It is our hope to collect these works together in a systematic way to give a better overall picture of the problems and the current state of the art of this research area. For this purpose, we shall state theorems and present sketches of the proofs; instead of giving full detailed arguments, our aim is to exhibit the general ideas behind these results, and the similarities and differences in developing from one theorem to the next. This paper can be viewed as a complement to the survey done in \cite{HaSurvey}. While in \cite{HaSurvey} the focus was on the regularity of squarefree monomial ideals in general, our attention in this paper is restricted mostly to edge ideals of graphs, but we enlarge our scope by discussing also the regularity of powers of edge ideals. The latter is an important area of study in itself, with deep motivation from geometry, and has seen a surge of interest in the last few years. The paper is outlined as follows. In Section \ref{chapter2}, we collect notations and terminology used in the paper. In Section \ref{chapter3}, we present necessary tools which were used in works in this area. Particularly, we shall give Hochster's and Takayama's formulas, which relate the graded Betti numbers of a monomial ideal to the reduced homology groups of certain simplicial complexes. We shall also describe inductive techniques that have been the backbones of most of the studies being surveyed. In Section \ref{chapter4}, we survey results on the regularity of edge ideals of graphs. This section is divided into two subsections; the first one focuses on bounds on the regularity in terms of combinatorial data of the graph, and the second one collects the cases where the regularity of an edge ideal can be computed explicitly. Section \ref{chapter5} discusses the regularity of powers of edge ideals.
This section again is divided into two subsections, which in turn examine bounds for the asymptotic linear function of the regularity of powers of edge ideals and cases when this asymptotic linear function can be explicitly described. In Section \ref{chapter6}, we recall a number of results extending the study of edge ideals of graphs to hypergraphs. Since our focus in this paper is mostly on edge ideals of graphs, the results in Section \ref{chapter6} will be representative rather than exhaustive. We end the paper with Section \ref{chapter7}, in which we state a number of open problems and questions which we would like to see answered. We hope that these problems and questions will stimulate further studies in this research area. \begin{acknowledgement} The authors would like to thank the organizers of SARC (Southern Regional Algebra Conference) 2017 for their encouragement, which led us to writing this survey. The last named author is partially supported by the Simons Foundation (grant \#279786) and the Louisiana Board of Regents (grant \#LEQSF(2017-19)-ENH-TR-25). \end{acknowledgement} \section{Preliminaries}\label{chapter2} In this section we recall preliminary notations and terminology of combinatorics and algebra that will be used throughout this survey. In Subsection \ref{sub.comb} we give various definitions on graphs, hypergraphs, and simplicial complexes. In Subsection \ref{sub.alg} we recall basic homological algebra terminology. Finally, in the last subsection we define various algebraic objects associated to (hyper)graphs and simplicial complexes. \subsection{Combinatorial preliminaries} \label{sub.comb} For any finite simple graph $G$, with set of vertices $V(G)$ and set of edges $E(G)$, we define some graph-theoretic notions as follows. For any $A\subseteq V(G)$, the \textit{induced subgraph} on $A$ is the graph whose set of vertices is $A$ and whose edges are exactly the edges of $G$ that join two elements of $A$.
In the following example (Figure \ref{fig:inducedsubgraph}) $H'$ is an induced subgraph of $H$, while $H''$ is not an induced subgraph of $H$ as it misses the edge $yw$ of $H$. \begin{figure}[h] \centering \begin{tikzpicture} [scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1.3pt}] \useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4); \node [anchor=base] at (-3.3,-.1){$H$:}; \node [vertices] (1) at (0:2) {}; \node [anchor=base] at (2.4,-.1) {$w$}; \node [vertices] (2) at (60:2) {}; \node [anchor=base] at (60:2.3) {$z$}; \node [vertices] (3) at (120:2) {}; \node [anchor=base] at (120:2.3) {$y$}; \node [vertices] (4) at (180:2) {}; \node [anchor=base] at (-2.4, -.1) {$x$}; \node [vertices] (5) at (240:2) {}; \node [anchor=base] at (240:2.5) {$t$}; \node [vertices] (6) at (300:2) {}; \node [anchor=base] at (300:2.5) {$s$}; \foreach \to/\from in {2/1, 3/1, 4/1, 4/3, 5/4, 6/1} \draw [-] (\to)--(\from); \end{tikzpicture} \end{figure} \begin{figure}[h] \begin{tikzpicture} [scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1.3pt}] \useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4); \node [anchor=base] at (-3.3,-.1){$H'$:}; \node [vertices] (1) at (0:2) {}; \node [anchor=base] at (2.4,-.1) {$w$}; \node [vertices] (2) at (60:2) {}; \node [anchor=base] at (60:2.3) {$z$}; \node [vertices] (3) at (120:2) {}; \node [anchor=base] at (120:2.3) {$y$}; \node [vertices] (4) at (180:2) {}; \node [anchor=base] at (-2.4, -.1) {$x$}; \node [vertices] (6) at (300:2) {}; \node [anchor=base] at (300:2.5) {$s$}; \foreach \to/\from in {2/1, 3/1, 4/1, 4/3,6/1} \draw [-] (\to)--(\from); \end{tikzpicture} \hspace*{2.5cm}\begin{tikzpicture} [scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1.3pt}] \useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4); \node [anchor=base] at (-3.3,-.1){$H''$:}; \node [vertices] (1) at (0:2) {}; \node [anchor=base] at (2.4,-.1) {$w$}; \node [vertices] (2) at (60:2) {}; \node [anchor=base] at (60:2.3) {$z$}; \node [vertices] (3) at 
(120:2) {}; \node [anchor=base] at (120:2.3) {$y$}; \node [vertices] (4) at (180:2) {}; \node [anchor=base] at (-2.4, -.1) {$x$}; \node [vertices] (6) at (300:2) {}; \node [anchor=base] at (300:2.5) {$s$}; \foreach \to/\from in {2/1, 4/1, 4/3,6/1} \draw [-] (\to)--(\from); \end{tikzpicture} \caption{Induced subgraphs} \label{fig:inducedsubgraph} \end{figure} For any vertex $x$ in a graph $G$, the \textit{degree} of $x$, denoted by $d(x)$, is the number of vertices connected to $x$. For any vertex $x$ in a graph $G$, the \textit{neighborhood} of $x$, denoted by $N_G[x]$, is the set consisting of $x$ and all its neighbors. For the graph $H''$ above, $d(x)=2$ and $N_{H''}[x]=\{x,y,w\}$. For any graph $G$ and any vertex $x$, by $G\setminus x$ we denote the induced subgraph on $V(G)\setminus \{x\}$. For any graph $G$, the \textit{complement graph}, denoted by $G^c$, is the graph whose vertices are the vertices of $G$ and whose edges are the non-edges of $G$; i.e., for $a,b \in V(G)$, $ab$ is an edge in $G^c$ if and only if $ab$ is \textit{not} an edge in $G$. For the graph $H$ in Figure \ref{fig:inducedsubgraph}, the following is $H^c$.
\begin{figure}[h] \begin{tikzpicture} [scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1.3pt}] \useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4); \node [anchor=base] at (-3.3,-.1){$H^c$:}; \node [vertices] (1) at (0:2) {}; \node [anchor=base] at (2.4,-.1) {$w$}; \node [vertices] (2) at (60:2) {}; \node [anchor=base] at (60:2.3) {$z$}; \node [vertices] (3) at (120:2) {}; \node [anchor=base] at (120:2.3) {$y$}; \node [vertices] (4) at (180:2) {}; \node [anchor=base] at (-2.4, -.1) {$x$}; \node [vertices] (5) at (240:2) {}; \node [anchor=base] at (240:2.5) {$t$}; \node [vertices] (6) at (300:2) {}; \node [anchor=base] at (300:2.5) {$s$}; \foreach \to/\from in {1/5,2/3,2/4,2/5,2/6,3/5,3/6,4/6,5/6} \draw [-] (\to)--(\from); \end{tikzpicture} \caption{Complement graph}\label{fig:complementgraph} \end{figure} A \textit{cycle} of length $n$ in a graph $G$ is a closed walk along its edges, $x_1x_2,x_2x_3, \ldots, x_{n-1} x_n, x_nx_1$, such that $x_i \neq x_j$ for $ i\neq j$. We denote the cycle on $n$ vertices by $C_n.$ A \textit{chord} of the cycle $C_n$ is an edge $x_i x_j$ joining two non-consecutive vertices of the cycle. A graph is said to be \textit{chordal} if every cycle of length greater than or equal to $4$ has a chord. A graph $G$ is said to be \textit{co-chordal} if the complement $G^c$ is chordal. The graph $H$ in Figure \ref{fig:inducedsubgraph} is chordal, and hence $H^c$ in Figure \ref{fig:complementgraph} is co-chordal. A \textit{forest} is a graph without any cycles. A \textit{tree} is a connected forest. A \textit{complete} graph (or \emph{clique}) on $n$ vertices is a graph in which any two vertices are joined by an edge. It is denoted by $K_n$. A \textit{bipartite graph} is a graph whose vertices can be split into two groups such that every edge joins vertices from different groups. It is easy to see that a graph is bipartite if and only if it has no cycle of odd length.
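The bipartiteness characterization just mentioned can be tested with a short breadth-first 2-coloring routine. The sketch below is our own illustration (not from the survey); it confirms that the tree $H''$ of Figure \ref{fig:inducedsubgraph} is bipartite with bipartition $\{x,z,s\}$ and $\{y,w\}$, while the odd cycle $C_5$ is not bipartite.

```python
from collections import deque

def bipartition(vertices, edges):
    """Return a 2-coloring {v: 0 or 1} if the graph is bipartite, else None."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    color = {}
    for start in vertices:                 # handle disconnected graphs too
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in color:
                    color[u] = 1 - color[v]
                    queue.append(u)
                elif color[u] == color[v]:  # an odd cycle was found
                    return None
    return color

# H'' from Figure 1 is a tree with bipartition {x,z,s} and {y,w}
H2 = bipartition('wzyxs', [('z', 'w'), ('x', 'w'), ('x', 'y'), ('s', 'w')])
assert H2 is not None
assert {v for v in H2 if H2[v] == H2['x']} == {'x', 'z', 's'}

# C5 is an odd cycle, hence not bipartite
C5 = bipartition(range(5), [(i, (i + 1) % 5) for i in range(5)])
assert C5 is None
```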
We use $K_{m, n}$ to denote the complete bipartite graph with $m$ vertices on one side, and $n$ on the other. We observe that $H''$ in Figure \ref{fig:inducedsubgraph} has no cycle, so it is a tree. As a consequence it is also a bipartite graph, with bipartition $\{x,z,s\}$ and $\{y,w\}$. Let $G$ be a graph. We say two disjoint edges $uv$ and $xy$ form a \emph{gap} in $G$ if $G$ does not have an edge with one endpoint in $\{u,v\}$ and the other in $\{x,y\}$. A graph without a gap is called \emph{gap-free}. Equivalently, $G$ is gap-free if and only if $G^c$ contains no induced $C_4$. The graph $H$ in Figure \ref{fig:inducedsubgraph} is gap-free. In the following graph, $\{x,y\}$ and $\{z,w\}$ form a gap. \begin{figure}[h] \begin{tikzpicture} [scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1.3pt}] \useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4); \node [anchor=base] at (-3.3,-.1){$G$:}; \node [vertices] (1) at (0:2) {}; \node [anchor=base] at (2.4,-.1) {$w$}; \node [vertices] (2) at (60:2) {}; \node [anchor=base] at (60:2.3) {$z$}; \node [vertices] (3) at (120:2) {}; \node [anchor=base] at (120:2.3) {$y$}; \node [vertices] (4) at (180:2) {}; \node [anchor=base] at (-2.4, -.1) {$x$}; \node [vertices] (6) at (300:2) {}; \node [anchor=base] at (300:2.5) {$s$}; \foreach \to/\from in {2/1,4/3,6/1} \draw [-] (\to)--(\from); \end{tikzpicture} \caption{A graph which is not gap-free} \label{fig:notgapfree} \end{figure} A \textit{matching} in a graph is a set of pairwise disjoint edges. A matching is called an \textit{induced matching} if the induced subgraph on the vertices of the edges forming the matching has no other edge. We observe that $\{x,y\}$ and $\{z,w\}$ form an induced matching for the graph $G$ in Figure \ref{fig:notgapfree}. Any graph isomorphic to $K_{1, 3}$ is called a \emph{claw}. Any graph isomorphic to $K_{1,n}$ is called an \emph{$n$-claw}. If $n >1$, the vertex with degree $n$ is called the root in $K_{1,n}$.
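Gaps and induced matchings are finite conditions, so they can be checked by brute force. The sketch below is illustrative code of our own, with the edge sets read off from Figures \ref{fig:inducedsubgraph} and \ref{fig:notgapfree}; it verifies that $H$ is gap-free, that $G$ is not, and that $\{xy, zw\}$ is an induced matching in $G$.

```python
from itertools import combinations

def is_gap(e1, e2, edges):
    """Two disjoint edges form a gap if no edge joins them."""
    if set(e1) & set(e2):
        return False
    return not any((set(e) & set(e1)) and (set(e) & set(e2)) for e in edges)

def is_gap_free(edges):
    return not any(is_gap(e1, e2, edges) for e1, e2 in combinations(edges, 2))

def is_induced_matching(match, edges):
    """Pairwise disjoint edges whose induced subgraph has no other edge."""
    verts = set().union(*(set(e) for e in match))
    if len(verts) != 2 * len(match):       # the edges must be disjoint
        return False
    induced = [e for e in edges if set(e) <= verts]
    return sorted(map(sorted, induced)) == sorted(map(sorted, match))

H = ['zw', 'yw', 'xw', 'xy', 'tx', 'sw']   # Figure 1 (edges as vertex pairs)
G = ['zw', 'xy', 'sw']                     # Figure 4

assert is_gap_free(H)
assert is_gap('xy', 'zw', G)
assert not is_gap_free(G)
assert is_induced_matching(['xy', 'zw'], G)
```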
A graph without an induced claw is called \emph{claw-free}. A graph without an induced \emph{n-claw} is called \emph{n-claw-free}. \begin{figure}[h] \begin{tikzpicture} [scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1.3pt}] \useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4); \node [vertices] (1) at (0:0) {}; \node [anchor=base] at (0.3,-.1) {$w$}; \node [vertices] (3) at (120:2) {}; \node [anchor=base] at (120:2.3) {$y$}; \node [vertices] (4) at (180:2) {}; \node [anchor=base] at (-2.4, -.1) {$x$}; \node [vertices] (5) at (240:2) {}; \node [anchor=base] at (240:2.5) {$t$}; \foreach \to/\from in {4/5,4/3,4/1} \draw [-] (\to)--(\from); \end{tikzpicture} \caption{A 3-claw (or simply, a claw)} \label{fig:claw} \end{figure} A graph is called a \emph{diamond} if it is isomorphic to the graph with vertex set $\{a,b,c,d\}$ and edge set $\{ab,bc,ac,ad,cd\}$. A graph without an induced diamond is called \emph{diamond-free}. A graph is said to be \textit{planar} if as a $1$-dimensional topological space it can be embedded in the complex plane, i.e., if it can be drawn in the plane in such a way that no pair of edges cross. The following graph in Figure \ref{fig:diamond} is a diamond. We observe that it is also planar. 
\begin{figure}[h] \begin{tikzpicture} [scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1.3pt}] \useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4); \node [vertices] (2) at (60:2) {}; \node [anchor=base] at (60:2.3) {$z$}; \node [vertices] (3) at (120:2) {}; \node [anchor=base] at (120:2.3) {$y$}; \node [vertices] (4) at (180:2) {}; \node [anchor=base] at (-2.4, -.1) {$x$}; \node [vertices] (5) at (240:2) {}; \node [anchor=base] at (240:2.5) {$t$}; \foreach \to/\from in { 3/2, 4/3, 5/2, 5/3, 5/4} \draw [-] (\to)--(\from); \end{tikzpicture} \caption{A diamond} \label{fig:diamond} \end{figure} Any graph isomorphic to the graph with set of vertices $\{w_1,w_2,w_3,w_4,w_5\}$ and set of edges $\{w_1w_3,w_2w_3,w_3w_4,w_3w_5,w_4w_5\}$ is called a \textit{cricket}. A graph without an induced cricket is called \emph{cricket-free}. It is easy to see that a claw-free graph is also cricket-free. \begin{figure}[h] \includegraphics[width=0.18\linewidth]{cricket.pdf} \caption{A cricket} \label{fig:cricket} \end{figure} An edge in a graph is called a \textit{whisker} if one of its vertices has degree one. A graph is called an \textit{anticycle} if its complement is a cycle. \begin{figure}[h] \includegraphics[width=0.7\linewidth]{hypergraph.pdf} \caption{A hypergraph} \label{fig:hypergraph} \end{figure} A \textit{hypergraph} is the natural higher degree analogue of graphs, in the sense that we allow an edge to be a collection of any number of vertices. Formally speaking, a hypergraph $H = (V(H),E(H))$ consists of the \textit{vertices} $V(H)=\{x_1,\ldots,x_n\}$ and the \textit{edges} $E(H)$, where $E(H)$ is a collection of nonempty subsets of the vertices. With this notation, a graph $G$ is a hypergraph whose edges are subsets of cardinality 2. By abusing notation, we often identify an edge $\{x_{i_1}, \dots, x_{i_r}\} \in E(H)$ with the monomial $x_{i_1} \dots x_{i_r}$. 
Figure \ref{fig:hypergraph} depicts a hypergraph with edges $\{x_1x_2x_3x_4, x_3x_4x_5, x_5x_6x_7, x_7x_8,x_9\}.$ In this example, the vertex $x_9$ can also be viewed as an \emph{isolated vertex}. A \textit{clutter} is a hypergraph none of whose edges contains any other edge as a subset. The hypergraph in Figure \ref{fig:hypergraph} is a clutter. A \textit{simplicial complex} with vertices $\{x_1,\ldots,x_n\}$ is a subset of the power set of $\{x_1,\ldots,x_n\}$ which is closed under taking subsets. The sets that constitute a simplicial complex are called its \textit{faces}. Maximal faces under inclusion are called \textit{facets}. Figure \ref{fig:simplicial} gives a simplicial complex with facets $\{a,b,f\}$, $\{c,d,e\}$, $\{f,e\}$, $\{b,c\}$. \begin{figure}[h] \includegraphics[width=0.4\linewidth]{simplicial.pdf} \caption{A simplicial complex} \label{fig:simplicial} \end{figure} An \textit{independent set} in a graph $G$ is a set of vertices no two of which form an edge. The independence complex of a graph $G,$ denoted by $\Delta(G),$ is the simplicial complex whose faces are the independent sets in $G.$ \begin{figure}[hb] \includegraphics[width=0.38\linewidth]{indcmplx.pdf} \caption{A simple graph whose independence complex is in Figure \ref{fig:simplicial}} \label{fig:independence} \end{figure} Let $\Delta$ be a simplicial complex, and let $\sigma \in \Delta$. The \emph{deletion} of $\sigma$ in $\Delta$, denoted by $\del_\Delta(\sigma),$ is the simplicial complex obtained by removing $\sigma$ and all faces containing $\sigma$ from $\Delta$.
The \emph{link} of $\sigma$ in $\Delta$, denoted by $\link_\Delta(\sigma)$, is the simplicial complex whose faces are $$\{ F \in \Delta ~|~ F \cap \sigma = \emptyset, \sigma \cup F \in \Delta\}.$$ A simplicial complex $\Delta$ is recursively defined to be \emph{vertex decomposable} if either: \begin{enumerate} \item[(i)] $\Delta$ is a simplex; or \item[(ii)] there is a vertex $v$ in $\Delta$ such that both $\link_\Delta(v)$ and $\del_\Delta(v)$ are vertex decomposable, and all facets of $\del_\Delta(v)$ are facets of $\Delta$. \end{enumerate} A vertex satisfying condition (ii) is called a \emph{shedding vertex}, and the recursive choice of shedding vertices is called a \emph{shedding order} of $\Delta$. Recall that a simplicial complex $\Delta$ is said to be \emph{shellable} if there exists a linear order of its facets $F_1, F_2, \dots, F_t$ such that for all $k = 2, \dots, t$, the subcomplex $\Big(\bigcup_{i=1}^{k-1} \overline{F_i}\Big) \bigcap \overline{F_k}$ is pure and of dimension $(\dim F_k - 1)$. Here $\overline{F}$ represents the simplex over the vertices of $F$. It is a celebrated fact that \emph{pure} shellable complexes give rise to \emph{Cohen-Macaulay Stanley-Reisner rings}. Recall also that a ring or module is \emph{sequentially Cohen-Macaulay} if it has a filtration in which the factors are Cohen-Macaulay and their dimensions are increasing. This property corresponds to (\emph{nonpure}) shellability in general. Vertex decomposability can be thought of as a combinatorial criterion for shellability and sequential Cohen-Macaulayness. In particular, for a simplicial complex $\Delta$, $$\Delta \textrm{ vertex decomposable } \Rightarrow \Delta \textrm{ shellable } \Rightarrow \Delta \textrm{ sequentially Cohen-Macaulay}.$$ \subsection{Algebraic preliminaries} \label{sub.alg} Let $S = K[x_1, \dots, x_n]$ be a polynomial ring over a field $K$. Let $M$ be a finitely generated ${\mathbb Z}^n$-graded $S$-module.
It is known that $M$ can be successively approximated by free modules. Formally speaking, there exists an exact sequence of minimal possible length, called \textit{a minimal free resolution} of $M$: $$0\longrightarrow \mathbb{F}_p \overset{d_p}\longrightarrow \mathbb{F}_{p-1} \cdots \overset{d_2}\longrightarrow \mathbb{F}_1 \overset{d_1}\longrightarrow \mathbb{F}_0 \overset{d_0}\longrightarrow M \longrightarrow 0 \qquad (*)$$ Here, $\mathbb{F}_i = \bigoplus_{\sigma \in {\mathbb Z}^n} S(-\sigma)^{\beta_{i, \sigma}}$, where $S(-\sigma)$ denotes the free module obtained by shifting the degrees in $S$ by $\sigma$. The numbers $\beta_{i, \sigma}$ are nonnegative integers, called the multigraded Betti numbers of $M$. We often identify $\sigma$ with the monomial whose exponent vector is $\sigma$. For example, over $K[x,y]$, we may write $\beta_{i, x^2y}(M)$ instead of $\beta_{i, (2,1)}(M)$. For every $j \in {\mathbb Z}$, $\beta_{i,j}=\sum_{\{\sigma ~|~ |\sigma|=j\} } \beta_{i ,\sigma}$ is called the $(i,j)$-th standard graded Betti number of $M$. Three very important homological invariants that are related to these numbers are the Castelnuovo-Mumford regularity (or simply regularity), the depth and the projective dimension, denoted by $\text{reg}(M)$, $\text{depth}(M)$ and $\text{pd}(M)$ respectively: \begin{align*} \reg M & = \max \{|\sigma|-i ~|~ \beta_{i , \sigma} \neq 0 \} \\ \depth M & = \inf \{i ~|~\text{Ext}^i(K, M) \neq 0\} \\ \pd M & = \max \{i ~|~ \text{there is a } \sigma, \beta_{i , \sigma} \neq 0 \}. \end{align*} If $S$ is viewed as a standard graded $K$-algebra and $M$ is a graded $S$-module, then the graded Betti numbers of $M$ are also given by $\beta_{i,j}(M)=\dim_K \Tor_{i}(M,K)_j$, and so we have \begin{align*} \reg M & = \max \{j-i ~|~\Tor_{i} (M,K)_j \neq 0 \} \\ \pd M & = \max \{i~|~ \Tor_i (M,K) \neq 0 \} \\ \depth M & = n - \pd M.
\end{align*} In practice, we often work with short exact sequences, so it is worth mentioning that the regularity can also be defined via the vanishing of \textit{local cohomology} modules with respect to the ``irrelevant'' maximal ideal $m=(x_1,\dots ,x_n)$. Particularly, for $i \geq 0,$ define $$a_i(M):= \begin{cases} \max \{j~|~ H_m^i(M)_j \neq 0\} & \text{if } H_m^i (M)\neq 0 \\ - \infty & \text{otherwise.} \end{cases}$$ Then, the regularity of $M$ is also given by $$\reg M = \max_i \{a_i(M)+i\}.$$ \begin{example} \label{ex:resolution} Let $M=\displaystyle \mathbb{Q}[x_1, \dots ,x_5]/ (x_1x_2, x_2x_3, x_3x_4, x_4x_5, x_5x_1)$. Then the minimal free resolution of $M$ is: $$0\longrightarrow \mathbb{F}_3 \overset{d_3}\longrightarrow \mathbb{F}_{2} \overset{d_2}\longrightarrow \mathbb{F}_1 \overset{d_1}\longrightarrow \mathbb{F}_0 \overset{d_0}\longrightarrow M \longrightarrow 0 $$ Here: $$\beta_{0,\sigma}=1\text{ if }\sigma=1, \text{ and } \beta_{0, \sigma}=0 \text{ otherwise}$$ $$\beta_{1, \sigma}=1 \text{ if } \sigma= x_1x_2,x_2x_3,x_3x_4,x_4x_5,x_5x_1, \text{and } \beta_{1, \sigma}=0 \text{ otherwise}$$ $$\beta_{2 ,\sigma}=1 \text{ if } \sigma= x_1x_2x_3,x_2x_3x_4,x_1x_2x_5,x_1x_4x_5,x_3x_4x_5, \text{and } \beta_{2, \sigma}=0 \text{ otherwise}$$ $$\beta_{3, \sigma}=1\text{ if }\sigma=x_1x_2x_3x_4x_5, \text{ and } \beta_{3, \sigma}=0 \text{ otherwise}$$ \end{example} An ideal $I$ in $S$ is said to be \textit{unmixed} if all its associated primes are minimal of the same height. We say $I$ (or equivalently $S/I$) is \textit{Cohen-Macaulay} if the Krull dimension and the depth of $S/I$ are equal. Cohen-Macaulay ideals are always unmixed and are known to have many nice geometric properties. It can be checked that the module $M$ in Example \ref{ex:resolution} is Cohen-Macaulay.
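The invariants in Example \ref{ex:resolution} can be recomputed directly from the multigraded Betti data listed there. The sketch below (our own illustration; each $\sigma$ is stored by its squarefree support) recovers $\reg M = 2$, $\pd M = 3$ and $\depth M = 2$, and compares the depth with the Krull dimension of $M$, which equals the independence number of the 5-cycle, confirming that $M$ is Cohen-Macaulay.

```python
from itertools import combinations

# nonzero multigraded Betti numbers of M = S/I(C5), by homological degree i;
# each sigma is recorded by the support of the corresponding monomial
betti = {
    0: [''],
    1: ['12', '23', '34', '45', '51'],
    2: ['123', '234', '125', '145', '345'],
    3: ['12345'],
}

reg = max(len(set(s)) - i for i, sigmas in betti.items() for s in sigmas)
pd = max(i for i, sigmas in betti.items() if sigmas)
n = 5
depth = n - pd

# Krull dimension of M = independence number of the 5-cycle
edges = {frozenset({j, j % 5 + 1}) for j in range(1, 6)}
dim = max(len(A) for r in range(n + 1)
          for A in combinations(range(1, 6), r)
          if not any(e <= set(A) for e in edges))

assert (reg, pd, depth) == (2, 3, 2)
assert depth == dim        # so M is Cohen-Macaulay
```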
\subsection{Algebraic objects with underlying combinatorial structures} For any (hyper)graph $H$ over the vertex set $V(H) = \{x_1, \dots, x_n\}$, its \textit{edge ideal} is defined as follows: $$I(H)=(\prod_{x \in e} x ~\big|~ e \in E(H)) \subseteq S = K[x_1, \dots, x_n].$$ \begin{example} The edge ideal of the hypergraph $H$ in Figure \ref{fig:hypergraph} is $$I(H)=(x_1x_2x_3x_4, x_3x_4x_5, x_5x_6x_7, x_7x_8,x_9).$$ \end{example} For any simplicial complex $ \Delta$ over the vertex set $V = \{x_1, \dots, x_n\}$, its \textit{Stanley-Reisner ideal} is defined as follows: $$I_{\Delta}=(\prod_{x \in e} x ~\big|~ e \text{ is a minimal non-face of } \Delta) \subseteq S.$$ \begin{example} The Stanley-Reisner ideal of the simplicial complex in Figure \ref{fig:simplicial} is $$I_{\Delta}=(ac,ad,ae,bd,be,cf,df).$$ \end{example} As we have seen before, (hyper)graphs are related to simplicial complexes via the notion of independence complex. The algebraic view of this relation yields the following equality. \begin{lemma}\label{lem.indcomplex} Let $G$ be a simple (hyper)graph and let $\Delta(G)$ be its independence complex. Then $$I(G)=I_{\Delta(G)}.$$ \end{lemma} \begin{example} The edge ideal of the graph $G$ in Figure \ref{fig:independence} is the same as the Stanley-Reisner ideal of the simplicial complex $\Delta$ in Figure \ref{fig:simplicial}.
Note that $\Delta=\Delta(G).$ $$I(G)= (ae,ad,bd,be,bf,ec,ef)$$ \end{example} For any simple graph $G$ with $V(G)=\{x_1,\dots,x_n\}$, we define the $t$-\textit{path ideal} as: $$I_t(G)=(x_{i_1}\cdots x_{i_t} ~|~ i_k \neq i_l \text{ for } k \neq l, x_{i_k}x_{i_{k+1}} \text{ an edge of } G).$$ \begin{example} The $3$-path ideal of the five cycle $C_5: x_1x_2x_3x_4x_5$ is $$I_3(C_5)=(x_1x_2x_3,x_2x_3x_4,x_3x_4x_5,x_4x_5x_1,x_5x_1x_2).$$ \end{example} Finally, as a matter of convention, for any (hyper)graph $G$, by $\reg G$ we mean $\reg I(G).$ \section{Formulas and inductive approaches} \label{chapter3} Computing the non-vanishings of local cohomology modules or the Betti numbers of an ideal can be quite complicated. As a result, inductive techniques that relate the regularity of an edge ideal to that of simplicial complexes and of smaller ideals are commonly employed in the literature. In this context, induced structures such as induced subcomplexes and induced subgraphs have proven to be significant objects in the investigation of the regularity of an edge ideal and its powers. In this section, we focus on the methods that enable us to bound and compute the regularity of an edge ideal and its powers. We start the section by recalling two important formulas. Then we address a few inductive bounds that are widely used in the literature. \subsection{Hochster's and Takayama's Formulas} Hochster's formula has been a significant tool in the study of squarefree monomial ideals due to its power to relate the multigraded Betti numbers of the Stanley-Reisner ideal of a simplicial complex $\Delta$ to the non-vanishings of the reduced simplicial homology groups of $\Delta$ and its induced subcomplexes. \begin{theorem}[\protect{Hochster's Formula \cite{Hochster}}] \label{formula.hochster} Let $\Delta$ be a simplicial complex on the vertex set $V$ and let $I(\Delta)$ be its Stanley-Reisner ideal.
Then $$ \beta_{i,j} (I(\Delta))= \sum_{W \subseteq V, ~ |W| = j} \dim_K\big( \widetilde{H}_{j-i-2} (\Delta_W;K)\big)$$ where $\Delta_W$ is the restriction of $\Delta$ to the vertex set $W.$ \end{theorem} For an arbitrary monomial ideal, Hochster's formula cannot be applied. Takayama's formula, given in \cite{Ta}, performs a similar task to Hochster's formula for this class of ideals. Let $I \subseteq S = K[x_1, \dots, x_n]$ be a monomial ideal. Takayama's formula provides a combinatorial description for the non-vanishings of the ${\mathbb Z}^n$-graded components $H_{{\frak m}}^i (S/I)_{\bf a}$ for $\bf a \in {\mathbb Z}^n,$ and this description is given in terms of certain simplicial complexes $\Delta_{\bf a} (I)$ related to $I.$ Note that $S/I$ is an ${\mathbb N}^n$-graded algebra, and so $H_{{\frak m}}^i (S/I)$ is a ${\mathbb Z}^n$-graded module over $S/I.$ The simplicial complex $\Delta_{\bf a}(I)$ is called the \textit{degree complex} of $I.$ The construction of $\Delta_{\bf a}(I)$ was first given in \cite{Ta} and then simplified in \cite{MT}. We recall the construction from \cite{MT}. For $\textbf{a} =(a_1, \dots, a_n) \in {\mathbb Z}^n,$ set $x^{\bf a}=x_1^{a_1} \cdots x_n^{a_n}$ and $ G_{\bf a} := \{ j \in \{1,\dots, n\} ~|~ a_j < 0\}.$ For every subset $ F \subseteq \{1,\dots, n\},$ let $S_F=S[x_j^{-1} ~|~ j \in F].$ Define $$\Delta_{\bf a} (I)= \{ F \setminus G_{\bf a} ~|~ G_{\bf a} \subseteq F, x^{\bf a} \notin IS_F \}.$$ \begin{theorem}[\protect{Takayama's Formula \cite{Ta}}] \label{formula.takayama} Let $I \subseteq S$ be a monomial ideal, and let $\Delta(I)$ denote the simplicial complex corresponding to $\sqrt{I}$. Then $$ \dim_K H_{{\frak m}}^i (S/I)_{\bf a} = \begin{cases} \dim_K\big( \widetilde{H}_{i-|G_{\bf a}|-1} (\Delta_{\bf a} (I);K)\big) & \textrm{ if } G_{\bf a} \in \Delta(I),\\ 0 & \textrm{ otherwise. } \end{cases} $$ \end{theorem} The formula stated in \cite{MT} and here is different from the original formula introduced in \cite{Ta}.
The original formula has additional conditions on $\bf a$ for $H_{{\frak m}}^i (S/I)_{\bf a} =0.$ It follows from the proof in \cite{Ta} that those conditions can be omitted. Due to the equality between edge ideals of graphs and Stanley-Reisner ideals of their independence complexes, we can employ Hochster's and Takayama's formulas in the study of the regularity of edge ideals. In order to deal with arbitrary monomial ideals, polarization has proved to be a powerful process for obtaining a squarefree monomial ideal from a given monomial ideal. For details of polarization we refer the reader to \cite{HH}. \begin{definition}\label{pol_def} Let $M=x_1^{a_1}\dots x_n^{a_n}$ be a monomial in $S=K[x_1,\dots,x_n]$. Then we define the squarefree monomial $P(M)$ ({\it polarization} of $M$) as $$P(M)=x_{11}\dots x_{1a_1}x_{21}\dots x_{2a_2}\dots x_{n1}\dots x_{na_n}$$ in the polynomial ring $R=K[x_{ij} \mid 1\leq i\leq n,1\leq j\leq a_i]$. If $I=(M_1,\dots,M_q)$ is an ideal in $S$, then the polarization of $I$, denoted by $I^{\pol}$, is defined as $I^{\pol}=(P(M_1),\dots,P(M_q))$. \end{definition} The regularity is preserved under polarization. \begin{corollary} \cite[Corollary 1.6.3.d]{HH} Let $I \subset S$ be a monomial ideal and $I^{\pol}\subset R$ be its polarization. Then $\reg (S/I) =\reg (R/I^{\pol}).$ \end{corollary} \subsection{Inductive techniques} We start this subsection with an easy yet essential consequence of Hochster's formula that links the regularity of a graph with the regularity of its induced subgraphs. \begin{lemma}\label{lem.induced} Let $G$ be a simple graph. Then $\reg H \leq \reg G$ for any induced subgraph $H$ of $G.$ \end{lemma} For any homogeneous ideal $I \subseteq S$ and any homogeneous element $m$ of degree $d,$ the following short exact sequences are standard tools in commutative algebra and have also proved very useful in computing the regularity of edge ideals and their powers.
\begin{eqnarray}\label{eq.colonadd} 0 \longrightarrow \frac{S}{I:m} (-d) \xrightarrow{\cdot m} \frac{S}{I} \longrightarrow \frac{S}{I+(m)} \longrightarrow 0 \end{eqnarray} Let $I$ and $J$ be ideals in $S.$ Another useful exact sequence is: \begin{eqnarray}\label{eq.intersectadd} 0 \longrightarrow \frac{S}{I \cap J} \rightarrow \frac{S}{I} \oplus \frac{S}{J} \longrightarrow \frac{S}{I+J} \longrightarrow 0 \end{eqnarray} We can see how the regularity changes in a short exact sequence by taking the associated long exact sequence of local cohomology modules. In particular, we have the following inductive bound (the second statement is the content of \cite[Lemma 2.10]{DHS}). \begin{lemma}\label{lem.regineq} Let $I \subseteq S$ be a monomial ideal, and let $m$ be a monomial of degree $d$. Then \[ \reg I \leq \max\{ \reg (I : m) + d, \reg (I,m)\}. \] Furthermore, if $x$ is a variable appearing in $I,$ then $$\reg I \in \{\reg (I:x) + 1, \reg (I,x)\}.$$ \end{lemma} In the statement of Lemma \ref{lem.regineq}, the expression ``$x$ is a variable appearing in $I$'' means that some minimal generator of $I$ is divisible by $x.$ Note that if $x$ is a variable not appearing in $I,$ then $\reg (I,x)=\reg I.$ In the case of edge ideals, isolated vertices can be dropped when computing the regularity. Thus $\reg (I(G):x) = \reg I(G \setminus N_G[x])$ and $\reg (I(G),x) = \reg (I(G \setminus x))$ for any vertex $x$ in $G.$ Then Lemma \ref{lem.regineq} can be restated in terms of edge ideals. \begin{lemma}\label{lem.deletecontract} Let $x$ be a vertex in $G.$ Then $$\reg G \in \{ \reg (G \setminus N_G[x]) +1, \reg (G \setminus x)\}.$$ \end{lemma} Kalai and Meshulam \cite{KM} proved the following result for squarefree monomial ideals and Herzog \cite{Herzog} generalized it to any monomial ideal.
\begin{theorem} \label{thm.kalaimeshulam} Let $I_1, \dots, I_s$ be squarefree monomial ideals in $S.$ Then $$ \reg \Big( S \big/ \sum_{i=1}^s I_i \Big) \leq \sum_{i=1}^s \reg S /I_i.$$ \end{theorem} In the case of edge ideals, we have the following bound. \begin{corollary}\label{cor.kalaimeshulam} Let $G$ be a simple graph. If $G_1, \dots, G_s$ are subgraphs of $G$ such that $E(G)= \bigcup_{i=1}^s E(G_i)$, then $$ \reg \Big( S / I(G) \Big) \leq \sum_{i=1}^s \reg S /I(G_i).$$ \end{corollary} If $G$ is the disjoint union of graphs $G_1, \dots, G_s$, then the above inequality becomes an equality; this follows from the K\"unneth formula in algebraic topology. \begin{corollary}\label{cor.disjoint} Let $G$ be a simple graph. If $G$ can be written as a disjoint union of graphs $G_1, \dots, G_s$, then $$ \reg \Big( S / I(G) \Big) = \sum_{i=1}^s \reg S /I(G_i).$$ \end{corollary} In the study of powers of edge ideals, Banerjee developed the notion of even-connection and gave an important inductive inequality in \cite{Ba}. \begin{theorem} \label{thm.Banerjee} For any simple graph $G$ and any $s \geq 1,$ let the set of minimal monomial generators of $I(G)^s$ be $\{m_1, \dots, m_k\}.$ Then $$ \reg I(G)^{s+1} \leq \max \{\reg (I(G)^{s+1}: m_l)+2s ~ (1 \leq l \leq k), ~ \reg I(G)^s \}.$$ \par In particular, if for all $s \geq 1$ and for all minimal monomial generators $m$ of $I(G)^s,$ $\textrm{reg }(I(G)^{s+1}: m) \leq 2$ and $\reg I(G) \leq 4,$ then $\reg I(G)^{s+1}= 2s+2$ for all $s \geq 1;$ as a consequence $I(G)^{s+1}$ has a linear minimal free resolution. \end{theorem} Working with the above inequality requires a good understanding of the ideal $(I(G)^{s+1}: m)$ and its generators when $m$ is a minimal monomial generator of $I(G)^s.$ The notion of even-connection is the key to attaining this goal. We recall the definition of even-connectedness and its important properties from \cite{Ba}. \begin{definition}\label{even_connected} Let $G=(V,E)$ be a graph.
Two vertices $u$ and $v$ ($u$ may be the same as $v$) are said to be even-connected with respect to an $s$-fold product $e_1\cdots e_s$, where the $e_i$'s are edges of $G$, not necessarily distinct, if there is a path $p_0p_1\cdots p_{2k+1}$, $k\geq 1$, in $G$ such that: \begin{enumerate} \item $p_0=u,p_{2k+1}=v.$ \item For all $0 \leq l \leq k-1,$ $p_{2l+1}p_{2l+2}=e_i$ for some $i$. \item For all $i$, $ \big|\{l \geq 0 ~\mid~ p_{2l+1}p_{2l+2}=e_i \}\big| \leq \big| \{j ~\mid~ e_j=e_i\} \big|$. \item For all $0 \leq r \leq 2k$, $p_rp_{r+1}$ is an edge in $G$. \end{enumerate} \end{definition} Fortunately, it turns out that $(I(G)^{s+1} : m)$ is generated by monomials of degree 2. \begin{theorem}\label{even_connec_equivalent}\cite[Theorem 6.1 and Theorem 6.7]{Ba} Let $G$ be a graph with edge ideal $I = I(G)$, and let $s \geq 1$ be an integer. Let $m$ be a minimal generator of $I^s$. Then $(I^{s+1} : m)$ is minimally generated by monomials of degree 2, and $uv$ ($u$ and $v$ may be the same) is a minimal generator of $(I^{s+1} : m )$ if and only if either $\{u, v\} \in E(G) $ or $u$ and $v$ are even-connected with respect to $m$. \end{theorem} After polarization, the ideal $(I(G)^{s+1} : m)$ can be viewed as the edge ideal of a graph that is obtained from $G$ by adding the even-connected edges with respect to $m.$ Some of the combinatorial properties of this ideal in relation to $G$ are studied in \cite{JNS}. \section{Regularity of edge ideals} \label{chapter4} Computing and/or bounding the regularity of edge ideals is the foundation for the study of the regularity of powers of edge ideals. In this section, we identify the combinatorial structures that are related to the regularity of edge ideals. First, we collect the general upper and lower bounds given for the regularity of edge ideals; then we present the list of known classes of graphs for which the regularity has been computed explicitly.
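The two combinatorial invariants appearing in the bounds of this section --- the induced matching number $\nu(G)$ and the minimum size $\beta(G)$ of a maximal matching --- can be computed by brute force for small graphs. The following Python sketch is for illustration only (exponential time; the function names are our own):

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching of G (each as a set of frozenset edges)."""
    E = [frozenset(e) for e in edges]
    for r in range(len(E) + 1):
        for sub in combinations(E, r):
            if len(set().union(*sub)) == 2 * r:  # edges pairwise disjoint
                yield set(sub)

def induced_matching_number(edges):
    """nu(G): the largest matching M such that the vertices of M
    span no edge of G outside M (M is an induced subgraph)."""
    E = {frozenset(e) for e in edges}
    best = 0
    for m in matchings(edges):
        verts = set().union(*m)
        if all(e in m for e in E if e <= verts):
            best = max(best, len(m))
    return best

def min_maximal_matching(edges):
    """beta(G): the smallest matching meeting every edge of G
    (equivalently, a maximal matching of minimum size)."""
    E = {frozenset(e) for e in edges}
    return min(len(m) for m in matchings(edges)
               if all(e & set().union(*m) for e in E))

C5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(induced_matching_number(C5), min_maximal_matching(C5))  # prints: 1 2
```

For the $5$-cycle this gives $\nu(C_5)=1$ and $\beta(C_5)=2$, so the bounds below confine $\reg I(C_5)$ to $\{2,3\}$; in fact $\reg I(C_5)=3$.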
Note that the regularity of an edge ideal is bounded below by 2, the generating degree of an edge ideal. Thus, identifying the combinatorial structures of a graph with regularity 2 can be considered the base case for the results in this section. The following combinatorial characterization of such graphs is nowadays often referred to as \emph{Fr\"oberg's characterization}. It was, in fact, given first in topological language by Wegner \cite{Wegner} and later, independently, by Lyubeznik \cite{L} and Fr\"oberg \cite{Fr} in the language of monomial ideals. \begin{theorem}[\protect{\cite[Theorem 1]{Fr}}] \label{thm.regtwo} Let $G$ be a simple graph. Then $\reg I(G) = 2$ if and only if $G$ is a co-chordal graph. \end{theorem} \subsection{Lower and upper bounds} One of the graph-theoretical invariants that can be related to the regularity is the induced matching number. The first result revealing this relation is due to Katzman; it provides a general lower bound on the regularity of edge ideals. \begin{theorem}[\protect{\cite[Lemma 2.2]{Katzman}}] \label{thm.reggenerallower} Let $G$ be a simple graph and $\nu(G)$ be the maximum size of an induced matching in $G.$ Then $$\reg I(G) \ge \nu(G)+1.$$ \end{theorem} \begin{sketch} Let $\{e_1, \dots, e_r\}$ be an induced matching of maximal size in $G.$ Suppose $H$ is the induced subgraph of $G$ with $E(H)=\{e_1, \dots, e_r\}.$ Note that all the edges in $H$ are disjoint. Thus $\reg (H) =r+1$ by Corollary \ref{cor.disjoint}. By Lemma \ref{lem.induced}, $\reg (G) \ge \nu(G)+1.$ \end{sketch} Another graph-theoretical invariant of interest is the matching number, which actually provides a general upper bound for any graph. \begin{theorem}[\protect{\cite[Theorem 6.7]{HVT2008}, \cite[Theorem 11]{Russ}}] \label{thm.regupper} Let $G$ be a simple graph. Let $\beta(G)$ be the minimum size of a maximal matching in $G.$ Then $$\reg I(G) \le \beta(G)+1.$$ \end{theorem} \begin{example} Let $G$ be the graph given in Figure \ref{fig:maxmatch}.
It is clear that $\nu(G)=\beta(G)=3.$ Then $\reg G=4$ by Theorem \ref{thm.reggenerallower} and Theorem \ref{thm.regupper}. In general, if $G$ is a disjoint union of edges, then the above bounds coincide and both become equalities. \begin{figure}[h] \includegraphics[width=0.26\linewidth]{maxmatch.pdf} \caption{A graph with disjoint edges} \label{fig:maxmatch} \end{figure} \end{example} Comparing the existing lower and upper bounds yields interesting classifications, particularly when these bounds coincide. For example, Cameron and Walker \cite[Theorem 1]{CW} gave the first classification of graphs $G$ with $\nu(G)=\beta(G).$ Then Hibi et al.\ \cite{HHKO} modified their result slightly and gave a full generalization with some corrections. \begin{example} Let $G$ be the graph given in Figure \ref{fig:cochordal}. One can easily verify that $ \beta(G)=2.$ However, $\reg G= 2$ by Fr\"oberg's characterization. In general, there are examples of graphs where $\beta (G)+1$ is arbitrarily large compared to $\reg G.$ \begin{figure}[h] \includegraphics[width=0.5\linewidth]{chordal.pdf} \caption{A co-chordal graph} \label{fig:cochordal} \end{figure} \end{example} The above upper bound was strengthened by Woodroofe, who made use of co-chordal subgraph covers of a graph. Recall that a graph is co-chordal if its complement is chordal and $\text{co-chord}(G)$, the \emph{co-chordal number} \index{co-chordal number} of $G$, denotes the least number of co-chordal subgraphs of $G$ whose union is $G$. The graph in Figure \ref{fig:cochordal} is an example of a co-chordal graph.
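Co-chordality is also easy to test mechanically: a graph is chordal if and only if one can repeatedly delete simplicial vertices (vertices whose neighborhood is a clique) until no vertex remains. A hedged Python sketch (function names are our own) applies this test to the complement and thus, by Fr\"oberg's characterization (Theorem \ref{thm.regtwo}), decides whether $\reg I(G)=2$:

```python
def is_chordal(vertices, edges):
    """Greedy perfect-elimination test: a graph is chordal iff we can
    repeatedly delete a simplicial vertex until the graph is empty."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(vertices)
    while remaining:
        for v in list(remaining):
            nbrs = adj[v] & remaining
            # simplicial: the (remaining) neighborhood of v is a clique
            if all(b in adj[a] for a in nbrs for b in nbrs if a != b):
                remaining.remove(v)
                break
        else:
            return False  # no simplicial vertex exists
    return True

def is_cochordal(vertices, edges):
    """G is co-chordal iff its complement graph is chordal."""
    E = {frozenset(e) for e in edges}
    complement = [(u, v) for i, u in enumerate(vertices)
                  for v in vertices[i + 1:] if frozenset((u, v)) not in E]
    return is_chordal(vertices, complement)

C4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
C5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(is_cochordal([1, 2, 3, 4], C4))     # True:  reg I(C4) = 2
print(is_cochordal([1, 2, 3, 4, 5], C5))  # False: reg I(C5) > 2
```

The complement of the $4$-cycle is two disjoint edges, hence chordal, so $C_4$ is co-chordal; the complement of $C_5$ is again a $5$-cycle, so $\reg I(C_5) > 2$.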
Let $\{e_1, \dots, e_r\}$ be a maximal matching of minimal size in $G.$ For each $i,$ let $G_i$ be the subgraph of $G$ whose edge set consists of $e_i$ together with the edges of $G$ adjacent to $e_i.$ Note that $G_1, \dots, G_r$ form a co-chordal subgraph cover of $G$; thus $\text{co-chord}(G) \le \beta (G)$, and the bound in Theorem \ref{thm.reggeneralupper} improves the bound in Theorem \ref{thm.regupper}. \begin{theorem}[\protect{\cite[Lemma 1]{Russ}}] \label{thm.reggeneralupper} \index{regularity} \index{co-chordal number} Let $G$ be a simple graph. Then $$\reg I(G) \le \text{co-chord}(G)+1.$$ \end{theorem} \begin{sketch} Let $\text{co-chord}(G)=r$ and let $G_1, \dots, G_r$ be a co-chordal cover of $G$. It follows from Fr\"oberg's characterization of regularity 2 graphs, Theorem \ref{thm.regtwo}, that $\reg (G_i) =2$ for each $i\in \{1, \dots, r\}.$ Then the result follows immediately from Corollary \ref{cor.kalaimeshulam}. \end{sketch} Gap-free graphs have also been of interest in the investigation of regularity. These graphs arise naturally since their induced matching number is $1$, the smallest possible value. However, computing or bounding the regularity of this class of graphs is not so easy. Furthermore, there are very few examples of how large the regularity of such edge ideals can be; see \cite{NP} by Nevo and Peeva for an example of a gap-free graph $G$ in 12 variables with $\reg G= 4.$ Putting an additional condition on top of gap-freeness may yield an upper bound, and this is indeed achieved in \cite{Nevo}. \begin{theorem}[\protect{\cite[Theorem 1.2]{Nevo}, \cite[Proposition 19]{DHS}}] \label{thm.gapclawfree} If $G$ is gap-free and claw-free, then $\reg I(G) \le 3.$ \end{theorem} \begin{sketch} Let $x$ be a vertex in $G$ with the maximum possible degree.
By Lemma \ref{lem.deletecontract}, we have $\reg (G) \leq \max \{ \reg (G \setminus N_G[x]) +1, \reg (G \setminus x)\}.$ Note that the induced subgraphs $G \setminus N_G[x]$ and $G\setminus x$ of $G$ are both gap-free and claw-free. It follows from induction on the number of vertices that $\reg (G\setminus x) \le 3.$ Thus it suffices to show that $\reg ( G\setminus N_G[x]) \le 2.$ By Fr\"oberg's characterization, it is enough to prove that $( G\setminus N_G[x])^c$ is chordal, and this is proved by contradiction: \begin{enumerate} \item Suppose $(G\setminus N_G[x])^c$ has an induced cycle on $w_1,w_2, \ldots, w_n$ (in order) of length at least 4. \item Every vertex of $G$ is within distance 2 of $x$ in $G$ by \cite[Proposition 3.3]{DHS}; hence $\{x,y\}$ and $\{y,w_1\}$ are edges in $G$ for some vertex $y.$ \item Note that either $\{y,w_2\}$ or $ \{y,w_n\}$ must be an edge in $G.$ Otherwise the edges $ \{w_2,w_n\}$ and $\{x,y\}$ form a gap in $G.$ \item Without loss of generality, suppose $\{y,w_2\}$ is an edge. Then the induced subgraph on $\{x,y,w_1,w_2\}$ is a claw in $G,$ a contradiction. \end{enumerate} \end{sketch} \begin{example} There are examples of gap-free and claw-free graphs where both values in Theorem \ref{thm.gapclawfree} can be attained for the regularity. For example, if $G^c$ is a tree then $\reg G=2$, and if $G^c$ is $C_n$ for $n\geq 5$ then $\reg G= 3.$ \end{example} This result was generalized to gap-free and $n$-claw-free graphs by Banerjee in \cite{Ba}. The proof of the general case follows similarly and uses induction on $n.$ \begin{theorem}[\protect{\cite[Theorem 3.5]{Ba}}] \label{thm.gapnclawfree} If $G$ is gap-free and $n$-claw-free, then $\reg I(G) \le n.$ \end{theorem} Another special class of graphs is that of planar graphs. This class of graphs emerges frequently in applications since such graphs can be drawn in the plane without edges crossing.
It is shown in \cite{Russ} that even though the regularity of a planar graph may be arbitrarily large, the regularity of its complement is bounded above by 4. \begin{theorem}[\protect{\cite[Theorem 3.4]{Russ}}] \label{thm.regplanarupper} If $G$ is a planar graph, then $\reg I(G^c) \le 4.$ \end{theorem} \subsection{Exact values} Computing the regularity for special classes of graphs has been an attractive research topic in recent years. In this direction, characterizing edge ideals with a given regularity has been of interest as well. However, very little is known in the latter case. A combinatorial characterization of edge ideals with regularity 3 is still not known. However, a partial result for bipartite graphs was achieved by Fern\'andez-Ramos and Gimenez in \cite{FG}. Recall that a graph $G$ is bipartite if its vertex set $V$ can be partitioned into disjoint subsets $V= X \cup Y$ such that every edge $\{x,y\} $ of $G$ satisfies $x \in X$ and $y \in Y$ or vice versa. The \textit{bipartite complement} of a bipartite graph $G,$ denoted by $G^{bc},$ is the bipartite graph over the same partition of vertices in which $ \{x,y\} \in E(G^{bc})$ if and only if $\{x,y\} \notin E(G)$ for $x \in X, y \in Y.$ \begin{theorem}[\protect{\cite[Theorem 3.1]{FG}}] \label{thm.regbipartitethree} Let $G$ be a connected bipartite graph. Then $\reg I(G)=3$ if and only if $G^c$ has no induced cycles of length $\ge 4$ and $G^{bc}$ has no induced cycles of length $\ge 6.$ \end{theorem} In most of the known cases, it turns out that the regularity can be expressed in terms of the induced matching number. We first recall the results in which the regularity is one more than the induced matching number.
\begin{theorem}\label{thm.inducedmatch1} Let $G$ be a simple graph and $\nu(G)$ be the induced matching number of $G.$ Then $$\reg I(G) = \nu(G)+1$$ in the following cases: \begin{enumerate}[label=(\alph*)] \item $G$ is a chordal graph (see \cite{HVT2008}); \item $G$ is a weakly chordal graph (see \cite{Russ}); \item $G$ is a sequentially Cohen-Macaulay bipartite graph (see \cite{Van}); \item $G$ is an unmixed bipartite graph (see \cite{Ku}); \item $G$ is a very well-covered graph (see \cite{MMCRTY}); \item $G$ is a vertex decomposable graph with no closed circuit of length 5 (see \cite{KMo}); \item $G$ is a $(C_4,C_5)$-free vertex decomposable graph (see \cite{BC}); \item $G$ is a unicyclic graph with cycle $C_n$ when \begin{enumerate}[label=(\roman*)] \item $ n \equiv 0, 1 \pmod{3}$ (see \cite{ABS, BHT, J}) or \item $ \nu(G \setminus \Gamma(G)) < \nu (G)$, where $\Gamma(G)$ is the collection of all neighbors of the roots of the rooted trees attached to $C_n $ (see \cite{ABS}). \end{enumerate} \end{enumerate} \end{theorem} \begin{figure}[h] \includegraphics[width=0.33\linewidth]{vertexdec.pdf} \caption{A vertex decomposable and a (weakly) chordal graph} \label{fig:vertexdec} \end{figure} \begin{remark} Chordal graphs are vertex decomposable, and vertex decomposable graphs are sequentially Cohen-Macaulay (see \cite{FVT, Russ2}). Sequentially Cohen-Macaulay bipartite graphs are vertex decomposable (see \cite{Van}). Also, if $G$ is a very well-covered graph then $G$ is unmixed, and if $G$ is chordal, then it is weakly chordal. Thus, $(b)$ implies $(a)$, and $(e)$ implies $(d)$. Note that bipartite graphs have no odd cycles. Hence, $(f)$ implies $(c)$. \end{remark} \begin{example} Let $G$ be the graph given in Figure \ref{fig:unicyclic1}. Then $\Gamma(G)=\{x_6, x_7,x_8\}$ and $\nu(G \setminus \Gamma(G))=2 < \nu(G)=3.$ Furthermore, this graph does not belong to any of the classes described in $(a)$--$(g)$.
\begin{figure}[h] \includegraphics[width=0.6\linewidth]{unicyclic1.pdf} \caption{A graph satisfying (h) (ii) in Theorem \ref{thm.inducedmatch1}} \label{fig:unicyclic1} \end{figure} \end{example} It is of interest to find different expressions for the regularity of edge ideals. The next result collects all known cases in which the regularity has a different expression from that of the above classes, still in terms of the induced matching number. \begin{theorem} \label{thm.inducedmatch2} Let $G$ be a simple graph and $\nu(G)$ be the induced matching number of $G.$ Then $$\reg I(G) = \nu(G)+2$$ in the following cases: \begin{enumerate}[label=(\alph*)] \item $G$ is an $n$-cycle $C_n$ with $ n \equiv 2 \pmod{3}$ (see \cite{BHT, J}); \item $G$ is a unicyclic graph with cycle $C_n$ when $ n \equiv 2 \pmod{3}$ and $ \nu(G \setminus \Gamma(G)) = \nu (G)$, where $\Gamma(G)$ is the collection of all neighbors of the roots of the rooted trees attached to $C_n $ (see \cite{ABS}). \end{enumerate} \end{theorem} \section{Regularity of powers of edge ideals} \label{chapter5} The regularity of powers of an edge ideal is considerably harder to compute than that of the edge ideal itself. However, in many cases, known bounds and exact formulas for the regularity of $I(G)$ inspire new bounds and exact formulas for the regularity of $I(G)^q$, for $q \ge 1$. This section is divided into two parts: in the first subsection we list a number of lower and upper bounds, and in the second subsection we give the exact values of $\reg I(G)^q$, for $q \ge 1$, for special classes of graphs. \subsection{Lower and upper bounds} Just like in the study of the regularity of edge ideals, the induced matching number of a graph is intimately connected to the regularity of powers of its edge ideal. The following general lower bound generalizes that of Theorem \ref{thm.reggenerallower} for an edge ideal to its powers.
\begin{theorem}[\protect{\cite[Theorem 4.5]{BHT}}] \label{thm.generallower} Let $G$ be any graph and let $I = I(G)$ be its edge ideal. Then for all $q \ge 1$, we have $$\reg I^q \ge 2q+\nu(G)-1.$$ \end{theorem} \begin{sketch} The proof is based on the following observations: \begin{enumerate} \item If $H$ is an induced subgraph of $G$ then for any $i, j \in {\mathbb Z}$, we have \begin{align} \beta_{i,j}(I(H)^q) \le \beta_{i,j}(I(G)^q). \label{eq.500} \end{align} In particular, this gives $\reg I(H)^q \le \reg I(G)^q$. \item If $H$ is the induced subgraph of $G$ consisting of a maximal induced matching of $\nu(G)$ edges then for any $q \ge 1$, we have \begin{align} \reg I(H)^q = 2q+\nu(G)-1. \label{eq.501} \end{align} \end{enumerate} To establish the first observation, recall that the \emph{upper-Koszul simplicial complex} $K^\alpha(I)$ associated to a monomial ideal $I \subseteq S = K[x_1, \dots, x_n]$ at degree $\alpha \in {\mathbb Z}^n$ consists of the faces $$\Big\{W \subseteq \{1, \dots, n\} ~\Big|~ \dfrac{x^\alpha}{\prod_{w \in W} x_w} \in I\Big\},$$ and a variation of Hochster's formula (Theorem \ref{formula.hochster}) gives $$\beta_{i,\alpha}(I) = \dim_K \widetilde{H}_{i-1}(K^\alpha(I);K) \text{ for all } i \ge 0.$$ The inequality (\ref{eq.500}) then follows by noting that for $\alpha \in {\mathbb Z}^n$ with $\supp(\alpha) \subseteq V_H$, $K^\alpha(I(H)^q) = K^\alpha(I(G)^q).$ The second observation is proved by induction, noting that in this case $I(H)$ is a complete intersection. \end{sketch} \begin{figure}[h] \includegraphics[width=0.4\linewidth]{lowerbound.pdf} \caption{A unicyclic graph} \label{fig:lowerbound} \end{figure} \begin{example} Though there are many classes of graphs for which the lower bound given in Theorem \ref{thm.generallower} is attained, there are also classes of graphs where the asymptotic linear function $\reg I(G)^q$ is strictly bigger than $2q + \nu(G)-1$ for all $q \gg 0$. Let $G$ be the graph depicted in Figure \ref{fig:lowerbound}.
Then it is easy to see that $\nu(G)=2$, whereas $\reg I(G)^q= 2q+2$ for $q\ge 1.$ \end{example} A similar general upper bound generalizing that of Theorem \ref{thm.reggeneralupper} is, unfortunately, not available. It has been established by Jayanthan, Narayanan and Selvaraja \cite{JNS} only for a special class of graphs --- bipartite graphs. \begin{theorem}[\protect{\cite[Theorem 1.1]{JNS}}] \label{thm.JNS} Let $G$ be a bipartite graph and let $I = I(G)$ be its edge ideal. Then for all $q \ge 1$, we have $$\reg I^q \le 2q+\text{co-chord}(G)-1.$$ \end{theorem} \begin{sketch} The statement is proved by induction utilizing Theorem \ref{thm.Banerjee}. The crucial step in the proof is to show that for any $q \ge 1$ and any collection of edges $e_1, \dots, e_q$ of $G$ (not necessarily distinct), we have \begin{align} \reg (I^{q+1}:e_1\dots e_q) \le \text{co-chord}(G)+1. \label{eq.503} \end{align} Observe that when $G$ is a bipartite graph, by \cite[Lemma 3.7]{AB}, $$I^{q+1}:e_1 \dots e_q = (((I^2:e_1)^2: \dots)^2 : e_q),$$ and so (\ref{eq.503}) itself can be obtained by induction. To this end, let $G'$ be the graph associated to the polarization of $I^2:e$ for an edge $e$ in $G$ (which is generated in degree 2 by Theorem \ref{even_connec_equivalent}). The proof is completed by establishing the following facts: \begin{enumerate} \item $\reg I(G') \le \text{co-chord}(G')+1.$ This inequality was proved to hold for any graph in \cite{Russ}. \item $\text{co-chord}(G') \le \text{co-chord}(G).$ This is a combinatorial statement, which can be shown by analyzing how $G'$ is constructed from $G$. \end{enumerate} \end{sketch} \begin{remark} Let $G$ be a 4-cycle. We can easily verify that $\nu(G)=1$ and $\text{co-chord}(G)=1.$ Since the lower bound in Theorem \ref{thm.generallower} coincides with the upper bound in Theorem \ref{thm.JNS}, we have $\reg I(G)^q=2q= 2q+\nu(G)-1=2q+ \text{co-chord}(G)-1$ for all $q \ge 1$.
Classes of graphs for which the two upper and lower bounds agree were discussed in \cite[Corollary 5.1]{JNS}. On the other hand, the upper bound given in Theorem \ref{thm.JNS} can be strict. For example, if $G$ is $C_8,$ then $\nu(G)=2$ and $\text{co-chord}(G)=3.$ By \cite[Theorem 5.2]{BHT}, it is known that $\reg I(G)^q=2q+1 < 2q + \text{co-chord}(G)-1$ for all $q \ge 1$. \end{remark} For a special class of graphs --- gap-free graphs --- there is another upper bound that was proved by Banerjee \cite{Ba}. \begin{theorem}[\protect{\cite[Theorem 6.19]{Ba}}] \label{thm.Ba_gapfree} Let $G$ be a gap-free graph and let $I = I(G)$ be its edge ideal. Then for all $q \ge 2$, we have $$\reg I^q \le 2q + \reg I - 1.$$ \end{theorem} \begin{remark} The bound in Theorem \ref{thm.Ba_gapfree} is slightly weaker than the conjectural bound of Conjecture \ref{conj.BBH}. \end{remark} \subsection{Exact values} In most of the known cases where the asymptotic linear function $\reg I(G)^q$ can be computed explicitly, the lower bound in Theorem \ref{thm.generallower} turns out to give the exact formula. In this subsection, we describe those instances. In \cite[Theorem 3.2]{HHZ}, the authors showed that $I(G)$ has a linear resolution if and only if $I(G)^q$ has a linear resolution for all $q\geq 1.$ Combining their result with Fr\"oberg's characterization yields the exact value of the regularity of powers of edge ideals of co-chordal graphs. \begin{theorem} \label{thm.linear} Let $G$ be a co-chordal graph and $I = I(G)$ be its edge ideal. Then for all $q \ge 1$, we have $$\reg I^q = 2q.$$ \end{theorem} As is often the case when dealing with graphs, one of the first classes to consider is that of trees and forests. \begin{theorem}[\protect{\cite[Theorem 4.7]{BHT}}] \label{thm.forest} Let $G$ be a forest and let $I = I(G)$ be its edge ideal.
Then for all $q \ge 1$, we have $$\reg I^q = 2q+\nu(G) -1.$$ \end{theorem} \begin{sketch} Thanks to the general lower bound in Theorem \ref{thm.generallower}, it remains to establish the upper bound $$\reg I^q \le 2q+\nu(G)-1.$$ In fact, a more general inequality can be proved for induced subgraphs of $G.$ Let $H$ and $K$ be induced subgraphs of $G$ such that $$E_H \cup E_K = E_G \text{ and } E_H \cap E_K = \emptyset.$$ Then the required upper bound follows from the following inequality (by setting $H$ to be the empty graph): \begin{align} \reg (I(H) + I(K)^q) \le 2q + \nu(G)-1. \label{eq.504} \end{align} The inequality (\ref{eq.504}) is proved by induction on $q+|V_K|$, making use of the short exact sequence arising from taking quotient and colon with respect to a leaf $xy$ in $K$. Note that, by \cite[Lemma 2.10]{MO}, we have $$(I(H)+I(K)^q) : xy = (I(H):xy) + (I(K)^q:xy) = (I(H):xy) + I(K)^{q-1}.$$ \end{sketch} The next natural class of graphs to consider is that of cycles and graphs containing exactly one cycle (i.e., \emph{unicyclic} graphs). \begin{theorem}[\protect{\cite[Theorem 5.2]{BHT}}] \label{thm.cycle} Let $C_n$ denote the $n$-cycle and let $I = I(C_n)$ be its edge ideal. Let $\nu = \lfloor \frac{n}{3} \rfloor$ be the induced matching number of $C_n$. Then $$\reg I = \left\{ \begin{array}{lcll} \nu+1 & \text{if} & n \equiv 0,1 & (\text{mod } 3) \\ \nu+2 & \text{if} & n \equiv 2 & (\text{mod } 3) \end{array} \right.$$ and for any $q \ge 2$, we have $$\reg I^q = 2q+\nu-1.$$ \end{theorem} \begin{sketch} The first statement was already proved in \cite{J}. To prove the second statement, again thanks to the general lower bound of Theorem \ref{thm.generallower}, it remains to establish the upper bound \begin{align} \reg I^q \le 2q+\nu-1.
\label{eq.505} \end{align} The inductive method of Theorem \ref{thm.Banerjee} once again is invoked, and it reduces the problem to showing that for any collection of edges $e_1, \dots, e_q$ of $G$ (not necessarily distinct), we have $$\reg (I^{q+1} : e_1 \dots e_q) \le \nu+1.$$ To this end, let $J$ be the polarization of $I^{q+1}:e_1 \dots e_q$. It can be seen that $$J = I(H) + (x_{i_1}y_{i_1}, \dots, x_{i_t}y_{i_t}),$$ where $H$ is a graph over the vertices $\{x_1, \dots, x_n\}$ and $x_{i_1}^2, \dots, x_{i_t}^2$ are the non-squarefree generators of $I^{q+1}: e_1 \dots e_q$. By using standard short exact sequences, it can be shown that $$\reg(J) = \reg I(H).$$ Observe further that $H$ contains $C_n$ as a subgraph, so $H$ has a Hamiltonian cycle. Thus, the required upper bound now follows from a more general upper bound for the regularity of a graph admitting either a Hamiltonian path or a Hamiltonian cycle (which is the content of \cite[Theorems 3.1 and 3.2]{BHT}). \end{sketch} The following theorem was proved in the special case of \emph{whiskered} cycles by Moghimian, Seyed Fakhari, and Yassemi \cite{MFY}, and then in more generality for unicyclic graphs by Alilooee, Beyarslan and Selvaraja \cite{ABS}. \begin{theorem}[\protect{\cite[Theorem 1.2]{ABS}, \cite[Proposition 1.1]{MFY}}] \label{thm.unicyclic} Let $G$ be a unicyclic graph and let $I = I(G)$ be its edge ideal. Then for all $q \ge 1$, we have $$\reg I^q = 2q + \reg I - 2.$$ \end{theorem} \begin{sketch} The proof is based on establishing the conjectural bound of Conjecture \ref{conj.BBH}.(2) \begin{align} \reg (I^q ) \le 2q+\reg I-2. 
\label{eq.506} \end{align} The inequality (\ref{eq.506}) is proved by induction on $q.$ It also requires a good understanding of $\reg(I(C_n)^q,f_1,\ldots, f_k)$ where $C_n$ is the cycle in $G$ and $f_1, \ldots, f_k$ are the edges of $G$ that are not in $C_n.$ By making use of the general lower bound of Theorem \ref{thm.generallower} and the first main result of \cite{ABS}, namely, $$\reg I = \left\{ \begin{array}{rl} \nu(G) + 2 & \text{if } n \equiv 2 (\text{mod } 3) \text{ and } \nu(G \setminus \Gamma(G)) = \nu(G) \\ \nu(G)+ 1 & \text{otherwise,} \end{array} \right.$$ equality is achieved for the latter case. (Here, $\Gamma(G)$ is a well-described subset of the vertices in $G$.) The proof is completed by showing $\reg ( I(G \setminus \Gamma(G))^q) =2q+\nu(G)$ and using inequality (\ref{eq.500}). \end{sketch} \begin{example} Let $G$ be the graph depicted in Figure \ref{fig:bicyclic}. Macaulay2 computations show that $$\reg I(G)=5, \reg I(G)^2=6, \reg I(G)^3=8, \reg I(G)^4=10, \reg I(G)^5=12.$$ Thus, the formula given in Theorem \ref{thm.unicyclic} does not necessarily hold for a graph containing more than one cycle. \begin{figure}[h] \includegraphics[width=0.45\linewidth]{bicyclic.pdf} \caption{A bicyclic graph} \label{fig:bicyclic} \end{figure} \end{example} A particularly interesting class of graphs consists of those for which $\nu(G) = 1$. It is expected that for such a graph $G$, powers of its edge ideal should asymptotically have linear resolutions. This is examined in the next two theorems under various additional conditions. \begin{theorem}[\protect{\cite[Theorem 1.2]{Ba}}] \label{thm.gapcricket} Let $G$ be a gap-free and cricket-free graph, and let $I = I(G)$ be its edge ideal. Then for all $q \ge 2$, we have $$\reg I^q = 2q;$$ i.e., $I^q$ has a linear resolution. \end{theorem} \begin{sketch} Since $I^q$ is generated in degree $2q$, it remains to show that for $q \ge 2$, $\reg I^q \le 2q$.
By induction, making use of the inductive techniques of Theorem \ref{thm.Banerjee} (and \cite[Theorem 3.4]{Ba} which proves that $\reg I \le 3$), it suffices to show that for any collection of edges $e_1, \dots, e_q$ in $G$, $\reg (I^{q+1}:e_1 \dots e_q) \le 2$. Let $J$ be the polarization of $I^{q+1} : e_1 \dots e_q$ and let $H$ be the simple graph associated to $J$. The statement is reduced to showing that $\reg J = 2$, or equivalently (by Theorem \ref{thm.regtwo}), that $H^c$ is a chordal graph. That is, $H$ does not have any induced anticycle of length at least 4. By contradiction, suppose that $H$ has an anticycle $w_1, \dots, w_s$ of length at least 4. Since an induced anticycle of length 4 gives a gap, we may assume further that $s \ge 5$. Suppose also that $e_1 = xy$. The proof follows from the following observations: \begin{enumerate} \item There must be an edge between $\{x,y\}$ and $\{w_1,w_3\}$; otherwise $xy$ and $w_1w_3$ form a gap in $G$. \item Suppose that $xw_1 \in E_G$. Then neither $w_2$ nor $w_s$ can coincide with $x$. \item Neither $w_2$ nor $w_s$ can coincide with $y$; otherwise the other vertex (among $\{w_2, w_s\}$) and $w_1$ would be even-connected, implying that $w_1w_2$ or $w_1w_s$ is an edge in $H$. \item Either $w_2$ or $w_s$ must be connected to $x$; otherwise, $xy$ and $w_2w_s$ form a gap in $G$. \item Suppose that $xw_2 \in E_G$. Now, by the same line of arguments applied to $\{w_3, w_s\}$, we deduce that either $w_3$ or $w_s$ must be a neighbor of $x$. \item Suppose that $xw_3 \in E_G$. We arrive at a contradiction: $\{w_1, w_3, x, y, w_2\}$ forms a cricket in $G$. \end{enumerate} \end{sketch} \begin{theorem}[\protect{\cite[Theorem 4.9]{Erey}}] \label{thm.gapdiamond} Let $G$ be a gap-free and diamond-free graph, and let $I = I(G)$ be its edge ideal. Then for all $q \ge 2$, we have $$\reg I^q = 2q;$$ i.e., $I^q$ has a linear resolution.
\end{theorem} \begin{sketch} The proof of this theorem is based on a good understanding of the combinatorial structures of gap-free and diamond-free graphs. Let $\omega(G)$ denote the largest size of a complete subgraph in $G$ (the \emph{clique number} of $G$). If $\omega(G) < 3$ then $G$ is cricket-free, and the assertion follows from Theorem \ref{thm.gapcricket}. If $\omega(G) \ge 4$, then it is shown that the complement of $G$ is chordal, and the conclusion follows from Theorem \ref{thm.regtwo}. Consider the case where $\omega(G) = 3$. It is shown that if, in addition, $G$ is also $C_5$-free then the complement of $G$ is either chordal or $C_6$, and the assertion again follows from known results. The essential part of the proof is then to examine the structures of $G$ when $G$ is gap-free, diamond-free and contains a $C_5$. This is where novel and interesting combinatorics happen. It is shown that, in this case, $G$ can be obtained from a list of ten specific graphs via a so-called process of \emph{multiplying vertices}. The proof is completed with a careful analysis of each of these ten graphs to show that $\reg (I(G)^{q+1} : e_1 \dots e_q) \le 2$ and to employ Banerjee's result, Theorem \ref{thm.Banerjee}. \end{sketch} Another large class of graphs for which the regularity of powers of their edge ideals can be computed explicitly is that of very well-covered graphs. The following theorem was proved by Norouzi, Seyed Fakhari, and Yassemi \cite{NFY} for very well-covered graphs with an additional condition, and then by Jayanthan and Selvaraja \cite{JS} in full generality for any very well-covered graph. \begin{theorem}[\protect{\cite[Theorem 5.3]{JS}, \cite[Theorem 3.6]{NFY}}] \label{thm.verywell} Let $G$ be a very well-covered graph and let $I = I(G)$ be its edge ideal. 
Then for all $q \ge 1$, we have $$\reg I^q = 2q + \nu(G)-1.$$ \end{theorem} \begin{sketch} The proof also uses the inductive techniques given by Theorem \ref{thm.Banerjee} and goes along the same line as that of the previous theorems in this section. The heart of the argument is to verify that for any collection of edges $e_1, \dots, e_q$ in $G$, we have $$\reg (I^{q+1} : e_1 \dots e_q) \le \nu(G)+1.$$ If $J = (I^{q+1}: e_1 \dots e_q)$ is squarefree then this is achieved by letting $H$ be the graph associated to $J$ and establishing the following facts: \begin{enumerate} \item $H$ is also very well-covered; \item $\nu(H) \le \nu(G)$; \item $\reg I(H) = \nu(H) + 1$ and $\reg I(G) = \nu(G)+1$ (this is the content of \cite[Theorem 4.12]{MMCRTY}). \end{enumerate} The arguments are much more involved for the case where $J$ is not squarefree. Let $H$ be the graph corresponding to the polarization of $J$. The proof is completed by an ingenious induction on the number of vertices added to $G$ in order to obtain $H$. \end{sketch} \section{Higher Dimension} \label{chapter6} In this section we discuss the regularity of squarefree monomial ideals generated in degree more than two. The goal, as in the case of edge ideals, is to find bounds and/or formulas for the regularity of the ideal in terms of ``its combinatorics''. As mentioned in the preliminaries, one can view these ideals both as Stanley-Reisner ideals and as edge ideals of hypergraphs. If interpreted as Stanley-Reisner ideals, via Hochster's formula one can potentially get all Betti numbers of a given ideal in terms of the combinatorics of the underlying complex. The hypergraph case, however, is far less understood compared to either the edge ideal case or the Stanley-Reisner ideal interpretation. Nevertheless, in the last decade, some results have been proven in this direction.
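The dictionary between squarefree monomial ideals and simplicial complexes is easy to experiment with on small examples. The following sketch (plain Python with an ad hoc encoding of our own; it is only an illustration, not code from the literature) lists the faces of the independence complex of the $4$-cycle, whose Stanley-Reisner ideal is the edge ideal $I(C_4)=(x_1x_2,x_2x_3,x_3x_4,x_4x_1)$:

```python
from itertools import combinations

def independence_complex(vertices, edges):
    """Faces of the independence complex: subsets of vertices containing no edge."""
    edges = [frozenset(e) for e in edges]
    return [frozenset(s)
            for k in range(len(vertices) + 1)
            for s in combinations(vertices, k)
            if not any(e <= frozenset(s) for e in edges)]

def facets(faces):
    """Maximal faces of a simplicial complex given as a list of faces."""
    return [f for f in faces if not any(f < g for g in faces)]

# C_4 with vertices 1-2-3-4-1; its edge ideal is (x1*x2, x2*x3, x3*x4, x4*x1).
faces = independence_complex([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
print(sorted(map(sorted, facets(faces))))  # [[1, 3], [2, 4]]
```

The two facets $\{1,3\}$ and $\{2,4\}$ are exactly the maximal independent sets of $C_4$, i.e., the facets of the complex whose minimal nonfaces are the edges.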
In some cases, general squarefree monomial ideals can be viewed as path ideals of simple graphs, and there has been some progress for those cases in the last few years. Since the main topic of this survey is edge ideals, we are not trying to be comprehensive in this section. Our aim is to give the reader an idea of possible approaches to generalize the results on edge ideals to higher dimensions. We split the section into three subsections, devoted to Stanley-Reisner ideals, edge ideals of hypergraphs, and path ideals, respectively. \subsection{Stanley-Reisner Ideals} As mentioned earlier, any squarefree monomial ideal can be viewed as the Stanley-Reisner ideal of some simplicial complex. This combinatorial interpretation is by far the most studied one among monomial ideals generated in higher degrees. The first result we mention is a famous theorem by Eagon and Reiner, which establishes a relation between minimal regularity and maximal depth, that is, between the regularity of a Stanley-Reisner ideal and the depth associated to a special kind of dual complex, namely, the Alexander dual. In a sense it is similar to Fr\"oberg's result: it gives a characterization of the ideals with a linear resolution. The \textit{Alexander dual} of a simplicial complex $\Delta$ is the simplicial complex whose faces are the complements of the nonfaces of $\Delta$. If $I$ is a Stanley-Reisner ideal then by $I^\vee$ we denote the Stanley-Reisner ideal of the Alexander dual of the simplicial complex of $I$. \begin{theorem}[\protect{\cite[Theorem 3]{ER}}] Let $I$ be a Stanley-Reisner ideal in $S=K[x_1,\dots,x_n]$. Then $I$ has a $q$-linear resolution if and only if $S/I^\vee$ has depth $n-q$. In particular, $I$ has a linear resolution if and only if the Alexander dual of $\Delta (I)$ is Cohen-Macaulay. \end{theorem} The next result bounds the regularity by another important algebraic invariant, the arithmetic degree.
For a squarefree monomial ideal $I$, its \textit{arithmetic degree}, denoted by $\adeg(I)$, is given by the number of facets in the corresponding simplicial complex. \begin{theorem}[\protect{\cite[Theorem 3.8]{FkT}}] Suppose $I$ is a Stanley-Reisner ideal with codimension $\geq 2$. Then, $\reg I \le \adeg(I)$. \end{theorem} \subsection{Hypergraphs} Any squarefree monomial ideal is the edge ideal of a hypergraph. The corresponding simplicial complex is the independence complex of this hypergraph, and computing it is, in general, an NP-hard problem. The problem of finding bounds for the regularity of graphs can be extended to hypergraphs. In \cite{CHHKTT} the authors provided a sufficient condition for a hypergraph to have regularity $\leq 3.$ For every vertex $x$ of a hypergraph ${\mathcal H}$, let ${\mathcal H} : x$ denote the simple hypergraph of all minimal subsets $A \subset V \setminus \{x\}$ such that $A$ or $A\cup \{x\}$ is an edge of ${\mathcal H}.$ \begin{theorem}[\protect{\cite[Theorem 6.4]{CHHKTT}}] \label{thm.hyperbound} Let ${\mathcal H}$ be a simple hypergraph such that ${\mathcal H} :x$ is a graph whose complement is chordal for all vertices $x$ of ${\mathcal H}.$ Then, $\reg {\mathcal H} \leq 3.$ \end{theorem} A hypergraph all of whose edges have the same cardinality is called \textit{uniform}. A $d$-uniform hypergraph (i.e., a uniform hypergraph whose edges are all represented by monomials of degree $d$) is called \textit{properly-connected} if for any two edges $E,E'$ sharing at least one vertex, the length of the shortest path between $E$ and $E'$ is $d- |E \cap E'|$. If the length of the shortest path between two edges is $\geq t$ then they are called $t$-disjoint (for relevant definitions see \cite{HVT2008}). \begin{example} Consider the $4$-uniform hypergraph $H$ with edge set $$\{x_1x_2x_3x_4, x_1x_2x_3x_7, x_1x_2x_6x_7, x_1x_5x_6x_7, x_1x_5x_6x_8\}.$$ There is a proper irredundant chain of length $4$ from the edge $E=x_1x_2x_3x_4$ to $E_0=x_1x_5x_6x_8$.
Furthermore, there is no shorter such chain. But these edges have a nonempty intersection, and the distance between them is $4$, which is not the same as $4 -|E \cap E_0| = 3$. Hence $H$ is not properly-connected. On the other hand, every finite simple graph is properly-connected. \end{example} A $d$-uniform properly-connected hypergraph $H$ is called \textit{triangulated} if for any subset $A$ of vertices the induced subgraph $H'$ on $A$ has a vertex $x$ such that the induced subgraph on its neighborhood $N(x) \cup \{x\}$ is complete. For a more formal definition and relevant notions see \cite{HVT2008}. \begin{example} Simple graphs that are triangulated are precisely the chordal graphs, by \cite[Theorem 5.2]{HVT2008}. \end{example} The following result by H\`a and Van Tuyl gives a lower bound for the regularity of properly-connected hypergraphs, and the bound becomes an equality if the hypergraph is triangulated. \begin{theorem}[\protect{\cite[Theorem 6.5 and 6.8]{HVT2008}}] Let $H$ be a properly-connected hypergraph. Suppose $d$ is the common cardinality of the edges in $H$. Let $c$ be the maximal number of pairwise $(d+1)$-disjoint edges in $H$. Then, $\reg H \ge (d-1)c + 1$, and the equality occurs if $H$ is triangulated. \end{theorem} A $2$-\textit{collage} for a hypergraph is a subset $C$ of the edges with the property that for every edge $E$ of the hypergraph, we can delete a vertex $v$ of $E$ so that $E \setminus \{v\}$ is contained in some edge of $C$. For uniform hypergraphs, the condition for a collection $C$ of the edges to be a $2$-collage is equivalent to requiring that for any edge $E$ not in $C$, there exists $F\in C$ such that the symmetric difference of $E$ and $F$ consists of exactly two vertices. When $H$ is a graph, it is straightforward to see that for any minimal $2$-collage, there is a maximal matching of the same or lesser cardinality.
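For small hypergraphs, the $2$-collage condition can be tested by brute force. The sketch below (our own Python illustration, not code from the cited papers) computes the minimum size of a $2$-collage of the $5$-cycle, viewed as a $2$-uniform hypergraph; the optimal collage $\{12,34\}$ found here is itself a matching, in line with the remark above:

```python
from itertools import combinations

def is_2collage(C, edges):
    """Every edge E must have a vertex v with E \\ {v} contained in some member of C."""
    return all(any(E - {v} <= F for v in E for F in C) for E in edges)

def min_2collage(edge_list):
    """Smallest 2-collage, by exhaustive search over subsets of the edges."""
    edges = [frozenset(e) for e in edge_list]
    for k in range(1, len(edges) + 1):
        for C in combinations(edges, k):
            if is_2collage(C, edges):
                return k

# C_5 with edges 12, 23, 34, 45, 51
c5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(min_2collage(c5))  # 2, witnessed e.g. by C = {12, 34}
```

For a graph, $E\setminus\{v\}$ is a single vertex, so the condition simply says that every edge has an endpoint covered by the collage; a single edge fails for $C_5$, while two opposite edges suffice.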
The following theorem bounds the regularity of uniform hypergraphs in terms of their collages. \begin{theorem}[\protect{\cite[Theorem 1.1]{HW}}] Let $H$ be a simple $d$-uniform hypergraph and let $c$ be the minimum size of a $2$-collage in $H$. Then, $\reg I(H) \leq (d-1)c+1$. \end{theorem} The next theorem is a generalization of the previous one that works for all simple hypergraphs. \begin{theorem}[\protect{\cite[Theorem 1.2]{HW}}] Let $H$ be a simple hypergraph and let $\{m_1,\ldots,m_c\}$ be a $2$-collage. Then, $\reg I(H) \leq \sum_{i=1}^c |m_i| - c + 1$. \end{theorem} The next theorem gives a formula for the regularity of edge ideals of \textit{clutters}, which are hypergraphs where no edge contains any other edge as a subset. A monomial ideal $I$ has \textit{linear quotients} if the monomials that generate $I$ can be ordered $g_1, \ldots, g_q$ such that for all $1\leq i \leq q-1$, the ideal $((g_1, \ldots, g_i):g_{i+1})$ is generated by linear forms. For further details see \cite{MV}. \begin{theorem}[\protect{\cite[Corollary 3.35]{MV}}] Let $H$ be a clutter such that $I(H)$ has linear quotients. Then, $\reg I(H) =\max\{|E|: E \in E(H)\}$. \end{theorem} Taylor resolutions, introduced in \cite{Tay}, are free resolutions of monomial ideals constructed in a natural way that makes computations easy to deal with. These are in general not minimal. The following theorem gives a formula for the regularity when Taylor resolutions are minimal. A hypergraph $H$ is called \textit{saturated} when the Taylor resolution of $I(H)$ is minimal. For further details see \cite{LM}. \begin{theorem}[\protect{\cite[Proposition 4.1]{LM}}] For a saturated hypergraph $H$, we have $\reg I(H) = |X| - |L| + 1$. Here $L$ is the minimal monomial generating set of $I(H)$ and $X$ is the set of variables that divide some minimal monomial generator.
\end{theorem} \subsection{Path Ideal} Path ideals of finite simple graphs are interesting generalizations of edge ideals; see Section \ref{sub.alg} to recall the definition of path ideals. For gap-free graphs, their regularity also tends to behave nicely. \begin{theorem}[\protect{\cite[Theorem 1.1]{Bp}}] Let $G$ be a gap-free and claw-free graph. Then for all $t \leq 6$, $I_t$ has a linear resolution. If, furthermore, $G$ is induced whiskered-$K_4$-free, then for all $t$, $I_t$ has a linear resolution. \end{theorem} Let $L_n$ denote a line of length $n$. The regularity of path ideals of lines has been computed by Alilooee and Faridi in the next theorem. \begin{theorem}[\protect{\cite[Theorem 3.2]{AlFr}}] Let $n,t,p$ and $d$ be integers such that $n \geq 2$, $2 \leq t \leq n, n=(t+1)p+d$, where $p\geq 0, 0 \leq d \leq t$. Then, $\reg I_t (L_n)$ is $p(t-1)+1$ for $d < t$ and $(p+1)(t-1) +1$ for $d=t$. \end{theorem} We end this section with a result regarding a somewhat different kind of path ideal. A tree is a graph in which there exists a unique path between every pair of distinct vertices; a rooted tree is a tree together with a fixed vertex called the root. In particular, in a rooted tree there exists a unique path from the root to any given vertex. We can also view a rooted tree as a directed graph by assigning to each edge the direction that goes ``away'' from the root. Let $\Gamma$ be a rooted tree and $I_t(\Gamma)$ be the squarefree monomial ideal generated by all ``directed paths'' of length $t-1$ in the above sense. The following theorem bounds the regularity of such ideals. In this theorem, we define $l_t(\Gamma)$ to be the number of leaves in $\Gamma$ whose level is at least $t -1$ and $p_t(\Gamma)$ to be the maximal number of pairwise disjoint paths of length $t$ in $\Gamma$ (i.e., $p_t(\Gamma)= \max \{|D|~|~ D \text{ is a set of disjoint paths of length } t \text{ in } \Gamma\}$).
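Both invariants $l_t(\Gamma)$ and $p_t(\Gamma)$ are easy to compute by brute force on small rooted trees. The sketch below is our own toy illustration; in it we take a path of length $t$ to mean one with $t$ edges (one possible reading of the convention above):

```python
from itertools import combinations

# A toy rooted tree given by parent pointers (root = 1); our own example.
parent = {2: 1, 3: 1, 4: 2, 5: 2, 6: 4}
children = {}
for v, p in parent.items():
    children.setdefault(p, []).append(v)

def level(v):
    """Distance from the root."""
    return 0 if v not in parent else 1 + level(parent[v])

leaves = [v for v in list(parent) + [1] if v not in children]

def directed_paths(t):
    """All directed paths with t edges, oriented away from the root."""
    paths = [[v] for v in list(parent) + [1]]
    for _ in range(t):
        paths = [p + [c] for p in paths for c in children.get(p[-1], [])]
    return [tuple(p) for p in paths]

def max_disjoint(paths):
    """Largest set of pairwise vertex-disjoint paths (brute force)."""
    for k in range(len(paths), 0, -1):
        for sel in combinations(paths, k):
            if all(set(a).isdisjoint(b) for a, b in combinations(sel, 2)):
                return k
    return 0

t = 3
l_t = sum(1 for v in leaves if level(v) >= t - 1)  # leaves 5 (level 2) and 6 (level 3)
p_t = max_disjoint(directed_paths(t))              # only the path 1-2-4-6
print(l_t, p_t)  # 2 1
```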
\begin{theorem}[\protect{\cite[Theorem 3.4]{BHO}}] Let $\Gamma$ be a rooted tree on $n$ vertices. Then, $\reg I_t(\Gamma) \leq (t-1)[l_t(\Gamma) + p_t(\Gamma)] +1$. \end{theorem} \section{Open problems and questions} \label{chapter7} We end the paper by listing a number of open problems and questions in this research area which we hope will be solved. Our first problem is inspired by Theorems \ref{thm.regtwo} and \ref{thm.regbipartitethree}. Since graphs with regularity 2 are classified, the next class of graphs to examine is that of regularity 3. \begin{problem} \label{prob.reg3} Characterize graphs $G$ for which $\reg I(G) = 3$. \end{problem} In various results, for example Theorems \ref{thm.gapclawfree} and \ref{thm.hyperbound}, ``local'' conditions on the regularity of $G:x$ for all vertices $x$ lead to a ``global'' statement on the regularity of $G$. We ask if similar local conditions on $G:x$, for all vertices $x$, would also lead to a statement on the asymptotic linear function $\reg I(G)^q$. \begin{question} Let $G$ be a simple graph and let $I = I(G)$ be its edge ideal. Suppose that for any vertex $x$ in $G$, we have $\reg (I:x) \le r$. Does this imply that for any $q \ge 1$, $$\reg I^q \le 2q + r-1?$$ \end{question} As noted in Lemma \ref{lem.deletecontract}, for any vertex $x$, the regularity of $I(G)$ is always equal to either the regularity of $I(G) : x$ or the regularity of $(I(G),x)$. It would be interesting to know for which vertices $x$ the equality happens one way or the other. \begin{problem} Let $G$ be a graph and let $I = I(G)$ be its edge ideal. Find conditions on a vertex $x$ of $G$ such that \begin{enumerate} \item $\reg I = \reg (I,x).$ \item $\reg I = \reg (I:x).$ \end{enumerate} \end{problem} The regularity of $I(G)$ has been computed for several special classes of graphs (see Theorem \ref{thm.inducedmatch1}). A particular class of graphs which is of interest is that of vertex decomposable graphs.
For a vertex decomposable graph $G$, the statement of Lemma \ref{lem.deletecontract} can be made slightly more precise (see \cite{HW}), namely, there exists a vertex $x$ such that $$\reg G = \max \{\reg (G \setminus N_G[x]) +1, \reg (G \setminus x)\}.$$ For such a graph $G$, it is also known that the independence complex $\Delta(G)$ is shellable and the quotient ring $S/I(G)$ is sequentially Cohen-Macaulay. \begin{problem} Let $G$ be a vertex decomposable graph. Compute $\reg I(G)$ via combinatorial invariants of $G$. \end{problem} As noted throughout Sections \ref{chapter4} and \ref{chapter5}, the induced matching number of a graph $G$ is closely related to the regularity of $I(G)$. In fact, $\nu(G) + 1$ gives a lower bound for $\reg I(G)$ and, more generally, $2q+ \nu(G)-1$ gives a lower bound for $\reg I(G)^q$ for any $q \ge 1$. Moreover, for many special classes of graphs the equality has been shown to hold. Thus, it is desirable to characterize all graphs for which the equality is attained. \begin{problem} \label{prob.nu} Characterize graphs $G$ for which the edge ideals $I = I(G)$ satisfy \begin{enumerate} \item $\reg I = \nu(G)+1$. \item $\reg I^q = 2q+\nu(G) -1 \ \forall \ q \gg 0.$ \end{enumerate} \end{problem} When $\nu(G) = 1$, the answer to Problem \ref{prob.nu}.(2) is predicted in the following open problem. \begin{problem}[Francisco-H\`a-Van Tuyl and Nevo-Peeva] \label{prob.FHVT} Suppose that $\nu(G) = 1$, i.e., $G^c$ has no induced 4-cycle, and let $I = I(G)$. \begin{enumerate} \item Prove (or disprove) that $\reg I^q = 2q$ for all $q \gg 0$. \item Prove (or disprove) that $\reg I^{q+1} = \reg I^q + 2$ for all $q \ge \reg(I)-1$. \end{enumerate} \end{problem} Note that examples exist in which $\nu(G) = 1$ and $\reg I^q \not= 2q$ for small values of $q$ (cf. \cite{NP}), so in Problem \ref{prob.FHVT} it is necessary to consider $q \gg 0$.
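The equivalence between $\nu(G) \ge 2$ and the existence of an induced $4$-cycle in $G^c$, which underlies the phrasing of Problem \ref{prob.FHVT}, can be verified exhaustively on small graphs. The following sketch (our own Python illustration) checks it for all $2^{10}$ graphs on $5$ labelled vertices:

```python
from itertools import combinations

V = range(5)
PAIRS = [frozenset(p) for p in combinations(V, 2)]

def has_induced_matching_2(E):
    """Two edges with disjoint endpoints and no edge of G between them."""
    return any(set(e).isdisjoint(f) and
               not any(frozenset({a, b}) in E for a in e for b in f)
               for e, f in combinations(E, 2))

def complement_has_induced_c4(E):
    Ec = set(PAIRS) - E
    for quad in combinations(V, 4):
        inside = [p for p in combinations(quad, 2) if frozenset(p) in Ec]
        # 4 vertices, 4 edges, all degrees 2: an induced 4-cycle
        if len(inside) == 4 and all(sum(v in p for p in inside) == 2 for v in quad):
            return True
    return False

# nu(G) >= 2 iff the complement of G contains an induced C_4:
# check all 2^10 graphs on 5 labelled vertices.
for k in range(len(PAIRS) + 1):
    for es in combinations(PAIRS, k):
        E = set(es)
        assert has_induced_matching_2(E) == complement_has_induced_c4(E)
print("equivalence verified on all graphs with 5 vertices")
```

The point of the check is the bijection visible in the proof: an induced matching $\{ab, cd\}$ in $G$ corresponds exactly to the induced $4$-cycle $a\text{-}c\text{-}b\text{-}d$ in $G^c$, whose two chords $ab$ and $cd$ are the matching edges.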
A satisfactory solution to Problem \ref{prob.reg3} would be a good starting point to tackle Problem \ref{prob.FHVT}, since regularity 3 is the first open case of the problem. In fact, in this case, $\reg I(G)^q$ is expected to be linear starting at $q = 2$. \begin{problem}[Nevo-Peeva] Suppose that $\nu(G) = 1$ and $\reg I(G) = 3$. Then is it true that for all $q \ge 2$, $$\reg I(G)^q = 2q?$$ \end{problem} In many known cases where the asymptotic linear function $\reg I(G)^q$ can be computed, it happens to be $\reg I(G)^q = 2q+\nu(G)-1$. We would like to see if this is the case whenever the equality is already known to hold for small values of $q$. If this is indeed the case, then how far must one go before concluding that the equality holds for all $q \ge 1$? \begin{problem} Let $G$ be a graph. Find a number $N$ such that if $\reg I(G)^q = 2q+ \nu(G)-1$ for all $1 \le q \le N$ then, for all $q \ge 1$, we have $$\reg I(G)^q = 2q+\nu(G)-1.$$ \end{problem} \noindent (Computational experiments seem to suggest that $N$ can be taken to be 2.) A closely related question is whether, for special classes of graphs satisfying $\reg I(G) = \nu(G)+1$, one would have $\reg I(G)^q = 2q+\nu(G)-1$ for all $q \ge 1$ (or for all $q \gg 0$). Inspired by Theorem \ref{thm.inducedmatch1}, we raise the following question. \begin{question} \label{quest.nu+1} Let $G$ be a graph and let $I = I(G)$ be its edge ideal. Suppose that $G$ is of one of the following types: \begin{enumerate} \item $G$ is chordal; \item $G$ is weakly chordal; \item $G$ is sequentially Cohen-Macaulay bipartite; \item $G$ is vertex decomposable and contains no closed circuit of length 5; \item $G$ is $(C_4,C_5)$-free vertex decomposable. \end{enumerate} Is it true that for all $q \gg 0$, $$\reg I^q = 2q+\nu(G)-1?$$ \end{question} Note that chordal graphs and sequentially Cohen-Macaulay bipartite graphs are vertex decomposable. Also, bipartite graphs contain no closed circuit of length 5.
Thus, in Question \ref{quest.nu+1}, an affirmative answer to part (4) would imply one for part (3). It is well-known that the resolution of a monomial ideal may depend on the characteristic of the ground field. And yet, in all known cases where the regularity of powers of an edge ideal can be computed, it is characteristic-independent. We would like to see examples where this is not the case, or a confirmation that this is always the case if the regularity of the edge ideal itself is characteristic-independent. \begin{problem} \quad \begin{enumerate} \item Find examples of graphs $G$ for which the asymptotic linear function $\reg I(G)^q$, for $q \gg 0$, is characteristic-dependent. \item Suppose that $\reg I(G)$ is independent of the characteristic of the ground field. Is the asymptotic linear function $\reg I(G)^q$, for $q \gg 0$, necessarily also characteristic-independent? \end{enumerate} \end{problem} When an exact formula may not be available, it is of interest to find lower and upper bounds. Since the lower bound in Theorem \ref{thm.generallower} holds for any graph, it is desirable to find an upper bound to couple with this general lower bound. In this direction, one may either try to prove the bound in Theorem \ref{thm.JNS} for any graph or to relate the regularity of powers of $I(G)$ to the regularity of $I(G)$ itself. \begin{conjecture}[Alilooee, Banerjee, Beyarslan and H\`a] \label{conj.BBH} Let $G$ be any graph and let $I = I(G)$ be its edge ideal. Then for all $q \ge 1$, we have \begin{enumerate} \item $\reg I^q \le 2q + \text{co-chord}(G) -1.$ \item $\reg I^q \le 2q+\reg I -2.$ \item $\reg I^{q+1} \le \reg I^q + 2.$ \end{enumerate} \end{conjecture} Note that by Theorem \ref{thm.reggeneralupper}, Conjecture \ref{conj.BBH}.(3) $\Rightarrow$ Conjecture \ref{conj.BBH}.(2) $\Rightarrow$ Conjecture \ref{conj.BBH}.(1).
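The co-chordal cover number appearing in part (1) can be computed by exhaustive search for small graphs. The following sketch is our own Python illustration; for simplicity it searches over partitions of the edge set into co-chordal subgraphs, which in general only gives an upper bound for $\text{co-chord}(G)$, but the value it finds for $C_8$ matches the one quoted in the remark following Theorem \ref{thm.JNS}:

```python
from itertools import combinations, product

def _is_induced_cycle(E, S):
    """Does the vertex subset S induce a chordless cycle in the graph with edge set E?"""
    inside = [p for p in combinations(S, 2) if frozenset(p) in E]
    if len(inside) != len(S) or any(sum(v in p for p in inside) != 2 for v in S):
        return False
    seen, stack = set(), [S[0]]          # 2-regular + connected = a single cycle
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack += [w for p in inside if v in p for w in p if w != v]
    return len(seen) == len(S)

def is_chordal(V, E):
    """Chordal = no induced cycle on 4 or more vertices."""
    return not any(_is_induced_cycle(E, S)
                   for k in range(4, len(V) + 1) for S in combinations(V, k))

def cochordal(V, E):
    comp = {frozenset(p) for p in combinations(V, 2)} - set(E)
    return is_chordal(V, comp)

def cochord_upper(V, edge_list):
    """Least k admitting a partition of the edges into k co-chordal subgraphs."""
    edges = [frozenset(e) for e in edge_list]
    for k in range(1, len(edges) + 1):
        for colors in product(range(k), repeat=len(edges)):
            classes = [{e for e, c in zip(edges, colors) if c == i} for i in range(k)]
            if all(cochordal(V, cl) for cl in classes if cl):
                return k

V = range(8)
c8 = [(i, (i + 1) % 8) for i in range(8)]
result = cochord_upper(V, c8)
print(result)  # 3
```

Three consecutive subpaths of $C_8$ give a witness partition, while the search confirms that no split into two co-chordal pieces exists.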
Even though, in general, $\reg I^q$ is asymptotically a linear function, for small values of $q$, there are examples for which $\reg I^q > \reg I^{q+1}$. We ask if this would \emph{not} be the case for edge ideals of graphs. \begin{question} Let $G$ be a graph and let $I = I(G)$ be its edge ideal. Is the function $\reg I^q$ increasing for all $q \ge 1$? \end{question} In investigating the asymptotic linear function $\reg I^q$, it is also of interest to know the smallest value $q_0$ starting from which $\reg I^q$ attains its linear form. In known cases where the regularity of $I(G)^q$ can be computed explicitly for all $q \ge 1$, we have $q_0 \le 2$. Computational experiments seem to suggest that this is indeed always the case. \begin{question} \label{quest.q0} Let $G$ be a graph and let $q_0$ be the least integer such that $\reg I(G)^q$ is a linear function for all $q \ge q_0$. Is it true that $q_0 \le 2$? \end{question} A much weaker, yet still very interesting, question is whether $q_0 \le \reg I(G)$; this question is still wide open! A significant step toward answering Question \ref{quest.q0} is to give a uniform bound for $q_0$ (i.e., a bound that does not depend on $G$). \begin{problem} Find a number $N$ (which may depend on $n$ and $m$) such that for any graph $G$ with $n$ vertices and $m$ edges, we have $q_0 \le N$. \end{problem}
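Computational experiments of the kind mentioned above require a computer algebra system. As a sketch, one can generate Macaulay2 input from Python; the session below is a best-guess usage of Macaulay2's \texttt{monomialIdeal} and \texttt{regularity} commands (and it ignores conventions such as the shift between $\reg I$ and $\reg S/I$), so it should be treated as a template rather than a tested script:

```python
def m2_script(n, edges, max_q):
    """Emit Macaulay2 input printing reg I(G)^q for q = 1..max_q (hypothetical usage)."""
    gens = ", ".join(f"x_{u}*x_{v}" for u, v in edges)
    lines = [f"R = QQ[x_1..x_{n}];",
             f"I = monomialIdeal({gens});"]
    lines += [f"print regularity(I^{q});" for q in range(1, max_q + 1)]
    return "\n".join(lines)

# Edge ideal of the 5-cycle, powers q = 1..4
print(m2_script(5, [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)], 4))
```

Scanning the output of such runs over families of small graphs is one way to probe Question \ref{quest.q0} and the problem above experimentally.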
https://arxiv.org/abs/2208.07229
Proof of a conjecture on the determinant of walk matrix of rooted product with a path
The walk matrix of an $n$-vertex graph $G$ with adjacency matrix $A$, denoted by $W(G)$, is $[e,Ae,\ldots,A^{n-1}e]$, where $e$ is the all-ones vector. Let $G\circ P_m$ be the rooted product of $G$ and a rooted path $P_m$ (taking an endvertex as the root), i.e., $G\circ P_m$ is a graph obtained from $G$ and $n$ copies of $P_m$ by identifying each vertex of $G$ with an endvertex of a copy of $P_m$. Mao-Liu-Wang (2015) and Mao-Wang (2022) proved that, for $m=2$ and $m\in\{3,4\}$, respectively, $$\det W(G\circ P_m)=\pm a_0^{\lfloor\frac{m}{2}\rfloor}(\det W(G))^m,$$ where $a_0$ is the constant term of the characteristic polynomial of $G$. Furthermore, Mao-Wang (2022) conjectured that the formula holds for any $m\ge 2$. In this note, we verify this conjecture using the technique of Chebyshev polynomials.
\section{Introduction} \label{intro} Let $G$ be a simple graph with vertex set $\{1,\ldots,n\}$. The \emph{adjacency matrix} of $G$ is the $n\times n$ symmetric matrix $A=(a_{i,j})$, where $a_{i,j}=1$ if $i$ and $j$ are adjacent; $a_{i,j}=0$ otherwise. For a graph $G$, the \emph{walk matrix} of $G$ is \begin{equation} W=W(G):=[e,Ae,\ldots,A^{n-1}e], \end{equation} where $e$ is the all-ones vector. Note that walk matrices are clearly integral but are usually not symmetric. Compared to general integral matrices, walk matrices of graphs have some special properties. For example, the determinant of any walk matrix of order $n$ is always a multiple of $2^{\lfloor\frac{n}{2}\rfloor}$. This kind of matrix has attracted increasing attention in recent years as many interesting properties of graphs are closely related to the corresponding walk matrices. A typical result is a theorem of Wang \cite{wang2017JCTB} which says that any graph $G$ with $2^{-\lfloor\frac{n}{2}\rfloor}\det W(G)$ odd and square-free is uniquely determined (up to isomorphism) by its generalized spectrum. Here, the generalized spectrum of a graph $G$ means the spectrum of $G$ together with that of its complement $\overline{G}$. In \cite{mao2015}, in order to construct graphs that are determined by their generalized spectra, the authors considered the rooted product graph $G\circ P_2$ and proved the following theorem. Throughout this paper, we shall fix a graph $G$ and use $a_0$ to denote the constant term of the characteristic polynomial of $G$. \begin{theorem}[\cite{mao2015}] $$\det W(G\circ P_2)=\pm a_0(\det W(G))^2.$$ \end{theorem} Recently, Mao and Wang \cite{mao2022} extended the above formula to $G\circ P_m$ for $m=3,4$.
Precisely, they proved: \begin{theorem}[\cite{mao2022}] For $m=3$ or $4$, $$\det W(G\circ P_m)=\pm a_0^{\lfloor\frac{m}{2}\rfloor}(\det W(G))^m.$$ \end{theorem} In the same paper, the authors proposed the following natural conjecture, which (if true) unifies and extends the above two theorems. \begin{conjecture}[\cite{mao2022}]\label{mainconj} For any positive integer $m\ge 2$, $$\det W(G\circ P_m)=\pm a_0^{\lfloor\frac{m}{2}\rfloor}(\det W(G))^m.$$ \end{conjecture} The main aim of this note is to confirm this conjecture. \begin{theorem}\label{main} Conjecture \ref{mainconj} is true. \end{theorem} The overall strategy of the proof is the same as in \cite{mao2022}, which is based on explicit computations of eigenvalues and eigenvectors. Indeed, most derivations in \cite{mao2022} for $m=3,4$ can be easily extended to any positive $m$ except some resultant-related computations. A new finding of this note is that most computations involved are closely related to Chebyshev polynomials of the second kind. A crucial step is a newly established equality concerning the resultant of two special linear combinations of Chebyshev polynomials (Lemma \ref{newres}). \section{Eigenvalues and eigenvectors of $G\circ P_m$} We always regard an endvertex of $P_m$ as its root vertex. For an $n$-vertex labelled graph $G$, the \emph{rooted product graph} $G\circ P_m$ is a graph obtained from $G$ and $n$ copies of $P_m$ by identifying the root vertex of the $i$-th copy of $P_m$ with the $i$-th vertex of $G$ for $i=1,2,\ldots,n$; see Fig.~1 for an illustration. This is a special case of rooted product graphs $G\circ H$ introduced by Godsil-McKay \cite{godsil1978} and Schwenk \cite{schwenk1974}. \begin{figure} \centering \includegraphics[height=6.6cm]{fig11.pdf} \caption{Graph $G$ (left) and the rooted product graph $G\circ P_4$ (right).} \end{figure} \begin{definition}\normalfont{ Let $A=(a_{ij})$ be an $m\times n$ matrix and $B$ a $p \times q$ matrix.
The \emph{Kronecker product} $A\otimes B$ is the $pm \times qn$ block matrix: $$A\otimes B=\begin{bmatrix} a_{11}B&\cdots&a_{1n}B\\ \vdots&\ddots&\vdots\\ a_{m1}B&\cdots&a_{mn}B \end{bmatrix}. $$ } \end{definition} By appropriately labelling the vertices in $G\circ P_m$, the adjacency matrix $A(G\circ P_m)$ has a nice structure. \begin{observation}\label{adjGP} $A(G\circ P_m)=A(P_m)\otimes I_n+D_1\otimes A(G),$ where $I_n$ is the identity matrix of order $n$ and $D_1$ is the diagonal matrix $\textup{diag~}(1,0,\ldots,0)$ of order $m$. \end{observation} For a graph $G$, we use $\phi(G;x)$ to denote the characteristic polynomial of $G$, i.e., $\phi(G;x)=\det(xI-A(G))$. The roots of $\phi(G;x)=0$ are called the eigenvalues (spectrum) of $G$. The following lemma is a special case of the decomposition of $\phi(G\circ H;x)$ derived by Schwenk \cite{schwenk1974} (see also \cite{godsil1978,gutman1980}). \begin{lemma} $\phi(G\circ P_m;x)=(\phi(P_{m-1};x))^n\phi\left(G;\frac{\phi(P_m;x)}{\phi(P_{m-1};x)}\right)=\prod\limits_{i=1}^n(\phi(P_m;x)-\lambda_i\phi(P_{m-1};x)),$ where $\lambda_1,\ldots,\lambda_n$ are the eigenvalues of $G$. \end{lemma} Let $U_n(x)$ be the $n$-th Chebyshev polynomial of the second kind, defined by $$U_n(\cos\theta)=\frac{\sin(n+1)\theta}{\sin\theta}.$$ It is known that the polynomials $U_n(x)$ satisfy the three-term recurrence relation $U_{n+1}(x)=2xU_n(x)-U_{n-1}(x)$ with initial conditions $U_0(x)=1$ and $U_1(x)=2x$. Define $S_n(x)=U_n(x/2)$. Then $S_n(x)$ is a monic polynomial with integral coefficients and is referred to as the \emph{renormalized} Chebyshev polynomial \cite{rivlin1990}. It is well known that $\phi(P_m;x)=S_m(x)$. \begin{definition}\label{eigmu}\normalfont{ Let $\lambda_1,\ldots,\lambda_n$ denote the eigenvalues of $G$ and $\xi_1,\ldots,\xi_n$ the corresponding normalized eigenvectors.
We use $\mu_i^{(j)}$ $(j\in\{1,2,\ldots,m\})$ to denote all zeroes of $S_m(x)-\lambda_iS_{m-1}(x)$ for $i\in\{1,2,\ldots,n\}$ and write $$\eta_i^{(j)}=\frac{1}{S_{m-1}(\mu_i^{(j)})}\begin{bmatrix} S_{m-1}(\mu_i^{(j)})\\ S_{m-2}(\mu_i^{(j)})\\ \vdots\\ S_0(\mu_i^{(j)}) \end{bmatrix}\otimes \xi_i.$$ } \end{definition} It should be pointed out that $S_{m-1}(\mu_i^{(j)})$ is never zero; see Corollary \ref{res1restated} in Sect.~\ref{sectres}. \begin{lemma}\label{distmu} For any $i\in\{1,2,\ldots,n\}$, all roots of $S_m(x)-\lambda_iS_{m-1}(x)$ are simple, i.e., $\mu_i^{(j_1)}\neq \mu_i^{(j_2)}$ for any distinct $j_1$ and $j_2$ in $\{1,2,\ldots,m\}$. \end{lemma} \begin{proof} The roots of the renormalized Chebyshev polynomials $S_m(x)$ and $S_{m-1}(x)$ are $\{a_k=2\cos\frac{k\pi}{m+1}\colon\,1\le k\le m\}$ and $\{b_k=2\cos\frac{k\pi}{m}\colon\,1\le k\le m-1\}$, respectively. Note that $a_1>b_1>a_2>b_2>\cdots>a_{m-1}>b_{m-1}>a_m$. Moreover, as all roots $a_k$ are clearly \emph{simple}, we find that the sequence $S_m(+\infty), S_m(b_1),S_m(b_2),\ldots,S_m(b_{m-1}),S_m(-\infty)$ must have alternating signs. Write $f(x)=S_m(x)-\lambda_iS_{m-1}(x)$. Since $S_{m-1}(b_k)=0$ for $k=1,2,\ldots,m-1$ and $m-1=\deg S_{m-1}(x)<\deg S_m(x)=m$, the signs of $S_m(x)$ and $f(x)$ are the same for each $x\in\{b_1,\ldots,b_{m-1}\}\cup\{+\infty,-\infty\}$. This means the sequence $f(+\infty)$, $f(b_1)$, $f(b_2)$, $\ldots$, $f(b_{m-1})$, $f(-\infty)$ also has alternating signs. By the intermediate value theorem, $f(x)$ has at least one root in each of the $m$ intervals: $(-\infty,b_{m-1})$, $(b_{m-1},b_{m-2})$, $\ldots,$ $(b_2,b_1)$ and $(b_1,+\infty)$. Since $f(x)$ is a polynomial of degree $m$, it has exactly one root in each of these intervals; in particular, all roots of $f(x)$ are simple, as desired. \end{proof} \begin{lemma}\label{eigA} Let $\tilde{A}$ denote the adjacency matrix of $G\circ P_m$. Then $\tilde{A}\eta_i^{(j)}=\mu_i^{(j)}\eta_i^{(j)}$ for $i\in\{1,2,\ldots,n\}$ and $j\in\{1,2,\ldots,m\}$.
\end{lemma} \begin{proof} We fix $i$ and $j$ and write $s_k=S_k(\mu_i^{(j)})$ $(k=0,1,\ldots, m-1)$ for simplicity. By Observation \ref{adjGP} and some basic properties of the Kronecker product, we obtain \begin{eqnarray}\label{Aeta} \tilde{A}\eta_i^{(j)} &=&\frac{1}{s_{m-1}}(A(P_m)\otimes I_n+D_1\otimes A(G))((s_{m-1},s_{m-2},\ldots,s_0)^\textup{T}\otimes \xi_i)\nonumber\\ &=&\frac{1}{s_{m-1}}((A(P_m)\begin{bmatrix} s_{m-1}\\ s_{m-2}\\ \vdots\\ s_0 \end{bmatrix})\otimes \xi_i+\begin{bmatrix} s_{m-1}\\ 0\\ \vdots\\ 0 \end{bmatrix}\otimes (\lambda_i \xi_i))\nonumber\\ &= &\frac{1}{s_{m-1}}(A(P_m)\begin{bmatrix} s_{m-1}\\ s_{m-2}\\ \vdots\\ s_0 \end{bmatrix}+\begin{bmatrix} \lambda_i s_{m-1}\\ 0\\ \vdots\\ 0 \end{bmatrix})\otimes \xi_i\nonumber\\ &= &\frac{1}{s_{m-1}}\begin{bmatrix} s_{m-2}+\lambda_is_{m-1}\\ s_{m-3}+s_{m-1}\\ \vdots\\ s_0+s_2\\ s_1 \end{bmatrix}\otimes \xi_i. \end{eqnarray} By Definition \ref{eigmu}, we see that $\lambda_i s_{m-1}=s_m$. Noting that $S_1(x)=xS_0(x)$ and $S_k(x)+S_{k+2}(x)=xS_{k+1}(x)$ for any $k\ge 0$, we obtain \begin{equation} \begin{bmatrix} s_{m-2}+\lambda_is_{m-1}\\ s_{m-3}+s_{m-1}\\ \vdots\\ s_0+s_2\\ s_1 \end{bmatrix}=\begin{bmatrix} s_{m-2}+s_{m}\\ s_{m-3}+s_{m-1}\\ \vdots\\ s_0+s_2\\ s_1 \end{bmatrix}=\mu_i^{(j)}\begin{bmatrix} s_{m-1}\\ s_{m-2}\\ \vdots\\ s_1\\ s_0 \end{bmatrix}. \end{equation} Now, Eq. \eqref{Aeta} becomes $$\tilde{A}\eta_i^{(j)}=\frac{1}{s_{m-1}}\mu_{i}^{(j)}\begin{bmatrix} s_{m-1}\\ s_{m-2}\\ \vdots\\ s_1\\ s_0 \end{bmatrix}\otimes\xi_i=\mu_i^{(j)}\eta_i^{(j)},$$ as desired. \end{proof} \section{Resultants of Chebyshev-related polynomials}\label{sectres} \begin{definition}\normalfont{ Let $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$ and $g(x)=b_mx^m+b_{m-1}x^{m-1}+\cdots+b_1x+b_0$.
The resultant of $f(x)$ and $g(x)$, denoted by $\textup{Res}_x(f(x),g(x))$, or simply $\textup{Res}(f(x),g(x))$, is defined to be $$a_n^mb_m^n\prod_{1\le i\le n,1\le j\le m}(\alpha_i-\beta_j),$$ where the $\alpha_i$'s and $\beta_j$'s are the roots (in the complex field $\mathbb{C}$) of $f(x)$ and $g(x)$, respectively. } \end{definition} We list some basic properties of resultants for convenience. \begin{lemma}\label{basicres} Let $f(x)=a_nx^n+\cdots+a_0=a_n\prod_{i=1}^n(x-\alpha_i)$ and $g(x)=b_mx^m+\cdots+b_0=b_m\prod_{j=1}^m(x-\beta_j)$. Then the following hold:\\ \textup{(\rmnum{1})} $\textup{Res}(f(x),g(x))=a_n^m\prod_{i=1}^{n}g(\alpha_i) =(-1)^{mn}b_m^{n}\prod_{j=1}^m f(\beta_j);$\\ \textup{(\rmnum{2})} If $m<n$ then $\textup{Res}_x(f(x)+tg(x),g(x))=\textup{Res}_x(f(x),g(x))$ for any $t\in \mathbb{C}$;\\ \textup{(\rmnum{3})} $\textup{Res}_x(f(tx),g(tx))=t^{mn}\textup{Res}_x(f(x),g(x))$ for any $t\in \mathbb{C}\setminus\{0\}.$ \end{lemma} We need the following result due to Dilcher and Stolarsky \cite{dilcher2005}; see \cite{jacobs2011,louboutin2013} for different proofs. \begin{lemma}[\cite{dilcher2005}]For any integer $m\ge 1$, $$\textup{Res}(U_m(x),U_{m-1}(x))=(-1)^{\frac{m(m-1)}{2}}2^{m(m-1)}.$$ \end{lemma} \begin{corollary}\label{res1restated} For any integers $m,n\ge 1$ and $i\in\{1,\ldots,n\}$, $$\prod_{j=1}^mS_{m-1}(\mu_i^{(j)})=(-1)^{\frac{m(m-1)}{2}}.$$ \end{corollary} \begin{proof} Recall that $S_{m-1}(x)=U_{m-1}(x/2)$ is a monic polynomial and so is $S_{m}(x)-\lambda_iS_{m-1}(x).$ Since $\mu_i^{(j)}$'s are roots of $S_{m}(x)-\lambda_iS_{m-1}(x)$, we obtain, by Lemma \ref{basicres}, \begin{eqnarray*}\label{Res1} \prod_{j=1}^mS_{m-1}(\mu_i^{(j)}) &=&\textup{Res}(S_{m}(x)-\lambda_iS_{m-1}(x),S_{m-1}(x))\\ &=&\textup{Res}(S_{m}(x),S_{m-1}(x))\\ &=&\left(\frac{1}{2}\right)^{m(m-1)}\textup{Res}(U_{m}(x),U_{m-1}(x))\\ &=&(-1)^{\frac{m(m-1)}{2}}, \end{eqnarray*} as desired.
\end{proof} The following result on the resultant of Chebyshev-related polynomials seems to be new. We guessed this equality during this study and asked for a proof on MathOverflow. We are very grateful to Terry Tao for his elegant proof, which we include here. \begin{lemma}[\cite{MO}]\label{newres}For any integer $m\ge 1$ and any complex number $t$, $$\textup{Res}_x \left( U_m(x)+tU_{m-1}(x),\sum_{k=0}^{m-1}U_k(x) \right) =(-1)^{\frac{m(m-1)}{2}} t^{\left\lfloor\frac{m}{2} \right\rfloor}2^{m(m-1)}.$$ \end{lemma} \begin{proof} Since $U_m(x) + t U_{m-1}(x)$ is of degree $m$ and $\sum_{k=0}^{m-1} U_k(x)$ is of degree $m-1$ with leading coefficient $2^{m-1}$, the resultant factors as $$ (-1)^{m(m-1)}2^{m(m-1)} \prod_{j=1}^{m-1} (U_m(\beta_j) + t U_{m-1}(\beta_j))$$ where $\beta_1,\dots,\beta_{m-1}$ are the zeroes of $\sum_{k=0}^{m-1} U_k(x)$. Fortunately, these zeroes can be located explicitly using the usual trigonometric addition and subtraction identities. Telescoping the trigonometric identity $$\sin k \theta = \frac{\cos\left(k-\frac{1}{2}\right) \theta - \cos\left(k+\frac{1}{2}\right) \theta}{2 \sin \frac{\theta}{2} }$$ we conclude that $$ \sum_{k=0}^{m-1} U_k(\cos \theta) = \frac{1}{\sin\theta} \sum_{k=1}^{m}\sin k\theta=\frac{\cos\frac{\theta}{2}-\cos\left(m+\frac{1}{2}\right)\theta}{2 \sin \theta \sin \frac{\theta}{2}} = \frac{ \sin\frac{m+1}{2} \theta\sin\frac{m}{2} \theta}{2 \cos \frac{\theta}{2} \sin^2 \frac{\theta}{2}}$$ and so the $m-1 = \lfloor \frac{m}{2} \rfloor + \lfloor \frac{m-1}{2} \rfloor$ zeroes of $\sum_{k=0}^{m-1} U_k(x)$ take the form $\cos\frac{2\pi j}{m+1}$ for $1 \leq j \le \lfloor\frac{m}{2}\rfloor$ and $\cos \frac{2\pi j}{m} $ for $1 \leq j \le \lfloor\frac{m-1}{2}\rfloor$.
Since the zeroes $\cos \frac{2\pi j}{m+1}$ of the first class are also zeroes of $U_m(x)$, and those of the second class $\cos\frac{2\pi j}{m}$ are zeroes of $U_{m-1}(x)$, the resultant therefore simplifies to $$ (-1)^{m(m-1)}2^{m(m-1)} t^{\lfloor \frac{m}{2} \rfloor} \prod_{1 \leq j \le\lfloor \frac{m}{2}\rfloor} U_{m-1}\left( \cos\frac{2\pi j}{m+1} \right) \prod_{1 \leq j \le\lfloor\frac{m-1}{2}\rfloor} U_{m}\left( \cos\frac{2\pi j}{m} \right).$$ But $$ U_{m-1}\left( \cos\frac{2\pi j}{m+1} \right) =\frac{\sin \frac{2\pi mj}{m+1}}{\sin\frac{2\pi j}{m+1}} = -1$$ and similarly $$ U_{m}\left( \cos \frac{2\pi j}{m} \right) = \frac{\sin\frac{2\pi (m+1)j}{m}}{\sin\frac{2\pi j}{m}} = +1$$ and the lemma then follows after counting up the signs. \end{proof} Using an argument similar to that in the proof of Corollary \ref{res1restated}, we obtain the following \begin{corollary}\label{res2restated} For any integers $m,n\ge 1$ and $i\in\{1,\ldots,n\}$, $$\prod_{j=1}^m\sum_{k=0}^{m-1}S_k(\mu_i^{(j)})=(-1)^{\frac{m(m-1)}{2}} (-\lambda_i)^{\left\lfloor\frac{m}{2} \right\rfloor}.$$ \end{corollary} \section{Proof of Theorem \ref{main}} \begin{lemma}[\cite{mao2015}]\label{basicW} Let $\lambda_i$ be the eigenvalues of $G$ with corresponding normalized eigenvectors $\xi_i$ for $i=1,2,\ldots,n$. Then $$\det W(G)=\pm \prod_{1\le i_1< i_2\le n}(\lambda_{i_2}-\lambda_{i_1})\prod_{1\le i\le n}(e_n^\textup{T} \xi_i).$$ \end{lemma} \begin{definition} $S(x)=\sum_{k=0}^{m-1}S_k(x).$ \end{definition} The following equality is straightforward. \begin{lemma}\label{emn} $e_{mn}^\textup{T} \eta_i^{(j)}=\frac{S(\mu_{i}^{(j)})}{S_{m-1}(\mu_i^{(j)})}e_n^\textup{T} \xi_i.$ \end{lemma} Let $\tilde{A}=A(G\circ P_m)$.
By Lemmas \ref{eigA} and \ref{emn}, we have \begin{eqnarray}\label{eA} &&e_{mn}^\textup{T}\tilde{A}^k[\eta_1^{(1)},\ldots,\eta_n^{(1)};\ldots;\eta_1^{(m)},\ldots,\eta_n^{(m)}]\nonumber\\ &=&e_{mn}^\textup{T}[\eta_1^{(1)},\ldots,\eta_n^{(1)};\ldots;\eta_1^{(m)},\ldots,\eta_n^{(m)}]\begin{bmatrix} (\mu_1^{(1)})^k&&&\\ &(\mu_2^{(1)})^k&&\\ &&\ddots&\\ &&&(\mu_n^{(m)})^k \end{bmatrix}_{(mn)\times (mn)}\nonumber\\ &=&[(\mu_1^{(1)})^k,(\mu_2^{(1)})^k,\ldots,(\mu_n^{(m)})^k]\begin{bmatrix} \frac{S(\mu_{1}^{(1)})}{S_{m-1}(\mu_1^{(1)})}e_n^\textup{T} \xi_1&&&\\ &\frac{S(\mu_{2}^{(1)})}{S_{m-1}(\mu_2^{(1)})}e_n^\textup{T} \xi_2&&\\ &&\ddots&\\ &&&\frac{S(\mu_{n}^{(m)})}{S_{m-1}(\mu_n^{(m)})}e_n^\textup{T} \xi_n \end{bmatrix}. \end{eqnarray} Let $D$ denote the diagonal matrix in Eq.~\eqref{eA}. Write $E^{(j)}=[\eta_1^{(j)},\ldots,\eta_n^{(j)}]$ and $$M^{(j)}=\begin{bmatrix} 1&1&\cdots&1\\ \mu_1^{(j)}&\mu_2^{(j)}&\cdots&\mu_n^{(j)}\\ \vdots&\vdots&\vdots&\vdots\\ (\mu_1^{(j)})^{mn-1}&(\mu_2^{(j)})^{mn-1}&\cdots&(\mu_n^{(j)})^{mn-1} \end{bmatrix}_{(mn)\times n}. $$ Then, summing Eq.~\eqref{eA} over $k$ from $0$ to $mn-1$ leads to \begin{equation}\label{wemd} (W(G\circ P_m))^\textup{T}[E^{(1)},\ldots,E^{(m)}]=[M^{(1)},\ldots,M^{(m)}]D. \end{equation} \noindent\textbf{Claim 1}: It holds that $$\det D=a_0^{\lfloor\frac{m}{2}\rfloor}\left(\prod\limits_{1\le i\le n}e_n^\textup{T}\xi_i\right)^m,$$ where $a_0=(-1)^n\det A(G)$, the constant term of the characteristic polynomial of $G$. By Corollary \ref{res1restated}, we obtain $$\prod_{1\le i\le n}\prod_{1\le j\le m}S_{m-1}(\mu_i^{(j)})=(-1)^\frac{m(m-1)n}{2}.$$ Recall that $S(x)=\sum_{k=0}^{m-1}S_k(x)$.
We can rewrite Corollary \ref{res2restated} as $$\prod_{j=1}^mS(\mu_i^{(j)})=(-1)^{\frac{m(m-1)}{2}} (-\lambda_i)^{\left\lfloor\frac{m}{2} \right\rfloor}.$$ Noting that $\prod_{i=1}^{n}(-\lambda_i)=a_0$, we obtain $$\prod_{i=1}^{n}\prod_{j=1}^mS(\mu_i^{(j)})=(-1)^{\frac{m(m-1)n}{2}} a_0^{\lfloor\frac{m}{2} \rfloor},$$ and hence Claim 1 follows by direct multiplication of all diagonal entries in $D$. \noindent\textbf{Claim 2}: It holds that $$\det [M^{(1)},\ldots,M^{(m)}]=\left(\prod_{i=1}^{n}\prod_{1\le j_1<j_2\le m}\left(\mu_i^{(j_2)}-\mu_i^{(j_1)}\right)\right)\left(\prod_{1\le i_1< i_2\le n}(\lambda_{i_2}-\lambda_{i_1})\right)^m.$$ We use the co-lexicographical order on the Cartesian product $\{1,\ldots,n\}\times \{1,\ldots,m\}$. Precisely, $(i_1,j_1)<(i_2,j_2)$ (in co-lexicographical order) if either $j_1<j_2$, or $j_1=j_2$ and $i_1<i_2$. Write $V=[M^{(1)},\ldots,M^{(m)}]$. Using the familiar formula for a Vandermonde matrix, we obtain \begin{eqnarray}\label{detV} \det V\nonumber&=&\prod_{(i_1,j_1)<(i_2,j_2)}\left(\mu_{i_2}^{(j_2)}-\mu_{i_1}^{(j_1)}\right)\\ &=&\left(\prod_{i=1}^{n}\prod_{1\le j_1<j_2\le m}\left(\mu_{i}^{(j_2)}-\mu_{i}^{(j_1)}\right)\right)\left(\prod_{i_1\neq i_2}\prod_{(i_1,j_1)<(i_2,j_2)}\left(\mu_{i_2}^{(j_2)}-\mu_{i_1}^{(j_1)}\right)\right). \end{eqnarray} The second factor of Eq.~\eqref{detV} can be regrouped as \begin{eqnarray}\label{detV2} &&\prod_{1\le i_1< i_2\le n}\left(\prod_{(i_1,j_1)<(i_2,j_2)}\left(\mu_{i_2}^{(j_2)}-\mu_{i_1}^{(j_1)}\right)\right)\left(\prod_{(i_2,j_2)<(i_1,j_1)}\left(\mu_{i_1}^{(j_1)}-\mu_{i_2}^{(j_2)}\right)\right)\nonumber\\ &=&\prod_{1\le i_1< i_2\le n}\left(\prod_{j_2=1}^{m}\prod_{j_1=1}^{m}\left(\mu_{i_2}^{(j_2)}-\mu_{i_1}^{(j_1)}\right)\right)\left(\prod_{(i_2,j_2)<(i_1,j_1)}(-1)\right). \end{eqnarray} Recall that $S_m(x)-\lambda_iS_{m-1}(x)=\prod_{j=1}^m(x-\mu_i^{(j)})$.
We see that $$\prod_{j_2=1}^{m}\prod_{j_1=1}^{m}\left(\mu_{i_2}^{(j_2)}-\mu_{i_1}^{(j_1)}\right)=\prod_{j_2=1}^{m}\left(S_m(\mu_{i_2}^{(j_2)})-\lambda_{i_1}S_{m-1}(\mu_{i_2}^{(j_2)})\right)=(\lambda_{i_2}-\lambda_{i_1})^m\prod_{j_2=1}^{m}S_{m-1}(\mu_{i_2}^{(j_2)}).$$ Note that for $i_1< i_2$, the inequality $(i_2,j_2)<(i_1,j_1)$ holds if and only if $j_2<j_1$. This means, for any fixed $i_1,i_2\in\{1,2,\ldots,n\}$ with $i_1<i_2$, $$\prod_{(i_2,j_2)<(i_1,j_1)}(-1)=(-1)^\frac{m(m-1)}{2}.$$ Finally, by Corollary \ref{res1restated}, $$\prod_{j_2=1}^{m}S_{m-1}(\mu_{i_2}^{(j_2)})=(-1)^\frac{m(m-1)}{2}.$$ Now, Eq.~\eqref{detV2} reduces to $$\left(\prod_{1\le i_1< i_2\le n}\left(\lambda_{i_2}-\lambda_{i_1}\right)\right)^m$$ and Claim 2 holds by Eq. \eqref{detV}. \noindent\textbf{Claim 3}: It holds that $$\det[E^{(1)},\ldots,E^{(m)}]=\pm \prod_{i=1}^{n}\prod_{1\le j_1< j_2\le m}\left(\mu_i^{(j_2)}-\mu_i^{(j_1)}\right).$$ Let $\tilde{s}_k^{(i,j)}=\frac{S_k(\mu_i^{(j)})}{S_{m-1}(\mu_i^{(j)})}$ and $K_{j_1,j_2}$ be the diagonal matrix: \begin{equation} K_{j_1,j_2}=\begin{bmatrix} \tilde{s}_{m-j_1}^{(1,j_2)}&&&\\ &\tilde{s}_{m-j_1}^{(2,j_2)}&&\\ &&\ddots&\\ &&&\tilde{s}_{m-j_1}^{(n,j_2)} \end{bmatrix}_{n\times n}. \end{equation} Write $Q=[\xi_1,\xi_2,\ldots,\xi_n]$. Then it is routine to check the following factorization: \begin{equation}\label{efac} [E^{(1)},E^{(2)},\ldots,E^{(m)}]=\begin{bmatrix} Q&&&\\ &Q&&\\ &&\ddots&\\ &&&Q \end{bmatrix} \begin{bmatrix} K_{1,1}&K_{1,2}&\cdots&K_{1,m}\\ K_{2,1}&K_{2,2}&\cdots&K_{2,m}\\ \cdots&\cdots&\cdots&\cdots\\ K_{m,1}&K_{m,2}&\cdots&K_{m,m} \end{bmatrix}, \end{equation} where the first factor is a block diagonal matrix with $m$ identical diagonal blocks. 
Since each block $K_{j_1,j_2}$ is diagonal, it is not difficult to see that, via an appropriate permutation matrix, the block matrix $[K_{j_1,j_2}]_{m\times m}$ is similar to the following block diagonal matrix \begin{equation*} \begin{bmatrix} L_{1}&&&\\ &L_2&&\\ &&\ddots&\\ &&&L_n \end{bmatrix}, \end{equation*} where the $(j_1,j_2)$-th entry of each $L_i$ is the $(i,i)$-th entry of $K_{j_1,j_2}$. Written exactly, \begin{eqnarray}\label{Lt} L_i&=&\begin{bmatrix} \tilde{s}_{m-1}^{(i,1)}& \tilde{s}_{m-1}^{(i,2)}&\ldots& \tilde{s}_{m-1}^{(i,m)}\\ \tilde{s}_{m-2}^{(i,1)}& \tilde{s}_{m-2}^{(i,2)}&\ldots& \tilde{s}_{m-2}^{(i,m)}\\ \cdots&\cdots&\cdots&\cdots\\ \tilde{s}_{0}^{(i,1)}& \tilde{s}_{0}^{(i,2)}&\ldots& \tilde{s}_{0}^{(i,m)}\\ \end{bmatrix}\nonumber\\ &=&\begin{bmatrix} S_{m-1}(\mu_i^{(1)})&\ldots& S_{m-1}(\mu_i^{(m)})\\ S_{m-2}(\mu_i^{(1)})&\ldots& S_{m-2}(\mu_i^{(m)})\\ \cdots&\cdots&\cdots\\ S_{0}(\mu_i^{(1)})&\ldots& S_{0}(\mu_i^{(m)})\\ \end{bmatrix}\begin{bmatrix} \frac{1}{S_{m-1}(\mu_i^{(1)})}&&\\ &\ddots&\\ && \frac{1}{S_{m-1}(\mu_i^{(m)})}\\ \end{bmatrix}. \end{eqnarray} Since $S_k(x)$ is a monic polynomial with degree $k$ for each nonnegative integer $k$, we find that the determinant of the first factor in Eq.~\eqref{Lt} equals \begin{equation}\label{detL} \det\begin{bmatrix} (\mu_i^{(1)})^{m-1}&(\mu_i^{(2)})^{m-1}&\cdots& (\mu_i^{(m)})^{m-1}\\ \cdots&\cdots&\cdots&\cdots\\ \mu_i^{(1)}&\mu_i^{(2)}&\cdots& \mu_i^{(m)}\\ 1&1& \cdots&1 \end{bmatrix}=(-1)^{\frac{m(m-1)}{2}}\prod_{1\le j_1< j_2\le m}\left(\mu_i^{(j_2)}-\mu_i^{(j_1)}\right). \end{equation} As $\det Q=\pm 1$ and $\det [K_{j_1,j_2}]_{m\times m}=\prod_{i=1}^n\det L_i$, Claim 3 follows by Eqs.~\eqref{efac}-\eqref{detL}. By Claim 3 and Lemma \ref{distmu}, we see that $\det[E^{(1)},\ldots,E^{(m)}]\neq 0$. 
Taking determinants on both sides of Eq.~\eqref{wemd} and using Claims 1-3, we obtain \begin{eqnarray*} \det W(G\circ P_m)&=&\frac{\det[M^{(1)},\ldots,M^{(m)}]\det D}{\det [E^{(1)},\ldots,E^{(m)}]}\\ &=&\pm a_0^{\lfloor\frac{m}{2}\rfloor}\left(\prod_{1\le i_1<i_2\le n}(\lambda_{i_2}-\lambda_{i_1})\right)^m\left(\prod\limits_{1\le i\le n}e_n^\textup{T}\xi_i\right)^m\\ &=&\pm a_0^{\lfloor\frac{m}{2}\rfloor}(\det W(G))^m, \end{eqnarray*} where the last equality follows from Lemma \ref{basicW}. This completes the proof of Theorem \ref{main}. For any even positive integer $n$, let $\mathcal{F}^*_{n}$ be the family of all $n$-vertex graphs $G$ such that $ \det W(G)=\pm 2^\frac{n}{2}$ and the constant term of $\phi(G;x)$ is $\pm 1$. Write $$\mathcal{F}^*=\bigcup_{n \textup{~even}}\mathcal{F}^*_{n}.$$ As a special case of the aforementioned theorem of Wang \cite{wang2017JCTB}, each graph in $\mathcal{F}^*$ is DGS (determined by its generalized spectrum). \begin{lemma}[\cite{mao2022}]\label{constant1} If the constant term of $\phi(G;x)$ is $\pm 1$, then so is that of $\phi(G\circ P_m;x)$ for each integer $m\ge 2$. \end{lemma} As a direct consequence of Theorem \ref{main} and Lemma \ref{constant1}, we obtain the following result, which was conjectured (in a slightly different form) by Mao and Wang \cite[Conjecture 3.1]{mao2022}. \begin{theorem}\label{dgs} If $G\in \mathcal{F}^*$, then for any integer $m\ge 2$, the graph $G\circ P_m\in \mathcal{F}^*$ and hence is DGS. \end{theorem} Theorem \ref{dgs} gives a simple method to construct large DGS-graphs from small ones. For example, let $G$ be the left graph in Figure 1. It can be easily checked that $G\in \mathcal{F}^*$. Thus, using Theorem \ref{dgs} iteratively, we see that, for any sequence of integers $m_1,m_2,\ldots$ with each $m_i\ge 2$, all graphs in the family $$G\circ P_{m_1},(G\circ P_{m_1})\circ P_{m_2},((G\circ P_{m_1})\circ P_{m_2})\circ P_{m_3},\cdots$$ are DGS.
\section*{Acknowledgments} This work is supported by the National Natural Science Foundation of China (Grant Nos. 12001006 and 12101379), Natural Science Basic Research Plan in Shaanxi Province of China (Grant No.\,2020JQ-696) and the Scientific Research Foundation of Anhui Polytechnic University (Grant No.\,2019YQQ024).
https://arxiv.org/abs/2106.14755
Counting Divisions of a $2\times n$ Rectangular Grid
Consider a $2\times n$ rectangular grid composed of $1\times 1$ squares. Cutting only along the edges between squares, how many ways are there to divide the board into $k$ pieces? Building off the work of Durham and Richmond, who found the closed-form solutions for the number of divisions into 2 and 3 pieces, we prove a recursive relationship that counts the number of divisions of the board into $k$ pieces. Using this recursion, we obtain closed-form solutions for the number of divisions for $k=4$ and $k=5$ using fitting techniques on data generated from the recursion. Furthermore, we show that the closed-form solution for any fixed $k$ must be a polynomial on $n$ with degree $2k-2$.
\section{Algorithm for Counting Divisions of $n\times m$ Board into $k$ Pieces} \label{sec:algorithm} While $d_k^m(n)$ has explicit, directly proven formulas for small values of $m$ and $k$, computational methods are much more useful for collecting data at larger values. We used those data to fit polynomials that predict the number of divisions of the $2\times n$ board into more than two pieces; we prove those formulas satisfy the recursion of Theorem~\ref{drecursion} in the next section. Our algorithm involves encoding the board as a graph, where the vertices represent the squares, and two vertices are adjacent if and only if their squares are connected in the board. A cut corresponds to removing an edge from the graph, and every valid division of the board corresponds to removing some set of edges from the graph to produce some number of connected components. The algorithm checks all possible combinations of edges to be removed; in other words, it iterates through the power set of the edge set of the graph and finds the number of connected components of each graph after the edge removal. We implemented our algorithm in Python using Networkx [3] and Numpy [4]. However, not every sequence of cuts results in a valid division. For a division to be valid, any two squares that are separated by a cut must end up in different pieces of the division. In the algorithm, this means that if two vertices are adjacent before the edge removal and are not adjacent after the edge removal, the two vertices must not be in the same connected component of the resulting graph. Using this check for validity, the following algorithm returns the number of divisions of an $n\times m$ board into $1$ through $mn$ pieces. While the algorithm is computationally expensive, it is exhaustive. Therefore no divisions are missed by the algorithm, and the divisions into all numbers of pieces are returned at once.
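The procedure just described can be sketched in a few lines of Python. The version below is a self-contained re-implementation that replaces the Networkx calls with a small union--find, so only the standard library is needed; the function name `count_divisions` is ours.

```python
from itertools import combinations

def count_divisions(rows, cols):
    """Brute-force count of the divisions of a rows x cols board;
    returns a list counts with counts[k] = number of divisions into k pieces."""
    squares = [(r, c) for r in range(rows) for c in range(cols)]
    index = {sq: i for i, sq in enumerate(squares)}
    # Internal edges of the grid graph: right and down neighbours.
    edges = [(index[(r, c)], index[(r, c + 1)])
             for r in range(rows) for c in range(cols - 1)]
    edges += [(index[(r, c)], index[(r + 1, c)])
              for r in range(rows - 1) for c in range(cols)]

    def roots_after(kept):
        """Union-find over the kept (uncut) edges; returns each square's root."""
        parent = list(range(len(squares)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in kept:
            parent[find(u)] = find(v)
        return [find(i) for i in range(len(squares))]

    counts = [0] * (len(squares) + 1)
    for r in range(len(edges) + 1):
        for cut in combinations(edges, r):
            cut_set = set(cut)
            roots = roots_after([e for e in edges if e not in cut_set])
            # Validity check: every cut must separate its two squares.
            if all(roots[u] != roots[v] for u, v in cut_set):
                counts[len(set(roots))] += 1
    return counts
```

For example, `count_divisions(2, 2)` returns `[0, 1, 6, 4, 1]`: one division of the $2\times 2$ board into one piece, six into two pieces ($d_2(2)=6$), four into three, and one into four.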
\section{Recursive Relationship for Counting Divisions into $k$ Pieces} This problem is ripe for recursion because the $2\times n$ board can be extended by simply adding two squares onto the rightmost end, creating a $2\times (n+1)$ board. We can then use these two newly-added squares to form divisions of the $2\times (n+1)$ board in a way that builds off the divisions of the $2\times n$ board nested inside of it. We will denote the $2\times n$ board as $B=(S, A)$, where $S=\{x_0,x_1,\dots,x_{2n-1}\}$, and the $2\times (n+1)$ board as $B'=(S', A')$, where $S'=S\cup\{x_{2n},x_{2n+1}\}$. In order for $B'$ to build upon the divisions of $B$, special attention needs to be paid to the two rightmost squares of $B$, $x_{2n-2}$ and $x_{2n-1}$. Whether or not these squares are in different pieces is important for how the two newly added squares can be added to the board to create more divisions. We will partition the set of divisions $\mathcal{D}_k(n)$ into $\mathcal{S}_k(n)$ and $\mathcal{T}_k(n)$, where $\mathcal{S}_k(n)$ is the set of divisions where $x_{2n-2}$ and $x_{2n-1}$ are not in the same piece, and $\mathcal{T}_k(n)$ is the set of divisions where $x_{2n-2}$ and $x_{2n-1}$ are in the same piece. We will denote the number of divisions in each set as $|\mathcal{S}_k(n)|=s_k(n)$ and $|\mathcal{T}_k(n)|=t_k(n)$. \begin{theorem} The number of divisions of a $2\times n$ board into $k$ pieces satisfies the following recursion: \begin{equation*} d_k(n+1)=d_{k-2}(n)+3d_{k-1}(n)+d_k(n)+2s_k(n) \end{equation*} \label{drecursion} \end{theorem} \begin{proof} The two additional squares in $B'$ can be used to add 0, 1, or 2 extra pieces to each division of $B$. Thus, to obtain a division into $k$ pieces for $B'$, we only have to consider how the additional squares interact with the divisions of $B$ into \textbf{(i)} $k-2$, \textbf{(ii)} $k-1$, and \textbf{(iii)} $k$ pieces.
Throughout this proof, we let $D=\{P_1,P_2,\dots,P_\ell\}$ be the division in $B$ into $\ell$ pieces and we let $D'$ be the division in $B'$. \textbf{Case (i):} To obtain $k$ pieces in $B'$, $x_{2n}$ and $x_{2n+1}$ must both be added as individual pieces, that is, the division in $B'$ is:\\ $D'=\{P_1,P_2,\dots,P_{k-2},\{x_{2n}\},\{x_{2n+1}\}\}$ (see Figure~\ref{fig:kminus2}).\\ There is only one way to add the two squares for each division $D\in \mathcal{D}_{k-2}(n)$ to obtain a division in $\mathcal{D}_k(n+1)$. Therefore, there must be a term of $d_{k-2}(n)$ in the recursion. \textbf{Case (ii):} To obtain $k$ pieces in $B'$, $x_{2n}$ and $x_{2n+1}$ must be used to add exactly one piece. There are three ways to do this. The first is to add both as a single piece:\\ $D'=\{P_1,P_2,\dots,P_{k-1},\{x_{2n},x_{2n+1}\}\}$.\\ The other two ways are to add $x_{2n+1}$ or $x_{2n}$ as its own piece, and attach the other square to the piece containing $x_{2n-2}$ or $x_{2n-1}$, respectively. If $x_{2n-2}\in P_i$ and $x_{2n-1}\in P_j$ (note that it is \textit{not} necessarily the case that $i\neq j$), then the two ways to add the single piece are:\\ $D'=\{P_1,P_2,\dots,P_i\cup\{x_{2n}\},\dots,P_{k-1},\{x_{2n+1}\}\}$ and\\ $D'=\{P_1,P_2,\dots,P_j\cup\{x_{2n+1}\},\dots,P_{k-1},\{x_{2n}\}\}$ (see Figure~\ref{fig:kminus1}).\\ Thus, for each division $D\in \mathcal{D}_{k-1}(n)$, there are 3 ways to obtain a division in $\mathcal{D}_k(n+1)$. Therefore, there must be a term of $3d_{k-1}(n)$ in the recursion. \textbf{Case (iii):} To obtain $k$ pieces in $B'$, $x_{2n}$ and $x_{2n+1}$ must both be attached to some other piece in $B$. How these squares can be added depends on whether the division in $B$ is in $\mathcal{T}_k(n)$ or $\mathcal{S}_k(n)$. If $D\in \mathcal{T}_k(n)$, then $x_{2n-2},x_{2n-1}\in P_i$, that is, they are in the same piece.
The only way to add $x_{2n}$ and $x_{2n+1}$ to remain at $k$ pieces is to add them both to $P_i$, so:\\ $D'=\{P_1,P_2,\dots,P_i\cup\{x_{2n},x_{2n+1}\},\dots,P_k\}$.\\ There is only one way to add the two squares for each division $D\in \mathcal{T}_{k}(n)$ to obtain a division in $\mathcal{D}_k(n+1)$. Therefore, there must be a term of $t_{k}(n)$ in the recursion. If $D\in \mathcal{S}_k(n)$, then $x_{2n-2}\in P_i$ and $x_{2n-1}\in P_j$, where $i\neq j$. There are three ways to add $x_{2n}$ and $x_{2n+1}$ to remain at exactly $k$ pieces. They can both be added to $P_i$, they can both be added to $P_j$, or they can split, with $x_{2n}$ being added to $P_i$ and $x_{2n+1}$ being added to $P_j$:\\ $D'=\{P_1,P_2,\dots,P_i\cup\{x_{2n},x_{2n+1}\},\dots,P_{k}\}$,\\ $D'=\{P_1,P_2,\dots,P_j\cup\{x_{2n},x_{2n+1}\},\dots,P_{k}\}$, and \\ $D'=\{P_1,P_2,\dots,P_i\cup\{x_{2n}\},P_j\cup\{x_{2n+1}\},\dots,P_{k}\}$ (see Figure~\ref{fig:sep_recursion}).\\ Thus, for each division $D\in \mathcal{S}_{k}(n)$, there are 3 ways to obtain a division in $\mathcal{D}_k(n+1)$. Therefore, there must be a term of $3s_{k}(n)$ in the recursion. Adding everything together, we get the following: \begin{equation*} d_k(n+1)=d_{k-2}(n)+3d_{k-1}(n)+t_k(n)+3s_k(n) \end{equation*} It is quite clear that $\mathcal{D}_k(n)=\mathcal{S}_k(n)\cup \mathcal{T}_k(n)$ and $\mathcal{S}_k(n)\cap \mathcal{T}_k(n)=\emptyset$, so $d_k(n)=s_k(n)+t_k(n)$. We can substitute this into the equation above to obtain the recursion in the theorem statement. \end{proof} \begin{figure} \centering \includegraphics[width=4cm]{kminus2.eps} \caption{The single way to add the two squares to each division in $\mathcal{D}_{k-2}(n)$.} \label{fig:kminus2} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{kminus1.eps} \caption{The three ways to add the two squares to each division in $\mathcal{D}_{k-1}(n)$. 
Note that a cut \textit{could} exist between $x_{2n-2}$ and $x_{2n-1}$; it has been omitted in this figure.} \label{fig:kminus1} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{sep_recursion.eps} \caption{The three ways to add the two squares to each division in $\mathcal{S}_k(n)$.} \label{fig:sep_recursion} \end{figure} This recursion is only of use when $s_k(n)$ can be calculated. Fortunately, $s_k(n)$ can also be computed recursively, in terms that do not involve $d_k(n)$. \begin{theorem} The number of divisions of a $2\times n$ board into $k$ pieces where the rightmost squares separate satisfies the following recursion: \begin{equation*} s_k(n+1)=d_{k-2}(n)+2d_{k-1}(n)+s_k(n) \end{equation*} \label{s_recursion} \end{theorem} \begin{proof} Using the same arguments from the proof of Theorem~\ref{drecursion}, finding these divisions simply involves counting the number of divisions of $B'$ where $x_{2n}$ and $x_{2n+1}$ end up in different pieces. They separate in the $d_{k-2}(n)$ term, in two of the cases of the $d_{k-1}(n)$ term, in none of the cases of the $t_k(n)$ term, and in one of the cases of the $s_k(n)$ term, so the recursion is just the sum of these cases. \end{proof} As a check of this result, we show that Durham and Richmond's result for the number of divisions into two pieces satisfies this recursion. Note that our notation of $s_2(n)$ corresponds to their result that the number of partitions where the first two squares are in different pieces is $2n-1$. \begin{align*} d_0(n)=0,&\quad d_1(n)=1,\quad d_2(n)=2n^2-n,\quad s_2(n)=2n-1\\ d_2(n+1)&=0+3(1)+2n^2-n+2(2n-1) \\&=3+2n^2-n+4n-2=2n^2+3n+1 \\&=2(n+1)^2-(n+1) \end{align*} \section{Closed-Form Solutions for Divisions of $2\times n$ Board into Three, Four, and Five Pieces} \label{sec:threeandfour} Using the tools we have proven thus far, we can find closed-form solutions for the number of divisions of a $2\times n$ board into $k=3,4,5$ pieces.
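Generating data from the two recursions is immediate. A minimal Python sketch (ours) reads the base cases off the $2\times 1$ board, which has exactly one division into one piece and one into two, the latter also separating the two squares:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def d(k, n):
    """Number of divisions of the 2 x n board into k pieces (Theorem 2)."""
    if k < 1 or k > 2 * n:
        return 0
    if n == 1:
        return 1 if k in (1, 2) else 0
    return d(k - 2, n - 1) + 3 * d(k - 1, n - 1) + d(k, n - 1) + 2 * s(k, n - 1)

@lru_cache(maxsize=None)
def s(k, n):
    """Divisions of the 2 x n board into k pieces with the two rightmost
    squares in different pieces (Theorem 3)."""
    if k < 1 or k > 2 * n:
        return 0
    if n == 1:
        return 1 if k == 2 else 0
    return d(k - 2, n - 1) + 2 * d(k - 1, n - 1) + s(k, n - 1)
```

For instance, `d(3, 3)` evaluates to `29`, and `d(2, n)` reproduces Durham and Richmond's $2n^2-n$.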
First, we find closed forms of $s_3(n)$, $s_4(n)$, and $s_5(n)$ by generating data using the recursions and then fitting a polynomial to the points. We repeat this process for $d_3(n)$ (and show it matches the result by Durham and Richmond), $d_4(n)$, and $d_5(n)$. Because we know the degrees of each of these polynomials, we only have to generate $2k-2$ points for each $s_k(n)$ and $2k-1$ points for each $d_k(n)$ to exactly determine their coefficients. There are multiple methods of interpolation, all of which will produce the same result. Our fitting methods are least-squares regression using Numpy~\cite{harris} and Lagrange interpolation~\cite{archer}. Newton's divided difference method~\cite{weisstein} is another avenue that future researchers can employ. \input{formula_tables.tex} The formulas for $s_k(n)$ are shown in Table \ref{tab:s_formulas} and the formulas for $d_k(n)$ are shown in Table \ref{tab:d_formulas}. These polynomials can be explicitly proven to hold by induction using the recursions. As an example, we show the formulas hold for $k=3$ by using the proposed polynomials, as well as the proven polynomials for $d_1(n)$ and $d_2(n)$. \begin{example} \begin{align*} s_3(n+1)&=d_1(n)+2d_2(n)+s_3(n)\\ &=(1)+2n(2n-1)+\frac{4}{3}n^3-3n^2+\frac{8}{3}n-1\\ &=\frac{4}{3}n^3+n^2+\frac{2}{3}n\\ &=\frac{4}{3}(n+1)^3-3(n+1)^2+\frac{8}{3}(n+1)-1\\ d_3(n+1)&=d_1(n)+3d_2(n)+d_3(n)+2s_3(n)\\ &=(1)+3n(2n-1)+\frac{2}{3}n^4-\frac{4}{3}n^3+\frac{11}{6}n^2-\frac{13}{6}n+1\\ &\quad+2\left(\frac{4}{3}n^3-3n^2+\frac{8}{3}n-1\right)\\ &= \frac{2}{3}n^4+\frac{4}{3}n^3+\frac{11}{6}n^2+\frac{1}{6}n\\ &= \frac{2}{3}(n+1)^4-\frac{4}{3}(n+1)^3+\frac{11}{6}(n+1)^2-\frac{13}{6}(n+1)+1 \end{align*} \end{example} The proofs for $k=4$ and $k=5$ proceed in the exact same manner, and are omitted for brevity. 
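The fitting step can also be carried out exactly over the rationals rather than with floating-point least squares. The sketch below (function name ours) performs Lagrange interpolation with Python's `fractions` module on the values $d_3(1),\ldots,d_3(5)=0,4,29,107,286$ produced by the recursion of Theorem~\ref{drecursion}:

```python
from fractions import Fraction

def lagrange_coeffs(points):
    """Exact interpolation through the (x, y) points; returns coefficients
    [c0, c1, ...] of p(x) = c0 + c1*x + ... as Fractions."""
    coeffs = [Fraction(0)] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]      # running product of (x - xj) over j != i
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [Fraction(0)] * (len(basis) + 1)
            for t, c in enumerate(basis):
                new[t + 1] += c    # multiply by x
                new[t] -= c * xj   # multiply by -xj
            basis = new
            denom *= xi - xj
        for t, c in enumerate(basis):
            coeffs[t] += yi * c / denom
    return coeffs

# d_3(1), ..., d_3(5) generated by the recursion:
coeffs = lagrange_coeffs([(1, 0), (2, 4), (3, 29), (4, 107), (5, 286)])
# ascending coefficients of d_3(n): 1, -13/6, 11/6, -4/3, 2/3
```

Since a polynomial of degree at most $2k-2$ is determined by $2k-1$ points, this recovers exactly $d_3(n)=\frac{2}{3}n^4-\frac{4}{3}n^3+\frac{11}{6}n^2-\frac{13}{6}n+1$, with no rounding of decimals required.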
The greatest challenge to proving formulas like these is to actually obtain the polynomials; the proof that they hold consists of only algebraic manipulations that can readily be outsourced to a computer algebra system. In theory, this process can be continued for $k=6,7,\dots$, as long as enough points are generated using the recursion. So long as the computer calculating the fitting polynomials has sufficient precision, the decimals returned by the interpolation algorithm can be converted into fractions relatively easily. \section{Functional Forms of $d_k^2(n)$ and $s_k^2(n)$}\label{sec:forms} In the previous section, we implicitly assumed that the functions that count the number of divisions are polynomials in $n$. This assumption is correct. In this section, we prove that for all $k$, $d_k(n)$ and $s_k(n)$ must be polynomials in $n$. Moreover, we show that the degrees of these polynomials are $2k-2$ and $2k-3$, respectively. Our proof that the functions are polynomials is based on the fact that a sum of polynomials is a polynomial, and our proof of their degrees is based on Faulhaber's formula. A straightforward derivation of Faulhaber's formula is given in \cite{orosi}. The formula gives an expression for the sum of the first $n$ integers each raised to the positive integer power $p$ as a polynomial of degree $p+1$. Stated more explicitly, \begin{equation*} \sum_{k=1}^nk^p=\frac{1}{p+1}\sum_{j=0}^p(-1)^j\binom{p+1}{j} B_jn^{p+1-j}, \end{equation*} where $B_j$ is the $j$-th Bernoulli number \cite{apostol}. As we are only interested in the degree of the polynomial, we will condense the coefficients into a single variable, \begin{equation*} \sum_{k=1}^nk^p=\sum_{j=0}^pA_jn^{p+1-j}, \end{equation*} where $A_j=\frac{(-1)^j}{p+1}\binom{p+1}{j}B_j$. Before we can employ the formula, we must rewrite our recursions in terms of summations. 
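As a quick sanity check before applying the formula, Faulhaber's expression can be compared against brute-force power sums. The sketch below computes the Bernoulli numbers (in the convention $B_1=-\tfrac{1}{2}$, which is the convention matching the $(-1)^j$ factor above) via the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}B_j=0$:

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_0..B_m (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        # Recurrence: sum_{j=0}^{k} C(k+1, j) B_j = 0, solved for B_k.
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / Fraction(k + 1))
    return B

def faulhaber(p, n):
    """Sum of the first n p-th powers via Faulhaber's formula."""
    B = bernoulli(p)
    return sum(Fraction((-1) ** j * comb(p + 1, j)) * B[j] * n ** (p + 1 - j)
               for j in range(p + 1)) / (p + 1)

# The formula agrees with the brute-force sums.
for p in range(1, 6):
    for n in range(1, 10):
        assert faulhaber(p, n) == sum(k ** p for k in range(1, n + 1))
```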
Beginning with $s_k(n)$, by Theorem~\ref{s_recursion} we have \begin{equation*} s_k(n)=d_{k-2}(n-1)+2d_{k-1}(n-1)+s_k(n-1). \end{equation*} Substituting for $s_k(n-1)$ using the recursion gives \begin{align*} s_k(n)&=d_{k-2}(n-1)+2d_{k-1}(n-1)\\ &+d_{k-2}(n-2)+2d_{k-1}(n-2)+s_k(n-2). \end{align*} This can be repeated until the input is 0, where $d_k(0)=s_k(0)=0$ for all $k$, so \begin{equation} s_k(n)=\sum_{j=1}^{n-1} d_{k-2}(j)+2d_{k-1}(j). \label{eq:s_summation} \end{equation} We can do the same for $d_k(n)$. From Theorem~\ref{drecursion} we have \begin{equation*} d_k(n)= d_{k-2}(n-1)+3d_{k-1}(n-1)+2s_k(n-1)+d_k(n-1). \end{equation*} Substituting for $d_k(n-1)$ using the recursion gives \begin{align*} d_k(n)&= d_{k-2}(n-1)+3d_{k-1}(n-1)+2s_k(n-1)\\ &+d_{k-2}(n-2)+3d_{k-1}(n-2)+2s_k(n-2)+d_k(n-2). \end{align*} Again, this can be repeated all the way to 0, so \begin{equation} d_k(n)=\sum_{j=1}^{n-1} d_{k-2}(j)+3d_{k-1}(j)+2s_k(j). \label{eq:d_summation} \end{equation} \begin{theorem} For all $k$, $s_k(n)$ and $d_k(n)$ are polynomials in $n$, where $s_k(n)$ is of degree $2k-3$ and $d_k(n)$ is of degree $2k-2$. \end{theorem} \begin{proof} The proof is by strong induction on $k$. The base cases are satisfied for $k=1,2$, as $d_1(n)=1$, $d_2(n)=2n^2-n$, $s_1(n)=0$, and $s_2(n)=2n-1$, each of which is a polynomial with the appropriate degree. Our inductive assumption is that for all positive integers $\ell$ less than some integer $k$, $s_\ell(n)$ and $d_\ell(n)$ are polynomials of degree $2\ell-3$ and $2\ell-2$, respectively. By Equation~(\ref{eq:s_summation}), $s_k(n)$ must be a polynomial, because it is a sum of polynomials. The same holds true for $d_k(n)$ by Equation~(\ref{eq:d_summation}). To prove the degrees, we start with $s_k(n)$. From Equation~(\ref{eq:s_summation}), we have \begin{equation*} s_k(n)=\sum_{j=1}^{n-1} d_{k-2}(j)+2d_{k-1}(j). 
\end{equation*} Expanding $d_{k-1}(j)$ as a polynomial of degree $2k-4$ gives \begin{align*} s_k(n)&=\sum_{j=1}^{n-1}\left(d_{k-2}(j)+2\left(\sum_{i=0}^{2k-4} C_i j^i\right)\right)\\ &=\sum_{j=1}^{n-1}\left( d_{k-2}(j)+2C_{2k-4}j^{2k-4}+2\left(\sum_{i=0}^{2k-5} C_i j^i\right)\right). \end{align*} By Faulhaber's formula, $\displaystyle\sum_{j=1}^{n-1} 2C_{2k-4}j^{2k-4}$ is equal to a polynomial of degree $2k-3$. The remainder of the summation involves terms of powers that are at most $2k-5$, so Faulhaber's formula will only produce polynomials of degree up to $2k-4$ for those terms. Therefore $s_k(n)$ is equal to a polynomial of degree $2k-3$. To prove that $d_k(n)$ is equal to a polynomial of degree $2k-2$, we begin with Equation~(\ref{eq:d_summation}), \begin{equation*} d_k(n)=\sum_{j=1}^{n-1} d_{k-2}(j)+3d_{k-1}(j)+2s_k(j). \end{equation*} Expanding $s_k(j)$ as a polynomial of degree $2k-3$ gives \begin{align*} d_k(n)&=\sum_{j=1}^{n-1}\left( d_{k-2}(j)+3d_{k-1}(j)+2\left(\sum_{i=0}^{2k-3}C_ij^i\right)\right)\\ &=\sum_{j=1}^{n-1}\left( d_{k-2}(j)+3d_{k-1}(j)+2C_{2k-3}j^{2k-3}+2\left(\sum_{i=0}^{2k-4}C_ij^i\right)\right). \end{align*} By Faulhaber's formula, $\displaystyle\sum_{j=1}^{n-1} 2C_{2k-3}j^{2k-3}$ is equal to a polynomial of degree $2k-2$. The remainder of the summation involves terms of powers that are at most $2k-4$, so Faulhaber's formula will only produce polynomials of degree up to $2k-3$ for those terms. Therefore $d_k(n)$ is equal to a polynomial of degree $2k-2$. This completes the induction. \end{proof} The power of knowing that these functions are polynomials with predictable degrees is that their closed-form solutions can be found simply by using interpolation on data generated from the recursion, which can swiftly be implemented in most programming languages. 
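The predicted degrees can also be confirmed numerically by repeated finite differencing, since the $(d+1)$-st finite differences of a degree-$d$ polynomial sampled at consecutive integers vanish. A sketch using values of $d_4(n)$ taken from the table of values:

```python
# Values of d_4(n) for n = 1..8, from the table of values.
d4 = [0, 1, 21, 153, 678, 2241, 6091, 14385]

def degree_by_differences(values):
    """Degree of the polynomial through consecutive integer points,
    found by differencing until the sequence becomes constant."""
    deg = 0
    while len(set(values)) > 1:
        values = [b - a for a, b in zip(values, values[1:])]
        deg += 1
    return deg

assert degree_by_differences(d4) == 2 * 4 - 2  # degree 6, as predicted
```

The constant sixth difference of this data is $64 = 720\cdot\frac{4}{45}$, consistent with $d_4(n)$ having leading coefficient $\frac{4}{45}$.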
A polynomial of degree $d$ is uniquely determined by $d+1$ points, so by generating enough data for some fixed $k$, the polynomial that counts the number of divisions can be calculated immediately, with no need for any combinatorial arguments. \section{Future Work} Counting the number of divisions of a $2\times n$ rectangular grid into $k$ pieces has exhibited multiple interesting patterns. The functions that count these divisions are all polynomials with predictable degrees, a fact that is not at all obvious at first glance. While we have shown that all the division-counting functions will be polynomials, these polynomials are unknown to us for $k>5$. Our process of fitting polynomials to the data generated from the recursion eventually failed due to the limited precision of our computer. Our recursion technique works well for the $2\times n$ board, but quickly becomes more complicated when the height of the board is 3 or greater. For a $3\times (n+1)$ board, the recursion that counts the number of divisions will involve the number of divisions of the $3\times n$ board into $k-3,k-2,k-1$ and $k$ pieces. While the $k-3$ case is easy (just add the three new squares as separate pieces), the other cases will depend on how the three rightmost squares of the $3\times n$ board are distributed among the pieces of each division. The cases would be: (i) all three are in different pieces, (ii) two are in one piece, the last is in another (of which there are three subcases depending on which square is alone), and (iii) all three are in the same piece. This would require defining more sets than just $\mathcal{S}_k(n)$ and $\mathcal{T}_k(n)$ to account for all possible ways to add the new squares. It is worth noting that this problem is deeply connected to graph theory. 
Our goal of counting the number of divisions of the $2\times n$ board into $k$ pieces is equivalent to counting the number of ways to remove edges from a $2\times n$ lattice graph while following some specific rules. The rules are that the removed edges must decompose the graph into $k$ connected components, and that if an edge is removed between two vertices, those vertices must end up in different connected components, similar to how a cut determines that two adjacent squares end up in different pieces. We were able to leverage this equivalence to write Python code that counts all the divisions of an $m\times n$ board into $k$ pieces for all $1\leq k\leq mn$. This was achieved by encoding the grid as an $m\times n$ lattice graph with adjacent squares joined by edges. A cut corresponded to removing the edge between two vertices. Using NetworkX~\cite{hagberg}, we were able to iterate through all possible edge removals and count the number of pieces that were returned for each removal that followed the rules mentioned previously. Removals that did not follow the rules were ignored. The program is not efficient, but it is exhaustive. To access the code, go to \url{https://github.com/jakebr118/Counting-Divisions/blob/main/division_counting}. A pattern in our polynomials is that the leading coefficient appears to decrease monotonically as $k$ increases. The leading coefficients of the polynomials for $2\leq k\leq 5$ are $2,\frac{2}{3},\frac{4}{45},\frac{2}{315}$, in order. Whether or not this pattern continues is unknown to us, but we conjecture that it holds. Our reasoning is that in order to ``control'' the number of divisions at small $n$, the coefficient on the highest-order term must stay small. A rigorous proof (or counterexample) is definitely in order, though. \begin{abstract} What is the number of ways to divide a $2\times n$ rectangular grid, cutting along the grid lines, into $k$ pieces? 
Are there formulas that calculate these numbers of divisions? If so, what do they look like? We show that all division-counting formulas will be polynomials in $n$ for all $k$, and that for a fixed $k$, the division-counting polynomial will have degree $2k-2$. We find these polynomials up to $k=5$. \end{abstract} \textbf{Acknowledgements}\\ The author wishes to thank Emilie Wiesner and Daniel Visscher for their support, knowledge, and inspiration throughout the course of the project. \section{Symmetry of Divisions}\label{sec:sym} Consider the two divisions shown in Figure~\ref{fig:symmetry}. One could reasonably argue that these two divisions are the same, as one can be obtained from the other by simply reflecting the board along a horizontal or vertical axis. This raises the question: is there a way to count the divisions that are the same under isometric transformations of the board, that is, rotations and reflections? To answer this question, we introduce the following definition: \begin{definition} Two divisions are \textbf{equivalent up to isometry} if and only if one division can be mapped onto the other by rotating or reflecting the board. The number of divisions up to isometry is denoted $i_2^2(n)$. \end{definition} Equivalence up to isometry forms an equivalence relation over the set of divisions of the board; therefore, this relation partitions the divisions into equivalence classes. Each division in an equivalence class can be obtained from the others by rotating and reflecting the overall board. Therefore, each equivalence class is an orbit of the divisions under the group actions of rotation and reflection. In this case, these actions correspond to the dihedral group of order 4, $\mathcal{D}_2$. This calls to mind Burnside's Lemma [5]: \begin{lemma}[Burnside's Lemma] Let $X$ be a set and $G$ be a group acting on the set. 
For each $g\in G$, let $X^g$ be the set of elements of $X$ fixed by $g$. Then the number of orbits $|X/G|$ of $X$ when acted upon by $G$ is \begin{equation*} |X/G|=\frac{1}{|G|}\sum_{g\in G}|X^g|. \end{equation*} \label{burnside} \end{lemma} \begin{figure} \centering \includegraphics[width=9cm]{symmetry.eps} \caption{Reflecting the board along a horizontal or vertical axis maps one division onto the other.} \label{fig:symmetry} \end{figure} Before moving on to counting the divisions up to isometry, a couple of additional lemmas will be proven to facilitate the arguments. \begin{lemma} For a $2\times n$ board being divided into 2 pieces, there are at most 2 vertical cuts. \end{lemma} \begin{proof} Consider a division consisting of 3 or more vertical cuts. Either three cuts are on the same row, or two are on one row and one is on the other. If three are on the same row, then one cut must be between the other two. The outer two cuts will form a division consisting of a rectangular piece and the remainder of the board. The third vertical cut will divide this rectangular piece into two pieces, giving three in total. If two are on one row and one is on the other, there are three cases: (i) one cut is between the other two, and the outer cuts are on the same row, (ii) one cut is between the other two, and the outer cuts are on different rows, (iii) the single cut in one row aligns with one of the two cuts in the other row. \\ \\ For case (i), the outer cuts will form a division consisting of a rectangular piece and the remainder of the board. The third cut will divide the remainder of the board, giving three total. For case (ii), the outer cuts will form a division consisting of an L-shaped piece and the remainder of the board. The third cut will divide this L-shaped piece into two pieces, giving three in total. For case (iii), the two cuts on the same row will form a division consisting of a rectangle and the remainder of the board. 
The third cut will divide the remainder of the board, giving three in total. \end{proof} \begin{lemma} Any division fixed by rotation must consist of isometrically identical pieces. \label{identical_rotation} \end{lemma} \begin{proof} Consider a division consisting of non-identical pieces. In order for a rotation to fix a division, each piece, when rotated, must map exactly onto where the other piece was previously. If the pieces are shaped differently, then neither piece will fit into where the other piece was previously, meaning the division cannot be fixed by rotation. This means a division consisting of non-identical pieces is not fixed by rotation, and the contrapositive must also be true. \end{proof} \begin{lemma} Any division fixed by reflection must map vertical cuts to vertical cuts. \label{cuts_reflection} \end{lemma} \begin{proof} Suppose there is a reflection that does not map vertical cuts to vertical cuts. Then, after the reflection, the division will be different, as the vertical cuts are not in the same place as before. This means that any reflection that does not map vertical cuts to vertical cuts cannot fix any division, and the contrapositive must also be true. \end{proof} \begin{lemma} Let $g\in \mathcal{D}_2$ not be the identity action. Then for each $x\in X^g$, there must be 0 or 2 vertical cuts. If there are 2, they must be the same distance from the vertical axis of reflection. \label{lem:not_identity} \end{lemma} \begin{proof} A division consisting of only 1 vertical cut cannot be fixed by any non-identity group action. This is because a single vertical cut corresponds to a division that results in a corner being removed from the main board. The non-rectangular remainder of the board cannot be fixed by any rotation or reflection, so the division as a whole cannot be fixed by any group action. \\ \\ Consider a division consisting of 2 vertical cuts. 
Suppose that the two cuts are not the same distance from the vertical axis of reflection, which means the resulting pieces cannot be identical. Then the division cannot be fixed by any symmetry in $\mathcal{D}_2$. By Lemma~\ref{identical_rotation}, it cannot be fixed by the rotation because the pieces are not identical. By Lemma~\ref{cuts_reflection}, it cannot be fixed by any reflection because any such reflection cannot map vertical cuts to vertical cuts, as they will not ``align'' with one another. Therefore any division where the vertical cuts are not the same distance from the vertical axis of reflection cannot be fixed by any non-identity group action, and the contrapositive must also be true. \end{proof} We can now prove the following theorem: \begin{theorem} The number of divisions up to isometry for the $2\times n$ board into two pieces is $$i_2^2(n)=\frac{1}{2}n(n+1).$$ \label{the:symmetry} \end{theorem} \begin{proof} Let $D_2^2(n)$ be the set of divisions of the $2\times n$ board into 2 pieces. The number of divisions up to isometry is equal to the number of orbits of $D_2^2(n)$ under the elements of $\mathcal{D}_2$. These are counted using Burnside's Lemma: \begin{equation*} i^2_2(n)=\frac{1}{|\mathcal{D}_2|}\sum_{g\in \mathcal{D}_2}|D_2^2(n)^g|. \end{equation*} Recall that $\mathcal{D}_2=\{e,\rho,\mu_H,\mu_V\}$, where $e$ is the identity, $\rho$ is a rotation by $180^\circ$, $\mu_H$ is a horizontal reflection, and $\mu_V$ is a vertical reflection. Clearly, all elements of $D_2^2(n)$ are fixed by $e$, so $|D_2^2(n)^e|=d_2^2(n)=2n^2-n$. For $\rho$, recall from Lemma~\ref{identical_rotation} that any division fixed by rotation must consist of identical pieces, and from Lemma~\ref{lem:not_identity} that the division must consist of 0 or 2 vertical cuts. If there are 0, then the board must be cut completely in half along the horizontal middle, which corresponds to 1 division. 
If there are 2, then they must be the same distance from the vertical axis of reflection. To produce identical pieces, the two vertical cuts must be on opposite sides of both the horizontal and vertical axes of reflection, corresponding to a division that creates two identical L-shaped pieces. There are $n-1$ ways to choose where to place the cut above the horizontal axis, which fixes the location of the second cut. This corresponds to $n-1$ divisions. Therefore, there are $n$ divisions fixed by rotation. For $\mu_H$, recall from Lemma~\ref{cuts_reflection} that any division fixed by reflection must map vertical cuts to vertical cuts, and from Lemma~\ref{lem:not_identity} that the division must consist of 0 or 2 vertical cuts. If there are 0, then the board must be cut completely in half along the horizontal middle, which corresponds to 1 division. If there are 2, then they must be the same distance from the vertical axis of reflection. To map vertical cuts to vertical cuts, the two vertical cuts must be on opposite sides of the horizontal axis of reflection, but on the same side of the vertical axis of reflection, corresponding to a division that cuts the board along a vertical cut of length 2. There are $n-1$ ways to choose where to place the cut above the horizontal axis, which fixes the location of the second cut. This corresponds to $n-1$ divisions. Therefore, there are $n$ divisions fixed by horizontal reflection. For $\mu_V$, recall from Lemma~\ref{cuts_reflection} that any division fixed by reflection must map vertical cuts to vertical cuts, and from Lemma~\ref{lem:not_identity} that the division must consist of 0 or 2 vertical cuts. If there are 0, then the board must be cut completely in half along the horizontal middle, which corresponds to 1 division. If there are 2, then they must be the same distance from the vertical axis of reflection. 
To map vertical cuts to vertical cuts, the two vertical cuts must be on the same side of the horizontal axis of reflection, but on opposite sides of the vertical axis of reflection, corresponding to a division that cuts a rectangle out of one side of the board, leaving a cup-shaped piece on the other side. There are $n-1$ ways to choose where to place the first cut, which fixes the location of the second cut. This corresponds to $n-1$ divisions. Therefore, there are $n$ divisions fixed by vertical reflection. Putting everything together: \begin{equation*} i_2^2(n)=\frac{1}{4}(2n^2-n+n+n+n)=\frac{1}{4}(2n^2+2n)=\frac{1}{2}n(n+1). \end{equation*} \end{proof} \section{Table of Values} \begin{table}[h] \centering \begin{tabular}{l|ccccc} $n$\textbackslash $k$&1&2&3&4&5\\\hline 1&1&1&0&0&0\\ 2&1&6&4&1&0\\ 3&1&15&29&21&7\\ 4&1&28&107&153&111\\ 5&1&45&286&678&831\\ 6&1&66&630&2241&4131\\ 7&1&91&1219&6091&15748\\ 8&1&120&2149&14385&49728\\ 9&1&153&3532&30556&136322\\ 10&1&190&5496&59745&334650\\ 11&1&231&8185&109297&751797\\ 12&1&276&11759&189321&1570261\\ 13&1&325&16394&313314&3085929\\ 14&1&378&22282&498849&5759013\\ 15&1&435&29631&768327&10280634\\ 16&1&496&38665&1149793&17657998\\ 17&1&561&49624&1677816&29321364\\ 18&1&630&62764&2394433&47256260\\ 19&1&703&78357&3350157&74164659\\ 20&1&780&96691&4605049&113659083\\\hline $n$\textbackslash $k$&6&7&8&9&10\\\hline 1&0&0&0&0&0\\ 2&0&0&0&0&0\\ 3&1&0&0&0&0\\ 4&45&10&1&0&0\\ 5&603&274&78&13&1\\ 6&4596&3334&1635&545&120\\ 7&24732&25747&18667&9668&3600\\ 8&104575&146831&145602&105508&56931\\ 9&369826&671848&868084&829399&600723\\ 10&1138712&2596968&4227342&5121823&4751906\\ 11&3138171&8779022&17557890&26233173&30193383\\ 12&7896713&26600748&64099355&115621525&161241434\\ 13&18416453&73572842&210256970&450245602&747044478\\ 14&40266876&188347198&630047025&1579913534&3073562178\\ 15&83292430&451183577&1747134845&5071899551&11430370077\\ 16&164186075&1020203769&4529679900&15074867903&38959972033\\ 
17&310252468&2193013248&11071498344&41885398146&123062891994\\ 18&564768560&4507745568&25686998844&109653320338&363498052540\\ 19&994447045&8903613660&56893770324&272254496222&1011537069434\\ 20&1699620357&16969012366&120878343957&644635341150&2668573664500\\ \end{tabular} \caption{The number of divisions $d_k^2(n)$ of a $2\times n$ board into $k$ pieces.} \label{tab:my_label} \end{table} \section{Closed Form Solution for Counting Divisions into Two Pieces} \label{sec:twopieces} Durham and Richmond previously showed that the number of divisions of the $2\times n$ board into two pieces is given by $2n^2-n$. Wagon~\cite{wagon} provided an alternative visual proof of their result; we offer yet another proof. Our argument makes use of induction and recursion, and these techniques will be utilized in a similar manner in the following sections. A key insight in our argument is that the divisions of a $2\times n$ board are embedded within the divisions of a $2\times (n+1)$ board, and the divisions of this ``sub-board'' (consisting of all but the rightmost column) correspond in a predictable manner with the divisions of the overall board. \begin{theorem}[Durham] \label{the:twopieces} The number of divisions of a $2\times n$ board into two pieces is $d^2_2(n)=n(2n-1)$. \end{theorem} \begin{proof} Our proof is by induction on $n$. By Remark~\ref{re:onedivision}, $d^2_2(1)=1=(1)(2(1)-1)$. For the induction, assume for a positive integer $j$ that $d_2^2(j)=j(2j-1)$. Each division of the $2\times j$ sub-board has some number of corresponding divisions in the $2\times (j+1)$ board. Note that the sub-board's division may be into only one piece. Color the rightmost vertically-connected squares of the $2\times(j+1)$ board blue and the rightmost vertically-connected squares of the $2\times j$ sub-board green (see Figure~\ref{fig:greenblue}). To obtain a division into two pieces on the $2\times(j+1)$ board, there are two cases to consider. First, there is the case where the $2\times j$ sub-board is divided into only one piece. 
In this case, there are three ways to obtain a division of the $2\times (j+1)$ board into two pieces. Two divisions can be obtained by cutting off one of the blue squares and attaching the other to the $2\times j$ sub-board, and one division is obtained by cutting off both blue squares into their own piece (see Figure~\ref{fig:greentogether}). \begin{figure} \centering \includegraphics[width=5cm]{twocolor.eps} \caption{Coloring argument that is used in the proof of Theorem~\ref{the:twopieces}.} \label{fig:greenblue} \end{figure} The remaining divisions can be obtained by dividing the $2\times j$ sub-board into two pieces and adding the blue squares onto them. Recall from Lemma~\ref{lem:separation} that the set of divisions of the $2\times j$ sub-board is partitioned according to whether or not the green squares separate. Furthermore, for each division, there will be some number of ways to add the blue squares to obtain a division of the $2\times (j+1)$ board. Therefore, we can write: \begin{equation} \label{eq:divisionsincomplete} d_2^2(j+1)=3+At_2^2(j)+Bs_2^2(j) \end{equation} for some positive integers $A$ and $B$. To find $A$, note that when the green squares do not separate, the blue squares must remain connected and attach to the green squares, or else the board will be divided into more than two pieces. Therefore, for each division of the $2\times j$ sub-board where the green squares do not separate, there is one corresponding division in the $2\times (j+1)$ board. Thus, $A=1$. \begin{figure} \centering \includegraphics[width=12cm]{sep.eps} \caption{The three ways for the blue squares to be added to the board when the $2\times j$ sub-board remains fully intact. 
The case on the left accounts for two ways, depending on which blue square becomes its own piece.} \label{fig:greentogether} \end{figure} When the green squares do separate, there are three possible sub-cases: (i) both blue squares attach to the top green square, (ii) both blue squares attach to the bottom green square, or (iii) the blue squares separate and attach one to each green square (see Figure~\ref{fig:addgreenseparate}). Therefore, for each division of the $2\times j$ board where the green squares do separate, there are 3 corresponding divisions in the $2\times (j+1)$ board, thus $B=3$. \begin{figure} \centering \includegraphics[width=12cm]{threesep.eps} \caption{The three ways to add the blue squares to a division of the $2\times j$ sub-board where the green squares separate.} \label{fig:addgreenseparate} \end{figure} Rewriting Equation~(\ref{eq:divisionsincomplete}) using Lemma~\ref{lem:separation}: \begin{equation} d_2^2(j+1)=3+t_2^2(j)+3s_2^2(j)=3+2s_2^2(j)+d_2^2(j) \label{eq:divisionscomplete} \end{equation} To count $s_2^2(j)$, note that the only divisions of the $2\times j$ sub-board that cause the green squares to separate are the division that divides the board along the horizontal middle and the divisions that remove only a corner of the board. To remove only a corner of the board, place a vertical cut along any of the $j-1$ vertical edges that are contained in the interior of the board (i.e., those edges that are not on the outer boundary). Then, make a horizontal cut along the horizontal middle of the board in the direction of the corner to be removed (see Figure~\ref{fig:greenseparate}). 
Thus, there are $j-1$ ways to remove one corner from the board, and because there are two corners being considered, the total number of divisions that separate the green squares is \begin{equation*} s_2^2(j)=2(j-1)+1=2j-1. \end{equation*} Substituting this into Equation~(\ref{eq:divisionscomplete}) and using the inductive assumption: \begin{equation*} d_2^2(j+1)=3+2(2j-1)+j(2j-1)=2j^2+3j+1=(j+1)(2(j+1)-1). \end{equation*} This completes the induction. \end{proof} \begin{figure} \centering \includegraphics[width=4cm]{endoff.eps} \caption{Each vertical red line is a possible edge to place a vertical cut that, when paired with the horizontal cut to the right edge of the board, removes the upper right corner of the board and results in the green squares becoming separated.} \label{fig:greenseparate} \end{figure} The ``adding squares'' approach from this section forms the core of our arguments for the next section. This strategy highlights how the number of divisions depends on the number of divisions of the board into a smaller number of pieces. In addition, there is a further dependence on \textit{how} the divisions are structured, particularly by whether or not the rightmost squares end up in separate pieces.
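The induction above can also be replayed numerically: iterating the recursion $d_2^2(j+1)=3+2s_2^2(j)+d_2^2(j)$ with $s_2^2(j)=2j-1$ and base case $d_2^2(1)=1$ reproduces the closed form. A minimal sketch:

```python
def d2(n):
    """Divisions of a 2 x n board into two pieces, computed by
    iterating d(j+1) = 3 + 2*(2j - 1) + d(j) from d(1) = 1."""
    d = 1
    for j in range(1, n):
        d = 3 + 2 * (2 * j - 1) + d
    return d

# Agrees with the closed form n(2n - 1) for every n checked.
assert all(d2(n) == n * (2 * n - 1) for n in range(1, 30))
```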
https://arxiv.org/abs/1809.05858
The Method of Alternating Projections
The method of alternating projections involves orthogonally projecting an element of a Hilbert space onto a collection of closed subspaces. It is known that the resulting sequence always converges in norm if the projections are taken periodically, or even quasiperiodically. We present proofs of such well known results, and offer an original proof for the case of two closed subspaces, known as von Neumann's theorem. Additionally, it is known that this sequence always converges with respect to the weak topology, regardless of the order projections are taken in. By focusing on projections directly, rather than the more general case of contractions considered previously in the literature, we are able to give a simpler proof of this result. We end by presenting a technical construction taken from a recent paper, of a sequence for which we do not have convergence in norm.
\section{Introduction} \label{introduction} The method of alternating projections has been widely studied in mathematics. Interesting not only for its rich theory, it also has many wide-reaching applications, for instance to the iterative solution of large linear systems, in the theory of partial differential equations, and even in image restoration; see \cite{Deu92} for a survey. \subsection{What is the method of alternating projections?} We begin by defining what we mean by the method of alternating projections. Let $H$ be a real or complex Hilbert space, $J\geq2$ an integer, and suppose that $M_1,\dots ,M_J$ are closed subspaces of $H$. For each $j \in \{1,\dots ,J\}$, let $P_j$ be the orthogonal projection onto the closed subspace $M_j$, and let $(j_n)_{n\geq1}$ be a sequence taking values in $\{1,\dots,J\}$. We define the sequence $(x_n)_{n\geq 0}$ by choosing an element $x_0 \in H$, and letting \begin{equation*} x_n = P_{j_n}x_{n-1}, \quad n\geq 1. \end{equation*} It is natural to ask under what conditions this sequence $(x_n)$ converges. This is often referred to as the method of alternating projections, and will be the focus of this dissertation. In order to motivate why we might expect $(x_n)$ to converge, it is useful to look at a simple example. Let $H = \mathbb{R}^2$, and consider the two closed subspaces \begin{align*} M_1&= \{(x,y) \in \mathbb{R}^2 : x=y\}, \\ M_2 &= \{(x,y) \in \mathbb{R}^2 : y=0\}. \end{align*} We investigate what happens when we project $x_0 \in H$ repeatedly between $M_1$ and $M_2$ (see Figure \ref{alternating projections}). \begin{figure}[H] \begin{center} \includegraphics[width=110mm]{figure2.pdf} \caption{The method of alternating projections for two subspaces of $\mathbb{R}^2$} \label{alternating projections} \end{center} \end{figure} We see that the resulting sequence converges to $(0,0)$: the projection of $x_0$ onto $M_1 \cap M_2$. 
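This two-subspace example is easy to simulate directly. In the following sketch (function names are illustrative), the projection formulas follow from elementary geometry: projecting onto $M_1=\{x=y\}$ averages the coordinates, and projecting onto $M_2=\{y=0\}$ zeroes the second coordinate.

```python
import math

def proj_M1(v):
    # Orthogonal projection onto the line x = y.
    m = (v[0] + v[1]) / 2
    return (m, m)

def proj_M2(v):
    # Orthogonal projection onto the x-axis.
    return (v[0], 0.0)

x = (1.0, 3.0)  # an arbitrary starting vector x_0
for _ in range(60):
    x = proj_M2(proj_M1(x))  # one round of alternating projections

# M1 ∩ M2 = {(0, 0)}, so the iterates approach the origin.
print(math.hypot(*x))
```

Each round halves the first coordinate, so the norm of the iterates decays geometrically to zero, in line with the figure.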
More generally, we will see in Section \ref{convergence in norm} that if the sequence $(j_n)$ is taken to be periodic, then $(x_n)$ always converges in norm to the projection of $x_0$ onto $\bigcap_{j=1}^J M_j$. However, as we will observe in Section \ref{failure strong convergence}, we may find a sequence $(j_n)$ for which $(x_n)$ does not converge in norm. In this dissertation we work through the major results relating to the convergence of $(x_n)$, sometimes offering new or more direct proofs than those in the literature today, and including important details where they have been omitted. \subsection{A brief history} \label{a brief history} The first major result relating to the method of alternating projections is due to von Neumann \cite{von49}. In 1949, he proved that when we have two projections onto closed subspaces of a Hilbert space (that is, $J=2$), then $(x_n)$ converges in norm to the projection of $x_0$ onto the intersection of the two subspaces. The next significant advance happened in 1960, when Pr\'ager proved that the sequence $(x_n)$ converges in norm whenever $H$ is finite-dimensional \cite{Pra60}. Shortly after, in 1962, Halperin generalised von Neumann's theorem by proving that when the sequence $(j_n)$ is periodic, $(x_n)$ converges in norm \cite{Hal62}. \\ \\ In 1965, Amemiya and Ando proved a convergence result about products (compositions) of contractions, of which a corollary is that our sequence $(x_n)$ always converges weakly \cite{AmAn65}. This subsumes the result by Pr\'ager, since in finite-dimensional Hilbert spaces, the weak topology and norm topology coincide, and so weak convergence is equivalent to convergence in norm. There had been no further convergence results until 1995, when Sakai improved on Halperin's theorem. He proved that when the sequence $(j_n)$ is so-called \textit{quasiperiodic}, we have convergence in norm \cite{Sak95}. 
Based on these positive results, it is natural to ask whether $(x_n)$ always converges in norm without restrictions on $H$ or the sequence $(j_n)$. Indeed, Amemiya and Ando posed this question in their paper \cite{AmAn65}. It was only in 2012 when Paszkiewicz \cite{Pas12} proved that for an infinite-dimensional Hilbert space, we may find five subspaces, a vector $x_0 \in H$, and a sequence $(j_n)$ such that $(x_n)$ does not converge in norm. In 2014, Kopeck\'a and M\"uller improved Paszkiewicz's construction from five subspaces to three \cite{KoMu14}. Indeed, this is the best we can do, since for the case of two subspaces, we are guaranteed convergence in norm by von Neumann's theorem \cite{von49}. Kopeck\'a and Paszkiewicz refined this construction in 2017. They went on to show that for any infinite-dimensional Hilbert space $H$, we may find three subspaces such that for any non-zero $x_0 \in H$, there is a sequence $(j_n)$ for which $(x_n)$ does not converge in norm \cite{KoPa17}. \begin{figure}[H] \begin{center} \includegraphics[width=136mm]{figure3.pdf} \caption{A history of the method of alternating projections} \end{center} \end{figure} \subsection{Notation} \label{notation} Throughout this dissertation, $H$ will be a (real or complex) Hilbert space, $J\geq2$ an integer, and $M_1,\dots, M_J$ a family of closed subspaces of $H$ with intersection $M=\bigcap_{j=1}^{J} M_j$. Given a closed subspace $Y$ of $H$, we write $P_Y$ for the orthogonal projection onto $Y$, and for ease of notation, we write $P_1,\dots,P_J$ for the orthogonal projections onto $M_1, \dots ,M_J$. Throughout, $(j_n)_{n\geq1}$ will be a sequence taking values in $\{1,\dots,J\}$. We define the sequence $(x_n)_{n\geq 0}$ by choosing a vector $x_0 \in H$, and letting \begin{equation*} x_n = P_{j_n}x_{n-1}, \quad n\geq1. \end{equation*} This will be the general setting for this dissertation. In particular, when we mention $(j_n)$ or $(x_n)$, we are referring to the sequences described above. 
We will write $B(H)$ for the space of bounded linear operators on $H$, and $B_H$ for the closed unit ball in $H$. Additionally, we write $\mathbb{F}$ for the (real or complex) scalar field of $H$. Other ad hoc notation will be introduced as needed. \newpage \section{Preliminaries} We begin by recalling what it means to project orthogonally onto a closed subspace. It is a standard fact that for a closed subspace $Y$ of a Hilbert space $H$, we have $H=Y \oplus Y^\perp$. Hence each $x \in H$ can be written uniquely as $x=y+z$, where $y \in Y$ and $z \in Y^\perp$. The orthogonal projection $P_Y \colon H \to H$ onto $Y$ is given by $P_Y(x) = y$. In fact, it is simple to see that $P_Y(x)$ is the unique closest point in $Y$ to $x$. Before proving the important results about the method of alternating projections, it will help to introduce some elementary facts, to be referred to throughout this dissertation. The proofs are simple, but we include them to make the dissertation self-contained. The following lemma collects these facts, which are mainly about projections. \begin{lemma} \label{foundational} Let $x \in H$, $Y$ be a closed subspace of $H$, and $P$ the orthogonal projection onto $Y$. Then \begin{enumerate}[label=(\alph*)] \item $P$ is linear, idempotent ($P^2 = P$), and self-adjoint ($P=P^*$). \label{preliminary} \item For vectors $u,v \in H$ with $u\perp v$, we have $\|u+v\|^2 = \|u\|^2 + \|v\|^2$. \label{pythagoras} \item $\|x-Px\|^2 = \|x\|^2 - \|Px\|^2$. \label{projection equality} \item $\|Px\| \leq \|x\|$ with equality if and only if $Px=x$. \label{projection norm} \item $\|P\|=1$ if $Y\neq\{0\}$, and $\|P\|=0$ if $Y=\{0\}$. \label{projection norm 1} \item For any $x \in H$ and $y\in Y$, $\|x-Px\| \leq \|x-y\|$ with equality if and only if $Px=y$.
\label{projection is onto closest point} \item If $U$ and $V$ are closed subspaces of $H$ with $U \perp V$, then $U+V$ is closed.\label{sum projections closed} \item If $U$ and $V$ are closed subspaces of $H$ with $U \perp V$, then $P_U+P_V = P_{U+V}$. \label{adding projections} \end{enumerate} \end{lemma} \begin{proof} \ref{preliminary} For $i \in \{1,2\}$, let $x_i \in H$. Since $Y$ is a closed subspace of $H$, we have $H = Y \oplus Y^\perp$, so there are unique $y_i \in Y$ and $z_i \in Y^\perp$ such that $x_i = y_i + z_i$. For $\lambda \in \mathbb{F}$, we have \[P(x_1+\lambda x_2) = P(y_1 + \lambda y_2 + z_1 + \lambda z_2) = y_1 + \lambda y_2 = P(x_1) + \lambda P(x_2). \] Hence $P$ is linear. We also note that \[P^2(x_1) = P\big{(}P(y_1 + z_1)\big{)} = P(y_1) = y_1,\] so $P$ is idempotent. Finally, we have \begin{equation*} \langle Px_1,x_2 \rangle = \langle y_1,y_2 + z_2 \rangle = \langle y_1,y_2 \rangle = \langle y_1 + z_1,y_2 \rangle = \langle x_1,Px_2 \rangle. \end{equation*} Therefore $P$ is self-adjoint. \\ \\ \ref{pythagoras} Since $u \perp v$, we have $\langle u,v \rangle= 0 = \langle v,u \rangle$, and so \[\|u+v\|^2 = \langle u + v, u + v \rangle = \langle u,u \rangle + \langle v,v \rangle = \|u\|^2 + \|v\|^2. \] \ref{projection equality} The result follows by applying \ref{pythagoras} with $u=Px$ and $v=x-Px$. \\ \\ \ref{projection norm} Applying \ref{projection equality}, we have that \[ \|Px\|^2 = \|x\|^2 - \|x - Px\|^2 \leq \|x\|^2,\] with equality if and only if $Px=x$. \\ \\ \ref{projection norm 1} The result follows immediately from \ref{projection norm}. \\ \\ \ref{projection is onto closest point} Since $H=Y \oplus Y^\perp$, then given $x\in H$, there are unique $\widetilde{y}\in Y$ and $\widetilde{z} \in Y^\perp$ such that $x=\widetilde{y}+\widetilde{z}$. 
So for any $y \in Y$, we have by \ref{pythagoras} that \begin{equation*} \|x-y\|^2 = \|(\widetilde{y}-y) + \widetilde{z}\|^2 = \|\widetilde{y}-y\|^2 + \|\widetilde{z}\|^2 \geq \|\widetilde{z}\|^2 = \|x - \widetilde{y}\|^2 = \|x - Px\|^2, \end{equation*} with equality if and only if $Px=y$. \\ \\ \ref{sum projections closed} Let $(x_n)$ be a Cauchy sequence in $U+V$. We write each $x_n$ as $u_n + v_n$, where $u_n \in U$, and $v_n \in V$. Since $U\perp V$, we have by \ref{pythagoras} that \[\|x_n - x_m\|^2 = \|(u_n-u_m) + (v_n - v_m)\|^2 =\|u_n-u_m\|^2 + \|v_n - v_m\|^2, \quad n,m \in \mathbb{N}.\] In particular, $\|u_n - u_m\| \leq \|x_n - x_m\|$ and $\|v_n - v_m\| \leq \|x_n - x_m\|$, so that $(u_n)$ and $(v_n)$ are both Cauchy sequences. Since $U$ and $V$ are closed subspaces of the Hilbert space $H$, they must be complete. Therefore $(u_n)$ and $(v_n)$ converge to some limits $u$ and $v$ respectively, and so $(x_n) = (u_n + v_n)$ converges to $u+v$. Hence $U+V$ is a complete subspace of $H$, and therefore closed. \\ \\ \ref{adding projections} By \ref{sum projections closed}, we know that $U+V$ is closed, and so $P_{U+V}$ is well defined. Since $U \perp V$, and since projections are self-adjoint, we have \[0 = \langle P_Vx,P_Uy \rangle = \langle P_UP_Vx,y \rangle = \langle x,P_VP_Uy \rangle, \quad x,y \in H.\] Hence $P_UP_V = 0 = P_VP_U$. Therefore, since $P_U$ and $P_V$ are idempotent, \[(P_U+P_V)^2 = P_U^2 + P_V^2 = P_U + P_V,\] and so $P_U + P_V$ is indeed a projection. We now note that for $x \in U$, $y\in V$, we have \begin{equation*} (P_U + P_V)(x+y) = P_Ux + P_Vx + P_Uy + P_Vy = x+y, \end{equation*} and for $z \in (U+V)^\perp$, $w \in H$, we have \begin{equation*} \langle (P_U+P_V)z,w \rangle = \langle z,(P_U+P_V)w \rangle = 0, \end{equation*} since $(P_U+P_V)w \in U+V$ while $z \in (U+V)^\perp$. Hence $(P_U + P_V)$ is the identity on $U+V$, and zero on $(U+V)^\perp$, and so $P_U + P_V = P_{U+V}$ as claimed. \end{proof} The following lemma is still elementary, but the results are more specific.
They will be particularly useful in proving Theorem \ref{von neumann} (von Neumann) \cite{von49} and Theorem \ref{halperin} (Halperin) \cite{Hal62}. \begin{lemma} \label{foundational 2} Let $M_1,\dots ,M_J$ be a finite family of closed subspaces of a Hilbert space $H$, with intersection $M = \bigcap_{j=1}^J M_j $. For each $j \in \{1,\dots ,J\}$, let $P_j$ be the orthogonal projection onto the closed subspace $M_j$, and let $P_M$ be the orthogonal projection onto M. Let $T = P_J\dots P_1$. Then \begin{enumerate}[label=(\alph*)] \item $\bigcap_{k=1}^j \ker(I-P_k) = \ker(I-P_j\dots P_1)$ for $j \in \{1,\dots ,J\}$. \label{useful for amemiya} \item $Tx=x$ if and only if $x \in M$. \label{Tx=x} \item $T^*x=x$ if and only if $x \in M$. \label{T*x=x} \item Let $A$ be a contraction on $H$ (a bounded operator with operator norm at most $1$). Suppose that $\|A^{n+1}x - A^nx\| \to 0$ as $n\to \infty$ for every $x \in H$. Then $A^ny \to 0$ as $n\to \infty$ for every $y \in \overline{\ran(I-A)}$. \label{using kakutani} \item If $V$ is a subspace of $H$, then $H=\overline{V} \oplus V^\perp$. \label{direct sum subspace} \item $(\ran(I-T))^\perp = \ker(I-T^*)$. \label{ran ker equality} \item $H=\overline{\ran(I-T)} \oplus \ker(I-T^*)$. \label{direct sum ran ker} \end{enumerate} \end{lemma} \begin{proof} \ref{useful for amemiya} If $x \in \bigcap_{k=1}^j \ker(I-P_k)$, then $P_kx=x$ for each $k \in \{1,\dots ,j\}$, and hence $P_j\dots P_1x = x$. Conversely, if $x\in \ker(I-P_j\dots P_1)$, then \begin{equation*} \|x\| = \|P_j\dots P_1x\| \leq \|P_{j-1}\dots P_1x\| \leq \dots \leq \|P_1x\| \leq \|x\|. \end{equation*} Hence $\|P_1x\| = \|x\|$, and so by Lemma \ref{foundational}\ref{projection norm}, we have $P_1x=x$. But also $\|P_2P_1x\| = \|x\|$, so that $\|P_2x\| = \|x\|$. Lemma \ref{foundational}\ref{projection norm} then gives that $P_2x=x$. In this way, a simple induction shows that for each $k \in \{1,\dots ,j\}$, $P_kx=x$. 
\\ \\ \ref{Tx=x} Applying \ref{useful for amemiya} with $j=J$, we have \begin{equation*} Tx=x \iff x\in \ker(I-T) \iff x \in \bigcap_{k=1}^J \ker(I-P_k) \iff x \in M. \end{equation*} \ref{T*x=x} An identical argument to \ref{useful for amemiya}, but with each $P_k$ replaced by $P_{j+1-k}$, gives \[\bigcap_{k=1}^j \ker(I-P_k) = \ker(I-P_1\dots P_j), \quad j \in \{1,\dots ,J\}.\] We apply this with $j=J$, noting that $T^* = P_1 \dots P_J$, to get \begin{equation*} T^*x=x \iff x\in \ker(I-T^*) \iff x \in \bigcap_{k=1}^J \ker(I-P_k) \iff x \in M. \end{equation*} \ref{using kakutani} If $x\in \ran(I-A)$, then $x=(I-A)w$ for some $w \in H$. So, by assumption, \begin{equation*} \|A^nx\| = \|A^n(I-A)w\| = \|A^nw - A^{n+1}w\| \to 0 \textnormal{ as } n \to \infty. \end{equation*} Now suppose $y \in \overline{\ran(I-A)}$. Let $\varepsilon > 0$. We can find $x \in \ran(I-A)$ such that $\|x-y\| < \varepsilon$. Then \begin{equation*} \|A^ny\| \leq \|A^nx\| + \|A^n(x-y)\| \leq \|A^nx\| + \|x-y\| < \|A^nx\| + \varepsilon. \end{equation*} Hence $\limsup_{n\to\infty} \|A^ny\| \leq \varepsilon$, and since $\varepsilon$ was arbitrary, we have $\|A^ny\| \to 0$ as $n\to \infty$. \\ \\ \ref{direct sum subspace} Since $\overline{V}$ is closed, we have $H = \overline{V} \oplus (\overline{V})^\perp$. So we are done if we can show that $(\overline{V})^\perp = V^\perp$. Since $V\subseteq \overline{V}$, it follows that $(\overline{V})^\perp \subseteq (V)^\perp$. We now show that the other inclusion holds. Let $x \in (V)^\perp$, so that $\langle x,v\rangle = 0$ for all $v \in V$, and let $y \in \overline{V}$, so that there exists a sequence $y_n \in V$ converging in norm to $y$. By continuity of inner products (with one argument fixed), we have \[\langle x,y \rangle = \langle x,\lim_{n\to\infty} y_n \rangle = \lim_{n\to\infty} \langle x,y_n \rangle = \lim_{n\to\infty} 0 = 0.\] Hence $x\in (\overline{V})^\perp$, and so $(V)^\perp \subseteq (\overline{V})^\perp$.
\\ \\ \ref{ran ker equality} Noting that $(I-T)^* = I-T^*$, we have \begin{equation*} \begin{aligned} x\in (\ran(I-T))^\perp &\iff \langle x,y \rangle = 0 \textnormal{ for all } y\in \ran(I-T) \\ &\iff \langle x,(I-T)w \rangle = 0 \textnormal{ for all } w\in H \\ &\iff \langle (I-T^*)x,w \rangle = 0 \textnormal{ for all } w \in H \\ &\iff (I-T^*)x = 0 \\ &\iff x\in \ker(I-T^*). \end{aligned} \end{equation*} Hence $(\ran(I-T))^\perp = \ker(I-T^*)$. \\ \\ \ref{direct sum ran ker} Applying \ref{direct sum subspace} and \ref{ran ker equality}, we have \begin{equation} \nonumber H=\overline{\ran(I-T)} \oplus (\ran(I-T))^\perp = \overline{\ran(I-T)} \oplus \ker(I-T^*), \end{equation} as required. \end{proof} \newpage \section{Motivation} This dissertation focuses on proving the convergence results discussed in Section \ref{a brief history}. However, it is important to understand how these results interact with other areas of mathematics. We present three applications of the method of alternating projections beyond functional analysis. The first is a playful example in which we make use of von Neumann's theorem in the unexpected context of dividing a string into equal thirds. The other two highlight its use in finding iterative solutions to systems of linear equations and in the theory of partial differential equations. A key theme throughout this section is that the usefulness of the method of alternating projections stems from it often being easier to compute projections onto a single closed subspace, rather than directly computing the projection onto the intersection of closed subspaces. It is also worth remarking that there are many more applications beyond the three we present in this section; see \cite{Deu92} for a survey. \subsection{Dividing a string into equal thirds} We begin with a charming demonstration of how von Neumann's theorem (to be proved later as Theorem \ref{von neumann}) can be applied to divide a string into equal thirds. 
This is due to Burkholder, and presented in the paper ``Stochastic Alternating Projections'' \cite{DiKhSa10}. We take a string and attach two paperclips anywhere along it, calling these the `left' and `right' paperclips. We will present an iterative process, so that the positions of the paperclips will converge to one third and two thirds of the total length of the string. At any given stage, we apply the following steps. \begin{enumerate}[label=(\alph*)] \item We fold over the right end of the string so that it touches the left paperclip, and slide the right paperclip until it reaches the loop. We then unfold the string. \begin{figure}[H] \begin{center} \includegraphics[width=130mm]{figure4.pdf} \caption{Step (a) of the iteration} \label{step a} \end{center} \end{figure} \item This time, we fold over the left end of the string so that it touches the right paperclip, and slide the left paperclip until it reaches the loop. We then unfold the string. \begin{figure}[H] \begin{center} \includegraphics[width=130mm]{figure5.pdf} \caption{Step (b) of the iteration} \label{step b} \end{center} \end{figure} \end{enumerate} Applying (a) followed by (b) makes up one iteration. \begin{claim*} Repeating these iterations, the positions of the paperclips converge to one third and two thirds of the total length of the string. \end{claim*} Although there are simpler ways to prove this, it is interesting to see how von Neumann's theorem may be applied in this unexpected context to give a slick proof. \begin{proof} Suppose the three sections of the string have lengths $x$, $y$ and $z$ respectively. Then, as shown in Figures \ref{step a} and \ref{step b}, an application of (a) leaves the three sections with lengths $x$, $(y+z)/2$, $(y+z)/2$, and an application of (b) leaves the sections with lengths $(x+y)/2$, $(x+y)/2$, and $z$.
Hence, applications of (a) and (b) correspond to the projections \[ P_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & 1/2 \\ 0 & 1/2 & 1/2 \end{pmatrix}, \quad P_2 = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix},\] applied to the vector $(x,y,z)^T$. We work in the Hilbert space $H = \mathbb{R}^3$, and let $M_1$ and $M_2$ be the closed subspaces $P_1(H)$ and $P_2(H)$ respectively. Since $P_1$ and $P_2$ are idempotent and self-adjoint, they are in fact orthogonal projections. Let $w \in H$. For $j \in \{1,2\}$, we have \[ w \in M_j \iff P_jw = w, \] and so \[ w \in M_1\cap M_2 \iff P_1w = P_2w = w.\] It is a simple check to see that $P_1w = P_2w = w$ if and only if $w = (a,a,a)^T$ for some $a \in \mathbb{R}$, and so, \[M_1\cap M_2 = \big{\{} (a,a,a)^T : a \in \mathbb{R} \big{\}}. \] Hence, we have \[P_{M_1\cap M_2} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \frac{x+y+z}{3} \\ \frac{x+y+z}{3} \\ \frac{x+y+z}{3} \end{pmatrix}. \] By von Neumann's theorem, which states that the sequence of alternating projections onto two closed subspaces converges in norm to the projection onto the intersection of these subspaces, we have \[ \Bigg{\|}(P_2P_1)^n\begin{pmatrix} x \\ y \\ z \end{pmatrix} - \begin{pmatrix} \frac{x+y+z}{3} \\ \frac{x+y+z}{3} \\ \frac{x+y+z}{3} \end{pmatrix} \Bigg{\|} \to 0 \textnormal{ as $n \to \infty$}. \] Hence the positions of the paperclips converge to one third and two thirds of the total length of the string, as claimed. \end{proof} We may also be interested in how quickly the positions of the paperclips converge to one third and two thirds of the total length of the string. It turns out (after some simple calculations, which we omit here) that for a string of length $c$, the deviation of the left paperclip from $c/3$ and the right from $2c/3$, after $n$ iterations, is at most $\frac{2c}{3} \cdot 4^{-n}$ and $\frac{c}{3} \cdot 4^{1-n}$ respectively.
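These rates are easy to check numerically. The following sketch (the starting section lengths, summing to $1$, are an arbitrary choice) iterates step (a) followed by step (b) using the matrices $P_1$ and $P_2$ above, and records the maximal deviation of the section lengths from $1/3$.

```python
import numpy as np

P1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.5, 0.5]])
P2 = np.array([[0.5, 0.5, 0.0],
               [0.5, 0.5, 0.0],
               [0.0, 0.0, 1.0]])

v = np.array([0.6, 0.3, 0.1])        # arbitrary section lengths (total length c = 1)
target = np.full(3, 1.0 / 3.0)

errors = []
for _ in range(12):
    v = P2 @ (P1 @ v)                # one iteration: step (a), then step (b)
    errors.append(np.max(np.abs(v - target)))
```

After the first iteration, the maximal deviation decreases by a factor of exactly $4$ per iteration, consistent with the stated bounds.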
To put this into perspective, for a string $1$ metre in length, only $3$ iterations are needed for an error of less than $1.1$ centimetres for the left paperclip. \subsection{Solving systems of linear equations} As mentioned in Section \ref{a brief history}, Halperin proved that the sequence obtained by periodically projecting an element of a Hilbert space orthogonally onto a collection of closed subspaces converges in norm to the projection of the element onto the intersection of these closed subspaces \cite{Hal62}. Inspired by Deutsch \cite{Deu01}, we demonstrate how Halperin's theorem (to be proved later as Theorem \ref{halperin}) can be used to find an iterative solution to a system of linear equations. \subsubsection{The setup} Let $H$ be a real or complex Hilbert space, $y_1,\dots,y_J \in H \setminus \{0\}$, and $c_1,\dots,c_J \in \mathbb{F}$. We want to find an element $x \in H$ satisfying the equations \begin{equation} \label{satisfy equation} \langle x,y_i \rangle = c_i, \quad i\in \{1,\dots, J\}. \end{equation} We consider the hyperplanes \begin{equation*} V_i = \{y \in H \mid \langle y,y_i \rangle = c_i \}, \quad i\in \{1,\dots, J\}, \end{equation*} and note that they are closed. We set \begin{equation*} V = \bigcap_{i=1}^J V_i. \end{equation*} Then $x$ satisfies (\ref{satisfy equation}) if and only if $x \in V$. Throughout this section, we assume a solution exists, so that $V \neq \emptyset$. At this stage, we would like to take periodic projections of a vector $x_0 \in H$ onto the $J$ hyperplanes, and show it converges in norm to an element of $V$ (i.e. to a solution of (\ref{satisfy equation})). It seems natural to appeal to Halperin's theorem to obtain such a result. However, the hyperplanes $V_i$ may not be subspaces since they do not necessarily contain the origin. \subsubsection{An interlude about affine spaces} We can resolve this problem through the notion of affine spaces.
There are many equivalent definitions of affine spaces, but we will use the one which is most natural in this context. We say that $U \subset H$ is an \textit{affine space} if $U=L+u$ for some (unique) subspace $L$ of $H$, and (any) $u \in U$. We may define a projection of $z \in H$ onto an affine space $U$ by \[P_U(z) = P_L(z - u) + u.\] This is well defined since given $u,\tilde{u} \in U$, there is some $l \in L$ such that $u = l+\tilde{u}$, and so \[P_L(u-\tilde{u}) = P_L(l) = l = u - \tilde{u}.\] Therefore by linearity of $P_L$, we have that for any $z \in H$, \begin{equation*} P_L(z-u) + u = P_L(z - \tilde{u}) + \tilde{u}. \end{equation*} For each $i \in \{1,\dots,J\}$, we let $M_i$ be the subspace given by \begin{equation*} M_i = \{y \in H \mid \langle y,y_i \rangle = 0 \}. \end{equation*} Then for any $i \in \{1,\dots,J\}$, we have \begin{equation*} V_i = M_i + v_i, \quad v_i \in V_i, \end{equation*} and hence each $V_i$ is an affine space. Let $M = \bigcap_{i=1}^J M_i$. It is a simple check that we have \begin{equation*} V = M + v, \quad v \in V, \end{equation*} and so $V$ is also an affine space. Therefore for any $v \in V$, we have \begin{equation*} \begin{aligned} &V_i = M_i + v, \\ & V = M + v. \end{aligned} \end{equation*} \subsubsection{Finding an iterative solution} We are now in a position to be able to make use of Halperin's theorem. We begin by choosing a starting vector $x_0 \in H$, and fixing some $v \in V$. Then for any $i,j \in \{1,\dots,J\}$, we have \begin{equation} \label{composition of v} \begin{aligned} P_{V_j}P_{V_i}x_0 &= P_{V_j}(P_{M_i}(x_0-v) + v) = P_{M_j}P_{M_i}(x_0-v) + v. \\ \end{aligned} \end{equation} Therefore, letting $T = P_{V_J}\dots P_{V_1}$ and applying (\ref{composition of v}) repeatedly gives \[T^n x_0 = v + (P_{M_J}\dots P_{M_1})^n (x_0-v), \quad n\in \mathbb{N}.\] Hence by Halperin's Theorem, \begin{equation*} \|T^nx_0 - P_Vx_0\| = \|(P_{M_J}\dots P_{M_1})^n(x_0-v) - P_M(x_0 - v)\| \to 0 \textnormal{ as } n\to \infty.
\end{equation*} In particular, since $P_Vx_0 \in V$, we see that $T^n x_0$ converges in norm to a solution of (\ref{satisfy equation}). By the Hilbert projection theorem, there is a unique $\tilde{v} \in V$ such that $\|x_0-\tilde{v}\|$ is minimised over $V$. We show that this unique $\tilde{v}$ is in fact $P_Vx_0$. For any $w \in V$, Lemma \ref{foundational}\ref{projection is onto closest point} gives \begin{align*} \|x_0-P_Vx_0\| &= \|(x_0 - w) - P_M(x_0-w)\| \leq \|(x_0-w) \|. \end{align*} Since $w \in V$ was arbitrary, this unique $\tilde{v}$ is indeed $P_Vx_0$. Hence, setting $x_0=0$, we have that $T^n 0$ converges in norm to the unique minimal norm solution of (\ref{satisfy equation}). It is a simple check that for each $z \in H$, \begin{equation} \label{project hyperplane} P_{V_i}(z) = z - \frac{y_i\big{(}\langle z,y_i \rangle - c_i\big{)}}{\|y_i\|^2}, \quad i \in \{1,\dots, J\}. \end{equation} Thus we have a formula to easily calculate $P_{V_i}(z)$ for any $z \in H$. A special case of particular interest is when $H = \mathbb{R}^N$ ($N \in \mathbb{N}$), where the inner product is taken to be the dot product. Writing $x = (x_1,\dots, x_N)$ and $y_i = (a_{i1},\dots, a_{iN})$ for $i \in \{1,\dots, J\}$, equation (\ref{satisfy equation}) may be rewritten as \begin{equation*} \sum_{j=1}^N a_{ij}x_j = c_i, \quad i \in \{1,\dots, J\}, \end{equation*} a system of linear equations. Assuming a solution exists, we have that $T^n x_0$ converges in norm to the unique solution closest to our initial `guess' $x_0$. This is called the Kaczmarz Method, first suggested in 1937 \cite{Kac37} (see \cite{Kac93} for an English translation). Its practical value stems from our being able to easily project onto a hyperplane by using (\ref{project hyperplane}). It has a computational advantage over other known methods of solving systems of linear equations if the system is sparse.
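To make the iteration concrete, here is a minimal sketch of the Kaczmarz method for a small system in $\mathbb{R}^2$; the rows and right-hand sides are our own hypothetical choices, and the projections are computed with the hyperplane formula (\ref{project hyperplane}).

```python
import numpy as np

# Hypothetical system <x, y_i> = c_i in R^2: rows y_1 = (1, 0), y_2 = (1, 1)
# and right-hand sides c_1 = 1, c_2 = 2, i.e. x_1 = 1 and x_1 + x_2 = 2,
# whose unique solution is (1, 1).
rows = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
c = [1.0, 2.0]

def project_hyperplane(z, y, ci):
    """Project z onto the hyperplane {w : <w, y> = ci}."""
    return z - y * (np.dot(z, y) - ci) / np.dot(y, y)

x = np.zeros(2)          # initial guess x0 = 0
for _ in range(40):      # periodic sweeps through the rows
    for y, ci in zip(rows, c):
        x = project_hyperplane(x, y, ci)
```

Starting from $x_0 = 0$, the sweeps converge to the unique solution $(1,1)$.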
In particular, when the matrix $A=[a_{ij}]$ is sparse, the computation of $P_{V_i}(z)$ is very fast \cite{Deu92}. \subsection{Solving PDEs on composite domains} The final application we present is known as the Schwarz alternating method. It allows us to find an iterative solution of an elliptic partial differential equation on a region made up of two overlapping regions, in which the partial differential equation is easy to solve. We present this method for the Dirichlet problem; further examples can be found in \cite{Lio88}. We consider the Sobolev space $H = H_0^1(\Omega)$, where the domain $\Omega = \Omega_1 \cup \Omega_2 \subset \mathbb{R}^2$ is a union of two sufficiently smooth subdomains (for example, domains whose boundaries are locally graphs of Lipschitz continuous functions). We view $H$ as a Hilbert space with inner product given by \begin{equation*} \langle u,v \rangle_H = \langle \nabla u, \nabla v \rangle_{L^2(\Omega)} = \int_\Omega \nabla u \cdot \overline{ \nabla v} \,dx. \end{equation*} Let $\Gamma=\partial \Omega$, and for $k\in\{1,2\}$, let $\Gamma_k = \partial \Omega_k \cap \partial \Omega$ and $\gamma_k = \partial \Omega_k \setminus \partial \Omega$. \begin{figure}[H] \begin{center} \includegraphics[width=90.5mm]{figure6.pdf} \caption{An illustration of the domain $\Omega = \Omega_1 \cup \Omega_2$} \label{Dirichlet} \end{center} \end{figure} For $f \in L^2(\Omega)$, we would like to find a weak solution to the Dirichlet problem \begin{equation} \label{dirichlet union} \left\{\begin{alignedat}{2} -\Delta &u = f && \quad \text{in}\ \Omega, \\ &u=0 && \quad \text{on}\ \Gamma. \\ \end{alignedat}\right. \end{equation} Finding a weak solution means finding $u \in H$ such that we have \begin{equation*} \langle f,v \rangle_{L^2(\Omega)} = \langle\nabla u, \nabla v \rangle_{L^2(\Omega)}, \quad v\in H.
\end{equation*} The motivation behind the definition of a weak solution is that for test functions $u,v \in C_c^\infty(\Omega)$, with $u$ satisfying (\ref{dirichlet union}), integration by parts gives \begin{equation*} \begin{aligned} \langle f,v \rangle_{L^2(\Omega)} = \int_\Omega f\overline{v} \,dx = \int_\Omega -\Delta u \cdot \overline{v} \,dx = \int_\Omega \nabla u \cdot \overline{\nabla v} \,dx = \langle\nabla u, \nabla v \rangle_{L^2(\Omega)}. \end{aligned} \end{equation*} Noting that $v \mapsto \langle f,v \rangle_{L^2(\Omega)}$ is a (conjugate) linear bounded functional on $H$, the Riesz representation theorem gives that there exists a unique $u \in H$ such that \begin{equation*} \langle f,v \rangle_{L^2(\Omega)} = \langle\nabla u, \nabla v \rangle_{L^2(\Omega)}, \quad v \in H. \end{equation*} That is to say, there is a unique weak solution of (\ref{dirichlet union}). In what follows, we will use von Neumann's theorem \cite{von49} to find a sequence converging in norm to the weak solution of (\ref{dirichlet union}). We begin by fixing $u_0 \in H$. We obtain $u_1 \in H$ by first finding a weak solution of \begin{equation} \label{dirichlet single} \left\{\begin{alignedat}{2} -\Delta &u_1 = f && \quad \text{in}\ \Omega_1, \\ &u_1=0 && \quad \text{on}\ \Gamma_1, \\ &u_1=u_0 && \quad \text{on}\ \gamma_1, \\ \end{alignedat}\right. \end{equation} and then extending $u_1$ from $\Omega_1$ to $\Omega$ by letting $u_1=u_0$ on $\Omega_2 \setminus \Omega_1$. We note that finding a weak solution of (\ref{dirichlet single}) means finding $u_1 \in H^1(\Omega_1)$ with $u_1 = 0$ on $\Gamma_1$, and $u_1 = u_0$ on $\gamma_1$, such that \begin{equation*} \langle f,v \rangle_{L^2(\Omega_1)} = \langle\nabla u_1, \nabla v \rangle_{L^2(\Omega_1)}, \quad v \in H_0^1(\Omega_1). \end{equation*} The Riesz representation theorem again gives that there is a unique such $u_1$. We then define $u_2 \in H$ by solving an analogous problem on $\Omega_2$, with $u_0$ replaced by $u_1$. 
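To make the iteration concrete, here is a sketch of the Schwarz alternating method for a one-dimensional analogue of (\ref{dirichlet union}): $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$, using the overlapping subdomains $(0, 0.6)$ and $(0.4, 1)$ and a standard finite difference discretisation. The subdomains, grid and right-hand side are our own choices for illustration.

```python
import numpy as np

# 1D analogue: -u'' = f on (0, 1), u(0) = u(1) = 0, with overlapping
# subdomains Omega_1 = (0, 0.6) and Omega_2 = (0.4, 1).
N = 60
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.ones(N + 1)               # f = 1, so the exact solution is u(x) = x(1 - x)/2
exact = x * (1 - x) / 2

def solve_on(u, a, b):
    """Solve -u'' = f on nodes a+1, ..., b-1, with u[a] and u[b] as boundary data."""
    n = b - a - 1
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f[a + 1:b].copy()
    rhs[0] += u[a] / h**2        # boundary contributions moved to the right-hand side
    rhs[-1] += u[b] / h**2
    u[a + 1:b] = np.linalg.solve(A, rhs)

k1, k2 = int(0.6 * N), int(0.4 * N)   # interface nodes at x = 0.6 and x = 0.4
u = np.zeros(N + 1)                   # initial guess u0 = 0
for _ in range(30):
    solve_on(u, 0, k1)                # solve on Omega_1 with data from the current iterate
    solve_on(u, k2, N)                # solve on Omega_2 with data from the current iterate

err = np.max(np.abs(u - exact))       # the three-point scheme is nodally exact for quadratics
```

Since the exact solution here is quadratic, the discretisation introduces no nodal error, so the remaining error is due to the alternating iteration itself, which contracts geometrically.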
Continuing in this way, we generate a sequence $(u_n)_{n\geq0}$ in $H$. We will show that $u_n$ converges in norm to $u$, the unique weak solution. For $k\in\{1,2\}$, let $Y_k = H_0^1(\Omega_k)$, viewed as a closed subspace of $H$ after extending functions defined on $\Omega_k$ by zero to all of $\Omega$. For $k \in \{1,2\}$, we let $M_k = Y_k^\perp$, and $P_k$ be the orthogonal projection onto $M_k$. We also write $M = M_1 \cap M_2$. Since $u$ and $u_1$ are both weak solutions of the Dirichlet problem on $\Omega_1$, we have that for every $v \in Y_1$, \begin{equation*} \begin{aligned} \langle u-u_1,v \rangle_{H}&= \langle u-u_1,v \rangle_{Y_1} \\ &=\langle \nabla (u-u_1), \nabla v \rangle_{L^2(\Omega_1)} \\ &=\langle \nabla u, \nabla v \rangle_{L^2(\Omega_1)} - \langle \nabla u_1, \nabla v \rangle_{L^2(\Omega_1)} \\ &= \langle f,v \rangle_{L^2(\Omega_1)} - \langle f,v \rangle_{L^2(\Omega_1)} = 0. \end{aligned} \end{equation*} Therefore $u-u_1 \in M_1$. We also note that $u_1-u_0 \in Y_1 = M_1^\perp$. Hence we have \begin{equation*} u-u_0 = \underbrace{(u-u_1)}_{\in M_1} + \underbrace{(u_1-u_0)}_{\in M_1^\perp}, \end{equation*} and so $P_1(u-u_0) = u-u_1$. Similarly, we see that $P_2(u-u_1)= u-u_2$, and so on. More generally, defining $x_n \in H$ by $x_n = u - u_{n}$ for $n \geq 0$, we have that $x_{2n+2}= P_2P_1 x_{2n}$, and so \begin{equation*} x_{2n} = (P_2P_1)^n x_0, \quad n\geq 1. \end{equation*} By von Neumann's theorem, we have \begin{gather*} \| x_{2n} - P_Mx_0\| \to 0, \\ \|x_{2n+1} - P_Mx_0\| = \|P_1(x_{2n} - P_Mx_0)\| \leq \| x_{2n} - P_Mx_0\| \to 0, \end{gather*} as $n \to \infty$, and therefore \begin{equation*} \|x_{n} - P_Mx_0\| \to 0 \textnormal{ as } n\to \infty. \end{equation*} Since $Y_1^\perp \cap Y_2^\perp = (Y_1+Y_2)^\perp$ (generally true for subspaces of a Hilbert space), and since the space $Y=Y_1 + Y_2$ can be shown to be dense in $H$, we have that $M=Y^\perp = \{0\}$.
Hence $x_n \to 0$ as $n \to \infty$, and so \begin{equation*} \|u_n - u\| \to 0 \textnormal{ as } n\to \infty. \end{equation*} So we have generated a sequence $u_n \in H$ converging in norm to the unique weak solution. In fact, it turns out that $u_n$ converges in norm to the weak solution exponentially fast (although this is not guaranteed if, for example, we were to switch to Neumann boundary conditions); see \cite{Lio88} for more detail. Given Halperin's theorem, it is not surprising that we may extend this method to more than two subdomains. This extension is, again, discussed in \cite{Lio88}. We end this section by remarking how the method of alternating projections was applied in very different ways in the examples above. While solving systems of linear equations, we used it to find an element in the intersection of closed affine spaces. In contrast, for the Schwarz alternating method, we knew that the intersection of our subspaces was $\{0\}$, but we applied it to a sequence whose terms we did not know explicitly. This highlights how versatile the method of alternating projections is, and is another reason why it has so many applications. \newpage \section{Convergence in norm} \label{convergence in norm} In this section, we work through the major results that give conditions for $(x_n)$ to converge in norm, including those by von Neumann ($J=2$), Halperin (periodic projections), and Sakai (quasiperiodic projections). \subsection{Two closed subspaces} We will begin by proving von Neumann's theorem, that for a sequence of projections onto two closed subspaces, we are guaranteed convergence in norm \cite{von49}. \begin{theorem} [von Neumann] \label{von neumann} Let $P_1,P_2$ be orthogonal projections onto the closed subspaces $M_1,M_2$ of the real or complex Hilbert space $H$, and $P_M$ the orthogonal projection onto $M=M_1\cap M_2$.
Then for any $x \in H$, \[\textnormal{$\|(P_2P_1)^n x -P_Mx\| \to 0 $ as $n \to \infty$.}\] \end{theorem} Rather than follow von Neumann's proof, we present one which does not appear to feature in the literature yet. Our proof is inspired by \cite{BaDeHu09}, where the following version of the spectral theorem is used in a similar context. \begin{theorem*}[Spectral theorem] \label{spectral} Let $H$ be a real or complex Hilbert space, and $T \in B(H)$ be a self-adjoint linear operator. Then there exists a measure space $(\Omega , \Sigma , \mu )$, a unitary map $U \colon H \to L^2(\Omega,\mu)$, and $m\in L^\infty(\Omega,\mu)$ with the property that $m(t) \in \mathbb{R}$ for almost all $t\in \Omega$, such that \begin{equation*} UTU^{-1}f = m \cdot f, \quad f \in L^2(\Omega,\mu), \end{equation*} where $(m \cdot f)(t) = m(t)f(t)$ for $t \in \Omega$. Here, $\|m\|_\infty = \|T\|$. \end{theorem*} The idea is to consider $(P_1P_2P_1)^n$ rather than $(P_2P_1)^n$. The operator $P_1P_2P_1$ being self-adjoint allows us to apply the spectral theorem to shift the problem into some $L^2(\Omega,\mu)$, where we may make use of tools such as the dominated convergence theorem. \begin{proof}[Proof of Theorem \ref{von neumann}] Let $T=P_1P_2P_1$. Since $P_1$ and $P_2$ are idempotent, we have $T^n = P_1(P_2P_1)^n$ and $(P_2P_1)^{n+1} = P_2T^n$ for $n \geq 1$; since also $P_1P_M = P_2P_M = P_M$, it follows that $(P_2P_1)^nx$ converges in norm to $P_Mx$ if and only if $T^nx$ does. We will prove the latter. Since $T$ is self-adjoint, the spectral theorem gives that there exists a measure space $(\Omega , \Sigma , \mu )$, a unitary map $U: H \to L^2(\Omega,\mu)$, and $m\in L^\infty(\Omega,\mu)$ with $m(t) \in \mathbb{R}$ for almost all $t\in \Omega$, such that \begin{equation*} UTU^{-1}f = m \cdot f, \quad f \in L^2(\Omega,\mu). \end{equation*} We note that for $f \in L^2(\Omega,\mu)$, we have $m\cdot f \in L^2(\Omega,\mu)$ and also \begin{equation*} UT^nU^{-1}f = m^n \cdot f,\quad n\geq 0. \end{equation*} Let $x \in H$, and consider $f=Ux$.
Noting that $UTx = m \cdot f$, we have \begin{equation*} \begin{aligned} \langle m \cdot f,f \rangle &= \langle UTx,Ux \rangle = \langle U^*UTx,x \rangle = \langle Tx,x \rangle = \langle P_1P_2P_1x,x \rangle \\ & = \langle P_2P_1x,P_1x \rangle = \langle P_2^2P_1x,P_1x \rangle = \langle P_2P_1x,P_2P_1x \rangle = \|P_2P_1x\|^2 \geq 0. \end{aligned} \end{equation*} Since $U$ is onto, this shows that $\int_\Omega m|f|^2 \,d\mu = \langle m \cdot f,f \rangle \geq 0$ for every $f \in L^2(\Omega,\mu)$, and hence $m(t) \geq 0$ for almost all $t\in \Omega$. Since $\|m\|_\infty = \|T\| \leq 1$, we also have $m(t) \leq 1$ for almost all $t \in \Omega$. So for almost all $t \in \Omega$, we have $0\leq m(t) \leq 1$. Hence, defining \begin{equation*} \begin{alignedat}{2} &\widetilde{\Omega} &&= \{ t \in \Omega: m(t)\leq 1\}, \\ &\Omega' &&= \{t \in \Omega : m(t)<1\}, \\ &\Omega^* &&= \{t \in \Omega : m(t)=1\}, \end{alignedat} \end{equation*} we have that $\Omega \setminus \widetilde{\Omega}$ has zero measure. Additionally, noting that $ \Omega' \cap \Omega^* = \emptyset$, $\widetilde{\Omega} = \Omega' \cup \Omega^*$, and $1-m= 0$ on $\Omega^*$, we have \begin{equation} \label{dct inequality} \begin{aligned} \Big{(}\|(m^n - m^{n+1})\cdot f\|_{L^2(\Omega,\mu)}\Big{)}^2 &= \int_{\Omega} |m^n(1-m)\cdot f|^2 \,d\mu \\ &= \int_{\widetilde{\Omega}} |m^n(1-m)\cdot f|^2 \,d\mu \\ &= \int_{\Omega'} |m^n(1-m)\cdot f|^2 \,d\mu \\ &\qquad + \int_{\Omega^*} |m^n(1-m)\cdot f|^2 \,d\mu \\ &= \int_{\Omega'} |m^n(1-m) \cdot f|^2 \,d\mu \\ & \leq \int_{\Omega'} |m^n \cdot f|^2 \,d\mu. \end{aligned} \end{equation} We note that $(m(t))^n \to 0$ as $n\to \infty$ for almost all $t \in \Omega'$, namely those $t$ with $0 \leq m(t) < 1$. We also note that $|f|^2$ is integrable, so $f$ is finite almost everywhere on $\Omega$, and therefore on $\Omega'$. Hence $(m(t))^n f(t) \to 0$ as $n \to \infty$ for almost all $t \in \Omega'$.
Since $|m^n \cdot f|^2 \leq |f|^2$ on $\Omega'$, and $|f|^2$ is integrable, we may apply the dominated convergence theorem to get \begin{equation} \label{dct} \lim_{n\to\infty} \int_{\Omega'} |m^n \cdot f|^2 d\mu = \int_{\Omega'} \lim_{n\to\infty} |m^n \cdot f|^2 d\mu = \int_{\Omega'} 0 \textnormal{ } d\mu = 0. \end{equation} We note that $\|U^{-1}\| = 1$ (since $U$ is unitary), and that $U(T^n - T^{n+1})U^{-1}f = (m^n - m^{n+1}) \cdot f$. Applying these, along with (\ref{dct inequality}) and (\ref{dct}), gives \begin{equation} \label{kakutani inequality} \begin{aligned} \|T^nx - T^{n+1}x\| &= \|U^{-1}U(T^n - T^{n+1})U^{-1}Ux\| \\ & \leq \|U^{-1}\| \cdot \|U(T^n - T^{n+1})U^{-1}f\|_{L^2(\Omega,\mu)} \\ &= \|U(T^n - T^{n+1})U^{-1}f\|_{L^2(\Omega,\mu)} \\ &= \|(m^n - m^{n+1}) \cdot f\|_{L^2(\Omega,\mu)} \\ &\leq \Big( \int_{\Omega'} |m^n \cdot f|^2 d\mu \Big)^{1/2} \textnormal{ $\to 0$ as $n \to \infty$}. \end{aligned} \end{equation} We know by Lemma \ref{foundational 2}\ref{T*x=x} that $(I-T^*)x = 0 \iff x \in M$. Hence by Lemma \ref{foundational 2}\ref{direct sum ran ker}, we have \begin{equation*} H=\overline{\ran(I-T)} \oplus \ker(I-T^*) =\overline{\ran(I-T)} \oplus M. \end{equation*} So for any $x \in H$, there is a unique pair $y\in \overline{\ran(I-T)}$ and $z \in M$ such that $x=y+z$. We have by (\ref{kakutani inequality}) that $\|T^nx - T^{n+1}x\| \to 0$ as $n \to \infty$, so we can apply Lemma \ref{foundational 2}\ref{using kakutani} to see that \[\|T^n x - P_Mx\| = \|T^ny + T^n z - z\| = \|T^ny\| \to 0 \, \, \textnormal{ as } n \to \infty,\] thus concluding our proof. \end{proof} We end this section by remarking that since projections are idempotent, any sequence of projections involving $P_1$ and $P_2$ may be reduced to one where $P_1$ and $P_2$ are alternating. Therefore Theorem \ref{von neumann} does indeed show that if $J=2$, and $(j_n)$ is any sequence, then $(x_n)$ converges in norm.
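Before moving to periodic projections, we remark that Theorem \ref{von neumann} is easy to illustrate numerically. The sketch below is our own illustration (it assumes the NumPy library; the particular subspaces, starting point and tolerances are arbitrary choices): it takes two planes in $\mathbb{R}^3$ whose intersection $M$ is the line spanned by $(1,1,0)$, and checks that the iterates $(P_2P_1)^nx$ approach $P_Mx$ even though $P_1$ and $P_2$ do not commute.

```python
import numpy as np

def projection_onto_span(A):
    """Orthogonal projection matrix onto the column span of A."""
    Q, _ = np.linalg.qr(A)          # orthonormal basis for the span
    return Q @ Q.T

# M1 = span{(1,1,0), (0,0,1)} and M2 = span{(1,1,0), (1,-1,1)}:
# two planes in R^3 meeting exactly in the line M = span{(1,1,0)}.
P1 = projection_onto_span(np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
P2 = projection_onto_span(np.array([[1.0, 1.0], [1.0, -1.0], [0.0, 1.0]]))
PM = projection_onto_span(np.array([[1.0], [1.0], [0.0]]))

x = np.array([0.3, -1.2, 2.5])      # arbitrary starting point
y = x.copy()
for _ in range(100):                # compute (P2 P1)^100 x
    y = P2 @ (P1 @ y)

# The projections do not commute, yet the iterates reach P_M x.
assert np.linalg.norm(P1 @ P2 - P2 @ P1) > 1e-3
assert np.linalg.norm(y - PM @ x) < 1e-8
```

In this example the Friedrichs angle between $M_1$ and $M_2$ has cosine $1/\sqrt{3}$; in finite dimensions this angle is always positive for a fixed pair of subspaces, which is why the convergence observed above is exponentially fast.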
\subsection{Periodic projections} In 1962, Halperin improved on von Neumann's theorem to show that $(x_n)$ converges in norm whenever $(j_n)$ is periodic. \begin{theorem} [Halperin's theorem] \label{halperin} Let $H$ be a real or complex Hilbert space, $J \geq 2$ an integer, and $M_1,\dots,M_J$ a collection of closed subspaces of $H$, with $P_j$ the orthogonal projection onto $M_j$ for each $j \in \{1,\dots,J\}$. Let $T = P_J \dots P_1$, and let $P_M$ be the orthogonal projection onto the intersection $M = \bigcap_{j=1}^J M_j$. Then for each $x\in H$, \[\textnormal{$\|T^n x - P_M x\| \to 0$ as $n\to \infty$.}\] \end{theorem} In this section, we follow a proof by Netyanun and Solomon \cite{NeSo06}, which makes use of Kakutani's lemma \cite{Kak40} to prove Theorem \ref{halperin} in a succinct way. \subsubsection{Kakutani's lemma} We begin with the core lemma of the proof, due to Kakutani \cite{Kak40}. We remark that we essentially proved Kakutani's lemma for the special case $J=2$ as part of our proof of Theorem \ref{von neumann} (von Neumann). \begin{lemma} [Kakutani's lemma] \label{kakutani} Let $T=P_J\dots P_1$. For each $x \in H$, \[\| T^n x - T^{n+1} x \| \to 0 \textnormal{ as } n \to \infty.\] \end{lemma} \begin{proof} Let $x \in H$. Since $\|P_j\| \leq 1$ for each $j \in \{1,\dots, J\}$, we have $\|T^{n+1} x\| \leq \|T^n x\|$. Therefore $(\|T^n x\|)$ is a monotonically decreasing sequence bounded below by $0$; it therefore converges, and so \begin{equation} \label{decreasing sequence} \|T^nx\|^2 - \|T^{n+1}x\|^2 \to 0 \textnormal{ as } n \to \infty. \end{equation} We now let $Q_0 = I $, and for each $ j \in \{1,\dots ,J\}$, we recursively define $Q_j = P_jQ_{j-1}$.
Then \begin{align*} &\|T^nx - T^{n+1}x\|^2 \\ &= \|(T^n x - P_1T^n x) + (P_1T^nx - P_2P_1T^n x) + \dots \\ &\quad \quad + (P_{J-1}\dots P_1T^nx - P_J\dots P_1T^n x) \|^2 \\ &= \big{\|}\sum_{j=0}^{J-1} (Q_jT^nx - Q_{j+1}T^nx)\big{\|}^2 \\ &\leq \Big{(}\sum_{j=0}^{J-1} \|Q_jT^nx - Q_{j+1}T^nx\| \Big{)}^2 && \textnormal{$\big{[}$triangle inequality$\big{]}$}\\ &\leq J \sum_{j=0}^{J-1} \|Q_jT^nx - Q_{j+1}T^nx\|^2 && \textnormal{$\Big{[}\big{(}\sum_{j=0}^{J-1} a_j\big{)}^2 \leq J\sum_{j=0}^{J-1} {a_j}^2 \Big{]}$} \\ & = J \sum_{j=0}^{J-1} (\|Q_jT^nx\|^2 - \|Q_{j+1}T^nx\|^2) && \textnormal{$\big{[}$Lemma \ref{foundational}\ref{projection equality}$\big{]}$}\\ &= J(\|Q_0T^nx\|^2 - \|Q_JT^nx\|^2) && \textnormal{$\big{[}$telescoping series$\big{]}$} \\ & = J(\|T^nx\|^2 - \|T^{n+1}x\|^2) \to 0 \textnormal{ as } n \to \infty. && \textnormal{$\big{[}Q_J=T$ and (\ref{decreasing sequence})$\big{]}$} \end{align*} This concludes the proof. \end{proof} \subsubsection{Proving Halperin's theorem} We are now ready to prove Theorem \ref{halperin}. \begin{proof}[Proof of Theorem \ref{halperin}] As in the proof of Theorem \ref{von neumann}, we have by Lemma \ref{foundational 2}\ref{T*x=x} that $(I-T^*)x = 0 \iff x \in M$. Hence by Lemma \ref{foundational 2}\ref{direct sum ran ker}, \begin{equation*} H=\overline{\ran(I-T)} \oplus \ker(I-T^*) =\overline{\ran(I-T)} \oplus M. \end{equation*} So for any $x \in H$, there is a unique pair $y\in \overline{\ran(I-T)}$ and $z \in M$ such that $x=y+z$. By Lemma \ref{kakutani} (Kakutani), we have $\|T^nx - T^{n+1}x\| \to 0$ as $n \to \infty$. Therefore, we can apply Lemma \ref{foundational 2}\ref{using kakutani} to see that \[\|T^n x - P_Mx\| = \|T^ny + T^n z - z\| = \|T^ny\| \to 0 \, \, \textnormal{ as } n \to \infty,\] thus completing the proof.\end{proof} We remark that it is in fact relatively simple to extend Theorem \ref{halperin} (Halperin), so that instead of projections, we consider contractions that are non-negative.
An identical proof to that of Theorem \ref{halperin} works here; the only additional ingredient needed is that for a non-negative contraction $A$, we have \begin{equation} \label{inequality for kakutani} \|x-Ax\|^2 \leq \|x\|^2 - \|Ax\|^2, \quad x\in H. \end{equation} In Lemma \ref{foundational}\ref{projection equality}, we proved (\ref{inequality for kakutani}) for the special case where $A$ is a projection. It turns out to be true more generally when $A$ is a non-negative contraction; a proof can be found in \cite{NeSo06}. We note that this extension was in fact first proved by Amemiya and Ando \cite{AmAn65}, and reproved more recently by Bauschke, Deutsch, Hundal and Park \cite{BaDeHuPa03}. \subsection{Quasiperiodic projections} It turns out we can generalise Theorem \ref{halperin} (Halperin) by finding an even weaker condition than periodicity for the sequence of projections to converge in norm. In 1995, Sakai proved that we have convergence in norm if the sequence of projections is so-called \textit{quasiperiodic} \cite{Sak95}. Before defining what it means for a sequence to be quasiperiodic, or formally stating Sakai's theorem, we remind ourselves of our usual setting. Let $H$ be a real or complex Hilbert space, $J\geq 2$ be an integer, and $M_1,\dots ,M_J$ be a family of closed subspaces of $H$ with intersection $M=\bigcap_{j=1}^J M_j$. Given a closed subspace $Y$ of $H$, we write $P_Y$ for the orthogonal projection onto $Y$, and for ease of notation, we write $P_1,\dots, P_J$ for the orthogonal projections onto the closed subspaces $M_1,\dots,M_J$ respectively. Let $(j_n)_{n\geq1}$ be a sequence taking values in $\{1,\dots,J\}$. We define the sequence $(x_n)_{n\geq0}$ by picking an arbitrary element $x_0 \in H$, and letting \begin{equation*} x_n = P_{j_n}x_{n-1}, \quad n\geq 1. \end{equation*} For ease of notation, we write \begin{equation*} s = (j_n)_{n\geq1}. \end{equation*} We now define what it means for a sequence to be quasiperiodic.
\begin{definition*} Consider a sequence $s= (j_n)_{n\geq1}$, where each $j_n \in \{1,\dots,J\}$. We say that $s$ is \textit{quasiperiodic} if each $i \in \{1,\dots,J\}$ appears in $s$ infinitely many times, and for each such $i$, \begin{equation*} I(s,i) = \sup_n\big{(}k_n(i) - k_{n-1}(i)\big{)} \end{equation*} is finite, where $k_0(i) = 0$, and $(k_n(i))_{n\geq1}$ is the increasing sequence of all natural numbers such that $j_{k_n(i)}=i$. Put more simply, $s$ is quasiperiodic if, for each $i \in \{1,\dots,J\}$, the gap in $s$ between one appearance of $i$ and the next is bounded (by $I(s,i) < \infty$). \end{definition*} \begin{theorem}[Sakai's Theorem] \label{sakai theorem} If $(j_n)$ is a quasiperiodic sequence, then $(x_n)$ converges in norm to the orthogonal projection of $x_0$ onto $M= \bigcap_{j=1}^J M_j$. \end{theorem} We follow Sakai's proof of this result \cite{Sak95}, splitting it into small subsections, each highlighting a key element or idea of the proof. \subsubsection{A criterion for convergence} We begin by proving a lemma which gives us a criterion for $(x_n)$ to converge in norm. \begin{lemma} \label{key result sakai} Suppose there is a constant $A$ (which may depend on the sequence $s$), such that \begin{equation} \label{key inequality} \|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2, \quad n>m\geq 1. \end{equation} Then the sequence $(x_n)$ converges in norm. \end{lemma} \begin{proof} Since $x_{k+1}$ is the orthogonal projection of $x_k$ onto $M_{j_{k+1}}$, by Lemma \ref{foundational}\ref{projection equality} we have \begin{equation*} \|x_{k+1}\|^2 + \|x_{k} - x_{k+1}\|^2 = \|x_{k+1}\|^2 + \|x_{k}\|^2 - \|x_{k+1}\|^2 = \|x_k\|^2. \end{equation*} Hence $(\|x_k\|)$ is monotonically decreasing. Adding the equalities from $k=m$ to $k=n-1$, we obtain \begin{equation*} \|x_{m}\|^2 = \|x_n\|^2 + \sum_{k=m}^{n-1} \|x_{k+1} - x_{k}\|^2.
\end{equation*} Therefore (\ref{key inequality}) is equivalent to \begin{equation*} \|x_n - x_m\|^2 \leq A(\|x_{m}\|^2 - \|x_n\|^2). \end{equation*} Since $(\|x_k\|)$ is monotonically decreasing and bounded below by $0$, the sequence $(\|x_k\|^2)$ converges to some limit $c\geq0$. In particular, given $\varepsilon >0$, there exists $K \in \mathbb{N}$ such that whenever $n \geq K$,\[ 0 \leq \|x_n\|^2 - c \leq \frac{\varepsilon}{2A}.\] Therefore for $n>m\geq K$, we have \begin{equation*} \begin{aligned} \|x_n - x_m\|^2 &\leq A(\|x_{m}\|^2 - \|x_n\|^2) \\ &\leq A\big{|} \|x_m\|^2 - c\big{|} + A\big{|}c - \|x_n\|^2 \big{|} \\ & \leq A\cdot \frac{\varepsilon}{2A} + A\cdot \frac{\varepsilon}{2A} = \varepsilon. \end{aligned} \end{equation*} Hence $(x_n)$ is a Cauchy sequence, and so converges in norm (since $H$, being a Hilbert space, is complete). \end{proof} So under the conditions in Lemma \ref{key result sakai}, we have that the sequence $(x_n)$ converges in norm to some limit, say $x_\infty$. In particular, it also converges weakly to this limit. In the next lemma, we show that under the additional assumption that each projection appears infinitely many times in the sequence $(P_{j_n})_{n\geq 1}$, we have that $x_\infty = P_Mx_0$. \begin{lemma} \label{key result sakai 2} Suppose the sequence $(x_n)$ converges weakly. Suppose also that $s = (j_n)$ takes every value in $\{1,\dots,J\}$ infinitely many times. Then the limit is the orthogonal projection of $x_0$ onto $M= \bigcap_{j=1}^J M_j$. \end{lemma} \begin{proof} By assumption, $(x_n)$ converges weakly to some limit, say $x_\infty$. Each $j \in \{1,\dots,J\}$ occurs infinitely many times in $s$, and so there is some subsequence $(x_{n_k})_{k\geq1}$ such that each $x_{n_k} \in M_j$. Then for every $y \in M_j^\perp$, we have $\langle x_{n_k},y \rangle =0$, and therefore \begin{equation*} \langle x_\infty , y \rangle = \lim_{k\to\infty} \langle x_{n_k}, y \rangle = \lim_{k\to\infty}0 = 0.
\end{equation*} Hence $x_\infty \in M_j$ for each $j \in \{1,\dots,J\}$, and so $x_\infty \in M$. To show that $x_\infty$ is the orthogonal projection of $x_0$ onto $M$, it suffices to show that $x_0 - x_\infty \in M^\perp$, since then we would have \begin{equation*} x_0 = \underbrace{x_\infty}_{\in M} + \underbrace{x_0 - x_\infty}_{\in M^\perp}. \end{equation*} Let $x \in M$. For every $n\geq0$, we have $(I-P_{j_{n+1}})x_n \in (M_{j_{n+1}})^\perp$ and $x \in M_{j_{n+1}}$. Therefore, \begin{align*} \langle x_n - x_{n+1},x \rangle &= \langle x_n - P_{j_{n+1}}x_n,x \rangle \\ &= \langle (I-P_{j_{n+1}})x_n, x \rangle \\ &=0. \end{align*} Adding these from $n=0$ to $n=h-1$, we have $\langle x_0 - x_h,x \rangle=0$, and so \begin{equation*} \langle x_0 - x_\infty,x \rangle = \lim_{h\to \infty} \langle x_0 - x_h,x \rangle = \lim_{h\to \infty} 0 = 0. \end{equation*} Hence $x_0 - x_\infty \in M^\perp$, and so $(x_n)$ converges weakly to the orthogonal projection of $x_0$ onto $M$. \end{proof} We now state a simple corollary of Lemmas \ref{key result sakai} and \ref{key result sakai 2}. \begin{corollary} \label{key result sakai 3} Suppose the sequence $s$ is quasiperiodic. Suppose also that there is a constant $A$ (which may depend on the sequence $s$), such that \begin{equation} \label{key inequality 2} \|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2, \quad n>m\geq 1. \end{equation} Then the sequence $(x_n)$ converges in norm to the orthogonal projection of $x_0$ onto $M$. \end{corollary} \begin{proof} By Lemma \ref{key result sakai}, we immediately have that $(x_n)$ converges in norm, and so in particular converges weakly to some limit, say $x_\infty$. Since $s$ is quasiperiodic, it takes every value in $\{1,\dots,J\}$ infinitely many times. Therefore, Lemma \ref{key result sakai 2} gives that $x_\infty = P_Mx_0$.
\end{proof} In particular, given a quasiperiodic sequence $s$, if we can find a constant $A$ such that (\ref{key inequality 2}) holds, then Theorem \ref{sakai theorem} (Sakai) follows immediately. The rest of this section involves finding such a constant. \subsubsection{Useful lemmas} We proceed to prove two simple, but useful, lemmas. These will be used in later parts of the proof of Theorem \ref{sakai theorem} (Sakai). \begin{lemma} \label{easy inequality sakai} Let $y_1, y_2, \dots , y_N, y_{N+1} \in H$. Then \begin{equation*} \|y_{N+1} - y_1\|^2 \leq N\sum_{k=1}^N \|y_{k+1} - y_k\|^2. \end{equation*} \end{lemma} \begin{proof} Applying the triangle inequality along with $\big{(}\sum_{k=1}^{N} a_k\big{)}^2 \leq N\sum_{k=1}^{N} a_k^2$, we have \begin{align*} \|y_{N+1} - y_1\|^2 &= \Big{\|}\sum_{k=1}^N (y_{k+1} - y_k) \Big{\|}^2 \\ &\leq \Big{(}\sum_{k=1}^N \|y_{k+1} - y_k\|\Big{)}^2 \\ &\leq N\sum_{k=1}^N \|y_{k+1} - y_k\|^2. \qedhere \end{align*} \end{proof} \begin{lemma} \label{small lemma} Let $P$ be the orthogonal projection onto a closed subspace of $H$, and let $x,y \in H$. Then \begin{enumerate}[label=(\alph*)] \item $\|x-Py\|^2 \leq \|x-y\|^2 + \|x-Px\|^2$. \label{small lemma a} \item $\|x-y\|^2 \leq \|x-Py\|^2 + \|x-Px\|^2 + 2\|y-Py\|^2$. \label{small lemma b} \end{enumerate} \end{lemma} \begin{proof} For (a), we note that $x-Px \perp P(x-y)$, and so Lemma \ref{foundational}\ref{pythagoras} gives \begin{align*} \|x-Py\|^2 &= \|x-Px+P(x-y)\|^2 \\ & = \|x-Px\|^2 + \|P(x-y)\|^2 \\ &\leq \|x-Px\|^2 + \|x-y\|^2. \end{align*} For (b), we begin by noting that since $Px,Py \perp y-Py$, we have \begin{align*} \langle x-Py,y-Py \rangle &= \langle x,y-Py \rangle = \langle x-Px,y-Py \rangle.
\end{align*} Therefore, appealing to the Cauchy-Schwarz inequality and noting that $2ab \leq a^2+b^2$, we have \begin{align*} \|x-y\|^2 &= \|(x-Py)-(y-Py)\|^2 \\ &\leq \|x-Py\|^2 + \|y-Py\|^2 + 2|\langle x-Py,y-Py \rangle| \\ &= \|x-Py\|^2 + \|y-Py\|^2 + 2|\langle x-Px,y-Py \rangle| \\ &\leq \|x-Py\|^2 + \|y-Py\|^2 + 2\|x-Px\|\|y-Py\| \\ &\leq \|x-Py\|^2 + \|y-Py\|^2 + \|x-Px\|^2 + \|y-Py\|^2 \\ &= \|x-Py\|^2 + \|x-Px\|^2 + 2\|y-Py\|^2. \qedhere \end{align*} \end{proof} \subsubsection{Two statements implying Sakai's theorem} There are two steps left in the proof. We will first find two statements from which Theorem \ref{sakai theorem} (Sakai) follows, and then we will show these statements are true. In this subsection, we do the former. We begin by defining $I = I(s) = \sup_{1\leq j\leq J} I(s,j)$, and \[S_l = \sum_{k=l}^{l+I-2} \|x_{k+1} - x_k\|^2.\] By Corollary \ref{key result sakai 3}, to prove Theorem \ref{sakai theorem} (Sakai), it suffices to show that \begin{equation} \label{want to prove sakai} \|x_n - x_m\|^2 \leq \big{(}(I(s)-1)(I(s)-2)+3\big{)} \sum_{k=m}^{n-1} \|x_{k+1} - x_k\|^2, \quad n>m\geq1. \, \, \, \end{equation} Let $n \geq m \geq 1$. Suppose first that \[ \textnormal{(a)} \quad n-m \leq 2I-3. \] Then by Lemma \ref{easy inequality sakai} (with $N=n-m$, $y_1 = x_m$, $y_2 = x_{m+1}$,$\dots$, $y_N=x_{n-1}$, $y_{N+1} = x_n$), we have \begin{equation} \label{almost there} \|x_n-x_m\|^2 \leq (n-m)\sum_{k=m}^{n-1} \|x_{k+1} - x_k\|^2. \end{equation} Since $n-m \leq 2I-3 \leq (I-1)(I-2) + 3$, we see that (\ref{want to prove sakai}) holds. We may therefore assume that \[\textnormal{(b)} \quad n-m \geq 2I-2. \] We now note that in order to show (\ref{want to prove sakai}) holds, it is sufficient to prove the following statements. \noindent (i) If $S_{n-I+1} \leq S_m$, then \begin{equation*} \|x_n - x_m\|^2 \leq \|x_n - x_{m+I-1}\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_m.
\end{equation*} (ii) If $S_m < S_{n-I+1}$, then \begin{equation*} \|x_n - x_m\|^2 \leq \|x_{n-I+1} - x_m\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_{n-I+1}. \end{equation*} Indeed, if we have (i) and (ii), then we can apply them repeatedly (whichever of the two we are able to apply at each step), until we are in case (a). For example, suppose we have just applied (i) to $\|x_n - x_m\|$. Then either $n-(m+I-1) \geq 2I-2$, so that we can apply one of (i) or (ii) to $\|x_n - x_{m+I-1}\|^2$, or $n-(m+I-1) \leq 2I-3$, so that we are in case (a) and we get, as in (\ref{almost there}), \begin{equation*} \|x_n - x_{m+I-1}\|^2 \leq (n-(m+I-1))\sum_{k=m+I-1}^{n-1} \|x_{k+1} - x_k\|^2. \end{equation*} After repeated applications of (i) or (ii), and once we are in case (a) so that we have a similar inequality to (\ref{almost there}), we obtain (\ref{want to prove sakai}), and we are done. \subsubsection{Proving Sakai's theorem} In the last section, we showed that in order to prove Theorem \ref{sakai theorem} (Sakai), it suffices to prove that for $n-m \geq 2I-2$, the following two statements hold. \noindent (i) If $S_{n-I+1} \leq S_m$, then \begin{equation*} \|x_n - x_m\|^2 \leq \|x_n - x_{m+I-1}\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_m. \end{equation*} (ii) If $S_m < S_{n-I+1}$, then \begin{equation*} \|x_n - x_m\|^2 \leq \|x_{n-I+1} - x_m\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_{n-I+1}. \end{equation*} \begin{proof}[Proof of (i)] For $k \in \{m,m+1, \dots, m+I-2\}$, applying Lemma \ref{small lemma}\ref{small lemma b} with $x=x_n,y=x_k$, $P=P_{j_{k+1}}$, and setting $p_{j_{k+1}}=P_{j_{k+1}}x_n$, we have \begin{equation*} \|x_n-x_k\|^2 \leq \|x_n-x_{k+1}\|^2 + \|x_n - p_{j_{k+1}}\|^2 + 2\|x_{k+1} - x_k\|^2.
\end{equation*} Applying this inequality one by one to each of $\|x_n-x_m\|^2$, $\|x_n-x_{m+1}\|^2,\dots,$ $\|x_n-x_{m+I-2}\|^2$, we obtain \begin{equation} \label{intermediate inequality} \begin{aligned} \|x_n-x_m\|^2 &\leq \|x_n-x_{m+1}\|^2 + \|x_n - p_{j_{m+1}}\|^2 + 2\|x_{m+1} - x_m\|^2 \\ &\leq \dots \leq \|x_n-x_{m+I-1}\|^2 + \sum_{k=m}^{m+I-2} \|x_n - p_{j_{k+1}}\|^2 + 2S_m. \end{aligned} \end{equation} The set $\{ x_{n-I+1},x_{n-I+2},\dots , x_n\}$ consists of $I$ consecutive elements of the sequence $(x_n)$. So by definition of $I$, at least one of these elements, say $x_h$, belongs to $M_{j_{k+1}}$. We choose the largest such number $h$, and denote it by $h(j_{k+1})$. Since $p_{j_{k+1}}=P_{j_{k+1}}x_n$ is the projection of $x_n$ onto $M_{j_{k+1}}$, and $x_{h(j_{k+1})} \in M_{j_{k+1}}$, Lemma \ref{foundational}\ref{projection is onto closest point} gives \begin{equation} \label{closest point inequality} \|x_n - p_{j_{k+1}}\|^2 \leq \|x_n - x_{h(j_{k+1})}\|^2. \end{equation} Applying Lemma \ref{easy inequality sakai}, we obtain \begin{equation} \label{intermediate inequality 2} \begin{aligned} \|x_n - x_{h(j_{k+1})}\|^2 & \leq (n-h(j_{k+1})) \sum_{l=h(j_{k+1})}^{n-1} \|x_{l+1} - x_l\|^2 \\ & \leq (n-h(j_{k+1})) S_{n-I+1}. \end{aligned} \end{equation} Since $k \in \{m,\dots,m+I-2\}$, $k$ ranges over $I-1$ consecutive numbers. Therefore, there is some number $a$ in this range such that $M_{j_{a+1}}$ is equal to one of $M_{j_{n-1}}$ or $M_{j_{n}}$. Rephrasing this, there is some $a \in \{m,\dots,m+I-2\}$ for which $n-h(j_{a+1})$ is equal to $0$ or $1$. Since $ 0 \leq n-h(j_{k+1}) \leq I-1$, we have \begin{equation} \label{intermediate inequality 3} \begin{aligned} &\sum_{k=m}^{m+I-2} \big{(}n-h(j_{k+1})\big{)} \\ &\leq \sum_{k=m}^{a-1} \big{(}n-h(j_{k+1})\big{)} + \Big{(}n-h(j_{a+1})\Big{)} + \sum_{k=a+1}^{m+I-2} \big{(}n-h(j_{k+1})\big{)} \\ & \leq \Big{(}\sum_{k=m}^{a-1} (I-1) \Big{)} + 1 + \Big{(}\sum_{k=a+1}^{m+I-2} (I-1) \Big{)} \\ & \leq (I-1)(I-2) + 1.
\end{aligned} \end{equation} Hence applying (\ref{closest point inequality}), (\ref{intermediate inequality 2}) and (\ref{intermediate inequality 3}) in that order, and recalling that $S_{n-I+1} \leq S_m$ (by assumption), we have \begin{equation} \label{intermediate inequality 4} \begin{aligned} \sum_{k=m}^{m+I-2} \|x_n - p_{j_{k+1}}\|^2 + 2S_m & \leq \sum_{k=m}^{m+I-2} \|x_n - x_{h(j_{k+1})}\|^2 + 2S_m \\ & \leq \sum_{k=m}^{m+I-2} (n-h(j_{k+1})) S_{n-I+1} + 2S_m \\ & \leq \big{(}(I-1)(I-2) + 1\big{)}S_{n-I+1} + 2S_m \\ & \leq \big{(}(I-1)(I-2) + 3\big{)}S_m. \end{aligned} \end{equation} So finally, by (\ref{intermediate inequality}) and (\ref{intermediate inequality 4}), we have \begin{equation*} \begin{aligned} \|x_n-x_m\|^2 & \leq \|x_n-x_{m+I-1}\|^2 + \sum_{k=m}^{m+I-2} \|x_n - p_{j_{k+1}}\|^2 + 2S_m \\ & \leq \|x_n-x_{m+I-1}\|^2 + \big{(}(I-1)(I-2) + 3\big{)}S_m, \end{aligned} \end{equation*} and so (i) is proved. \end{proof} \begin{proof}[Proof of (ii)] For each $k \in \{n-I+1,\dots, n-1\}$, applying Lemma \ref{small lemma}\ref{small lemma a} with $x=x_m$, $y=x_k$, $P=P_{j_{k+1}}$, and setting $p_{j_{k+1}}=P_{j_{k+1}}x_m$, we have \begin{equation*} \|x_m - x_{k+1}\|^2 \leq \|x_m - x_k\|^2 + \|x_m - p_{j_{k+1}} \|^2. \end{equation*} Applying these inequalities repeatedly (as in the proof of (i)), we obtain \begin{equation} \label{intermediate inequality b1} \|x_m - x_n\|^2 \leq \|x_m - x_{n-I+1}\|^2 + \sum_{k=n-I+1}^{n-1} \|x_m - p_{j_{k+1}}\|^2. \end{equation} An argument similar to (\ref{intermediate inequality 4}) shows that we have \begin{equation} \label{intermediate inequality b2} \sum_{k=n-I+1}^{n-1} \|x_m - p_{j_{k+1}}\|^2 \leq \big{(}(I-1)(I-2) + 1\big{)}S_m. \end{equation} Combining (\ref{intermediate inequality b1}) and (\ref{intermediate inequality b2}), and recalling that $S_m<S_{n-I+1}$, we obtain (ii) and so the proof is complete.
\end{proof} \subsubsection{Concluding remarks} \label{concluding remarks} Sakai's paper ends by posing several open questions about the convergence of sequences of projections. He also mentions a few simple results. In this subsection we briefly discuss some of the questions and ideas raised by Sakai. The first question he poses is the following. For an arbitrary sequence $s$, does (\ref{key inequality}) always hold with $A=J-1$? We remark that it appears this question has not yet been addressed in the literature. Perhaps this is because it can be easily resolved given a result by Kopeck\'a and Paszkiewicz \cite{KoPa17} (stated later as Theorem \ref{big result kopecka}). We resolve Sakai's question in Corollary \ref{Sakai open question}, where we find a sequence $s$ for which (\ref{key inequality}) does not hold for any constant $A$. Another interesting question posed by Sakai is whether we still have convergence in norm for the case that $J= \infty$ and $(j_n)$ is quasiperiodic. It is worth noting that quasiperiodic sequences covering every positive integer do exist. We offer the following example: \begin{equation*} 1,2,1,3,1,2,1,4,1,2,1,3,1,2,1,5,\dots \end{equation*} That is, the sequence which has $1$ in every 2\textsuperscript{nd} position, $2$ in every 4\textsuperscript{th} position, \dots, $n$ in every $2^n$\textsuperscript{th} position, and so on. More generally, for the case $J=\infty$, a quasiperiodic sequence always has $I = \sup_{j\in \mathbb{N}}I(s,j) = \infty$, and so the argument in the proof of Theorem \ref{sakai theorem} (Sakai) does not extend to this case. However, even when $J= \infty$, we can still show convergence for special cases. For example, suppose the sequence of closed subspaces $(M_j)_{j\geq1}$ is monotonically decreasing (i.e. $M_1\supseteq M_2 \supseteq M_3 \supseteq \dots $). Then we have \begin{equation} \label{commuting projections} P_bP_a = P_aP_b = P_b, \quad b\geq a \geq1. \end{equation} Consider the sequence $s$ given by $j_n = n$ for every $n \in \mathbb{N}$.
Applying (\ref{commuting projections}) and Lemma \ref{foundational}\ref{projection equality} for the first equality gives that for $n\geq m \geq 1$, \begin{equation*} \begin{aligned} \|x_n - x_m\|^2 &= \|x_m\|^2 - \|x_n\|^2 \\ &= \|(x_m - x_{m+1}) + (x_{m+1} - x_{m+2}) + \dots + (x_{n-1} - x_n) + x_n\|^2 - \|x_n\|^2 \\ & = \|x_m - x_{m+1}\|^2 + \dots + \|x_{n-1} - x_n\|^2 + \|x_n\|^2 - \|x_n\|^2 \\ & = \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2, \end{aligned} \end{equation*} where the third equality is Pythagoras: the vectors $x_m - x_{m+1}, \dots, x_{n-1} - x_n, x_n$ are pairwise orthogonal, since $x_k - x_{k+1} = (I-P_{k+1})x_k \in M_{k+1}^\perp$, while $x_l - x_{l+1}$ and $x_n$ lie in $M_{k+1}$ for $l>k$. Hence (\ref{key inequality}) holds with $A=1$, and so $(x_n)$ converges in norm. Finally, we note that in his paper, Sakai observed that if at least one of the $J$ subspaces is finite-dimensional, then for any sequence $(j_n)$, we have that $(x_n)$ converges in norm. To prove this, we will need to make use of a result by Amemiya and Ando, to be stated later as Theorem \ref{amemiya}. We therefore defer this result (and its proof) to Section \ref{amemiya section}, where we state it as Lemma \ref{Sakai mistake}. \newpage \section{Weak convergence} \label{amemiya section} All of our results so far have had some restriction on the sequence of projections, or on the Hilbert space $H$. It is natural to ask what happens if we do not have such restrictions. In 1965, Amemiya and Ando proved that for any sequence of projections, we always have weak convergence \cite{AmAn65}. \begin{theorem} \label{amemiya} Let $H$ be a real or complex Hilbert space, $J \geq 2$ an integer, and $M_1,\dots ,M_J$ a family of closed subspaces in $H$. For each $j \in \{1,\dots ,J\}$, let $P_j$ be the orthogonal projection onto the closed subspace $M_j$, and let $(j_n)$ be any sequence taking values in $\{1,\dots,J\}$. Let $x_0 \in H$ be a vector, and let $(x_n)$ be the sequence defined by \begin{equation*} x_n = P_{j_n}x_{n-1}, \quad n\geq1. \end{equation*} Then $x_n$ converges weakly as $n\to \infty$.
\end{theorem} In fact, Amemiya and Ando proved a slightly stronger result about contractions \cite{AmAn65}, of which Theorem \ref{amemiya} is a corollary. By proving Theorem \ref{amemiya} directly, we are able to simplify their proof. Additionally, by including details not originally present in \cite{AmAn65}, we aim to make the proof easier to follow. We present our proof through a series of four lemmas. For simplicity, we write `neighbourhood' to mean a basic weakly open neighbourhood of $0$ in $H$. We also write $B_H$ for the closed unit ball in $H$. \begin{lemma} \label{neighbourhood} For any neighbourhood $U$, and any $j \in \{1,\dots,J\}$, there is an $\varepsilon = \varepsilon(j) > 0 $ such that for $x\in B_H$, \begin{equation*} \|P_jx\| \geq 1-\varepsilon \implies (I-P_j)x \in U. \end{equation*} \end{lemma} \begin{proof} Let $U$ be a neighbourhood. Then there are $y_1,\dots ,y_r \in H$ and $\delta >0$, such that \begin{equation*} U = \{x \in H: |\langle x,y_k\rangle| < \delta \textnormal{ for each } 1\leq k \leq r \}. \end{equation*} Let $\varepsilon$ be small enough such that $\eta \sqrt{2\varepsilon - \varepsilon^2}<\delta$, where $\eta = \max \{\|y_k\| : 1\leq k \leq r \}$. For example, $\varepsilon = \min \{1, \frac{\delta^2}{2\eta ^2}\}$ works. Now let $x \in B_H$, and suppose that $\|P_jx\| \geq 1-\varepsilon$. Then by Lemma \ref{foundational}\ref{projection equality}, \begin{equation*} \begin{aligned} \|(I-P_j)x\|^2 = \|x-P_jx\|^2 = \|x\|^2 - \|P_jx\|^2 \leq 1 - (1-\varepsilon)^2 = 2\varepsilon - \varepsilon^2. \end{aligned} \end{equation*} Hence by the Cauchy-Schwarz inequality, for any $k \in \{1,\dots ,r\}$, \begin{equation*} |\langle (I-P_j)x,y_k \rangle| \leq \|(I-P_j)x\| \|y_k\| \leq \eta \sqrt{2\varepsilon - \varepsilon^2} < \delta, \end{equation*} and thus $(I-P_j)x \in U$. \end{proof} \begin{lemma} \label{commutes} Let $Q_j$ be the orthogonal projection onto $\ker(I-P_j\dots P_1)$. Then for each $k \in \{1,\dots ,j\}$, $Q_j$ and $P_k$ commute.
\end{lemma} \begin{proof} By Lemma \ref{foundational 2}\ref{useful for amemiya}, for each $x\in H$, \[Q_j x \in \ker(I-P_j\dots P_1) = \bigcap_{k=1}^j\ker(I-P_k).\] Therefore for each $k \in \{1,\dots ,j\}$, we have $(I-P_k)Q_jx = 0$, and so $P_kQ_jx = Q_jx$. Hence, \begin{equation} \label{equality P Q} P_kQ_j = Q_j. \end{equation} Since $P_k$ and $Q_j$ are self-adjoint, \[Q_j = (Q_j)^* = (P_kQ_j)^* = (Q_j)^*(P_k)^* = Q_jP_k,\] and so $P_kQ_j = Q_j = Q_jP_k$. \end{proof} \begin{lemma} \label{new neighbourhood} Let $R_j = I - Q_j$. Then for any neighbourhood $U$, there is another neighbourhood $V$ such that for $x \in B_H$, \begin{equation*} (I-P_k)x \in V, \quad k \in \{1,\dots ,j\} \implies R_jx \in U. \end{equation*} \end{lemma} \begin{proof} Let $H^j$ be the Cartesian product $H\times \dots \times H$ ($j$ times), viewed as a Hilbert space with addition and scalar multiplication given by \begin{equation*} (u_1,\dots,u_j) + \lambda(v_1,\dots,v_j) = (u_1 + \lambda v_1, \dots, u_j + \lambda v_j), \quad u_i,v_i \in H, \, \, \lambda \in \mathbb{F}, \end{equation*} and inner product given by \begin{equation*} \big{\langle}(u_1,\dots,u_j),(v_1,\dots,v_j)\big{\rangle}_{H^j} = \langle u_1,v_1\rangle + \dots + \langle u_j,v_j\rangle, \quad u_i,v_i \in H. \end{equation*} Therefore the norm on $H^j$ is \begin{equation*} \|(u_1,\dots,u_j)\|_{H^j} = \sqrt{\|u_1\|^2 + \dots + \|u_j\|^2}, \quad u_1,\dots,u_j \in H. \end{equation*} We consider the map $g\colon R_j(H) \to H^j$ given by \[g(R_jx) = \big{(}(I-P_1)x,\dots ,(I-P_j)x\big{)}, \quad x \in H,\] where both spaces are endowed with their respective weak topologies. Lemma \ref{foundational 2}\ref{useful for amemiya} gives that for any $x\in H$, \begin{equation*} \begin{aligned} (I-P_k)x = 0, \quad k \in \{1,\dots ,j\} &\iff x\in \bigcap_{k=1}^j \ker(I-P_k) \\ &\iff x\in \ker(I-P_j\dots P_1) \\ &\iff Q_jx = x \\ &\iff R_jx=0.
\end{aligned} \end{equation*} The $\impliedby$ implication shows that $g$ is well defined, while the $\implies$ implication shows that $g$ is injective. We now note that (\ref{equality P Q}) gives \begin{equation*} (I-P_k)R_j = (I-P_k)(I-Q_j) = I-P_k - Q_j + P_kQ_j = I-P_k - Q_j + Q_j = I-P_k. \end{equation*} So given $x \in H$, and noting that $\|I-P_j\|\leq \|I\| + \|P_j\| \leq 2$, we have \begin{equation*} \begin{aligned} \|g(R_jx)\|_{H^j} &= \| \big{(}(I-P_1)x,\dots ,(I-P_j)x\big{)} \|_{H^j} \\ &= \|\big{(}(I-P_1)R_jx,\dots ,(I-P_j)R_jx\big{)}\|_{H^j} \\ &= \sqrt{\|(I-P_1)R_jx\|^2 + \dots + \|(I-P_j)R_jx\|^2} \\ &\leq \|(I-P_1)R_jx\| + \dots + \|(I-P_j)R_jx\| \\ & \leq 2j\|R_jx\|. \end{aligned} \end{equation*} Therefore $g$ is bounded. It is a simple check to see that $g$ is linear. Hence $g$ is continuous. Let $f$ be the restriction of $g$ to $R_j(B_H)$ (where $R_j(B_H)$ is endowed with the relative weak topology). Then $f$ must also be continuous and injective. We know that the closed unit ball in a normed vector space $H$ is compact with respect to the weak topology if and only if $H$ is reflexive. In our case, $H$ is a Hilbert space, and so is indeed reflexive. Therefore $B_H$ is weakly compact. Since $R_j$ is bounded and linear, it is weakly continuous, and so $R_j(B_H)$ is also weakly compact. The weak topology on any normed vector space is Hausdorff, so in particular $H^j$ is Hausdorff with respect to the weak topology. Hence $f$ is an injective continuous map from a compact topological space into a Hausdorff space, so that \[\textnormal{ $R_j(B_H)$ and $f(R_j(B_H))$ are homeomorphic}.\] Therefore we can replace the codomain ($H^j$) of $f$ with the image of $f$ (endowed with the relative weak topology), so that $f$ becomes a bijection. In particular, $f^{-1}$ is then continuous at the origin, and hence the claim follows. \end{proof} For $j \in\{1,\dots ,J\}$, let $\mathcal{M}_j$ be the collection of maps which lie in a free semigroup generated by some $j$ of the projections $\{P_1,\dots,P_J\}$.
We also set $\mathcal{M}_0 = \{I\}$. \begin{lemma} \label{important neighbourhood lemma} Let $U$ be a neighbourhood, and let $S \in \mathcal{M}_j$. There exists a positive number $\varepsilon = \varepsilon(U,j)$ depending only on $U$ and $j$ such that given $x \in B_H$, we have \begin{equation*} \|Sx\| \geq 1-\varepsilon \implies (I-S)x \in U. \end{equation*} \end{lemma} \begin{proof} We prove this by induction on $j$. The case $j=1$ follows immediately from Lemma \ref{neighbourhood} (since $S$ is just $P_i$ for some $i \in \{1,\dots ,J\}$). Suppose the assertion is true for $j-1$. Let $S \in \mathcal{M}_j$. If $S \in \mathcal{M}_{j-1}$, we are done by the induction hypothesis. Therefore we may assume that \[S \in \mathcal{M}_j \setminus \mathcal{M}_{j-1}.\] Without loss of generality, we may also assume that $S$ is in the free semigroup generated by $P_1,P_2,\dots ,P_j$ (if not, we simply relabel the projections). Since $S \notin \mathcal{M}_{j-1}$, every letter $P_k$ with $k \in \{1,\dots ,j\}$ occurs in $S$. Splitting $S$ at the leftmost and at the rightmost occurrence of $P_k$, we may write \begin{equation*} S=T_1P_kT_2 = T_3P_kT_4, \end{equation*} where $T_1,T_4 \in \mathcal{M}_{j-1}$, and $T_2,T_3 \in \mathcal{M}_{j}$ (if any of $T_1,T_2,T_3,T_4$ is the empty word, we identify it with $I$, and the corresponding steps below become trivial). Let $U$ be a neighbourhood; since the basic weak neighbourhoods of the origin are convex and balanced, we may assume that $U$ is convex and balanced. Pick $V$ as in Lemma \ref{new neighbourhood}, applied with the neighbourhood $\frac{1}{2}U$ in place of $U$, so that for $y \in B_H$, \[(I-P_k)y \in V, \quad k \in \{1,\dots ,j\} \implies R_jy \in \tfrac{1}{2}U;\] shrinking $V$ if necessary, we may assume that $V$ is also convex and balanced. We claim that there is a convex and balanced neighbourhood $W$ such that \begin{equation*} 4W + 4P_iW \subseteq V, \quad i \in \{1,\dots ,j\}. \end{equation*} Indeed, $\frac{1}{8}V$ is a neighbourhood, and since each $P_i$ is continuous, and so weakly continuous, each preimage $P_i^{-1}\big{(}\frac{1}{8}V\big{)}$ is weakly open and contains the origin. Hence \begin{equation*} W = \frac{1}{8}V \cap \bigcap_{i=1}^j P_i^{-1}\Big{(}\frac{1}{8}V\Big{)} \end{equation*} is a convex and balanced neighbourhood, and by the convexity of $V$, \begin{equation*} 4W + 4P_iW \subseteq \frac{1}{2}V + \frac{1}{2}V \subseteq V, \quad i \in \{1,\dots ,j\}. \end{equation*} By the induction hypothesis, it is possible to find $\varepsilon_1$ such that for $x\in B_H$ and $T\in \mathcal{M}_{j-1}$, we have \[\|Tx\| \geq 1-\varepsilon_1 \implies (I-T)x \in W. \] By Lemma \ref{neighbourhood}, we can find $\varepsilon_2$ such that for $x\in B_H$ and $T=P_k$, \[\|Tx\| \geq 1-\varepsilon_2 \implies (I-T)x \in W.\] We set $\varepsilon = \min\{\varepsilon_1,\varepsilon_2\}$, and note that it is independent of $S$ (since $\varepsilon_1$ and $\varepsilon_2$ are). Then given $x \in B_H$ and either $T\in \mathcal{M}_{j-1}$ or $T=P_k$, \begin{equation} \label{neighbourhood implication} \|Tx\| \geq 1-\varepsilon \implies (I-T)x \in W. \end{equation} We now fix $x \in B_H$, and we assume that $\|Sx\| \geq 1-\varepsilon$. If we can show that $(I-S)x \in U$, then our induction is complete. We note that \begin{equation*} 1\geq \|T_4x\| \geq \|P_kT_4x\| \geq \|T_3P_kT_4x\| = \|Sx\| \geq 1-\varepsilon. \end{equation*} In particular, $\|T_4x\| \geq 1-\varepsilon$ and $\|P_k(T_4x)\| \geq 1-\varepsilon$. Then by (\ref{neighbourhood implication}), we have $(I-T_4)x \in W$ and $(I - P_k)(T_4x) \in W$. Hence, since $W$ is convex and balanced, \begin{equation} \label{element of 1} \begin{alignedat}{2} (I-P_k)x &= (I-T_4)x + (I-P_k)(T_4x) - P_k(I-T_4)x \\ & \in W + W + P_kW \subseteq 2W + 2P_kW \subseteq \frac{1}{2} V. \end{alignedat} \end{equation} We also note that $ \|T_1(P_kT_2x)\| = \|Sx\| \geq 1-\varepsilon$, so that by (\ref{neighbourhood implication}), we have $(I-T_1)(P_kT_2x) \in W$. Hence, \begin{equation} \label{element of 2} \begin{aligned} (I-P_k)Sx &= (T_1 - I)P_kT_2x + P_k(I-T_1)P_kT_2x \\ &\in W + P_kW \subseteq \frac{1}{2} V.
\end{aligned} \end{equation} Since (\ref{element of 1}) and (\ref{element of 2}) are valid for all $k \in \{1,\dots ,j\}$, Lemma \ref{new neighbourhood} guarantees that $R_jx \in \frac{1}{2} U$ and $R_j(Sx) \in \frac{1}{2} U$. Therefore, \[R_j(I-S)x = R_jx - R_j(Sx) \in U.\] Recalling that we assumed $S \in \mathcal{M}_j \setminus \mathcal{M}_{j-1}$ is in the free semigroup generated by $P_1,P_2,\dots,P_j$, a similar argument to Lemma \ref{foundational 2}\ref{useful for amemiya} gives \begin{equation*} \ker(I-P_j\dots P_1)=\bigcap_{k=1}^j \ker(I-P_k) = \ker(I-S). \end{equation*} Since $Q_j$ is the projection onto $\ker(I-P_j\dots P_1) = \ker(I-S)$, we have that $(I-S)(I-R_j)x = (I-S)Q_jx = 0$. Rearranging, we have \[(I-S)x = (I-S)R_jx.\] We note also that since $Q_j$ commutes with $P_k$ for each $k \in \{1,\dots ,j\}$ (by Lemma \ref{commutes}), $R_j$ also commutes with $P_k$ for each such $k$. Therefore $R_j$ commutes with $S$, and so $R_j$ commutes with $I-S$. Hence \begin{equation*} (I-S)x = (I-S)R_jx = R_j(I-S)x \in U, \end{equation*} thus completing the induction. \end{proof} We are finally able to prove that $(x_n)$ always converges with respect to the weak topology. \begin{proof}[Proof of Theorem \ref{amemiya}] We begin by noting that without loss of generality, we may assume that \[\|x_0\|=1.\] Indeed, if we prove Theorem \ref{amemiya} for this case, then we may extend it to any nonzero $x_0 \in H$ (the case $x_0 = 0$ being trivial) by noting that \[P_{j_n}\dots P_{j_1}x_0 \textnormal{ converges weakly } \iff P_{j_n}\dots P_{j_1}\frac{x_0}{\|x_0\|} \textnormal{ converges weakly}.\] We also note that $(\|x_n\|)$ is a monotonically decreasing sequence, bounded below by $0$, and so it converges to some non-negative limit. If $\|x_n\| \to 0$, then $(x_n)$ converges to $0$ in norm, and hence converges weakly.
We may therefore suppose that $\lim_{n\to \infty} \|x_n\| >0$, and so $\inf_{n\geq 0}\|x_n\|>0.$ For any neighbourhood $U$, let $\varepsilon = \varepsilon(U,J)$ be as in Lemma \ref{important neighbourhood lemma}. Then there exists an $N \in \mathbb{N}$ such that for $n \geq m \geq N$, we have \[\|x_n\| \geq (1-\varepsilon)\|x_m\|.\] Indeed, suppose for a contradiction there were no such $N$, so that for any $N_i\in \mathbb{N}$ there are $n_i \geq m_i \geq N_i$ such that $\|x_{n_i}\| < (1-\varepsilon)\|x_{m_i}\|$. We begin by picking $N_1 = 0$ and finding appropriate $n_1 \geq m_1 \geq N_1=0$. Then, letting $N_2 = n_1+1$, we pick $n_2 \geq m_2 \geq N_2$, and continue inductively in this way. We then have for $k \in \mathbb{N}$, \begin{equation*} \begin{aligned} \|x_{n_k}\| &< (1-\varepsilon)\|x_{m_k}\| \leq (1-\varepsilon)\|x_{N_k}\| \leq (1-\varepsilon)\|x_{n_{k-1}}\| \\ & < (1-\varepsilon)^2\|x_{n_{k-2}}\| <\dots <(1-\varepsilon)^k\|x_{n_1}\| \\ &\leq (1-\varepsilon)^k\|x_0\| \to 0 \textnormal{ as } k\to \infty, \end{aligned} \end{equation*} contradicting $\inf_{n\geq0}\|x_n\|>0$. For a given $U$, we find an $N$ as above, and let $n\geq m \geq N$; we may assume that $n > m$, since for $n = m$ we trivially have $x_m - x_n = 0 \in U$. Let $x = \frac{x_m}{\|x_m\|}$, and note that there is some $S\in \mathcal{M}_J$ such that $x_n = Sx_m$. Then \begin{equation*} \|Sx\| = \frac{\|x_n\|}{\|x_m\|} \geq 1-\varepsilon, \end{equation*} so Lemma \ref{important neighbourhood lemma} guarantees that $(I-S)x \in U$. Hence \begin{equation} \label{difference in neighbourhood} x_m - x_n = (I-S)x_m = \|x_m\|(I-S) \frac {x_m} {\|x_m\|} = \|x_m\|(I-S)x \in U, \end{equation} where the final membership holds because $(I-S)x \in U$, $0 < \|x_m\| \leq \|x_0\| = 1$, and we may assume that $U$ is balanced (every weak neighbourhood of the origin contains a balanced one), so that $\lambda U \subseteq U$ whenever $|\lambda| \leq 1$. Let $\delta >0$ and $y \in H$. Since $U$ was an arbitrary neighbourhood, we can pick $U = \{x \in H : | \langle x,y \rangle | < \delta/2\}$.
Then (\ref{difference in neighbourhood}) gives that for $n \geq m \geq N$, we have $x_m - x_n \in U$, and therefore \[| \langle x_m - x_n, y \rangle | < \delta / 2.\] But we also note that $(x_n)$ is a bounded sequence ($\|x_n\| \leq \|x_0\|$ for each $n \in \mathbb{N}$), and so has a weakly convergent subsequence, say $(x_{n_k})$, converging weakly to some limit $x_\infty \in H$; this subsequence and $x_\infty$ may be fixed once and for all, independently of $\delta$ and $y$. Then there exists some $K \geq N$ (so that $n_K \geq K \geq N$) such that \[| \langle x_{n_K} - x_\infty, y \rangle | < \delta /2 .\] Therefore for $n\geq n_K$, \begin{equation*} |\langle x_n - x_\infty, y \rangle| \leq |\langle x_n - x_{n_{K}},y \rangle| + |\langle x_{n_{K}} - x_\infty,y \rangle| < \delta /2 + \delta /2 = \delta. \end{equation*} Hence $(x_n)$ converges weakly to $x_\infty$, completing our proof. \end{proof} We have shown that $(x_n)$ always converges weakly, but we do not yet know to what limit. Rather than taking Amemiya and Ando's approach to finding the limit, we notice that a simpler argument due to Sakai (Lemma \ref{key result sakai 2}) \cite{Sak95} works here too. Combining this with Theorem \ref{amemiya}, we obtain the following. \begin{corollary} \label{limit} Suppose that $(j_n)$ takes every value in $\{1,\dots,J\}$ infinitely many times. Then $(x_n)$ converges weakly to the orthogonal projection of $x_0$ onto $M = \bigcap_{j=1}^J M_j$. \end{corollary} Since in a finite-dimensional Hilbert space convergence in norm is equivalent to weak convergence, the following corollary is immediate from Theorem \ref{amemiya}. \begin{corollary} Assume the same setting as in Theorem \ref{amemiya}, except that we now specify that $H$ is a finite-dimensional Hilbert space. Then $(x_n)$ converges in norm. \end{corollary} In fact, even more can be said. We end this section with a nice observation due to Sakai \cite{Sak95} (briefly mentioned in Section \ref{concluding remarks}).
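Before turning to Sakai's observation, here is a quick numerical sanity check of the finite-dimensional case of Corollary \ref{limit} (an illustrative aside, not part of the original sources; the planes, starting vector, and function names below are our own arbitrary choices). The sketch projects cyclically onto three planes in $\mathbb{R}^3$ that share the line spanned by $d = (1,1,1)/\sqrt{3}$, and verifies that the iterates approach the orthogonal projection of $x_0$ onto that line.

```python
import math
import itertools

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def scale(a, t): return [t * x for x in a]
def norm(a): return math.sqrt(dot(a, a))
def normalize(a): return scale(a, 1.0 / norm(a))

# Three planes M_j = {x : <x, n_j> = 0}; each normal is orthogonal to
# d = (1,1,1)/sqrt(3), so the intersection of the three planes is span{d}.
d = normalize([1.0, 1.0, 1.0])
normals = [normalize([1.0, -1.0, 0.0]),
           normalize([1.0, 1.0, -2.0]),
           normalize([1.0, 0.0, -1.0])]

def project_onto_plane(x, n):
    # orthogonal projection onto the hyperplane with unit normal n
    return sub(x, scale(n, dot(x, n)))

x0 = [0.3, -1.2, 2.0]
p_m_x0 = scale(d, dot(x0, d))   # projection of x0 onto the intersection

# the sequence (j_n) = 1,2,3,1,2,3,... takes every value infinitely often
x = list(x0)
for j in itertools.islice(itertools.cycle([0, 1, 2]), 600):
    x = project_onto_plane(x, normals[j])

error = norm(sub(x, p_m_x0))    # numerically negligible
```

In finite dimensions weak and norm convergence coincide, so the iterates converge in norm to $P_Mx_0$; the residual `error` is at the level of floating-point rounding.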
We consider the set $D=\{1\leq d \leq J : (j_n) \textnormal{ takes value $d$ infinitely many times}\}.$ \begin{lemma} \label{Sakai mistake} Suppose there is some $i \in D$ such that $M_i$ is finite-dimensional. Then $(x_n)$ converges in norm. \end{lemma} Sakai gave a proof of this result in his paper \cite{Sak95}, but it appears to be incorrect. In particular, it seems as though Sakai assumed that if a sequence converges weakly to a limit, and has a subsequence which converges in norm, then the sequence converges in norm. However, this is not generally true, and we demonstrate this as follows. It is known that in an infinite-dimensional Hilbert space there are sequences which converge weakly, but not in norm (for instance, any orthonormal sequence converges weakly to $0$, while its norms are constantly $1$). By translating the sequence if needed, we may find a sequence $(a_n)$ converging weakly to $0$, but not in norm. But then the sequence $(b_n)$ given by \begin{equation*} (b_n) = (a_1,0,a_2,0,a_3,0, \dots) \end{equation*} converges weakly to $0$ and has a subsequence which converges in norm, but does not itself converge in norm. However, we find that Lemma \ref{Sakai mistake} still turns out to be true. We offer the following proof. \begin{proof}[Proof of Lemma \ref{Sakai mistake}] We know, due to Theorem \ref{amemiya}, that $(x_n)$ converges weakly to some limit $x_\infty$. Since $i \in D$, we may pass to a subsequence $(x_{n_k})_{k\geq 1}$ such that each $x_{n_k} \in M_i$, and note that it must also converge weakly to $x_\infty$ (as $k \to \infty$). However $M_i$ is finite-dimensional, and in a finite-dimensional space, weak convergence is equivalent to convergence in norm, so we have that $(x_{n_k})$ converges in norm to $x_\infty$. By definition of $D$, we may find some $t \in \mathbb{N}$ such that for $n\geq t$, we have $j_n \in D$. Moreover, the tail $(x_n)_{n \geq t}$ is built from the projections $\{P_d : d \in D\}$, each of which is applied infinitely many times, so Corollary \ref{limit} (applied to the subspaces $\{M_d : d \in D\}$ with starting point $x_t$) gives that $x_\infty \in \bigcap_{d \in D} M_d$. In particular, for $n\geq t$, we have $P_{j_n}x_\infty = x_\infty$. Since $(x_{n_k})$ converges in norm to $x_\infty$, for any $\varepsilon>0$, we may find some $K \geq t$ such that $\|x_{n_K} - x_\infty\| < \varepsilon$.
So for $m\geq n_K$, we have \begin{equation*} \begin{aligned} \|x_m - x_\infty\| &= \|P_{j_m}P_{j_{m-1}}\dots P_{j_{n_K+1}}x_{n_{K}} - x_\infty\| \\ & = \|P_{j_m}P_{j_{m-1}}\dots P_{j_{n_K+1}}(x_{n_{K}} - x_\infty)\| \\ & \leq \|x_{n_{K}} - x_\infty\| < \varepsilon, \end{aligned} \end{equation*} where the second equality uses $P_{j_r}x_\infty = x_\infty$ for each $r > n_K \geq t$. Hence $(x_m)$ converges in norm to $x_\infty$, concluding our proof. \end{proof} \newpage \section{Failure of strong convergence} \label{failure strong convergence} As mentioned in the introduction, Amemiya and Ando's question as to whether there is a sequence of projections that does not converge in norm \cite{AmAn65} went unanswered for a long time. It was resolved only in 2012, when Paszkiewicz proved that for any infinite-dimensional Hilbert space, we may find five subspaces, a vector $x_0 \in H$, and a sequence $(j_n)$, so that $(x_n)$ does not converge in norm \cite{Pas12}. This construction was improved by Kopeck\'a and M\"uller from five subspaces to three \cite{KoMu14}, and then refined in 2017 by Kopeck\'a and Paszkiewicz \cite{KoPa17}. In this section, we will closely follow Kopeck\'a and Paszkiewicz's construction, presenting a series of technical lemmas as stated in \cite{KoPa17}, leading to a proof of the following theorem. \begin{theorem} \label{big result kopecka} There exists a sequence $(j_n)$ with the following property. If $H$ is an infinite-dimensional Hilbert space, and $x_0 \in H$ is a non-zero vector, then there exist three closed subspaces $M_1,M_2,M_3 \subset H$ intersecting only at the origin such that the sequence $(x_n)$ does not converge in norm. \end{theorem} We aim to present the proofs of the lemmas in a more accessible way, adding additional details where they have been omitted. \subsection{Notation} Given subsets $X,Y \subset H$, we will write $\bigvee X$ for the closed linear span of $X$, and $X \vee Y$ for the closed linear span of $X \cup Y$.
We will also write $\bigvee_{i\in I} X_i$ for the closed linear span of $\bigcup_{i \in I} X_i$. Given $x,y \in H$, we will write $\vee x$ and $x\vee y$ for $\bigvee \{x\}$ and $\bigvee\{x,y\}$, respectively. For $m \in \mathbb{N}$, we will write $\mathcal{S}_m$ to denote the free semigroup with generators $a_1, \dots, a_m$. If $A_1, \dots, A_m \in B(H)$, and $\varphi = a_{i_r}\dots a_{i_1} \in \mathcal{S}_m$ for some $r \in \mathbb{N}$ and $i_j \in \{1,\dots,m\}$, then we write \begin{equation*} \varphi(A_1,\dots,A_m) = A_{i_r}\dots A_{i_1} \in B(H). \end{equation*} We refer to elements of a semigroup as `words', made up of the `letters' from the set $\{a_1,\dots,a_m\}$. We denote the length of the word $\varphi$ by $|\varphi| = r$, and the number of occurrences of the letter $a_i$ in the word $\varphi$ by $|\varphi_i|$, so that $\sum_{i=1}^m |\varphi_i| = |\varphi|=r$. \subsection{Continuous dependence of words on letters} We begin by proving that if we have control over the number of appearances of a contraction in a product, then replacing the contraction with one close in norm to the original does not change the product much. \begin{lemma} \label{contraction inequality} Let $\psi \in \mathcal{S}_n$ for some $n \in \mathbb{N}$. Assume that for each $i \in \{1,2,\dots ,n\}$, $A_i, B_i, E \in B(H)$ are contractions such that each $A_i$ commutes with $E$. Then \begin{equation*} \|\psi(A_1,\dots ,A_n)E - \psi(B_1,\dots ,B_n)E\| \leq \sum_{1\leq i \leq n} |\psi_i| \cdot \|A_iE - B_iE\|. \end{equation*} \end{lemma} \begin{proof} We prove this by induction on $|\psi|$. For $|\psi|=0$, we have that $\psi(A_1,\dots ,A_n) = \psi(B_1,\dots ,B_n)$ is the identity on $H$ (adopting the convention that the empty word acts as the identity), so the inequality holds (with both sides being $0$). For our induction hypothesis (IH), suppose the assertion is true for $|\psi| \leq r$. We now suppose $|\psi|= r+1$. Then we have $\psi = \varphi a_j$ for some $\varphi \in \mathcal{S}_n$ with $|\varphi| = r$ and $j\in \{1,2,\dots ,n\}$.
Hence \begin{align*} &\|\psi(A_1,\dots ,A_n)E - \psi(B_1,\dots ,B_n)E\| && \\ & =\|\varphi(A_1,\dots ,A_n)A_jE - \varphi(B_1,\dots ,B_n)B_jE\| \\ & \leq \|\varphi(A_1,\dots ,A_n)A_jE - \varphi(B_1,\dots ,B_n)A_jE\| \\ & \quad + \|\varphi(B_1,\dots ,B_n)A_jE - \varphi(B_1,\dots ,B_n)B_jE\| && \textnormal{\big{[}triangle inequality\big{]}} \\ & \leq \|\varphi(A_1,\dots ,A_n)E - \varphi(B_1,\dots ,B_n)E\| \\ & \quad + \|\varphi(B_1,\dots ,B_n)\| \cdot \|A_jE-B_jE\| && \textnormal{\big{[}$A_jE=EA_j$,\, $\|A_j\| \leq 1$\big{]}} \\ & \leq \|A_jE-B_jE\| + \sum_{1\leq i \leq n} |\varphi_i| \cdot \|A_iE - B_iE\| && \textnormal{\big{[}IH, $\|\varphi(B_1,\dots, B_n)\| \leq 1$\big{]}} \\ & = \sum_{1\leq i \leq n} |\psi_i| \cdot \|A_iE - B_iE\|. \end{align*} This completes the induction. \end{proof} We now state two simple corollaries of Lemma \ref{contraction inequality}, which will be useful later on. \begin{corollary} \label{contraction inequality corollary} Let $\psi \in \mathcal{S}_3$. Suppose $E,W,X,X',Y,Y',Z$ are subspaces of $H$ such that $W,X,Y \subset E$ and $X',Y' \perp E$. Then \begin{equation*} \|\psi(P_W,P_X,P_Y)P_E - \psi(P_Z,P_{X\vee X'}, P_{Y\vee Y'})P_E\| \leq | \psi_1| \cdot \|P_ZP_E-P_W\|. \end{equation*} \end{corollary} \begin{proof} We apply Lemma \ref{contraction inequality} with $n=3$ and with $P_E$ in place of $E$, noting that $P_WP_E=P_W$, $P_X=P_XP_E=P_{X \vee X'}P_E$, and $P_Y=P_{Y}P_E=P_{Y \vee Y'}P_E$. \end{proof} \begin{corollary} \label{bound projection word} Let $n \in \mathbb{N}$, $\varphi \in \mathcal{S}_n$, and $A_1,\dots, A_n$, $B_1,\dots, B_n$ be contractions. Then \begin{equation*} \|\varphi(A_1,\dots ,A_n) - \varphi(B_1,\dots ,B_n)\| \leq |\varphi | \cdot \max_{1\leq i \leq n} \|A_i - B_i\|. \end{equation*} \end{corollary} \begin{proof} We apply Lemma \ref{contraction inequality}, with $E$ taken to be the identity map.
\end{proof} \subsection{Constructing three subspaces and a finite product of projections} \label{triples} In this subsection, given orthonormal vectors $u$ and $v$ in $H$, we construct three subspaces $X$, $Y$, and $W=u \vee v$, and a finite product of projections $\psi(P_W,P_X,P_Y)$ onto $W$ such that $\psi(P_W,P_X,P_Y)u$ is close to $v$. This will be useful later when we `glue' together countably many copies of these triples of subspaces. Let $\varepsilon >0$, and let $k=k(\varepsilon)$ be the smallest positive integer $k$ such that \[\Big{(}\cos \frac{\pi}{2k}\Big{)}^k > 1 - \varepsilon.\] We note that such a $k$ exists since $(\cos \frac{\pi}{2r})^r \to 1$ as $r \to \infty$. Let $u$ and $v$ be orthonormal vectors in $H$, and for $j \in \{0,\dots ,k\}$, let \[h_j=u\cos \frac{\pi j}{2k} + v\sin \frac{\pi j}{2k},\] so that $h_0 = u$ and $h_k=v$. If we project $u$ consecutively onto $\vee h_1,\dots ,\vee h_k$, each projection scales the vector by $\cos\frac{\pi}{2k}$ (the cosine of the angle between consecutive $h_j$'s), so that $(P_{\vee h_k}\dots P_{\vee h_1})u = \big{(}\cos \frac{\pi}{2k}\big{)}^k v$. Then by definition of $k$, we arrive at $v$ with error less than $\varepsilon$, so that \begin{equation} \label{consecutive projections} \|(P_{\vee h_k}\dots P_{\vee h_1})u - v\|<\varepsilon. \end{equation} This is illustrated in the following figure. \vspace{4mm} \begin{figure}[H] \begin{center} \includegraphics[width=81.82mm]{figure7.pdf} \caption{Approximating $v$ by projections of $u$} \label{segment projections} \end{center} \end{figure} We have by Theorem \ref{von neumann} (von Neumann) \cite{von49} that the projection onto $\vee h_j$ can be arbitrarily well approximated by iterating projections between two subspaces with intersection $\vee h_j$. In the following lemma, we call these subspaces $W$ and $X_j'$, and show that we can approximate $X_j'$ with a subspace $X_j$ so that $X_1 \subset \dots \subset X_k$. We will then see in Lemma \ref{replace projection} that we are able to replace the projections onto each $X_j$ by a product $(P_XP_YP_X)^{s(j)}$, where $X=X_k$, and $Y$ is such that $\|P_X - P_Y\|$ is small.
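As a quick numerical sanity check of \eqref{consecutive projections} (an illustrative aside; the function and parameter names, and the sample value of $\varepsilon$, are our own), the following Python sketch computes $k(\varepsilon)$ and carries out the $k$ successive projections onto the lines $\vee h_1,\dots ,\vee h_k$ inside the plane $\operatorname{span}\{u,v\}$, identified with $\mathbb{R}^2$.

```python
import math

def k_of_eps(eps):
    """Smallest positive integer k with (cos(pi/(2k)))**k > 1 - eps."""
    k = 1
    while math.cos(math.pi / (2 * k)) ** k <= 1 - eps:
        k += 1
    return k

def project_onto_line(x, h):
    # orthogonal projection of x onto the line spanned by the unit vector h
    t = x[0] * h[0] + x[1] * h[1]
    return (t * h[0], t * h[1])

eps = 0.05
k = k_of_eps(eps)

# identify u = (1, 0) and v = (0, 1); h_j makes angle pi*j/(2k) with u
x = (1.0, 0.0)
for j in range(1, k + 1):
    angle = math.pi * j / (2 * k)
    x = project_onto_line(x, (math.cos(angle), math.sin(angle)))

# each projection scales by cos(pi/(2k)), so x = (cos(pi/(2k)))**k * v
error = math.hypot(x[0], x[1] - 1.0)
```

Here $\mathrm{error} = 1 - \big(\cos\frac{\pi}{2k}\big)^k < \varepsilon$, in agreement with \eqref{consecutive projections}.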
Therefore, instead of projecting onto several subspaces to get from $u$ to $v$, we only need to project onto three of them, $W$, $X$, and $Y$, finitely many times. \begin{lemma} \label{quarter circle} Let $\varepsilon >0$, and recall that $k=k(\varepsilon)$ is the smallest positive integer $k$ such that $\big{(}\cos\frac{\pi}{2k}\big{)}^k > 1 -\varepsilon$. There exists $\varphi \in \mathcal{S}_{k+1}$ with the following property. Suppose $X$ is a subspace of $H$ with $\dim X = \infty$, and $u, v \in X$ are vectors such that $\|u\| = \|v\| = 1$ and $u \perp v$. Let $W=u \vee v$. Then there exist subspaces $X_1 \subset \dots \subset X_{k(\varepsilon )} \subset X$ such that $\dim X_j = j+1$ for all $j \in \{1,\dots ,k \}$, and \begin{equation*} \| \varphi(P_W,P_{X_1},\dots ,P_{X_{k}})u - v\| < 2\varepsilon. \end{equation*} \end{lemma} \begin{proof} We pick orthonormal vectors $z_0,z_1,\dots ,z_{k-1} \in W^{\perp} \cap X$. We then construct inductively a sequence $\alpha_1>\dots >\alpha_{k-1}>\alpha_k = 0$, and a sequence of subspaces $X_1 \subset \dots \subset X_k \subset X$ in the following way. We pick $\alpha_0 \in (0,1)$ arbitrarily. Suppose that the numbers $\alpha_1>\dots > \alpha_{j-1}$ and the subspaces $X_1 \subset \dots \subset X_{j-1}$ have already been constructed for some $j \in \{1,\dots, k-1\}$. We then set \begin{equation*} X_j' = \bigvee \{h_0+\alpha_0z_0, h_1+\alpha_1z_1,\dots ,h_{j-1}+\alpha_{j-1}z_{j-1},h_j \}. \end{equation*} Since $z_i$ is orthogonal to $W$ for each $i\in \{0,\dots ,j-1\}$, we have \[W\cap X_j' = \vee h_j.\] Therefore by Theorem \ref{von neumann} (von Neumann), for each $x\in H$, we have \begin{equation*} (P_{X_j'}P_WP_{X_j'})^r x\to P_{\vee h_j}x \textnormal{ as } r\to\infty. \end{equation*} Since both $\vee h_j$ and $X_j'$ are finite-dimensional, both $P_{\vee h_j}$ and $P_{X_j'}P_WP_{X_j'}$ map into finite-dimensional subspaces of $H$.
Hence there exists $r(j) \in \mathbb{N}$ such that \begin{equation*} \|(P_{X_j'}P_WP_{X_j'})^{r(j)} - P_{\vee h_j}\| < \frac{\varepsilon}{k}. \end{equation*} Let $X_j = \bigvee \{h_0+\alpha_0z_0, h_1+\alpha_1z_1,\dots ,h_{j-1}+\alpha_{j-1}z_{j-1},h_j + \alpha_jz_j\}$. As $\alpha_j \to 0$, we have $P_{X_j} \to P_{X_j'}$ in norm, so we can pick $\alpha_j \in (0,\alpha_{j-1})$ small enough that \begin{equation} \label{small perturbation} \|(P_{X_j}P_WP_{X_j})^{r(j)} - P_{\vee h_j}\| < \frac{\varepsilon}{k}. \end{equation} Suppose we have constructed $X_1 \subset \dots \subset X_{k-1}$ and $\alpha_1>\dots >\alpha_{k-1}$ as above. We set $\alpha_k = 0$ and $X_k = X_k' = \bigvee \{h_0+\alpha_0z_0, h_1+\alpha_1z_1,\dots ,h_{k-1}+\alpha_{k-1}z_{k-1},h_k\}$. We now find $r(k) \in \mathbb{N}$ such that (\ref{small perturbation}) holds also for $j=k$. Let $\varphi \in \mathcal{S}_{k+1}$ and $\psi \in \mathcal{S}_k$ be given by \begin{align*} \varphi (c,b_1,\dots ,b_k) &= (b_kcb_k)^{r(k)}\dots (b_1cb_1)^{r(1)}, \\ \psi(a_1,\dots ,a_k) &= a_1\dots a_k. \end{align*} Then by Corollary \ref{bound projection word}, \begin{equation} \label{consecutive projections 2} \begin{aligned} &\| \varphi(P_W,P_{X_1},\dots ,P_{X_k}) - P_{\vee h_k}\dots P_{\vee h_1} \| \\ &= \|\psi\big{(}(P_{X_k}P_WP_{X_k})^{r(k)},\dots ,(P_{X_1}P_WP_{X_1})^{r(1)}\big{)} - \psi\big{(}P_{\vee h_k},\dots ,P_{\vee h_1}\big{)}\| \\ & \leq |\psi|\cdot \max_{1\leq j \leq k} \|(P_{X_j}P_WP_{X_j})^{r(j)} - P_{\vee h_j} \| \\ &< k \cdot \frac{\varepsilon}{k} = \varepsilon. \end{aligned} \end{equation} Hence, (\ref{consecutive projections}) and (\ref{consecutive projections 2}) give that \begin{equation*} \begin{aligned} &\|\varphi(P_W,P_{X_1},\dots ,P_{X_k})u - v\| \\ &\leq \| \varphi(P_W,P_{X_1},\dots ,P_{X_k})u - (P_{\vee h_k}\dots P_{\vee h_1})u\| + \|(P_{\vee h_k}\dots P_{\vee h_1})u - v\| \\ &< \varepsilon + \varepsilon = 2\varepsilon, \end{aligned} \end{equation*} completing the proof. We note that $\varphi$ does not depend on $X$, $u$, or $v$.
\end{proof} We have constructed above a family of $k$ finite-dimensional subspaces. We see in the next lemma that these can be replaced by projections onto just two subspaces: the largest subspace in the family and a small variation of it. \begin{lemma} \label{replace projection} Let $k \in \mathbb{N}$, $\varepsilon >0$, $\eta >0$, and $a>0$ be given. There exist natural numbers $a<s(k)<s(k-1)<\dots <s(1)$ with the following property. Suppose $X_1 \subset \dots \subset X_k \subset X \subset E$ are closed subspaces of $H$, with $X$ separable and $\dim (X^\perp \cap E) = \infty$. Then there exists a closed subspace $Y \subset E$ such that $X \cap Y = \{0\}$, $\|P_X-P_Y\|<\eta$, and \begin{equation*} \|(P_XP_YP_X)^{s(j)} - P_{X_j}\| < \varepsilon, \quad j \in \{1,\dots ,k\}. \end{equation*} \end{lemma} \begin{proof} We may assume that $0<\eta<1$; if the statement holds in this case, then it clearly holds for any $\eta >0$. We begin by fixing $0<\beta_{k+1}<\frac{\eta}{2}$, and choosing $s(k) > a$ large enough that $\frac{1}{(1+\beta_{k+1}^2)^{s(k)}} < \varepsilon$. We then inductively choose numbers $\beta_k,s(k-1),\beta_{k-1},s(k-2),\dots ,s(1),\beta_1$ such that \begin{equation} \label{inductive numbers} \begin{gathered} \beta_{k+1} > \beta_k >\dots >\beta_1 > 0, \\ a < s(k) < s(k-1) < \dots < s(1), \\ \frac{1}{(1+\beta_{j+1}^2)^{s(j)}} < \varepsilon \textnormal{ \, and \, } \Big{|}\frac{1}{(1+\beta_{j}^2)^{s(j)}} - 1\Big{|}< \varepsilon, \quad j \in \{1,\dots ,k\}. \end{gathered} \end{equation} We will show that these $s(j)$'s are as required. Since $X_1, \dots, X_k$ are closed subspaces of the separable Hilbert space $X$, they are themselves separable Hilbert spaces under the same norm. Recall that a Hilbert space is separable if and only if it has an at most countable orthonormal basis.
Hence we can find an at most countable orthonormal basis $\{e_i\}_{i\in I}$ in $X$ such that there are sets $\emptyset = I_0 \subset I_1 \subset \dots \subset I_k \subset I_{k+1} = I$ with the property that $\{e_i\}_{i\in I_j}$ is an orthonormal basis in $X_j$ for $j \in \{1,\dots ,k\}$. For $i \in I_j \setminus I_{j-1}$ (where $j \in \{1,\dots ,k+1\}$), we define $\gamma_i = \beta_j$. Since $\dim (X^\perp \cap E) = \infty$, we can find a set of orthonormal vectors $\{w_i\}_{i\in I}$ in $X^\perp \cap E$ indexed by $I$. Let $Y = \bigvee \{e_i + \gamma_iw_i : i\in I\}$. Then for $i \in I$, it is a simple check that \begin{equation*} e_i = \underbrace{\frac{e_i + \gamma_iw_i}{1+\gamma_i^2}}_{\in Y} + \underbrace{e_i - \frac{e_i + \gamma_iw_i}{1+\gamma_i^2}}_{\in Y^\perp}. \end{equation*} Hence $P_Ye_i = \frac{e_i + \gamma_iw_i}{1+\gamma_i^2}$. Therefore, since $e_i \in X$ and $w_i \in X^\perp$, \begin{equation*} (P_XP_YP_X)e_i = P_XP_Y(P_Xe_i) = P_X(P_Ye_i) = P_X\Big{(}\frac{e_i + \gamma_iw_i}{1+\gamma_i^2}\Big{)} = \frac{e_i}{1+\gamma_i^2}. \end{equation*} Hence, we have \begin{equation*} (P_XP_YP_X)^me_i = \frac{e_i}{(1+\gamma_i^2)^m}, \quad m\in\mathbb{N}. \end{equation*} Let $x \in X$.
Writing it as $x = \sum_{i\in I} a_ie_i$, we see that Lemma \ref{foundational}\ref{pythagoras}, $\|e_i\| = 1$, (\ref{inductive numbers}), and $\gamma_i = \beta_j$, give \begin{equation} \label{projections bound X} \begin{aligned} &\|(P_XP_YP_X)^{s(j)}x - P_{X_j}x\|^2 \\ &= \Big{\|} \sum_{i\in I} a_i \frac{e_i}{(1+\gamma_i^2)^{s(j)}} - \sum_{i\in I_j} a_ie_i \Big{\|}^2 \\ &= \Big{\|} \sum_{i\in I_j} a_ie_i\Big{(}-1+\frac{1}{(1+\gamma_i^2)^{s(j)}}\Big{)} + \sum_{i\in I\setminus I_j} a_i\frac{e_i}{(1+\gamma_i^2)^{s(j)}} \Big{\|}^2 \\ &= \sum_{i\in I_j} \Big{\|}a_ie_i\Big{(}-1+\frac{1}{(1+\gamma_i^2)^{s(j)}}\Big{)}\Big{\|}^2 + \sum_{i\in I\setminus I_j} \Big{\|}a_i\frac{e_i}{(1+\gamma_i^2)^{s(j)}} \Big{\|}^2 \\ &= \sum_{i\in I_j} |a_i|^2 \Big{(}1- \frac{1}{(1+\gamma_i^2)^{s(j)}}\Big{)}^2 + \sum_{i\in I\setminus I_j} |a_i|^2 \frac{1}{(1+\gamma_i^2)^{2s(j)}} \\ &\leq \sum_{i\in I_j} |a_i|^2 \varepsilon^2 + \sum_{i\in I\setminus I_j} |a_i|^2\varepsilon^2 \\ &= \varepsilon^2 \sum_{i\in I} |a_i|^2 = \varepsilon^2 \|x\|^2. \end{aligned} \end{equation} We note that $P_{X_j}P_X = P_{X_j}$ (since $X_j \subset X$), and recall that projections are idempotent. Hence, by (\ref{projections bound X}), we have that for any $z \in H$ and $j \in \{1,\dots ,k\}$, \begin{align*} \|(P_XP_YP_X)^{s(j)}z - P_{X_j}z\|^2 &= \|(P_XP_YP_X)^{s(j)}(P_Xz) - P_{X_j}(P_Xz)\|^2 \\ &\leq \varepsilon^2 \|P_Xz\|^2. \end{align*} Therefore, \begin{equation*} \|(P_XP_YP_X)^{s(j)} - P_{X_j}\| < \varepsilon, \quad j \in \{1,\dots ,k\}. \end{equation*} It remains to verify that $\|P_X-P_Y\| < \eta$, and that $X \cap Y = \{0\}$. For the latter, suppose that $z \in X \cap Y$. Then since $z \in X$, $z$ can be written as $\sum_{i\in I} b_ie_i$, and since $z \in Y$, $z$ can be written as $\sum_{i\in I} c_i(e_i+\gamma_iw_i)$ (where each $b_i,c_i \in \mathbb{F}$). 
Since each $w_i \in X^\perp$, and each $e_i \in X$, then for every $j \in I$, \begin{equation*} b_j = \langle z, e_j \rangle = \langle \sum_{i\in I} c_i(e_i+\gamma_iw_i), e_j \rangle = \langle \sum_{i\in I} c_ie_i,e_j \rangle = c_j. \end{equation*} Therefore, \[0 = z - z = \sum_{i\in I} b_i(e_i+\gamma_iw_i) - \sum_{i\in I} b_ie_i = \sum_{i\in I} b_i\gamma_iw_i.\] Hence $b_i = 0$ for every $i \in I$, and so $z = \sum_{i\in I} b_ie_i = 0$. Finally, we want to show that $\|P_X-P_Y\| < \eta$. It is known that if $U = \bigvee \{f_i : i\in I\}$ for some orthonormal set $\{f_i\}_{i \in I}$, we have $P_Ux = \sum_{i\in I} \langle x,f_i \rangle f_i $ for all $x \in H$. This, along with $0<\gamma_i\leq\beta_{k+1}<\frac{\eta}{2}<1$ and $|a-b|^2 \leq 2|a|^2 + 2|b|^2$, gives that for any $0\neq z \in H$, \begin{equation*} \begin{aligned} &\|P_Xz - P_Yz\|^2 \\ &= \Big{\|} \sum_{i\in I} \langle e_i,z \rangle e_i - \sum_{i\in I} \frac{e_i + \gamma_iw_i }{1+\gamma_i^2} \langle e_i+\gamma_iw_i,z \rangle \Big{\|}^2 \\ &\leq \sum_{i\in I} \frac{1}{(1+\gamma_i^2)^2} \big{|}\gamma_i^2\langle e_i,z\rangle - \gamma_i \langle w_i,z \rangle \big{|}^2 + \frac{\eta^2}{4} \sum_{i\in I} \big{|}\frac{1}{1+\gamma_i^2} \langle e_i + \gamma_iw_i,z \rangle \big{|}^2 \\ &\leq 2(\eta/2)^4\|z\|^2 + 2(\eta /2)^2\|z\|^2 + (\eta^2/4)\|z\|^2 \\ &< \eta^2 \|z\|^2. \end{aligned} \end{equation*} So indeed $\|P_X-P_Y\| < \eta$. \end{proof} As before, let $u$ and $v$ be orthonormal vectors, $W=u\vee v$, and $\varepsilon >0$. We proceed to make use of Lemmas \ref{quarter circle} and \ref{replace projection} to find a word $\psi$, and two (almost parallel) subspaces $X$ and $Y$, such that $\|\psi(P_W,P_X,P_Y)u - v\| < 3\varepsilon$. \begin{lemma} \label{word and almost parallel subspaces} For every $\varepsilon >0$, there exists $N=N(\varepsilon)$ such that for every $\eta>0$, there exists $\psi \in \mathcal{S}_3$ with $|\psi_1| \leq N$ that has the following property.
Let $X\subset E$ be subspaces of $H$ such that $X$ is separable and $\dim X = \dim (X^\perp \cap E) = \infty$. Let $u,v \in X$ be vectors such that $\|u\|=\|v\|=1$ and $u\perp v$. Let $W = u \vee v$. Then there exists a subspace $Y \subset E$ such that $X \cap Y = \{0\}$, $\|P_X-P_Y\|<\eta$, and \begin{equation*} \|\psi(P_W,P_X,P_Y)u - v\| < 3\varepsilon. \end{equation*} \end{lemma} \begin{proof} Let $\varepsilon >0$ and $\eta >0$ be given. Let $\varphi \in \mathcal{S}_{k(\varepsilon)+1}$ be as in Lemma \ref{quarter circle}, and let $N = |\varphi_1|$. Since $\dim X = \infty$, $\|u\|=\|v\|=1$, and $u\perp v$, we can apply Lemma \ref{quarter circle} to see that there exist subspaces $X_1 \subset \dots \subset X_{k(\varepsilon)} \subset X$ such that \begin{equation*} \| \varphi(P_W,P_{X_1},\dots ,P_{X_{k(\varepsilon)}})u - v\| < 2\varepsilon. \end{equation*} For $k=k(\varepsilon)$, the given $\eta$, $a=1$, and $\varepsilon$ replaced by $\frac{\varepsilon}{|\varphi|}$, we choose natural numbers $s(k) < s(k-1) < \dots < s(1)$ as in Lemma \ref{replace projection}. Since $X$ is separable and $\dim (X^\perp \cap E) = \infty$, Lemma \ref{replace projection} gives that there exists a subspace $Y$ of $E$ such that $X \cap Y = \{0\}$, $\|P_X-P_Y\|<\eta$ and for each $j\in \{1,\dots ,k\}$, \begin{equation*} \|(P_XP_YP_X)^{s(j)} - P_{X_j}\| < \frac{\varepsilon}{|\varphi|}. \end{equation*} We then define $\psi$ to be $\varphi$, but with $a_i$ replaced by $(a_2a_3a_2)^{s(i-1)}$ for each $i\in \{2,\dots ,k+1\}$, so that \begin{equation*} \psi(P_W,P_X,P_Y) = \varphi(P_W,(P_XP_YP_X)^{s(1)},\dots ,(P_XP_YP_X)^{s(k)}). \end{equation*} It is simple to see that $|\psi_1| = |\varphi_1| = N$.
Finally, by Corollary \ref{bound projection word}, \begin{align*} & \|\psi(P_W,P_X,P_Y)u - v\| \\ &= \|\varphi(P_W,(P_XP_YP_X)^{s(1)},\dots ,(P_XP_YP_X)^{s(k)})u - v\| \\ & \leq \| \varphi(P_W,(P_XP_YP_X)^{s(1)},\dots ,(P_XP_YP_X)^{s(k)})u - \varphi(P_W,P_{X_1},\dots ,P_{X_k})u \| \\ & \quad + \|\varphi(P_W,P_{X_1},\dots ,P_{X_k})u - v\| \\ & \leq |\varphi| \cdot \frac{\varepsilon}{|\varphi|} + 2\varepsilon = 3\varepsilon. \end{align*} This concludes the proof. \end{proof} We may now make use of Corollary \ref{contraction inequality corollary} to show that we in fact have some freedom in our choice of $W$, $X$, and $Y$ above. \begin{lemma} \label{freedom choice spaces} For every $\varepsilon>0$, there exists $\delta = \delta(\varepsilon)$ such that for every $\eta >0$, there exists $\psi \in \mathcal{S}_3$ with the following property. Let $X\subset E$ be subspaces of $H$ such that $X$ is separable and $\dim X=\dim (X^\perp \cap E) = \infty$. Let $u,v \in X$ be vectors such that $\|u\|=\|v\|=1$ and $u\perp v$. Let $W=u\vee v$. Then there exists a subspace $Y\subset E$ such that $X \cap Y = \{0\}$ and $\|P_X - P_Y\|<\eta$ with the following property. If $X',Y',Z$ are subspaces such that $X',Y' \perp E$ and $\|P_W-P_ZP_E\|<\delta$, then \begin{equation*} \|\psi(P_Z,P_{X\vee X'},P_{Y\vee Y'})u - v\| < 4\varepsilon. \end{equation*} \end{lemma} \begin{proof} Given $\varepsilon >0$, we pick $N \in \mathbb{N}$ as in Lemma \ref{word and almost parallel subspaces}, and let $\delta = \frac{\varepsilon}{N}$. For these $\varepsilon$ and $N$, and a given $\eta > 0$, we choose $\psi$ according to Lemma \ref{word and almost parallel subspaces}. For a given subspace $X$, we also choose $Y$ according to this lemma. Let $X',Y',Z$ be as above.
Then applying both Corollary \ref{contraction inequality corollary} and Lemma \ref{word and almost parallel subspaces}, we have \begin{align*} & \|\psi(P_Z,P_{X\vee X'},P_{Y\vee Y'})u - v\| \\ & \leq \|\psi(P_Z,P_{X\vee X'},P_{Y\vee Y'})u - \psi(P_W,P_X,P_Y)u\| + \|\psi(P_W,P_X,P_Y)u - v\| \\ &\leq |\psi_1| \cdot \|P_W-P_ZP_E\| + 3\varepsilon \\ &\leq N\delta + 3\varepsilon = 4\varepsilon. \qedhere \end{align*} \end{proof} \subsection{`Gluing' together the triples} The last step in proving Theorem \ref{big result kopecka} uses Lemma \ref{freedom choice spaces} to show that given an orthonormal set $\{e_i\}_{i=1}^\infty$ with an infinite-dimensional orthogonal complement, we can construct three closed subspaces $X,Y,Z$ of $H$ and words $\Psi^{(i)}$ such that $\Psi^{(i)}(P_Z,P_X,P_Y)e_i$ is close to $e_{i+1}$ for every $i \in \mathbb{N}$. Kopeck\'a and Paszkiewicz refer to this as `gluing' together countably many of the triples $W$, $X$, and $Y$ considered in Section \ref{triples} \cite{KoPa17}. \begin{lemma} \label{almost there eva} For any $\varepsilon_i >0$ where $i\in\mathbb{N}$, there exists $\Psi^{(i)} \in \mathcal{S}_3$ with the following property. Suppose $\{e_i\}_{i=1}^{\infty}$ is an orthonormal set in $H$ with an infinite-dimensional orthogonal complement. Then there are three closed subspaces $X,Y,Z$ of $H$ such that \begin{equation} \label{Psi projection inequality} \|\Psi^{(i)}(P_Z,P_X,P_Y)e_i - e_{i+1}\| < 4\varepsilon_i, \quad i\in \mathbb{N}. \end{equation} \end{lemma} \begin{proof} For each $\varepsilon_i >0$ ($i \in \mathbb{N}$), we define $\delta_i = \delta(\varepsilon_i)$ as in Lemma \ref{freedom choice spaces}. We set $\delta_0 = 1$ and $\eta_i = \frac{1}{2}\min\{\delta_{i-1},\delta_{i+1}\}$, and choose $\psi^{(i)} \in S_3$ as in Lemma \ref{freedom choice spaces}. We define $\Psi^{(i)}$ as follows, \begin{equation*} \Psi^{(i)}(P_Z,P_X,P_Y) = \begin{cases} \psi^{(i)}(P_Z,P_X,P_Y) & \text{if $i$ is even},\\ \psi^{(i)}(P_Y,P_X,P_Z) & \text{if $i$ is odd}. 
\end{cases} \end{equation*} We begin by finding, for each $i \in \mathbb{N}$, closed infinite-dimensional subspaces $E_i$ of $H$ such that \begin{equation} \label{finding ei} \begin{gathered} e_i,e_{i+1} \in E_i, \\ P_{\vee e_{i+1}} = P_{E_i}P_{E_{i+1}} = P_{E_{i+1}}P_{E_i}, \\ E_i \perp E_j \textnormal{ if } |i-j| \geq 2. \end{gathered} \end{equation} We first note that since $\{e_i\}_{i=1}^\infty$ has an infinite-dimensional orthogonal complement, we may find an orthonormal set $\{f_i\}_{i=1}^\infty$ such that $e_i \perp f_j$ for every $i,j \in \mathbb{N}$. We then consider the infinite-dimensional spaces \[F_k = \bigvee \big{\{}f_i : i=(p_k)^r \textnormal{ for some } r\in\mathbb{N} \big{\}} ,\] where $p_k$ is the $k$\textsuperscript{th} prime number. We set \[E_{i} = \langle e_{i} \rangle \oplus \langle e_{i+1} \rangle \oplus F_{i},\] and note these $E_i$ do indeed satisfy (\ref{finding ei}). For each $i \in \mathbb{N}$, we find a closed subspace $X_i \subset E_i$ such that $e_i,e_{i+1} \in X_i$ and $\dim X_i = \dim(X_i^\perp \cap E_i) = \infty$. We then have $X_i = \langle e_{i} \rangle \oplus \langle e_{i+1} \rangle \oplus \widetilde{F_{i}}$ for some closed infinite-dimensional subspace $\widetilde{F_{i}} \subset F_i$, and \begin{equation*} \label{subspace conditions} \begin{gathered} e_i,e_{i+1} \in X_i, \\ P_{\vee e_{i+1}} = P_{X_i}P_{X_{i+1}} = P_{X_{i+1}}P_{X_i}, \\ X_i \perp X_j \textnormal{ if } |i-j| \geq 2. \end{gathered} \end{equation*} By Lemma \ref{freedom choice spaces}, there exist closed subspaces $Y_i \subset E_i$ such that $\|P_{X_i} - P_{Y_i}\| < \eta_i$, and \begin{equation} \label{above lemma property} \|\psi^{(i)}(P_{Z_i},P_{X_i\vee X'},P_{Y_i\vee Y'})e_i - e_{i+1}\| < 4\varepsilon_i, \end{equation} whenever $W_i = e_i\vee e_{i+1}$, and $X',Y',Z_i$ are subspaces such that $X',Y' \subset E_i^\perp$ and $\|P_{W_i} - P_{Z_i}P_{E_i}\|<\delta_i$. 
We now set $Y_0 = \vee e_1$ and \begin{equation*} X = \bigvee_{i \in \mathbb{N}} X_i, \quad Y= \bigvee_{i\in\mathbb{N}_{\geq0}} Y_{2i}, \quad Z = \bigvee_{i\in\mathbb{N}_{\geq0}} Y_{2i+1}. \end{equation*} Then setting \[X_i' = \widetilde{F_{i-1}} \vee \widetilde{F_{i+1}} \vee \bigvee_{\substack{j\in\mathbb{N} \\ j\notin \{i-1,i,i+1\}}}X_j,\]we have $X_i' \perp E_i$ and $X = X_i \vee X_i'$. We proceed to show (\ref{Psi projection inequality}) by considering the cases where $i$ is even and odd separately. Suppose first that $i$ is even. Then, arguing as above, we can find a subspace $Y_i'$ of $H$ such that $Y_i' \perp E_i$ and $Y=Y_i \vee Y_i'$. We note that $P_ZP_{E_i} = P_{Y_{i-1} \vee Y_{i+1}}P_{E_i}$, $P_{W_i} = P_{X_{i-1} \vee X_{i+1}}P_{E_i}$, $X_{i-1} \perp X_{i+1}$, and $Y_{i-1}\perp Y_{i+1}$. By Lemma \ref{foundational}\ref{sum projections closed}, for orthogonal closed subspaces $U,V$ of $H$, we have that $U+V = U\vee V$. Applying this, along with Lemma \ref{foundational}\ref{adding projections}, we have \begin{equation*} \begin{aligned} \|P_{W_i} - P_{Z}P_{E_i}\| &= \|P_{X_{i-1} \vee X_{i+1}}P_{E_i} - P_{Y_{i-1} \vee Y_{i+1}}P_{E_i}\| \\ &= \|P_{X_{i-1} + X_{i+1}}P_{E_i} - P_{Y_{i-1} + Y_{i+1}}P_{E_i}\| \\ &= \|(P_{X_{i-1}} + P_{X_{i+1}})P_{E_i} - (P_{Y_{i-1}} + P_{Y_{i+1}})P_{E_i} \| \\ &= \|(P_{X_{i-1}} - P_{Y_{i-1}})P_{E_i} + (P_{X_{i+1}} - P_{Y_{i+1}})P_{E_i} \| \\ &\leq \|P_{X_{i-1}} - P_{Y_{i-1}}\| + \|P_{X_{i+1}} - P_{Y_{i+1}} \| < \eta_{i-1} + \eta_{i+1} \leq \delta_i. \end{aligned} \end{equation*} Hence, by (\ref{above lemma property}), \begin{equation*} \|\Psi^{(i)}(P_{Z},P_{X},P_{Y})e_i - e_{i+1}\| = \|\psi^{(i)}(P_{Z},P_{X},P_{Y})e_i - e_{i+1}\| < 4\varepsilon_i. \end{equation*} If $i$ is odd, then we can find a subspace $Y_i'$ of $H$ such that $Y_i' \perp E_i$ and $Z=Y_i \vee Y_i'$. 
As above, we show that $\|P_{W_i} - P_{Y}P_{E_i}\| < \delta_i$, and so by (\ref{above lemma property}), \begin{align*} & \|\Psi^{(i)}(P_{Z},P_{X},P_{Y})e_i - e_{i+1}\| = \|\psi^{(i)}(P_{Y},P_{X},P_{Z})e_i - e_{i+1}\| < 4\varepsilon_i. \qedhere \end{align*} \end{proof} We are finally able to prove Theorem \ref{big result kopecka}, that a sequence of alternating projections may diverge. \begin{proof}[Proof of Theorem \ref{big result kopecka}] For $\varepsilon_i = 9^{-i}$ ($i \in \mathbb{N}$), we pick $\Psi^{(i)}$ as in Lemma \ref{almost there eva}. Let $e_1 = \frac{x_0}{\|x_0\|}$. Since $H$ is infinite-dimensional, we can extend $e_1$ to an orthonormal set $\{e_i\}_{i=1}^\infty$ with an infinite-dimensional orthogonal complement. We choose closed subspaces $X, Y, Z$ as in Lemma \ref{almost there eva}, renaming them $M_1,M_2$ and $M_3$ respectively. Let $A_k = \Psi^{(k)}(P_{M_3},P_{M_1},P_{M_2})$. We then have, for all $k \in \mathbb{N}$, \begin{equation} \label{KoPa last} \begin{aligned} &\|A_kA_{k-1}\dots A_1e_1 - e_{k+1}\| \\ & \leq \|A_kA_{k-1}\dots A_2(A_1e_1 - e_2)\| + \|A_kA_{k-1}\dots A_2 e_2- e_{k+1}\| \\ & < 4\varepsilon_1 + \|A_kA_{k-1}\dots A_2 e_2- e_{k+1}\| \\ & \leq 4\varepsilon_1 + \|A_kA_{k-1}\dots A_3(A_2e_2 - e_3)\| + \|A_kA_{k-1}\dots A_3e_3- e_{k+1}\| \\ & < 4\varepsilon_1 + 4\varepsilon_2 + \|A_kA_{k-1}\dots A_3e_3- e_{k+1}\| \\ &\vdotswithin{=} \\ &< 4\varepsilon_1 + 4\varepsilon_2 + \dots + 4\varepsilon_{k-1} + \|A_ke_k - e_{k+1}\| \\ &< 4(9^{-1} + \dots + 9^{-k}) < 4 \sum_{j=1}^{\infty}\frac{1}{9^j} = \frac{1}{2}. \end{aligned} \end{equation} By construction, each $A_k$ is some product of orthogonal projections onto $M_1$, $M_2$ or $M_3$. Let $n_k$ be the total number of projections in the product $A_kA_{k-1}\dots A_1$. We define the sequence $(j_n)$ by letting $j_n$ take the value $i$ whenever the $n$\textsuperscript{th} projection in $A_kA_{k-1}\dots A_1$ is onto $M_i$. 
We define the sequence $(x_n)$ as in the statement of the theorem, so that $x_{n_k} = A_k \dots A_1 x_0$. We will now show that the subsequence $(x_{n_k})_{k\geq1}$ does not converge in norm. This then implies that $(x_n)$ does not converge in norm either, and we are done. We define the sequence \[(y_k) = \Big{(}\frac{x_{n_k}}{\|x_0\|}\Big{)} = (A_k\dots A_1e_1).\] By (\ref{KoPa last}), for each $k \in \mathbb{N}$, we have \[ \|y_k-e_{k+1}\| < \frac{1}{2} .\] Applying the reverse triangle inequality and the Cauchy-Schwarz inequality (and noting that $\|e_{k+1}\|=1$), we have \begin{equation*} \begin{aligned} |\langle y_k,e_{k+1} \rangle | &\geq |\langle e_{k+1},e_{k+1} \rangle| - |\langle y_k - e_{k+1},e_{k+1} \rangle| \\ &= 1 - |\langle y_k - e_{k+1},e_{k+1} \rangle| \\ &\geq 1 - \|y_k - e_{k+1}\| \\ &> 1- \frac{1}{2} = \frac{1}{2}. \end{aligned} \end{equation*} Now suppose for a contradiction that $y_k$ converges in norm to some limit $y$. Then there is some $m \in \mathbb{N}$ such that for $k\geq m$, \[ \|y_k-y\| < \frac{1}{4}. \] Again applying the reverse triangle inequality and the Cauchy-Schwarz inequality, we have that for $k \geq m$, \begin{equation*} \begin{aligned} |\langle y,e_{k+1} \rangle | &\geq |\langle y_k, e_{k+1} \rangle | - | \langle y - y_k,e_{k+1} \rangle| \\ & \geq |\langle y_k, e_{k+1} \rangle | - \|y-y_k\| \\ & > \frac{1}{2} - \frac{1}{4} = \frac{1}{4}. \end{aligned} \end{equation*} We therefore have, by Bessel's inequality, that \begin{equation*} \|y\|^2 \geq \sum_{k=1}^\infty |\langle y,e_k \rangle|^2 \geq \sum_{k=m}^\infty |\langle y,e_{k+1} \rangle|^2 \geq \sum_{k=m}^\infty \frac{1}{16} = \infty, \end{equation*} a contradiction. Hence $(y_k)$ does not converge in norm. Therefore $(x_{n_k})$ does not converge in norm, and so neither does $(x_n)$. This completes the proof. 
\end{proof} \subsection{An extension} In fact, Kopeck\'a and Paszkiewicz went on to prove that there exist three closed subspaces in $H$ such that for any non-zero vector $x_0 \in H$, there is some sequence of projections $(j_n)$ for which $(x_n)$ does not converge in norm \cite{KoPa17}. In particular, here we begin by choosing three subspaces, and then given a non-zero vector $x_0 \in H$, we find an appropriate sequence $(j_n)$. This is in contrast with Theorem \ref{big result kopecka}, where we first find a sequence $(j_n)$, and then given a non-zero vector $x_0 \in H$, we find appropriate subspaces. The main idea of the proof is showing the following. Suppose we have three closed subspaces $X_1,X_2,X_3 \subset H$, a non-zero vector $x_0 \in H$, and a sequence of projections $(j_n)$ such that $(x_n)$ does not converge in norm (which we know is possible by Theorem \ref{big result kopecka}). Then we may find a closed infinite-dimensional subspace $L$ of $H$ such that for every non-zero $y_0 \in L$, there exists a sequence $(k_n)$ taking values in $\{1,2,3\}$ such that the sequence given by \begin{equation*} y_n = P_{Y_{k_n}}y_{n-1}, \quad n\geq 1, \end{equation*} does not converge in norm, where $Y_i = X_i \cap L$ for $i \in \{1,2,3\}$. The proof is technical, non-constructive, and not directly relevant to our focus, so we omit it. We end the section with a brief remark. As mentioned in Section \ref{concluding remarks}, Sakai's paper \cite{Sak95} ends by posing the following open question. For an arbitrary sequence $s = (j_n)$, does (\ref{key inequality}) always hold with $A=J-1$? That is, does \begin{equation*} \|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2 \end{equation*} hold with $A=J-1$, and any $n \geq m \geq 1$? In light of Theorem \ref{big result kopecka}, this is easily resolved. We find a sequence $s$ for which (\ref{key inequality}) does not hold for any constant $A$. 
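Before stating the corollary, it may help to recall why an inequality of this form forces norm convergence. The following sketch is our own summary of the mechanism; see Lemma \ref{key result sakai} for the precise statement. It uses only the Pythagorean identity for orthogonal projections.

```latex
% Since x_{k+1} = P_{M_{j_{k+1}}} x_k and x_k - x_{k+1} \perp M_{j_{k+1}} \ni x_{k+1},
\|x_k\|^2 = \|x_{k+1}\|^2 + \|x_{k+1} - x_k\|^2 .
% Hence the norms \|x_n\| are nonincreasing and, telescoping,
\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^2 = \|x_0\|^2 - \lim_{n\to\infty} \|x_n\|^2 < \infty .
% If (\ref{key inequality}) held with some constant A, the tail sums would give
\|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k\|^2 \longrightarrow 0
\quad \text{as } m \to \infty ,
% so (x_n) would be a Cauchy sequence, and therefore convergent in norm.
```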
\begin{corollary} \label{Sakai open question} There exists a sequence $s$, and closed subspaces $M_1,M_2,M_3$ in $H$, such that there is no constant $A$ for which \begin{equation*} \|x_n - x_m\|^2 \leq A \sum_{k=m}^{n-1} \|x_{k+1} - x_k \|^2 \end{equation*} holds for any $n>m\geq1$. \end{corollary} \begin{proof} Let $s=(j_n)$ be the sequence in Theorem \ref{big result kopecka}. We pick a non-zero vector $x_0 \in H$, and choose $M_1$, $M_2$, and $M_3$ according to this theorem. Suppose for a contradiction that there is such a constant $A$. Then by Lemma \ref{key result sakai}, $(x_n)$ converges in norm, contradicting Theorem \ref{big result kopecka}. \end{proof} \newpage \section{Concluding remarks} \label{concluding remarks} There is a lot of interesting mathematics related to the method of alternating projections that we could not fit into this dissertation. Two main areas we have not covered are what happens when we have closed convex subsets instead of closed subspaces, and the rate of convergence in the method of alternating projections. \subsection{Closed convex subsets} There are many extensions of the method of alternating projections. These include considering closed convex subsets rather than closed subspaces, contractions rather than projections, or generalising results to Banach spaces with certain properties (for example, considering a uniformly convex Banach space instead of a Hilbert space; see \cite{BaLy10, BaSe17, BrRe77}). Here, we give a brief summary of known results when we have closed convex subsets. We begin by remarking that we can indeed define a projection $P$ onto a closed convex subset $C$ of $H$. By the Hilbert projection theorem, for any $x \in H$, there exists a unique $y \in C$ minimising $\|x-y\|$ over $C$. We define the projection $P_C$ of $x$ onto $C$ by $P_C(x) = y$. In 1965, Bregman proved that any sequence of periodic projections converges weakly to an element in the intersection of the closed convex subsets (assuming the intersection is non-empty) \cite{Bre65}. 
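Bregman's theorem is easy to observe numerically. The following sketch is our own illustration, not drawn from \cite{Bre65}; the half-planes, the starting point, and the helper name are arbitrary choices. It alternates projections between two closed half-planes of $\mathbb{R}^2$ and watches the iterates settle on a point of the intersection.

```python
import numpy as np

def proj_halfplane(p, a, b):
    """Orthogonal projection of p onto the closed half-plane {x : <a, x> <= b}."""
    excess = np.dot(a, p) - b
    if excess <= 0:
        return p                       # p already lies in the half-plane
    return p - excess * a / np.dot(a, a)

# C1 = {(x, y) : y <= 0},  C2 = {(x, y) : y <= x};  C1 ∩ C2 is a closed convex cone
a1 = np.array([0.0, 1.0])
a2 = np.array([-1.0, 1.0])

x = np.array([-1.0, 2.0])              # starting point x_0
for _ in range(20):                    # alternate: project onto C1, then C2
    x = proj_halfplane(proj_halfplane(x, a1, 0.0), a2, 0.0)

print(x)  # the iterates reach (-0.5, -0.5), a point of C1 ∩ C2
# Note: the nearest point of C1 ∩ C2 to x_0 = (-1, 2) is (0, 0), so the limit
# need not be the projection of x_0 onto the intersection.
```

Here the weak (indeed norm) limit $(-0.5,-0.5)$ lies in the intersection, consistent with Bregman's theorem, but it differs from the nearest point of the intersection to the starting point.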
We note that the intersection of a finite number of closed convex subsets is also closed and convex. However, as opposed to the case of closed subspaces, the point we converge to need not be the projection onto the intersection of the closed convex subsets. We offer an example to illustrate this. \begin{figure}[H] \begin{center} \includegraphics[width=100mm]{figure8.pdf} \caption{An example of two closed convex subsets where $(x_n)$ does not converge to the projection of $x_0$ onto the intersection of the two subsets} \label{intersection projection} \end{center} \end{figure} We note that Bregman's result implies that we have convergence in norm for periodic projections when $H$ is finite-dimensional, since in this case convergence in norm and weak convergence are equivalent. In fact, an argument identical to our proof of Lemma \ref{Sakai mistake} shows that it is enough for just one of the closed convex subsets to be contained in a finite-dimensional subspace. As in the case of closed subspaces, it is natural to ask if we always have convergence in norm. In 2004, Hundal constructed an example of two closed convex subsets $C_1$ and $C_2$, intersecting only at the origin, such that for a suitable starting point $x_0$, the sequence $((P_{C_2}P_{C_1})^n x_0)$ converges weakly to $0$ (by Bregman's result) but does not converge in norm \cite{Hun04}. In fact, this construction was an important input towards Paszkiewicz's construction of five subspaces, a non-zero vector $x_0 \in H$, and a sequence $(j_n)$ such that $(x_n)$ does not converge in norm \cite{KoPa17, Pas12}. \subsection{Rates of convergence} Let $H=\mathbb{R}^2$, and let $\theta \in (0,\pi/2)$ be fixed. We consider the two closed subspaces \begin{align*} M_1 &= \big{\{}(x,0) \in \mathbb{R}^2 : x \in \mathbb{R}\big{\}}, \\ M_2 &= \big{\{}(t\cos \theta,t \sin \theta) \in \mathbb{R}^2 : t \in \mathbb{R}\big{\}}. \end{align*} Our example in the introduction is the case $\theta = \pi/4$. 
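The effect of the angle on the speed of convergence is easy to check numerically. The sketch below is our own illustration (the helper name and the choice $\theta=\pi/3$ are arbitrary): for two lines through the origin separated by an angle $\theta$, each application of $P_{M_2}P_{M_1}$ shrinks the norm of the iterate by exactly the factor $\cos^2\theta$, so a larger angle gives faster convergence to $M=\{0\}$.

```python
import numpy as np

def line_proj(angle):
    """Matrix of the orthogonal projection onto the line through 0 at the given angle."""
    u = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(u, u)              # P = u u^T for a unit vector u

theta = np.pi / 3                      # angle between the two lines
T = line_proj(theta) @ line_proj(0.0)  # one sweep: project onto M_1, then M_2

x = np.array([1.0, 2.0])
norms = []
for _ in range(6):
    x = T @ x
    norms.append(np.linalg.norm(x))

ratios = [b / a for a, b in zip(norms, norms[1:])]
print(ratios)  # every ratio is cos(theta)**2 = 0.25 (up to rounding)
```

A short computation confirms this: after the first sweep the iterate lies on $M_2$, and each further sweep multiplies it by $\cos^2\theta$.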
Looking at Figure \ref{alternating projections}, it is not surprising that if we increase the angle $\theta$, we converge faster to $M = M_1\cap M_2 = \{0\}$. What may be more surprising is that we may extend this idea to define the notion of an angle between subspaces. The Friedrichs angle between two closed subspaces $M_1$ and $M_2$ of $H$ is defined to be the angle in $[0,\frac{\pi}{2}]$ whose cosine is given by \begin{equation*} c(M_1,M_2) = \sup \{ |\langle x,y \rangle | : x \in M_1 \cap M^\perp \cap B_H, y\in M_2 \cap M^\perp \cap B_H \}. \end{equation*} It is known that for all $n\geq1$, we have \begin{equation*} \|(P_{M_2}P_{M_1})^n - P_M\| = c(M_1,M_2)^{2n-1}. \end{equation*} The upper bound was proved by Aronszajn \cite{Aro50}, and equality by Kayalar and Weinert \cite{KaWe88}. Hence, letting $T=P_{M_2}P_{M_1}$, we have that $T^n$ converges uniformly (in operator norm) to $P_M$ if and only if $c(M_1,M_2)<1$ (i.e. the Friedrichs angle between $M_1$ and $M_2$ is positive). When this happens, $T^n$ converges uniformly to $P_M$ at a geometric rate, in the sense that there exist $C \geq 0 $ and $\alpha \in (0,1)$ such that \begin{equation*} \|T^n - P_M\| \leq C\alpha^n, \quad n\geq 1. \end{equation*} It turns out that $c(M_1,M_2)=1$ can only happen in an infinite-dimensional space. For $c(M_1,M_2)=1$, we do not have uniform convergence, but we still have strong convergence (for every $x \in H$, $\|T^nx - P_Mx\| \to 0$) by Theorem \ref{von neumann} (von Neumann). In 2009, Bauschke, Deutsch and Hundal \cite{BaDeHu09} proved that in this case, convergence is arbitrarily slow, in the sense that for any monotonically decreasing sequence $(\lambda_n)$ in $[0,1]$ tending to $0$, there exists $x_\lambda \in H$ such that \begin{equation*} \|T^n(x_\lambda) - P_M(x_\lambda)\| \geq \lambda_n, \quad n\geq 1. 
\end{equation*} Hence we have a dichotomy: \begin{equation*} \begin{aligned} &c(M_1,M_2) < 1 \implies \textnormal{ convergence at a uniform geometric rate}, \\ &c(M_1,M_2) = 1 \implies \textnormal{ arbitrarily slow convergence}. \end{aligned} \end{equation*} In 2012, Badea, Grivaux and M\"uller \cite{BaGrMu12} extended the notion of Friedrichs angle and the results discussed above to the case of $J\geq2$ closed subspaces $M_1,\dots,M_J$. In particular, the same dichotomy still holds, except with $c(M_1,M_2)$ replaced by $c(M_1,\dots, M_J)$. The most recent result concerning the rate of convergence is the following. Let $M = \bigcap_{j=1}^J M_j$ be the intersection of $J$ closed subspaces, and $T=P_{M_J}\dots P_{M_1}$. In 2017, Badea and Seifert \cite{BaSe16} proved that there exists a dense subspace $H_0$ of $H$ such that for any $x_0 \in H_0$, we have \begin{equation*} \|T^n(x_0) - P_M(x_0)\| = o(n^{-k}) \quad \textnormal{for every } k\geq1. \end{equation*} They referred to this as `super-polynomially fast' convergence \cite{BaSe16}. Their result tells us that, given $\varepsilon>0$, even in the bad case where $c(M_1,\dots, M_J)=1$, if we pick an initial point where we have slow convergence, we are a distance of less than $\varepsilon$ away from a point where we have super-polynomially fast convergence. For applications, it would be useful to get a better idea of where the points (elements of $H$) that give fast and slow convergence are located. However, fairly little is known about this. Nevertheless, there is a conjecture by Deutsch and Hundal as to where points that give slow convergence can be found \cite{DeHu10}. That paper establishes equivalent conditions for $c(M_1,\dots,M_J)<1$, from which it follows that \begin{equation*} c(M_1,\dots, M_J) = 1 \iff \sum_{j=1}^J M_j^\perp \textnormal{ is not closed in } H. 
\end{equation*} In this case, we know that given a monotonically decreasing sequence $(\lambda_n)$ in $[0,1]$ tending to $0$, there exists $x_\lambda \in H$ such that \begin{equation*} \|T^n(x_\lambda) - P_M(x_\lambda)\| \geq \lambda_n, \quad n\geq 1. \end{equation*} Deutsch and Hundal's conjecture is that for $(\lambda_n)$ tending to $0$ sufficiently slowly, \begin{equation*} x_\lambda \in M^\perp \setminus \sum_{j=1}^J M_j^\perp. \end{equation*} This would be useful in knowing how to avoid points where we have slow convergence, but it remains to be seen whether the conjecture is true. \newpage \subsection{Conclusion} In this dissertation, we presented proofs of some well-known results concerning the method of alternating projections. These include an original proof of von Neumann's theorem \cite{von49}, a clarification of a remark in Sakai's paper \cite{Sak95}, and a simplification of Amemiya and Ando's proof \cite{AmAn65} in the case of orthogonal projections. The key results are that $(x_n)$ always converges weakly; that $(x_n)$ converges in norm when $(j_n)$ is quasiperiodic (and in particular periodic); and that there is a sequence $(j_n)$ such that, for any given non-zero vector $x_0 \in H$, we may find three closed subspaces intersecting only at the origin for which $(x_n)$ does not converge in norm. We are not aware of any further results on the convergence of $(x_n)$ beyond those mentioned in this dissertation. In particular, given a sequence $(j_n)$ that is not quasiperiodic, and with none of the closed subspaces $M_j$ finite-dimensional, no known results determine whether $(x_n)$ converges in norm. Whether more can be said about the convergence of $(x_n)$ in the future remains to be seen. \newpage \renewcommand{\abstractname}{Acknowledgements} \begin{abstract} \thispagestyle{plain} I would like to thank David Seifert for sparking my interest in the method of alternating projections, and for taking the time to supervise my dissertation. 
\end{abstract} \newpage \nocite{*}
https://arxiv.org/abs/1404.5886
Log-concavity and strong log-concavity: a review
We review and formulate results concerning log-concavity and strong-log-concavity in both discrete and continuous settings. We show how preservation of log-concavity and strongly log-concavity on $\mathbb{R}$ under convolution follows from a fundamental monotonicity result of Efron (1969). We provide a new proof of Efron's theorem using the recent asymmetric Brascamp-Lieb inequality due to Otto and Menz (2013). Along the way we review connections between log-concavity and other areas of mathematics and statistics, including concentration of measure, log-Sobolev inequalities, convex geometry, MCMC algorithms, Laplace approximations, and machine learning.
\section{Introduction: log-concavity} \label{sec:intro} Log-concave distributions and various properties related to log-concavity play an increasingly important role in probability, statistics, optimization theory, econometrics and other areas of applied mathematics. In view of these developments, the basic properties and facts concerning log-concavity deserve to be more widely known in both the probability and statistics communities. Our goal in this survey is to review and summarize the basic preservation properties which make the classes of log-concave densities, measures, and functions so important and useful. In particular we review preservation of log-concavity and ``strong log-concavity'' (to be defined carefully in section~\ref{sec:basicDefinitions}) under marginalization, convolution, formation of products, and limits in distribution. The corresponding notions for discrete distributions (log-concavity and ultra log-concavity) are also reviewed in section~\ref{Sec:DiscreteLogConc}. A second goal is to acquaint our readers with a useful monotonicity theorem for log-concave distributions on $\mathbb{R}$ due to \cite{MR0171335}, and to briefly discuss connections with recent progress concerning ``asymmetric'' Brascamp-Lieb inequalities. Efron's theorem is reviewed in Section~\ref{subsec:EfronMonoThm}, and further applications are given in the rest of Section~\ref{sec:EfronsTheoremOneDimension}. There have been several reviews of developments connected to log-concavity in the mathematics literature, most notably \cite{MR588074} and \cite% {MR1898210}. We are not aware of any comprehensive review of log-concavity in the statistics literature, although there have been some review type papers in econometrics, in particular \cite{An:98} and \cite{Bagnoli:95}. Given the pace of recent advances, it seems that a review from a statistical perspective is warranted. 
Several books deal with various aspects of log-concavity: the classic books by \cite{MR552278} (see also \cite{MR2759813}) and \cite{MR954608} both cover aspects of log-concavity theory, but from the perspective of majorization in the first case, and a perspective dominated by unimodality in the second case. Neither treats the important notion of strong log-concavity. The recent book by \cite{MR2814377} perhaps comes closest to our current perspective with interesting previously unpublished material from the papers of Brascamp and Lieb in the 1970's and a proof of the Brascamp and Lieb result to the effect that strong log-concavity is preserved by marginalization. Unfortunately Simon does not connect with recent terminology and other developments in this regard and focuses on convexity theory more broadly. \cite{MR1964483} (chapter 6) gives a nice treatment of the Brunn-Minkowski inequality and related results for log-concave distributions and densities with interesting connections to optimal transportation theory. His chapter 9 also gives a nice treatment of the connections between log-Sobolev inequalities and strong log-concavity, albeit with somewhat different terminology. \cite% {MR1849347} is, of course, a prime source for material on log-Sobolev inequalities and strong log concavity. The nice book on stochastic programming by \cite% {MR1375234} has its chapter 4 devoted to log-concavity and $s-$concavity, but has no treatment of strong log-concavity or inequalities related to log-concavity and strong log-concavity. In this review we will give proofs some key results in the body of the review, while proofs of supporting results are postponed to Section~\ref{sec:Proofs} (Appendix B). \section{Log-concavity and strong log-concavity: definitions and basic results} \label{sec:basicDefinitions} We begin with some basic definitions of log-concave densities and measures on $\mathbb{R}^d$. 
\begin{definition} (0-d): A density function $p$ with respect to Lebesgue measure $\lambda$ on $% (\mathbb{R}^d, \mathcal{B}^d)$ is log-concave if $p = e^{-\varphi}$ where $% \varphi$ is a convex function from $\mathbb{R}^d$ to $(-\infty, \infty]$. Equivalently, $p$ is log-concave if $p = \exp (\tilde{\varphi} )$ where $% \tilde{\varphi} = - \varphi $ is a concave function from $\mathbb{R}^d$ to $% [-\infty, \infty)$. \end{definition} We will usually adopt the convention that $p$ is lower semi-continuous and $% \varphi = - \log p$ is upper semi-continuous. Thus $\{x \in \mathbb{R}^d : \ p(x) > t \}$ is open, while $\{ x \in \mathbb{R}^d : \ \varphi(x) \le t \}$ is closed. We will also say that a non-negative and integrable function $f $ from $\mathbb{R}^d$ to $[0,\infty)$ is log-concave if $f = e^{-\varphi}$ where $\varphi $ is convex even though $f$ may not be a density; that is $\int_{\mathbb{R}^d} f d \lambda \in (0,\infty)$. Many common densities are log-concave; in particular all Gaussian densities \begin{eqnarray*} p_{\mu, \Sigma} (x) = (2 \pi | \Sigma |)^{-d/2} \exp \left ( - \frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu ) \right ) \end{eqnarray*} with $\mu \in \mathbb{R}^d$ and $\Sigma$ positive definite are log-concave, and \begin{eqnarray*} p_C(x) = 1_C (x) / \lambda (C) \end{eqnarray*} is log-concave for any non-empty, open and bounded convex subset $C \subset \mathbb{R}^d$. With $C$ open, $p$ is lower semi-continuous in agreement with our convention noted above; of course taking $C$ closed leads to upper semi-continuity of $% p $. In the case $d=1$, log-concave functions and densities are related to several other important classes. The following definition goes back to the work of P\'olya and Schoenberg. \begin{definition} \label{PolyaFrequencyOfOrderK} Let $p$ be a function on $\mathbb{R}$ (or some subset of $\mathbb{R}$), and let $x_1 < \cdots < x_k$, $y_1 < \cdots < y_k$. 
Then $p$ is said to be a P\'olya frequency function of order $k$ (or $% p \in PF_k$) if $\mbox{det} ( p(x_i - y_j ) ) \ge 0$ for all such choices of the $x$'s and $y$'s. If $p$ is $PF_k$ for every $k$, then $p \in PF_{\infty}$% , the class of P\'olya frequency functions of order $\infty$. \end{definition} A connecting link to P\'olya frequency functions and to the notion of monotone likelihood ratios, which is of some importance in statistics, is given by the following proposition: \begin{proposition} \label{LogConcaveEqualsPF2} $\phantom{blab}$\newline (a) The class of log-concave functions on $\mathbb{R}$ coincides with the class of P\'olya frequency functions of order $2$.\newline (b) A density function $p$ on $\mathbb{R}$ is log-concave if and only if the translation family $\{ p( \cdot - \theta ) : \ \theta \in \mathbb{R} \}$ has monotone likelihood ratio: i.e. for every $\theta_1 < \theta_2 $ the ratio $% p(x-\theta_2)/p(x-\theta_1)$ is a monotone nondecreasing function of $x$. \end{proposition} \begin{proof} See Section~\ref{sec:Proofs}. \end{proof} \smallskip \begin{definition} (0-m): A probability measure $P$ on $(\mathbb{R}^d, \mathcal{B}^d)$ is log-concave if for all non-empty sets $A,B \in \mathcal{B}^d$ and for all $0 < \theta < 1$ we have \begin{eqnarray*} P( \theta A + (1-\theta) B) \ge P(A)^{\theta} P(B)^{1-\theta} . \end{eqnarray*} \end{definition} It is well-known that log-concave measures have sub-exponential tails, see \cite{MR733944} and Section \ref{section_reg} below. To accommodate densities having tails heavier than exponential, the classes of $s-$concave densities and measures are of interest. 
\begin{definition} \label{defn:Sconcave} (s-d): A density function $p$ with respect to Lebesgue measure $\lambda $ on an convex set $C \subset \mathbb{R}^d$ is $s-$concave if \begin{eqnarray*} p( \theta x + (1-\theta) y) \ge M_s (p(x), p(y); \theta ) \end{eqnarray*} where the generalized mean $M_s (u,v; \theta)$ is defined for $u,v\ge 0$ by \begin{eqnarray*} M_s (u,v; \theta) \equiv \left \{ \begin{array}{ll} (\theta u^s + (1-\theta) v^s )^{1/s}, & s \not= 0, \\ u^{\theta} v^{1-\theta}, & s = 0, \\ \mbox{min} \{ u, v \}, & s = -\infty, \\ \mbox{max} \{ u,v\} , & s = + \infty.% \end{array} \right . \end{eqnarray*} \end{definition} \begin{definition} (s-m): A probability measure $P $ on $(\mathbb{R}^d, \mathcal{B}^d)$ is $s-$% concave if for all non-empty sets $A,B $ in $\mathcal{B}^d$ and for all $% \theta \in (0,1)$, \begin{eqnarray*} P( \theta A + (1-\theta ) B) \ge M_s (P(A), P(B); \theta ) \end{eqnarray*} where $M_s (u,v; \theta)$ is as defined above. \end{definition} These classes of measures and densities were studied by \cite{MR0404557} in the case $s=0$ and for all $s \in \mathbb{R}$ by \cite{MR0450480}, \cite{MR0404559}, \cite{MR0388475}, and \cite{MR0428540}. The main results concerning these classes are nicely summarized by \cite% {MR954608}; see especially sections 2.3-2.8 (pages 46-66) and section 3.3 (pages 84-99). In particular we will review some of the key results for these classes in the next section. For bounds on densities of $s-$% concave distributions on $\mathbb{R}$ see \cite{Doss-Wellner:2013}; for probability tail bounds for $s-$concave measures on $\mathbb{R}^d$, see \cite{MR2510011}. For moment bounds and concentration inequalities for $s-$% concave distributions with $s<0$ see \cite{MR3005719} and \cite{Guedon:12}, section 3. 
A key theorem connecting probability measures to densities is as follows: \begin{theorem} \label{PrekopaRinott} Suppose that $P$ is a probability measure on $(\mathbb{R}^d, \mathcal{B}^d)$ such that the affine hull of $\mbox{supp} (P)$ has dimension $d$. Then $P$ is a log-concave measure if and only if it has a log-concave density function $p$ on $\mathbb{R}^d$; that is, $p = e^{\varphi}$ with $\varphi$ concave, and \begin{equation*} P(A) = \int_A p d \lambda \ \ \ \mbox{for} \ \ \ A \in \mathcal{B}^d . \end{equation*} \end{theorem} For the correspondence between $s-$concave measures and $t-$concave densities, see \cite{MR0404559}, \cite{MR0450480} section 3, \cite{MR0428540}, and \cite{MR954608}. \medskip One of our main goals here is to review and summarize what is known concerning the (smaller) classes of (what we call) \textsl{strongly log-concave} densities. This terminology is not completely standard. Other terms used for the same or essentially the same notion include: \begin{itemize} \item \textsl{Log-concave perturbation of Gaussian}: \cite{MR1964483}; \cite{MR1800860}, pages 290-291. \item \textsl{Gaussian weighted log-concave}: \cite{MR0450480}, pages 379, 381. \item \textsl{Uniformly convex potential}: \cite{MR1800062}, abstract and page 1034; \cite{MR2895086}, Section 7. \item \textsl{Strongly convex potential}: \cite{MR1800860}. \end{itemize} In the case of real-valued discrete variables the comparable notion is called \textsl{ultra log-concavity}; see e.g. \cite{MR1462561}, \cite{MR3030616}, and \cite{MR2327839}. We will re-visit the notion of ultra log-concavity in Section~\ref{Sec:DiscreteLogConc}. 
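As a numerical illustration of the measure-level definition (0-m) above (a sketch we add here, not taken from the cited sources), one can check the inequality $P(\theta A + (1-\theta)B) \ge P(A)^{\theta} P(B)^{1-\theta}$ for the standard Gaussian measure on $\mathbb{R}$ over intervals, whose Minkowski averages are again intervals:

```python
import math

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def P(a, b):
    """Standard Gaussian measure of the interval [a, b]."""
    return Phi(b) - Phi(a)

def minkowski(theta, A, B):
    """theta*A + (1-theta)*B for intervals A = (a1, a2), B = (b1, b2)."""
    return (theta * A[0] + (1 - theta) * B[0],
            theta * A[1] + (1 - theta) * B[1])

intervals = [(-2.0, -1.0), (-0.5, 0.5), (1.0, 3.0), (0.0, 0.1)]
ok = True
for A in intervals:
    for B in intervals:
        for theta in (0.25, 0.5, 0.75):
            C = minkowski(theta, A, B)
            ok &= P(*C) >= P(*A)**theta * P(*B)**(1 - theta) - 1e-12
print(ok)
```

By Theorem~\ref{PrekopaRinott} this is exactly what log-concavity of the Gaussian density guarantees at the level of the measure.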
\smallskip Our choice of terminology is motivated in part by the following definition from convexity theory: following \cite{MR1491362}, page 565, we say that a proper convex function $h:\mathbb{R}^{d}\rightarrow \overline{\mathbb{R}}$ is \textsl{strongly convex} if there exists a positive number $c$ such that \begin{equation*} h(\theta x+(1-\theta )y)\leq \theta h(x)+(1-\theta )h(y)-\frac{1}{2}c\theta (1-\theta )\Vert x-y\Vert ^{2} \end{equation*} for all $x,y\in \mathbb{R}^{d}$ and $\theta \in (0,1)$. It is easily seen that this is equivalent to convexity of $h(x)-(1/2)c\Vert x\Vert ^{2}$ (see \cite{MR1491362}, Exercise 12.59, page 565). Thus our first definition of strong log-concavity of a density function $p$ on $\mathbb{R}^d$ is as follows: \begin{definition} \label{strongLogConcaveDefn1} For any $\sigma ^{2}>0$ define the class of strongly log-concave densities with variance parameter $\sigma ^{2}$, or $SLC_{1}(\sigma ^{2},d)$, to be the collection of density functions $p$ of the form \begin{equation*} p(x)=g(x)\phi _{\sigma ^{2}I}(x) \end{equation*} for some log-concave function $g$ where, for a positive definite matrix $\Sigma $ and $\mu \in \mathbb{R}^{d}$, $\phi _{\Sigma }(\cdot -\mu )$ denotes the $N_{d}(\mu ,\Sigma )$ density given by \begin{equation} \phi _{\Sigma }(x-\mu )=(2\pi )^{-d/2}|\Sigma |^{-1/2}\exp \left( -\frac{1}{2}(x-\mu )^{T}\Sigma ^{-1}(x-\mu )\right) . \label{def_Gaussian} \end{equation} If a random vector $X$ has a density $p$ of this form, then we also say that $X$ is strongly log-concave. \end{definition} Note that this agrees with the definition of strong convexity given above since \begin{equation*} h(x)\equiv -\log p(x)=-\log g(x)+d\log (\sigma \sqrt{2\pi })+\frac{|x|^{2}}{2\sigma ^{2}}, \end{equation*} so that \begin{equation*} -\log p(x)-\frac{|x|^{2}}{2\sigma ^{2}}=-\log g(x)+d\log (\sigma \sqrt{2\pi }) \end{equation*} is convex; i.e. $-\log p(x)$ is strongly convex with $c=1/\sigma ^{2}$. 
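In one dimension, for a $C^2$ density, membership in $SLC_1(\sigma^2,1)$ amounts to $(-\log p)'' \ge 1/\sigma^2$. The following finite-difference sketch (our addition, with helper names of our choosing) confirms that $N(0,\tau^2)$ belongs to $SLC_1(\sigma^2,1)$ exactly when $\tau^2 \le \sigma^2$, and that the logistic density lies in no $SLC_1$ class:

```python
import math

def neg_log_second_diff(p, x, h=1e-4):
    """Central second difference of -log p at x."""
    f = lambda t: -math.log(p(t))
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2

def is_slc1(p, sigma2, xs, tol=1e-6):
    """Grid check of p in SLC_1(sigma2, 1): (-log p)'' >= 1/sigma2 on xs."""
    return all(neg_log_second_diff(p, x) >= 1.0 / sigma2 - tol for x in xs)

def normal(tau2):
    return lambda x: math.exp(-x * x / (2 * tau2)) / math.sqrt(2 * math.pi * tau2)

def logistic(x):
    return math.exp(-x) / (1.0 + math.exp(-x))**2

xs = [i * 0.2 for i in range(-15, 16)]
# N(0, tau2) is in SLC_1(sigma2, 1) iff tau2 <= sigma2, since (-log p)'' = 1/tau2.
print(is_slc1(normal(1.0), 2.0, xs))   # tau2 = 1 <= sigma2 = 2
print(is_slc1(normal(4.0), 2.0, xs))   # tau2 = 4 >  sigma2 = 2
# Logistic: (-log p)'' = 2 p(x) tends to 0 in the tails, so no sigma2 works.
print(is_slc1(logistic, 10.0, xs))
```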
Notice however that if $p \in SLC_{1}(\sigma ^{2},d)$ then larger values of $\sigma^2$ correspond to smaller values of $c = 1/\sigma^2$, and hence $p$ becomes {\sl less} strongly log-concave as $\sigma^2$ increases. Thus in our definition of strong log-concavity the coefficient $\sigma^2$ measures the ``flatness'' of the convex potential. It will be useful to relax this definition in two directions: by allowing the Gaussian factor to have an arbitrary non-singular covariance matrix $\Sigma$ rather than a multiple of the identity, and by allowing a non-zero mean vector $\mu$. Thus our second definition is as follows. \begin{definition} \label{strongLogConcaveDefn2} Let $\Sigma $ be a $d\times d$ positive definite matrix and let $\mu \in \mathbb{R}^{d}$. We say that a random vector $X$ and its density function $p$ are strongly log-concave and write $p\in SLC_{2}(\mu ,\Sigma ,d)$ if \begin{equation*} p(x)=g(x)\phi _{\Sigma }(x-\mu )\ \ \ \mbox{for}\ \ \ x\in \mathbb{R}^{d} \end{equation*} for some log-concave function $g$ where $\phi _{\Sigma }(\cdot -\mu )$ denotes the $N_{d}(\mu ,\Sigma )$ density given by (\ref{def_Gaussian}). \end{definition} Note that $SLC_{2}(0,\sigma ^{2}I,d)=SLC_{1}(\sigma ^{2},d)$ as in Definition~\ref{strongLogConcaveDefn1}. Furthermore, if $p\in SLC_{2}(\mu ,\Sigma ,d)$ with $\Sigma $ non-singular, then we can write \begin{eqnarray*} p(x) &=&g(x)\frac{\phi _{\Sigma }(x-\mu )}{\phi _{\Sigma }(x)}\cdot \frac{\phi _{\Sigma }(x)}{\phi _{\sigma ^{2}I}(x)}\phi _{\sigma ^{2}I}(x) \\ &=&g(x)\exp (\mu ^{T}\Sigma ^{-1}x-(1/2)\mu ^{T}\Sigma ^{-1}\mu ) \\ &&\ \ \ \cdot \exp \left( -\frac{1}{2}x^{T}(\Sigma ^{-1}-\frac{1}{\sigma ^{2}}I)x\right) \cdot \phi _{\sigma ^{2}I}(x) \\ &\equiv &h(x)\phi _{\sigma ^{2}I}(x)\text{ ,} \end{eqnarray*} where $\Sigma ^{-1}-I/\sigma ^{2}$ is positive definite if $1/\sigma ^{2}$ is smaller than the smallest eigenvalue of $\Sigma ^{-1}$. 
In this case, $h$ is log-concave, so $p\in SLC_{1}(\sigma ^{2},d)$.\medskip \begin{example} \label{NormalDensities} (Gaussian densities) If $X \sim p $ where $p$ is the $N_d(0, \Sigma)$ density with $\Sigma$ positive definite, then $X$ (and $p$) is strongly log-concave $SLC_2 ( 0 , \Sigma, d)$ and hence also log-concave. In particular for $d=1$, if $X \sim p$ where $p$ is the $N_1 (0, \sigma^2)$ density, then $X$ (and $p$) is $SLC_1 ( \sigma^2 ,1) = SLC_2 (0, \sigma^2 , 1)$ and hence is also log-concave. Note that $\varphi_X^{\prime\prime} (x) \equiv (-\log p)^{\prime \prime} (x) = 1/\sigma^2$ is constant in this latter case. \end{example} \begin{example} \label{LogisticDensity} (Logistic density) If $X \sim p$ where $p(x) = e^{-x} /(1+e^{-x} )^2 = (1/4)/(\cosh(x/2))^2$, then $X$ (and $p$) is log-concave and even \textsl{strictly log-concave} since $\varphi_X^{\prime \prime} (x) = (-\log p)^{\prime \prime} (x) = 2p(x) > 0 $ for all $x \in \mathbb{R}$, but $X$ is \textsl{not} strongly log-concave. \end{example} \begin{example} \label{BridgeDensity} (Bridge densities) If $X \sim p_\theta$ where, for $\theta \in (0,1)$, \begin{equation*} p_{\theta} (x) = \frac{\sin (\pi\theta)}{2 \pi ( \cosh(\theta x) + \cos (\pi \theta))}, \end{equation*} then $X$ (and $p_{\theta}$) is log-concave for $\theta \in (0,1/2]$, but fails to be log-concave for $\theta \in (1/2,1)$. For $\theta \in (1/2,1)$, $\varphi_{\theta}^{\prime \prime} (x) = (-\log p_{\theta})^{\prime \prime} (x) $ is bounded below, by some negative value depending on $\theta$, and hence these densities are \textsl{semi-log-concave} in the terminology of \cite{CattiauxGuillin:13}, who introduce this further generalization of log-concave densities by allowing the constant in the definition of a class of strongly log-concave densities to be negative as well as positive. This particular family of densities on $\mathbb{R}$ was introduced in the context of binary mixed effects models by \cite{MR2024756}. 
\end{example} \begin{example} \label{SubbotinDensity} (Subbotin density) If $X \sim p_r$ where $p_r(x) = C_r \exp( - |x|^r/r )$ for $x \in \mathbb{R}$ and $r>0$ where $C_r = 1/[2 \Gamma (1/r) r^{1/r-1}]$, then $X$ (and $p_r$) is log-concave for all $r\ge1$. Note that this family includes the Laplace (or double exponential) density for $r=1$ and the Gaussian (or standard normal) density for $r=2$. The only member of this family that is strongly log-concave is $p_2$, the standard Gaussian density, since $(-\log p)^{\prime \prime} (x) = (r-1) |x|^{r-2}$ for $x\not= 0$. \end{example} \begin{example} \label{SupremumBrownianBridge} (Supremum of Brownian bridge) If $\mathbb{U}$ is a standard Brownian bridge process on $[0,1]$, then $P(\sup_{0\leq t\leq 1}\mathbb{U}(t)>x)=\exp (-2x^{2})$ for $x>0$, so the density is $f(x)=4x\exp (-2x^{2})1_{(0,\infty )}(x)$, which is strongly log-concave since $(-\log f)^{\prime \prime }(x)=4+x^{-2}\geq 4$. This is a special case of the Weibull densities $f_{\beta }(x)=\beta x^{\beta -1}\exp (-x^{\beta })$, which are log-concave if $\beta \geq 1$ and strongly log-concave for $\beta \geq 2$. For more about suprema of Gaussian processes, see Section \ref{ssec_suprema} below. \end{example} For further interesting examples, see \cite{MR954608} and \cite{MR1375234}. There are a priori many ways to strengthen the property of log-concavity. A very interesting notion is, for instance, log-concavity of order $p$. This is a one-dimensional notion, and even though it can easily be stated for one-dimensional measures on $\mathbb{R}^{d}$ (see \cite{MR2857249}, Section 4), we state it in its classical form on $\mathbb{R}$. \begin{definition} \label{def_LC_order_alpha}A random variable $\xi >0$ is said to have a log-concave distribution of order $p\geq 1$, if it has a density of the form $f(x)=x^{p-1}g(x),$ $x>0$, where the function $g$ is log-concave on $\left(0, \infty \right) $. 
\end{definition} Notice that the notion of log-concavity of order $1$ coincides with the notion of log-concavity for positive random variables. Furthermore, it is easily seen that log-concave variables of order $p>1$ are more concentrated than log-concave variables. Indeed, with the notation of Definition~\ref{def_LC_order_alpha} and setting moreover $f=\exp \left( -\varphi_{f}\right) $ and $g=\exp \left( -\varphi _{g}\right) $, assuming that $f$ is $\mathcal{C}^{2}$ we get \begin{equation*} \Hess \varphi _{f}= \Hess \varphi _{g}+\frac{p-1}{x^{2}}\text{ .} \end{equation*} As a matter of fact, the exponent $p$ strengthens the Hessian of the potential of $g$, which is already a log-concave density. Here are some examples of log-concave variables of order $p$. \begin{example} The Gamma distribution with $\alpha \geq 1$ degrees of freedom, which has the density $f(x)=\Gamma \left( \alpha \right) ^{-1}x^{\alpha -1}e^{-x} 1_{(0,\infty)} (x) $, is log-concave of order $\alpha $. \end{example} \begin{example} The Beta distribution $B_{\alpha ,\beta }$ with parameters $\alpha \geq 1$ and $\beta \geq 1$ is log-concave of order $\alpha $. We recall that its density $g$ is given by $g(x)=B\left( \alpha ,\beta \right) ^{-1}x^{\alpha-1}\left( 1-x\right) ^{\beta -1} 1_{(0,1)}(x)$. \end{example} \begin{example} The Weibull density of parameter $\beta \geq 1$, given by $h_{\beta }\left( x\right) =\beta x^{\beta -1}\exp \left( -x^{\beta }\right) 1_{(0, \infty)} (x) $, is log-concave of order $\beta $. \end{example} It is worth noticing that when $X$ is a log-concave vector in $\mathbb{R}^{d} $ with spherically invariant distribution, then the Euclidean norm of $X$, denoted $\left\Vert X\right\Vert $, follows a log-concave distribution of order $d$ (its density is proportional to $r^{d-1}$ times a log-concave radial profile; this is easily seen by transforming to polar coordinates; see \cite{MR2083386} for instance). The notion of log-concavity of order $p$ is also of interest when dealing with problems in higher dimensions. 
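The displayed identity $\Hess \varphi_f = \Hess \varphi_g + (p-1)/x^2$ can be checked numerically. As a sketch (our addition), take the Gamma$(\alpha)$ density with $\alpha = 3$, for which $g(x) \propto e^{-x}$ and hence $\varphi_g'' = 0$, so $\varphi_f''(x)$ should equal $(\alpha-1)/x^2$:

```python
import math

alpha = 3.0

def f(x):
    """Gamma(alpha) density on (0, infinity)."""
    return x**(alpha - 1) * math.exp(-x) / math.gamma(alpha)

def second_diff(func, x, h=1e-4):
    """Central second difference of -log(func) at x > 0."""
    F = lambda t: -math.log(func(t))
    return (F(x - h) - 2 * F(x) + F(x + h)) / h**2

# phi_f'' = phi_g'' + (alpha - 1)/x^2, and here phi_g'' = 0.
xs = [0.5, 1.0, 2.0, 5.0]
checks = [abs(second_diff(f, x) - (alpha - 1) / x**2) < 1e-4 for x in xs]
print(all(checks))
```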
Indeed, a general way to reduce a problem defined by $d$-dimensional integrals to a problem involving one-dimensional integrals is given by the ``localization lemma'' of \cite{MR1238906}; see also \cite{MR1608200}. We will not further review this notion and we refer to \cite{MR2083386}, \cite{MR2839024} and \cite{MR2857249} for nice results related in particular to concentration of log-concave variables of order $p$. The following sets of equivalences for log-concavity and strong log-concavity will be useful and important. To state these equivalences we need the following definitions from \cite{MR2814377}, page 199. First, a subset $A$ of $\mathbb{R}^d$ is \emph{balanced} (\cite{MR2814377}) or \emph{centrally symmetric} (\cite{MR954608}) if $x \in A$ implies $-x \in A$. \begin{definition} \label{ConvexlyLayered-EvenRadialMonotone} A nonnegative function $f$ on $\mathbb{R}^d$ is \emph{convexly layered} if $\{ x : \ f(x) > \alpha \}$ is a balanced convex set for all $\alpha>0$. It is called \emph{even, radial monotone} if (i) $f(-x) = f(x)$ and (ii) $f(rx) \ge f(x)$ for all $0 \le r \le 1$ and all $x \in \mathbb{R}^d$. \end{definition} \begin{proposition} \label{EquivalencesLogCon} (Equivalences for log-concavity). Let $p = e^{-\varphi}$ be a density function with respect to Lebesgue measure $\lambda $ on $\mathbb{R}^d$; that is, $p \ge 0$ and $\int_{\mathbb{R}^d} p d \lambda =1$. Suppose that $\varphi \in C^2 $. Then the following are equivalent:\newline (a) $\varphi = - \log p$ is convex; i.e. $p$ is log-concave.\newline (b) $\nabla \varphi = - \nabla p / p : \mathbb{R}^d \rightarrow \mathbb{R}^d$ is monotone:\newline \begin{equation*} \langle \nabla \varphi (x_2) - \nabla \varphi (x_1) , x_2 - x_1 \rangle \ge 0 \ \ \ \mbox{for all} \ \ x_1, x_2 \in \mathbb{R}^d . 
\end{equation*} (c) $\nabla^2 \varphi \ge 0$; i.e. the Hessian matrix of $\varphi$ is nonnegative definite.\newline (d) $J_a (x; p) = p(a+x) p(a-x) $ is convexly layered for each $a \in \mathbb{R}^d$.\newline (e) $J_a (x; p)$ is even and radially monotone for each $a \in \mathbb{R}^d$.\newline (f) $p$ is mid-point log-concave: for all $x_1,x_2 \in \mathbb{R}^d$, \begin{equation*} p\left ( \frac{1}{2} x_1 + \frac{1}{2} x_2\right ) \ge p(x_1)^{1/2} p(x_2)^{1/2} . \end{equation*} \end{proposition} The equivalence of (a), (d), (e), and (f) is proved by \cite{MR2814377}, page 199, without assuming that $p\in C^{2}$. The equivalence of (a), (b), and (c) under the assumption $\varphi \in C^{2}$ is classical and well-known. This set of equivalences generalizes naturally to handle $\varphi \notin C^{2}$, with $\varphi $ proper and upper semicontinuous so that $p$ is lower semicontinuous; see Section \ref{section_ap} below for the adequate tools of convex regularization. In dimension 1, \cite{Bobkov:96a} proved the following further characterizations of log-concavity on $\mathbb{R}$. \begin{proposition}[\protect\cite{Bobkov:96a}] \label{prop_bobkov}Let $\mu $ be a nonatomic probability measure with distribution function $F=\mu \left( \left( -\infty ,x\right] \right) $, $x\in \mathbb{R}$. Set $a=\inf \left\{ x\in \mathbb{R}: \ F\left( x\right)>0\right\} $ and $b=\sup \left\{ x\in \mathbb{R}: \ F\left( x\right) <1\right\} $. Assume that $F$ strictly increases on $\left( a,b\right) $, and let $F^{-1}:(0,1)\rightarrow \left( a,b\right) $ denote the inverse of $F$ restricted to $\left( a,b\right) $. Then the following properties are equivalent:\\ (a) \ $\mu $ is log-concave;\\ (b) for all $h>0$, the function $R_{h}\left( p\right) =F\left( F^{-1}\left( p\right) +h\right) $ is concave on $\left( 0,1\right) $;\\ (c) $\mu $ has a continuous, positive density $f$ on $\left(a,b\right) $ and, moreover, the function $I\left( p\right) =f\left( F^{-1}\left( p\right) \right) $ is concave on $(0,1)$. 
\end{proposition} Properties (b) and (c) of Proposition \ref{prop_bobkov} were first used in \cite{Bobkov:96a} in the proofs of his description of the extremal properties of half-planes for the isoperimetric problem for log-concave product measures on $\mathbb{R}^{d}$. In \cite{MR2857249} the concavity of the function $I\left( p\right) =f\left( F^{-1}\left(p\right) \right) $ defined in point (c) of Proposition \ref{prop_bobkov} plays a role in the proof of concentration and moment inequalities for the following information quantity: $-\log f\left( X\right)$, where $X$ is a random vector with log-concave density $f$. Recently, \cite{BobkovLedoux:14} used the concavity of $I$ to prove upper and lower bounds on the variance of the order statistics associated with an i.i.d. sample drawn from a log-concave measure on $\mathbb{R}$. These results then allow the authors to prove refined bounds on some Kantorovich transport distances between the empirical measure associated with the i.i.d. sample and the log-concave measure on $\mathbb{R}$. For more facts about the function $I$ for general measures on $\mathbb{R}$ and in particular its relationship to isoperimetric profiles, see Appendix A.4-6 of \cite{BobkovLedoux:14}. \begin{example} If $\mu $ is the standard Gaussian measure on the real line, then $I$ is symmetric around $1/2$ and there exist constants $0< c_{0} \le c_{1}< \infty$ such that \begin{equation*} c_{0}t\sqrt{\log \left( 1/t\right) }\leq I\left( t\right) \leq c_{1}t\sqrt{\log \left( 1/t\right) }\text{ ,} \end{equation*} for $t\in \left( 0,1/2\right] $ (see \cite{BobkovLedoux:14}, p. 73). \end{example} We turn now to similar characterizations of strong log-concavity. \begin{proposition} \label{EquivalencesStrongLogConTry3} (Equivalences for strong log-concavity, $SLC_{1}$). Let $p=e^{-\varphi }$ be a density function with respect to Lebesgue measure $\lambda $ on $\mathbb{R}^{d}$; that is, $p\geq 0$ and $\int_{\mathbb{R}^{d}}pd\lambda =1$. 
Suppose that $\varphi \in C^{2}$. Then the following are equivalent:\newline (a) $p$ is strongly log-concave; $p\in SLC_{1}(\sigma ^{2},d)$.\newline (b) $\rho (x)\equiv \nabla \varphi (x)-x/\sigma ^{2}:\mathbb{R}^{d}\rightarrow \mathbb{R}^{d}$ is monotone:\newline \begin{equation*} \langle \rho (x_{2})-\rho (x_{1}),x_{2}-x_{1}\rangle \geq 0\ \ \ \mbox{for all}\ \ x_{1},x_{2}\in \mathbb{R}^{d}. \end{equation*} (c) $\nabla \rho (x)=\nabla ^{2}\varphi -I/\sigma ^{2}\geq 0$.\newline (d) For each $a\in \mathbb{R}^{d}$ the function \begin{equation*} J_{a}^{\phi }(x;p)\equiv \frac{p(a+x)p(a-x)}{\phi _{\sigma ^{2}I/2}(x)} \end{equation*} is convexly layered. \newline (e) The function $J_{a}^{\phi }(x;p)$ in (d) is even and radially monotone for all $a\in \mathbb{R}^{d}$. \newline (f) For all $x,y\in \mathbb{R}^{d}$, \begin{equation*} p\left( \frac{1}{2}x+\frac{1}{2}y\right) \geq p(x)^{1/2}p(y)^{1/2}\exp \left( \frac{1}{8\sigma ^{2}}|x-y|^{2}\right) . \end{equation*} \end{proposition} \begin{proof} See Section~\ref{sec:Proofs}. \end{proof} We now investigate the extension of Proposition \ref{prop_bobkov}, concerning log-concavity on $\mathbb{R}$, to the case of strong log-concavity. (The following result is apparently new.) Recall that a function $h$ is strongly concave on $\left( a,b\right) $ with parameter $c>0$ (or $c$-strongly concave) if, for any $x,y\in \left( a,b\right) $ and any $\theta \in \left( 0,1\right) $, \begin{equation*} h(\theta x+(1-\theta )y)\geq \theta h(x)+(1-\theta )h(y)+\frac{1}{2}c\theta (1-\theta )\Vert x-y\Vert ^{2}\text{ .} \end{equation*} \begin{proposition} \label{prop_I_SLC} Let $\mu $ be a nonatomic probability measure with distribution function $F=\mu \left( \left( -\infty ,x\right] \right) $, $x\in \mathbb{R}$. Set $a=\inf \left\{ x\in \mathbb{R}: F\left( x\right) >0\right\} $ and $b=\sup \left\{ x\in \mathbb{R}: F\left( x\right) <1\right\} $, possibly infinite. 
Assume that $F$ strictly increases on $\left( a,b\right)$, and let $F^{-1}:(0,1)\rightarrow \left( a,b\right) $ denote the inverse of $F$ restricted to $\left( a,b\right) $. Suppose that $X$ is a random variable with distribution $\mu $. Then the following properties hold: \begin{description} \item[(i)] If $X\in SLC_{1}\left( c,1\right) $, $c>0$, then $I\left( p\right) =f\left( F^{-1}\left( p\right) \right) $ is $\left( c\left\Vert f\right\Vert _{\infty }\right) ^{-1}$-strongly concave and $\left( c^{-1} \sqrt{\var \left( X\right) }\right) $-strongly concave on $(0,1)$. \item[(ii)] The converse of point \textbf{(i)} is false: there exists a log-concave variable $X$ which is not strongly log-concave (for any parameter $c>0$) such that the associated $I$ function is strongly concave on $\left( 0,1\right) $. \item[(iii)] There exist a strongly log-concave random variable $X\in SLC_{1}\left( c,1\right) $ and $h_{0}>0$ such that the function $R_{h_{0}}\left( p\right) =F\left( F^{-1}\left( p\right) +h_{0}\right) $ is concave but not strongly concave on $\left( 0,1\right) $. \item[(iv)] There exists a log-concave random variable $X$ which is not strongly log-concave (for any positive parameter) such that, for all $h>0$, the function $R_{h}\left( p\right) =F\left( F^{-1}\left( p\right) +h\right) $ is strongly concave on $\left( 0,1\right) $. \end{description} \end{proposition} From \textbf{(i)} and \textbf{(ii)} in Proposition \ref{prop_I_SLC}, we see that strong concavity of the function $I$ is a necessary but not sufficient condition for the strong log-concavity of $X$. Points \textbf{(iii)} and \textbf{(iv)} state that no relations exist in general between the strong log-concavity of $X$ and strong concavity of its associated function $R_{h}$. \begin{proof} See Section~\ref{sec:Proofs}. \end{proof} The following proposition gives a similar set of equivalences for our second definition of strong log-concavity, Definition~\ref{strongLogConcaveDefn2}. 
\smallskip \begin{proposition} \label{EquivalencesStrongLogConDef2Try2} (Equivalences for strong log-concavity, $SLC_2$). Let $p = e^{-\varphi}$ be a density function with respect to Lebesgue measure $\lambda$ on $\mathbb{R}^d$; that is, $p \ge 0$ and $\int_{\mathbb{R}^d} p d \lambda =1$. Suppose that $\varphi \in C^2 $. Then the following are equivalent:\newline (a) $p$ is strongly log-concave; $p \in SLC_2 (\mu, \Sigma, d)$ with $\Sigma > 0$, $\mu \in \mathbb{R}^d$.\newline (b) $\rho (x) \equiv \nabla \varphi (x) - \Sigma^{-1}(x - \mu) : \mathbb{R}^d \rightarrow \mathbb{R}^d$ is monotone:\newline \begin{equation*} \langle \rho (x_2) - \rho (x_1) , x_2 - x_1 \rangle \ge 0 \ \ \ \mbox{for all} \ \ x_1, x_2 \in \mathbb{R}^d . \end{equation*} (c) $\nabla \rho (x) = \nabla^2 \varphi - \Sigma^{-1} \ge 0$.\newline (d) For each $a \in \mathbb{R}^d$, the function \begin{eqnarray*} J_a^{\phi} (x; p) = p(a+x)p(a-x)/ \phi_{\Sigma/2} (x) \end{eqnarray*} is convexly layered.\newline (e) For each $a \in \mathbb{R}^d$ the function $J_a^{\phi} (x; p)$ in (d) is even and radially monotone.\newline (f) For all $x , y \in \mathbb{R}^d$, \begin{eqnarray*} p\left ( \frac{1}{2} x + \frac{1}{2} y\right ) \ge p(x)^{1/2} p(y)^{1/2} \exp \left ( \frac{1}{8} (x-y)^T \Sigma^{-1} (x-y) \right ) . \end{eqnarray*} \end{proposition} \begin{proof} To prove Proposition~\ref{EquivalencesStrongLogConDef2Try2} it suffices to note the log-concavity of $g(x)=p(x)/\phi _{\Sigma /2}(x)$ and to apply Proposition~\ref{EquivalencesLogCon} (which holds as well for log-concave functions). The claims then follow by straightforward calculations; see Section~\ref{sec:Proofs} for more details. \end{proof} \section{Log-concavity and strong log-concavity: preservation theorems} Both log-concavity and strong log-concavity are preserved by a number of operations. 
Our purpose in this section is to review these preservation results and the methods used to prove such results, with primary emphasis on: (a) affine transformations, (b) marginalization, (c) convolution. The main tools used in the proofs will be: (i) the Brunn-Minkowski inequality; (ii) the Brascamp-Lieb Poincar\'{e} type inequality; (iii) Pr\'{e}kopa's theorem; (iv) Efron's monotonicity theorem. \subsection{Preservation of log-concavity} \label{subsec:PreservationLogConcave} \subsubsection{Preservation by affine transformations} Suppose that $X$ has a log-concave distribution $P$ on $(\mathbb{R}^{d},\mathcal{B}^{d})$, and let $A$ be a non-zero real matrix of order $m\times d$. Then consider the distribution $Q$ of $Y=AX$ on $\mathbb{R}^{m}$. \begin{proposition} \label{prop:AffinePreservLogCon} (log-concavity is preserved by affine transformations). The probability measure $Q$ on $\mathbb{R}^m$ defined by $Q(B) = P( \{ x \in \mathbb{R}^d : \ Ax \in B \} )$ for $B \in \mathcal{B}^m$ is a log-concave probability measure. If $P$ is non-degenerate log-concave on $\mathbb{R}^d$ with density $p$ and $m=d$ with $A$ of rank $d$, then $Q$ is non-degenerate with log-concave density $q$. \end{proposition} \begin{proof} See \cite{MR954608}, Lemma 2.1, page 47. \end{proof} \subsubsection{Preservation by products} Now let $P_1 $ and $P_2$ be log-concave probability measures on $(\mathbb{R}^{d_1} , \mathcal{B}^{d_1})$ and $(\mathbb{R}^{d_2} , \mathcal{B}^{d_2})$ respectively. Then we have the following preservation result for the product measure $P_1 \times P_2$ on $(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2} , \mathcal{B}^{d_1} \times \mathcal{B}^{d_2} )$: \begin{proposition} \label{prop:ProdMeasPreservLogConc} (log-concavity is preserved by products) If $P_1$ and $P_2$ are log-concave probability measures then the product measure $P_1 \times P_2$ is a log-concave probability measure. \end{proposition} \begin{proof} See \cite{MR954608}, Theorem 2.7, page 50. 
A key fact used in this proof is that if a probability measure $P$ on $(\mathbb{R}^d, \mathcal{B}^d)$ assigns zero mass to every hyperplane in $\mathbb{R}^d$, then log-concavity of $P$ holds if and only if $P( \theta A + (1-\theta) B) \ge P(A)^{\theta} P(B)^{1-\theta}$ for all rectangles $A,B$ with sides parallel to the coordinate axes; see \cite{MR954608}, Theorem 2.6, page 49. \end{proof} \subsubsection{Preservation by marginalization} Now suppose that $p$ is a log-concave density on $\mathbb{R}^{m+n}$ and consider the marginal density $q(y) = \int_{\mathbb{R}^m} p(x,y) dx $. The following result concerning preservation of log-concavity, due to \cite{MR0404557}, was given a simple proof by \cite{MR0450480} (Corollary 3.5, page 374). In fact they also proved the whole family of such results for $s-$concave densities. \begin{theorem} \label{LCpreservedByMargin} (log-concavity is preserved by marginalization; Pr\'ekopa's theorem). Suppose that $p$ is log-concave on $\mathbb{R}^{m+n}$ and let $q(y) = \int_{\mathbb{R}^m} p(x,y) dx$. Then $q$ is log-concave. \end{theorem} This theorem is a centerpiece of the entire theory. It was proved independently by a number of mathematicians at about the same time: these include \cite{MR0404557}, building on \cite{MR0096173}, \cite{MR0315079}, \cite{Brascamp-Lieb:74}, \cite{Brascamp-Lieb:75}, \cite{MR0450480}, \cite{MR0404559}, \cite{MR0388475}, and \cite{MR0428540}. \cite{MR2814377}, page 310, gives a brief discussion of the history, including an unpublished proof of Theorem~\ref{LCpreservedByMargin} given in \cite{Brascamp-Lieb:74}. Many of the proofs (including the proofs in \cite{Brascamp-Lieb:75}, \cite{MR0404559}, and \cite{MR0428540}) are based fundamentally on the Brunn-Minkowski inequality; see \cite{MR588074}, \cite{MR1898210}, and \cite{MR2167203} for useful surveys. We give two proofs here. 
The first proof is a \textsl{transportation} argument from \cite{MR1991646}; the second is a proof from \cite{Brascamp-Lieb:74} which has recently appeared in \cite{MR2814377}. \begin{proof} (Via \textsl{transportation}). We can reduce to the case $n=1$ since it suffices to show that the restriction of $q$ to a line is log-concave. Next note that an inductive argument shows that the claimed log-concavity holds for $m+1$ if it holds for $m$, and hence it suffices to prove the claim for $m=n=1$. Since log-concavity is equivalent to mid-point log-concavity (by the equivalence of (a) and (f) in Proposition~\ref{EquivalencesLogCon}), we only need to show that \begin{eqnarray} q \left (\frac{u+v}{2} \right ) \ge q(u)^{1/2} q(v)^{1/2} \label{MidPointLogConcavityofQ} \end{eqnarray} for all $u,v \in \mathbb{R}$. Now define \begin{eqnarray*} f(x) = p(x,u), \ \ \ g(x) = p(x,v), \ \ \ h(x) = p(x, (u+v)/2) . \end{eqnarray*} Then (\ref{MidPointLogConcavityofQ}) can be rewritten as \begin{eqnarray*} \int h(x) dx \ge \left ( \int f(x) dx \right )^{1/2} \left ( \int g(x) dx \right )^{1/2} . \end{eqnarray*} From log-concavity of $p$ we know that \begin{eqnarray} h\left ( \frac{z+w}{2} \right ) = p \left ( \frac{z+w}{2} , \frac{u+v}{2} \right ) \ge p(z,u)^{1/2} p(w,v)^{1/2} = f(z)^{1/2} g(w)^{1/2} . \label{ConsequenceLogConcavityOfP} \end{eqnarray} By homogeneity we can arrange $f, g$, and $h$ so that $\int f(x) dx = \int g(x) dx = 1$; if not, replace $f$ and $g$ with $\tilde{f}$ and $\tilde{g}$ defined by $\tilde{f} (x) = f(x) / \int f(x^{\prime})dx^{\prime}= f(x) / q(u) $ and $\tilde{g} (x) = g(x) / \int g(x^{\prime})dx^{\prime}= g(x) / q(v) $. Now for the transportation part of the argument: let $Z$ be a real-valued random variable with distribution function $K$ having smooth density $k$. Then define maps $S$ and $T$ by $K(z) = F(S(z))$ and $K(z) = G(T(z))$ where $F$ and $G$ are the distribution functions corresponding to $f$ and $g$. 
Then \begin{eqnarray*} k(z) = f(S(z))S^{\prime}(z) = g(T(z))T^{\prime}(z) \end{eqnarray*} where $S^{\prime}, T^{\prime}\ge0$ since the same is true for $k$, $f$, and $g$, and it follows that \begin{eqnarray*} 1 = \int k(z) dz & = & \int f(S(z))^{1/2} g(T(z))^{1/2} (S^{\prime}(z))^{1/2} (T^{\prime}(z))^{1/2} dz \\ & \le & \int h\left ( \frac{S(z) + T(z)}{2} \right ) (S^{\prime}(z))^{1/2} (T^{\prime}(z))^{1/2} dz \\ & \le & \int h\left ( \frac{S(z) + T(z)}{2} \right ) \cdot \frac{S^{\prime}(z) + T^{\prime}(z)}{2} dz \\ & = & \int h(x) dx , \end{eqnarray*} where the first inequality follows from (\ref{ConsequenceLogConcavityOfP}) and the second from the arithmetic-geometric mean inequality. \end{proof} \begin{proof} (Via \textsl{symmetrization}). By the same induction argument as in the first proof we can suppose that $m=1$. By an approximation argument we may assume, without loss of generality, that $p$ has compact support and is bounded. Now let $a \in \mathbb{R}^n$ and note that \begin{eqnarray*} J_a (y; q) & = & q(a+y) q(a-y) \\ & = & \int \! \! \int p(x, a+y) p(z, a-y) dx dz \\ & = & 2 \int \! \! \int p(u+v, a+y) p(u-v, a-y) du dv \\ & = & 2 \int \! \! \int J_{u,a} (v,y; p ) du dv \end{eqnarray*} where, for $(u,a)$ fixed, the integrand is convexly layered by Proposition~\ref{EquivalencesLogCon} (d). Thus by the following Lemma~\ref{convexlyLayeredPreservedByMarginalization}, the integral over $v$ is an even, radially monotone, lower semicontinuous function of $y$ for each fixed $u,a$. Since this class of functions is closed under integration over an indexing parameter (such as $u$), the integration over $u$ also yields an even, radially monotone function, and by Fatou's lemma $J_a (y; q) $ is also lower semicontinuous. It then follows from Proposition~\ref{EquivalencesLogCon} again that $q$ is log-concave. 
\end{proof} \begin{lemma} \label{convexlyLayeredPreservedByMarginalization} Let $f$ be a lower semicontinuous convexly layered function on $\mathbb{R}^{n+1}$ written as $f(x,t)$, $x \in \mathbb{R}^n$, $t\in \mathbb{R}$. Suppose that $f$ is bounded and has compact support. Let \begin{eqnarray*} g(x) = \int_{\mathbb{R}} f(x,t) dt . \end{eqnarray*} Then $g$ is an even, radially monotone, lower semicontinuous function. \end{lemma} \begin{proof} First note that sums and integrals of even, radially monotone functions are again even and radially monotone. By the wedding cake representation \begin{eqnarray*} f(x) = \int_0^\infty 1\{ f(x) > t\} dt, \end{eqnarray*} it suffices to prove the result when $f$ is the indicator function of an open balanced convex set $K$. For $x \in \mathbb{R}^n$ define \begin{eqnarray*} K(x) = \{ t \in \mathbb{R} : \ (x,t) \in K \} . \end{eqnarray*} Then $K(x) = (c(x), d(x))$, an open interval in $\mathbb{R}$, and we see that \begin{eqnarray*} g(x) = d(x) - c(x). \end{eqnarray*} But convexity of $K$ implies that $c(x)$ is convex and $d(x)$ is concave, and hence $g(x)$ is concave. Since $K$ is balanced, it follows that $c(-x) = - d(x)$, or $d(-x) = - c(x)$, so $g$ is even. Since an even concave function is even and radially monotone, and lower semicontinuity of $g$ holds by Fatou's lemma, the conclusion follows. \end{proof} \subsubsection{Preservation under convolution} Suppose that $X, Y$ are independent with log-concave distributions $P$ and $Q$ on $(\mathbb{R}^d, \mathcal{B}^d)$, and let $R$ denote the distribution of $X+Y$. The following result asserts that $R$ is log-concave as a measure on $\mathbb{R}^d$. \begin{proposition} \label{LCpreservedByConv} (log-concavity is preserved by convolution). Let $P $ and $Q$ be two log-concave distributions on $(\mathbb{R}^d , \mathcal{B}^d) $ and let $R$ be the convolution defined by $R(B) = \int_{\mathbb{R}^d} P( B- y) dQ(y)$ for $B \in \mathcal{B}^d$. Then $R$ is log-concave. 
\end{proposition} \begin{proof} It suffices to prove the proposition when $P$ and $Q$ are absolutely continuous with densities $p$ and $q$ on $\mathbb{R}^d$. Now $h(x,y) = p(x-y)q(y)$ is log-concave on $\mathbb{R}^{2d}$, and hence by Proposition~\ref{LCpreservedByMargin} it follows that \begin{eqnarray*} r(x) = \int_{\mathbb{R}^d} h(x,y) dy = \int_{\mathbb{R}^d} p(x-y) q(y) dy \end{eqnarray*} is log-concave. \end{proof} Proposition~\ref{LCpreservedByConv} was proved when $d=1$ by \cite{MR0047732}, who used the $PF_2$ terminology of P\'olya frequency functions. In fact all the P\'olya frequency classes $PF_k$, $k\ge 2$, are closed under convolution as shown by \cite{MR0230102}; see \cite{MR2759813}, Lemma A.4 (page 758) and Proposition B.1, page 763. The first proof of Proposition~\ref{LCpreservedByConv} when $d\ge 2$ is apparently due to \cite{MR0241584}. While the proof given above using Pr\'ekopa's theorem is simple and quite basic, there are at least two other proofs, according to whether we use:\newline (a) the equivalence between log-concavity and monotonicity of the scores of $f$, or \newline (b) the equivalence between log-concavity and non-negativity of the matrix of second derivatives (or Hessian) of $-\log f$, assuming that the second derivatives exist. The proof in (a) relies on Efron's inequality when $d=1$, and was noted by \cite{Wellner-2013} in parallel to the corresponding proof of ultra log-concavity in the discrete case given by \cite{MR2327839}; see Theorem~\ref{thm:UltraLogConvPreservByConv}. We will return to this in Section~\ref{sec:EfronsTheoremOneDimension}. For $d>1$ this approach breaks down because Efron's theorem does not extend to the multivariate setting without further hypotheses. Possible generalizations of Efron's theorem will be discussed in Section~\ref{sec:StrongLogConcavityPreservMultivariateCase}. The proof in (b) relies on a Poincar\'{e} type inequality of \cite{MR0450480}.
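The one-dimensional case of Proposition~\ref{LCpreservedByConv} is easy to check numerically. The following sketch (the grid, the choice of densities, and the tolerances are illustrative choices, not part of the source) convolves a standard normal density with a Laplace density on a grid and verifies the discrete log-concavity inequality $r_i^2 \ge r_{i-1} r_{i+1}$ for the result:

```python
import numpy as np

# Numerical sketch of preservation of log-concavity under convolution (d = 1):
# convolve a standard normal density with a Laplace density on a grid and check
# that the resulting (discretized) density r satisfies r_i^2 >= r_{i-1} r_{i+1}
# away from the far tails.
dx = 0.01
x = np.arange(-20.0, 20.0, dx)
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density
q = 0.5 * np.exp(-np.abs(x))                 # Laplace(1) density
r = np.convolve(p, q) * dx                   # Riemann-sum approximation of p * q

idx = np.where(r > 1e-12 * r.max())[0][1:-1]  # ignore underflowing tail values
log_concave = bool(np.all(r[idx]**2 >= r[idx - 1] * r[idx + 1] * (1 - 1e-8)))
total_mass = float(r.sum() * dx)              # should be close to 1
```

Since the sampled normal and Laplace sequences are themselves log-concave sequences, the discrete convolution is exactly log-concave (this is the discrete Keilson-Gerber result recalled below), so the check passes up to floating-point rounding.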
These three different methods are of some interest since they all have analogues in the case of proving that strong log-concavity is preserved under convolution. It is also worth noting the following difference between the one-dimensional situation and preservation under convolution in higher dimensions: as we note following Theorems 29 and 33, \cite{Ibragimov:56} and \cite{Keilson-Gerber:71} showed, in the one-dimensional continuous and discrete settings respectively, that if $p\star q$ is unimodal for every unimodal $q$, then $p$ is log-concave. The analogue of this for $d>1$ is more complicated, in part because of the great variety of possible definitions of \textquotedblleft unimodal\textquotedblright\ in this case; see \cite{MR954608}, chapters 2 and 3, for a thorough discussion. In particular \cite{MR0074845} provided the following counterexample when the notion of unimodality is taken to be \textsl{centrally symmetric convex} unimodality; that is, the sets $S_{c}(p)\equiv \{x\in \mathbb{R}^{d}:p(x)\geq c\}$ are symmetric and convex for each $c\geq 0$. Let $p$ be the uniform density on $[-1,1]^{2}$ (so that $p(x)=(1/4)1_{[-1,1]^{2}}(x)$); then $p$ is log-concave. Let $q$ be the density given by $1/12$ on $[-1,1]^{2}$ and $1/24$ on $([-1,1]\times (1,5])\cup ([-1,1]\times \lbrack -5,-1))$. Thus $q$ is centrally symmetric convex unimodal (and hence also \textsl{quasi-concave}, $q\in \mathcal{P}_{-\infty }$ as in Definition~\ref{defn:Sconcave}). But $h=p\star q$ is not centrally symmetric convex unimodal (and also is not quasi-concave), since the sets $S_{c}(h)$ are not convex: see Figure~\ref{fig:ShermansExample}. \begin{figure}[htb!] \centering \includegraphics[width=0.50\textwidth]{ShermansExample.pdf} \caption{Sherman's example, $h=p\star q$} \label{fig:ShermansExample} \end{figure} \subsubsection{Preservation by (weak) limits} Now we consider preservation of log-concavity under convergence in distribution.
\begin{proposition} \label{prop:logconcPreservConvD} (log-concavity is preserved under convergence in distribution). Suppose that $\{ P_n \}$ is a sequence of log-concave probability measures on $\mathbb{R}^d$, and suppose that $P_n \rightarrow_d P_0$. Then $P_0$ is a log-concave probability measure. \end{proposition} \begin{proof} See \cite{MR954608}, Theorem 2.10, page 53. \end{proof} Note that the limit measure in Proposition~\ref{prop:logconcPreservConvD} might be concentrated on a proper subspace of $\mathbb{R}^d$. If we have a sequence of log-concave densities $p_n$ which converges pointwise to a density function $p_0$, then by Scheff\'e's theorem we have $p_n \rightarrow p_0 $ in $L_1 (\lambda)$ and hence $d_{TV} (P_n , P_0) \rightarrow 0$. Since convergence in total variation implies convergence in distribution, we conclude that $P_0$ is a log-concave measure such that the affine hull of $\mbox{supp} (P_0)$ has dimension $d$; hence $P_0$ is the measure corresponding to $p_0$, which is necessarily log-concave by Theorem~\ref{PrekopaRinott}. Recall that the class of normal distributions on $\mathbb{R}^d$ is closed under all the operations discussed above: affine transformation, formation of products, marginalization, convolution, and weak limits. Since the larger class of log-concave distributions on $\mathbb{R}^d$ is also preserved under these operations, the preservation results of this section suggest that the class of log-concave distributions is a very natural nonparametric class, which can be viewed as an enlargement of the class of all normal distributions. This has stimulated much recent work on nonparametric estimation for the class of log-concave distributions on $\mathbb{R}$ and $\mathbb{R}^d$: for example, see \cite{MR2546798}, \cite{MR2645484}, \cite{MR2758237}, \cite{MR2757433}, \cite{MR2509075}, and \cite{hen+ast06}, and see Section~\ref{subsec:NonparametricStatistics} for further details.
\subsection{Preservation of strong log-concavity} \label{subsec:PreservationStrongLogConcave} Here is a theorem summarizing several preservation results for strong log-concavity. Parts (a), (b), and (d) were obtained by \cite{hen+ast06}. \begin{theorem} \label{StrongLogConPreservOne} (Preservation of strong log-concavity)\newline (a) (Linear transformations) Suppose that $X$ has density $p \in SLC_2 (0,\Sigma, d)$ and let $A$ be a $d\times d$ nonsingular matrix. Then $Y=AX$ has density $q \in SLC_2 (0, A \Sigma A^T, d)$ given by $q(y) = p(A^{-1} y) \det (A^{-1} )$.\newline (b) (Convolution) If $X \sim p \in SLC_2 (0, \Sigma, d)$ and $Y \sim q \in SLC_2 (0, \Gamma , d)$ are independent, then $Z = X +Y \sim p \star q \in SLC_2 (0 , \Sigma + \Gamma , d)$.\newline (c) (Product distribution) If $X \sim p \in SLC_2 (0, \Sigma, m)$ and $Y \sim q \in SLC_2 (0, \Gamma, n)$, then \begin{equation*} (X,Y) \sim p\cdot q \in SLC_2 \left ( 0 , \left ( \begin{array}{cc} \Sigma & 0 \\ 0 & \Gamma \end{array} \right ) , m+n \right ) . \end{equation*} (d) (Product function) If $p \in SLC_2 (0 , \Sigma, d)$ and $q \in SLC_2 (0, \Gamma, d)$, then $h$ given by $h(x) = p(x) q(x)$ (which is typically not a probability density function) satisfies $h \in SLC_2 (0, (\Sigma^{-1} + \Gamma^{-1} )^{-1}, d )$. \end{theorem} Part (b) of Theorem~\ref{StrongLogConPreservOne} is closely related to the following result which builds upon and strengthens Pr\'ekopa's Theorem~\ref{LCpreservedByMargin}. It is due to \cite{MR0450480} (Theorem 4.3, page 380); see also \cite{MR2814377}, Theorem 13.13, page 204. \begin{theorem} \label{StrongLogConPreservTwoTry2} (Preservation of strong log-concavity under marginalization). Suppose that $p\in SLC_2 (0, \Sigma, m+n)$.
Then the marginal density $q$ on $\mathbb{R}^m$ given by \begin{eqnarray*} q(x) = \int_{\mathbb{R}^n} p(x,y) dy \end{eqnarray*} is strongly log-concave: $q \in SLC_2 (0, \Sigma_{11}, m)$ where \begin{eqnarray} \Sigma = \left ( \begin{array}{cc} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{array} \right ). \label{PartitionOfCovMatrix} \end{eqnarray} \end{theorem} \begin{proof} Since $p \in SLC_2 (0, \Sigma, m+n)$ we can write \begin{eqnarray*} p(x,y) & = & g(x,y) \phi_{\Sigma} (x,y) \\ & = & g(x,y) \frac{1}{(2 \pi)^{(m+n)/2} | \Sigma |^{1/2}} \exp \left ( - \frac{1}{2} (x^T, y^T) \left ( \begin{array}{cc} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{array} \right )^{-1} \left ( \begin{array}{c} x \\ y \end{array} \right ) \right ) \end{eqnarray*} where $g$ is log-concave. Now the Gaussian term in the last display can be written as \begin{eqnarray*} \lefteqn{\phi_{Y|X} (y|x) \cdot \phi_X (x) } \\ & = & \frac{1}{(2 \pi)^{n/2} | \Sigma_{22\cdot 1} |^{1/2}} \exp \left ( - \frac{1}{2} (y- \Sigma_{21} \Sigma_{11}^{-1} x)^T \Sigma_{22\cdot 1}^{-1} ( y - \Sigma_{21} \Sigma_{11}^{-1}x ) \right ) \\ && \ \ \ \cdot \frac{1}{(2 \pi)^{m/2} | \Sigma_{11} |^{1/2}} \exp \left ( - \frac{1}{2} x^T \Sigma_{11}^{-1} x \right ) \end{eqnarray*} where $\Sigma_{22\cdot 1} \equiv \Sigma_{22} - \Sigma_{21} \Sigma_{11}^{-1} \Sigma_{12}$, and hence \begin{eqnarray*} q(x) & = & \int_{\mathbb{R}^n} g(x,y) \frac{1}{(2 \pi)^{n/2} | \Sigma_{22\cdot 1} |^{1/2}} \exp \left ( - \frac{1}{2} (y- \Sigma_{21} \Sigma_{11}^{-1} x)^T \Sigma_{22\cdot 1}^{-1} ( y - \Sigma_{21} \Sigma_{11}^{-1}x ) \right ) dy \\ && \ \ \ \cdot \frac{1}{(2 \pi)^{m/2} | \Sigma_{11} |^{1/2}} \exp \left ( - \frac{1}{2} x^T \Sigma_{11}^{-1} x \right ) \\ & = & \int_{\mathbb{R}^n} g(x, \tilde{y} + \Sigma_{21} \Sigma_{11}^{-1} x ) \cdot \frac{1}{(2 \pi)^{n/2} | \Sigma_{22\cdot 1} |^{1/2}} \exp \left ( - \frac{1}{2} \tilde{y}^T \Sigma_{22\cdot 1}^{-1} \tilde{y} \right ) d \tilde{y} \\ && \ \ \ \cdot \frac{1}{(2 \pi)^{m/2} | \Sigma_{11} |^{1/2}} \exp ( - (1/2) x^T \Sigma_{11}^{-1} x ) \\ & \equiv & h(x) \phi_{\Sigma_{11}} (x) \end{eqnarray*} where \begin{eqnarray*} h(x) \equiv \int_{\mathbb{R}^n} g(x, \tilde{y} + \Sigma_{21} \Sigma_{11}^{-1} x ) \cdot \frac{1}{(2 \pi)^{n/2} | \Sigma_{22\cdot 1} |^{1/2}} \exp \left ( - \frac{1}{2} \tilde{y}^T \Sigma_{22\cdot 1}^{-1} \tilde{y} \right ) d \tilde{y} \end{eqnarray*} is log-concave: $g$ is log-concave, and hence $\tilde{g} (x, \tilde{y}) \equiv g(x, \tilde{y} + \Sigma_{21} \Sigma_{11}^{-1} x )$ is log-concave; the product $\tilde{g} (x, \tilde{y}) \cdot \exp ( - (1/2) \tilde{y}^T \Sigma_{22\cdot 1}^{-1} \tilde{y} )$ is (jointly) log-concave; and hence $h$ is log-concave by Pr\'ekopa's Theorem~\ref{LCpreservedByMargin}. \end{proof} \begin{proof} (Theorem~\ref{StrongLogConPreservOne}): (a) The density $q$ is given by $q(y) = p(A^{-1} y) \det(A^{-1} )$ by a standard computation. Then since $p \in SLC_2 ( 0 , \Sigma,d)$ we can write \begin{eqnarray*} q(y) = g(A^{-1} y) \det (A^{-1} ) \phi_{\Sigma} (A^{-1} y) = g(A^{-1} y) \phi_{A \Sigma A^T} (y) \end{eqnarray*} where $g(A^{-1} y)$ is log-concave by Proposition~\ref{prop:AffinePreservLogCon}. \newline (b) If $p \in SLC_2 (0, \Sigma, d)$ and $q \in SLC_2 (0, \Gamma,d)$, then the function \begin{eqnarray*} h(z,x) = p(x) q(z-x) \end{eqnarray*} is strongly log-concave jointly in $x$ and $z$: since \begin{eqnarray*} \lefteqn{x^T \Sigma^{-1} x + (z-x)^T \Gamma^{-1} (z-x) } \\ & = & z^T (\Sigma + \Gamma)^{-1} z + (x - Cz)^T ( \Sigma^{-1} + \Gamma^{-1} ) (x - Cz) \end{eqnarray*} where $C \equiv (\Sigma^{-1} + \Gamma^{-1} )^{-1} \Gamma^{-1}$, it follows that \begin{eqnarray*} h(z,x) & = & g_p (x) g_q (z-x) \phi_{\Sigma}(x) \phi_{\Gamma} (z-x) \\ & = & g(z,x) \phi_{\Sigma + \Gamma} (z) \cdot \phi_{(\Sigma^{-1} + \Gamma^{-1})^{-1} } (x - Cz) \end{eqnarray*} is jointly log-concave.
Hence it follows that \begin{eqnarray*} p\star q (z) = \int_{\mathbb{R}^d} h(z,x) dx & = & \phi_{\Sigma + \Gamma} (z) \int_{\mathbb{R}^d} g(z,x) \phi_{(\Sigma^{-1} + \Gamma^{-1})^{-1} } (x-Cz) dx \\ & \equiv & \phi_{\Sigma + \Gamma} (z) g_0 (z) \end{eqnarray*} where $g_0 (z)$ is log-concave by Pr\'ekopa's theorem, Theorem~\ref{LCpreservedByMargin}. \newline (c) This is easy since \begin{eqnarray*} p(x) q(y) & = & g_p (x) g_q (y) \phi_{\Sigma} (x) \phi_{\Gamma} (y) = g(x,y) \phi_{\tilde{\Sigma}} (x,y) \end{eqnarray*} where $\tilde{\Sigma}$ is the given $(m+n)\times (m+n)$ block diagonal matrix and $g$ is jointly log-concave (by Proposition~\ref{prop:ProdMeasPreservLogConc}). \newline (d) Note that \begin{eqnarray*} p(x) q(x) = g_p (x) g_q (x) \phi_{\Sigma} (x) \cdot \phi_{\Gamma} (x) \equiv g_0 (x) \phi_{( \Sigma^{-1} + \Gamma^{-1} )^{-1}} (x) \end{eqnarray*} where $g_0 $ is log-concave. \end{proof} \section{Log-concavity and ultra-log-concavity for discrete distributions} \label{Sec:DiscreteLogConc} We now consider log-concavity and ultra-log-concavity in the setting of discrete random variables. Some of this material is from \cite{MR3030616} and \cite{MR2327839}. An integer-valued random variable $X$ with probability mass function $\{ p_x : \ x \in \mathbb{Z}\}$ is \textsl{log-concave} if \begin{eqnarray} p_x^2 \ge p_{x+1} p_{x-1} \ \ \mbox{for all} \ x \in \mathbb{Z} . \end{eqnarray} If we define the \textsl{score function} $\varphi$ by $\varphi (x) \equiv p_{x+1}/p_x$, then log-concavity of $\{ p_x\}$ is equivalent to $\varphi$ being decreasing (nonincreasing). A stronger notion, analogous to strong log-concavity in the case of continuous random variables, is that of \textsl{ultra-log-concavity}: for any $\lambda > 0$ define $\mathbf{ULC} (\lambda)$ to be the class of integer-valued random variables $X$ with mean $E X = \lambda $ such that the probability mass function $p_x$ satisfies \begin{eqnarray} x p_x^2 \ge (x+1) p_{x+1} p_{x-1} \ \ \ \mbox{for all} \ \ x\ge 1.
\label{ULCversion1} \end{eqnarray} Then the class of ultra log-concave random variables is $\mathbf{ULC} = \cup_{\lambda>0} \mathbf{ULC} (\lambda )$. Note that (\ref{ULCversion1}) is equivalent to log-concavity of $x \mapsto p_x / \pi_{\lambda,x} $ where $% \pi_{\lambda ,x} = e^{-\lambda} \lambda^x / x!$ is the Poisson distribution on $\mathbb{N}$, and hence ultra-log-concavity corresponds to $p$ being \textsl{log-concave relative to $\pi_{\lambda}$} (or $p \le_{{\small % \mbox{lc}}} \pi_{\lambda}$) in the sense defined by \cite{MR799285}. Equivalently, $p_x= h_x \pi_{\lambda, x}$ where $h$ is log-concave. When we want to emphasize that the mass function $\{ p_x \}$ corresponds to $X$, we also write $p_X (x) $ instead of $p_x$. If we define the \textsl{relative score function} $\rho$ by \begin{eqnarray*} \rho (x) & \equiv & \frac{(x+1)p_{x+1}}{\lambda p_x} -1 , \end{eqnarray*} then $X \sim p \in \mathbf{U L C} (\lambda)$ if and only if $\rho $ is decreasing (nonincreasing). Note that \begin{equation*} \rho (x) = \frac{(x+1)\varphi (x)}{\lambda} -1 = \frac{(x+1)\varphi (x)}{% \lambda} - \frac{(x+1)\pi_{\lambda, x+1}}{\lambda \pi_{\lambda , x}}. \end{equation*} \smallskip Our main interest here is the preservation of log-concavity and ultra-log-concavity under convolution. \smallskip \begin{theorem} \label{thm:UltraLogConvPreservByConv} (a) (\cite{Keilson-Gerber:71}) The class of log-concave distributions on $% \mathbb{Z}$ is closed under convolution. If $U \sim p$ and $V\sim q$ are independent and $p$ and $q$ are log-concave, then $U+V \sim p\star q $ is log-concave.\newline (b) (\cite{MR0494391}, \cite{MR1462561}) The class of ultra-log-concave distributions on $\mathbb{Z}$ is closed under convolution. More precisely, these classes are closed under convolution in the following sense: if $U \in \mathbf{U L C} (\lambda)$ and $V \in \mathbf{% U L C} (\mu)$ are independent, then $U+V \in \mathbf{U L C} (\lambda + \mu)$. 
\end{theorem} \smallskip Actually, \cite{Keilson-Gerber:71} proved more: analogously to \cite% {Ibragimov:56} they showed that $p$ is strongly unimodal (i.e. $X+Y \sim p \star q$ with $% X,Y$ independent is unimodal for every unimodal $q$ on $\mathbb{Z}$) if and only if $X \sim p$ is log-concave. Liggett's proof of (b) proceeds by direct calculation; see also \cite{MR0494391}. For recent alternative proofs of this property of ultra-log-concave distributions, see \cite{MR2482101} and \cite{MR2793607}. A relatively simple proof is given by \cite{MR2327839} using results from \cite{MR2236061} and \cite{MR0171335}, and that is the proof we will summarize here. See \cite% {MR2929095} for an application of ultra log-concavity and Theorem~\ref% {thm:UltraLogConvPreservByConv} to finding optimal constants in Khinchine inequalities. \smallskip Before proving Theorem~\ref{thm:UltraLogConvPreservByConv} we need the following lemma giving the score and the relative score of a sum of independent integer-valued random variables. \begin{lemma} \label{ConvolutionAndScoresDiscreteCase} If $X, Y$ are independent non-negative integer-valued random variables with mass functions $p = p_X$ and $q=p_Y$ then:\newline (a) $\varphi_{X+Y} (z) = E \{ \varphi_X (X) | X+Y = z \}$. \newline (b) If, moreover, $X$ and $Y$ have means $\mu$ and $\nu$ respectively, then with $\alpha = \mu / (\mu + \nu)$, \begin{eqnarray*} \rho_{X+Y} (z) = E \{ \alpha \rho_X (X) + (1-\alpha) \rho_Y (Y) \big | X + Y = z \} . \end{eqnarray*} \end{lemma} \begin{proof} For (a), note that with $F_z \equiv p_{X+Y}(z)$ we have \begin{eqnarray*} \varphi_{X+Y} (z) & = & \frac{p_{X+Y} (z+1)}{p_{X+Y} (z)} = \sum_x \frac{p (x) q (z+1-x)}{F_z} \\ & = & \sum_x \frac{p(x)}{p(x-1)} \cdot \frac{p(x-1) q(z+1-x)}{F_z} \\ & = & \sum_x \frac{p(x+1)}{p(x)} \cdot \frac{p(x) q(z-x)}{F_z} . 
\end{eqnarray*} To prove (b) we follow \cite{MR2236061}, page 471: using the same notation as in (a), \begin{eqnarray*} \rho_{X+Y} (z) & = & \frac{(z+1) p_{X+Y} (z+1)}{(\mu+\nu) p_{X+Y} (z)} -1 \\ & = & \sum_x \frac{(z+1)p(x) q(z+1-x)}{(\mu+\nu) F_z} -1 \\ & = & \sum_x \left \{ \frac{x p(x) q(z+1-x)}{(\mu+\nu) F_z} \ + \ \frac{(z-x+1)p(x) q(z+1-x)}{(\mu+\nu) F_z} \right \} -1 \\ & = & \alpha \left \{ \sum_x \frac{x p_X (x)}{\mu p(x-1)} \cdot \frac{p(x-1) q(z-x+1)}{F_z} -1 \right \} \\ && \ \ \ + \ (1-\alpha) \left \{ \sum_x \frac{z-x+1}{\nu} \frac{q(z-x+1)}{q(z-x)} \cdot \frac{ p(x) q(z-x)}{F_z} - 1 \right \} \\ & = & \sum_x \frac{p(x) q(z-x)}{F_z} \left \{ \alpha \rho_X (x) + (1-\alpha) \rho_Y (z-x) \right \} . \end{eqnarray*} \end{proof} \begin{proof} (Theorem~\ref{thm:UltraLogConvPreservByConv}): (b) This follows from (b) of Lemma~\ref{ConvolutionAndScoresDiscreteCase} and Theorem 1 of \cite{MR0171335}, upon noting Efron's remark 1, page 278, concerning the discrete case of his theorem: for independent log-concave random variables $X$ and $Y$ and a measurable function $\Phi$ monotone (decreasing here) in each argument, $E \{ \Phi (X,Y) | X+Y = z \}$ is a monotone decreasing function of $z$. Note that ultra-log-concavity of $X$ and $Y$ implies that \begin{equation*} \Phi (x,y) = \frac{\mu}{\mu+ \nu} \rho_X (x) + \frac{\nu}{\mu + \nu} \rho_Y (y) \end{equation*} is a monotone decreasing function of $x$ and $y$ (separately), since the relative scores $\rho_X$ and $\rho_Y$ are decreasing. Thus $\rho_{X+Y}$ is a decreasing function of $z$, and hence $X+Y \in \mathbf{ULC} (\mu + \nu)$. (a) Much as in part (b), this follows from (a) of Lemma~\ref{ConvolutionAndScoresDiscreteCase} and Theorem 1 of \cite{MR0171335}, upon replacing the relative scores $\rho_X$ and $\rho_Y$ by the scores $\varphi_X$ and $\varphi_Y$ and by taking $\Phi (x,y) = \varphi_X (x)$.
\end{proof} For interesting results concerning the entropy of discrete random variables, Bernoulli sums, log-concavity, and ultra-log-concavity, see \cite{MR3030616}, \cite{MR1093412}, and \cite{MR2327839}. For recent results concerning nonparametric estimation of a discrete log-concave distribution, see \cite{MR3091658} and \cite{MR3174308}. It follows from \cite{MR1093412} that the hypergeometric distribution (the count of \textquotedblleft successes\textquotedblright\ in sampling without replacement) is equal in distribution to a Bernoulli sum; hence the hypergeometric distribution is ultra-log-concave. \medskip \section{Regularity and approximations of log-concave functions} \label{section_reg_and_ap} \subsection{Regularity\label{section_reg}} The regularity of a log-concave function $f=\exp \left( -\varphi \right) $ depends on the regularity of its convex potential $\varphi $. Consequently, log-concave functions inherit the special regularity properties of convex functions. Any log-concave function is nonnegative. When the function $f$ is a log-concave \textit{density} (with respect to the Lebesgue measure), which means that $f$ integrates to $1$, it is automatically bounded. More precisely, it has exponentially decreasing tails and hence finite $\Psi _{1}$ Orlicz norm; for example, see \cite{MR733944} and \cite{MR1849347}. The following result gives a pointwise estimate of the density. \begin{theorem}[\protect\cite{MR2645484}, Lemma 1] \label{theorem_exp_tails} Let $f$ be a log-concave density on $\mathbb{R}^{d} $. Then there exist $a_{f}=a>0$ and $b_{f}=b\in \mathbb{R}$ such that $f\left( x\right) \leq e^{-a\left\Vert x\right\Vert +b}$ for all $x\in \mathbb{R}^{d}$. \end{theorem} Similarly, strong log-concavity implies a finite $\Psi _{2}$ Orlicz norm (see \cite{MR1849347}, Theorem 2.15, page 36; \cite{MR1964483}, Theorem 9.9, page 280; \cite{MR1742893}; and \cite{MR1682772}).
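As a concrete illustration of Theorem~\ref{theorem_exp_tails}, consider the logistic density $f(x) = e^{-x}/(1+e^{-x})^{2}$ on $\mathbb{R}$, which is log-concave; here one may take $a=1$ and $b=0$, since $f(x) = e^{-|x|}/(1+e^{-|x|})^{2} \le e^{-|x|}$ by symmetry. The following sketch (the grid is an illustrative choice) verifies this numerically:

```python
import numpy as np

# Sketch: the logistic density f(x) = e^{-x} / (1 + e^{-x})^2 satisfies the
# exponential tail bound f(x) <= exp(-a|x| + b) with a = 1, b = 0, since
# f(x) = e^{-|x|} / (1 + e^{-|x|})^2 <= e^{-|x|}.
x = np.linspace(-50.0, 50.0, 100001)
t = np.exp(-np.abs(x))        # overflow-safe parametrization of the tails
f = t / (1 + t)**2            # logistic density, written symmetrically
bound = t                     # e^{-|x| + 0}
tail_bound_holds = bool(np.all(f <= bound))
max_value = float(f.max())    # attained at x = 0, where f(0) = 1/4
```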
For other pointwise bounds on log-concave densities themselves, see \cite{MR773927}, \cite{MR2546798} and \cite{MR2309621}. As noticed in \cite{MR2645484}, Theorem \ref{theorem_exp_tails} implies that if a random vector $X$ has density $f$, then the moment generating function of $X$ is finite in an open neighborhood of the origin. Bounds can also be obtained for the supremum of a log-concave density as well as for its values at some special points in the case where $d=1$. \begin{proposition} \label{prop_pointwise_bounds}Let $X$ be a log-concave random variable, with density $f$ on $\mathbb{R}$ and median $m$. Then \begin{eqnarray} \frac{1}{12 \var \left( X\right) } &\leq &f\left( m\right) ^{2}\leq \frac{1}{2 \var \left( X\right) }\text{ ,} \label{line1_prop_pointwise} \\ \frac{1}{12 \var \left( X\right) } &\leq &\sup_{x\in \mathbb{R}}f\left( x\right) ^{2}\leq \frac{1}{\var \left( X\right) }\text{ ,} \label{line2_prop_pointwise} \\ \frac{1}{3e^{2} \var \left( X\right) } &\leq &f\left( \mathbb{E}\left[ X\right] \right) ^{2}\leq \frac{1}{\var \left( X\right) }\text{ .} \label{line3_prop_pointwise} \end{eqnarray} \end{proposition} Proposition \ref{prop_pointwise_bounds} can be found in \cite{BobkovLedoux:14}, Proposition B.2. See references therein for historical remarks concerning these inequalities. Proposition \ref{prop_pointwise_bounds} can also be seen as providing bounds for the variance of a log-concave variable. See \cite{Kim-Samworth:14}, section 3.2, for some further results of this type. Notice that by combining (\ref{line1_prop_pointwise}) and (\ref{line2_prop_pointwise}) we obtain the inequality $\sup_{x\in \mathbb{R}}f\left( x\right) \leq 2\sqrt{3}f\left( m\right) $. In fact, the concavity of the function $I$ defined in Proposition \ref{prop_bobkov} allows one to prove the stronger inequality $\sup_{x\in \mathbb{R}}f\left( x\right) \leq 2f\left( m\right) $.
Indeed, with the notation of Proposition \ref{prop_bobkov}, we have $I\left( 1/2\right) =f\left( m\right) $ and for any $x\in \left( a,b\right) $, there exists $t\in \left( 0,1\right) $ such that $x=F^{-1}\left( t\right) $. Hence, \begin{eqnarray*} \lefteqn{2f\left( m\right) =2I\left( \frac{1}{2}\right) = 2I\left( \frac{t}{2}+\frac{1-t}{2}\right)} \\ & \geq & 2\left( \frac{1}{2}I\left( t\right) +\frac{1}{2}I\left( 1-t\right) \right) \geq I\left( t\right) =f\left( x\right) \text{ .} \end{eqnarray*} A classical result on continuity of convex functions is that any real-valued convex function $\varphi $ defined on an open set $U\subset \mathbb{R}^{d}$ is locally Lipschitz and, in particular, continuous on $U$. For more on continuity of convex functions see Section 3.5 of \cite{MR2178902}. Of course, any continuity property of $\varphi $ (local or global) transfers to $f$. For an expos\'{e} on differentiability of convex functions, see \cite{MR2178902} (in particular sections 3.8 and 3.11; see also \cite{MR1676726}, section 7). A deep result of \cite{Alexandrov:39} is the following (we reproduce here Theorem 3.11.2 of \cite{MR2178902}).
\begin{theorem}[\protect\cite{Alexandrov:39}] \label{theorem_Alexandrov}Every convex function $f$ on $\mathbb{R}^{d} $ is twice differentiable almost everywhere in the following sense: $f$ is twice differentiable at $a$, with Alexandrov Hessian $\nabla ^{2}f\left( a\right) $ in $\Sym^{+}\left( d,\mathbb{R}\right) $ (the space of positive semidefinite symmetric $d\times d$ real matrices), if $\nabla f\left( a\right) $ exists, and if for every $\varepsilon >0$ there exists $\delta >0$ such that \begin{equation*} \left\Vert x-a\right\Vert <\delta \text{ \ \ implies \ \ }\sup_{y\in \partial f\left( x\right) }\left\Vert y-\nabla f\left( a\right) -\nabla ^{2}f\left( a\right) \left( x-a\right) \right\Vert \leq \varepsilon \left\Vert x-a\right\Vert \text{ .} \end{equation*} Here $\partial f\left( x\right) $ is the subgradient of $f$ at $x$ (see Definition 8.3 in \cite{MR1491362}). Moreover, if $a$ is such a point, then \begin{equation*} \lim_{h\rightarrow 0}\frac{f\left( a+h\right) -f\left( a\right) -\left\langle \nabla f\left( a\right) ,h\right\rangle -\frac{1}{2}\left\langle \nabla ^{2}f\left( a\right) h,h\right\rangle }{\left\Vert h\right\Vert ^{2}}=0\text{ .} \end{equation*} \end{theorem} We immediately see from Theorem \ref{theorem_Alexandrov} that, since $\varphi $ is convex and $f=\exp \left( -\varphi \right) $, $f$ is almost everywhere twice differentiable. For further results in the direction of Alexandrov's theorem see \cite{MR0482164,MR585231}. \subsection{Approximations\label{section_ap}} Again, if one wants to approximate a non-smooth log-concave function $f=\exp \left( -\varphi \right) $ by a sequence of smooth log-concave functions, then convexity of the potential $\varphi $ can be used to advantage. For an account of approximation of convex functions see \cite{MR2178902}, section 3.8.
On the one hand, if $\varphi \in L_{loc}^{1}\left( \mathbb{R}^{d}\right) $, the space of locally integrable functions, then the standard use of a regularization kernel (i.e. a one-parameter family of functions associated with a mollifier) to approximate $\varphi $ preserves the convexity as soon as the mollifier is nonnegative. A classical result is that this gives in particular approximations of $\varphi $ in $L^{p}$ spaces, $p\geq 1$, as soon as $\varphi \in L^{p}\left( \mathbb{R}^{d}\right) $. On the other hand, \textit{infimal convolution} (also called \textit{epi-addition}, see \cite{MR1491362}) is a nonlinear analogue of mollification that gives a way to approximate a lower semicontinuous proper convex function from below (section 3.8, \cite{MR2178902}). More precisely, take two proper convex functions $f$ and $g$ from $\mathbb{R}^{d}$ to $\mathbb{R}\cup \left\{ \infty \right\} $, which means that the functions are convex and finite at some point. The infimal convolution of $f$ and $g$, possibly taking the value $-\infty $, is \begin{equation*} \left( f\odot g\right) \left( x\right) =\inf_{y\in \mathbb{R}^{d}}\left\{ f\left( x-y\right) +g\left( y\right) \right\} \text{ .} \end{equation*} Then $f\odot g$ is a proper convex function as soon as $\left( f\odot g\right) \left( x\right) >-\infty $ for all $x\in \mathbb{R}^{d}$. Now, if $f$ is a lower semicontinuous proper convex function on $\mathbb{R}^{d}$, the Moreau-Yosida approximation $f_{\varepsilon }$ of $f$ is given by \begin{eqnarray*} f_{\varepsilon }\left( x\right) &=&\left( f\odot \frac{1}{2\varepsilon }\left\Vert \cdot \right\Vert ^{2}\right) \left( x\right) \\ &=&\inf_{y\in \mathbb{R}^{d}}\left\{ f\left( y\right) +\frac{1}{2\varepsilon }\left\Vert x-y\right\Vert ^{2}\right\} \text{ ,} \end{eqnarray*} for any $x\in \mathbb{R}^{d}$ and $\varepsilon >0$.
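For example, for $f(x)=\left\vert x\right\vert $ on $\mathbb{R}$ the Moreau-Yosida approximation has the closed form of the Huber function: $f_{\varepsilon }(x)=x^{2}/(2\varepsilon )$ for $\left\vert x\right\vert \leq \varepsilon $ and $f_{\varepsilon }(x)=\left\vert x\right\vert -\varepsilon /2$ otherwise. The following sketch (the grids and the value of $\varepsilon $ are illustrative choices) computes the infimum by brute force on a grid and compares it with this formula:

```python
import numpy as np

# Sketch: the Moreau-Yosida approximation of f(x) = |x|,
#   f_eps(x) = inf_y { |y| + (x - y)^2 / (2 eps) },
# equals the Huber function: x^2/(2 eps) for |x| <= eps, |x| - eps/2 otherwise.
eps = 0.5
x = np.linspace(-3.0, 3.0, 601)
y = np.linspace(-4.0, 4.0, 4001)

# brute-force infimum over the y grid, for each x
envelope = np.min(np.abs(y)[None, :] + (x[:, None] - y[None, :])**2 / (2 * eps),
                  axis=1)
huber = np.where(np.abs(x) <= eps, x**2 / (2 * eps), np.abs(x) - eps / 2)

max_gap = float(np.max(np.abs(envelope - huber)))   # grid error, should be tiny
min_gap = float(np.min(envelope - huber))           # grid min >= exact infimum
```

Note that the grid minimum can only overshoot the exact infimum, so `envelope` lies (slightly) above the Huber function, and the gap shrinks with the grid spacing.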
The following theorem can be found in \cite{MR1676726} (Proposition 7.13); see also \cite{BarbuPrecupanu:86}, \cite{Brezis:73} or \cite{MR2178902}. \begin{theorem} \label{theorem_Moreau-Yosida}The Moreau-Yosida approximations $f_{\varepsilon } $ are $\mathcal{C}^{1,1}$ (i.e. differentiable with Lipschitz derivative) convex functions on $\mathbb{R}^{d}$ and $f_{\varepsilon }\rightarrow f$ as $\varepsilon \rightarrow 0$. Moreover, $\partial f_{\varepsilon }=\left( \varepsilon I+\left( \partial f\right) ^{-1}\right) ^{-1}$ as set-valued maps. \end{theorem} An interesting consequence of Theorem \ref{theorem_Moreau-Yosida} is that if two proper lower semicontinuous convex functions have the same subgradient map, then they are equal up to an additive constant (Corollary 2.10 in \cite{Brezis:73}). Approximation by a regularization kernel and Moreau-Yosida approximation have different benefits. While a regularization kernel gives the greatest smoothness, the Moreau-Yosida approximation provides an approximation of a convex function from below (and so, of a log-concave function from above). It is thus possible to combine these two kinds of approximations and obtain the advantages of both. For an example of such a combination in the context of a (multivalued) stochastic differential equation and the study of the so-called Kolmogorov operator, see \cite{MR2474491}. When considering a log-concave random vector, the following simple convolution with Gaussian vectors gives an approximation by log-concave vectors that have $\mathcal{C}^{\infty }$ densities and finite Fisher information matrices. In the context of Fisher information, regularization by Gaussians was used for instance in \cite{SidneyStone:74} to study the Pitman estimator of a location parameter. \begin{proposition}[convolution by Gaussians] \label{Lemma_convol_Gauss_multidim} Let $X$ be a random vector in $\mathbb{R}^{d}$ with density $p$ w.r.t.
the Lebesgue measure and $G$ a $d$-dimensional standard Gaussian variable, independent of $X$. Set $Z=X+\sigma G$ with $\sigma >0$, and let $p_{Z}=\exp \left( -\varphi _{Z}\right) $ denote the density of $Z$. Then: \begin{description} \item[(i)] If $X$ is log-concave, then $Z$ is also log-concave. \item[(ii)] If $X$ is strongly log-concave, $X\in SLC_{1}\left( \tau ^{2},d\right) $, then $Z$ is also strongly log-concave: $Z\in SLC_{1}\left( \tau ^{2}+\sigma ^{2},d\right) $. \item[(iii)] $Z$ has a positive density $p_{Z}$ on $\mathbb{R}^{d}$. Furthermore, $\varphi _{Z}$ is $\mathcal{C}^{\infty }$ on $\mathbb{R}^{d}$ and \begin{eqnarray} \nabla \varphi _{Z}\left( z\right) &=&\sigma ^{-2}\mathbb{E}\left[ \sigma G\left\vert X+\sigma G=z\right. \right] \notag \\ &=&\mathbb{E}\left[ \rho _{\sigma G}\left( \sigma G\right) \left\vert X+\sigma G=z\right. \right] \text{ ,} \label{conv_gauss_score} \end{eqnarray} where $\rho _{\sigma G}\left( x\right) =\sigma ^{-2}x$ is the score of $\sigma G$. \item[(iv)] The Fisher information matrix for location $J(Z)=\mathbb{E}\left[ \nabla \varphi _{Z}\otimes \nabla \varphi _{Z}\left( Z\right) \right] $ is finite and we have $J\left( Z\right) \leq J\left( \sigma G\right) =\sigma ^{-2}I_{d}$ as symmetric matrices. \end{description} \end{proposition} \begin{proof} See Section~\ref{sec:Proofs}. \end{proof} We now give a second approximation tool, which allows one to approximate any log-concave density by strongly log-concave densities. \begin{proposition} \label{prop_approx_SLC}Let $f$ be a log-concave density on $\mathbb{R}^{d}$. Then for any $c>0$, the density \begin{equation*} h_{c}\left( x\right) =\frac{f\left( x\right) e^{-c\left\Vert x\right\Vert ^{2}/2}} {\int_{\mathbb{R}^{d}}f\left( v\right) e^{-c\left\Vert v\right\Vert^{2}/2}dv}, \text{ \ \ }x\in \mathbb{R}^{d}, \end{equation*} is $SLC_{1}\left( c^{-1},d\right) $ and $h_{c}\rightarrow f$ as $c\rightarrow 0$ in $L_{p}$, $p\in \left[ 1, \infty \right] $.
More precisely, there exists a constant $A_{f}>0$ depending only on $f$, such that for any $\varepsilon >0$, \begin{equation*} \sup \left\{ \sup_{x\in \mathbb{R}^{d}}\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert ;\left( \int_{\mathbb{R}^{d}}\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert ^{p}dx\right) ^{1/p}\right\} \leq A_{f}c^{1-\varepsilon }\text{ .} \end{equation*} \end{proposition} \begin{proof} See Section~\ref{sec:Proofs}. \end{proof} Finally, by combining Propositions \ref{prop_approx_SLC} and \ref{Lemma_convol_Gauss_multidim}, we obtain the following approximation lemma. \begin{proposition} \label{lemma_approx}For any log-concave density $f$ on $\mathbb{R}^{d}$, there exists a sequence of strongly log-concave densities that are $\mathcal{C}^{\infty }$, have finite Fisher information matrices, and converge to $f$ in $L_{p}\left( \leb \right) $, $p\in \left[ 1,\infty \right] $. \end{proposition} \begin{proof} Approximate $f$ by a strongly log-concave density $h$ as in Proposition \ref{prop_approx_SLC}. Then approximate $h$ by convolving with a Gaussian density. In both steps the approximations can be as tight as desired in $L_{p}$, for any $p\in \left[ 1, \infty \right] $. The fact that convolution with a Gaussian of a (strongly) log-concave density (which belongs to every $L_{p}\left( \leb \right) $, $p\in \left[ 1,\infty \right] $) gives approximations in $L_{p}$, $p\in \left[ 1, \infty \right] $, is a simple application of classical theorems about convolution in $L_{p}$ (see for instance \cite{MR924157}, p. 148). \end{proof} \section{Efron's theorem and more on preservation of log-concavity and strong log-concavity under convolution in 1-dimension} \label{sec:EfronsTheoremOneDimension} Another way of proving that strong log-concavity is preserved by convolution in the one-dimensional case is by use of a result of \cite{MR0171335}.
This has already been used by \cite{MR3030616} and \cite{MR2327839} to prove preservation of ultra log-concavity under convolution (for discrete random variables), and by \cite{Wellner-2013} to give a proof that strong log-concavity is preserved by convolution in the one-dimensional continuous setting. These proofs operate at the level of scores or relative scores and hence rely on the equivalences between (a) and (b) in Propositions~\ref{EquivalencesLogCon} and \ref{EquivalencesStrongLogConTry3}. Our goal in this section is to re-examine Efron's theorem, to briefly revisit the results of \cite{MR3030616} and \cite{Wellner-2013}, to give alternative proofs using second derivative methods via symmetrization arguments, and to provide a new proof of Efron's theorem using some recent results concerning asymmetric Brascamp-Lieb inequalities due to \cite{Menz-Otto:2013} and \cite{CarlenCordero-ErausquinLieb}. \subsection{Efron's monotonicity theorem} \label{subsec:EfronMonoThm} The following monotonicity result is due to \cite{MR0171335}. \begin{theorem}[Efron] \label{EfronMonotonicityThm} Suppose that $\Phi :\mathbb{R}^{m}\rightarrow \mathbb{R}$ is coordinatewise non-decreasing and let \begin{equation*} g(z)\equiv E\left\{ \Phi (X_1, \cdots , X_m)\bigg |\sum_{j=1}^m X_j =z\right\}, \end{equation*}% where $X_1, \ldots , X_m $ are independent and log-concave. Then $g$ is non-decreasing. \end{theorem} \begin{remark} As noted by \cite{MR0171335}, Theorem~\ref{EfronMonotonicityThm} continues to hold for integer valued random variables which are log-concave in the sense that $p_x \equiv P(X = x)$ for $x \in \mathbb{Z}$ satisfies $p_x^2 \ge p_{x+1} p_{x-1}$ for all $x \in \mathbb{Z}$. \end{remark} In what follows, we will focus on Efron's theorem for $m=2$. As shown in \cite{MR0171335}, the case of a pair of variables ($m=2$) indeed implies the general case with $m \ge 2$.
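Theorem~\ref{EfronMonotonicityThm} for $m=2$ can be checked numerically in the lattice setting covered by the remark above. The following sketch is purely illustrative and not part of any proof: the lattice exponential and Gaussian weights, the grids, and the test function $\Phi$ are our own choices; it discretizes $g(z)=E\{\Phi(X,Y)\mid X+Y=z\}$ for two lattice log-concave variables and verifies that $g$ is non-decreasing.

```python
import numpy as np

h = 0.05                                  # common lattice step for X, Y, and z
x = np.arange(0.0, 10.0, h)               # lattice support of X
pX = np.exp(-x)                           # exponential weights: log-concave sequence
z_grid = np.arange(-2.0, 8.0, h)          # lattice values of z = X + Y

# A coordinatewise non-decreasing test function (our illustrative choice):
Phi = lambda u, v: np.tanh(u) + v**3

def g(z):
    """E[Phi(X,Y) | X+Y=z] for lattice X (weights pX) and lattice
    Gaussian Y (weights exp(-y^2/2)); both weight sequences are log-concave."""
    y = z - x                             # on the common lattice, z - x is a lattice point
    w = pX * np.exp(-y**2 / 2.0)          # unnormalized conditional weights
    return np.sum(Phi(x, y) * w) / np.sum(w)

vals = np.array([g(z) for z in z_grid])
# Efron's theorem (lattice case): g should be non-decreasing along the grid.
assert np.all(np.diff(vals) >= -1e-6)
```

Only the monotonicity of $g$ is tested here; the tolerance absorbs floating-point rounding, not any genuine violation.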
Let us recall the argument behind this fact, which involves preservation of log-concavity under convolution. In fact, stability under convolution for log-concave variables is not needed to prove Efron's theorem for $m=2$, as will be seen from the new proof of Efron's theorem given here in Section~\ref{subsec:AlternativePrfOfEfronViaAsymmBrascampLieb}, so there is no circularity in deducing the preservation of log-concavity under convolution from Efron's theorem for $m=2$. \begin{proposition} If Theorem \ref{EfronMonotonicityThm} holds for $m=2$, then it holds for all $m\geq 2$. \end{proposition} \begin{proof} We proceed as in \cite{MR0171335} by induction on $m\geq 2$. Let $\left( X_{1},\ldots ,X_{m}\right) $ be an $m$-tuple of log-concave variables, let $S=\sum_{i=1}^{m}X_{i}$ be their sum and set \begin{equation*} \Lambda \left( t,u\right) =\mathbb{E}\left[ \Phi \left( X_{1},\ldots ,X_{m}\right) \left\vert \sum_{i=1}^{m-1}X_{i}=t\text{ },\text{ }X_{m}=u\right. \right] \text{ .} \end{equation*}% Then \begin{equation*} \mathbb{E}\left[ \Phi \left( X_{1},\ldots ,X_{m}\right) \left\vert S=s\right. \right] =\mathbb{E}\left[ \Lambda \left( T,X_{m}\right) \left\vert T+X_{m}=s\right. \right] \text{ ,} \end{equation*}% where $T=\sum_{i=1}^{m-1}X_{i}$. Hence, by the case $m=2$ applied to the pair $\left( T,X_{m}\right) $, it suffices to prove that $\Lambda $ is coordinatewise non-decreasing. As $T$ is a log-concave variable (by preservation of log-concavity under convolution), $\Lambda \left( t,u\right) $ is non-decreasing in $t$ by the induction hypothesis for functions of $m-1$ variables. Also $\Lambda \left( t,u\right) $ is non-decreasing in $u$ since $\Phi $ is non-decreasing in its last argument. \end{proof} \cite{MR0171335} also gives the following corollaries of Theorem \ref{EfronMonotonicityThm} above.
\begin{corollary} \label{CorOneOfEfron} Let $\{\Phi _{t}(x_{1},\ldots ,x_{m}):\ t\in T\}$ be a family of measurable functions increasing in every argument for each fixed value of $t$, and increasing in $t$ for each fixed value of $x_{1},x_{2},\ldots ,x_{m}$. Let $X_{1},\ldots ,X_{m}$ be independent and log-concave and write $S\equiv \sum_{j=1}^{m}X_{j}$. Then \begin{equation*} g(a,b)=E\left\{ \Phi _{a+b-S}(X_{1},\cdots ,X_{m})\bigg |a\leq S\leq a+b\right\} \end{equation*}% is increasing in both $a$ and $b$. \end{corollary} \begin{corollary} \label{CorTwoOfEfron} Suppose that the hypotheses of Theorem~\ref{EfronMonotonicityThm} hold and that $A=\{x=(x_{1},\ldots ,x_{m})\in \mathbb{R}^{m}:a_{j}\leq x_{j}\leq b_{j}\}$ with $-\infty \leq a_{j}<b_{j}\leq \infty $ is a rectangle in $\mathbb{R}^{m}$. Then \begin{equation*} g(z)\equiv E\left\{ \Phi (X_{1},\cdots ,X_{m})\bigg |\sum_{j=1}^{m}X_{j}=z,(X_{1},\ldots ,X_{m})\in A\right\} \end{equation*}% is a non-decreasing function of $z$. \end{corollary} The following section will give applications of Efron's theorem to preservation of log-concavity and strong log-concavity in the case of real-valued variables. \subsection{First use of Efron's theorem: strong log-concavity is preserved by convolution via scores} \begin{theorem} \label{LCandSLCpreservedByConvOneDim} (log-concavity and strong log-concavity preserved by convolution via scores)\newline (a) (\cite{MR0087249}) If $X $ and $Y$ are independent and log-concave with densities $p$ and $q$ respectively, then $X+Y \sim p\star q$ is also log-concave.\newline (b) If $X \in SLC_1( \sigma^2, 1)$ and $Y \in SLC_1 (\tau^2 , 1)$ are independent, then $X + Y \in SLC_1 ( \sigma^2 + \tau^2 , 1)$. \end{theorem} Actually, \cite{MR0087249} proved more: he showed that $p$ is strongly unimodal (i.e. $X+Y\sim p\star q$ with $X,Y$ independent is unimodal for every unimodal $q$ on $\mathbb{R}$) if and only if $X$ is log-concave.
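Before turning to the score-based proof, part (a) can be illustrated numerically: for sequences, log-concavity is the inequality $r_{k}^{2}\geq r_{k-1}r_{k+1}$, and it is preserved by discrete convolution. The following sketch (grid, step, and densities are our illustrative choices, not part of the argument) convolves a sampled Laplace density with a sampled Gaussian-type density and checks this inequality.

```python
import numpy as np

h = 0.05
t = np.arange(-8.0, 8.0, h)

# Two log-concave densities sampled on the grid (illustrative choices):
p = np.exp(-np.abs(t))          # Laplace-type: -log p = |t| is convex
q = np.exp(-t**2)               # Gaussian-type: -log q = t^2 is convex
p /= p.sum() * h                # normalize as Riemann sums
q /= q.sum() * h

r = np.convolve(p, q) * h       # Riemann-sum approximation of p * q

# Discrete log-concavity: r_k^2 >= r_{k-1} r_{k+1} for all interior k
# (tolerance only absorbs floating-point rounding).
lc = r[1:-1]**2 - r[:-2] * r[2:]
assert np.all(lc >= -1e-12)
```

The check succeeds because both sampled sequences are themselves log-concave, and convolution of log-concave sequences is log-concave; the continuous statement of part (a) is the limit of this discrete picture.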
\begin{proof} (a) From Proposition~\ref{EquivalencesLogCon}, log-concavity of $p$ and $q$ is equivalent to monotonicity of their score functions $\varphi _{p}^{\prime }=(-\log p)^{\prime }=-p^{\prime }/p$ $a.e.$ and $\varphi _{q}^{\prime }=(-\log q)^{\prime }=-q^{\prime }/q$ $a.e.$ respectively. From the approximation scheme described in Proposition~\ref{Lemma_convol_Gauss_multidim} above, we may assume that both $p$ and $q$ are absolutely continuous. Indeed, Efron's theorem applied to formula (\ref{conv_gauss_score}) of Proposition~\ref{Lemma_convol_Gauss_multidim} with $m=2$ and $\Phi (x,y)=\rho _{\sigma G}(x)$ shows that convolution with a Gaussian variable preserves log-concavity. Then, from Lemma 3.1 of \cite{MR2128239}, \begin{equation*} E\left\{ \rho _{X}(X)\bigg |X+Y=z\right\} =\rho _{X+Y}(z). \end{equation*} Thus by Efron's theorem with $m=2$ and \begin{equation*} \Phi (x,y)=\rho _{X}(x), \end{equation*}% we see that $E\{\Phi (X,Y)|X+Y=z\}=\varphi _{p\star q}^{\prime }(z)$ is a monotone function of $z$, and hence, by the equivalence of (a) and (b) in Proposition~\ref{EquivalencesLogCon}, log-concavity of the convolution $p\star q=p_{X+Y}$ follows. \medskip (b) The proof of preservation of strong log-concavity under convolution for $p$ and $q$ strongly log-concave on $\mathbb{R}$ is similar to the proof of (a), with scores replaced by relative scores; it is interesting to note that a symmetry argument is needed. From Proposition~\ref{EquivalencesStrongLogConTry3}, strong log-concavity of $p$ and $q$ is equivalent to monotonicity of their relative score functions $\rho _{p}(x)\equiv \varphi _{p}^{\prime }(x)-x/\sigma ^{2}$ and $\rho _{q}(x)\equiv \varphi _{q}^{\prime }(x)-x/\tau ^{2}$ respectively. Now we take $m=2$, $\lambda \equiv \sigma ^{2}/(\sigma ^{2}+\tau ^{2})$, and define \begin{equation*} \Phi (x,y)=\lambda \rho _{p}(x)+(1-\lambda )\rho _{q}(y).
\end{equation*}% Thus $\Phi $ is coordinatewise monotone and, by using Lemma \ref{lem:ScoreProjectionMultivariate} with $d=1$, we find that \begin{equation*} E\{\Phi (X,Y)|X+Y=z\}=\varphi _{p\star q}^{\prime }(z)-\frac{z}{\sigma ^{2}+\tau ^{2}}=\rho _{p\star q}(z). \end{equation*}% Hence it follows from Efron's theorem that the relative score $\rho _{p\star q}$ of the convolution $p\star q$ is a monotone function of $z$. By Proposition~\ref{EquivalencesStrongLogConTry3}(b) it follows that $p\star q\in SLC_{1}(\sigma ^{2}+\tau ^{2},1)$. \end{proof} \subsection{A special case of Efron's theorem via symmetrization} We now consider a particular case of Efron's theorem. Our motivation is as follows: in order to prove that strong log-concavity is preserved under convolution, recall that we need to show monotonicity in $z$ of \begin{equation*} \rho _{X+Y}(z)=E\left\{ \frac{\sigma ^{2}}{\sigma ^{2}+\tau ^{2}}\rho _{X}(X)+\frac{\tau ^{2}}{\sigma ^{2}+\tau ^{2}}\rho _{Y}(Y)\bigg | X+Y=z\right\} \text{.} \end{equation*} Thus we only need to consider functions $\Phi $ of the form \begin{equation*} \Phi (X,Y)=\Psi \left( X\right) +\Gamma \left( Y\right), \end{equation*} where $\Psi $ and $\Gamma $ are non-decreasing, and show the monotonicity of \begin{equation*} E\left\{ \Phi (X,Y)\bigg |X+Y=z\right\} \end{equation*} for functions $\Phi$ of this special form. By symmetry between $X$ and $Y$, this reduces to the study of the monotonicity of \begin{equation*} E\left\{ \Psi \left( X\right) \bigg |X+Y=z\right\} \text{.} \end{equation*} We now give a simple proof of this monotonicity in dimension $1$. \begin{proposition} \label{prop_efron_spe_dim_1} Let $\Psi :\mathbb{R}\rightarrow \mathbb{R}$ be non-decreasing and suppose that $X \sim f_X$, $Y \sim f_Y$ are independent and that $f_X, f_Y$ are log-concave.
If the function $\eta : \mathbb{R} \rightarrow \mathbb{R}$ given by% \begin{equation*} \eta(z) \equiv E\left\{ \Psi \left( X\right) \bigg |X+Y=z\right\} \end{equation*}% is well-defined ($\Psi $ integrable with respect to the conditional law of $X$ given $X+Y$), then it is non-decreasing. \end{proposition} \begin{proof} First notice that by truncating the values of $\Psi $ and using the monotone convergence theorem, we may assume that $\Psi $ is bounded. Moreover, by Proposition~\ref{Lemma_convol_Gauss_multidim}, we may assume that $f_{Y}$ is $\mathcal{C}^{1}$ with finite Fisher information, thus justifying the following computations. We write \begin{equation*} E\left\{ \Psi \left( X\right) \bigg |X+Y=z\right\} =\int_{\mathbb{R}}\Psi \left( x\right) \frac{f_{X}\left( x\right) f_{Y}\left( z-x\right) }{F_{z}}dx\text{ ,} \end{equation*}% where \begin{equation*} F_{z}=\int_{\mathbb{R}}f_{X}\left( x\right) f_{Y}\left( z-x\right) dx>0\text{ .} \end{equation*}% Moreover, with $f_{X}=\exp (-\varphi _{X})$ and $f_{Y}=\exp (-\varphi _{Y})$%, \begin{eqnarray*} \lefteqn{\frac{\partial }{\partial z}\left( \Psi \left( x\right) \frac{f_{X}\left( x\right) f_{Y}\left( z-x\right) }{F_{z}}\right) } \\ &=&-\Psi \left( x\right) \varphi _{Y}^{\prime }\left( z-x\right) \frac{f_{X}\left( x\right) f_{Y}\left( z-x\right) }{F_{z}} \\ &&\ \ +\ \Psi \left( x\right) \frac{f_{X}\left( x\right) f_{Y}\left( z-x\right) }{F_{z}}\int_{\mathbb{R}}\varphi _{Y}^{\prime }\left( z-x\right) \frac{f_{X}\left( x\right) f_{Y}\left( z-x\right) }{F_{z}}dx\text{ ,} \end{eqnarray*}% where $\varphi _{Y}^{\prime }\left( y\right) =-f_{Y}^{\prime }\left( y\right) /f_{Y}\left( y\right) $. As $f_{X}$ is bounded (see Section \ref{section_reg}) and $Y$ has finite Fisher information, we deduce that $\int_{\mathbb{R}}\left\vert \varphi _{Y}^{\prime }\left( z-x\right) \right\vert f_{X}\left( x\right) f_{Y}\left( z-x\right) dx$ is finite.
Then, \begin{eqnarray*} \lefteqn{\frac{\partial }{\partial z}\left( E\left\{ \Psi \left( X\right) \bigg|X+Y=z\right\} \right) } \\ &=&-E\left\{ \Psi \left( X\right) \cdot \varphi _{Y}^{\prime }\left( Y\right) \bigg |X+Y=z\right\} +E\left\{ \Psi \left( X\right) \bigg |X+Y=z\right\} E\left\{ \varphi _{Y}^{\prime }\left( Y\right) \bigg |X+Y=z\right\} \\ &=&-\cov\left\{ \Psi \left( X\right) ,\varphi _{Y}^{\prime }\left( Y\right) \bigg |X+Y=z\right\} \text{ .} \end{eqnarray*}% If we show that the latter covariance is non-positive, the result will follow. Let $\left( \tilde{X},\tilde{Y}\right) $ be an independent copy of $\left( X,Y\right) $. Then \begin{eqnarray*} \lefteqn{E\left\{ \left( \Psi \left( X\right) -\Psi \left( \tilde{X}\right) \right) \left( \varphi _{Y}^{\prime }\left( Y\right) -\varphi _{Y}^{\prime } \left( \tilde{Y}\right) \right) \bigg|\tilde{X}+\tilde{Y}=z,X+Y=z\right\} } \\ & = & 2\cov\left\{ \Psi \left( X\right) ,\varphi _{Y}^{\prime }\left( Y\right) \bigg |X+Y=z\right\} \text{ .} \end{eqnarray*} Furthermore, since $X\geq \tilde{X}$ implies $Y\leq \tilde{Y}$ under the given condition $[X+Y=z, \tilde{X}+\tilde{Y} = z]$, \begin{eqnarray*} \lefteqn{E\left\{ \left( \Psi \left( X\right) -\Psi \left( \tilde{X}\right) \right) \left( \varphi _{Y}^{\prime }\left( Y\right) -\varphi _{Y}^{\prime} \left( \tilde{Y}\right) \right) \bigg|\tilde{X}+\tilde{Y}=z,X+Y=z\right\} } \\ &=&2E\left\{ \underset{\geq 0}{\underbrace{\left( \Psi \left( X\right) -\Psi \left( \tilde{X}\right) \right) }}\underset{\leq 0}{\underbrace{\left( \varphi _{Y}^{\prime }\left( Y\right) -\varphi _{Y}^{\prime }\left( \tilde{Y}\right) \right) }}\mathbf{1}_{\left\{ X\geq \tilde{X}\right\} } \bigg |X+Y=z,\tilde{X}+\tilde{Y}=z\right\} \\ &\leq &0. \end{eqnarray*} This proves Proposition~\ref{prop_efron_spe_dim_1}.
\end{proof} \subsection{Alternative proof of Efron's theorem via asymmetric Brascamp-Lieb inequalities} \label{subsec:AlternativePrfOfEfronViaAsymmBrascampLieb} Now our goal is to give a new proof of Efron's Theorem~\ref{EfronMonotonicityThm} in the case $m=2$ using results related to recent asymmetric Brascamp-Lieb inequalities and covariance formulas due to \cite{Menz-Otto:2013}. \begin{theorem}[Efron] Suppose that $\Phi :\mathbb{R}^{2}\rightarrow \mathbb{R}$ is coordinatewise non-decreasing and let \begin{equation*} g(z)\equiv E\left\{ \Phi (X,Y)\bigg |X+Y=z\right\} \text{ }, \end{equation*}% where $X$ and $Y$ are independent and log-concave. Then $g$ is non-decreasing. \end{theorem} \begin{proof} Notice that by truncating the values of $\Phi $ and using the monotone convergence theorem, we may assume that $\Phi $ is bounded. Moreover, by convolving $\Phi $ with a positive kernel, we preserve coordinatewise monotonicity of $\Phi $ and we may assume that $\Phi $ is $\mathcal{C}^{1}$. As $\Phi $ is taken to be bounded, choosing for instance a Gaussian kernel, it is easily seen that we can ensure that $\nabla \Phi $ is uniformly bounded on $\mathbb{R}^{2}$. Indeed, if \begin{equation*} \Psi _{\sigma ^{2}}\left( a,b\right) =\int_{\mathbb{R}^{2}}\Phi (x,y)\frac{1}{2\pi \sigma ^{2}}e^{-\left\Vert \left( a,b\right) -\left( x,y\right) \right\Vert ^{2}/2\sigma ^{2}}dxdy\text{ ,} \end{equation*}% then \begin{equation*} \nabla \Psi _{\sigma ^{2}}\left( a,b\right) =-\int_{\mathbb{R}^{2}}\Phi (x,y)\frac{\left( a,b\right) -\left( x,y\right) }{2\pi \sigma ^{4}}e^{-\left\Vert \left( a,b\right) -\left( x,y\right) \right\Vert ^{2}/2\sigma ^{2}}dxdy\text{ ,} \end{equation*}% whose norm is uniformly bounded in $\left( a,b\right) $ whenever $\Phi $ is bounded.
Notice also that by Proposition \ref{lemma_approx}, it suffices to prove the result for strictly (or strongly) log-concave variables that have $\mathcal{C}^{\infty }$ densities and finite Fisher information. We write% \begin{equation*} N\left( z\right) =\int_{\mathbb{R}}f_{X}\left( z-y\right) f_{Y}\left( y\right) dy \end{equation*}% and \begin{equation*} g(z)=\int_{\mathbb{R}}\Phi \left( z-y,y\right) \frac{f_{X}\left( z-y\right) f_{Y}\left( y\right) }{N\left( z\right) }dy\text{ ,} \end{equation*}% with $f_{X}=\exp \left( -\varphi _{X}\right) $ and $f_{Y}=\exp \left( -\varphi _{Y}\right) $ the respective strictly log-concave densities of $X$ and $Y$. We denote by $\mu _{X}$ and $\mu _{Y}$ the distributions of $X$ and $Y$ respectively. Since $\varphi _{X}^{\prime }$ is in $L_{2}\left( \mu _{X}\right) $ (which means that $\mu _{X}$ has finite Fisher information) and $f_{Y}$ is bounded (see Theorem \ref{theorem_exp_tails}), we get that $f_{X}^{\prime }\left( z-y\right) f_{Y}\left( y\right) =-\varphi _{X}^{\prime }\left( z-y\right) f_{X}\left( z-y\right) f_{Y}\left( y\right) $ is integrable and so $N$ is differentiable with derivative given by% \begin{equation*} N^{\prime }\left( z\right) =-\int_{\mathbb{R}}\varphi _{X}^{\prime }\left( z-y\right) f_{X}\left( z-y\right) f_{Y}\left( y\right) dy\text{ .} \end{equation*}% By differentiating with respect to $z$ inside the integral defining $g$ we get \begin{eqnarray} \lefteqn{\frac{d}{dz}\left( \Phi \left( z-y,y\right) \frac{f_{X}\left( z-y\right) f_{Y}\left( y\right) }{\int_{\mathbb{R}}f_{X}\left( z-y^{\prime }\right) f_{Y}\left( y^{\prime }\right) dy^{\prime }}\right)} \label{first-derivative_inside} \\ &=& \left( \partial _{1}\Phi \right) \left( z-y,y\right) \frac{f_{X}\left( z-y\right) f_{Y}\left( y\right) } {N\left( z\right) }-\Phi \left( z-y,y\right) \varphi _{X}^{\prime }\left( z-y\right) \frac{f_{X}\left( z-y\right) f_{Y}\left( y\right) }{N\left( z\right) } \notag \\ && \ + \ \Phi \left( z-y,y\right) \frac{f_{X}\left( z-y\right)
f_{Y}\left( y\right) } {N\left( z\right) }\int_{\mathbb{R}}\varphi _{X}^{\prime } \left( z-y\right) \frac{f_{X}\left( z-y\right) f_{Y}\left( y\right) } {N\left( z\right) }dy\text{ .} \notag \end{eqnarray}% We thus see that the quantity in (\ref{first-derivative_inside}) is integrable (with respect to Lebesgue measure) and we get \begin{equation} g^{\prime }\left( z\right) =\mathbb{E}\left[ \left( \partial _{1}\Phi \right) \left( X,Y\right) \left\vert X+Y=z\right. \right] -\cov\left[ \Phi \left( X,Y\right) ,\varphi _{X}^{\prime }\left( X\right) \left\vert X+Y=z\right. \right] \text{ .} \label{derivative_I} \end{equation}% Now, by symmetrization we have \begin{eqnarray*} \lefteqn{\cov\left[ \Phi \left( X,Y\right) ,\varphi _{X}^{\prime }\left( X\right) \left\vert X+Y=z\right. \right] } \\ &=&\mathbb{E}\left[ \left( \Phi \left( X,Y\right) -\Phi \left( \tilde{X},% \tilde{Y}\right) \right) \left( \varphi _{X}^{\prime }\left( X\right) -\varphi _{X}^{\prime }\left( \tilde{X}\right) \right) \mathbf{1}_{\left\{ X\geq \tilde{X}\right\} }\left\vert X+Y=z,\tilde{X}+\tilde{Y}=z\right. % \right] \\ &=&\mathbb{E}\left[ \left( \int_{\tilde{X}}^{X}\left( \partial _{1}\Phi -\partial _{2}\Phi \right) \left( u,z-u\right) du\right) \underset{\geq 0}{% \underbrace{\left( \varphi _{X}^{\prime }\left( X\right) -\varphi _{X}^{\prime }\left( \tilde{X}\right) \right) }}\mathbf{1}_{\left\{ X\geq \tilde{X}\right\} }\left\vert X+Y=z,\tilde{X}+\tilde{Y}=z\right. \right] \\ &\leq &\mathbb{E}\left[ \left( \int_{\tilde{X}}^{X}\left( \partial _{1}\Phi \right) \left( u,z-u\right) du\right) \left( \varphi _{X}^{\prime }\left( X\right) -\varphi _{X}^{\prime }\left( \tilde{X}\right) \right) \mathbf{1}% _{\left\{ X\geq \tilde{X}\right\} }\left\vert X+Y=z,\tilde{X}+\tilde{Y}% =z\right. \right] \\ &=&\cov\left[ \Phi _{1}\left( X\right) ,\varphi _{X}^{\prime }\left( X\right) \left\vert X+Y=z\right. 
\right] \text{ ,} \end{eqnarray*}% where $\Phi _{1}\left( x\right) =\int_{0}^{x}\left( \partial _{1}\Phi \right) \left( u,z-u\right) du$. We denote by $\eta $ the distribution of $X$ given $X+Y=z$. The measure $\eta $ has density $h_{z}\left( x\right) =N^{-1}\left( z\right) f_{X}\left( x\right) f_{Y}\left( z-x\right) $, $x\in \mathbb{R}$. Notice that $h_{z}$ is strictly log-concave on $\mathbb{R}$ and that for all $x\in \mathbb{R}$, \begin{equation*} \left( -\log h_{z}\right) ^{\prime \prime }\left( x\right) = \varphi_{X}^{\prime \prime }\left( x\right) +\varphi _{Y}^{\prime \prime }\left( z-x\right) \text{ .} \end{equation*} Now we are able to use the asymmetric Brascamp-Lieb inequality of \cite{Menz-Otto:2013} (Lemma 2.11, page 2190, with their $\delta \psi \equiv 0$ so their $\psi =\psi _{c}$ with $\psi ^{\prime \prime }>0$) or \cite{CarlenCordero-ErausquinLieb} ((1.2), page 2); see Proposition~\ref{prop:MenzOtto} below. This yields \begin{eqnarray*} \lefteqn{\cov\left[ \Phi _{1}\left( X\right) ,\varphi _{X}^{\prime }\left( X\right) \left\vert X+Y=z\right. \right] } \\ &=&\int_{\mathbb{R}}\left( \Phi _{1}\left( x\right) -\mathbb{E}\left[ \Phi _{1}\left( X\right) \left\vert X+Y=z\right. \right] \right) \left( \varphi _{X}^{\prime }\left( x\right) -\mathbb{E}\left[ \varphi _{X}^{\prime }\left( X\right) \left\vert X+Y=z\right. \right] \right) h_{z}\left( x\right) dx \\ &\leq &\sup_{x\in \mathbb{R}}\left\{ \frac{\varphi _{X}^{\prime \prime }\left( x\right) }{\left( -\log h_{z}\right) ^{\prime \prime }\left( x\right) }\right\} \int_{\mathbb{R}}\Phi _{1}^{\prime }\left( x\right) h_{z}\left( x\right) dx \\ &=&\sup_{x\in \mathbb{R}}\left\{ \frac{\varphi _{X}^{\prime \prime }\left( x\right) }{\varphi _{X}^{\prime \prime }\left( x\right) +\varphi _{Y}^{\prime \prime }\left( z-x\right) }\right\} \mathbb{E}\left[ \left( \partial _{1}\Phi \right) \left( X,Y\right) \left\vert X+Y=z\right.
\right] \\ &\leq &\mathbb{E}\left[ \left( \partial _{1}\Phi \right) \left( X,Y\right) \left\vert X+Y=z\right. \right] \text{ .} \end{eqnarray*}% Using the latter bound in (\ref{derivative_I}) then gives the result. \end{proof} \section{Preservation of log-concavity and strong log-concavity under convolution in $\mathbb{R}^d$ via Brascamp-Lieb inequalities and towards a proof via scores} \label{sec:StrongLogConcavityPreservMultivariateCase} In Sections~\ref{sec:EfronsTheoremOneDimension} and~\ref{Sec:DiscreteLogConc}, we used Efron's monotonicity theorem (Theorem~\ref{EfronMonotonicityThm}) to give alternative proofs of the preservation of log-concavity and strong log-concavity under convolution in the cases of continuous or discrete random variables on $\mathbb{R}$ or $\mathbb{Z}$ respectively. In the former case, we also used asymmetric Brascamp-Lieb inequalities to give a new proof of Efron's monotonicity theorem. In this section we look at preservation of log-concavity and strong log-concavity under convolution in $\mathbb{R}^{d}$ via:\newline (a) the variance inequality due to \cite{MR0450480};\newline (b) scores and potential (partial) generalizations of Efron's monotonicity theorem to $\mathbb{R}^{d}$.\newline While point (a) gives a complete answer (Section \ref{ssec:BrascampLiebPfSLCpreservedByConv}), the aim of point (b) is to exhibit an interesting link between preservation of (strong) log-concavity in $\mathbb{R}^{d}$ and a conjectured monotonicity property in $\mathbb{R}^{d}$ (Section \ref{ssection:SLC_Efron_multidim}). This latter property would be a partial generalization of Efron's monotonicity theorem to the multi-dimensional case, and further investigation is needed to prove such a result. We refer to Section~\ref{sec:AppendixA} (Appendix A) for further comments about the Brascamp-Lieb inequalities and related issues, as well as a review of various functional inequalities.
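In the Gaussian case, the preservation statement of point (a) can be checked in closed form: if $X\sim N(0,\Sigma _{X})$ with $\Sigma _{X}\leq \sigma ^{2}I$ and $Y\sim N(0,\Sigma _{Y})$ with $\Sigma _{Y}\leq \tau ^{2}I$ are independent, then $Z=X+Y\sim N(0,\Sigma _{X}+\Sigma _{Y})$ and $\nabla ^{2}(-\log p_{Z})=(\Sigma _{X}+\Sigma _{Y})^{-1}\geq (\sigma ^{2}+\tau ^{2})^{-1}I$. The following minimal numerical sketch (random illustrative covariances, our own construction) verifies this bound.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma2, tau2 = 4, 2.0, 3.0

def random_cov(bound, d):
    """Random covariance with largest eigenvalue <= bound, so that the
    centered Gaussian with this covariance lies in SLC_1(bound, d):
    the Hessian of -log density is the inverse covariance >= bound^{-1} I."""
    M = rng.standard_normal((d, d))
    S = M @ M.T
    return S * (bound / np.linalg.eigvalsh(S).max())

SX = random_cov(sigma2, d)      # X ~ N(0, SX) in SLC_1(sigma2, d)
SY = random_cov(tau2, d)        # Y ~ N(0, SY) in SLC_1(tau2, d)

# Z = X + Y ~ N(0, SX + SY); Hessian of -log p_Z is (SX + SY)^{-1}.
H = np.linalg.inv(SX + SY)
lam_min = np.linalg.eigvalsh(H).min()
assert lam_min >= 1.0 / (sigma2 + tau2) - 1e-12   # Z in SLC_1(sigma2 + tau2, d)
```

The inequality follows from $\lambda _{\max }(\Sigma _{X}+\Sigma _{Y})\leq \lambda _{\max }(\Sigma _{X})+\lambda _{\max }(\Sigma _{Y})$; the general (non-Gaussian) case is the content of the proposition proved below via the Brascamp-Lieb variance inequality.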
\subsection{Strong log-concavity is preserved by convolution (again): proof via second derivatives and a Brascamp-Lieb inequality} \label{ssec:BrascampLiebPfSLCpreservedByConv} We begin with a different proof of the version of Theorem~\ref{StrongLogConPreservOne}(b) corresponding to our first definition of strong log-concavity, Definition~\ref{strongLogConcaveDefn1}, which proceeds via the Brascamp-Lieb variance inequality as given in part (a) of Proposition~\ref{BLInequalityPlus}: \begin{proposition} \label{prop:MultSLCpreservedByConvViaBL} If $X \sim p \in SLC_1 (\sigma^2, d) $ and $Y\sim q \in SLC_1 (\tau^2,d)$ are independent, then $Z=X+Y \sim p\star q \in SLC_1( \sigma^2 + \tau^2, d)$. \end{proposition} \begin{proof} The density $p_{Z}=p_{X+Y}=p\star q$ is given by \begin{equation} p_{Z}(z)=\int p(x)q(z-x)dx=\int p(z-y)q(y)dy. \label{form_conv} \end{equation}% Here $p=\exp (-\varphi _{p})$ and $q=\exp (-\varphi _{q})$, where we may assume (by Proposition \ref{lemma_approx}) that $\varphi _{p},\varphi _{q}\in C^{2}$ and that $p$ and $q$ have finite Fisher information. Then, by Proposition~\ref{EquivalencesStrongLogConTry3}, \begin{equation*} {\nabla ^{2}\mathstrut }(\varphi _{p})(x)\geq \frac{1}{\sigma ^{2}}I,\ \ \mbox{and}\ \ {\nabla ^{2}\mathstrut }(\varphi _{q})(x)\geq \frac{1}{\tau ^{2}}I. \end{equation*}% As we can interchange differentiation and integration in (\ref{form_conv}) (see for instance the detailed arguments for a similar situation in the proof of Proposition \ref{prop_efron_spe_dim_1}), we find that \begin{equation*} \nabla (-\log p_{Z})(z)=-\frac{\nabla p_{Z}}{p_{Z}}\left( z\right) =E\{\nabla \varphi _{q}(Y)|X+Y=z\}=E\{\nabla \varphi _{p}(X)|X+Y=z\}.
\end{equation*}% Then \begin{eqnarray*} \lefteqn{{\nabla^2 \mathstrut}(-\log p_{Z})(z)} \\ &=&\nabla \left\{ E[q(z-X)\nabla (-\log q)(z-X)]\cdot \frac{1}{p_{Z}(z)}\right\} \\ &=&-E\{\nabla \varphi _{q}(Y)(\nabla \varphi _{q}(Y))^{T}|X+Y=z\} \\ &&\qquad +\ E\{{\nabla ^{2}\mathstrut }\,\varphi _{q}(Y)|X+Y=z\}+\left( E\{\nabla \varphi _{q}(Y)|X+Y=z\}\right) ^{\otimes 2} \\ &=&-Var(\nabla \varphi _{q}(Y)|X+Y=z)+E\{{\nabla ^{2}\mathstrut }\,\varphi _{q}(Y)|X+Y=z\} \\ &=&-Var(\nabla \varphi _{p}(X)|X+Y=z)+E\{{\nabla ^{2}\mathstrut }\,\varphi _{p}(X)|X+Y=z\}. \end{eqnarray*}% Now we apply \cite{MR0450480} Theorem 4.1 (see Proposition~\ref{BLInequalityPlus}(a)) with \begin{eqnarray} &&h(x)=\nabla _{z}\varphi _{q}(z-x), \\ &&F(x)=p(x)q(z-x), \label{NotationConvSpecialCase} \end{eqnarray}% to obtain \begin{eqnarray*} \lefteqn{Var(\nabla _{z}\varphi _{q}(Y)|X+Y=z)} \\ &\leq &\int_{\mathbb{R}^{d}}{\nabla ^{2}\mathstrut }\,\varphi _{q}(z-x)\left\{ {\nabla ^{2}\mathstrut }\,\varphi _{p}(x)+{\nabla ^{2}\mathstrut }\,\varphi _{q}(z-x)\right\} ^{-1}\cdot {\nabla ^{2}\mathstrut }\,\varphi _{q}(z-x)\frac{F(x)}{\int_{\mathbb{R}^{d}}F(x^{\prime })dx^{\prime }}dx. \end{eqnarray*}% This in turn yields \begin{eqnarray} \lefteqn{{\nabla^2 \mathstrut}(-\log p_{Z})(z)} \label{BCLspecialCase1} \\ &\geq &E\left\{ {\nabla ^{2}\mathstrut }\,\varphi _{q}(Y)-{\nabla ^{2}\mathstrut }\,\varphi _{q}(Y)\big [{\nabla ^{2}\mathstrut }\,\varphi _{p}(X)+{\nabla ^{2}\mathstrut }\,\varphi _{q}(Y)\big ]^{-1}{\nabla ^{2}\mathstrut }\,\varphi _{q}(Y)|X+Y=z\right\} . \notag \end{eqnarray}% By symmetry between $X$ and $Y$ we also have \begin{eqnarray} \lefteqn{{\nabla^2 \mathstrut}(-\log p_{Z})(z)} \label{BCLspecialCase2} \\ &\geq &E\left\{ {\nabla ^{2}\mathstrut }\,\varphi _{p}(X)-{\nabla ^{2}\mathstrut }\,\varphi _{p}(X)\big [{\nabla ^{2}\mathstrut }\,\varphi _{p}(X)+{\nabla ^{2}\mathstrut }\,\varphi _{q}(Y)\big ]^{-1}{\nabla ^{2}\mathstrut }\,\varphi _{p}(X)|X+Y=z\right\} .
\notag \end{eqnarray}% In proving the inequalities in the last two displays we have in fact reproved Theorem 4.2 of \cite{MR0450480} in our special case given by (\ref{NotationConvSpecialCase}). Indeed, Inequality (4.7) of Theorem 4.2 in \cite{MR0450480} applied to our special case is the first of the two inequalities displayed above. Now we combine (\ref{BCLspecialCase1}) and (\ref{BCLspecialCase2}). We set \begin{eqnarray*} && \alpha \equiv \frac{\sigma^2}{\sigma^2+\tau^2}, \ \ \ \beta \equiv 1-\alpha = \frac{\tau^2}{\sigma^2 + \tau^2} , \\ && A \equiv \big [ {\nabla^2 \mathstrut} \, \varphi_p (X) + {\nabla^2 \mathstrut} \, \varphi_q (Y) \big ]^{-1} , \\ && s = s(X) \equiv {\nabla^2 \mathstrut} \, \varphi_p (X), \ \ \ t = t(Y) \equiv {\nabla^2 \mathstrut} \, \varphi_q (Y) . \end{eqnarray*} We get from (\ref{BCLspecialCase1}) and (\ref{BCLspecialCase2}): \begin{eqnarray*} \lefteqn{{\nabla^2 \mathstrut} (-\log p_{Z}) (z) } \label{BCLspecialCaseSymmetricOne} \\ & \ge & E \left \{ \alpha s + \beta t - \alpha s A s - \beta t A t \big | X+Y = z \right \} \\ & = & E \left \{ (\alpha s + \beta t) A (s+t) - \alpha s A s - \beta t A t \big | X+Y = z \right \} \\ && \qquad \mbox{since} \ A (s+t) = I \equiv \mbox{identity} \\ & = & E \left \{ \alpha s A t + \beta t A s \big | X+Y = z \right \} . \end{eqnarray*} Now \begin{eqnarray*} \alpha s A t & = & \frac{\sigma^2}{\sigma^2 + \tau^2} {\nabla^2 \mathstrut} \, \varphi_p (X) \big [ {\nabla^2 \mathstrut} \, \varphi_p (X) + {\nabla^2 \mathstrut} \, \varphi_q (Y) \big ]^{-1} {\nabla^2 \mathstrut} \, \varphi_q (Y) \\ & = & \frac{\sigma^2}{\sigma^2 + \tau^2} \big [ ({\nabla^2 \mathstrut} \, \varphi_p )^{-1} (X) + ( {\nabla^2 \mathstrut} \, \varphi_q)^{-1} (Y) \big ]^{-1} .
\end{eqnarray*} By symmetry \begin{eqnarray*} \beta t A s & = & \frac{\tau^2}{\sigma^2+\tau^2} \big [ ({\nabla^2 \mathstrut} \, \varphi_p )^{-1} (X) + ( {\nabla^2 \mathstrut} \, \varphi_q)^{-1} (Y) \big ]^{-1} \end{eqnarray*} and, since $\alpha + \beta = 1$, we therefore conclude that \begin{eqnarray*} \lefteqn{{\nabla^2 \mathstrut} (-\log p_{Z}) (z) } \label{BCLspecialCaseSymmetricTwo} \\ & \ge & E \left \{ \big [ ({\nabla^2 \mathstrut} \, \varphi_p )^{-1} (X) + ( {\nabla^2 \mathstrut} \, \varphi_q)^{-1} (Y) \big ]^{-1} \big | X+Y = z \right \} \\ & \ge & \frac{1}{\sigma^2 + \tau^2} I . \end{eqnarray*} Note that the resulting inequality \begin{eqnarray*} {\nabla^2 \mathstrut} (-\log p_{Z}) (z) \ge E \left \{ \big [ ({\nabla^2 \mathstrut} \, \varphi_p )^{-1} (X) + ( {\nabla^2 \mathstrut} \, \varphi_q)^{-1} (Y) \big ]^{-1} \big | X+Y = z \right \} \end{eqnarray*} also gives the right lower bound for convolution of strongly log-concave densities in the definition of $SLC_2( \mu, \Sigma,d)$, namely \begin{eqnarray*} {\nabla^2 \mathstrut} (-\log p_{Z}) (z) \ge ( \Sigma_X + \Sigma_Y )^{-1} . \label{BCLspecialCaseSymmetricThree} \end{eqnarray*} \end{proof} \subsection{Strong log-concavity is preserved by convolution (again): towards a proof via scores and a multivariate Efron inequality\label{ssection:SLC_Efron_multidim}} We saw in the previous sections that Efron's monotonicity theorem allows one to prove stability under convolution for (strongly) log-concave measures on $\mathbb{R}$. However, this stability also holds in $\mathbb{R}^{d}$, $d>1$. This gives rise to the following two natural questions: does a generalization of Efron's theorem to higher dimensions exist? Does it allow one to recover stability under convolution for log-concave measures in $\mathbb{R}^{d}$? Let us begin with a projection formula for scores in dimension $d$.
\begin{lemma} \label{lem:ScoreProjectionMultivariate} (Projection) Suppose that $X$ and $Y$ are $d$-dimensional independent random vectors with log-concave densities $p_{X}$ and $q_{Y}$ respectively on $\mathbb{R}^{d}$. Then $\nabla \varphi_{X+Y}$ and $\rho _{X+Y}:\mathbb{R}^{d}\rightarrow \mathbb{R}^{d}$ are given by \begin{eqnarray*} \nabla \varphi_{X+Y} (z) = E \left \{ \lambda \nabla \varphi_X (X) + (1-\lambda) \nabla \varphi_{Y} (Y) | X+Y = z \right \} \end{eqnarray*} for each $\lambda \in [0,1]$, and, if $p_X \in SLC_1 (\sigma^2, d)$ and $p_Y \in SLC_1 (\tau^2, d)$, then \begin{eqnarray*} \rho _{X+Y}(z) = E\left\{ \frac{\sigma ^{2}}{\sigma ^{2}+\tau ^{2}}\rho_{X}(X) +\frac{\tau ^{2}}{\sigma ^{2}+\tau ^{2}}\rho _{Y}(Y)\bigg | X+Y=z\right\} . \end{eqnarray*} \end{lemma} \smallskip \begin{proof} This can be proved just as in the one-dimensional case, much as in \cite{MR659464}, but proceeding coordinate by coordinate. \end{proof} Since we know from Propositions~\ref{EquivalencesLogCon} and~\ref{EquivalencesStrongLogConTry3} that the scores $\nabla \varphi_X$ and $\nabla \varphi_Y$ and the relative scores $\rho_X$ and $\rho_Y$ are multivariate monotone, the projection Lemma~\ref{lem:ScoreProjectionMultivariate} suggests that proofs of preservation of multivariate log-concavity and strong log-concavity might be possible via a multivariate generalization of Efron's monotonicity Theorem~\ref{EfronMonotonicityThm} to $d\ge 2$ along the following lines: Suppose that $\Phi :(\mathbb{R}^{d})^{m}\rightarrow \mathbb{R}^{d}$ is coordinatewise multivariate monotone: for each fixed $j\in \{1,\ldots ,m\}$ the function $\Phi _{j}:\mathbb{R}^{d}\rightarrow \mathbb{R}^{d}$ defined by \begin{equation*} \Phi _{j}(x)=\Phi (x_{1},\ldots ,x_{j-1},x,x_{j+1},\ldots ,x_{m}) \end{equation*}% is multivariate monotone: that is \begin{equation*} \langle \Phi _{j}(x_{1})-\Phi _{j}(x_{2}),x_{1}-x_{2}\rangle \geq 0\ \ \mbox{for all}\ \ x_{1},x_{2}\in \mathbb{R}^{d}.
\end{equation*}% If $X_{1},\ldots ,X_{m}$ are independent with $X_{j}\sim f_{j}$ log-concave on $\mathbb{R}^{d}$, then it might seem natural to conjecture that the function $g$ defined by \begin{equation*} g(z)\equiv E\left\{ \Phi (X_{1},\ldots ,X_{m})\bigg |X_{1}+\cdots +X_{m}=z\right\} \end{equation*}% is a monotone function of $z\in \mathbb{R}^{d}$: \begin{equation*} \langle g(z_{1})-g(z_{2}),z_{1}-z_{2}\rangle \geq 0\ \ \mbox{for all}\ \ z_{1},z_{2}\in \mathbb{R}^{d}. \end{equation*} Unfortunately, this seemingly natural generalization of Efron's theorem does not hold without further assumptions. In fact, it fails for $m=2$ and $\mathbf{X}_1, \mathbf{X}_2$ Gaussian with covariances $\Sigma_1$ and $\Sigma_2$ sufficiently different. For an explicit example see \cite{Saumard-Wellner:13}. \smallskip Again, as in the one-dimensional case, the result for $m$ random vectors would follow from the cases $2,\ldots ,m-1$, so it suffices to prove such a result for $m=2$ random vectors. Since everything reduces to the case where $\Phi $ is a function of two variables (either for Efron's theorem or for a multivariate generalization), we will restrict ourselves to this situation. Thus if we define \begin{equation*} g(s)\equiv E\left\{ \Phi (X_{1},X_{2})\bigg |X_{1}+X_{2}=s\right\} \text{ }, \end{equation*}% then we want to show that \begin{equation*} \langle g(s_{1})-g(s_{2}),s_{1}-s_{2}\rangle \geq 0\ \ \mbox{for all}\ \ s_{1},\ s_{2}\in \mathbb{R}^{d}\text{ }. \end{equation*} Finally, our approach to Efron's monotonicity theorem in dimension $d\geq 2$ is based on the following remark.
\begin{remark}
\label{remark_link_cov_Efron_multidim}
For suitable regularity of $\Phi :(\mathbb{R}^{d})^{2}\rightarrow
\mathbb{R}^{d}$ and $\rho _{X}:\mathbb{R}^{d}\rightarrow \mathbb{R}^{d}$, we
have
\begin{eqnarray*}
\left( \nabla g\right) \left( z\right) & = &E\left\{ \left( \nabla _{1}\Phi
\right) (X,Y)\bigg |X+Y=z\right\} \\
&& \ \ -\ \cov\left\{ \Phi (X,Y),\rho _{X}\left( X\right) \bigg |
X+Y=z\right\} \ \ \left( \in \mathbb{R}^{d\times d}\right) \text{ .}
\end{eqnarray*}
Recall that $\nabla _{1}\Phi \equiv \nabla \Phi _{1}:(\mathbb{R}^{d})^{2}
\rightarrow \mathbb{R}^{d\times d}$. Furthermore, the matrix $\left( \nabla
g\right) \left( z\right) $ is positive semi-definite if $a^{T}\nabla
g(z)a\geq 0$ for all $a\in \mathbb{R}^{d}$, which leads to asking if the
following covariance inequality holds:
\begin{equation}
\cov\left\{ a^{T}\Phi (X,Y),\rho _{X}^{T}(X)a\bigg |X+Y=z\right\} \leq
E\left\{ a^{T}\nabla _{1}\Phi (X,Y)a\bigg |X+Y=z\right\} \text{?}
\label{cov_ineq_Efron_multidim}
\end{equation}%
Covariance inequality (\ref{cov_ineq_Efron_multidim}) would imply a
multivariate generalization of Efron's theorem (under sufficient
regularity).
\end{remark}

\section{Peakedness and log-concavity}
\label{sec:peakedness}

Here is a summary of the results of \cite{MR0187269}, \cite{MR927147},
\cite{MR2095937}, and \cite{MR994278}. First, from \cite{MR2095937}: let $f$
be log-concave, and let $g$ be convex. Then if $X \sim N_d (\mu, \Sigma)
\equiv \gamma$,
\begin{eqnarray}
E \{ g(X+\mu-\nu) f(X) \} \le E f(X) \cdot Eg(X)
\label{HargeInequalityConvexGfirstForm}
\end{eqnarray}
where $\mu = E(X)$, $\nu = E( X f(X) )/E ( f(X))$. Assuming that $f \ge 0$,
and writing $\tilde{f} d \gamma \equiv f d \gamma / \int f d \gamma $,
$\tilde{g}(x - \mu) \equiv g(x)$, and $\tilde{X} \sim \tilde{f} d \gamma$ so
that $\tilde{X}$ is strongly log-concave, this can be rewritten as
\begin{eqnarray}
E \tilde{g} (\tilde{X} - E(\tilde{X} )) \le E \tilde{g}(X- \mu) .
\label{HargeInequalityConvexGsecondForm} \end{eqnarray} In particular, for $\tilde{g}(x) = | x |^r$ with $r \ge 1$, \begin{equation*} E | \tilde{X} - \tilde{\mu} |^r \le E | X - \mu |^r , \end{equation*} and for $\tilde{g}(x) = | a^T x |^r$ with $a \in \mathbb{R}^d$, $r\ge 1$, \begin{equation*} E | a^T(\tilde{X} - \tilde{\mu} ) |^r \le E | a^T ( X- \mu ) |^r, \end{equation*} which is Theorem 5.1 of \cite{MR0450480}. Writing (\ref% {HargeInequalityConvexGfirstForm}) as (\ref{HargeInequalityConvexGsecondForm}% ) makes it seem more related to the ``peakedness'' results of \cite{MR927147} to which we now turn. \medskip An $n$-dimensional random vector $Y$ is said to be \textsl{more peaked} than a vector $X$ if they have densities and if \begin{equation*} P(Y \in A) \ge P(X \in A) \end{equation*} holds for all $A \in \mathcal{A}_n$, the class of compact, convex, symmetric (about the origin) Borel sets in $\mathbb{R}^n$. When this holds we will write $Y \overset{p}{\ge} X$. A vector $a $ \textsl{majorizes} the vector $b$ (and we write $a \succ b$) if $\sum_{i=1}^k b_{[i]} \le \sum_{i=1}^k a_{[i]}$ for $k=1, \ldots , n-1$ and $\sum_{i=1}^n b_{[i]} = \sum_{i=1}^n a_{[i]}$ where $a_{[1]} \ge a_{[2]} \ge \cdots \ge a_{[n]}$ and similarly for $b$. (In particular $b = (1, \ldots , 1)/n \prec (1,0, \ldots , 0) = a$.) \medskip \begin{proposition} (Sherman, 1955; see \cite{MR927147}) \label% {PeakednessPlusSymmetryPreservedByConvolution} Suppose that $f_1, f_2, g_1, g_2$ are log-concave densities on $\mathbb{R}^n$ which are symmetric about $0$. Suppose that $X_j \sim f_j$ and $Y_j \sim g_j $ for $j=1,2$ are independent. Suppose that $Y_1 \overset{p}{\ge} X_1$ and $% Y_2 \overset{p}{\ge} X_2$. Then $Y_1 + Y_2 \overset{p}{\ge} X_1 + X_2$. 
\end{proposition}

\begin{proposition}
\label{PeakednessPreservedWeightedSumsOnRealLine}
If $X_1, \ldots , X_n$ are independent random variables with log-concave
densities symmetric about $0$, and $Y_1, \ldots , Y_n$ are independent with
log-concave densities symmetric about $0$, and $Y_j \overset{p}{\ge} X_j$
for $j = 1, \ldots , n$, then
\begin{equation*}
\sum_{j=1}^n c_j Y_j \overset{p}{\ge} \sum_{j=1}^n c_j X_j
\end{equation*}
for all real numbers $\{ c_j \}$.
\end{proposition}

\begin{proposition}
\label{PeakednessPreservedUnderConvergenceInDistr}
If $\{ X_m \}$ and $\{ Y_m \}$ are two sequences of $n-$dimensional random
vectors with $Y_m \overset{p}{\ge} X_m$ for each $m$ and $X_m \rightarrow_d
X$, $Y_m \rightarrow_d Y$, then $Y \overset{p}{\ge} X$.
\end{proposition}

\begin{proposition}
$Y \overset{p}{\ge} X$ if and only if $C Y \overset{p}{\ge } CX$ for all
$k\times n$ matrices $C$ with $k\le n$.
\end{proposition}

\medskip

\begin{proposition}
(\cite{MR0187269})
\label{ProschanMajorizationAndIIDLogConcaveOnR}
Suppose that $Z_1, \ldots , Z_n $ are i.i.d. random variables with
log-concave density symmetric about zero. If $a , b \in \mathbb{R}_+^n$ with
$a \succ b$ ($a$ majorizes $b$), then
\begin{equation*}
\sum_{j=1}^n b_j Z_j \overset{p}{\ge } \sum_{j=1}^n a_j Z_j \ \ \ \mbox{in}
\ \ \mathbb{R} .
\end{equation*}
\end{proposition}

\begin{proposition}
(\cite{MR927147})
\label{OlkinTongMajorizationAndIIDLogConcaveOnRd}
Suppose that $Z_1, \ldots , Z_n $ are i.i.d. $d-$dimensional random vectors
with log-concave density symmetric about zero. If $a , b \in \mathbb{R}_+^n$
with $a \succ b$ ($a$ majorizes $b$), then
\begin{equation*}
\sum_{j=1}^n b_j Z_j \overset{p}{\ge} \sum_{j=1}^n a_j Z_j \ \ \ \mbox{in}
\ \ \mathbb{R}^d .
\end{equation*}
\end{proposition}

Now let $\mathcal{K}_n \equiv \{ x \in \mathbb{R}^n : \ x_1 \le x_2 \le
\ldots \le x_n \}$.
For any $y \in \mathbb{R}^n$, let $\hat{y} = (\hat{y}_1, \ldots, \hat{y}_n )$
denote the projection of $y$ onto $\mathcal{K}_n$. Thus
$| y - \hat{y} |^2 = \min_{x \in \mathcal{K}_n} | y - x |^2$.

\begin{proposition}
\label{KellyOrderedMeansPeakednessForGaussian}
(\cite{MR994278}).
Suppose that $\underline{Y} = (Y_1, \ldots , Y_n)$ where $Y_j \sim N(\mu_j,
\sigma^2) $ are independent and $\mu_1\le \mu_2 \le \ldots \le \mu_n$. Thus
$\underline{\mu} \in {\mathcal K}_n$ and $\underline{\hat{\mu}} \equiv
\underline{\hat{Y}} \in {\mathcal K}_n$. Then
$\hat{\mu}_k - \mu_k \overset{p}{\ge} Y_k - \mu_k$ for each
$k \in \{ 1, \ldots , n \}$; i.e.
\begin{eqnarray*}
P( | \hat{\mu}_k - \mu_k | \le t ) \ge P( | Y_k - \mu_k | \le t ) \ \ \
\mbox{for all} \ \ t>0, \ \ k \in \{ 1, \ldots, n \}.
\end{eqnarray*}
\end{proposition}

\section{Some open problems and further connections with log-concavity}
\label{sec:OpenProblemsAndFurtherConnections}

\subsection{Two questions}

\noindent
\textbf{Question 1:} \ \ Does Kelly's
Proposition~\ref{KellyOrderedMeansPeakednessForGaussian} continue to hold if
the normal distributions of the $Y_i$'s are replaced by some other
centrally-symmetric log-concave distribution, for example Chernoff's
distribution (see \cite{Balabdaoui-Wellner:13})?

\medskip

\noindent
\textbf{Question 2:} \ \ \cite{Balabdaoui-Wellner:13} show that Chernoff's
distribution is log-concave. Is it strongly log-concave? A proof would
probably give a way of proving strong log-concavity for a large class of
functions of the form $f(x)=g(x)g(-x)$ where $g\in PF_{\infty }$ is the
density of the sum $\sum_{j=1}^{\infty }(Y_{j}-\mu _{j}) $ where the
$Y_{j}$'s are independent exponential random variables with means
$\mu _{j}$ satisfying $\sum_{j=1}^{\infty }\mu _{j}=\infty $ and
$\sum_{j=1}^{\infty }\mu _{j}^{2}<\infty $.
\subsection{Cross-connections with the families of hyperbolically monotone
densities}

A theory of hyperbolically monotone and completely monotone densities has
been developed by \cite{MR1224674}, \cite{MR1481175}.

\begin{definition}
A density $f$ on $\mathbb{R}^+$ is hyperbolically completely monotone if,
for each fixed $u>0$, $H(w) \equiv f(uv) f(u/v)$ is a completely monotone
function of $w = (v+1/v)/2 $. A density $f$ on $\mathbb{R}^+$ is
hyperbolically monotone of order $k$, or $f \in HM_k$, if the function $H$
satisfies $(-1)^j H^{(j)} (w) \ge 0$ for $j = 0, \ldots , k-1$ and
$(-1)^{k-1} H^{(k-1)} (w) $ is right-continuous and decreasing.
\end{definition}

For example, the exponential density $f(x) = e^{-x} 1_{(0,\infty)} (x)$ is
hyperbolically completely monotone, while the half-normal density
$f(x) = \sqrt{2/\pi} \exp (- x^2/2) 1_{(0,\infty)}(x) $ is $HM_1$ but not
$HM_2$. \cite{MR1481175} page 305 shows that if $X\sim f\in HM_{1}$, then
$\log X$ has the density $e^{x}f(e^{x})$, which is log-concave. Thus
$HM_{1}$ is closed under the formation of products: if $X_{1},\ldots ,X_{n}$
are independent with densities in $HM_{1}$, then $Y\equiv X_{1}\cdots X_{n}$
has density in $HM_{1}$.

\subsection{Suprema of Gaussian processes\label{ssec_suprema}}

\cite{MR2375638} use log-concavity of Gaussian measures to show that the
supremum of an arbitrary non-degenerate Gaussian process has a continuous
and strictly increasing distribution function. This is useful for bootstrap
theory in statistics. The methods used by \cite{MR2375638} originate in
\cite{MR0388475} and \cite{MR745081}; see \cite{MR1642391} chapters 1 and 4
for an exposition.

Furthermore, in relation to Example \ref{SupremumBrownianBridge}\ above, one
can ask what form the density of the maximum of a Gaussian process takes in
general. \cite{MR2415134} actually gives a complete characterization of the
distribution of suprema of Gaussian processes.
Indeed, the author proves that $F$ is the distribution of the supremum of a
general Gaussian process if and only if $\Phi ^{-1}\left( F\right) $ is
concave, where $\Phi ^{-1}$ is the inverse of the standard normal
distribution function on the real line. Interestingly, the
\textquotedblleft only if\textquotedblright\ part is a direct consequence of
the Brunn-Minkowski type inequality for the standard Gaussian measure
$\gamma _{d}$ on $\mathbb{R}^{d}$ due to \cite{MR745081}: for any Borel sets
$A, B \subset \mathbb{R}^{d}$ of positive measure and for all
$\lambda \in \left( 0,1\right) $,%
\begin{equation*}
\Phi ^{-1}\left( \gamma _{d}\left( \lambda A+\left( 1-\lambda \right)
B\right) \right) \geq \lambda \Phi ^{-1}\left( \gamma _{d}\left( A\right)
\right) +\left( 1-\lambda \right) \Phi ^{-1}\left( \gamma _{d}\left(
B\right) \right) \text{ .}
\end{equation*}

\subsection{Gaussian correlation conjecture}

The Gaussian correlation conjecture, first stated by \cite{MR0413364}, is as
follows. Let $A$ and $B$ be two symmetric convex sets. If $\mu $ is a
centered Gaussian measure on $\mathbb{R}^{n}$, then%
\begin{equation}
\mu \left( A\cap B\right) \geq \mu \left( A\right) \mu \left( B\right)
\text{ .}  \label{Gaussian_corr}
\end{equation}%
In other words, the correlation between the sets $A$ and $B$ under the
Gaussian measure $\mu $ is conjectured to be nonnegative. As the indicator
of a convex set is log-concave, the Gaussian correlation conjecture is
intimately related to log-concavity. In \cite{MR1742895}, the author gives
an elegant partial answer to Problem (\ref{Gaussian_corr}), using semigroup
techniques. The Gaussian correlation conjecture has indeed been proved to
hold when $d=2$ by \cite{MR0448705}, and by \cite{MR1742895} when one of the
sets is a symmetric ellipsoid and the other is convex symmetric.
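The $d=2$ case of (\ref{Gaussian_corr}) is easy to probe numerically. The
following Monte Carlo sketch is our own illustration (the correlation
$\rho$, the symmetric slabs $A=\{|x_1|\le a\}$, $B=\{|x_2|\le b\}$, and the
sample size are arbitrary choices); it estimates the gap
$\mu(A\cap B)-\mu(A)\mu(B)$, which Pitt's theorem asserts is nonnegative:

```python
import math
import random

def gaussian_correlation_gap(rho, a, b, n=200_000, seed=0):
    """Monte Carlo estimate of mu(A ∩ B) - mu(A)·mu(B) for a centered
    bivariate Gaussian with correlation rho, where A = {|x1| <= a} and
    B = {|x2| <= b} are symmetric convex sets (slabs)."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    na = nb = nab = 0
    for _ in range(n):
        g1, g2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x1, x2 = g1, rho * g1 + s * g2  # (x1, x2) has correlation rho
        ia, ib = abs(x1) <= a, abs(x2) <= b
        na += ia
        nb += ib
        nab += ia and ib
    return nab / n - (na / n) * (nb / n)
```

With $\rho =0.8$ and $a=b=1$ the estimated gap is clearly positive; with
$\rho =0$ the two coordinates are independent and the gap vanishes up to
Monte Carlo error.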
\cite{MR1894593} gave another proof of Harg\'{e}'s result, as a consequence
of Caffarelli's Contraction Theorem (for more on the latter theorem, see
Section \ref{ssection_opt_transp}\ below). Extending Caffarelli's
Contraction Theorem, \cite{MR2983070} also extended the result of Harg\'{e}
and Cordero-Erausquin, but without proving the full Gaussian correlation
conjecture.

\cite{MR1742895} gives some hints towards a complete solution of Problem
(\ref{Gaussian_corr}). Interestingly, a sufficient property would be the
preservation of log-concavity along a particular family of semigroups. More
precisely, let $A(x)$ be a positive definite matrix for each
$x\in \mathbb{R}^{d}$ and define
\begin{equation*}
Lf(x)=\frac{1}{2}\left( \mbox{div}(A(x)^{-1}\nabla f)-\nabla
f(x)^{T}A^{-1}(x)x\right) .
\end{equation*}%
The operator $L$ is the infinitesimal generator of an associated semigroup.
The question is: does $L$ preserve log-concavity? See \cite{MR1742895} and
\cite{MR1863297}.

For further connections involving the semi-group approach to correlation
inequalities, see \cite{MR1307413}, \cite{MR1334608}, \cite{MR2376572}, and
\cite{CattiauxGuillin:13}. Further connections in this direction involve the
theory of parabolic and heat-type partial differential equations; see e.g.
\cite{MR1033179}, \cite{MR1863297}, \cite{MR2462697}, \cite{MR684757},
\cite{MR692284}.

\subsection{Further connections with Poincar\'e, Sobolev, and log-Sobolev
inequalities}

For a very nice paper with interesting historical and expository passages,
see \cite{MR1800062}. Among other things, these authors establish an
entropic or log-Sobolev version of the Brascamp-Lieb type inequality under a
concavity assumption on $h^{T}\varphi ^{\prime \prime }(x)h$ for every $h$.
The methods in the latter paper build on \cite{Maurey:91}.
See \cite{bakry2014analysis}\ for a general introduction to these analytic
inequalities from a Markov diffusion operator viewpoint.

\subsection{Further connections with entropic central limit theorems}

This subject has its beginnings in the work of \cite{Linnik:59},
\cite{MR659464}, and \cite{MR815975}, but has interesting cross-connections
with log-concavity in the more recent papers of \cite{MR2128239},
\cite{MR1124273}, \cite{MR1991646}, \cite{MR2083473}, and \cite{MR2128238}.
More recently still, further results have been obtained by \cite{MR2077162},
\cite{MR2545242}, and \cite{MR2644890}.

\subsection{Connections with optimal transport and Caffarelli's contraction
theorem\label{ssection_opt_transp}}

\cite{MR2895086} give a nice survey about advances in transport
inequalities, with Section 7 devoted to strongly log-concave measures
(called measures with \textquotedblleft uniform convex
potentials\textquotedblright\ there). The theory of optimal transport is
developed in \cite{MR1964483} and \cite{MR2459454}. See also
\cite{MR1127042}, \cite{MR1124980}, \cite{MR1800860}, and \cite{MR2983070}
for results on (strongly) log-concave measures. The latter authors extend
the results of \cite{MR1800860} under a third derivative hypothesis on the
\textquotedblleft potential\textquotedblright\ $\varphi $.

In the following, we state the celebrated Caffarelli's Contraction Theorem
(\cite{MR1800860}). Let us recall some related notions. A Borel map $T$ is
said to \textit{push-forward} $\mu $ onto $\nu $, for two Borel probability
measures $\mu $ and $\nu $, denoted $T_{\ast }\left( \mu \right) =\nu $, if
for all Borel sets $A$, $\nu \left( A\right) =\mu \left( T^{-1}\left(
A\right) \right) $.
Then the Monge-Kantorovich problem (with respect to the quadratic cost) is
to find a map $T_{opt}$ such that%
\begin{equation*}
T_{opt}\in \arg \min_{T\,:\,T_{\ast }\left( \mu \right) =\nu }\left\{
\int_{\mathbb{R}^{d}}\left\vert T\left( x\right) -x\right\vert ^{2}d\mu
\left( x\right) \right\} \text{ .}
\end{equation*}%
The map $T_{opt}$ (when it exists) is called the Brenier map and it is $\mu
$-a.e. unique. Moreover, \cite{MR1100809} showed that Brenier maps are
characterized as gradients of convex functions (see also \cite{MR1369395}).
See \cite{MR2087149} for a very nice elementary introduction to monotone
transportation. We are now able to state Caffarelli's Contraction Theorem.

\begin{theorem}[\protect\cite{MR1800860}]
Let $b\in \mathbb{R}^{d}$, $c\in \mathbb{R}$ and let $V$ be a convex
function on $\mathbb{R}^{d}$. Let $A$ be a positive definite matrix in
$\mathbb{R}^{d\times d}$ and $Q$ be the following quadratic function,
\begin{equation*}
Q\left( x\right) =\left\langle Ax,x\right\rangle +\left\langle
b,x\right\rangle +c,\text{ \ }x\in \mathbb{R}^{d}\text{ .}
\end{equation*}
Let $\mu $ and $\nu $ denote two probability measures on $\mathbb{R}^{d}$
with respective densities $\exp \left( -Q\right) $ and $\exp \left( -\left(
Q+V\right) \right) $ with respect to Lebesgue measure. Then the Brenier map
$T_{opt}$ pushing $\mu$ forward onto $\nu $ is a contraction:
\begin{equation*}
\left\vert T_{opt}\left( x\right) -T_{opt}\left( y\right) \right\vert \leq
\left\vert x-y\right\vert \text{ \ \ for all }x,y\in \mathbb{R}^{d}\text{ .}
\end{equation*}
\end{theorem}

Notice that Caffarelli's Contraction Theorem is in particular valid when
$\mu $ is a Gaussian measure, and in that case $\nu $ is a strongly
log-concave measure.
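In dimension one the Brenier map is explicit: it is the monotone
rearrangement $F_{\nu }^{-1}\circ F_{\mu }$. The sketch below is our own
numerical illustration (the grid bounds, the target
$\nu \propto \exp (-x^{2}/2-x^{4}/4)$, i.e.\ the convex perturbation
$V(x)=x^{4}/4$ of the standard Gaussian potential, and the tolerances are
arbitrary choices); it computes the map on a grid and checks that its
difference quotients do not exceed $1$, as the contraction theorem predicts:

```python
import math

def brenier_map_1d(log_mu, log_nu, lo=-8.0, hi=8.0, m=2001):
    """Monotone rearrangement T = F_nu^{-1} o F_mu on a grid; in d = 1
    this is the Brenier map pushing mu forward onto nu.  log_mu and
    log_nu are log-densities up to additive constants."""
    xs = [lo + (hi - lo) * i / (m - 1) for i in range(m)]
    h = xs[1] - xs[0]

    def cdf(logf):  # normalized discrete CDF on the grid
        w = [math.exp(logf(x)) for x in xs]
        tot, c, out = sum(w), 0.0, []
        for wi in w:
            c += wi
            out.append(c / tot)
        return out

    F_mu, F_nu = cdf(log_mu), cdf(log_nu)
    T, j = [], 1
    for p in F_mu:  # invert F_nu at p by linear interpolation
        while j < m - 1 and F_nu[j] < p:
            j += 1
        t = (p - F_nu[j - 1]) / max(F_nu[j] - F_nu[j - 1], 1e-300)
        T.append(xs[j - 1] + min(max(t, 0.0), 1.0) * h)
    return xs, T

# mu = N(0,1) (density exp(-Q)); nu proportional to exp(-(Q + V)) with the
# convex perturbation V(x) = x^4/4, so nu is strongly log-concave.
xs, T = brenier_map_1d(lambda x: -x * x / 2, lambda x: -x * x / 2 - x ** 4 / 4)
core = range(len(xs) // 4, 3 * len(xs) // 4)  # avoid far-tail grid noise
slopes = [(T[i + 1] - T[i]) / (xs[i + 1] - xs[i]) for i in core]
```

On the central half of the grid the difference quotients stay below $1$ (the
map is a contraction) and above $0$ (it is increasing), up to discretization
error.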
\subsection{Concentration and convex geometry}

\cite{Guedon:12} gives a nice survey, explaining the connections between the
Hyperplane conjecture, the KLS conjecture, the Thin Shell conjecture, the
Variance conjecture, and the Weak and Strong moments conjecture. Related
papers include \cite{MR2846382} and \cite{FradelizaGuedonPajor:13}.

It is well-known that concentration properties are linked to the behavior of
moments. \cite{MR2857249} prove that if $\eta >0$ is a log-concave random
variable, then the function
\begin{equation*}
\bar{\lambda}_{p}=\frac{1}{\Gamma \left( p+1\right) }\mathbb{E}\left[ \eta
^{p}\right] ,\text{ \ \ \ }p\geq 0,
\end{equation*}%
is also log-concave, where $\Gamma $ is the classical Gamma function. This
is equivalent to having a so-called \textquotedblleft reverse Lyapunov
inequality\textquotedblright ,%
\begin{equation*}
\bar{\lambda}_{a}^{b-c}\bar{\lambda}_{c}^{a-b}\leq \bar{\lambda}_{b}^{a-c},
\text{ \ \ \ }a\geq b\geq c\geq 0.
\end{equation*}%
Also, \cite{MR2083386} proves that log-concavity of $\tilde{\lambda}_{p}=
\mathbb{E}\left[ \left( \eta /p \right) ^{p}\right] $ holds (this is a
consequence of the Pr\'{e}kopa-Leindler inequality). These results allow
\cite{MR2857249}, for instance, to prove sharp concentration results for the
information of a log-concave vector.

\subsection{Sampling from log-concave distributions; convergence of Markov
chain Monte Carlo algorithms}

Sampling from log-concave distributions has been studied by \cite{MR773927},
\cite{MR2910053} for log-concave densities on $\mathbb{R}$, and by
\cite{MR1284987, MR1304785}, \cite{MR1682608}, and \cite{MR2309621} for
log-concave densities on $\mathbb{R}^d$; see also \cite{lovaszVempala:03},
\cite{LovaszVempala:06}, \cite{MR1318794}, and \cite{MR1608200}. Several
different types of algorithms have been proposed: the rejection sampling
algorithm of \cite{MR773927} requires knowledge of the mode; see
\cite{MR2910053} for some improvements.
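To make the rejection idea concrete, here is a minimal sketch (our own toy
example with a hand-picked strongly log-concave target, not the algorithm of
the papers just cited): since $\exp (-x^{2}/2-x^{4}/4)\leq \exp (-x^{2}/2)$,
one can propose from $N(0,1)$ and accept with probability
$\exp (-x^{4}/4)$:

```python
import math
import random

def sample_quartic_gaussian(n, seed=0):
    """Rejection sampling for the log-concave density proportional to
    exp(-x^2/2 - x^4/4), using the domination
    exp(-x^2/2 - x^4/4) <= exp(-x^2/2) with N(0,1) proposals."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(0.0, 1.0)
        # accept with probability target(x)/envelope(x) = exp(-x^4/4)
        if rng.random() < math.exp(-x ** 4 / 4):
            out.append(x)
    return out
```

The target is symmetric, and since its potential $x^{2}/2+x^{4}/4$ has
second derivative $1+3x^{2}\geq 1$, the Poincar\'{e} inequality for strongly
log-concave measures bounds its variance by $1$; the sample variance of the
output is indeed well below $1$.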
The algorithms proposed by \cite{Gilk:Wild:adap:1992} are based on adaptive
rejection sampling. The algorithms of \cite{MR1994729} and \cite{MR1904830}
involve ``slice sampling''; and the algorithms of \cite{lovaszVempala:03},
\cite{LovaszVempala:06}, \cite{MR2309621} are based on random walk methods.
Log-concavity and bounds for log-concave densities play an important role in
the convergence properties of MCMC algorithms. For entry points to this
literature, see \cite{Gilk:Wild:adap:1992}, \cite{MR1425412},
\cite{MR1608152}, \cite{MR1904830}, \cite{MR1953771}, \cite{MR2877599}, and
\cite{MR2977521}.

\subsection{Laplace approximations}

Let $X_{1},\ldots ,X_{n}$ be i.i.d. real-valued random variables with
density $q$ and Laplace transform
\begin{equation*}
\phi \left( s\right) =\mathbb{E}\left[ \exp \left( s X_{1}\right) \right]
\text{ .}
\end{equation*}%
Let $x^{\ast }$ be the upper limit of the support of $q$ and let $\tau >0$
be the upper limit of finiteness of $\phi $. Let us assume that $q$ is
almost log-concave (see \cite{MR1354837} p.~155) on
$\left( x_{0},x^{\ast}\right) $ for some $x_{0}<x^{\ast}$. This means that
there exist two constants $c_{1}>c_{2}>0$ and two functions $c$ and $h$ on
$\mathbb{R}$ such that
\begin{equation*}
q\left( x\right) =c\left( x\right) \exp \left( -h\left( x\right) \right)
\text{, \ \ }x<x^{\ast }\text{ ,}
\end{equation*}%
where $c_{2}<c\left( x\right) <c_{1}$ whenever $x>x_{0}$ and $h$ is convex.
In particular, log-concave functions are almost log-concave for
$x_{0}=-\infty $. Now, fix $y\in \mathbb{R}$.
The saddlepoint $s$ associated to $y$ is defined by
\begin{equation*}
\left( \frac{d}{dt}\log \phi \right) \left( s\right) =y
\end{equation*}%
and the variance $\sigma ^{2}\left( s\right) $ is defined to be%
\begin{equation*}
\sigma ^{2}\left( s\right) =\left( \frac{d^{2}}{dt^{2}}\log \phi \right)
\left( s\right) \text{ .}
\end{equation*}%
Let $f_{n}$ denote the density of the empirical mean $\overline{X}
=\sum_{i=1}^{n}X_{i}/n$. By Theorem 1 of \cite{MR1094278}, if $q\in L^{\zeta
}\left( \lambda \right) $ for $1<\zeta <2$, then the following saddlepoint
approximations hold uniformly for $s_{0}<s<\tau $ for any $s_{0}>0$:
\begin{equation*}
f_{n}\left( y\right) =\sqrt{\frac{n}{2\pi \sigma ^{2}\left( s\right) }}\phi
\left( s\right) ^{n}\exp \left( -nsy\right) \left\{ 1+O\left( \frac{1}{n}
\right) \right\}
\end{equation*}%
and
\begin{equation*}
\mathbb{P}\left( \overline{X}>y\right) =\frac{\phi \left( s\right) ^{n}\exp
\left( -nsy\right) }{s\sigma \left( s\right) \sqrt{n}}\left\{ B_{0}\left(
s\sigma (s)\sqrt{n}\right) +O\left( \frac{1}{n}\right) \right\}
\end{equation*}%
where $B_{0}(z)=z\exp \left( z^{2}/2\right) \left( 1-\Phi \left( z\right)
\right) $ with $\Phi $ the standard normal distribution function. According
to \cite{MR1094278}, this result extends to the multidimensional setting,
where almost log-concavity is required on the entire space (and not just on
some directional tails).

As detailed in \cite{MR1354837}, saddlepoint approximations have many
applications in statistics, such as in testing or in Markov chain related
estimation problems. As Bayesian methods are usually computationally
expensive in practice, approximations of quantities linked to the
prior/posterior densities are usually needed. In connection with Laplace's
method, log-quadratic approximations of densities are especially well suited
to log-concave functions; see \cite{MR1354837}, \cite{BarberWilliams:97},
\cite{Minka:01}, and references therein.
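As a sanity check of the first display (our own worked example, not from
\cite{MR1094278}): for i.i.d. Exponential$(1)$ variables,
$\phi (s)=1/(1-s)$ for $s<\tau =1$, the saddlepoint is $s=1-1/y$,
$\sigma ^{2}(s)=y^{2}$, and $\overline{X}$ has an exact Gamma$(n,n)$
density, so the relative error of the saddlepoint density is computable in
closed form:

```python
import math

def saddlepoint_density_exp_mean(n, y):
    """Saddlepoint approximation to the density of the mean of n i.i.d.
    Exponential(1) variables at y > 0; here phi(s) = 1/(1-s), the
    saddlepoint is s = 1 - 1/y, and sigma^2(s) = 1/(1-s)^2 = y^2."""
    s = 1.0 - 1.0 / y
    sigma2 = y * y
    log_phi = -math.log(1.0 - s)  # log phi(s)
    return math.sqrt(n / (2.0 * math.pi * sigma2)) * math.exp(n * log_phi - n * s * y)

def exact_density_exp_mean(n, y):
    """Exact density of the mean: Gamma(n, rate n),
    f(y) = n^n y^(n-1) e^(-n y) / Gamma(n)."""
    return math.exp(n * math.log(n) + (n - 1) * math.log(y) - n * y - math.lgamma(n))
```

The ratio of the two densities equals $\sqrt{n/2\pi }\,e^{n}\Gamma
(n)/n^{n}$, independent of $y$, which by Stirling's formula is
$1+1/(12n)+O(n^{-2})$: a concrete instance of the $\left\{ 1+O\left(
1/n\right) \right\} $ factor above.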
\subsection{Machine learning algorithms and Gaussian process methods}

\cite{BougTarBouj:05} used the radius margin bound of \cite{MR1719582}\ on
the performance of a Support Vector Machine (SVM) in order to tune
hyper-parameters of the kernel. More precisely, they proved that for a
weighted $L^{1}$-distance kernel the radius is log-convex while the margin
is log-concave. Then they used this fact to efficiently tune the multiple
parameters of the kernel through a direct application of the Convex ConCave
Procedure (or CCCP) due to \cite{yuilRanga:03}. In contrast to the gradient
descent technique (\cite{ChapVapBouMukh:02}), \cite{BougTarBouj:05} show
that a variant of the CCCP which they call the Log Convex ConCave Procedure
(or LCCP) ensures that the radius margin bound decreases monotonically and
converges to a local minimum without a step-size search.

Bayesian methods based on Gaussian process priors have become popular in
statistics and machine learning: see, for example, \cite{Seeger:04},
\cite{MR2773550}, \cite{MR2418663}, and \cite{MR2819028}. These methods
require efficient computational techniques in order to be scalable, or even
tractable in practice. Thus, log-concavity of the quantities of interest
becomes important in this area, since it allows efficient optimization
schemes. In this context, \cite{paninski:04} shows that the predictive
densities corresponding to either classification, regression, density
estimation or point process intensity estimation models are log-concave
given any observed data. Furthermore, in the density estimation and point
process intensity estimation settings, the likelihood is log-concave in the
hyperparameters controlling the mean function of the Gaussian prior. In the
classification and regression settings, the likelihood is log-concave in the
mean, covariance, and observation noise parameters.
As noted in \cite{paninski:04}, the results still hold for much more general
prior distributions than Gaussian: it suffices that the prior and the noise
(in models where a noise appears) are jointly log-concave. The proofs are
based on preservation properties of log-concave functions, such as
preservation under pointwise limits and under marginalization.

\subsection{Compressed sensing and random matrices}

Compressed sensing, which aims at reconstructing sparse signals from
incomplete measurements, has been extensively studied since the seminal
works of \cite{MR2241189}, \cite{MR2230846}\ and \cite{MR2300700}. As
detailed in \cite{MR3113826}, compressed sensing is intimately linked to the
theory of random matrices. The matrix ensembles that are most frequently
used and studied are those linked to Gaussian matrices, Bernoulli matrices
and Fourier (sub-)matrices. By analogy with the Wishart Ensemble, the
Log-concave Ensemble is defined in \cite{MR2601042} to be the set of
$n\times n$ matrices of the form $AA^{\ast }$, where $A$ is an $n\times N$
matrix with i.i.d. columns that have an isotropic log-concave distribution.
\cite{MR2601042} show that the Log-concave Ensemble satisfies a sharp
Restricted Isometry Property (RIP); see also \cite{MR3113826} Chapter 2.

\subsection{Log-concave and s-concave as nonparametric function classes in
statistics}
\label{subsec:NonparametricStatistics}

Nonparametric estimation of log-concave densities was initiated by
\cite{MR1941467} in the context of testing for unimodality. For log-concave
densities on $\mathbb{R}$ it has been explored in more detail by
\cite{MR2546798}, \cite{MR2509075}, and recent results for estimation of
log-concave densities on $\mathbb{R}^{d} $ have been obtained by
\cite{MR2645484}, \cite{MR2758237}, \cite{MR2816336}.
\cite{MR2758237} formulate the problem of computing the maximum likelihood estimator of a multidimensional log-concave density as a non-differentiable convex optimization problem and propose an algorithm that combines techniques of computational geometry with Shor's r-algorithm to produce a sequence that converges to the estimator. An R version of the algorithm is available in the package LogConcDEAD: \textsl{Log-Concave Density Estimation in Arbitrary Dimensions}, with further description of the algorithm given in \cite{CuleGramacySamworth:2008}. Nonparametric estimation of $s-$% concave densities has been studied by \cite{MR2766867}. They show that the MLE exists and is Hellinger consistent. \cite{Doss-Wellner:2013} have obtained Hellinger rates of convergence for the maximum likelihood estimators of log-concave and $s-$concave densities on $\mathbb{R}$, while \cite{Kim-Samworth:14} study Hellinger rates of convergence for the MLEs of log-concave densities on $\mathbb{R}^d$. \cite{hen+ast06} consider replacement of Gaussian errors by log-concave error distributions in the context of the Kalman filter. \cite{MR2757433} gives a review of some of the recent progress. \section{Appendix A: Brascamp-Lieb inequalities and more} \label{sec:AppendixA} Let $\mathbf{X}$ have distribution $P$ with density $p=\exp (-\varphi )$ on $% \mathbb{R}^{d}$ where $\varphi $ is strictly convex and $\varphi \in C^{2}(% \mathbb{R}^{d});$ thus ${\nabla ^{2}\mathstrut }(\varphi )\left( x\right) =\varphi ^{\prime \prime }(x)>0,$ $x\in \mathbb{R}^{d}$ as symmetric matrices. Let $G,H$ be real-valued functions on $\mathbb{R}^{d}$ with $G,H\in C^{1}(\mathbb{R}^{d})\cap L_{2}(P)$. We let $H_{1} (P) $ denote the set of functions $f$ in $L_{2}\left( P\right) $ such that $\nabla f$ (in the distribution sense) is in $L_{2}\left( P\right) $. 
Let $\mathbf{Y}$ have distribution $Q$ with density $q=\psi ^{-\beta }$ on an open, convex set $\Omega \subset \mathbb{R}^{d}$ where $\beta >d$ and $% \psi $ is a positive, strictly convex and twice continuously differentiable function on $\Omega $. In particular, $Q$ is $s=-1/\left( \beta -d\right) $% -concave (see Definition \ref{defn:Sconcave} and \cite{MR0388475}, \cite% {MR0404559}). Let $T$ be a real-valued function on $\mathbb{R}^{d}$ with $% T\in C^{1}(\Omega )\cap L_{2}(Q)$. The following Proposition summarizes a number of analytic inequalities related to a Poincar\'{e}-type inequality from \cite{MR0450480}. Such inequalities are deeply connected to concentration of measure and isoperimetry, as exposed in \cite% {bakry2014analysis}. Concerning log-concave measures, these inequalities are also intimately linked to the geometry of convex bodies. Indeed, as noted by \cite{CarlenCordero-ErausquinLieb} page 9, \begin{quote} \textquotedblleft The Brascamp-Lieb inequality (1.3), as well as inequality (1.8), have connections with the geometry of convex bodies. It was observed in [2] (\cite{MR1800062}) that (1.3) (\textsl{see Proposition~\ref{BLInequalityPlus}, (a)}) can be deduced from the Pr\'{e}kopa-Leindler inequality (which is a functional form of the Brunn-Minkowski inequality). But the converse is also true: the Pr% \'{e}kopa theorem follows, by a local computation, from the Brascamp-Lieb inequality (see [5] (\cite{MR2115450}) where the procedure is explained in the more general complex setting). To sum up, the Brascamp-Lieb inequality (1.3) can be seen as the \textsl{local} form of the Brunn-Minkowski inequality for convex bodies.\textquotedblright \end{quote} \begin{proposition} \label{BLInequalityPlus} $\phantom{bla}$\newline (a) \cite{MR0450480}: If $p$ is strictly log-concave, then \begin{equation*} \var(G(\mathbf{X}))\leq E\left[ \nabla G(\mathbf{X})^{T}(\varphi ^{\prime \prime }(\mathbf{X}))^{-1}\nabla G(\mathbf{X})\right] \text{ }. 
\end{equation*}%
(b) If $p=\exp (-\varphi )$ where $\varphi ^{\prime \prime }\geq cI$ with
$c>0$, then
\begin{equation*}
\var (G(\mathbf{X}))\leq \frac{1}{c}E|\nabla G(\mathbf{X})|^{2}.
\end{equation*}%
(c) \cite{MR2376572}: If $\varphi \in L_{2}\left( P\right) $, then for all
$f\in H_{1} (P)$,
\begin{equation*}
\var \left( f\left( \mathbf{X}\right) \right) \leq E\left[ \nabla f(\mathbf{
X})^{T}(\varphi ^{\prime \prime }(\mathbf{X}))^{-1}\nabla f(\mathbf{X})
\right] -\frac{1+a/b}{d}\left( \cov \left( \varphi \left( \mathbf{X}\right)
,f\left( \mathbf{X}\right) \right) \right) ^{2}\text{ ,}
\end{equation*}
where
\begin{equation*}
a=\inf_{x\in \mathbb{R}^{d}}\min \left\{ \lambda \text{ eigenvalue of }
\varphi ^{\prime \prime }\left( x\right) \right\}
\end{equation*}
and
\begin{equation*}
b=\sup_{x\in \mathbb{R}^{d}}\max \left\{ \lambda \text{ eigenvalue of }
\varphi ^{\prime \prime }\left( x\right) \right\} \text{ .}
\end{equation*}
Notice that $0\leq a\leq b\leq +\infty $ and $b>0$.

(d) \cite{MR2510011}: If $U=\psi T$, then
\begin{equation*}
\left( \beta +1\right) \var (T(\mathbf{Y}))\leq E\left[ \frac{1}{\psi \left(
\mathbf{Y}\right) }\nabla U(\mathbf{Y})^{T}(\psi ^{\prime \prime }(\mathbf{Y
}))^{-1}\nabla U(\mathbf{Y})\right] +\frac{d}{\beta -d}E\left[ T(\mathbf{Y})
\right] ^{2}\text{ .}
\end{equation*}%
Taking $\psi =\exp \left( \varphi /\beta \right) $ and setting $R_{\varphi
,\beta }\equiv \varphi ^{\prime \prime }+\beta ^{-1}\nabla \varphi \otimes
\nabla \varphi $, this implies that for any $\beta \geq d$,%
\begin{equation*}
\var (G(\mathbf{X}))\leq C_{\beta }E\left[ \nabla G(\mathbf{X})^{T}(R_{
\varphi ,\beta }(\mathbf{X}))^{-1}\nabla G(\mathbf{X})\right] \text{ ,}
\end{equation*}%
where $C_{\beta }=\left( 1+\sqrt{\beta +1}\right) ^{2}/\beta $. Notice that
$1\leq C_{\beta }\leq 6$.
(e) \cite{MR1307413}: If $p=\exp (-\varphi )$ where $\varphi ^{\prime \prime
}\geq cI$ with $c>0$, then
\begin{equation*}
\mbox{Ent}_{P}\left( G^{2}(\mathbf{X})\right) \leq \frac{2}{c}E|\nabla G(
\mathbf{X})|^{2},
\end{equation*}%
where
\begin{equation*}
\mbox{Ent}_{P}(Y^{2})=E\left[ Y^{2}\log (Y^{2})\right] -E\left[ Y^{2}\right]
\log (E\left[ Y^{2}\right] ).
\end{equation*}%
(f) \cite{MR1600888}, \cite{MR1849347}: If the conclusion of (e) holds for
all smooth $G$, then $E_{P}\exp (\alpha |\mathbf{X}|^{2})<\infty $ for every
$\alpha <c/2$.\newline
(g) \cite{MR1742893}: If $E_{P}\exp (\alpha |\mathbf{X}|^{2})<\infty $ for a
log-concave measure $P$ and some $\alpha >0$, then the conclusion of (e)
holds for some $c=c_{d}$.\newline
(h) \cite{MR1800062}: If $\varphi $ is strongly convex with parameter $c>0$
with respect to a norm $\Vert \cdot \Vert $ (so $p$ is strongly log-concave
with respect to $\Vert \cdot \Vert $), then
\begin{equation*}
\mbox{Ent}_{P}(G^{2}(\mathbf{X}))\leq \frac{2}{c}E_{P}\Vert \nabla G(\mathbf{
X})\Vert _{\ast }^{2}
\end{equation*}%
for the dual norm $\Vert \cdot \Vert _{\ast }$.
\end{proposition}

Inequality (a) originated in \cite{MR0450480}, and the authors' original
proof is based on induction on the dimension. For more details about the
induction argument used by \cite{MR0450480}, see
\cite{CarlenCordero-ErausquinLieb}. Building on \cite{Maurey:91},
\cite{MR1800062} give a non-inductive proof of (a) based on the
Pr\'{e}kopa-Leindler theorem \cite{MR0315079}, \cite{MR0404557},
\cite{MR2199372}, which is the functional form of the celebrated
Brunn-Minkowski inequality. The converse is also true in the sense that the
Brascamp-Lieb inequality (a) implies the Pr\'{e}kopa-Leindler inequality;
see \cite{MR2115450}. Inequality (b) is an easy consequence of (a) and is
referred to as a Poincar\'{e} inequality for strongly log-concave measures.
Inequality (c) is a reinforcement of the Brascamp-Lieb inequality (a) due to
\cite{MR2376572}.
The proof is based on (Markovian) semi-group techniques; see \cite{bakry2014analysis} for a comprehensive introduction to these tools. In particular, \cite{MR2376572}, Lemma 7, gives a variance representation for strictly log-concave measures that directly implies the Brascamp-Lieb inequality (a). The first inequality in (d) is referred to in \cite{MR2510011} as a ``weighted Poincar\'{e}-type inequality'' for convex (or $s$-concave with negative parameter $s$) measures. It implies the second inequality of (d), which is a quantitative refinement of the Brascamp-Lieb inequality (a). Indeed, inequality (a) may be viewed as the limiting case of the second inequality of (d) for $\beta \rightarrow +\infty $ (as in this case $C_{\beta }\rightarrow 1$ and $R_{\varphi ,\beta }\rightarrow \varphi ^{\prime \prime } $). As noted in \cite{MR2510011}, for finite $\beta $ the second inequality of (d) may improve the Brascamp-Lieb inequality in terms of the decay of the weight. For example, when $Y$ is a random variable with exponential distribution with parameter $\lambda >0$ ($q\left( y\right) =\lambda e^{-\lambda y}$ on $\Omega =\left( 0,\infty \right) $), the second inequality in (d) gives the usual Poincar\'{e}-type inequality, \begin{equation*} \var (G(Y))\leq \frac{6}{\lambda ^{2}}E\left[ \left( G^{\prime }(Y)\right) ^{2} \right] \text{,} \end{equation*} which cannot be proved as a direct application of the Brascamp-Lieb inequality (a). \cite{MR799431} shows that the inequality in the last display holds (in the exponential case) with $6$ replaced by $4$, and establishes similar results for other distributions. The exponential and two-sided exponential (or Laplace) distributions are also treated by \cite{MR1440138}.
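Both the Poincar\'{e} inequality (b) and the exponential bound in the last display are straightforward to check numerically. The following sketch is our own illustration (not part of the paper); the quadrature grids and the test function $G=\sin$ are arbitrary choices.

```python
import numpy as np

# Our own numerical sanity check (not from the paper) of two variance
# bounds, with both sides evaluated by Riemann-sum quadrature.

# 1) Poincare inequality (b): for p = exp(-phi) with phi'' >= c > 0,
#    Var(G(X)) <= (1/c) E[(G'(X))^2].  Standard normal: phi''(x) = 1 = c.
x = np.linspace(-12.0, 12.0, 480001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
E_sin = (np.sin(x) * p * dx).sum()                    # = 0 by symmetry
var_norm = (((np.sin(x) - E_sin)**2) * p * dx).sum()  # exact: (1 - e^-2)/2
rhs_norm = ((np.cos(x)**2) * p * dx).sum()            # exact: (1 + e^-2)/2
assert var_norm <= rhs_norm

# 2) Weighted Poincare bound for Y ~ Exp(lambda) from the last display:
#    Var(G(Y)) <= (6/lambda^2) E[(G'(Y))^2].
lam = 1.0
y = np.linspace(0.0, 60.0, 600001)
dy = y[1] - y[0]
q = lam * np.exp(-lam * y)
EG = (np.sin(y) * q * dy).sum()                       # exact: 1/2
var_exp = (((np.sin(y) - EG)**2) * q * dy).sum()      # exact: 2/5 - 1/4
rhs_exp = ((np.cos(y)**2) * q * dy).sum() / lam**2    # exact: 3/5
assert var_exp <= 6 * rhs_exp
assert var_exp <= 4 * rhs_exp   # sharper constant 4 noted in the text
```

In the exponential case the two sides are far apart ($0.15$ versus $3.6$), consistent with the remark that the constant $6$ is not sharp.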
Points (e) to (h) deal, in the case of (strongly) log-concave measures, with the so-called logarithmic Sobolev inequality, which is known to strengthen the Poincar\'{e} inequality (also called the spectral gap inequality); see for instance Chapter 5 of \cite{bakry2014analysis}. In particular, \cite{MR1800062} proved their result in point (h) \textit{via} the use of the Pr\'{e}kopa-Leindler inequality. In their survey on optimal transport, \cite{MR2895086} show how to obtain the result of Bobkov and Ledoux from some transport inequalities. We now give a simple application of the Brascamp-Lieb inequality (a) that exhibits its relation to the Fisher information for location. \begin{example} \label{LinearG} Let $G(x)=a^{T}x$ for $a\in \mathbb{R}^{d}$. Then the inequality in (a) becomes \begin{equation} a^{T}\cov(\mathbf{X})a\leq a^{T}E\{[\varphi ^{\prime \prime }(\mathbf{X})]^{-1}\}a \label{CovMatrixBound} \end{equation} or equivalently \begin{equation*} \cov(\mathbf{X})\leq E\{[\varphi ^{\prime \prime }(\mathbf{X})]^{-1}\} \end{equation*} with equality if $\mathbf{X}\sim N_{d}(\mu ,\Sigma )$ with $\Sigma $ positive definite. When $d=1$, (\ref{CovMatrixBound}) becomes \begin{equation} \cov(\mathbf{X})\leq E[(\varphi ^{\prime \prime })^{-1}(\mathbf{X})]=E[((-\log p)^{\prime \prime })^{-1}(\mathbf{X})] \label{VarianceBound} \end{equation} while on the other hand \begin{equation} \cov(\mathbf{X})\geq \lbrack E(\varphi ^{\prime \prime })(\mathbf{X})]^{-1}\equiv I_{loc}^{-1}(\mathbf{X}) \label{CRVarianceLowerBound} \end{equation} where $I_{loc}(X)=E(\varphi ^{\prime \prime })$ denotes the Fisher information for location (in $X$ or $p$); in fact, for $d\geq 1$, \begin{equation*} \cov(\mathbf{X})\geq \lbrack E(\varphi ^{\prime \prime })(\mathbf{X})]^{-1}\equiv I_{loc}^{-1}(\mathbf{X}) \end{equation*} where $I_{loc}(X)\equiv E(\varphi ^{\prime \prime })$ is the Fisher information matrix (for location). If $X\sim N_{d}(\mu ,\Sigma )$ then equality holds (again).
On the other hand, when $d=1$ and $p$ is the logistic density given in Example~\ref{LogisticDensity}, then $\varphi ^{\prime \prime }=2p$ so the right side in (\ref{VarianceBound}) becomes $E\{(2p(\mathbf{X}))^{-1}\}=\int_{\mathbb{R}}(1/2)dx=\infty $, while $\var (\mathbf{X})=\pi ^{2}/3$ and $I_{loc}(\mathbf{X})=1/3$, so the inequality (\ref{VarianceBound}) holds trivially and the inequality (\ref{CRVarianceLowerBound}) holds with strict inequality: \begin{equation*} 3=I_{loc}^{-1}(\mathbf{X})<\frac{\pi ^{2}}{3}=\var (\mathbf{X})<E[(\varphi ^{\prime \prime })^{-1}(\mathbf{X})]=\infty . \end{equation*} (Thus while $X$ is slightly inefficient as an estimator of location for $p$, it is not drastically inefficient.) \end{example} Now we briefly summarize the asymmetric Brascamp-Lieb inequalities of \cite{Menz-Otto:2013} and \cite{CarlenCordero-ErausquinLieb}. \begin{proposition} \label{prop:MenzOtto} $\phantom{bla}$\newline (a) \cite{Menz-Otto:2013}: Suppose that $d=1$ and $G, H \in C^{1} (\mathbb{R}) \cap L^2 ( P)$. If $p$ is strictly log-concave, then \begin{eqnarray*} | \cov ( G(X), H(X)) | \le \sup_x \left \{ \frac{ | H^{\prime}(x)|}{\varphi^{\prime \prime} (x) } \right \} E \{ | G^{\prime}(X) | \} . \end{eqnarray*} (b) \cite{CarlenCordero-ErausquinLieb}: If $p$ is strictly log-concave on $\mathbb{R}^d$, $1/r + 1/s = 1$ with $r\ge 2$, and $\lambda_{min}(x)$ denotes the smallest eigenvalue of $\varphi^{\prime \prime}$, then \begin{eqnarray*} | \cov (G(X), H(X) ) | \le \| ( \varphi^{\prime \prime} )^{-1/r} \nabla G \|_s \cdot \| \lambda_{min}^{(2-r)/r} (\varphi^{\prime \prime})^{-1/r} \nabla H \|_r .
\end{eqnarray*} \end{proposition} \begin{remark} (i) When $r=2$, the inequality in (b) yields \begin{equation*} \left ( \cov (G(X), H(X) ) \right )^2 \le E \{ \nabla G^T ( \varphi^{\prime\prime})^{-1} \nabla G \} \cdot E \{ \nabla H^T ( \varphi^{\prime\prime})^{-1} \nabla H \} \end{equation*} which can also be obtained from the Cauchy-Schwarz inequality and the Brascamp-Lieb inequality (a) of Proposition~\ref{BLInequalityPlus}.\newline (ii) The inequality (b) also implies that \begin{eqnarray*} | \cov (G(X), H(X) ) | \le \| \lambda_{min}^{-1/r} \nabla G \|_s \cdot \| \lambda_{min}^{-1/s} \nabla H \|_r ; \end{eqnarray*} taking $r=\infty$ and $s=1$ yields \begin{equation*} | \cov (G(X), H(X) ) | \le \| \nabla G \|_1 \cdot \| \lambda_{min}^{-1} \nabla H \|_\infty \end{equation*} which reduces to the inequality in (a) when $d=1$. \end{remark} \section{Appendix B: some further proofs} \label{sec:Proofs} \begin{proof} Proposition~\ref{LogConcaveEqualsPF2}: \textbf{(b):} $p_{\theta} (x) = f(x-\theta)$ has MLR if and only if \begin{eqnarray*} \frac{f(x-\theta^{\prime})}{f(x-\theta)} \le \frac{f(x^{\prime}-\theta^{\prime})}{f(x^{\prime}-\theta)} \ \ \mbox{for all} \ \ x < x^{\prime}, \ \theta < \theta^{\prime} . \end{eqnarray*} This holds if and only if \begin{eqnarray} \log f(x-\theta^{\prime}) + \log f(x^{\prime}-\theta) \le \log f(x^{\prime}- \theta^{\prime}) + \log f (x-\theta) . \label{MLR1stIFF} \end{eqnarray} Let $t = (x^{\prime}-x)/(x^{\prime}-x + \theta^{\prime}-\theta)$ and note that \begin{eqnarray*} && x-\theta = t(x-\theta^{\prime}) + (1-t) (x^{\prime}- \theta), \\ && x^{\prime}- \theta^{\prime}= (1-t) (x - \theta^{\prime}) + t (x^{\prime}- \theta) . \end{eqnarray*} Hence log-concavity of $f$ implies that \begin{eqnarray*} && \log f(x-\theta) \ge t \,\log f(x-\theta^{\prime}) + (1-t) \log f(x^{\prime}- \theta) , \\ && \log f(x^{\prime}- \theta^{\prime}) \ge (1-t) \log f(x-\theta^{\prime}) + t\, \log f(x^{\prime}-\theta) .
\end{eqnarray*} Adding these yields (\ref{MLR1stIFF}); i.e., $f$ log-concave implies that $p_{\theta} (x)$ has MLR in $x$. Now suppose that $p_{\theta} (x)$ has MLR so that (\ref{MLR1stIFF}) holds. In particular, it holds if $x,x^{\prime},\theta, \theta^{\prime}$ satisfy $x-\theta^{\prime}=a < b = x^{\prime}-\theta$ and $t = (x^{\prime}-x)/(x^{\prime}-x+\theta^{\prime}-\theta) =1/2$, so that $x-\theta = (a+b)/2 = x^{\prime}-\theta^{\prime}$. Then (\ref{MLR1stIFF}) becomes \begin{equation*} \log f(a) + \log f(b) \le 2 \log f ((a+b)/2) . \end{equation*} This together with measurability of $f$ implies that $f$ is log-concave. \noindent \textbf{(a):} \ Suppose $f$ is $PF_2$. Then for $x< x^{\prime}$, $y< y^{\prime}$, \begin{eqnarray*} \lefteqn{\det \left ( \begin{array}{l l} f(x-y) & f(x- y') \\ f(x' - y ) & f(x' - y') \end{array} \right ) } \\ && = f(x-y) f(x^{\prime}- y^{\prime}) - f(x - y^{\prime}) f(x^{\prime}- y) \ge 0 \end{eqnarray*} if and only if \begin{eqnarray*} && f(x-y^{\prime}) f(x^{\prime}-y) \le f(x-y) f(x^{\prime}-y^{\prime}) , \end{eqnarray*} or, equivalently, \begin{eqnarray*} && \frac{f(x - y^{\prime})}{f(x-y)} \le \frac{f(x^{\prime}-y^{\prime})}{f(x^{\prime}-y)} . \end{eqnarray*} That is, $p_{y} (x)$ has MLR in $x$. By (b) this is equivalent to $f$ being log-concave. \end{proof} \begin{proof} Proposition~\ref{EquivalencesStrongLogConTry3}: To prove Proposition~\ref{EquivalencesStrongLogConTry3} it suffices to note the log-concavity of $g(x)=p(x)/\prod_{j=1}^{d}\phi (x_{j}/\sigma )$ and to apply Proposition~\ref{EquivalencesLogCon} (which holds as well for log-concave functions). The claims then follow by basic calculations. Here are the details. Under the assumption that $\varphi \in C^{2}$ (and even more generally) the equivalence between (a) and (b) follows from \cite{MR1491362}, Exercise 12.59, page 565.
The equivalence between (a) and (c) follows from the corresponding proof concerning the equivalence of (a) and (c) in Proposition~\ref{EquivalencesLogCon}; see e.g. \cite{MR2061575}, page 71. (a) implies (d): this follows from the corresponding implication in Proposition~\ref{EquivalencesLogCon}. Also note that for $x_{1},x_{2}\in \mathbb{R}^{d}$ we have \begin{eqnarray*} \lefteqn{\langle \nabla \varphi _{J_{a}}(x_{2})-x_{2}/c-\left( \nabla \varphi _{J_{a}}(x_{1})-x_{1}/c\right) ,x_{2}-x_{1}\rangle } \\ &=&\langle \nabla \varphi (a+x_{2})-\nabla \varphi (a-x_{2})-x_{2}/c-\left( \nabla \varphi (a+x_{1})-\nabla \varphi (a-x_{1})-x_{1}/c\right) ,x_{2}-x_{1}\rangle \\ &=&\langle \nabla \varphi (a+x_{2})-\nabla \varphi (a+x_{1})-(x_{2}-x_{1})/(2c),x_{2}-x_{1}\rangle \\ &&\ \ -\ \langle \nabla \varphi (a-x_{2})-\nabla \varphi (a-x_{1})+(x_{2}-x_{1})/(2c),x_{2}-x_{1}\rangle \\ &=&\langle \nabla \varphi (a+x_{2})-\nabla \varphi (a+x_{1})-(a+x_{2}-(a+x_{1}))/(2c),x_{2}-x_{1}\rangle \\ &&\ \ +\ \langle \nabla \varphi (a-x_{1})-\nabla \varphi (a-x_{2})-(a-x_{1}-(a-x_{2}))/(2c),a-x_{1}-(a-x_{2})\rangle \\ &\geq &0\ \ \ \end{eqnarray*}% if $c=\sigma ^{2}/2$. (d) implies (e): this also follows from the corresponding implication in Proposition~\ref{EquivalencesLogCon}. Also note that when $\varphi \in C^2$ so that $\nabla^2 \varphi $ exists, \begin{eqnarray*} \nabla^2 \varphi_{J_a} (x) -2 I/\sigma^2 & = & \nabla^2 \varphi (a+x) + \nabla^2 \varphi (a-x) - 2I/\sigma^2 \\ & = & \nabla^2 \varphi (a+x) - I/\sigma^2 + \nabla^2 \varphi (a-x) - I/\sigma^2 \\ & \ge & 0 + 0 = 0. \end{eqnarray*} To complete the proof when $\varphi \in C^2$ we show that (e) implies (c). Choosing $a=x_0 $ and $x=0$ yields \begin{eqnarray*} 0 & \le & \nabla^2 \varphi_{J_a} (0) -2 I/\sigma^2 \\ & = & \nabla^2 \varphi (x_0) + \nabla^2 \varphi (x_0) - 2I/\sigma^2 \\ & = & 2 \left ( \nabla^2 \varphi (x_0) - I/\sigma^2 \right ) , \end{eqnarray*} and hence (c) holds. 
To complete the proof more generally, we proceed as in \cite{MR2814377}, page 199: to see that (e) implies (f), let $a = (x_1 + x_2)/2$, $x = (x_1-x_2)/2$. Since $J_a (\cdot ; g)$ is even and radially monotone, $J_a (0; g)^{1/2} \ge J_a (x; g)^{1/2}$; that is, \begin{equation*} \{ g(a+0) g(a-0) \}^{1/2} \ge \{ g(a+x) g(a-x) \}^{1/2}, \end{equation*} or \begin{equation*} g((x_1 +x_2)/2) \ge g(x_1)^{1/2} g(x_2)^{1/2} . \end{equation*} Finally (f) implies (a): as in \cite{MR2814377}, page 199 (with ``convex'' changed to ``concave'' three times in the last three lines there): midpoint log-concavity of $g$ together with lower semicontinuity implies that $g$ is log-concave, and hence $p$ is strongly log-concave, so (a) holds. \end{proof} \begin{proof} Proposition~\ref{prop_I_SLC}: First notice that by Proposition \ref{Lemma_convol_Gauss_multidim}, we may assume that $f$ is $\mathcal{C}^{\infty }$ (so $\varphi $ is also $\mathcal{C}^{\infty }$). \textbf{(i) }As $I$ is $\mathcal{C}^{\infty }$, we differentiate $I$ twice. We have $I^{\prime }\left( p\right) =f^{\prime }\left( F^{-1}\left( p\right) \right) /I\left( p\right) =-\varphi ^{\prime }\left( F^{-1}\left( p\right) \right) $ and% \begin{equation} I^{\prime \prime }\left( p\right) =-\varphi ^{\prime \prime }\left( F^{-1}\left( p\right) \right) /I\left( p\right) \leq -c^{-1}\left\Vert f\right\Vert _{\infty }^{-1}\text{ .} \label{I_second} \end{equation}% This gives the first part of \textbf{(i). }The second part comes from the fact that $\left\Vert f\right\Vert _{\infty }^{-1}\geq \sqrt{ \var \left( X\right) }$ by Proposition \ref{prop_pointwise_bounds}\ below. \textbf{(ii)} It suffices to exhibit an example. We take $X\geq 0$, with density \begin{equation*} f\left( x\right) =xe^{-x}\mathbf{1}_{\left( 0, \infty \right) }\left( x\right) \text{ }. 
\end{equation*} Then $f=e^{-\varphi }$ is log-concave (in fact, $f$ is log-concave of order $2$; see Definition \ref{def_LC_order_alpha}) and not strongly log-concave since, on the support of $f$, $\varphi ^{\prime \prime }\left( x\right) =x^{-2}\rightarrow 0 $ as $x\rightarrow \infty $. By the equality in (\ref{I_second}) we have \begin{equation*} I^{\prime \prime }\left( p\right) =-\frac{\varphi ^{\prime \prime }}{f}\left( F^{-1}\left( p\right) \right) \text{ .} \end{equation*} Hence, to conclude, it suffices to show that $\inf_{x>0}\left\{ \varphi^{\prime \prime }/f\right\} >0$. By simple calculations, we have \begin{equation*} \varphi ^{\prime \prime }\left( x\right) /f\left( x\right) =x^{-3}e^{x}\mathbf{1}_{\left( 0, \infty \right) }\left( x\right) \geq e^3/27 >0\text{ ,} \end{equation*} so \textbf{(ii)} is proved. \textbf{(iii) }We take $f\left( x\right) =\exp \left( -\varphi \right) =\alpha ^{-1}\exp \left( -\exp \left( x\right) \right) 1_{\left\{ x>0\right\} }$ where $\alpha =\int_{0}^{\infty }\exp \left( -\exp \left( x\right) \right) dx$.
Then the function $R_{h}$ is $\mathcal{C}^{\infty }$ on $\left( 0,1\right) $ and we have by basic calculations, for any $p\in \left( 0,1\right) $, \begin{equation*} R_{h}^{\prime }\left( p\right) =f\left( F^{-1}\left( p\right) +h\right) /f\left( F^{-1}\left( p\right) \right) \end{equation*} and% \begin{equation*} R_{h}^{\prime \prime }\left( p\right) =\frac{f\left( F^{-1}\left( p\right) +h\right) }{f\left( F^{-1}\left( p\right) \right) ^{2}}\left( \varphi ^{\prime }\left( F^{-1}\left( p\right) \right) -\varphi ^{\prime }\left( F^{-1}\left( p\right) +h\right) \right) \text{ .} \end{equation*}% Now, for any $x>0$, taking $p=F\left( x\right) $ in the previous identity gives% \begin{eqnarray} R_{h}^{\prime \prime }\left( F\left( x\right) \right) &=&\frac{f\left( x+h\right) }{f\left( x\right) ^{2}}\left( \varphi ^{\prime }\left( x\right) -\varphi ^{\prime }\left( x+h\right) \right) \label{Rh_second} \\ &=&\alpha ^{-1}\exp \left( \exp \left( x\right) \left( 2-\exp \left( h\right) \right) \right) \cdot \exp \left( x\right) \left( 1-\exp \left( h\right) \right) \text{ .} \notag \end{eqnarray}% We deduce that if $h>\log 2$ then $R_{h}^{\prime \prime }\left( F\left( x\right) \right) \rightarrow 0$ whenever $x\rightarrow +\infty $. Taking $% h_{0}=1$ gives point \textbf{(iii)}. \textbf{(iv)} For $X$ of density $f\left( x\right) =xe^{-x}\mathbf{1}% _{\left( 0,+\infty \right) }\left( x\right) $, we have $\inf R_{h}^{\prime \prime }\leq -he^{-h}<0$. Our proof of the previous fact is based on identity (\ref{Rh_second})\ and left to the reader. \end{proof} \begin{proof} Proposition~\ref{EquivalencesStrongLogConDef2Try2}: Here are the details. Under the assumption that $\varphi \in C^2$ (and even more generally) the equivalence of (a) and (b) follows from \cite{MR1491362}, Exercise 12.59, page 565. The equivalence of (a) and (c) follows from the corresponding proof concerning the equivalence of (a) and (c) in Proposition~\ref{EquivalencesLogCon}; see e.g. \cite{MR2061575}, page 71. 
That (a) implies (d): this follows from the corresponding implication in Proposition~\ref{EquivalencesLogCon}. Also note that for $x_1, x_2 \in \mathbb{R}^d$ we have \begin{eqnarray*} \lefteqn{\langle \nabla \varphi_{J_a} (x_2) - x_2/c - \left ( \nabla \varphi_{J_a} (x_1) - x_1/c\right ) , x_2-x_1 \rangle } \\ & = & \langle \nabla \varphi (a+x_2) - \nabla \varphi(a-x_2) - x_2/c - \left (\nabla \varphi (a+x_1) - \nabla \varphi(a-x_1) - x_1/c \right ) , x_2 - x_1 \rangle \\ & = & \langle \nabla \varphi (a+x_2) - \nabla \varphi (a+x_1) - (x_2-x_1)/(2c), x_2 - x_1 \rangle \\ && \ \ - \ \langle \nabla \varphi (a-x_2) - \nabla \varphi (a-x_1) + (x_2-x_1)/(2c), x_2 - x_1 \rangle \\ & = & \langle \nabla \varphi (a+x_2) - \nabla \varphi (a+x_1) - (a+x_2-(a+x_1))/(2c), x_2 - x_1 \rangle \\ && \ \ + \ \langle \nabla \varphi (a-x_1) - \nabla \varphi (a-x_2) - (a-x_1- (a-x_2))/(2c), a - x_1 - (a-x_2) \rangle \\ & \ge & 0 \ \ \ \end{eqnarray*} if $c = \sigma^2/2$. (d) implies (e): this also follows from the corresponding implication in Proposition~\ref{EquivalencesLogCon}. Also note that when $\varphi \in C^2$ so that $\nabla^2 \varphi $ exists, \begin{eqnarray*} \nabla^2 \varphi_{J_a} (x) -2 I/\sigma^2 & = & \nabla^2 \varphi (a+x) + \nabla^2 \varphi (a-x) - 2I/\sigma^2 \\ & = & \nabla^2 \varphi (a+x) - I/\sigma^2 + \nabla^2 \varphi (a-x) - I/\sigma^2 \\ & \ge & 0 + 0 = 0. \end{eqnarray*} To complete the proof when $\varphi \in C^2$ we show that (e) implies (c). Choosing $a=x_0 $ and $x=0$ yields \begin{eqnarray*} 0 & \le & \nabla^2 \varphi_{J_a} (0) -2 I/\sigma^2 \\ & = & \nabla^2 \varphi (x_0) + \nabla^2 \varphi (x_0) - 2I/\sigma^2 \\ & = & 2 \left ( \nabla^2 \varphi (x_0) - I/\sigma^2 \right ) , \end{eqnarray*} and hence (c) holds. To complete the proof more generally, we proceed as in \cite{MR2814377}, page 199: to see that (e) implies (f), let $a = (x_1 + x_2)/2$, $x = (x_1-x_2)/2$. 
Since $J_a (\cdot ; g)$ is even and radially monotone, $J_a (0; g)^{1/2} \ge J_a (x; g)^{1/2}$; that is, $$ \{ g(a+0) g(a-0) \}^{1/2} \ge \{ g(a+x) g(a-x) \}^{1/2}, $$ or $$ g((x_1 +x_2)/2) \ge g(x_1)^{1/2} g(x_2)^{1/2} . $$ Finally (f) implies (a): as in \cite{MR2814377}, page 199 (with ``convex'' changed to ``concave'' three times in the last three lines there): midpoint log-concavity of $g$ together with lower semicontinuity implies that $g$ is log-concave, and hence $p$ is strongly log-concave, so (a) holds. \end{proof} \begin{proof} Proposition~\ref{Lemma_convol_Gauss_multidim}: \textbf{(i)} This follows from the stability of log-concavity under convolution. \textbf{(ii)} This is point \textbf{(b)} of Theorem \ref{StrongLogConPreservOne}. \textbf{(iii)} We have \begin{equation*} \varphi _{Z}\left( z\right) =-\log \int_{y\in \mathbb{R}^{d}}p\left( y\right) q\left( z-y\right) dy \end{equation*} and \begin{equation*} \int_{y\in \mathbb{R}^{d}}\left\Vert \nabla q\left( z-y\right) \right\Vert p\left( y\right) dy=\sigma ^{-2}\int_{y\in \mathbb{R}^{d}}\left\Vert z-y\right\Vert q\left( z-y\right) p\left( y\right) dy<\infty \end{equation*} since $y\mapsto \left\Vert z-y\right\Vert q\left( z-y\right) $ is bounded. This implies that $p_{Z}>0$ on $\mathbb{R}^{d}$ and \begin{eqnarray*} \nabla \varphi _{Z}\left( z\right) &=&\int_{y\in \mathbb{R}^{d}}\frac{z-y}{\sigma ^{2}}\frac{p\left( y\right) q\left( z-y\right) }{\int_{u\in \mathbb{R}^{d}}p\left( u\right) q\left( z-u\right) du}dy \\ &=&\sigma ^{-2}\mathbb{E}\left[ \sigma G\left\vert X+\sigma G=z\right. \right] \\ &=&\mathbb{E}\left[ \rho _{\sigma G}\left( \sigma G\right) \left\vert X+\sigma G=z\right. \right] \text{ .} \end{eqnarray*} In the same manner, successive differentiation inside the integral shows that $\varphi _{Z}$ is $\mathcal{C}^{\infty }$, which gives (\textbf{iii}).
\textbf{(iv)} Notice that% \begin{eqnarray*} \lefteqn{\left\vert \left\Vert \int_{z,y\in \mathbb{R}^{d}}\sigma ^{-4}\left( z-y\right) \left( z-y\right) ^{T}p\left( y\right) q\left( z-y\right) dydz\right\Vert \right\vert } \\ &\leq &\sigma ^{-4}\int_{y\in \mathbb{R}^{d}}\int_{z\in \mathbb{R}% ^{d}}\left\Vert z-y\right\Vert ^{2}q\left( z-y\right) p\left( y\right) dy<\infty \end{eqnarray*}% as $y\mapsto \left\Vert z-y\right\Vert ^{2}q\left( z-y\right) $ is bounded. Hence the Fisher information $J(Z)$ of $Z$ is finite and we have% \begin{eqnarray*} J\left( Z\right) &=&\sigma ^{-4}\int_{z,y\in \mathbb{R}^{d}}\mathbb{E}\left[ \sigma G\left\vert X+\sigma G=z\right. \right] \mathbb{E}\left[ \left( \sigma G\right) ^{T}\left\vert X+\sigma G=z\right. \right] p\left( y\right) q\left( z-y\right) dydz \\ &\leq &\sigma ^{-4}\int_{z,y\in \mathbb{R}^{d}}\mathbb{E}\left[ \sigma G\left( \sigma G\right) ^{T}\left\vert X+\sigma G=z\right. \right] p\left( y\right) q\left( z-y\right) dydz \\ &=&\sigma ^{-4}\int_{z,y\in \mathbb{R}^{d}}\left( \int_{u\in \mathbb{R}% ^{d}}\left( z-u\right) \left( z-u\right) ^{T}\frac{p\left( u\right) q\left( z-u\right) }{\int_{y\in \mathbb{R}^{d}}p\left( v\right) q\left( z-v\right) dv% }du\right) p\left( y\right) q\left( z-y\right) dydz \\ &=&\sigma ^{-4}\int_{z,y\in \mathbb{R}^{d}}\left( \int_{u\in \mathbb{R}% ^{d}}\left( z-u\right) \left( z-u\right) ^{T}p\left( u\right) q\left( z-u\right) du\right) \frac{p\left( y\right) q\left( z-y\right) }{\int_{v\in \mathbb{R}^{d}}p\left( v\right) q\left( z-v\right) dv}dydz \\ &=&\sigma ^{-4}\int_{z\in \mathbb{R}^{d}}\int_{u\in \mathbb{R}^{d}}\left( z-u\right) \left( z-u\right) ^{T}p\left( u\right) q\left( z-u\right) dudz \\ &=&\int_{u\in \mathbb{R}^{d}}\left( \underset{J\left( \sigma G\right) }{% \underbrace{\sigma ^{-4}\int_{z\in \mathbb{R}^{d}}\left( z-u\right) \left( z-u\right) ^{T}q\left( z-u\right) dz}}\right) p\left( u\right) du \\ &=&J\left( \sigma G\right) \text{ ,} \end{eqnarray*}% which is (\textbf{iv}). 
\end{proof} \begin{proof} Proposition~\ref{prop_approx_SLC}: The fact that $h_{c}\in SLC_{1}\left( c^{-1},d\right) $ is immediate from Definition \ref{strongLogConcaveDefn1}. By Theorem \ref{theorem_exp_tails}\ above, there exist $a>0$ and $b\in \mathbb{R}$ such that \begin{equation*} f\left( x\right) \leq e^{-a\left\Vert x\right\Vert +b},\text{ \ \ }x\in \mathbb{R}^{d}. \end{equation*} We deduce that if $X$ is a random vector with density $f$ on $\mathbb{R}^{d}$, then $\mathbb{E}\left[ e^{\left( a/2\right) \left\Vert X\right\Vert } \right] < \infty $ and so, for any $\beta >0$, \begin{equation*} \mathbb{P}\left( \left\Vert X\right\Vert >2\beta \right) \leq Ae^{-a\beta }\text{ ,} \end{equation*} where $A=\mathbb{E}\left[ e^{\left( a/2\right) \left\Vert X\right\Vert }\right] >0$. Take $\varepsilon \in \left( 0,1\right) $. We have$\medskip $ \noindent $\begin{array}{l} \left\vert \int_{\mathbb{R}^{d}}f\left( v\right) e^{-c\left\Vert v\right\Vert ^{2}/2}dv-1\right\vert \\ =\int_{\mathbb{R}^{d}}f\left( v\right) \left( 1-e^{-c\left\Vert v\right\Vert ^{2}/2}\right) dv \\ =\int_{\mathbb{R}^{d}}f\left( v\right) \left( 1-e^{-c\left\Vert v\right\Vert ^{2}/2}\right) \mathbf{1}_{\left\{ \left\Vert v\right\Vert \leq 2c^{-\varepsilon /2}\right\} }dv \\ \qquad +\int_{\mathbb{R}^{d}}f\left( v\right) \left( 1-e^{-c\left\Vert v\right\Vert ^{2}/2}\right) \mathbf{1}_{\left\{ \left\Vert v\right\Vert >2c^{-\varepsilon /2}\right\} }dv \\ \leq \left( 1-e^{-2c^{1-\varepsilon }}\right) \int_{\mathbb{R}^{d}}f\left( v\right) \mathbf{1}_{\left\{ \left\Vert v\right\Vert \leq 2c^{-\varepsilon /2}\right\} }dv+\mathbb{P}\left( \left\Vert X\right\Vert >2c^{-\varepsilon /2}\right) \\ \leq \left( 1-e^{-2c^{1-\varepsilon }}\right) +Ae^{-ac^{-\varepsilon /2}}\text{ .}\end{array}\medskip $ \noindent We set $B_{\alpha }=\left( 1-e^{-2\alpha ^{1-\varepsilon }}\right) +Ae^{-a\alpha ^{-\varepsilon /2}}$ and we then have \begin{equation*} \left\vert \int_{\mathbb{R}^{d}}f\left(
v\right) e^{-c\left\Vert v\right\Vert ^{2}/2}dv-1\right\vert \leq B_{c}=O_{c\rightarrow 0}\left( c^{1-\varepsilon }\right) \rightarrow _{c\rightarrow 0}0\text{ .} \end{equation*}% Now, for $x\in \mathbb{R}^{d}$, we have, for all $c>0$ such that $B_{c}<1$, \begin{eqnarray*} \lefteqn{ \left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert } \\ & = & \left\vert \frac{f\left( x\right) e^{-c\left\Vert x\right\Vert ^{2}/2}} {\int_{\mathbb{R}^{d}}f\left( v\right) e^{-c\left\Vert v\right\Vert ^{2}/2}dv} - f\left( x\right) \right\vert \\ & \leq & \left\vert \frac{f\left( x\right) e^{-c\left\Vert x\right\Vert ^{2}/2}} {\int_{\mathbb{R}^{d}}f\left( v\right) e^{-c\left\Vert v\right\Vert ^{2}/2}dv} -f\left( x\right) e^{-c\left\Vert x\right\Vert ^{2}/2}\right\vert +\left\vert f\left( x\right) e^{-c\left\Vert x\right\Vert ^{2}/2}-f\left( x\right) \right\vert \\ & \leq & f\left( x\right) \left\vert \left( \int_{\mathbb{R}^{d}}f\left(v\right) e^{-c\left\Vert v\right\Vert ^{2}/2}dv\right) ^{-1}-1\right\vert + f\left( x\right) \left( 1-e^{-c\left\Vert x\right\Vert ^{2}/2}\right) \\ & \leq & f\left( x\right) \left( \frac{B_{c}}{1-B_{c}}+1-e^{-c\left\Vert x\right\Vert ^{2}/2}\right) \text{ .}% \end{eqnarray*} Hence, for all $c>0$ such that $B_{c}<1$,% \begin{eqnarray*} \lefteqn{\sup_{x\in \mathbb{R}^{d}}\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert } \\ & \leq & \sup_{\left\{ x;\left\Vert x\right\Vert \leq 2c^{-\varepsilon/2}\right\} } \left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert + \sup_{\left\{ x;\left\Vert x\right\Vert >2c^{-\varepsilon /2}\right\} }\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert \\ &\leq & e^{b}\left( \frac{B_{c}}{1-B_{c}}+1-e^{-2c^{1-\varepsilon }}\right) +e^{-2ac^{-\varepsilon /2}+b} \left( \frac{B_{c}}{1-B_{c}}+1\right) \\ &=& O\left( c^{1-\varepsilon }\right) \text{ as }c\rightarrow 0\text{ .} \end{eqnarray*} Furthermore, for $p\in \left[ 1, \infty \right) $, \begin{eqnarray*} \lefteqn{ 
\int_{\mathbb{R}^{d}}\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert ^{p}dx }\\ & = & \int_{\mathbb{R}^{d}}\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert ^{p} \mathbf{1}_{\left\{ \left\Vert x\right\Vert \leq 2c^{-\varepsilon /2}\right\} }dx +\int_{\mathbb{R}^{d}}\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert ^{p}\mathbf{1}_{\left\{ \left\Vert x\right\Vert >2c^{-\varepsilon /2}\right\} }dx \\ & \leq & \sup_{\left\{ x;\left\Vert x\right\Vert \leq 2c^{-\varepsilon/2}\right\} }\left\vert h_{c}\left( x\right) -f\left( x\right) \right\vert^{p} +\int_{\mathbb{R}^{d}}f\left( x\right) ^{p}\left( \frac{B_{c}}{1-B_{c}} +1\right) \mathbf{1}_{\left\{ \left\Vert x\right\Vert >2c^{-\varepsilon/2}\right\} }dx \\ & \leq & e^{pb}\left( \frac{B_{c}}{1-B_{c}}+1-e^{-2c^{1-\varepsilon }}\right)^{p} +\left( \frac{B_{c}}{1-B_{c}}+1\right) e^{\left( p-1\right) b}\mathbb{P} \left( \left\Vert X\right\Vert >2c^{-\varepsilon /2}\right) \\ & \leq & e^{pb}\left( \frac{B_{c}}{1-B_{c}}+1-e^{-2c^{1-\varepsilon }}\right)^{p} +A\left( \frac{B_{c}}{1-B_{c}}+1\right) e^{\left( p-1\right) b}e^{-ac^{-\varepsilon /2}} \\ & = & O\left( c^{p\left( 1-\varepsilon \right) }\right) \text{ as }c\rightarrow 0 \text{.} \end{eqnarray*} The proof is now complete. \end{proof} \section*{Acknowledgments} We owe thanks to Michel Ledoux for a number of pointers to the literature and for sending us a pre-print version of \cite{BobkovLedoux:14}.
https://arxiv.org/abs/1102.2141
The Turán number of $F_{3,3}$
Let $F_{3,3}$ be the 3-graph on 6 vertices, labelled abcxyz, and 10 edges, one of which is abc, and the other 9 of which are all triples that contain 1 vertex from abc and 2 vertices from xyz. We show that for all $n \ge 6$, the maximum number of edges in an $F_{3,3}$-free 3-graph on $n$ vertices is $\binom{n}{3} - \binom{\lfloor n/2 \rfloor}{3} - \binom{\lceil n/2 \rceil}{3}$. This sharpens results of Zhou and of the second author and Rödl.
\section{Introduction} The {\em Tur\'an number} $\mbox{ex}(n,F)$ is the maximum number of edges in an $F$-free $r$-graph on $n$ vertices.% \footnote{ An {\em $r$-graph} (or {\em $r$-uniform hypergraph}) $G$ consists of a vertex set and an edge set, each edge being some $r$-set of vertices. We say $G$ is {\em $F$-free} if it does not have a (not necessarily induced) subgraph isomorphic to $F$. } It is a long-standing open problem in Extremal Combinatorics to understand these numbers for general $r$-graphs $F$. For ordinary graphs ($r=2$) the picture is fairly complete, although there are still many open problems, such as determining the order of magnitude for Tur\'an numbers of bipartite graphs. However, for $r \ge 3$ there are very few known results. Having solved the problem for the complete graph $F=K_t$, Tur\'an \cite{T61} posed the natural question of determining $\mbox{ex}(n,F)$ when $F=K^r_t$ is a complete $r$-graph on $t$ vertices. To date, no case with $t>r>2$ of this question has been solved, even asymptotically. Despite the lack of progress on the Tur\'an problem for complete hypergraphs, there are certain hypergraphs for which the problem has been solved asymptotically, or even exactly; we refer the reader to the survey \cite{K11}. While it would be more satisfactory to have a general theory, we may hope that this will develop out of the methods discovered in solving isolated examples. The contribution of this paper is a surprisingly short complete solution to the Tur\'an problem for the following $3$-graph. Let $F_{3,3}$ be the $3$-graph on $6$ vertices, labelled $abcxyz$, and $10$ edges, one of which is $abc$, and the other $9$ of which are all triples that contain $1$ vertex from $abc$ and $2$ vertices from $xyz$. A lower bound for $\mbox{ex}(n,F_{3,3})$ is given by the following natural construction. 
Let $B(n)$ denote the balanced complete bipartite $3$-graph, which is obtained by partitioning a set of $n$ vertices into parts of size $\lfloor n/2 \rfloor$ and $\lceil n/2 \rceil$, and taking as edges all triples that are not contained within either part. Let \[ b(n) = \binom{n}{3} - \binom{\lfloor n/2 \rfloor}{3} - \binom{\lceil n/2 \rceil}{3}\] denote the number of edges in $B(n)$. We prove the following result. \begin{theo} \label{main} For any $n \ge 1$, $\mbox{ex}(n,F_{3,3}) = b(n)$, unless $n=5$, when $\mbox{ex}(5,F_{3,3}) = 10$. \end{theo} The Tur\'an problem for $F_{3,3}$ was previously studied by Mubayi and R\"odl \cite{MR02}, who obtained the asymptotic result $\mbox{ex}(n,F_{3,3}) = (1+o(1))b(n)$. A related result is the following theorem of Zhou \cite{Z91}. Say that two vertices $x,y$ in a $3$-graph $G$ are {\em $t$-connected} if there are vertices $a,b,c$ such that every triple with $2$ vertices from $abc$ and $1$ vertex from $xy$ is an edge. Say that $xyz$ is a {\em $t$-triple} if $xyz$ is an edge and each pair in $xyz$ is $t$-connected. For example, every edge of $K^3_5$ is a $t$-triple (this is the motivation for the definition). The result of \cite{Z91} is that the unique largest $3$-graph on $n$ vertices with no $t$-triple is complete bipartite. Note that the edge $abc$ of $F_{3,3}$ is a $t$-triple (witnessed by $xyz$), so any $3$-graph containing a copy of $F_{3,3}$ contains a $t$-triple; thus Theorem \ref{main} strengthens Zhou's extremal result (but not the classification of the extremal example). Our proof uses the link multigraph method introduced by de Caen and F\"uredi \cite{dCF00}. There are now a few examples where this method has been used to obtain asymptotic results, or exact results for $n$ sufficiently large. We used it in \cite{KM04} to obtain an exact result for cancellative $3$-graphs for all $n$, and an exact result for the configuration $F_5 = \{123,124,345\}$ for $n \ge 33$. However, until recently there were no known applications to an exact result for all $n$ with a single forbidden hypergraph.
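For small $n$, these objects can be checked by brute force. The following sketch is our own illustration (not part of the paper; the helper names are ours): it builds $B(n)$, compares its size with $b(n)$, and searches exhaustively for copies of $F_{3,3}$. It also illustrates the $n=5$ exception in Theorem \ref{main}: since $F_{3,3}$ has $6$ vertices, every $3$-graph on $5$ vertices is trivially $F_{3,3}$-free, so $\mbox{ex}(5,F_{3,3}) = \binom{5}{3} = 10$, while $b(5)=9$.

```python
from itertools import combinations
from math import comb

def b(n):
    # b(n) = C(n,3) - C(floor(n/2),3) - C(ceil(n/2),3)
    return comb(n, 3) - comb(n // 2, 3) - comb((n + 1) // 2, 3)

def B_edges(n):
    # Balanced complete bipartite 3-graph: all triples meeting both parts.
    part1 = set(range(n // 2))
    return {e for e in map(frozenset, combinations(range(n), 3))
            if 0 < len(e & part1) < 3}

def has_F33(edges, n):
    # Search for a copy of F_{3,3}: an edge abc together with all 9
    # triples having 1 vertex in abc and 2 vertices in a disjoint set xyz.
    for abc in combinations(range(n), 3):
        if frozenset(abc) not in edges:
            continue
        rest = [v for v in range(n) if v not in abc]
        for xyz in combinations(rest, 3):
            mixed = [frozenset((v,) + pq)
                     for v in abc for pq in combinations(xyz, 2)]
            if all(t in edges for t in mixed):
                return True
    return False

for n in range(6, 10):
    assert len(B_edges(n)) == b(n)     # B(n) has exactly b(n) edges
    assert not has_F33(B_edges(n), n)  # and is F_{3,3}-free

# The n = 5 exception: any 3-graph on 5 vertices is F_{3,3}-free,
# so ex(5, F_{3,3}) = C(5,3) = 10, while b(5) = 9.
assert comb(5, 3) == 10 and b(5) == 9
```

The final assertion matches the $n=5$ case of Theorem \ref{main}.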
The result in this paper gives such an application; another was given very recently by Goldwasser \cite{G+}, who obtained an exact result for $F_5$ for all $n$. In the next section we describe the link multigraph construction and prove a lemma that applies to such multigraphs. We use this to prove Theorem \ref{main} in Section 3. The final section contains some concluding remarks and open problems. \section{A multigraph lemma} The proof of Theorem \ref{main} will use the following construction of a multigraph from a $3$-graph $G$. Suppose $S$ is a set of vertices in $G$. The `link multigraph' of $S$ has vertex set $X = V \setminus S$ and edge set $M = \sum_{a \in S} G(a)[X]$. Here we write $G(a) = \{xy: axy \in G\}$, denote the restriction to $X$ by $[X]$, and use summation to denote multiset union. Thus we obtain a multigraph $M$ in which each pair of vertices has multiplicity between $0$ and $|S|$. Furthermore, we may regard each edge of $M$ as being `coloured' by a vertex in $S$ (an edge may have several colours). We write $w(xy)$ for the multiplicity of the pair $xy$ in $M$. Now suppose $M$ is any multigraph on $n$ vertices (not necessarily as above). Write $w(xy)$ for the multiplicity of a pair $xy$ in $M$, and write $e(M)$ for the sum of $w(xy)$ over all (unordered) pairs of vertices in $M$. For any $S \subseteq V(M)$ let $i(S)$ denote the sum of $w(xy)$ over all pairs of vertices that contain at least one vertex of $S$. If $S=\{x\}$ consists of a single vertex then $i(S)=d(x)$ is the weighted degree of $x$. Define \begin{equation*} m(n) = \begin{cases} \frac{3n^2}{2}-n & \text{if $n$ is even,} \\ \frac{3n^2-1}{2}-n & \text{if $n$ is odd.} \end{cases} \end{equation*} \begin{lemma}\label{multi} Suppose $M$ is a multigraph on $n$ vertices with $0 \le w(xy) \le 4$ for every pair $xy$ and $w(xy)+w(xz)+w(yz) \le 10$ for every triple $xyz$. Then $e(M) \le m(n)$. \end{lemma} \nib{Proof.} We argue by induction on $n$. 
The statement is trivial for $n=1$ and $n=2$, and is immediate from the assumption on triples for $n=3$. Now suppose that $n \ge 4$. We consider separate cases according to the parity of $n$. Suppose first that $n$ is even, $M$ is a multigraph satisfying the hypotheses of the lemma, and suppose for a contradiction that $e(M) = \frac{3n^2}{2}-n+1$. Since $e(M) > 3\binom{n}{2}$ we can choose a pair $xy$ with $w(xy)=4$. If we delete the vertices $x$ and $y$ then we obtain a multigraph $M'$ on $n-2$ vertices satisfying the hypotheses of the lemma. By the induction hypothesis we have $e(M') \le \frac{3(n-2)^2}{2}-(n-2)$. Then $i(xy) = e(M)-e(M') \ge \frac{3n^2}{2}-n+1 - \frac{3(n-2)^2}{2} + (n-2) = 6n-7$. Now the sum of $w(xz)+w(yz)$ over $z$ in $V(M) \setminus \{x,y\}$ is $i(xy)-w(xy) \ge 6n-11 > 6(n-2)$, so there must be some $z$ with $w(xz)+w(yz) \ge 7$. But then $w(xy)+w(xz)+w(yz) \ge 11$, contradiction. The argument for $n$ odd is similar. Suppose for a contradiction that $M$ satisfies the hypotheses of the lemma but $e(M) = \frac{3n^2-1}{2}-n+1$. Again $e(M) > 3\binom{n}{2}$, so we can choose $xy$ with $w(xy)=4$. The induction hypothesis gives $i(xy) \ge \frac{3n^2-1}{2}-n+1 - \frac{3(n-2)^2-1}{2} + (n-2) = 6n-7$. Then, as in the case of $n$ even, we have $i(xy)-w(xy) > 6(n-2)$, so there must be some $z$ with $w(xz)+w(yz) \ge 7$, contradiction. \hfill $\Box$ \medskip Note the following two examples where equality holds in Lemma \ref{multi}. (We do not claim that these are the only cases of equality.) \begin{enumerate} \item Define a multigraph $M_1(n)$ on $n$ vertices as follows. Let $A \cup B$ be a balanced partition of the vertex set. Let crossing pairs have multiplicity $4$ and pairs inside each part have multiplicity $2$. If $n$ is even then $e(M_1(n)) = 2\binom{n}{2} + 2(n/2)^2 = 3n^2/2-n$. If $n$ is odd then $e(M_1(n)) = 2\binom{n}{2} + 2 \frac{n^2-1}{4} = \frac{3n^2-1}{2}-n$. \item Define a multigraph $M_2(n)$ on $n$ vertices as follows. Let all pairs have multiplicity $3$ except for a maximum size matching of multiplicity $4$.
If $n$ is even then $e(M_2(n)) = 3\binom{n}{2} + n/2 = 3n^2/2-n$. If $n$ is odd then $e(M_2(n)) = 3\binom{n}{2} + \frac{n-1}{2} = \frac{3n^2-1}{2}-n$. \end{enumerate} The following two calculations will also be useful. Note that $M_2(n-1)$ can be obtained from $M_2(n)$ by deleting a vertex, which can be any vertex if $n$ is even, but must be the vertex not incident to an edge of multiplicity $4$ when $n$ is odd. Then $m(n)-m(n-1)$ is equal to the number of edges removed, which is $3(n-2)+4 = 3(n-1)+1$ when $n$ is even, or $3(n-1)$ when $n$ is odd. Next consider a copy of $B(n)$ with parts $A$ and $B$. Construct a copy of $B(n-4)$ by removing vertices $wx$ from $A$ and $yz$ from $B$. Then we have $b(n)-b(n-4) = i(wxyz)$, where similarly to our multigraph notation, we let $i(S)$ denote the number of edges that contain at least one vertex of $S$. We can count $i(wxyz)$ as follows. There are $4$ edges of $B(n)$ contained in $wxyz$. Next consider edges with $2$ vertices in $wxyz$. The $4$ crossing pairs $wy$, $wz$, $xy$, $xz$ form an edge with each of the $n-4$ vertices of $V \setminus \{w,x,y,z\}$, so contribute $4(n-4)$ edges. The pairs $wx$ and $yz$ form an edge with any of the vertices in the other part, so contribute $n-4$ edges (this holds whether $n$ is even or odd). Finally, note that the link multigraph of $wxyz$ in $B(n)$ is precisely $M_1(n-4)$. Thus the number of edges with $1$ vertex in $wxyz$ is $m(n-4)$. Then we have \[b(n)-b(n-4) = m(n-4) + 5(n-4) + 4.\] \section{Proof of Theorem \ref{main}} We start with the lower bound. We have already described the construction $B(n)$, but we also need to check that it is $F_{3,3}$-free. To see this, suppose for a contradiction that there is a copy of $F_{3,3}$ in $B(n)$, labelled $abcxyz$ as above. Label the parts of $B(n)$ as $X$ and $Y$, and suppose without loss of generality that $a \in X$. 
The edges $axy$, $axz$, $ayz$ can only be simultaneously realised by putting all of $x,y,z$ in $Y$, or $2$ of $x,y,z$ in $Y$ and $1$ in $X$. Either way, the edges $bxy$, $bxz$, $byz$ imply that $b$ is in $X$, and the edges $cxy$, $cxz$, $cyz$ imply that $c$ is in $X$. But this contradicts the fact that $abc$ is an edge. Thus $B(n)$ is $F_{3,3}$-free. This shows that $\mbox{ex}(n,F_{3,3}) \ge b(n)$. This bound can be improved when $n=5$, as $b(5)=9$, but the complete $3$-graph $K^3_5$ is $F_{3,3}$-free, since $F_{3,3}$ has $6$ vertices. It follows that $\mbox{ex}(5,F_{3,3}) = 10$ (obviously it cannot be larger). The main task in the proof is to establish the upper bound. We prove the following statement by induction on $n$: \begin{itemize} \item Suppose $G$ is an $F_{3,3}$-free $3$-graph on $n \ge 1$ vertices. Then $e(G) \le b(n)$, unless $n=5$, in which case $e(G) \le 10$. \end{itemize} Note that this statement is trivial for $n \le 5$, as $B(n)$ is complete for $n \le 4$, and for $n=5$ the statement allows $e(K^3_5)=10$ edges. Furthermore, the bound holds for $n=6$, as $B(6)$ is obtained by deleting $2$ edges from $K^3_6$, and it is clear that if one only deletes one edge from $K^3_6$ then there is a copy of $F_{3,3}$. Moreover, $B(6)$ is the unique $F_{3,3}$-free $3$-graph on $6$ vertices with $\binom{6}{3}-2 = 18$ edges. To see this, we exhibit appropriate edges in the complement of $F_{3,3}$ as defined above: $xab$ and $yac$ are not in $F_{3,3}$ and intersect in $1$ vertex, whereas $xab$ and $xac$ are not in $F_{3,3}$ and intersect in $2$ vertices. Since any two triples of a $6$-set that intersect in $1$ or $2$ vertices can be relabelled to one of these pairs, deleting two intersecting edges from $K^3_6$ always leaves a copy of $F_{3,3}$; so the two deleted edges must be disjoint, and then the resulting $3$-graph is $B(6)$. Now suppose for a contradiction that $n \ge 7$ with $n \ne 9$ and $G$ is an $F_{3,3}$-free $3$-graph on $n$ vertices with $e(G) = b(n)+1$. (We will need to modify the argument in the case $n=9$.) We start by finding a copy of $K^3_4$ in $G$. For this we use the following averaging argument. Given a $3$-graph $H$ on $m$ vertices, let $d(H) = e(H)\binom{m}{3}^{-1}$ denote the density of $H$.
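Before carrying out the averaging, here is a quick numerical sanity check (our own illustration, not part of the proof) of two facts about $d(B(n)) = b(n)\binom{n}{3}^{-1}$ that are used next: it is non-increasing in $n$ and always at least $3/4$.

```python
from fractions import Fraction
from math import comb

def b(n):
    """Number of edges of the balanced complete bipartite 3-graph B(n)."""
    return comb(n, 3) - comb(n // 2, 3) - comb((n + 1) // 2, 3)

# d(B(n)) = b(n)/C(n,3) is non-increasing and tends to 3/4,
# so it is at least 3/4 for every n >= 3.
densities = [Fraction(b(n), comb(n, 3)) for n in range(3, 200)]
assert all(d >= Fraction(3, 4) for d in densities)
assert all(d1 >= d2 for d1, d2 in zip(densities, densities[1:]))
```

Exact rational arithmetic is used so that the monotonicity check is not affected by floating-point rounding.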
A simple calculation shows that $d(G)$ is the average of $d(G \setminus v)$ over all vertices $v$ of $G$. Note that deleting a vertex from $B(n)$ leaves a complete bipartite $3$-graph on $n-1$ vertices; it is not necessarily balanced, but certainly has at most $b(n-1)$ edges. It follows that $d(B(n)) \le d(B(n-1))$, i.e.\ $d(B(n))$ is non-increasing in $n$. Since $d(B(n)) \to 3/4$ as $n \to \infty$ we have $d(B(n)) \ge 3/4$ for all $n$. Since $e(G) > b(n)$ we have $d(G) > 3/4$. Averaging again, we see that there is a set $abcd$ of $4$ vertices where $d(G[abcd])>3/4$. This implies that all $4$ triples in $abcd$ are edges of $G$, as desired. Note that $G \setminus \{a,b,c,d\}$ is an $F_{3,3}$-free $3$-graph on $n-4$ vertices with $e(G) - i(abcd)$ edges. By induction this is at most $b(n-4)$ (since $n \ne 9$), so we obtain $i(abcd) \ge b(n)-b(n-4)+1$. Now we count the edges incident to $abcd$ according to the number of vertices of $abcd$ they contain. There are $4$ such edges contained in $abcd$. To estimate edges with one vertex in $abcd$ let $M$ be the link multigraph of $abcd$ in $G$. Note that there is no triangle $xyz$ in $M$ such that each pair $xy$, $xz$, $yz$ is coloured by the same set of $3$ colours from $abcd$: this would give a copy of $F_{3,3}$. This implies that $w(xy)+w(xz)+w(yz) \le 10$ for every triple $xyz$ in $M$. Thus we can apply Lemma \ref{multi} to get $e(M) \le m(n-4)$. We conclude that the number of edges with $2$ vertices in $abcd$ is at least $b(n)-b(n-4)+1 - 4 - m(n-4) = 5(n-4)+1$. It follows that there is some $e \in V(G) \setminus \{a,b,c,d\}$ such that all $6$ pairs from $abcd$ form an edge with $e$. Thus $abcde$ forms a copy of $K^3_5$ in $G$. For each $x \in abcde$ we have $i(abcde \setminus x) \ge b(n)-b(n-4)+1$, so \[\Sigma := \sum_{x \in abcde} i(abcde \setminus x) \ge 5(b(n)-b(n-4)+1) = 5(m(n-4)+5(n-3)).\] We can also count $\Sigma$ according to the intersection of edges with $abcde$. 
Edges with at least $2$ vertices in $abcde$ are counted $5$ times, and edges with $1$ vertex in $abcde$ are counted $4$ times. By Lemma \ref{multi}, for each $x \in abcde$ the link multigraph of $abcde \setminus x$ restricted to $V(G) \setminus \{a,b,c,d,e\}$ has at most $m(n-5)$ edges. By averaging, there are at most $\frac{5}{4}m(n-5)$ edges with $1$ vertex in $abcde$. Since these are counted $4$ times they contribute at most $5m(n-5)$ to $\Sigma$. Also, $abcde$ is complete, so we have $10$ edges inside $abcde$. Writing $Z$ for the number of edges with $2$ vertices in $abcde$, we obtain \[ 5(m(n-4)+5(n-3)) = 5(b(n)-b(n-4)+1) \le \Sigma \le 5(10+Z+m(n-5)),\] so $Z \ge m(n-4)-m(n-5) + 5(n-5)$. Recall that $m(n-4)-m(n-5)$ is $3(n-5)+1$ when $n$ is even, or $3(n-5)$ when $n$ is odd. Thus $Z \ge 8(n-5)$. It follows that there is some $f \in V(G) \setminus \{a,b,c,d,e\}$ such that at least $8$ pairs from $abcde$ form an edge with $f$. Thus $abcdef$ is obtained from $K^3_6$ by deleting at most $2$ edges, and if $2$ edges are deleted they cannot be disjoint, as they both contain $f$. As noted above, this implies that $abcdef$ contains $F_{3,3}$, so we have a contradiction. It remains to prove the bound for $n=9$. Again we start by choosing $abcd$ as a copy of $K^3_4$ in $G$. Since $b(5)=\binom{5}{3}-1$, we obtain $i(abcd) \ge b(n)-b(n-4)$. Then the same calculation as above shows that there are at least $5(n-4)$ edges with $2$ vertices in $abcd$. Furthermore, equality can only hold if deleting $abcd$ leaves $\binom{5}{3}$ edges, i.e.\ a copy of $K^3_5$. If equality does not hold then we can find a copy of $K^3_5$ as above, so either way we have a copy of $K^3_5$. Let $X$ be the vertex set of this $K^3_5$. Now note that for any $Y$ spanning a copy of $K^3_4$ we either have $i(Y) \ge b(n)-b(n-4)+1$ or $G \setminus Y$ spans $K^3_5$. 
Also, there cannot be $3$ vertices $x_1,x_2,x_3$ in $X$ such that $G \setminus (X \setminus x_i)$ spans $K^3_5$ for $1\le i \le 3$, as then $x_1,x_2,x_3$ together with any $3$ vertices of $G \setminus X$ span a copy of $F_{3,3}$. Now we can modify the second calculation above, taking $abcde$ to be the vertex set $X$ of this $K^3_5$, to get $3(b(n)-b(n-4)+1) + 2(b(n)-b(n-4)) \le 5(10+Z+m(n-5))$, so $Z \ge 8(n-5)-2 = 30 > 7(n-5)$. It follows that there is some $f \in V(G) \setminus \{a,b,c,d,e\}$ such that at least $8$ pairs from $abcde$ form an edge with $f$. As above, this creates a copy of $F_{3,3}$, so we have a contradiction. This proves the theorem. \hfill $\Box$ \section{Concluding remarks} The obvious unanswered question from this paper is to characterise the extremal examples for the problem: is it true that for $n \ge 6$, equality can only be achieved by $B(n)$? It may be that there is a simple proof of this statement, but if not, one might still hope to prove it for $n$ sufficiently large by the stability method. The idea would be to show that any $F_{3,3}$-free $3$-graph $G$ on $n$ vertices with $e(G) \sim b(n)$ is `structurally close' to $B(n)$. Then one would hope to show that any construction $B'(n)$ that is sufficiently close to $B(n)$ is suboptimal, unless $B'(n)=B(n)$. (See \cite[Section 5]{K11} for further discussion of this method.) Consideration of this stability question leads us in turn to the question of what constructions are (near) extremal for the multigraph result, Lemma \ref{multi}. We have already seen two very different constructions that achieve the maximum $m(n)$. However, we can rule out $M_2(n)$ by returning to the $3$-graph world. Suppose that $G$ is an $F_{3,3}$-free $3$-graph and $M$ is the link multigraph of a $K^3_4$ $abcd$ in $G$. Let $J$ be the set of pairs of multiplicity at least $3$ in $M$. We claim that $J$ is $K_t$-free, where $t = R_4(3)$ is the $4$-colour Ramsey number for triangles.
To see this, colour the pairs of multiplicity $3$ according to which colour from $abcd$ is {\em not} available, and colour the pairs of multiplicity $4$ arbitrarily from $abcd$. If $J$ contains $K_t$ then, by the definition of $R_4(3)$, we can find a monochromatic triangle $xyz$. Without loss of generality $d$ is the missing colour for $xy$, $xz$ and $yz$. Then $abcxyz$ is a copy of $F_{3,3}$, contradiction. Thus $J$ is $K_t$-free. It follows that $M_2(n)$, or indeed any `sufficiently close' construction, cannot be realised as the link multigraph of $G$. However, we can give other multigraph constructions that are not ruled out on these grounds. For example, take a balanced partition of $n$ vertices into $4$ parts $W$, $X$, $Y$, $Z$, such that all pairs within a part have multiplicity $2$, all pairs between $W$ and $X$ or between $Y$ and $Z$ have multiplicity $4$, and all pairs between the remaining four pairs of parts ($W$ and $Y$, $W$ and $Z$, $X$ and $Y$, $X$ and $Z$) have multiplicity $3$. One can check that this construction satisfies the hypotheses of Lemma \ref{multi}, and the number of edges is approximately $m(n)$. However, as far as we are aware, it does not seem to arise as the link multigraph of a near extremal $F_{3,3}$-free $3$-graph. The potentially large variety of multigraph constructions suggests that this may be a difficult approach to proving stability for $F_{3,3}$, so perhaps other ideas are needed. \medskip \nib{Acknowledgements.} We thank the Mathematisches Forschungsinstitut Oberwolfach for their hospitality, and the organisers (Jeff Kahn, Angelika Steger and Benny Sudakov) for inviting us to the meeting at which this research was conducted.
\bigskip \hrule \bigskip \begin{center} \textbf{Counting pop-stacked permutations in polynomial time}\\ \texttt{https://arxiv.org/abs/1908.08910} \end{center} \noindent\textbf{Abstract.} Permutations in the image of the pop-stack operator are said to be pop-stacked. We give a polynomial-time algorithm to count pop-stacked permutations up to a fixed length and we use it to compute the first 1000 terms of the corresponding counting sequence. Only the first 16 terms had previously been computed. With the 1000 terms we prove some negative results concerning the nature of the generating function for pop-stacked permutations. We also predict the asymptotic behavior of the counting sequence using differential approximation.
\section{Pop-stacked permutations} The abstract data type known as a \emph{stack} has two operations: \emph{push} adds an element at the top of the stack; \emph{pop} removes the top element from the stack. A \emph{pop-stack} is a variation of this introduced by Avis and Newborn~\cite{avis1981pop} in which the pop operation empties the entire stack. Let $\pi = a_1a_2 \dots a_n$ be a permutation of $[n]=\{1,2,\dots,n\}$. An \emph{ascending run} of $\pi$ is a maximal sequence of consecutive ascending letters $a_i < a_{i+1} < \dots < a_{i+d-1}$, and a \emph{descending run} is defined similarly. For instance, the ascending runs of $\pi = 617849235$ are $6$, $178$, $49$ and $235$; its descending runs are $61$, $7$, $84$, $92$, $3$ and $5$. Let $P(\pi)$ be the result of greedily sorting $\pi$ using a pop-stack subject to the constraint that elements on the pop-stack are increasing when read from the top to the bottom of the stack. In other words, if we factor $\pi$ into its descending runs $\pi = D_1 D_2 \dots D_m$, then $P(\pi)$ is obtained by reversing each of those runs: $P(\pi)=D_1^r D_2^r \dots D_m^r$. For instance, $P(5321764)=1235467$ and $P(617849235) = 167482935$. A permutation $\pi$ is said to be sortable by a pop-stack if $P(\pi)$ is the identity permutation. More generally, $\pi$ is said to be sortable by $k$ passes through a pop-stack if $P^k(\pi)$ is the identity permutation. Claesson and Guðmundsson~\cite{ClGu2019} showed that the generating function for the number of permutations of $[n]$ that are sortable by $k$ passes through a pop-stack is always rational. Asinowski et~al.~\cite{asinowski-etal-2019} defined that $\sigma$ is \emph{pop-stacked} if $\sigma=P(\pi)$ for some permutation $\pi$, and gave the following theorem. \begin{theorem}[Asinowski et~al.~\cite{asinowski-etal-2019}] A permutation is pop-stacked if and only if for each pair $(R_i,R_{i+1})$ of its adjacent ascending runs $\min R_i < \max R_{i+1}$. 
\end{theorem} They further showed that the generating function for pop-stacked permutations of $[n]$ with exactly $k$ ascending runs is rational for each $k$. Enumerating pop-stacked permutations without this restriction is, however, an open problem. Asinowski et~al.\ initiated an investigation into this by calculating the number of pop-stacked permutations of length $n=1,\ldots,16$, adding the resulting sequence to the OEIS~\cite{oeis-no-link} as A307030 and noting that ``this sequence is hard to compute''. In the following section, we give an efficient algorithm for counting pop-stacked permutations, expanding the sequence up to $n=1000$. While the algorithm and the augmented sequence could give additional insight into the structure of pop-stacked permutations, finding a generating function or a closed form solution to their enumeration remains an open problem. Section~\ref{section:experimental} gives experimental data in this direction. \section{Polynomial-time counting algorithm} A \emph{ballot}, alternatively known as an ordered set partition, is a collection of pairwise disjoint nonempty sets, referred to as blocks, where the blocks are assigned some total ordering. Any permutation can be seen as a ballot by decomposing it into its ascending runs. The permutation $\pi = 6 178 49 235$ would then be viewed as the ballot $\{6\}\{1,7,8\}\{4,9\}\{2,3,5\}$. Conversely, a ballot $B_1 B_2\dots B_k$ represents a permutation in this manner if, and only if, $\max B_i > \min B_{i+1}$ for each $i$ in $[k-1]$. Thus, the ballots corresponding to pop-stacked permutations are precisely those such that \[ \max B_i > \min B_{i+1} \quad\text{and}\quad \min B_i < \max B_{i+1}. \] In other words, the intervals between the smallest and largest elements of each pair of adjacent blocks overlap, $$[\min B_i, \max B_i] \cap [\min B_{i+1}, \max B_{i+1}]\neq\emptyset, $$ and we call these ballots \emph{overlapping}; here, $[a,b]$ denotes the interval $\{a,a+1,\dots,b\}$. 
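This characterization is easy to verify by brute force for small $n$: apply $P$ to every permutation of $[n]$ and compare the image with the set of permutations whose adjacent ascending runs overlap in the above sense. The following sketch is our own illustration, not code from the paper.

```python
from itertools import permutations

def P(perm):
    """One pass through a pop-stack: reverse each maximal descending run."""
    out, run = [], [perm[0]]
    for x in perm[1:]:
        if x < run[-1]:
            run.append(x)
        else:
            out.extend(reversed(run))
            run = [x]
    out.extend(reversed(run))
    return tuple(out)

def is_overlapping(perm):
    """Check min R_i < max R_{i+1} for each pair of adjacent ascending runs."""
    runs, run = [], [perm[0]]
    for x in perm[1:]:
        if x > run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)
    return all(min(r) < max(s) for r, s in zip(runs, runs[1:]))

# The image of P equals the set of overlapping permutations, and the
# counts match the sequence 1, 1, 3, 11, 49, 263, 1653 discussed below.
counts = []
for n in range(1, 8):
    perms = list(permutations(range(1, n + 1)))
    image = {P(p) for p in perms}
    assert image == {p for p in perms if is_overlapping(p)}
    counts.append(len(image))
assert counts == [1, 1, 3, 11, 49, 263, 1653]
```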
Let $F[U]$ be the set of overlapping ballots whose underlying set is $U$. As an example, \[ F[\{1,2,3\}] = \bigl\{\{1, 2, 3\}, \{2\}\{1, 3\}, \{1, 3\}\{2\}\bigr\}. \] Let $F_{c,d}[U]$ denote the subset of $F[U]$ whose last block, $B$, is such that $c = \min B$ and $d = \max B$. Clearly, if $c > d$ then $F_{c,d}[U]=\emptyset$. Also, \[ F[U] \,=\!\bigcup_{c,d\,\in\,U}\! F_{c,d}[U]. \] If $c = \min U$ and $d = \max U$, then one possibility is that there is a single block consisting of all elements of $U$. Let us now consider the more typical case when there are two or more blocks, and let us write the ballot as $B_1 B_2\dots B_k$. By definition, its last block, $B_k$, satisfies $c = \min B_k$ and $d = \max B_k$, or expressed differently $\{c,d\}\subseteq B_k \subseteq [c,d]$. Let $a = \min B_{k-1}$ and $b = \max B_{k-1}$. The blocks $B_{k-1}$ and $B_k$ overlap if, and only if, $a < d$ and $b > c$. Thus \begin{equation}\label{rec-decomp} \begin{aligned} F_{c,d}[U] \;=\;\;& \{ U : c = \min U \land d = \max U \} \;\;\cup \\ &\bigcup_{ \substack{ \{c,d\}\,\subseteq\, B\,\subseteq\, [c,d]\smallskip\\ a,b \,\in\, U\setminus B \smallskip \\ a < d \,\land\, b > c } } F_{a,b}[U \setminus B]\, B,\qquad\qquad \end{aligned} \end{equation} where $F_{a,b}[U \setminus B]B$ is the set $\bigl\{ wB : w \in F_{a,b}[U\setminus B]\bigr\}$, and the somewhat cryptic looking $\{ U : c = \min U \land d = \max U \}$ expresses the singleton $\{U\}$ if $c = \min U$ and $d = \max U$, and the empty set otherwise. We now turn to counting. Let $f(n)$ be the number of overlapping ballots of $[n]$. That is, $f(n) = |F[n]|$ in which $F[n]$ is short for $F[\{1,\dots,n\}]$. Also, let $f_{c,d}(n) = |F_{c,d}[n]|$. If $c > d$ then $f_{c,d}(n) = 0$. Otherwise we shall use the recursive decomposition~\eqref{rec-decomp} and do case analysis based on whether $c$ and $d$ are the same or two distinct elements. If $c=d$, then the last block consists of a single point. 
In terms of \eqref{rec-decomp} the ballot is written $w\{c\}$, where $w\in F_{a,b}[[n]\setminus\{c\}]$ and $a<c<b$. After ``rescaling'' we can consider $w$ a ballot in $F[n-1]$; here we subtract $1$ from each element larger than $c$. Note that this, however, also lowers the value of $b$ by one. Thus, the number of such ballots is \[ \sum_{a=1}^{c-1} \sum_{b=c}^{n-1} f_{a,b}(n-1). \] If $c < d$, then write the ballot as $wB$ and let $\ell=|B|-2$. There are $\binom{d-c-1}{\ell}$ ways to choose $B$. After rescaling we have $w\in F_{a,b}[n-\ell-2]$, where $a \leq d-\ell-2$ and $b\geq c$. Thus, the number of such ballots is \[ \sum_{\ell=0}^{d-c-1} \binom{d-c-1}{\ell} \sum_{a=1}^{d-\ell-2} \sum_{b=c}^{n-\ell-2} f_{a,b}(n-\ell-2). \] Finally, if $c=1$ and $d=n$, we also count the case where the ballot consists of a single block. Taking all this together, we have that \begin{equation} \label{eq:f-recurrence} \begin{split} f_{c,d}(n) &= [c = 1 \land d=n] \\ &+ [c=d]\, \sum_{a=1}^{c-1} \sum_{b=c}^n f_{a,b}(n-1) \\ &+ [c<d]\, \sum_{\ell=0}^{d-c-1} \binom{d-c-1}{\ell} \sum_{a=1}^{d-\ell-2} \sum_{b=c}^{n-\ell-2} f_{a,b}(n-\ell-2). \end{split} \end{equation} Here $[p]$ is the Iverson bracket: it converts the proposition $p$ into $1$ if $p$ is satisfied, and $0$ otherwise. Further, $f(n) = \sum_{a=1}^n \sum_{b=a}^n f_{a,b}(n)$. Recurrence~\eqref{eq:f-recurrence} can be augmented to count overlapping ballots with a specific number of blocks, or, equivalently, pop-stacked permutations with a specific number of ascending runs. Let $f_{c,d}(n,k)$ denote the number of overlapping ballots of $[n]$ with exactly $k$ blocks. Then we have $f(n,k) = \sum_{a=1}^n \sum_{b=a}^n f_{a,b}(n,k)$ and \begin{equation} \label{eq:f-k-recurrence} \begin{split} f_{c,d}(n,k) &= [c = 1 \land d=n \land k=1] \\ &+ [c=d]\, \sum_{a=1}^{c-1} \sum_{b=c}^n f_{a,b}(n-1,k-1) \\ &+ [c<d]\, \sum_{\ell=0}^{d-c-1}\binom{d-c-1}{\ell}\sum_{a=1}^{d-\ell-2}\sum_{b=c}^{n-\ell-2}f_{a,b}(n-\ell-2,k-1). 
\end{split} \end{equation} Note that there are two locations in the recurrence \eqref{eq:f-recurrence} where we have a plain two-dimensional sum over $f$, that is $\sum_{a=\star}^{\star}\sum_{b=\star}^{\star} f_{a,b}(\star)$, where $\star$ are fixed and not dependent on $a$, $b$ or each other. We simplify these two-dimensional sums using ``prefix sums''. Let \[ g_{c,d}(n) = \sum_{a=1}^c \sum_{b=1}^d f_{a,b}(n) \] In particular, $g_{c,d}(n) = 0$ if $c = 0$ or $d=0$. Note that \begin{equation}\label{eq:g_recurrence_fast} g_{c,d}(n) = f_{c,d}(n) + g_{c-1,d}(n) + g_{c,d-1}(n) - g_{c-1,d-1}(n). \end{equation} Also noting that \[ \sum_{a=p}^q \sum_{b=r}^s f_{a,b}(n) = g_{q,s}(n) - g_{p-1,s}(n) - g_{q,r-1}(n) + g_{p-1,r-1}(n), \] we can now simplify the above equation to \begin{equation}\label{eq:f_recurrence_fast} \begin{split} f_{c,d}(n) &= [c = 1 \land d = n] \\[1.2ex] &+ [c=d]\, \Delta_{c-1,n,c-1}(n-1) \\[1ex] &+ [c<d] \sum_{\ell=0}^{d-c-1} \binom{d-c-1}{\ell}\Delta_{d-2-\ell,n-2-\ell,c-1}(n-2-\ell) \end{split} \end{equation} where $\Delta_{u,v,w}(n) = g_{u,v}(n) - g_{u,w}(n)$. We further have $f(n) = g_{n,n}(n)$. The same simplification can also be applied to the recurrence for counting by blocks. Say we wanted to compute $f(n)$ for all $1\leq n\leq N$. We can precompute binomial coefficients $\binom{n}{k}$ for all $0 \leq k \leq n \leq N$ using the recurrence $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$. Then, using dynamic programming we can compute $f_{c,d}(n)$, $g_{c,d}(n)$ and $f(n)$ using Recurrences~\ref{eq:g_recurrence_fast} and \ref{eq:f_recurrence_fast} for all $1 \leq c$, $d \leq n \leq N$ in $O(N^4)$ time using $O(N^3)$ memory. When counting by blocks this is $O(N^5)$ time, but $O(N^3)$ memory is still sufficient. This assumes that all arithmetic operations are $O(1)$. In reality, some of the numbers are on the order of $N!$. This means that multiprecision arithmetic has to be used, which slows down the computation considerably. 
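For illustration, Recurrence~\eqref{eq:f-recurrence} can be implemented directly with memoization; this is our own sketch, deliberately omitting the prefix-sum speedup and the other optimizations discussed here, and it reproduces the first terms of the counting sequence.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def f_cd(c, d, n):
    """Overlapping ballots of [n] whose last block B has min B = c, max B = d."""
    if not (1 <= c <= d <= n):
        return 0
    total = 1 if (c == 1 and d == n) else 0
    if c == d:
        # Last block is the single point {c}; the rest is a rescaled
        # ballot of [n-1] with a < c <= b (b has been lowered by one).
        total += sum(f_cd(a, b, n - 1)
                     for a in range(1, c) for b in range(c, n))
    else:
        # Last block consists of {c, d} plus l of the d-c-1 points between.
        for l in range(d - c):
            total += comb(d - c - 1, l) * sum(
                f_cd(a, b, n - l - 2)
                for a in range(1, d - l - 1) for b in range(c, n - l - 1))
    return total

def f(n):
    """Number of pop-stacked permutations of [n]."""
    return sum(f_cd(a, b, n) for a in range(1, n + 1) for b in range(a, n + 1))

assert [f(n) for n in range(1, 9)] == [1, 1, 3, 11, 49, 263, 1653, 11877]
```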
One way to speed this up is to choose a set of relatively small primes whose product is greater than $N!$. For each prime $p$, the above computation is then carried out in the finite field $\textbf{F}_p$. This can be done in parallel, as the computation for different primes is independent. The values of $f(n)$, which are guaranteed to be at most $N!$ for all $n\leq N$, are then recovered using the Chinese Remainder Theorem. This was used to calculate the number of pop-stacked permutations of each length up to $N=1000$. With $286$ distinct primes just under $10^9$, and one CPU core per prime, the computation took just under an hour to complete, with each core using 3.8GiB of RAM. In a similar manner the numbers of pop-stacked permutations of each length up to $N=300$, grouped by number of ascending runs, were computed. Table~\ref{tbl:seq} gives the number of pop-stacked permutations of each length up to $N=45$, but the complete results, along with the code used to generate the results, can be found on GitHub~\cite{github}.
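The reconstruction step can be sketched as follows; the three primes below are illustrative stand-ins, not the $286$ primes actually used.

```python
from functools import reduce

def crt(residues, moduli):
    """Combine x mod m_i into x mod prod(m_i), for pairwise coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# Recover f(16) = 2608383775171 (see Table 1) from three residues.
primes = [999999937, 1000000007, 1000000009]
value = 2608383775171
assert crt([value % p for p in primes], primes) == value
```

In the actual computation the product of the chosen primes exceeds $N!$, which guarantees that every $f(n)$ with $n \leq N$ is determined uniquely by its residues.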
\begin{table} \centering \begin{tabular}{rr} $n$ & $f(n)$ \\ \hline 1 & 1 \\ 2 & 1 \\ 3 & 3 \\ 4 & 11 \\ 5 & 49 \\ 6 & 263 \\ 7 & 1653 \\ 8 & 11877 \\ 9 & 95991 \\ 10 & 862047 \\ 11 & 8516221 \\ 12 & 91782159 \\ 13 & 1071601285 \\ 14 & 13473914281 \\ 15 & 181517350571 \\ 16 & 2608383775171 \\ 17 & 39824825088809 \\ 18 & 643813226048935 \\ 19 & 10986188094959045 \\ 20 & 197337931571468445 \\ 21 & 3721889002400665951 \\ 22 & 73539326922210382215 \\ 23 & 1519081379788242418149 \\ 24 & 32743555520207058219615 \\ 25 & 735189675389014372317381 \\ 26 & 17167470189102029106503457 \\ 27 & 416297325393961581614919699 \\ 28 & 10468759109047048511785181499 \\ 29 & 272663345523662949571086535201 \\ 30 & 7346518362495550669587951987399 \\ 31 & 204539324291355079758576427320853 \\ 32 & 5878416448467628215599958670190869 \\ 33 & 174223945386975482728912851110751431 \\ 34 & 5320106374135453888563313157982976111 \\ 35 & 167232974698164950641578719412434688845 \\ 36 & 5407019929661274797886581276653666104943 \\ 37 & 179677314965899717327756420597568210468933 \\ 38 & 6132116544121046402686046213590718114272089 \\ 39 & 214787281796488809444762543177377466419782267 \\ 40 & 7716175695131570964771559074490172330993576115 \\ 41 & 284131588386675257705011846785657928372695002841 \\ 42 & 10717718945463416620327720805595647805635809236711 \\ 43 & 413908527884993695909526722330319436067536797304549 \\ 44 & 16356508568742954048255540186930772843919017766669517 \\ 45 & 661053598808034620660440013405109251647269697650963759 \end{tabular} \caption{The number of pop-stacked permutations of each length up to $N=45$.} \label{tbl:seq} \end{table} \section{Experimental analysis} \label{section:experimental} With the first 1000 terms of the counting sequence of pop-stacked permutations now calculated, we turn to a pair of experimental techniques for an empirical analysis: \emph{automated fitting} and \emph{differential approximation}. 
Given initial terms of a counting sequence, the first of these methods searches for a generating function whose power series expansion matches the sequence, while the second predicts the asymptotic growth of the sequence. For the counting sequence at hand, automated fitting does not conjecture a generating function, giving instead several (rigorous) negative results, while differential approximation gives very precise estimates of the asymptotic behavior. \subsection{Automated fitting for pop-stacked permutations} Let $a_0,a_1,\ldots$ be a counting sequence and $F(x) = \sum_{n \geq 0}a_nx^n$ its generating function. If $F(x)$ is a rational function, then we can write $F(x) = p(x)/q(x)$ for relatively prime polynomials $p(x), q(x) \in \mathbb{Q}[x]$; equivalently, \begin{equation} \label{equation:rational-form} q(x)F(x) - p(x) = 0. \end{equation} \sloppy Conversely, suppose we are given only some initial terms $a_0, a_1, \ldots, a_n$ of a counting sequence and want to determine whether the generating function $F(x)$ of the unknown counting sequence is rational. If $F(x)$ is rational with $\max(\deg(p(x)),\deg(q(x))) = d$, then we can write Equation~(\ref{equation:rational-form}) as \begin{equation} \label{equation:generic-rational} (q_0 + q_1x + \cdots + q_dx^d)(a_0 + a_1x + \cdots + a_nx^n) - (p_0 + p_1x + \cdots + p_dx^d) = 0. \end{equation} Expanding the left-hand side gives a polynomial in $x$, and the coefficients of $x^0, x^1, \ldots, x^n$ must all equal $0$. We thus have a system of $n+1$ equations in the $2d+2$ unknowns $p_0,\ldots,p_d,q_0,\ldots,q_d$. A generic system of this form is likely to have non-trivial solutions when $n \leq 2d$, and so when initial terms up to $a_n$ are known, it is only productive to consider $d$ such that $2d < n$. If this system has no non-trivial solution, then we are guaranteed that $F(x)$ is not rational with numerator and denominator of degree at most $d$. 
If the system does have a non-trivial solution, then it is possible, though far from guaranteed, that \[ F(x) = \frac{p_0 + p_1x + \cdots + p_dx^d}{q_0 + q_1x + \cdots + q_dx^d}. \] The larger the difference between $n$ and $2d$, the more confident one can be in such a conjecture. Empirically, this is like using the first $2d$ known terms to guess the rational generating function and the remaining $n-2d$ as confirmation. Automated fitting can be extended beyond the realm of rational generating functions. A generating function $F(x)$ is called \emph{algebraic} if there are polynomials $p_0(x), \ldots, p_m(x) \in \mathbb{Q}[x]$ such that \[ p_m(x)F^m(x) + \cdots + p_1(x)F(x) + p_0(x) = 0, \] called \emph{differentially finite} (or \emph{D-finite}) if there are polynomials $p_0(x), \ldots, p_k(x), q(x) \in \mathbb{Q}[x]$ such that \[ p_k(x)F^{(k)}(x) + \cdots + p_1(x)F'(x) + p_0(x)F(x) + q(x) = 0, \] and called \emph{differentially algebraic} (or \emph{D-algebraic}) if there exists a $(k+2)$-variate polynomial $P$ with coefficients in $\mathbb{Q}$ such that \[ P(x, F(x), F'(x), \ldots, F^{(k)}(x)) = 0. \] To determine whether a generating function $F(x)$ is algebraic given some initial terms, an equation similar to~(\ref{equation:generic-rational}) can be set up assuming each $p_i(x)$ has degree at most $d$, giving a linear system with $n$ equations and $(m+1)(d+1)$ unknowns. In the D-finite case, the system has $(k+2)(d+1)$ unknowns. The D-algebraic case requires further assumptions about form---the ideas are similar, but not worth elaborating upon here. There are various software packages that perform fitting of this kind, including \texttt{Gfun}~\cite{salvy:gfun} in Maple, \texttt{Guess}~\cite{kauers:guess} in Mathematica, and \texttt{Guess}~\cite{rubey:fricas-guess} in FriCAS. We have used a different package, \texttt{GuessFunc}, written by the third author.
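To make the rational case concrete, here is a minimal sketch of our own (the packages named above are far more sophisticated). Guessing a rational generating function $p(x)/q(x)$ with $q(0)=1$, $\deg q \le d$ and $\deg p < d$ amounts to guessing a linear recurrence with constant coefficients; we solve for the recurrence from the first $d$ usable equations and cross-check it against all remaining terms.

```python
from fractions import Fraction

def guess_recurrence(seq, d):
    """Try to fit a_n = -(q_1 a_{n-1} + ... + q_d a_{n-d}) for all n >= d,
    i.e. a rational g.f. with denominator 1 + q_1 x + ... + q_d x^d.
    Returns [q_1, ..., q_d] if a consistent fit exists, else None."""
    if len(seq) < 2 * d + 1:
        return None  # want spare terms for cross-checking
    # Augmented matrix of the first d equations (n = d, ..., 2d-1).
    rows = [[Fraction(seq[n - j]) for j in range(1, d + 1)] + [Fraction(-seq[n])]
            for n in range(d, 2 * d)]
    for col in range(d):  # Gauss-Jordan elimination over the rationals
        piv = next((i for i in range(col, d) if rows[i][col] != 0), None)
        if piv is None:
            return None  # singular system; a more careful search is needed
        rows[col], rows[piv] = rows[piv], rows[col]
        pivval = rows[col][col]
        rows[col] = [x / pivval for x in rows[col]]
        for i in range(d):
            if i != col and rows[i][col] != 0:
                fac = rows[i][col]
                rows[i] = [x - fac * y for x, y in zip(rows[i], rows[col])]
    q = [rows[i][d] for i in range(d)]
    # Cross-check the guessed recurrence against the remaining terms.
    for n in range(2 * d, len(seq)):
        if sum(qj * seq[n - j - 1] for j, qj in enumerate(q)) != -seq[n]:
            return None
    return q

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
assert guess_recurrence(fib, 2) == [-1, -1]   # denominator 1 - x - x^2

pop = [1, 1, 3, 11, 49, 263, 1653, 11877, 95991, 862047]
assert guess_recurrence(pop, 3) is None       # no small rational fit
```

The cross-checking step mirrors the confidence heuristic above: the guess uses the first $2d$ terms, and every further term is pure confirmation.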
We applied automated fitting to the counting sequence of pop-stacked permutations up to length $1000$, and found no conjectured rational, algebraic, D-finite, or D-algebraic form for the unknown generating function $F(x)$. From this we can conclude rigorously that, for example, \begin{enumerate}[label={$\diamond$}] \item If $F(x)$ is rational, then either the degree of the denominator or the degree of the numerator is at least $500$. \item If $F(x)$ is algebraic, then the degree of algebraicity $m$ and the maximum degree of the polynomial coefficients $d = \max(\deg p_0(x), \ldots, \deg p_m(x))$ must satisfy $(m+1)(d+1) > 1000$. \item If $F(x)$ is D-finite, then the differential order $k$ and the maximum degree of the polynomial coefficients $d = \max(\deg q(x), \deg p_0(x), \ldots, \deg p_k(x))$ must satisfy $(k+2)(d+1) > 1000$. \end{enumerate} A similar negative result could be written for the D-algebraic case, although it would require further explanation of the structure of the corresponding search space. One can also apply various transformations to the generating function before initiating the automated fitting procedure. In addition to trying to find a fit for the ordinary generating function $F(x) = \sum_{n \geq 0}a_nx^n$, we also attempted to find a fit for the exponential generating function $\sum_{n \geq 0} (a_n/n!)x^n$, the reciprocal $1/F(x)$, the compositional inverse $F(x)^{\langle -1 \rangle}$, and also several combinations of these transformations. No results were found. \subsection{Automated fitting for pop-stacked permutations with a fixed number of ascending runs} Let $F_k(x)$ denote the power series for those pop-stacked permutations with precisely $k$ ascending runs. Asinowski et~al.~\cite{asinowski-etal-2019} showed that these permutations are in bijection with words from a regular language that is recognized by a certain deterministic finite automaton (DFA) $\mathcal{A}_k$, proving that $F_k(x)$ is rational. 
Furthermore, a system of linear equations can be derived from this DFA, whose solution gives $F_k(x)$. Deriving $F_k(x)$ in this way is only practical for small values of $k$, however, as the number of states in $\mathcal{A}_k$ grows exponentially with $k$. As mentioned earlier, Recurrence~(\ref{eq:f-k-recurrence}) permits the fast computation of the counting sequence for pop-stacked permutations with a fixed number of ascending runs. This, along with the techniques of automated fitting, gives rise to a different approach for finding $F_k(x)$, albeit heuristically\footnote{Given enough terms of the sequence, automated fitting will find $F_k(x)$. The number of terms required is the sum of the degrees of the numerator and denominator of $F_k(x)$, which is not known. An upper bound is twice the number of states in $\mathcal{A}_k$, which is exponential.}. Using the counting sequence for pop-stacked permutations of length at most $300$ with a fixed number of ascending runs, we were able to find a rational fit for each $F_k(x)$ for $k \leq 24$. We were further able to verify that the rational fits were exact for $k\leq 6$ by using the previously mentioned method based on Asinowski et~al.~\cite{asinowski-etal-2019}. The first four generating functions follow. \begin{align*} F_1(x) &= \frac{x}{1-x},\\ F_2(x) &= \frac{2x^3}{(1-2x)(1-x)^2},\\ F_3(x) &= \frac{2x^4(1+3x-6x^2)}{(1-3x)(1-2x)^2(1-x)^3},\\ F_4(x) &= \frac{2x^6(21-74x+5x^2+180x^3-144x^4)}{(1-4x)(1-3x)^2(1-2x)^3(1-x)^4}. \end{align*} Based on this data, which can be found in full on GitHub~\cite{github}, we pose the following conjecture. \begin{conjecture} For all $k$, the rational generating function $F_k(x)$ can be written as \[ F_k(x) = N_k(x)\Big/\prod_{i=1}^k (1-ix)^{k-i+1}, \] where $N_k(x)$ is a polynomial of degree $k(k + 1)/2$, the same degree as the conjectured denominator. 
\end{conjecture} \subsection{Differential approximation} Differential approximation empirically estimates the asymptotic growth of a counting sequence based on its initial terms by using linear differential equations to model the unknown generating function and studying the complex singularities of solutions of those linear differential equations. Here we will only present the results of this analysis---for information about how differential approximation works we refer the reader to~\cite{guttmann:asymptotic-analysis, guttmann:series-analysis}. The cornerstone of analytic combinatorics is the observation that the asymptotic behavior of a counting sequence is intimately connected to the singularities of its generating function when treated as a complex function. For example, the location of the singularities closest to the origin (the \emph{dominant} singularities) roughly determines the exponential growth of the counting sequence, and the nature of those singularities determines the sub-exponential behavior. The output of differential approximation is an estimate of the location and nature (specifically, the critical exponent) of all singularities of the unknown generating function based on the known initial terms. Typically, although not always, the dominant singularity is predicted with the highest precision, with the precision of the estimates of other singularities decreasing as distance from the origin increases. Obviously such an analysis is only experimental, but in practice the estimates given by differential approximation are incredibly accurate. In tests where the true singularity structure of a generating function is independently known, the estimates from differential approximation are rarely off by more than the last decimal place. The counting sequence of pop-stacked permutations grows superexponentially~\cite{asinowski-etal-2019}, implying that its generating function has a singularity at the origin. 
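This singularity-to-asymptotics dictionary can be checked on a classical example with a fully explicit answer (the derangement numbers, which are unrelated to the sequence at hand): their exponential generating function $e^{-x}/(1-x)$ has a single simple pole at $x = 1$ with residue $-e^{-1}$, giving $D_n \sim e^{-1} \cdot n! \cdot 1^{-n}$.

```python
import math

# Derangement numbers via the recurrence D_n = (n-1)(D_{n-1} + D_{n-2})
D = [1, 0]
for n in range(2, 16):
    D.append((n - 1) * (D[n - 1] + D[n - 2]))

# The EGF e^{-x}/(1-x) has a simple pole at x = 1, so D_n / n! -> e^{-1}:
# the pole location gives the exponential rate (here 1), the residue the constant.
assert abs(D[15] / math.factorial(15) - math.exp(-1)) < 1e-12
```

The same reasoning applied to the estimated dominant pole $\mu$ of the exponential generating function of pop-stacked permutations yields the asymptotic form stated below.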
Accordingly, we use differential approximation to analyze the exponential generating function. It predicts a number of singularities on the positive real axis, located at the values below. \begin{footnotesize} \begin{align*} &1.113439041736727043761661526918083240141390165833449466152700785053219911270\ldots\\ &2.417184228722564007388473547672885752580057534770845001690528350200102151036\ldots\\ &3.076673197412146436807595671137309181422151285506943038305240180949212077913\ldots\\ &3.527590791728018755531106354662725269743465863978439496914729951030934478987\ldots\\ &3.872438162423457670453537298789680569472671309363632792004917259462379566078\ldots\\ &4.152519207830100565666605055176411745894938982832118599384868016797119166567\ldots\\ &4.388766437824164163366758081274636520883940965171626205159043874261749420137\ldots\\ &4.593300493040369902037314403433340137408669134838327397901215132095535249496\ldots\\ &4.773787732301263733990448984231076188826829730174328444872240429327757789160\ldots\\ &4.93539355029443080528699130532727322201728351298582403913\\ &5.08176797057144544489527338196678922218609719159\\ &5.215588012778242472294262722856995906\\ &5.453200964209036692\\ &5.55979961612\\ &5.659669 \end{align*} \end{footnotesize} Each of these singularities is predicted to have critical exponent $-1$, making them simple poles. The topmost 9 estimates have been truncated to fit on the page. In reality, they are given to many more decimal places---nearly $800$ for the dominant singularity. More precise estimates could be obtained if desired. These results suggest that the exponential generating function may possess an infinite number of singularities. If true, this would imply the non-D-finiteness of both the ordinary and exponential generating functions. Differential approximation also predicts several complex pairs of singularities, also simple poles, of which we will list just a few. 
\begin{footnotesize} \begin{align*} &0.4279380975440727242991591373540946029637854497521857134254777354059489934\ldots\\ & \, \pm 3.6012595134274782137294551323567899146878282109407492350988015900552787045\ldots i\\ &1.8079319224525533045652715650438553186508451786578693412247786970810774117\ldots\\ & \, \pm 4.0462349876106887702897457441128645763490304850344195743880592871046130995\ldots i\\ &2.5083998717369662727687249193314945476381464747880461769920884622874845896\ldots\\ & \, \pm 4.2416800160392329291940969204250545140382149982272394213372595306429864967\ldots i \end{align*} \end{footnotesize} The dominant pole at $\mu \approx 1.11343904$ implies that the exponential growth rate of the counting sequence is \[ \mu^{-1} \approx 0.8981183185746869695116759646856448\ldots, \] implying that the asymptotic behavior of the number of pop-stacked permutations is \[ a_n \sim C \cdot n! \cdot (0.898118\ldots)^n. \] Differential approximation does provide an estimate for the constant $C$, but a more precise value can be obtained numerically given the extremely accurate estimate for $\mu$. We find that \[ C \approx 0.6956885490706357679957031687241101565741983507216179232324\ldots \] giving the final asymptotic approximation \[ a_n \sim (0.695688\ldots) \cdot n! \cdot (0.898118\ldots)^n. \] Full decimal values for the approximated singularities and constants can also be found on GitHub~\cite{github}. \textbf{Acknowledgements.} Computations were performed on the Garpur cluster~\cite{garpur}, a joint project between the University of Iceland and the University of Reykjavik funded by the Icelandic Centre for Research. We thank them for the use of their resources.
https://arxiv.org/abs/1111.2450
The Bernstein-Orlicz norm and deviation inequalities
We introduce two new concepts designed for the study of empirical processes. First, we introduce a new Orlicz norm which we call the Bernstein-Orlicz norm. This new norm interpolates sub-Gaussian and sub-exponential tail behavior. In particular, we show how this norm can be used to simplify the derivation of deviation inequalities for suprema of collections of random variables. Secondly, we introduce chaining and generic chaining along a tree. These simplify the well-known concepts of chaining and generic chaining. The supremum of the empirical process is then studied as a special case. We show that chaining along a tree can be done using entropy with bracketing. Finally, we establish a deviation inequality for the empirical process for the unbounded case.
\section{Introduction}\label{introduction.section} We introduce a new Orlicz norm which we name the Bernstein-Orlicz norm. It interpolates sub-Gaussian and sub-exponential tail behavior. With this new norm, we apply the usual techniques based on Orlicz norms. In particular, we derive deviation inequalities for suprema in a fairly simple and straightforward way. The Bernstein-Orlicz norm captures Bernstein's probability inequalities, and its use puts further derivations in a unifying framework, shared for example by techniques for the sub-Gaussian case, such as those for empirical processes based on symmetrization and Hoeffding's inequality. We furthermore introduce chaining and generic chaining along a tree, which we believe is conceptually simpler than the usual chaining and generic chaining. We invoke it for the presentation of maximal inequalities for general random variables with finite Bernstein-Orlicz norm. The supremum of the empirical process is then studied as a special case, and we show that chaining along a tree can be done using entropy with bracketing. We establish a deviation inequality for the empirical process indexed by a class of functions ${\cal G}$, in terms of the new Bernstein-Orlicz norm. The class ${\cal G}$ is assumed to satisfy a uniform Bernstein condition, but need not be uniformly bounded in supremum norm. The paper is organized as follows. In Section \ref{definition.section}, we introduce the Bernstein-Orlicz norm and discuss the relation with Bernstein's inequality. We then present some bounds for maxima of finitely many random variables (Section \ref{finitely-many.section}) or suprema over a countable set of random variables (Section \ref{Suprema.section}). Section \ref{Suprema.section} also contains the concept of (generic) chaining along a tree. The proofs of the results in Sections \ref{definition.section}, \ref{finitely-many.section} and \ref{Suprema.section} are elementary and given immediately following their statement. 
Section \ref{adaptivetruncationsection} contains the application to the empirical process. The proofs here are more technical, and given separately in Sections \ref{proofs.section} and \ref{lastsection}. \section{The Bernstein-Orlicz norm} \label{definition.section} Consider a random variable $Z \in \mathbb{R}$ with distribution $\PP$. We first recall the general Orlicz norm (see e.g.\ \cite {krasnosel1961convex}). \begin{definition} Let $\Psi: [0, \infty) \rightarrow [0 , \infty ) $ be an increasing and convex function with $\Psi(0)=0$. The $\Psi$-Orlicz norm of $Z$ is $$\| Z \|_{\Psi} := \inf \biggl \{ c >0 : \ \EE \Psi \biggl ({ |Z | \over c } \biggr ) \le 1 \biggr \} . $$ \end{definition} A special case is the $L_m (\PP)$-norm ($m \ge 1$) which corresponds to $\Psi (z) = z^m $. Other important special cases are $\Psi (z) = \exp [ z^2 ] - 1$ for sub-Gaussian random variables and $\Psi(z) = \exp (z) -1 $ for sub-exponential random variables. We propose functions $\Psi$ that combine sub-Gaussian intermediate tails and sub-exponential far tails. For each $L>0$ we define \begin{equation}\label{PsiL.equation} \Psi_L (z):= \exp \biggl [ { \sqrt {1+ 2L z } - 1 \over L } \biggr ]^2 - 1 , \ z \ge 0 . \end{equation} It is easy to see that $\Psi_L$ is increasing and convex, and that $\Psi_L (0)=0$. \begin{definition} Let $L>0$ be given. The ($L$-)Bernstein-Orlicz norm is the $\Psi$-Orlicz norm with $\Psi= \Psi_L$ given in (\ref{PsiL.equation}). \end{definition} Indeed, the Bernstein-Orlicz norm combines sub-Gaussian and sub-exponential behavior: $$\Psi_L (z) \approx \cases {\exp [ z^2] -1 & for $Lz$ small \cr \exp[2z/L ] -1 & for $Lz$ large \cr } . $$ Note that the constant $L$ governs the range of the sub-Gaussian behavior. It is a dimensionless constant, i.e., it does not depend on the scale of measurement. The inverse of $\Psi_L$ is $$\Psi_L^{-1} (t)= \sqrt { \log (1+t)} + {L \over 2} \log (1+t) , \ t \ge 0 . 
$$ With this and with Chebyshev's inequality, one now directly derives a probability inequality for $Z$. \begin{lemma} \label{Psi-Prob.lemma} Let $\tau:= \| Z \|_{\Psi_L} $. We have for all $t >0$, $$\PP \biggl ( |Z| >\tau \biggl [ \sqrt {t} + {Lt \over 2} \biggr ] \biggr ) \le 2 \exp [-t] . $$ \end{lemma} {\bf Proof of Lemma \ref{Psi-Prob.lemma}.} By Chebyshev's inequality, for all $c>\| Z \|_{\Psi_L} $, $$\PP \biggl ( | Z | / c \ge \sqrt {t} + {Lt \over 2} \biggr ) = \PP \biggl ( |Z| / c \ge \Psi_L^{-1} ( {\rm e}^t -1) \biggr ) $$ $$ = \PP \biggl ( \Psi_L (| Z|/c) \ge {\rm e}^t -1 \biggr ) \le \biggl ( \EE \Psi_L ( |Z|/ c ) +1 \biggr ) {\rm e}^{-t} .$$ Thus, $$\PP \biggl ( | Z | / \tau > \sqrt {t} + {Lt \over 2} \biggr ) = \lim_{c \downarrow \tau} \PP \biggl ( | Z | / c > \sqrt {t} + {Lt \over 2} \biggr ) $$ $$ \le \lim_{c \downarrow \tau} \biggl ( \EE \Psi_L ( |Z|/ c ) +1 \biggr ) {\rm e}^{-t} \le 2 {\rm e}^{-t} . $$ \hfill $\sqcup \mkern -12mu \sqcap$ The next lemma says that a converse result holds as well, that is, from the probability inequality of Lemma \ref{Psi-Prob.lemma} one can derive a bound for the Bernstein-Orlicz norm, with constants $L$ and $\tau$ multiplied by $\sqrt 3$\footnote{The constant can possibly be improved.}. \begin{lemma} \label{Prob-Psi.lemma} Suppose that for some constants $\tau$ and $L$, and for all $t >0$, $$\PP \biggl ( |Z| \ge \tau \biggl [ \sqrt {t} + {Lt \over 2} \biggr ] \biggr ) \le 2 \exp[-t] . $$ Then $\| Z \|_{\Psi_{\sqrt 3 L }} \le \sqrt 3 \tau $. 
\end{lemma} {\bf Proof of Lemma \ref{Prob-Psi.lemma}.} We have $$\EE \Psi_{\sqrt 3L} \biggl (|Z|/(\sqrt {3} \tau)\biggr )= \int_0^{\infty} \PP \biggl ( |Z| \ge \sqrt 3 \tau \Psi_{\sqrt 3 L}^{-1} (t) \biggr )dt $$ $$ = \int_0^{\infty} \PP \biggl ( |Z| \ge \sqrt 3 \tau \biggl [ \sqrt {\log (1+t) } + {\sqrt 3 L \over 2 } \log (1+t ) \biggr ] \biggr )dt $$ $$ = \int_0^{\infty} \PP \biggl ( |Z| \ge \tau \biggl [ \sqrt {\log (1+t)^3 } + { L \over 2 } \log (1+t )^3 \biggr ] \biggr )dt \le 2 \int_{0}^{\infty} {1 \over (1+t)^3} dt = 1. $$ \hfill $\sqcup \mkern -12mu \sqcap$ We recall Bernstein's inequality, see \cite{Bennet:62}. \begin{theorem} \label{Prob-Bernstein.theorem} Let $X_1 , \ldots , X_n $ be independent random variables with values in $\mathbb{R}$ and with mean zero. Suppose that for some constants $\sigma$ and $K$, one has $$ {1 \over n} \sum_{i=1}^n \EE | X_i|^m \le {m! \over 2} K^{m-2} \sigma^2 , \ m = 2 , 3 , \ldots . $$ Then for all $t>0$, $$ \PP \biggl ( {1 \over \sqrt n } \biggl | \sum_{i=1}^n X_i \biggr | \ge \sigma \sqrt {2 t } + { Kt \over\sqrt n} \biggr ) \le 2 \exp [-t] . $$ \end{theorem} The following corollary shows that $\| \cdot \|_{\Psi_L}$ indeed captures the nature of Bernstein's inequality. \begin{corollary} \label{Bernstein-Psi.corollary} Let $X_1 , \ldots , X_n $ be independent random variables satisfying the conditions of Theorem \ref{Prob-Bernstein.theorem}. Then by this theorem and Lemma \ref{Prob-Psi.lemma}, for $L := \sqrt 6 K /( \sqrt n \sigma) $, we have $$\biggl \| {1 \over \sqrt n } \sum_{i=1}^n X_i \biggr \|_{\Psi_L} \le \sqrt 6 \sigma . $$ \end{corollary} \section{The Bernstein-Orlicz norm for the maximum of finitely many variables} \label{finitely-many.section} Using Orlicz norms, the argument for obtaining a bound for the expectation of maxima is standard. We refer to \cite{vanderVaart:96} for a general approach. We consider the special case of the Bernstein-Orlicz norm. 
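Before stating the maximal inequality, here is a quick numerical sanity check (a sketch of our own, not part of the development). It first confirms the closed form for $\Psi_L^{-1}$ from Section \ref{definition.section} (with $u = \log(1+t)$ and $z = \sqrt{u} + Lu/2$ one has $1 + 2Lz = (1+L\sqrt{u})^2$, hence $\Psi_L(z) = t$), and then exercises the forthcoming expectation bound on i.i.d.\ Uniform$[0,1]$ variables, an illustrative choice for which $\|Z_j\|_{\Psi_L} \le 1/\Psi_L^{-1}(1)$ and $\EE \max_{j \le p} Z_j = p/(p+1)$ exactly.

```python
import math

def Psi(L, z):
    """Psi_L(z) = exp([(sqrt(1 + 2Lz) - 1)/L]^2) - 1."""
    return math.exp(((math.sqrt(1 + 2 * L * z) - 1) / L) ** 2) - 1

def Psi_inv(L, t):
    """Psi_L^{-1}(t) = sqrt(log(1+t)) + (L/2) log(1+t)."""
    u = math.log(1 + t)
    return math.sqrt(u) + L * u / 2

# Psi_inv really inverts Psi
for L in (0.1, 1.0, 5.0):
    for t in (0.5, 1.0, 10.0):
        assert abs(Psi(L, Psi_inv(L, t)) - t) < 1e-9 * (1 + t)

# Toy check of the expectation bound for maxima: for |Z| <= 1 one has
# E Psi_L(|Z|/c) <= Psi_L(1/c) <= 1 once c >= 1/Psi_inv(L, 1), so
# tau := 1/Psi_inv(L, 1) bounds the Orlicz norm of each variable.
L = 1.0
tau = 1 / Psi_inv(L, 1)
for p in (2, 10, 100):
    assert p / (p + 1) <= tau * Psi_inv(L, p)   # E max <= tau * Psi_inv(L, p)
```

The check in the second half is loose (the bound exceeds the trivial bound $1$), but it exercises the formula $\tau[\sqrt{\log(1+p)} + (L/2)\log(1+p)] = \tau\,\Psi_L^{-1}(p)$ appearing in the lemma below.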
\begin{lemma}\label{Expect-Finite.lemma} Let $\tau$ and $L$ be constants, and let $Z_1 , \ldots , Z_p$ be random variables satisfying $$\max_{1 \le j \le p } \| Z_j \|_{\Psi_L} \le \tau . $$ Then $$\EE \max_{1 \le j \le p} |Z_j | \le \tau\biggl [ \sqrt {\log(1+ p)} + {L\over 2} \log (1+p ) \biggr ] . $$ \end{lemma} {\bf Proof of Lemma \ref{Expect-Finite.lemma} .} Let $c > \tau$. Then by Jensen's inequality $$\EE \max_{1 \le j \le p} |Z_j |\le c \Psi_L^{-1} \left ( \EE \Psi_L\biggl ( \max_{1 \le j \le p } |Z_j | / c\biggr ) \right ) = c \Psi_L^{-1} \left ( \EE \max_{1 \le j \le p } \Psi_L\biggl ( |Z_j | / c\biggr ) \right ) $$ $$ \le c \Psi_L^{-1} \left ( \sum_{j=1}^p \EE \Psi_L\biggl ( |Z_j | / c\biggr ) \right ) \le c \Psi_L^{-1} \left ( p \max_{1 \le j \le p } \EE \Psi_L\biggl ( |Z_j | / c\biggr ) \right ). $$ Therefore, $$\EE \max_{1 \le j \le p} |Z_j | \le \lim_{c \downarrow \tau} c \Psi_L^{-1} \left ( p \max_{1 \le j \le p } \EE \Psi_L\biggl ( |Z_j | / c\biggr ) \right ) \le \tau \Psi_L^{-1} ( p ) $$ $$= \tau \biggl [ \sqrt {\log(1+ p)} + {L\over 2} \log (1+p ) \biggr ] . $$ \hfill $\sqcup \mkern -12mu \sqcap$ As a special case, one may consider the random variables $$Z_j := {1 \over \sqrt n} \sum_{i=1}^n g_j(X_i), \ j=1 , \ldots , p , $$ where $X_1 , \ldots , X_n$ are independent random variables with values in some space ${\cal X}$, and where $g_1 , \ldots , g_p$ are real-valued functions on ${\cal X}$. If the $g_j(X_i)$ are centered for all $i$ and $j$, and if one assumes the Bernstein condition $${1 \over n } \sum_{i=1}^n \EE | g_j (X_i) |^m \le { m! \over 2 } K^{m-2} \sigma^2 , m=2,3, \ldots , \ j=1 , \ldots, p , $$ then one can apply Lemma \ref{Expect-Finite.lemma}, with $\tau:= \sqrt 6 \sigma$ and $L= \sqrt 6 K/ (\sqrt n \sigma)$, giving the inequality \begin{equation} \label{S=0} \EE \max_{1 \le j \le p} \biggl | {1 \over \sqrt n} \sum_{i=1}^n g_j (X_i) \biggr | \le \sigma \sqrt {6 \log (1+ p)} + { 3K \over \sqrt n} \log (1+ p ) . 
\end{equation} This follows from Corollary \ref{Bernstein-Psi.corollary}. The constants can however be improved when using direct arguments (see e.g.\ Lemma 14.12 in \cite{BvdG2011}). We now present a deviation inequality in probability for the maximum of finitely many variables. \begin{lemma}\label{deviation-Prob.lemma} Let $Z_1 , \ldots , Z_p$ be random variables satisfying for some $L$ and $\tau$ $$\max_{1 \le j \le p } \| Z_j \|_{\Psi_L} \le \tau . $$ Then for all $t > 0$ $$\PP \left ( \max_{1 \le j \le p } | Z_j | \ge \tau \biggl [ \sqrt {\log(1+ p)} + {L\over 2} \log (1+p ) + \sqrt t +{ Lt \over 2} \biggr ] \right ) \le 2 \exp[-t] . $$ \end{lemma} {\bf Proof of Lemma \ref{deviation-Prob.lemma}.} We first use that for any $a>0$ and $t>0$, one has $\sqrt {a} + \sqrt {t} > \sqrt {a+t}$, so that $$\PP \left ( \max_{1 \le j \le p } | Z_j | \ge \tau \biggl [ \sqrt {\log(1+ p)} + {L\over 2} \log (1+p ) + \sqrt t +{ Lt \over 2} \biggr ] \right ) $$ $$ \le \PP \left ( \max_{1 \le j \le p } | Z_j | > \tau \biggl [ \sqrt {t+\log(1+ p)} + {L\over 2} ( t+ \log (1+p ) ) \biggr ] \right ) .$$ Next, we apply the union bound and Lemma \ref{Psi-Prob.lemma}: $$ \PP \left ( \max_{1 \le j \le p } | Z_j | > \tau \biggl [ \sqrt {t+\log(1+ p)} + {L\over 2} ( t+ \log (1+p ) ) \biggr ] \right ) $$ $$ \le \sum_{j=1}^p \PP \left ( | Z_j | > \tau \biggl [ \sqrt {t+\log(1+ p)} + {L\over 2} ( t+ \log (1+p ) ) \biggr ] \right )$$ $$ \le 2 p \exp\biggl [ -(t+ \log (1+p) ) \biggr ] = {2p \over 1+p} \exp[-t] \le 2 \exp[-t] . $$ \hfill $\sqcup \mkern -12mu \sqcap$ Using Lemma \ref{Prob-Psi.lemma}, this is easily converted into the following deviation inequality for the Bernstein-Orlicz norm. We use the notation $$x_+ := x \, {\bf 1} \{ x > 0 \} . $$ \begin{lemma}\label{deviation-Psi.lemma} Let $Z_1 , \ldots , Z_p$ be random variables satisfying for some $L$ and $\tau$ $$\max_{1 \le j \le p } \| Z_j \|_{\Psi_L} \le \tau . 
$$ Then $$\biggl \| \biggl ( \max_{1 \le j \le p } |Z_j| - \tau \biggl [ \sqrt {\log (1+p)} + {L \over 2} \log (1+p) \biggr ] \biggr )_+ \biggr \| _{\Psi_{\sqrt 3 L}} \le \sqrt 3 \tau. $$ \end{lemma} {\bf Proof of Lemma \ref{deviation-Psi.lemma}.} Let $$Z := \biggl (\max_{1 \le j \le p } |Z_j| - \tau \biggl [ \sqrt {\log(1+ p)} + {L\over 2} \log (1+p ) \biggr ] \biggr )_+ . $$ By Lemma \ref{deviation-Prob.lemma}, we have for all $t >0$ $$ \PP \biggl ( Z \ge \tau \biggl [ \sqrt t + {Lt \over 2} \biggr ] \biggr )$$ $$ = \PP \left ( \max_{1 \le j \le p } | Z_j | \ge \tau \biggl [ \sqrt {\log(1+ p)} + {L\over 2} \log (1+p ) + \sqrt t +{ L t\over 2} \biggr ] \right ) \le 2 \exp[-t] . $$ Application of Lemma \ref{Prob-Psi.lemma} finishes the proof. \hfill $\sqcup \mkern -12mu \sqcap$. \section{Chaining along a tree}\label{Suprema.section} A common technique for bounding suprema of stochastic processes is chaining as developed by Kolmogorov, leading to versions of Dudley's entropy bound (\cite{dudley1967sizes}). See e.g. \cite {vanderVaart:96} or \cite{vandeGeer:00} and the references therein. We however propose another method which we call chaining along a tree. This method is conceptually simpler than the usual chaining and, as far as we know, does not introduce unnecessary restrictions. An example will be detailed in Section \ref{adaptivetruncationsection} for the case of entropy with bracketing. The generic chaining technique of \cite{talagrand2005generic} is a refinement which we shall also consider in Definition \ref{generic-tree.definition} and Theorem \ref{generic}. Let $S \in \mathbb{N}_0$ be fixed. 
\begin{definition} A finite tree\footnote{Actually, ${\cal T}$ is rather a forest consisting of $|G_0|$ trees} ${\cal T} $ is a collection $\{ G_s \}_{s=0}^S $ of disjoint subsets of $\{ 1 , \ldots , N\} $ such that $\cup_{s=0}^S G_s = \{ 1 , \ldots , N \}$, together with a function $${\rm parent} : \{ 1 , \ldots , N\} \rightarrow \{ 1 , \ldots , N\} ,$$ such that ${\rm parent } (j) \in G_{s-1}$ for $j \in G_s $, $s = 1 , \ldots , S $. We call an element of $\{ 1 , \ldots , N\}$ a node, and $G_s$ a generation, $s=0 , \ldots , S$. A branch of the tree with end node $j_S \in G_S$ is the sequence $\{ j_0 , \ldots , j_S\} $ with $j_{s-1} = {\rm parent}({j_s})$, $s=1, \ldots , S$. \end{definition} \begin{definition} Let a collection of real-valued random variables ${\cal W }:= \{ W_j \}_{j=1}^N$ be given. A finite labeled tree $({\cal T}, {\cal W})$ is a finite tree with a label $W_j $ on each node $j$. \end{definition} Let $\Theta$ be some countable set and let $Z_{\theta} \in \mathbb{R}$ be a random variable defined for each $\theta \in \Theta$. We consider the supremum of the process $\{ |Z_{\theta} | : \ \theta \in \Theta \} $. \begin{definition} \label{tree.definition} Let $\delta >0$ and $\tau>0$ be constants and let ${\cal L} := \{ L_s \}_{s=0}^S$ be a sequence of positive numbers. A $(\delta,\tau, {\cal L} )$ finite tree chain for $\{ Z_{\theta} \}$ is a finite labeled tree $({\cal T} ,{\cal W} )$ such that for all $s=0 , \ldots , S$, $$ \| W_{j} \|_{\Psi_{L_s}} \le \tau 2^{-s} , \ \forall \ j \in G_s, $$ and such that one can apply chaining of $\{ Z_{\theta } \}$ along the tree $({\cal T} , {\cal W})$, with approximation error $\delta$. That is, for each $\theta \in \Theta$ there is an end node $j_S \in G_S$ such that the branch $\{ j_0 , \ldots , j_S \}$ satisfies $$| Z_{\theta}| \le \sum_{s=0}^S | W_{j_s} | + \delta . $$ \end{definition} In the above definition, the approximation error $\delta$ will generally depend on the depth $S$ of the tree. 
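A minimal concrete instance of these definitions (with toy generation sizes of our own choosing) illustrates the generations, the parent function, and the branch determined by an end node:

```python
# A tiny finite tree in the sense of the definition above: nodes 1..7,
# generations G_0 = {1}, G_1 = {2, 3}, G_2 = {4, 5, 6, 7},
# and a parent function mapping each node into the previous generation.
generations = [{1}, {2, 3}, {4, 5, 6, 7}]
parent = {2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}

def branch(end_node, S):
    """The branch {j_0, ..., j_S} ending at end_node, via repeated parents."""
    path = [end_node]
    for _ in range(S):
        path.append(parent[path[-1]])
    return list(reversed(path))

assert branch(6, 2) == [1, 3, 6]
# sanity: every node's parent lies in the previous generation
for s in (1, 2):
    for j in generations[s]:
        assert parent[j] in generations[s - 1]
```

A labeled tree would additionally attach a random variable $W_j$ to each node $j$; chaining along the tree then bounds $|Z_\theta|$ by the sum of the labels along one such branch, plus the approximation error $\delta$.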
We assume that at a fine enough level, the approximation error is small. The usual chaining technique does not assume a tree structure, but indeed often needs only a finite number of steps. A tree structure follows if the members at the finest level are taken as end nodes. With a finite number of steps, the sum given in (\ref{gamma.equation}) is finite. This avoids requiring convergence of an infinite sum. We have presented the definition of a finite tree chain for the Bernstein-Orlicz norm $\| \cdot \|_{\Psi_L}$. However, the concept is not particularly tied to this norm, e.g., for sub-Gaussian cases one may choose to replace the Bernstein-Orlicz norm by the $L_2 (\PP)$ norm (corresponding to the case where the constants in ${\cal L}$ all vanish). Let us now turn to the results. \begin{theorem} \label{Tree.theorem} Let $({\cal T},{\cal W} )$ be a $(\delta ,\tau, {\cal L} )$ finite tree chain for $\{ Z_{\theta } \}$. Define \begin{equation}\label{gamma.equation} \gamma:= \tau \sum_{s=0}^S 2^{-s} \biggl [ \sqrt {\log (1+ |G_s| ) } + {L_s \over 2} \log (1+ |G_s| ) \biggr ] . \end{equation} It holds that \begin{equation}\label{meanbound.equation} \EE \left ( \sup_{\theta \in \Theta} | Z_{\theta} | \right ) \le \gamma + \delta. \end{equation} \end{theorem} \begin{remark}\label{minimize-tree.remark} One may minimize the right-hand side of (\ref{meanbound.equation}) over all finite trees. \end{remark} {\bf Proof of Theorem \ref{Tree.theorem}.} We have $$\EE \sup_{\theta \in \Theta} |Z_{\theta} | \le \sum_{s=0}^S \EE \max_{j \in G_s } | W_j |+ \delta .$$ Application of Lemma \ref{Expect-Finite.lemma} gives that for each $s \in \{ 0 , \ldots , S \} $ $$\EE \max_{j \in G_s } |W_j| \le \tau 2^{-s} \biggl [ \sqrt {\log (1+ |G_s| ) } + { L_s \over 2} \log (1+ | G_s | ) \biggr ] . $$ \hfill $\sqcup \mkern -12mu \sqcap$ With generic chaining, the condition on the Bernstein-Orlicz norm of the labels is dropped in the definition of the tree. 
This Bernstein-Orlicz norm then turns up in the constants (\ref{gamma1*.equation}) and (\ref{gamma2*.equation}) which appear in the generic chaining bound of Theorem \ref{generic}. \begin{definition}\label{generic-tree.definition} Let $\delta >0$ be a constant. A $\delta$ finite generic tree chain for $\{ Z_{\theta} \}$ is a finite labeled tree $({\cal T} ,{\cal W} )$ such that one can apply generic chaining of $\{ Z_{\theta } \}$ along the tree $({\cal T} , {\cal W})$ with approximation error $\delta$. That is, for each $\theta \in \Theta$ there is an end node $j_S \in G_S$ such that the branch $\{ j_0 , \ldots , j_S \}$ satisfies $$| Z_{\theta}| \le \sum_{s=0}^S | W_{j_s} | + \delta . $$ \end{definition} Let $({\cal T} ,{\cal W} )$ be a finite labeled tree. For each end node $k \in G_S$, we let $$\{ j_0 (k) , \ldots , j_S(k)\} $$ be the corresponding branch (so that $j_S (k) =k $), and we write $$ W_s(k):= W_{j_s(k)} , \ k \in G_S , \ s=0, 1 , \ldots , S . $$ Fix a sequence of positive constants ${\cal L} := \{ L_s \}_{s=0}^S $. We write for $k \in G_S$, \begin{equation}\label{gamma1*.equation} \gamma_{1,*} (k) := \sum_{s=0}^S \| W_s (k) \|_{\Psi_{L_s}} \sqrt {\log (1+ | G_s | ) } , \end{equation} \begin{equation}\label{gamma2*.equation} \gamma_{2,*} (k) := \sum_{s=0}^S \| W_s (k) \|_{\Psi_{L_s}} { L_s } \log (1+ | G_s | ) , \end{equation} $$\gamma_*(k) := \gamma_{1,*} (k)+ { \gamma_{2,*}(k) \over 2} . $$ Moreover, we let $$\gamma_{1,*} := \max_{k \in G_S} \gamma_{1,*} (k) , \ \gamma_{2,*} := \max_{k \in G_S} \gamma_{2,*} (k) , \ \gamma_{*} := \max_{k \in G_S} \gamma_{*} (k) , $$ and $$\tau_* := \max_{k \in G_S } \sum_{s=0}^S \| W_s (k) \|_{\Psi_{L_s}} \sqrt {1+s} , $$ and $$L_* \tau_* := \max_{k \in G_S } \sum_{s=0}^S \| W_s (k) \|_{\Psi_{L_s}} (1+s) L_s .$$ \begin{theorem}\label{generic} Let $({\cal T} ,{\cal W} )$ be a $\delta$ finite generic tree chain for $\{ Z_{\theta} \} $. 
Then $$\PP \biggl ( \sup_{\theta \in \Theta} | Z_{\theta} | \ge \gamma_* + \delta + \tau_* \biggl [1+ {L_* \over 2} \biggr ] + \tau_* \biggl [ \sqrt {t} + {L_* t \over 2} \biggr ] \biggr ) \le 2 \exp[-t] . $$ \end{theorem} \begin{remark}\label{minimize-tree2.remark} The result of Theorem \ref{generic} may again be optimized over all finite generic trees. \end{remark} {\bf Proof of Theorem \ref{generic}.} Define for $s=0 , \ldots , S$, $$\alpha_s := \biggl [ \sqrt {\log (1+ |G_s| ) } + {L_s \over 2} \log (1+ |G_s| ) \biggr ] + \biggl [ \sqrt {(1+s)(1+t) } + {(1+s)(1+t) L_s \over 2}\biggr ] . $$ Using Lemma \ref{deviation-Prob.lemma}, we see that \begin{equation}\label{single-s.equation} \PP \biggl ( \max_{j \in G_s } { |W_j | \over \| W_j \|_{\Psi_{L_s} } } \ge \alpha_s \biggr ) \le 2 \exp[-(1+t)(1+s) ] , \ s=0 , \ldots , S . \end{equation} We have $$\PP \biggl (\max_{k } \sum_{s=0}^S | W_{s} (k)| \ge \max_{k } \sum_{s=0}^S\| W_s(k) \|_{\Psi_{L_s}} \alpha_s \biggr ) $$ $$ \le \PP \biggl (\exists k : \ \sum_{s=0}^S | W_{s} (k)| \ge \sum_{s=0}^S \| W_s(k) \|_{\Psi_{L_s}} \alpha_s \biggr ) $$ $$ \le \sum_{s=0}^{S} \PP \biggl (\exists k : \ |W_{s} (k)| \ge \| W_s(k) \|_{\Psi_{L_s}} \alpha_s \biggr ) $$ $$ = \sum_{s=0}^S \PP\biggl (\max_k { | W_{s} (k)| \over \| W_s(k) \|_{\Psi_{L_s}} }\ge \alpha_s \biggr ) \le \sum_{s=0}^S \PP \biggl ( \max_{j \in G_s } { |W_j | \over \| W_j \|_{\Psi_{L_s}} } \ge \alpha_s \biggr ) . $$ Now insert (\ref{single-s.equation}) to find $$\PP \biggl (\max_k \sum_{s=0}^S |W_{s} (k) | \ge \max_k \sum_{s=0}^S \| W_s (k) \|_{\Psi_{L_s}} \alpha_s \biggr ) \le 2 \sum_{s=0}^S \exp[-(1+t)(1+s) ]$$ $$ \le {2 {\rm e} ^{-(1+t)} \over 1- {\rm e} ^{-(1+t) } } \le {2 {\rm e} ^{-1} \over 1- {\rm e} ^{-1 } } \exp[-t] \le 2 \exp[-t] . 
$$ We have by definition $$ \max_k \sum_{s=0}^S \| W_s(k) \|_{\Psi_{L_s}} \biggl [ \sqrt {\log (1+ |G_s| ) } + {L_s \over 2} \log (1+ |G_s| ) \biggr ]=\gamma_*, $$ $$\max_k \sum_{s=0}^S \| W_s(k) \|_{\Psi_{L_s}} \sqrt {(1+s)} =\tau_* , $$ and $$\max_k \sum_{s=0}^S \| W_s(k) \|_{\Psi_{L_s}} (1+s) L_s =\tau_* L_*.$$ Therefore, $$\max_k \sum_{s=0}^S \| W_s(k) \|_{\Psi_{L_s}}\alpha_s \le \gamma_* + \tau_* \sqrt {1+t} + { \tau_* (1+t) L_* \over 2} $$ $$ \le \gamma_* + \tau_* + {\tau_* L_* \over 2} + \tau_* \biggl [ \sqrt {t} + {L_*t \over 2} \biggr ] .$$ \hfill $\sqcup \mkern -12mu \sqcap$ Note that the constants $L_*$ and $\tau_*$ possibly depend on the complexity of $\Theta$ through the quantities $\{ \| W_s (k) \|_{\Psi_{L_s} } : \ k \in G_S , \ s=0, \ldots , S \} $. Moreover, the choice of the constants ${\cal L} = \{ L_s \}_{s=0}^S $ may also depend on the complexity of $\Theta$. In the application to the empirical process (see Section \ref{adaptivetruncationsection}), the latter will indeed be the case. We will nevertheless derive there a deviation inequality where we put the dependency on the complexity of $\Theta$ in the shift. As a simple corollary of Theorem \ref{generic}, one obtains a deviation inequality in the Bernstein-Orlicz norm. We state this for completeness. In Section \ref{adaptivetruncationsection} we will not apply Corollary \ref{deviation-Psi-sup.corollary} directly, because as such, it does not allow us to put all dependency on the complexity of $\Theta$ in the shift. \begin{corollary}\label{deviation-Psi-sup.corollary} Let the conditions of Theorem \ref{generic} be met. Then the combination of this theorem with Lemma \ref{Prob-Psi.lemma} gives $$\biggl \| \biggl ( \sup_{\theta \in \Theta} | Z_{\theta} | - ( \gamma_* + \delta + \tau_* [1+ L_* / 2 ] ) \biggr )_+ \biggr \|_{\Psi_{\sqrt 3 L_*}} \le \sqrt 3 \tau_* . 
$$ By Jensen's inequality, we then get $$\EE \sup_{\theta \in \Theta} | Z_{\theta} | \le \gamma_* + \delta + \tau_* \biggl [1+ { L_* \over 2} \biggr ] + \sqrt 3 \tau_* \biggl [ \sqrt {\log 2} + {\sqrt 3 L_* \over 2} \log 2 \biggr ] . $$ \end{corollary} \begin{example} \label{Talagrand.example} In \cite{talagrand2005generic}, the size $|G_s|$ of generation $s$ is fixed to be $$| G_s |= 2^{2^{2s}} , \ s=0 , \ldots , S . $$ In that case, $$\log (1+ |G_s | ) \le (2^{2s} +1 ) \log 2 \le 2^{2s+1} \le 2^{2(s+1)}. $$ Hence $$\gamma_* \le 2 \gamma_0 , $$ where $$ \gamma_0 := \max_{k \in G_S} \gamma_0 (k) , $$ and for $k \in G_S$, $$ \gamma_{0} (k) := \gamma_{1,0} (k) + { \gamma_{2,0} (k) \over 2} , $$ and $$\gamma_{1,0} (k) := \sum_{s=0}^S \| W_s (k) \|_{\Psi_{L_s}} 2^s ,\ \gamma_{2,0} (k) := \sum_{s=0}^S \| W_s(k) \|_{\Psi_{L_s}} L_s 2^{2s} . $$ Furthermore, since $1+s \le 2^{2s} $ for all $s \ge 0$, $$\tau_* \le \gamma_{1,0} := \max_{k \in G_S} \gamma_{1,0} (k) , $$ and $$\tau_* L_* \le \gamma_{2,0} := \max_{k \in G_S} \gamma_{2,0} (k) . $$ Hence, $$ \gamma_* + \tau_* \biggl [1+ {L_* \over 2}\biggr ] \le 3 \biggl [ \gamma_{1,0} + {\gamma_{2,0} \over 2} \biggr ] , $$ and $$\sqrt 3\tau_* \biggl [ \sqrt {\log 2} + {\sqrt 3 L_* \over 2} \log 2 \biggr ] \le \sqrt {3 \log 2} \ \gamma_{1,0} + { 3 \log 2 \over 2} \gamma_{2,0} . $$ It follows from Corollary \ref{deviation-Psi-sup.corollary} that $$\EE \sup_{\theta \in \Theta} | Z_{\theta} | \le (3+ \sqrt {3\log 2} ) \gamma_{1,0} + {3+ 3 \log 2 \over 2} \gamma_{2,0} .$$ Thus, we arrive at a special case of Theorem 1.2.7 in \cite{talagrand2005generic}. The latter book does not treat deviation inequalities. \end{example} When using a $(\delta, \tau , {\cal L} )$ finite tree chain, one takes $\| W_s (k) \|_{\Psi_{L_s}} \le \tau 2^{-s} $ for all $s$ and $k \in G_s$.
In that case, the constants $\tau_*$ and $L_*$ in the bounds given in Corollary \ref{deviation-Psi-sup.corollary} only depend on the scale parameter $\tau$ and on the constants ${\cal L} = \{ L_s \}_{s=0}^S $. This is detailed in the next theorem. \begin{theorem} \label{Sup-Probability.theorem} Let the conditions of Theorem \ref{Tree.theorem} be met, and define $$\gamma := \tau \sum_{s=0}^S2^{-s} \biggl [ \sqrt {\log (1+ |G_s| ) } + {L_s \over 2} \log (1+ |G_s| ) \biggr ] , $$ and $$ L:= \sum_{s=0}^S { 2^{-s} L_s (1+s) \over 4}. $$ Then for all $t>0$ $$\PP \biggl ( \sup_{\theta \in \Theta } |Z_{\theta} | \ge \gamma + \delta +4\tau \biggl [ 1 + {L \over 2} \biggr ] + 4 \tau \biggl [ \sqrt {t} + {Lt \over 2} \biggr ]\biggr ) \le 2 \exp[-t] . $$ \end{theorem} {\bf Proof of Theorem \ref{Sup-Probability.theorem}.} This follows from Theorem \ref{generic}, where one takes $$\| W_s (k) \|_{\Psi_{L_s}} \le \tau 2^{-s} . $$ We have $$\tau_* /\tau \le \sum_{s=0}^S 2^{-s} \sqrt {(1+s)} =2 \sum_{s=1}^{S+1} 2^{-s} \sqrt {s} \le 2 \int_0^{\infty} 2^{-x} \sqrt x dx = {\sqrt \pi \over (\log 2)^{3/2} } \le 4 . $$ Moreover, $$L_* \tau_* / \tau \le \sum_{s=0}^S 2^{-s} L_s (1+s) = 4L . $$ \hfill $\sqcup \mkern -12mu \sqcap$ \section{Application to empirical processes} \label{adaptivetruncationsection} Let ${\cal X}$ be some measurable space, and consider independent ${\cal X}$-valued random variables $X_1 , \ldots , X_n$. Let ${\cal G}$ be a collection of real-valued functions on ${\cal X}$. Write $$P_n g := {1 \over n} \sum_{i=1}^n g(X_i), \ Pg := {1 \over n} \sum_{i=1}^n \EE g(X_i), $$ and $$\| g \|^2 := {1 \over n} \sum_{i=1}^n \EE g^2 (X_i) . $$ We assume the normalization $$\sup_{g \in {\cal G} } \| g \| \le 1 . $$ We study the supremum of the empirical process $ \{ \nu _n (g) : \ g \in {\cal G} \} $, where $ \nu_n (g) := \sqrt n (P_n - P)g $. We recall the deviation inequality of \cite{Massart:00a}, which refines the constants in \cite{talagrand1996new}.
\begin{theorem}\label{massart.theorem} (\cite{Massart:00a}) Suppose that for a constant $K$ \begin{equation}\label{supnorm.condition} \sup_{g \in {\cal G} } \sup_{x \in {\cal X} } | g (x) | \le K . \end{equation} Then for all $\epsilon>0$ and all $t>0$, it holds that \begin{equation}\label{massart} \PP \left ( \sup_{g \in {\cal G}} | \nu_n (g) | \ge (1+ \epsilon) \EE \sup_{g \in {\cal G}} | \nu_n (g) |+ \sqrt{2 \kappa t} + \kappa (\epsilon) K t /\sqrt {n} \right ) \le \exp[-t] , \end{equation} where $\kappa $ and $\kappa (\epsilon)$ can be taken equal to $\kappa=4$ and $\kappa(\epsilon)= 2.5+ 32/\epsilon$. \end{theorem} For the i.i.d.\ case, \cite{Bousquet:02} obtained constants remarkably close to those for the case where ${\cal G}$ is a singleton. In fact, \cite{Massart:00a} and \cite{Bousquet:02} and others have derived {\it concentration} inequalities which, in addition to upper bounds, show similar lower bounds for the supremum of the empirical process. This is complemented in \cite{Lederer11} by moment concentration inequalities assuming only moment conditions on the envelope $\Gamma ( \cdot ) := \sup_{g \in {\cal G}} |g(\cdot ) |$, instead of the boundedness assumption (\ref{supnorm.condition}). In this paper, we provide a deviation inequality in the same spirit as the above Theorem \ref{massart.theorem}, where we replace condition (\ref{supnorm.condition}) by a weaker Bernstein condition (see (\ref{Uniform-Bernstein.condition})), which essentially requires that the $g(X_i)$ have sub-exponential tails, and where we also present a deviation result in Bernstein-Orlicz norm. These deviation results in probability and in Bernstein-Orlicz norm are given in Theorem \ref{deviation-empirical-process.theorem}. We have not tried to optimize the constants.
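As an illustrative aside (ours, not part of the paper's argument), the Bernstein condition (\ref{Uniform-Bernstein.condition}) can be checked numerically for a concrete unbounded example: for an exponential variable with rate $\sqrt 2$ one has $\EE |X|^m = m!\, 2^{-m/2}$, so the condition holds with $K=1$.

```python
import math

# Illustrative check (not from the paper): the Bernstein condition
#     sup_g P|g|^m <= (m!/2) K^{m-2},   m = 2, 3, ...,
# admits genuinely unbounded g.  For X ~ Exponential with rate sqrt(2),
# E|X|^m = m! * 2^{-m/2}, so the condition holds with K = 1 for all m >= 2;
# the case m = 2 gives E X^2 = 1, matching the normalization ||g|| <= 1.
K = 1.0
for m in range(2, 25):
    moment = math.factorial(m) * 2.0 ** (-m / 2)   # E X^m for rate sqrt(2)
    assert moment <= (math.factorial(m) / 2.0) * K ** (m - 2)
```

Heavier tails fail such a check (log-normal moments, for instance, grow faster than $m!\,K^{m-2}$ for every fixed $K$), which is exactly the sub-exponential tail behavior the condition encodes.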
Moreover, we replace the expectation $\EE \sup_{g \in {\cal G} }| \nu_n (g) |$ in (\ref{massart}) by the upper bound we obtain from chaining arguments\footnote{This upper bound can be shown to be (up to constants) tight in certain examples. The upper bound following from generic chaining is modulo constants tight for the general sub-Gaussian case.}. Deviation inequalities for the sub-exponential case can be found in the literature (see e.g.\ \cite{viens2007supremum}), but these do not cover the more refined interpolation of sub-Gaussian and sub-exponential tail behavior. The above cited work also contains lower bounds for suprema, thus completing the results to concentration inequalities. Now our first aim is to show that entropy with bracketing conditions allow one to construct a finite tree chain. We recall here the definition of a bracketing set and entropy with bracketing (see \cite{Blum:55}, or see \cite{vanderVaart:96}, \cite{vandeGeer:00} and their references). \begin{definition} Let $s>0$ be arbitrary. A $2^{-s}$-bracketing set for $\{ {\cal G}, \| \cdot \| \} $ is a finite collection of functions $\{ [\tilde g_j^L, \tilde g_j^U ] \}_{j=1}^{\tilde N_s} $ satisfying $\| \tilde g_j^U - \tilde g_j^L \| \le 2^{-s} $ for all $j$, and such that for each $g \in {\cal G}$ there is a $j \in \{ 1 , \ldots , \tilde N_s \} $ such that $\tilde g_j^L \le g \le \tilde g_j^U $. If no such finite collection exists, we write $\tilde N_s = \infty$. \end{definition} We also introduce a generalized bracketing set, in the spirit of \cite{vandeGeer:00}. \begin{definition} Let $K>0$ be a fixed constant. A generalized bracketing set for $ {\cal G} $ is a finite collection of functions $\{ [\tilde g_j^L,\tilde g_j^U ] \}_{j=1}^{\tilde N_0} $ satisfying for all $j$ $$P| \tilde g_j^U - \tilde g_j^L |^m \le {m!
\over 2} (2K)^{m-2} , \ m=2,3, \ldots , $$ and such that for each $g \in {\cal G}$ there is a $j \in \{ 1 , \ldots , { \tilde N}_0 \} $ such that $\tilde g_j^L \le g \le \tilde g_j^U $. Write $\tilde N_0= \infty$ if no such finite collection exists. \end{definition} A special case is where the envelope function $\Gamma := \sup_{g \in {\cal G}} | g| $ satisfies the Bernstein condition $$P\Gamma^m \le {m! \over 2} (2K)^{m-2} , \ m=2,3, \ldots .$$ Then one can take $[ - \Gamma, \Gamma]$ as generalized bracketing set, consisting of only one element. In what follows, for each $s \in \mathbb{N}$ we let $\tilde N_s$ be the cardinality of a minimal $2^{-s}$-bracketing set for ${\cal G}$. The $2^{-s} $-entropy with bracketing of ${\cal G}$ is $$\tilde H_s := \log (1+ \tilde N_s) , \ s \in \mathbb{N}. $$ Moreover, $\tilde N_0 $ is the cardinality of a minimal generalized bracketing set, and we let $${ \tilde H}_0 := \log (1+ { \tilde N}_0). $$ Finally, we write \begin{equation} \label{product-generations} N_s := \prod_{k=0}^s \tilde N_k , \ H_s := \log (1+ N_s ) , \ s \in \mathbb{N}_0 . \end{equation} The following theorem uses arguments of \cite{Ossiander:87}, and is comparable to Theorem 2.7.11 in \cite{talagrand2005generic} (who adapts the technique of \cite{Ossiander:87}). However, we do not use generic chaining here. On the other hand, our results lead to the more involved deviation inequalities as given in Theorem \ref{deviation-empirical-process.theorem}. \begin{theorem} \label{tree-bracketing.theorem} Suppose that for some constant $K \ge 1$, one has the Bernstein condition \begin{equation}\label{Uniform-Bernstein.condition} \sup_{g \in {\cal G}} P| g|^m \le {m! \over 2} K^{m-2} , \ m=2,3, \ldots . \end{equation} Let $S$ be some integer, $\tau:= 3 \sqrt {6}$ and $\delta: = 4 \sqrt n \sum_{s=1}^{S} 2^{-{2s} } / K_{s-1} +\sqrt n 2^{-S} $, where $\{K_{s-1} \}_{s=1}^S $ is an arbitrary decreasing sequence of positive constants (called truncation levels).
Suppose that $\tilde N_s < \infty$ for all $s=0, \ldots , S$. Then there is a $(\delta , \tau, {\cal L} )$ finite tree chain for $\{ \nu_n (g) \}$, with $|G_s | \le N_s$, $s=0, \ldots , S$, and with $$L_0 = { 4 \sqrt 6 K \over \sqrt n} , \ L_s = { 2 \sqrt 6 \ 2^s K_{s-1} \over 3 \sqrt {n} } , \ s=1 , \ldots , S . $$ \end{theorem} As a consequence, we can derive a bound for the expectation of the supremum of the empirical process. \begin{theorem} \label{expectation-empirical-process.theorem} Assume the Bernstein condition (\ref{Uniform-Bernstein.condition}). Let $$\bar {\bf E}_S := 2^{-S} \sqrt {n} + 14 \sum_{s=0}^S 2^{-s} \sqrt {6 \tilde H_s } + 6^2 K { \tilde H_0 \over \sqrt {n} } . $$ Then one has $$\EE \biggl ( \sup_{g \in {\cal G} } |\nu_n (g) | \biggr ) \le \min_{S } \bar {\bf E}_S . $$ \end{theorem} \begin{remark} \label{S=0.remark} When $\Theta $ is finite, say $|\Theta |=p$, one may choose $S=\delta=0$, and $\tilde H_0 \le \log (1+ p)$. Theorem \ref{expectation-empirical-process.theorem} then yields - up to constants - the same bound as in (\ref{S=0}). \end{remark} Finally, we present the main result of this section. We give deviation results in probability and in Bernstein-Orlicz norm, where the dependency on the complexity of ${\cal G}$ is only in the shift. \begin{theorem} \label{deviation-empirical-process.theorem} Assume the Bernstein condition (\ref{Uniform-Bernstein.condition}). Define as in Theorem \ref{expectation-empirical-process.theorem}, $$\bar {\bf E}_S := 2^{-S} \sqrt {n} + 14 \sum_{s=0}^S 2^{-s} \sqrt {6 \tilde H_s } + 6^2 K { \tilde H_0 \over \sqrt {n} } .$$ Let $$ \tilde L:= { \sqrt 6 K \over 2 \sqrt n } . $$ Then for all $t>0$, $$\PP \biggl ( \sup_{g \in {\cal G}} | \nu_n (g) | \ge \min_S \bar { \bf E}_S + 6^2 K / \sqrt n + 24 \sqrt 6 + 24 \sqrt 6 \biggl [ \sqrt t + { \tilde L t \over 2} \biggr ] \biggr ) \le 2 \exp[-t] .
$$ Moreover, $$\biggl \| \biggl ( \biggl [ \sup_{g \in {\cal G}} | \nu_n (g)| \biggr ] - \biggl [ \min_{S} \bar {\bf E}_S + 6^2 K /\sqrt n + 24 \sqrt 6 \biggr ] \biggr)_+ \biggr \|_{\Psi_{\sqrt 3 \tilde L} } \le 72 \sqrt 2 .$$ \end{theorem} Theorem \ref{deviation-empirical-process.theorem} can be compared to results in \cite{Adamczak:08}. One sees that our bound replaces the sub-exponential Orlicz-norm $$\biggl \| \max_{1 \le i \le n} \sup_{g \in {\cal G} } | g(X_i) |\biggr \|_{\Psi} , \ \Psi (z) = \exp (z) -1, z \ge 0 , $$ occurring in \cite{Adamczak:08} by a constant proportional to $K$, which means we generally gain a $\log n $-term. On the other hand, the shift in \cite{Adamczak:08} is up to a factor $(1+\epsilon)$ equal to the expectation $$\EE \sup_{g \in {\cal G} } | \nu_n (g) | , $$ as in \cite{Massart:00a} (whose result is cited here in Theorem \ref{massart.theorem}). \begin{remark} Again, when $|\Theta |= p$ is finite, one can choose $S=\delta=0$, and $\tilde H_0 \le \log (1+p)$, as in Remark \ref{S=0.remark}. Theorem \ref{deviation-empirical-process.theorem} then reduces to the usual union bound type deviation inequalities for the maximum of finitely many random variables (that is, the results are - up to constants - a special case of Lemmas \ref{deviation-Prob.lemma} and \ref{deviation-Psi.lemma}). \end{remark} \section{Proofs for Section \ref{adaptivetruncationsection}} \label{proofs.section} \subsection{Proof of Theorem \ref{tree-bracketing.theorem}} This follows from similar arguments as in \cite{vandeGeer:00}, who in turn uses ideas of \cite{Ossiander:87}. Let for $s=1, \ldots , S$, $$\{ [ \tilde g_j^{s,L} , \tilde g_j^{s,U} ]\}_{j=1}^{\tilde N_s} $$ be a minimal $2^{-s }$-bracketing set for $\{ {\cal G} , \| \cdot \| \} $. Let $\{ [\tilde g_j^{0,L}, \tilde g_j^{0,U} ] \}_{j=1}^{{\tilde N}_0} $ be a generalized bracketing set.
Consider some $g\in {\cal G}$, and let $ [\tilde g^{0,L} , \tilde g^{0,U} ] $ be the corresponding generalized bracket, and for all $s\in \{1 , \ldots , S \} $, let the corresponding brackets be $[\tilde g^{s,L} , \tilde g^{s,U} ]$. Thus $$\tilde g^{s,L} \le g \le \tilde g^{s,U} , \ s=0, \ldots , S, $$ and $$P | \tilde g^{0,U} - \tilde g^{0,L} |^m \le { m! \over 2} (2K)^{m-2}, \ m=2,3, \ldots, $$ $$ P | \tilde g^{s,U} - \tilde g^{s,L} |^2\le 2^{-2s} , \ s=1, \ldots , S . $$ If for some $s$ there are several brackets in $\{ [ \tilde g_j^{s,L}, \tilde g_j^{s,U} ] \}_{j=1}^{\tilde N_s}$ corresponding to $g$, we choose a fixed but otherwise arbitrary one. Define $$g^{s,L} := \max_{ 0 \le k \le s} \tilde g^{k,L} , \ g^{s,U} := \min_{ 0 \le k \le s} \tilde g^{k,U}.$$ Then $$g^{0,L} \le g^{1,L} \le \cdots \le g^{S,L} \le g \le g^{S,U} \le \cdots \le g^{1,U} \le g^{0,U} , $$ and moreover $ g^{s,U} - g^{s,L} \le \tilde g^{s,U} - \tilde g^{s,L} $. Denote the difference between upper and lower bracket by $$\Delta^s := g^{s,U}-g^{s,L} , \ s=0 , \ldots , S.$$ The differences $\Delta^s$ are decreasing in $s$. Furthermore, $\| \Delta^s \| \le 2^{-s} $, for all $s\in \{ 0,1 , \ldots , S \} $. Let ${\cal N}_s := | \{ [ g_j^{s,L}, g_j^{s,U}] \} | $, $s=0, \ldots , S$. It is easy to see that $${\cal N}_s \le \prod_{k=0}^s \tilde N_k =: N_s, \ s=0, \ldots , S. $$ We define a tree with end nodes $\{ 1 , \ldots , {\cal N}_S\}$. At each end node $j$ sits a pair of brackets $[g_j^{S,L} , g_j^{S,U} ]$. For each $s=1 , \ldots , S$, we define the parents of the nodes at generation $s$ as follows. Let $$\tilde V_k^s := \{ l: [g_k^{s-1,L}, g_k^{s-1,U} ] \ {\rm forms \ a} \ 2^{-(s-1)}{\rm -bracket \ for} \ [ g_l^{s,L} , g_l^{s,U} ] \}. $$ Then $\cup_{k=1}^{{\cal N}_{s-1} } \tilde V_k^s = \{ 1 , \ldots , {\cal N}_s\}$, that is, for each bracket $[ g_l^{s,L}, g_l^{s,U} ] $ there is a $k\in \{ 1 , \ldots , {\cal N}_{s-1}\}$ with $ l \in \tilde V_k^s$.
To see this, we note that for each $l$, there is a function $g$ with $g_l^{s,L} \le g \le g_l^{s,U}$, and by the above construction, there is a $k$ with $g_k^{{s-1}, L} \le g_l^{s, L} \le g \le g_l^{s,U} \le g_k^{s-1, U} $. We let $\{ V_k^s \}_{k=1}^{{\cal N}_{s-1}} $ be a disjoint version of $\{ \tilde V_k^s \} $, e.g., the one given by $$V_1^s = \tilde V_1^s , \ V_k^s = \tilde V_k^s \backslash \cup_{l=1}^{k-1} \tilde V_l^s, \ k=1 , \ldots , {\cal N}_{s-1} . $$ We let $${\rm parent} (j_s) = k \ {\rm if} \ j_s \in V_k^s. $$ We now turn to an adaptive truncation device. For each $s=0, \ldots , S-1$, we are given truncation levels $K_s$, which are assumed to be decreasing in $s$. Let $g$ be fixed and $$g^{0,L} \le g^{1,L} \le \cdots \le g^{S,L} \le g \le g^{S,U} \le \cdots \le g^{1,U} \le g^{0,U} . $$ Define $$\Delta^s := g^{s,U} - g^{s,L}, \ y_s := {\rm l} \{ \Delta^s \ge K_s \} . $$ Then $$K_s {\rm l} \{ y_s=1\} \le \Delta^s {\rm l} \{ y_s=1 \} , \ s=0 , \ldots , S-1 , $$ which implies (for $s=0, \ldots , S-1$) $$P\Delta^s {\rm l} \{ y_s=1 \} \le { P| \Delta^s|^2 \over K_s } \le { 2^{-2s} \over K_s} . $$ We can write any $g \in {\cal G}$ as \begin{eqnarray}\label{telescope.equation} & &g= \sum_{s=1}^S ( g - g^{s,L} ) {\rm l} \{ y_s =1 , y_{s-1} = \ldots = y_0 =0 \} \\ &&+ \sum_{s=1}^S ( g^{s,L} - g^{s-1, L} ) {\rm l} \{ y_{s-1}= \ldots = y_0 =0 \} + g^{0,L} + ( g- g^{0,L} ) {\rm l} \{ y_0=1 \} . \nonumber \end{eqnarray} Let $$W_{j_0 } := |\nu_n ( g^{0,L}) |+ |\nu_n (\Delta^0 ) | , $$ $$ W_{j_s }:= |\nu_n (\Delta^s {\rm l} \{ y_{s-1}=0 \} ) |+ |\nu_n ((g^{s,L} - g^{s-1, L} ){\rm l}\{ y_{s-1}=0 \} ) | , \ s=1 , \ldots , S . $$ Then it follows from (\ref{telescope.equation}) that $$| \nu_n (g) | \le \sum_{s=0}^S | W_{j_s} | + \sqrt {n} \sum_{s=0}^S P \Delta^s {\rm l} \{ y_s=1 \} \le \sum_{s=0}^S | W_{j_s} | + \delta, $$ for $$\delta = \sqrt {n} \sum_{s=1}^{S} { 4 \ 2^{-2s} \over K_{s-1} }+ \sqrt {n} 2^{-S} .
$$ Note now that $$( P |g^{0,L} |^m)^{1/m} \le ( P |g |^m)^{1/m} + ( P |\Delta^0 |^m)^{1/m} $$ $$ \le ( { m! \over 2} K^{m-2} )^{1/m} + ( { m! \over 2} (2K)^{m-2} )^{1/m} \le 2 ( { m! \over 2} (2K)^{m-2} )^{1/m} , $$ so $$ P |g^{0,L} |^m \le {m!\over 2} (4K)^{m-2} 2^2 . $$ By Corollary \ref{Bernstein-Psi.corollary}, $$ \| \nu_n ( g^{0,L} ) \|_{\Psi_{L_0} } \le 2 \sqrt 6 ,$$ for $$L_0 = { \sqrt 6 } (8K /2)/\sqrt n = 4 \sqrt 6 K / \sqrt n , $$ where we multiplied by a factor 2 because the Bernstein condition for the centered functions holds with the above $4K$ replaced by $8K$. Moreover, $L_0 = {\sqrt 6 } (4K)/\sqrt n$, so again by Corollary \ref{Bernstein-Psi.corollary}, $$\| \nu_n ( \Delta^0 ) \|_{\Psi_{L_0} } \le \sqrt 6 . $$ The triangle inequality gives $$\biggl \| | \nu_n (g^{0,L} ) | + |\nu_n (\Delta^0) | \biggr \|_{\Psi_{L_0} } \le 3 \sqrt 6 =: \tau . $$ Moreover, for $s=1 , \ldots , S$, $$| ( g^{s, L} - g^{s-1, L}) {\rm l} \{ y_{s-1}=0 \} | \le \Delta^{s-1} \le K_{s-1} , \ \| \Delta^{s-1} \| \le 2^{-s+1} , $$ and $$ \Delta^s {\rm l}\{ y_{s-1}=0 \} \le \Delta^{s-1} \le K_{s-1} , \ \| \Delta^s \| \le 2^{-s}. $$ So, again by Corollary \ref{Bernstein-Psi.corollary}, we may take $$L_s := \sqrt {6} \ 2^s \max( {2 \over 3} K_{s-1} /2, {2 \over 3} K_{s-1} )/\sqrt n = {2 \sqrt {6}K_{s-1} \over 3 \sqrt n } , \ s=1 , \ldots , S . $$ Then, again by the triangle inequality, $$\biggl \| | \nu_n (( g^{s, L} - g^{s-1, L}) {\rm l} \{ y_{s-1}=0 \} ) |+ |\nu_n (\Delta^s {\rm l} \{ y_{s-1}=0 \}) | \biggr \|_{\Psi_{L_s}} \le 3 \sqrt 6 \ 2^{-s} . $$ \hfill $\sqcup \mkern -12mu \sqcap$ \subsection{Three technical lemmas} To apply the result of Theorem \ref{tree-bracketing.theorem}, we need three technical lemmas. First we need a bound for $N_s:= \prod_{k=0}^s \tilde N_k$, or actually for $H_s := \log (1+ N_s)$. \begin{lemma} \label{5decreasingcoveringslemma} Let $s \in \{ 0, \ldots , S\}$, $H_s := \log (1+ \prod_{k=0}^s \tilde N_k ) $ and $ \tilde H_s := \log (1+ \tilde N_s)$.
It holds that $$\sum_{s=1}^S 2^{-s} \sqrt {H_s} \le \sqrt {\tilde H_0} + 2 \sum_{s=1}^S 2^{-s} \sqrt {\tilde H_s} . $$ \end{lemma} {\bf Proof of Lemma \ref{5decreasingcoveringslemma}.} We have $$\sqrt {H_s} \le \sum_{k=0}^s \sqrt {\tilde H_k } , $$ so $$\sum_{s=1}^S 2^{-s} \sqrt {H_s} \le \sum_{s=1}^S2^{-s} \sqrt { \tilde H_0 } +\sum_{s=1}^S 2^{-s} \sum_{k=1}^s \sqrt {\tilde H_k } $$ $$\le \sqrt {\tilde H_0} + \sum_{k=1}^S \sum_{s=k}^S 2^{-s} \sqrt {\tilde H_k } \le \sqrt {\tilde H_0} +2 \sum_{k=1}^S 2^{-k} \sqrt {\tilde H_k } .$$ \hfill $\sqcup \mkern -12mu \sqcap$ The next lemma inserts a special choice for the truncation levels $\{ K_s \}$, and then establishes a bound for the expectation of the supremum of the empirical process, derived from the one of Theorem \ref{Tree.theorem}. \begin{lemma}\label{1ChoiceK.lemma} Let $S$ be some integer and $\epsilon \ge 0$ be an arbitrary constant. Take $$K_{s-1} := 2^{-s} \sqrt n \biggl ( { \sqrt 6 \over 3 \sqrt { \log (1+ N_s) }} \wedge{ 1 \over \epsilon} \biggr ) , \ s=1 , \ldots , S,$$ where $u\wedge v$ denotes the minimum of $u$ and $v$. Define as in Theorem \ref{tree-bracketing.theorem}, $$L_0 := {4 \sqrt 6 K \over \sqrt n} , \ L_s := { 2 \sqrt 6 \ 2^s K_{s-1} \over 3 \sqrt {n} } , \ s=1 , \ldots , S , $$ $$\delta: = 4 \sqrt n \sum_{s=1}^{S} 2^{-{2s} } / K_{s-1} +\sqrt n 2^{-S} , $$ and $\tau:= 3\sqrt 6$. Let $${\bf E}_S := \tau \sum_{s=0}^S 2^{-s} \biggl [ \sqrt {\log (1+ N_s)} + {L_s \over 2} \log (1+ N_s ) \biggr ] + \delta . $$ Then $${\bf E}_S \le \bar {\bf E}_S +4\epsilon, $$ where $$\bar {\bf E}_S := 2^{-S} \sqrt {n} + 14 \sum_{s=0}^S 2^{-s} \sqrt {6 \tilde H_s } + 6^2 K { \tilde H_0 \over \sqrt {n} } .
$$ \end{lemma} {\bf Proof of Lemma \ref{1ChoiceK.lemma}.} We have $${\bf E}_S = \sum_{s=1}^{S} {4 \ 2^{-2s} \sqrt {n} \over K_{s-1} } + 2^{-S} \sqrt {n} + \tau \sqrt {\log (1+ N_0)} + 2 \sqrt 6\ \tau K { \log (1+ N_0) \over \sqrt {n} } $$ $$ + \tau \sum_{s=1}^S 2^{-s} \biggl [\sqrt {\log (1+ N_s)} + {1\over 3} \sqrt 6 \ 2^s K_{s-1} { \log (1+ N_s ) \over \sqrt {n}} \biggr ] $$ $$= \sum_{s=1}^{S} {4\ 2^{-2s} \sqrt {n} \over K_{s-1} } + 2^{-S} \sqrt {n} + 3 \sqrt {6 \log (1+ N_0)} + 6^2 K { \log (1+ N_0) \over \sqrt {n} } $$ $$ + 3 \sum_{s=1}^S 2^{-s}\sqrt {6 \log (1+ N_s)} + \sum_{s=1}^S 6 K_{s-1} { \log (1+ N_s ) \over \sqrt {n}} =I+II+III, $$ where $$I:= 2^{-S} \sqrt n + 3 \sqrt {6 \log (1+ N_0)} + 6^2 K { \log (1+ N_0) \over \sqrt {n} }, $$ $$II:= 3 \sum_{s=1}^S 2^{-s}\sqrt {6 \log (1+ N_s)}, $$ and $$III:= \sum_{s=1}^{S} {4 \ 2^{-2s} \sqrt {n} \over K_{s-1} } + \sum_{s=1}^S 6 K_{s-1} { \log (1+ N_s ) \over \sqrt {n}}.$$ Insert $$K_{s-1}={1 \over 3} \sqrt 6 \ 2^{-s} \sqrt { n \over \log (1+ N_s) } \wedge 2^{-s} { \sqrt n \over \epsilon} , \ s=1 , \ldots , S.$$ Note that $K_s$ is decreasing in $s$. Moreover $$ {4 \ 2^{-2s} \sqrt {n} \over K_{s-1} } + 6 K_{s-1} { \log (1+ N_s ) \over \sqrt {n}}\le 4 \sqrt {6}\ 2^{-s} \sqrt { \log (1+ N_s ) } + 4\ 2^{-s} \epsilon . $$ We find $$III \le 4 \sqrt {6}\sum_{s=1}^S 2^{-s} \sqrt { \log (1+ N_s ) } + 4\epsilon,$$ so that $$II+III\le 7 \sqrt {6}\sum_{s=1}^S 2^{-s} \sqrt { \log (1+ N_s ) } + 4\epsilon.$$ Now apply Lemma \ref{5decreasingcoveringslemma}.
This gives $$ II + III \le 7 \sqrt 6 \sqrt { \log (1+ \tilde N_0) }+14 \sqrt 6 \sum_{s=1}^S 2^{-s} \sqrt { \log (1+ \tilde N_s ) } + 4\epsilon.$$ Hence, $$ I+II+III\le 2^{-S} \sqrt n + 6^2K { \log (1+ \tilde N_0) \over \sqrt {n} } +10 \sqrt 6 \sqrt {\log (1+ \tilde N_0) } $$ $$+ 14 \sqrt 6 \sum_{s=1}^S 2^{-s} \sqrt { \log (1+ \tilde N_s)} + 4\epsilon$$ $$ \le 2^{-S} \sqrt {n} + 14 \sqrt 6 \sum_{s=0}^S 2^{-s} \sqrt { \log (1+ \tilde N_s)} + 6^2K { \log (1+ \tilde N_0) \over \sqrt {n} } +4\epsilon. $$ \hfill $ \sqcup \mkern -12mu \sqcap$ We now derive some bounds which will be used for obtaining the deviation inequalities in probability and in Bernstein-Orlicz norm of Theorem \ref{deviation-empirical-process.theorem}. \begin{lemma}\label{2ChoiceK.lemma} Let the constants $\{K_{s-1} \}_{s=1}^S$, $\{ L_s\}_{s=0}^S$, and $\tau$ be as in Lemma \ref{1ChoiceK.lemma}. Let $$L:= \sum_{s=0}^S 2^{-s} { L_s (1+s) \over 4}. $$ Then $$L \le {\sqrt 6 K \over \sqrt n} + \biggl ( 2 \wedge {\sqrt 6 \over \epsilon} \biggr ) , $$ and $$4 \tau(1+L/2) \le 6^2 K/\sqrt n + 24 \sqrt 6 .$$ \end{lemma} {\bf Proof of Lemma \ref{2ChoiceK.lemma}.} We have $$L= {L_0 \over 4} + \sum_{s=1}^S {2^{-s} L_s (1+s) \over 4 } $$ $$ = { \sqrt 6 K \over \sqrt n} + \sum_{s=1}^S { (1+s) K_{s-1} \over \sqrt {6n}} .$$ But $$\sum_{s=1}^S 2^{-s} (1+s) \le 2 \int_0^{\infty} 2^{-x} x dx = {2 \over (\log 2)^2 } , $$ and since $H_s =\log (1+ N_s ) \ge \log(2)$, $$K_{s-1} \le 2^{-s} \sqrt n \biggl ( {\sqrt 6 \over 3 ( \log (2))^{1/2} } \wedge {1 \over \epsilon} \biggr ) . $$ Hence, $$L \le { \sqrt 6 K \over \sqrt n} + { 2 \over \sqrt 6 (\log 2)^2} \biggl ( {\sqrt 6 \over 3 ( \log (2))^{1/2} } \wedge {1 \over \epsilon} \biggr ) $$ $$= { \sqrt 6 K \over \sqrt n} + \biggl ( { 2 \over 3 (\log 2)^{5/2}} \wedge { 2 \over 6 (\log 2)^2} {\sqrt 6 \over \epsilon} \biggr ) $$ $$ \le {\sqrt 6 K\over \sqrt n} + \biggl ( 2 \wedge {\sqrt 6 \over \epsilon} \biggr ) .$$ As $\tau=3 \sqrt 6$, we get $$4 \tau(1+L/2) \le 6^2 K/\sqrt n + 24 \sqrt 6 .
$$ \hfill $\sqcup \mkern -12mu \sqcap$ \section{Proof of Theorems \ref{expectation-empirical-process.theorem} and \ref{deviation-empirical-process.theorem}} \label{lastsection} {\bf Proof of Theorem \ref{expectation-empirical-process.theorem}.} This follows from Theorem \ref{Tree.theorem}, Theorem \ref{tree-bracketing.theorem}, and Lemma \ref{1ChoiceK.lemma} with $\epsilon=0$. \hfill $\sqcup \mkern -12mu \sqcap$ {\bf Proof of Theorem \ref{deviation-empirical-process.theorem}.} Let $t>0$ be arbitrary. Note that $\bar {\bf E}_S$ is as in Lemma \ref{1ChoiceK.lemma}. Apply the bounds of Lemma \ref{2ChoiceK.lemma} with $\epsilon=3 \sqrt {t}$ for the constant $L$ defined there. Then $$\tau(4+2L) +4\epsilon + 4\tau \biggl [ \sqrt t+ {L t \over 2} \biggr ] \le 6^2 K/\sqrt n + 24 \sqrt 6 + 4 \epsilon + 12 \sqrt {6t} + 2\tau { \sqrt 6 K t \over \sqrt n} + 2\tau {\sqrt 6 t \over \epsilon} $$ $$= 6^2 K /\sqrt n + 6^2 Kt/\sqrt n + 24 \sqrt 6+ 12 \sqrt{6t} +24 \sqrt t $$ $$\le 6^2 K / \sqrt n + 24 \sqrt 6+ 24 \sqrt 6 \biggl [ \sqrt t + { \tilde L t \over 2} \biggr ] , $$ where $$ \tilde L:= { \sqrt 6 K \over 2 \sqrt n } . $$ Then by Theorem \ref{Sup-Probability.theorem}, $$\PP \biggl ( \sup_{g \in {\cal G}} | \nu_n (g) | \ge \min_S \bar {\bf E}_S + 6^2 K / \sqrt n +24 \sqrt 6+ 24 \sqrt 6 \biggl [ \sqrt t + { \tilde L t \over 2} \biggr ] \biggr ) \le 2 \exp[-t] , $$ and by Lemma \ref{Prob-Psi.lemma}, $$\biggl \| \biggl ( \biggl [ \sup_{g \in {\cal G}} | \nu_n (g)| \biggr ] - \biggl [ \min_{S} \bar {\bf E}_S + 6^2 K /\sqrt n + 24 \sqrt 6 \biggr ] \biggr)_+ \biggr \|_{\Psi_{\sqrt 3 \tilde L} } \le 72 \sqrt 2 .$$ \hfill $\sqcup \mkern -12mu \sqcap$ \bibliographystyle{plainnat}
https://arxiv.org/abs/0711.1400
Polynomials associated with Partitions: Their Asymptotics and Zeros
Let $p_n$ be the number of partitions of an integer $n$. For each of the partition statistics of counting their parts, ranks, or cranks, there is a natural family of integer polynomials. We investigate their asymptotics and the limiting behavior of their zeros as sets and densities.
\section{Introduction}\label{section:introduction} The purpose of this paper is to survey several natural polynomial families associated with integer partitions, focusing on their asymptotics and the limiting behavior of their zeros. Our principal families are \begin{enumerate} \item Taylor polynomials of the analytic function $P(x)=\prod_{n\geq 1} (1-x^n)^{-1}$, the generating function of the partition numbers (Section \ref{section:Taylor}). \item Polynomials $F_n(x)$ associated with counting partitions in parts (Section \ref{section:stanley}). \item Polynomials associated with the rank or crank of a partition (Section \ref{section:crank}). \end{enumerate} We introduce several definitions used throughout the paper. \begin{definition}\label{def:zero_attractor} Let ${\mathcal Z}(q_n)$ denote the finite set of zeros of the polynomial $q_n$. Then the {\sl zero attractor} $\mathcal A$ of the polynomial sequence $\{q_n\}$ whose degrees go to $\infty$ is the limit of ${\mathcal Z}(q_n)$ in the Hausdorff metric $\Delta$ on the non-empty compact subsets $\mathcal K$ of ${\mathbb C}$. \end{definition} We recall the standard: \begin{definition}\label{def:asymptotic_zero} The {\sl asymptotic zero distribution} for a sequence $\{q_n\}$ of polynomials whose degrees go to $\infty$ is the weak$^{*}$-limit of the normalized counting measures of their zeros $\frac{1}{\deg(q_n)} \sum \{ \delta_z : z \in {\mathcal Z}(q_n)\}$. \end{definition} We single out a useful compromise short of obtaining the full asymptotic zero distribution. \begin{definition}\label{def:argument} We say that the arguments of the zeros of a polynomial family $\{q_n(x)\}$ whose degrees go to $\infty$ are {\sl uniformly distributed on the unit circle as $n\to\infty$} if the normalized counting measures $\frac{1}{\deg(q_n)} \sum \{ \delta_{\arg z} : z \in {\mathcal Z}(q_n)\}$ converge in the weak$^{*}$-topology to normalized Lebesgue measure on the unit circle.
\end{definition} The following result of Erd\"os and Tur\'an (\cite{erdos}, Theorem 1) will be used repeatedly throughout the paper to determine that the arguments of zeros are uniformly distributed. Let $q(x)$ be the polynomial $\sum_{k=0}^n a_k x^k$ of degree $n$ with non-zero constant term $a_0 \neq 0$. For $0 \leq \theta_1 < \theta_2 \leq 2 \pi$, \begin{equation}\label{eq:erdos} \left| \# \, \{ z : \arg z \in [\theta_1,\theta_2], q(z) =0 \} - \frac{ \theta_2-\theta_1}{2 \pi} n \right| < 16 \sqrt{ n \ln \left( \frac{ |a_0| + |a_1| + \cdots + |a_n|} { \sqrt{ |a_0 a_n| }} \right) }. \end{equation} \section{Taylor Polynomials of $P(x)$}\label{section:Taylor} Let $p_k$ be the number of partitions of a positive integer $k$ with $p_0=1$ by convention. The ordinary generating function $P(x)$ for $\{p_k\}$ is \begin{equation} P(x)= \prod_{n\geq 1} \frac{1}{1-x^n} = \sum_{k=0}^\infty p_k x^k. \end{equation} A natural choice of polynomials associated with the partitions is simply the Taylor polynomials $s_n(x)$ of $P(x)$: \begin{equation} s_n(x) = \sum_{k=0}^n p_k x^k \end{equation} since $P(x)$ is analytic in the open unit disk $\mathbb D$. The asymptotics of these polynomials $s_n(x)$ depend on the classical result of the asymptotics of the partition numbers $p_n$: \begin{equation}\label{eq:hardy_ramanujan} p_n \left/ \left[ \frac{1}{4n\sqrt{3}} \exp \left( \pi \sqrt{\frac{2n}{3}} \right) \right] \right. \to 1. \end{equation} See either \cite{andrews} or \cite{ayoub}. We first establish the limiting behavior of their zeros. \begin{theorem} (a) The zero attractor of the Taylor polynomials $\{ s_n(x)\}$ is the unit circle. \\ (b) The asymptotic zero density is normalized Lebesgue measure on the unit circle. \end{theorem} \begin{proof} Recall the Enestr\"om-Kakeya Theorem: If the coefficients of the polynomial $q(z)= \sum_{k=0}^n a_k z^k$ satisfy $a_n \geq a_{n-1} \geq \cdots \geq a_0 \geq 0$, then all the zeros of $q(z)$ lie in the closed unit disk (see \cite{marden}, p. 136).
Since the partition numbers are positive and increasing, the zeros of the Taylor polynomials $s_n(x)$ must lie in the closed unit disk $\overline{{\mathbb D}}$. Next let $f(x) = \sum_{k=0}^\infty c_k x^k$ be an analytic function with radius of convergence 1. To state the Jentzsch Theorem \cite{erdos} concerning the zeros of the Taylor polynomials $t_n(x)$ of $f(x)$, recall that $a$ is called a limit point of zeros of $t_n(x)$ if for every $\varepsilon >0$ there are infinitely many indices $n$ so that $t_n(z_n)=0$ for some $z_n$ with $|z_n- a|< \varepsilon$. Then the collection of all limit points of zeros of $t_n(x)$ must contain the unit circle. Since $s_n(x)$ are the Taylor polynomials of the generating function $P(x)$ which is analytic and does not vanish in $\mathbb D$, no limit point of zeros of the polynomials $s_n(x)$ can lie inside $\mathbb D$ since such a limit point must be a zero of $P(x)$. Since the radius of convergence of $P(x)$ is 1, we conclude that the limit points are exactly the unit circle. It follows that the zero attractor is the unit circle since all the zeros of $s_n(x)$ are bounded in modulus by 1. For the polynomials $s_n(x)$, their constant terms are always 1 while their coefficients are all bounded above by $p_n$, so the right-hand side of the Erd\"os-Tur\'an inequality (\ref{eq:erdos}) is dominated by $16 \sqrt{ n \ln( n \sqrt{ p_n})}$. Hence, we have the following limit by (\ref{eq:hardy_ramanujan}): \begin{eqnarray*} \left| \frac{1}{n} \# \{ z : \arg z \in [\theta_1,\theta_2], s_n(z) =0 \} - \frac{ \theta_2-\theta_1}{2 \pi} \right|< 16 \sqrt{ \frac{1}{n} \ln( n \sqrt{ p_n})} \to 0. \end{eqnarray*} A compactness argument then shows that the asymptotic zero distribution is normalized Lebesgue measure on the unit circle. \end{proof} Because of the non-negativity and monotonicity of the coefficients of $s_n(x)$ together with the subexponential growth of $s_n(1)$, both the zero attractor and the asymptotic zero distribution for $s_n(x)$ were quickly obtained.
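Both ingredients of the proof are easy to examine numerically. The following sketch (ours, not part of the argument) computes $p_n$ exactly by Euler's pentagonal-number recurrence, looks at the Hardy-Ramanujan ratio (\ref{eq:hardy_ramanujan}), and confirms that the normalized Erd\"os-Tur\'an bound $16\sqrt{\ln(n\sqrt{p_n})/n}$ shrinks as $n$ grows.

```python
import math

# Numeric companion (ours) to the proof above: exact p_n via Euler's
# pentagonal-number recurrence, the Hardy-Ramanujan ratio from
# (eq:hardy_ramanujan), and the normalized Erdos-Turan bound
# 16*sqrt(ln(n*sqrt(p_n))/n), which tends to 0 since ln p_n = O(sqrt(n)).
def partition_numbers(N):
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 else -1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p = partition_numbers(1000)
assert p[5] == 7 and p[100] == 190569292          # known values

def hardy_ramanujan(n):
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

# the ratio p_n / HR(n) creeps toward 1; the relative error decays slowly
assert abs(p[1000] / hardy_ramanujan(1000) - 1) < abs(p[100] / hardy_ramanujan(100) - 1)

def et_bound(n):
    # normalized Erdos-Turan right-hand side for s_n, as used in the proof
    return 16 * math.sqrt(math.log(n * math.isqrt(p[n])) / n)

assert et_bound(1000) < et_bound(100)             # the bound shrinks with n
```

The bound decays only like $n^{-1/4}$, which is consistent with the slow equidistribution of the arguments one observes for moderate degrees.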
A more complete understanding of these polynomials, though, requires their asymptotics outside the unit disk. In general, it is very useful to have asymptotic expansions for a polynomial family throughout the complex plane. In \cite{boyer_goh_euler}, we obtained such expansions for the Euler and Bernoulli polynomials, while in Section \ref{section:stanley} we describe expansions for another partition polynomial family. Further, we note that the zeros of the Euler and Bernoulli polynomials are not uniformly distributed around the unit circle and that the zero distribution studied in Section \ref{section:stanley} is more subtle than any of these examples. \begin{theorem} Let $\delta >0$ and $0 < \eta < 1/2$. Then \begin{equation*} s_n(x)=\frac{x^{n+1}}{x-1}\frac{e^{a\lambda _{n}} \, \lambda _{n}^{-2}}{4\sqrt{3}} \, \left(1+O_{\delta }(\lambda _{n}^{ -\eta }) \right), \end{equation*} where \begin{equation} a=\pi \sqrt{ 2/3 }, \quad \lambda _{n}= \sqrt{n - 1/24} \label{lamdan} \end{equation} and the constant in the big oh term $O_{\delta }(\lambda _{n}^{-\eta })$ depends only on $\delta $; the estimate holds uniformly for all $x$ with $|x| \geq 1+\delta$. \end{theorem} \begin{proof} For any $0<r<1$, we have \begin{equation*} p_{n}=\frac{1}{2\pi i} \oint_{\left| \zeta \right| =r}\frac{P(\zeta )}{\zeta^{n+1}} \,d\zeta . \end{equation*} Summing the above expression for the partition numbers $p_j$ against $x^j$, we obtain an integral form for the Taylor polynomial: \begin{eqnarray*} s_n(x)=\frac{1}{2\pi i} \oint_{\left| \zeta \right| =r} \frac{P(\zeta )}{\zeta } \left( \sum_{j=0}^{n} \left(\frac{x}{\zeta }\right)^{j} \right)\, d\zeta = \frac{1}{2\pi i} \oint_{\left| \zeta \right| =r} \frac{P(\zeta )}{\zeta -x} \left(1- \left( \frac{x}{\zeta } \right)^{n+1} \right) \,d\zeta . \end{eqnarray*} Since $\left| x\right| \geq 1+\delta$, we find, by using the Cauchy integral theorem, that \begin{equation*} \frac{1}{2\pi i} \oint_{\left| \zeta \right| =r} \frac{P(\zeta )}{\zeta -x} \,d\zeta =0.
\end{equation*} The integral for the Taylor polynomial $s_n(x)$ reduces to \begin{equation} s_n(x)= \frac{-x^{n+1}}{2\pi i} \oint_{\left| \zeta \right| =r} \frac{P(\zeta)}{\zeta -x}\zeta^{-n-1}\,d\zeta. \label{vn} \end{equation} Next we define $I_n$ as the integral: \begin{equation} I_{n}=\frac{1}{2\pi i} \oint_{\left| \zeta \right| =r} \frac{P(\zeta )}{\zeta-x}\zeta^{-n-1}\,d\zeta. \label{in} \end{equation} In particular, we must have \begin{equation} s_n(x)=-x^{n+1}I_{n}. \label{vnin} \end{equation} Thus our goal is to find an asymptotic approximation for $I_{n}$. Our strategy follows very closely that of \cite{ayoub}; consequently, we will adopt the same notation as Ayoub to avoid confusion. Not surprisingly, the method comes from a proof of the asymptotics of the partition numbers originally due to J. V. Uspensky and uses the functional equation of the modular function. Basically, the major contribution to the integral in (\ref{in}) comes from a small neighborhood of the strongest singularity $\zeta =1$ of $P(\zeta )$. We begin with the following well-known functional equation, which is essential: for $\Re(\tau )>0$, \begin{equation} P(e^{-2\pi \tau }) = \psi (\tau )P(e^{-2\pi /\tau }), \label{functional} \end{equation} where \begin{equation*} \psi (\tau )=\sqrt{\tau } \exp \left[\frac{\pi }{12} \left(\frac{1}{\tau }-\tau \right) \right]. \end{equation*} Now we put $\zeta =e^{-2\pi \tau }$ with $ d\zeta = e^{-2\pi \tau } 2\pi i \, d\phi$, where $\tau =\alpha -i\phi$ and $\alpha =\alpha (n)>0$. Note that we shall choose $\alpha $ so that $\alpha \rightarrow 0$ as $n\rightarrow \infty$. The specific form of $\alpha $ will be made clear below.
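As a sanity check (ours, not the paper's), the identity (\ref{vnin}) can be verified numerically: approximate $P(\zeta)$ by a truncated Euler product, discretize the contour $|\zeta|=r$ with the trapezoid rule, and compare $-x^{n+1}I_n$ with $s_n(x)$ computed directly from the partition numbers. All names below are our own.

```python
import cmath
import math

def partition_numbers(N):
    # coefficients of prod_{k>=1} 1/(1 - x^k) up to degree N (standard DP)
    p = [1] + [0] * N
    for k in range(1, N + 1):
        for i in range(k, N + 1):
            p[i] += p[i - k]
    return p

def P(zeta, terms=200):
    # truncated Euler product for the generating function P(zeta), |zeta| < 1
    val = 1.0 + 0j
    for k in range(1, terms + 1):
        val /= (1 - zeta ** k)
    return val

n, x, r, N = 20, 2.0 + 0j, 0.5, 4096
p = partition_numbers(n)
s_exact = sum(p[k] * x ** k for k in range(n + 1))

# I_n = (1/(2 pi i)) \oint P(zeta)/(zeta - x) zeta^{-n-1} dzeta;
# with zeta = r e^{i theta}, dzeta = i zeta dtheta, the trapezoid rule
# is spectrally accurate for this periodic analytic integrand.
I_n = 0j
for idx in range(N):
    zeta = r * cmath.exp(1j * 2 * math.pi * idx / N)
    I_n += P(zeta) / (zeta - x) * zeta ** (-n)
I_n /= N

s_quad = -x ** (n + 1) * I_n
```

For $n=20$ and $x=2$ the quadrature value agrees with the exact Taylor polynomial to near machine precision.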
Using the functional equation, we write $I_n$ as: \begin{eqnarray} I_{n} = \int_{-1/2}^{1/2} \frac{P(e^{-2\pi \tau })}{e^{-2\pi \tau }-x} e^{2\pi n\tau } \,d\phi = J+\widetilde{I_{n}}, \label{inin} \end{eqnarray} where \begin{equation*} J=\int_{-1/2}^{1/2}\frac{\psi (\tau )}{e^{-2\pi \tau }-x}e^{2\pi n\tau } \, d\phi, \quad \widetilde{I_{n}}=\int_{-1/2}^{1/2} \frac{P(e^{-2\pi \tau })-\psi (\tau )}{e^{-2\pi \tau }-x}e^{2\pi n\tau } \, d\phi . \end{equation*} To estimate $\widetilde{I_{n}}$, we break the interval into three parts: a neighborhood of the origin, say $-\phi _{0}\leq \phi \leq \phi _{0},$ and the two remaining segments from $-1/2$ to $-\phi _{0}$ and from $\phi _{0}$ to $1/2$. Choose $\phi _{0}=\lambda \alpha $, with $\lambda$ chosen so that $2\pi =\alpha (1+\lambda ^{2})$; that is, $\phi _{0}=(2\pi \alpha -\alpha^{2})^{1/2}$. We proceed as in \cite{ayoub} to get the estimates: \begin{lemma} (a) For $\left| \phi \right| \leq \phi _{0}$ we have \begin{equation} P(e^{-2\pi \tau })-\psi (\tau )=O(1). \label{claima} \end{equation} (b) For $\phi _{0}\leq \phi \leq \frac{1}{2}$ or $\frac{-1}{2}\leq \phi \leq -\phi _{0}$ we have \begin{equation} P(e^{-2\pi \tau })-\psi (\tau )=O(e^{\pi /(48\alpha )}). \label{claimb} \end{equation} \end{lemma} \begin{proof} Equation (\ref{functional}) is essential for the proof here. For details see equations (14) and (19) on page 150 of \cite{ayoub}. \end{proof} We use this lemma to estimate $\widetilde{I_{n}}$. Define $\widetilde{I}_{n,1}$, $\widetilde{I}_{n,2}$, and $\widetilde{I}_{n,3}$ by: \begin{eqnarray*} \widetilde{I_{n}} &=& \left(\int_{-1/2}^{-\phi _{0}}+ \int_{-\phi _{0}}^{\phi_{0}}+\int_{\phi _{0}}^{1/2} \right) \, \left( \frac{P(e^{-2\pi \tau })- \psi (\tau )}{e^{-2\pi \tau }-x} e^{2\pi n\tau } \right) \,d\phi \\ &=& \widetilde{I}_{n,1}+\widetilde{I}_{n,2}+\widetilde{I}_{n,3}.
\end{eqnarray*} From equation (\ref{claima}), \begin{equation*} \widetilde{I}_{n,2} =O \left( \int_{-\phi _{0}}^{\phi _{0}} \left| \frac{1}{e^{-2\pi \tau }-x}\right| \left| e^{2\pi n\tau }\right| \, d\phi \right). \end{equation*} For $\left| x\right| \geq 1+\delta $, \begin{equation*} \left| \frac{1}{e^{-2\pi \tau }-x}\right| \leq \frac{1}{\left| x\right| -\left| e^{-2\pi \tau }\right| } = \frac{1}{\left| x\right| -e^{-2\pi \alpha }} \leq \frac{1}{\left| x\right| -1} \leq \frac{1}{\delta }. \end{equation*} Hence $ \widetilde{I}_{n,2} =O_{\delta }(e^{2\pi n\alpha }), $ whereas from equation (\ref{claimb}) $ \widetilde{I}_{n,3}=O_{\delta } \left(e^{2\pi n\alpha +\pi /(48\alpha )} \right); $ and exactly the same estimate holds for $\widetilde{I}_{n,1}$. From equation (\ref{inin}), we have now shown the following: \begin{lemma} \begin{equation} I_{n}=J+O_{\delta }\left(e^{2\pi n\alpha +\pi /(48\alpha )} \right). \label{inj} \end{equation} \end{lemma} We use the functional equation (\ref{functional}) to obtain \begin{equation*} J=\int_{-1/2}^{1/2}\frac{(\alpha -i\phi )^{1/2}}{e^{-2\pi \tau }-x} \exp \left(\frac{\pi }{12(\alpha -i\phi )}+2\pi (n-\frac{1}{24})(\alpha -i\phi ) \right) \, d\phi . \end{equation*} For convenience, we put \begin{equation} m=2\pi \left(n-\frac{1}{24} \right) =2\pi \lambda _{n}^{2}. \label{m} \end{equation} We change variables $\phi =\alpha u$ to get \begin{equation*} J = \alpha^{3/2}\int_{-1/(2\alpha )}^{1/(2\alpha )} \, \frac{(1-iu)^{1/2}}{e^{-2\pi \alpha (1-iu)}-x} \, \exp \left( \frac{\pi }{12\alpha (1-iu)} + m\alpha (1-iu) \right) \, du. \end{equation*} To obtain an asymptotic approximation for $J$, we set the coefficients of $\frac{1}{1-iu}$ and $1-iu$ equal. Thus $ \frac{\pi }{12\alpha }=m\alpha =\sigma, $ and so \begin{equation} \alpha =\sqrt{\frac{\pi }{12m}}, \quad \sigma =\sqrt{\frac{\pi m}{12}}, \label{alpsig} \end{equation} where $m$ was defined in equation (\ref{m}). This is how $\alpha $ is made explicit.
Consequently, \begin{eqnarray} J &=& \alpha ^{3/2}\int_{-1/(2\alpha )}^{1/(2\alpha )} \frac{(1-iu)^{1/2}}{e^{-2\pi \alpha (1-iu)}-x} \exp \left[ \, \sigma \left(\frac{1}{1-iu}+(1-iu) \right) \right] \,du \nonumber \\ &=& \alpha^{3/2}e^{2\sigma }\int_{-u_{0}}^{u_{0}} \frac{(1-iu)^{1/2}}{e^{-2\pi\alpha (1-iu)}-x}\exp [-\sigma g(u)] \, du, \label{j} \end{eqnarray} where \begin{equation*} g(u)=\frac{u^{2}}{1-iu}, \quad u_{0}=\frac{1}{2\alpha }. \end{equation*} Note that from equation (\ref{alpsig}), $\sigma =\sigma (n)\rightarrow \infty $ and $\alpha \rightarrow 0$ as $n\rightarrow \infty$. To approximate $J$, we follow (\cite{copson}, p. 91). We choose $\varepsilon$ to lie in the interval $(1/3,1/2)$. Write \begin{eqnarray} J&=& \alpha ^{3/2}e^{2\sigma } \left[ \int_{-u_{0}}^{-\sigma^{-\varepsilon}} +\int_{-\sigma^{-\varepsilon }}^{\sigma^{-\varepsilon }} + \int_{\sigma^{-\varepsilon }}^{u_{0}} \right] \frac{(1-iu)^{1/2}\exp [-\sigma g(u)]} {e^{-2\pi\alpha (1-iu)}-x} \, du \nonumber \\ &=& J_{1}+J_{2}+J_{3}. \label{j1j2j3} \end{eqnarray} \begin{lemma} (a) \qquad Both $J_1$ and $J_3$ equal $\displaystyle \frac{ \alpha^{3/2} e^{2 \sigma} \sqrt{\pi}}{ \sqrt{\sigma}} o_\delta( \sigma^{1-3\varepsilon})$. \\ (b) \qquad $\displaystyle J_2 = \frac{ \alpha^{3/2} e^{2 \sigma}}{1-x} \, \frac{ \sqrt \pi}{ \sqrt \sigma} ( 1+ O_\delta( \sigma^{1-3 \varepsilon}))$. \end{lemma} \begin{proof} We estimate $J_{2}$ first.
Note for $-\sigma^{-\varepsilon }\leq u\leq \sigma^{-\varepsilon }$ we have \begin{eqnarray*} (1-iu)^{1/2} &=& 1+O(\sigma ^{-\varepsilon })=1+O(\sigma ^{1-3\varepsilon }) \\ \frac{1}{e^{-2\pi \alpha (1-iu)}-x} &=& \frac{1}{1-x+O(\alpha )} = \frac{1}{1-x} \frac{1}{1+O_{\delta }(\alpha )} \\ &=& \frac{1}{1-x}(1+O_{\delta }(\alpha )) =\frac{1}{1-x}(1+O_{\delta }(\sigma^{-1}))=\frac{1}{1-x} (1+O_{\delta }(\sigma^{1-3\varepsilon })), \\ g(u) &=& \frac{u^{2}}{1-iu}=u^{2}+O(\sigma^{-3\varepsilon }) \end{eqnarray*} so that \begin{equation*} \exp [-\sigma g(u)]=\exp [-\sigma u^{2}](1+O(\sigma^{1-3\varepsilon })). \end{equation*} Making the above substitutions, we find \begin{eqnarray} J_{2} &=& \alpha ^{3/2}e^{2\sigma } \int_{-\sigma ^{-\varepsilon }}^{\sigma^{-\varepsilon }} \frac{(1-iu)^{1/2}\exp [-\sigma g(u)]}{e^{-2\pi \alpha (1-iu)}-x} \, du \\ &=& \alpha ^{3/2}e^{2\sigma } \int_{-\sigma^{-\varepsilon }}^{\sigma^{-\varepsilon }} \frac{\exp [-\sigma u^{2}]}{1-x}(1+O_{\delta } (\sigma^{1-3\varepsilon }))\,du \\ &=& \frac{\alpha^{3/2}e^{2\sigma }}{1-x} \left( \int_{-\sigma^{-\varepsilon}}^{\sigma^{-\varepsilon }}e^{-\sigma u^{2}} \, du \right) \, \left( 1+O_{\delta }(\sigma^{1-3\varepsilon }) \right) \label{j2} \end{eqnarray} Now \begin{equation*} \int_{-\sigma ^{-\varepsilon }}^{\sigma ^{-\varepsilon }} e^{-\sigma u^{2}} \,du = \int_{-\infty }^{\infty }e^{-\sigma u^{2}} \,du - \left[ \int_{-\infty}^{-\sigma^{-\varepsilon }} + \int_{\sigma^{-\varepsilon }}^{\infty} \right] e^{-\sigma u^{2}} \, du. 
\end{equation*} It is not hard to see that since $\int_{-\infty }^{\infty } e^{-\sigma u^{2}} \, du = {\sqrt{\pi }}/{\sqrt{\sigma }} $ \begin{equation*} \left[ \int_{-\infty }^{-\sigma^{-\varepsilon }} + \int_{\sigma^{-\varepsilon}}^{\infty } \right] \, e^{-\sigma u^{2}} \,du =o(\sigma^{1-3\varepsilon }) \end{equation*} so that \begin{equation*} \int_{-\sigma^{-\varepsilon }}^{\sigma ^{-\varepsilon }} e^{-\sigma u^{2}} \,du= \frac{\sqrt{\pi }}{\sqrt{\sigma }}(1+o(\sigma^{1-3\varepsilon })). \end{equation*} Hence from equation (\ref{j2}) we get \begin{eqnarray*} J_{2} &=& \frac{\alpha ^{3/2}e^{2\sigma }}{1-x}\frac{\sqrt{\pi }}{\sqrt{\sigma }} \left(1+o(\sigma ^{1-3\varepsilon }) \right) \, \left(1+O_{\delta }(\sigma ^{1-3\varepsilon }) \right) \\ &=& \frac{\alpha^{3/2}e^{2\sigma }}{1-x}\frac{\sqrt{\pi }}{\sqrt{\sigma }} \left( 1+O_{\delta }(\sigma ^{1-3\varepsilon }) \right). \end{eqnarray*} Recall \begin{equation*} J_{3}=\alpha^{3/2}e^{2\sigma } \int_{\sigma^{-\varepsilon }}^{u_{0}} \frac{(1-iu)^{1/2}\exp [-\sigma g(u)]}{e^{-2\pi \alpha (1-iu)}-x} \, du. \end{equation*} We have the estimates \begin{eqnarray} \left| J_{3}\right| &\leq& \alpha^{3/2}e^{2\sigma } \int_{\sigma^{-\varepsilon }}^{u_{0}} \left| \frac{(1-iu)^{1/2} \exp \left[-\sigma g(u) \right]}{ e^{-2\pi \alpha (1-iu)}-x} \right| \, du \nonumber \\ &=& \alpha^{3/2}e^{2\sigma } \int_{\sigma ^{-\varepsilon }}^{u_{0}} \frac{ (1+u^{2})^{1/4} \exp \left[-\sigma \, \Re g(u) \right]} {\left| e^{-2\pi \alpha (1-iu)}-x\right| } \,du \nonumber \\ &\leq& \frac{\alpha^{3/2}e^{2\sigma }}{\delta } \int_{\sigma^{-\varepsilon}}^{u_{0}}(1+u^{2})^{1/4} \exp \left[-\sigma \frac{u^{2}}{1+u^{2}} \right] \,du. \label{j3} \end{eqnarray} Since ${u^{2}}/{(1+u^{2})}$ is an increasing function of $u$, we have, for $u_{0}\geq u\geq \sigma^{-\varepsilon }$, $ {u^{2}}/{(1+u^{2})} \geq {\sigma^{-2\varepsilon }}/{(1+\sigma^{-2\varepsilon })}. 
$ This implies $ \exp \left( -\sigma {u^{2}}/{(1+u^{2})} \right) \leq $ $ \exp \left( -{\sigma^{1-2\varepsilon }}/ {(1+\sigma^{-2\varepsilon })} \right) $ $ \leq \exp \left( -{\sigma^{1-2\varepsilon }}/{2} \right). $ By assumption $1/3< \varepsilon <1/2$, so we find that $\exp (-\frac{\sigma^{1-2\varepsilon }}{2})$ is much smaller than $\sigma ^{1-3\varepsilon }$. Hence by the inequality (\ref{j3}) we get \begin{equation*} J_{3} = \frac{\alpha^{3/2}e^{2\sigma } \sqrt{\pi }}{\sqrt{\sigma }}o_{\delta}(\sigma^{1-3\varepsilon }). \end{equation*} Exactly the same estimate holds for $J_{1}$. \end{proof} We now return to the proof of the Theorem. By the definition of $J_1$, $J_2$, $J_3$ (see equation (\ref{j1j2j3})), we see that \begin{equation*} J=\frac{\alpha^{3/2}e^{2\sigma }}{1-x}\frac{\sqrt{\pi }}{\sqrt{\sigma }} \left(1+O_{\delta }(\sigma ^{1-3\varepsilon }) \right). \end{equation*} From equation (\ref{inj}) \begin{equation*} I_{n} = \frac{\alpha^{3/2}e^{2\sigma }}{1-x}\frac{\sqrt{\pi }}{\sqrt{\sigma }} (1+O_{\delta }(\sigma^{1-3\varepsilon }))+O_{\delta }(e^{2\pi n\alpha + \pi/(48\alpha )}). \end{equation*} To see the final result, we recall the equations (\ref{m}), (\ref{lamdan}), and (\ref{alpsig}). It is convenient that we express everything in terms of $\lambda_{n}$ which equals $\sqrt{n- 1/24}$. Thus, with $a=\pi \sqrt{2/3}$, \begin{eqnarray*} \alpha &=&\frac{1}{2\sqrt{6}\lambda _{n}}, \quad \sigma = \frac{\pi \lambda _{n}}{\sqrt{6}}, \\ 2\sigma &=& a\lambda _{n}. \\ 2\pi n\alpha +\pi /(48\alpha ) &=& \frac{\pi n}{\sqrt{6}\lambda _{n}}+\frac{\pi \lambda _{n}}{4\sqrt{6}} \\ &=& \frac{5\pi \lambda _{n}}{4\sqrt{6}}+o(1)=\frac{5a\lambda _{n}}{8}+o(1). \end{eqnarray*} Since $e^{{5a\lambda _{n}}/{8}}$ is dominated by $e^{a\lambda _{n}},$ we have \begin{equation*} I_{n} = \frac{e^{a\lambda _{n}}\lambda _{n}^{-2}}{(1-x)4\sqrt{3}} \left(1+O_{\delta}(\lambda _{n}^{1-3\varepsilon }) \right). 
\end{equation*} By comparing with equation (\ref{vnin}) and setting $\eta = 3 \varepsilon - 1$ (so that $0 < \eta < 1/2$), we find that the proof is complete. \end{proof} \section{Polynomials for Partitions with Parts}\label{section:stanley} Let $p_k(n)$ denote the number of partitions of $n$ with exactly $k$ parts. Define the polynomials $F_n(x) = \sum_{k=1}^n p_k(n) x^k$, the {\sl partition with parts polynomials}. They have generating function: \[ P(x,u)=\prod_{k \geq 1} \frac{1}{1-xu^k} = 1+\sum_{n=1}^\infty F_n(x) u^n. \] With $x=1$, $P(1,u)$ reduces to the generating function $P(x)$ for the partition numbers. To calculate these polynomials, we make use of the recurrence $ p_k(n)=p_{k-1}(n-1) + p_{k}(n-k) $ and the fact that about half their coefficients are actually given by the partition numbers: $p_{n-k}(n) = p_k$ for $2k <n-1$. It is also known that the coefficients of $F_n(x)$ are unimodal for $n$ sufficiently large (\cite{andrews}, page 100). These polynomials are mentioned in \cite{durfee_poly}, where it is pointed out that they have complex zeros. Unfortunately, these facts do not give a hint of the complexity of their zeros (see Figure 1b). In fact, Richard Stanley plotted the zeros of $F_{200}(x)$ and asked what happens as $n\to \infty$. The proofs of the following results are found in \cite{boyer_goh}. The asymptotics for $F_n(x)$ outside the unit disk can be found using the method of Darboux. We state: \begin{theorem} \label{fnx-out} On compact subsets $K$ that lie in the open set $\{ z : |z| >1\}$, the polynomials $F_n(x)$ have the asymptotic form \begin{equation*} F_{n}(x)= x^{n}P\left(1,\frac{1}{x} \right) +O(\left| x\right|^{ C n} ), \label{fnx-out_2} \end{equation*} where $1/2<C <1$ and the big $O$ term holds uniformly on the compact set $K$. \end{theorem} From these asymptotics, we can give a simple argument that there is no limit point of zeros outside the closed unit disk. Let $\delta>0$ be given.
Suppose $\{x_n\}$ is a sequence of zeros, $F_n(x_n)=0$, that converges to $x^*$, say, with $|x_n| \geq 1+\delta$ for all $n$. Then by Theorem \ref{fnx-out}, \[ 0= \frac{F_n(x_n)}{ x_n^n}= P(1, 1/x_n) + O( |x_n|^{(C -1) n}). \] Since $P(1,1/x^*) \neq 0$, we obtain a contradiction since $C<1$. Hence, the zero attractor must lie inside the closed unit disk $\overline{ {\mathbb D}}$. We find that the arguments of the zeros of $F_n(x)$ are uniformly distributed around the unit circle by writing $F_n(x)$ as $xg_n(x)$ and applying the result of Erd\"os-Tur\'an \cite{erdos} (see inequality (\ref{eq:erdos})). Note that $g_n(1)=p_n$, $g_n(x)$ is monic, and $g_n(0)=1$. We state this result formally as: \begin{theorem} The arguments of the zeros of $\{F_n(x)\}$ are uniformly distributed on the unit circle as $n\to\infty$. \end{theorem} Understanding the behavior of the zeros inside the unit disk $\mathbb D$ requires a more detailed analysis using the Hardy-Ramanujan circle method. A difficulty to overcome is that the functional equation of the modular function is unavailable for the generating function $P(x,u)$. An important first step in applying the circle method is to rewrite the generating function $P(x,u)$, for $|x|<1$ fixed, in a neighborhood of a rational point $e^{2 \pi i h/k}$ inside the unit disk $\mathbb D$, where $h$ and $k$ are relatively prime integers. Write $u$ as $e^{ 2 \pi i (h/k + i z)}$ with $\Re(z)>0$ small.
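Returning to the computation of $F_n(x)$: the recurrence $p_k(n) = p_{k-1}(n-1) + p_k(n-k)$ and the boundary facts quoted earlier are easy to exercise in code. The sketch below (our illustration, with our own names) tabulates $p_k(n)$ and checks that $F_n(1) = p_n$ and that $p_{n-k}(n) = p_k$ whenever $2k < n-1$.

```python
def parts_table(N):
    # p[k][n] = number of partitions of n into exactly k parts,
    # via the recurrence p_k(n) = p_{k-1}(n-1) + p_k(n-k)
    p = [[0] * (N + 1) for _ in range(N + 1)]
    p[0][0] = 1
    for k in range(1, N + 1):
        for n in range(1, N + 1):
            p[k][n] = p[k - 1][n - 1] + (p[k][n - k] if n >= k else 0)
    return p

N = 30
p = parts_table(N)

# F_n(1) = sum_k p_k(n) equals the partition number p(n)
partition_n = [sum(p[k][n] for k in range(N + 1)) for n in range(N + 1)]

# about half the coefficients are partition numbers: p_{n-k}(n) = p_k for 2k < n-1
checks = all(
    p[n - k][n] == partition_n[k]
    for n in range(2, N + 1)
    for k in range(0, n)
    if 2 * k < n - 1
)
```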
The factorization below required careful estimates with $L$-functions: \[ P(x, e^{ 2 \pi i (h/k + i z)}) = e^{ w_{h,k}} e^{ \Psi(z)} e^{ j_{h,k}(z)}, \] where \begin{eqnarray*} w_{h,k} &=& \frac{1}{2k}\ln(1-x^k)+ \sum_{ \ell,\, k \nmid \ell} \frac{x^\ell}{\ell} \, \frac{1}{ e^{- 2 \pi i \ell h/k}-1}, \quad (h,k)=1, \\ \Psi(z) &=& \frac{ \Li_2(x^k)}{ 2 \pi k^2} \, \frac{1}{z}, \\ j_{h,k}(z) &=& \frac{1}{ 2 \pi i} \int_{ -3/4 - i \infty}^{ -3/4 + i \infty} Q_{h,k}(s) \Gamma(s) (2 \pi z)^{-s} \, ds, \end{eqnarray*} where $\Li_2(x)$ is the dilogarithm function, given on $\mathbb D$ as $\sum_{n=1}^\infty x^n/n^2$ (see \cite{andrews_sp_fct}, p. 102), and $Q_{h,k}(s)$ is defined by means of a series expansion for $\Re(s)\geq \sigma_0>1$: \[ Q_{h,k}(s)= \sum_{m \geq 1} \sum_{\ell \geq 1} \frac{ x^\ell}{\ell} e^{ 2 \pi i \ell m h/k} (\ell m)^{-s}, \] which admits an analytic continuation to $\mathbb C$ with a unique singularity, a simple pole at $s=1$. Next we introduce the quantities needed for the asymptotic expansion of the polynomials $F_n(x)$: \[ I_k = \frac{1}{ \sqrt{\pi}} \, \frac{1}{ n^{3/4}} \, \left[ \frac{ \sqrt{ \Li_2(x^k)} } {k} \right]^{1/2} \, \exp \left( 2 \sqrt{n} \frac{ \sqrt{ \Li_2(x^k)} } {k} \right). \] \begin{theorem} Let $K$ be a compact subset of the open upper unit disk. Then the partition polynomials $F_n(x)$ have the asymptotic form \begin{eqnarray*} \lefteqn { F_{n}(x)= e^{w_{0,1}} I_{1} + (-1)^n e^{w_{1,2}} I_{2} + e^{-2\pi i n/3} e^{ w_{1,3}} I_{3} }\\ && \qquad \quad + e^{-4 \pi i n/3} e^{w_{2,3}} I_{3} + o\left(I_{1}+ I_{2}+I_{3} \right) \end{eqnarray*} uniformly on $K$. \end{theorem} For simplicity, it is enough to give the zero attractor in the upper unit disk ${\mathbb D}^+$ since the coefficients of $F_n(x)$ are all real. Introduce the non-negative subharmonic functions $f_k(x) = \frac{1}{k} \Re[ \sqrt{ \Li_2(x^k)}]$ for $|x| \leq 1$.
Let $\mathcal{R}(k)$ be the subset of ${\mathbb D}^+$ given by: \[ \mathcal{R}(k) = \{ z \in {\mathbb D}^+ : f_k(z) > f_j(z) \mbox{ for } j \in \{1,2,3\},\ j \neq k\}. \] Using the same argument as above, we can easily show that there are no limit points of zeros in any of the three regions ${\mathcal R}(1)$, ${\mathcal R}(2)$, or ${\mathcal R}(3)$. In fact, the zero attractor consists of the boundaries of these regions. See Figure 1a. To describe them, let $C_{k,\ell}$ be the curves given by $f_k(x) = f_\ell(x)$, $x \in {\mathbb D}^+$, and let $\gamma_{k,\ell} = C_{k,\ell} \cap \overline{{\mathcal R}(k)}$ be their subcurves, where $1 \leq k < \ell \leq 3$. \begin{theorem} The zero attractor of $F_n(x)$ in the upper half-plane consists of the unit semi-circle together with the three curves $\gamma_{k,\ell}$, $1 \leq k < \ell \leq 3$. \end{theorem} \begin{figure}\label{fig:stanley} \begin{center} \includegraphics[height=4.5cm,width=4.5cm]{attractor_full_part_parts.pdf} \quad \includegraphics[height=4.5cm,width=4.5cm]{all_zeros_deg_10000_part_parts.pdf} \caption{(a) Zero Attractor for Partitions with Parts Polynomial; (b) All Zeros of $F_n(x)$, $n=10,000$} \end{center}\end{figure} A basic estimation of the number of zeros of $F_n(x)$ relative to the unit disk $\mathbb D$ demonstrates a striking dichotomy. \begin{theorem} (a) Let $\varepsilon >0$. Then $ \# \{ z : F_n(z)=0,\ 1 \leq |z| \leq 1 + \varepsilon \} = O(n). $ \\ (b) Let $K$ be any compact subset of the open unit disk $\mathbb D$. Then \[ \# (Z(F_n) \cap K) = O(\sqrt{n}). \] \end{theorem} \begin{corollary} The asymptotic zero distribution for $\{F_n(x)\}$ is Lebesgue measure on the unit circle. \end{corollary} Clearly, the standard definition of the asymptotic zero distribution (Definition \ref{def:asymptotic_zero}) ignores any contribution from the zeros inside the unit disk $\mathbb D$.
As a consequence, it is necessary to extend that definition: \begin{definition} The {\sl asymptotic zero distribution of order $\alpha$ on a domain $D$} for a sequence $\{q_n\}$ of polynomials whose degrees go to $\infty$ is the weak$^{*}$-limit $\mu$ of the normalized counting measures of their zeros $\frac{1}{\deg(q_n)^\alpha} \sum \{ \delta_z : z \in {\mathcal Z}(q_n) \textrm{ and } z \in D\}$. \end{definition} For the sake of exposition, we will restrict our discussion of the order $1/2$ asymptotic zero distribution $\mu$ for $\{F_n(x)\}$ to the upper unit disk ${\mathbb D}^+$. Since the zero attractor consists of three analytic curves, the measure $\mu$ is supported exactly on those curves; in particular, it will be enough to describe $\mu$ in a neighborhood of each of them. \begin{theorem} Let $1\leq k<\ell \leq 3$. For each curve $\gamma_{k,\ell}$ in the zero attractor there exists a neighborhood $U_{k,\ell}$ of $\gamma_{k,\ell}$ and a conformal map $G_{k,\ell}$ on $U_{k,\ell}$ that maps $\gamma_{k,\ell}$ into the unit circle such that the asymptotic zero distribution of $\{F_n(x)\}$ in $U_{k,\ell}$ of order $1/2$ is the pull-back of Lebesgue measure on the unit circle. \end{theorem} For the specifics of these mappings, see \cite{boyer_goh}. Their construction comes from the explicit asymptotic expansion of the polynomials $F_n(x)$. \section{Rank and Crank Polynomials}\label{section:crank} We now emphasize another way to look at the partition in parts polynomials $F_n(x)$ relative to their generating function, to show its similarities with other generating functions that appear in partition theory. It is well known (see \cite{andrews_sp_fct}, p. 568) that \[ P(z,q) = 1+\sum_{n=1}^\infty F_n(z) q^n = 1+ \sum_{n=1}^\infty \frac{ q^{n^2} z^n }{ (q;q)_n (zq;q)_n} = \frac{1}{(zq;q)_\infty}; \] namely, $F_n(z)$ is the coefficient of $q^n$ in the generating function.
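The $q$-series identity above can be tested numerically at a sample point; the sketch below (our illustration) truncates the sum at $n=30$ and the infinite product at 400 factors, which is far more than enough for $|q|=0.4$.

```python
z, q = 0.3, 0.4

def qpoch(a, q, n):
    # (a; q)_n = (1 - a)(1 - a q) ... (1 - a q^{n-1})
    val = 1.0
    for j in range(n):
        val *= (1 - a * q ** j)
    return val

# middle expression: 1 + sum_{n>=1} q^{n^2} z^n / ((q;q)_n (zq;q)_n)
lhs = 1.0 + sum(
    q ** (n * n) * z ** n / (qpoch(q, q, n) * qpoch(z * q, q, n))
    for n in range(1, 31)
)

# right-hand side: 1/(zq; q)_infinity, truncated product
rhs = 1.0 / qpoch(z * q, q, 400)
```

The two values agree to essentially full double precision; the $q^{n^2}$ factor makes the series truncation error negligible.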
With this viewpoint, there are several other natural polynomial families defined in terms of either Durfee squares (see \cite{durfee_poly}), ranks, or cranks. Here we are using the standard notations $(q)_n=(1-q)(1-q^2)\cdots (1-q^{n})$ and, more generally, $(a;q)_n = (1-a)(1-aq) \cdots (1-aq^{n-1})$; next, when $|q|<1$, let $(a;q)_\infty = \lim_{n \to \infty} (a;q)_n$ and write $(q)_\infty$ for $(q;q)_\infty$. For a partition $\lambda$ of $n$, its Durfee square is the largest square that lies inside its Ferrers graph (see \cite{andrews}, Chapter 2). The polynomials $d_n(z)$ for Durfee squares were introduced in \cite{durfee_poly} and are given in terms of their generating function: \[ D(z,q)= \sum_{n=1,k=1}^\infty d(n,k) q^n z^k = \sum_{n=1}^\infty d_n(z) q^n = \sum_{n=1}^\infty \frac{ q^{n^2} z^n}{ (q;q)_n^2}, \] where $d(n,k)$ is the number of partitions of $n$ with a Durfee square of size $k$. Further, in \cite{durfee_poly} and \cite{canfield}, it is conjectured that the associated polynomials $\{d_n(z)\}$ have only negative real zeros. Note that the Erd\"os-Tur\'an result does not apply here since the degree of $d_n(z)$ is $\lfloor\sqrt{n} \rfloor$. F. Dyson introduced the statistic of the rank of a partition $\lambda$ of $n$ as the difference between its largest part and the number of its parts (see \cite{andrews}, p. 142). We introduce the rank polynomials $r_n(z)$ as follows. Consider their generating function: \[ R(z,q)= \sum_{n=0}^\infty \, \sum_{m=-\infty}^\infty N(m,n) z^m q^n = \sum_{n=0}^\infty N_n(z) q^n = 1+ \sum_{n=1}^\infty \frac{ q^{n^2}}{ (zq;q)_n (z^{-1}q;q)_n}, \] where $N(m,n)$ is the number of partitions of $n$ with rank $m$ and $N_n(z) = \sum_{m=-(n-1)}^{n-1} N(m,n) z^m$ are symmetric Laurent polynomials. Set $r_n(z) $ to be the principal part of $N_n(z)$ and call it the {\sl rank polynomial}: \[ r_n(z) = \sum_{m=0}^{n-1} N(m,n) z^m.
\] Let $\lambda$ be the partition of $n$ given as $ \lambda_1 + \cdots + \lambda_s + 1 + \cdots + 1$, where there are exactly $r$ 1's. Let $o(\lambda)$ be the number of parts $>r$. Then the crank of $\lambda$ is $\lambda_1$ if $r=0$ and $o(\lambda)-r$ if $r > 0$ \cite{andrews_garvan}. Let $M(m,n)$ be the number of partitions of $n$ whose crank is exactly $m$. Then $C(z,q)$ is their generating function for $n>1$, where \begin{eqnarray*} \lefteqn { C(z,q)= \sum_{n=0}^\infty \, \sum_{m=-\infty}^\infty M(m,n) z^m q^n = \sum_{n=0}^\infty M_n(z) q^n }\\ && \qquad = \prod_{n=1}^\infty \frac{ (1-q^n)}{ (1-zq^n) (1-z^{-1}q^n)} = \frac{ (q)_\infty}{ (zq;q)_\infty (z^{-1}q;q)_\infty}, \end{eqnarray*} with $M_n(z) = \sum_{m=-\infty}^\infty M(m,n) z^m$. Let $c_n(z)$ be the principal part of $M_n(z)$ and call it the {\sl crank polynomial}: \[ c_n(z) = \sum_{m=0}^{n} M(m,n) z^m. \] We can apply the Erd\"os-Tur\'an result on the asymptotic distribution of the arguments of the zeros to the two families of rank and crank polynomials since their coefficients are all non-negative, they are monic, and both quotients $r_n(1)/\sqrt{r_n(0)}$, $c_n(1)/\sqrt{c_n(0)}$ are bounded above by $\sqrt{p_n}$. We record this as a theorem: \begin{theorem} The arguments of the zeros of both the rank and crank polynomials are uniformly distributed on the unit circle as $n\to \infty$. \end{theorem} \begin{figure}\label{fig:rank_zeros} \begin{center} \includegraphics[height=4.5cm,width=4.5cm]{rank_principal_part_zeros_deg_101.pdf} \includegraphics[height=4.5cm,width=4.5cm]{crank_principal_part_zeros_deg_50.pdf} \caption{Zeros of the rank polynomial of degree 100 and the crank polynomial of degree 50} \end{center} \end{figure} From explicit computation, we find that their zero attractor appears to be the unit circle (see Figure 2). It is very natural to attempt to extend the work for the partition in parts polynomials $F_n(x)$ to establish this conjecture.
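The crank definition above can be checked against small cases. The following sketch (our illustration, with our own names) enumerates partitions, computes the crank as defined, and recovers two well-known facts: the five partitions of $4$ have the five distinct cranks $\{-4,-2,0,2,4\}$, and for $9 = 5\cdot 1 + 4$ the cranks fall into the five residue classes mod 5 with equal counts, the Andrews--Garvan explanation of Ramanujan's congruence $p(5n+4) \equiv 0 \pmod 5$.

```python
from collections import Counter

def partitions(n, maxpart=None):
    # generate partitions of n as weakly decreasing lists
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield []
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def crank(lam):
    # crank = largest part if there are no 1's; otherwise
    # (number of parts larger than r) - r, where r = number of 1's
    r = lam.count(1)
    if r == 0:
        return lam[0]
    return sum(1 for part in lam if part > r) - r

cranks4 = sorted(crank(lam) for lam in partitions(4))
# count the partitions of 9 by crank residue mod 5
residues9 = Counter(crank(lam) % 5 for lam in partitions(9))
```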
Furthermore, it would be interesting to see how the partition polynomials in this paper fit into the statistical mechanics framework described in Vershik's paper \cite{vershik}. \section{Summary} For all but one of the partition polynomial families, the unit circle has a dominant role: their zero attractor either equals or contains the unit circle, while their asymptotic zero distribution involves Lebesgue measure on the unit circle. All this makes it even more intriguing to understand the meaning of the subtle two-scale asymptotics of the partition in parts polynomials $F_n(x)$ in Section \ref{section:stanley}.
https://arxiv.org/abs/1810.03050
Leaky Roots and Stable Gauss-Lucas Theorems
Let $p:\mathbb{C} \rightarrow \mathbb{C}$ be a polynomial. The Gauss-Lucas theorem states that its critical points, $p'(z) = 0$, are contained in the convex hull of its roots. A recent quantitative version due to Totik shows that if almost all roots are contained in a bounded convex domain $K \subset \mathbb{C}$, then almost all roots of the derivative $p'$ are in an $\varepsilon-$neighborhood $K_{\varepsilon}$ (in a precise sense). We prove another quantitative version: if a polynomial $p$ has $n$ roots in $K$ and $\lesssim c_{K, \varepsilon} (n/\log{n})$ roots outside of $K$, then $p'$ has at least $n-1$ roots in $K_{\varepsilon}$. This establishes, up to a logarithm, a conjecture of the first author; we also discuss an open problem whose solution would imply the full conjecture.
\section{Introduction and result} \subsection{Introduction.} The Gauss-Lucas Theorem, stated by Gauss \cite{gauss} in 1836 and proved by Lucas \cite{lucas} in 1879, says that if $p_n:\mathbb{C} \rightarrow \mathbb{C}$ is a polynomial of degree $n$, then the $n-1$ zeroes of $p_n'$ lie inside the convex hull of the $n$ zeros of $p_n$. This has been of continued interest \cite{aziz, bray, bruj, bruj2, branko, dimitrov, joyal, kalman, malamud, marden, pawlowski, pereira, rav, trevor, siebeck, schm, specht, stein, walsh}. Totik \cite{totik} recently showed that, for sequences of polynomials $p_n$ with $\deg(p_n) \rightarrow \infty$, if $(1-o(1))\deg{p_n}$ roots of $p_n$ lie inside a convex domain $K$, then any fixed $\varepsilon-$neighborhood of $K$ contains $(1-o(1))\deg{p_n}$ roots of $p_n'$. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=0.8] \draw [ thick] (0,0) ellipse (2cm and 1cm); \node at (0,0) {many roots in $K$}; \node at (-3,-0.8) {few roots}; \node at (0,-2) {roots of $p$}; \draw [ thick] (8,0) ellipse (2cm and 1cm); \draw [ thick, dashed] (8,0) ellipse (2.2cm and 1.1cm); \node at (8,0) {at least as many}; \node at (8,-0.5) {roots}; \node at (8-3,-0.8) {few roots}; \node at (8,-2) {roots of $p'$}; \end{tikzpicture} \end{center} \vspace{-10pt} \caption{The structure of asymptotic Gauss-Lucas theorems.} \end{figure} These results rely on potential theory and are asymptotic in nature. It would be of interest to have stable non-asymptotic statements showing that the roots of $p_n'$ cannot move very far outside of $K$, i.e. statements along the lines of $$ \# \left\{z \in K_{\varepsilon}: p_n'(z) = 0 \right\} \geq \# \left\{z \in K: p_n(z) = 0 \right\} -1,$$ where $K_{\varepsilon}$ is the $\varepsilon-$neighborhood of $K$. A schematic picture of such results is given in Fig. 1. The first author has proposed a nonasymptotic formulation in \cite{trevor}.
\begin{quote} \textbf{Conjecture} (Richards \cite{trevor})\textbf{.} For every $\varepsilon > 0$ there exists a constant $c_{K, \varepsilon} > 0$ depending only on the bounded convex domain $K \subset \mathbb{C}$ and $\varepsilon>0$ such that for all polynomials $p_n:\mathbb{C} \rightarrow \mathbb{C}$ of degree $n$ the following holds: if $p_n$ has at most $c_{K, \varepsilon}n$ roots outside of $K$, then $$ \# \left\{z \in K_{\varepsilon}: p_n'(z) = 0 \right\} \geq \# \left\{z \in K: p_n(z) = 0 \right\} - 1.$$ \end{quote} The first author \cite{trevor} proved the conjecture if the number of roots outside of $K$ is $\lesssim_{K, \varepsilon} \sqrt{n}$. The purpose of our paper is to prove the conjecture up to a logarithm and to discuss a new conjecture about the geometry of Coulomb potentials in $\mathbb{C}$ that seems of intrinsic interest and would imply the full conjecture. \begin{thm} Let $K \subset \mathbb{C}$ be a bounded convex domain. For every $\varepsilon > 0$ there exists a constant $c_{K, \varepsilon} > 0$ such that for all polynomials $p_n: \mathbb{C} \rightarrow \mathbb{C}$ of degree $n$ the following holds: if $$ \# \left\{z \in \mathbb{C} \setminus K: p_n(z) = 0 \right\} \leq c_{K, \varepsilon} \frac{n}{\log{n}},$$ then $$ \# \left\{z \in K_{\varepsilon}: p_n'(z) = 0 \right\} \geq \# \left\{z \in K: p_n(z) = 0 \right\} -1.$$ \end{thm} The full conjecture would follow from the truth of the following curious conjecture that we believe to be of interest in itself: let $\gamma:[0,1] \rightarrow \mathbb{C}$ be a curve in the complex plane normalized to $\gamma(0) = 0$ and $\gamma(1) = 1$, let $n \in \mathbb{N}$ be a parameter and let $\left\{z_1, \dots, z_m\right\} \subset \mathbb{C} \setminus \gamma([0,1])$ be a set of $m$ charges in the complex plane.
We conjecture~\footnote{The authors have privately received a proof of a generalization of this conjecture from Vilmos Totik, who showed that the same conclusion holds for general measures (not just sums of point masses). As noted above, this in turn implies the conjectured approximate Gauss--Lucas theorem which originated in~\cite{trevor}. We look forward to seeing Totik's general result in print in the future.} that $m$ charges cannot `supercharge' a curve. More precisely, we conjecture that if $$ \min_{0 \leq t \leq 1}{ \left| \sum_{\ell=1}^{m}{\frac{1}{\gamma(t)-z_{\ell}}} \right| } \geq n \qquad \mbox{then} \qquad m \gtrsim n.$$ This, if true, would then imply the conjecture via our approach outlined below. We prove $m \gtrsim n/\log{n}$ (under slightly weaker assumptions for which this scaling is sharp). \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale = 2] \filldraw (0,0) circle (0.04cm); \node at (0,-0.3) {0}; \draw [thick] (0,0) to[out=30, in=120] (2,0); \filldraw (2,0) circle (0.04cm); \node at (2,-0.3) {1}; \filldraw (0.2,0) circle (0.03cm); \filldraw (1,0.5) circle (0.03cm); \filldraw (1.7,0.38) circle (0.03cm); \end{tikzpicture} \end{center} \caption{Strategically placed point charges charging a curve.} \end{figure} One could also ask the same question with the $m$ point charges replaced by a measure $\mu$. \section{A discrete Lemma} Let $\gamma:[0,1] \rightarrow \mathbb{C}$ be a curve whose endpoints satisfy $|\gamma(0) - \gamma(1)| = 1$. Let $\left\{z_1, \dots, z_m \right\} \subset \mathbb{C} \setminus \gamma([0,1])$ be arbitrary. We prove that a collection of point charges cannot strongly charge the curve via the $|x|^{-1}$ potential unless it contains sufficiently many points. 
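The $m \log m$ scaling can be probed numerically: take charges $z_j = j/m + i/m$ just above the segment $\gamma(t) = t$ (the configuration later used to show sharpness). The grid resolution and the constant $1/2$ in the lower test below are ad hoc choices for this illustration only.

```python
import math

# Charges z_j = j/m + i/m placed just above the segment gamma(t) = t.
m = 200
charges = [complex(j / m, 1 / m) for j in range(1, m + 1)]

def potential(t):
    # Coulomb-type sum evaluated at the curve point gamma(t) = t.
    return sum(1 / abs(t - z) for z in charges)

# Approximate the minimum over t in [0, 1] on a fine grid.
grid_min = min(potential(k / 1000) for k in range(1001))

# The minimum is of order m*log(m): it sits between a small multiple of
# m*log(m) (attained near the endpoint t = 0) and the bound 60*m*log(m).
assert 0.5 * m * math.log(m) <= grid_min <= 60 * m * math.log(m)
```

The grid minimum only overestimates the true minimum, which is harmless here since the near-extremal point $t = 0$ lies on the grid.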
\begin{lemma} We have $$ \min_{0 \leq t \leq 1}{ \sum_{\ell=1}^{m}{\frac{1}{|\gamma(t)-z_{\ell}|} }} \leq 60 m \log{m}.$$ \end{lemma} It is easy to see that the Lemma is sharp up to the constant: let us pick $\gamma(t) = t + 0i$ for $0 \leq t \leq 1$ and $z_j = j/m + i/m$. Then $$ \min_{0 \leq t \leq 1}{ \sum_{\ell=1}^{m}{\frac{1}{|\gamma(t)-z_{\ell}|} }} \gtrsim \sum_{j=1}^{m}{ \frac{m}{j}} \sim m \log{m}.$$ \begin{proof}[Proof of Lemma 1] After a rotation and a translation we may assume that $\gamma(0) = 0$ and $\gamma(1) = 1$. We bound the quantity from above by projecting the curve $\gamma$ onto its $x-$coordinate and the complex numbers $z_{\ell}$ onto their real parts: since $\Re \gamma$ is continuous with $\Re\gamma(0)=0$ and $\Re\gamma(1)=1$, it attains every value $t \in [0,1]$, and we conclude that $$ \min_{0 \leq t \leq 1}{ \sum_{\ell=1}^{m}{\frac{1}{|\gamma(t)-z_{\ell}|} }} \leq \min_{0 \leq t \leq 1}{ \sum_{\ell=1}^{m}{\frac{1}{|\Re \gamma(t)- \Re z_{\ell}|} }} \leq \min_{0 \leq t \leq 1}{ \sum_{\ell=1}^{m}{\frac{1}{|t - \Re z_{\ell}|} }} .$$ By `tying the ends of the interval together', we can replace that problem by yet another problem: given $\left\{x_1, \dots, x_m\right\} \subset \mathbb{T}$ (where $\mathbb{T}$ is the one-dimensional torus with length 1), our desired result would follow from knowing that $$\min_{x \in \mathbb{T}}{ \sum_{\ell=1}^{m}{\frac{1}{|x - x_{\ell}|} }} \leq 60 m \log{m},$$ where $\left| \cdot \right|$ denotes the toroidal distance $|x| = \min\left\{x, 1-x\right\}$. 
We re-interpret the problem yet again: by writing it as a convolution and using $\delta_x$ to denote the Dirac measure in a point $x$, our desired result would follow from showing that $$ \min_{y \in \mathbb{T}} \left( \frac{1}{|x|} *\left( \sum_{\ell = 1}^{m}{\delta_{x_{\ell}}}\right) \right)(y) \leq 60 m \log{m}.$$ We now replace the singular kernel by a slightly less singular function $$ f_m(x) = \begin{cases} |x|^{-1} \qquad &\mbox{if}~|x| \geq 1/(20m) \\ 20m \qquad &\mbox{otherwise.} \end{cases}$$ We observe that this function is integrable and \begin{align*} \int_{\mathbb{T}} f_m * \left( \sum_{\ell = 1}^{m}{\delta_{x_{\ell}}} \right) dx &= m \int_{\mathbb{T}}{f_m(x) dx} \\ &\leq 2m + 2 m \int_{(20m)^{-1}}^{1/2}{\frac{dx}{x}} = 2m\left(1 + \log{(10m)}\right) \leq 2 m \log{(30m)}. \end{align*} Markov's inequality now implies that the set where the convolution is large is small $$ \left| \left\{ x \in \mathbb{T}: \sum_{\ell=1}^{m}{f_m(x-x_{\ell})} \geq 20 m \log{(30m)} \right\} \right| \leq \frac{1}{10}.$$ However, it is also easy to see that the set of points really close to one of the $m$ points cannot be too large $$ \left| \left\{x \in \mathbb{T}: \min_{1 \leq \ell \leq m}{|x-x_{\ell}|} \leq \frac{1}{10m} \right\} \right| \leq \frac{1}{5}$$ and therefore there exists a point $y \in \mathbb{T}$ that is at distance at least $(10m)^{-1}$ from $\left\{x_1, \dots, x_m\right\}$ and for which $$\sum_{\ell=1}^{m}{f_m(y-x_{\ell})} \leq 20 m \log{(30m)} \leq 60m\log{m}.$$ However, for points at that distance, $f_m$ and $|x|^{-1}$ coincide and this proves the desired claim. \end{proof} \section{Proof of the Theorem} \begin{proof} We will assume that the polynomial $p$ has $n+m$ roots, where the $n$ roots in $K$ are given by $a_1, \dots, a_n \in K$ and the roots $a_{n+1}, \dots, a_{n+m}$ lie outside of $K$. The roots of a polynomial depend continuously on the coefficients and, conversely, the coefficients of a polynomial depend continuously on the roots. 
This allows us to assume for simplicity that all roots are distinct and that $p'$ and $p$ have no common roots. We note that roots of the derivative have to satisfy $$ \frac{p'(z)}{p(z)} = \sum_{k=1}^{n+m}{\frac{1}{z-a_k}} = 0.$$ We write the polynomial as $p(z) = q(z) r(z)$ where $$ q(z) = \prod_{k=1}^{n}{(z-a_k)} \qquad \mbox{and} \qquad r(z) = \prod_{k=n+1}^{n+m}{(z-a_k)}.$$ Then $p'(z) = q'(z)r(z) + q(z) r'(z)$. We will now introduce the set, for any $\delta >0$, $$ A_{\delta} = \left\{ z \in \mathbb{C}: \left| \frac{q'}{q} \right| \leq \left| \frac{r'}{r} \right| + \frac{\delta}{|r|} \right\}.$$ We observe that every root of $p'$ is contained in $A_{\delta}$ for all $\delta > 0$: this is because if $z$ is a root of $p'$, then $q'(z)/q(z) = - r'(z)/r(z)$. Simple far-field asymptotics also show that $A_{\delta}$ is bounded (for $\delta$ sufficiently small if $m=1$ and unconditionally if $m \geq 2$ as long as $m < n$). If $A$ is a connected component of $A_{\delta}$, then on $\partial A$ $$ |q'(z) r(z) | = |q(z) r'(z)| + \delta |q(z)| \qquad \mbox{and thus} \qquad |q'(z) r(z) | > |q(z) r'(z)|.$$ Rouch\'{e}'s theorem implies that the number of roots of $p'(z) = q'(z)r(z) + q(z) r'(z)$ in $A$ is the same as the number of roots of $q'(z) r(z)$ in $A$. This has an important implication: it identifies regions in the complex plane, the connected components of $A_{\delta}$ (for sufficiently small $\delta$), where the number of roots of $p'$ and the number of the roots of $q'(z)r(z)$ exactly coincide. We are interested in the number of roots of $p'$ in $\mathbb{C} \setminus K_{\varepsilon}$ and want to show that this number is at most $m$. Now let $z_0 \in \mathbb{C} \setminus K_{\varepsilon}$, let $p'(z_0) = 0$ and let $A$ be the connected component of $A_{\delta}$ containing $z_0$. 
If $A$ does not intersect $K$, then $q'$ never vanishes in $A$ (this follows from the Gauss-Lucas theorem) and every root of $p'$ in $A$ corresponds to a root of $r$ (of which there are $m$ in total). It thus suffices to establish a condition under which no connected component of $A_{\delta}$ intersects both $\mathbb{C} \setminus K_{\varepsilon}$ and $K$. We will now establish such a condition. \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=1] \draw [thick] (0,0) ellipse (2cm and 1cm); \node at (0,0) {$K$}; \draw [thick] (0,0) ellipse (2.8cm and 1.8cm); \node at (2,-0.8) {$K_{\varepsilon}$}; \draw [thick, dashed] (-2,-2) to[out=300, in = 270] (-1, -2) to[out=90, in = 0] ( -1.5, -1) to[out=180, in=120] (-2,-2); \draw [thick, dashed] (-2-2,-2+2) to[out=300, in = 270] (-1-2, -2+2) to[out=90, in = 0] ( -1.5-2, -1+2) to[out=180, in=120] (-2-2,-2+2); \draw [thick, dashed] (3-2,-2+2) to[out=300, in = 270] (3-2, -2+2) to[out=90, in = 0] (3-2, -1+2) to[out=180, in=120] (3-2,-2+2); \draw [thick, dashed] (4, 1) ellipse (0.5cm and 1cm); \end{tikzpicture} \end{center} \caption{$K$, the $\varepsilon-$neighborhood $K_{\varepsilon}$ and the set $A_{\delta}$ (dashed). Every connected component of $A_{\delta}$ contains exactly as many roots of $p'$ as it has roots of $r$.} \end{figure} Suppose the statement fails and the connected component $A$ of $A_{\delta}$ intersects both $\mathbb{C} \setminus K_{\varepsilon}$ and $K$. It then contains a curve connecting $\mathbb{C} \setminus K_{\varepsilon}$ and $K$ (a curve of length at least $\varepsilon$) for all $\delta >0$. 
We will now construct a lower bound on $$ \left| \frac{q'(z)}{q(z)} \right| = \left| \sum_{k=1}^{n}{ \frac{1}{ z - a_k}} \right| \qquad \mbox{for}~z \in \mathbb{C} \setminus K.$$ By rotational and translational invariance, we may assume that $z \in \mathbb{R}$, that $(0,0) \in K$ is the closest point to $z$ in $K$, and $$z> \sup_{y \in K}{\mbox{Re}~y} = 0.$$ We now estimate $|q'/q|$ at $z$ by $$\left|\sum_{k=1}^n\dfrac{1}{z-a_k}\right|\geq\displaystyle\Re\left(\sum_{k=1}^n\dfrac{1}{z-a_k}\right)=\sum_{k=1}^n\dfrac{\Re(z-\overline{a_k})}{|z-a_k|^2}.$$ It now follows from the normalization described above that $$\left|\sum_{k=1}^n\dfrac{1}{z-a_k}\right|\geq\sum_{k=1}^n\dfrac{d(z,K)}{(d(z,K)+\operatorname{diam}(K))^2}=\dfrac{n\cdot d(z,K)}{(d(z,K)+\operatorname{diam}(K))^2}.$$ The assumption that $A$ intersects both $\mathbb{C}\setminus K_\varepsilon$ and $K$ implies that $A$ contains a path $\gamma$ whose endpoints are at distance $\geq\varepsilon/2$ from each other and which lies at distance at least $\varepsilon/2$ from $K$. The lower bound on $|q'(z)/q(z)|$ obtained above, along with the definition of the set $A$, thus gives us that for $z$ on $\gamma$ (taking $\delta$ sufficiently small), $$\left|\dfrac{r'(z)}{r(z)}\right|=\left|\displaystyle\sum_{k=n+1}^{n+m}\dfrac{1}{z-a_k}\right|\geq\frac{1}{2}\dfrac{n\varepsilon}{(\varepsilon+\operatorname{diam}(K))^2},$$ and thus for $\varepsilon<\operatorname{diam}(K)$ and some constant $c_K>0$, $$\left|\displaystyle\sum_{k=n+1}^{n+m}\dfrac{1}{z-a_k}\right|>c_Kn\varepsilon\ \ \ \text{ for $z\in\gamma$}.$$ Rescaling by a factor of $\varepsilon^{-1}$ and using the Lemma implies that $$c_Kn\varepsilon\leq\dfrac{60m\log m}{\varepsilon} \qquad \text{forcing} \qquad m\gtrsim_{K,\varepsilon}\dfrac{n}{\log n}.$$ \end{proof}
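The theorem above can be sanity-checked numerically on a toy example. In the sketch below (an illustration, not part of the proof), $K$ is the unit disk, $K_{\varepsilon}$ the disk of radius $1.2$, and $p$ has $n = 8$ roots inside $K$ and $m = 1$ root outside; the critical points inside $K_{\varepsilon}$ are counted via the argument principle, discretized as a Riemann sum. All concrete choices (the roots, the radius, the number of sample points) are ours.

```python
import cmath, math

# Example: n = 8 roots on a circle of radius 1/2 inside the unit disk K,
# plus m = 1 root far outside at z = 5.
roots = [0.5 * cmath.exp(2j * math.pi * k / 8) for k in range(8)] + [5 + 0j]

def poly_from_roots(rs):
    # Coefficients of prod (z - r), highest degree first.
    coeffs = [1 + 0j]
    for r in rs:
        new = [0j] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += c
            new[i + 1] -= r * c
        coeffs = new
    return coeffs

def derivative(coeffs):
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def horner(coeffs, z):
    val = 0j
    for c in coeffs:
        val = val * z + c
    return val

p1 = derivative(poly_from_roots(roots))   # p'
p2 = derivative(p1)                       # p''

# Argument principle: (1/(2 pi i)) \oint p''/p' dz over |z| = R counts the
# zeros of p' inside; here discretized as a Riemann sum over the circle.
N, R = 4000, 1.2
total = sum((horner(p2, z) / horner(p1, z)) * z
            for z in (R * cmath.exp(2j * math.pi * k / N) for k in range(N)))
count = round((total / N).real)

# At least n - 1 = 7 critical points remain in K_eps; here exactly 7 of the
# 8 critical points stay near the cluster, one drifts toward the outlier.
assert count == 7
```

Because the integrand is analytic near the contour, the equispaced Riemann sum converges very quickly, so rounding to the nearest integer is reliable here.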
https://arxiv.org/abs/1804.07104
From a Consequence of Bertrand's Postulate to Hamilton Cycles
A consequence of Bertrand's postulate, proved by L. Greenfield and S. Greenfield in 1998, assures that the set of integers $\{1,2,\cdots, 2n\}$ can be partitioned into pairs so that the sum of each pair is a prime number for any positive integer $n$. Approaching it from the angle of Graph Theory, this paper provides new insights into the problem. We conjecture a stronger statement that the set of integers $\{1,2,\cdots, 2n\}$ can be rearranged into a cycle so that the sum of any two adjacent integers is a prime number. Our main result is that this conjecture is true for infinitely many cases.
\section{Introduction} Primes have been extensively studied in Number Theory. This paper is motivated by a result of Greenfield and Greenfield from 1998 \cite{greenfield} concerning prime numbers. \begin{thm}\label{greenfield}\cite{greenfield}\\ The set of integers $\{1, 2, 3, \cdots, 2n\}$, $n \geq 1$, can be partitioned into pairs $\{a_i,b_i\}$ such that $a_i+b_i$ is prime for all $i=1, 2, \cdots, n$. \end{thm} Taking $n=10$ as an example, $\{1, 2, \cdots, 20\}$ can be partitioned into pairs so that the sum of each pair is a prime number, as shown in Figure 1. \begin{figure} \includegraphics{pairs.pdf} \caption{$\{1, 2, \cdots, 20\}$ can be partitioned into prime pairs.} \end{figure} Theorem \ref{greenfield} was proved by L. Greenfield and S. Greenfield in 1998 \cite{greenfield} and reproduced by D. Galvin in 2006 \cite{Galvin}. This lovely result follows, with an elegant proof, from the well-known Bertrand's Postulate, sometimes called the Bertrand--Chebyshev Theorem \cite{bertrand,tche}. \begin{thm}\label{Bertrand}\cite{bertrand,tche} For any positive integer $n>1$, there is at least one prime $p$ such that $n<p<2n$. \end{thm} As the proof of Theorem \ref{greenfield} is short, we reproduce it below to keep the paper self-contained. The proof is by induction on $n$. For $n=1$, it is trivial. Assume that the statement is true for any $n'<n$. By the Bertrand--Chebyshev Theorem, there exists a prime $p\in (2n,4n)$. We may write the prime as $p=2n+k$ for some $1\leq k<2n$; note that $k$ is odd since $p$ is odd. Then the set of integers $\{k, k+1, \cdots, 2n-1, 2n\}$ obviously can be partitioned into pairs $\{k, 2n\}, \{k+1, 2n-1\}, \cdots$ and so on up to $\{n+\lfloor k/2\rfloor, n+\lceil k/2\rceil\}$ (the last pair is valid since $k$ is odd). Each of the pairs sums to the prime $2n+k$. By the induction hypothesis, the set of remaining integers $\{1, 2, \cdots, k-1\}$ has the property, too. Thus, the whole set $\{1, 2, \cdots, 
2n\}$ can be partitioned into pairs so that the sum of each pair is prime. This completes the proof. From the Graph Theory point of view, it is natural to think of a graph that treats numbers as vertices, in which two vertices are adjacent if the sum of the corresponding numbers is a prime. Let $G_n$ denote such a graph on the vertices $\{1, 2, \cdots, n\}$. Theorem \ref{greenfield} can be rephrased in the terminology of Graph Theory: a graph defined in this way has a perfect matching. Inspired by Theorem \ref{greenfield}, we are interested in the structure of such a graph. We first give a formal definition of the mentioned graph. For any positive integer $n$, define a graph $G_n=(V,E)$ with the vertex set $V=\{1,2,\cdots, n\}$ and $E=\{ij: i+j \text{ is prime}\}$. We call $G_n$ the {\it prime sum graph} of order $n$. Expectedly, properties of $G_n$ reflect properties of prime numbers. Therefore, if one can provide some new insight from the angle of Graph Theory, then it could eventually reveal new challenges concerning prime numbers. Obviously, prime sum graphs are bipartite graphs, and thus are 2-colorable. This reflects the trivial fact that a prime sum (an edge) can be formed only by connecting an odd number to an even number. Heuristically, each vertex in $G_n$ has degree roughly $\frac{n}{\log n}$ when $n$ is sufficiently large. Thus, there are about $\frac{n^2}{2\log n}$ edges. This reflects the well-known result that the number of primes below $n$ is $\pi(n)\sim \frac{n}{\log n}$. Theorem \ref{greenfield} assures that $G_{2n}$ has a perfect matching. This result strongly relies on the existence of one prime $p$ between $2n$ and $4n$, an immediate consequence of the Bertrand--Chebyshev Theorem. In fact, there are further extensions of the Bertrand--Chebyshev Theorem that establish the existence of more and more primes between $2n$ and $4n$ as $n$ tends to infinity. Results along this direction guarantee that every number connects to more than one number in a prime sum graph. 
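The inductive proof of Theorem \ref{greenfield} above is effectively an algorithm: find a prime $p = 2n + k$, pair off $k, \dots, 2n$ around the sum $p$, and recurse on $1, \dots, k-1$. A direct transcription (with a naive trial-division primality test; the function names are ours, for illustration only):

```python
def is_prime(n):
    # Naive trial division; fine for the small inputs used here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_pairs(N):
    """Partition {1, ..., N} (N even) into pairs with prime sums, following
    the inductive proof: pick an odd k with N + k prime (Bertrand guarantees
    one exists), pair k..N around the sum N + k, then recurse on 1..k-1."""
    if N == 0:
        return []
    k = next(k for k in range(1, N, 2) if is_prime(N + k))
    pairs = [(k + i, N - i) for i in range((N - k + 1) // 2)]
    return prime_pairs(k - 1) + pairs

# Verify the construction for all even N up to 200.
for N in range(2, 201, 2):
    pairs = prime_pairs(N)
    assert sorted(x for pr in pairs for x in pr) == list(range(1, N + 1))
    assert all(is_prime(a + b) for a, b in pairs)
```

For $N = 20$ this reproduces a partition of the kind shown in Figure 1, though not necessarily the same pairing.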
Naturally, an interesting question is: \begin{center} ``can we say something more than a perfect matching hidden in the graph $G_{2n}$?'' \end{center} If there are two or more perfect matchings in $G_{2n}$, then it is likely that there exists a {\it Hamilton cycle}, a cycle that visits each vertex exactly once. \begin{figure} \begin{center} \scalebox{0.6}{\includegraphics{cycle.PNG}} \caption{Examples for small $G_{2n}$'s with a Hamilton cycle.} \end{center} \end{figure} In this paper, we pose an interesting conjecture concerning positive integers and prime numbers. \begin{conj}\label{conjecture} The set of integers $\{1, 2, 3, \cdots, 2n\}$, $n \geq 2$, can be rearranged in a circle such that the sum of any two adjacent numbers is a prime. In other words, $G_{2n}$ contains a Hamilton cycle. \end{conj} We first prove the following result, which provides a sufficient condition for $G_{2n}$ to be Hamiltonian. \begin{thm}\label{main} $G_{2n}$, $n\geq 2$, contains a Hamilton cycle if there exist two primes $p_1<p_2$ in $[1,2n]$ ($p_1$ can be 1) such that $2n+p_1$ and $2n+p_2$ are primes and gcd$\displaystyle\left(\frac{p_2-p_1}{2},n\right)=1$. \end{thm} For any positive integer $m$, let $E_m$ denote the set of the edges in a prime sum graph $G=(V,E)$ whose two endpoints sum to $m$, i.e., $E_m=\{uv\in E(G): u+v=m\}$. Let $\mathcal{P}$ denote the set of all primes less than $4n$. A set $H\subseteq \mathcal{P}$ is called a {\it Hamiltonian prime set} of $G_{2n}$ if $\bigcup_{p\in H}E_p$ contains a Hamilton cycle in $G_{2n}$. As a consequence of Theorem \ref{main} by setting $p_1=1$ and $p_2=3$, we have the following corollary. \begin{cor}\label{cor} The set $\{3, 2n+1, 2n+3\}\subseteq \mathcal{P}$ is a Hamiltonian prime set of $G_{2n}$ for $n\geq 2$. In other words, if $2n+1$ and $2n+3$ are twin primes, then $G_{2n}$ has a Hamilton cycle. \end{cor} It is worth mentioning that $\{3, 2n+1, 2n+3\}$ is the smallest Hamiltonian prime set of $G_{2n}$. 
The reason is that every vertex needs at least two edges to form a Hamilton cycle, and thus the vertex $i$ requires at least two primes in its candidate interval $[1+i, 2n+i]$ for each $1\leq i\leq 2n$. On the one hand, Corollary \ref{cor} implies that if the well-known twin prime conjecture is true, then there are infinitely many $G_{2n}$'s that have a Hamilton cycle. On the other hand, the above discussion also reveals the difficulty of proving the existence of infinitely many Hamiltonian $G_{2n}$'s: to achieve this conclusion, one needs to prove that there are infinitely many prime triples (or quadruples) satisfying certain conditions. Many similar statements concerning prime numbers have been proposed in the literature. However, most are still unsolved, the twin prime conjecture being one of the mysteries. A recent breakthrough toward the twin prime conjecture is due to Yitang Zhang, who showed in 2013 \cite{zhang} for the first time that there are infinitely many pairs of primes within a bounded distance, namely 70 million. Soon after, dozens of outstanding researchers worked together to improve Zhang's 70 million bound, bringing it down to 246 \cite{maynard,polymath}. \begin{thm}\label{246}\cite{maynard,polymath} There are infinitely many pairs of primes $(p_i,p'_i)$ such that $p'_i-p_i\leq 246$. \end{thm} Thanks to the breakthrough, we can prove the following main result by combining Theorem \ref{main} and Theorem \ref{246} with an elaborate argument. \begin{thm}\label{mainthm} There are infinitely many $G_{2n}$'s that have a Hamilton cycle. \end{thm} This paper partly improves a Number Theory result by Greenfield and Greenfield \cite{greenfield} with new insights from Graph Theory. We pose Conjecture \ref{conjecture} and show that it is true for infinitely many cases. 
Although the main result is still far from our conjecture, the value of this paper lies in calling attention to a simple, previously unnoticed property of numbers and its connection to several of the most central concepts concerning graphs and prime numbers. The rest of this paper is organized as follows. Sections 2 and 3 contain the proofs of Theorems \ref{main} and \ref{mainthm}, respectively. Finally, we conclude this paper in Section 4. \section{Proof of Theorem \ref{main}} Consider a balanced bipartite graph $B=(X\cup Y,E)$ with vertex set $V=X\cup Y$, where $|X|=|Y|$, and edge set $E$ consisting of edges with one endpoint in $X$ and the other in $Y$. Let $X=\{x_1,x_2,\cdots, x_n\}$ and $Y=\{y_1, y_2, \cdots, y_n\}$. For $0\leq i\leq n-1$, define the {\it $i$-difference set} $D_i=\{x_jy_k: j-k\equiv i ~(\text{mod}~ n)\}$, the set consisting of edges whose indices in $X$ and $Y$ differ by $i$ (mod $n$). The following result is well known in Graph Theory. \begin{thm}\label{difference} Let $B$ be a balanced bipartite graph of order $2n$, $n\geq 2$, as defined above and $s, t$ be two integers in $[0,n-1]$. Then $D_s\cup D_t$ forms a Hamilton cycle in $B$ if $\gcd(|t-s|,n)=1$. \end{thm} Recall that $E_m=\{uv\in E(G): u+v=m\}$. To prove the theorem, it suffices to show that $E_{p_1}\cup E_{2n+p_1}$ and $E_{p_2}\cup E_{2n+p_2}$ form two difference sets $D_s$ and $D_t$ for some $s$ and $t$, respectively, with $\gcd(|t-s|,n)=1$. Then, by Theorem \ref{difference}, $G_{2n}$ has a Hamilton cycle. Consider $\{1,2,\cdots,2n\}=X\cup Y$ with $X=\{x_1,x_2,\cdots, x_n\}$ and $Y=\{y_1, y_2, \cdots, y_n\}$, where $x_j=2j-1$ for $1\leq j\leq n$ and $y_k=2n-2(k-1)$ for $1\leq k\leq n$. Then $D_{\frac{p_1-1}{2}}=\{x_{j}y_{k}: j-k\equiv \frac{p_1-1}{2} ~(\text{mod}~ n)\}$ and $D_{\frac{p_2-1}{2}}=\{x_{j}y_{k}: j-k\equiv \frac{p_2-1}{2} ~(\text{mod}~ n)\}$. We first prove that $D_{\frac{p_1-1}{2}}=E_{p_1}\cup E_{2n+p_1}$. 
Assume that $x_jy_k\in D_{\frac{p_1-1}{2}}$.\\ If $j\geq k$, then we have \begin{align*} & j-k=\frac{p_1-1}{2}\\ \Leftrightarrow ~& 2n+2(j-k)+1=2n+p_1\\ \Leftrightarrow ~& \underbrace{(2j-1)}_{x_j}+\underbrace{2n-2(k-1)}_{y_k}=2n+p_1. \end{align*} The last equality implies that $x_jy_k\in E_{2n+p_1}$, and vice versa. Thus, if $x_jy_k\in E_{2n+p_1}$ and $j\geq k$, then $x_jy_k\in D_{\frac{p_1-1}{2}}$.\\ If $j<k$, then we have \begin{align*} & j-k=\frac{p_1-1}{2}-n\\ \Leftrightarrow ~& 2n+2(j-k)+1=p_1\\ \Leftrightarrow ~& \underbrace{(2j-1)}_{x_j}+\underbrace{2n-2(k-1)}_{y_k}=p_1. \end{align*} This implies that $x_jy_k\in E_{p_1}$, and vice versa. Thus, if $x_jy_k\in E_{p_1}$ and $j< k$, then $x_jy_k\in D_{\frac{p_1-1}{2}}$. Therefore, $D_{\frac{p_1-1}{2}}=E_{p_1}\cup E_{2n+p_1}$. A similar argument shows that $D_{\frac{p_2-1}{2}}=E_{p_2}\cup E_{2n+p_2}$. It is easy to see that $\displaystyle \gcd\left(\frac{p_2-1}{2}-\frac{p_1-1}{2},n\right)=1$ as $\displaystyle \gcd\left(\frac{p_2-p_1}{2},n\right)=1$. By Theorem \ref{difference}, $D_{\frac{p_1-1}{2}}\cup D_{\frac{p_2-1}{2}}$ forms a Hamilton cycle in $G_{2n}$. \hspace*{\fill}\vrule height6pt width4pt depth0pt\medskip \section{Proof of Theorem \ref{mainthm}} Suppose that there exists a number $g$ such that there are infinitely many prime pairs $(p,p')$ satisfying the condition $p'-p=g$. Take $g=12$ for example. \begin{itemize} \item[(1)] Then, there must be infinitely many prime pairs $(p,p')$ with $p'-p=12$ and satisfying one of the following forms (all possible residues of a prime mod $12$): \[ \begin{tabular}{rcl} $p$ & = & $12k+1$, \\ $p$ & = & $12k+5$, \\ $p$ & = & $12k+7$, \\ $p$ & = & $12k+11$. \\ \end{tabular} \] Notice that $12k+3$ and $12k+9$ are excluded because such numbers are divisible by $3$ and hence, apart from $3$ itself, not prime. \item[(2)] For each form, find an explicit representative as shown in Table 1. 
\begin{table}[h!]\label{tab1} \[ \begin{tabular}{c|c} Form & Representative \\ \hline $p=12k+1$ & (1,13)\\ $p=12k+5$ & (5,17)\\ $p=12k+7$ & (7,19)\\ $p=12k+11$ & (11,23)\\ \end{tabular} \] \caption{Representatives} \end{table} \item[(3)] Assume that $p=12k'+1$ is the form for which there are infinitely many prime pairs $(p,p')$ with $p'-p=12$. Then, according to Theorem \ref{main}, one can conclude that there are infinitely many $G_{2n}$'s having a Hamilton cycle by setting $p_1=11, p_2=23$, $2n+p_1=p$ and $2n+p_2=p'$. It is easy to verify that $2n=(12k'+1)-11=12k+2$ (with $k=k'-1$) and thus $n=6k+1$. Therefore, we have $\displaystyle\gcd\left(\frac{p'-p}{2},n\right)=\gcd\left(6,6k+1\right)=1$, as desired. Similarly, for each of the other forms we can conclude the same by setting $p_1$ and $p_2$ properly, as in Table 2. \begin{table}[h!]\label{tab2} \[ \begin{tabular}{l|l|l} Form & $(p_1,p_2)$ & gcd condition \\ \hline $p=12k'+1$ & (11,23) & gcd$(6,6k+1)=1$ \\ $p=12k'+5$ & (7,19) & gcd$(6,6k+5)=1$ \\ $p=12k'+7$ & (5,17) & gcd$(6,6k+1)=1$ \\ $p=12k'+11$ & (1,13) & gcd$(6,6k+5)=1$ \end{tabular} \] \caption{Match and gcd} \end{table} \item[(4)] It is known by Theorem \ref{246} \cite{maynard,polymath} that such a number $g\leq 246$ exists, although it is not known which one. We can check (1), (2), and (3) for all possibilities of $g=2,4,6,\cdots,246$ to see whether the required representatives exist. Since for each $g$ there are only finitely many cases, this process can be carried out by computer. As expected, for each $g$ and for each potential form $p=gk+t$, we successfully find a representative $(p_1,p_2)$ as a match with $(p,p')$ such that $\displaystyle \gcd\left(\frac{g}{2},\frac{g}{2}\cdot k+s\right)=1$, where $n=\frac{g}{2}\cdot k+s$ for the appropriate residue $s$. Because there are too many cases (6170 cases in total) to demonstrate in this paper, we list only the cases of $g=246$ below. The other cases can be found in the full list at http://www.davidguo.idv.tw/temp/Data.pdf. 
\end{itemize} The above discussion implies that there are infinitely many $G_{2n}$'s with a Hamilton cycle, verifying Conjecture 1 for infinitely many cases. The proof is complete.\hspace*{\fill}\vrule height6pt width4pt depth0pt\medskip \begin{table}[h!]\label{tab3} \footnotesize \[ \begin{tabular}{c|l|l||c|l|l} Form & $(p_1,p_2)$ & gcd condition & Form & $(p_1,p_2)$ & gcd condition\\ \hline $246k'+1$ & (5,251) & gcd$(123,123k+121)=1$ & $246k'+5$ & (2707,2953) & gcd$(123,123k+2)=1$\\ $246k'+7$ & (5,251) & gcd$(123,123k+1)=1$ & $246k'+11$ & (2707,2953) & gcd$(123,123k+5)=1$\\ $246k'+13$ & (5,251) & gcd$(123,123k+4)=1$ & $246k'+17$ & (2707,2953) & gcd$(123,123k+8)=1$\\ $246k'+19$ & (5,251) & gcd$(123,123k+7)=1$ & $246k'+23$ & (2707,2953) & gcd$(123,123k+11)=1$\\ $246k'+25$ & (5,251) & gcd$(123,123k+10)=1$ & $246k'+29$ & (2707,2953) & gcd$(123,123k+14)=1$\\ $246k'+31$ & (5,251) & gcd$(123,123k+13)=1$ & $246k'+35$ & (2707,2953) & gcd$(123,123k+17)=1$\\ $246k'+37$ & (5,251) & gcd$(123,123k+16)=1$& $246k'+43$ & (5,251) & gcd$(123,123k+19)=1$\\ $246k'+47$ & (2707,2953) & gcd$(123,123k+23)=1$& $246k'+49$ & (5,251) & gcd$(123,123k+22)=1$\\ $246k'+53$ & (2707,2953) & gcd$(123,123k+26)=1$& $246k'+55$ & (5,251) & gcd$(123,123k+25)=1$\\ $246k'+59$ & (2707,2953) & gcd$(123,123k+29)=1$& $246k'+61$ & (5,251) & gcd$(123,123k+28)=1$\\ $246k'+65$ & (2707,2953) & gcd$(123,123k+32)=1$& $246k'+67$ & (5,251) & gcd$(123,123k+31)=1$\\ $246k'+71$ & (2707,2953) & gcd$(123,123k+35)=1$& $246k'+73$ & (5,251) & gcd$(123,123k+34)=1$\\ $246k'+77$ & (2707,2953) & gcd$(123,123k+38)=1$& $246k'+79$ & (5,251) & gcd$(123,123k+37)=1$\\ $246k'+83$ & (991,1237) & gcd$(123,123k+38)=1$& $246k'+85$ & (5,251) & gcd$(123,123k+40)=1$\\ $246k'+89$ & (2707,2953) & gcd$(123,123k+44)=1$& $246k'+91$ & (5,251) & gcd$(123,123k+43)=1$\\ $246k'+95$ & (2707,2953) & gcd$(123,123k+47)=1$& $246k'+97$ & (5,251) & gcd$(123,123k+46)=1$\\ $246k'+101$ & (2707,2953) & gcd$(123,123k+50)=1$& $246k'+103$ & (5,251) & 
gcd$(123,123k+49)=1$\\ $246k'+107$ & (2707,2953) & gcd$(123,123k+53)=1$& $246k'+109$ & (5,251) & gcd$(123,123k+52)=1$\\ $246k'+113$ & (2707,2953) & gcd$(123,123k+56)=1$& $246k'+115$ & (5,251) & gcd$(123,123k+55)=1$\\ $246k'+119$ & (2707,2953) & gcd$(123,123k+59)=1$& $246k'+121$ & (5,251) & gcd$(123,123k+58)=1$\\ $246k'+125$ & (2707,2953) & gcd$(123,123k+62)=1$& $246k'+127$ & (5,251) & gcd$(123,123k+61)=1$\\ $246k'+131$ & (2707,2953) & gcd$(123,123k+65)=1$& $246k'+133$ & (5,251) & gcd$(123,123k+64)=1$\\ $246k'+137$ & (2707,2953) & gcd$(123,123k+68)=1$& $246k'+139$ & (5,251) & gcd$(123,123k+67)=1$\\ $246k'+143$ & (2707,2953) & gcd$(123,123k+71)=1$& $246k'+145$ & (5,251) & gcd$(123,123k+70)=1$\\ $246k'+149$ & (2707,2953) & gcd$(123,123k+74)=1$& $246k'+151$ & (5,251) & gcd$(123,123k+73)=1$\\ $246k'+155$ & (2707,2953) & gcd$(123,123k+77)=1$& $246k'+157$ & (5,251) & gcd$(123,123k+76)=1$\\ $246k'+161$ & (2707,2953) & gcd$(123,123k+80)=1$& $246k'+163$ & (5,251) & gcd$(123,123k+79)=1$\\ $246k'+167$ & (2707,2953) & gcd$(123,123k+83)=1$& $246k'+169$ & (11,257) & gcd$(123,123k+79)=1$\\ $246k'+173$ & (2707,2953) & gcd$(123,123k+86)=1$& $246k'+175$ & (5,251) & gcd$(123,123k+85)=1$\\ $246k'+179$ & (2707,2953) & gcd$(123,123k+89)=1$& $246k'+181$ & (5,251) & gcd$(123,123k+88)=1$\\ $246k'+185$ & (2707,2953) & gcd$(123,123k+92)=1$& $246k'+187$ & (5,251) & gcd$(123,123k+91)=1$\\ $246k'+191$ & (2707,2953) & gcd$(123,123k+95)=1$& $246k'+193$ & (5,251) & gcd$(123,123k+94)=1$\\ $246k'+197$ & (2707,2953) & gcd$(123,123k+98)=1$& $246k'+199$ & (5,251) & gcd$(123,123k+97)=1$\\ $246k'+203$ & (2707,2953) & gcd$(123,123k+101)=1$& $246k'+209$ & (2707,2953) & gcd$(123,123k+104)=1$\\ $246k'+211$ & (5,251) & gcd$(123,123k+103)=1$& $246k'+215$ & (2707,2953) & gcd$(123,123k+107)=1$\\ $246k'+217$ & (5,251) & gcd$(123,123k+106)=1$& $246k'+221$ & (2707,2953) & gcd$(123,123k+110)=1$\\ $246k'+223$ & (5,251) & gcd$(123,123k+109)=1$& $246k'+227$ & (2707,2953) & gcd$(123,123k+113)=1$\\ $246k'+229$ & (5,251) & 
gcd$(123,123k+112)=1$& $246k'+233$ & (2707,2953) & gcd$(123,123k+116)=1$\\ $246k'+235$ & (5,251) & gcd$(123,123k+115)=1$& $246k'+239$ & (2707,2953) & gcd$(123,123k+119)=1$\\ $246k'+241$ & (5,251) & gcd$(123,123k+118)=1$& $246k'+245$ & (2707,2953) & gcd$(123,123k+122)=1$ \end{tabular} \] \caption{The cases of $g=246$.} \end{table} \section{Conclusions} The notion of Hamilton cycles is one of the most central in modern Graph Theory, and many efforts have been devoted to obtaining sufficient conditions for Hamiltonicity. One of the oldest results is the theorem of Dirac \cite{dirac}, who showed that if the minimum degree of a graph $G$ on $n$ vertices is at least $\frac{n}{2}$, then $G$ contains a Hamilton cycle. This result is only one example of the vast body of known sufficient conditions for Hamiltonicity that mainly deal with fairly dense graphs. On the other hand, little is known about Hamilton cycles in relatively sparse graphs. As mentioned previously, the minimum degree of $G_{2n}$ is roughly $\frac{n}{\log n} < \frac{n}{2}$. So, Dirac’s result or Dirac-type results for bipartite graphs \cite{erdos,moon} do not work in our case. Things are different in random graphs. P\'{o}sa's result \cite{posa} shows that the probability that a random graph with $n$ vertices and $cn\log n$ edges, for sufficiently large $c$, contains a Hamilton cycle tends to 1 as $n$ tends to infinity. The number of edges in $G_{2n}$ is around $n^2/\log n$, which is more than the $cn\log n$ edges in P\'{o}sa's result. Although $G_{2n}$ is not a real random graph, the edges of $G_{2n}$ are determined by prime numbers, which have long been thought to be distributed `randomly' in some sense. It is worth mentioning that the sufficient condition in Theorem \ref{main} involves only four primes, and the number four is much less than $\pi(4n-1)$, the number of all candidate primes in $G_{2n}$. 
Though the sufficient condition in Theorem \ref{main} is much stronger than Conjecture 1 in this sense, we believe that it is likely to be satisfied for any even number $2n\geq 4$. To support this point, we have verified the condition for all $2n<10^8$ by a computer search. A wealth of numerical evidence supports Conjecture \ref{conjecture}, but so far a proof has eluded us. The numerical evidence for the existence of $p_2$ and $2n+p_2$ in the sufficient condition also leads us to pose the following question. \begin{center} ``Is there a prime $p<2n$ such that $2n+p$ is also a prime for any $2n\geq 4$?'' \end{center} Obviously, the prime $2n+p$ is between $2n$ and $4n$. So, this can be viewed as a generalization of Bertrand's postulate on even numbers with an extra condition. In addition, the conjecture can also be interpreted as a variant of Goldbach's conjecture that ``every even number $2n\geq 4$ is the difference of two primes $p$ and $2n+p$''. Chen's work \cite{chen} on Goldbach's conjecture also showed a related result that every even number is the difference between a prime and a product of two primes. However, it is still unknown whether every even number is the difference of two primes. According to the discussion, it seems difficult to prove Conjecture \ref{conjecture} by showing directly that the sufficient condition holds for any even number $2n\geq 4$. Another research direction is to consider an extension of the discussed problem along the lines of {\it quasi-random graphs} in the sense of Chung, Graham and Wilson \cite{chung}. Quasi-random graphs (or pseudo-random graphs) can be informally described as graphs whose edge distribution resembles closely that of a truly random graph $G(n,p)$ of the same edge density. The connection between pseudo-randomness and Hamiltonicity has been explored in several articles \cite{frieze2000,frieze2002,krivelevich}. 
Although the distribution of prime numbers in the integers is not random, the prime-sum condition in the discussed problem can be relaxed to membership in an arbitrary subset of integers, not necessarily related to primes. By choosing this subset randomly, one can construct a new class of graphs that might behave like pseudo-random graphs. We believe that proving Hamiltonicity for this new class of graphs is an interesting and challenging task, one that will lead to further interesting problems. \section*{Acknowledgments} The first author would like to thank Professor Mikl\'{o}s Simonovits for valuable suggestions.
https://arxiv.org/abs/1804.07104
From a Consequence of Bertrand's Postulate to Hamilton Cycles
A consequence of Bertrand's postulate, proved by L. Greenfield and S. Greenfield in 1998, assures that the set of integers $\{1,2,\cdots, 2n\}$ can be partitioned into pairs so that the sum of each pair is a prime number for any positive integer $n$. Cutting through it from the angle of Graph Theory, this paper provides new insights into the problem. We conjecture a stronger statement that the set of integers $\{1,2,\cdots, 2n\}$ can be rearranged into a cycle so that the sum of any two adjacent integers is a prime number. Our main result is that this conjecture is true for infinitely many cases.
https://arxiv.org/abs/1912.02717
Investigating transversals as generating sets for groups
In [3] it was shown that for any group $G$ whose rank (i.e., minimal number of generators) is at most 3, and any finite index subgroup $H\leq G$ with index $[G:H]\geq rank(G)$, one can always find a left-right transversal of $H$ which generates $G$. In this paper we extend this result to groups of rank at most 4. We also extend this to groups $G$ of arbitrary (finite) rank $r$ provided all the non-trivial divisors of $[G:core_G(H)]$ are at least $2r-1$. Finally, we extend this to groups $G$ of arbitrary (finite) rank provided $H$ is malnormal in $G$.
\section{Introduction} Given a group $G$ and a subgroup $H \leq G$ of finite index, one can consider sets $S \subset G$ which are \emph{left} (resp.\ \emph{right}) \emph{transversals} of $H$ in $G$; that is, complete sets of left (resp.\ right) coset representatives. One can then consider \emph{left-right transversals}: sets which are simultaneously a left transversal and a right transversal for $H$ in $G$. While it is immediate in the case of finite index subgroups that left (resp.\ right) transversals exist (the case of infinite index subgroups is more complicated; existence of left or right transversals is equivalent to the axiom of choice, as shown in \cite[Theorem 2.1]{choice}), one might then ask the following question: \begin{question} Given a finitely generated group $G$ and a finite index subgroup $H$, does there always exist a left-right transversal of $H$ in $G$? \end{question} It turns out that such a transversal always exists, by an application (see \cite{cosetgraph}) of Hall's Marriage Theorem to the \emph{coset intersection graph} $\Gamma_{H,H}^{G}$ of $H$ in $G$ (\Cref{defn:coset-intersection}); a graph whose vertex set is the disjoint union of the left and the right cosets of $H$ in $G$, with edges between vertices whenever the corresponding cosets intersect. $\Gamma_{H, H}^{G}$ is thus bipartite, as the left cosets are mutually disjoint, and likewise the right cosets. A proof without using Hall's Theorem was given in \cite[Theorem 3]{cosetgraph}. In the context of a finitely generated group $G$, by defining $\rank(G)$ to be the smallest size of a generating set for $G$, one might ask the following question relating generating sets to transversals: \begin{question} Given a finitely generated group $G$ and a finite index subgroup $H$ with $\rank(G)\leq[G:H]$, does there always exist a left transversal of $H$ which generates $G$? \end{question} It is also true that such a transversal always exists, as shown in \cite[Theorem 3.7]{transgen}.
Clearly the converse statement is also true: if there exists a left transversal of $H$ which generates $G$, then $\rank(G)\leq [G:H]$. So we now have a necessary and sufficient condition for a finite index subgroup $H$ of a finitely generated group $G$ to have a left transversal which generates $G$; namely, that $\rank(G)\leq [G:H]$. Moreover, by taking inverses, and noting that the element-wise inverse of a left transversal is a right transversal, we see that the same condition holds for the existence of right transversals as generating sets. Combining all the observations so far, one might now consider the following more general question, which forms the main motivation of this paper: \begin{question}\label{ques: motivating} Given a finitely generated group $G$ and a finite index subgroup $H$ with $\rank(G)\leq[G:H]$, does there always exist a left-right transversal of $H$ which generates $G$? \end{question} This question was first studied in \cite{transgen}, where it was shown in \cite[Theorem 3.11]{transgen} that such a transversal always exists under the additional hypothesis that $\rank(G)\leq 3$. To achieve this, the authors introduced a new technique called \textit{shifting boxes}. This involves using the transitive action of a group $G$ on the set of left (or right) cosets of a subgroup $H \leq G$ to apply Nielsen transformations to a generating set of $G$ so that the resulting generators lie inside (or outside) particular desired cosets of $H$. A study of the graph $\Gamma_{H,H}^{G}$ was conducted in \cite{cosetgraph}, and it was shown that the components are always complete bipartite graphs. This description of $\Gamma_{H,H}^{G}$ gave rise to a combinatorial model of coset intersections known as `chessboards'. Chessboards provide an approach to answering versions of \Cref{ques: motivating}, as was done in \cite{transgen} and as we do throughout this paper.
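Before proceeding, it may help to see the smallest non-abelian instance of \Cref{ques: motivating} resolved by brute force. The following sketch is our own illustration (the choice $G=S_3$, the subgroup $H$ generated by the transposition swapping the first two points, and all function names are ours, not from \cite{transgen}); here $\rank(S_3)=2\leq[S_3:H]=3$, and a generating left-right transversal is found by exhaustive search:

```python
from itertools import permutations, product

# S_3 as permutations of {0,1,2}, each encoded by its tuple of images.
G = list(permutations(range(3)))

def mul(p, q):
    # Composition (p*q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

H = [(0, 1, 2), (1, 0, 2)]  # subgroup generated by a transposition; index 3

left_cosets = {frozenset(mul(g, h) for h in H) for g in G}
right_cosets = {frozenset(mul(h, g) for h in H) for g in G}
right_coset_of = {g: rc for rc in right_cosets for g in rc}

def generates(S):
    # Close S and its inverses under multiplication; compare with G.
    gen = set(S) | {inv(s) for s in S}
    while True:
        new = gen | {mul(a, b) for a in gen for b in gen}
        if new == gen:
            return len(gen) == len(G)
        gen = new

# Pick one representative per left coset; keep the choice if the
# representatives also lie in pairwise distinct right cosets and generate G.
generating_lr_transversal = next(
    T for T in product(*(sorted(c) for c in left_cosets))
    if len({right_coset_of[t] for t in T}) == len(right_cosets) and generates(T)
)
print(generating_lr_transversal)
```

The search space here is tiny (two choices per left coset), yet the same brute-force scheme makes the content of the question concrete: a transversal must be checked against left cosets, right cosets, and generation simultaneously.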
In particular, we use these techniques, as done in \cite[Theorem 3.11]{transgen}, to relax the hypothesis of $\rank(G)\leq 3$ to $\rank(G)\leq 4$. That is, we show as the first main result of this paper that: \begin{thm}\label{thm: rank4case} Let $G$ be a group of rank $4$ with a finite index subgroup $H$, such that $[G:H] \geq 4$. Suppose $S$ is a set of $4$ elements which generates $G$. Then there exists a sequence of Nielsen transformations taking $S$ to a new set $\tilde{S}$, such that the elements of $\tilde{S}$ may be extended to a left-right transversal of $H$ in $G$. \\Hence, given a finitely generated group $G$ of rank $4$ and a finite index subgroup $H$, there exists a left-right transversal of $H$ which generates $G$ if and only if $\rank(G)\leq[G:H]$. \end{thm} Notice that in the above theorem we speak of a much stricter condition on the left-right transversal found. Rather than showing existence of \textit{some} left-right transversal which generates the group, we instead take a generating set $S$ and Nielsen-transform it to a new generating set $S'$ which \textit{extends} to a left-right transversal (that is, $S'$ is a subset of a left-right transversal). As it turns out, all of our results making progress on \Cref{ques: motivating} are of this form: taking a generating set $S$ and Nielsen-transforming it to one which extends to a left-right transversal. We discuss this in more detail in \Cref{sec:finite-setting}, where we re-phrase our motivating question as \Cref{main Q}. The technique of shifting boxes, as applied to chessboards, also allows us to make progress on \Cref{ques: motivating} under different additional hypotheses, this time invoking divisibility conditions on various subgroup indices.
Taking $\core_G(H) := \cap_{g \in G}gHg^{-1}$ to be the \textit{core} of the subgroup $H$ in $G$ (that is, the largest normal subgroup of $G$ contained in $H$), we state another main result of this paper which makes further progress on \Cref{ques: motivating}. \begin{thm} \label{thm: divisor_thm} Let $G$ be a group of rank $r$ with a finite index subgroup $H$, such that $[G:H] \geq r$. Suppose that each non-trivial divisor of $[G:\core_G(H)]$ is at least $2r-1$. Then any generating set $S$ having size $r$ may be Nielsen-transformed to a set $\tilde{S}$ that may be extended to a left-right transversal of $H$ in $G$. \end{thm} Unfortunately the method of shifting boxes becomes combinatorially intractable when the rank of the underlying group gets sufficiently large. Hence other techniques become necessary. In this paper we introduce one such technique, which we call \textit{L-spins} applied to \textit{configurations} of multisets. A configuration of a multiset $S\subset G$ is simply a selection of some rows and columns of a chessboard of $H\leq G$ and the corresponding elements of $S$ lying in the intersections of these rows and columns (\Cref{defn:configuration}); one might think of this as a ``minor'' of a chessboard. And an L-spin is a way to re-arrange, via Nielsen transformations, this multiset on the configuration (\Cref{defn:L-spin}). We use these new techniques to prove the following version of \Cref{ques: motivating} with the added hypothesis that $H\leq G$ is a malnormal subgroup: \begin{thm} \label{thm:malnormal} Given a malnormal $H\leq G$ of finite index and a generating multiset $S$ of size $[G:H]$, $S$ can be Nielsen transformed to a left-right transversal. \end{thm} Of course, only finite groups have finite index malnormal subgroups, as discussed in \Cref{sec:applications}, so the above theorem is only relevant for finite groups. 
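The core is straightforward to compute directly from its definition as an intersection of conjugates. As a toy illustration (our own, again taking $G=S_3$ and $H$ generated by a transposition), the following sketch computes $\core_G(H)$ and the index $[G:\core_G(H)]$ appearing in \Cref{thm: divisor_thm}:

```python
from itertools import permutations

# S_3 as permutations of {0,1,2}; core_G(H) = intersection of all gHg^{-1}.
G = list(permutations(range(3)))

def mul(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

H = {(0, 1, 2), (1, 0, 2)}  # generated by a single transposition

core = set(H)
for g in G:
    conjugate = {mul(mul(g, h), inv(g)) for h in H}
    core &= conjugate

# Here the conjugates of H are the three order-2 subgroups generated by the
# three transpositions, so the core is trivial and [G:core_G(H)] = 6.
print(core, len(G) // len(core))
```

In this example $[G:\core_G(H)]=6$ has the non-trivial divisors $2$ and $3$, so with $r=2$ the hypothesis of \Cref{thm: divisor_thm} (all non-trivial divisors at least $2r-1=3$) fails on account of the divisor $2$; the divisibility condition is genuinely restrictive even in small cases.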
Note that the case when $H$ is normal is easily resolved in the affirmative, as left and right cosets of normal subgroups match up, so one can immediately apply \cite[Theorem 3.7]{transgen}. Moreover, we resolve the special case of \Cref{ques: motivating} where $H$ is very close to normal (that is, when $[H:xHx^{-1}\cap H]\leq 2$ for all $x\in G$), again in the affirmative, in \Cref{prop:almost-normal}. Thus we see that if $H\leq G$ is very close to normal or very far from normal, then our motivating question is resolved in the affirmative in each case. So it is the instances where $H$ is somewhere between normal and malnormal where further investigation could be done, as are the instances where $\rank(G)>4$. This paper is laid out as follows: \\In \Cref{sec:graphs} we give an overview of the structure results and techniques introduced in \cite{cosetgraph, transgen} on coset intersection graphs and chessboards. In \Cref{sec:shifting} we generalise some of the shifting boxes techniques from \cite{transgen} to obtain further results which become our main approach to answering questions about left-right transversals as generating sets. We then use these new results to prove \Cref{thm: divisor_thm}. In \Cref{sec:rank4} we apply the new techniques on shifting boxes developed in \Cref{sec:shifting} to prove the rank-4 case of our motivating question; this is \Cref{thm: rank4case}. This proof is done in a similar style to the proof of the rank-3 case given in \cite[Theorem 3.11]{transgen}; as a sequence of reductions to various sub-cases depending on where certain generators lie in the chessboards of $H\leq G$. In \Cref{sec:finite-setting} we focus on the special case of \Cref{ques: motivating} where $G$ is finite, looking at additional hypotheses on $H\leq G$. Namely, when $H$ is cyclic (\Cref{lem:cyclic-nielsen}) or isomorphic to $C_{p}\times C_{p}$ for $p$ prime (\Cref{lem:CpxCp}), answering \Cref{ques: motivating} in the affirmative in all of these cases.
In \Cref{sec: configurations} we define configurations and L-spins, and then use these to show some normal form theorems for configurations. We use these results to answer \Cref{ques: motivating} under the additional hypothesis that $H$ is very close to normal in $G$ (\Cref{prop:almost-normal}) and when $H$ is malnormal in $G$ (\Cref{thm:malnormal}). \section{Coset intersection graphs and transversals as generating sets}\label{sec:graphs} The work in Sections \ref{sec:graphs}, \ref{sec:shifting} and \ref{sec:rank4} is a direct extension of work done by Button, Chiodo and Zeron-Medina in the two papers \cite{cosetgraph, transgen}. In \cite{cosetgraph}, the coset intersection graphs associated to a group $G$ were studied: \begin{defn}\label{defn:coset-intersection} For a group $G$ with finite index subgroups $H,K \leq G$, the associated \emph{coset intersection graph}, $\Gamma_{H,K}^{G}$, is the bipartite graph on vertex set $V = \{gH \ | \ g \in G\}\sqcup\{Kg \ | \ g \in G\}$ (where we treat $gH$ and $Kg$ as different vertices even if $gH=Kg$ as sets), with an edge joining two cosets with non-empty intersection in $G$. \end{defn} \begin{thm}{{\cite[Theorem 3]{cosetgraph}}} Let $G$ be a group and $H,K \leq G$ of finite index. Then $\Gamma_{H,K}^G$ is a disjoint union of components of the form $K_{s_i,t_i}$, where $\frac{t_i}{s_i}$ is independent of $i$. \end{thm} The main application of this result is to the case when $H = K$, where it says that every right $H$ coset contained in a double coset $HgH$ intersects every left $H$ coset in the double coset. Pictorially, one can consider $G$ as a disjoint union of square grids corresponding to double cosets of $H$, where the columns and rows of each grid respectively correspond to left and right $H$ cosets contained in the double coset, and each square in each grid corresponds to a non-empty intersection of the cosets represented by the associated row and column; see \Cref{fig:chessboard1}.
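This complete-bipartite (``no empty squares'') structure can be checked mechanically in a small case. The following sketch is our own illustration, for $G=S_3$ and $H$ generated by a transposition; it verifies that within each double coset of $H$, every left coset meets every right coset:

```python
from itertools import permutations

# G = S_3 as permutations of {0,1,2}; H = a subgroup of order 2.
G = list(permutations(range(3)))

def mul(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

H = [(0, 1, 2), (1, 0, 2)]

double_cosets = {frozenset(mul(mul(h1, g), h2) for h1 in H for h2 in H)
                 for g in G}
left_cosets = {frozenset(mul(g, h) for h in H) for g in G}
right_cosets = {frozenset(mul(h, g) for h in H) for g in G}

for D in double_cosets:
    lefts = [L for L in left_cosets if L <= D]
    rights = [R for R in right_cosets if R <= D]
    # Every row/column pair of the chessboard of D is a non-empty square.
    assert all(L & R for L in lefts for R in rights)
    print(len(rights), "x", len(lefts), "chessboard; double coset of size", len(D))
```

Here there are two double cosets: $H$ itself (a $1\times 1$ chessboard) and one double coset of size $4$ (a $2\times 2$ chessboard), matching the square-grid picture described above.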
In \cite{transgen}, these square grids are referred to as \textit{chessboards}. We will work extensively with chessboards in this paper, and from here on, in keeping with the notation in \cite{cosetgraph, transgen}, we will always take rows to represent right cosets and columns to represent left cosets. \begin{figure}[h] \centering \begin{center} \begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm] \foreach \x in {0,1,...,3} { \draw[line width = 1pt] (1+\x,0) -- (1+\x, 3); } \foreach \y in {0,1,...,3} { \draw[line width = 1pt] (1, \y) -- (4, \y); } \draw (0.6, 0.4) node {$Hg_3$}; \draw (0.6, 1.4) node {$Hg_2$}; \draw (0.6, 2.4) node {$Hg_1$}; \draw (1.5, 3.3) node {$g_1'H$}; \draw (2.5, 3.3) node {$g_2'H$}; \draw (3.5, 3.3) node {$g_3'H$}; \draw (1.5, 2.5) node[scale = 0.7] {$g_1'H \cap Hg_1$}; \draw (2.5, 2.5) node[scale = 0.7] {$g_2'H \cap Hg_1$}; \draw (3.5, 2.5) node[scale = 0.7] {$g_3'H \cap Hg_1$}; \end{tikzpicture} \end{center} \caption{Example of chessboard representing $Hg_1H$.} \label{fig:chessboard1} \end{figure} The main results of this paper concern \textit{transversals}. \begin{defn} Given a group $G$ and $H\leq G$, a set $S \subset G$ is called a \emph{left} (resp.\ \emph{right}) \emph{transversal} for $H$ in $G$ if it is a collection of representatives of all of the left (resp.\ right) cosets in $G$. A \emph{left-right transversal} is a set which is simultaneously a left and a right transversal for $H$ in $G$. We say a transversal $S$ is a \emph{generating transversal} if it generates $G$ as a group. \end{defn} A well-known application of Hall's marriage theorem implies that every finite index subgroup possesses a simultaneous left-right transversal. In \cite{cosetgraph}, it was noted that a left-right transversal of a finite index subgroup $H \leq G$ can be obtained directly, simply by choosing an element from each diagonal entry of each chessboard of $H$.
The ease and directness of this observation suggest that chessboards may be a useful tool to answer related questions. The question approached in this paper is whether a finitely generated group $G$ necessarily has a generating set contained in a left-right transversal of a given finite index subgroup $H$. The direction of approach is `constructive' via chessboards, in the style of the above paragraph. In particular, using chessboards, we study how the positions (relative to cosets of $H$) of elements of a multiset\footnote{The reason why multisets are used instead of sets is that a Nielsen transformation of a set may result in multiple occurrences of a group element, which we do not want to discard. We do not use $n$-tuples because the order of the elements plays no role.} $S \subset G$ change under Nielsen moves. \begin{defn} Given a multiset of elements $S \subset G$, a multiset $\tilde{S}$ is said to be obtained from $S$ via a \textit{Nielsen move} if it is obtained by one of the following procedures: \begin{enumerate} \item replacing some $g \in S$ with $g^{-1}$; \item replacing some $g \in S$ with $hg$ or $gh$ for some $h \in S\smallsetminus\{g\}$. \end{enumerate} Two multisets are \textit{Nielsen equivalent} if they can be obtained from one another via a finite sequence of Nielsen moves; such a finite sequence is referred to as a \textit{Nielsen transformation}. \end{defn} To illustrate how chessboards provide a method of viewing `transversality' of generating sets, consider the most basic result in this direction, established in {{\cite[Theorem 3.7]{transgen}}}. The proof is included because many arguments in this paper will be of a similar style. \begin{prop} \label{prop: left-clean} Let $G$ be a group with finite index subgroup $H$, and take a multiset $S \subset G$ whose elements generate $G$.
If $[G:H] \geq |S|$, then $S$ is Nielsen equivalent to a multiset with entries in distinct left cosets of $H$ (in particular, there exists a left transversal of $H$ generating $G$). \end{prop} \begin{defn} We shall refer to a coset as \emph{empty} if it contains no elements of $S$. \end{defn} \begin{proof} Left cosets of $H$ correspond to columns in the chessboards. Two elements of $S$ belong to the same left coset of $H$ if they belong to the same column of a chessboard. Suppose $s, s' \in S$ belong to the same column. Then $s'$ may be replaced by $s^{-1}s'$ (corresponding to a pair of Nielsen moves), which now belongs to $H$. Hence we may suppose every element of $S$ either belongs to $H$ (a $1\times 1$ chessboard), or to a column which contains no other entries of $S$. An example of such a configuration is shown in \Cref{fig:Box_Shifting_sec2}, with the dots representing elements of $S$. \begin{figure}[h] \centering \begin{center} \begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm] \foreach \y in {0,1} { \draw [line width = 1pt] (0,\y+1.5) -- (1,\y+1.5); } \foreach \x in {0,1} { \draw [line width = 1pt] (\x,1.5) -- (\x,2.5); } \foreach \y in {0,1,...,3} { \draw [line width = 1pt] (2.5,\y+0.5) -- (5.5,\y+0.5); } \foreach \x in {0,1,...,3} { \draw [line width = 1pt] (\x+2.5,0.5) -- (\x+2.5,3.5); } \foreach \y in {0,1,...,3} { \draw [line width = 1pt] (7,\y+0.5) -- (10,\y+0.5); } \foreach \x in {0,1,...,3} { \draw [line width = 1pt] (\x+7,0.5) -- (\x+7,3.5); } \draw [fill = black] (0.5, 2.25) circle (3.5pt); \draw [fill = black] (0.5, 1.75) circle (3.5pt); \draw [fill = black] (3, 3) circle (3.5pt); \draw [fill = black] (4, 3) circle (3.5pt); \draw [fill = black] (5, 2) circle (3.5pt); \draw [fill = black] (7.5, 1) circle (3.5pt); \draw [fill = black] (8.5, 2) circle (3.5pt); \end{tikzpicture}
\end{center} \caption{A configuration in which elements outside of $H$ lie in distinct columns of chessboards.} \label{fig:Box_Shifting_sec2} \end{figure} The final stage of the proof is to `extract', via Nielsen moves, elements of $S\cap H$ into columns not yet containing elements of $S$. $S$ generates $G$ (a property preserved under Nielsen equivalence) and $G$ acts transitively on the columns of the chessboards (i.e.\ on left cosets of $H$); because $[G:H] \geq |S|$, there is at least one empty column provided $|S\cap H| \geq 2$ (if this is not the case, then the elements of $S$ already lie in distinct columns). Suppose $gH$ is an empty column. It is possible to write $g = t_{1}^{\epsilon_1}\ldots t_{n}^{\epsilon_n}$, where $t_i \in S$, $\epsilon_i \in \{1,-1\}$ for each $i$. By taking $n$ minimal, it follows that there is an empty column of the form $s_2^{\epsilon}s_1H$, where $s_2,s_1 \in S$ (this works even when $n = 1$ because $S\cap H$ is non-empty). Because $|S\cap H| \geq 2$, we may find $h \in S\cap H$ such that $h \neq s_2$. Then the replacement $h \mapsto s_2^{\epsilon}s_1h$ is a composition of Nielsen moves, and $s_2^{\epsilon}s_1h$ belongs to the empty column. This can be repeated until the multiset has only one entry in $H$. To obtain the parenthetical claim in the proposition, note that the set $S'$ obtained from $S$ by the above sequence of Nielsen moves still generates $G$, and may be extended to a left transversal of $H$ by adjoining elements from the remaining empty columns of the chessboards. \end{proof} There were two main steps in the above proof. The first was the `cleaning' procedure: performing Nielsen moves until no two elements of $S$ shared a column outside of $H$. The second was the `extraction' procedure, Nielsen-moving elements of $S\cap H$ outside of $H$, preserving the property that no two elements share a column outside of $H$. \begin{defn} Let $H \leq G$ be of finite index, and take $S \subset G$.
We say $S$ is \emph{left-clean} relative to $H$ if $S\smallsetminus H$ is non-empty and its elements lie in distinct left cosets of $H$, and so distinct columns of the chessboards. We similarly define \emph{right-clean}, and \emph{left-right clean}. \end{defn} Given $H$, in addition to a multiset being clean we will also refer to a set of elements being \emph{diagonal}. A set is diagonal if no two elements belong to the same left or right coset of $H$. This terminology is motivated by chessboard diagrams: with appropriate ordering of columns and rows, a diagonal set can be drawn as belonging only to the diagonal of each board. \begin{defn} Let $S \subset G$ be a multiset which is left-clean relative to $H$. A \emph{left-extraction} of an element of $S$ is a Nielsen transformation $S \rightarrow S'$ such that $|S'\cap H| = |S\cap H|-1$ and if $S$ is left-clean, then $S'$ is. We can define \emph{right-extraction} analogously. For the majority of this paper we will be interested in \emph{left-right extraction} and when this is clear from the context we will omit the `left-right'. \end{defn} It is clear that \Cref{prop: left-clean} presents a procedure to Nielsen transform a given multiset into a left-clean set, and a general left-extraction procedure. Furthermore it is clear that the proposition holds with `right' replacing `left' throughout. In particular, given a multiset $S$, one may combine the left- and right- cleaning procedures (e.g. perform them in turn) to give a left-right cleaning procedure. However, it is not obvious how to combine the left- and right- extraction processes to give a left-right extraction process. Such a general procedure would, by modifying the argument in the proposition, result in a proof that any finitely generated group has a left-right transversal (of any subgroup $H$ with sufficiently large finite index) which generates the group. The next section of the paper gives some general left-right extraction techniques. 
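Nielsen moves are also easy to experiment with computationally. The following sketch is our own illustration: it applies a move of each type to a generating multiset of $S_3$ and confirms that the generated subgroup is unchanged, the invariance used repeatedly in the cleaning and extraction procedures above:

```python
from itertools import permutations

# S_3 as permutations of {0,1,2}.
G = list(permutations(range(3)))

def mul(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def span(S):
    # Subgroup generated by the multiset S.
    gen = set(S) | {inv(s) for s in S}
    while True:
        new = gen | {mul(a, b) for a in gen for b in gen}
        if new == gen:
            return frozenset(gen)
        gen = new

a, b = (1, 0, 2), (0, 2, 1)   # two transpositions generating S_3
S = [a, b]

# Type-2 move: replace b by a*b; then a type-1 move: replace a by a^{-1}.
S1 = [a, mul(a, b)]
S2 = [inv(a), mul(a, b)]

assert span(S) == span(S1) == span(S2) == frozenset(G)
print("Nielsen moves preserved the generated subgroup")
```

Since each move is invertible by a move of the same kind, Nielsen equivalence is indeed an equivalence relation on multisets, and `span` is constant on each equivalence class.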
Before proceeding with this, it is noted that when $S$ has few elements, it is possible to explicitly devise ad-hoc left-right extraction procedures. Such procedures were presented in \cite{transgen} (the method of `shifting boxes'), and an example of a result obtained is given below. \begin{thm}{{\cite[Theorem 3.11]{transgen}}} Let $G$ be a group, $S$ a generating set for $G$ such that $|S| \leq 3$, and $H \leq G$ a subgroup of finite index such that $|S| \leq [G:H]$. Then $S$ is Nielsen-equivalent to a set $S'$ which is contained in a left-right transversal of $H$ in $G$. \end{thm} \section{Results on shifting boxes, and a proof of \Cref{thm: divisor_thm}}\label{sec:shifting} In this section we present some left-right extraction procedures which are possible under certain conditions, as well as make some definitions which will be useful in the proof of \Cref{thm: rank4case}. To conclude the section, we apply our new extraction procedures to give a range of groups and subgroups for which any generating set of the group may be Nielsen transformed into a set which extends to a left-right transversal of the subgroup, finishing with a proof of \Cref{thm: divisor_thm}. Taking $H\leq G$, our first definition here is motivated by viewing chessboards as orbits of $H$ acting on its coset space $G/H$ by multiplication. We can visualise the left (respectively right) action of $H$ on $G/H$ (respectively $H \backslash G$) via the permutations it induces on the columns (respectively rows) of each chessboard. The same is true if we further restrict the action to $K \leq H$, however in this case whilst clearly the action still preserves the chessboards, it needn't be transitive on the columns/rows of a given chessboard. We will be interested in the case $K = \gen{S \cap H}$, where $S$ is a multiset of elements which we are trying to left-right extract from. \begin{defn} Let $K \leq H \leq G$, and suppose $S$ is a multiset of elements in $G$. 
With $K$ acting by left-multiplication on $G/H$, a $K$-orbit of a left coset of $H$ is \emph{sparse} if it contains a coset which doesn't contain an element of $S$. We also take the corresponding definition of sparseness for the right-multiplication action of $K$. \end{defn} For example, if we consider $K$ acting on the left, then saying $gH$ has a sparse $K$-orbit means we can find $k \in K$ such that $kgH$ is represented by a column (in the same chessboard as $gH$, as $K\leq H$) with no element of $S$ belonging to this new column. The following lemma gives a left-right extraction technique which works provided $\gen{S \cap H}$ has left \emph{and} right sparse orbits in some chessboard. \begin{rem}\label{rem: natural_condition} To see why this is a natural condition, consider the simple scenario where there exists $g \in S\smallsetminus H$ and distinct $h$, $ h'\in S \cap H$ with $hgH$ and $Hgh'$ representing a column and row (respectively) which do not contain any element of $S$. Then combining the pair of Nielsen moves: $$h \mapsto hg$$ $$hg \mapsto hgh'$$ constitutes a left-right extraction of an element (in this case $h$) from $S$. \end{rem} \begin{lem}[Extraction Lemma] \label{lem: EL} Let $G$ be a group, $H\leq G$ be of finite index, and $S\subset G$ a left-right clean multiset such that $|S| \leq [G:H]$. Suppose all of the following hypotheses hold: \begin{enumerate} \item $|S \cap H|\geq 2$ \item There exists $g \in S$ such that $gH$ and $Hg$ have sparse $\gen{S \cap H}$-orbits \end{enumerate} Then it is possible to left-right extract an element from $S \cap H$. \end{lem} \begin{proof} Let $S\cap H = \set{h_1,\ldots,h_a}$ and $S\smallsetminus H = \set{g_1, \ldots, g_b}$ (note that $S$ is a multiset so $|S\cap H| = a \geq 2$, but distinct indices do not necessarily correspond to distinct elements). 
The first aim is to show that suitable Nielsen transformations reduce the hypotheses to a situation which is only slightly harder than the simple scenario presented in \Cref{rem: natural_condition}; that is, we are looking for an element $g \in S\smallsetminus H$, elements $h_l, h_k \in H\cap S$ and $\delta, \epsilon = \pm 1$ such that the cosets $h_{l}^{\delta}gH$, $Hgh_{k}^{\epsilon}$ are empty. We do it in two stages: we first find $g_1'$, $h_k$, $\epsilon$ such that $Hg_1'h_k^{\epsilon}$ is empty. Then we modify the element $g_1'$ to $g_1''$ by a Nielsen transformation, and find an $h_l$ and $\delta$ such that $h_l^{\delta}g_1''H$ is also empty. Taking $g_1$ satisfying the second condition of the lemma, there exist $h_{i_{1}},\ldots, h_{i_{n}}$ (not necessarily distinct indices) and $\epsilon_{1},\ldots, \epsilon_{n} \in \set{-1, 1}$ such that $$Hg_{1}h_{i_{1}}^{\epsilon_{1}}\ldots h_{i_{n}}^{\epsilon_{n}}$$ does not contain any elements of $S$. We may assume $n$ is minimal. If $n=1$ we proceed to the next step. Suppose then that $n > 1$, so by minimality we have $$Hg_{1}h_{i_{1}}^{\epsilon_{1}}\ldots h_{i_{n-1}}^{\epsilon_{n-1}} = Hg_{j} \ \ \ \ \textnormal{ for some } j \neq 1$$ So we now make the pair of Nielsen transformations: $$ g_{1} \mapsto g_{1}h_{i_{1}}^{\epsilon_{1}}\ldots h_{i_{n-1}}^{\epsilon_{n-1}} =: g_{1}'$$ $$ g_{j} \mapsto g_{j}h_{i_{n-1}}^{-\epsilon_{n-1}}\ldots h_{i_{1}}^{-\epsilon_{1}}=: g_{j}'$$ labeling the new elements of $S$ as $g_{1}', g_{j}'$ respectively.
To see pictorially what has happened (\Cref{fig:swap_rows}), the rows of the elements $g_{1}$, $g_{j}$ have been `swapped' and the columns have remained unchanged: \begin{figure}[h] \centering \begin{center} \begin{tikzpicture}[line cap=round,line join=round,x=1.2cm,y=1.2cm] \foreach \y in {0,1,...,3} { \draw [line width = 1pt] (0,\y+0.5) -- (3,\y+0.5); } \foreach \x in {0,1,...,3} { \draw [line width = 1pt] (\x,0.5) -- (\x,3.5); } \draw[->, line width = 2pt] (3.5,2) -- (4,2); \foreach \y in {0,1,...,3} { \draw [line width = 1pt] (4.5,\y+0.5) -- (7.5,\y+0.5); } \foreach \x in {0,1,...,3} { \draw [line width = 1pt] (\x+4.5,0.5) -- (\x+4.5,3.5); } \draw (0.5,3) node {$g_1$}; \draw (1.5, 2) node {$g_j$}; \draw (5,2) node {$g_1'$}; \draw (6,3) node {$g_j'$}; \end{tikzpicture} \end{center} \caption{Nielsen transformation preserving columns but swapping rows.} \label{fig:swap_rows} \end{figure} Note that this preserves left-right cleanliness, and since the columns are unchanged, the left cosets, and in particular their orbits, are unchanged. So $g_{1}'H = g_{1}H$ has a sparse $\gen{S\cap H}$-orbit. As before this says that there are indices $j_{1},\ldots, j_{m}$ and signs $\delta_{1},\ldots,\delta_{m}$ such that $$h_{j_{1}}^{\delta_{1}}\ldots h_{j_{m}}^{\delta_{m}}g_{1}'H$$ does not contain an element of $S$. We now repeat the previous argument but on the left. Suppose $m$ is minimal. If $m = 1$, we have found $h_{j_{1}}^{\delta_{1}}g_{1}'H$, $Hg_{1}'h_{i_{n}}^{\epsilon_{n}}$ both empty. Otherwise $m > 1$ and so $$ h_{j_{2}}^{\delta_{2}}\ldots h_{j_{m}}^{\delta_{m}}g_{1}'H = g_{k}H$$ where $g_{k}$, $g_{1}'$ are distinct elements of the multiset.
Then we make the following Nielsen transformations: $$ g_{1}' \mapsto h_{j_{2}}^{\delta_{2}}\ldots h_{j_{m}}^{\delta_{m}}g_{1}' =:g_{1}''$$ $$ g_{k} \mapsto h_{j_{m}}^{-\delta_{m}}\ldots h_{j_{2}}^{-\delta_{2}}g_{k}$$ Once again we can see that this doesn't alter rows, but swaps the columns. In particular it preserves left-right cleanliness. Now, we know that the cosets $h_{j_1}^{\delta_1} g_1''H$ and $Hg_1''h_{i_n}^{\epsilon_n}$ are empty. If $j_1 \ne i_n$, then $h_{j_1}^{\delta_1}g_1''h_{i_n}^{\epsilon_n}$ lies in an empty row and empty column, so we can perform the desired extraction $$h_{j_1} \mapsto h_{j_1}^{\delta_1}\cdot g_1'' \mapsto h_{j_1}^{\delta_1}g_1''\cdot h_{i_n}^{\epsilon_n}$$ Thus, let's assume without loss of generality that $j_1 = i_n = 1$ and let's rename $\delta_1 =: \delta$, $\epsilon_n =: \epsilon$ and $g_1'' =: g_1$ (for simplicity). Now the empty cosets are $h_{1}^{\delta}g_{1}H$ and $Hg_{1}h_{1}^{\epsilon}$. Take $h_2$ (we can do this since $|S\cap H| \ge 2$) and consider the element $h_{2}g_{1}h_{1}^{\epsilon}$. If this lies in a left coset representing an empty column, then we're done (since we extract $h_2 \mapsto h_2 \cdot g_1 \mapsto h_2g_1\cdot h_1^\epsilon$). Otherwise $h_{2}g_{1}H = g_{r}H$ for some $r$. This case requires two Nielsen transformations but the order depends on whether $r = 1$ or $r \neq 1$. Both of the cases are illustrated with diagrams (Figures~\ref{fig:second_swap} and~\ref{fig:first_swap}), to help visualise the underlying process. The Nielsen transformations are represented with dashed arrows, whereas the elements in the chessboards which are not members of $S$ are underlined. We use this underlining notation from this point onwards in the rest of the paper. If $r = 1$, we make the following Nielsen moves, as illustrated in \Cref{fig:second_swap}. 
$$h_{2} \mapsto h_{2}\cdot g_{1} \mapsto h_{2}g_{1} \cdot h_{1}^{\epsilon}$$ $$g_{1} \mapsto h_{1}^{\delta} \cdot g_{1}$$ Here, since $Hg_1h_1^\epsilon$ is an empty row, it is different from $Hg_1$. Thus $h_2g_1h_1^\epsilon$ lies in a row different to the one that $g_1$ lies in. Similarly, $h_1^\delta g_1$ lies in a column different to that of $g_1$. \begin{figure}[h] \centering \begin{center} \renewcommand{0}{3} \renewcommand{0}{3} \begin{tikzpicture}[line cap=round,line join=round,x=1.2cm ,y=1.2cm] \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0,\y+0.5) -- (0,\y+0.5); } \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x,0.5) -- (\x,0+0.5); } \draw[->, line width = 2pt] (3.5,2) -- (4,2); \renewcommand{0}{3} \renewcommand{0}{3} \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (4.5,\y+0.5) -- (0 + 4.5, \y+0.5); } \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x + 4.5, 0.5) -- (\x + 4.5, 0 + 0.5); } \draw[->, line width = 2pt] (8, 2) -- (8.5, 2); \renewcommand{0}{3} \renewcommand{0}{3} \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (9 + \x, 0.5) -- (9 + \x, 0.5 + 0); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (9, \y+0.5) -- (9 + 0, \y +0.5); } \draw (0.5, 3) node {$g_1$}; \draw (0.5, 2) node {\underline{$h_2g_1h_1^\epsilon$}}; \draw (1.5, 3) node {\underline{$h_1^\delta g_1$}}; \draw (-0.5, 1) node {$h_2$}; \draw [dashed, ->, line width = 1.5pt, color = black] (-0.5+0.2,1) .. controls (-0.5+0.2+0.3,1) and (0.5,2-0.3-0.2) .. (0.5,2-0.2); \draw (5, 3) node {$g_1$}; \draw (5, 2) node {$h_2g_1h_1^\epsilon$}; \draw (6, 3) node {\underline{$h_1^\delta g_1$}}; \draw [dashed, ->, line width = 1.5pt, color = black] (5, 3+0.2) .. controls (5+0.3, 3+0.2+0.3) and (6-0.3-0.1, 3+0.2+0.3) ..
(6-0.1, 3+0.2); \draw (5+4.5, 2) node {$h_2g_1h_1^\epsilon$}; \draw (6+4.5, 3) node {$h_1^\delta g_1$}; \end{tikzpicture} \end{center} \caption{Nielsen moves yielding left-right extraction when $r = 1$.} \label{fig:second_swap} \end{figure} In the second case, $r \neq 1$, we put for simplicity $r=2$. We can make the pair of Nielsen transformations: $$g_{2} \mapsto h_{2}^{-1} \cdot g_{2} \mapsto h_{1}^{\delta} \cdot h_{2}^{-1}g_{2}$$ $$h_{2} \mapsto h_{2}\cdot g_{1} \mapsto h_{2}g_{1} \cdot h_{1}^{\epsilon}$$ This is a left-right extraction, as shown pictorially in \Cref{fig:first_swap}. In this picture, $g_1$ and $g_2$ are diagonal, the row $Hg_1h_1^\epsilon$ is different to $Hg_1$ and $Hg_2$ because it is empty. Similarly, $h_1^\delta h_2^{-1}g_2H$ is different to $g_1H$ and $g_2H$ because $h_1^\delta g_1 h_1^\epsilon$ lies in an empty row and column. \begin{figure}[h!] \centering \begin{center} \renewcommand{0}{3} \renewcommand{0}{3} \begin{tikzpicture}[line cap=round,line join=round,x=1.3cm ,y=1.3cm] \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0,\y+0.5) -- (0,\y+0.5); } \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x,0.5) -- (\x,0+0.5); } \draw[->, line width = 2pt] (3.5,2) -- (4,2); \renewcommand{0}{3} \renewcommand{0}{3} \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (4.5,\y+0.5) -- (0 + 4.5, \y+0.5); } \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x + 4.5, 0.5) -- (\x + 4.5, 0 + 0.5); } \draw[->, line width = 2pt] (8, 2) -- (8.5, 2); \renewcommand{0}{3} \renewcommand{0}{3} \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (9 + \x, 0.5) -- (9 + \x, 0.5 + 0); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (9, \y+0.5) -- (9 + 0, \y +0.5); } \draw (0.5, 3) node {$g_1$}; \draw (1.5, 2) node {$g_2$}; \draw (1.5, 1) node {\underline{$h_2g_1h_1^\epsilon$}}; \draw (2.5, 2) node {\underline{$h_1^\delta h_2^{-1}g_2$}}; \draw (2.5, 1) node {\underline{$h_1^\delta g_1 h_1^\epsilon$}}; \draw [dashed, ->, line width = 
1.5pt, color = black] (1.5,2+0.2) .. controls (1.5+0.3,2+0.3+0.2) and (2.5-0.3,2+0.3+0.2) .. (2.5,2+0.2); \draw (5, 3) node {$g_1$}; \draw (7, 2) node {$h_1^\delta h_2^{-1}g_2$}; \draw (6, 1) node {\underline{$h_2g_1h_1^\epsilon$}}; \draw (4, 1) node {$h_2$}; \draw [dashed, ->, line width = 1.5pt, color = black] (4,1+0.2) .. controls (4+0.3,1+0.3+0.2) and (6-0.3,1+0.3+0.2) .. (6,1+0.2); \draw (9.5, 3) node {$g_1$}; \draw (11.5, 2) node {$h_1^\delta h_2^{-1}g_2$}; \draw (10.5, 1) node {$h_2g_1h_1^\epsilon$}; \end{tikzpicture} \end{center} \caption{Nielsen moves yielding left-right extraction when $r \neq 1$.} \label{fig:first_swap} \end{figure} And so in any case, we have left-right extracted an element of $S\cap H$. \end{proof} This lemma will be very useful in shortening the arguments needed in the case-by-case analysis done in the proof of \Cref{thm: rank4case}, which we do in \Cref{sec:rank4}. More importantly, it is the first extraction algorithm which can be applied to a general subgroup $H \leq G$ (with no constraints on the index $[G:H]$), with only combinatorial hypotheses on $S$. For example, below is a neat situation where the hypotheses of \Cref{lem: EL} hold. \begin{cor} \label{cor: DPL} Let $G$, $H$ and $S$ be as in \Cref{lem: EL}. Suppose that there is an element $s \in S$ such that $s^{n}$ lies diagonally to every element of $S$ for some $n$. Then, if $|S \cap H|\geq 2$, a left-right extraction is possible. \end{cor} \begin{proof} Suppose $h \in S \cap H$. Consider the element $hs^{n}$. This lies in $Hs^{n}$, which contains no elements of $S$ by assumption. If $hs^{n}H$ contains no elements of $S$ then we may perform $n$ Nielsen moves to extract $h \mapsto hs^{n}$ and we're done. Otherwise, we have $gH = hs^{n}H$ for some $g \in S\smallsetminus H$ (note $hs^{n} \notin H$, since $s^{n} \notin H$). We break this up into two cases.
\\Case 1: If $s$, $g$ are distinct elements, then we may make the pair of Nielsen moves: $g \mapsto h^{-1}g$, followed by $h \mapsto hs^{n}$ (noting that $g \notin H$, so $g$, $h$ are distinct entries). This is the desired left-right extraction because we have moved $g$ to an empty column without changing its row, and then have extracted $h$ to the empty row $Hs^{n}$ and the (newly) empty column $gH$. \\Case 2: $sH = hs^{n}H$. The previous sequence of Nielsen moves would no longer be valid. Instead we can note that the equation implies that $sH$ has a sparse orbit. Hence if we repeat the argument instead considering the right action, with $s^{n}h$ instead of $hs^{n}$, either we're done or we encounter the same barrier, i.e. $Hs = Hs^{n}h$. In this case $sH$ and $Hs$ both have sparse orbits, and so we can extract by \Cref{lem: EL}. \end{proof} Before using this corollary to prove \Cref{thm: divisor_thm}, one further result of elementary group theory is needed. \begin{defn} For $G$ an arbitrary group and $H$ a finite index subgroup of $G$, the \emph{$H$-exponent} of $g \in G$ is the smallest positive integer $e$ such that $g^e \in H$. \end{defn} We will use $H$-exponents to enumerate cases of the proof in the next section. Below we will show that they are divisors of $[G:\core_G(H)]$, which will then allow us to prove \Cref{thm: divisor_thm}. The subgroup $\core_G(H)$ is defined as: \begin{defn}\label{defn:core_sg} If $G$ is a group with subgroup $H$, the \emph{core subgroup} of $H$ in $G$ is $$\core_G(H) := \bigcap\limits_{g \in G}gHg^{-1}$$ \end{defn} The core is the largest normal subgroup of $G$ contained in $H$. If $H$ has index $n$, $\core_G(H)$ is the kernel of the homomorphism $G \rightarrow S_n$ induced by the left-multiplication action of $G$ on the coset space $G/H$. The first isomorphism theorem implies that $G/\core_G(H)$ is isomorphic to a subgroup of $S_n$, and in particular $\core_G(H)$ has finite index in $G$.
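To make \Cref{defn:core_sg} and the notion of $H$-exponent concrete, here is a small illustrative computation (our own sketch, not part of the development; the choices $G = S_3$ and $H = \langle (0\,1) \rangle$, and all function names, are ours). It computes $\core_G(H)$ directly as an intersection of conjugates, and checks the divisibility of $H$-exponents that the proposition below establishes in general:

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples; (p*q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# G = S_3; H = <(0 1)> is an index-3, non-normal subgroup.
G = set(permutations(range(3)))
e = (0, 1, 2)
H = {e, (1, 0, 2)}

# core_G(H) = intersection of the conjugates g H g^{-1} over g in G.
core = set.intersection(*[{compose(compose(g, h), inverse(g)) for h in H} for g in G])
print(core == {e})  # prints True: the core here is trivial, so [G : core_G(H)] = 6

# H-exponent of g: least exp >= 1 with g^exp in H.
def h_exponent(g):
    x, exp = g, 1
    while x not in H:
        x, exp = compose(x, g), exp + 1
    return exp

# Every H-exponent divides [G : core_G(H)] = 6, as the proposition asserts.
print(all(6 % h_exponent(g) == 0 for g in G))  # prints True
```

Here the core is trivial because the conjugates of $H$ are the three subgroups generated by single transpositions, which intersect trivially; consequently $[G:\core_G(H)] = 6$ and every $H$-exponent ($1$, $2$ or $3$ in this example) divides it.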
This index is connected to $H$-exponents by the following proposition: \begin{prop} The $H$-exponent of $g\in G$ divides $[G:\core_G(H)]$. \end{prop} \begin{proof} Let $m = [G:\core_G(H)]$, and let $e$ be the $H$-exponent of $g$. If $e$ does not divide $m$, then we may write $m = eq + r$ where $q$, $r$ are integers such that $1 \leq r < e$. But now, because $g^m \in \core_G(H) \leq H$ (by Lagrange's theorem applied in $G/\core_G(H)$) and $g^e \in H$, we have that: $$g^r = g^{m -eq} = g^m(g^e)^{-q} \in H$$ contradicting the minimality of $e$. \end{proof} We conclude with the proof of \Cref{thm: divisor_thm}, which relies on everything mentioned in this section. \vspace{\topsep} \begin{proof}[Proof of \Cref{thm: divisor_thm}] Let $d$ be the minimal non-trivial divisor of $[G:\core_G(H)]$. Suppose that $S$ is a generating set of $G$ with size $r$. If $|S\cap H| = 1$ then $S$ is already diagonal, and we may extend it to a left-right transversal of $H$ in $G$. Otherwise, suppose $|S \cap H| \geq 2$ with $S$ left-right clean. It is sufficient to show that a left-right extraction is possible. Choose $s \in S\smallsetminus H$, and let $e$ be its $H$-exponent. The previous proposition implies that $e$ divides $[G:\core_G(H)]$; since $s \notin H$ we have $e > 1$, and so $e \geq d$. Minimality of $e$ implies that the set $A = \set{s^2, \ldots, s^{e-1}}$ is diagonal (any two of its elements lie in distinct rows and columns), and that $A \subset G\smallsetminus H$. Now suppose that the hypothesis of \Cref{cor: DPL} does not apply. That is, every element of $A$ lies in the same row or column as some element of $S \smallsetminus H$. Under this assumption, a chessboard which contains $k$ elements of $S$ can contain at most $2k$ elements of $A$ (counting over each row, and then each column). If we count the elements of $A$ per chessboard, this leads to the inequality $$e - 2 \leq 2|S\smallsetminus H|$$ And because $|S\cap H| \geq 2$, $$e - 2 \leq 2(r-2) $$ But $d$ is the smallest non-trivial divisor of $[G:\core_G(H)]$ and $e$ is a non-trivial divisor, so we deduce from the above that $d \leq e \leq 2r -2 $.
The contrapositive assertion shows that if $d \geq 2r - 1$ then we can indeed extract via \Cref{cor: DPL}, as required. \end{proof} \section{Proof of \Cref{thm: rank4case}; transversals as generating sets for rank~4 groups}\label{sec:rank4} In this section we prove \Cref{thm: rank4case} by performing a case-by-case analysis of where elements in $S$ lie relative to chessboards of $H\leq G$. This is an extension of the main theorem of \cite{transgen} to groups of rank~4. The proof in \cite[Theorem 3.11]{transgen} for groups of rank at most $3$ uses the technique of shifting boxes, but also relies heavily on the fact that having just $3$ elements in the chessboards makes the case-by-case analysis of Nielsen moves tractable. Without further insight into shifting boxes as a general technique, attempting to mimic the arguments in \cite{transgen} for the case when the group has rank 4 quickly leads to a case analysis which is too large to carry out. The results of \Cref{sec:shifting} are enough to allow the proof technique of \cite[Theorem 3.11]{transgen} to extend to the case where $G$ has rank 4. In addition to the results of \Cref{sec:shifting}, we will also make use of the inversion map on chessboards. \begin{defn}\label{defn:inversion} Consider the boards $HgH$ and $Hg^{-1}H$. The \emph{inversion map} is the standard map on group elements $x \mapsto x^{-1}$. This naturally induces: \begin{itemize} \item A map on chessboards via $HgH \mapsto Hg^{-1}H$, \item A map between the set of left cosets and the set of right cosets via $gH \mapsto Hg^{-1}$. \end{itemize} If a board is mapped to itself via the inversion map, $HgH = Hg^{-1}H$, we say the board is \emph{self-inverse}. \end{defn} \begin{note} A chessboard is self-inverse if and only if it contains an element of $H$-exponent $2$. We make use of this in \Cref{sec:self-inverse-chessboards}.
\end{note} As a map between left and right cosets of $H$, the inversion map commutes with the multiplication action of $G$. What we mean by this is that the diagram in \Cref{fig:diagram} commutes. This simple observation makes the inversion map a useful way to connect the orbits of left and right cosets, and hence we will make use of it to apply \Cref{lem: EL}. \begin{figure} \centering \begin{tikzcd}[column sep = 4em, row sep = 4em] gH \rar[mapsto]{\textrm{act by } x} \dar[mapsto]{\textrm{inv.}} & xgH \dar[mapsto]{\textrm{inv.}} \\ Hg^{-1} \rar[mapsto]{\textrm{act by } x} & Hg^{-1}x^{-1} \end{tikzcd} \caption{The inversion map intertwines the action of $G$ on $G/H$ with the action of $G$ on $H\backslash G$.} \label{fig:diagram} \end{figure} Throughout the proof of \Cref{thm: rank4case}, the term \emph{box} will be used to refer to the intersection of a left and right coset in a double coset (forming a `box' in the corresponding chessboard). Also, we will always have a generating set $S$ of elements whose Nielsen transformations we are considering. Recall that a coset or a box is referred to as \emph{empty} if it doesn't contain any elements of $S$. We are now ready to prove \Cref{thm: rank4case}. \vspace{\topsep} \begin{proof}[Proof of \Cref{thm: rank4case}] Using the left-right cleaning procedure given in \Cref{sec:graphs}, we may suppose $S$ is left-right clean with respect to $H$. There are either 1, 2 or 3 elements of $S$ remaining in $H$. If there is only 1 element of $S$ in $H$, then $S$ is already diagonal and may be immediately extended to a left-right transversal. If there are 3 elements of $S$ inside $H$, we may extract one as follows: Suppose $h_1, h_2, h_3$ are the elements of $S$ belonging to $H$, and $g$ is the entry not in $H$. Firstly, if $g$ has $H$-exponent (simply referred to as \emph{exponent} subsequently) at least $3$, then $g^{2}$ lies outside of $H$, and diagonally to $g$. By \Cref{cor: DPL}, we may extract one of the $h_i$ from $H$.
Now suppose $g$ has exponent $2$. By the argument given in the proof of \Cref{prop: left-clean}, there is an empty right coset of the form $Hs_is_j^{\epsilon}$ with $s_i$, $s_j$ elements of $S$. Since each $Hg^{\pm1} = Hg$ and each $h_i \in H$ we see the empty coset must be of the form $Hgh_i^{\epsilon} $ for some $i$, some $\epsilon \in \{\pm1\} $. This implies that the orbit of $Hg$ is sparse under the action of $\gen{h_1, h_2, h_3}$. Because $g^2 \in H$, the inversion map preserves the chessboard $HgH$ (that is, its columns are mapped to its rows and vice versa). Furthermore, because $g$ is the only entry of $S$ in the chessboard, and $gH \mapsto Hg$ under inversion, it follows that empty right cosets are mapped to empty left cosets under inversion. In particular $h_i^{-\epsilon}g^{-1}H = h_i^{-\epsilon}gH$ is empty, and so $g$ has a left and right sparse orbit, so we can extract by \Cref{lem: EL}. This process of extraction works for several more difficult configurations occurring later. Hence we are left with the case where there are 2 elements of $S$ in $H$, and we can label $S$ by $S \cap H = \{h_1, h_2\}$ and $S \smallsetminus H = \{g_1, g_2\}$. As we have seen above, the exponents of the elements outside of $H$ play an important role. Accordingly the remaining cases are indexed by the exponents of $g_1$ and $g_2$ ($e_1$ and $e_2$ respectively), where $e_{1}, e_{2}>1$ as $g_{1}, g_{2}\notin H$. Reordering so that $e_1 \leq e_2$, the cases are: \begin{itemize} \item[I.] $e_2 \geq 4$ \item[II.] $e_1 = 2 < e_2$ \item[III.] $e_1 = e_2 = 3$ \item[IV.] $e_1 = e_2 = 2$ \end{itemize} In diagrams, $g_1$ and $g_2$ will be depicted as belonging to the same chessboard. Whilst this is not necessarily the case, generally the arguments made apply equally well in both cases and so will not be made twice. \subsection*{Case I.} As $e_2 \geq 4$, we have that $\set{g_2, g_2^{2}, g_2^{3}}$ is a diagonal set lying outside of $H$. 
If either of $g_2^{2}$ or $g_2^{3}$ lies diagonally to $g_1$, then it necessarily lies diagonally to every entry of $S$. We may therefore extract by \Cref{cor: DPL}. Hence we need only consider the case when neither $g_2^{2}$ nor $g_2^{3}$ lies diagonally to $g_1$. That is, they both lie in $g_1H \cup Hg_1$. The powers of $g_{2}$ are mutually diagonal, so in this case neither power can lie in $g_1H \cap Hg_1$. That is, exactly one of $g_2^{2}$, $g_2^{3}$ lies in the row of $g_1$ (and not the column), and the other lies in the column (and not the row). Suppose $g_2^{i}H = g_1H$; then perform $g_1 \mapsto g_2^{-i}g_1 =: h_3 \in H$. Now, within the chessboard $Hg_1H$ there is the diagonal set $\set{g_2^{2}, g_2^{3}}$, and potentially also $g_2$. We have $3$ elements in $H$, and to achieve our goal we must extract $2$ of them from $H$. To do this we will extract one element to a position such that we still have a diagonal power of $g_2$, i.e.\ a power of $g_2$ which is diagonal to everything in the Nielsen-transformed multiset $S$. Then we can apply \Cref{cor: DPL} to extract the other. Consider the elements $h_ig_2^{2}$ for $i = 1,2,3$. These each lie in $Hg_2^{2}$, and if one lies outside of the columns $g_2^{3}H$, $g_2H$ then we can extract it, noting that $g_2^{3}$ remains diagonal to the arrangement. If none of the $h_ig_2^{2}$ lie outside of the two columns $g_2^{3}H$, $g_2H$, then by the pigeonhole principle, without loss of generality $h_1g_2^{2}$, $h_2g_2^{2}$ lie in the same column. Hence $h_1^{-1}h_2g_2^{2} \in g_2^{2}H$ and so $h_1^{-1}h_2g_2^{2} \in g_2^{2}H \cap Hg_2^{2}$. After extracting $h_2 \mapsto h_1^{-1}h_2g_2^{2}$, $g_2^{3}$ remains diagonal to the entire configuration, so we may extract again by \Cref{cor: DPL}. \Cref{fig:rank4_zeroth_fig} shows this extraction, assuming that $g_2^2 \in Hg_1$ and $g_2^3 \in g_1H$ (transposing $g_2^2$ and $g_2^3$ doesn't alter the argument). \begin{figure}[h!]
\centering \begin{center} \renewcommand{0}{3} \renewcommand{0}{3} \begin{tikzpicture}[line cap=round,line join=round,x=1.4cm,y=1.4cm] \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x-0.5, 0-0.5) -- (\x-0.5, 3-0.5); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0-0.5, \y-0.5) -- (3-0.5,\y-0.5); } \draw (0, 0) node {\underline{$g_2^3$}}; \draw (0, 2) node {\underline{$g_1$}}; \draw (1, 1) node {$g_2$}; \draw (2, 2.2) node {\underline{$g_2^2$}}; \draw (2, 1.8) node {\underline{$h_1^{-1}h_2g_2^2$}}; \draw (-1, 1) node {$h_2$}; \draw [->, dashed, line width = 1.5pt, color = black] (-0.8, 1) .. controls (-0.8+0.6, 1) and (1.5-0.4, 1.8) .. (1.5, 1.8); \end{tikzpicture} \end{center} \caption{The extraction of $h_2$ to $h_1^{-1}h_2g_2^{2}$. Note that $g_2^{3}$ is diagonal to $\set{g_2, h_1^{-1}h_2g_2^2}$.} \label{fig:rank4_zeroth_fig} \end{figure} \subsection*{Case II.} In this case $e_1 = 2$ and so the inversion map is useful. Because $e_2 \neq 2$, we have that $1 \not \equiv -1 \mod{e_2}$ and hence $\set{g_2, g_2^{-1}}$ is a diagonal set. The inversion map gives a bijection between left and right cosets of $H$, and because $Hg_1 \neq Hg_2$ we have $g_2^{-1}H \neq g_1^{-1}H = g_1H$, and similarly on the right. Hence $\set{g_1, g_2, g_2^{-1}}$ is diagonal, and so we may extract by \Cref{cor: DPL}. \subsection*{Case III.} We consider the set of inverses $\set{g_1^{-1}, g_2^{-1}}$. By our remarks on the inversion map, this new pair of elements is diagonal. Furthermore, neither of the $g_i$ has exponent $2$, so $\set{g_i, g_i^{-1}}$ is diagonal. It follows that if $g_1^{-1}$ lies diagonally to $g_2$, or vice-versa, then we may extract by \Cref{cor: DPL}. Therefore, we may suppose that neither inverse lies diagonally to $\set{g_1, g_2}$, and so $g_1^{-1} \in g_2H \cup Hg_2$ and $g_2^{-1} \in g_1H \cup Hg_1$. Within this configuration, we firstly consider the special case where $g_1^{-1} \in g_2H \cap Hg_2$.
This says that the inverse of $g_i$ lies in the exact chessboard box of the other, so that $Hg_2^{-1} = Hg_1$ and $g_2^{-1}H = g_1H$ (by inverting we can see that the situation is symmetric with respect to $g_1$, $g_2$). Consider an empty right coset of the form $Hs_is_j^{\epsilon}$ where $\epsilon \in \{\pm 1\}$. In this subcase we have $$ Hg_ig_j^{\epsilon} = Hg_j^{\epsilon - 1} \in \set{H , Hg_1, Hg_2}$$ In particular the empty coset $Hs_is_j^{\epsilon}$ must be of the form $Hg_ih_j^{\epsilon}$. By symmetry, we may assume $Hg_1h_1$ is empty. Applying the same argument to the left shows that there is an empty left coset of the form $h_k^{\epsilon}g_jH$ for some $\epsilon \in \{ \pm 1 \}$. If $j = 1$, then $g_1$ has a left and right sparse orbit (under the $\gen{h_1, h_2}$ action), so we're done by \Cref{lem: EL}. So suppose $j = 2$. Hence $Hg_1h_1$, $h_k^{\epsilon}g_2H$ are the pair of empty cosets. This says that $g_1$ has a right-sparse orbit and $g_2$ has a left-sparse orbit. If we are able to deduce that $g_1$, $g_2$ have the same left or right coset orbit (under the $\gen{h_1, h_2}$ action) then we're done, as this will imply that one of $\set{g_1, g_2}$ has a left and right sparse orbit. To show this, consider the element $x = g_1h_k^{\epsilon}g_2$. Immediately one can see that $x \notin g_1H \cup Hg_2$. Furthermore $x \notin H$, because otherwise we would have $h_k^{\epsilon}g_2H = g_1^{-1}H = g_2H$, which contradicts the emptiness of $h_k^{\epsilon}g_2H$. If $x$ lies diagonally to $\set{g_1, g_2}$, then we may extract via $h_k \mapsto g_1h_k^{\epsilon}g_2$. Otherwise we have $x \in Hg_1$ or $x \in g_2H$ (considering the cases we have just eliminated).
Examining both of these cases: $$ x \in Hg_1 \iff Hg_1h_k^{\epsilon}g_2 = Hg_1 \implies Hg_1h_k^{\epsilon} = Hg_1g_2^{-1} = Hg_2^{-2} $$ $$x \in g_2H \iff g_1h_k^{\epsilon}g_2H = g_2H \implies h_k^{\epsilon}g_2H = g_1^{-1}g_2H = g_1^{-2}H $$ Recalling that $g_1$ and $g_2$ have exponent $3$, the two equations at the end of each line may be written as: $$ Hg_1h_k^{\epsilon} = Hg_2$$ $$ h_k^{\epsilon}g_2H = g_1H$$ Either case implies that $g_1,g_2$ have the same left or right $\gen{h_1,h_2}$-orbit, as desired. The above argument deals with the special case where neither inverse $g_1^{-1}$ nor $g_2^{-1}$ was diagonal to $S$, and the inverse of $g_2$ lies in $g_1H \cap Hg_1$. It remains to consider the case where $g_2^{-1} \in g_1H \Delta Hg_1$, where $\Delta$ denotes the symmetric difference of sets. This implies that $g_1^{-1} \in g_2H \Delta Hg_2$. In particular: $$g_i^{-1} \in Hg_j \iff g_j^{-1} \in g_iH $$ Hence, without loss of generality we suppose $g_1^{-1} \in Hg_2 \smallsetminus g_2H$, and so $g_2^{-1} \in g_1H \smallsetminus Hg_1$; this is illustrated in \Cref{fig:rank4_first_fig}. This figure exhibits two conventions which we adopt from here on: first, by a suitable permutation of the rows and columns of the chessboards, the inversion map corresponds to reflection in the main diagonal (this clarifies arguments in \Cref{sec:self-inverse-chessboards}); second, the elements outside of $S$ (prior to any of the transformations which might also be drawn) are underlined.
\begin{figure}[h] \centering \begin{center} \renewcommand{0}{3} \renewcommand{0}{3} \begin{tikzpicture}[line cap=round,line join=round,x=1.2cm,y=1.2cm] \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x, 0) -- (\x, 3); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0, \y) -- (3,\y); } \draw (0.5, 1.5) node {$g_1$}; \draw (0.5, 0.5) node {\underline{$g_2^{-1}$}}; \draw (1.5, 2.5) node {\underline{$g_1^{-1}$}}; \draw (2.5, 2.5) node {$g_2$}; \end{tikzpicture} \end{center} \caption{The configuration being considered so far (restricting to the appropriate $3\times 3$ minor after permuting rows and columns appropriately).} \label{fig:rank4_first_fig} \end{figure} The main aim of the next calculation is to reduce this subcase to the situation dealt with earlier in Case III, where $g_1^{-1} \in g_2H \cap Hg_2$. Consider the element $y = g_2g_1^{-1}$. We note that $y \in Hg_1$, and $y \notin g_2H$. If $ y \notin g_1H$ then consider $g_1 \mapsto y$ (which is a Nielsen move). The set $\set{g_2, y}$ is still diagonal, but now $g_2^{-1}$ is diagonal to $\set{y, g_2}$, and so we're done by \Cref{cor: DPL}. This is shown in \Cref{fig:rank4_second_fig} for the case when $g_1$, $g_2$ belong to the same chessboard: \begin{figure}[h] \centering \begin{center} \renewcommand{0}{4} \renewcommand{0}{4} \begin{tikzpicture}[line cap=round,line join=round,x=1.2cm,y=1.2cm] \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x, 0) -- (\x, 4); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0, \y) -- (4,\y); } \draw (0.5, 2.5) node {$g_1$}; \draw (1.5, 3.5) node {\underline{$g_1^{-1}$}}; \draw (0.5, 1.5) node {\underline{$g_2^{-1}$}}; \draw (2.5, 3.5) node {$g_2$}; \draw (3.5, 2.5) node {\underline{$y$}}; \draw [->, dashed, line width = 1.5pt, color = black] (0.5+0.2, 2.5+0.1) .. controls (0.5+0.2+0.3, 2.5+0.1+0.2) and (3.5-0.2-0.3, 2.5+0.1+0.2) .. 
(3.5-0.2, 2.5+0.1); \end{tikzpicture} \end{center} \caption{Example configuration of various elements, where $y = g_2g_1^{-1}$. Note however that $y$ may be in the same column as $g_1^{-1}$.} \label{fig:rank4_second_fig} \end{figure} If, instead, $y$ belongs to $g_1H$ then $y \in g_1H\cap Hg_1$, and so $y^{-1} \in g_1^{-1}H \cap Hg_1^{-1}$. Making the Nielsen move $g_2 \mapsto y^{-1}$, we see that $\set{g_1, y^{-1}}$ is diagonal, and also $g_1^{-1} \in y^{-1}H\cap Hy^{-1}$, $y \in g_1H \cap Hg_1$ (in particular $y^{-1}$ doesn't have exponent $2$). Either $y^{-1}$ has exponent greater than $3$, in which case we can extract by appealing to Case I, or $y^{-1}$ has exponent $3$, and we are done by the exact argument used in the first paragraph of Case III in which the inversion map swaps the boxes of the two elements of $S$ lying outside of $H$. We illustrate this in \Cref{fig:rank4_third_fig}. \begin{figure}[h] \centering \begin{center} \renewcommand{0}{3} \renewcommand{0}{3} \begin{tikzpicture}[line cap=round,line join=round,x=1.3cm,y=1.3cm] \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x-0.5, 0-0.5) -- (\x-0.5, 3-0.5); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0-0.5, \y-0.5) -- (3-0.5,\y-0.5); } \draw (0, 1) node {$g_1$, \underline{$y$}}; \draw (0, 0) node {\underline{$g_2^{-1}$}}; \draw (1, 2) node {\underline{$y^{-1}$}, \underline{$g_1^{-1}$}}; \draw (2, 2) node {$g_2$}; \draw [->, dashed, line width = 1.5pt, color = black] (2-0.1, 2+0.15) .. controls (2-0.1-0.2, 2+0.2+0.2) and (0.8+0.2, 2+0.2+0.2) .. (0.8, 2+0.1); \end{tikzpicture} \end{center} \caption{Configuration which reduces to the second subcase of Case III.} \label{fig:rank4_third_fig} \end{figure} \subsection*{Case IV.} Observe that an element $g_i$ of exponent $2$ satisfies $g_iH = g_i^{-1}H$, $Hg_i = Hg_i^{-1}$. From this it follows that having a one-sided sparse orbit is sufficient to have a left-right sparse orbit.
For example, if the orbit of $Hg_1$ is sparse, then $Hg_1x$ contains no entry of $S$ for some $x \in \gen{h_1, h_2}$. Then if we consider the image of this right coset under the inversion map, $x^{-1}g_1^{-1}H = x^{-1}g_1H$, and note that the set of full cosets (those containing at least one element of $S$) is preserved by inversion, it follows that $x^{-1}g_1H$ is empty and hence $g_1$ has a left and right sparse orbit. This is shown in \Cref{fig:rank4_fourth_fig}. In light of this observation, we may suppose that no cosets of $g_1$, $g_2$ have sparse orbits (or else we may extract by \Cref{lem: EL}). \begin{figure}[h] \centering \begin{center} \renewcommand{0}{3} \renewcommand{0}{3} \begin{tikzpicture}[line cap=round,line join=round,x=1.3cm,y=1.3cm] \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x-0.5, 0-0.5) -- (\x-0.5, 3-0.5); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0-0.5, \y-0.5) -- (3-0.5,\y-0.5); } \draw (0, 0) node {\underline{$g_1x$}}; \draw (0, 2) node {$g_1$, \underline{$g_1^{-1}$}}; \draw (1, 1) node {$g_2$, \underline{$g_2^{-1}$}}; \draw (2, 2) node {\underline{$x^{-1}g_1$}}; \end{tikzpicture} \end{center} \caption{The empty row $Hg_1x$ is mapped to the empty column $x^{-1}g_1H$ under inversion.} \label{fig:rank4_fourth_fig} \end{figure} As always, there exists an empty right coset of the form $Hs_is_j^{\epsilon}$, where $s_i$, $s_j \in S$. It must be the case that $s_i \notin H$, for otherwise $Hs_is_j^{\epsilon} = Hs_j^{\epsilon}$ would be one of the non-empty cosets $H$, $Hg_1$, $Hg_2$ (recalling $Hg_k^{\pm1} = Hg_k$). Furthermore, because we have assumed that there are no sparse orbits, it must also be the case that $s_j \notin \set{h_1, h_2}$. Hence there is an empty right coset of the form $Hg_ig_j^{\epsilon}$ (where $i$, $j$ are necessarily distinct). By symmetry between the $g_i$s and their inverses, we may suppose that $\epsilon = 1$, so that $Hg_1g_2$ is empty. By assumption, sparse orbits do not exist, so $Hg_1g_2h_1$ is also empty.
If $g_1g_2H$ is also empty, then $g_1g_2h_1$ lies diagonally to $S$ and hence the Nielsen transformation $h_1 \mapsto g_1g_2h_1$ is an extraction. If, instead, $g_1g_2H$ contains an entry of $S$, then it must be the case that $g_2H = g_1g_2H$. This is because $H \neq Hg_1g_2 $ implies $H \neq g_1g_2H$, and $g_2H \neq H$ implies $g_1g_2H \neq g_1H$. The configuration is shown in \Cref{fig:rank4_fifth_fig}. From the equations $g_1g_2H = g_2H = g_2^{-1}H$ it follows that $g_2g_1g_2 \in H$ and so $g_1(g_2g_1g_2) = (g_1g_2)^{2} \notin H$. As can be seen from \Cref{fig:rank4_fifth_fig}, the Nielsen transformation $\set{g_1, g_2} \mapsto \set{g_1, g_1g_2}$ preserves diagonality and increases an exponent: $(g_1g_2)^{2} \notin H$, so $g_1g_2$ has exponent at least $3$. Hence our ability to extract from this configuration is covered by one of the cases I or II. \begin{figure}[h] \centering \begin{center} \renewcommand{0}{3} \renewcommand{0}{3} \begin{tikzpicture}[line cap=round,line join=round,x=1.3cm,y=1.3cm] \foreach \x in {0,1,...,0} { \draw [line width = 1pt] (\x-0.5, 0-0.5) -- (\x-0.5, 3-0.5); } \foreach \y in {0,1,...,0} { \draw [line width = 1pt] (0-0.5, \y-0.5) -- (3-0.5,\y-0.5); } \draw (0,2) node {$g_1$}; \draw (1,1) node {$g_2$}; \draw (1,0) node {\underline{$g_1g_2$}}; \draw [->, dashed, line width = 1.5pt, color = black] (1-0.15,1-0.1) .. controls (1-0.15-0.2,1-0.1-0.2) and (1-0.15-0.2,0+0.2+0.2) .. (1-0.15,0+0.2); \end{tikzpicture} \end{center} \caption{The Nielsen move $g_2 \mapsto g_1g_2$ takes $\set{g_1, g_2}$ to the diagonal set $\set{g_1, g_1g_2}$.} \label{fig:rank4_fifth_fig} \end{figure} \end{proof} \section{Results in the finite setting}\label{sec:finite-setting} On closer inspection of \Cref{thm: rank4case} and \Cref{thm: divisor_thm}, we see that in each instance we are answering special cases of a slightly stronger question than \Cref{ques: motivating}.
That is, the question we have in effect been approaching so far is: \begin{question} \label{main Q} Let $S$ be a generating set of minimal size in a finitely generated group $G$, and $H$ a finite index subgroup with $[G:H] \geq |S|$. Is $S$ Nielsen equivalent to a set which extends to a left-right transversal of $H$ in $G$? \end{question} In fact, examining our statements and proofs of \Cref{thm: rank4case} and \Cref{thm: divisor_thm} shows that the question we are making progress on is even tighter. We do not need to work with a generating set of minimal size; our results allow us to Nielsen transform any generating set of the group, provided that set has size no greater than $[G:H]$. That is: \begin{question} \label{general Q} Let $S$ be a finite multiset generating a group $G$, and $H$ a finite index subgroup with $[G:H] \geq |S|$. Is $S$ Nielsen equivalent to a set which extends to a left-right transversal of $H$ in $G$? \end{question} We note that \Cref{main Q} was raised in \cite{transgen}, where it was also noted that an affirmative answer for general $G$ would be implied by an affirmative answer for the case when $G$ is a free group of finite rank. In this section we make a reduction in the opposite direction, to the case where $G$ is a finite group, and use some of our techniques to make progress in this setting. As noted following \Cref{defn:core_sg}, if $H$ is a finite index subgroup of $G$, then so is $N := \core_G(H)$. We provide the following two elementary lemmata without proof. \begin{lem}\label{lem:lem1_np} Let $H\leq G$, and let $N:= \core_G(H)$. Because $N \leq H$, there is a natural bijection between left (right) cosets of $H/N$ in $G/N$, and left (right) cosets of $H$ in $G$. It follows that two elements of $G/N$ belong to the same left (right) coset of $H/N$ if and only if any two representatives of the elements lie in the same left (right) coset of $H$ in $G$. \end{lem} \begin{lem}\label{lem:lem2_np} Let $N\trianglelefteq G$.
Given a set $S$ in $G/N$, and a corresponding set of representatives $\tilde{S}\subset G$, any Nielsen transformation $S \rightarrow S_1$ has a naturally corresponding Nielsen transformation $\tilde{S} \rightarrow \tilde{S_1}$, which commutes with the natural projection map $G \rightarrow G/N$. Explicitly, a Nielsen move on $S$ can be replicated on $\tilde{S}$ by performing the same move on the corresponding representatives. \end{lem} The reason that \Cref{lem:lem1_np} and \Cref{lem:lem2_np} do not by themselves allow us to reduce the question to the case where $G$ is finite is that the homomorphic image of a minimal generating set is not necessarily a \emph{minimal} generating set. To get around this, we note that at no point in the previous sections do the arguments rely on the minimality of the generating set. That is, our results thus far make just as much progress towards the slightly more general \Cref{general Q}. Clearly an affirmative answer to \Cref{general Q} would imply the same for \Cref{main Q}. Moreover, it is enough to answer \Cref{general Q} in the case where $G$ is finite. \begin{prop} \label{finite reduction} \Cref{general Q} has an affirmative answer whenever $G$ is finite, if and only if it has an affirmative answer for all $G$. \end{prop} \begin{proof} Suppose that the answer to \Cref{general Q} is `yes' whenever $G$ is finite. Let $G$ be arbitrary, $H$ a finite index subgroup and $S$ a generating multiset in $G$. Letting $\overline{G}$ denote the quotient of $G$ by $\core_G(H)$, $\overline{G}$ is a finite group containing the multiset $\overline{S}$, which generates $\overline{G}$. By assumption there is a Nielsen transformation $\overline{S} \mapsto \overline{S}_1$ such that $\overline{S}_1$ is a set of elements which each lie in distinct left and right cosets of $\overline{H}$. By \Cref{lem:lem2_np}, we may perform a corresponding Nielsen transformation $S \mapsto S_1$.
Because this correspondence commutes with the map $G \rightarrow \overline{G}$, it follows from \Cref{lem:lem1_np} that the elements of $S_1$ each lie in distinct left and right cosets of $H$ in $G$. \end{proof}
Thus, while we cannot reduce our motivating question (\Cref{main Q}) to the finite case, we can reduce the question that we would like to answer (\Cref{general Q}) to the finite case. With this reduction in mind, we now present two results for the case where $G$ is finite which give affirmative answers to \Cref{general Q} (and hence to \Cref{main Q}) when $H$ is cyclic or the product of two prime-order cyclic groups. The first argument is completely elementary, whilst the second relies on \Cref{lem: EL}.
\begin{lem} \label{lem:cyclic-nielsen} Suppose $H$ is a finite cyclic group of order $n$ and $\set{x,y} \subset H$. Then $\set{x,y}$ is Nielsen equivalent to a set containing the identity, $e$. \end{lem}
\begin{proof} Assume that $H = \gen{g}$ and that $x = g^a, y = g^b$ for $a,b \in \set{0,1,\ldots, n-1}$ and $a\geq b$. Then we can perform Euclid's algorithm: if $b \ne 0$, take $q \in \mathbb{N}$ such that $0 \leq a-qb < b$ and do the sequence of Nielsen moves: $$\set{x,y} \mapsto \set{xy^{-1},y} \mapsto \ldots \mapsto \set{xy^{-q},y}.$$ Then we replace the pair $(x,y)$ by $(y,xy^{-q})$ and proceed with the algorithm. The minimal positive exponent of $g$ appearing in the set strictly decreases on each iteration, and so this process terminates at $\set{g^{\gcd(a,b)},g^0} = \set{z,e}$. \end{proof}
We apply this lemma to answer \Cref{general Q} when $H$ is cyclic.
\begin{thm} Let $G$ be a finite group, and $H\leq G$ be a cyclic subgroup. Any generating multiset $S$ such that $|S|\leq [G:H]$ can be Nielsen transformed into a set contained in a left-right transversal of $H$ in $G$. \end{thm}
\begin{proof} By left-right cleaning, we may suppose that $S$ is a disjoint union of: 1) a multiset whose elements are in $H$, and 2) a left-right diagonal set of elements outside of $H$.
If $|S\cap H| = 1$, then the set is already diagonal with respect to $H$. Hence we may assume that there are at least two elements, $x, y \in S\cap H$. According to \Cref{lem:cyclic-nielsen}, we can Nielsen transform $\set{x,y}$ to a pair $\set{z,e}$ for some $z\in H$. Therefore we can assume $e \in S\cap H$. By assumption, $S$ generates $G$, so every element of $G$ can be written as a product of elements of $S$. Pick an element $g \in G$ that lies diagonally to $S$ (such an element exists by the index condition). This $g$ can be written as a product of non-trivial elements of $S$, say $g = s_{i_1}s_{i_2}\ldots s_{i_n}$. Thus $$e \mapsto e\cdot s_{i_1} \mapsto s_{i_1}\cdot s_{i_2} \mapsto \ldots \mapsto s_{i_1}s_{i_2}\ldots s_{i_{n-1}}\cdot s_{i_n}$$ is a sequence of valid Nielsen moves. This sequence of Nielsen moves amounts to a left-right extraction of an element of $S\cap H$. We may repeat this until $|S \cap H| = 1$. \end{proof}
It is immediate from the reduction method of \Cref{finite reduction} that the corresponding result without the finiteness assumption on $G$ is:
\begin{cor} Let $G$ be a group with finite generating set $S$, and $H$ a finite index subgroup such that $[G:H] \geq |S|$. If $H/\core_G(H) \cong C_n$ for some $n \geq 1$, then $S$ is Nielsen equivalent to a set contained in a left-right transversal of $H$ in $G$. \end{cor}
A similar corollary follows from the next proposition.
\begin{prop}\label{lem:CpxCp} Let $G$ be a finite group, and $H \leq G$ such that $H \cong C_p \times C_p$ for some prime $p$. Suppose $S$ is a generating multiset in $G$ such that $[G : H] \geq |S|$. Then $S$ is Nielsen equivalent to a set contained in a left-right transversal of $H$ in $G$. \end{prop}
\begin{proof} We use the following property of $C_p \times C_p$: any two elements, not both trivial, either generate the whole group or one is a power of the other. To see this, take $x, y \in C_p\times C_p$ and suppose they are not both trivial.
If they don't generate the entire group, they belong to a proper non-trivial subgroup. Such a subgroup is necessarily $C_p$ and so any non-trivial element (in particular one of $x$ and $y$) generates it. As usual we may suppose $S$ is left-right clean. If $|S\cap H| = 1$, we're done. Otherwise, suppose we have a pair of distinct elements $x, y \in S \cap H$. Suppose $\gen{x, y} \neq H$. Then by the first remark we may (w.l.o.g.) assume $x = y^k$ for some $k \geq 0$, and we can perform a sequence of Nielsen moves resulting in $x \mapsto x y^{-k} = e$. As in the proof of \Cref{lem:cyclic-nielsen}, we can left-right extract $e$ from $S\cap H$. Hence we may suppose $\gen{x, y} = H$. It follows that $\gen{S\cap H}$ acts transitively on the rows and columns of any chessboard. Because $[G: H] \geq |S|$, there is a chessboard with a row and column not containing any element of $S$. Furthermore, we may suppose that there is such a chessboard which also contains an element of $S$ (otherwise, we may easily extract to the empty row of the form $Hs_is_j^{\epsilon}$ as in \Cref{prop: left-clean}, which would also be a left-right extraction). It follows that the element of $S$ in this chessboard has a sparse $\gen{S \cap H}$ orbit on the left and the right, and so we may perform a left-right extraction by \Cref{lem: EL}. \end{proof}
\section{Further techniques and constructions}\label{sec: configurations}
Motivated by the method of shifting boxes, we introduce some new techniques in a similar vein which help us resolve more special cases of \Cref{main Q} and \Cref{general Q}. In contrast to the extraction methods, which are useful when the multiset has relatively few elements, these new techniques work well when the elements in chessboards are `packed densely'. This section concerns the \emph{L-spin}, a technique for transforming particular configurations with a specific Nielsen transformation.
The technique rests heavily on the fact that the inversion map induces a canonical bijection between the left and right cosets of a given subgroup, as described in \Cref{defn:inversion}. In this section we will always take $G$ to be an arbitrary group and $H\leq G$ a finite index subgroup. We will consider subsets of the chessboards of $H$ in $G$, given as follows:
\begin{defn}\label{defn:configuration} Given $H\leq G$ and a multiset $S$ of elements of the group $G$, we define a \emph{configuration} to be a table $T$ representing intersections of cosets $a_iH\cap Hb_j$ for some sets of cosets $\{a_1H,\ldots, a_lH\}$, $\{Hb_1, \ldots, Hb_k\}$, and a submultiset $R\subset S$, such that all elements of $R$ belong to the coset intersections described by the table $T$. We represent the elements by either writing them in the corresponding boxes in the table, or putting dots representing them in the boxes. Sometimes we also denote arbitrary elements of $G$, not necessarily belonging to the multiset $R$, by writing them underlined in the corresponding boxes, or representing them by hollow dots. \end{defn}
We can think of a configuration as ``choosing some rows and columns of a chessboard''; an example of what a configuration can be is given in \Cref{fig:configuration}. An important property of configurations is that if two elements of a chessboard lie in the same row/column and both of them feature in the configuration, then they lie in the same row/column in the configuration too. Similarly, if two elements lie in the same row/column of the configuration, then they lie in the same row/column of the chessboard.
\begin{figure}[h!]
\begin{center}
\def\nx{4} \def\ny{4}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw[->, line width = 2pt] (4.5,2) -- (5.5,2);
\def\nx{2} \def\ny{3}
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0+6.5,\y+0.5) -- (\nx+6.5,\y+0.5); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x+6.5,0+0.5) -- (\x+6.5,\ny+0.5); }
\draw (-0.5,3.5) node {$Ha_1$};
\draw (-0.5,2.5) node {$Ha_2$};
\draw (-0.5,0.5) node {$Ha_3$};
\draw (0.5,4.2) node {$b_1H$};
\draw (2.5,4.2) node {$b_2H$};
\draw (6,3) node {$Ha_1$};
\draw (6,2) node {$Ha_2$};
\draw (6,1) node {$Ha_3$};
\draw (7,3.7) node {$b_1H$};
\draw (8,3.7) node {$b_2H$};
\draw [fill = black] (0.5,3.5) circle (3.5pt);
\draw [fill = black] (1.5,3.5) circle (3.5pt);
\draw [fill = black] (1.5,1.5) circle (3.5pt);
\draw [fill = black] (2.3,3.5) circle (3.5pt);
\draw [fill = black] (2.7,3.5) circle (3.5pt);
\draw [fill = black] (2.5,2.5) circle (3.5pt);
\draw [fill = black] (3.5,2.5) circle (3.5pt);
\draw [fill = black] (3.5,0.5) circle (3.5pt);
\draw [fill = black] (7,3) circle (3.5pt);
\draw [fill = black] (7.8,3) circle (3.5pt);
\draw [fill = black] (8.2,3) circle (3.5pt);
\draw [fill = black] (8,2) circle (3.5pt);
\end{tikzpicture}
\end{center}
\caption{An entire chessboard, and a configuration obtained from that chessboard.}
\label{fig:configuration}
\end{figure}
\subsection{L-spins}
We start with an elementary result on configurations, which allows us to define the L-spin -- a useful technique. It applies to the configuration shown in \Cref{fig:L-spin set-up}, where $a,b,c$ are elements of $G$ belonging to the multiset $S$.
\begin{figure}[h!]
\begin{center}
\def\nx{2} \def\ny{2}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw (0.5,1.5) node {$b$};
\draw (0.5,0.5) node {$c$};
\draw (1.5,1.5) node {$a$};
\draw (1.5,0.5) node {\underline{$ab^{-1}c$}};
\end{tikzpicture}
\end{center}
\caption{A configuration to which we may apply an L-spin (defined below). The underlined element doesn't necessarily belong to the starting multiset.}
\label{fig:L-spin set-up}
\end{figure}
\begin{lem}\label{lem:above} Given a configuration of a multiset $R$ in a table $T$, if there are elements $a,b,c\in R$ such that $a,b$ belong to the same row (right $H$ coset) and $b,c$ belong to the same column (left $H$ coset), then the element $ab^{-1}c$ lies in the same column as $a$ and the same row as $c$. Furthermore, we can perform Nielsen moves substituting any of the elements of the multiset $\{a,b,c\}$ with $ab^{-1}c$. \end{lem}
\begin{proof} If $Ha = Hb$ and $bH=cH$, then $ab^{-1}\in H$ and $b^{-1}c\in H$. This means that $ab^{-1}cH = aH$ and that $Hab^{-1}c = Hc$. Thus indeed $ab^{-1}c$ lies in the column of $a$ and the row of $c$. Also, any of the following is a sequence of Nielsen moves:
$$\{a,b,c\} \mapsto \{ab^{-1},b,c\} \mapsto \{ab^{-1}c,b,c\}$$
$$\{a,b,c\} \mapsto \{a,b,b^{-1}c\} \mapsto \{a,b,ab^{-1}c\}$$
$$\{a,b,c\} \mapsto \{a,b^{-1},c\} \mapsto \{a,ab^{-1},c\} \mapsto \{a,ab^{-1}c,c\}$$
\end{proof}
This motivates the following definition.
\begin{defn}\label{defn:L-spin} Given the configuration from \Cref{lem:above} (i.e. $Ha=Hb$, $bH=cH$), we call the transformation substituting an element of the multiset $\{a,b,c\}$ with the element $ab^{-1}c$ an \emph{L-spin}. \end{defn}
Visually, an L-spin corresponds to rotating the L-shape formed in the table by the three elements $a,b,c$ in the configuration in question.
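Before returning to the pictures, we note that the coset computation underlying \Cref{lem:above} is short enough to be checked mechanically. The following Python sketch (our own illustration, not part of the argument; the choices $G=S_3$ and $H=\langle(0\;1)\rangle$ are arbitrary) verifies exhaustively that $Ha=Hb$ and $bH=cH$ imply that $ab^{-1}c$ lies in the column of $a$ and the row of $c$.

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples; p[i] is the image of i.
def mul(p, q):
    # (p*q)[i] = p[q[i]]: apply q first, then p.
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]    # H = <(0 1)>, an arbitrary test subgroup

left  = lambda g: frozenset(mul(g, h) for h in H)   # gH: the column of g
right = lambda g: frozenset(mul(h, g) for h in H)   # Hg: the row of g

# Ha = Hb and bH = cH should force ab^{-1}c into the column of a and row of c.
for a in G:
    for b in G:
        if right(a) != right(b):
            continue
        for c in G:
            if left(b) != left(c):
                continue
            d = mul(mul(a, inv(b)), c)
            assert left(d) == left(a) and right(d) == right(c)
print("lemma verified for all admissible triples in S3")
```

Since the loop runs over all triples in $G^3$, it also exercises the degenerate cases in which some of $a,b,c$ coincide or share a box.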
This is shown in \Cref{fig:L-spin-visual}.
\begin{figure}[h!]
\begin{center}
\def\nx{2} \def\ny{2}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.5,1.5) circle (3.5pt);
\draw [fill = black] (0.5,0.5) circle (3.5pt);
\draw [fill = black] (1.5,1.5) circle (3.5pt);
\draw [] (1.5,0.5) circle (3.5pt);
\draw [line width = 4pt] (0.5,0.5) -- (0.5,1.5);
\draw [line width = 4pt] (0.5,1.5) -- (1.5,1.5);
\draw [->, line width = 2pt] (2.5,1) -- (3.5,1);
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0+4,\y) -- (\nx+4,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x+4,0) -- (\x+4,\ny); }
\draw [fill = black] (0.5+4,1.5) circle (3.5pt);
\draw [] (0.5+4,0.5) circle (3.5pt);
\draw [fill = black] (1.5+4,1.5) circle (3.5pt);
\draw [fill = black] (1.5+4,0.5) circle (3.5pt);
\draw [line width = 4pt] (0.5+4,1.5) -- (1.5+4,1.5);
\draw [line width = 4pt] (1.5+4,0.5) -- (1.5+4,1.5);
\end{tikzpicture}
\end{center}
\caption{Visual representation of an L-spin.}
\label{fig:L-spin-visual}
\end{figure}
If some of $a,b,c$ happen to lie in the same box, we can still perform an operation of the above form. We call such a move a \emph{degenerate L-spin}, since the visual representation is somewhat different -- one of the dots in a box with two dots moves to a different non-empty box in the same column/row. This is shown in \Cref{fig:L-spin-degenerate}.
\begin{figure}[h!]
\begin{center}
\def\nx{2} \def\ny{2}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.33,1.5) circle (3.5pt);
\draw [fill = black] (0.33,0.5) circle (3.5pt);
\draw [fill = black] (0.66,0.5) circle (3.5pt);
\draw [] (0.66,1.5) circle (3.5pt);
\draw [line width = 4pt] (0.33,0.5) -- (0.66,0.5);
\draw [line width = 4pt] (0.33,1.5) -- (0.33,0.5);
\draw [->, line width = 2pt] (2.5,1) -- (3.5,1);
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0+4,\y) -- (\nx+4,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x+4,0) -- (\x+4,\ny); }
\draw [fill = black] (0.33+4,1.5) circle (3.5pt);
\draw [fill = black] (0.33+4,0.5) circle (3.5pt);
\draw [] (0.66+4,0.5) circle (3.5pt);
\draw [fill = black] (0.66+4,1.5) circle (3.5pt);
\draw [line width = 4pt] (0.33+4,0.5) -- (0.33+4,1.5);
\draw [line width = 4pt] (0.33+4,1.5) -- (0.66+4,1.5);
\end{tikzpicture}
\end{center}
\caption{Visual representation of a degenerate L-spin.}
\label{fig:L-spin-degenerate}
\end{figure}
There is a similar, but more powerful, construction if the chessboard containing the configuration is self-inverse: it requires a configuration of only two elements instead of three.
\begin{lem} Suppose that the element $a$ has $H$-exponent $2$ and that $b\in aH$. Then, the element $b^{-1}ab$ belongs to the row of $b$ and the column of $b^{-1}$. Furthermore, we can make a sequence of Nielsen moves transforming $\{a,b\}$ to $\{b^{-1}ab,b\}$. \label{lem:loop-shift} \end{lem}
\begin{proof} The configuration is presented in \Cref{fig:inverse-L-spin-set-up}. Firstly, if $a$ is of exponent $2$, then $a^2\in H$ and so $a^2H = H$, which implies $aH=a^{-1}H$. Similarly for the right cosets: $Ha = Ha^{-1}$. Also, $a\notin H$, so the box of $a$ is definitely outside $H$.
Looking at the element $b^{-1}ab$, we see that it belongs to $b^{-1}H$ since $a^{-1}$ and $b$ lie in the same column; it also belongs to $Hb$ as $a$ and $b$ lie in the same column. Then we can perform the Nielsen transformations:
$$\{a,b\} \mapsto \{b^{-1} a,b\} \mapsto \{b^{-1}ab,b\}$$
\end{proof}
\begin{figure}
\begin{center}
\def\nx{2} \def\ny{2}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.33,1.5) circle (3.5pt);
\draw [color = black] (0.33-0.16,1.5-0.2) node {$a$};
\draw [] (0.66,1.5) circle (3.5pt);
\draw [] (0.66+0.16,1.5-0.23) node {\underline{$a^{-1}$}};
\draw [] (1.5,1.5) circle (3.5pt);
\draw [] (1.5-0.16,1.5-0.23) node {\underline{$b^{-1}$}};
\draw [fill = black] (0.5,0.5) circle (3.5pt);
\draw [color = black] (0.5-0.16,0.5-0.16) node {$b$};
\draw [] (1.5,0.5) circle (3.5pt);
\draw [] (1.5,0.5-0.23) node {\underline{$b^{-1}ab$}};
\draw [->, line width = 2pt] (2.5,1) -- (3.5,1);
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0+4,\y) -- (\nx+4,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x+4,0) -- (\x+4,\ny); }
\draw [] (0.33+4,1.5) circle (3.5pt);
\draw [] (0.66+4,1.5) circle (3.5pt);
\draw [] (1.5+4,1.5) circle (3.5pt);
\draw [fill = black] (0.5+4,0.5) circle (3.5pt);
\draw [fill = black] (1.5+4,0.5) circle (3.5pt);
\end{tikzpicture}
\end{center}
\caption{A Nielsen transformation performed in a self-inverse chessboard.}
\label{fig:inverse-L-spin-set-up}
\end{figure}
\subsection{Non self-inverse chessboards} \label{sec:non-self-inverse-chessboards}
In this section we investigate subgroups which possess some non self-inverse chessboards.
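As a concrete illustration that non self-inverse chessboards do occur, one can compute double cosets directly. The following Python sketch (our own illustration; the choices $G=\mathbb{Z}/12\mathbb{Z}$ and $H=\{0,6\}$ are arbitrary, and in an abelian group written additively $HgH = H+g$) checks that the chessboard $H+g$ is self-inverse precisely when $2g\in H$.

```python
# Chessboards HgH in the abelian test case G = Z/12, H = {0, 6}.
# Additively, HgH = H + g, and the inverse chessboard Hg^{-1}H is H - g.
n = 12
H = {0, 6}

def chessboard(g):
    return frozenset((h1 + g + h2) % n for h1 in H for h2 in H)

# Self-inverse means the chessboard equals the chessboard of -g,
# which for this abelian G amounts to the condition 2g in H.
for g in range(n):
    self_inverse = chessboard(g) == chessboard((-g) % n)
    assert self_inverse == ((2 * g) % n in H)

# For instance H+1 = {1, 7} and H-1 = {5, 11} are disjoint, so the
# chessboard of 1 is not self-inverse.
assert chessboard(1).isdisjoint(chessboard(11))
```

In this example the chessboards $H$ and $\{3,9\}$ are self-inverse, while the remaining four chessboards form two inverse pairs.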
Recall \Cref{prop: left-clean} which says that a generating set of size equal to $[G:H]$ can be Nielsen transformed to a full left transversal. \begin{defn} Let $S$ be a left transversal for $H$ (which is a set of elements in distinct columns). Given $g \in G$, we define a \emph{$g$-section of $S$} to be the set $S_g := HgH \cap S$. \end{defn} The reason we have defined $g$-sections is that if $HgH$ isn't a self-inverse chessboard (i.e. $HgH \neq Hg^{-1}H$), given a left transversal $S$ we can look at the set $S_{g^{-1}}^{-1}$, interpreted as $(S_{g^{-1}})^{-1}$. Its elements are inverses of the elements of the $g^{-1}$-section $S_{g^{-1}}$ of $S$. Then, since $S_{g^{-1}}$ is left-diagonal (i.e. no two elements in the same left coset), the set $S_{g^{-1}}^{-1}$ is a right-diagonal set inside $HgH$. The following definition will be useful in the proof of the next lemma, as well as in later sections. \begin{defn} Given a configuration $S$ we define the \emph{graph of $S$} to be a graph whose vertices are elements of $S$ (with multiplicity). The edges are defined in the following way: for each row put an edge between every pair of elements lying in that row (call these \emph{horizontal edges}) and for each column put an edge between every pair of elements lying in that column (call them \emph{vertical edges}). We denote the graph by $\Sigma_S$. An example of how such a graph is created can be found in \Cref{fig:graph-example}. \end{defn} \begin{lem} \label{lem: solvable} Let $HgH \ne Hg^{-1}H$. Let $S$ be a multiset of elements in $HgH$ such that each row and each column contains exactly two elements of $S$. Then we can partition $S$ into two multisets $A$ and $B$ such that $A \cup B^{-1}$ is a left-right diagonal set with respect to $H \leq G$. 
\end{lem}
\begin{figure}
\begin{center}
\def\nx{4} \def\ny{4}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.5,3.5) circle (3.5pt);
\draw [fill = black] (1.5,3.5) circle (3.5pt);
\draw [fill = black] (1.5,2.5) circle (3.5pt);
\draw [fill = black] (1.5,0.5) circle (3.5pt);
\draw [fill = black] (2.5,2.5) circle (3.5pt);
\draw [fill = black] (3.33,1.5) circle (3.5pt);
\draw [fill = black] (3.66,1.5) circle (3.5pt);
\draw [color = black] (0.5-0.16,3.5-0.16) node {$v_1$};
\draw [color = black] (1.5-0.16,3.5-0.16) node {$v_2$};
\draw [color = black] (1.5-0.16,2.5-0.16) node {$v_3$};
\draw [color = black] (1.5-0.16,0.5-0.16) node {$v_4$};
\draw [color = black] (2.5-0.16,2.5-0.16) node {$v_5$};
\draw [color = black] (3.33-0.16,1.5-0.16) node {$v_6$};
\draw [color = black] (3.66+0.16,1.5-0.16) node {$v_7$};
\draw [line width=2pt, color=black] (0.5+6-0.5-1,3.5) -- (1.5+6-1,3.5);
\draw [line width=2pt, dashdotted] (1.5+6-1,3.5) -- (1.5+6-0.5-1,2.5);
\draw [line width=2pt, dashdotted] (1.5+6-1,3.5) -- (1.5+6-1,0.5);
\draw [line width=2pt, dashdotted] (1.5+6-0.5-1,2.5) -- (1.5+6-1,0.5);
\draw [line width=2pt, color=black] (1.5+6-0.5-1,2.5) -- (2.5+6-1,2.5);
\draw [line width=2pt, color=black] (3.33+6-1,1.5) .. controls (3.33+6+0.2-1,1.5+0.2) and (3.66+6.5-0.2-1,1.5+0.2) .. (3.66+6.5-1,1.5);
\draw [line width=2pt, dashdotted] (3.33+6-1,1.5) .. controls (3.33+6+0.2-1,1.5-0.2) and (3.66+6.5-0.2-1,1.5-0.2) .. (3.66+6.5-1,1.5);
\draw [fill = black] (0.5+6-0.5-1,3.5) circle (3.5pt);
\draw [color = black] (0.5-0.16+6-0.5-1,3.5-0.16) node {$v_1$};
\draw [fill = black] (1.5+6-1,3.5) circle (3.5pt);
\draw [color = black] (1.5+0.16+6-1,3.5-0.16) node {$v_2$};
\draw [fill = black] (1.5+6-0.5-1,2.5) circle (3.5pt);
\draw [color = black] (1.5-0.16+6-0.5-1,2.5-0.16) node {$v_3$};
\draw [fill = black] (1.5+6-1,0.5) circle (3.5pt);
\draw [color = black] (1.5-0.16+6-1,0.5-0.16) node {$v_4$};
\draw [fill = black] (2.5+6-1,2.5) circle (3.5pt);
\draw [color = black] (2.5+0.16+6-1,2.5-0.16) node {$v_5$};
\draw [fill = black] (3.33+6-1,1.5) circle (3.5pt);
\draw [color = black] (3.33-0.16+6-1,1.5-0.16) node {$v_6$};
\draw [fill = black] (3.66+6.5-1,1.5) circle (3.5pt);
\draw [color = black] (3.66+0.16+6.5-1,1.5-0.16) node {$v_7$};
\end{tikzpicture}
\end{center}
\caption{An example of $\Sigma_{S}$. The vertical edges are dashed, the horizontal ones are solid.}
\label{fig:graph-example}
\end{figure}
\begin{proof} Firstly, the graph $\Sigma_{S}$ is a union of disjoint cycles. This is because every column and every row of $HgH$ contains exactly two elements of $S$, so for every element of $S$ there exists precisely one other element in the same column and precisely one other element in the same row. Any graph in which every vertex is of degree $2$ is a union of disjoint cycles. Now, since each column and row contains only two elements, two consecutive edges can't be both horizontal or both vertical. Thus, all of the cycles of $\Sigma_{S}$ are of even length, so the graph is bipartite. Take $A$ and $B$ to be the sets of elements of $S$ corresponding to the two parts of the bipartition of $\Sigma_{S}$. In such a setting, in each of the rows and columns there is exactly one element of $A$ and one element of $B$, which means that both $A$ and $B$ are left-right diagonal sets.
This in turn means that $A\cup B^{-1}$ is a left-right diagonal set: $A \subset HgH$ and $B^{-1} \subset Hg^{-1}H$, and because of the assumption that $HgH \neq Hg^{-1}H$, we have $HgH \cap Hg^{-1}H = \emptyset$. \end{proof}
This gives the following corollary.
\begin{cor} Suppose that $G$ has no self-inverse chessboards with respect to $H$ other than $H$ itself, and that $S$ is a left transversal such that for each $g \in G$ not in $H$, $S' = S_g \cup S_{g^{-1}}^{-1}$ satisfies the following condition: each row and column of $HgH$ contains exactly two elements of $S'$. Then, $S$ is Nielsen equivalent to a left-right transversal for $H$. \label{cor:trans-solv} \end{cor}
\begin{proof} Let $S$ be as in the statement of the corollary. For each pair of inverse chessboards $HgH, Hg^{-1}H$, we can move all of the elements in these chessboards to just one of them using the inversion map, transforming $S_g\cup S_{g^{-1}}$ to $S_g \cup S_{g^{-1}}^{-1}$, which by the assumption is a configuration with two elements in each row and two elements in each column. By \Cref{lem: solvable} this can be partitioned into two multisets $A$ and $B$ such that $A\cup B^{-1}$ is diagonal with respect to $H$. After transforming $S_{g}\cup S_{g^{-1}}$ to $S_g \cup S_{g^{-1}}^{-1} = A\cup B$, and then to $A\cup B^{-1}$ by inverting the elements of $B$, we obtain a diagonal set in the two chessboards $HgH$ and $Hg^{-1}H$. Doing this for each pair of inverse chessboards, we get a full left-right diagonal set (i.e. a left-right transversal). \end{proof}
\subsubsection{Normal form of configurations} \label{sec:normal-form}
In this section we will present a normal form of a configuration in a chessboard, meaning a representative of the equivalence class of configurations obtainable from a given one by performing L-spins. We start with a definition.
\begin{defn} Let $S$ be a configuration. The connected components of its graph $\Sigma_S$ are called \emph{connected components} (or just \emph{components}) of the configuration.
We call a configuration \emph{connected} if it has only one connected component. \end{defn}
It is clear that two elements of the configuration lying in the same row or column are in the same connected component, so the sets of rows and columns are partitioned as $R=\cup_{i=1}^k R_i$, $C=\cup_{i=1}^k C_i$ (where $R_i$ is a set of rows for each $i$, similarly $C_i$ is a set of columns) with:
\begin{itemize}
\item The elements of $S$ lying in $R_i\times C_i$ form a connected component.
\item There are no elements in $R_i\times C_j$ for $i\neq j$,
\end{itemize}
where by $R_i\times C_j$ we mean the union of boxes lying in the rows belonging to $R_i$ and at the same time lying in the columns belonging to $C_j$. We can perform permutations on the rows and columns to get a situation where the $R_i$'s and $C_j$'s consist of consecutive rows and columns and the top-left boxes of $R_i\times C_i$ are not empty. An example showing the connected components is in \Cref{fig:connected-components-example}; there, the partition is as follows.
$$R_1 = \set{1,2}, \ R_2 = \set{3,4}, \ R_3 = \set{5}, \ C_1 = \set{1}, \ C_2 = \set{2,3,4}, \ C_3 = \set{5}$$
\begin{figure}[h!]
\begin{center}
\def\nx{5} \def\ny{5}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.5,4.5) circle (3.5pt);
\draw [fill = black] (0.33,3.5) circle (3.5pt);
\draw [fill = black] (0.66,3.5) circle (3.5pt);
\draw [fill = black] (1.5,2.5) circle (3.5pt);
\draw [fill = black] (2.5,2.5) circle (3.5pt);
\draw [fill = black] (3.5,2.5) circle (3.5pt);
\draw [fill = black] (2.5,1.5) circle (3.5pt);
\draw [fill = black] (4.5,0.5) circle (3.5pt);
\draw [line width = 3pt] (0,5) -- (0,3) -- (1,3) -- (1,5) -- (0,5);
\draw [line width = 3pt] (1,3) -- (1,1) -- (4,1) -- (4,3) -- (1,3);
\draw [line width = 3pt] (4,1) -- (4,0) -- (5,0) -- (5,1) -- (4,1);
\end{tikzpicture}
\end{center}
\caption{Example of a configuration's connected components.}
\label{fig:connected-components-example}
\end{figure}
The following is an important observation.
\begin{prop} \label{prop:connectedness L-spins} L-spins don't change the connectedness of configurations in a chessboard. \end{prop}
\begin{proof} We use the notation as in \Cref{fig:L-spin set-up}. In the case of a degenerate L-spin the result is clear, since the non-empty boxes before the L-spin are precisely those that are non-empty after the L-spin. In the case of a non-degenerate L-spin, note that the image of $c$ lies in the same column as $a$. Thus if our configuration were connected initially, the component containing the image of $c$ contains both $a$ and $b$, and so it remains connected. \end{proof}
We can therefore focus our attention on the connected components. We want to transform a given configuration into a particularly ordered one. Let's consider a single connected component $D$, and let $s$ be one of its elements. By permuting the columns and rows, we can assume that $s$ is in the top left box.
Then, if in the graph $\Sigma_D$ of $D$ there are any vertices of distance $2$ from $s$, we can transform $D$ by an L-spin, decreasing that distance to $1$, as shown in \Cref{fig:move-closer}.
\begin{figure}[h!]
\begin{center}
\def\nx{2} \def\ny{2}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.5,1.5) circle (3.5pt);
\draw (0.5-0.2,1.5-0.2) node {$s$};
\draw [fill = black] (1.5,1.5) circle (3.5pt);
\draw [fill = black] (1.5,0.5) circle (3.5pt);
\draw [->, line width = 2pt] (2.5,1) -- (3.5,1);
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0+4,\y) -- (\nx+4,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x+4,0) -- (\x+4,\ny); }
\draw [fill = black] (0.5+4,1.5) circle (3.5pt);
\draw (0.5-0.2+4,1.5-0.2) node {$s$};
\draw [fill = black] (0.5+4,0.5) circle (3.5pt);
\draw [fill = black] (1.5+4,1.5) circle (3.5pt);
\end{tikzpicture}
\end{center}
\caption{Performing an L-spin to move an element closer to the top left corner.}
\label{fig:move-closer}
\end{figure}
As the process doesn't increase the distance from $s$ of any element, applying this repeatedly we can get to a point where all elements are of distance at most $1$ from $s$ in $\Sigma_D$. Then, if there is a box, other than the top-left corner, such that there are at least two elements in it, we perform a degenerate L-spin to put the additional element in the top-left corner, as shown in \Cref{fig:normal-form-final-step}.
\begin{figure}[h!]
\begin{center}
\def\nx{2} \def\ny{2}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.5,1.5) circle (3.5pt);
\draw (0.5-0.2,1.5-0.2) node {$s$};
\draw [fill = black] (1.33,1.5) circle (3.5pt);
\draw [fill = black] (1.66,1.5) circle (3.5pt);
\draw [->, line width = 2pt] (2.5,1) -- (3.5,1);
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0+4,\y) -- (\nx+4,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x+4,0) -- (\x+4,\ny); }
\draw [fill = black] (0.33+4,1.5) circle (3.5pt);
\draw (0.33-0.2+4,1.5-0.2) node {$s$};
\draw [fill = black] (0.66+4,1.5) circle (3.5pt);
\draw [fill = black] (1.5+4,1.5) circle (3.5pt);
\end{tikzpicture}
\end{center}
\caption{Using a degenerate L-spin to move multiple elements into the corner box.}
\label{fig:normal-form-final-step}
\end{figure}
This leads us to define the normal form of a connected configuration.
\begin{defn}\label{defn:config-nf} A connected configuration $S$ is said to be in its \emph{normal form} if all the elements of $S$ are in the first row/column, with at most one element in each such box, aside from potentially the top left corner box. We also enforce that the non-empty boxes all lie next to each other (i.e. permute rows/columns such that there are no gaps). \end{defn}
An example of a normal form is given in \Cref{fig:normal-form-example}.
\begin{figure}[h!]
\begin{center}
\def\nx{3} \def\ny{3}
\begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,\ny} { \draw [line width = 1pt] (0,\y) -- (\nx,\y); }
\foreach \x in {0,1,...,\nx} { \draw [line width = 1pt] (\x,0) -- (\x,\ny); }
\draw [fill = black] (0.25,2.5) circle (3.5pt);
\draw [fill = black] (0.5,2.5) circle (3.5pt);
\draw [fill = black] (0.75,2.5) circle (3.5pt);
\draw [fill = black] (1.5,2.5) circle (3.5pt);
\draw [fill = black] (2.5,2.5) circle (3.5pt);
\draw [fill = black] (0.5,1.5) circle (3.5pt);
\end{tikzpicture}
\end{center}
\caption{An example of a normal form.}
\label{fig:normal-form-example}
\end{figure}
Finally, we introduce the following.
\begin{defn} Two configurations are said to be \emph{L-spin equivalent} if there exists a series of L-spins, and possibly permutations of rows and columns, transforming one configuration into the other. \end{defn}
The next immediate result helps us in classifying configurations.
\begin{prop} \label{prop:L-spin form determined} Let $C$ and $D$ be two L-spin equivalent configurations. Then they have the same:
\begin{enumerate}
\item number of elements,
\item number of connected components,
\item numbers of rows and columns.
\end{enumerate}
\end{prop}
\begin{proof} 1) is immediate from the definition of an L-spin. 2) is a consequence of \Cref{prop:connectedness L-spins}. 3) comes from the fact that an L-spin does not change the set of non-empty rows and columns: the new element $ab^{-1}c$ lies in the column of $a$ and the row of $c$, and in each case the row and column of the replaced element remain occupied, either by one of the other two elements or by the new element. \end{proof}
\begin{prop} Every connected configuration is L-spin equivalent to a unique normal form. \end{prop}
\begin{proof} In the course of defining normal forms, we showed that every connected configuration is L-spin equivalent to some normal form.
On the other hand, a normal form is determined by the number of its columns, the number of its rows and the number of its elements, none of which is changed by L-spins (by \Cref{prop:L-spin form determined}) or by permutation of rows or columns. \end{proof}
\subsubsection{Solvable configurations} \label{sec:solvable_non-self-inverse}
In light of \Cref{lem: solvable}, we make the following definition.
\begin{defn} A configuration is called \emph{solvable} if it is L-spin equivalent to a configuration with exactly two elements in each row and in each column. \end{defn}
The term `solvable' is motivated by the observation in \Cref{lem: solvable} that if the left transversal $S$ has the property that for each $g \in G$ the multiset $S_g \cup S_{g^{-1}}^{-1}$ is in a solvable configuration, then $S$ can be Nielsen transformed to a left-right transversal. The normal forms provide a useful criterion for determining when a configuration is solvable.
\begin{prop} \label{prop:solv} A connected configuration $S$ of $2n$ elements is solvable if and only if it has exactly $n$ rows and $n$ columns. In general, a configuration is solvable if and only if each of its connected components is solvable. \end{prop}
From hereon we call connected solvable components \textit{square}.
\begin{proof} The `only if' direction is due to the fact that if the number of elements in every row and in every column is $2$, then the total number of elements is equal to twice the number of rows and equal to twice the number of columns (which thus have to be equal). For the `if' direction, we will give an explicit transformation, built from L-spins of the type shown in \Cref{fig:flip}, each moving an element from the top left corner of a $2\times 2$ square to its bottom right corner. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,2} { \draw [line width = 1pt] (0,\y) -- (2,\y); }
\foreach \x in {0,1,...,2} { \draw [line width = 1pt] (\x,0) -- (\x,2); }
\draw [fill = black] (0.5,1.5) circle (3.5pt); \draw [fill = black] (1.5,1.5) circle (3.5pt); \draw [fill = black] (0.5,0.5) circle (3.5pt); \draw [->, line width = 2pt] (2.5,1) -- (3.5,1);
\foreach \y in {0,1,...,2} { \draw [line width = 1pt] (0+4,\y) -- (2+4,\y); }
\foreach \x in {0,1,...,2} { \draw [line width = 1pt] (\x+4,0) -- (\x+4,2); }
\draw [fill = black] (1.5+4,0.5) circle (3.5pt); \draw [fill = black] (0.5+4,0.5) circle (3.5pt); \draw [fill = black] (1.5+4,1.5) circle (3.5pt);
\end{tikzpicture} \end{center} \caption{The only type of L-spin used in the construction.} \label{fig:flip} \end{figure}
For the descriptions of transformations we assume the convention that the top left corner is a box with coordinates $(1,1)$. In the first move we apply the described L-spin to move one element from box $(1,1)$ to box $(2,2)$ (we use the fact that the boxes $(1,2)$ and $(2,1)$ contain some elements). In the $i^{th}$ move (where $2\leq i \leq n-1$), we move the elements in boxes $(1,i), (2, i-1), \ldots, (i,1)$ to boxes $(2,i+1), (3, i), \ldots, (i+1,2)$. Because there are elements in boxes $(1,i+1),(2,i), \ldots, (i+1,1)$ we can perform these L-spins. This is shown pictorially in \Cref{fig:solv}. The configuration that we get in the end has exactly two elements in every row and in every column, which is what we needed.
\begin{figure} \begin{center} \begin{tikzpicture}[line cap=round,line join=round,x=1.25cm,y=1.25cm]
\foreach \y in {0,1,...,4} { \draw [line width = 1pt] (0,\y) -- (4,\y); }
\foreach \x in {0,1,...,4} { \draw [line width = 1pt] (\x,0) -- (\x,4); }
\draw [fill = black] (0.33,3.5) circle (3.5pt); \draw [fill = black] (0.66,3.5) circle (3.5pt); \draw [fill = black] (0.5,2.5) circle (3.5pt); \draw [fill = black] (0.5,1.5) circle (3.5pt); \draw [fill = black] (0.5,0.5) circle (3.5pt); \draw [fill = black] (1.5,3.5) circle (3.5pt); \draw [fill = black] (2.5,3.5) circle (3.5pt); \draw [fill = black] (3.5,3.5) circle (3.5pt);
\draw [dashed, ->, line width = 1.5pt, color = black] (0.33,3.5) .. controls (0.33,3.5-0.5) and (1.5-0.5,2.5) .. (1.5,2.5);
\draw [->, line width = 2pt] (4.5,2) -- (5.5,2);
\foreach \y in {0,1,...,4} { \draw [line width = 1pt] (0+6,\y) -- (4+6,\y); }
\foreach \x in {0,1,...,4} { \draw [line width = 1pt] (\x+6,0) -- (\x+6,4); }
\draw [fill = black] (0.5+6,3.5) circle (3.5pt); \draw [fill = black] (1.5+6,2.5) circle (3.5pt); \draw [fill = black] (0.5+6,2.5) circle (3.5pt); \draw [fill = black] (0.5+6,1.5) circle (3.5pt); \draw [fill = black] (0.5+6,0.5) circle (3.5pt); \draw [fill = black] (1.5+6,3.5) circle (3.5pt); \draw [fill = black] (2.5+6,3.5) circle (3.5pt); \draw [fill = black] (3.5+6,3.5) circle (3.5pt);
\draw [dashed, ->, line width = 1.5pt, color = black] (0.5+6,2.5) .. controls (0.5+6,2.5-0.5) and (1.5+6-0.5,1.5) .. (1.5+6,1.5);
\draw [dashed, ->, line width = 1.5pt, color = black] (1.5+6,3.5) .. controls (1.5+6,3.5-0.5) and (2.5+6-0.5,2.5) ..
(2.5+6,2.5);
\draw [->, line width = 2pt] (4.5+6,2) -- (5.5+6,2); \draw [->, line width = 2pt] (-1.5,2-5) -- (-0.5,2-5);
\foreach \y in {0,1,...,4} { \draw [line width = 1pt] (0,\y-5) -- (4,\y-5); }
\foreach \x in {0,1,...,4} { \draw [line width = 1pt] (\x,0-5) -- (\x,4-5); }
\draw [fill = black] (0.5,3.5-5) circle (3.5pt); \draw [fill = black] (1.5,1.5-5) circle (3.5pt); \draw [fill = black] (2.5,2.5-5) circle (3.5pt); \draw [fill = black] (0.5,1.5-5) circle (3.5pt); \draw [fill = black] (0.5,0.5-5) circle (3.5pt); \draw [fill = black] (1.5,2.5-5) circle (3.5pt); \draw [fill = black] (2.5,3.5-5) circle (3.5pt); \draw [fill = black] (3.5,3.5-5) circle (3.5pt);
\draw [dashed, ->, line width = 1.5pt, color = black] (0.5,1.5-5) .. controls (0.5,1.5-5-0.5) and (1.5-0.5,0.5-5) .. (1.5,0.5-5);
\draw [dashed, ->, line width = 1.5pt, color = black] (1.5,2.5-5) .. controls (1.5,2.5-5-0.5) and (2.5-0.5,1.5-5) .. (2.5,1.5-5);
\draw [dashed, ->, line width = 1.5pt, color = black] (2.5,3.5-5) .. controls (2.5,3.5-5-0.5) and (3.5-0.5,2.5-5) ..
(3.5,2.5-5);
\draw [->, line width = 2pt] (4.5,2-5) -- (5.5,2-5);
\foreach \y in {0,1,...,4} { \draw [line width = 1pt] (0+6,\y-5) -- (4+6,\y-5); }
\foreach \x in {0,1,...,4} { \draw [line width = 1pt] (\x+6,0-5) -- (\x+6,4-5); }
\draw [fill = black] (0.5+6,3.5-5) circle (3.5pt); \draw [fill = black] (0.5+6,0.5-5) circle (3.5pt); \draw [fill = black] (1.5+6,0.5-5) circle (3.5pt); \draw [fill = black] (1.5+6,1.5-5) circle (3.5pt); \draw [fill = black] (2.5+6,1.5-5) circle (3.5pt); \draw [fill = black] (2.5+6,2.5-5) circle (3.5pt); \draw [fill = black] (3.5+6,2.5-5) circle (3.5pt); \draw [fill = black] (3.5+6,3.5-5) circle (3.5pt);
\end{tikzpicture} \end{center} \caption{A sequence of L-spins transforming the normal form to a solvable configuration.} \label{fig:solv} \end{figure} \end{proof}
\subsection{Self-inverse chessboards} \label{sec:self-inverse-chessboards}
\begin{rem}\label{rem:self-inverse} In the case of a self-inverse chessboard $HgH=Hg^{-1}H$ we have a very different setting --- the inversion map won't enable us to move more elements into that chessboard. However, it gives us an additional operation in a given chessboard, apart from the L-spin. Note that without the inversion, when starting with a left transversal, we couldn't perform any L-spins --- this is because every column contains a single element. \end{rem}
\subsubsection{Inverse-dual graphs}
We now give a convenient way of describing what happens with self-inverse chessboards. It turns out that again graphs are useful for this.
\begin{defn} Let $HgH = Hg^{-1}H$ and let $S$ be a configuration in this chessboard. According to \Cref{rem:self-inverse}, let the columns be $a_1H,a_2H,\ldots, a_nH$ and let the rows be $Ha_1^{-1},Ha_2^{-1}, \ldots, Ha_n^{-1}$. Then, we create a directed graph $\Theta_S$ as follows. Take $n$ vertices $v_1, \ldots, v_n$ and for each element $s$ of $S$, put a directed edge $v_iv_j$ if $s$ lies in the box of coordinates $(i,j)$ (that is, $s \in a_jH \cap Ha_i^{-1}$).
We call this graph the \emph{inverse-dual graph} for a configuration $S$, reflecting the fact that, in contrast to the previous graph of a configuration $\Sigma_S$, in $\Theta_S$ the elements of a configuration are represented with edges, not vertices (hence inverse-\textit{dual}); also, the graph is defined in the context of self-inverse chessboards (hence \textit{inverse}-dual). \end{defn}
The construction for such a graph is shown in \Cref{fig:inverse-graph-example}. It is worth noting that the inverse-dual graph can contain both parallel edges (corresponding to elements lying in the same box) and loops (corresponding to elements lying on the diagonal).
\begin{figure}[h!] \begin{center} \begin{tikzpicture}[line cap=round,line join=round,x=1.5cm,y=1.5cm]
\foreach \y in {0,1,...,4} { \draw [line width = 1pt] (0,\y) -- (4,\y); }
\foreach \x in {0,1,...,4} { \draw [line width = 1pt] (\x,0) -- (\x,4); }
\draw [fill = black] (0.5,3.5) circle (3.5pt); \draw [fill = black] (1.5,3.5) circle (3.5pt); \draw [fill = black] (1.5,2.5) circle (3.5pt); \draw [fill = black] (1.5,0.5) circle (3.5pt); \draw [fill = black] (2.5,2.5) circle (3.5pt); \draw [fill = black] (3.33,1.5) circle (3.5pt); \draw [fill = black] (3.66,1.5) circle (3.5pt);
\draw [color = black] (0.5-0.16,3.5-0.16) node {$a$}; \draw [color = black] (1.5-0.16,3.5-0.16) node {$b$}; \draw [color = black] (1.5-0.16,2.5-0.16) node {$c$}; \draw [color = black] (1.5-0.16,0.5-0.16) node {$d$}; \draw [color = black] (2.5-0.16,2.5-0.16) node {$e$}; \draw [color = black] (3.33-0.16,1.5-0.16) node {$f$}; \draw [color = black] (3.66+0.16,1.5-0.16) node {$g$};
\draw [fill = black] (1+5,1) circle (3.5pt); \draw [color = black] (1+5-0.2,1-0.2) node {$v_3$}; \draw [fill = black] (3+5,1) circle (3.5pt); \draw [color = black] (3+5+0.2,1-0.2) node {$v_4$}; \draw [fill = black] (1+5,3) circle (3.5pt); \draw [color = black] (1+5-0.2,3-0.2) node {$v_1$}; \draw [fill = black] (3+5,3) circle (3.5pt);
\draw [color = black] (3+5+0.3,3-0.1) node {$v_2$}; \draw [latex-, line width = 2pt] (1+5,3) .. controls (1+5-0.5,3+0.5) and (1+5+0.5,3+0.5) .. (1+5,3); \draw [color = black] (1+5,3+0.5) node {$a$}; \draw [-latex, line width = 2pt] (1+5,3) .. controls (1+5+0.5,3) and (3+5-0.5,3) .. (3+5,3); \draw [color = black] (2+5,3+0.2) node {$b$}; \draw [-latex, line width = 2pt] (3+5,3) .. controls (3+5-0.5,3+0.5) and (3+5+0.5,3+0.5) .. (3+5,3); \draw [color = black] (3+5,3+0.5) node {$c$}; \draw [-latex, line width = 2pt] (3+5,3) .. controls (3+5-1,3-0.5) and (1+5+0.5,1+1) .. (1+5,1); \draw [color = black] (2+5-0.05,2+0.05) node {$e$}; \draw [-latex, line width = 2pt] (1+5,1) .. controls (1+5+0.5,1+0.5) and (3+5-0.3,1+0.3) .. (3+5,1); \draw [color = black] (2+5,1+0.5) node {$f$}; \draw [-latex, line width = 2pt] (1+5,1) .. controls (1+5+0.5,1-0.5) and (3+5-0.3,1-0.3) .. (3+5,1); \draw [color = black] (2+5,1-0.5) node {$g$}; \draw [-latex, line width = 2pt] (3+5,3) .. controls (3+5+0.5,3-0.5) and (3+5+0.3,1+0.3) .. (3+5,1); \draw [color = black] (3+5+0.1,2) node {$d$}; \end{tikzpicture} \end{center} \caption{A configuration $S$ with associated inverse-dual graph $\Theta_S$.} \label{fig:inverse-graph-example} \end{figure} Now, we want to know what the inversions and L-spins correspond to in the context of inverse-dual graphs. The inversion map turns out to be simple. \begin{prop} Let $S$ be a configuration in a self-inverse chessboard and $\Theta_S$ be its inverse-dual graph. Then, for $x\in S$, the Nielsen move $x \mapsto x^{-1}$ corresponds to changing the orientation of the edge corresponding to $x$. \end{prop} \begin{proof} Let $x$ be the edge $v_iv_j$, i.e. $x \in a_jH \cap Ha_i^{-1}$. Then $x^{-1} \in a_iH\cap Ha_j^{-1}$ and so it corresponds to the edge $v_jv_i$. \end{proof} Because of this, unless otherwise stated, from hereon we will consider our inverse-dual graphs to be undirected, since changing direction of an edge is one of the Nielsen moves. 
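The bookkeeping above is easy to model concretely. The following is a minimal sketch (our own illustration, not code from the paper; all function names are hypothetical): a configuration in a self-inverse chessboard is stored as a list of boxes $(i,j)$, the box $(i,j)$ contributes the directed edge $v_iv_j$ of $\Theta_S$, and the Nielsen move $x \mapsto x^{-1}$ reverses the corresponding edge.

```python
# Hypothetical sketch: a configuration in a self-inverse chessboard is a
# multiset of boxes (i, j); the box (i, j) holds an element of
# a_jH \cap Ha_i^{-1} and contributes the directed edge v_i -> v_j.

def inverse_dual_edges(boxes):
    """Each element in box (i, j) becomes the directed edge (i, j)."""
    return [(i, j) for (i, j) in boxes]

def invert_edge(edge):
    """The Nielsen move x -> x^{-1} sends box (i, j) to box (j, i),
    i.e. it reverses the corresponding edge; loops (i, i) are fixed."""
    i, j = edge
    return (j, i)

# The seven elements a, ..., g of the example configuration, read off
# box by box (edge orientations are immaterial once the inverse-dual
# graphs are taken to be undirected).
config = [(1, 1), (1, 2), (2, 2), (4, 2), (2, 3), (3, 4), (3, 4)]
edges = inverse_dual_edges(config)
```

Note that the repeated box $(3,4)$ produces parallel edges, and the boxes $(1,1)$ and $(2,2)$ on the diagonal produce loops, exactly as in the remark above.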
Now we want to understand what the L-spins correspond to. \begin{prop} \begin{enumerate} \item Let $a,b,c$ be elements of a configuration $S$ in a self-inverse chessboard $HgH$. Then: $a,b,c$ are in a configuration allowing an L-spin if and only if the three edges corresponding to these elements in the inverse-dual graph $\Theta_S$, when ordered appropriately satisfy: the first and the second go into the same vertex; the second and the third go out of the same vertex. \item If $a,b,c$ are indeed in a configuration allowing an L-spin, then an L-spin on $\set{a,b,c}$ corresponds to exchanging one of the edges with the directed edge from the vertex at the beginning of the first edge to the vertex at the end of the third edge, as shown in \Cref{fig:L-spin.0}. \end{enumerate} \end{prop} \begin{figure}[h!] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [-latex, line width=2pt] (0,-2)-- (0,0); \draw [latex-, line width=2pt] (0,0)-- (2,0); \draw [-latex, line width=2pt] (2,0)-- (2,-2); \draw [-latex, line width=2pt, dashed, color=black] (0,-2)-- (2,-2); \draw [fill=ttqqqq] (0,-2) circle (2.5pt); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [fill=ttqqqq] (2,-2) circle (2.5pt); \draw [color=black] (-0.3,-1) node {$a$}; \draw [color=black] (1,0.3) node {$b$}; \draw [color=black] (2.3,-1) node {$c$}; \draw [color=black] (1,-2.3) node {\underline{$ab^{-1}c$}}; \draw [color=black] (-0.2,0.2) node {$v_i$}; \draw [color=black] (2.2,0.2) node {$v_j$}; \end{tikzpicture} \end{center} \caption{A situation in which an L-spin can be performed.} \label{fig:L-spin.0} \end{figure} \begin{proof} The elements $a,b,c$ are in a configuration allowing an L-spin if (in the context of a chessboard) two of them lie in the same column and two of them lie in the same row. 
Without loss of generality, suppose that $b, c$ lie in the same column $a_iH$ and $a, b$ in the same row $Ha_j^{-1}$ (like the situation in \Cref{fig:L-spin set-up}). This gives the situation shown in \Cref{fig:L-spin.0}, with the labels $a$ and $c$ interchanged there --- the edges $b$ and $c$ both go into the vertex $v_i$, while the edges $a$ and $b$ both go out of the vertex $v_j$. This proves the first part of the proposition. For the second part, note that the new element created in the L-spin is $ab^{-1}c$, lying in the same row as $c$ and the same column as $a$. These correspond to the vertices at the beginning and the end of the first and third edge, respectively. Thus, the element created in the L-spin corresponds to an edge from the beginning of the first edge to the end of the third edge (drawn dashed in \Cref{fig:L-spin.0}). \end{proof}
\Cref{fig:L-spin.1,fig:L-spin.2,fig:L-spin.3,fig:L-spin.4} on page~\pageref{fig:L-spin.1} show what the L-spins correspond to in the inverse-dual graph in situations differing by whether or not $\Theta_S$ involves parallel edges (i.e. two elements in the same box) and whether or not $\Theta_S$ involves loops (elements on the diagonal). The Nielsen moves are exactly the same as described above; the difference is purely visual.
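For readers who like to experiment, the effect of an L-spin on the inverse-dual graph can be sketched in a few lines (our own illustrative modelling with hypothetical names, not code from the paper): directed edges are pairs (tail, head); three edges admit an L-spin when, suitably ordered, the first and second share their head and the second and third share their tail, and the spin exchanges one of them for the edge running from the tail of the first to the head of the third.

```python
# Hypothetical sketch of an L-spin on the inverse-dual graph.
# Directed edges are (tail, head) pairs; vertices may be any labels.

def l_spin(e1, e2, e3, replace):
    """Perform an L-spin on the walk e1, e2, e3, where e1 and e2 share
    their head and e2 and e3 share their tail.  The edge in position
    `replace` (0, 1 or 2) is exchanged for the new edge, which runs
    from the tail of e1 to the head of e3."""
    (x, q1), (r1, q2), (r2, y) = e1, e2, e3
    if q1 != q2 or r1 != r2:
        raise ValueError("these edges do not admit an L-spin in this order")
    new = (x, y)
    return [new if k == replace else e for k, e in enumerate((e1, e2, e3))]

# Exchanging the middle edge of a triangle for a loop: this is how an
# edge between two neighbours of a looped vertex v0 can be turned into
# an extra loop at v0.
triangle = [("v0", "v1"), ("v2", "v1"), ("v2", "v0")]
after = l_spin(*triangle, replace=1)
# after == [("v0", "v1"), ("v0", "v0"), ("v2", "v0")]
```

The `replace` parameter mirrors the proposition's "exchanging one of the edges": which of the three elements is rewritten depends on which Nielsen moves are applied, while the new edge itself is determined by the walk.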
\begin{figure}[p] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt] (0,-2)-- (0,0); \draw [line width=2pt] (0,0)-- (2,0); \draw [line width=2pt] (2,0)-- (2,-2); \draw [line width=2pt, dashed, color=black] (0,-2)-- (2,-2); \draw [fill=ttqqqq] (0,-2) circle (2.5pt); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [fill=ttqqqq] (2,-2) circle (2.5pt); \draw [color=black] (-0.3,-1) node {$d_{1}$}; \draw [color=black] (1,0.3) node {$d_{2}$}; \draw [color=black] (2.3,-1) node {$d_{3}$}; \draw [color=black] (1,-2.3) node {\underline{$d_{4}$}}; \draw [<-, line width=2pt] (5,-1)-- (3,-1); \draw [line width=2pt, dashed, color=black] (6,-2)-- (6,0); \draw [line width=2pt] (6,0)-- (8,0); \draw [line width=2pt] (8,0)-- (8,-2); \draw [line width=2pt] (6,-2)-- (8,-2); \draw [fill=ttqqqq] (6,-2) circle (2.5pt); \draw [fill=ttqqqq] (6,0) circle (2.5pt); \draw [fill=ttqqqq] (8,0) circle (2.5pt); \draw [fill=ttqqqq] (8,-2) circle (2.5pt); \draw [color=black] (5.7,-1) node {\underline{$d_{1}$}}; \draw [color=black] (7,0.3) node {$d_{2}$}; \draw [color=black] (8.3,-1) node {$d_{3}$}; \draw [color=black] (7,-2.3) node {$d_{4}$}; \end{tikzpicture} \end{center} \caption{L-spin in $\Theta_S$.} \label{fig:L-spin.1} \end{figure} \begin{figure}[p] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt] (0,0)-- (2,0); \draw [line width=2pt] (2,0)-- (1,1.732); \draw [line width=2pt] (1,1.732)-- (0,0); \draw [line width=2pt, dashed, color=black](1,1.732) .. controls (1+1,1.732+1) and (1-1,1.732+1) .. 
(1,1.732); \draw [fill=ttqqqq] (1,1.732) circle (2.5pt); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [color=black] (0.5-0.3*1.732/2,1.732/2+0.3/2) node {$d_{1}$}; \draw [color=black] (1,-0.3) node {$d_{2}$}; \draw [color=black] (1.5+0.3*1.732/2,1.732/2+0.3/2) node {$d_{3}$}; \draw [color=black] (1,2.8) node {\underline{$d_{4}$}}; \draw [<-, line width=2pt] (5,1)-- (3,1); \draw [line width=2pt] (0+6,0)-- (2+6,0); \draw [line width=2pt] (2+6,0)-- (1+6,1.732); \draw [line width=2pt, dashed, color=black] (1+6,1.732)-- (0+6,0); \draw [line width=2pt, color=black](1+6,1.732) .. controls (1+1+6,1.732+1) and (1-1+6,1.732+1) .. (1+6,1.732); \draw [fill=ttqqqq] (1+6,1.732) circle (2.5pt); \draw [fill=ttqqqq] (0+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6,0) circle (2.5pt); \draw [color=black] (0.5-0.3*1.732/2+6,1.732/2+0.3/2) node {\underline{$d_{1}$}}; \draw [color=black] (1+6,-0.3) node {$d_{2}$}; \draw [color=black] (1.5+0.3*1.732/2+6,1.732/2+0.3/2) node {$d_{3}$}; \draw [color=black] (1+6,2.8) node {$d_{4}$}; \draw [<-, line width=2pt] (5+6,1)-- (3+6,1); \draw [line width=2pt, dashed, color=black] (0+12,0)-- (2+12,0); \draw [line width=2pt] (2+12,0)-- (1+12,1.732); \draw [line width=2pt] (1+12,1.732)-- (0+12,0); \draw [line width=2pt, color=black](1+12,1.732) .. controls (1+1+12,1.732+1) and (1-1+12,1.732+1) .. 
(1+12,1.732); \draw [fill=ttqqqq] (1+12,1.732) circle (2.5pt); \draw [fill=ttqqqq] (0+12,0) circle (2.5pt); \draw [fill=ttqqqq] (2+12,0) circle (2.5pt); \draw [color=black] (0.5-0.3*1.732/2+12,1.732/2+0.3/2) node {$d_{1}$}; \draw [color=black] (1+12,-0.3) node {\underline{$d_{2}$}}; \draw [color=black] (1.5+0.3*1.732/2+12,1.732/2+0.3/2) node {$d_{3}$}; \draw [color=black] (1+12,2.8) node {$d_{4}$}; \end{tikzpicture} \end{center} \caption{L-spin in $\Theta_S$ involving a loop.} \label{fig:L-spin.2} \end{figure} \begin{figure}[p] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt, color=black](0,0) .. controls (0+0.5,0+0.5) and (2-0.5,0+0.5) .. (2,0); \draw [line width=2pt, color=black](0,0) .. controls (0+0.5,0-0.5) and (2-0.5,0-0.5) .. (2,0); \draw [line width=2pt, color=black](2,0) .. controls (2+0.5,0+0.5) and (4-0.5,0+0.5) .. (4,0); \draw [line width=2pt, dashed, color=black](2,0) .. controls (2+0.5,0-0.5) and (4-0.5,0-0.5) .. (4,0); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [fill=ttqqqq] (4,0) circle (2.5pt); \draw [color=black] (1,0.6) node {$d_1$}; \draw [color=black] (1,-0.6) node {$d_2$}; \draw [color=black] (3,0.6) node {$d_3$}; \draw [color=black] (3,-0.6) node {\underline{$d_4$}}; \draw [<-, line width=2pt] (7,0)-- (5,0); \draw [line width=2pt, dashed, color=black](0+8,0) .. controls (0+0.5+8,0+0.5) and (2-0.5+8,0+0.5) .. (2+8,0); \draw [line width=2pt, color=black](0+8,0) .. controls (0+0.5+8,0-0.5) and (2-0.5+8,0-0.5) .. (2+8,0); \draw [line width=2pt, color=black](2+8,0) .. controls (2+0.5+8,0+0.5) and (4-0.5+8,0+0.5) .. (4+8,0); \draw [line width=2pt, color=black](2+8,0) .. controls (2+0.5+8,0-0.5) and (4-0.5+8,0-0.5) .. 
(4+8,0); \draw [fill=ttqqqq] (0+8,0) circle (2.5pt); \draw [fill=ttqqqq] (2+8,0) circle (2.5pt); \draw [fill=ttqqqq] (4+8,0) circle (2.5pt); \draw [color=black] (1+8,0.6) node {\underline{$d_1$}}; \draw [color=black] (1+8,-0.6) node {$d_2$}; \draw [color=black] (3+8,0.6) node {$d_3$}; \draw [color=black] (3+8,-0.6) node {$d_4$}; \end{tikzpicture} \end{center} \caption{L-spin in $\Theta_S$ involving parallel edges.} \label{fig:L-spin.3} \end{figure} \begin{figure}[p] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt, color=black](0,0) .. controls (0+0.5,0+0.5) and (2-0.5,0+0.5) .. (2,0); \draw [line width=2pt, color=black](0,0) .. controls (0+0.5,0-0.5) and (2-0.5,0-0.5) .. (2,0); \draw [line width=2pt, color=black](2,0) .. controls (2+0.5,0+0.5) and (2+0.5,0-0.5) .. (2,0); \draw [line width=2pt, dashed, color=black](0,0) .. controls (-0.5,-0.5) and (-0.5,+0.5) .. (0,0); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [color=black] (1,0.6) node {$d_1$}; \draw [color=black] (1,-0.6) node {$d_2$}; \draw [color=black] (2.7,0) node {$d_3$}; \draw [color=black] (-0.7,0) node {\underline{$d_4$}}; \draw [<-, line width=2pt] (6,0)-- (4,0); \draw [line width=2pt, dashed, color=black](0+8,0) .. controls (0+0.5+8,0+0.5) and (2-0.5+8,0+0.5) .. (2+8,0); \draw [line width=2pt, color=black](0+8,0) .. controls (0+0.5+8,0-0.5) and (2-0.5+8,0-0.5) .. (2+8,0); \draw [line width=2pt, color=black](2+8,0) .. controls (2+0.5+8,0+0.5) and (2+0.5+8,0-0.5) .. (2+8,0); \draw [line width=2pt, color=black](0+8,0) .. controls (-0.5+8,-0.5) and (-0.5+8,+0.5) .. 
(0+8,0); \draw [fill=ttqqqq] (0+8,0) circle (2.5pt); \draw [fill=ttqqqq] (2+8,0) circle (2.5pt); \draw [color=black] (1+8,0.6) node {\underline{$d_1$}}; \draw [color=black] (1+8,-0.6) node {$d_2$}; \draw [color=black] (2.7+8,0) node {$d_3$}; \draw [color=black] (-0.7+8,0) node {$d_4$}; \end{tikzpicture} \end{center} \caption{L-spin in $\Theta_S$ involving a loop and parallel edges.} \label{fig:L-spin.4} \end{figure} We now give an operation similar to an L-spin, which can be performed with only two elements -- it was alluded to in \Cref{lem:loop-shift} in the context of chessboards. \begin{prop} Suppose $HgH$ represents a self-inverse chessboard and $a \in S$ lies on its diagonal (so $a$ corresponds to a loop in $\Theta_S$). Let $b \in S$ lie in the column of $a$. Then the transformation $a \mapsto ab \mapsto b^{-1}ab$ corresponds to moving the loop $a$ from one vertex of the edge $b$ to the other. \label{prop:loop-shift} \end{prop} \begin{proof} Since $b\in aH$, we have $b^{-1}a \in H$, so $b^{-1}ab \in Hb$. At the same time $b\in a^{-1}H$ (since $a$ and $a^{-1}$ lie in the same box), so $ab \in H$ and so $b^{-1}ab \in b^{-1}H$. The box $Hb \cap b^{-1}H$ is on the diagonal, so $b^{-1}ab$ is a loop in $\Theta_S$ based at the vertex corresponding to $Hb$ and $b^{-1}H$. \end{proof} \begin{defn} We call a transformation described in \Cref{prop:loop-shift} a \emph{loop shift}, since it corresponds to shifting a loop from one vertex to another, as shown in \Cref{fig:loop-shift} on page~\pageref{fig:loop-shift}. \end{defn} \begin{figure}[h] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt, color=black] (0,0) .. controls (0+0.5,0+0.5) and (0-0.5,0+0.5) .. (0,0); \draw [line width=2pt, dashed, color=black] (2,0) .. controls (2+0.5,0+0.5) and (2-0.5,0+0.5) .. 
(2,0); \draw [line width=2pt, color=black] (0,0) -- (2,0); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [color=black] (1,-0.3) node {$b$}; \draw [color=black] (-0.3,0.6) node {$a$}; \draw [color=black] (2.3,0.6) node {\underline{$b^{-1}ab$}}; \draw [<-, line width=2pt] (5,0)-- (3.0,0); \draw [line width=2pt, dashed, color=black] (0+6,0) .. controls (0+0.5+6,0+0.5) and (0-0.5+6,0+0.5) .. (0+6,0); \draw [line width=2pt, color=black] (2+6,0) .. controls (2+0.5+6,0+0.5) and (2-0.5+6,0+0.5) .. (2+6,0); \draw [line width=2pt, color=black] (0+6,0) -- (2+6,0); \draw [fill=ttqqqq] (0+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6,0) circle (2.5pt); \draw [color=black] (1+6,-0.3) node {$b$}; \draw [color=black] (-0.3+6,0.6) node {\underline{$a$}}; \draw [color=black] (2.3+6,0.6) node {$b^{-1}ab$}; \end{tikzpicture} \end{center} \caption{A loop shift.} \label{fig:loop-shift} \end{figure} \subsubsection{Normal forms - octopuses and sweets} As in \Cref{sec:normal-form}, we wish to classify the configurations which can be obtained from a given one by performing the operations listed above -- inversions, L-spins and loop shifts. We want to give representatives of each of the classes and show that they are not equivalent to each other. \begin{defn} When given an inverse-dual graph of a configuration (in a self-inverse chessboard), we will call inversions, L-spins and loop shifts \emph{simple moves}. Two configurations (in a self-inverse chessboard) are called \emph{simply equivalent} if one can be obtained from the other with a series of simple moves. \end{defn} In fact, since we are considering the undirected inverse-dual graphs, the inversion doesn't do anything to the graph. \begin{prop} \label{prop:sweets_inv} Let $S$ be a configuration in a self-inverse chessboard. Let $\Theta_S$ be its inverse-dual graph. 
Then, performing simple moves doesn't change: \begin{enumerate} \item the number of edges of $\Theta_S$, \item the connectedness of the (undirected) $\Theta_S$ (i.e. whether two vertices are connected by a path or not remains unchanged), which implies that we can restrict our attention to connected components of inverse-dual graphs, \item whether or not a connected component of $\Theta_S$ is bipartite, and if it is: \item (if a connected component of $\Theta_S$ is indeed bipartite), the bipartite components of the connected component of $\Theta_S$ (and so, in particular, their sizes). \end{enumerate}
\begin{proof} (1) Since Nielsen moves don't change the number of elements of a configuration and each simple move corresponds to a series of Nielsen moves, none of them changes the number of elements of $S$, i.e. the number of edges of $\Theta_S$.
(2) In an L-spin all vertices involved are connected in the beginning and remain connected after the L-spin. This is also the case for the loop shift. Thus, simple moves don't change the connected components of $\Theta_S$.
(3 \& 4) If a connected component of $\Theta_S$ is bipartite, then there are no loops (so, no loop shifts are possible) and then the walk (i.e. a path which may self-intersect) of length $3$ needed for performing the L-spin must have its vertices lying alternately in the two bipartite components. The new edge is between the first and last vertex of this walk, which must lie in different bipartite components, and therefore the new edge is between the two bipartite components, not changing them. \end{proof}
Now, we are going to show the possible `normal forms' of connected graphs that aren't bipartite.
\begin{defn} An undirected graph is called an \emph{octopus} if there exists a vertex $v_0$ (called the \emph{base}) such that for all vertices $v \ne v_0$ the degree of $v$ is $1$ and there exists a unique edge $vv_0$; these edges are called \emph{legs} of an octopus.
The loops on $v_0$ are called \emph{heads} of an octopus. \end{defn}
It is easy to see that the number of legs of an octopus is one less than the number of its vertices. An example of an octopus is given in \Cref{fig:octopus}.
\begin{figure}[h] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm]
\draw [line width=2pt] (0,0)-- (-1.73,-1); \draw [line width=2pt] (0,0)-- (-1,-1.73); \draw [line width=2pt] (0,0)-- (0,-2); \draw [line width=2pt] (0,0)-- (+1,-1.73); \draw [line width=2pt] (0,0)-- (+1.73,-1);
\draw [line width=2pt, color=black](0,0) .. controls (-2,0) and (0,2) .. (0,0); \draw [line width=2pt, color=black](0,0) .. controls (2,0) and (0,2) .. (0,0); \draw [line width=2pt, color=black](0,0) .. controls (-1.5,1.5) and (1.5,1.5) .. (0,0);
\draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (-1.73,-1) circle (2.5pt); \draw [fill=ttqqqq] (-1,-1.73) circle (2.5pt); \draw [fill=ttqqqq] (0,-2) circle (2.5pt); \draw [fill=ttqqqq] (+1,-1.73) circle (2.5pt); \draw [fill=ttqqqq] (+1.73,-1) circle (2.5pt);
\end{tikzpicture} \end{center} \caption{An octopus.} \label{fig:octopus} \end{figure}
\begin{prop} \label{prop:normal_form_oct} Let $S$ be a configuration in a self-inverse chessboard, such that $\Theta_S$ is connected but not bipartite. Then, using simple moves, we can transform $S$ to some $S'$ such that $\Theta_{S'}$ is an octopus. \end{prop}
\begin{proof} Since $\Theta_S$ isn't bipartite, there is an odd cycle in it, say $v_0v_1 \ldots v_{2n}$. We can use it to obtain a loop at $v_n$: we perform an L-spin on $v_1v_0,\:v_0v_{2n},\:v_{2n}v_{2n-1}$ interchanging $v_0v_{2n}$ with $v_1v_{2n-1}$, then an L-spin interchanging $v_1v_{2n-1}$ with $v_2v_{2n-2}$, and so on, until we obtain $v_nv_n$, which is a loop at $v_n$. This procedure for $2n = 4$ is shown in \Cref{fig:oct.1}.
\begin{figure}[h] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm, scale=0.5] \draw [fill=ttqqqq] (0,2) circle (3pt); \draw [fill=ttqqqq] (-1.9,0.62) circle (3pt); \draw [fill=ttqqqq] (-1.18,-1.62) circle (3pt); \draw [fill=ttqqqq] (+1.9,0.62) circle (3pt); \draw [fill=ttqqqq] (+1.18,-1.62) circle (3pt); \draw [line width=2pt] (0,2) -- (-1.9,0.62); \draw [line width=2pt] (0,2) -- (+1.9,0.62); \draw [line width=2pt] (-1.9,0.62) -- (-1.18,-1.62); \draw [line width=2pt] (+1.9,0.62) -- (+1.18,-1.62); \draw [line width=2pt] (-1.18,-1.62) -- (+1.18,-1.62); \draw [color=black] (0,2.3) node {$v_2$}; \draw [color=black] (-2.4,0.62) node {$v_1$}; \draw [color=black] (+2.4,0.62) node {$v_3$}; \draw [color=black] (-1.3,-2) node {$v_0$}; \draw [color=black] (+1.3,-2) node {$v_4$}; \draw [<-, line width=2pt] (5,0)-- (3.0,0); \draw [fill=ttqqqq] (0+8,2) circle (3pt); \draw [fill=ttqqqq] (-1.9+8,0.62) circle (3pt); \draw [fill=ttqqqq] (-1.18+8,-1.62) circle (3pt); \draw [fill=ttqqqq] (+1.9+8,0.62) circle (3pt); \draw [fill=ttqqqq] (+1.18+8,-1.62) circle (3pt); \draw [line width=2pt] (0+8,2) -- (-1.9+8,0.62); \draw [line width=2pt] (0+8,2) -- (+1.9+8,0.62); \draw [line width=2pt] (-1.9+8,0.62) -- (-1.18+8,-1.62); \draw [line width=2pt] (+1.9+8,0.62) -- (+1.18+8,-1.62); \draw [line width=2pt] (-1.9+8,0.62) -- (+1.9+8,0.62); \draw [<-, line width=2pt] (5+8,0)-- (3.0+8,0); \draw [fill=ttqqqq] (0+16,2) circle (3pt); \draw [fill=ttqqqq] (-1.9+16,0.62) circle (3pt); \draw [fill=ttqqqq] (-1.18+16,-1.62) circle (3pt); \draw [fill=ttqqqq] (+1.9+16,0.62) circle (3pt); \draw [fill=ttqqqq] (+1.18+16,-1.62) circle (3pt); \draw [line width=2pt] (0+16,2) -- (-1.9+16,0.62); \draw [line width=2pt] (0+16,2) -- (+1.9+16,0.62); \draw [line width=2pt] (-1.9+16,0.62) -- (-1.18+16,-1.62); \draw [line width=2pt] (+1.9+16,0.62) -- (+1.18+16,-1.62); \draw [line width=2pt] (16,2) .. controls (16-1,2+1) and (16+1,2+1) .. 
(16,2); \end{tikzpicture} \end{center} \caption{Constructing a loop in the case of a non-bipartite graph.} \label{fig:oct.1} \end{figure}
Having obtained a loop at a vertex, which we relabel $v_0$, we can reduce the distance to $v_0$ of any vertex by performing the following operation (which bears similarity to what we've done in the process of defining normal forms for non self-inverse chessboards). If $v_2$ is of distance 2 (in $\Theta_S$) from $v_0$, then let $v_0v_1v_2$ be a path. We can perform an L-spin on $v_0v_0, \: v_0v_1, \: v_1v_2$ interchanging $v_1v_2$ with $v_0v_2$ and thereby decreasing the distance of $v_2$ to $v_0$ without increasing the distance to $v_0$ of any other vertex. This procedure is illustrated in \Cref{fig:oct.2}.
\begin{figure}[h] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm]
\draw [line width=2pt] (0+6,0)-- (2+6,0); \draw [line width=2pt] (2+6,0)-- (1+6,1.732); \draw [line width=2pt, color=black](1+6,1.732) .. controls (1+1+6,1.732+1) and (1-1+6,1.732+1) .. (1+6,1.732);
\draw [fill=ttqqqq] (1+6,1.732) circle (2.5pt); \draw [fill=ttqqqq] (0+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6,0) circle (2.5pt);
\draw [color=black] (-0.3+6,-0.2) node {$v_2$}; \draw [color=black] (2.3+6,-0.2) node {$v_1$}; \draw [color=black] (-0.4+1+6,1.732) node {$v_0$};
\draw [<-, line width=2pt] (5+6,1)-- (3+6,1);
\draw [line width=2pt] (2+12,0)-- (1+12,1.732); \draw [line width=2pt] (1+12,1.732)-- (0+12,0); \draw [line width=2pt, color=black](1+12,1.732) .. controls (1+1+12,1.732+1) and (1-1+12,1.732+1) .. (1+12,1.732);
\draw [fill=ttqqqq] (1+12,1.732) circle (2.5pt); \draw [fill=ttqqqq] (0+12,0) circle (2.5pt); \draw [fill=ttqqqq] (2+12,0) circle (2.5pt);
\end{tikzpicture} \end{center} \caption{Decreasing the distances to $v_0$.} \label{fig:oct.2} \end{figure}
Now we can assume that all vertices of $\Theta_S$ other than $v_0$ are of distance $1$ to $v_0$.
Finally, if we get any edges between the vertices of distance $1$ to $v_0$, say an edge $v_1v_2$, we can move them to loops at $v_0$ via an L-spin on $v_1v_2, \:v_2v_0, \:v_0v_0$ interchanging $v_1v_2$ with $v_0v_0$ giving an additional head of the octopus. This procedure is illustrated in \Cref{fig:oct.3}. \begin{figure}[h] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt] (0+6,0)-- (2+6,0); \draw [line width=2pt] (2+6,0)-- (1+6,1.732); \draw [line width=2pt, color=black] (1+6,1.732)-- (0+6,0); \draw [line width=2pt, color=black](1+6,1.732) .. controls (1+1+6,1.732+1) and (1-1+6,1.732+1) .. (1+6,1.732); \draw [fill=ttqqqq] (1+6,1.732) circle (2.5pt); \draw [fill=ttqqqq] (0+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6,0) circle (2.5pt); \draw [color=black] (-0.3+6,-0.2) node {$v_2$}; \draw [color=black] (2.3+6,-0.2) node {$v_1$}; \draw [color=black] (-0.4+1+6,1.732) node {$v_0$}; \draw [<-, line width=2pt] (5+6,1)-- (3+6,1); \draw [line width=2pt] (2+12,0)-- (1+12,1.732); \draw [line width=2pt] (1+12,1.732)-- (0+12,0); \draw [line width=2pt, color=black](1+12,1.732) .. controls (1-2+12,1.732) and (1+12,1.732+2) .. (1+12,1.732); \draw [line width=2pt, color=black](1+12,1.732) .. controls (1+12,1.732+2) and (1+12+2,1.732) .. (1+12,1.732); \draw [fill=ttqqqq] (1+12,1.732) circle (2.5pt); \draw [fill=ttqqqq] (0+12,0) circle (2.5pt); \draw [fill=ttqqqq] (2+12,0) circle (2.5pt); \end{tikzpicture} \end{center} \caption{Final step towards an octopus.} \label{fig:oct.3} \end{figure} In the end we are left with a (possibly multiheaded) octopus. \end{proof} Now, we need to consider the case of the graph being bipartite. \begin{defn} A graph is called a \emph{sweet} if there is an edge $v_0v_1$ such that every vertex $v$ other than $v_0$ and $v_1$ is connected by an edge to either $v_0$ or $v_1$, and such that every vertex other than $v_0$ and $v_1$ is of degree $1$. 
We call $v_0$ and $v_1$ the \emph{bases} of the sweet, the set of all edges $v_0v_1$ the \emph{core} of the sweet, while the other edges are called \emph{sticks}. \end{defn} \begin{prop} \label{prop:normal_form_sweet} Let $S$ be a configuration in a self-inverse chessboard, such that $\Theta_S$ is connected and bipartite. Then, using simple moves, we can transform $S$ to some $S'$ such that $\Theta_{S'}$ is a sweet. \end{prop} \begin{proof} Let us choose a particular edge $v_0v_1$, where $v_0$ and $v_1$ will be the bases of the sweet. Now, if there is a vertex (say $v_3$) at distance greater than $1$ from $\set{v_0,v_1}$, we can reduce this distance in a similar way to what was done in the proof of \Cref{prop:normal_form_oct}: let $v_1v_2v_3$ be a path of length $2$. We perform an L-spin on $v_0v_1, \:v_1v_2,\:v_2v_3$ interchanging $v_2v_3$ with $v_0v_3$, thereby reducing the distance of $v_3$ to $\set{v_0,v_1}$ without increasing the distance of any other vertex to it. A double application of this procedure is shown in \Cref{fig:normal_form_sweet_red}, first applied to $v_3$, then to $v_4$.
\begin{figure}[h] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt] (0,0)-- (2,0); \draw [line width=2pt] (2,0)-- (2,-2); \draw [line width=2pt, color=black] (2,-2)-- (0,-2); \draw [line width=2pt, color=black] (2,-2)-- (1,-3); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [fill=ttqqqq] (2,-2) circle (2.5pt); \draw [fill=ttqqqq] (0,-2) circle (2.5pt); \draw [fill=ttqqqq] (1,-3) circle (2.5pt); \draw (-0.25,0.25) node {$v_0$}; \draw (2.25,0.25) node {$v_1$}; \draw (-0.25,-2.25) node {$v_3$}; \draw (2.25,-2.25) node {$v_2$}; \draw (1-0.3,-3) node {$v_4$};; \draw [<-, line width=2pt] (5,-1.5)-- (3,-1.5); \draw [line width=2pt] (0+6,0)-- (2+6,0); \draw [line width=2pt] (2+6,0)-- (2+6,-2); \draw [line width=2pt, color=black] (0+6,0)-- (0+6,-2); \draw [line width=2pt, color=black] (2+6,-2)-- (1+6,-3); \draw [fill=ttqqqq] (0+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6,-2) circle (2.5pt); \draw [fill=ttqqqq] (0+6,-2) circle (2.5pt); \draw [fill=ttqqqq] (1+6,-3) circle (2.5pt); \draw [<-, line width=2pt] (5+6,-1.5)-- (3+6,-1.5); \draw [line width=2pt] (0+6+6,0)-- (2+6+6,0); \draw [line width=2pt] (2+6+6,0)-- (2+6+6,-2); \draw [line width=2pt, color=black] (0+6+6,0)-- (0+6+6,-2); \draw [line width=2pt, color=black] (0+6+6,0)-- (1+6+6,-3); \draw [fill=ttqqqq] (0+6+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6+6,-2) circle (2.5pt); \draw [fill=ttqqqq] (0+6+6,-2) circle (2.5pt); \draw [fill=ttqqqq] (1+6+6,-3) circle (2.5pt); \end{tikzpicture} \end{center} \caption{Decreasing the distance of $v_3$ and $v_4$ to $\set{v_0,v_1}$.} \label{fig:normal_form_sweet_red} \end{figure} Now, we can assume that all of the vertices are of distance $1$ to $\set{v_0,v_1}$. 
Note that there can't be any edges within the neighbours of $v_0$ since the graph is bipartite, similarly with the neighbours of $v_1$. If there are any edges between the neighbours of $v_0$ and the neighbours of $v_1$, we can move these edges parallel to $v_0v_1$ obtaining a sweet. This is done via an L-spin. A double usage is shown in \Cref{fig:normal_form_sweet_core} where we first move the edge $v_2v_3$ and then $v_2v_4$. \begin{figure}[h] \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm, scale=0.7] \draw [line width=2pt] (0,0)-- (2,0); \draw [line width=2pt, color=black] (2,0)-- (2+0.81,0+0.59); \draw [line width=2pt, color=black] (2,0)-- (2+0.81,0-0.59); \draw [line width=2pt, color=black] (0,0)-- (-0.5,1.732/2); \draw [line width=2pt, color=black] (0,0)-- (-0.5,-1.732/2); \draw [line width=2pt, color=black] (0,0)-- (-1,0); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (2,0) circle (2.5pt); \draw [fill=ttqqqq] (2+0.81,0+0.59) circle (2.5pt); \draw [fill=ttqqqq] (2+0.81,0-0.59) circle (2.5pt); \draw [fill=ttqqqq] (-0.5,1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-0.5,-1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-1,0) circle (2.5pt); \draw [color=black] (3.1,0.8) node {$v_2$}; \draw [color=black] (-1.4,0) node {$v_3$}; \draw [color=black] (-0.8,1) node {$v_4$}; \draw [color=black] (0.3,-0.3) node {$v_0$}; \draw [color=black] (1.7,-0.3) node {$v_1$}; \draw [line width=2pt, color=black] (-0.5,1.732/2)-- (2+0.81,0+0.59); \draw [line width=2pt, color=black] (-1,0)-- (2+0.81,0+0.59); \draw [<-, line width=2pt] (6,0)-- (4,0); \draw [line width=2pt] (0+8,0)-- (2+8,0); \draw [line width=2pt, color=black] (2+8,0)-- (2+0.81+8,0+0.59); \draw [line width=2pt, color=black] (2+8,0)-- (2+0.81+8,0-0.59); \draw [line width=2pt, color=black] (0+8,0)-- (-0.5+8,1.732/2); \draw [line width=2pt, color=black] (0+8,0)-- (-0.5+8,-1.732/2); \draw [line width=2pt, color=black] (0+8,0)-- (-1+8,0); \draw 
[fill=ttqqqq] (0+8,0) circle (2.5pt); \draw [fill=ttqqqq] (2+8,0) circle (2.5pt); \draw [fill=ttqqqq] (2+0.81+8,0+0.59) circle (2.5pt); \draw [fill=ttqqqq] (2+0.81+8,0-0.59) circle (2.5pt); \draw [fill=ttqqqq] (-0.5+8,1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-0.5+8,-1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-1+8,0) circle (2.5pt); \draw [line width=2pt, color=black] (-0.5+8,1.732/2)-- (2+0.81+8,0+0.59); \draw [line width=2pt, color=black] (0+8,0) .. controls (0.5+8,-0.5) and (1.5+8,-0.5) .. (2+8,0); \draw [<-, line width=2pt] (6+8,0)-- (4+8,0); \draw [line width=2pt] (0+8+8,0)-- (2+8+8,0); \draw [line width=2pt, color=black] (2+8+8,0)-- (2+0.81+8+8,0+0.59); \draw [line width=2pt, color=black] (2+8+8,0)-- (2+0.81+8+8,0-0.59); \draw [line width=2pt, color=black] (0+8+8,0)-- (-0.5+8+8,1.732/2); \draw [line width=2pt, color=black] (0+8+8,0)-- (-0.5+8+8,-1.732/2); \draw [line width=2pt, color=black] (0+8+8,0)-- (-1+8+8,0); \draw [fill=ttqqqq] (0+8+8,0) circle (2.5pt); \draw [fill=ttqqqq] (2+8+8,0) circle (2.5pt); \draw [fill=ttqqqq] (2+0.81+8+8,0+0.59) circle (2.5pt); \draw [fill=ttqqqq] (2+0.81+8+8,0-0.59) circle (2.5pt); \draw [fill=ttqqqq] (-0.5+8+8,1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-0.5+8+8,-1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-1+8+8,0) circle (2.5pt); \draw [line width=2pt, color=black] (0+8+8,0) .. controls (0.5+8+8,-0.5) and (1.5+8+8,-0.5) .. (2+8+8,0); \draw [line width=2pt, color=black] (0+8+8,0) .. controls (0.5+8+8,+0.5) and (1.5+8+8,+0.5) .. (2+8+8,0); \end{tikzpicture} \end{center} \caption{Final step towards a sweet.} \label{fig:normal_form_sweet_core} \end{figure} \end{proof} Now, we notice that a sweet is specified by the number of edges in total and the number of sticks on each side. This allows us to conclude the following. \begin{prop}\label{prop:os-nf} Let $S$ be a configuration in a self-inverse chessboard with a connected inverse-dual graph $\Theta_S$. 
Then $\Theta_S$ can be transformed via simple moves to a (unique up to vertex relabelling) octopus or sweet, which we call the \emph{normal form} of $S$. Furthermore, the normal form is a sweet if and only if the inverse-dual graph $\Theta_S$ is bipartite. \end{prop} \begin{proof} The fact that $\Theta_S$ can be transformed to an octopus (if $\Theta_S$ isn't bipartite) or to a sweet (if $\Theta_S$ is bipartite) is a consequence of Propositions~\ref{prop:normal_form_oct} and~\ref{prop:normal_form_sweet}. Now, as shown in \Cref{prop:sweets_inv}, the simple moves do not change whether or not $\Theta_S$ is bipartite, and, if it is bipartite, its bipartite components. So an octopus is possible if and only if $\Theta_S$ isn't bipartite, while a sweet is possible only if $\Theta_S$ is bipartite. An octopus is determined by the number of elements of $S$ and the number of its vertices (i.e. the rows/columns that $S$ can occupy), so no two different octopuses are equivalent. A sweet is similarly determined by the number of its edges, the number of its vertices and the sizes of its bipartite components. Since these are invariant by \Cref{prop:sweets_inv}, two sweets can be transformed into each other by simple moves only if they agree up to relabelling of the vertices. \end{proof} \subsubsection{Solvable configurations} Just like in \Cref{sec:solvable_non-self-inverse}, we want to be able to tell which of the inverse-dual graphs correspond to configurations that can be transformed into left-right diagonal sets. For that we first study what being left-right diagonal corresponds to in the inverse-dual graph. \begin{prop} Let $S$ be a configuration in a self-inverse chessboard. Then $S$ is left-right diagonal if and only if $\Theta_S$ is a union of disjoint (directed) cycles. \end{prop} \begin{proof} Being left-right diagonal means that there is exactly one element in each of the columns and in each of the rows. Coming back to the directed graphs, this means that every vertex has exactly one edge coming into it and exactly one edge coming out of it.
Such a graph is a union of disjoint directed cycles (possibly including loops and $2$-cycles). \end{proof} Now, we want to identify the normal forms that these have. \begin{prop} \label{prop:normal-forms} In the context of inverse-dual graphs, the normal form of an odd cycle is a single-headed octopus with an even number of legs, while the normal form of an even cycle is a sweet with two edges in the core and an equal number of sticks on either side. Also, all such forms correspond to configurations that can be transformed to left-right diagonal configurations via simple moves. \end{prop} \begin{proof} Odd cycles aren't bipartite, so their normal forms are octopuses by \Cref{prop:os-nf}. Since the number of edges and the number of vertices involved are equal, these octopuses must be single-headed (an octopus with $h$ heads and $l$ legs has $1+l$ vertices and $h+l$ edges, so $h=1$). Odd length implies an even number of legs. Even cycles are bipartite with the bipartite components of equal sizes, so their normal forms are sweets (\Cref{prop:os-nf}) with equal sizes of bipartite components. Since the number of edges equals the number of vertices, the core must consist of $2$ edges. Finally, for each number $n$, there are cycles of length $2n$ and $2n+1$, equivalent respectively to a sweet with $n-1$ sticks on each side and an octopus with $2n$ legs. Thus all such normal forms correspond to configurations that can be transformed to be left-right diagonal with simple transformations. \end{proof} The proposition motivates the following definition. \begin{defn} A connected inverse-dual graph $\Theta_S$ is \emph{solvable} if its normal form is either an octopus with an even number of legs, or a sweet with an equal number of sticks on each side. \end{defn} Examples of such an octopus and a sweet are illustrated in \Cref{fig:ex-solv}.
\begin{figure} \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm] \draw [line width=2pt] (0,0)-- (-1.62,-1.18); \draw [line width=2pt] (0,0)-- (-0.62,-1.9); \draw [line width=2pt] (0,0)-- (+1.62,-1.18); \draw [line width=2pt] (0,0)-- (+0.62,-1.9); \draw [line width=2pt, color=black](0,0) .. controls (-2,2) and (2,2) .. (0,0); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (-1.62,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (-0.62,-1.9) circle (2.5pt); \draw [fill=ttqqqq] (+1.62,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (+0.62,-1.9) circle (2.5pt); \draw [line width=2pt] (0+6,0) .. controls (0.5+6,0.5) and (1.5+6,0.5) .. (2+6,0); \draw [line width=2pt] (0+6,0) .. controls (0.5+6,-0.5) and (1.5+6,-0.5) .. (2+6,0); \draw [line width=2pt, color=black] (2+6,0)-- (2+0.5+6,0+1.732/2); \draw [line width=2pt, color=black] (2+6,0)-- (2+0.5+6,0-1.732/2); \draw [line width=2pt, color=black] (2+6,0)-- (2+1+6,0); \draw [line width=2pt, color=black] (0+6,0)-- (-0.5+6,1.732/2); \draw [line width=2pt, color=black] (0+6,0)-- (-0.5+6,-1.732/2); \draw [line width=2pt, color=black] (0+6,0)-- (-1+6,0); \draw [fill=ttqqqq] (0+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+6,0) circle (2.5pt); \draw [fill=ttqqqq] (2+0.5+6,0+1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (2+0.5+6,0-1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (2+1+6,0) circle (2.5pt); \draw [fill=ttqqqq] (-0.5+6,1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-0.5+6,-1.732/2) circle (2.5pt); \draw [fill=ttqqqq] (-1+6,0) circle (2.5pt); \end{tikzpicture} \end{center} \caption{Examples of solvable normal forms.} \label{fig:ex-solv} \end{figure} \Cref{prop:normal-forms} is restated in the following corollary. 
\begin{cor} \label{cor:solv_self-inverse} If a configuration $S$ in a self-inverse chessboard is such that its inverse-dual graph $\Theta_S$ has all connected components solvable, then it is Nielsen equivalent to a left-right diagonal configuration in that chessboard. \end{cor} \subsubsection{Additional solvable configuration} \label{sec:odd-octopi} We now know that octopuses with an odd number of legs and sweets with bipartite components of different sizes cannot be solved with simple moves (inversions, L-spins and loop shifts). However, we are able to weaken the condition of \Cref{cor:solv_self-inverse}. For brevity, from now on we refer to configurations that can be transformed with simple moves to octopuses with an even number of legs, octopuses with an odd number of legs, or sweets with an equal number of sticks on each side as \emph{even octopi}, \emph{odd octopi} and \emph{equal sweets}, respectively. \begin{prop} \label{prop:additional-solv} Let $S$ be a union of a singleton configuration and a configuration $S'$ in a self-inverse chessboard $HgH$ which satisfy the following conditions. \begin{enumerate} \item The singleton configuration consists of an element $h$ lying in $H$; \item $S'$ is such that all of its connected components are solvable or are odd octopuses; \item (possibly after inversions) $S'$ is left-diagonal. \end{enumerate} Then, $S$ is Nielsen-equivalent to a left-right diagonal configuration with one element in $H$ and the rest in $HgH$. \end{prop} \begin{proof} We are going to proceed by repeated use of an algorithm, which can be performed as long as there is some connected component in the form of an odd octopus. Each usage will decrease the value of the following counter: $$C(S) = \#(\textrm{connected components of $S$}) + 2\times \#(\textrm{connected components of $S$ being odd octopuses})$$ thereby ensuring that at some point we are left with no odd octopuses.
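The bookkeeping behind this counter can be sketched as follows (an illustrative model, not from the paper: each component is represented only by a flag recording whether it is an odd octopus):

```python
def counter(components):
    """C(S) = #components + 2 * #(components that are odd octopuses)."""
    return len(components) + 2 * sum(1 for is_odd_octopus in components if is_odd_octopus)

# Case 1: an odd octopus is cut into two even octopuses.
before = [True, False]          # one odd octopus plus one other component
after  = [False, False, False]  # two even octopuses replace the odd one
assert counter(before) - counter(after) == 1

# Case 2: an odd octopus is merged with another component into one component.
merged = [False]                # the merged component is not an odd octopus
assert counter(before) - counter(merged) in (1, 3)
```

In both toy cases the counter strictly decreases, mirroring the analysis below.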
Then, by \Cref{cor:solv_self-inverse} the configuration can be transformed into a left-right diagonal form via simple moves. The condition that, possibly after inversions, $S'$ is left-diagonal implies that no vertex in $\Theta_{S'}$ has degree $0$. It also forces the connected components to be either single-headed octopuses or sweets with two edges in the core. \textbf{Algorithm} Suppose that there is at least one connected component that is an odd octopus; call it $S_0$. Take the element $x_0 \in S_0$ corresponding to the head of $S_0$ (i.e. the unique loop $v_0v_0$, where $v_0$ is the base of $S_0$) and the element $h\in H$, and perform the Nielsen move $h \mapsto x_0h$. Now, there are two options: \begin{enumerate} \item either $x_0h$ lies in one of the columns corresponding to vertices in $S_0$, \item or $x_0h$ lies in a column that doesn't correspond to a vertex in $S_0$. \end{enumerate} In the first case, we get an additional edge in $\Theta_{S_0}$, while the number of vertices remains unchanged, so the normal form of $S_0\cup\set{x_0h}$ is an octopus with two heads and an odd number of legs. The additional head can be shifted to the end (say $v_1$) of one of the legs via a loop shift and then this leg can be cut. Let $x_1$ be the element of $S_0$ corresponding to $v_0v_1$; we perform the Nielsen move $x_0 \mapsto x_0x_1^{-1} \in H$ (where $x_0$ corresponds to the loop), which leaves us with two even octopuses and an element of $H$. Furthermore, none of the vertices (i.e. none of the rows of $HgH$) became empty, so we are in a situation satisfying the original hypotheses. This procedure is illustrated in \Cref{fig:cut_leg}. In the second case, we put an edge between two connected components of $\Theta_S$. Since we haven't done anything to the loop at the base of $S_0$, the new component must have a double-headed octopus as its normal form. Let's transform it to that form and take $x_0, x_1$ to be the elements corresponding to the two heads of the octopus.
We perform the Nielsen move $x_0 \mapsto x_0x_1^{-1} \in H$, `cutting' one of the heads and thereby getting a single octopus and a single element in $H$. Also, no vertex became of degree $0$, so we have a configuration satisfying the original hypotheses. \textbf{Counter decreasing} The effect of both cases is summarised in \Cref{tab:counter}. \begin{table}[h] \centering \begin{tabular}{c|ccc} & no. components & no. odd octopuses & $C(S)$ \\ \hline Case 1. & $+1$ & $-1$ & $-1$ \\ Case 2. & $-1$ & $0$ or $-1$ & $-1$ or $-3$ \end{tabular} \caption{The effect of the algorithm on the counter.} \label{tab:counter} \end{table} In the first case, the number of connected components increased by $1$ (since we `cut the octopus into two'), while the number of odd octopuses decreased by $1$, so in total the counter $C(S)$ decreased by $1$. In the second case, since the number of connected components decreased by $1$ and the number of odd octopuses remained the same (if we connected to a component with an even number of elements) or decreased by $1$ (if we connected to a component with an odd number of elements), the counter $C(S)$ decreased by $1$ or $3$. In both cases we observe the counter $C(S)$ decreasing, so at some point we get to a configuration where it is impossible to continue the algorithm, which is one that doesn't contain any odd octopuses. This is precisely what we wanted to get. \end{proof} \begin{figure} \begin{center} \definecolor{ttqqqq}{rgb}{0.2,0,0} \begin{tikzpicture}[line cap=round,line join=round,x=1cm,y=1cm, scale=0.5] \draw [line width=2pt] (0,0)-- (-1.62,-1.18); \draw [line width=2pt] (0,0)-- (-0.62,-1.9); \draw [line width=2pt] (0,0)-- (+1.62,-1.18); \draw [line width=2pt] (0,0)-- (+0.62,-1.9); \draw [line width=2pt] (0,0)-- (-2,0); \draw [fill=ttqqqq] (-2,0) circle (2.5pt); \draw [line width=2pt, color=black](0,0) .. controls (0,+2) and (-2,0) .. (0,0); \draw [line width=2pt, color=black](0,0) .. controls (0,+2) and (+2,0) ..
(0,0); \draw [fill=ttqqqq] (0,0) circle (2.5pt); \draw [fill=ttqqqq] (-1.62,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (-0.62,-1.9) circle (2.5pt); \draw [fill=ttqqqq] (+1.62,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (+0.62,-1.9) circle (2.5pt); \draw [<-, line width=2pt] (4+1,0) -- (2+1,0); \draw [line width=2pt] (0+7+1+0.5,0)-- (-1.62+7+1+0.5,-1.18); \draw [line width=2pt] (0+7+1+0.5,0)-- (-0.62+7+1+0.5,-1.9); \draw [line width=2pt] (0+7+1+0.5,0)-- (+1.62+7+1+0.5,-1.18); \draw [line width=2pt] (0+7+1+0.5,0)-- (+0.62+7+1+0.5,-1.9); \draw [line width=2pt] (0+7+1+0.5,0)-- (-2+7+1+0.5,0); \draw [fill=ttqqqq] (-2+7+1+0.5,0) circle (2.5pt); \draw [line width=2pt, color=black](0+7+1+0.5,0) .. controls (0+7-2+1+0.5,+2) and (2+7+1+0.5,+2) .. (0+7+1+0.5,0); \draw [line width=2pt, color=black](0+7-2+1+0.5,0) .. controls (0+7-2-2+1+0.5,+2) and (+2+7-2+1+0.5,2) .. (0+7-2+1+0.5,0); \draw [fill=ttqqqq] (0+7+1+0.5,0) circle (2.5pt); \draw [fill=ttqqqq] (-1.62+7+1+0.5,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (-0.62+7+1+0.5,-1.9) circle (2.5pt); \draw [fill=ttqqqq] (+1.62+7+1+0.5,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (+0.62+7+1+0.5,-1.9) circle (2.5pt); \draw [<-, line width=2pt] (4+6+1+3,0) -- (2+6+1+3,0); \draw [line width=2pt] (0+7+7+4-0.5,0)-- (-1.62+7+7+4-0.5,-1.18); \draw [line width=2pt] (0+7+7+4-0.5,0)-- (-0.62+7+7+4-0.5,-1.9); \draw [line width=2pt] (0+7+7+4-0.5,0)-- (+1.62+7+7+4-0.5,-1.18); \draw [line width=2pt] (0+7+7+4-0.5,0)-- (+0.62+7+7+4-0.5,-1.9); \draw [fill=ttqqqq] (-2+7+7+4-0.5,0) circle (2.5pt); \draw [line width=2pt, color=black](0+7+7+4-0.5,0) .. controls (0+7-2+7+4-0.5,+2) and (2+7+7+4-0.5,+2) .. (0+7+7+4-0.5,0); \draw [line width=2pt, color=black](0+7-2+7+4-0.5,0) .. controls (0+7-2-2+7+4-0.5,+2) and (+2+7-2+7+4-0.5,2) .. 
(0+7-2+7+4-0.5,0); \draw [fill=ttqqqq] (0+7+7+4-0.5,0) circle (2.5pt); \draw [fill=ttqqqq] (-1.62+7+7+4-0.5,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (-0.62+7+7+4-0.5,-1.9) circle (2.5pt); \draw [fill=ttqqqq] (+1.62+7+7+4-0.5,-1.18) circle (2.5pt); \draw [fill=ttqqqq] (+0.62+7+7+4-0.5,-1.9) circle (2.5pt); \end{tikzpicture} \end{center} \caption{Cutting an odd octopus into two even octopuses.} \label{fig:cut_leg} \end{figure} \subsection{Applications of configurations and inverse-dual graphs}\label{sec:applications} The combined techniques of Sections~\ref{sec:non-self-inverse-chessboards} and~\ref{sec:self-inverse-chessboards} give us the following theorem. \begin{thm} \label{cor:final} Let $H\leq G$ be a subgroup and $S$ be a left transversal generating $G$. If the following conditions are satisfied, then $S$ is Nielsen-equivalent to a left-right transversal. \begin{enumerate} \item For each $g\in G$ such that $HgH \neq Hg^{-1}H$, the configuration $T_g := S_g\cup S_{g^{-1}}^{-1}$ has only square connected components. \item For each $g\in G$ such that $HgH = Hg^{-1}H$, the connected components of the graph $\Theta_{S_g}$ have octopuses (with any number of legs) or equal sweets as their normal forms. \end{enumerate} \end{thm} \begin{proof} We look one-by-one at the self-inverse chessboards and at pairs of non-self-inverse chessboards which are inverses of each other. For each pair $HgH$ and $Hg^{-1}H$ of non-self-inverse chessboards, the hypotheses of \Cref{prop:solv} are satisfied: each connected component of $T_g$ has an equal number of rows and columns, and since $S_g$ has exactly one element in each row while $S_{g^{-1}}^{-1}$ has one element in each column, each connected component of $T_g = S_g\cup S_{g^{-1}}^{-1}$ has twice as many elements as rows (and columns). For a self-inverse chessboard, we can apply \Cref{prop:additional-solv} directly.
After these procedures, for each $g$, the set $S_g$ is diagonal and therefore the whole set $S$ is diagonal. \end{proof} We now present two consequences of our results on configurations and inverse-dual graphs. One application is to a situation where the possible chessboards are small enough not to allow non-square connected components, i.e. when $[H:xHx^{-1}\cap H]\leq 2$ for all $x$ (so $H$ is very close to normal in $G$). The other application uses a technique similar to the proof of \Cref{lem:cyclic-nielsen} (which handles $H\cong C_n$), namely being able to Nielsen transform the trivial element $e$ to any other element of the group, for finding left-right transversal generating sets for $H\leq G$. In each case, we make incremental progress on \Cref{main Q} or \Cref{general Q} by adding additional hypotheses. \begin{prop}\label{prop:almost-normal} Let $H\leq G$ be a subgroup of finite index such that $\forall x\in G$ we have $[H:xHx^{-1}\cap H]\leq 2$. Then any generating multiset of size $[G:H]$ can be Nielsen transformed to a left-right transversal. \end{prop} \begin{proof} We start by taking a generating multiset $S$ of size $[G:H]$ and Nielsen transforming it to a left transversal. All of the chessboards are either $1\times 1$ or $2\times 2$, since by \cite[Proposition 9]{cosetgraph} the size of the chessboard containing $gH$ is $[G : gHg^{-1} \cap H]/[G:H]$. The $1\times 1$ chessboards correspond to the cosets of $H$ in the normaliser, and the elements of $S$ in them already form a diagonal set. Therefore, we'll only consider the $2\times 2$ chessboards from now on. For the self-inverse chessboards, we are either in the situation of having a diagonal set, or a configuration represented by a single-headed octopus with one leg. In particular, we cannot get a non-equal sweet.
For the non-self-inverse chessboards, after moving the elements from $Hg^{-1}H$ to $HgH$ with the inversion map, we get $4$ elements in a $2\times 2$ board, which can either form two connected components (each of size $1\times 1$) or one connected component of size $2\times 2$. In both cases, these are square. Now, we can apply \Cref{cor:final} to conclude that $S$ can be Nielsen transformed to a left-right transversal. \end{proof} We now provide a proof of \Cref{thm:malnormal}. We note that the hypotheses of this theorem imply that $G$ must be finite. This is due to the fact that infinite groups can't have proper malnormal subgroups of finite index. Indeed, taking $H$ malnormal in $G$ and $g\notin H$, if $[G:H]=[G:gHg^{-1}] < \infty$, then we also have $[G:H\cap gHg^{-1}] < \infty$; but $H\cap gHg^{-1} = \set{e}$, so this would imply that $|G|< \infty$. \vspace{\topsep} \begin{proof}[Proof of \Cref{thm:malnormal}] We start with the observation that for a malnormal subgroup $H\leq G$, the individual boxes in the chessboards other than $H$ correspond to singleton sets, since $$|xH\cap Hx| = |xHx^{-1}\cap H| = 1$$ for $x\not \in H$. This means that whenever two dots end up in the same box, we are able to obtain the trivial element $e$, which can be Nielsen transformed to any other element. Similarly to the previous proposition, we start with a left generating transversal $S$. Firstly, given a pair $HgH\neq Hg^{-1}H$, we move all of the elements to one of these chessboards by inversion. Then, after transforming the configuration to its normal form (\Cref{defn:config-nf}) we get some number of connected components, each having two dots in its top-left corner. We use this to first obtain $e$ in our multiset and then Nielsen transform $e$ (without using it in the Nielsen moves) to a box which connects two distinct connected components in that chessboard.
After a finite number of such operations we obtain just one connected component in $HgH$, which has to have equal vertical and horizontal dimensions. By \Cref{prop:solv} this is a solvable configuration, so by \Cref{lem: solvable} it is L-spin equivalent to one of the form $A\cup B^{-1}$, where $A \subset HgH$ and $B \subset Hg^{-1}H$ are diagonal sets. Secondly, given a self-inverse chessboard, i.e., a double coset $HgH=Hg^{-1}H$, by \Cref{prop:normal-forms} the normal forms of the connected components of $S_g$ are either octopuses or sweets with a core consisting of two edges. In the case of sweets, the two parallel edges in the core represent two elements in the same box, one of which can then be transformed to $e$, which in turn can be transformed to add a loop on one of the vertices of the inverse-dual graph of the component, thereby changing its normal form from a sweet to an octopus. Repeating this procedure, in every self-inverse chessboard, for every connected component whose normal form is a sweet, we reach a situation where all of the components are octopuses. Now we can apply \Cref{cor:final} to conclude that $S$ is indeed equivalent to a left-right transversal. \end{proof} The procedure also gives us the following quantitative result. \begin{thm} Let $H\leq G$ be a malnormal subgroup such that $\rank(G) \leq [G:H]$. Then $$\rank(G) \leq [G:H] - \frac{\#(\textrm{non-self-inverse chessboards})}{2}.$$ \end{thm} \begin{proof} According to \Cref{thm:malnormal}, we can take any generating set of size $[G:H]$ and Nielsen-transform it to a left-right transversal $S$. Now, for each pair $HgH,Hg^{-1}H$ of non-self-inverse chessboards which are inverses of each other, we can use the inversion map to transform elements of the multiset from $Hg^{-1}H$ to $HgH$, forming configurations $S_g \cup S_{g^{-1}}^{-1}$ which are solvable. Now, when we transform these to their normal forms, \Cref{prop:solv} tells us that these normal forms have two dots in the top-left corner.
Thus, denoting by $k$ the number of pairs $HgH,Hg^{-1}H$ of non-self-inverse chessboards, we have Nielsen-transformed $S$ to a configuration with at least $k$ occurrences of two elements being in one box. Because $H$ is a malnormal subgroup, each box in a chessboard of $H$ (excluding the chessboard $H$ itself) contains precisely one group element. Thus a pair of elements of $S$ in the same box must be of the form $\set{g,g}$. Any such pair can be Nielsen transformed in the following way: $\set{g,g} \mapsto \set{g,e}$. Thus, we get $k$ occurrences of the trivial element $e$ in $S$, which don't play a role in generating $G$. This implies that the minimal size of a generating set, i.e. the rank of $G$, is at most $|S|-k$, which is equal to $$[G:H] - \frac{\#(\textrm{non-self-inverse chessboards})}{2}.$$ \end{proof}
https://arxiv.org/abs/1306.0943
Extremal Problems for Subset Divisors
Let $A$ be a set of $n$ positive integers. We say that a subset $B$ of $A$ is a divisor of $A$, if the sum of the elements in $B$ divides the sum of the elements in $A$. We are interested in the following extremal problem. For each $n$, what is the maximum number of divisors a set of $n$ positive integers can have? We determine this function exactly for all values of $n$. Moreover, for each $n$ we characterize all sets that achieve the maximum. We also prove results for the $k$-subset analogue of our problem. For this variant, we determine the function exactly in the special case that $n=2k$. We also characterize all sets that achieve this bound when $n=2k$.
\section{Introduction} Let $A$ be a finite set of positive integers and let $B$ be a subset of $A$. We say that $B$ is a \emph{divisor} of $A$, if the sum of the elements in $B$ divides the sum of the elements in $A$. We are interested in the number of divisors a set of positive integers can have. Toward that end, we let $d(A)$ be the number of divisors of $A$ and we let $d(n)$ be the maximum value of $d(A)$ over all sets $A$ of $n$ positive integers. We also study the $k$-subset version of this problem. That is, we define $d_k(A)$ to be the number of $k$-subset divisors of $A$ and we let $d(k,n)$ be the maximum value of $d_k(A)$ over all sets $A$ of $n$ positive integers. This work is motivated by Problem 1 of the 2011 International Mathematical Olympiad~\cite{IMO}, where it is asked to determine $d(2,4)$. \begin{problem} \label{imo} Determine $d(2,4)$. Moreover, find all sets of four positive integers $A$ with exactly $d(2,4)$ 2-subset divisors. \end{problem} We begin by presenting the solution to Problem~\ref{imo}. \begin{lemma} \label{olympiad} For all sets $A$ of four positive integers, $d_2(A) \leq 4$. Moreover, $d_2(A)=4$ if and only if \[ A= \{a, 5a, 7a, 11a\} \text{ or } A=\{a, 11a, 19a, 29a\} \] for some $a \in \mathbb{N}$. \end{lemma} \begin{proof} Let $A=\{a_1, a_2, a_3, a_4\}$ be a set of positive integers with $a_1 < a_2 < a_3 < a_4$. We use $\sum A$ to denote the sum of the elements in $A$. Note that $\frac{1}{2} \sum A < a_2+a_4 < a_3+a_4 < \sum A$. Thus $d_2(A) \leq 4$ as neither $\{a_2, a_4\}$ nor $\{a_3, a_4\}$ can divide $A$. Suppose $d_2(A)=4$. This implies that both $\{a_1,a_4\}$ and $\{a_2, a_3\}$ divide $A$, and hence $a_1+a_4=a_2+a_3$. Since $\{a_1,a_2\}$ and $\{a_1,a_3\}$ also divide $A$, $(a_1+a_2) | (a_3+a_4)$ and $(a_1+a_3) | (a_2+a_4)$. Therefore, there exist $2 \leq j < k$ such that \begin{enumerate} \item[$(i)$] $a_1+a_4=a_3+a_2$, \item[$(ii)$] $j(a_1+a_3)=a_2+a_4$, and \item[$(iii)$] $k(a_1+a_2)=a_3+a_4$. 
\end{enumerate} Adding $(i)$ and $(ii)$, we obtain $(j+1)a_1+(j-1)a_3=2a_2$. Since $a_3 > a_2$, it follows that $j=2$. Substituting $j=2$ and taking $3(i)+2(ii)+(iii)$ we obtain $(k+7)a_1=(5-k)a_2$. This implies $(5-k) >0$, and so $k \in \{3,4\}$. By solving the systems corresponding to the values $k=3$ and $k=4$ we are led to the respective solutions \[ A=\{a, 5a, 7a, 11a\} \text{ and } A=\{a, 11a, 19a, 29a\}. \] It is easy to check that any set $A$ of the above form does indeed satisfy $d_2(A)=4$. \end{proof} \section{Lower bounds for $d(n)$ and $d(k,n)$} \label{lower} In this section, we give constructions for sets of positive integers with many divisors and many $k$-subset divisors. In the next section we derive matching upper bounds for $d(n)$ and $d(n,2n)$ and hence these sets are optimal. Moreover, in Section~\ref{uniqueness}, we will show that these are almost all the sets achieving the maximum values. Recall that $d(n)$ (respectively, $d(k,n)$) is the maximum number of divisors (respectively, $k$-subset divisors) a set of $n$ positive integers can have. By convention, the sum of the elements in the empty set is zero, and so the empty set does not divide any set (except itself). \begin{lemma} \label{lower1} For all $n \geq 1$, $d(n) \geq 2^{n-1}$. \end{lemma} \begin{proof} The lemma clearly holds if $n=1$. Thus, assume $n \geq 2$ and let $A'$ be any set of $n-1$ positive integers. We show that we can choose an element $a$ such that $A' \cup \{a\}$ has $2^{n-1}$ divisors. Let \[ S:=\{s \in \mathbb{N}: s=\sum B \text{ for some non-empty $B \subseteq A'$} \}. \] Let $\ell$ be the least common multiple of the elements in $S$, and let $\ell'$ be a multiple of $\ell$ with $\ell' > \sum A'$ and $\ell' - \sum A' \notin A'$. Set $a:= \ell' - \sum A'$ and consider $A:=A' \cup \{a\}$. Note that $\sum A= \ell'$. Therefore, every non-empty subset of $A'$ divides $A$. Also, $A$ divides $A$. Thus $d(A) \geq 2^{n-1}$, as required.
\end{proof} A similar construction also gives lower bounds for $d(k,n)$. \begin{lemma} \label{lower2} For all $k,n \geq 1$, $d(k,n) \geq \binom{n-1}{k}$. \end{lemma} \begin{proof} Again, the lemma clearly holds for $n=1$. So, for $n \geq 2$ arbitrarily choose a set $A'$ of $n-1$ positive integers and let \[ S:=\{ s \in \mathbb{N} : s=\sum B \text{ for some $B \subseteq A'$ with $|B|=k$} \}. \] The rest of the proof is identical to the proof of the previous lemma. That is, we construct $a$ such that all $k$-subsets of $A'$ divide $A' \cup \{a\}$. \end{proof} We point out that the same technique shows that the corresponding minimization problems for $d(A)$ and $d_k(A)$ are easy. Namely, define $A$ to be \emph{prime} if the only divisor of $A$ is $A$ itself. \begin{claim} For each $n \in \mathbb{N}$, there exist infinitely many prime $n$-sets of integers. \end{claim} \begin{proof} Arbitrarily choose a set $A'$ of $n-1$ positive integers, with $1 \notin A'$. Choose a prime number $p$ such that $p \geq 2\sum A'$. Finish by setting $a:=p-\sum A'$ and $A:=A' \cup \{a\}$. \end{proof} \section{Upper bounds for $d(n)$ and $d(n,2n)$} Let $A$ be a set of positive integers. We say that a subset $B$ of $A$ is a \emph{halving set} if $\sum B = \frac{1}{2} \sum A$. Evidently, $B$ is a halving set if and only if $A \setminus B$ is a halving set. The next lemma is also obvious, but quite useful. \begin{lemma} \label{half} If $B$ and $C$ are distinct halving sets, then $| B \triangle C | > 2$. \end{lemma} A \emph{separation} of $A$ is a pair $\{B,C\}$, where $B$ and $C$ are disjoint subsets of $A$ with $B \cup C=A$. Note that $\{B,C\}=\{C,B\}$. A \emph{strong separation} is a separation $\{B,C\}$ where $|B|=|C|$. We say that $\{B,C\}$ is \emph{barren} if neither $B$ nor $C$ divides $A$, \emph{neutral} if exactly one of $B$ or $C$ divides $A$, and \emph{abundant} if both $B$ and $C$ divide $A$.
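The construction in the proof of Lemma~\ref{lower1} above is easy to verify by brute force: choosing the new element so that the total becomes a common multiple of every non-empty subset sum of $A'$ forces all of those subsets to divide the enlarged set. The following sketch is ours, not from the paper, and the helper names \texttt{divisors} and \texttt{extend} are invented for illustration.

```python
from itertools import combinations
from math import lcm

def divisors(A):
    """All non-empty subsets of A whose sum divides sum(A)."""
    total = sum(A)
    return [B for r in range(1, len(A) + 1)
            for B in combinations(sorted(A), r)
            if total % sum(B) == 0]

def extend(A_prime):
    """Append one element to A_prime so that every non-empty
    subset of A_prime divides the enlarged set (Lemma lower1)."""
    sums = {sum(B) for r in range(1, len(A_prime) + 1)
            for B in combinations(A_prime, r)}
    L = lcm(*sums)
    # pick a multiple of L large enough that the new element is
    # positive and not already present in A_prime
    total = L
    while total <= sum(A_prime) or total - sum(A_prime) in A_prime:
        total += L
    return sorted(A_prime | {total - sum(A_prime)})

A = extend({3, 5, 9})               # a 4-element set
assert len(divisors(A)) >= 2 ** (len(A) - 1)
```

Running `divisors` on small random sets also confirms the bound of Lemma~\ref{abundant1} below.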
Note that $\{B,C\}$ is an abundant separation if and only if $B$ and $C$ are both halving sets. Thus, one approach to obtain upper bounds for $d(n)$ (respectively, $d(n,2n)$) is to bound the number of abundant separations (respectively, abundant strong separations) of $A$. \begin{lemma} \label{abundant1} Let $A$ be a set of $n$ positive integers. Then $d(A) \leq 2^{n-1}+h$, where $h$ is the number of abundant separations of $A$. \end{lemma} \begin{proof} Partition $2^A$ into pairs $\{B, A \setminus B\}$. There are $2^{n-1}$ such separations. Finish by observing that a separation contributes 0 to $d(A)$ if it is barren, 1 to $d(A)$ if it is neutral, and 2 to $d(A)$ if it is abundant. \end{proof} Similarly, we have the following lemma. \begin{lemma} \label{abundant2} Let $A$ be a set of $2n$ positive integers. Then $d_n(A) \leq \frac{1}{2}\binom{2n}{n}+h$, where $h$ is the number of abundant strong separations of $A$. \end{lemma} Note that these bounds match the lower bounds from the previous section if $h=0$. However, it is possible for a set to have many halving sets. For example, consider $A=\{1, \dots, 4\ell\}$. A theorem of Stanley~\cite{lefschetz} shows that this example is in fact the worst possible. Fortunately, we are able to determine $d(n)$ and $d(n,2n)$ using a different approach. We first handle $d(n)$ by showing that the bound from Lemma~\ref{lower1} is best possible for almost all values of $n$. \begin{lemma} \label{tight2} For all $n \geq 4$, $d(n)=2^{n-1}$. \end{lemma} \begin{proof} Let $n \geq 4$ and $A$ be a set of $n$ positive integers. By Lemma~\ref{lower1} it suffices to show $d(A) \leq 2^{n-1}$. If no separations of $A$ are abundant, then we are done by Lemma~\ref{abundant1}. So we may assume that $A$ contains an abundant separation. We proceed by defining an injection $\phi$ from the set of abundant separations to the set of barren separations. Let $\{B,C\}$ be an abundant separation. We may assume that $\min A \in B$.
Define $\phi (\{B,C\})$ to be $\{B \setminus \{\min A\}, C \cup \{\min A\}\}$. First note that $\phi$ is injective. Secondly, if $\min A < \frac{1}{6} \sum A$, then $\frac{1}{3}\sum A < \sum (B \setminus \{\min A\}) < \frac{1}{2} \sum A$. Thus, if $\min A < \frac{1}{6} \sum A$, then $\phi$ maps abundant separations to barren separations. So we are done unless $\min A \geq \frac{1}{6} \sum A$. Observe that if $A$ contains a halving set $H$ of size at least 3, then $\min A \leq \min H < \frac{1}{6} \sum A$. Therefore, we are done unless $n=4$. Let $A:=\{a_1, \dots, a_4\}$ with $a_1 < a_2 < a_3 < a_4$. Since there are no halving sets of $A$ of size 3, it follows that $\{\{a_1,a_4\}, \{a_2,a_3\} \}$ is the unique abundant separation of $A$. Now, since $a_1 \geq \frac{1}{6} \sum A$ it follows that $\frac{1}{3} \sum A < a_1 + a_2 < \frac{1}{2} \sum A$. Thus, $\{ \{a_1, a_2\}, \{a_3, a_4\} \}$ is a barren separation, so we are done by defining $\phi (\{\{a_1,a_4\}, \{a_2,a_3\} \}):=\{ \{a_1, a_2\}, \{a_3, a_4\} \}$. \end{proof} It is easy to determine the small values of $d(n)$ by hand. We omit the details. \begin{lemma} \label{easy} We have $d(1)=1, d(2)=2$, and $d(3)=5$. If $|A|=3$, then $d(A)=5$ if and only if $A=\{a, 2a, 3a\}$ for some $a \in \mathbb{N}$. \end{lemma} We now show that for $d(n, 2n)$, the lower bound from Lemma~\ref{lower2} is also best possible for $n \geq 3$. \begin{lemma} \label{tight} For all $n \geq 3$, $d(n, 2n) = \frac{1}{2} \binom{2n}{n}$. \end{lemma} \begin{proof} Let $A$ be a set of $2n$ positive integers, with $n \geq 3$. By Lemma~\ref{lower2}, it suffices to show that $d_n(A) \leq \frac{1}{2} \binom{2n}{n}$. First observe that if $A$ does not contain any abundant strong separations, then we are done by Lemma~\ref{abundant2}. So, we may assume that $A$ contains an abundant strong separation. In this case, we proceed by defining an injection $\phi$ from the family of abundant strong separations to the family of barren strong separations.
Let $\{B,C\}$ be an abundant strong separation. We define \[ \phi(\{B,C\}):=\{(B \setminus \{\min B\}) \cup \{\min C\}, (C \setminus \{\min C\}) \cup \{\min B\}\}. \] First note that if $\phi(\{B_1, C_1\})=\phi(\{B_2, C_2\})$ for $\{B_1, C_1\} \neq \{B_2, C_2\}$, then by relabelling we may assume that $|B_1 \cap B_2|=n-1$. However, this contradicts Lemma~\ref{half}. So $\phi$ is indeed an injection. We finish the proof by showing that $\phi$ maps abundant separations to barren separations. We may assume that $\min B < \min C$. Let $B':=(B \setminus \{\min B\}) \cup \{\min C\}$ and $C':=(C \setminus \{\min C\}) \cup \{\min B\}$. Clearly, $B'$ does not divide $A$. Also, as $\sum B = \sum C = \frac{1}{2} \sum A$, both $\min C$ and $\min B$ are strictly less than $\frac{1}{2n} \sum A$. Therefore \[ (\frac{1}{2}-\frac{1}{2n}) \sum A < \sum C' < \frac{1}{2} \sum A. \] Since $n \geq 3$, $(\frac{1}{2}-\frac{1}{2n}) \geq \frac{1}{3}$. Thus, $C'$ also does not divide $A$. \end{proof} \section{Characterizing all extremal sets} \label{uniqueness} We now characterize all subsets of integers that achieve the bounds in Lemma~\ref{tight2} and Lemma~\ref{tight}. Let $A:=\{a_1, \dots, a_n\}$ be a set of $n$ positive integers with $a_1 < \dots < a_n$. We say that $A$ is an \emph{anti-pencil} if the set of divisors of $A$ consists of all non-empty subsets of $A \setminus \{a_n\}$ together with $A$ itself. Similarly, $A$ is a \emph{$k$-anti-pencil} if the set of $k$-subset divisors of $A$ is the set of all $k$-subsets of $A \setminus \{a_n\}$. Observe that the constructions in Section~\ref{lower} completely describe the set of all anti-pencils and the set of all $k$-anti-pencils. We will need the following two simple observations to aid with the case analysis. \begin{lemma} \label{diophantine} If $k,\ell$, and $m$ are positive integers such that $\frac{1}{k}+\frac{1}{\ell}=\frac{1}{m}$, then $k+\ell$ divides $k\ell$.
\end{lemma} \begin{lemma} \label{diophantine2} If $k< \ell$ are positive integers such that $\frac{1}{k}+\frac{1}{\ell}=\frac{1}{2}$, then $k=3$ and $\ell=6$. \end{lemma} Here is our first characterization. \begin{lemma} \label{EKR2} For $n \geq 5$, a set of $n$ positive integers has exactly $2^{n-1}$ divisors if and only if it is an anti-pencil. A set of four positive integers has 8 divisors if and only if it is an anti-pencil or of the form $\{a,2a,3a,6a\}$ for some $a \in \mathbb{N}$. \end{lemma} \begin{proof} One direction is obvious. For the other direction, let $n \geq 4$ and $A:=\{a_1, \dots, a_n\}$ have exactly $2^{n-1}$ divisors. We may assume that $a_n \leq \frac{1}{2} \sum A$, else $A$ is an anti-pencil and we are done. We claim that $a_2 \geq \frac{1}{6} \sum A$. Suppose not. Given an abundant separation of $A$, let $\phi_1$ be the map which moves $a_1$ across the separation and let $\phi_2$ be the map which moves $a_2$ across the separation. Since $a_1$ and $a_2$ are both less than $\frac{1}{6} \sum A$, we again have that $\phi_1$ and $\phi_2$ are injective maps from the set of abundant separations to the set of barren separations. Moreover, by Lemma~\ref{half}, the images of $\phi_1$ and $\phi_2$ are disjoint. Therefore, $A$ has more barren separations than abundant separations, which is a contradiction. We next claim that either $A$ contains no abundant separations or $A=\{a,2a,3a,6a\}$ for some $a \in \mathbb{N}$. Suppose $\{B,C\}$ is an abundant separation. If $\max \{|B|,|C|\} \geq 4$ or $\min \{|B|,|C|\} \geq 3$, then $a_2 < \frac{1}{6} \sum A$; a contradiction. In particular, this implies $n \in \{4,5\}$. We first handle the case $n=5$. Let $B:=\{b_1, b_2\}$ and $C:=\{c_1, c_2,c_3\}$ with $b_1<b_2$ and $c_1<c_2<c_3$. By Lemma~\ref{half}, $\{B,C\}$ is the unique abundant separation of $A$. Now, if $a_1=b_1$, then $a_2 \leq c_1 < \frac{1}{6} \sum A$; a contradiction. Thus, $a_1=c_1$.
It follows that $\{\{b_1,b_2,c_1\}, \{c_2, c_3\} \}$ is the unique barren separation of $A$. In particular $\{a_i\}$ divides $A$ for all $i \in [5]$. Therefore, there exist positive integers $m_1>\dots>m_5$ such that $m_ia_i= \sum A$. Since $m_2 \leq 6$ and $m_5 \geq 3$ we must have $m_2=6, m_3=5, m_4=4$, and $m_5=3$. Now choose a 2-subset $A'$ of $\{a_2, a_3, a_4\}$ such that $A' \neq \{c_2, c_3\}$. Since $\sum A' < \frac{1}{2} \sum A$ and $A' \neq \{c_2,c_3\}$, it follows that $A'$ divides $A$. However, this is a contradiction, since the equation $\frac{1}{k}+\frac{1}{\ell}=\frac{1}{m}$ has no positive integer solutions for $\{k,\ell\} \subset \{4,5,6\}$ by Lemma~\ref{diophantine}. We thus have $n=4$. Again by Lemma~\ref{half}, $\{B,C\}$ is the unique abundant separation of $A$. Thus, there is a unique barren separation $\{B',C'\}$ of $A$. First suppose $|B|=|C|=2$. By relabelling if necessary, $B=\{a_1,a_4\}$ and $C=\{a_2,a_3\}$. Since $\{B',C'\}$ is the unique barren separation of $A$, at least three of $\{a_1\}, \{a_2\}, \{a_3\}$ or $\{a_4\}$ divide $A$. By the pigeonhole principle, both members of $B$ divide $A$ or both members of $C$ divide $A$. By Lemma~\ref{diophantine2}, we either have $6a_1=\sum A=3a_4$ or $6a_2=\sum A=3a_3$. In the second case, swapping $a_1$ and $a_2$ or swapping $a_3$ and $a_4$ in $\{B,C\}$ both yield barren separations, which contradicts the uniqueness of $\{B',C'\}$. The first case is also impossible as any single swap of $\{B,C\}$ yields a barren separation. Therefore, we may assume $B=\{a_4\}$ and $C=\{a_1,a_2,a_3\}$. Now consider the sets \[ \{a_2\}, \{a_3\}, \{a_1,a_2\}, \{a_1, a_3\}, \{a_2, a_3\}. \] Since $A$ has exactly one barren separation, at least four of these sets divide $A$. Let $m_1 \leq m_2 \leq m_3 \leq m_4$ be the multiples that appear for these four divisors $D_1, \dots, D_4$. Note that $m_4 \leq 6$ since $a_2 \geq \frac{1}{6} \sum A$. Suppose these multiples are all distinct. 
In this case, it follows that $m_1=3, m_2=4, m_3=5, m_4=6$. Since $\frac{1}{3} + \frac{1}{6}=\frac{1}{2}$, $D_1$ and $D_4$ must partition $\{a_1, a_2, a_3\}$. It cannot be that $\{D_1,D_4\}=\{\{a_3\}, \{a_1, a_2\}\}$ since then neither $\{a_1, a_3\}$ nor $\{a_2, a_3\}$ divides $A$. Thus, $D_1=\{a_1, a_3\}$ and $D_4=\{a_2\}$. But now, $D_2$ and $D_3$ also partition $\{a_1, a_2,a_3\}$. This is a contradiction since $\frac{1}{4}+\frac{1}{5} \neq \frac{1}{2}$. Therefore, $m_i=m_j$ for some $i \neq j$. This is only possible if $D_i$ and $D_j$ partition $\{a_1,a_2,a_3\}$. Thus, $m_i=m_j=4$, $D_i=\{a_1,a_2\}$, and $D_j=\{a_3\}$. Observe that at least one of $\{a_1,a_3\}$ or $\{a_2,a_3\}$ divides $A$, else $A$ has two barren separations. Thus, $a_1+a_3=\frac{1}{3} \sum A$ or $a_2+a_3=\frac{1}{3} \sum A$. So, either $a_1=\frac{1}{12} \sum A$ or $a_2=\frac{1}{12} \sum A$. It must be that $a_1=\frac{1}{12} \sum A$, since $a_2 \geq \frac{1}{6} \sum A$. Finally, $a_2=\frac{1}{6} \sum A$, since $a_1+a_2= \frac{1}{4} \sum A$. Thus, $A=\{a,2a,3a,6a\}$ for some $a \in \mathbb{N}$, as required. We may hence assume that every separation of $A$ is a neutral separation. Thus, $B$ divides $A$ if and only if $\sum B < \frac{1}{2} \sum A$. In particular, $\{a_n\}$ divides $A$ and so $a_n \leq \frac{1}{3} \sum A$. Let $M$ be a maximal set (under inclusion) among all subsets of $A$ containing $a_n$ and with sum at most $\frac{1}{2}\sum A$. Note that $\{a_n\}$ is a candidate for $M$, so $M$ exists. Since $\sum M < \frac{1}{2} \sum A$, it follows that $M$ divides $A$ and hence $\sum M \leq \frac{1}{3} \sum A$. Choose $a \notin M$ and consider $M \cup \{a\}$. By choice of $M$ we have $\sum (M \cup \{a\}) > \frac{1}{2} \sum A$. Thus, $\sum (A \setminus (M \cup \{a\})) \leq \frac{1}{3} \sum A$, since $A \setminus (M \cup \{a\})$ divides $A$. But now, $a_n > a \geq \frac{1}{3} \sum A$, which is a contradiction. \end{proof} Combining Lemma~\ref{EKR2} with Lemma~\ref{easy} we have the following summary.
\begin{theorem} For all $n \neq 3$, $d(n)=2^{n-1}$ and the sets that achieve this bound are precisely the anti-pencils or $A=\{a,2a,3a,6a\}$ for some $a \in \mathbb{N}$. For $n=3$, $d(3)=5$ and the sets that achieve this bound are $\{a,2a,3a\}$ for some $a \in \mathbb{N}$. \end{theorem} We now present the end of the story for $d(n, 2n)$ as well. \begin{lemma} \label{EKR1} Let $n \geq 3$ and let $A$ be a set of $2n$ positive integers. If $A$ has exactly $\frac{1}{2} \binom{2n}{n}$ divisors of size $n$, then $A$ is an $n$-anti-pencil. \end{lemma} \begin{proof} Let $n \geq 3$ and let $A$ be a set of $2n$ positive integers with $\frac{1}{2} \binom{2n}{n}$ divisors of size $n$. Suppose $A:=\{a_1, \dots, a_{2n} \}$ with $a_1 < \dots < a_{2n}$. We claim that $A$ does not contain any abundant strong separations. Suppose not. We first suppose $n=3$. By Lemma~\ref{half}, $A$ has a unique abundant strong separation $\{\{b_1, b_2, b_3\}, \{c_1, c_2, c_3\}\}$ with elements labelled in increasing order. Counting as in the proof of Lemma~\ref{abundant1}, the equality $d_3(A)=\frac{1}{2}\binom{6}{3}$ forces the number of barren strong separations to equal the number of abundant ones, so $A$ has exactly one barren strong separation. Note that $b_1 < \frac{1}{6} \sum A$ and $c_1 < \frac{1}{6} \sum A$. Thus, swapping $b_1$ and $c_1$ yields a barren separation. If $b_2 \leq \frac{1}{6} \sum A$, then swapping $b_2$ and $c_1$ yields another barren separation; a contradiction. Thus, $b_2 > \frac{1}{6} \sum A$. On the other hand, $b_2 < \frac{1}{4} \sum A$ since $b_2 < b_3$. By symmetry, we also have $\frac{1}{6}\sum A < c_2 < \frac{1}{4} \sum A$. Thus, swapping $b_2$ and $c_2$ yields another barren separation; a contradiction. So we may assume $n \geq 4$. Let $\{B,C\}$ be an abundant strong separation of $A$ with $a_1 \in B$. Let $c_1$ and $c_2$ be the two smallest elements of $C$. Define \[ \phi_1 (\{B,C\}):= \{ (B \setminus \{a_1\}) \cup \{c_1\} , (C \setminus \{c_1\}) \cup \{a_1\} \} \text{ and} \] \[ \phi_2 (\{B,C\}):= \{ (B \setminus \{a_1\}) \cup \{c_2\} , (C \setminus \{c_2\}) \cup \{a_1\}\}. \] Since $n \geq 4$, it follows that $a_1, c_1$ and $c_2$ are each less than $\frac{1}{6} \sum A$.
Therefore, both $\phi_1$ and $\phi_2$ are injective maps from the set of abundant strong separations to the set of barren strong separations. Furthermore, by Lemma~\ref{half}, the images of $\phi_1$ and $\phi_2$ are disjoint. Therefore $A$ contains more barren strong separations than abundant strong separations; a contradiction. Thus, $A$ has no abundant strong separations as claimed. It follows that every strong separation of $A$ must be neutral. Thus, an $n$-subset $B$ divides $A$ if and only if $\sum B < \frac{1}{2} \sum A$. Consider $M:=\{a_1, \dots, a_{n-1}, a_{2n}\}$. Note that $\sum M \leq \frac{1}{2} \sum A$, else $A$ is an $n$-anti-pencil and we are done. Hence, in fact $\sum M \leq \frac{1}{3} \sum A$. Now let $B_1, \dots, B_{\ell}$ be a sequence of $n$-subsets of $A$ such that $B_1=M$, $B_{\ell}=\{a_{n+1}, \dots, a_{2n}\}$, and for each $1<j\leq \ell$, $B_j$ is obtained from $B_{j-1}$ by replacing some element of $B_{j-1}$ by a larger element not in $B_{j-1}$. Let $k$ be the first index such that $\sum B_k > \frac{1}{2} \sum A$, and let $B_k=B_{k-1} \triangle \{c,d\}$ with $c < d$. Since $B_{k-1}$ and $A \setminus B_k$ both have sum less than $\frac{1}{2} \sum A$, they both must divide $A$. Therefore, $\sum B_{k-1} \leq \frac{1}{3} \sum A$ and $\sum (A \setminus B_k) \leq \frac{1}{3} \sum A$. It follows that $\sum A - d+c \leq \frac{2}{3} \sum A$. Thus, $a_{2n}>d>\frac{1}{3} \sum A$, which is a contradiction since $a_{2n} \in M$. \end{proof} Combining Lemma~\ref{EKR1} with Lemma~\ref{olympiad} we have the following summary. \begin{theorem} For all $n \neq 2$, $d(n,2n)=\frac{1}{2} \binom{2n}{n}$ and the sets of $2n$ positive integers that achieve this bound are precisely the $n$-anti-pencils. For $n=2$, $d(2,4)=4$ and the sets that achieve this bound are $A= \{a, 5a, 7a, 11a\}$ or $A=\{a, 11a, 19a, 29a\}$ for some $a \in \mathbb{N}$.
\end{theorem} \section{Continuous analogues and open problems} If one considers subsets of \textit{real} numbers instead of natural numbers, then the question of divisibility no longer makes sense. In this context, it is natural to instead ask about subsets with \textit{non-negative} sum. Let $A$ be a set of $n$ real numbers. Define $\mu(A)$ (respectively $\mu_k(A)$) to be the number of subsets (respectively $k$-subsets) $B$ of $A$ such that $\sum B \geq 0$. We then define $\mu_{\min} (n)$ (respectively $\mu_{\max} (n)$) to be the minimum (respectively maximum) of $\mu(A)$ over all sets $A$ of $n$ real numbers with $\sum A=0$. Similarly, we define $\mu_{\min}(k,n)$ (respectively $\mu_{\max}(k,n)$) to be the minimum (respectively maximum) of $\mu_k(A)$ over all sets $A$ of $n$ real numbers with $\sum A=0$. It is easy to check that $\mu_{\min}(n)=2^{n-1}+1$ for all $n \geq 1$, and that $A= \{1, \dots, n-1\} \cup \{-\frac{n(n-1)}{2}\}$ achieves this bound (recall that $\sum \emptyset = 0$). On the other hand, the minimization problem for $\mu_k$ is non-trivial. Indeed, the following nice conjecture of Manickam, Mikl{\'o}s, and Singhi asserts that $\mu_{\min}(k,n) \geq \binom{n-1}{k-1}$, for $n \geq 4k$. \begin{conjecture}[\cite{MMS1}, \cite{MMS2}] If $n$ and $k$ are positive integers with $n \geq 4k$, and $A$ is a set of $n$ real numbers with $\sum A=0$, then the number of $k$-subsets of $A$ with non-negative sum is at least $\binom{n-1}{k-1}$. \end{conjecture} Note that by choosing $A$ with exactly one non-negative element, we do obtain $\binom{n-1}{k-1}$ non-negative $k$-subsets. Such sets correspond to the extremal examples in the Erd{\H{o}}s-Ko-Rado theorem~\cite{EKR}, so we will call them \textit{$k$-pencils}. The Manickam-Mikl{\'o}s-Singhi conjecture has recently received substantial attention. We refer the reader to Alon, Huang and Sudakov~\cite{MMS3}, Chowdhury~\cite{ameerah}, Frankl~\cite{frankl}, and Pokrovskiy~\cite{linearMMS}.
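Both extremal examples just described are easy to check directly for small $n$. The sketch below is ours, not from the papers cited above; the helper names \texttt{mu} and \texttt{mu\_k} are invented, and the sets are exactly the $\mu_{\min}(n)$ example and a $k$-pencil.

```python
from itertools import combinations
from math import comb

def mu(A):
    """Number of subsets of A (the empty set included) with non-negative sum."""
    return sum(1 for r in range(len(A) + 1)
               for B in combinations(A, r) if sum(B) >= 0)

def mu_k(A, k):
    """Number of k-subsets of A with non-negative sum."""
    return sum(1 for B in combinations(A, k) if sum(B) >= 0)

# the set {1, ..., n-1, -n(n-1)/2} attains mu_min(n) = 2^(n-1) + 1
n = 8
A = list(range(1, n)) + [-n * (n - 1) // 2]
assert sum(A) == 0 and mu(A) == 2 ** (n - 1) + 1

# a k-pencil (a single positive element balancing n-1 negatives)
# attains the Manickam-Miklos-Singhi bound binom(n-1, k-1)
n, k = 12, 3
P = [-i for i in range(1, n)] + [n * (n - 1) // 2]
assert sum(P) == 0 and mu_k(P, k) == comb(n - 1, k - 1)
```

Only the $k$-subsets containing the positive element of the $k$-pencil have non-negative sum, which is where the count $\binom{n-1}{k-1}$ comes from.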
We now discuss $\mu_{\max}(n)$ and $\mu_{\max}(k,n)$, which can be viewed as continuous analogues of our functions $d(n)$ and $d(k,n)$. It is easy to see that determining $\mu_{\max}(n)$ is equivalent to maximizing the number of subsets of $A$ whose sum is \textit{exactly} zero. For $n$ odd, this reduces to a conjecture of Erd{\H{o}}s and Moser, see~\cite{extremalNT1, extremalNT2}. Using the Hard Lefschetz theorem~\cite{lefschetzbook} from algebraic geometry, Stanley~\cite{lefschetz, stanley2} solved a generalization of the Erd{\H{o}}s-Moser conjecture. Thus, by~\cite[Corollary 5.1]{lefschetz}, we have the following summary for $\mu_{\max}(n)$. \begin{theorem} For $n=2\ell$, $\mu_{\max}(n)$ is achieved by taking $A=\{-\ell, \dots, -1\} \cup \{1, \dots, \ell\}$. For $n=2\ell+1$, $\mu_{\max}(n)$ is achieved by taking $A=\{-\ell, \dots, \ell\}$. \end{theorem} As far as we know, determining $\mu_{\max}(k,n)$ is a wide open problem, although similar questions have been considered. For example, one can define $\mu'_{\max}(k,n)$ to be the maximum of $\mu(A)$ over all sets $A$ of $n$ real numbers such that $\sum A <0$ and $\sum B < 0$ for all $B \subseteq A$ with $|B| > k$. Recently, Alon, Aydinian and Huang~\cite{maxnonnegative} proved that $\mu'_{\max}(k,n)= \binom{n-1}{k-1}+ \dots + \binom{n-1}{0}+1$, settling a question of Tsukerman. Note that by choosing exactly one element of $A$ to be negative, we have the bound $\mu_{\max}(k,n) \geq \binom{n-1}{k}$. We overload terminology and call such a set a \textit{$k$-anti-pencil}. This construction is not optimal for $n=2k$, but in the range of the Manickam-Mikl{\'o}s-Singhi conjecture, we conjecture that it is. \begin{conjecture} If $n \geq 4k$, then $\mu_{\max}(k,n)=\binom{n-1}{k}$. \end{conjecture} One can also attempt to characterize the extremal examples for $\mu_{\min}(k,n)$ and $\mu_{\max}(k,n)$.
For example, Chowdhury~\cite{ameerah} gives some values of $k$ and $n$ for which the extremal examples for $\mu_{\min}(k,n)$ are necessarily $k$-pencils. Thus, it would be quite interesting to determine for which $k$ and $n$ the extremal examples for $\mu_{\min}(k,n)$ and $\mu_{\max}(k,n)$ are necessarily `dual' to one another. \begin{problem} Determine for which $k$ and $n$ the only extremal examples for $\mu_{\min}(k,n)$ and $\mu_{\max}(k,n)$ are $k$-pencils and $k$-anti-pencils, respectively. \end{problem} We end by mentioning that determining $d(k,n)$ for $n \neq 2k$ is also an open problem. Recall that our proof technique relies on the fact that $B$ and $A \setminus B$ are both possible divisors of $A$, which fails when $n \neq 2k$. Nonetheless, we conjecture that for most values of $k$ and $n$, $d(k,n)=\binom{n-1}{k}$. Note that this agrees with the conjectured value for $\mu_{\max}(k,n)$. However, in the divisibility setting, it is possible that $d(k,n)=\binom{n-1}{k}$ for all but finitely many values. \begin{conjecture} For all but finitely many values of $k$ and $n$, $d(k,n)=\binom{n-1}{k}$. \end{conjecture} \subsection*{Acknowledgements} We thank Alex Heinis for valuable discussions during the 2011 International Mathematical Olympiad in Amsterdam. We also thank Ameera Chowdhury and Gjergji Zaimi for providing useful references.
https://arxiv.org/abs/1105.5178
The peak sidelobe level of random binary sequences
Let $A_n=(a_0,a_1,\dots,a_{n-1})$ be drawn uniformly at random from $\{-1,+1\}^n$ and define \[ M(A_n)=\max_{0<u<n}\,\Bigg|\sum_{j=0}^{n-u-1}a_ja_{j+u}\Bigg|\quad\text{for $n>1$}. \] It is proved that $M(A_n)/\sqrt{n\log n}$ converges in probability to $\sqrt{2}$. This settles a problem first studied by Moon and Moser in the 1960s and proves in the affirmative a recent conjecture due to Alon, Litsyn, and Shpunt. It is also shown that the expectation of $M(A_n)/\sqrt{n\log n}$ tends to $\sqrt{2}$.
\section{Introduction} Consider a binary sequence $A=(a_0,a_1,\dots,a_{n-1})$ of length $n$, namely an element of $\{-1,+1\}^n$. Define the \emph{aperiodic autocorrelation} at shift $u$ of $A$ to be \[ C_u(A):=\sum_{j=0}^{n-u-1}a_ja_{j+u}\quad \mbox{for $u\in\{0,1,\dots,n-1\}$} \] and define the \emph{peak sidelobe level} of $A$ as \[ M(A):=\max_{0<u<n}\,\abs{C_u(A)}\quad\mbox{for $n>1$}. \] Binary sequences with small autocorrelation at nonzero shifts have a wide range of applications in digital communications, including synchronisation and radar. \par Let $M_n$ be the minimum of $M(A)$ taken over all $2^n$ binary sequences $A$ of length~$n$. By a parity argument, it is seen that $M_n\ge 1$ and it is known that $M_n=1$ for $n\in\{2,3,4,5,7,11,13\}$, which arises from the existence of a Barker sequence for the corresponding lengths. It is a classical (and still unsolved) problem to decide whether $M_n>1$ for all $n>13$; the currently smallest undecided case arises for $n>10^{29}$~\cite{Mossinghoff2009}. It is conjectured that $M_n$ grows as $n\to\infty$, perhaps like $\sqrt{n}$. We refer to Turyn~\cite{Turyn1968} and Jedwab~\cite{Jedwab2008} for excellent surveys on this problem. \par In this paper, we will be concerned with the asymptotic behaviour, as $n\to\infty$, of $M(A)$ for almost all binary sequences $A$ of length $n$. This problem was first studied by Moon and Moser~\cite{Moon1968}. Let $A_n$ be a random binary sequence of length $n$, by which we mean that $A_n$ is taken uniformly from $\{-1,+1\}^n$. In other words, each of the $n$ sequence elements of $A_n$ takes on each of the values $-1$ and $+1$ independently with probability $1/2$. Then the current state of knowledge can be summarised as \begin{equation} 1\le \liminf_{n\to\infty}\frac{M(A_n)}{\sqrt{n\log n}}\le \limsup_{n\to\infty}\frac{M(A_n)}{\sqrt{n\log n}}\le \sqrt{2}\quad\mbox{in probability}. \label{eqn:lbup_known} \end{equation} The upper bound is due to Mercer~\cite{Mercer2006}. 
In fact, Mercer proved a weaker result but pointed out in a final remark~\cite[p.~670]{Mercer2006} that his proof establishes the above upper bound. The lower bound was proved by Alon, Litsyn, and Shpunt~\cite{Alon2010}, in response to numerical evidence provided by Dmitriev and Jedwab~\cite{Dmitriev2007}. The authors of~\cite{Alon2010} also conjectured that the lower bound can be improved to $\sqrt{2}$. The aim of this paper is to prove this conjecture and therefore to establish the limit distribution, as $n\to\infty$, of $M(A_n)/\sqrt{n\log n}$. In particular, we prove the following. \begin{theorem} \label{thm:main} Let $A_n$ be a random binary sequence of length $n$. Then, as $n\to\infty$, \[ \frac{M(A_n)}{\sqrt{n\log n}} \to\sqrt{2}\quad\mbox{in probability} \] and \[ \frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}} \to\sqrt{2}. \] \end{theorem} \par Alon, Litsyn, and Shpunt~\cite{Alon2010} already observed that, as a consequence of McDiarmid's inequality (Lemma~\ref{lem:bounded_differences}), $M(A_n)$ is concentrated around its expected value, but could only show that \begin{equation} \liminf_{n\to\infty}\,\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}\ge 1. \label{eqn:lb_E1} \end{equation} Their proof considers $C_u(A_n)$ only for $u\ge n/2$ and crucially relies on the fact that $C_u(A_n)$ and $C_v(A_n)$ are independent whenever $n/2\le u<v<n$. Our method considers $C_u(A_n)$ also for $u<n/2$. In particular, by a careful estimation of the moments of $C_u(A_n)C_v(A_n)$ for $0<u<v<n$, we will show that the lower bound~\eqref{eqn:lb_E1} can be improved to $\sqrt{2}$, which together with~\eqref{eqn:lbup_known} establishes the second part of Theorem~\ref{thm:main}. The first part of Theorem~\ref{thm:main} then follows from McDiarmid's inequality. 
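The quantities above are easy to experiment with numerically. The sketch below is our own, not from the paper: it computes $M(A)$ directly from the definition, checks it on the length-13 Barker sequence mentioned in the introduction (for which $M_n=1$), and estimates $M(A_n)/\sqrt{n\log n}$ for random sequences; at such small $n$ the ratio need not yet be close to the limit $\sqrt{2}$.

```python
import math
import random

def psl(a):
    """Peak sidelobe level M(A) = max over 0 < u < n of |C_u(A)|."""
    n = len(a)
    return max(abs(sum(a[j] * a[j + u] for j in range(n - u)))
               for u in range(1, n))

# the length-13 Barker sequence has M(A) = 1
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
assert psl(barker13) == 1

# Monte Carlo estimate of E[M(A_n)] / sqrt(n log n)
random.seed(1)
n, trials = 256, 50
ratios = [psl([random.choice((-1, 1)) for _ in range(n)])
          / math.sqrt(n * math.log(n)) for _ in range(trials)]
avg = sum(ratios) / trials
# the limit is sqrt(2), but the convergence in n is slow
assert 0.8 < avg < 1.8
```

The $O(n^2)$ evaluation of all autocorrelations suffices here; FFT-based methods would be needed for large $n$.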
\par As pointed out in~\cite{Alon2010}, given a binary sequence $A=(a_0,a_1,\dots,a_{n-1})$ of length $n$, the quantity $M(A)$ is related to the more general $r$th-order correlation measure $S_r(A)$, which was defined by Mauduit and S\'ark\"ozy~\cite{Mauduit1997} to be \[ S_r(A):=\max_{0\le u_1<u_2<\cdots<u_r<n}\;\max_{0\le k\le n-u_r}\Biggabs{\sum_{j=0}^{k-1}a_{j+u_1}a_{j+u_2}\cdots a_{j+u_r}}\quad\mbox{for $n\ge r$}. \] Alon {\itshape et al.}~\cite{Alon2007} established that, if $A_n$ is a random binary sequence of length $n$, then for all $r\ge 2$, \[ \frac{2}{5}\le\liminf_{n\to\infty}\frac{S_r(A_n)}{\sqrt{n\log {n\choose r}}}\le\limsup_{n\to\infty}\frac{S_r(A_n)}{\sqrt{n\log {n\choose r}}}\le\frac{7}{4}\quad\mbox{in probability}. \] Since, for every binary sequence $A$, we have $M(A)\le S_2(A)$, Theorem~\ref{thm:main} implies that for $r=2$ the lower bound can be improved from $2/5$ to $1$. \section{Preliminary Results} The main results of this section are the following. Given a random binary sequence $A_n$ of length $n$, Proposition~\ref{pro:Pr_Cu} gives a lower bound for \begin{equation} \Pr\big[\abs{C_u(A_n)}\ge \sqrt{2n\log n}\big] \label{eqn:lb} \end{equation} for small $u$. This result can also be deduced from~\cite{Alon2010}; however, the proof presented here is considerably simpler and more direct. Proposition~\ref{pro:Pr_CuCv} gives an upper bound for \begin{equation} \Pr\big[\abs{C_u(A_n)}\ge \sqrt{2n\log n}\,\cap\,\abs{C_v(A_n)}\ge \sqrt{2n\log n}\big] \label{eqn:ub} \end{equation} for $0<u<v<n$. These bounds will be the crucial ingredients in the proof of the main result of this paper. \subsection{~} To bound~\eqref{eqn:lb}, we shall need the following refinement of the central limit theorem. 
\begin{lemma}[Cram\'er~{\cite[Thm.~2]{{Cramer1938}}}] \label{lem:asymptotic_tail} Let $X_0,X_1,\dots$ be identically distributed mutually independent random variables satisfying $\E[X_0]=0$ and $\E[X_0^2]=1$ and suppose that there exists $T>0$ such that $\E[e^{tX_0}]<\infty$ for all $\abs{t}<T$. Write $Y_n=X_0+X_1+\cdots+X_{n-1}$ and let $\Phi$ be the distribution function of a normal random variable with zero mean and unit variance. If $\theta_n>1$ and $\theta_n n^{-1/6}\to 0$ as $n\to\infty$, then \[ \frac{\Pr\big[\abs{Y_n}\ge\theta_n\sqrt{n}\big]}{2\Phi(-\theta_n)}\to 1. \] \end{lemma} \par \begin{proposition} \label{pro:Pr_Cu} Let $A_n$ be a random binary sequence of length $n>2$ and let $u$ be an integer satisfying $1\le u\le\frac{n}{\log n}$. Then \[ \Pr\big[\abs{C_u(A_n)}\ge \sqrt{2n\log n}\big]\ge\frac{1}{5n\sqrt{\log n}} \] for all sufficiently large $n$. \end{proposition} \begin{proof} Write $A_n=(a_0,a_1,\dots,a_{n-1})$. It is well known that the $n-u$ products \[ a_0a_u,\,a_1a_{1+u},\dots,a_{n-u-1}a_{n-1} \] are mutually independent. A proof of this fact was given by Mercer~\cite[Prop.~1.1]{Mercer2006}. Hence $C_u(A_n)$ is a sum of $n-u$ mutually independent random variables, each taking each of the values $-1$ and $+1$ with probability $1/2$. Notice that $\E[e^{ta_0a_u}]=\cosh(t)$ and, setting \[ \xi_n=\sqrt{\frac{2n\log n}{n-u}}, \] we find that $\xi_nn^{-1/6}\to 0$ since $u\le \frac{n}{\log n}$. We can therefore apply Lemma~\ref{lem:asymptotic_tail} to conclude, as $n\to\infty$, \begin{equation} \Pr\big[\abs{C_u(A_n)}\ge\sqrt{2n\log n}\big]\sim 2\Phi(-\xi_n), \label{eqn:Pr_Cu_F} \end{equation} where $\Phi$ is the distribution function of a standard normal random variable. 
It is well known (see~\cite[Thm.~1.2.3]{Durrett2010}, for example) that \[ \frac{1}{\sqrt{2\pi}\,z}\Big(1-\frac{1}{z^2}\Big)\,e^{-z^2/2}\le\Phi(-z)\le\frac{1}{\sqrt{2\pi}\,z}\,e^{-z^2/2}\quad\mbox{for $z>0$}, \] so that, since $\frac{n}{n-u}\sim 1$, as $n\to\infty$, \[ 2\Phi(-\xi_n)\sim \frac{1}{\sqrt{\pi \log n}}\,e^{-\frac{n}{n-u}\log n}. \] Using $u\le \frac{n}{\log n}$, we conclude for $n>2$ \[ e^{-\frac{n}{n-u}\log n}\ge e^{-\frac{\log n}{\log n-1}\,\log n}\sim \frac{1}{en} \] as $n\to\infty$. Hence for all $\alpha>e\sqrt{\pi}$ we certainly have \[ 2\Phi(-\xi_n)\ge\frac{1}{\alpha n\sqrt{\log n}} \] for all sufficiently large $n$. The claimed result follows from~\eqref{eqn:Pr_Cu_F}. \end{proof} \subsection{~} We now turn to the derivation of an upper bound for~\eqref{eqn:ub}. It will be convenient to define the notion of an even tuple as follows. \begin{definition} A tuple $(x_1,x_2,\dots,x_{2m})$ is \emph{even} if there exists a permutation $\sigma$ of $\{1,2,\dots,2m\}$ such that $x_{\sigma(2i-1)}=x_{\sigma(2i)}$ for each $i\in\{1,2,\dots,m\}$. \end{definition} \par For example, $(1,3,1,4,3,4)$ is even, while $(2,1,1,2,1,3)$ is not even. In the next two lemmas we will prove two results about even tuples, which we then use to estimate moments of $C_u(A_n)C_v(A_n)$. \par Recall that, for positive integer $k$, the double factorial \[ (2k-1)!!=\frac{(2k)!}{k!\,2^k}=(2k-1)(2k-3)\cdots 3\cdot 1 \] is the number of ways to arrange $2k$ objects into $k$ unordered pairs. \begin{lemma} \label{lem:even_tuple_single} Let $m$ and $q$ be positive integers and let $R$ be the set of even tuples in \[ \big\{(x_1,x_2,\dots,x_{2q})\,:\,x_i\in\mathbb{Z},\,0\le x_i<m\big\}. \] Then \[ \abs{R}\le(2q-1)!!\,m^q. \] \end{lemma} \begin{proof} There are $(2q-1)!!$ ways to arrange $x_1,x_2,\dots,x_{2q}$ into $q$ unordered pairs and to each of these $q$ pairs we assign a value of $\{0,1,\dots,m-1\}$. In this way we construct all elements of $R$ at least once, which proves the lemma. 
\end{proof} \par \begin{lemma} \label{lem:even_tuple_double} Let $u$, $v$, and $n$ be integers satisfying $0<u,v<n$ and $u\ne v$. Write $I=\{1,2,\dots,2q\}$ and let $t$ be an integer satisfying $0\le t<q$. Let $S$ be the subset of \[ \big\{(x_i,x_i+u,y_i,y_i+v)_{i\in I}\,:\,x_i,y_i\in\mathbb{Z},\,0\le x_i<n-u,\,0\le y_i<n-v\big\} \] containing all even elements $(x_i,x_i+u,y_i,y_i+v)_{i\in I}$ such that $(x_i)_{i\in J}$ is not even for all $(2q-2t)$-element subsets $J$ of $I$. Then \[ \abs{S}\le(8q-1)!!\,n^{2q-(t+1)/3}. \] \end{lemma} \begin{proof} We will construct a set of tuples that contains $S$ as a subset. Arrange the $8q$ variables \begin{equation} x_1,x_1+u,\dots,x_{2q},x_{2q}+u,y_1,y_1+v,\dots,y_{2q},y_{2q}+v \label{eqn:xy} \end{equation} into $4q$ unordered pairs $(a_1,b_1),(a_2,b_2),\dots,(a_{4q},b_{4q})$ such that there are at most $q-t-1$ pairs $(x_i,x_j)$. This can be done in at most $(8q-1)!!$ ways. We formally set $a_i=b_i$ for all $i\in\{1,2,\dots,4q\}$. If this assignment does not yield a contradiction, then we call the arrangement of~\eqref{eqn:xy} into $4q$ pairs \emph{consistent}. For example, if there are pairs of the form $(x_i,y_j)$ and $(x_i+u,y_j+v)$, then the arrangement is not consistent since $u\ne v$ by assumption. \par Now, for every consistent arrangement, pairs of the form $(x_i,x_j)$ or $(y_i,y_j)$ determine the value of another pair (namely, $(x_i+u,x_j+u)$ or $(y_i+v,y_j+v)$, respectively). On the other hand, for every consistent arrangement, pairs not of the form \[ (x_i,x_j),\;(y_i,y_j),\;(x_i+u,x_j+u),\;\mbox{or}\; (y_i+v,y_j+v) \] determine the value of at least two other pairs. For example, if there exists the pair $(x_i,y_j)$, then $x_i+u$ and $y_j+v$ must lie in different pairs. 
Therefore, since there are at most $q-t-1$ pairs of the form $(x_i,x_j)$ and at most $q$ pairs of the form $(y_i,y_j)$, for each consistent arrangement, at most \[ 2q-t-1+\tfrac{1}{3}(2t+2)=2q-\tfrac{1}{3}(t+1) \] of the variables $x_1,\dots,x_{2q},y_1,\dots,y_{2q}$ can be chosen independently. We assign to each of these a value of $\{0,1,\dots,n-1\}$. In this way, we construct a set of at most $(8q-1)!!\,n^{2q-(t+1)/3}$ tuples that contains $S$ as a subset, as required. \end{proof} \par We now use Lemmas~\ref{lem:even_tuple_single} and~\ref{lem:even_tuple_double} to bound moments of $C_u(A_n)C_v(A_n)$. \begin{lemma} \label{lem:moments} Let $p$ and $h$ be integers satisfying $0\le h<p$ and let $A_n$ be a random binary sequence of length $n$. Then, for $0<u<v<n$, \[ \E\Big[\big(C_u(A_n)C_v(A_n)\big)^{2p}\Big]\le n^{2p}\big[(2p-1)!!\big]^2\bigg(1+\frac{p^{2h}(8h)^{4h}}{n^{1/3}}+\frac{(8p)^{3p}}{n^{(h+1)/3}}\bigg). \] \end{lemma} \begin{proof} \par Write $I=\{1,2,\dots,2p\}$ and let $T$ be the set containing all even tuples of \[ \big\{(x_i,x_i+u,y_i,y_i+v)_{i\in I}\,:\,x_i,y_i\in\mathbb{Z},\,0\le x_i<n-u,\,0\le y_i<n-v\big\}. \] Writing $A_n=(a_0,a_1,\dots,a_{n-1})$, we have \begin{align} \lefteqn{\E\Big[\big(C_u(A_n)C_v(A_n)\big)^{2p}\Big]} \nonumber\\ &=\E\Bigg[\bigg(\sum_{i=0}^{n-u-1}a_ia_{i+u}\bigg)^{2p}\bigg(\sum_{j=0}^{n-v-1}a_ja_{j+v}\bigg)^{2p}\Bigg] \nonumber\\ &=\sum_{i_1,\dots,i_{2p}=0}^{n-u-1}\;\sum_{j_1,\dots,j_{2p}=0}^{n-v-1}\E\big[a_{i_1}a_{i_1+u}\cdots a_{i_{2p}}a_{i_{2p}+u}a_{j_1}a_{j_1+v}\cdots a_{j_{2p}}a_{j_{2p}+v}\big] \nonumber\\[1ex] &=\abs{T} \label{eqn:moment_T} \end{align} since $a_0,a_1,\dots,a_{n-1}$ are mutually independent, $\E[a_j]=0$, and $a_j^2=1$ for all $j\in\{0,1,\dots,n-1\}$. % We define the following subsets of $T$. \begin{enumerate} \item $T_1$ contains all elements $(x_i,x_i+u,y_i,y_i+v)_{i\in I}$ of $T$ such that $(x_i)_{i\in I}$ and $(y_i)_{i\in I}$ are even. 
\item $T_2$ contains all elements $(x_i,x_i+u,y_i,y_i+v)_{i\in I}$ of $T$ such that $(x_i)_{i\in I}$ or $(y_i)_{i\in I}$ is not even and $(x_i)_{i\in J}$ and $(y_i)_{i\in K}$ are even for some $(2p-2h)$-element subsets $J$ and $K$ of $I$. \item $T_3$ contains all elements $(x_i,x_i+u,y_i,y_i+v)_{i\in I}$ of $T$ such that either $(x_i)_{i\in J}$ is not even for all $(2p-2h)$-element subsets $J$ of $I$ or $(y_i)_{i\in K}$ is not even for all $(2p-2h)$-element subsets $K$ of $I$. \end{enumerate} It is immediate that $T_1$, $T_2$, and $T_3$ partition $T$, so that \begin{equation} \abs{T}=\abs{T_1}+\abs{T_2}+\abs{T_3}. \label{eqn:T1T2T3} \end{equation} We now bound the cardinalities of $T_1$, $T_2$, and $T_3$. \par {\itshape The set $T_1$.} Using Lemma~\ref{lem:even_tuple_single}, we have the crude estimate \begin{equation} \abs{T_1}\le \big[(2p-1)!!\big]^2\,n^{2p}. \label{eqn:T1} \end{equation} \par {\itshape The set $T_2$.} Let $(x_i,x_i+u,y_i,y_i+v)_{i\in I}$ be an element of $T_2$. Then there exist $(2p-2h)$-element subsets $J$ and $K$ of $I$ such that $(x_i)_{i\in J}$ and $(y_i)_{i\in K}$ are even and \begin{equation} (x_i)_{i\in I\setminus J}\quad\mbox{or}\quad(y_i)_{i\in I\setminus K} \label{eqn:tuple_2h_lr} \end{equation} is not even. Since $(x_i)_{i\in J}$ and $(y_i)_{i\in K}$ are even, $(x_i,x_i+u,y_j,y_j+v)_{i\in J,\,j\in K}$ is even. Since $(x_i,x_i+u,y_i,y_i+v)_{i\in I}$ is also even, it follows that \begin{equation} (x_i,x_i+u,y_j,y_j+v)_{i\in I\setminus J,\,j\in I\setminus K} \label{eqn:tuple_2h} \end{equation} is even as well. There are ${2p\choose 2h}$ subsets $J$ and ${2p\choose 2h}$ subsets $K$. By Lemma~\ref{lem:even_tuple_single}, for each such $J$ and $K$, there are at most $(2p-2h-1)!!\,n^{p-h}$ even tuples $(x_i)_{i\in J}$ satisfying $0\le x_i<n$ for each $i\in J$ and at most $(2p-2h-1)!!\,n^{p-h}$ even tuples $(y_i)_{i\in K}$ satisfying $0\le y_i<n$ for each $i\in K$. 
By Lemma~\ref{lem:even_tuple_double} applied with $t=0$ and by interchanging $u$ and $v$ and $(x_i)_{i\in I\setminus J}$ and $(y_i)_{i\in I\setminus K}$ if necessary, the number of even tuples in $\{0,1,\dots,n-1\}^{8h}$ of the form~\eqref{eqn:tuple_2h} such that one of the tuples in~\eqref{eqn:tuple_2h_lr} is not even is at most $(8h-1)!!\,n^{2h-1/3}$. Therefore, \begin{align} \abs{T_2}&\le n^{2p-1/3}\bigg[{2p\choose 2h}(2p-2h-1)!!\bigg]^2\,(8h-1)!! \nonumber\\ &\le n^{2p-1/3}\big[(2p-1)!!\big]^2\,p^{2h}(8h-1)!!. \label{eqn:T2} \end{align} \par {\itshape The set $T_3$.} By Lemma~\ref{lem:even_tuple_double} applied with $t=h$ and by interchanging $u$ and $v$ and $(x_i)_{i\in I}$ and $(y_i)_{i\in I}$ if necessary, \begin{equation} \abs{T_3}\le(8p-1)!!\,n^{2p-(h+1)/3}. \label{eqn:T3} \end{equation} \par Now the lemma follows by combining~\eqref{eqn:moment_T},~\eqref{eqn:T1T2T3},~\eqref{eqn:T1},~\eqref{eqn:T2}, and~\eqref{eqn:T3} and noting that $(8h-1)!!\le (8h)^{4h}$ and $(8p-1)!!/[(2p-1)!!]^2\le (8p)^{3p}$. \end{proof} \par Lemma~\ref{lem:moments} is now used to prove the desired upper bound for~\eqref{eqn:ub}. \begin{proposition} \label{pro:Pr_CuCv} Let $A_n$ be a random binary sequence of length $n$ and write $\lambda_n=\sqrt{2n\log n}$. Then, for $0<u<v<n$ and all sufficiently large $n$, \[ \Pr\big[\abs{C_u(A_n)}\ge \lambda_n\,\cap\,\abs{C_v(A_n)}\ge \lambda_n\big]\le \frac{23}{n^2}. \] \end{proposition} \begin{proof} Let $(X_1,X_2)$ be a random variable taking values in $\mathbb{R}\times\mathbb{R}$ and let $p$ be a positive integer. Then by Markov's inequality, for $\theta_1,\theta_2>0$, \[ \Pr\big[\abs{X_1}\ge\theta_1\,\cap\,\abs{X_2}\ge\theta_2\big]\le\frac{\E\big[(X_1X_2)^{2p}\big]}{(\theta_1\theta_2)^{2p}}. \] Let $h$ be an arbitrary integer satisfying $0\le h<p$. 
Application of Lemma~\ref{lem:moments} gives \begin{equation} \Pr\big[\abs{C_u(A_n)}\ge\lambda_n\,\cap\,\abs{C_v(A_n)}\ge\lambda_n\big]\le\frac{\big[(2p-1)!!\big]^2}{(2\log n)^{2p}}\big[1+K_1(n,p,h)+K_2(n,p,h)\big], \label{Pr_moment} \end{equation} where \begin{align*} K_1(n,p,h)=n^{-1/3}\,p^{2h}\,(8h)^{4h}\quad\mbox{and}\quad K_2(n,p,h)=n^{-(h+1)/3}\,(8p)^{3p}. \end{align*} We apply~\eqref{Pr_moment} with $p=\lfloor\log n\rfloor$ and $h=\lfloor 14\log\log n\rfloor$, so that for all sufficiently large $n$ we have $h<p$, as assumed. Note that $K_1$ is increasing in both $p$ and $h$, while $K_2$ is increasing in $p$ and decreasing in $h$; we may therefore bound them using $p\le\log n$, $h\le 14\log\log n$, and $h\ge 13\log\log n$, the last inequality holding for all sufficiently large $n$. By Stirling's approximation \[ \sqrt{2\pi k}\;k^ke^{-k}\le k!\le\sqrt{3\pi k}\;k^ke^{-k}, \] we have \[ \frac{\big[(2p-1)!!\big]^2}{(2\log n)^{2p}}\le \frac{3p^{2p}e^{-2p}}{(\log n)^{2p}}\le\frac{3e^2}{n^2}. \] We also have \begin{align*} K_1(n,p,h)&\le K_1(n,\log n,14\log\log n)\\ &=n^{-\frac{1}{3}}\,n^{\frac{28(\log\log n)^2}{\log n}}n^{\frac{56\log\log n(\log 112+\log\log\log n)}{\log n}}\\ &=O(n^{-1/4})\quad\mbox{as $n\to\infty$} \intertext{and} K_2(n,p,h)&\le K_2(n,\log n,13\log\log n)\\ &=n^{-\frac{1}{3}+3\log 8-\frac{4}{3}\log\log n}\\ &=O(n^{-\log\log n})\quad\mbox{as $n\to\infty$}. \end{align*} Substitute into~\eqref{Pr_moment} to obtain the claimed result. \end{proof} \section{Proof of Main Theorem} We require the following result, which is a consequence of Azuma's inequality for martingales. \begin{lemma}[{McDiarmid~\cite{McDiarmid1989}}] \label{lem:bounded_differences} Let $X_0,X_1,\dots,X_{n-1}$ be mutually independent random variables taking values in a set $S$. Let $f:S^n\to\mathbb{R}$ be a measurable function and suppose that $f$ satisfies \[ \bigabs{f(x)-f(y)}\le c \] whenever $x$ and $y$ differ only in one coordinate. Define the random variable $Y=f(X_0,X_1,\dots,X_{n-1})$. Then, for $\theta\ge 0$, \[ \Pr\big[\bigabs{Y-\E[Y]}\ge \theta\big]\le 2e^{-\frac{2\theta^2}{c^2n}}. 
\] \end{lemma} \par Given a random binary sequence $A_n=(a_0,a_1,\dots,a_{n-1})$ of length~$n$, we will apply Lemma~\ref{lem:bounded_differences} with $X_j=a_j$ for $j\in\{0,1,\dots,n-1\}$ and \[ f(x_0,x_1,\dots,x_{n-1})=\max_{0<u<n}\Biggabs{\sum_{j=0}^{n-u-1}x_jx_{j+u}}, \] so that $M(A_n)=f(a_0,a_1,\dots,a_{n-1})$. Changing a single coordinate $x_j$ changes each of the inner sums by at most $4$ (since $x_j$ occurs in at most two of the products in each sum), and therefore changes $f$ by at most $4$. We can thus take $c=4$ in Lemma~\ref{lem:bounded_differences} and obtain the following corollary. \begin{corollary} \label{cor:Inequality_MA} Let $A_n$ be a random binary sequence of length $n$. Then, for $\theta\ge 0$, \[ \Pr\big[\bigabs{M(A_n)-\E[M(A_n)]}\ge \theta\big]\le 2e^{-\frac{\theta^2}{8n}}. \] \end{corollary} \par We now prove the second part of Theorem~\ref{thm:main}. \begin{theorem} \label{thm:M_mean} Let $A_n$ be a random binary sequence of length $n$. Then, as $n\to\infty$, \[ \frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}\to \sqrt{2}. \] \end{theorem} \begin{proof} By the union bound we have, for all $\epsilon>0$, \begin{multline*} \Pr\bigg[\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}-\sqrt{2}>\epsilon\bigg]\\ \le \Pr\bigg[\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}-\frac{M(A_n)}{\sqrt{n\log n}}>\tfrac{1}{2}\epsilon\bigg]+\Pr\bigg[\frac{M(A_n)}{\sqrt{n\log n}}-\sqrt{2}>\tfrac{1}{2}\epsilon\bigg]. \end{multline*} By Corollary~\ref{cor:Inequality_MA} and the upper bound of~\eqref{eqn:lbup_known}, the two terms on the right-hand side tend to zero as $n\to\infty$, hence \begin{equation} \limsup_{n\to\infty}\;\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}\le\sqrt{2}. \label{eqn:limsup_E} \end{equation} Let $\delta>0$ and define the set \begin{equation} N(\delta)=\bigg\{n>1:\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}<\sqrt{2}-\delta\bigg\}. \label{eqn:def_N_delta} \end{equation} We claim that the size of $N(\delta)$ is finite for all choices of $\delta$, which together with~\eqref{eqn:limsup_E} will prove the theorem. The proof of the claim is based on an idea developed in~\cite{Alon2010}. Let $n>2$ and write \[ W=\Big\{u\in\mathbb{Z}:1\le u\le\frac{n}{\log n}\Big\} \] and $\lambda_n=\sqrt{2n\log n}$. Then \begin{multline*} \Pr\big[M(A_n)\ge\lambda_n\big]\ge\Pr\big[\max_{u\in W}\;\abs{C_u(A_n)}\ge\lambda_n\big]\\ \ge \sum_{u\in W}\Pr\big[\abs{C_u(A_n)}\ge\lambda_n\big]-\sum_{\stack{u,v\in W}{u<v}}\Pr\big[\abs{C_u(A_n)}\ge\lambda_n\,\cap\,\abs{C_v(A_n)}\ge\lambda_n\big] \end{multline*} by the Bonferroni inequality. By Propositions~\ref{pro:Pr_Cu} and~\ref{pro:Pr_CuCv}, \begin{align} \Pr\big[M(A_n)\ge\lambda_n\big]&\ge\abs{W}\cdot\frac{1}{5n(\log n)^{\frac{1}{2}}}-\frac{\abs{W}^2}{2}\cdot\frac{23}{n^2} \nonumber\\ &\ge\frac{1}{8(\log n)^{\frac{3}{2}}}-\frac{12}{(\log n)^2} \nonumber\\ &\ge \frac{1}{10(\log n)^{\frac{3}{2}}} \label{eqn:PrMA_lb} \end{align} for all sufficiently large $n$, using $\frac{2}{3}\frac{n}{\log n}\le\abs{W}\le\frac{n}{\log n}$ for $n>2$. Now, by the definition~\eqref{eqn:def_N_delta} of $N(\delta)$, for all $n\in N(\delta)$ we have $\lambda_n>\E[M(A_n)]$, so that we can apply Corollary~\ref{cor:Inequality_MA} with $\theta=\lambda_n-\E[M(A_n)]$ to give, for all $n\in N(\delta)$, \[ \Pr\big[M(A_n)\ge\lambda_n\big]\le 2e^{-\frac{1}{8n}(\lambda_n-\E[M(A_n)])^2}. \] Comparison with~\eqref{eqn:PrMA_lb} yields, for all sufficiently large $n\in N(\delta)$, \[ \frac{1}{10(\log n)^{\frac{3}{2}}}\le 2e^{-\frac{1}{8n}(\lambda_n-\E[M(A_n)])^2}, \] which implies \[ \frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}\ge \sqrt{2}-\sqrt{\frac{12\log\log n+8\log 20}{\log n}}. \] From the definition~\eqref{eqn:def_N_delta} of $N(\delta)$ it then follows that $N(\delta)$ has finite size for all $\delta>0$, as required. \end{proof} \par Using Corollary~\ref{cor:Inequality_MA}, it is now straightforward to prove the first part of Theorem~\ref{thm:main}. \begin{corollary} Let $A_n$ be a random binary sequence of length $n$. Then, as $n\to\infty$, \[ \frac{M(A_n)}{\sqrt{n\log n}}\to\sqrt{2}\quad\mbox{in probability}. 
\] \end{corollary} \begin{proof} By the triangle inequality, \[ \biggabs{\frac{M(A_n)}{\sqrt{n\log n}}-\sqrt{2}}\le \biggabs{\frac{M(A_n)}{\sqrt{n\log n}}-\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}}+\biggabs{\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}-\sqrt{2}}. \] Given $\epsilon>0$, we then have by the union bound \begin{multline*} \Pr\Bigg[\biggabs{\frac{M(A_n)}{\sqrt{n\log n}}-\sqrt{2}}>\epsilon\Bigg]\\ \le \Pr\Bigg[\biggabs{\frac{M(A_n)}{\sqrt{n\log n}}-\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}}>\tfrac{1}{2}\epsilon\Bigg]+\Pr\Bigg[\biggabs{\frac{\E\big[M(A_n)\big]}{\sqrt{n\log n}}-\sqrt{2}}>\tfrac{1}{2}\epsilon\Bigg]. \end{multline*} By Corollary~\ref{cor:Inequality_MA} and Theorem~\ref{thm:M_mean}, the two terms on the right-hand side tend to zero as $n\to\infty$, which proves the corollary. \end{proof} \section*{Acknowledgement} I would like to thank Jonathan Jedwab for many valuable discussions and Jonathan Jedwab and Daniel J.\ Katz for their careful comments on this paper.
https://arxiv.org/abs/0911.0563
Judicious partitions of 3-uniform hypergraphs
The vertices of any graph with $m$ edges can be partitioned into two parts so that each part meets at least $\frac{2m}{3}$ edges. Bollobás and Thomason conjectured that the vertices of any $r$-uniform graph may be likewise partitioned into $r$ classes such that each part meets at least $cm$ edges, with $c=\frac{r}{2r-1}$. In this paper, we prove this conjecture for the case $r=3$. In the course of the proof we shall also prove an extension of the graph case which was conjectured by Bollobás and Scott.
\section{Introduction} Given a graph $G$, it is easy to find a bipartition $V(G)=V_1\cup V_2$ such that at least half of the edges in $G$ join $V_1$ to $V_2$. It is only slightly less trivial to find a bipartition $V_1\cup V_2$ such that each of $V_1$ and $V_2$ meets at least $2/3$ of the edges; equivalently, each class in the bipartition contains at most $1/3$ of the edges (see, e.g., \cite{MGT}). (In fact, as $e(G) \to \infty$, it is shown in \cite{BS99} that there is a bipartition in which each class contains not much more than $1/4$ of the edges, which is trivially best possible by considering complete graphs.) These two ways of formulating the problem are equivalent for partitions into two parts, but give rise to different generalisations for more parts. In this paper, we shall be concerned only with the problem of meeting many edges; the problem of spanning few edges is addressed in \cite{BS99} for the graph case and \cite{BS97} for the hypergraph case. A particularly interesting case occurs when we partition the vertices of an $r$-uniform hypergraph into $r$ classes, so that an edge may meet every class. Bollob\'as and Thomason (see \cite{BRT93}, \cite{BS00}) conjectured that every $r$-uniform hypergraph with $m$ edges has an $r$-partition in which each class meets at least $\frac{r}{2r-1}m$ edges. In \cite{BS00}, Bollob\'as and Scott prove a bound of $0.27 m$; for $r=3$ they claim the better bound $(5m-1)/9$, but there is a gap in their proof. In this paper we prove the Bollob\'as--Thomason conjecture in the case $r=3$; we also prove a conjecture of Bollob\'as and Scott \cite{BS00}. \section{Good partitions}\label{secmain} Suppose we are given a 3-uniform hypergraph $G$ on vertex set $V$ with $m$ edges. For subsets $A, B, C$ of $V$, write $d(A)$ for the number of edges of $G$ meeting $A$, and $e(A,B,C)$ for the number of (distinct) edges of the form $\{a,b,c\}$ with $a\in A$, $b\in B$ and $c\in C$. 
Also, define the {\em degree of $(A,B,C)$} as $d(A,B,C)=d(A)+d(B)+d(C)$. Much of the time our triple $(A,B,C)$ will be a partition of $V$. We shall sometimes abuse this notation by writing $a$ for $\{a\}$. Also, we write $d_2(A)$ for the number of edges meeting $A$ in at least 2 vertices. As a shorthand, we call a partition of the vertex set $V$ a {\em partition of the graph $G$}. For $0<\varepsilon < 2/3$, we call a set of vertices {\em $\varepsilon$-good} if it meets at least $\left(\frac{2}{3}-\varepsilon\right)m$ of the edges; otherwise we call it {\em $\varepsilon$-bad}. As usual, we say that a set is {\em minimal $\varepsilon$-good} if it is $\varepsilon$-good and every proper subset of it is $\varepsilon$-bad. We shall deduce our main theorem from the following somewhat technical result. \begin{theorem}\label{engine} Let $\varepsilon\ge\frac{1}{15}$, and let $G$ be a 3-uniform hypergraph which cannot be partitioned into three $\varepsilon$-good sets. Then there is a partition $V=A\cup B\cup C$ such that $A$ and $B$ are minimal $\varepsilon$-good sets and \begin{equation} d(A, B, C)>(2+3\varepsilon)m. \label{engines} \end{equation} \end{theorem} Our proof of Theorem~\ref{engine} is based on two lemmas. In order to reduce the clutter, we call a partition $V=A\cup B\cup C$ {\em optimal} if its degree $d(A, B, C)$ is as large as possible, {\em locally optimal} if this degree cannot be increased by moving a single vertex from one class to another, and {\em semi-optimal} if it cannot be increased by moving a vertex into $C$. Note that semi-optimality depends on the order of the sets in our partition; we shall always take the last set, $C$, to be the exceptional set. Trivially, every optimal partition is locally optimal and every locally optimal partition is semi-optimal; however, the degree of a locally optimal partition can be rather small. 
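The bookkeeping in these definitions can be checked mechanically. The following Python sketch (an illustration of ours, not from the paper; the toy hypergraph is hypothetical) computes $d$ for a partition of a small 3-uniform hypergraph and verifies the identity $d(A)+d(B)+d(C)=2m+e(A,B,C)-\big(e(A,A,A)+e(B,B,B)+e(C,C,C)\big)$, which follows by counting, for each edge, the number of parts it meets:

```python
# Toy 3-uniform hypergraph (hypothetical example): vertices 0..8 form a
# 3x3 grid; the six edges are the three rows and the three columns.
edges = [frozenset({0, 1, 2}), frozenset({3, 4, 5}), frozenset({6, 7, 8}),
         frozenset({0, 3, 6}), frozenset({1, 4, 7}), frozenset({2, 5, 8})]
A, B, C = {0, 1, 2}, {3, 4, 5}, {6, 7, 8}  # partition into the three rows
m = len(edges)

def d(S):
    # d(S): number of edges meeting the vertex set S
    return sum(1 for e in edges if e & S)

# edges lying entirely inside one part, and edges meeting all three parts
inside = sum(1 for e in edges if e <= A or e <= B or e <= C)
across = sum(1 for e in edges if e & A and e & B and e & C)

# Each edge contributes to d(A)+d(B)+d(C) once per part it meets (1, 2 or 3),
# which yields the degree identity:
assert d(A) + d(B) + d(C) == 2 * m + across - inside
print(d(A), d(B), d(C))  # 4 4 4: each row meets its own row edge and all columns
```

For this partition $d(A,B,C)=2m$, the smallest value a locally optimal partition can attain.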
A simple random argument shows that there is a partition with $d(A, B, C)\ge \frac{19m}{9}$; however, if $V=\{v_{ij}: 1\le i,j \le 3\}$, the edges are $\{v_{i1},v_{i2},v_{i3}\}$ and $\{v_{1i},v_{2i},v_{3i}\}$ for each $i$, then for $V_i=\{v_{i1},v_{i2},v_{i3}\}$ the partition $V=V_1\cup V_2 \cup V_3$ is locally optimal and $d(V_1, V_2, V_3)$ is only $2m$. As we shall see, however, this example is in some sense typical; locally optimal partitions for which $d(A, B, C)$ is close to $2m$ have $d(A),d(B),d(C)$ roughly equal. \begin{lemma}\label{prep} Let $V=A\cup B\cup C$ be a semi-optimal partition. Then \begin{equation}\label{acond} 3e(A,A,A)+2e(A,A,B)\le e(A,B,C)+e(A,C,C), \end{equation} and, similarly, \begin{equation}\label{bcond} 3e(B,B,B)+2e(A,B,B)\le e(A,B,C)+e(B,C,C). \end{equation} \end{lemma} \begin{proof} Clearly, it suffices to prove \eqref{acond}. Pick a vertex $a\in A$, and let us see how the degree $d(A,B,C)$ changes if we move $a$ from $A$ to $C$. Letting $A'=A\setminus\{a\}$, $C'=C\cup\{a\}$, the difference $d(A)-d(A')$ is the number of edges meeting $A$ only in $a$, i.e. \begin{equation*} d(A)-d(A')=e(a,B,B)+e(a,B,C)+e(a,C,C). \end{equation*} Similarly, $d(C')-d(C)$ is the number of edges which contain $a$ but are disjoint from $C$: \begin{equation*} d(C')-d(C)=e(a,A,A)+e(a,A,B)+e(a,B,B). \end{equation*} The difference of these identities gives us \begin{equation*} d(A, B, C)+e(a,A,A)+e(a,A,B)=d(A', B, C')+e(a,B,C)+e(a,C,C). \end{equation*} As $(A,B,C)$ is a semi-optimal partition, \begin{equation*} d(A, B, C)\ge d(A', B, C'), \end{equation*} and so we find that \begin{equation}\label{extra} e(a,A,A)+e(a,A,B)\le e(a,B,C)+e(a,C,C). \end{equation} Since this holds for every $a\in A$, \begin{equation*} \sum_{a\in A}e(a,A,A)+\sum_{a\in A}e(a,A,B) \le \sum_{a\in A}e(a,B,C)+\sum_{a\in A}e(a,C,C), \end{equation*} and so \begin{equation*} 3e(A,A,A)+2e(A,A,B) \le e(A,B,C)+e(A,C,C). \end{equation*} as required. 
\end{proof} Lemma~\ref{prep} tells us that, {\em a fortiori}, every optimal partition $(A, B, C)$ satisfies \eqref{acond} and \eqref{bcond}. Next, we show that the semi-optimality of a partition $(A, B, C)$ is preserved if we move vertices into $C$. \begin{lemma}\label{reduce} Let $(A,B,C)$ be a semi-optimal partition, $(A', B', C')$ be a partition with $A'\subseteq A$, $B'\subseteq B$ and $C' \supseteq C$. Then $(A', B', C')$ is also semi-optimal. \end{lemma} \begin{proof} It suffices to show that if $a\in A'$, $A''=A'\setminus \{a\}$ and $C''=C'\cup \{a\}$ then $(A'',B',C'')$ is semi-optimal. Clearly, \begin{equation*} d(A')-d(A'')=e(a,B',B')+e(a,B',C')+e(a,C',C') \end{equation*} and \begin{equation*} d(C'')-d(C')=e(a,A',A')+e(a,A',B')+e(a,B',B'). \end{equation*} Taking the difference of these identities, we find that \begin{eqnarray} d(A', B', C') + && \hspace{-15pt} e(a,A',A')\ + e(a,A',B') \nonumber \\ && \hspace{-27pt} = \ d(A'', B', C'')+e(a,B',C')+e(a,C',C'). \label{splodge} \end{eqnarray} Since $(A,B,C)$ is semi-optimal and $a\in A'\subseteq A$, inequality \eqref{extra} holds. Consequently, as $A'\subseteq A, B'\subseteq B, B'\cup C'\supseteq B\cup C$, we find that \begin{eqnarray*} e(a,A',A')+e(a,A',B')&\le& e(a,A,A)+e(a,A,B)\\ &\le& e(a,B,C)+e(a,C,C) \\ &=& e(a,B\cup C,B\cup C)-e(a,B,B) \\ &\le& e(a,B'\cup C',B'\cup C')-e(a,B',B') \\ &=& e(a,B',C')+e(a,C',C'). \end{eqnarray*} Together with (\ref{splodge}), this gives \begin{equation*} d(A', B', C') \ge d(A'', B', C''), \end{equation*} as required. \end{proof} After this preparation, we are ready to prove Theorem 1. \begin{proof} Let $V=A\cup B\cup C$ be an optimal partition; renaming the parts, if necessary, we may assume that $d(C)\le d(A), d(B)$. Since, by assumption, at least one of these classes is $\varepsilon$-bad, we must have \begin{equation} d(C)<\left(\frac{2}{3}-\varepsilon\right)m. 
\label{csmall} \end{equation} Since the partition $(A, B, C)$ is semi-optimal (in fact, optimal), inequalities \eqref{acond} and \eqref{bcond} hold. We claim that \eqref{csmall}, \eqref{acond} and \eqref{bcond} together imply \eqref{engines}. To prove this claim, add \eqref{acond} and \eqref{bcond}, and then add $e(C,C,C)$ to both sides to obtain \begin{eqnarray} 2\big(e(A,A,A)+ \hspace{-18pt} && e(B,B,B)+e(A,A,B)+e(A,B,B)\big) \nonumber \\ &&\hspace{-10pt}+e(A,A,A)+e(B,B,B)+e(C,C,C) \nonumber \\ &&\hspace{-10pt}\le 2e(A,B,C)+e(A,C,C)+e(B,C,C)+e(C,C,C). \label{long} \end{eqnarray} Note that \begin{equation} e(A,A,A)+e(B,B,B)+e(A,A,B)+e(A,B,B)=m-d(C) \label{ida} \end{equation} and \begin{equation} e(A,B,C)+e(A,C,C)+e(B,C,C)+e(C,C,C)\le d(C). \label{idb} \end{equation} Substituting (\ref{ida}) and (\ref{idb}) into (\ref{long}) gives \begin{equation*} 2\big(m-d(C)\big)+e(A,A,A)+e(B,B,B)+e(C,C,C) \le e(A,B,C)+d(C). \end{equation*} Recalling (\ref{csmall}), we see that \begin{equation} e(A,B,C)-\left(e(A,A,A)+e(B,B,B)+e(C,C,C)\right) > 3\varepsilon m. \label{almost} \end{equation} We may regard $d(A, B, C)$ as the sum over the {\em edges} of the number of parts meeting that edge; thus, since $e(A,B,C)$ edges meet three parts, $e(A,A,A)+e(B,B,B)+e(C,C,C)$ meet one part and the others meet two, we have \begin{equation*} d(A)+d(B)+d(C) = 2m+e(A,B,C)-\big(e(A,A,A)+e(B,B,B)+e(C,C,C)\big). \end{equation*} Together with (\ref{almost}), this gives us \eqref{engines}: \begin{equation*} d(A, B, C)>(2+3\varepsilon)m, \end{equation*} proving our claim. We shall now find a new partition which satisfies the other requirements of Theorem~\ref{engine}. Notice that \eqref{engines} and \eqref{csmall} give $d(A)+d(B)>\left(\frac{4}{3}+4\varepsilon\right)m$. Since $d(A), d(B)\le m$, we must have $d(A), d(B)>\left(\frac{1}{3}+4\varepsilon\right)m \ge\left(\frac{2}{3}-\varepsilon\right)m$ since $\varepsilon\ge\frac{1}{15}$. 
Let $A'\subseteq A$ be a minimal subset with $d(A')\ge\left(\frac{2}{3}-\varepsilon\right)m$, and let $B'$ be defined similarly. Let $C'=C\cup(A\setminus A')\cup(B\setminus B')$. By Lemma \ref{reduce}, the partition $(A', B', C')$ is still semi-optimal, so Lemma \ref{prep} gives \begin{equation*} 3e(A',A',A')+2e(A',A',B')\le e(A',B',C')+e(A',C',C') \end{equation*} and \begin{equation*} 3e(B',B',B')+2e(A',B',B')\le e(A',B',C')+e(B',C',C'). \end{equation*} Also, by the assumption that no partition into $\varepsilon$-good parts exists, \begin{displaymath} d(C')<\left(\frac{2}{3}-\varepsilon\right)m, \end{displaymath} and, as above, we still have \begin{displaymath} d(A')+d(B')+d(C')>(2+3\varepsilon)m. \end{displaymath} Since, by construction, $A'$ and $B'$ are minimal $\varepsilon$-good sets, the partition $(A',B',C')$ is of the form required. \end{proof} Note that we use the fact that $\varepsilon\ge\frac{1}{15}$ only to deduce that any locally optimal partition has at least two $\varepsilon$-good parts; the value of $\frac{1}{15}$ is in fact tight. To see this, consider the hypergraph with vertex set $\{a,b,c,d,e,f,g\}$ and edges $abc, def, adg, beg, cfg$. A locally optimal partition is $A=\{a,d,g\}, B=\{b,e\}, C=\{c,f\}$, but $d(B)=d(C)=3=\left(\frac{2}{3}-\frac{1}{15}\right)m$. We can immediately deduce a partial result from Theorem \ref{engine}. \begin{corollary}\label{fivenine} There exists a partition with each part meeting at least $\frac{5m}{9}$ edges. \end{corollary} \begin{proof} Suppose not. Let $(A,B,C)$ be the partition guaranteed by Theorem \ref{engine} for $\varepsilon=\frac{1}{9}$. Since every edge not meeting $C$ meets either $A$ or $B$ in at least 2 vertices, we have \begin{displaymath} d(A)+d(B)+d_2(A)+d_2(B)>\left(\frac{4}{3}+4\varepsilon\right)m +m-d(C)>\frac{20m}{9}, \end{displaymath} where, as before, $d_2(A)$ denotes the number of edges meeting $A$ in at least 2 vertices. We may assume that $d(A)+d_2(A)>\frac{10m}{9}$. 
Write $d(A)=\frac{5m}{9}+\alpha$ so that $d_2(A)>\frac{5m}{9}-\alpha$. Trivially, $d_2(A)>0$, so $A$ must contain at least 2 vertices. Pick $a\in A$; since, by Theorem \ref{engine}, $A$ is a minimal $\varepsilon$-good set, $d(A\setminus\{a\})<\frac{5m}{9}$, so there are at least $\alpha$ edges meeting $A$ only in $a$. Pick a different vertex $a'\in A$; if $e$ is an edge which either meets $A$ in at least two vertices or meets $A$ only in $a$, the edge $e$ meets $A\setminus\{a'\}$, and so \begin{displaymath} d(A\setminus\{a'\})\ge\alpha+d_2(A)>\frac{5m}{9}, \end{displaymath} contradicting the minimality of $A$. \end{proof} \section{Multigraphs with special vertices} To go further we shall be more careful with our estimates. To this end, we shall prove a judicious partitioning result about (multi-)graphs with some ``special'' vertices. We may think of these as (multi-)hypergraphs with edges of size at most 2. However, we use the formulation of special vertices to highlight the vital point that we permit repeated edges of size 2, but do not permit repeated edges of size 1. This result, as well as more general results in a similar vein, was conjectured in \cite{BS02a}. For such a multigraph $G=(V,E,S)$ on vertex set $V$ with special vertices $S \subseteq V$, and sets $W_1,W_2 \subseteq V$ of vertices, we write $f(W_i)$ for the number of special vertices in $W_i$. Also, as usual, $e(W_1)$ denotes the number of edges spanned by $W_1$ and $e(W_1,W_2)$ the number of edges $xy$ with $x\in W_1$, $y\in W_2$. \begin{theorem}\label{multi} Let $G=(V,E,S)$ be a multigraph with $m$ edges and $k$ special vertices. Then there is a partition $V=V_1\cup V_2$ such that, for $i=1,2$: \begin{equation} e(V_i)+f(V_i)\le \frac{m}{3}+\frac{k+1}{2}. \label{spec} \end{equation} \end{theorem} \begin{proof} Again, we call a partition {\em optimal} if it minimizes $e(V_1)+e(V_2)$, and {\em locally optimal} if this sum cannot be decreased by moving a single vertex from one class to the other.
We first note that if $V_1, V_2$ is locally optimal then $e(V_i)\le\frac{m}{3}$: indeed, for each vertex $v\in V_1$ we must have \begin{displaymath} e(v,V_1)\le e(v,V_2), \end{displaymath} and summing this over all $v\in V_1$ gives \begin{equation} 2e(V_1)\le e(V_1,V_2). \label{that} \end{equation} Observing that $m=e(V_1)+e(V_2)+e(V_1,V_2)$, \eqref{that} is equivalent to \begin{displaymath} 3e(V_1)+e(V_2)\le m. \end{displaymath} Suppose that no partition satisfying \eqref{spec} exists. Let us choose an optimal partition $V_1, V_2$ for which $|f(V_1)-f(V_2)|$ is as small as possible (among optimal partitions) and $f(V_1) \ge f(V_2)$. Since $V_1, V_2$ is optimal, $e(V_2)\le\frac{m}{3}$, and since $f(V_2)\le f(V_1)$, $f(V_2)\le \frac{k}{2}$; thus $V_2$ satisfies \eqref{spec}. By assumption, then, $V_1$ cannot also satisfy \eqref{spec}, so \begin{displaymath} e(V_1)+f(V_1) > \frac{m}{3}+\frac{k+1}{2}. \end{displaymath} Since the partition is optimal, $e(V_1)\le\frac{m}{3}$ and so we must have \begin{displaymath} f(V_1) > \frac{k+1}{2}, \end{displaymath} and since $f(V_1)+f(V_2)=k$, therefore \begin{equation} f(V_1) > f(V_2)+1. \label{gap} \end{equation} Let $v$ be any vertex in $V_1$; since $V_1,V_2$ is locally optimal we must have \begin{equation} e(v,V_1)\le e(v,V_2). \label{fg} \end{equation} If also $v$ is special, then by the choice of our partition we must have \begin{displaymath} e(v,V_1)\ne e(v,V_2), \end{displaymath} and hence \begin{equation} e(v,V_1)+1\le e(v,V_2), \label{fgh} \end{equation} since otherwise moving $v$ into $V_2$ gives another optimal partition $V'_1, V'_2$ with, using \eqref{gap}, \begin{eqnarray*} |f(V'_1)-f(V'_2)| &=& |f(V_1)-f(V_2)-2| \\ &<& |f(V_1)-f(V_2)|. \end{eqnarray*} Now we aim to show that we may move vertices across from $V_1$ to $V_2$ to get a partition satisfying \eqref{spec}. Take $W_2\supseteq V_2$ maximal such that \begin{displaymath} e(W_2)+f(W_2)\le \frac{m}{3}+\frac{k+1}{2}, \end{displaymath} and write $W_1=V\setminus W_2$.
Now if $w\in W_1$ is special then by \eqref{fgh} \begin{equation} e(w,W_1)+1\le e(w,V_1)+1\le e(w,V_2)\le e(w,W_2), \label{jk} \end{equation} and if $w\in W_1$ is not special \begin{equation} e(w,W_1)\le e(w,V_1)\le e(w,V_2)\le e(w,W_2). \label{jkl} \end{equation} Equivalently to \eqref{jk} and \eqref{jkl}, we may write for any $w\in W_1$ \begin{displaymath} e(w,W_1)+{\mathbf 1}_{w\in S}\le e(w,W_2), \end{displaymath} and sum over all $w\in W_1$ to give \begin{displaymath} 2e(W_1)+f(W_1) \le e(W_1,W_2). \end{displaymath} Adding $e(W_1)+2f(W_1)$ to both sides gives \begin{displaymath} 3e(W_1)+3f(W_1) \le e(W_1)+e(W_1,W_2)+2f(W_1), \end{displaymath} and since $m=e(W_1)+e(W_2)+e(W_1,W_2)$, we have \begin{displaymath} e(W_1)+f(W_1) \le \frac{1}{3}(m+2f(W_1)-e(W_2)). \end{displaymath} If $e(W_1)+f(W_1)\le \frac{m}{3}+\frac{k+1}{2}$ we are done. If not, then since $f(W_1)\le k$ and $e(W_2)\ge0$ we certainly have $\frac{2k}{3} > \frac{k+1}{2}$, i.e. $k>3$, and since $e(W_1)\le e(V_1)\le \frac{m}{3}$, we must have $f(W_1)>\frac{k+1}{2}>2$. Also \begin{displaymath} \frac{m}{3}+\frac{k+1}{2} < e(W_1)+f(W_1) \le\frac{1}{3}(m+2f(W_1)-e(W_2)), \end{displaymath} and, since $f(W_1)+f(W_2)=k$, \begin{displaymath} \frac{f(W_1)+f(W_2)+1}{2} < \frac{2f(W_1)-e(W_2)}{3}, \end{displaymath} {\em i.e.} \begin{displaymath} 2e(W_2)+3f(W_2)+3 < f(W_1), \end{displaymath} and since $f(W_2)\ge 0$, \begin{eqnarray*} e(W_2)+f(W_2) &<& \frac{f(W_1)-3}{2} \\ &\le& \frac{k+1}{2}-2. \end{eqnarray*} Recall that $W_2$ is a maximal set satisfying \begin{displaymath} e(W_2)+f(W_2)\le \frac{m}{3}+\frac{k+1}{2}. \end{displaymath} If $w\in W_1$, \begin{eqnarray*} e(W_2\cup\{w\})+f(W_2\cup\{w\})&\le& e(W_2)+e(w,W_2)+f(W_2)+1 \\ &<&\frac{k+1}{2}+e(w,W_2). \end{eqnarray*} But $e(W_2\cup\{w\})+f(W_2\cup\{w\}) >\frac{m}{3}+\frac{k+1}{2}$ by the maximality of $W_2$, so $e(w,W_2)>\frac{m}{3}$ for each $w\in W_1$, which is impossible, since then $e(W_1,W_2)>m$ (as $|W_1| \ge f(W_1) \ge 3$).
\end{proof} Theorem \ref{multi} tells us that if $G$ is a (multi-)hypergraph with $k$ distinct edges of size 1 and $m$ (not necessarily distinct) edges of size at least 2, we may find a partition of $G$ into two parts $V_1,V_2$ which satisfies \begin{displaymath} d(V_i)\ge \frac{2m}{3}+\frac{k-1}{2}. \end{displaymath} We may see this by first replacing each edge of size greater than 2 with a subedge of size 2. This is a strengthening of the result proved in \cite{BS00} that we can achieve \begin{displaymath} d(V_i)\ge \frac{2m}{3}+\frac{k-1}{3}. \end{displaymath} Since we may need to apply Theorem \ref{multi} to a hypergraph with repeated edges, it is vital to ensure there are no repeated edges of size 1. As usual, we define the {\em restriction of a hypergraph $G$ to a subset $U$ of its vertices} as the multi-hypergraph with vertex set $U$, in which the multiplicity of an edge $e$ is $|\{f\in E(G):f\cap U=e\}|$. We shall check that the number of vertices in $V\setminus U$ is at most 2 before restricting $G$ to $U$; since we shall always start from a 3-uniform $G$, this will ensure no repeated edges of size 1. \section{The bound $c=\frac{3}{5}$} We shall first give the basic argument, which proves the conjecture for hypergraphs which are above a certain size. We shall then need to take more care in the details for small hypergraphs. \begin{theorem}\label{211} Let $G$ be a 3-uniform hypergraph with $m\ge 211$ edges. Then there exists a partition into three parts with each part meeting at least $\frac{3}{5}m$ edges. \end{theorem} \begin{proof} Suppose $G$ does not admit such a partition. Take $\varepsilon=\frac{1}{15}$ in Theorem \ref{engine}, and let $A,B,C$ be the partition guaranteed. Then, by the result of the theorem, \begin{displaymath} d(A)+d(B)>\left(\frac{4}{3}+4\varepsilon\right)m=\frac{8}{5}m \end{displaymath} and $A,B$ are minimal $\varepsilon$-good sets, so no proper subset of $A$ or $B$ meets $\frac{3}{5}m$ edges.
Suppose without loss of generality that $d(A)\ge d(B)$. We distinguish two cases. \vspace{5pt} \textbf{Case 1.} $d(A)\ge\frac{9}{10}m$. By the minimality of $A$, for each vertex $a\in A$ there are more than $\frac{3}{10}m$ edges meeting $A$ only at $a$. Hence any two vertices of $A$ between them meet more than $\frac{3}{5}m$ edges, and so cannot form a proper subset of $A$. Hence $|A|\le 2$. Suppose $|A|=2$, say $A=\{a_1,a_2\}$, and $d_2(A)\ge\frac{3}{10}m$. Then there are more than $\frac{3}{10}m$ edges which meet $A$ only at $a_2$, and at least $\frac{3}{10}m$ edges which meet $A$ in both vertices, totalling more than $\frac{3}{5}m$ edges which meet $a_2$. Since $A$ is minimal, this is impossible, so fewer than $\frac{3}{10}m$ edges meet $A$ in both vertices. Now we consider $H$, the restriction of $G$ to $B\cup C$ (recall that we defined this to be a multi-hypergraph), and note that as $|A|\le 2$, $e(H)=e(G)$ and there can be no repeated edges of size 1; also, since fewer than $\frac{3}{10}m$ edges meet $A$ in two vertices, there are $k<\frac{3}{10}m$ edges of size 1 in $H$. Applying the result of Theorem \ref{multi}, we may find a partition $D_1\cup D_2=B\cup C$ with \begin{displaymath} d_H(D_i)\ge\frac{2m-2k}{3}+\frac{k-1}{2}=\frac{2m}{3}-\frac{k+3}{6} \end{displaymath} and $k<\frac{3}{10}m$, so \begin{displaymath} d_H(D_i)>\frac{2m}{3}-\frac{1}{2}-\frac{m}{20} =\frac{3m}{5}+\frac{m}{60}-\frac{1}{2}. \end{displaymath} Since $m\ge30$, $A,D_1,D_2$ form a suitable partition. \vspace{5pt} \textbf{Case 2.} $d(A)<\frac{9}{10}m$. Since $d(A)\ge d(B)$ and $d(A)+d(B)>\frac{8}{5}m$, we have $d(A)>\frac{4}{5}m$ and $d(B)>\frac{7}{10}m$. Then for each $a\in A$ there must be more than $\frac{2}{10}m$ edges meeting $A$ only at $a$, and so any three vertices in $A$ between them meet more than $\frac{3}{5}m$ edges, so $|A|\le 3$. Similarly, $|B|\le 6$. Since $|A\cup B|\le 9$, there can be at most $\binom{9}{3} = 84$ edges which do not meet $C$, so $\frac{2}{5}m < 84$, which is false if $m\ge 211$.
\end{proof} \section{Small $m$} In this section we will show the same result for all hypergraphs, not just those with $m\ge 211$. In order to do this, we shall be careful to take into account that intermediate expressions must be integer valued. Throughout, $G$ shall be a 3-uniform hypergraph on vertex set $V$ with $m$ edges; we shall call a set {\em good} if it meets at least $\frac{3}{5}m$ edges, and {\em bad} otherwise. The following result is immediate, but we shall refer to it more than once. \begin{lemma}\label{degree} If there is a set $A\subset V$ with $d_2(A)=0$ and $d(A)\ge\frac{3}{5}m$ then there exists a partition of $V$ into three good sets. In particular, if $\Delta(G)\ge\frac{3}{5}m$ then such a partition exists. \end{lemma} \begin{proof} Suppose $d(A)\ge\frac{3}{5}m$. For each edge $e$ of $G$ pick a subedge of size 2 which does not meet $A$; since no edge contains more than one vertex in $A$, this is possible. This gives a multigraph $H$ on $V\setminus A$ with $m$ edges. By Theorem \ref{multi}, we can partition $V\setminus A$ into two parts $B$, $C$, each meeting at least $\frac{2}{3}m$ edges of $H$. Then $A$, $B$, $C$ is a partition into three good sets. \end{proof} We shall also use another similar result. \begin{lemma}\label{twodeg} If $m\ge 10$ and $G$ has two vertices of degree $\lceil\frac{3}{5}m\rceil -1$, then there is a partition of $V$ into three good sets. \end{lemma} \begin{proof} Let $a$, $b$ be two such vertices. There is an edge which does not contain $a$; let $c$ be a vertex in that edge other than $b$. There is an edge which does not contain $b$; let $d$ be a vertex in that edge other than $a,c$. $\{a,c\}$ and $\{b,d\}$ each meet at least $\lceil\frac{3}{5}m\rceil$ edges, so are good. $V\setminus \{a,b,c,d\}$ meets at least $m-4$ edges, as there are only four possible edges contained in $\{a,b,c,d\}$; since $m\ge10$, $m-4\ge\lceil\frac{3}{5}m\rceil$, so $V\setminus \{a,b,c,d\}$ is also good. \end{proof} Suppose $V$ cannot be partitioned into three good sets.
Then every partition has some part meeting at most $\left\lceil\frac{3}{5}m\right\rceil-1$ edges. Let $\delta=\frac{2}{3}-\frac{\lceil\frac{3}{5}m\rceil-1}{m}>\frac{2}{3}-\frac{3}{5}=\frac{1}{15}$. For any $\frac{1}{15}<\varepsilon<\delta$, $G$ has no tripartition into $\varepsilon$-good parts, and so the partition guaranteed by Theorem \ref{engine} has $d(A)+d(B)+d(C)>(2+3\varepsilon)m$. If there is no such partition with $d(A)+d(B)+d(C)\ge(2+3\delta)m$, then by taking $\varepsilon$ suitably close to $\delta$ we obtain a contradiction. Hence Theorem \ref{engine} implies \begin{corollary}\label{discrete} Let $G$ be a 3-uniform hypergraph which cannot be partitioned into three good sets. Then there is a partition $V=A\cup B\cup C$ such that $A$ and $B$ are minimal good sets and \begin{displaymath} d(A)+d(B)+d(C)\ge(2+3\delta)m=4m-3(\lceil\frac{3}{5}m\rceil-1) \end{displaymath} and since $C$ is bad, \begin{eqnarray} d(A)+d(B)&\ge&(2+3\delta)m-d(C) \nonumber \\ &\ge&4m-4(\lceil\frac{3}{5}m\rceil-1) \label{many} \\ &>&\frac{8}{5}m \label{clean} \end{eqnarray} \end{corollary} We shall now be more careful about bounding the sizes of the sets $A$, $B$. \begin{lemma} In the partition guaranteed by Corollary \ref{discrete}, either $|A|=2$ or $|B|=2$. \end{lemma} \begin{proof} Since any edge which does not meet $C$ meets either $A$ or $B$ in at least two places, \begin{displaymath} d_2(A)+d_2(B)\ge m-d(C) >\frac{2}{5}m, \end{displaymath} and so, recalling \eqref{clean}, \begin{displaymath} 2d(A)+2d(B)+d_2(A)+d_2(B)>\frac{18}{5}m. \end{displaymath} Without loss of generality, assume $2d(A)+d_2(A)\ge 2d(B)+d_2(B)$, so that \begin{displaymath} 2d(A)+d_2(A)>\frac{9}{5}m. \end{displaymath} Since $A$ is minimally good, for each $a\in A$ there are more than $d(A)-\frac{3}{5}m$ edges which meet $A$ only at $a$.
There are also $d_2(A)$ edges meeting $A$ in more than one vertex, and so if $|A|\ge 3$ \begin{displaymath} d(A)\ge 3(d(A)-\frac{3}{5}m)+d_2(A), \end{displaymath} contradicting the previous inequality. Thus $|A|\le 2$, and Lemma \ref{degree} implies that $|A|=2$. \end{proof} \begin{lemma}\label{upb}In the partition guaranteed by Corollary \ref{discrete}, reordering $A, B$ if necessary so that $|A|=2$, \begin{displaymath} d(A)\le 8\lceil\frac{3}{5}m\rceil -4m -5. \end{displaymath} \end{lemma} \begin{proof} Since $A$ is minimally good, for each $a\in A$ there are more than $d(A)-\frac{3}{5}m$, and so at least $d(A)-\lceil\frac{3}{5}m\rceil+1$, edges which meet $A$ only at $a$, and so \begin{displaymath} d(A)\ge 2(d(A)-\lceil\frac{3}{5}m\rceil+1)+d_2(A). \end{displaymath} If $d(A)\ge 8\lceil\frac{3}{5}m\rceil -4m -4$, then \begin{eqnarray*} d_2(A)&\le& 2\lceil\frac{3}{5}m\rceil-d(A)-2 \\ &\le& 4m+2-6\lceil\frac{3}{5}m\rceil. \end{eqnarray*} Consider $H$, the multi-hypergraph obtained by restricting $G$ to $B\cup C$. Note that $H$ has at most $d_2(A)$ edges of size 1, and no repeated edges of size 1. Writing $k$ for the number of such edges, and applying the result of Theorem \ref{multi}, we may find a partition $D_1\cup D_2=B\cup C$ with \begin{displaymath} d_H(D_i)\ge\frac{2m-2k}{3}+\frac{k-1}{2}=\frac{2m}{3}-\frac{k+3}{6}, \end{displaymath} and $k\le 4m+2-6\lceil\frac{3}{5}m\rceil$, so \begin{displaymath} d_H(D_i)\ge\frac{2m}{3}-\frac{1}{2}-\frac{k}{6}\ge\lceil\frac{3}{5}m\rceil-\frac{5}{6}. \end{displaymath} Since $d_H(D_i)$ must be an integer, it is at least $\lceil\frac{3}{5}m\rceil$, so $A,D_1,D_2$ contradicts our assumption that $G$ does not have a good partition. \end{proof} We shall now bound $|B|$. \begin{lemma}\label{upbb} In the partition guaranteed by Corollary \ref{discrete}, reordering $A, B$ if necessary so that $|A|=2$, $|B|\le3$. \end{lemma} \begin{proof} Suppose $|B|\ge4$.
By Corollary \ref{discrete}, \begin{displaymath} d(A)+d(B)\ge 4m-4(\lceil\frac{3}{5}m\rceil-1). \end{displaymath} Since $A$ is minimally good, for each $a\in A$ there are at least $d(A)-\lceil\frac{3}{5}m\rceil+1$ edges which meet $A$ only at $a$, and so \begin{equation} d(A)\ge 2(d(A)-\lceil\frac{3}{5}m\rceil+1)+d_2(A). \label{zx} \end{equation} Similarly, \begin{equation} d(B)\ge 4(d(B)-\lceil\frac{3}{5}m\rceil+1)+d_2(B). \label{zxc} \end{equation} \eqref{zx} and \eqref{zxc} together give \begin{eqnarray*} d_2(A)+d_2(B)&\le& 6\lceil\frac{3}{5}m\rceil-d(A)-3d(B)-6 \\ &=& 6\lceil\frac{3}{5}m\rceil+2d(A)-3(d(A)+d(B))-6 \\ &\le& 18\lceil\frac{3}{5}m\rceil+2d(A)-12m-18. \end{eqnarray*} By Lemma \ref{upb}, therefore, \begin{displaymath} d_2(A)+d_2(B)\le 34\lceil\frac{3}{5}m\rceil-20m-28. \end{displaymath} Since any edge not meeting $C$ meets either $A$ or $B$ in at least two vertices, \begin{eqnarray*} d(C)-\lceil\frac{3}{5}m\rceil&\ge& m-d_2(A)-d_2(B)-\lceil\frac{3}{5}m\rceil \\ &\ge& 21m+28-35\lceil\frac{3}{5}m\rceil \\ &\ge& 21m+28-35\left(\frac{3}{5}m+\frac{4}{5}\right) =0. \end{eqnarray*} Again, this contradicts our assumption that $G$ does not have a good partition. \end{proof} We now have a much better bound than the one given by Theorem \ref{211}. \begin{corollary}\label{large} If $m\ge 25$, there is a partition of $V$ into three good sets. \end{corollary} \begin{proof}Suppose not. Lemmas \ref{upb} and \ref{upbb} then imply that there is a partition with $A$, $B$ being good sets and $|A\cup B|\le 5$. But then at most 10 edges lie entirely within $A\cup B$, so $d(C)\ge m-10 \ge\frac{3}{5}m$ when $m\ge 25$. \end{proof} \begin{lemma}\label{rest} If $m\le 24$ and $m\neq 7$, there is a partition of $V$ into three good sets. \end{lemma} \begin{proof} If $m=1$ or $m=2$, finding such a partition is trivial, so we shall assume $m\ge 3$.
Assume no good partition exists; taking $A,B,C$ to be the partition guaranteed by Corollary \ref{discrete} with $A$, $B$ reordered if necessary so that $|A|=2$, we shall now consider the two possible cases given by Lemma \ref{upbb}. \vspace{5pt} \textbf{Case 1.} $|B|=3$. Write $A=\{a_1, a_2\}$, $B=\{b_1, b_2, b_3\}$, and for each $i$ let the number of edges meeting $B$ only at $b_i$ be $x_i$. Now $d(B)=x_1+x_2+x_3+d_2(B)$ and for each $i$, \begin{displaymath} \lceil\frac{3}{5}m\rceil-1\ge d(B\setminus\{b_i\})=d_2(B)+x_1+x_2+x_3-x_i. \end{displaymath} Since $A$ contains only two vertices, and $B$ three, there are only three possible edges which do not meet $C$ and contain at most one vertex of $B$, so \begin{displaymath} d_2(B)\ge m-d(C)-3 \ge m-\lceil\frac{3m}{5}\rceil-2. \end{displaymath} Since $B$ is minimally good, \begin{eqnarray*} \lceil\frac{3}{5}m\rceil-1&\ge& d_2(B)+x_2+x_3 \\ &\ge& m-\lceil\frac{3}{5}m\rceil-2+x_2+x_3, \end{eqnarray*} so \begin{displaymath} x_2+x_3 \le 2\lceil\frac{3}{5}m\rceil+1-m. \end{displaymath} Hence one of $x_2$ and $x_3$ (say $x_3$) is at most $\lceil\frac{3m}{5}\rceil-\lceil\frac{m-1}{2}\rceil$. Since $B$ is minimally good, $d(B)-x_3 \le \lceil\frac{3m}{5}\rceil-1$, so \begin{displaymath} d(B) \le 2\lceil\frac{3m}{5}\rceil-\lceil\frac{m-1}{2}\rceil-1. \end{displaymath} Using \eqref{many}, \begin{displaymath} d(A)+d(B)\ge 4m-4(\lceil\frac{3}{5}m\rceil-1), \end{displaymath} so \begin{equation} d(A)\ge 4m-6\lceil\frac{3}{5}m\rceil+5+\lceil\frac{m-1}{2}\rceil. \label{qw} \end{equation} Comparing this with the upper bound obtained in Lemma \ref{upb}, we must have \begin{equation} 14\lceil\frac{3}{5}m\rceil\ge10+8m+\lceil\frac{m-1}{2}\rceil, \label{qwe} \end{equation} and equality in \eqref{qwe} implies equality in \eqref{qw}. \eqref{qwe} is false for all values in the range $3\le m\le 24$ except 7, 12 and 17. For $m=12$ and $m=17$ there is equality in \eqref{qwe} and so also in \eqref{qw}. If $m=12$, therefore, we have $d(A)=11$.
Since $A$ is minimally good, each vertex in $A$ meets at most 7 edges, and so $d_2(A)\ge 3$. Since $d(A)=d(a_1)+d(a_2)-d_2(A)$, we must have $d(a_1)=d(a_2)=7$, and by Lemma \ref{twodeg} we can find a good partition. If $m=17$, similarly, we have $d(A)=15$, $d(a_i)\le 10$ and $d_2(A)\ge 5$. Again, this implies that $d(a_1)=d(a_2)=10$, and by Lemma \ref{twodeg} there is a good partition. \vspace{5pt} \textbf{Case 2.} $|B|=2$. In this case, since there are at most 4 edges contained in $A\cup B$, $d(C)\ge m-4$ and so there exists a good partition if $m\ge 10$. Write $B=\{b_1, b_2\}$, and for each $i$ let the number of edges meeting $B$ only at $b_i$ be $x_i$. Now $d(B)=x_1+x_2+d_2(B)$ and so, \begin{displaymath} \lceil\frac{3}{5}m\rceil-1\ge d_2(B)+x_i. \end{displaymath} Since $|A|=|B|=2$, there are only two possible edges which do not meet $C$ and contain at most one vertex of $B$. So \begin{displaymath} d_2(B)\ge m-d(C)-2 \ge m-\lceil\frac{3m}{5}\rceil-1. \end{displaymath} Since $B$ is minimally good, $d_2(B)+x_2 \le \lceil\frac{3m}{5}\rceil-1$, so \begin{displaymath} x_2 \le 2\lceil\frac{3}{5}m\rceil-m. \end{displaymath} Since $B$ is minimally good, $d(B)-x_2 \le \lceil\frac{3m}{5}\rceil-1$, so \begin{displaymath} d(B) \le 3\lceil\frac{3m}{5}\rceil-m-1 \end{displaymath} Using \eqref{many}, \begin{displaymath} d(A)+d(B)\ge 4m-4(\lceil\frac{3}{5}m\rceil-1), \end{displaymath} so \begin{displaymath} d(A)\ge 5m-7\lceil\frac{3}{5}m\rceil+5. \end{displaymath} Comparing this with the upper bound obtained in Lemma \ref{upb}, we must have \begin{displaymath} 15\lceil\frac{3}{5}m\rceil\ge10+9m, \end{displaymath} which holds if and only if $m\equiv 2$ mod 5. Since $3\le m\le 9$, we must have $m=7$. \end{proof} \begin{lemma}\label{seven} If $m=7$, there is a partition of $V$ into three good sets. \end{lemma} \begin{proof} Take $A,B,C$ to be the partition guaranteed by Corollary \ref{discrete}, reordering $A$, $B$ if necessary so that $d(A)\ge d(B)$. 
By \eqref{many}, $d(A)+d(B) \ge 12$, so either $d(A)=7$ or $d(A)=d(B)=6$. \vspace{5pt} \textbf{Case 1.} $d(A)=7$. Since $A$ is minimally good, each vertex in $A$ meets at least three edges which do not contain any other vertex in $A$. Also, by Lemma \ref{degree}, some edge meets more than one vertex in $A$. Therefore $|A|=2$ and each vertex in $A$ meets exactly three edges which do not contain the other, and one edge which does. Thus, writing $A=\{a,b\}$, $d(a)=d(b)=4$ and there is exactly one edge which contains both, $abc$, say. Choose any edge which does not contain $a$; this edge contains some vertex $d\neq b,c$. Choose any edge which does not contain $b$ and is not $acd$; this edge contains some vertex $e\neq a,c,d$. Now $\{a,d\}$ and $\{b,e\}$ are both good sets. Since neither $abd$ nor $abe$ is an edge of $G$, at most two edges do not meet $V\setminus \{a,b,d,e\}$, so this is also good and we have a good partition. \vspace{5pt} \textbf{Case 2.} $d(A)=d(B)=6$. Relabel if necessary so that $d_2(A)\ge d_2(B)$. Since $d(C)\le 4$, and $d(C)\ge m-d_2(A)-d_2(B)$, $d_2(A)\ge 2$. Since $A$ is minimally good, each vertex in $A$ meets at least two edges which do not contain any other vertex in $A$. Therefore $|A|=2$ and each vertex in $A$ meets exactly two edges which do not contain the other, and two edges which do. Thus, writing $A=\{a,b\}$, $d(a)=d(b)=4$ and there are exactly two edges which contain both, $abc$ and $abd$, say. There is one edge which meets neither $a$ nor $b$; this edge contains some vertex $e\neq c,d$. Since, by Lemma \ref{degree}, $\Delta(G)\le 4$, there must be at least 6 vertices of degree at least 1, so there exists a vertex $f\neq a,b,c,d,e$ which meets an edge. Since $abf$ is not an edge of $G$, either there is an edge which meets $f$ but not $a$, or there exists an edge which meets $f$ but not $b$; assume without loss of generality the former. Now $\{a,f\}$ and $\{b,e\}$ are both good sets.
Since neither $abe$ nor $abf$ is an edge of $G$, at most two edges do not meet $V\setminus \{a,b,e,f\}$, so this is also good and we have a good partition. \end{proof} Corollary \ref{large}, together with Lemmas \ref{seven} and \ref{rest}, gives our main result. \begin{theorem} Let $G$ be a 3-uniform hypergraph with $m$ edges. Then there exists a partition into three parts with each part meeting at least $\frac{3}{5}m$ edges. \end{theorem} \section{Final Remarks} Two questions naturally arise from these results. Firstly, for $r=3$, can we achieve a better bound than $c=\frac{3}{5}$ for sufficiently large $m$? It seems very likely that such a result is true; indeed, by analogy with the results of Bollob\'as and Scott in the graph case \cite{BS99}, we might expect that there exist partitions with each part meeting at least $(\frac{19}{27}-o(1))m$ edges (considering complete hypergraphs shows that we cannot do better). However, we cannot hope to do better than $(\frac{2}{3}-o(1))m$ using these methods. The immediate difficulty in doing better than $\frac{3}{5}m$ is the need to find an optimal partition with at most one bad part in the proof of Theorem \ref{engine}. The second question is whether we can say anything for $r>3$. Again, the need to find a starting partition with at most one bad part is likely to become much more difficult as the number of parts increases. A substantially different approach may well be needed to get a bound which remains good as $r$ becomes large.
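Two of the arithmetic claims above lend themselves to a quick machine check: the tightness example for $\varepsilon=\frac{1}{15}$ given after the proof of Theorem \ref{engine}, and the assertion that \eqref{qwe} holds for $3\le m\le 24$ only when $m$ is 7, 12 or 17. The following script is our own verification sketch, not part of the paper:

```python
from math import ceil

# 1. Tightness of eps = 1/15: hypergraph on {a,...,g} with edges
#    abc, def, adg, beg, cfg, partitioned as A={a,d,g}, B={b,e}, C={c,f}.
edges = ["abc", "def", "adg", "beg", "cfg"]

def d(part):
    """Number of edges meeting the vertex set `part`."""
    return sum(1 for e in edges if set(e) & set(part))

assert d("adg") == 5
assert d("be") == d("cf") == 3        # = (2/3 - 1/15) * m with m = 5

# 2. Inequality (qwe): 14*ceil(3m/5) >= 10 + 8m + ceil((m-1)/2)
#    should hold for 3 <= m <= 24 exactly when m is 7, 12 or 17.
ok = [n for n in range(3, 25)
      if 14 * ceil(3 * n / 5) >= 10 + 8 * n + ceil((n - 1) / 2)]
print(ok)   # -> [7, 12, 17]
```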
https://arxiv.org/abs/0911.0563
Judicious partitions of 3-uniform hypergraphs
The vertices of any graph with $m$ edges can be partitioned into two parts so that each part meets at least $\frac{2m}{3}$ edges. Bollobás and Thomason conjectured that the vertices of any $r$-uniform graph may be likewise partitioned into $r$ classes such that each part meets at least $cm$ edges, with $c=\frac{r}{2r-1}$. In this paper, we prove this conjecture for the case $r=3$. In the course of the proof we shall also prove an extension of the graph case which was conjectured by Bollobás and Scott.
https://arxiv.org/abs/1804.09455
Nonnegative Polynomials and Circuit Polynomials
The concept of sums of nonnegative circuit polynomials (SONC) was recently introduced as a new certificate of nonnegativity especially for sparse polynomials. In this paper, we explore the relationship between nonnegative polynomials and SONC polynomials. As a first result, we provide sufficient conditions for nonnegative polynomials with general Newton polytopes to be SONC polynomials, which generalizes the previous result on nonnegative polynomials with simplex Newton polytopes. Secondly, we prove that every SONC polynomial admits a SONC decomposition without cancellation. In other words, SONC decompositions can exactly preserve the sparsity of nonnegative polynomials, which is dramatically different from the classical sum of squares (SOS) decompositions and is a key property to design efficient algorithms for sparse polynomial optimization based on SONC decompositions.
\section{Introduction} A real polynomial $f\in{\mathbb{R}}[{\mathbf{x}}]={\mathbb{R}}[x_1,\ldots,x_n]$ is called a {\em nonnegative polynomial} if its evaluation at every real point is nonnegative. All nonnegative polynomials form a convex cone, denoted by PSD. Certifying nonnegativity of polynomials is a central problem of real algebraic geometry and has a deep connection with polynomial optimization. A classical approach for handling this problem is sum of squares (SOS) decompositions. From the perspective of computation, checking whether a polynomial is a sum of squares boils down to a semidefinite programming (SDP) problem involving a positive semidefinite matrix of size $\binom{n+d}{d}$, where $n$ is the number of variables and $2d$ is the degree of the polynomial \cite{pa,pa1}. Hence, the size of the corresponding SDP problem grows rapidly as $n,d$ increase, which greatly limits the scalability of this approach. The concept of sums of nonnegative circuit polynomials, recently introduced by Iliman and de Wolff in \cite{iw}, is a substitute for sums of squares of polynomials to represent nonnegative polynomials. A polynomial $f$ is called a {\em circuit polynomial} if it is of the form \begin{equation} f({\mathbf{x}})=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}, \end{equation} where $\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}\subseteq(2{\mathbb{N}})^n$ comprises the vertices of a simplex, ${\boldsymbol{\beta}}$ is an interior point of the convex hull of $\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}$ and $c_i>0$ for $i=1,\ldots,m$. For every circuit polynomial $f$, we associate with it the {\em circuit number} defined as $\Theta_f:=\prod_{i=1}^m(c_i/\lambda_i)^{\lambda_i}$, where the $\lambda_i$'s are uniquely given by the convex combination ${\boldsymbol{\beta}}=\sum_{i=1}^m\lambda_i{\boldsymbol{\alpha}}_i \textrm{ with } \lambda_i>0 \textrm{ and } \sum_{i=1}^m\lambda_i=1$.
Then the nonnegativity of $f$ is easy to decide: $f$ is nonnegative if and only if either ${\boldsymbol{\beta}}\in(2{\mathbb{N}})^n$ and $d\le\Theta_f$, or ${\boldsymbol{\beta}}\notin(2{\mathbb{N}})^n$ and $-\Theta_f\le d\le\Theta_f$. We say that a polynomial is a {\em sum of nonnegative circuit polynomials (SONC)} if it can be written as a sum of nonnegative circuit polynomials. The set of SONC polynomials also forms a convex cone. Clearly, an explicit representation of a SONC polynomial as a sum of nonnegative circuit polynomials provides a certificate of its nonnegativity, which is called a {\em SONC decomposition}. By virtue of SONC decompositions, algorithms were proposed for polynomial optimization problems (see \cite{lw} for the unconstrained case and \cite{dlw,diw,dkw} for the constrained case). Numerical experiments for unconstrained polynomial optimization problems in \cite{pap,se} have demonstrated the advantage of the SONC-based methods over the SOS-based methods, especially in the high-degree but fairly sparse case. A detailed comparison between SONC and SOS can be found in \cite{se}. From the perspective of theory, it is natural to ask which types of nonnegative polynomials admit SONC decompositions and how big the gap between the PSD cone and the SONC cone is. In \cite{iw}, it was proved that if the Newton polytope of a polynomial $f$ is a simplex and there exists a point at which all terms of $f$, except those corresponding to the vertices of the Newton polytope, are negative, then $f$ is nonnegative if and only if $f$ is a SONC polynomial (see Theorem \ref{nc-thm2}). The first contribution of this paper is that we generalize this conclusion to polynomials with general Newton polytopes.
Specifically, we provide sufficient conditions for nonnegative polynomials with general Newton polytopes to be SONC polynomials in terms of the combinatorial structure of the supports (Theorem \ref{npgp-thm7} and Theorem \ref{npmt-thm1}). Moreover, we give a counterexample showing that a nonnegative polynomial may fail to admit a SONC decomposition if one of these conditions does not hold. As the second contribution of this paper, we clarify an important fact: every SONC polynomial decomposes into a sum of nonnegative circuit polynomials using only the support of the original polynomial (Theorem \ref{sec3-thm2}). In other words, SONC decompositions can exactly preserve the sparsity of polynomials. This is dramatically different from the SOS decompositions of nonnegative polynomials, where extra monomials are needed in general. This key property of SONC decompositions explains the advantage of SONC decompositions for certifying nonnegativity of sparse polynomials compared to the classical SOS decompositions, and is crucial for designing efficient algorithms for sparse polynomial optimization based on SONC decompositions. The rest of this paper is organized as follows. In Section 2, we recall some basic facts on SONC polynomials. After that we consider the problem of which types of nonnegative polynomials admit SONC decompositions. We deal with the case of nonnegative polynomials with one negative term in Section 3, and with the case of nonnegative polynomials with multiple negative terms in Section 4. In Section 5, we prove that every SONC polynomial decomposes into a sum of nonnegative circuit polynomials with the same support. Moreover, we prove that no cancellation is needed in this decomposition. \section{Preliminaries} \subsection{Notation and Nonnegative Polynomials} Let ${\mathbb{R}}[{\mathbf{x}}]={\mathbb{R}}[x_1,\ldots,x_n]$ be the ring of real $n$-variate polynomials, ${\mathbb{R}}^*={\mathbb{R}}\backslash\{0\}$, and ${\mathbb{N}}^*={\mathbb{N}}\backslash\{0\}$.
Let ${\mathbb{R}}_+$ be the set of positive real numbers and ${\mathbb{R}}_{\ge0}$ the set of nonnegative real numbers. For a finite set ${\mathscr{A}}\subseteq{\mathbb{N}}^n$, we denote by $\hbox{\rm{cone}}({\mathscr{A}})$ the conic hull of ${\mathscr{A}}$, by $\hbox{\rm{conv}}({\mathscr{A}})$ the convex hull of ${\mathscr{A}}$, and by $V({\mathscr{A}})$ the vertices of the convex hull of ${\mathscr{A}}$. We also denote by $V(P)$ the vertex set of a polytope $P$. We consider polynomials $f\in{\mathbb{R}}[{\mathbf{x}}]$ supported on the finite set ${\mathscr{A}}\subseteq{\mathbb{N}}^n$, i.e. $f$ is of the form $f({\mathbf{x}})=\sum_{{\boldsymbol{\alpha}}\in{\mathscr{A}}}c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}$ with $c_{{\boldsymbol{\alpha}}}\in{\mathbb{R}}, {\mathbf{x}}^{{\boldsymbol{\alpha}}}=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$. The {\em support} of $f$ is $\hbox{\rm{supp}}(f):=\{{\boldsymbol{\alpha}}\in {\mathscr{A}}\mid c_{{\boldsymbol{\alpha}}}\ne0\}$ and the {\em Newton polytope} of $f$ is defined as $\hbox{\rm{New}}(f):=\hbox{\rm{conv}}(\hbox{\rm{supp}}(f))$. For a polytope $P$, we use $P^{\circ}$ to denote the interior of $P$. For $m\in{\mathbb{N}}$, let $[m]:=\{1,\ldots,m\}$. A polynomial $f\in{\mathbb{R}}[{\mathbf{x}}]$ which is nonnegative over ${\mathbb{R}}^n$ is called a {\em nonnegative polynomial}. The class of nonnegative polynomials is denoted by PSD. A nonnegative polynomial must satisfy the following necessary conditions. \begin{proposition}(\cite[Theorem 3.6]{re1})\label{nc-prop2} Let ${\mathscr{A}}\subseteq{\mathbb{N}}^n$ and $f=\sum_{{\boldsymbol{\alpha}}\in {\mathscr{A}}}c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}\in{\mathbb{R}}[{\mathbf{x}}]$ with $\hbox{\rm{supp}}(f)={\mathscr{A}}$. 
Then $f$ is nonnegative only if the following conditions hold: \begin{enumerate} \item $V({\mathscr{A}})\subseteq(2{\mathbb{N}})^n$; \item If ${\boldsymbol{\alpha}}\in V({\mathscr{A}})$, then the corresponding coefficient $c_{{\boldsymbol{\alpha}}}$ is positive. \end{enumerate} \end{proposition} For the remainder of this paper, we assume for simplicity that the monomial factor of any polynomial $f$ is $1$; that is, if $f={\mathbf{x}}^{{\boldsymbol{\alpha}}'}(\sum c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}})$ with $\sum c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}\in{\mathbb{R}}[{\mathbf{x}}]$ and ${\boldsymbol{\alpha}}'\in{\mathbb{N}}^n$, then ${\mathbf{x}}^{{\boldsymbol{\alpha}}'}=1$. Otherwise, we can always factor out the monomial factor. \subsection{Nonnegative Polynomials Supported on Circuits} A subset ${\mathscr{A}}\subseteq(2{\mathbb{N}})^n$ is called a {\em trellis} if ${\mathscr{A}}$ comprises the vertices of a simplex. \begin{definition} Let ${\mathscr{A}}$ be a trellis and $f\in{\mathbb{R}}[{\mathbf{x}}]$. Then $f$ is called a {\em circuit polynomial} if it is of the form \begin{equation}\label{nc-eq} f({\mathbf{x}})=\sum_{{\boldsymbol{\alpha}}\in{\mathscr{A}}}c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}, \end{equation} with $c_{{\boldsymbol{\alpha}}}>0$ and ${\boldsymbol{\beta}}\in\hbox{\rm{conv}}({\mathscr{A}})^{\circ}$. Assume \begin{equation} {\boldsymbol{\beta}}=\sum_{{\boldsymbol{\alpha}}\in{\mathscr{A}}}\lambda_{{\boldsymbol{\alpha}}}{\boldsymbol{\alpha}}\textrm{ with } \lambda_{{\boldsymbol{\alpha}}}>0 \textrm{ and } \sum_{{\boldsymbol{\alpha}}\in{\mathscr{A}}}\lambda_{{\boldsymbol{\alpha}}}=1. \end{equation} For every circuit polynomial $f$, we define the corresponding {\em circuit number} as $\Theta_f:=\prod_{{\boldsymbol{\alpha}}\in{\mathscr{A}}}(c_{{\boldsymbol{\alpha}}}/\lambda_{{\boldsymbol{\alpha}}})^{\lambda_{{\boldsymbol{\alpha}}}}$.
\end{definition} The nonnegativity of a circuit polynomial $f$ is decided by its circuit number alone. \begin{theorem}(\cite[Theorem 3.8]{iw})\label{nc-thm1} Let $f=\sum_{{\boldsymbol{\alpha}}\in{\mathscr{A}}}c_{{\boldsymbol{\alpha}}} {\mathbf{x}}^{{\boldsymbol{\alpha}}}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ be a circuit polynomial and $\Theta_f$ its circuit number. Then $f$ is nonnegative if and only if either ${\boldsymbol{\beta}}\in(2{\mathbb{N}})^n$ and $d\le\Theta_f$ or ${\boldsymbol{\beta}}\notin(2{\mathbb{N}})^n$ and $|d|\le\Theta_f$. \end{theorem} \begin{remark} For conciseness of narrative, we also view a monomial square as a nonnegative circuit polynomial. \end{remark} The following proposition characterizes the zeros of a circuit polynomial when the Newton polytope is full-dimensional. \begin{proposition}(\cite[Proposition 3.4 and Corollary 3.9]{iw})\label{nc-prop1} Let $f=\sum_{i=0}^nc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\Theta_f{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ be a circuit polynomial, $\Theta_f$ the circuit number and ${\boldsymbol{\beta}}=\sum_{i=0}^n\lambda_i{\boldsymbol{\alpha}}_i$ with $\lambda_i>0$ and $\sum_{i=0}^n\lambda_i=1$. Then $f$ has exactly one zero ${\mathbf{x}}_*$ in ${\mathbb{R}}_+^{n}$, which satisfies: \begin{equation} \frac{c_0{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_0}}{\lambda_0}=\cdots=\frac{c_n{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_n}}{\lambda_n}=\Theta_f{\mathbf{x}}_*^{{\boldsymbol{\beta}}}. \end{equation} Moreover, if ${\mathbf{x}}$ is any zero of $f$, then $|{\mathbf{x}}|={\mathbf{x}}_*$, i.e. $|x_i|=(x_*)_{i}$ for $i=1,\ldots,n$. \end{proposition} We shall say that a polynomial is a {\em sum of nonnegative circuit polynomials (SONC)} if it can be written as a sum of nonnegative circuit polynomials.
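To make the zero characterization in Proposition \ref{nc-prop1} concrete, here is a small numerical sketch of ours (the Motzkin polynomial, with circuit number $\Theta_f=3$ and positive zero ${\mathbf{x}}_*=(1,1)$, is an assumed example and is not taken from the original text):

```python
# Zero characterization of Proposition nc-prop1, checked on the Motzkin
# polynomial f = x^4 y^2 + x^2 y^4 + 1 - 3 x^2 y^2 (our example):
# trellis exponents alpha_0..alpha_2, barycentric weights lambda_i = 1/3,
# circuit number Theta_f = 3, and unique positive zero x_* = (1, 1).
alphas = [(4, 2), (2, 4), (0, 0)]
coeffs = [1.0, 1.0, 1.0]
lams = [1 / 3, 1 / 3, 1 / 3]
theta, beta = 3.0, (2, 2)
x_star = (1.0, 1.0)

def mono(x, a):
    return x[0] ** a[0] * x[1] ** a[1]

# f(x_*) = 0 ...
f_at_star = sum(c * mono(x_star, a) for c, a in zip(coeffs, alphas)) \
    - theta * mono(x_star, beta)
assert abs(f_at_star) < 1e-12

# ... and all ratios c_i x_*^{alpha_i} / lambda_i equal Theta_f x_*^{beta}.
ratios = [c * mono(x_star, a) / l for c, a, l in zip(coeffs, alphas, lams)]
assert all(abs(r - theta * mono(x_star, beta)) < 1e-9 for r in ratios)

# The proposition also says any real zero x has |x| = x_*: e.g. (-1, 1).
f_at_sign_flip = sum(c * mono((-1.0, 1.0), a) for c, a in zip(coeffs, alphas)) \
    - theta * mono((-1.0, 1.0), beta)
assert abs(f_at_sign_flip) < 1e-12
```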
Clearly, an explicit representation of a SONC polynomial as a sum of nonnegative circuit polynomials provides a certificate of its nonnegativity, which is called a {\em SONC decomposition}. We denote by SONC the class of SONC polynomials. The following theorem gives a characterization for a nonnegative polynomial to be a SONC polynomial when the Newton polytope is a simplex. \begin{theorem}(\cite[Corollary 7.5]{iw})\label{nc-thm2} Let $f=\sum_{i=0}^n c_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\sum_{j=1}^l d_j{\mathbf{x}}^{{\boldsymbol{\beta}}_j}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n, c_i>0, i=0,\ldots,n$ such that $\hbox{\rm{New}}(f)$ is a simplex and ${\boldsymbol{\beta}}_j\in\hbox{\rm{New}}(f)^{\circ}\cap{\mathbb{N}}^n$ for $j=1,\ldots,l$. If there exists a point ${\boldsymbol{v}}=(v_j)\in({\mathbb{R}}^*)^n$ such that $d_j{\boldsymbol{v}}^{{\boldsymbol{\beta}}_j}>0$ for all $j$, then $f\in\hbox{\rm{PSD}}$ if and only if $f\in\hbox{\rm{SONC}}$. \end{theorem} \section{Nonnegative Polynomials with One Negative Term} Now we study which types of nonnegative polynomials with general Newton polytopes admit SONC decompositions. Hilbert's well-known classification of the cases where nonnegative polynomials coincide with SOS polynomials is in terms of the number of variables and the degree of the polynomials. In the SONC case, the classification depends instead on the combinatorial structure of the supports of polynomials. In this section, we deal with the case of nonnegative polynomials with one negative term, i.e., we assume that the polynomial is of the form $f_d=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$ and ${\boldsymbol{\beta}}\notin V(\hbox{\rm{New}}(f_d))$. Let $\partial\hbox{\rm{New}}(f_d)$ denote the boundary of $\hbox{\rm{New}}(f_d)$.
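Before developing the general theory, we record a small worked example of our own (it does not appear in the original text) showing the phenomenon this section addresses: a polynomial with one negative term whose Newton polytope is not a simplex, so that Theorem \ref{nc-thm2} is not applicable, yet which still splits into nonnegative circuit polynomials.

```latex
% Worked example (ours). Take
%   f_d = x^4 + y^4 + x^4 y^4 + 1 - d x^2 y^2,
% so that New(f_d) = conv{(4,0), (0,4), (4,4), (0,0)} is a square and
% beta = (2,2) lies in its interior. For d <= 4, split the negative term
% evenly between the two diagonals of the square:
f_d \;=\; \Bigl(x^4 + y^4 - \tfrac{d}{2}\,x^2y^2\Bigr)
      \;+\; \Bigl(x^4y^4 + 1 - \tfrac{d}{2}\,x^2y^2\Bigr).
% Each summand is a circuit polynomial supported on a one-dimensional
% trellis ({(4,0),(0,4)} and {(4,4),(0,0)}, respectively) with weights
% lambda = (1/2, 1/2), hence with circuit number
% Theta = (1/(1/2))^{1/2} (1/(1/2))^{1/2} = 2. By Theorem nc-thm1 both
% summands are nonnegative exactly when d/2 <= 2, so f_d is SONC (and in
% particular nonnegative) for d <= 4, whereas f_d(1,1) = 4 - d < 0 for d > 4.
```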
We first reduce the case ${\boldsymbol{\beta}}\in\partial\hbox{\rm{New}}(f_d)$ to the case ${\boldsymbol{\beta}}\in\hbox{\rm{New}}(f_d)^{\circ}$ by the following lemma. \begin{lemma}\label{sec3-lm} Let $f_d=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$ and ${\boldsymbol{\beta}}\in\partial\hbox{\rm{New}}(f_d)$. Furthermore, let $F$ be the face of $\hbox{\rm{New}}(f_d)$ containing ${\boldsymbol{\beta}}$. Then $f_d$ is nonnegative if and only if the restriction of $f_d$ to the face $F$ is nonnegative. \end{lemma} \begin{proof} The necessity follows from \cite[Theorem 3.6]{re1}. For the sufficiency, note that $f_d$ is the sum of its restriction to the face $F$, which contains the term $-d{\mathbf{x}}^{{\boldsymbol{\beta}}}$ and is nonnegative by assumption, and the remaining terms, which are all monomial squares. Hence $f_d$ is nonnegative. \end{proof} Now we assume ${\boldsymbol{\beta}}\in\hbox{\rm{New}}(f_d)^{\circ}$. Without loss of generality, we further assume that $\dim(\hbox{\rm{New}}(f_d))=n$. Otherwise, we can reduce to this case by applying an appropriate monomial transformation to $f_d$. It is easy to see that the set $\{d\in{\mathbb{R}}\mid f_d\in\hbox{\rm{PSD}}\}$ is nonempty and bounded above, so the supremum exists. Let \begin{equation}\label{npgp-eq1} d^*\triangleq\sup\{d\in{\mathbb{R}}\mid f_d\in\hbox{\rm{PSD}}\}. \end{equation} We need the following theorem. \begin{theorem}(\cite[Theorem 1.5]{mu})\label{sec1-thm2} Consider the following system of polynomial equations \begin{equation}\label{sec1-eq2} \sum_{i=1}^m c_i{\boldsymbol{\alpha}}_i{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}={\mathbf{b}}, \end{equation} where ${\boldsymbol{\alpha}}_i\in{\mathbb{R}}^n,c_i>0,i=1,\ldots,m$. Moreover, assume $\dim(\hbox{\rm{conv}}(\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}))=n$.
Then for any ${\mathbf{b}}\in\hbox{\rm{cone}}(\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\})^{\circ}$, (\ref{sec1-eq2}) has exactly one zero in ${\mathbb{R}}_+^{n}$. \end{theorem} \begin{lemma}\label{sec1-lm} Consider the following system of polynomial equations in the variables $({\mathbf{x}},d)$ \begin{equation}\label{sec1-eq1} \begin{cases} \sum_{i=1}^m c_i{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}=0\\ \sum_{i=1}^m c_i{\boldsymbol{\alpha}}_i{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\boldsymbol{\beta}}{\mathbf{x}}^{{\boldsymbol{\beta}}}=\mathbf{0} \end{cases}, \end{equation} where ${\boldsymbol{\alpha}}_i\in{\mathbb{R}}^n, c_i>0,i=1,\ldots,m$, ${\boldsymbol{\beta}}\in\hbox{\rm{conv}}(\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\})^{\circ}$. Moreover, assume $\dim(\hbox{\rm{conv}}(\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}))=n$. Then \eqref{sec1-eq1} has exactly one zero in ${\mathbb{R}}_+^{n+1}$. \end{lemma} \begin{proof} Eliminating $d$ from (\ref{sec1-eq1}), we obtain \begin{equation}\label{sec1-eq3} \sum_{i=1}^m c_i({\boldsymbol{\alpha}}_i-{\boldsymbol{\beta}}){\mathbf{x}}^{{\boldsymbol{\alpha}}_i}=\mathbf{0}. \end{equation} Dividing (\ref{sec1-eq3}) by ${\mathbf{x}}^{{\boldsymbol{\beta}}}$, we obtain \begin{equation}\label{sec1-eq4} \sum_{i=1}^m c_i({\boldsymbol{\alpha}}_i-{\boldsymbol{\beta}}){\mathbf{x}}^{{\boldsymbol{\alpha}}_i-{\boldsymbol{\beta}}}=\mathbf{0}. \end{equation} Since ${\boldsymbol{\beta}}\in\hbox{\rm{conv}}(\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\})^{\circ}$, we have $\mathbf{0}\in\hbox{\rm{cone}}(\{{\boldsymbol{\alpha}}_1-{\boldsymbol{\beta}},\ldots,{\boldsymbol{\alpha}}_m-{\boldsymbol{\beta}}\})^{\circ}$. Thus by Theorem \ref{sec1-thm2}, (\ref{sec1-eq4}) and hence (\ref{sec1-eq3}) have exactly one zero in ${\mathbb{R}}_+^{n}$, say ${\mathbf{x}}_*$.
Substituting ${\mathbf{x}}_*$ into the first equation of (\ref{sec1-eq1}), we obtain $d=\sum_{i=1}^m c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i-{\boldsymbol{\beta}}}>0$. \end{proof} \begin{theorem}\label{npgp-thm3} Let $f_d=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$ such that ${\boldsymbol{\beta}}\in\hbox{\rm{New}}(f_d)^{\circ}\cap{\mathbb{N}}^n$, $\dim(\hbox{\rm{New}}(f_d))=n$, and let $d^*$ be defined as in (\ref{npgp-eq1}). Then $f_d\in\hbox{\rm{PSD}}$ if and only if either ${\boldsymbol{\beta}}\in(2{\mathbb{N}})^n$ and $d\le d^*$ or ${\boldsymbol{\beta}}\notin(2{\mathbb{N}})^n$ and $|d|\le d^*$. Moreover, $f_{d^*}$ has exactly one zero in ${\mathbb{R}}_+^{n}$. \end{theorem} \begin{proof} First, if ${\boldsymbol{\beta}}\in(2{\mathbb{N}})^n$ and $d\le0$, then $f_d$ is obviously nonnegative, since it is a sum of monomial squares. If ${\boldsymbol{\beta}}\notin(2{\mathbb{N}})^n$ and $d\le0$, then $f_d$ is nonnegative if and only if $f_{-d}$ is nonnegative. Thus without loss of generality, we need only consider the case $d>0$. Since the only negative term in $f_d$ is $-d{\mathbf{x}}^{{\boldsymbol{\beta}}}$, $f_d$ is nonnegative over ${\mathbb{R}}^n$ if and only if $f_d$ is nonnegative over ${\mathbb{R}}_+^n$. Therefore, by the definition of $d^*$, $f_d\in\hbox{\rm{PSD}}$ if and only if $d\le d^*$. The zeros of $f_{d^*}$ are also global minimizers of $f_{d^*}$.
Then they coincide with the zeros of the system of equations $\{f_{d^*}({\mathbf{x}})=0,\nabla(f_{d^*}({\mathbf{x}}))=\mathbf{0}\}$ ($\nabla$ denotes the gradient), which is equivalent to \begin{equation}\label{sec1-eq5} \begin{cases} \sum_{i=1}^m c_i{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d^*{\mathbf{x}}^{{\boldsymbol{\beta}}}=0\\ \sum_{i=1}^m c_i{\boldsymbol{\alpha}}_i{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d^*{\boldsymbol{\beta}}{\mathbf{x}}^{{\boldsymbol{\beta}}}=\mathbf{0} \end{cases}. \end{equation} By Lemma \ref{sec1-lm}, (\ref{sec1-eq5}) has exactly one zero in ${\mathbb{R}}_+^{n}$, and so does $f_{d^*}$. \end{proof} We need the following theorem from discrete geometry. \begin{theorem}[Helly, \cite{dgk}]\label{npgp-thm5} Let $X_1,\ldots,X_r$ be a finite collection of convex subsets of ${\mathbb{R}}^n$ with $r>n$. If the intersection of every $n+1$ of these sets is nonempty, then the whole collection has a nonempty intersection. \end{theorem} \begin{corollary}\label{npgp-cor1} Let $X_1,\ldots,X_r$ be a finite collection of convex subsets of ${\mathbb{R}}^n$ with $r>n+1$. If the intersection of every $r-1$ of these sets is nonempty, then the whole collection has a nonempty intersection. \end{corollary} \begin{proof} Since $r>n+1$, the condition that the intersection of every $r-1$ of these sets is nonempty implies that the intersection of every $n+1$ of these sets is nonempty. So the corollary is immediate from Theorem \ref{npgp-thm5}. \end{proof} \begin{lemma}\label{npgp-lm1} Let $A=(a_{ij})\in{\mathbb{R}}^{m\times r}$, ${\mathbf{b}}=(b_i)\in{\mathbb{R}}^m$ and ${\mathbf{z}}=(z_1,\ldots,z_r)^T$ a set of variables. For each $j$, let $A_j$ be the submatrix of $A$ obtained by deleting every row $i$ with $a_{ij}\ne0$ as well as the $j$-th column. Assume that $A{\mathbf{z}}={\mathbf{b}}$ has a solution, $\hbox{\rm{rank}}(A)>1$ and $\hbox{\rm{rank}}(A_j)=\hbox{\rm{rank}}(A)-1$ for all $j$.
Then $A{\mathbf{z}}={\mathbf{b}}$ has a nonnegative solution if and only if $A_j\bar{{\mathbf{z}}}_j=\bar{{\mathbf{b}}}_j$ has a nonnegative solution for $j=1,\ldots,r$, where $\bar{{\mathbf{z}}}_j$ is obtained from ${\mathbf{z}}$ by deleting the variable $z_j$ and $\bar{{\mathbf{b}}}_j$ is obtained from ${\mathbf{b}}$ by deleting the entries $b_i$ with $a_{ij}\ne0$. \end{lemma} \begin{proof} Let $t=\hbox{\rm{rank}}(A)>1$. Then the system of linear equations $A{\mathbf{z}}={\mathbf{b}}$ has $r-t$ free variables. Without loss of generality, let the free variables be $\{z_1,\ldots,z_{r-t}\}$. We can solve $A{\mathbf{z}}={\mathbf{b}}$ for $\{z_{r-t+1},\ldots,z_r\}$ and write $z_{j}=f_{j-r+t}(z_1,\ldots,z_{r-t})$ for $j=r-t+1,\ldots,r$. Then $A{\mathbf{z}}={\mathbf{b}}$ has a nonnegative solution if and only if \begin{equation}\label{npgp-eq6} \{(z_1,\ldots,z_{r-t})\mid z_1\ge0,\ldots,z_{r-t}\ge0,z_{r-t+1}=f_{1}\ge0,\ldots,z_{r}=f_{t}\ge0\} \end{equation} is nonempty. Each of the conditions $z_j\ge0$ for $1\le j\le r-t$ and $f_j\ge0$ for $1\le j\le t$ defines a convex subset of ${\mathbb{R}}^{r-t}$, and there are $r$ such sets with $r>(r-t)+1$ since $t=\hbox{\rm{rank}}(A)>1$. Hence by Corollary \ref{npgp-cor1}, (\ref{npgp-eq6}) is nonempty if and only if \begin{align}\label{npgp-eq7} \begin{split} \{(z_1,\ldots,z_{r-t})\mid &z_1\ge0,\ldots,z_{j-1}\ge0,z_{j+1}\ge0,\ldots,z_{r-t}\ge0,\\ &z_{r-t+1}=f_{1}\ge0,\ldots,z_{r}=f_{t}\ge0\} \end{split} \end{align} is nonempty for $j=1,\ldots,r-t$ and \begin{align}\label{npgp-eq8} \begin{split} \{(z_1,\ldots,z_{r-t})\mid &z_1\ge0,\ldots,z_{r-t}\ge0,z_{r-t+1}=f_{1}\ge0,\ldots,z_{j-1+r-t}=f_{j-1}\ge0,\\ &z_{j+1+r-t}=f_{j+1}\ge0,\ldots,z_{r}=f_{t}\ge0\} \end{split} \end{align} is nonempty for $j=1,\ldots,t$. For $j\in[r-t]$, (\ref{npgp-eq7}) is nonempty if and only if $A{\mathbf{z}}={\mathbf{b}}$ has a solution with $\bar{{\mathbf{z}}}_j\in{\mathbb{R}}_{\ge0}^{r-1}$ and $z_j\in{\mathbb{R}}$, which is equivalent to the condition that $A_j\bar{{\mathbf{z}}}_j=\bar{{\mathbf{b}}}_j$ has a nonnegative solution since $\hbox{\rm{rank}}(A_j)=\hbox{\rm{rank}}(A)-1$.
For $j\in[t]$, (\ref{npgp-eq8}) is nonempty if and only if $\{f_{1}\ge0,\ldots,f_{j-1}\ge0,f_{j+1}\ge0,\ldots,f_{t}\ge0\}$ has a nonnegative solution, which is also equivalent to the condition that $A_{j+r-t}\bar{{\mathbf{z}}}_{j+r-t}=\bar{{\mathbf{b}}}_{j+r-t}$ has a nonnegative solution since $\hbox{\rm{rank}}(A_{j+r-t})=\hbox{\rm{rank}}(A)-1$. Putting all of the above together, we conclude that $A{\mathbf{z}}={\mathbf{b}}$ has a nonnegative solution if and only if $A_j\bar{{\mathbf{z}}}_j=\bar{{\mathbf{b}}}_j$ has a nonnegative solution for $j=1,\ldots,r$, as desired. \end{proof} Lemma \ref{npgp-lm1} needs the assumption that the solution set of $A{\mathbf{z}}={\mathbf{b}}$ is nonempty. We know that the system of linear equations $A{\mathbf{z}}={\mathbf{b}}$ has a solution if and only if ${\mathbf{b}}$ belongs to the image of $A$. For later use, we give a more concrete description of the condition that $A{\mathbf{z}}={\mathbf{b}}$ has a solution. \begin{lemma}\label{npgp-lm2} Let $A=(a_{ij})\in{\mathbb{R}}^{m\times r}$, ${\mathbf{b}}=(b_i)\in{\mathbb{R}}^m$, and let ${\mathbf{z}}=(z_1,\ldots,z_r)^T$, ${\mathbf{y}}=(y_1,\ldots,y_m)^T$ be sets of variables. Let $\{\mathbf{a}_1,\ldots,\mathbf{a}_m\}$ be the set of row vectors of $A$ and let $I=(\mathbf{a}_1{\mathbf{z}}-y_1,\ldots,\mathbf{a}_m{\mathbf{z}}-y_m)\cap{\mathbb{R}}[y_1,\ldots,y_m]$. Assume that $\{\mathbf{c}_1{\mathbf{y}},\ldots,\mathbf{c}_l{\mathbf{y}}\}$ is a set of generators of $I$ and let $C$ be the matrix whose row vectors are $\{\mathbf{c}_1,\ldots,\mathbf{c}_l\}$. Then $\hbox{\rm{rank}}(C)=m-\hbox{\rm{rank}}(A)$ and $A{\mathbf{z}}={\mathbf{b}}$ has a solution if and only if $C{\mathbf{b}}=\mathbf{0}$. \end{lemma} \begin{proof} Observe that $\{\mathbf{c}_1,\ldots,\mathbf{c}_l\}$ generates the linear space of all linear relationships among $\{\mathbf{a}_1,\ldots,\mathbf{a}_m\}$. In other words, $\{\mathbf{c}_1^T,\ldots,\mathbf{c}_l^T\}$ generates the kernel space of $A^T$.
Thus $\hbox{\rm{rank}}(C)=\dim(\ker(A^T))=m-\hbox{\rm{rank}}(A)$. If $C{\mathbf{b}}=\mathbf{0}$, i.e. ${\mathbf{b}}$ is a zero of the elimination ideal $I$, then by the Extension Theorem on p.~125 of \cite{cls}, we can extend ${\mathbf{b}}$ to a zero of the ideal $(\mathbf{a}_1{\mathbf{z}}-y_1,\ldots,\mathbf{a}_m{\mathbf{z}}-y_m)$. Therefore, $A{\mathbf{z}}={\mathbf{b}}$ has a solution. The converse is easy. \end{proof} \begin{lemma}\label{npgp-thm4} Let $f_d=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$ such that ${\boldsymbol{\beta}}\in\hbox{\rm{New}}(f_d)^{\circ}\cap{\mathbb{N}}^n$, $\dim(\hbox{\rm{New}}(f_d))=n$, and let $d^*$ be defined as in (\ref{npgp-eq1}). Then $f_{d^*}\in\hbox{\rm{SONC}}$. \end{lemma} \begin{proof} Without loss of generality, assume $m>n+1$. By Theorem \ref{npgp-thm3}, $f_{d^*}$ has exactly one zero in ${\mathbb{R}}_+^{n}$, which is denoted by ${\mathbf{x}}_*$. Let $$\{\Delta_1,\ldots,\Delta_r\}:=\{\Delta\mid\Delta\textrm{ is a simplex}, {\boldsymbol{\beta}}\in\Delta^{\circ}, V(\Delta)\subseteq\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}\}$$ and $I_k:=\{i\in[m]\mid{\boldsymbol{\alpha}}_i\in V(\Delta_k)\}$ for $k=1,\ldots,r$. First, we assume $\dim(\Delta_k)=n$ for all $k$; hence $|I_k|=n+1$ for all $k$. For each $\Delta_k$, since ${\boldsymbol{\beta}}\in\Delta_k^{\circ}$, we can write ${\boldsymbol{\beta}}=\sum_{i\in I_k}\lambda_{ik}{\boldsymbol{\alpha}}_i$, where $\sum_{i\in I_k}\lambda_{ik}=1, \lambda_{ik}>0, i\in I_k$. Let us consider the following system of linear equations in the variables $\{c_{ik}\}$ and $\{s_k\}$: \begin{equation}\label{npgp-eq2} \begin{cases} \frac{c_{ik}{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}}{\lambda_{ik}}=s_k, &\textrm{for }i\in I_k,k=1,\ldots,r\\ \sum_{k:\,i\in I_k} c_{ik}=c_i, &\textrm{for }i=1,\ldots,m \end{cases}.
\end{equation} Eliminating $\{c_{ik}\}$ from (\ref{npgp-eq2}), we obtain: \begin{equation}\label{npgp-eq3} \sum_{k:\,i\in I_k}\lambda_{ik}s_k=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}, \quad \textrm{for }i=1,\ldots,m. \end{equation} {\bf Claim}: The linear system (\ref{npgp-eq3}) in the variables $\{s_1,\ldots,s_r\}$ has a nonnegative solution. Denote the coefficient matrix of (\ref{npgp-eq3}) by $A$. Adding up all of the equations of (\ref{npgp-eq3}), we obtain: \begin{equation}\label{npgp-eq9} \sum_{k=1}^rs_k=\sum_{i=1}^m\sum_{k:\,i\in I_k}\lambda_{ik}s_k=\sum_{i=1}^mc_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}. \end{equation} Multiplying the $i$-th equation of (\ref{npgp-eq3}) by ${\boldsymbol{\alpha}}_i$ and adding them all up, we obtain: \begin{equation}\label{npgp-eq10} {\boldsymbol{\beta}}\sum_{k=1}^rs_k=\sum_{i=1}^m\sum_{k:\,i\in I_k}\lambda_{ik}{\boldsymbol{\alpha}}_is_k=\sum_{i=1}^mc_i{\boldsymbol{\alpha}}_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}. \end{equation} Subtracting (\ref{npgp-eq9}) multiplied by ${\boldsymbol{\beta}}$ from (\ref{npgp-eq10}) gives \begin{equation}\label{npgp-eq11} \sum_{i=1}^mc_i({\boldsymbol{\alpha}}_i-{\boldsymbol{\beta}}){\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}=\mathbf{0}. \end{equation} By (\ref{sec1-eq5}) in the proof of Theorem \ref{npgp-thm3}, $\{c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}\}_{i=1}^m$ indeed satisfies (\ref{npgp-eq11}). Thus by Lemma \ref{npgp-lm2}, (\ref{npgp-eq3}) has a solution. Moreover, since ${\boldsymbol{\beta}}\in\hbox{\rm{New}}(f_d)^{\circ}$ and $\dim(\hbox{\rm{New}}(f_d))=n$, we have $\hbox{\rm{rank}}(\{{\boldsymbol{\alpha}}_i-{\boldsymbol{\beta}}\}_{i=1}^m)=n$. Thus $\hbox{\rm{rank}}(A)=m-n>1$. For each $j$, denote the coefficient matrix of \begin{equation}\label{npgp-eq4} \{\sum_{k:\,i\in I_k}\lambda_{ik}s_k=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}\mid i\notin I_j\} \end{equation} by $A_j$. Note that \eqref{npgp-eq4} is obtained from \eqref{npgp-eq3} by removing the equations involving the variable $s_j$.
For every $i\notin I_j$, since ${\boldsymbol{\beta}}\in\Delta_j^{\circ}$, there exists a facet $F$ of $\Delta_j$ such that ${\boldsymbol{\beta}}\in\hbox{\rm{conv}}(V(F)\cup\{{\boldsymbol{\alpha}}_i\})^{\circ}$. Assume $\hbox{\rm{conv}}(V(F)\cup\{{\boldsymbol{\alpha}}_i\})=\Delta_{p_i}$. Setting $s_k=0$ in (\ref{npgp-eq4}) for every $k\notin\{j\}\cup\bigcup_{i\notin I_j}\{p_i\}$, we obtain: \begin{equation}\label{npgp-eq5} \{\lambda_{ip_i}s_{p_i}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}\mid i\notin I_j\}. \end{equation} Thus $\hbox{\rm{rank}}(A_j)=m-|I_j|=m-(n+1)=\hbox{\rm{rank}}(A)-1$. Therefore, by Lemma \ref{npgp-lm1}, in order to prove the claim, we only need to show that the linear system (\ref{npgp-eq4}) in the variables $\{s_1,\ldots,s_r\}\backslash\{s_j\}$ has a nonnegative solution for $j=1,\ldots,r$. Given $j\in[r]$, from (\ref{npgp-eq5}) we have $s_{p_i}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}/\lambda_{ip_i}$ for $i\notin I_j$, and hence \begin{equation} \begin{cases} s_k=0, &\textrm{for }k\notin\{j\}\cup\bigcup_{i\notin I_j}\{p_i\}\\ s_{p_i}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}/\lambda_{ip_i}, &\textrm{for }i\notin I_j \end{cases} \end{equation} is a nonnegative solution of (\ref{npgp-eq4}). Thus the claim is proved. Assume that $\{s_1^*,\ldots,s_r^*\}$ is a nonnegative solution of the system of equations (\ref{npgp-eq3}). Substituting $\{s_1^*,\ldots,s_r^*\}$ into the system of equations (\ref{npgp-eq2}), we have $c_{ik}=\lambda_{ik}s_k^*/{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}$ for $i\in I_k,k=1,\ldots,r$. Let $d_k=s_k^*/{\mathbf{x}}_*^{{\boldsymbol{\beta}}}$ and $f_k=\sum_{i\in I_k}c_{ik}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d_k{\mathbf{x}}^{{\boldsymbol{\beta}}}$ for $k=1,\ldots,r$. Then by (\ref{npgp-eq2}) and Proposition \ref{nc-prop1}, $d_{k}$ is the circuit number of $f_{k}$ and hence $f_k$ is a nonnegative circuit polynomial for all $k$.
By (\ref{npgp-eq2}), $\sum_{k=1}^rd_k{\mathbf{x}}_*^{{\boldsymbol{\beta}}}=\sum_{k=1}^r\sum_{i\in I_k}c_{ik}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}_*=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}_*=d^*{\mathbf{x}}_*^{{\boldsymbol{\beta}}}$. So we have $\sum_{k=1}^rd_k=d^*$. It follows that $f_{d^*}=\sum_{k=1}^rf_k$, as desired. For the case where $\dim(\Delta_k)=n$ fails for some $k$, note that all results above remain valid for ${\boldsymbol{\beta}}\in{\mathbb{R}}^n$. We then perturb ${\boldsymbol{\beta}}$ by a small ${\boldsymbol{\delta}}$ such that $\dim(\Delta_k)=n$ holds for all $k$. Then the new linear system (\ref{npgp-eq3}) for ${\boldsymbol{\beta}}+{\boldsymbol{\delta}}$ has a nonnegative solution. Letting ${\boldsymbol{\delta}}\to\mathbf{0}$, we obtain that (\ref{npgp-eq3}) also has a nonnegative solution for ${\boldsymbol{\beta}}$. Thus the lemma remains true in this case. \end{proof} \begin{theorem}\label{npgp-thm7} Let $f_d=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$ such that ${\boldsymbol{\beta}}\in\hbox{\rm{New}}(f_d)^{\circ}\cap{\mathbb{N}}^n$ and $\dim(\hbox{\rm{New}}(f_d))=n$. Then $f_d\in\hbox{\rm{PSD}}$ if and only if $f_d\in\hbox{\rm{SONC}}$. \end{theorem} \begin{proof} The sufficiency is obvious. Assume that $f_d$ is nonnegative. If ${\boldsymbol{\beta}}\in(2{\mathbb{N}})^n$ and $d<0$, or $d=0$, then $f_d$ is a sum of monomial squares and obviously $f_d\in\hbox{\rm{SONC}}$. If ${\boldsymbol{\beta}}\notin(2{\mathbb{N}})^n$ and $d<0$, then through a variable transformation $x_j\mapsto -x_j$ for some $j$ with $\beta_j$ odd, we can always assume $d>0$. Let $d^*$ be defined as in (\ref{npgp-eq1}). By Lemma \ref{npgp-thm4}, $f_{d^*}\in\hbox{\rm{SONC}}$.
Suppose $f_{d^*}=\sum_{k=1}^r(\sum_{i\in I_k}c_{ik}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d_k{\mathbf{x}}^{{\boldsymbol{\beta}}})$, where $\sum_{i\in I_k}c_{ik}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d_k{\mathbf{x}}^{{\boldsymbol{\beta}}}$ is a circuit polynomial with $d_k$ the corresponding circuit number for all $k$. Since $f_d$ is nonnegative, we have $d\le d^{*}$ by Theorem \ref{npgp-thm3}. Then $f_d=\sum_{k=1}^r(\sum_{i\in I_k}c_{ik}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\frac{d}{d^*}d_k{\mathbf{x}}^{{\boldsymbol{\beta}}})$, where $\sum_{i\in I_k}c_{ik}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\frac{d}{d^*}d_k{\mathbf{x}}^{{\boldsymbol{\beta}}}$ is a nonnegative circuit polynomial for all $k$ by Theorem \ref{nc-thm1}. Thus $f_d\in\hbox{\rm{SONC}}$. \end{proof} \begin{theorem}\label{npgp-thm8} Let $f=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$. Then $f\in\hbox{\rm{PSD}}$ if and only if $f\in\hbox{\rm{SONC}}$. Moreover, let $$\mathscr{F}:=\{\Delta\mid\Delta\textrm{ is a simplex}, {\boldsymbol{\beta}}\in\Delta^{\circ}, V(\Delta)\subseteq\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}\}.$$ If $f\in\hbox{\rm{PSD}}$, then $f$ admits a SONC decomposition as follows: \begin{equation} f=\sum_{\Delta\in\mathscr{F}}f_{\Delta}+\sum_{i\in I}c_i{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}, \end{equation} where $f_{\Delta}$ is a nonnegative circuit polynomial supported on $V(\Delta)\cup\{{\boldsymbol{\beta}}\}$ for each $\Delta$ and $I=\{i\in[m]\mid{\boldsymbol{\alpha}}_i\notin\cup_{\Delta\in\mathscr{F}}V(\Delta)\}$. \end{theorem} \begin{proof} It follows easily from Lemma \ref{sec3-lm} and Theorem \ref{npgp-thm7}. \end{proof} \section{Nonnegative Polynomials with Multiple Negative Terms} In this section, we deal with the case of nonnegative polynomials with multiple negative terms. Let $\Delta$ be a polytope of dimension $d$.
For a vertex ${\boldsymbol{\alpha}}$ of $\Delta$, we say that $\Delta$ is {\em simple} at ${\boldsymbol{\alpha}}$ if ${\boldsymbol{\alpha}}$ is the intersection of precisely $d$ edges. \begin{theorem}\label{npmt-thm1} Let $f=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\sum_{j=1}^ld_j{\mathbf{x}}^{{\boldsymbol{\beta}}_j}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$, ${\boldsymbol{\beta}}_j\in\hbox{\rm{New}}(f)^{\circ}\cap{\mathbb{N}}^n,j=1,\ldots,l$. Assume that $\hbox{\rm{New}}(f)$ is simple at some vertex, that all of the ${\boldsymbol{\beta}}_j$'s lie on the same side of every hyperplane determined by points among $\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}$, and that there exists a point ${\boldsymbol{v}}=(v_k)\in({\mathbb{R}}^*)^n$ such that $d_j{\boldsymbol{v}}^{{\boldsymbol{\beta}}_j}>0$ for all $j$. Then $f\in\hbox{\rm{PSD}}$ if and only if $f\in\hbox{\rm{SONC}}$. \end{theorem} \begin{proof} Without loss of generality, assume $\dim(\hbox{\rm{New}}(f))=n$ and $m>n+1$. The sufficiency is obvious. Suppose $f$ is nonnegative. After a variable transformation $x_k\mapsto -x_k$ for all $k$ with $v_k<0$, we can assume $d_j>0$ for all $j$. Let \begin{equation}\label{npmt-eq9} d_l^*\triangleq\sup\{\tilde{d}_l\in{\mathbb{R}}\mid \tilde{f}=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\sum_{j=1}^{l-1}d_j{\mathbf{x}}^{{\boldsymbol{\beta}}_j}-\tilde{d}_l{\mathbf{x}}^{{\boldsymbol{\beta}}_l}\in\hbox{\rm{PSD}}\}. \end{equation} Note that $d_l^*$ is well-defined since the set in (\ref{npmt-eq9}) is nonempty and bounded above. Let $f^*=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\sum_{j=1}^{l-1}d_j{\mathbf{x}}^{{\boldsymbol{\beta}}_j}-d_l^*{\mathbf{x}}^{{\boldsymbol{\beta}}_l}$. Then $f^*$ has a zero in ${\mathbb{R}}_+^{n}$ (\cite[Lemma 4.6]{wang}), which is denoted by ${\mathbf{x}}_*$.
The condition that all of the ${\boldsymbol{\beta}}_j$'s lie on the same side of every hyperplane determined by points among $\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}$ implies that if a simplex $\Delta$ with vertices coming from $\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}$ contains some ${\boldsymbol{\beta}}_j$ in its interior, then $\dim(\Delta)=n$ and $\Delta$ contains all of the ${\boldsymbol{\beta}}_j$, $j=1,\ldots,l$. Let $$\{\Delta_1,\ldots,\Delta_r\}:=\{\Delta\mid\Delta\textrm{ is a simplex}, {\boldsymbol{\beta}}_j\in\Delta^{\circ}, j\in[l], V(\Delta)\subseteq\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}\}$$ and $I_k:=\{i\in[m]\mid{\boldsymbol{\alpha}}_i\in V(\Delta_k)\}$ for $k=1,\ldots,r$. Then $\dim(\Delta_k)=n$ for all $k$. For every ${\boldsymbol{\beta}}_j$ and every $\Delta_k$, since ${\boldsymbol{\beta}}_j\in\Delta_k^{\circ}$, we can write ${\boldsymbol{\beta}}_j=\sum_{i\in I_k}\lambda_{ijk}{\boldsymbol{\alpha}}_i$, where $\sum_{i\in I_k}\lambda_{ijk}=1, \lambda_{ijk}>0, i\in I_k$. Let us consider the following system of linear equations in the variables $\{c_{ijk}\}$, $\{d_{jk}\}$ and $\{s_{jk}\}$: \begin{equation}\label{npmt-eq1} \begin{cases} \frac{c_{ijk}{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}}{\lambda_{ijk}}=d_{jk}{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j}=s_{jk}, &\textrm{for }i\in I_k,k=1,\ldots,r,j=1,\ldots,l\\ \sum_{k=1}^rd_{jk}=d_j, &\textrm{for }j=1,\ldots,l-1\\ \sum_{k=1}^rd_{lk}=d_l^*, \\ \sum_{j=1}^l\sum_{k:\,i\in I_k} c_{ijk}=c_i, &\textrm{for }i=1,\ldots,m \end{cases}. \end{equation} Eliminating $\{c_{ijk}\}$ and $\{d_{jk}\}$ from (\ref{npmt-eq1}), we obtain: \begin{equation}\label{npmt-eq2} \begin{cases} \sum_{j=1}^l\sum_{k:\,i\in I_k}\lambda_{ijk}s_{jk}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}, &\textrm{for }i=1,\ldots,m\\ \sum_{k=1}^rs_{jk}=d_j{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j}, &\textrm{for }j=1,\ldots,l-1\\ \sum_{k=1}^rs_{lk}=d_l^*{\mathbf{x}}_*^{{\boldsymbol{\beta}}_l} \end{cases}.
\end{equation} {\bf Claim}: The linear system (\ref{npmt-eq2}) in the variables $\{s_{jk}\}$ has a nonnegative solution. Denote the coefficient matrix of (\ref{npmt-eq2}) by $A$. Adding up all of the equations of the first part of (\ref{npmt-eq2}), we obtain: \begin{equation}\label{npmt-eq6} \sum_{j=1}^l\sum_{k=1}^rs_{jk}=\sum_{i=1}^m\sum_{j=1}^l\sum_{k:\,i\in I_k}\lambda_{ijk}s_{jk}=\sum_{i=1}^mc_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}. \end{equation} Multiplying the $i$-th equation of the first part of (\ref{npmt-eq2}) by ${\boldsymbol{\alpha}}_i$ and then adding up all of them, we obtain: \begin{equation}\label{npmt-eq7} \sum_{j=1}^l{\boldsymbol{\beta}}_j\sum_{k=1}^rs_{jk}=\sum_{i=1}^m\sum_{j=1}^l\sum_{k:\,i\in I_k}\lambda_{ijk}{\boldsymbol{\alpha}}_is_{jk}=\sum_{i=1}^mc_i{\boldsymbol{\alpha}}_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}. \end{equation} Substituting the second and the third parts of (\ref{npmt-eq2}) into (\ref{npmt-eq6}) and (\ref{npmt-eq7}), we obtain: \begin{equation}\label{npmt-eq8} \begin{cases} \sum_{j=1}^{l-1}d_j{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j}+d_l^*{\mathbf{x}}_*^{{\boldsymbol{\beta}}_l}=\sum_{i=1}^mc_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}\\ \sum_{j=1}^{l-1}d_j{\boldsymbol{\beta}}_j{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j}+d_l^*{\boldsymbol{\beta}}_l{\mathbf{x}}_*^{{\boldsymbol{\beta}}_l}=\sum_{i=1}^mc_i{\boldsymbol{\alpha}}_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i} \end{cases}. \end{equation} The zero ${\mathbf{x}}_*$ of $f^*$ is also a minimizer of $f^*$, so it must satisfy the equations $\{f^*({\mathbf{x}}_*)=0,\nabla f^*({\mathbf{x}}_*)=\mathbf{0}\}$, which are equivalent to (\ref{npmt-eq8}). Thus by Lemma \ref{npgp-lm2}, (\ref{npmt-eq2}) has a solution in the variables $\{s_{jk}\}$. Moreover, since $\dim(\Delta_1)=n$, the volume of $\Delta_1$, which equals $\frac{1}{n!}|\det(\{\begin{pmatrix}1\\ {\boldsymbol{\alpha}}_i\end{pmatrix}\}_{i\in I_1})|$, is nonzero. 
It follows that $$\hbox{\rm{rank}}(\{\begin{pmatrix}1\\ {\boldsymbol{\alpha}}_1\end{pmatrix},\ldots,\begin{pmatrix}1\\ {\boldsymbol{\alpha}}_m\end{pmatrix},\begin{pmatrix}-1\\ -{\boldsymbol{\beta}}_1\end{pmatrix},\ldots,\begin{pmatrix}-1\\ -{\boldsymbol{\beta}}_l\end{pmatrix}\})=n+1$$ and hence by Lemma \ref{npgp-lm2}, $\hbox{\rm{rank}}(A)=m+l-(n+1)>1$. For every $u\in[l]$ and every $v\in[r]$, denote the coefficient matrix of \begin{equation}\label{npmt-eq3} \begin{cases} \sum_{j=1}^l\sum_{k:\,i\in I_k}\lambda_{ijk}s_{jk}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}, &\textrm{for }i\notin I_v\\ \sum_{k=1}^rs_{jk}=d_j{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j},&\textrm{for }j\ne u,l\\ \sum_{k=1}^rs_{lk}=d_l^*{\mathbf{x}}_*^{{\boldsymbol{\beta}}_l},&\textrm{if }u\ne l \end{cases} \end{equation} by $A_{uv}$. Note that \eqref{npmt-eq3} is obtained from \eqref{npmt-eq2} by removing the equations involving the variable $s_{uv}$. For every $i\notin I_v$, since ${\boldsymbol{\beta}}_u\in\Delta_v^{\circ}$, there exists a facet $F$ of $\Delta_v$ such that ${\boldsymbol{\beta}}_u\in\hbox{\rm{conv}}(V(F)\cup\{{\boldsymbol{\alpha}}_i\})^{\circ}$. Assume $\hbox{\rm{conv}}(V(F)\cup\{{\boldsymbol{\alpha}}_i\})=\Delta_{p_i}$. Setting $s_{jk}=0$ in (\ref{npmt-eq3}) for $j=u,k\notin\cup_{i\notin I_v}\{p_i\}$ or $j\ne u,k\ne v$, we obtain: \begin{equation}\label{npmt-eq4} \begin{cases} \lambda_{iup_i}s_{up_i}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}, &\textrm{for }i\notin I_v\\ s_{jv}=d_j{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j}, &\textrm{for }j\ne u,l\\ s_{lv}=d_l^*{\mathbf{x}}_*^{{\boldsymbol{\beta}}_l},&\textrm{if }u\ne l \end{cases}. \end{equation} Thus $\hbox{\rm{rank}}(A_{uv})=m-|I_v|+l-1=m-(n+1)+l-1=\hbox{\rm{rank}}(A)-1$. Therefore by Lemma \ref{npgp-lm1}, in order to prove the claim, we only need to show that the linear system (\ref{npmt-eq3}) in the variables $\{s_{jk}\}_{j,k}\backslash\{s_{uv}\}$ has a nonnegative solution for all $u\in[l]$ and all $v\in[r]$. 
Given $v\in[r]$, from (\ref{npmt-eq4}) we have $s_{up_i}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}/\lambda_{iup_i}$ for $i\notin I_v$, $s_{jv}=d_j{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j}$ for $j\ne u,l$, and $s_{lv}=d_l^*{\mathbf{x}}_*^{{\boldsymbol{\beta}}_l}$. Hence \begin{equation} \begin{cases} s_{jk}=0, &\textrm{for }j=u,k\notin\cup_{i\notin I_v}\{p_i\} \textrm{ or }j\ne u,k\ne v\\ s_{up_i}=c_i{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}/\lambda_{iup_i}, &\textrm{for }i\notin I_v\\ s_{jv}=d_j{\mathbf{x}}_*^{{\boldsymbol{\beta}}_j}, &\textrm{for }j\ne u,l\\ s_{lv}=d_l^*{\mathbf{x}}_*^{{\boldsymbol{\beta}}_l},&\textrm{if }u\ne l \end{cases} \end{equation} is a nonnegative solution for (\ref{npmt-eq3}). So the claim is proved. Assume that $\{s_{jk}^*\}_{j,k}$ is a nonnegative solution for the system of equations (\ref{npmt-eq2}). Substitute $\{s_{jk}^*\}_{j,k}$ into the system of equations (\ref{npmt-eq1}), and we have $c_{ijk}=\lambda_{ijk}s_{jk}^*/{\mathbf{x}}_*^{{\boldsymbol{\alpha}}_i}$ for $i\in I_k, k=1,\ldots,r,j=1,\ldots,l$. Let $f_{jk}=\sum_{i\in I_k}c_{ijk}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-d_{jk}{\mathbf{x}}^{{\boldsymbol{\beta}}_j}$ for $k=1,\ldots,r,j=1,\ldots,l$. Then by (\ref{npmt-eq1}) and by Proposition \ref{nc-prop1}, $d_{jk}$ is the circuit number of $f_{jk}$ and $f_{jk}$ is a nonnegative circuit polynomial for all $j,k$. By (\ref{npmt-eq1}), we have $f=\sum_{j=1}^{l-1}\sum_{k=1}^rf_{jk}+\sum_{k=1}^r(\sum_{i\in I_k}c_{ilk}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\frac{d_l}{d_l^*}d_{lk}{\mathbf{x}}^{{\boldsymbol{\beta}}_l})$. Since $d_l\le d_l^*$, $\sum_{i\in I_k}c_{ilk}{\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\frac{d_l}{d_l^*}d_{lk}{\mathbf{x}}^{{\boldsymbol{\beta}}_l}$ is a nonnegative circuit polynomial for all $k$ by Theorem \ref{nc-thm1}. Thus $f\in\hbox{\rm{SONC}}$. \end{proof} \begin{example} Let $d^*=\sup\{d\in{\mathbb{R}}_+\mid 1+x^6+y^6+x^6y^6-x^2y-dx^4y\in\hbox{\rm{PSD}}\}$ and $f=1+x^6+y^6+x^6y^6-x^2y-d^*x^4y$. 
We have $(2,1)=\frac{1}{6}(6,6)+\frac{1}{6}(6,0)+\frac{2}{3}(0,0)=\frac{1}{3}(6,0)+\frac{1}{6}(0,6)+\frac{1}{2}(0,0)$, and $(4,1)=\frac{1}{6}(6,6)+\frac{1}{2}(6,0)+\frac{1}{3}(0,0)=\frac{2}{3}(6,0)+\frac{1}{6}(0,6)+\frac{1}{6}(0,0)$. \begin{center} \begin{tikzpicture} \draw (0,0)--(0,3); \draw (0,0)--(3,0); \draw (3,0)--(3,3); \draw (0,3)--(3,3); \draw (0,0)--(3,3); \draw (0,3)--(3,0); \fill (0,0) circle (2pt); \node[below left] (1) at (0,0) {$1$}; \fill (3,0) circle (2pt); \node[below right] (2) at (3,0) {$x^6$}; \fill (0,3) circle (2pt); \node[above left] (3) at (0,3) {$y^6$}; \fill (3,3) circle (2pt); \node[above right] (4) at (3,3) {$x^6y^6$}; \fill (1,1/2) circle (2pt); \node[below] (5) at (1,1/2) {$x^2y$}; \fill (2,1/2) circle (2pt); \node[below] (6) at (2,1/2) {$x^4y$}; \end{tikzpicture} \end{center} The system of equations $\{f=0,\nabla f=\mathbf{0}\}$ has exactly one solution $(x_*\approx1.04521,y_*\approx0.764724,d^*\approx2.11373)$ with $(x_*,y_*)\in{\mathbb{R}}_+^{2}$. The following linear system \begin{equation}\label{npmt-eq12} \begin{cases} 1=\frac{2}{3}s_{11}+\frac{1}{2}s_{12}+\frac{1}{3}s_{21}+\frac{1}{6}s_{22}\\ x_*^6=\frac{1}{6}s_{11}+\frac{1}{3}s_{12}+\frac{1}{2}s_{21}+\frac{2}{3}s_{22}\\ y_*^6=\frac{1}{6}s_{12}+\frac{1}{6}s_{22}\\ x_*^6y_*^6=\frac{1}{6}s_{11}+\frac{1}{6}s_{21}\\ x_*^2y_*=s_{11}+s_{12}\\ d^*x_*^4y_*=s_{21}+s_{22} \end{cases} \end{equation} in the variables $\{s_{11},s_{12},s_{21},s_{22}\}$ has a nonnegative solution $(s_{11}\approx0.835429,s_{12}=0,s_{21}\approx0.729142,s_{22}=1.2)$. Thus from the proof of Theorem \ref{npmt-thm1}, we obtain a SONC decomposition of $f$, which is $f\approx(0.556953+0.106793x^6+0.533967x^6y^6-x^2y)+(0.243047+0.27962x^6+0.466033x^6y^6-0.798909x^4y)+(0.2+0.613587x^6+y^6-1.31482x^4y)$. 
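The numerical data above are easy to check by direct evaluation. The following plain-Python sketch (the approximate values of $x_*,y_*,d^*$ and the $s_{jk}$ are taken verbatim from this example, so all comparisons are up to rounding) verifies that $(x_*,y_*)$ is a zero of $f$ and that the stated values solve the linear system:

```python
# Sanity check of the example's numerical data (plain Python; the approximate
# values of x_*, y_*, d^* and the s_{jk} are taken verbatim from the text,
# so all comparisons are up to rounding).
x, y, d = 1.04521, 0.764724, 2.11373
s11, s12, s21, s22 = 0.835429, 0.0, 0.729142, 1.2

# (x_*, y_*) is (approximately) a zero of f = 1 + x^6 + y^6 + x^6y^6 - x^2y - d^*x^4y.
f = 1 + x**6 + y**6 + x**6 * y**6 - x**2 * y - d * x**4 * y
assert abs(f) < 1e-4

# Each row of the linear system: (left-hand side, right-hand side).
rows = [
    (1.0,          (2/3)*s11 + (1/2)*s12 + (1/3)*s21 + (1/6)*s22),
    (x**6,         (1/6)*s11 + (1/3)*s12 + (1/2)*s21 + (2/3)*s22),
    (y**6,         (1/6)*s12 + (1/6)*s22),
    (x**6 * y**6,  (1/6)*s11 + (1/6)*s21),
    (x**2 * y,     s11 + s12),
    (d * x**4 * y, s21 + s22),
]
assert max(abs(lhs - rhs) for lhs, rhs in rows) < 1e-3
```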
\end{example} \begin{corollary}\label{npmt-cor1} Let $f=\sum_{i=1}^mc_i {\mathbf{x}}^{{\boldsymbol{\alpha}}_i}-\sum_{j=1}^ld_j{\mathbf{x}}^{{\boldsymbol{\beta}}_j}\in{\mathbb{R}}[{\mathbf{x}}]$ with ${\boldsymbol{\alpha}}_i\in(2{\mathbb{N}})^n,c_i>0,i=1,\ldots,m$, ${\boldsymbol{\beta}}_j\in\hbox{\rm{New}}(f)^{\circ}\cap{\mathbb{N}}^n,d_j>0,j=1,\ldots,l$ and $\dim(\hbox{\rm{New}}(f))=n$. Assume that $f$ is nonnegative and has a zero, $\hbox{\rm{New}}(f)$ is simple at some vertex, and all of the ${\boldsymbol{\beta}}_j$'s lie on the same side of every hyperplane determined by points among $\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}$. Then $f$ has exactly one zero in ${\mathbb{R}}_+^{n}$. \end{corollary} \begin{proof} By Theorem \ref{npmt-thm1}, $f\in\hbox{\rm{SONC}}$. Suppose $f=\sum_{k=1}^rf_k$, where $f_k$ is a nonnegative circuit polynomial for all $k$. Let ${\mathbf{x}}$ be a zero of $f$. Then we have $f_k({\mathbf{x}})=0$ for all $k$. By Proposition \ref{nc-prop1}, $f_k(|{\mathbf{x}}|)=0$ and $|{\mathbf{x}}|$ is the only zero of $f_k$ in ${\mathbb{R}}_+^{n}$ for all $k$. Hence $|{\mathbf{x}}|$ is the only zero of $f$ in ${\mathbb{R}}_+^{n}$. \end{proof} \section{SONC Decompositions Preserve Sparsity} For a polynomial $f\in{\mathbb{R}}[{\mathbf{x}}]$, let $\Lambda(f):=\{{\boldsymbol{\alpha}}\in\hbox{\rm{supp}}(f)\mid{\boldsymbol{\alpha}}\in(2{\mathbb{N}})^n\textrm{ and }c_{{\boldsymbol{\alpha}}}>0\}$ and $\Gamma(f):=\hbox{\rm{supp}}(f)\backslash\Lambda(f)$. Then we can write $f=\sum_{{\boldsymbol{\alpha}}\in\Lambda(f)}c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-\sum_{{\boldsymbol{\beta}}\in\Gamma(f)}d_{{\boldsymbol{\beta}}}{\mathbf{x}}^{{\boldsymbol{\beta}}}$ with $c_{{\boldsymbol{\alpha}}}>0$. Assume $V(\hbox{\rm{New}}(f))\subseteq\Lambda(f)$. 
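As a quick illustration of this notation, here is a plain-Python sketch of the support split $\hbox{\rm{supp}}(f)=\Lambda(f)\cup\Gamma(f)$; encoding a polynomial as a dict from exponent tuples to coefficients is our own convention, not from the text.

```python
# A sketch of the support split supp(f) = Lambda(f) u Gamma(f) defined above;
# a polynomial is encoded as a dict mapping exponent tuples to coefficients
# (our own convention).
def split_support(f):
    Lam = {a for a, c in f.items() if all(e % 2 == 0 for e in a) and c > 0}
    Gam = set(f) - Lam
    return Lam, Gam

# The univariate polynomial 1 + 4x^2 + x^4 - 3x - 3x^3 (it reappears in a
# later example): Lambda = {0, 2, 4}, Gamma = {1, 3}.
f = {(0,): 1, (2,): 4, (4,): 1, (1,): -3, (3,): -3}
Lam, Gam = split_support(f)
assert Lam == {(0,), (2,), (4,)} and Gam == {(1,), (3,)}
```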
For every ${\boldsymbol{\beta}}\in\Gamma(f)$, let \begin{equation} \mathscr{F}({\boldsymbol{\beta}}):=\{\Delta\mid\Delta\textrm{ is a simplex, } {\boldsymbol{\beta}}\in\Delta^{\circ}, V(\Delta)\subseteq\Lambda(f)\}. \end{equation} Consider the following SONC decomposition for a nonnegative polynomial $f$: \begin{equation}\label{sec5-eq} f=\sum_{{\boldsymbol{\beta}}\in\Gamma(f)}\sum_{\Delta\in\mathscr{F}({\boldsymbol{\beta}})}f_{{\boldsymbol{\beta}}\Delta}+\sum_{{\boldsymbol{\alpha}}\in\tilde{{\mathscr{A}}}}c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}, \end{equation} where $f_{{\boldsymbol{\beta}}\Delta}$ is a nonnegative circuit polynomial supported on $V(\Delta)\cup\{{\boldsymbol{\beta}}\}$ for each $\Delta$ and $\tilde{{\mathscr{A}}}=\{{\boldsymbol{\alpha}}\in\Lambda(f)\mid{\boldsymbol{\alpha}}\notin\cup_{{\boldsymbol{\beta}}\in\Gamma(f)}\cup_{\Delta\in\mathscr{F}({\boldsymbol{\beta}})}V(\Delta)\}$. If $f$ admits a SONC decomposition of the form \eqref{sec5-eq}, then we say that $f$ decomposes into a {\em sum of nonnegative circuit polynomials with the same support}. In Theorem \ref{npgp-thm7} and Theorem \ref{npmt-thm1}, we have seen that nonnegative polynomials satisfying certain conditions decompose into sums of nonnegative circuit polynomials with the same support. We shall prove that in fact every SONC polynomial decomposes into a sum of nonnegative circuit polynomials with the same support. For the proof, we first recall a connection between nonnegative circuit polynomials and sums of binomial squares (SBS). \subsection{Nonnegative Circuit Polynomials and Sums of Binomial Squares} We call a lattice point {\em even} if it is in $(2{\mathbb{N}})^n$. For a subset $M\subseteq{\mathbb{N}}^n$, define $\overline{A}(M):=\{\frac{1}{2}({\boldsymbol{u}}+{\boldsymbol{v}})\mid{\boldsymbol{u}}\ne{\boldsymbol{v}},{\boldsymbol{u}},{\boldsymbol{v}}\in M\cap(2{\mathbb{N}})^n\}$ as the set of averages of distinct even points in $M$. 
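As a concrete illustration of the operator $\overline{A}$ (and of the mediated-set condition ${\mathscr{A}}\subseteq M\subseteq\overline{A}(M)\cup{\mathscr{A}}$ recalled just below), here is a small plain-Python sketch; the one-dimensional example with ${\mathscr{A}}=\{0,4\}$ is our own choice, not from the text.

```python
# A sketch of the averaging operator bar{A}(M) and of the A-mediated-set test;
# lattice points are integer tuples, and the 1-D example is our own.
from itertools import combinations

def is_even(p):
    """A lattice point is even if all of its coordinates are even."""
    return all(c % 2 == 0 for c in p)

def average_set(M):
    """bar{A}(M): averages of distinct even points of M (always lattice points)."""
    evens = [p for p in M if is_even(p)]
    return {tuple((a + b) // 2 for a, b in zip(u, v))
            for u, v in combinations(evens, 2)}

def is_mediated(A, M):
    """M is an A-mediated set if A <= M <= bar{A}(M) union A."""
    return set(A) <= set(M) <= (average_set(M) | set(A))

# With A = {0, 4} and M = {0, 2, 4}: bar{A}(M) = {1, 2, 3}, so M is A-mediated;
# accordingly, e.g., x^4 - 2x^2 + 1 = (x^2 - 1)^2 is a binomial square.
assert is_mediated([(0,), (4,)], [(0,), (2,), (4,)])
# M = {0, 1, 4} is not: 1 is not an average of distinct even points of M.
assert not is_mediated([(0,), (4,)], [(0,), (1,), (4,)])
```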
For a trellis ${\mathscr{A}}$, we say that $M$ is an {\em ${\mathscr{A}}$-mediated set} if ${\mathscr{A}}\subseteq M\subseteq\overline{A}(M)\cup{\mathscr{A}}$ \cite{re1}. It turns out that the question of whether a nonnegative circuit polynomial is an SOS polynomial is closely related to ${\mathscr{A}}$-mediated sets. \begin{theorem}\label{sec3-thm1} Let $f=\sum_{{\boldsymbol{\alpha}}\in{\mathscr{A}}} c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}], d\ne0$ be a nonnegative circuit polynomial with ${\boldsymbol{\beta}}\in\hbox{\rm{New}}(f)^{\circ}$. If ${\boldsymbol{\beta}}$ belongs to an ${\mathscr{A}}$-mediated set $M$, then $f$ is a sum of binomial squares, i.e., $f=\sum_{2{\boldsymbol{u}},2{\boldsymbol{v}}\in M}(a_{{\boldsymbol{u}}}{\mathbf{x}}^{{\boldsymbol{u}}}-b_{{\boldsymbol{v}}}{\mathbf{x}}^{{\boldsymbol{v}}})^2$ for some $a_{{\boldsymbol{u}}},b_{{\boldsymbol{v}}}\in{\mathbb{R}}$. \end{theorem} \begin{proof} The proof can be easily derived from Theorem 5.2 in \cite{iw} and Theorem 4.4 in \cite{re1}. \end{proof} Mediated sets were first studied by Reznick in \cite{re1}. For a trellis ${\mathscr{A}}$, there is a maximal ${\mathscr{A}}$-mediated set ${\mathscr{A}}^*$ satisfying $\overline{A}({\mathscr{A}})\subseteq{\mathscr{A}}^*\subseteq\hbox{\rm{conv}}({\mathscr{A}})\cap{\mathbb{N}}^n$ which contains every ${\mathscr{A}}$-mediated set. Following \cite{re1}, a trellis ${\mathscr{A}}$ is called an {\em $H$-trellis} if ${\mathscr{A}}^*=\hbox{\rm{conv}}({\mathscr{A}})\cap{\mathbb{N}}^n$. The following theorem states that every trellis becomes an $H$-trellis after being multiplied by a sufficiently large integer. \begin{theorem}(\cite[Theorem 3.5]{po})\label{sec3-prop} Let ${\mathscr{A}}\subseteq{\mathbb{N}}^n$ be a trellis. Then $k{\mathscr{A}}$ is an $H$-trellis for any integer $k\ge n$. 
\end{theorem} From Theorem \ref{sec3-prop} together with Theorem \ref{sec3-thm1}, we know that, for any trellis ${\mathscr{A}}$ and any integer $k\ge n$, every $n$-variate nonnegative circuit polynomial supported on $k{\mathscr{A}}$ together with a lattice point in the interior of $\hbox{\rm{conv}}(k{\mathscr{A}})$ is a sum of binomial squares. \begin{lemma}\label{sec4-lm1} Suppose that $f(x_1,\ldots,x_n)\in{\mathbb{R}}[{\mathbf{x}}]$ is a SONC polynomial. Then $f(x_1^k,\ldots,x_n^k)$ is a sum of binomial squares for any integer $k\ge n$. \end{lemma} \begin{proof} Assume $f=\sum f_i$, where the $f_i$'s are nonnegative circuit polynomials. For any integer $k\ge n$, every $f_i(x_1^k,\ldots,x_n^k)$ is a sum of binomial squares by Theorem \ref{sec3-prop} and Theorem \ref{sec3-thm1}, and hence so is $f(x_1^k,\ldots,x_n^k)$. \end{proof} \subsection{Supports of Sums of Nonnegative Circuit Polynomials} Now we can prove that every SONC polynomial decomposes into a sum of nonnegative circuit polynomials with the same support. The proof will make use of the SBS decompositions for SONC polynomials. Hence we first apply the map $x_i\mapsto x_i^k$ to $f\in{\mathbb{R}}[{\mathbf{x}}]$. \begin{lemma}\label{sec4-lm} Let $f(x_1,\ldots,x_n)\in{\mathbb{R}}[{\mathbf{x}}]$ and let $k$ be an odd number. Then $f(x_1,\ldots,x_n)$ decomposes into a sum of nonnegative circuit polynomials with the same support if and only if $f(x_1^k,\ldots,x_n^k)$ decomposes into a sum of nonnegative circuit polynomials with the same support. \end{lemma} \begin{proof} It is immediate from the fact that a polynomial $g(x_1,\ldots,x_n)$ is a nonnegative circuit polynomial if and only if $g(x_1^k,\ldots,x_n^k)$ is a nonnegative circuit polynomial for an odd number $k$. 
\end{proof} If a polynomial $f\in{\mathbb{R}}[{\mathbf{x}}]$ has the form $\sum_{{\boldsymbol{\alpha}}\in\Lambda(f)}c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d{\mathbf{x}}^{{\boldsymbol{\beta}}}$, where either ${\boldsymbol{\beta}}\in(2{\mathbb{N}})^n$ and $d>0$, or ${\boldsymbol{\beta}}\notin(2{\mathbb{N}})^n$, then we call $f$ a {\em banana polynomial}. By Theorem \ref{npgp-thm8}, a nonnegative banana polynomial decomposes into a sum of nonnegative circuit polynomials with the same support. If a polynomial $f$ can be written as $f=\sum_{{\boldsymbol{\beta}}\in\Gamma(f)}(\sum_{{\boldsymbol{\alpha}}\in\Lambda(f)}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}{\mathbf{x}}^{{\boldsymbol{\beta}}})$ such that every $\sum_{{\boldsymbol{\alpha}}\in\Lambda(f)}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}{\mathbf{x}}^{{\boldsymbol{\beta}}}$ is a nonnegative banana polynomial, then we say that $f$ decomposes into a {\em sum of nonnegative banana polynomials with the same support}. \begin{theorem}\label{sec3-thm2} Let $f=\sum_{{\boldsymbol{\alpha}}\in\Lambda(f)}c_{{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-\sum_{{\boldsymbol{\beta}}\in\Gamma(f)}d_{{\boldsymbol{\beta}}}{\mathbf{x}}^{{\boldsymbol{\beta}}}\in{\mathbb{R}}[{\mathbf{x}}]$. If $f\in\hbox{\rm{SONC}}$, then $f$ decomposes into a sum of nonnegative circuit polynomials with the same support, i.e. $f$ admits a SONC decomposition of the form \eqref{sec5-eq}. \end{theorem} \begin{proof} By Lemma \ref{sec4-lm}, we only need to prove the theorem for $f(x_1^{2n+1},\ldots,x_n^{2n+1})$. By Theorem \ref{npgp-thm7}, we finish the proof by showing that $f(x_1^{2n+1},\ldots,x_n^{2n+1})$ is a sum of nonnegative banana polynomials with the same support. For simplicity, let $h=f(x_1^{2n+1},\ldots,x_n^{2n+1})$. 
By Lemma \ref{sec4-lm1}, we can write $h$ as a sum of binomial squares, i.e., $h=\sum_{i=1}^m(a_i{\mathbf{x}}^{{\boldsymbol{u}}_i}-b_i{\mathbf{x}}^{{\boldsymbol{v}}_i})^2$. We proceed by induction on $m$. When $m=1$, $h=(a_1{\mathbf{x}}^{{\boldsymbol{u}}_1}-b_1{\mathbf{x}}^{{\boldsymbol{v}}_1})^2=a_1^2{\mathbf{x}}^{2{\boldsymbol{u}}_1}+b_1^2{\mathbf{x}}^{2{\boldsymbol{v}}_1}-2a_1b_1{\mathbf{x}}^{{\boldsymbol{u}}_1+{\boldsymbol{v}}_1}$ and the conclusion is obvious. Assume that the conclusion holds for $m-1$, and consider the case of $m$. Without loss of generality, assume ${\boldsymbol{u}}_m+{\boldsymbol{v}}_m\in\Gamma(h)$. Let $h'=\sum_{i=1}^{m-1}(a_i{\mathbf{x}}^{{\boldsymbol{u}}_i}-b_i{\mathbf{x}}^{{\boldsymbol{v}}_i})^2=\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}}-\sum_{{\boldsymbol{\beta}}\in\Gamma(h')}d_{{\boldsymbol{\beta}}}'{\mathbf{x}}^{{\boldsymbol{\beta}}}$. By the induction hypothesis, we can write $h'=\sum_{{\boldsymbol{\beta}}\in\Gamma(h')}(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}'{\mathbf{x}}^{{\boldsymbol{\beta}}})$ as a sum of nonnegative banana polynomials with the same support. Then \begin{equation}\label{sec3-eq10} h=\sum_{{\boldsymbol{\beta}}\in\Gamma(h')}(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}'{\mathbf{x}}^{{\boldsymbol{\beta}}})+(a_m{\mathbf{x}}^{{\boldsymbol{u}}_m}-b_m{\mathbf{x}}^{{\boldsymbol{v}}_m})^2. \end{equation} From $h=h'+(a_m{\mathbf{x}}^{{\boldsymbol{u}}_m}-b_m{\mathbf{x}}^{{\boldsymbol{v}}_m})^2$, it follows that $\hbox{\rm{supp}}(h)$ and $\hbox{\rm{supp}}(h')$ can differ only in the three elements $2{\boldsymbol{u}}_m,2{\boldsymbol{v}}_m,{\boldsymbol{u}}_m+{\boldsymbol{v}}_m$. 
We will obtain the expression $h=\sum_{{\boldsymbol{\beta}}\in\Gamma(h)}(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h)}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}{\mathbf{x}}^{{\boldsymbol{\beta}}})$ of $h$ as a sum of nonnegative banana polynomials with the same support from (\ref{sec3-eq10}) by adjusting the terms involving $2{\boldsymbol{u}}_m,2{\boldsymbol{v}}_m,{\boldsymbol{u}}_m+{\boldsymbol{v}}_m$ in (\ref{sec3-eq10}). First let us consider the terms involving $2{\boldsymbol{u}}_m$. If $2{\boldsymbol{u}}_m\notin\Gamma(h')$, then there is nothing to do. If $2{\boldsymbol{u}}_m\in\Gamma(h')$ and $2{\boldsymbol{u}}_m\in\Gamma(h)$, then we must have $d_{2{\boldsymbol{u}}_m}'>a_m^2$. By the equality $\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{2{\boldsymbol{u}}_m{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{2{\boldsymbol{u}}_m}'{\mathbf{x}}^{2{\boldsymbol{u}}_m}+a_m^2{\mathbf{x}}^{2{\boldsymbol{u}}_m}+b_m^2{\mathbf{x}}^{2{\boldsymbol{v}}_m}-2a_mb_m{\mathbf{x}}^{{\boldsymbol{u}}_m+{\boldsymbol{v}}_m}=(1-\frac{a_m^2}{d_{2{\boldsymbol{u}}_m}'})(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{2{\boldsymbol{u}}_m{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}} -d_{2{\boldsymbol{u}}_m}'{\mathbf{x}}^{2{\boldsymbol{u}}_m})+\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}\frac{c_{2{\boldsymbol{u}}_m{\boldsymbol{\alpha}}}'a_m^2}{d_{2{\boldsymbol{u}}_m}'}{\mathbf{x}}^{{\boldsymbol{\alpha}}}+b_m^2{\mathbf{x}}^{2{\boldsymbol{v}}_m}-2a_mb_m{\mathbf{x}}^{{\boldsymbol{u}}_m+{\boldsymbol{v}}_m}$, we obtain the expression 
$h=\sum_{{\boldsymbol{\beta}}\in\Gamma(h')\backslash\{2{\boldsymbol{u}}_m\}}(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}'{\mathbf{x}}^{{\boldsymbol{\beta}}})+(1-\frac{a_m^2}{d_{2{\boldsymbol{u}}_m}'})(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{2{\boldsymbol{u}}_m{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}} -d_{2{\boldsymbol{u}}_m}'{\mathbf{x}}^{2{\boldsymbol{u}}_m})+\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}\frac{c_{2{\boldsymbol{u}}_m{\boldsymbol{\alpha}}}'a_m^2}{d_{2{\boldsymbol{u}}_m}'}{\mathbf{x}}^{{\boldsymbol{\alpha}}}+b_m^2{\mathbf{x}}^{2{\boldsymbol{v}}_m}-2a_mb_m{\mathbf{x}}^{{\boldsymbol{u}}_m+{\boldsymbol{v}}_m}$ which is still a sum of nonnegative banana polynomials. If $2{\boldsymbol{u}}_m\in\Gamma(h')$ and $2{\boldsymbol{u}}_m\in\Lambda(h)$, then we must have $a_m^2>d_{2{\boldsymbol{u}}_m}'$ and we can write $h$ as $h=\sum_{{\boldsymbol{\beta}}\in\Gamma(h')\backslash\{2{\boldsymbol{u}}_m\}}(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}'{\mathbf{x}}^{{\boldsymbol{\beta}}})+\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{2{\boldsymbol{u}}_m{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}} +(a_m^2-d_{2{\boldsymbol{u}}_m}'){\mathbf{x}}^{2{\boldsymbol{u}}_m}+b_m^2{\mathbf{x}}^{2{\boldsymbol{v}}_m}-2a_mb_m{\mathbf{x}}^{{\boldsymbol{u}}_m+{\boldsymbol{v}}_m}$ which is still a sum of nonnegative banana polynomials. If $2{\boldsymbol{u}}_m\in\Gamma(h')$ and $2{\boldsymbol{u}}_m\notin\hbox{\rm{supp}}(h)$, then the terms $-d_{2{\boldsymbol{u}}_m}'{\mathbf{x}}^{2{\boldsymbol{u}}_m}$ and $a_m^2{\mathbf{x}}^{2{\boldsymbol{u}}_m}$ must be cancelled in (\ref{sec3-eq10}). 
Hence we obtain the expression of $h$ as $h=\sum_{{\boldsymbol{\beta}}\in\Gamma(h')\backslash\{2{\boldsymbol{u}}_m\}}(\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{{\boldsymbol{\beta}}{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}}-d_{{\boldsymbol{\beta}}}'{\mathbf{x}}^{{\boldsymbol{\beta}}})+\sum_{{\boldsymbol{\alpha}}\in\Lambda(h')}c_{2{\boldsymbol{u}}_m{\boldsymbol{\alpha}}}'{\mathbf{x}}^{{\boldsymbol{\alpha}}} +b_m^2{\mathbf{x}}^{2{\boldsymbol{v}}_m}-2a_mb_m{\mathbf{x}}^{{\boldsymbol{u}}_m+{\boldsymbol{v}}_m}$, which is still a sum of nonnegative banana polynomials. Continuing to adjust the terms involving $2{\boldsymbol{v}}_m$ and ${\boldsymbol{u}}_m+{\boldsymbol{v}}_m$ in the expression of $h$ in a similar way, we eventually write $h$ as a sum of nonnegative banana polynomials with the same support, as desired. \end{proof} \begin{remark} Theorem \ref{sec3-thm2} ensures that every SONC polynomial admits a SONC decomposition that uses only the support of the original polynomial, with no cancellation. This sparsity-preservation property is highly desirable for designing efficient algorithms for sparse polynomial optimization based on SONC decompositions, and it marks a notable difference from SOS decompositions. \end{remark} \begin{remark} Although in the proof of Theorem \ref{sec3-thm2} we use all simplices containing ${\boldsymbol{\beta}}$ as an interior point for each ${\boldsymbol{\beta}}\in\Gamma(f)$, it is possible to obtain a SONC decomposition for $f$ using fewer simplices. In fact, by Theorem \ref{sec3-thm2} together with Carath\'eodory's theorem (\cite[Corollary 17.1.2]{ro}), we can write a SONC polynomial $f$ as a sum of at most $|\hbox{\rm{supp}}(f)|$ nonnegative circuit polynomials. 
\end{remark} Finally, we give an example to illustrate that the condition that all of the ${\boldsymbol{\beta}}_j$'s lie on the same side of every hyperplane determined by points among $\{{\boldsymbol{\alpha}}_1,\ldots,{\boldsymbol{\alpha}}_m\}$ in Theorem \ref{npmt-thm1} cannot be dropped. \begin{example} Let $f=1+4x^2+x^4-3x-3x^3$. Then $f\in\hbox{\rm{PSD}}$, but $f\notin\hbox{\rm{SONC}}$. \end{example} \begin{proof} It is easy to verify that the minimum of $f$ is $0$ with the only minimizer $x_*=1$. We have $1=\frac{1}{2}\cdot0+\frac{1}{2}\cdot2=\frac{3}{4}\cdot0+\frac{1}{4}\cdot4$, and $3=\frac{1}{2}\cdot2+\frac{1}{2}\cdot4=\frac{1}{4}\cdot0+\frac{3}{4}\cdot4$. \begin{center} \begin{tikzpicture} \draw (0,0)--(4,0); \fill (0,0) circle (2pt); \node[above] (1) at (0,0) {$1$}; \fill (1,0) circle (2pt); \node[above] (2) at (1,0) {$x$}; \fill (2,0) circle (2pt); \node[above] (3) at (2,0) {$x^2$}; \fill (3,0) circle (2pt); \node[above] (4) at (3,0) {$x^3$}; \fill (4,0) circle (2pt); \node[above] (5) at (4,0) {$x^4$}; \end{tikzpicture} \end{center} By Theorem \ref{sec3-thm2} and from the proof of Theorem \ref{npmt-thm1}, we have that if $f\in\hbox{\rm{SONC}}$, then the following linear system \begin{equation}\label{npmt-eq5} \begin{cases} 1=\frac{1}{2}s_1+\frac{3}{4}s_2+\frac{1}{4}s_4\\ 4x_*^2=\frac{1}{2}s_1+\frac{1}{2}s_3\\ x_*^4=\frac{1}{4}s_2+\frac{1}{2}s_3+\frac{3}{4}s_4\\ 3x_*=s_1+s_2\\ 3x_*^3=s_3+s_4 \end{cases} \end{equation} in the variables $\{s_1,s_2,s_3,s_4\}$ would have a nonnegative solution. However, (\ref{npmt-eq5}) has no nonnegative solutions. This contradiction implies $f\notin\hbox{\rm{SONC}}$. \end{proof} \bibliographystyle{amsplain}
https://arxiv.org/abs/1409.7890
Using Brouwer's fixed point theorem
Brouwer's fixed point theorem from 1911 is a basic result in topology - with a wealth of combinatorial and geometric consequences. In these lecture notes we present some of them, related to the game of HEX and to the piercing of multiple intervals. We also sketch stronger theorems, due to Oliver and others, and explain their applications to the fascinating (and still not fully solved) evasiveness problem.
\section{The Game of HEX and the Brouwer Fixed Point Theorem} Let's start with a game: ``HEX'' is a board game for two players, invented by the ingenious Danish poet, designer and engineer Piet Hein in 1942 \cite{Gardner}, and rediscovered in 1948 by the mathematician John Nash \cite{Milnor-nash}, who received a Nobel prize in economics in 1994 (for his work on game theory, but not really for this game \ldots). \def\labelboard{\unitlength 0.9cm \begin{picture}(0,0) \put(0.5,3.5){$W$} \put(3.3,0.2){$B$} \put(2.8,6.0){$B'$} \put(5.7,2.6){$W'$} \end{picture}} HEX, in Hein's version, is played on a rhombical board, as depicted in the figure. \[ \labelboard \def\epsfsize#1#2{.36#1}\epsffile{EPS/hexboard4.eps} \] The rules of the game are simple: there are two players, whom we call White and Black. The players alternate, with White going first. Each move consists of coloring one ``grey'' hexagonal tile of the board white resp.\ black. White has to connect the white borders of the board (marked $W$ and $W'$) by a path of his white tiles, while Black tries to connect $B$ and $B'$ by a black path. They can't both win: any winning path for White separates the two black borders, and conversely. (This isn't hard to prove---however, the statement is closely related to the Jordan curve theorem, which is trickier than it may seem when judged at first sight: see Exercise~\ref{exer:jct}.) However, here we concentrate on the opposite statement: there is no draw possible---when the whole board is covered by black and white tiles, then there always is a winner. (This is even true if one of the players has cheated badly and ends up with many more tiles than his/her opponent! It is also true if the board isn't really ``square,'' that is, if it has sides of unequal lengths.) Our next figure depicts a final HEX position---sure enough one of the players has won, and the proof of the following ``HEX theorem'' will give us a systematic method to find out which one. 
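The no-draw claim can also be tested by brute force. The following Python sketch is our own encoding, not part of these notes: cell $(r,c)$ of an $n\times n$ rhombical board is adjacent to $(r\pm1,c)$, $(r,c\pm1)$, $(r-1,c+1)$ and $(r+1,c-1)$; White tries to connect the columns $c=0$ and $c=n-1$, Black the rows $r=0$ and $r=n-1$.

```python
import random
from collections import deque

# Our own encoding of an n x n rhombical HEX board: cell (r, c) is adjacent to
# (r+-1, c), (r, c+-1), (r-1, c+1) and (r+1, c-1).
NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1))

def connects(board, color, by_rows):
    """BFS through tiles of the given color from one border to the opposite one."""
    n = len(board)
    start = [(r, c) for r in range(n) for c in range(n)
             if board[r][c] == color and (r == 0 if by_rows else c == 0)]
    seen, queue = set(start), deque(start)
    while queue:
        r, c = queue.popleft()
        if (r if by_rows else c) == n - 1:
            return True
        for dr, dc in NEIGHBORS:
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and (rr, cc) not in seen \
                    and board[rr][cc] == color:
                seen.add((rr, cc))
                queue.append((rr, cc))
    return False

def winner(board):
    white = connects(board, 'W', by_rows=False)  # connects left and right borders
    black = connects(board, 'B', by_rows=True)   # connects top and bottom borders
    assert white != black  # the HEX theorem: exactly one player has won
    return 'W' if white else 'B'

# No random full coloring ever produces a draw (or two winners):
random.seed(0)
for _ in range(200):
    board = [[random.choice('WB') for _ in range(6)] for _ in range(6)]
    winner(board)
```

The BFS also answers the practical question raised above: given a final position, it tells us which player has won.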
\[ \labelboard \def\epsfsize#1#2{.36#1}\epsffile{EPS/hexend4.eps} \] \begin{Theorem} [The HEX theorem] If every tile of an $(n \times m)$-HEX board is colored black or white, then either there is a path of white tiles that connects the white borders $W$ and $W'$, or there is a path of black tiles that connects the black borders $B$ and $B'$. \end{Theorem} \noindent Our plan for this section is the following: \begin{compactitem}[ $\bullet$] \item We give a simple proof of the HEX theorem. \item We show that it implies the Brouwer fixed point theorem, \item And even conversely: the Brouwer fixed point theorem implies the HEX theorem. \item Then we prove that one of the players has a winning strategy. \item And then we see that on a square board, the first player can win, while on an uneven board, the player with the longer borders has a strategy to win. \end{compactitem} All of this is really quite simple, but it nicely illustrates how a topological theorem enters the analysis of a discrete situation. \proofheader{Proof of the HEX theorem} We follow a certain path \emph{between} the black and white tiles that starts in the lower left-hand corner of the HEX board on the edge that separates $W$ and $B$. Whenever this path reaches a corner of degree three, there will be both colors present at the corner (due to the edge we reach it from), and so there will be a unique edge to proceed on that does have different colors on its two sides. \[ \raisebox{3mm}{\labelboard} \def\epsfsize#1#2{.36#1}\epsffile{EPS/hexproof4.eps} \] Our path can never get stuck or branch or turn back onto itself: otherwise we would have found a vertex that has one or three edges that separate colors, whereas this number clearly has to be even at each vertex. Thus the path can be continued until it leaves the board---that is, until it reaches $W'$ or~$B'$. But that means that we find a path that connects $W$ to~$W'$, or $B$ to $B'$, and on its sides keeps a white path of tiles resp.\ a black path. 
That is, one of White and Black has won! \endproof Now this was easy, and (hopefully) fun. We continue with a re-interpretation of the HEX board---in Nash's version---that buys us two drinks for the price of one: \begin{itemize}\itemsep=-2pt \item[(i)] a $d$-dimensional version of the HEX theorem, and \item[(ii)] the connection to the Brouwer fixed point theorem. \end{itemize} \begin{Definition}[The $d$-dimensional HEX board] The $d$-dimensional \emph{HEX board} is the graph $H(n,d)$ with vertex set $V=\{-1,0,1,\ldots,n,n+1\}^d$, in which two vertices $\vv,\ww\in V$ are connected if $\vv\neq\ww$ and $\vv-\ww\in\{0,1\}^d\cup\{0,-1\}^d$. The \emph{colors} for the $d$-dimensional HEX game are $1,2,\ldots,d$, where we identify {\rm ``$1 =$\,white''} and {\rm ``$2 =$\,black.''} The \emph{interior} of the HEX board is given by $V'=\{0,1,2,\ldots,n\}^d$. All the other vertices, in $V\setminus V'$, form the \emph{boundary} of the board. The vertices in the boundary of $H(n,d)$ get {{preassigned colors}} \[ \kappa(v_1,\ldots,v_d)\assg \begin{cases} \min\{i\sep v_i=-1 \} & \textrm{if this exists},\\ \min\{i\sep v_i=n+1\} & \textrm{otherwise}. \end{cases} \] \end{Definition} \[ \def\epsfsize#1#2{.6#1}\epsffile{EPS/hexgraph5.eps} \] Our drawing depicts the $2$-dimensional HEX board $H(5,2)$, which represents a dual graph for the $(6\times6)$-board that we used in our previous figures, with the preassigned colors on the boundary. The $d$-dimensional HEX game is played between $d$ players who take turns in coloring the interior vertices of $H(n,d)$. The $i$-th player \emph{wins} if he\footnote{Using ``he'' here is not politically correct.} achieves a path of vertices of color $i$ that connects a vertex whose $i$-th coordinate is $-1$ to a vertex whose $i$-th coordinate is $n+1$. \begin{Theorem}[The $d$-dimensional HEX theorem] There is no draw possible for $d$-dimensional HEX: if all interior vertices of $H(n,d)$ are colored, then at least one player has won. 
\end{Theorem} \proof The proof that we used for $2$-dimensional HEX still works: it just has to be properly translated for the new setting. For this we first check that $H(n,d)$ is the graph of a triangulation $\Delta(n,d)$ of $[-1,n+1]^d$, which is given by the \emph{clique complex} of~$H(n,d)$: that is, a set of lattice points $S\subseteq\{-1,0,1,\ldots,n+1\}^d$ forms a simplex in $\Delta(n,d)$ if and only if the points in $S$ are pairwise connected by edges. (To check this, verify that each point $x\in[{-}1,n{+}1]^d$ lies in the relative interior of a unique simplex, which is given by \begin{eqnarray*} \lefteqn{\Delta(\xx)\assg \conv\big\{\vv\in\{{-}1,\ldots,n+1\}^d\sep}\qquad\qquad\qquad\qquad\\ & & \lfloor x_i\rfloor\le v_i\le\lceil x_i\rceil \mbox{\rm~for all $i$,}\\[1mm] & & \lfloor x_i-x_j\rfloor\le v_i-v_j\le\lceil x_i-x_j\rceil \mbox{\rm~for all $i\neq j$}\big\}. \end{eqnarray*} Our picture illustrates the $2$-dimensional case.) Now every full-dimensional simplex in $\Delta(n,d)$ has $d+1$ vertices. A simplex $S$ in $\Delta(n,d)$ is \emph{completely colored} if it has all $d$ colors on its vertices. Among the $d+1$ vertices of a completely colored $d$-simplex, exactly one color appears twice, and omitting either one of these two vertices yields a completely colored facet. Thus each completely colored $d$-simplex in $\Delta(n,d)$ has exactly two completely colored facets, which are $(d-1)$-faces of the complex $\Delta(n,d)$. Conversely, every completely colored $(d-1)$-face is contained in exactly two $d$-simplices---if it is not on the boundary of $[-1,n+1]^d$. With this the (constructive) proof that we gave before for the $2$-dimensional HEX theorem generalizes to the following: we start at the $d$-simplex \[ \Delta_0 \assg \conv\{-\one,-\one+\ee_1,-\one+\ee_1+\ee_2, \ \ldots\ ,-\ee_{d-1}-\ee_d,-\ee_d,\zero\} \] for which the facet ($(d-1)$-face) $\conv\{-\one,-\one+\ee_1,\ldots,-\ee_{d-1}-\ee_d,-\ee_d\}$ is completely colored. This simplex is shaded in the following figure for $H(5,2)$, which depicts the same final position that we considered before.
\[ \def\epsfsize#1#2{.6#1}\epsffile{EPS/hexgraphend4.eps} \] Now we construct a sequence of completely colored $d$-dimensional simplices that starts at~$\Delta_0$: we find the second completely colored $(d-1)$-face of $\Delta_0$, find the second completely colored $d$-simplex it is contained in, etc. Thus we find a chain of completely colored $d$-simplices that ends on the boundary of $[-1,n{+}1]^d$---at a different simplex than the one we started from. In particular, the last $d$-simplex in the chain has a completely colored facet (a $(d-1)$-face) in the boundary, and by construction this facet has to lie in a hyperplane $H^+_i=\{\xx\sep x_i=n+1\}$. (In fact, at this point we check that every completely colored $(d-1)$-simplex in the boundary of $H(n,d)$ is contained in one of the hyperplanes $H^+_i$, with the sole exception of the boundary facet of our starting $d$-simplex.) And the chain of $d$-simplices thus provides us with an $i$-colored path from the $i$-colored vertex \[ -\one + \ee_1+\ldots+\ee_{i-1}\in H^-_i=\{\xx\sep x_i=-1\} \] to the $i$-colored vertex in $H^+_i$: so the $i$-th player wins. \endproof Our drawing illustrates the chain of completely colored simplices (shaded) and the sequence of (white) vertices for the winning path that we get from it. \[ \def\epsfsize#1#2{.6#1}\epsffile{EPS/hexgraphproof4.eps} \] Now we will proceed from the discrete mathematics setting of the HEX game to the continuous world of topological fixed point theorems. Here are three versions of the Brouwer fixed point theorem. \begin{Theorem}[Brouwer fixed point theorem]\label{t:brouwer} The following are equivalent (and true): \begin{compactenum}[ \rm (Br1)] \item Every continuous map $f\: B^d\to B^d$ has a fixed point. \item Every continuous map $f\: B^d\to S^{d-1}$ has a fixed point. \item Every null-homotopic map $f\: S^{d-1}\to S^{d-1}$ has a fixed point. 
\end{compactenum} \[ \def\epsfsize#1#2{.42#1}\epsffile{EPS/brouwer.eps} \] \end{Theorem} (The term \emph{null-homotopic} that appears here refers to a map that can be deformed to a constant map; see the proof below.) \proofheader{Proof of the equivalences} (Br1)$\Longrightarrow$(Br2) is trivial, since $S^{d-1}\sse B^d$. For (Br2)$\Longrightarrow$(Br3) let $h\:S^{d-1}\times[0,1]\to S^{d-1}$ be a null-homotopy for $f$, i.\,e., a continuous map that interpolates between our original map~$f$ and a constant map, with $h(\xx,0)=f(\xx)$ and $h(\xx,1)=\xx_0$ for all $\xx\in S^{d-1}$. From this we construct a continuous map $F\: B^d\to S^{d-1}$ that extends $f$, by \[ F(\xx)\assg \begin{cases} h(\frac{\xx}{| \xx|},2-2| \xx|) &\text{ if } \frac12\le|\xx|\le 1,\\ \xx_0 &\text{ for }|\xx|\le\frac12. \end{cases} \] \[ \input{EPS/brouwer2-3a.pstex_t} \] This map is continuous, and by (Br2) it has a fixed point, which must lie in the image, that is, in~$S^{d-1}$. Since $F$ coincides with $f$ on $S^{d-1}$, this fixed point is also a fixed point of~$f$. For the converse, (Br3)$\Longrightarrow$(Br2), let $f\: B^d\to S^{d-1}$ be continuous. Then the restriction $f|_{S^{d-1}}$ is null-homotopic, since $h(\xx,t)\assg f((1-t)\xx)$ provides a null-homotopy. Thus by (Br3) $f|_{S^{d-1}}$ has a fixed point, hence so does~$f$. Finally, we get (Br2)$\Longrightarrow$(Br1): if $f\: B^d\to B^d$ has no fixed point, then we set $g(\xx)\assg\frac{f(\xx)-\xx}{| f(\xx)-\xx|}$. This defines a map $g\:B^d\to S^{d-1}$ that has a fixed point $\xx_0\in S^{d-1}$ by (Br2), with $\xx_0=\frac{f(\xx_0)-\xx_0}{| f(\xx_0)-\xx_0|}$. But this implies $f(\xx_0) =\xx_0(1+t)$ for $t\assg{| f(\xx_0)-\xx_0|}>0$, and this is impossible for $\xx_0\in S^{d-1}$, since then $| f(\xx_0)| =1+t>1$, while $f$ maps into~$B^d$. \endproof In the following we use the unit cube $[0,1]^d$ instead of the ball~$B^d$: it should be clear that the Brouwer fixed point theorem equally applies to self-maps of any domain $D$ that is homeomorphic to $B^d$, resp.\ of the boundary $\partial D$ of such a domain.
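As a small computational aside, the $2$-dimensional no-draw statement can be checked exhaustively on tiny boards. The sketch below (the board encoding and the function names are ours) takes the interior $\{0,\ldots,n\}^2$ of $H(n,2)$ with the adjacency from the definition of the HEX board, and verifies that in every full coloring, White connects the first coordinate from $0$ to $n$, or Black connects the second:

```python
from itertools import product
from collections import deque

# Neighbors in H(n,2): u,v are adjacent iff u-v lies in {0,1}^2 ∪ {0,-1}^2, u != v
STEPS = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)]

def connects(tiles, n, axis):
    # BFS: is there a path inside `tiles` from coordinate `axis` = 0 to `axis` = n?
    start = [c for c in tiles if c[axis] == 0]
    seen, queue = set(start), deque(start)
    while queue:
        c = queue.popleft()
        if c[axis] == n:
            return True
        for dx, dy in STEPS:
            nb = (c[0] + dx, c[1] + dy)
            if nb in tiles and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return False

def no_draw(n):
    # Check every full coloring of the interior {0,...,n}^2 of H(n,2):
    # White must connect x=0 to x=n, or Black must connect y=0 to y=n.
    cells = list(product(range(n + 1), repeat=2))
    for coloring in product("WB", repeat=len(cells)):
        board = dict(zip(cells, coloring))
        white = {c for c in cells if board[c] == "W"}
        black = {c for c in cells if board[c] == "B"}
        if not (connects(white, n, 0) or connects(black, n, 1)):
            return False
    return True

print(no_draw(2))  # True: no draw on the 3x3 interior
```

An interior path of one color extends to a winning path via the preassigned boundary colors, so this is exactly the no-draw statement; beyond tiny $n$, of course, the $2^{(n+1)^2}$ colorings make exhaustive search hopeless.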
\proofheader{Proof of the Brouwer fixed point theorem \rm(``HEX $\Longrightarrow$ (Br1)'')} If $f\: [0,1]^d\to [0,1]^d$ has no fixed point, then for some $\epsilon>0$ we have that $| f(\xx)-\xx|_\infty\ge \epsilon$ for all $\xx\in [0,1]^d$ (namely, one can take $\epsilon\assg\min\{| f(\xx)-\xx|_\infty\sep \xx\in [0,1]^d\}$, which exists since $[0,1]^d$ is compact). Furthermore, any continuous function on the compact set $[0,1]^d$ is uniformly continuous (see e.g.\ Munkres \cite[\S27]{Munkres1}), hence there exists some $\delta>0$ such that $| \xx - \xx' |_\infty <\delta$ implies $| f(\xx)-f(\xx')|_\infty <\epsilon$. We take $\delta <\epsilon$ (without loss of generality), and then choose $n$ with ${1\over n}<\delta$. {From}~$f$, we now define a coloring of $H(n,d)$, by setting \[ \kappa(\vv)\assg {\textstyle\min\{i\sep |f_i(\frac{\vv}{n})-\frac{v_i}{n}|\geq\epsilon\}} \] for the interior vertices $\vv\in H(n,d)$, where $f_i$ denotes the $i$th component of~$f$. This is well-defined, since $\frac{\vv}{n}\in [0,1]^d$, and thus at least one component of $f(\frac{\vv}{n})-\frac{\vv}{n}$ has to be at least $\epsilon$ in its absolute value. Now the $d$-dimensional HEX theorem guarantees, for some $i$, a chain $\vv^0,\vv^1,\ldots,\vv^N$ of vertices of color~$i$, where $v^0{}_i=0$ and $v^N{}_i=n$. Furthermore, we know $|f_i({\vv^k\over n})-{v^k{}_i\over n}|\geq\epsilon$ for $0\leq k\leq N$. And at the ends of the chain we know the signs: \begin{itemize} \item[] $f (\tfrac{\vv^0}{n})\in [0,1]^d$ implies $f_i(\tfrac{\vv^0}{n})\ge0$ and hence $f_i(\tfrac{\vv^0}{n}) - \tfrac{v^0{}_i}{n}\geq\epsilon$, and \item[] $f (\tfrac{\vv^N}{n})\in [0,1]^d$ implies $f_i(\tfrac{\vv^N}{n})\le1$ and hence $f_i(\tfrac{\vv^N}{n}) - \tfrac{v^N{}_i}{n}\leq -\epsilon$.
\end{itemize} Hence, for some $k\in\{1,2,\ldots,N\}$ we must have a sign change: \begin{itemize} \item[] $f_i(\tfrac{\vv^{k-1}}{n}) - \tfrac{v^{k-1}{}_i}{n}\ge \epsilon$ and $f_i(\tfrac{\vv^ k }{n}) - \tfrac{v^k {}_i}{n}\le-\epsilon$. \end{itemize} All this taken together provides a contradiction, since \begin{itemize} \item[] $|\tfrac{\vv^{k-1}}{n}-\tfrac{\vv^k}{n}|_\infty = \tfrac{1}{n}< \delta$ \end{itemize} whereas \[ | f (\tfrac{\vv^{k-1}}{n}) - f (\tfrac{\vv^k}{n}) |_\infty \ge | f_i(\tfrac{\vv^{k-1}}{n}) - f_i(\tfrac{\vv^k}{n}) | \ge 2\epsilon - |\tfrac{v^{k-1}{}_i}{n}-\tfrac{v^k{}_i}{n}| \ge 2\epsilon - \tfrac{1}{n} > 2\epsilon - \delta >\epsilon. \] \endproof \proofheader{Proof that the Brouwer fixed point theorem implies the HEX theorem \rm(``Br1 $\Longrightarrow$ HEX'')} Assume we have a coloring of $H(n,d)$. We use it to define a map $[0,n]^d\to [0,n]^d$, as follows: on the points in $\{0,1,\ldots,n\}^d$ we define \[ f(\vv)=\begin{cases} \vv+\ee_i &\text{if $\vv$ has color $i$, and there is a path on vertices of color $i$}\\[-4pt] &\text{that connects $\vv$ to a vertex $\ww$ with $w_i=0$}\\ \vv-\ee_i &\text{if $\vv$ has color $i$, but there is no such path.} \end{cases} \] If for the given coloring there is no winning path for HEX, then these definitions do not map any point $\vv$ outside $[0,n]^d$. Hence this defines a simplicial map $f\: [0,n]^d\to [0,n]^d$, by linear extension on the simplices of the triangulation $\Delta(n,d)$ that we have considered before. The following two observations now give us a contradiction, showing that this $f$ cannot have a fixed point: \begin{compactitem}[ $\bullet$] \item If $\Delta=\conv\{\vv^0,\vv^1,\vv^2,\ldots,\vv^d\}\subseteq\R^d$ is a simplex and $f\: \Delta\to \R^d$ is a linear map defined by $f(\vv^i)=\vv^i+\ww^i$, then $f$ has a fixed point on $\Delta$ if and only if $\colzero\in\conv\{\ww^0,\ldots,\ww^d\}$. 
\item If $\vv,\vv'$ are adjacent vertices, then we cannot get $f(\vv)=\vv-\ee_i$ and $f(\vv')=\vv'+\ee_i$: both vertices would have color $i$, so a color-$i$ path connecting $\vv'$ to a vertex $\ww$ with $w_i=0$ would extend to~$\vv$ as well. Hence for each simplex of $\Delta(n,d)$, all the vectors $\ww^i$ lie in one orthant of~$\R^d$! \end{compactitem} \endproof \subsection*{Exercises} \begin{enumerate} \item In the proof of the Brouwer fixed point theorem (Thm.~\ref{t:brouwer}, (Br2)$\Longrightarrow$(Br3)), we could simply have put $F(\xx)\assg h(\frac{\xx}{| \xx|},1-| \xx|)$. Is this continuous? \end{enumerate} \newpage \section{Who wins HEX?} So, who can win the $2$-dimensional HEX game? A simple but ingenious argument due to John Nash, known as ``stealing a strategy,'' shows that on a square board the first player (``White'') always has a winning strategy. In the following we first define winning strategies, then find that one of the players has one, and finally conclude that the first player has one. Still: the proof will be non-constructive, and we don't know how to win HEX. So, the game still remains interesting \ldots \begin{Definition} A \emph{strategy} is a set of rules that tells a player which move to make (i.\,e., which tile to color) for every legal position on the board. A \emph{winning strategy} here guarantees to lead to a win, starting from an empty board, for all possible moves of the opponent. A \emph{position} of the HEX game is a board on which some tiles may have been colored white or black, together with the information who moves next (unless all tiles are colored). A position is \emph{legal} if it can occur in a HEX game: that is, if either White moves next, and the numbers of white and black tiles agree, or if Black moves next, and White has one more tile. A \emph{winning position for White} is a position such that White has a \emph{winning strategy} that tells him how to proceed (for arbitrary moves of Black) and guarantees a win. Similarly, \emph{a winning position for Black} has a \emph{winning strategy} that guarantees to lead Black to a win.
\end{Definition} \begin{Lemma}\label{nodraw} Every (legal) position for HEX is either a winning position for White or a winning position for Black. \end{Lemma} \proof Here we proceed by induction on the number $g$ of ``grey'' tiles (i.\,e., ``free'' positions on the board). If no grey tiles are present $(g=0)$, then one of the players has won---by the HEX theorem. If $g>0$ and White is to move, then any move that White could make reduces $g$, and thus (by induction) produces a winning position for one of the players. If there is a move that leads to a winning position for White, then this is really nice and great for White: any such move makes the present position into a winning position for White. Otherwise---too bad: if every possible move for White produces a winning position for Black, then we are at a winning position for Black already. And the same argument applies for $g>0$ if Black is to move. \endproof Of course, the argument given here is \emph{much} more general: essentially we have proved that for any finite deterministic 2-person game without a draw and with ``complete information'' there is a winning strategy for one of the players. (This is a theorem of Zermelo, which was rediscovered by von Neumann and Morgenstern.) Furthermore, for games where a draw is possible either one player has a winning strategy, or \emph{both} players can force a draw. We refer to Exercise~\ref{ex:drawplay}, and to Blackwell \& Girshick \cite[p.~21]{Blackwell}. For HEX, Lemma~\ref{nodraw} shows that at the beginning (for the starting position, where all tiles are grey, and White is to move), there is a winning strategy either for White or for Black. But who is the winner? Our first attempt might be to follow the proof of Lemma~\ref{nodraw}.
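The induction in the proof of Lemma~\ref{nodraw} is effectively an algorithm. Here is a minimal sketch (ours) that runs it on the $2\times2$ board, i.e.\ the interior of $H(1,2)$, classifying every position by backward induction:

```python
from functools import lru_cache

# The 2x2 board = interior of H(1,2); hex adjacency as in the definition above.
CELLS = ((0, 0), (0, 1), (1, 0), (1, 1))
STEPS = [(1, 0), (0, 1), (1, 1), (-1, 0), (0, -1), (-1, -1)]

def winner(board):
    # 'W' if White connects x=0 to x=1, 'B' if Black connects y=0 to y=1, else None.
    for color, axis in (("W", 0), ("B", 1)):
        tiles = {c for c, col in zip(CELLS, board) if col == color}
        stack = [c for c in tiles if c[axis] == 0]
        seen = set(stack)
        while stack:
            c = stack.pop()
            if c[axis] == 1:
                return color
            for dx, dy in STEPS:
                nb = (c[0] + dx, c[1] + dy)
                if nb in tiles and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
    return None

@lru_cache(maxsize=None)
def wins(board, to_move):
    # Backward induction from the proof: who holds a winning strategy here?
    w = winner(board)
    if w is not None:
        return w
    opp = "B" if to_move == "W" else "W"
    outcomes = [wins(board[:i] + (to_move,) + board[i + 1:], opp)
                for i, c in enumerate(board) if c == "."]
    # by the HEX theorem, `outcomes` is nonempty whenever there is no winner yet
    return to_move if to_move in outcomes else opp

print(wins((".", ".", ".", "."), "W"))  # W: the 2x2 board is a first-player win
```

The memoized recursion is exactly the bottom-up classification of the position tree, and its answer for the empty $2\times2$ board agrees with the diagram that follows.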
Only for the $2\times 2$ board can this be done: {}\hspace{-10mm}\input{EPS/hexstrategy3corr.pstex_t} \noindent In this drawing, you can decide for every position whether it is a winning position for White or for Black, starting with the bottom row ($g=0$) that has three winning positions for each player, ending at the top node ($g=4$), which turns out to be a winning position for~White. For larger boards, this approach is hopeless---after all, there are ${n^2\choose \lfloor n^2/2\rfloor}$ final positions to classify for ``$g=0$,'' and from this one would have to work one's way up to the top node of a huge tree (of height $n^2$). Nevertheless, people have worked out winning strategies for White on the $n\times n$ boards for $n\leq 5$ (see Gardner \cite{Gardner-hex}). \begin{Theorem} For the HEX game played on a HEX board with equal side lengths, White (the first player) has a winning strategy. \end{Theorem} \proof Assume not: then by Lemma~\ref{nodraw} Black has a winning strategy. But then White can start with an arbitrary move, and then---using the symmetry of the board and of the rules---just ignore his first tile, and follow Black's winning strategy ``for the second player.'' This strategy will always tell White which move to take. Here the ``extra'' white tiles cannot hurt White: if the strategy asks White to occupy a tile that is already white, then an arbitrary move is o.k.\ for White. But this ``stealing a strategy'' argument produces a winning strategy for White, contradicting our assumption! \endproof \subsection*{Notes} Gale's beautiful paper \cite{Gale-hex} was the source and inspiration for our treatment of Brouwer's fixed point theorem in terms of the HEX game. Nash's analysis of the winning strategies for HEX is from Gardner's classical account in \cite{Gardner-hex}, some of which reappears in Milnor's \cite{Milnor-nash}.
See also the accounts in Jensen \& Toft \cite[Sect.~17.14]{Jensen-hex}, and in Berlekamp, Conway \& Guy \cite[p.~680]{BCG2}, where other cases of ``strategy stealing'' are discussed. (A theoretical set-up for this is in Hales \& Jewett \cite[Sect.~3]{HalesJewett}.) The traditional combinatorial approach to the Brouwer fixed point theorem is via Sperner's lemma \cite{Sperner}; see e.g.\ the presentation in \cite{AZ5}. A more geometric version of the combinatorial lemmas is given by Mani \cite{Mani-lemma}. \subsection*{Exercises} \begin{enumerate} \item Stir your coffee cup. Show that the (moving, but flat) surface has at every moment at least one point that stands still (has velocity zero). \item Prove that if you tear a sheet of paper from your notebook, crumple it into a small ball, and put that down on your notebook, then at least one point of the sheet comes to rest exactly on top of its original position.\\ Could it happen that there are exactly two such points? \item For HEX on a $3\times3$ board, how large is the tree of possible positions? \item Can you write a computer program that plays HEX and wins (sometimes) \cite{Browne-hex_strat}? \item For $d$-dimensional HEX, is there always some ``short'' winning path? Show that for every $d\geq 2$ there is a constant $c_d$ such that for all $n$ there is a final configuration such that only one player wins, but his shortest path uses more than $c_d\cdot n^d$ tiles. \item Construct an algorithm that, for given $\epsilon >0$ and $f\: [0,1]^2\to [0,1]^2$, calculates a point $x_0\in [0,1]^2$ with $| f(x_0)-x_0| <\epsilon$. \cite[p.~827]{Gale-hex} \item\label{ex:drawplay} If in a complete information two player game a draw is possible, argue why either one of the players has a winning strategy, or \emph{both} can force at least a draw. \item\label{exer:jct} Prove that for $2$-dimensional HEX, not both players can win!
For this, prove and use the ``polygonal Jordan curve theorem'': any simple closed polygon in the plane uniquely divides the plane into ``inside'' and ``outside.''\\ (The general Jordan curve theorem for simple ``Jordan arcs'' in the plane has extensive discussions in many books; see for example Munkres \cite{Munkres1}, Stillwell \cite[Sect.~0.3]{Stillwell}, or Thomassen~\cite{Thomassen-J}.) \item On an $(m\times n)$-board that is not square (that is, $m\neq n$), the player who gets the longer sides, and hence the shorter distance to bridge by a winning path, has a winning strategy. (Our figure illustrates the case of a $(6\times 5)$-board, where the claim is that Black has a winning strategy.) \begin{itemize} \item[(i)] Show that for this, it is sufficient to consider the case where $m=n+1$ (i.\,e., the second player Black, who gets the longer side, has a sure win). \[ \def\epsfsize#1#2{.36#1}\epsffile{EPS/hexboard4uneven.eps} \] \item[(ii)] Show that in the situation of (i), Black has the following winning strategy. Label the tiles in the ``symmetric'' way that is indicated by the figure, such that there are two tiles of each label. The strategy for Black is to always take the second tile that has the same label as the one taken by White. Why will this strategy always win for Black? (Hint: you will need the Jordan curve theorem.)\\ (This is in Gardner \cite{Gardner-hex} and in Milnor \cite{Milnor-nash}, but neither source gives the proof. You'll have to work it out yourself!) \end{itemize} \end{enumerate} \newpage \section{Piercing multiple intervals}\label{s:kais} \heading{Packing number and transversal number. } Let $\Sfrak$ be a system of sets on a ground set~$X$; both $\Sfrak$ and $X$ may generally be infinite.
The \emph{packing number% \index{packing number}% \index{number!packing}% } of $\Sfrak$, usually denoted by $\nu(\Sfrak)$ and often also called the \emph{matching number% \index{matching number!see{packing number}}% \index{number!matching!see{packing number}}% }, is the maximum cardinality of a system of pairwise disjoint sets in $\Sfrak$: \[ \nu(\Sfrak)=\sup\{|\MM|\sep \MM\subseteq\Sfrak,\, M_1\cap M_2=\es \mbox{ for all $M_1, M_2\in \MM$, $M_1\neq M_2$}\}.% \index{0nS@$\nu(\Sfrak)$}% \] \iipefig{match} The \emph{transversal number} or \emph{piercing number} of $\Sfrak$ is the smallest number of points of $X$ that capture all the sets in $\Sfrak$: \[ \tau(\Sfrak)=\min\{|T|\sep T\subseteq X,\, S\cap T\neq\es\mbox{ for all $S\in\Sfrak$}\}. \index{0tS@$\tau(\Sfrak)$}% \] \iipefig{pierc} A subsystem $\MM\subseteq \Sfrak$ of pairwise disjoint sets is usually called a \emph{matching} (this refers to the graph-theoretical matching, which is a system of pairwise disjoint edges), and a set $T\subseteq X$ intersecting all sets of $\Sfrak$ is referred to as a \emph{transversal} of~$\Sfrak$. Clearly, any transversal is at least as large as any matching, and so always \[ \nu(\Sfrak)\leq \tau(\Sfrak). \] In the reverse direction, very little can be said in general, since $\tau(\Sfrak)$ can be arbitrarily large even if $\nu(\Sfrak)=1$. As a simple geometric example, we can take the plane as the ground set of $\Sfrak$ and let the sets of $\Sfrak$ be lines in general position. Then $\nu=1$, since every two lines intersect, but $\tau\geq \frac 12|\Sfrak|$, because no point is contained in more than two of the lines. One of the basic general questions in combinatorics asks for interesting special classes of set systems where the transversal number can be bounded in terms of the matching number.\footnote{This kind of problem is certainly not restricted to combinatorics. 
For example, if $\Sfrak$ is the system of all open sets in a topological space, $\tau(\Sfrak)$ is the minimum size of a dense set and is called the \emph{density% \index{density}% }, while $\nu(\Sfrak)$ is known as the \emph{Souslin number% \index{Souslin number}% \index{number!Souslin}% } or \emph{cellularity% \index{cellularity}% } of the space. In 1920, Souslin asked whether a linearly ordered topological space exists (the open sets are unions of open intervals) with countable $\nu$ but uncountable $\tau$. It turned out in the 1970s that the answer depends on the axioms one is willing to assume beyond the usual (ZFC) axioms of set theory. For example, it is yes if one assumes the axiom of constructibility; see e.\,g.\ \cite{Engelking}.} Many such examples come from geometry. Here we restrict our attention to one particular type of systems, the \emph{$d$-intervals}, where the best results have been obtained by topological methods. \heading{Fractional packing and transversal numbers. } Before introducing $d$-intervals, we mention another important parameter of a set system, which always lies between $\nu$ and $\tau$ and often provides useful estimates for $\nu$ or $\tau$. This parameter can be introduced in two seemingly different ways. For simplicity, we restrict ourselves to finite set systems (on possibly infinite ground sets). A \emph{fractional packing% \index{fractional packing}% \index{packing!fractional}% } for a finite set system $\Sfrak$ on a ground set $X$ is a function $w\: \Sfrak\to [0,1]$ such that for each $x\in X$, we have $\sum_{S\in\Sfrak\sep x\in S} w(S)\leq 1$. The \emph{size} of a fractional packing $w$ is $\sum_{S\in\Sfrak} w(S)$, and the \emph{fractional packing number} $\nu^*(\Sfrak)% \index{0n*S@$\nu^*(\Sfrak)$}% $ is the supremum of the sizes of all fractional packings for~$\Sfrak$.
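As a toy illustration of these definitions (the example system and the code are ours): for the three doubletons on a $3$-point ground set, any two sets meet, so $\nu=1$ and $\tau=2$, while the constant weight $\frac12$ is a fractional packing of size $\frac32$:

```python
from itertools import combinations

# Toy system (ours): the three doubletons on {a,b,c}; any two of them intersect.
X = ["a", "b", "c"]
S = [frozenset("ab"), frozenset("bc"), frozenset("ac")]

def packing_number(sets):
    # largest pairwise-disjoint subfamily (brute force)
    for r in range(len(sets), 0, -1):
        for sub in combinations(sets, r):
            if all(a.isdisjoint(b) for a, b in combinations(sub, 2)):
                return r
    return 0

def transversal_number(sets, ground):
    # smallest point set meeting every member (brute force)
    for r in range(len(ground) + 1):
        for T in combinations(ground, r):
            if all(s & set(T) for s in sets):
                return r

def is_fractional_packing(w, sets, ground):
    # at every point, the weights of the sets containing it sum to at most 1
    return all(sum(w[s] for s in sets if x in s) <= 1 for x in ground)

w = {s: 0.5 for s in S}  # constant weight 1/2, of size 3/2
print(packing_number(S), sum(w.values()), transversal_number(S, X))
```

In fact $\nu^*=\frac32$ here: summing the three point constraints gives $2\sum_{S} w(S)\le 3$. So the fractional packing number can lie strictly between $\nu$ and $\tau$; computing it exactly in general is a linear program.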
So in a fractional packing, we can take, say, one-third of one set and two-thirds of another, but at each point, the fractions for the sets containing that point must add up to at most~1. We always have $\nu(\Sfrak)\leq \nu^*(\Sfrak)$, since a packing $\MM$ defines a fractional packing $w$ by setting $w(S)=1$ for $S\in\MM$ and $w(S)=0$ otherwise. Similar to the fractional packing, one can also introduce a fractional version of a transversal. A \emph{fractional transversal% \index{fractional transversal}% \index{transversal!fractional}% } for a (finite) set system $\Sfrak$ on a ground set $X$ is a function $\varphi\:X\to [0,1]$ attaining only finitely many nonzero values such that for each $S\in \Sfrak$, we have $\sum_{x\in S} \varphi(x)\geq 1$. The size of a fractional transversal $\varphi$ is $\sum_{x\in X}\varphi(x)$, and the \emph{fractional transversal number} $\tau^*(\Sfrak)% \index{0t*S@$\tau^*(\Sfrak)$}% $ is the infimum of the sizes of fractional transversals. By the duality of linear programming (or by the theorem about separation of disjoint convex sets by a hyperplane), it follows that $\nu^*(\Sfrak)=\tau^*(\Sfrak)$ for any finite set system~$\Sfrak$. When trying to bound $\tau$ in terms of $\nu$, in many instances it proved very useful to bound $\nu^*$ as a function of $\nu$ first, and then $\tau$ in terms of~$\tau^*$. The proof presented below follows a somewhat similar approach. \heading{The $d$-intervals. } Let $I_1,I_2,\ldots,I_d$ be disjoint parallel segments of unit length in the plane. A set $J\subset \bigcup_{i=1}^d I_i$ is a {\em $d$-interval% \index{interval, $d$-interval}% \index{dinterval@$d$-interval}% } if it intersects each $I_i$ in a closed interval. We denote this intersection by $J_i$ and call it the \emph{$i$th component} of~$J$. The drawing shows a $3$-interval: \iipefig{d-int} Intersection and piercing for $d$-intervals are taken in the set-theoretical sense, i.e. 
two $d$-intervals intersect if, for some $i$, their $i$th components intersect, etc. The $1$-intervals, which are just intervals in the usual sense, behave nicely with respect to packing and piercing: for any family $\FF$ of intervals, we have $\nu(\FF)=\tau(\FF)$ (this is well-known and easy to prove). The following family $\FF$ of three 2-intervals \iipefig{2int} has $\nu(\FF)=1$ while $\tau(\FF)=2$. By taking multiple copies of this family, one obtains families with $\tau=2\nu$ for all values of~$\nu$. Gy\'arf\'as and Lehel \cite{GyarfasLehel0} showed by elementary methods that for any $d$ and any family $\FF$ of $d$-intervals, $\tau(\FF)$ can be bounded by a function of $\nu(\FF)$ (also see \cite{GyarfasLehel}). Their function was rather large (about $\nu^{d!}$ for $d$ fixed). After an initial breakthrough by Tardos \cite{Tardos-2int}, who proved $\tau(\FF)\leq 2\nu(\FF)$ for any family of 2-intervals, Kaiser \cite{Kaiser} obtained the following result: \begin{Theorem}[The Tardos--Kaiser theorem on $d$-intervals] \label{t:d-int} \index{Tardos--Kaiser theorem|thmref{t:d-int}} \index{theorem!Tardos--Kaiser|thmref{t:d-int}} Every family $\FF$ of {$d$-in}\-ter\-vals, $d\geq 2$, has a transversal of size at most $(d^2-d)\cdot \nu(\FF)$. \end{Theorem} Here we present a proof using Brouwer's fixed point theorem. Alon \cite{Alon-piercing} found a short non-topological proof of the slightly weaker bound $\tau(\FF)\leq 2d^2\nu(\FF)$. \proof Let $\FF$ be a fixed system of $d$-intervals with $\nu(\FF)=k$, and let $t=t(d,k)$ be a suitable (yet undetermined) integer. The general plan of the proof is this: Assuming that there is no transversal of $\FF$ of size $dt$, we show by a topological method that the fractional packing number $\nu^*(\FF)$ is at least $t+1$. Then a simple combinatorial argument proves that the packing number $\nu(\FF)$ is at least $\frac {t+1}d$, which leads to $t<d^2\cdot\nu(\FF)$. 
A sharper combinatorial reasoning in this step leads to the slightly better bound in the theorem. Our candidates for a transversal of $\FF$ are all sets $T$ with each $T_i=T\cap I_i$ having exactly $t$ points; so $|T|=td$. For technical reasons, we also permit that some of the $t$ points in $I_i$ coincide, so $T$ can be a multiset. The letter $T$ can also be read as an abbreviation for \emph{trap}. The trap is set to catch all the $d$-intervals in~$\FF$, but if it is not set well enough, some of the $d$-intervals can escape. Each of them escapes through a hole in the trap, namely through a \emph{$d$-hole}. The points of $T_i$ cut the segment $I_i$ into $t+1$ open intervals (some of them may be empty), and these are the \emph{holes in $I_i$}; they are numbered 1 through $t+1$ from left to right. A $d$-hole consists of $d$ holes, one in each $I_i$. The \emph{type} of a $d$-hole $H$ is the set $\{(1,j_1),(2,j_2),\ldots,(d,j_d)\}$, where $j_i\in [t{+}1]$ is the number of the hole in $I_i$ contained in~$H$. A $d$-interval $J\in\FF$ \emph{escapes} through a $d$-hole $H$ if it is contained in the union of its holes. The drawing shows a $3$-hole, of type $\{(1,2),(2,4),(3,4)\}$, and a $3$-interval escaping through it: \iipefig{holes} Let $\HH_0$ be the hypergraph with vertex set $[d]\times [t{+}1]$ and with edges being all possible types of $d$-holes; for example, the hole in the picture yields the edge $\{(1,2),(2,4),(3,4)\}$. So $\HH_0$ is a complete $d$-partite $d$-uniform hypergraph (we will meet such hypergraphs several times in this book). By saying that a $J\in\FF$ escapes through an edge $H$ of $\HH_0$, we mean that $J$ escapes through the $d$-hole (uniquely) corresponding to~$H$. Next, we define weights on the edges of $\HH_0$; these weights depend on the set~$T$ (and also on $\FF$ but this is considered fixed). The weight of an edge $H$ is \[ q_H=\sup\{\dist(J,T)\sep J\in\FF,\mbox{ $J$ escapes through $H$}\}.
\] Here $\dist(J,T)=\max_i\{\dist(J_i,T_i)\}$ and $\dist(J_i,T_i)$ is the distance of the $i$th component of $J$ to the closest point of $T_i$. Thus, $q_H$ can be interpreted as the slimmest margin by which a $d$-interval $J$ escaping through $H$ avoids being trapped. If no members of $\FF$ escape through~$H$, we define $q_H$ as~0. Note that this is the only case where $q_H=0$; otherwise, if anything escapes, it does so by a positive margin, since we are dealing with closed intervals. From the edge weights, we derive weights of vertices: the weight $w_v$ of a vertex $v=(i,j)$ is the sum of the weights of the edges of $\HH_0$ containing~$v$. These weights, too, are functions of~$T$; to emphasize this, we can write $w_v=w_v(T)$. \begin{Lemma}\label{l:samewt} For any $d\geq 1$, $t\geq 1$, and any $\FF$, there is a choice of\ \,$T$ such that all the vertex weights $w_v(T)$, $v\in [d]\times[t{+}1]$, coincide. \end{Lemma} It is this lemma whose proof is topological. We postpone that proof and finish the combinatorial part. Let us suppose that a trap $T$ was chosen as in the lemma, with $w_v(T)= W$ for all~$v$. If $W=0$ then $T$ is a transversal, since all edge weights are 0 and no $J\in\FF$ escapes. So suppose that $W>0$. Let $\HH=\HH(T)\subseteq \HH_0$, the \emph{escape hypergraph} of $T$, consist of the edges of $\HH_0$ with nonzero weights. Note that \begin{equation}\label{e:nuHnuF} \nu(\HH)\leq \nu(\FF). \end{equation} Indeed, given a matching $\MM$ in $\HH$, for each edge $H\in\MM$ choose a $J\in\FF$ escaping through $H$---this gives a matching in~$\FF$. We note that the re-normalized edge weights $\tilde q_H=\frac 1W\,q_H$ determine a fractional packing in $\HH$ (since the weights at each vertex sum up to~1). 
For the size of this fractional packing, which is the total weight of all vertices, we find by double counting \[ \nu^*(\HH)\geq\sum_{H\in\HH}\tilde q_H = \frac 1d \sum_{H\in\HH}\sum_{v\in H} \tilde q_H = \frac 1d \sum_{v\in [d]\times [t{+}1]}\frac{w_v}W= \frac 1d \sum_{v} 1 = t+1. \] The last step is to show that $\nu(\HH)$ cannot be small if $\nu^*(\HH)$ is large. Here is a simple argument leading to a slightly suboptimal bound, namely $\nu(\HH)\geq \frac 1d\,\nu^*(\HH)$. Given a fractional matching $\tilde q$ of size $t+1$ in $\HH$, a matching can be obtained by the following greedy procedure: Pick an edge $H_1$ and discard all edges intersecting it, pick $H_2$ among the remaining edges, etc., until all edges are exhausted. The $\tilde q$-weight of $H_i$ plus all the edges discarded with it is at most $d=|H_i|$, while all edges together have weight $t+1$. Thus, the number of steps, and also the size of the matching $\{H_1,H_2,\ldots\}$, is at least $\lceil\frac {t+1} d\rceil$. If we set $t=d\cdot\nu(\FF)$, we get $\nu(\HH)>\nu(\FF)$, which contradicts (\ref{e:nuHnuF}). Therefore, for this choice of $t$, all the vertex weights must be 0 and $T$ as in Lemma~\ref{l:samewt} is a transversal of $\FF$ of size at most $d^2\nu(\FF)$. The improved bound $\tau(\FF)\leq (d^2-d)\cdot\nu(\FF)$ for $d\ge 3$ follows similarly using a theorem of F\"uredi \cite{Furedi-nunustar}, which implies that any $d$-uniform $d$-partite hypergraph $\HH$ satisfies $\nu^*(\HH)\leq (d-1)\nu(\HH)$. (For $d=2$, a separate argument needs to be used, based on a theorem of Lov\'asz stating that $\nu^*(G)\le\frac32\nu(G)$ for all graphs $G$.) The Tardos--Kaiser theorem~\ref{t:d-int} is proved. \proofend \heading{Proof of Lemma~\ref{l:samewt}. } Let $\sigma^t$ denote the standard $t$-dimensional simplex in~$\R^{t+1}$, i.e. the set $\{\xx\in\R^{t+1}\sep x_j\geq 0,\,x_1+\cdots+x_{t+1}=1\}$.
A point $\xx\in\sigma^t$ defines a $t$-point multiset $\{z_1,z_2,\ldots,z_t\}\subset [0,1]$, $z_1\leq z_2\leq\cdots\leq z_t$, by setting $z_k=\sum_{j=1}^k x_j$. Here is a picture for $t=2$: \iipefig{int-si} A candidate transversal $T$ with $t$ points in each $I_i$ can thus be defined by an ordered $d$-tuple $(\xx_1,\ldots,\xx_d)$ of points, $\xx_i\in\sigma^t$, where $\xx_i$ determines $T_i$. Such an ordered $d$-tuple can be regarded as a single point $\xx$ in the Cartesian product $P=\sigma^t\times\sigma^t\times\cdots\times\sigma^t= (\sigma^t)^d$. To each $\xx\in P$, we have thus assigned a candidate transversal $T(\xx)$. For each vertex $v=(i,j)$ of the hypergraph $\HH_0$, we define the function $g_{ij}\:P\to\R$ by $g_{ij}(\xx)=w_{(i,j)}(T(\xx))$, where $w_v(T)$ is the vertex weight. This is a continuous function of~$\xx$. We note that for each $\xx$, the sum \[ S_i(\xx)=\sum_{j=1}^{t+1} g_{ij}(\xx) \] is independent of~$i$; this is because $S_i(\xx)$ equals the sum of the weights of all edges. So we can write just $S(\xx)$ instead of $S_i(\xx)$. If there is an $\xx\in P$ with $S(\xx)=0$, then all the vertex weights $ w_{(i,j)}(T(\xx))$ are 0 and we are done. Otherwise, we define the normalized functions \[ f_{ij}(\xx)= \frac 1{S(\xx)}\, g_{ij}(\xx). \] For each $i$, $f_{i1}(\xx),\ldots,f_{i(t+1)}(\xx)$ are nonnegative and sum up to 1, and so they are the coordinates of a point in the standard simplex $\sigma^t$. All the maps $f_{ij}$ together can be regarded as a map $f\:P\to P$. To prove the lemma, we need to show that the image of $f$ contains the point of $P$ with all the $d(t+1)$ coordinates equal to $\frac 1{t+1}$. The product $P$ is a convex polytope, and its nonempty faces are exactly all Cartesian products $F_1\times F_2\times\cdots\times F_d$, where $F_1,\ldots,F_d$ are nonempty faces of $\sigma^t$ (Exercise~\ref{ex:polytprod}). We note that for any face $F$ of $P$, we have $f(F)\subseteq F$. 
Indeed, a face $G$ of $\sigma^t$ has the form $G=\{\xx\in \sigma^t\sep x_i=0\mbox{ for all }i\in I\}$, for some index set $I$, and the faces of $P$ are products of faces $G$ of this form. So it suffices to know that $f_{ij}(\xx)=0$ whenever $(\xx_i)_j=0$. This holds, since $(\xx_i)_j=0$ means that the $j$th hole in $I_i$ is empty, so nothing can escape through that hole, and thus $f_{ij}(\xx)=0$. The proof of Lemma~\ref{l:samewt} is now reduced to the following statement: \begin{Lemma} \label{l:polyt-surj} Let $P$ be a convex polytope and let $f\:P\to P$ be a continuous mapping satisfying $f(F)\subseteq F$ for each face\footnote{In fact, it suffices to require $f(F)\subseteq F$ for each facet of $P$ (that is, for each face of dimension $\dim(P)-1$), since each face is the intersection of some facets.} $F$ of $P$. Then $f$ is surjective. \end{Lemma} \proof Since the condition is hereditary for faces, it suffices to show that each point $\yy$ in the interior of $P$ has a preimage. For contradiction, suppose that some $\yy\in\inter P$ is not in the image of~$f$. For $\xx\in P$, consider the semiline emanating from $f(\xx)$ and passing through $\yy$, and let $g(\xx)$ be the unique intersection of that semiline with the boundary of~$P$. \iipefig{brou-p} This $g$ is a well-defined and continuous map $P\to P$, and by Brouwer's fixed point theorem, there is an $\xx_0\in P$ with $g(\xx_0)=\xx_0$. The point $\xx_0$ lies on the boundary of $P$, in some proper face $F$. But $f(\xx_0)$ cannot lie in $F$, because the segment $\xx_0 f(\xx_0)$ passes through the point $\yy$ outside~$F$---a contradiction. \proofendSkip \heading{Lower bounds. } It turns out that the bound in Theorem~\ref{t:d-int} is not far from being the best possible. In particular, for $\nu(\FF)=1$ and $d$ large, the transversal number can be near-quadratic in $d$, which is rather surprising. 
For all $k$ and $d$, systems $\FF$ of $d$-intervals can be constructed with $\nu(\FF)=k$ and \[ \tau(\FF)\geq c\,{d^2\over(\log d)^2}\,k \] for a suitable constant $c>0$ (Matou\v{s}ek \cite{Matousek-d-int}). The construction involves an extension of a construction due to Sgall \cite{Sgall} of certain systems of set pairs. Here we outline a (non-topological!) proof of a somewhat simpler result concerning families of \emph{homogeneous% \index{interval, $d$-interval!homogeneous}% \index{dinterval@$d$-interval!homogeneous}% \index{homogeneous $d$-interval}% } $d$-intervals, which are unions of at most $d$ closed intervals on the real line. These are more general than the $d$-intervals, but an upper bound only slightly weaker than Theorem~\ref{t:d-int} can be proved for them along the same lines (Exercise~\ref{ex:homog}): $\tau\leq (d^2-d+1)\nu$. \begin{Proposition}\label{p:d-int-lb} For every $d\ge 2$ and $k\geq 1$, there exists a system $\FF$ of homogeneous $d$-intervals with $\nu(\FF)=k$ and \[ \tau(\FF)\geq c\,{d^2\over \log d}\,k. \] \end{Proposition} \proof Given $d$ and $k$, we want to construct a system $\FF$ of homogeneous $d$-intervals. Clearly, it suffices to consider the case $k=1$, since for larger $k$, we can take $k$ disjoint copies of the $\FF$ constructed for $k=1$. Thus, we want an $\FF$ in which every two $d$-intervals intersect and with $\tau(\FF)$ large. In the construction, we will use homogeneous $d$-intervals of a quite special form: each component is either a single point or a unit-length interval. First, it is instructive to see why we cannot get a good example if all the components are only points. In that case, the family $\FF$ is simply a $d$-uniform hypergraph (whose vertices happen to be points of the real line). We require that any two edges intersect, and thus any edge is a transversal and we have $\tau(\FF)\leq d$. For the actual construction, let $n$ and $N$ be integer parameters (whose value will be set later). 
Let $V=[n]$ be an index set, and $I_v$, for $v\in V$, be auxiliary pairwise disjoint unit intervals on the real line. In each $I_v$, we choose $N$ distinct points $x_{v,i}$, $i=1,2,\ldots,N$. The constructed system $\FF$ consists of homogeneous $d$-intervals $J^1,J^2,\ldots,J^N$. For each $i=1,2,\ldots,N$, we choose auxiliary sets $B_i\subseteq A_i\subseteq V$, and we construct $J^i$ as follows: \[ J^i=\biggl(\,\bigcup_{v\in B_i} I_v\biggr) \cup \{ x_{u,i}\sep u\in A_i\setminus B_i\}. \] The picture shows an example of $J^1$ for $n=6$, $A_1=\{1,2,4,5\}$ and $B_1=\{2,4\}$: \[ \def\epsfsize#1#2{1#1}\epsffile{EPS/aibi.eps} \] The heart of the proof is the construction of suitable sets $A_i$ and $B_i$ on the ground set~$V$. Since the $J^i$ should be homogeneous $d$-intervals, we obviously require \begin{enumerate} \item[{\rm (C1)}] For all $i=1,2,\ldots,N$, $\es\subset B_i\subseteq A_i$ and $|A_i|\leq d$. \end{enumerate} The condition that every two members of $\FF$ intersect is implied by the following: \begin{enumerate} \item[{\rm (C2)}] For all $i_1,i_2$, $1\leq i_1<i_2\leq N$, we have $A_{i_1}\cap B_{i_2}\neq\emptyset$ or $A_{i_2}\cap B_{i_1}\neq\emptyset$ (or both). \end{enumerate} Finally, we want $\FF$ to have no small transversal. Since no two $d$-intervals of $\FF$ have a point component in common, a transversal of size $t$ intersects no more than $t$ members of $\FF$ in their point components, and all the other members of $\FF$ must be intersected in their interval components. Therefore, the transversal condition translates to \begin{enumerate} \item[{\rm (C3)}] Put $t=cd^2/\log d$ for a sufficiently small constant $c>0$, and let $\BB=\{B_1,B_2,\ldots,B_N\}$. Then $\tau(\BB)\geq 2t$, and consequently $\tau(\BB')\geq t$ for any~$\BB'$ arising from $\BB$ by removing at most $t$ sets. \end{enumerate} A construction of sets $A_1,\ldots,A_N$ and $B_1,\ldots,B_N$ as above was provided by Sgall \cite{Sgall}. 
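That (C2) really forces any two of the constructed homogeneous $d$-intervals to intersect can be checked mechanically: $J^{i_1}$ and $J^{i_2}$ meet iff $B_{i_1}\cap B_{i_2}\neq\es$, or a point component of one lies in an interval component of the other. A small Python sketch (the sets below are made-up toy data, far from Sgall's actual construction):

```python
def intersects(A1, B1, A2, B2):
    """Symbolic intersection test for J^{i1}, J^{i2} as built in the text:
    interval components I_v for v in B_i, point components x_{u,i} for
    u in A_i \ B_i.  Points with different indices i never coincide."""
    return bool(B1 & B2) or bool((A1 - B1) & B2) or bool((A2 - B2) & B1)

def condition_C2(A1, B1, A2, B2):
    """Condition (C2) for a pair of indices."""
    return bool(A1 & B2) or bool(A2 & B1)

# toy family on ground set V = [4], satisfying (C1) with d = 3
families = [
    (frozenset({1, 2, 3}), frozenset({2})),
    (frozenset({2, 4}), frozenset({4})),
    (frozenset({3, 4}), frozenset({3})),
]
for i, (A1, B1) in enumerate(families):
    for A2, B2 in families[i + 1:]:
        # (C2) holds, and it indeed implies intersection
        assert condition_C2(A1, B1, A2, B2)
        assert intersects(A1, B1, A2, B2)
```

The implication is immediate: an element of $A_{i_1}\cap B_{i_2}$ lies either in $B_{i_1}$ (two interval components meet) or in $A_{i_1}\setminus B_{i_1}$ (a point of $J^{i_1}$ lies inside an interval of $J^{i_2}$). Sgall's construction provides such families with the required parameters.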
His results give the following: \begin{Proposition}\label{p:sgall} Let $b$ be a given integer, let $n\leq cb^2/\log b$ for a sufficiently small constant $c>0$, and let $B_1,B_2,\ldots,B_N$ be $b$-element subsets of $V=[n]$. Then there exist sets $A_1,A_2,\ldots,A_N$, with $B_i\subseteq A_i$, $|A_i|\leq 3b$, and such that (C2) is satisfied. \end{Proposition} With this proposition, the proof of Proposition~\ref{p:d-int-lb} is easily finished. We set $b=\lfloor \frac d 3\rfloor$, $n=cb^2/\log b$, and we let $B_1,B_2,\ldots,B_N$ be all the $N={n\choose b}$ subsets of $V$ of size~$b$. We have $\tau(\{B_1,\ldots,B_N\})=n-b+1$ and condition (C3) holds. It remains to construct the sets $A_i$ according to Proposition~\ref{p:sgall}; then (C1) and (C2) are satisfied too. The proof of Proposition~\ref{p:d-int-lb} is concluded by passing from the $A_i$ and $B_i$ to the system $\FF$ of homogeneous $d$-intervals as was described above. \proofend \proofheader{Sketch of proof of Proposition~\ref{p:sgall}} Let $G=(V,E)$ be a graph on $n$ vertices of maximum degree $b$ with the following expander-type property: for any two disjoint $b$-element subsets $A,B\subseteq V$, there is at least one edge $e\in E$ connecting a vertex of $A$ to a vertex of $B$. (The existence of such a graph can be easily shown by the probabilistic method; the constant $c$ arises in this argument. See \cite{Sgall} for references.) For each $i$, let $v_i$ be an (arbitrary) element of the set $B_i$, and let \[ A_i= B_i\cup N(v_i)\cup \biggl(V\setminus \bigcup_{u\in B_i}N(u)\biggr), \] where $N(v)$ denotes the set of neighbors in $G$ of a vertex $v\in V$. It is easy to check that $|A_i|\leq 3b$, and some thought reveals that the condition (C2) is satisfied. \proofend \heading{A Helly-type problem for $d$-intervals. } Kaiser and Rabinovich \cite{KaiserRabinovich} investigated conditions on a family $\FF$ of $d$-intervals guaranteeing that $\FF$ can be pierced by a ``multipoint,'' i.e.
$\tau(\FF)=d$ and there is a transversal using one point of each~$I_i$. They proved the following: \begin{Theorem}\label{t:multipoint} Let $k=\lceil\log_2(d+2)\rceil$ and let $\FF$ be a family of $d$-intervals such that any $k$ or fewer members of $\FF$ have a common point. Then $\FF$ can be pierced by a multipoint. \end{Theorem} \proof We use notation from the proof of Theorem~\ref{t:d-int}. We apply Lemma~\ref{l:samewt} with $t=1$, obtaining a set $T$ with one point in each $I_i$ such that all the $2d$ vertices of the escape hypergraph $\HH=\HH(T)$ have the same weight~$W$. If $W=0$ we are done, so let us assume $W>0$. By the assumption on $\FF$, every $k$ edges of $\HH$ share a common vertex. We will prove the following claim for every $\ell$: \begin{quote} \emph{if every $\ell+1$ edges of $\HH$ have at least $m$ common vertices, then every $\ell$ edges of $\HH$ have at least $2m+1$ common vertices.} \end{quote} For $\ell=k-1$, the assumption holds with $m=1$, and so by $(k-1)$-fold application of this claim, we get that every edge of $\HH$ ``intersects itself'' in at least $2^k-1$ vertices, i.e. $d\geq 2^k-1$. Since $2^k\geq d+2$ by the choice of $k$, this contradicts the assumption $W>0$; the claim thus implies the theorem. The claim is proved by contradiction. Suppose that $\AA\subseteq\HH$ is a set of $\ell$ edges such that $C=\bigcap\AA$ has at most $2m$ vertices. Recall that the vertices of $\HH$ are pairs $(i,j)$, $j\in [2]$. Let $\bar C=\{(i,3-j)\sep (i,j)\in C\}$ (note that $C$ never contains both $(i,1)$ and $(i,2)$, since no edge of $\HH$ does). By the assumption, $\AA$ plus any other edge together intersect in at least $m$ vertices. Thus, any $H\in \HH\setminus \AA$ contains at least $m$ vertices of $C$, and consequently no more than $m$ vertices of~$\bar C$. Let $W_C$ be the total weight of the vertices in $C$ and $W_{\bar C}$ the total weight of the vertices in $\bar C$. The edges in $\AA$ contribute solely to $W_C$, while any other edge $H$ contributes at least as much to $W_C$ as to $W_{\bar C}$, and so $W_C>W_{\bar C}$.
But this is impossible since all vertex weights are identical and $|C|=|\bar C|$. The claim, and Theorem~\ref{t:multipoint} too, are proved. \proofendSkip An interesting open problem is whether $k=\lceil\log_2(d+2)\rceil$ in Theorem~\ref{t:multipoint} could be replaced by $k=k_0$ for some constant $k_0$ independent of~$d$. The best known lower bound is $k_0\geq 3$. \begin{bibpar} Tardos \cite{Tardos-2int} proved the optimal bound $\tau\leq 2\nu$ for 2-intervals by a topological argument using the homology of suitable simplicial complexes. Kaiser's argument \cite{Kaiser} is similar to the presented one, but he proves Lemma~\ref{l:samewt} using a rather advanced Borsuk--Ulam-type theorem of Ramos \cite{Ramos} concerning continuous maps defined on products of spheres. The method with Brouwer's theorem was used by Kaiser and Rabinovich \cite{KaiserRabinovich} for a proof of Theorem~\ref{t:multipoint}. Alon's short proof \cite{Alon-piercing} of the bound $\tau\leq 2d^2\nu$ for families of $d$-intervals applies a powerful technique developed in Alon and Kleitman \cite{ak-pcs-92a}. For the so-called Hadwiger--Debrunner $(p,q)$-problem solved in the latter paper, the quantitative bounds are probably quite far from the truth. It would be interesting to find an alternative topological approach to that problem, which could perhaps lead to better bounds. See, for example, Hell \cite{Hell-frac-Helly}. The variant of the piercing problem for families of homogeneous% \index{interval, $d$-interval!homogeneous}% \index{dinterval@$d$-interval!homogeneous}% \index{homogeneous $d$-interval} $d$-intervals has been considered simultaneously with $d$-intervals (\cite{GyarfasLehel}, \cite{Tardos-2int}, \cite{Kaiser}, \cite{Alon-piercing}). The upper bounds obtained for the homogeneous case are slightly worse: $\tau\leq 3\nu$ for homogeneous $2$-intervals, which is tight, and $\tau\leq (d^2-d+1)\nu$ for homogeneous $d$-intervals, $d\geq 3$ \cite{Kaiser}. 
The reason for the worse bounds is that the escape hypergraph need no longer be $d$-partite, and so F\"uredi's theorem \cite{Furedi-nunustar} relating $\nu$ to $\nu^*$ gives a slightly worse bound (for $d=2$, one uses a theorem of Lov\'asz instead, asserting that $\nu^*\leq \frac 32\nu$ for any graph). Sgall's construction \cite{Sgall} answered a problem raised by Wigderson in 1985. The title of Sgall's paper refers to a different, but essentially equivalent, formulation of the problem dealing with labeled tournaments. Alon \cite{Alon-trees} proved by the method of \cite{Alon-piercing} that if $T$ is a tree and $\FF$ is a family of subgraphs of $T$, each with at most $d$ connected components, then $\tau(\FF)\leq 2d^2\nu(\FF)$. More generally, he established a similar bound for the situation where $T$ is a graph of bounded tree-width (on the other hand, if the tree-width of $T$ is sufficiently large, then one can find a system of connected subgraphs of $T$ with $\nu=1$ and $\tau$ arbitrarily large, and so the tree-width condition is also necessary in this sense). A somewhat weaker bound for trees has been obtained independently by Kaiser \cite{Kaiser-thesis}. \end{bibpar} \begin{exs} \bex\label{ex:polytprod} Let $P$ and $Q$ be convex polytopes. Show that there is a bijection between the nonempty faces of the Cartesian product $P\times Q$ and all the products $F\times G$, where $F$ is a nonempty face of $P$ and $G$ is a nonempty face of~$Q$. \eex \bex Show that the following ``Brouwer-like'' claim resembling Lemma~\ref{l:polyt-surj} is \emph{not} true: if $f\:B^n\to B^n$ is a continuous map of the $n$-ball such that the boundary of $B^n$ is mapped surjectively onto itself, then $f$ is surjective. \eex \bex\label{ex:homog} Prove the bound $\tau(\FF)\leq d^2\nu(\FF)$ for any family of \emph{homogeneous} $d$-intervals (unions of $d$ intervals on a single line).
Hint: follow the proof for $d$-intervals above, but encode a candidate transversal $T$ by a point of a simplex (rather than a product of simplices). \eex \end{exs} \newpage \section{The fixed point theorems of Lefschetz, Smith, and Oliver} Fixed point theorems are ``global-local tools'': from global information about a space (such as its homology) they derive local effects, such as the existence of special points where ``something happens.'' Of course, in application to combinatorial problems we need to combine them with suitable ``continuous-discrete tools'': from continuous effects, such as topological information about continuous maps of simplicial complexes, we have to find our way back to combinatorial information. In addition to the usual game of graphs, posets, complexes and spaces, we will in the following exploit the deep topological effects\footnote{In this section, we assume familiarity with more Algebra and Algebraic Topology than in other parts of these lecture notes, including some basic finite group theory, chain complexes, etc. However, this is a survey section; no detailed proofs will be given. Skim or skip, depending on your tastes and familiarity with these notions.} caused by symmetry, that is, by finite group actions. A (finite) group $G$ \emph{acts} on a (finite) simplicial complex% \footnote{See \cite{Mat-top} for a detailed discussion of simplicial complexes, their geometric realizations, etc. In particular, we use the notation $\polyh K $ for the polyhedron (the geometric realization of a simplicial complex $\K$).} $\K$ if each group element corresponds to a permutation of the vertices of $\K$, where composition of group elements corresponds to composition of permutations, in such a way that $g(A)\assg \{ gv\sep v\in A\}$ is a face of $\K$ for all $g\in G$ and for all $A\in \K$.
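In concrete terms, a finite complex can be stored as a set of faces and a group element as a vertex permutation; checking that the permutation defines a simplicial action is then a one-line test. A minimal Python sketch (the encoding is our assumption, not from the text), using the $\Z_3$-action on the triangle complex $2^{[3]}$ that appears in the example below:

```python
def is_simplicial_action(K, perm):
    """Check that the vertex permutation maps every face of K to a face of K."""
    return all(frozenset(perm[v] for v in A) in K for A in K)

# the triangle complex 2^[3], and the Z_3-generator 1 -> 2 -> 3 -> 1
K = {frozenset(S) for S in
     [(), (1,), (2,), (3,), (1, 2), (2, 3), (1, 3), (1, 2, 3)]}
g = {1: 2, 2: 3, 3: 1}
assert is_simplicial_action(K, g)

# the action is vertex transitive: the orbit of vertex 1 is all of [3]
orbit, v = {1}, 1
for _ in range(3):
    v = g[v]
    orbit.add(v)
assert orbit == {1, 2, 3}
```

Note that the same permutation fails to be simplicial on the complex obtained by deleting the edge $\{1,3\}$, since the face $\{2,3\}$ would be mapped to a non-face.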
This action on the vertices is extended to the geometric realization of the complex~$\K$, so that $G$ acts as a group of simplicial homeomorphisms $g\: \polyh{\K}\to \polyh{\K}$. The action is \emph{faithful} if only the identity element in $G$ acts as the identity permutation. In general, the set $G_0\assg\{g\in G\sep gv=v\mbox{ for all }v\in\vert(\K)\}$ is a normal subgroup of~$G$. Hence we get that the quotient group $G/G_0$ acts faithfully on~$\K$, and we usually only consider faithful actions. In this case, we can interpret $G$ as a subgroup of the \emph{symmetry group} of the complex~$\K$. The action is \emph{vertex transitive} if for any two vertices $v,w$ of~$\K$ there is a group element $g\in G$ with $gv=w$. A \emph{fixed point} (also known as \emph{stable point}) of a group action is a point $x\in \polyh{\K}$ that satisfies $gx=x$ for all $g\in G$. We denote the set of all fixed points by $\K^G$. Note: this is not in general a subcomplex of~$\K$. \begin{Example} Let $\K=2^{[3]}$ be the complex of a triangle, and let $G=\Z_3$ be the cyclic group (a proper subgroup of the symmetry group $\Symm_3$), acting such that a generator cyclically permutes the vertices, $1\mapsto2\mapsto3\mapsto1$. \[ \begin{picture}(0,0)% \epsfig{file=EPS/z3act.ps}% \end{picture}% \setlength{\unitlength}{0.00030000in}% \begin{picture}(3616,3695)(5375,-5995) \put(8806,-3151){\makebox(0,0)[lb]{2}} \put(5446,-3166){\makebox(0,0)[lb]{1}} \put(7156,-5956){\makebox(0,0)[lb]{3}} \end{picture} \] This is a faithful action; its fixed point set consists of the center of the triangle only---this is not a subcomplex of $\K$, although it corresponds to a subcomplex of the barycentric subdivision~$\sd(\K)$. \end{Example} \begin{Lemma}[Two barycentric subdivisions]\label{l:fps}~ \begin{compactenum}[\rm(1)] \item After replacing $\K$ by its barycentric subdivision, we get that the fixed point set $\K^G$ is a subcomplex of~$\K$. 
\item After replacing $\K$ by its barycentric subdivision once again, we even get that the quotient space $\polyh{\K}/G$ can be constructed from $\K$ by identifying all faces with their images under the action of~$G$; that is, the equivalence classes of faces of $\K$, with the induced partial order, form a simplicial complex that is homeomorphic to the quotient space $\polyh{\K}/G$. \end{compactenum} \end{Lemma} We leave the proof as an exercise. It is not difficult; for details and further discussion see Bredon \cite[Sect.~III.1]{Bredon-tg}. \medskip A powerful tool on our agenda is Hopf's trace theorem. Let $V$ be a finite-dimensional vector space, or a free abelian group of finite rank. For an endomorphism $g\: V\to V$, the \emph{trace} $\trace(g)$ is the sum of the diagonal elements of a matrix that represents~$g$; it is independent of the basis chosen for~$V$. If $V$ is a free abelian group, then $\trace(g)$ is an integer. \begin{Theorem} [The Hopf trace theorem]\label{t:hopf-trace} Let $f\: \polyh{\K}\to \polyh{\K}$ be a simplicial self-map, and denote by $f_{\#i}$ resp.\ $f_{*i}$ the maps that $f$ induces on $i$-dimensional chain groups resp.\ homology groups. Using an arbitrary field of coefficients $k$, one has \[ \sum_i (-1)^i \trace(f_{\#i})\ \ =\ \ \sum_i (-1)^i \trace(f_{* i}). \] The same identity holds if we use integer coefficients, and compute the traces for homology in the quotients $H_i(\K,\Z)/T_i(\K,\Z)$ of the homology groups modulo their torsion subgroups; these quotients are free abelian groups. \end{Theorem} The proof for this uses the definition of simplicial homology, and simple linear algebra; we refer to Munkres \cite[Thm.~22.1]{Munkres} or Bredon~\cite[Sect.~IV.23]{Bredon-at}. \medskip For an arbitrary coefficient field $k$, we define the \emph{Lefschetz number} of the map~$f\:\polyh{\K}\to \polyh{\K}$ as \[ L_k(f)\assg \sum_i (-1)^i \trace(f_{* i})\ \ \in k.
\] Similarly, taking integral homology modulo torsion, we define the \emph{integral Lefschetz number} as \[ L(f)\assg\sum_i (-1)^i \trace(f_{* i})\ \ \in \Z. \] The universal coefficient theorems imply that one always has $L_\Q(f)=L(f)$: thus the integral Lefschetz number $L(f)$ can be computed in rational homology, but it is an integer. The \emph{Euler characteristic} of a complex $\K$ coincides with the Lefschetz number of the identity map $\id_\K\: \polyh{\K}\to \polyh{\K}$, \[ \chi(\K)\ \ =\ \ L(\id_\K), \mbox{ where } \trace((\id_\K)_{*i})=\beta_i(\K). \] Thus the Hopf trace theorem yields that the Euler characteristic of a finite simplicial complex $\K$ can be defined resp.\ computed without a reference to homology, simply as the alternating sum of the face numbers of the complex~$\K$, where $f_i(\K)$ denotes the number of $i$-dimensional faces of~$\K$: \[ \chi(\K)\assg f_0(\K) - f_1(\K) + f_2(\K) - \cdots . \] This is then a finite sum that ends with $(-1)^d f_d(\K)$ if $\K$ has dimension~$d$. Thus the Hopf trace theorem applied to the identity map just reproduces the Euler--Poincar\'e formula. This proves, for example, the $d$-dimensional Euler polyhedron formula, not only for polytopes, but also for general spheres, shellable or not (see Ziegler \cite{Z-shellballs}). For us the main consequence of the trace formula is the following theorem. \begin{Theorem} [The Lefschetz fixed point theorem]\label{t:LefschetzHopf} Let $\K$ be a finite simplicial complex, and $k$ an arbitrary field. If a self-map $f\: \polyh{\K}\to \polyh{\K}$ has Lefschetz number $L_k(f)\neq0$, then $f$ and any map homotopic to $f$ have a fixed point. In particular, if $\K$ is $\Z_p$-acyclic for some prime $p$, then every continuous map $f\: \polyh{\K}\to \polyh{\K}$ has a fixed point. \end{Theorem} \proofheader{Proof \rm(Sketch)} For a finite simplicial complex $\K$, the polyhedron $\polyh{\K}$ is compact.
So if $f$ does not have a fixed point, then by compactness there is an $\eps>0$ such that $|f(x)-x|>\eps$ for all $x\in \polyh{\K}$. Now take a subdivision into simplices of diameter smaller than $\eps$, and a simplicial approximation of error smaller than $\eps/2$, so that the simplicial approximation does not have a fixed point, either. Now apply the trace theorem, where the induced map $f_{*0}$ in $0$-dimensional homology is the identity. \endproof Note that Brouwer's Fixed Point Theorem~\ref{t:brouwer} is the special case of Theorem~\ref{t:LefschetzHopf} when $\K$ triangulates a ball. For a reasonably large class of spaces, a converse to the Lefschetz Fixed Point Theorem is also true: If $L(f)=0$, then $f$ is homotopic to a map without fixed points. See Brown \cite[Chap.~VIII]{Brown}. \medskip ``Smith Theory'' was started by P.\,A.~Smith \cite{Smith2} in the thirties---we refer to Bredon \cite[Chapter III]{Bredon-tg} for a nice textbook treatment. Smith Theory analyses finite group actions on compact spaces (such as finite simplicial complexes), providing relations between the structure of the group and its possible fixed point sets. Here is one key result. \begin{Theorem}[Smith \cite{Smith3}]\label{t:smith} If $P$ is a $p$-group (that is, a finite group of order $|P|=p^t$ for a prime $p$ and some $t>0$), acting on a complex $\K$ that is $\Z_p$-acyclic, then the fixed point set $\K^P$ is $\Z_p$-acyclic as well. In particular, it is not empty. \end{Theorem} \proofheader{Proof \rm(Sketch)} The key is that, with the preparations of Lemma~\ref{l:fps}, the maps that the group elements induce on the chain groups (with $\Z_p$ coefficients) nicely restrict to the chain groups on the fixed point set $\K^P$. Passing to traces and using the Hopf trace theorem, one can derive that $\K^P$ is non-empty. A more detailed analysis leads to the ``transfer isomorphism'' in homology, which proves that $\K^P$ must be acyclic.
See Bredon \cite[Thm.~III.5.2]{Bredon-tg} and Oliver \cite[p.~157]{Oliver}, and also de Longueville \cite[Appendix D and~E]{deLongueville-book}. \endproof On the combinatorial side, one has an Euler characteristic relation due to Floyd \cite{Floyd} \cite[Sect.~III.4]{Bredon-tg}: \[ \chi(\K)\ +\ (p-1)\chi(\K^{\Z_p})\ \ =\ \ p\,\chi(\K/{\Z_p}). \] It implies that if $P$ is a $p$-group (in particular, if $P=\Z_p$), then \[ \chi(\K^P)\equiv \chi(\K) \pmod p, \] using induction on~$t$, where $|P|=p^t$. \begin{Theorem}[{Oliver \cite[Lemma I]{Oliver}}]\label{t:cyclic-oliver} If $G=\Z_n$ is a cyclic group, acting on a $\Q$-acyclic complex $\K$, then the action has a fixed point. In fact, in this case the fixed point set $\K^G$ has the Euler characteristic of a point, ${\chi}(\K^G)=1$. \end{Theorem} \proof The first statement follows directly from the Lefschetz fixed point theorem: the cyclic group $G$ is generated by a single element $g$; since $\K$ is $\Q$-acyclic, we have $L(g)=1\neq0$, so $g$ has a fixed point; and a fixed point of $g$ is also a fixed point of all powers of $g$, hence of the whole group $G$. For the second part, take $p^t$ to be a maximal prime power that divides $n$, consider the corresponding subgroup isomorphic to $\Z_{p^t}$, and use induction on~$t$ and the transfer homomorphism, as for the previous proof. \endproof Unfortunately, results like these may give an overly optimistic impression of the generality of fixed point theorems for acyclic complexes. There are fixed point free finite group actions on balls: examples were constructed by Floyd \& Richardson and others; see Bredon \cite[Sect.~I.8]{Bredon-tg}. On the positive side we have the following result due to Oliver, which will play a central role in the following section.
\begin{Theorem} [Oliver's Theorem I~{\cite[Prop.~I]{Oliver}}]\label{t:oliver} If $G$ acts on a complex $\K$ that is $\Z_p$-acyclic, and $G$ has a normal subgroup $P\triangleleft G$ that is a $p$-group such that the quotient $G/P$ is cyclic, then the fixed point set $\K^G$ is $\Z_p$-acyclic as well. In particular, it is not empty. \end{Theorem} This is as much as we will need in this chapter. Oliver proved, in fact, a more general and complete theorem that includes a converse. \begin{Theorem}[Oliver's Theorem II~{\cite{Oliver}}]\label{t:oliver2} Let $G$ be a finite group. Every action of~$G$ on a $\Z_p$-acyclic complex $\K$ has a fixed point if and only if $G$ has the following structure: \begin{quote} $G$ has normal subgroups $P\triangleleft Q\triangleleft G$ such that $P$ is a $p$-group, $G/Q$ is a $q$-group (for a prime $q$ that need not be distinct from $p$), and the quotient $Q/P$ is cyclic. \end{quote} In this situation one always has $\chi(\K^G)\equiv 1 \bmod q$. \end{Theorem} \subsection*{Notes} The Lefschetz--Hopf fixed point theorem was announced by Lefschetz for a restricted class of complexes in 1923, with details appearing three years later. The first proof for the general version was by Hopf in 1929. There are generalizations, for example to Absolute Neighborhood Retracts; see Bredon \cite[Cor.~IV.23.5]{Bredon-at} and Brown \cite[Chap.~IIII]{Brown}. For a comprehensive treatment we refer to Brown's book~\cite{Brown}. \newpage \section{Evasiveness} \subsection{A general model} Evasiveness appears in different versions for graphs, digraphs and bipartite graphs. Therefore, we start with a general model that contains and, perhaps, explains them all. \begin{Definition} [Argument complexity of a set system; evasiveness]\label{d:ask} In the following, we are concerned with a fixed, known set system $\FF\sse 2^E$, and with the complexity of deciding whether some set $A\sse E$ is in the set system.
Here our ``model of computation'' is such that \begin{description} \itemsep=-3.5pt \item[\quad given, and known,] is a set system $\FF\sse 2^E$, where $E$ is fixed, $|E|=m$. \item[\quad]On the other hand, there is a \item[\quad fixed, but unknown] subset $A \sse E$. We have to \item[\quad decide] whether $A\in \FF$, using only \item[\quad questions] of the type ``Is $e\in A$?'' \end{description} (It is assumed that we always get correct answers YES or NO. We only count the \emph{number} of questions that are needed in order to reach the correct conclusion: it is assumed that it is not difficult to decide whether $e\in A$. You can assume that some ``oracle'' that knows both $A$ and $\FF$ is answering.) The \emph{argument complexity} $c(\FF)$ of the set system $\FF$ is the number of elements of the ground set $E$ that we have to test in the worst case---with the optimal strategy. Clearly $0\le c(\FF) \le m$. The set system $\FF$ is \emph{trivial} if $c(\FF)=0$: then no questions need to be asked; this can only be the case if $\FF=\{\}$ or if $\FF=2^E$. (Otherwise $\FF$ is \emph{non-trivial}.) The set system $\FF$ is \emph{evasive} if $c(\FF)=m$, that is, if even with an optimal strategy one has to test all the elements of~$E$ in the worst case. \end{Definition} For example, if $\FF=\{\es\}$, then $c(\FF)=m$: if we again and again get the answer NO, then we have to test all the elements to be sure that $A=\es$. So $\FF=\{\es\}$ is an evasive set system: ``being empty'' is an evasive set property. \subsection{Complexity of graph properties} \begin{Definition} [Graph properties]\label{d:gp} For this we consider graphs on a fixed vertex set $V=[n]$. Loops and multiple edges are excluded. 
Thus any graph $G=(V,A)$ is determined by its edge set~$A$, which is a subset of the set $E={[n]\choose2}$ of all $m={n\choose2}$ ``potential edges.'' We identify a \emph{property} $\PP$ of graphs with the family of graphs that have the property~$\PP$, and thus with the set family $\FF(\PP)\sse 2^E$ given by \[ \FF(\PP)\assg\{A\sse E\sep \ ([n],A)\mbox{ has property }\PP\}. \] Furthermore, we will consider only graph properties that are isomorphism invariant; that is, properties of abstract graphs that are preserved under renumbering the vertices. A graph property is \emph{evasive} if the associated set system is evasive, and otherwise it is \emph{non-evasive}. \end{Definition} With the symmetry condition of Definition~\ref{d:gp}, we would accept ``being connected'', ``being planar,'' ``having no isolated vertices,'' and ``having even vertex degrees'' as graph properties. However, ``vertex~$1$ is not isolated,'' ``$123$ is a triangle,'' and ``there are no edges between odd-numbered vertices'' are not graph properties. \begin{Examples}[Graph properties] For the following properties of graphs on $n$ vertices we can easily determine the argument complexity. \begin{description} \item[Having no edge:] Clearly we have to check every single $e\in E$ in order to be sure that it is not contained in $A$, so this property is evasive: its argument complexity is $c(\FF)=m={n\choose2}$. \item[Having at most {\boldmath$k$} edges:] Suppose that the answers we get are YES for the first $k$ questions and NO for all further questions, except possibly the last one. Then before the last question the unknown graph has exactly $k$ edges among the tested pairs, and the final answer decides whether there is a $(k{+}1)$st edge. Assuming that $k<m$, this implies that the property is evasive. Otherwise, for $k\ge m$, the property is trivial. \item[Being connected:] This property is evasive for $n\ge2$. Convince yourself that for any strategy, a sequence of ``bad'' answers can force you to ask all the questions. \item[Being planar:] This property is trivial for $n\le 4$ but evasive for $n\ge5$.
In fact, for $n=5$ one has to ask all the questions (in arbitrary order), and the answer will be $A\in\FF$ unless we get a YES answer for all the questions---including the last one. This is, however, not at all obvious for $n>5$: it was claimed by Hopcroft \& Tarjan \cite{HopcroftTarjan}, and proved by Best, Van Emde Boas \& Lenstra \cite[Example~2]{BEBL} \cite[p.~408]{Bollobas}. \item[A large star:] Let $\PP$ be the property of being a disjoint union of a star $K_{1,n-4}$ and an arbitrary graph on $3$ vertices, and let $\FF$ be the corresponding set system. \[ \input EPS/scorpion1.pstex_t \] Then $c(\FF)<{n\choose2}$ for $n\ge7$. For $n\ge12$ we can easily see this, as follows. Test all the $\lfloor{n\over2}\rfloor\lceil{n\over2}\rceil$ edges $\{i,j\}$ with $i\le\lfloor{n\over2}\rfloor<j$. That way we will find exactly one vertex $k$ with at least $\lfloor{n\over2}\rfloor-3\ge 3$ neighbors (otherwise property $\PP$ cannot be satisfied): that vertex $k$ has to be the center of the star. We test all other edges adjacent to~$k$: we must find that $k$ has exactly $n-4$ neighbors. Thus we have identified three vertices that are not neighbors of~$k$; the edges among those three are irrelevant for property~$\PP$, and at least one of them has not been tested. We test all the remaining edges (other than those among the three exceptional vertices) to check that $([n],A)$ has property~$\PP$. (This property was found by L.~Carter \cite[Example~16]{BEBL}.) \item[Being a scorpion graph:] A \emph{scorpion graph} is an $n$-vertex graph that has one vertex of degree~$1$ adjacent to a vertex of degree $2$ whose other neighbor has degree $n-2$. We leave it as an (instructive!) exercise to check that ``being a scorpion graph'' is not evasive if $n$ is large: in fact, Best, van Emde Boas \& Lenstra \cite[Example~18]{BEBL} \cite[p.~410]{Bollobas} have shown that $c(\FF)\le6n$. \[ \input EPS/scorpion.pstex_t \] \end{description} \end{Examples} {From} these examples it may seem that most ``interesting'' graph properties are evasive.
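For tiny ground sets, the argument complexity $c(\FF)$ can be computed directly from its minimax recursion: $c(\FF)=0$ for trivial $\FF$, and otherwise $c(\FF)=\min_{e\in E}\bigl(1+\max(c(\FF_{e\in A}),\,c(\FF_{e\notin A}))\bigr)$, where the two restricted systems record the YES and NO answers. A brute-force Python sketch (our own toy implementation, feasible only for very small $E$), confirming that ``being connected'' is evasive on $3$ vertices:

```python
from itertools import combinations

# ground set: the three potential edges of a graph on vertices {1, 2, 3}
E = frozenset(frozenset(p) for p in combinations([1, 2, 3], 2))

def connected3(A):
    # on 3 vertices, a graph is connected iff it has at least 2 edges
    return len(A) >= 2

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

F = frozenset(A for A in powerset(E) if connected3(A))

def c(F, E):
    """Minimax recursion for the argument complexity c(F)."""
    if len(F) == 0 or len(F) == 2 ** len(E):   # trivial set system
        return 0
    return min(1 + max(c(frozenset(A - {e} for A in F if e in A), E - {e}),
                       c(frozenset(A for A in F if e not in A), E - {e}))
               for e in E)

assert c(F, E) == 3                        # "being connected" is evasive here
assert c(frozenset([frozenset()]), E) == 3 # "having no edge" is evasive too
assert c(frozenset(powerset(E)), E) == 0   # a trivial property needs no questions
```

The recursion explores every decision tree, so it is exponential; it is meant only to make the definition concrete, not to handle realistic values of $n$.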
In fact, many more examples of evasive graph properties can be found in Bollob\'as \cite[Sect.~VIII.1]{Bollobas}, along with techniques to establish that graph properties are evasive, such as Milner \& Welsh's ``simple strategy'' \cite[p.~406]{Bollobas}. Why is this model of interest? Finite graphs (similarly for digraphs and bipartite graphs) can be represented in different types of \emph{data structures} that are not at all equivalent for algorithmic applications. For example, if a finite graph is given by an adjacency list, then one can decide fast (``in linear time'') whether the graph is planar, e.g.\ using an old algorithm of Hopcroft \& Tarjan \cite{HopcroftTarjan}; see also Mehlhorn \cite[Sect.~IV.10]{Mehlhorn} and \cite{MehlhornMutzel}. Note that such a planar graph has at most $3n-6$ edges (for $n\ge3$). However, assume that a graph is given in terms of its adjacency matrix \[ M(G)\ \ =\ \ \Big(m_{ij}\Big)_{1\le i,j\le n}\ \ \in\ \{0,1\}^{n\times n}, \] where $m_{ij}=1$ means that $\{i,j\}$ is an edge of $G$, and $m_{ij}=0$ says that $\{i,j\}$ is not an edge. Here $G$ is faithfully represented by the set of all $n\choose2$ superdiagonal entries (with $i<j$). Then one possibly has to inspect a large part of the matrix until one has enough information to decide whether the graph in question is planar. In fact, if $\FF\sse 2^E$ is the set system corresponding to all planar graphs, then $c(\FF)$ is exactly the number of superdiagonal matrix entries that every algorithm for planarity testing has to inspect in the worst case. The statement that ``being planar'' is evasive (for $n\ge5$) thus translates into the fact that every planarity testing algorithm that starts from an adjacency matrix needs to read at least ${n\choose2}$ bits of the input, and hence its running time is bounded from below by ${n\choose2}=\Omega(n^2)$.
This means that such an algorithm---such as the one considered by Fisher \cite{Fisher-planar}---cannot run in linear time, and thus cannot be efficient. \begin{Definition} [Digraph properties; bipartite graph properties]~ \begin{compactenum}[(1)] \item For digraph properties we again use the fixed vertex set $V=[n]$. Loops and parallel edges are excluded, but anti-parallel edges are allowed. Thus any digraph $G=(V,A)$ is determined by its arc set~$A$, which is a subset of the set $E'$ of all $m\assg n^2-n$ ``potential arcs'' (corresponding to the off-diagonal entries of an $n\times n$ adjacency matrix). A \emph{digraph property} is a property of digraphs $([n],A)$ that is invariant under relabelling of the vertex set. Equivalently, a digraph property is a family of arc sets $\FF\sse 2^{E'}$ that is symmetric under the action of $\Symm_n$ that acts by renumbering the vertices (and renumbering all arcs correspondingly). A digraph property is \emph{evasive} if the associated set system is evasive, otherwise it is \emph{non-evasive}. \item For bipartite graph properties we use a fixed vertex set $V\uplus W$ with $|V|=m$ and $|W|=n$, and use $E''\assg V\times W$ as the set of potential edges. A \emph{bipartite graph property} is a property of graphs $(V\cup W, A)$ with $A\sse E''$ that is preserved under renumbering the vertices in $V$, and also under permuting the vertices in~$W$. Equivalently, a bipartite graph property on $V\times W$ is a set system $\FF\sse 2^{V\times W}$ that is stable under the action of the automorphism group $\Symm_m\times \Symm_n$ (the first factor permuting $V$, the second permuting~$W$), which acts transitively on~$V\times W$. \end{compactenum} \end{Definition} \begin{Examples}[Digraph properties] For the following digraph properties on $n$ vertices we can determine the argument complexity. \begin{description} \item[Having at most {\boldmath$k$} arcs:] Again, this is clearly evasive with $c(\FF)=m$ if $k<m=n^2-n$, and trivial otherwise.
\item[Having a sink:] A \emph{sink} in a digraph on $n$ vertices is a vertex $k$ for which all arcs going into $k$ are present, but no arc leaves $k$, that is, a vertex of out-degree $\delta^+(k)=0$ and in-degree $\delta^-(k)=n-1$. Let $\FF$ be the set system of all digraphs on $n$ vertices that have a sink. It is easy to see that $c(\FF)\le 3n-4$. In particular, for $n\ge3$ ``having a sink'' is a non-trivial but non-evasive digraph property. In fact, if we test whether $(i,j)\in A$, then either we get the answer YES, and $i$ is not a sink, or we get the answer NO, and $j$ is not a sink. So, by testing arcs between pairs of vertices that ``could be sinks,'' after $n-1$ questions we are down to a single ``candidate sink''~$k$. At this point at least one arc adjacent to~$k$ has been tested. So we need at most $2n-3$ further questions to test whether $k$ is a sink. \end{description} \end{Examples} \begin{Remark} [History: The Aanderaa--Rosenberg Conjecture]\label{ARC} Originally, Arnold L. Rosenberg had conjectured that all non-trivial digraph properties have qua\-dra\-tic argument complexity, that is, that there is a constant $\gamma>0$ such that for all non-trivial properties of digraphs on $n$ vertices one has $c(\FF) \ge \gamma n^2$. However, S. Aanderaa found the counter-example (for digraphs) of ``having a sink'' \cite[Example~15]{BEBL} \cite[p.~372]{RV1}. We have also seen that ``being a scorpion graph'' is a counter-example for graphs. Hence Rosenberg modified the conjecture: at least all \emph{monotone} graph properties, that is, properties that are preserved under deletion of edges, should have quadratic argument complexity. This is the statement of the \emph{Aanderaa--Rosenberg conjecture} \cite{Rosenberg}. Richard Karp considerably sharpened the statement, as follows. \end{Remark} \begin{Conjecture} [Evasiveness conjecture, Karp \cite{Rosenberg}] Every non-trivial monotone graph property or digraph property is evasive.
\end{Conjecture} We will prove this below for graphs and digraphs in the special case when $n$ is a prime power; from this one can derive the Aanderaa--Rosenberg conjecture, with $\gamma\approx{1\over4}$. Similarly, we will prove that non-trivial monotone properties of bipartite graphs on a fixed ground set $V\cup W$ are evasive (without any restriction on $|V|=m$ and $|W|=n$). However, we first return to the more general setting of set systems. \subsection{Decision trees} Any strategy to determine whether an (unknown) set $A$ is contained in a (known) set system~$\FF$---as in Definition~\ref{d:ask}---can be represented in terms of a decision tree of the following form. \begin{Definition}\rm A \emph{decision tree} is a rooted, planar, binary tree whose leaves are labelled ``YES'' or ``NO,'' and whose internal nodes are labelled by questions (here they are of the type ``$e\in A?$''). Its edges are labelled by answers: we will represent them so that the edges labelled ``YES'' point to the right child, and the edges labelled ``NO'' point to the left child. A \emph{decision tree for $\FF\sse 2^E$} is a decision tree such that starting at the root with an arbitrary $A\sse E$, and going to the right resp.\ left child depending on whether the question at an internal node we reach has answer YES or NO, we always reach a leaf that correctly answers the question ``$A\in\FF?$''. \[ \input EPS/inner2.pstex_t \] The root of a decision tree is at \emph{level}~$0$, and the children of a node at level $i$ have level $i+1$. The \emph{depth} of a tree is the greatest $k$ such that the tree has a vertex at level~$k$ (a leaf). We assume (without loss of generality) that the trees we consider correspond to strategies where we never ask the same question twice. A decision tree for $\FF$ is \emph{optimal} if it has the smallest depth among all decision trees for $\FF$; that is, if it leads us to ask the smallest number of questions for the worst possible input.
\end{Definition} Let us consider an explicit example. \[ \input EPS/k3d.pstex_t \] The following figure represents an optimal algorithm for the ``sink'' problem on digraphs with $n=3$ vertices. Here the ground set is $E=\{12,21,13,31,23,32\}$, of size $m=6$. The algorithm first asks, in the root node at level~$0$, whether $12\in A?$. In case the answer is YES (so we know that $1$ is not a sink), it branches to the right, leading to a question node at level~$1$ that asks whether $23\in A?$, etc. In case the answer to the question $12\in A?$ is NO (so we know that $2$ is not a sink), it branches to the left, leading to a question node at level~$1$ that asks whether $13\in A?$, etc. For every possible input $A$ (there are $2^6=64$ different ones), after two questions we have identified a unique ``candidate sink''; after not more than $5$ question nodes one arrives at a leaf node that correctly answers the question whether the graph $(V,A)$ has a sink: YES or NO. (The number of the unique candidate is noted next to each node at level~$2$.) \[\hspace*{-40pt}\input{EPS/tree2.pstex_t}\] For each node (leaf or inner) of level $k$, there are exactly $2^{m-k}$ different inputs that lead to this node. This proves the following lemma. \begin{Lemma} \label{l:evas-char} The following are equivalent: \begin{compactitem}[ $\bullet$] \item $\FF$ is non-evasive. \item The optimal decision trees $T_\FF$ for $\FF$ have depth smaller than~$m$. \item Every leaf of an optimal decision tree $T_\FF$ is reached by at least two distinct inputs. \end{compactitem} \end{Lemma} \begin{Corollary} If $\FF$ is non-evasive, then $|\FF|$ is even. \end{Corollary} This can be used to show, for example, that the directed graph property ``has a directed cycle'' is evasive \cite[Example~4]{BEBL}. Another way to view a (binary) decision tree algorithm is as follows.
In the beginning, we do not know anything about the set $A$, so we can view the collection of possible sets as the complete boolean algebra of all $2^m$ subsets of~$E$. In the first node (at ``level~$0$'') we ask a question of the type ``$e\in A?$''; this induces a subdivision of the boolean algebra into two halves, depending on whether we get answer YES or NO. Each of the halves is an interval of length $m-1$ of the boolean algebra $(2^E,\sse)$. At level~$1$ we ask a new question, depending on the outcome of the first question. Thus we \emph{independently} bisect the two halves of level $0$, getting four pieces of the boolean algebra, all of the same size. \[ \input EPS/boolean5.pstex_t \] This process is iterated. It stops---we do not need to ask a further question---on those parts that contain only sets that are in $\FF$ (this yields a YES-leaf) or only sets not in $\FF$ (corresponding to a NO-leaf). Thus the final result is a special type of partition of the boolean algebra into intervals. Some of them are YES-intervals, containing only sets of~$\FF$; all the others are NO-intervals, containing no sets from~$\FF$. If the property in question is monotone, then the union of the YES-intervals (i.\,e., the set system~$\FF$) forms an \emph{ideal} in the boolean algebra, that is, a ``down-closed'' family that with any set also contains all of its subsets. Let $p_\FF(t)$ be the generating function for the set system $\FF$, that is, the polynomial \[ p_\FF(t)\assg\sum_{A\in\FF} t^{|A|} \ \ =\ \ f_{-1}+tf_0+t^2f_1+t^3f_2+\ldots, \] where $f_i= |\{A\in\FF\sep |A|=i+1\}|$. \begin{Proposition} \[ (1+t)^{m-c(\FF)}\ \ \Big|\ \ p_{\FF}(t). \] \end{Proposition} \proof Consider one interval $\II$ in the partition of $2^E$ that is induced by any optimal algorithm for $\FF$.
If the leaf, at level $k$, corresponding to the interval is reached through a sequence of $k_Y$ YES-answers and $k_N$ NO-answers (with $k_Y+k_N=k$), then this means that there are sets $A_Y\sse E$ with $|A_Y|=k_Y$ and $A_N\sse E$ with $|A_N|=k_N$, such that \[ \II\ \ =\ \ \{A\sse E\sep A_Y\sse A\sse E\sm A_N\}. \] In other words, the interval $\II$ contains all sets that give YES-answers when asked about any of the $k_Y$~elements of~$A_Y$, NO-answers when asked about any of the $k_N$~elements of~$A_N$, while the $m-k_Y-k_N$ elements of $E\sm(A_Y\cup A_N)$ may or may not be contained in~$A$. Thus the interval $\II$ has size $2^{m-k_Y-k_N}$, and its counting polynomial is \[ p_\II(t)\assg\sum_{A\in\II} t^{|A|} \ \ =\ \ t^{k_Y}(1+t)^{m-k_Y-k_N}. \] Now the set system~$\FF$ is the disjoint union of the YES-intervals~$\II$, and we get \[ p_\FF(t)\ \ =\ \ \sum_{\II} p_\II(t), \] where the sum runs over the YES-intervals only. In particular, for an optimal decision tree we have $k_Y+k_N=k\le c(\FF)$ and thus $m-c(\FF)\le m-k_Y-k_N$ at every leaf of level~$k$, which means that all the summands $p_{\II}(t)$ have a common factor of $(1+t)^{m-c(\FF)}$. \endproof \begin{Corollary} \label{c:euler-char} If $\FF$ is non-evasive, then $|\FF^{even}| = |\FF^{odd}|$, that is, \[-f_{-1}+f_0-f_1+f_2\mp\ldots = 0.\] \end{Corollary} \proof Use the preceding Proposition: non-evasiveness means $m-c(\FF)\ge1$, so putting $t=-1$ yields $p_\FF(-1)=0$. \endproof We can now draw the conclusion, based only on simple counting, that most set families are evasive. This cannot of course be used to settle any specific cases, but it can at least make the various evasiveness conjectures seem more plausible. \begin{Corollary} Asymptotically, almost all set families $\FF$ are evasive. \end{Corollary} \proof The number of set families $\FF\subseteq 2^E$ such that $$\# \{A\in \FF \mid \# A \mbox{ odd} \} = \# \{A\in \FF \mid \# A \mbox{ even} \} =k $$ is ${\binom{2^{m-1}}{k}}^2$.
Hence, using Stirling's estimate of factorials, $$ \mbox{ Prob ($\FF$ non-evasive)} \, \le\, \frac{\sum_{k=0}^{2^{m-1}} {\binom{2^{m-1}}{k}}^2}{2^{2^m}}\, = \, \frac{\binom{2^{m}}{2^{m-1}}}{2^{2^m}} \, \sim\, \frac{1}{\sqrt{\pi2^{m-1}}} \rightarrow 0, $$ as $m\rightarrow \infty$. \endproof \begin{Conjecture}[The ``Generalized Aanderaa--Rosenberg Conjecture'', Rivest \& Vuillemin \cite{RV2}] \label{c:GAR} If $\FF\sse 2^E$, with symmetry group $G\sse \Symm_E$ that is transitive on the ground set $E$, and if $\es\in\FF$ but $E\notin\FF$, then $\FF$ is evasive. \end{Conjecture} Note that for this it is \emph{not} assumed that $\FF$ is monotone. However, the assumption that $\es\in\FF$ but $E\notin\FF$ is satisfied neither by ``being a scorpion'' nor by ``having a sink.'' \begin{Proposition}[\rm Rivest \& Vuillemin \cite{RV2}]\label{p:RV} The Generalized Aanderaa--Rosenberg Conjecture~\ref{c:GAR} holds if the size of the ground set is a prime power, $|E|=p^t$. \end{Proposition} \proof Let $\OO$ be any $k$-orbit of $G$, that is, a collection of $k$-sets $\OO\sse\FF$ on which $G$ acts transitively. While every set in $\OO$ contains $k$ elements $e\in E$, we know from transitivity that every element of $E$ is contained in the same number, say $d$, of sets of the orbit~$\OO$. Thus, double-counting the edges of the bipartite graph on the vertex set $E\uplus\OO$ defined by ``$e\in A$'' (displayed in the figure below) we find that $k|\OO|= d|E| = d p^t$. Thus for $0<k<p^t$ we have that $p$ divides $|\OO|$, while $\{\es\}$ is one single ``trivial'' orbit of size~$1$, and $k=p^t$ does not appear, since $E\notin\FF$. Hence we have \[ -f_{-1}+f_0-f_1+f_2\mp\ldots \equiv -1 \bmod p, \] which implies evasiveness by Corollary~\ref{c:euler-char}. \endproof \[ \input EPS/orbit.pstex_t \] \begin{Proposition}[Illies \cite{Illies}]\label{e:illies} The Generalized Aanderaa--Rosenberg Conjecture~\ref{c:GAR} fails for $n=12$.
\end{Proposition} \proof Here is Illies' counterexample: take $E=\{1,2,3,\ldots,12\}$, and let the cyclic group $G=\Z_{12}$ permute the elements of $E$ with the obvious cyclic action. Take $\FF_I \sse 2^E$ to be the following system of sets: \begin{itemize}\itemsep=0pt \item $\es$, so we have $f_{-1}=1$, \item $\{1\}$ and all images under $\Z_{12}$, that is, all singleton sets: $f_0=12$, \item $\{1,4\}$ and $\{1,5\}$ and all images under $\Z_{12}$, so $f_1=12+12=24$, \item $\{1,4,7\}$ and $\{1,5,9\}$ and all their $\Z_{12}$-images, so $f_2=12+4=16$, \item $\{1,4,7,10\}$ and their $\Z_{12}$-images, so $f_3=3$. \end{itemize} An explicit decision tree of depth~$11$ for this $\FF_I$ is given in our figure below. Here the \emph{pseudo-leaf} ``YES(7,10)'' denotes a decision tree where we check all elements $e\in E$ that have not been checked before, other than the elements $7$ and $10$. If none of them is contained in $A$, then the answer is YES (irrespective of whether $7\in A$ or $10\in A$), otherwise the answer is~NO. The fact that two elements need not be checked means that this branch of the decision tree denoted by this ``pseudo-leaf'' does not go beyond depth~$10$. Similarly, a pseudo-leaf of the type ``YES(7)'' represents a subtree of depth~$11$. Thus the following figure completes the proof. Here dots denote subtrees that are analogous to the ones just above. \endproof \[\hspace{-20pt} \input EPS/illies3.pstex_t \] Note that Illies' example is not monotone: for example, we have $\{1,4,7\}\in\FF_I$, but $\{1,7\}\notin\FF_I$. \subsection{Monotone systems} We now concentrate on the case where $\FF$ is closed under taking subsets, that is, $\FF$ is an abstract simplicial complex, which we also denote by $\K\assg\FF$. In this setting, the symmetry group acts on~$\K$ as a group of simplicial automorphisms.
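For monotone $\FF$ the question game takes a particularly clean form: a YES answer to ``$e\in A?$'' passes to the faces of $\K$ that contain $e$ (with $e$ removed, this is the link $\K/e$), a NO answer passes to the faces that avoid $e$ (the deletion $\K\sm e$), and no further questions are needed exactly when the remaining family is void or the full simplex on the remaining ground set. A brute-force sketch of this recursion (hypothetical names, not from the text; the ground set has to be carried along explicitly):

```python
from itertools import combinations

def deletion(K, e):
    """K \\ e: the faces of K that avoid the vertex e."""
    return frozenset(A for A in K if e not in A)

def link(K, e):
    """K / e: the sets A - {e} for the faces A of K that contain e."""
    return frozenset(A - {e} for A in K if e in A)

def nonevasive(K, E):
    """Decide non-evasiveness of a downward-closed family K of frozensets over
    the ground set E: K is non-evasive iff it is trivial (void or the full
    simplex 2^E), or some vertex e has both a non-evasive deletion and a
    non-evasive link on the smaller ground set E - {e}."""
    if not E:
        return False           # no questions left: c(K) = m = 0, evasive
    if not K or frozenset(E) in K:
        return True            # trivial family: c(K) = 0 < m
    return any(nonevasive(deletion(K, e), E - {e}) and
               nonevasive(link(K, e), E - {e})
               for e in E)

def closure(facets):
    """Downward closure of a collection of facets (iterables of vertices)."""
    faces = set()
    for f in map(list, facets):
        for r in range(len(f) + 1):
            faces.update(frozenset(s) for s in combinations(f, r))
    return frozenset(faces)

E = frozenset(range(3))
print(nonevasive(closure([{0, 1}, {0, 2}]), E))   # True: a cone with apex 0
print(nonevasive(frozenset({frozenset()}), E))    # False: "having no edge"
```

Note the base cases: the void family is non-evasive as long as questions remain, while on the empty ground set $c=m=0$, which counts as evasive under the definition $c(\FF)=m$.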
If $\FF$ is a graph or digraph property, then this means that the action of $G$ is transitive on the vertex set $E$ of~$\K$, which corresponds to the edge set of the graph in question. Again we denote the cardinality of the ground set (the vertex set of~$\K$) by $|E|=m$. A complex $\K\sse 2^E$ is \emph{collapsible} if it can be reduced to a one-point complex (equivalently, to a simplex) by steps of the form \[ \K\to \K\sm \{A\in \K\sep A_0\sse A\sse A_1\}, \] where $A_0\sse A_1$ are faces of $\K$ with $\es\neq A_0\neq A_1$, and $A_1$ is the \emph{unique} maximal face of $\K$ that contains $A_0$. Our figure illustrates a sequence of collapses that reduces a $2$-dimensional complex to a point. In each case the face $A_0$ that is contained in a unique maximal face is drawn fattened. \[ \includegraphics[width=12cm]{EPS/collapse.eps} \] \begin{Theorem}\label{t:collapsible} We have the following implications: \\ $\K$ is a cone $\ \Lra\ $ $\K$ is non-evasive $\ \Lra\ $ $\K$ is collapsible $\ \Lra\ $ $\K$ is contractible. \end{Theorem} \proof The first implication is clear: for a cone we don't have to test the apex~$e_0$ in order to see whether a set $A$ is a face of~$\K$, since $A\in \K$ if and only if $A\cup\{e_0\}\in \K$. The third implication is easy topology: one can write down explicit deformation retractions. The middle implication we will derive from the following lemma, which uses the notion of the \emph{link} of a vertex $e$ in a simplicial complex $\K$: this is the complex $\K/e$ formed by all faces $A\in\K$ such that $e\notin A$ but $A\cup\{e\}\in\K$. \begin{Lemma} $\K$ is non-evasive if and only if either $\K$ is a simplex, or it is not a simplex but it has a vertex $e$ such that both the deletion $\K\sm e$ and the link $\K/e$ are non-evasive. \end{Lemma} \proof If no questions need to be asked (that is, if $c(\K)=0$), then $\K$ is a simplex. Otherwise we have some $e$ that corresponds to the first question to be asked by an optimal algorithm.
If one gets a YES answer, then the problem is reduced to the link~$\K/e$, since the faces $B\in \K/e$ correspond to the faces $A=B\cup\{e\}$ of $\K$ for which $e\in A$. In the case of a NO-answer the problem similarly reduces to the deletion~$\K\sm e$. \endproof \proofheader{Proof of Theorem~\ref{t:collapsible} \rm($\K$ is non-evasive $\Lra$ $\K$ is collapsible)} We use induction on $m$, where $m=1$ is clear. If the vertex $e$ is a good first question to ask, then we start with a sequence of collapses of the complex that correspond to a collapsing sequence for the link of~$e$ in~$\K$: this is possible by induction, since the link of $e$ is non-evasive and has at most $m-1$ vertices. (A non-maximal face in the link $\K/e$ that is contained in a unique maximal face provides the same type of face in the complete complex~$\K$.) Thus we can apply collapses to $\K$ until we get that $\K/e=\{\es,\{f\}\}$. Then one further collapsing step (with $A_0=\{e\}$ and $A_1=\{e,f\}$) takes us to the one-point complex. \endproof \subsection{A topological approach} The following trivial lemma provides the step from the topological fixed point theorems for complexes to combinatorial information. \begin{Lemma}\label{l:transitive-fixed} If a (finite) group $G$ acts vertex-transitively on a finite complex~$\K$ with a fixed point, then $\K$ is a simplex. \end{Lemma} \proof If $V\assg\{v_1,\ldots,v_n\}$ is the vertex set of~$\K$, then any point $x\in \K$ has a unique representation of the form \[ x\ \ =\ \ \sum_{i=1}^n \la_i\,v_i , \] with $\la_i\ge0$ and $\sum_{i=1}^n\la_i=1$. If the group action, with \[ gx\ \ =\ \ \sum_{i=1}^n \la_i\,gv_i, \] is transitive, then this means that for every $i,j$ there is some $g\in G$ with $gv_i=v_j$. Furthermore, if $x$ is a fixed point, then we have $gx=x$ for all $g\in G$, and hence we get $\la_i=\la_j$ for all $i,j$. From this we derive $\la_i={1\over n}$ for all~$i$.
Hence we get \[ x\ \ =\ \ {1\over n}\sum_{i=1}^n \,v_i \] and this is a point in~$\K$ only if $\K$ is the complete simplex with vertex set~$V$. \smallskip Alternatively: the fixed point set of any group action is a subcomplex of the barycentric subdivision, by Lemma~\ref{l:fps}. Thus a vertex $x$ of the fixed point complex is the barycenter of a face $A$ of~$\K$. Since $x$ is fixed by the whole group, so is its support, the set $A$. Thus vertex transitivity implies that $A=E$, and $\K=2^E$. \endproof \begin{Theorem} [The Evasiveness Conjecture for prime powers: Kahn, Saks \& Sturtevant \cite{KSS}] All monotone non-trivial graph properties and digraph properties for graphs on a prime power number of vertices $|V|=q=p^t$ are evasive. \end{Theorem} \proof We identify the fixed vertex set $V$ with $GF(q)$. Corresponding to a non-evasive monotone non-trivial graph property we have a non-evasive complex $\K$ on a set $E={V\choose 2}$ of ${q\choose 2}$ vertices. By Theorem~\ref{t:collapsible} $\K$ is collapsible and hence $\Z_p$-acyclic. The symmetry group of $\K$ includes the symmetric group~$\Symm_q$, but we take only the subgroup of all ``affine maps'' \begin{eqnarray*} G & \assg & \{x\mapsto ax+b \sep a,b\in GF(q),\ a\neq0\},\\ \noalign{\noindent and its subgroup} P & \assg & \{x\mapsto x+b \sep b\in GF(q)\} \end{eqnarray*} that permute the vertex set~$V$, and (since we are considering graph properties) extend to an action on the vertex set $E={V\choose 2}$ of~$\K$.
Then we can easily verify the following facts: \begin{compactitem}[ $\bullet$] \item $G$ is doubly transitive on $V$, and hence induces a vertex transitive group of symmetries of the complex $\K$ on the vertex set~$E={V\choose 2}$ (interpret $GF(q)$ as a $1$-dimensional vector space; then any (ordered) pair of distinct points can be mapped to any other such pair by an affine map on the line); \item $P$ is a $p$-group (of order $p^t=q$); \item $P$ is the kernel of the homomorphism that maps $(x\mapsto ax+b)$ to $a\in GF(q)^*$, the multiplicative group of $GF(q)$, and thus a normal subgroup of $G$; \item $G/P\cong GF(q)^*$ is cyclic (this is known from your algebra class). \end{compactitem} Taking these facts together, we have verified all the requirements of Oliver's Theorem~\ref{t:oliver}. Hence $G$ has a fixed point on $\K$, by Lemma~\ref{l:transitive-fixed} $\K$ is a simplex, and hence the corresponding (di)graph property is trivial, a contradiction. \endproof From this one can also deduce---with a lemma due to Kleitman \& Kwiatkowski \cite[Thm.~2]{KleitmanKwiatowski}---that every non-trivial monotone graph property on $n$ vertices has complexity at least $n^2/4+o(n^2) = m/2 + o(m)$. (For the proof see \cite[Thm.~6]{KSS}.) This establishes the Aanderaa--Rosenberg Conjecture~\ref{ARC}. On the other hand, the Evasiveness Conjecture is still an open problem for every $n\ge10$ that is not a prime power. Kahn, Saks \& Sturtevant \cite[Sect.~4]{KSS} report that they verified it for $n=6$. The following treats the bipartite version of the Evasiveness Conjecture. Note that in the case where $mn$ is a prime power it follows from Proposition~\ref{p:RV}. \begin{Theorem} [The Evasiveness Conjecture for bipartite graphs, Yao~\cite{Yao}] All monotone non-trivial bipartite graph properties are evasive. \end{Theorem} \proof The ground set now is $E=V\times W$, where any monotone bipartite graph property is represented by a simplicial complex $\K\sse 2^E$.
An interesting aspect of Yao's proof is that it does not use a vertex transitive group. Arguing by contradiction, assume that the property in question is non-evasive; then $\K$ is collapsible, and hence acyclic, by Theorem~\ref{t:collapsible}. Now let the cyclic group $G\assg \Z_n$ act by cyclically permuting the vertices in $W$, while leaving the vertices in $V$ fixed. The group $G$ satisfies the assumptions of Oliver's Theorem~\ref{t:oliver}, with $P=\{0\}$, and it acts on the acyclic complex $\K$. Thus we get from Oliver's Theorem that the fixed point set $\K^G$ is acyclic. This fixed point set is not a subcomplex of $\K$ (it does not contain any vertices of $\K$), but it is a subcomplex of the order complex $\Delta(\K)$, which is the barycentric subdivision of~$\K$ (Lemma~\ref{l:fps}). The bipartite graphs that are fixed under $G$ are those for which every vertex in $V$ is adjacent to none, or to all, of the vertices in~$W$; thus they are complete bipartite graphs of the type $K_{k,n}$ for suitable $k$. Our figure illustrates this for the case where $m=6$, $n=5$, and $k=3$. \[ \input EPS/Kkn.pstex_t \] Monotonicity now implies that the fixed graphs under $G$ are \emph{all} the complete bipartite graphs of type $K_{k,n}$ with $0\le k\le r$ for some $r$ with $0\le r< m$. (Here $r=m$ is impossible, since then $\K$ would be a simplex, corresponding to a trivial bipartite graph property.) Now we observe that $\K^G$ is the order complex (the barycentric subdivision) of a different complex, namely of the complex whose vertices are the complete bipartite subgraphs $K_{1,n}$, and whose faces are \emph{all} sets of at most $r$ vertices. Thus $\K^G$ is the barycentric subdivision of the $(r-1)$-dimensional skeleton of an $(m-1)$-dimensional simplex. In particular, this space is not acyclic: even its reduced Euler characteristic, which can be computed to be $(-1)^{r-1}{m-1\choose r}$, does not vanish. This contradicts the acyclicity of $\K^G$ and completes the proof.
\endproof \begin{Remark} We have the following sequence of implications: \begin{small}% \[ \mbox{% non-evasive\textsuperscript{(1)} $\Lra$ collapsible\textsuperscript{(2)} $\Lra$ contractible\textsuperscript{(3)} $\Lra$ $\Q$-acyclic\textsuperscript{(4)} $\Lra$ $\chi =1$\textsuperscript{(5)},} \]\end{small}% which corresponds to a sequence of conjectures: \\[2mm] \textbf{Conjecture $(k)$:} \emph{Every vertex-homogeneous simplicial complex with property~$(k)$ is a simplex.} \\[2mm] The above implications show that \begin{small}% \[\mbox{Conj.(5)} \Lra \mbox{Conj.(4)} \Lra \mbox{Conj.(3)} \Lra \mbox{Conj.(2)} \Lra \mbox{Conj.(1)} \Lra \begin{array}{l} \mbox{\small Evasiveness}\\ \mbox{\small Conjecture} \end{array} \]\end{small}% Here Conjecture (5) is \emph{true} for a prime power number of vertices, by Proposition~\ref{p:RV}. However, Conjectures (5) and (4) fail for $n=6$: a counterexample is provided by the six-vertex triangulation of the real projective plane (see \cite[Section~5.8]{Mat-top}). Even Conjectures (3) and possibly (2) fail for $n=60$: a counterexample by Oliver (unpublished), of dimension~$11$, is based on $A_5$; see Lutz~\cite{lutz02:_examp_z}. So, it seems that Conjecture (1)---the monotone version of the Generalized Aanderaa--Rosenberg Conjecture~\ref{c:GAR}---may be the right generality to prove, even though its non-monotone version fails by Proposition~\ref{e:illies}. \end{Remark} \subsection*{Exercises} \begin{enumerate} \item What kind of values of $c(\FF)$ are possible for graph properties of graphs on $n$~vertices? For monotone properties, it is conjectured that one has $c(\FF)\in \{0,m\}$, and this is proved if $n$ is a prime power. In general, it is known that $c(\FF)\ge 2n-4$ unless $c(\FF)=0$, by Bollob\'as \& Eldridge \cite{BollobasEldridge}, see \cite[Sect.~VIII.5]{Bollobas}. \item Show that the digraph property ``has a sink'' has complexity \[ c(\FF_{sink})\le 3(n-1)-\lfloor\log_2(n)\rfloor.
\] Can you also prove that for any non-trivial digraph property one has $c(\FF)\ge c(\FF_{sink})$?\\ (This is stated in Best, van Emde Boas \& Lenstra \cite[p.~17]{BEBL}; there are analogous results by Bollob\'as \& Eldridge \cite{BollobasEldridge} \cite[Sect.~VIII.5]{Bollobas} in a different model for digraphs.) \item Show that if a complex $\K$ corresponds to a non-evasive monotone graph property, then it has a complete $1$-skeleton. \item Give examples of simplicial complexes that are contractible, but not collapsible. (The ``dunce hat'' is a key word for a search in the literature~\ldots) \item Assume that when testing some unknown set $A$ with respect to a set system $\FF$, you always get the answer YES (unless you have already proved that the answer is NO, in which case you wouldn't ask). \begin{itemize} \item[(i)] Show that with this type of answers you \emph{always} need $m$ questions for \emph{any} algorithm (and thus $\FF$ is evasive) if and only if $\FF$ satisfies the following property: \begin{itemize} \item[(*)] for any $e\in A\in\FF$ there is some $f\in E\sm A$ such that $(A\sm\{e\})\cup \{f\}\in\FF$. \end{itemize} \item[(ii)] Show that for $n\ge 5$, the family $\FF$ of edge sets of planar graphs satisfies property (*). \item[(iii)] Give other examples of graph properties that satisfy (*), and are thus evasive. \end{itemize} (This is the ``simple strategy'' of Milner \& Welsh \cite{MilnerWelsh}; see Bollob\'as \cite[p.~406]{Bollobas}.) \item Let $\Delta$ be a vertex-homogeneous simplicial complex with $n$ vertices and Euler characteristic $\chi(\Delta) = -1$. Suppose that $n= p_1^{e_1} \cdots p_k^{e_k}$ is the prime factorization and let $m= \max \{ p_1^{e_1}, \ldots , p_k^{e_k}\}$. Prove that $\dim \Delta \ge m-1.$ \item Let $W^q_n$ be the set of all words of length $n$ in the alphabet $\{1, 2, \ldots , q\}$, $q\ge 2$.
For subsets $\FF \subseteq W^q_n$, let $c(\FF)$ be the least number of inspections of single letters (or rather, positions) that the best algorithm needs in the worst case over $s\in W^q_n$ in order to decide whether ``$s\in \FF?$'' Define the polynomial $$p_{\FF} (x_1, \ldots , x_q)=\sum_{s\in \FF} x_1^{\mu_1} \cdots x_q^{\mu_q},$$ where $\mu_i =\# \{j \mid s_j=i\}$ for $s=s_1 \cdots s_n$. Show that $$(x_1 + \cdots + x_q)^{n-c(\FF)} \ \ \Big|\ \ p_{\FF} (x_1, \ldots , x_q). $$ \end{enumerate} \vfill \subsubsection*{Acknowledgements.} We are grateful to Marie-Sophie Litz for very careful reading and valuable comments on the manuscript. \newpage
https://arxiv.org/abs/1409.7890
Using Brouwer's fixed point theorem
Brouwer's fixed point theorem from 1911 is a basic result in topology - with a wealth of combinatorial and geometric consequences. In these lecture notes we present some of them, related to the game of HEX and to the piercing of multiple intervals. We also sketch stronger theorems, due to Oliver and others, and explain their applications to the fascinating (and still not fully solved) evasiveness problem.
https://arxiv.org/abs/1603.00317
Finite element approximation for the fractional eigenvalue problem
The purpose of this work is to study a finite element method for finding solutions to the eigenvalue problem for the fractional Laplacian. We prove that the discrete eigenvalue problem converges to the continuous one and we show the order of such convergence. Finally, we perform some numerical experiments and compare our results with previous work by other authors.
\section{Introduction and Main Results} Anomalous diffusion phenomena are ubiquitous in nature \cite{Klafter, MetzlerKlafter}, and the study of nonlocal operators has been an active area of research in different branches of mathematics. Such operators arise in applications as image processing~\cite{BuadesColl, GattoHesthaven, GilboaOsher, YifeiZhang}, finance~\cite{CarrHelyette, RamaTankov}, electromagnetic fluids~\cite{McCayNarasimhan}, peridynamics \cite{Silling}, porous media flow~\cite{BensonWheatcraft, CushmanGinn}, among others. \medskip One striking example of a nonlocal operator is the fractional Laplacian $(-\Delta)^s$, defined by \[ (-\Delta)^su(x) \coloneqq 2 C(n,s) \int_{\mathbb{R}^n}\dfrac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy, \quad x\in\mathbb{R}^n. \] Here the integral is understood in the principal value sense and the normalization constant $C(n,s)$ is given by \[ C(n,s) \coloneqq \frac{2^{2s-1} s \Gamma(s+\frac{n}{2})}{\pi^{n/2} \Gamma(1-s)} . \] In the theory of stochastic processes, this operator appears as the infinitesimal generator of a stable L\'evy process, see for instance \cite{Bertoin, Valdinoci}. Moreover, the fractional Laplacian is also one of the simplest examples of a pseudo-differential operator, because its symbol is just $P(\xi)=|\xi|^{2s}.$ \medskip An interesting problem concerning the fractional Laplacian is to find its eigenvalues on bounded domains. Namely, to find a positive number $\lambda$ (eigenvalue) and a function $u\not\equiv0$ (eigenfunction) such that \begin{equation}\label{eq:int.autovalores} \begin{cases} (-\Delta)^s u = \lambda u &\mbox{ in }\Omega, \\ u = 0 &\mbox{ in }\Omega^c=\mathbb{R}^n\setminus\Omega, \end{cases} \end{equation} where $\Omega$ is a bounded smooth domain in $\mathbb{R}^n$ and $s\in(0,1)$. Observe that, due to the fact that pointwise values of $(-\Delta)^s u$ depend on the value of $u$ over the whole space, the ``boundary'' conditions need to be imposed on the complement of $\Omega$. 
Even if $\Omega$ is an interval, it is very challenging to obtain closed analytical expressions for the eigenvalues and eigenfunctions of the fractional Laplacian. This motivates the use of discrete approximations of this problem (see, for example, \cite{DuoZhang, Kwasnicki, ZoiaRossoKardar}); in this work we consider a finite element method. First, we prove that the discrete eigenvalue problem converges to the continuous one. Then, we show the order of convergence for eigenvalues and eigenfunctions, both in the energy norm and in the $L^2$-norm. Finally, we perform some numerical experiments and compare our results with previous work by other authors. These results are in good agreement with our theory. The finite element method is flexible enough to deal with non-convex domains, and enables us to provide estimates and sharp upper bounds for eigenvalues even in this context. Moreover, as a consequence of our numerical experiments in the $L$-shaped domain $\Omega = [-1,1]^2\setminus [0,1]^2$, we conjecture that the first eigenfunction for this domain is as regular as the first one in any smooth domain. \subsection*{Main Results} In order to state our results we need some notation and definitions. The natural functional space for the eigenvalue problem \eqref{eq:int.autovalores} is \[ \widetilde{H}^s (\Omega) \coloneqq \left\{ v \in H^s({\mathbb{R}^n}) \colon \text{ supp } v \subset \bar{\Omega} \right\}, \] where $H^s({\mathbb {R}}^n)$ is the space of all functions $v\in L^2({\mathbb {R}}^n)$ such that \[ |v|_{H^s({\mathbb {R}}^n)}^2 \coloneqq \iint_{{\mathbb {R}}^{2n}} \frac{|v(x)-v(y)|^2}{|x-y|^{n+2s}} \, dx \, dy<\infty. \] Moreover, $\left({\mathbb {V}},\|\cdot\|_{\mathbb {V}}\right)\coloneqq \left(\widetilde{H}^s (\Omega), \sqrt{C(n,s)} \, |\cdot|_{H^s({\mathbb {R}}^n)} \right)$ is a Hilbert space with the inner product \[ \left\langle u, v\right\rangle \coloneqq C(n,s) \iint_{\mathbb{R}^{2n}} \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}} \, dx \, dy.
\] Obviously, the constant $\sqrt{C(n,s)}$ has no effect on the definition of the space; it is included in order to simplify the notation in the rest of the paper. The fractional space $H^s({\mathbb {R}}^n)$ can also be defined for any non-integer $s>1$: if $s = m + \sigma$, where $m \in\mathbb{N}$ and $\sigma \in (0,1)$, then $H^s({\mathbb {R}}^n)$ is the space of all functions $v \in H^m({\mathbb {R}}^n)$ whose weak derivatives of order $m$ belong to $H^{\sigma}({\mathbb {R}}^n)$. For more details, see Section \ref{sec.preliminares}. \medskip In this context, the eigenvalue problem \eqref{eq:int.autovalores} has the following variational formulation: find $\lambda\in (0,+\infty)$ and $u\in {\mathbb {V}} $ such that $u\not\equiv 0$ and \begin{equation} \label{eq:int.autovalores.de.intro} \left\langle u, v \right\rangle = \lambda (u, v) \quad \mbox{ for all } v \in {\mathbb {V}}, \end{equation} where $( \cdot, \cdot )\colon L^2(\Omega) \times L^2(\Omega) \to \mathbb{R}$ is the bilinear form \[ ( u, v) \coloneqq \int_{\Omega} u(x)v(x) \, dx. \] In \cite{MR3002745}, the authors show that there is an infinite sequence of eigenvalues $\{\lambda^{(k)}\}_{k\in\mathbb{N}}$, \[ 0<\lambda^{(1)}< \lambda^{(2)}\leq \cdots \leq \lambda^{(k)}\leq\cdots,\quad \lambda^{(k)}\to\infty \mbox{ as } k\to \infty, \] where the same eigenvalue can be repeated several times according to its multiplicity. The corresponding eigenfunctions $u^{(k)}$ (normalized by $\|u^{(k)}\|_{L^2(\Omega)}=1$) form a complete orthonormal set in $L^2(\Omega).$ See also \cite{MR3089742,ServadeiValdinoci, MR3271254} and the references therein for further details on the fractional eigenvalue problem. \medskip Let us now introduce the discrete space. Let ${\mathcal {T}}_h$ be a family of triangulations of $\Omega$ satisfying \begin{align} & \exists \sigma > 0 \mbox{ s.t.
} h_T \leq \sigma \rho_T, \tag{Regularity} \label{eq:regularity.intro} \end{align} for any element $T \in \mathcal{T}_h$, where $h_T$ is the diameter of $T$ and $\rho_T$ is the radius of the largest ball contained in $T$. This is the only requirement we need to impose on our family of triangulations. We consider continuous piecewise linear functions on $\mathcal{T}_h$, namely $$ {\mathbb {V}}_h \coloneqq \left\{ v \in {\mathbb {V}} \colon v \big|_T \in \mathcal{P}_1(T) \ \forall T \in {\mathcal {T}}_h \right\} . $$ Our Galerkin approximation consists in seeking discrete eigenvalues $\lambda_{h}\in \mathbb{R}$ and $u_h\in{\mathbb {V}}_h$ such that $u_h\not\equiv0$ and \begin{equation} \label{eq:int.autovalores.di.intro} \left\langle u_h, v\right\rangle=\lambda_h(u_h,v)\quad\forall v\in{\mathbb {V}}_h. \end{equation} We can order the discrete eigenvalues of \eqref{eq:int.autovalores.di.intro} as follows: \[ 0<\lambda_h^{(1)}\leq \lambda_h^{(2)}\leq \cdots \leq \lambda_h^{(k)}\leq \cdots \le\lambda_h^{(\dim {\mathbb {V}}_h )}, \] where the same eigenvalue is repeated according to its multiplicity. The corresponding eigenfunctions $u^{(k)}_h$ (normalized by $\|u^{(k)}_h\|_{L^2(\Omega)}=1$) form an orthonormal set in $L^2(\Omega).$ \medskip The purpose of this work is to prove the convergence of this discrete problem to the continuous one. To this end, we provide a Sobolev regularity result. Namely, using the regularity results of Grubb \cite{Grubb} and a bootstrap argument, we prove the following: \begin{proposition}\label{prop:regintro} Let $u$ be an eigenfunction of $(-\Delta)^s$ in $\Omega$ with homogeneous Dirichlet boundary conditions. Then, $u \in \widetilde{H}^{s+\nicefrac12-\varepsilon}(\Omega)$ for any $\varepsilon > 0.$ \end{proposition} Another important tool is given by the approximation properties of $\Pi_h \colon {\mathbb {V}} \to {\mathbb {V}}_h$, the projection with respect to the $\| \cdot \|_{{\mathbb {V}}}$ norm.
To be specific, \begin{proposition}\label{estimacionlinf0} Let $u$ be an eigenfunction of \eqref{eq:int.autovalores.de.intro}. Then, for any $\varepsilon>0$ there exists a positive constant $C$ independent of $h$ such that \begin{equation} \label{estimacionlinf1} \|u-\Pi_h u\|_{{\mathbb {V}}} \le C h^{\nicefrac12-\varepsilon}|u|_{H^{s+\nicefrac12-\varepsilon}(\Omega)}, \end{equation} and \begin{equation}\label{eq:aproxL2} \|u-\Pi_h u\|_{L^2(\Omega)} \le C h^{\nicefrac12+\alpha-\varepsilon} |u|_{H^{s+\nicefrac12-\varepsilon}(\Omega)}. \end{equation} Here, $\alpha = s$ if $s < \nicefrac12$ and $\alpha = \nicefrac12 - \varepsilon$ if $s\ge \nicefrac12.$ \end{proposition} Our first convergence result is stated in terms of a ``gap'' distance: we prove that the discrete eigenvalue problem \eqref{eq:int.autovalores.di.intro} converges to the continuous one \eqref{eq:int.autovalores.de.intro}, following the definition of convergence given in \cite{Kato,Boffia} (see also \cite{Boffi}). More precisely, we define the gap between subspaces $E, F$ of a Hilbert space $H$ by \[ \delta(E,F)=\sup_{u\in E,\|u\|_{H}=1} \inf_{v\in F} \|u-v\|_{H}, \quad \hat{\delta}(E,F)=\max(\delta(E,F),\delta(F,E)). \] Then, taking $H={\mathbb {V}},$ we say that the discrete eigenvalue problem \eqref{eq:int.autovalores.di.intro} converges to the continuous one \eqref{eq:int.autovalores.de.intro} if, for any $\varepsilon>0$ and $k>0$, there is $h_0>0$ such that \[ \max_{1\leq i \leq m(k)} |\lambda^{(i)}-\lambda_h^{(i)}| \leq \varepsilon,\qquad \hat{\delta}\left( \bigoplus_{i=1}^{m(k)} E^{(i)}, \bigoplus_{i=1}^{m(k)} E_h^{(i)}\right)\leq \varepsilon, \] for all $h<h_0$. Here, $m(k)$ is the dimension of the space spanned by the first $k$ distinct eigenspaces, and $E^{(i)}$ and $E^{(i)}_h$ are the eigenspace and the discrete eigenspace associated with $\lambda^{(i)}$ and $\lambda^{(i)}_h,$ respectively.
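In finite dimensions, the gap $\hat{\delta}(E,F)$ is directly computable: if the columns of two matrices form orthonormal bases of $E$ and $F$, then $\delta(E,F)$ is the spectral norm of $(I-P_F)$ applied to the basis of $E$. The following numpy sketch (our own illustration, not part of the method above) verifies this on two lines in the plane, where the gap equals the sine of the angle between them.

```python
import numpy as np

def subspace_gap(E, F):
    """delta(E, F) = sup over unit u in span(E) of dist(u, span(F)),
    for subspaces given by basis matrices E, F (columns span the subspace)."""
    QE, _ = np.linalg.qr(E)  # orthonormal basis of span(E)
    QF, _ = np.linalg.qr(F)  # orthonormal basis of span(F)
    # (I - P_F) applied to the basis of E; the gap is the largest singular value.
    R = QE - QF @ (QF.T @ QE)
    return np.linalg.norm(R, 2)

def gap(E, F):
    """Symmetrized gap hat-delta(E, F)."""
    return max(subspace_gap(E, F), subspace_gap(F, E))

theta = 0.3
E = np.array([[1.0], [0.0]])
F = np.array([[np.cos(theta)], [np.sin(theta)]])
print(gap(E, F))  # sin(0.3), about 0.2955
```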
\begin{theorem}\label{teo:gap} The discrete eigenvalue problem \eqref{eq:int.autovalores.di.intro} converges to the continuous one \eqref{eq:int.autovalores.de.intro}. \end{theorem} Having established the convergence of the discrete problem to the continuous one, we aim to study the order of such convergence. Our procedure is in the same spirit as in \cite{Boffi}, and we prove the following results. \begin{theorem}\label{teo:ordensimple} Let $(u^{(k)}, \lambda^{(k)})$ be an eigenpair of multiplicity one. For any $\varepsilon>0,$ there is a positive constant $C$ independent of $h$ such that \[ \|u^{(k)}-u^{(k)}_h\|_{L^2(\Omega)}\leq C h^{\alpha+\nicefrac12-\varepsilon}, \quad 0\leq \lambda_h^{(k)}-\lambda^{(k)}\leq C h^{1-\varepsilon}, \] and \[ \|u^{(k)}-u^{(k)}_h\|_{{\mathbb {V}}}\leq C h^{\nicefrac12-\varepsilon}. \] Here $u_h^{(k)}$ is chosen in such a way that $\left(\Pi_h u^{(k)}, u_h^{(k)}\right) \ge 0$ and $\alpha$ is given by Proposition \ref{estimacionlinf0}. \end{theorem} \begin{theorem}\label{teo:ordenmultiple} Let $\lambda^{(k)}$ be an eigenvalue of multiplicity $m$ (that is, $\lambda^{(k)}=\lambda^{(k+1)}=\cdots=\lambda^{(k+m-1)}$ and $\lambda^{(i)} \neq \lambda ^{(k)}$ for $i\notin\{k,\dots,k+m-1\}$). For any $\varepsilon>0,$ there exists a positive constant $C$ independent of $h$ such that \[ 0\leq \lambda_h^{(j)}-\lambda^{(k)}\leq C h^{1-\varepsilon} \quad \forall k\leq j\leq k+m-1. \] Moreover, if $u^{(k)}$ is an eigenfunction associated with $\lambda^{(k)}$, there is a family \[ \{ w_h^{(k)} \} \subset E_h^{(k)} \oplus \ldots \oplus E_h^{(k+m-1)} \] such that \[ \|u^{(k)}-w^{(k)}_h\|_{L^2(\Omega)}\leq C h^{\alpha+\nicefrac12-\varepsilon} \] and \[ \|u^{(k)}-w^{(k)}_h\|_{{\mathbb {V}}}\leq C h^{\nicefrac12-\varepsilon}, \] where $\alpha$ is given by Proposition \ref{estimacionlinf0}. \end{theorem} Propositions \ref{prop:regintro} and \ref{estimacionlinf0} and Theorem \ref{teo:gap} are the key ingredients in the proof of the above two theorems.
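In practice, the rates predicted by the two theorems above can be checked from computed errors at successive mesh sizes via the empirical order $\log(e_1/e_2)/\log(h_1/h_2)$. A minimal Python sketch of this computation (the error values below are synthetic, chosen only to illustrate the formula, not taken from our experiments):

```python
from math import log

def empirical_order(h1, e1, h2, e2):
    """Empirical order of convergence from errors e1, e2 at mesh sizes h1, h2."""
    return log(e1 / e2) / log(h1 / h2)

# Synthetic errors behaving exactly like C * h^(1/2), as in the energy-norm estimate.
hs = [0.1, 0.05, 0.025]
errs = [3.0 * h ** 0.5 for h in hs]
orders = [empirical_order(hs[i], errs[i], hs[i + 1], errs[i + 1]) for i in range(2)]
print(orders)  # each close to 0.5
```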
Let us mention at this point that in \cite{Boffi}, where the author studies the eigenvalue problem for the Laplacian, the order of convergence of the eigenvalues is proved first, using the min-max formulation and estimate \eqref{estimacionlinf1}. If we followed the sketch of that proof in the fractional Laplacian case, we would obtain order $\nicefrac12+s$ when $s<\nicefrac12$, which is less than $1$. So, in order to obtain the optimal order, we first need to prove the convergence of the eigenvalues and afterwards the order of convergence of the eigenfunctions. In this manner, using both results and estimate \eqref{estimacionlinf1}, we obtain the optimal order for the eigenvalues for any $0<s<1$. \subsection*{Organization of the paper} Section \ref{sec.preliminares} collects the notation we employ, and reviews some previous works on the problem under consideration. In particular, regularity of eigenfunctions is proved. The section concludes with a discussion of certain aspects of finite element approximations of the fractional Laplacian. Afterwards, Section \ref{sec.conv} deals with the convergence of the discrete eigenvalue problem to the continuous one in the gap distance. In Sections \ref{sec.orden} and \ref{sec.orden_multiple} the study of the order of convergence is developed, first for simple eigenvalues and later for eigenspaces of dimension greater than one. Finally, numerical experiments are discussed in Section \ref{sec:numerico}. \subsection*{Acknowledgments} The authors would like to thank G. Grubb and R. Rodr\'iguez for the valuable help they provided through clarifying discussions on the topic of this paper, and F. Bersetche for improving the efficiency of the Matlab code employed. \section{Preliminaries and Definitions} \label{sec.preliminares} In this section we review the basic aspects of the problem under consideration. First, we set notation regarding Sobolev spaces.
Afterwards, we analyze theoretical properties of the eigenvalue problem \eqref{eq:int.autovalores}. Regularity results for weak solutions of fractional Laplace equations are recalled next. The section concludes with the introduction of the finite element spaces we work with. \subsection{Sobolev spaces} Given an open set $\Omega \subset {\mathbb{R}^n}$ and $s \in(0,1)$, define the fractional Sobolev space $H^s(\Omega)$ as \[ H^s(\Omega) \coloneqq \left\{ v \in L^2(\Omega) \colon |v|_{H^s(\Omega)} < \infty \right\}, \] where $|\cdot|_{H^s(\Omega)}$ is the seminorm \[ |v|_{H^s(\Omega)}^2 \coloneqq \iint_{\Omega^2} \frac{|v(x)-v(y)|^2}{|x-y|^{n+2s}} \, dx \, dy. \] Obviously, $H^s(\Omega)$ is a Hilbert space endowed with the norm $\|\cdot\|_{H^s(\Omega)} = \left( \|\cdot\|_{L^2(\Omega)}^2 + |\cdot|_{H^s(\Omega)}^2 \right)^{\nicefrac12}.$ If $s>1$ and it is not an integer, the decomposition $s = m + \sigma$, where $m \in \mathbb{N}$ and $\sigma \in (0,1)$, allows us to define $H^s(\Omega)$ by setting \[ H^s(\Omega) \coloneqq \left\{ v \in H^m(\Omega) \colon |D^\alpha v|_{H^{\sigma}(\Omega)} < \infty \text{ for all } \alpha \text{ such that } |\alpha| = m \right\}. \] Let us also define the space of functions supported in $\Omega$, \[ \widetilde{H}^s (\Omega) \coloneqq \left\{ v \in H^s({\mathbb{R}^n}) \colon \text{ supp } v \subset \bar{\Omega} \right\}. \] For $0\le s \le 1$, this space may be defined through interpolation, \[ \widetilde{H}^s (\Omega) = \left[L^2(\Omega), H^1_0(\Omega) \right]_{s}. \] Moreover, depending on the value of $s$, different characterizations of this space are available (see, for example, \cite[Chapter 11]{LionsMagenes}). If $s<\nicefrac12$ the space $\widetilde{H}^s (\Omega)$ coincides with $H^s (\Omega)$, and if $s>\nicefrac12$ it may be characterized as the closure of $C^\infty_0(\Omega)$ with respect to the $|\cdot|_{H^s (\Omega)}$ norm. In the latter case, it is also customary to denote it by $H^s_0(\Omega)$.
The particular case $s=\nicefrac12$ gives rise to the Lions-Magenes space $H^{\nicefrac12}_{00}(\Omega)$, which can be characterized by \[ H^{\nicefrac12}_{00} (\Omega) \coloneqq \left\{ v \in H^{\nicefrac12}(\Omega) \colon \int_\Omega \frac{v(x)^2}{\text{dist}(x,\partial \Omega)} \, dx < \infty \right\}. \] Note that the inclusions $H^{\nicefrac12}_{00}(\Omega) \subset H^{\nicefrac12}_{0}(\Omega) \subset H^{\nicefrac12}(\Omega)$ are strict. It is apparent that $\left\langle \cdot, \cdot\right\rangle$ defines an inner product on $\widetilde{H}^s(\Omega).$ In addition, the norm induced by it, which is just the $H^s({\mathbb{R}^n})$ seminorm, is equivalent to the full $H^s({\mathbb{R}^n})$ norm on this space, because of the following well-known result. \begin{proposition}[Poincar\'e inequality] There is a constant $c=c(\Omega,n,s)$ such that \begin{equation} \label{eq:poincare} \| v \|_{L^2(\Omega)} \leq c |v|_{H^s({\mathbb{R}^n})} \quad \forall v \in \widetilde{H}^s(\Omega) . \end{equation} \end{proposition} \subsection{Eigenvalue Problem} In the sequel, we will work within the Hilbert space $$ ({\mathbb {V}}, \| \cdot \|_{\mathbb {V}}) \coloneqq (\widetilde{H}^s(\Omega), \sqrt{C(n,s)} \, |\cdot|_{H^s({\mathbb{R}^n})}).
$$ In \cite{MR3002745}, the authors prove that for any $k\in\mathbb{N}$ the eigenvalues of \eqref{eq:int.autovalores.de.intro} can be characterized as follows: \[ \lambda^{(k)}= \min\left\{\dfrac{\|u\|_{{\mathbb {V}}}^2}{\|u\|_{L^2(\Omega)}^2} \colon u\in {\mathbb {V}}^{(k)}\setminus \{0\} \right\}, \] where ${\mathbb {V}}^{(1)}={\mathbb {V}}$ and \[ {\mathbb {V}}^{(k)}\coloneqq\left\{u\in{\mathbb {V}}\colon \left\langle u,u^{(j)}\right\rangle=0\quad\forall j=1,\dots,k-1\right\} \] for all $k\ge2.$ Therefore, by the min-max theorem, \[ \lambda^{(k)} = \min_{E \in S^{(k)}} \max_{u \in E} \frac{\| u \|^2_{\mathbb {V}}}{\| u \|^2_{L^2(\Omega)}} \] where $S^{(k)}$ denotes the set of all $k$-dimensional subspaces of ${\mathbb {V}}.$ In \cite{MR3002745}, the authors show that $\lambda^{(1)}$ is simple; see also \cite{2014arXiv1412.4722D,MR3148135}. Moreover, in \cite{MR3307955} it is proved that $u^{(k)}\in L^{\infty}(\Omega)$ for all $k\in\mathbb{N}.$ Therefore, by \cite[Proposition 1.1]{RosOtonSerra}, $u^{(k)}\in C^s({\mathbb {R}}^n)$ for all $k\in\mathbb{N}.$ \subsection{Regularity results} Given a function $f \in H^r(\Omega)$ ($r \ge -s$), let us consider the homogeneous Dirichlet problem for the fractional Laplacian, \begin{equation}\label{eq:fuente} \begin{cases} (-\Delta)^s u = f &\mbox{ in }\Omega, \\ u = 0 &\mbox{ in }\Omega^c . \end{cases} \end{equation} Existence and uniqueness of a weak solution $u \in \widetilde{H}^s(\Omega)$ of the above equation is an immediate consequence of the Lax-Milgram lemma. Following Grubb \cite{Grubb}, it is possible to obtain Sobolev regularity results for the solution. In that paper, the author deals with H\"ormander $\mu$-spaces $H^{\mu(s)}_p$; see that work for a definition and further details.
The following result is a particular case of Theorem 7.1 therein: \begin{theorem}\label{teo:Grubb} Let $\ell>s-\nicefrac12$, let $f\in H^{\ell-2s} (\Omega)$, and let $u \in \widetilde{H}^{\sigma}(\Omega)$ for some $\sigma>s-\nicefrac12$ be a solution of \eqref{eq:fuente}. Then, it holds that $u \in H^{s(\ell)}(\Omega)$. \end{theorem} In particular, considering $\ell = r+2s$ in the previous theorem and taking into account that \[ H^{s(r+2s)}(\overline \Omega) \begin{cases} = \widetilde{H}^{2s+r}(\Omega) &\mbox{ if } 0 < s+r < \nicefrac12, \\ \subset \widetilde{H}^{s+\nicefrac12-\varepsilon}(\Omega) \ \forall \varepsilon > 0, &\mbox{ if } \nicefrac12 \le s+r < 1, \end{cases} \] (see \cite[Theorem 5.4]{Grubb}), we obtain the following: \begin{proposition}\label{prop:regHr} Let $f\in H^r(\Omega)$ for $r\geq -s$ and $u\in \widetilde{H}^s(\Omega)$ be the solution of the Dirichlet problem \eqref{eq:fuente}. Then, the following regularity estimate holds: $$ |u|_{H^{s+\alpha}({\mathbb{R}^n})} \leq C(n, \alpha) \|f\|_{H^r(\Omega)}. $$ Here, $\alpha = s+r$ if $s+r < \nicefrac12$ and $\alpha = \nicefrac12 - \varepsilon$ if $s+r \ge \nicefrac12$, with $\varepsilon > 0$ arbitrarily small. \end{proposition} \begin{remark} Assuming further Sobolev regularity on the right-hand side function does not imply that the solution will be any smoother than what is given by the previous proposition. Indeed, if $f \in H^r(\Omega)$, then Theorem \ref{teo:Grubb} gives $u \in H^{s(r+2s)}(\Omega)$, which cannot be embedded in any space sharper than $H^{s+\nicefrac12-\varepsilon}(\Omega)$ if $r+s\ge \nicefrac12$. Moreover, regularity estimates for eigenfunctions of the fractional Laplacian are derived in \cite{Grubb_autovalores, MR3294242}. These estimates are formulated in terms of H\"older norms.
Letting $d$ be a smooth function that behaves like $\text{dist}(x,\partial \Omega)$ near the boundary of $\Omega$, it is shown that any eigenfunction $u$ of \eqref{eq:int.autovalores} lies in the space $d^sC^{2s(-\varepsilon)}(\Omega)$, where the $\varepsilon$ is active only if $s=\nicefrac12$, and that $\nicefrac{u}{d^s}$ does not vanish near $\partial\Omega$. This shows that no further regularity than $H^{s+\nicefrac12-\varepsilon}(\Omega)$ should be expected for eigenfunctions. \end{remark} Sobolev regularity of eigenfunctions is a consequence of Proposition \ref{prop:regHr}. \begin{proof}[Proof of Proposition \ref{prop:regintro}] Since $f=\lambda u\in H^s(\Omega)$, the case $s\ge \nicefrac14$ is a direct consequence of Proposition \ref{prop:regHr}. When $s<\nicefrac14$, the result follows from a bootstrap argument. \end{proof} \begin{remark} Since in Grubb's work the domain $\Omega$ is required to be smooth, we have assumed that hypothesis in the two previous propositions. However, numerical evidence indicates that this assumption is not necessary (see Subsection \ref{sub:2d}). Observe that the theory of Section \ref{sec.conv} does not require the smoothness of $\Omega$, and that in Sections \ref{sec.orden} and \ref{sec.orden_multiple} it is required only in order to apply Theorem \ref{teo:Grubb}. \end{remark} \subsection{Finite element approximations} Let ${\mathcal {T}}_h$ be a family of shape-regular triangulations of $\Omega$ (see the Introduction). Observe that for all $h>0$ the discrete space ${\mathbb {V}}_h$ is a subset of the continuous one.
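In matrix form, the discrete problem \eqref{eq:int.autovalores.di.intro} is a symmetric generalized eigenvalue problem $K\mathbf{u}=\lambda_h M\mathbf{u}$, where $K_{ij}=\left\langle \varphi_j,\varphi_i\right\rangle$ and $M_{ij}=(\varphi_j,\varphi_i)$ for the nodal basis $\{\varphi_i\}$ of ${\mathbb {V}}_h$. Assembling $K$ for the fractional kernel requires the quadrature techniques of \cite{AcostaBorthagaray}; as a self-contained toy we therefore use the classical $1$D Laplacian on $(0,1)$ with $\mathcal{P}_1$ elements (a stand-in, not the fractional stiffness matrix), which already exhibits the one-sided approximation $\lambda^{(k)}\le\lambda_h^{(k)}$ noted in Remark \ref{re:compara} below.

```python
import numpy as np

# P1 finite elements for -u'' = lambda * u on (0,1), u(0) = u(1) = 0
# (a classical toy standing in for the fractional stiffness matrix).
n = 50
h = 1.0 / n
m = n - 1  # interior nodes
K = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h          # stiffness matrix
M = (4 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)) * h / 6.0    # consistent mass matrix

# Reduce K u = lam M u to a standard symmetric problem via Cholesky M = L L^T.
L = np.linalg.cholesky(M)
A = np.linalg.solve(L, np.linalg.solve(L, K).T).T   # A = L^{-1} K L^{-T}
lam_h = np.sort(np.linalg.eigvalsh(A))

# Exact first eigenvalue is pi^2; the conforming Galerkin method
# approximates every eigenvalue from above.
print(lam_h[0], np.pi ** 2)
```

The same Cholesky reduction applies verbatim once the fractional stiffness matrix is assembled.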
We have the analogous min-max characterization for eigenvalues of the discrete problem, \[ \lambda_h^{(k)} = \min_{E \in S_h^{(k)}} \max_{u \in E} \frac{\| u \|^2_{\mathbb {V}}}{\| u \|^2_{L^2(\Omega)}}, \] where $S_h^{(k)}$ denotes the set of all $k$-dimensional subspaces of ${\mathbb {V}}_h.$ \medskip \begin{remark}\label{re:compara} It follows from ${\mathbb {V}}_h \subset {\mathbb {V}}$ that \[ \lambda^{(k)}\leq \lambda ^{(k)}_h . \] \end{remark} In the remainder of this section, we analyze approximation properties of finite element spaces. Consider a quasi-interpolation operator $I_h\colon H^l(\Omega) \to {\mathbb {V}}_h $ satisfying the following estimate: there exists $C>0$ such that for any $w\in H^{l}(\Omega)$, \begin{equation} \label{estiminterpol} \|w - I_h w\|_{{\mathbb {V}}} \le C h^{l-s} |w|_{H^{l}(\Omega)} . \end{equation} Quasi-interpolation operators were introduced in \cite{clement} (see also \cite{ScottZhang}), and an estimate like \eqref{estiminterpol} is derived, for example, in \cite{AcostaBorthagaray}. Given $u \in {\mathbb {V}}$, recall that by $\Pi_h u$ we denote the projection with respect to the $\|\cdot\|_{\mathbb {V}}$ norm. Clearly, this is the only function in ${\mathbb {V}}_h$ such that the Galerkin orthogonality \[ \left\langle u - \Pi_h u , \, v_h \right\rangle = 0 \quad\forall v_h \in {\mathbb {V}}_h \] holds, or equivalently, \begin{equation}\label{eq:cea} \| u - \Pi_h u \|_{{\mathbb {V}}} = \inf_{v_h \in {\mathbb {V}}_h} \|u - v_h \|_{{\mathbb {V}}}. \end{equation} Observe that if $u$ is the solution of \eqref{eq:fuente}, then $\Pi_h u$ is the solution of the corresponding discrete problem on ${\mathbb {V}}_h$. Now, we can prove our estimates for the error. \begin{proof}[Proof of Proposition \ref{estimacionlinf0}] The first inequality follows by combining estimate \eqref{eq:cea}, the interpolation estimate \eqref{estiminterpol}, and Proposition \ref{prop:regintro}.
The second one follows from the following Aubin--Nitsche argument. Let $w\in{\mathbb {V}}$ be the weak solution of the boundary value problem \[ \begin{cases} (-\Delta)^sw=u-\Pi_h u &\mbox{ in }\Omega,\\ w=0&\mbox{ in }\Omega^c.\\ \end{cases} \] Then, resorting to Galerkin orthogonality we obtain \[ \|u-\Pi_h u\|_{L^2(\Omega)}^2 = \left\langle w, u-\Pi_h u \right\rangle = \left\langle w - I_h w, u-\Pi_h u \right\rangle \leq \|w - I_h w\|_{{\mathbb {V}}} \|u-\Pi_h u\|_{{\mathbb {V}}}, \] where $I_h w \in {\mathbb {V}}_h$ is the quasi-interpolant of $w.$ Taking into account the regularity given by Proposition \ref{prop:regHr} with $r=0$, interpolation estimate \eqref{estiminterpol} gives \[ \|w - I_h w\|_{{\mathbb {V}}} \le C h^{\alpha} |w|_{H^{s+\alpha}(\Omega)} \le C h^{\alpha} \|u-\Pi_h u\|_{L^2(\Omega)}. \] Finally, using the error estimate \eqref{estimacionlinf1} we obtain \begin{equation}\label{estimacionlinf} \|u-\Pi_h u\|_{L^2(\Omega)}^2 \le C h^{\nicefrac12+\alpha-\varepsilon}|u|_{H^{s+\nicefrac12-\varepsilon}(\Omega)}\|u-\Pi_h u\|_{L^2(\Omega)}, \end{equation} and then estimate \eqref{eq:aproxL2} follows. \end{proof} \section{Convergence of eigenvalues and eigenfunctions in gap distance} \label{sec.conv} In this section we prove Theorem \ref{teo:gap}, stating that the discrete eigenvalue problem \eqref{eq:int.autovalores.di.intro} converges to the continuous one \eqref{eq:int.autovalores.de.intro} in the gap distance (recall the definition of convergence given in the introduction). Let us start by defining the solution operators of the continuous and discrete problems, $T:L^2(\Omega)\to {\mathbb {V}}$ and $T_h:L^2(\Omega)\to {\mathbb {V}}_h$. Given $f\in L^2(\Omega)$, we define $Tf\in {\mathbb {V}}$ as the unique solution of \begin{equation}\label{ecuT} \left\langle Tf,v \right\rangle =( f, v) \quad \forall v\in {\mathbb {V}}, \end{equation} and $T_h f\in {\mathbb {V}}_h$ as the unique solution of \[ \left\langle T_h f,v_h \right\rangle =(f,v_h) \quad \forall v_h\in {\mathbb {V}}_h.
\] Observe that if $(u,\lambda)$ is an eigenpair, then $T(\lambda u) = u$ and $T_h (\lambda u) = \Pi_h u$. To prove Theorem \ref{teo:gap}, by \cite[Proposition 7.4 and Remark 7.5]{Boffi}, we only need to show that the operators $T$ and $T_h$ are compact and \[ \| T- T_h\|_{\mathcal{L}(L^2(\Omega),{\mathbb {V}})} \to 0 \mbox{ as } h \to 0. \] \medskip \begin{lemma}\label{compacto} The operators $T$ and $T_h$ are compact. \end{lemma} \begin{proof} Let $\{f_k\}_{k\in\mathbb{N}}$ be a bounded sequence in $L^2(\Omega).$ Then, there exists a subsequence of $\{f_k\}_{k\in\mathbb{N}}$ (still denoted by $\{f_k\}$) and $f\in L^2(\Omega)$ such that $f_k \rightharpoonup f$ weakly in $L^2(\Omega).$ Taking $v=T f_k$ in \eqref{ecuT}, we get \[ \|T f_k\|^2_{{\mathbb {V}}}=(f_k, T f_k)\leq C \|T f_k\|_{L^2(\Omega)} \quad\forall k\in\mathbb{N}. \] Therefore, by the Poincar\'e inequality \eqref{eq:poincare}, we have that $\{T f_k\}_{k\in\mathbb{N}}$ is bounded in ${\mathbb {V}}$. Thus, there exists a subsequence of $\{f_k\}_{k\in\mathbb{N}}$ (still denoted by $\{f_k\}_{k\in\mathbb{N}}$) and $u\in{\mathbb {V}}$ such that $T f_k \rightharpoonup u$ weakly in ${\mathbb {V}}.$ Hence \[ \left\langle u,v \right\rangle =\lim_{k\to\infty} \left\langle Tf_k,v \right\rangle =\lim_{k\to\infty}( f_k, v) =( f, v)\quad\forall v\in{\mathbb {V}}, \] that is, $u=Tf.$ On the other hand, since the inclusion ${\mathbb {V}}\hookrightarrow L^2({\mathbb {R}}^n)$ is compact, passing if necessary to a subsequence we may assume \begin{align*} T f_k \rightharpoonup Tf &\mbox{ weakly in } {\mathbb {V}},\\ T f_k \rightarrow Tf & \mbox{ strongly in } L^2({\mathbb {R}}^n). \end{align*} Then, \[ \|T f_k\|_{{\mathbb {V}}}^2=(f_k, T f_k) \to (f, T f)=\|T f\|^2_{{\mathbb {V}}} \] as $k\to\infty.$ Since the space ${\mathbb {V}}$ is uniformly convex, it follows that \[ T f_k \rightarrow Tf \mbox{ strongly in } {\mathbb {V}}. \] The proof for the operator $T_h$ is similar.
\end{proof} \begin{lemma}\label{convop} The following norm convergence holds true: \[ \| T- T_h\|_{\mathcal{L}(L^2(\Omega),{\mathbb {V}})} \to 0 \mbox{ as } h \to 0. \] \end{lemma} \begin{proof} For each $h$, take $f_h\in L^2(\Omega) $ such that $\|f_h\|_{L^2(\Omega)}=1$ and \[ \sup_{\|f\|_{L^2(\Omega)}=1} \| T f - T_h f\|_{{\mathbb {V}}} = \| T f_h - T_h f_h\|_{{\mathbb {V}}}. \] Then, to prove the result, it is enough to show that for any sequence $h_k\to0$ there is a subsequence $\{h_{k_j}\}_{j\in{\mathbb {N}}}$ such that \[ \| T f_{h_{k_j}} - T_{h_{k_j}} f_{h_{k_j}}\|_{{\mathbb {V}}}\to 0 \quad\text{ as } j\to\infty. \] Let $\{h_k\}_{k\in{\mathbb {N}}}$ be a sequence such that $h_{k}\to0.$ It follows from $\|f_{h_k}\|_{L^2(\Omega)}=1$ for all $k\in{\mathbb {N}}$ that there exist a subsequence $\{f_{h_{k_j}}\}_{j\in\mathbb{N}}$ of $\{f_{h_k}\}_{k\in{\mathbb {N}}}$ and $f\in L^2(\Omega)$ such that $f_{h_{k_j}} \rightharpoonup f$ weakly in $L^2(\Omega)$. Proceeding as in the proof of Lemma \ref{compacto} (and passing if necessary to a subsequence), we may assume \begin{align*} T_{h_{k_j}} f_{h_{k_j}} \rightharpoonup v &\mbox{ weakly in } {\mathbb {V}}, \\ T_{h_{k_j}} f_{h_{k_j}} \rightarrow v &\mbox{ strongly in } L^2({\mathbb {R}}^n). \end{align*} On the other hand, it follows from \eqref{estiminterpol} that \begin{align*} I_h \varphi \rightarrow \varphi &\mbox{ strongly in } {\mathbb {V}}, \\ I_h \varphi \rightarrow \varphi &\mbox{ strongly in } L^2({\mathbb {R}}^n), \end{align*} for any $\varphi\in C^{\infty}_0(\Omega).$ Therefore, \[ \left\langle v,\varphi \right\rangle =\lim_{j\to \infty} \left\langle T_{h_{k_j}} f_{h_{k_j}}, I_{h_{k_j}} \varphi \right\rangle =\lim_{j \to \infty } ( f_{h_{k_j}} ,I_{h_{k_j}} \varphi) =( f , \varphi)\quad\forall\varphi\in C^{\infty}_0(\Omega), \] which means that $v=T f$. Then, using Galerkin orthogonality and \eqref{ecuT}, \[ \| T f_{h_{k_j}} - T_{h_{k_j}} f_{h_{k_j}} \|^2_{{\mathbb {V}}}=( f_{h_{k_j}}, T f_{h_{k_j}} - T_{h_{k_j}} f_{h_{k_j}}) \to 0.
\] \end{proof} We can now conclude the convergence of the discrete eigenvalue problem to the continuous one in the gap distance. \begin{proof}[Proof of Theorem \ref{teo:gap}] The result follows from Lemmas \ref{compacto} and \ref{convop}, together with Proposition 7.4 and Remark 7.5 of \cite{Boffi}. \end{proof} \section{Order of convergence: simple eigenfunctions}\label{sec.orden} In this section we assume that $\lambda^{(k)}$ is an eigenvalue of multiplicity one; in the next section we extend the discussion in order to tackle the case of multiple eigenfunctions. To simplify the exposition, we split the proof of Theorem \ref{teo:ordensimple} into three lemmas. We first estimate the rate of convergence of $u_h^{(k)}$ towards $u^{(k)}$ in the $L^2$-norm. \begin{lemma}\label{lema:ordenl2} For any $\varepsilon>0$ there is a positive constant $C$ independent of $h$ such that \begin{equation} \|u^{(k)} - u_h^{(k)} \|_{L^2(\Omega)} \leq C h^{\alpha + \nicefrac12 -\varepsilon}, \label{eq:convL2} \end{equation} where $u_h^{(k)}$ is chosen in such a way that $\left(\Pi_h u^{(k)}, u_h^{(k)}\right) \ge 0$ and $\alpha$ is given by Proposition \ref{estimacionlinf0}. \end{lemma} \begin{proof} First, we define the $L^2$-projection of $\Pi_h u^{(k)}$ onto the span of $u_h^{(k)}$, $$ \omega_h^{(k)} \coloneqq \left( \Pi_h u^{(k)}, u_h^{(k)} \right) u_h^{(k)} , $$ and the quantity $$ \rho_h^{(k)} \coloneqq \max_{i \neq k} \frac{\lambda^{(k)}}{|\lambda^{(k)} - \lambda_h^{(i)}|}. $$ Then \begin{equation} \|u^{(k)} - u_h^{(k)} \|_{L^2(\Omega)} \leq \|u^{(k)} - \Pi_h u^{(k)} \|_{L^2(\Omega)} + \| \Pi_h u^{(k)} - \omega_h^{(k)} \|_{L^2(\Omega)} + \| \omega_h^{(k)} - u_h^{(k)} \|_{L^2(\Omega)} . \label{eq:triangular} \end{equation} We are going to estimate the terms on the right-hand side separately.
Given $\varepsilon>0,$ it follows from the approximation estimate \eqref{eq:aproxL2} that there exists $C>0$ independent of $h$ such that \begin{equation}\label{eq:des1} \| u^{(k)} - \Pi_h u^{(k)} \|_{L^2(\Omega)} \leq C h^{\alpha+\nicefrac12-\varepsilon} . \end{equation} Moreover, since $$ \left( \Pi_h u^{(k)}, u_h^{(i)} \right) = \frac{1}{\lambda_h^{(i)}} \left\langle \Pi_h u^{(k)}, u_h^{(i)} \right\rangle = \frac{1}{\lambda_h^{(i)}} \left\langle u^{(k)}, u_h^{(i)} \right\rangle = \frac{\lambda^{(k)}}{\lambda_h^{(i)}} \left( u^{(k)}, u_h^{(i)} \right), $$ we have \[ \left| \left( \Pi_h u^{(k)}, u_h^{(i)} \right) \right| \leq \rho_h^{(k)} \left| \left( u^{(k)} - \Pi_h u^{(k)}, u_h^{(i)} \right) \right|. \] Observe that, by Theorem \ref{teo:gap}, $\rho_h^{(k)}$ is uniformly bounded for $h$ small enough. So, \begin{equation}\label{eq:des2} \begin{split} \| \Pi_h u^{(k)} - \omega_h^{(k)} \|_{L^2(\Omega)}^2 & = \sum_{i\neq k} \left( \Pi_h u^{(k)}, u_h^{(i)} \right)^2 \leq \left[\rho_h^{(k)}\right]^{2} \sum_{i\neq k} \left( u^{(k)} - \Pi_h u^{(k)}, u_h^{(i)} \right)^2 \\ & \leq \left[\rho_h^{(k)}\right]^2 \| u^{(k)} - \Pi_h u^{(k)} \|^2_{L^2(\Omega)} \leq C h^{2(\alpha+\nicefrac12 -\varepsilon)} . \end{split} \end{equation} Finally, let us show that \begin{equation} \| \omega_h^{(k)} - u_h^{(k)} \|_{L^2(\Omega)} \leq \| \omega_h^{(k)} - u^{(k)} \|_{L^2(\Omega)} , \label{eq:estimacion_proyeccion} \end{equation} so that $$ \| \omega_h^{(k)} - u_h^{(k)} \|_{L^2(\Omega)} \leq \| u^{(k)} - \Pi_h u^{(k)} \|_{L^2(\Omega)} + \| \Pi_h u^{(k)} - \omega_h^{(k)} \|_{L^2(\Omega)} . $$ Indeed, on one hand we have $$ u_h^{(k)}-\omega_h^{(k)} = \left[1 - \left(\Pi_h u^{(k)}, u_h^{(k)}\right) \right] u_h^{(k)}. $$ On the other hand, due to the normalizations $\|u^{(k)}\|_{L^2(\Omega)}=\|u_h^{(k)}\|_{L^2(\Omega)}=1,$ we have $$ \left| 1 - \|\omega_h^{(k)} \|_{L^2(\Omega)} \right| \leq \|u^{(k)}- \omega_h^{(k)} \|_{L^2(\Omega)} $$ and $$ \|\omega_h^{(k)}\|_{L^2(\Omega)} = \left|\left(\Pi_h u^{(k)}, u_h^{(k)}\right)\right|.
$$ Therefore, if we choose the sign of $u_h^{(k)}$ in such a way that $\left(\Pi_h u^{(k)}, u_h^{(k)}\right) \ge 0$, we deduce $$ \|u_h^{(k)} - \omega_h^{(k)} \|_{L^2(\Omega)} = \left|1 - \left(\Pi_h u^{(k)}, u_h^{(k)}\right) \right| = \left| 1 - \left|\left(\Pi_h u^{(k)}, u_h^{(k)}\right)\right| \, \right| \le \|u^{(k)} - \omega_h^{(k)} \|_{L^2(\Omega)}, $$ as stated in \eqref{eq:estimacion_proyeccion}. Thus, estimate \eqref{eq:convL2} is obtained by combining \eqref{eq:triangular}, \eqref{eq:des1}, \eqref{eq:des2} and \eqref{eq:estimacion_proyeccion}. \end{proof} Our next step is to prove the rate of convergence for the eigenvalues. \begin{lemma} For any $\varepsilon>0$ there is a positive constant $C$ independent of $h$ such that \begin{equation}\label{ordenaut} 0\leq \lambda_h^{(k)}-\lambda^{(k)}\leq C h^{1-\varepsilon} . \end{equation} \end{lemma} \begin{proof} By Remark \ref{re:compara} we only have to prove the second inequality. Expanding the inner product and taking into account that $u^{(k)}$ is an eigenfunction and $u^{(k)}_h$ is its corresponding discrete eigenfunction, we obtain \begin{align*} \| u^{(k)}-u^{(k)}_h\|^2_{{\mathbb {V}}} & = \left\langle u^{(k)}-u^{(k)}_h, \, u^{(k)}-u^{(k)}_h \right\rangle\\ & = \lambda^{(k)} \left(u^{(k)}, u^{(k)}-u^{(k)}_h\right) - \lambda^{(k)} \left(u^{(k)}, u^{(k)}_h\right) + \lambda^{(k)}_h \left(u^{(k)}_h, u^{(k)}_h\right) \\ & = \lambda^{(k)} \left(u^{(k)}-u^{(k)}_h, u^{(k)}-u_h^{(k)}\right) + \left(\lambda^{(k)}_h - \lambda^{(k)}\right) \left(u^{(k)}_h, u^{(k)}_h\right) . \end{align*} That is, \begin{equation}\label{igualdadauto} \left(\lambda^{(k)}_h-\lambda^{(k)}\right)\|u^{(k)}_h\|^2_{L^2(\Omega)}=\|u^{(k)}-u^{(k)}_h\|^2_{{\mathbb {V}}}-\lambda^{(k)} \|u^{(k)}-u^{(k)}_h\|^2_{L^2(\Omega)}.
\end{equation} On the other hand, \begin{align*} \|u^{(k)}_h-\Pi_h u^{(k)}\|^2_{{\mathbb {V}}}&= \left(\lambda^{(k)}_h u^{(k)}_h-\lambda^{(k)} u^{(k)},u^{(k)}_h-\Pi_h u^{(k)}\right)\\ &\leq \|\lambda^{(k)}_h u^{(k)}_h-\lambda^{(k)} u^{(k)}\|_{L^2(\Omega)} \|u^{(k)}_h-\Pi_h u^{(k)} \|_{L^2(\Omega)}\\ &\leq \left(\lambda^{(k)}_h \| u^{(k)}_h-u^{(k)}\|_{L^2(\Omega)}+|\lambda^{(k)}_h-\lambda^{(k)}| \|u^{(k)}\|_{L^2(\Omega)}\right) \|u^{(k)}_h-\Pi_h u^{(k)} \|_{L^2(\Omega)}. \end{align*} Dividing by $ \|u^{(k)}_h-\Pi_h u^{(k)} \|_{L^2(\Omega)}$ and using the fact that \[ \|u^{(k)}_h-\Pi_h u^{(k)}\|_{{\mathbb {V}}}^2\geq \lambda^{(1)} \|u^{(k)}_h-\Pi_h u^{(k)} \|_{L^2(\Omega)}^2, \] we get \begin{equation}\label{cota2auto} \|u^{(k)}_h-\Pi_h u^{(k)}\|_{{\mathbb {V}}}\leq C \left( \lambda^{(k)}_h \| u^{(k)}_h-u^{(k)}\|_{L^2(\Omega)}+|\lambda^{(k)}_h-\lambda^{(k)}| \|u^{(k)}\|_{L^2(\Omega)} \right). \end{equation} Therefore, by \eqref{igualdadauto}, \eqref{cota2auto}, \eqref{estimacionlinf1} and taking into account that $\|u_h^{(k)}\|_{L^2(\Omega)}= \|u^{(k)}\|_{L^2(\Omega)}=1,$ we arrive at \begin{align*} \lambda^{(k)}_h-\lambda^{(k)} & \leq C\left(\| u^{(k)}-\Pi_h u^{(k)}\|^2_{{\mathbb {V}}}+\|u^{(k)}_h-\Pi_h u^{(k)}\|^2_{{\mathbb {V}}}\right)-\lambda^{(k)} \|u^{(k)}-u^{(k)}_h\|^2_{L^2(\Omega)}\\ & \leq C\left(h^{1-\varepsilon}+ \left[\lambda^{(k)}_h\right]^2 \| u^{(k)}_h-u^{(k)}\|^2_{L^2(\Omega)}+|\lambda^{(k)}_h-\lambda^{(k)}|^2 \right)- \lambda^{(k)} \|u^{(k)}-u^{(k)}_h\|^2_{L^2(\Omega)}\\ &= C\left(h^{1-\varepsilon}+|\lambda^{(k)}_h-\lambda^{(k)}|^2 \right)+\left(C\left[\lambda^{(k)}_h\right]^2-\lambda^{(k)}\right)\|u^{(k)}_h-u^{(k)}\|^2_{L^2(\Omega)}. \end{align*} Finally, using that $\lambda^{(k)}_h\to \lambda^{(k)}$ and that $\|u^{(k)}-u^{(k)}_h\|^2_{L^2(\Omega)}\leq C h^{2\alpha+1}$, we conclude $$ \lambda^{(k)}_h-\lambda^{(k)}\leq C h^{1-\varepsilon}+O(h^{2\alpha+1})\leq C h^{1-\varepsilon}. 
$$ \end{proof} Lastly, we have the following estimate for the error in the energy norm. \begin{lemma} \label{conv_V} For any $\varepsilon>0$ there is a positive constant $C$ independent of $h$ such that \[ \|u^{(k)}-u^{(k)}_h\|_{{\mathbb {V}}}\leq C h^{\nicefrac12-\varepsilon}, \] where $u_h^{(k)}$ is chosen in such a way that $(\Pi_h u^{(k)}, u_h^{(k)}) \ge 0$. \end{lemma} \begin{proof} By \eqref{igualdadauto} and since $\|u^{(k)}_h\|_{L^2(\Omega)} = \| u^{(k)} \|_{L^2(\Omega)}=1$, we have that \[ \|u^{(k)} - u^{(k)}_h\|_{{\mathbb {V}}} \leq C \left( \|u^{(k)}-u^{(k)}_h\|_{L^2(\Omega)} + \sqrt{\lambda^{(k)}_h - \lambda^{(k)}} \right) \leq C h^{\nicefrac12 - \varepsilon}. \] \end{proof} \section{Order of convergence: Multiple eigenfunctions} \label{sec.orden_multiple} In this section we show the order of convergence in the case of multiple eigenspaces. For simplicity we prove the case of double eigenvalues, but the argument carries over easily to any multiplicity. We write $\lambda=\lambda^{(k)}=\lambda^{(k+1)}$ and take $u^{(k)}$ and $u^{(k+1)}$ to be two corresponding orthogonal eigenfunctions. Throughout this section we also employ the notation \[ \alpha_h = \left( \Pi_h u^{(k)}, u_h^{(k)} \right), \quad \beta_h = \left( \Pi_h u^{(k)}, u_h^{(k+1)} \right), \quad \delta_h = \left( \Pi_h u^{(k+1)}, u_h^{(k)} \right), \quad \gamma_h = \left( \Pi_h u^{(k+1)}, u_h^{(k+1)} \right) \] \[ w_h^{(k)}= \alpha_h u_h^{(k)} + \beta_h u_h^{(k+1)},\quad \text{ and } \quad w_h^{(k+1)}= \delta_h u_h^{(k)} + \gamma_h u_h^{(k+1)}. \] As in the previous section, we divide the proof of Theorem \ref{teo:ordenmultiple} into three lemmas. The proof of the first one is obtained as in \cite{Boffi}, following the lines of Lemma \ref{lema:ordenl2}. 
\begin{lemma}\label{lema:ordenmultiple1} For any $\varepsilon>0,$ there is a positive constant $C$ independent of $h$ such that \begin{equation}\label{ordenmultautovfun} \|u^{(j)}-w^{(j)}_h\|_{L^2(\Omega)}\leq C h^{\alpha+\nicefrac12-\varepsilon}, \quad j=k,k+1, \end{equation} where $\alpha$ is given by Proposition \ref{estimacionlinf0}. \end{lemma} Our next step is to prove the rate of convergence for the eigenvalues. \begin{lemma}\label{lem:ordenmultiple2} For any $\varepsilon>0,$ there is a positive constant $C$ independent of $h$ such that \begin{equation}\label{ordenmultiaut} 0\leq \lambda_h^{(j)}-\lambda\leq C h^{1-\varepsilon}, \quad j=k,k+1. \end{equation} \end{lemma} \begin{proof} By Remark \ref{re:compara}, in order to prove \eqref{ordenmultiaut} we only have to show the second inequality. First observe that an identity analogous to \eqref{igualdadauto} holds. More precisely, in this case we have \begin{equation}\label{igualdadauto2} \alpha_h^2 \left(\lambda_h^{(k)}-\lambda\right)+ \beta_h^2 \left(\lambda_h^{(k+1)}-\lambda\right) = \|u^{(k)}-w^{(k)}_h\|^2_{{\mathbb {V}}}- \lambda \|u^{(k)}-w^{(k)}_h\|^2_{L^2(\Omega)}. \end{equation} On the other hand, \begin{align*} \|w^{(k)}_h-\Pi_h u^{(k)}\|^2_{{\mathbb {V}}} &=\left\langle u^{(k)}-w_h^{(k)}, \Pi_h u^{(k)}-w^{(k)}_h \right\rangle \\ &=\left(\lambda u^{(k)},\Pi_h u^{(k)}-w^{(k)}_h\right)- \left(\alpha_h \lambda_h^{(k)} u_h^{(k)}+\beta_h \lambda_h^{(k+1)} u_h^{(k+1)}, \Pi_h u^{(k)}-w^{(k)}_h\right)\\ & \leq \|\lambda u^{(k)}-\alpha_h \lambda_h^{(k)} u_h^{(k)}-\beta_h \lambda_h^{(k+1)} u_h^{(k+1)}\|_{L^2(\Omega)} \|\Pi_h u^{(k)}-w^{(k)}_h\|_{L^2(\Omega)}. 
\end{align*} Thus, by the Poincar\'e inequality \begin{align*} \|w^{(k)}_h-\Pi_h u^{(k)}\|_{{\mathbb {V}}}&\leq C \left[ \|\lambda u^{(k)}-\alpha_h \lambda_h^{(k)} u_h^{(k)}-\beta_h \lambda_h^{(k+1)} u_h^{(k+1)}\|_{L^2(\Omega)} \right] \\ &\leq C \Big[ \lambda^{(k)}_h \| w^{(k)}_h-u^{(k)}\|_{L^2(\Omega)}+|\lambda^{(k)}_h-\lambda| \|u^{(k)}\|_{L^2(\Omega)}\\ &\hspace{1cm}+|\beta_h| \|u_h^{(k+1)}\|_{L^2(\Omega)}|\lambda^{(k)}_h-\lambda^{(k+1)}_h| \Big]. \end{align*} Then, since $\|u^{(k)}\|^2_{L^2(\Omega)}= \|u_h^{(k+1)}\|^2_{L^2(\Omega)}=1,$ we obtain \begin{equation}\label{cota2auto2} \begin{aligned} \|w^{(k)}_h-\Pi_h u^{(k)}\|_{{\mathbb {V}}}&\leq C \Big[ \lambda^{(k)}_h \|w^{(k)}_h-u^{(k)}\|_{L^2(\Omega)}+ |\lambda^{(k)}_h-\lambda|\\ &\hspace{1.5cm}+|\beta_h|\left(|\lambda^{(k)}_h-\lambda|+|\lambda^{(k+1)}_h-\lambda| \right) \Big]. \end{aligned} \end{equation} Therefore, by \eqref{igualdadauto2}, \eqref{cota2auto2} and \eqref{estimacionlinf1} we arrive at \begin{align*} \alpha_h^2 &\left(\lambda_h^{(k)}-\lambda\right)+ \beta_h^2\left(\lambda_h^{(k+1)}-\lambda\right)\\ & \leq C\left( \|u^{(k)}-\Pi_h u^{(k)}\|^2_{{\mathbb {V}}}+\|w^{(k)}_h-\Pi_h u^{(k)}\|^2_{{\mathbb {V}}}\right)-\lambda \|u^{(k)}-w^{(k)}_h\|^2_{L^2(\Omega)}\\ & \leq C\left(h^{1-\varepsilon}+ \left[\lambda^{(k)}_h\right]^2 \| w^{(k)}_h-u^{(k)}\|^2_{L^2(\Omega)}+|\lambda^{(k)}_h-\lambda|^2 +\beta_h^2\left(|\lambda^{(k)}_h-\lambda|+|\lambda^{(k+1)}_h-\lambda|\right)^2\right)\\ &\hspace{1cm}- \lambda \|u^{(k)}-w^{(k)}_h\|^2_{L^2(\Omega)}. \end{align*} By Lemma \ref{lema:ordenmultiple1}, $\|u^{(k)}-w^{(k)}_h\|^2_{L^2(\Omega)}\leq C h^{2\alpha+1-\varepsilon}\leq C h^{1-\varepsilon},$ and thus \begin{equation}\label{multival} \alpha_h^2 \left(\lambda_h^{(k)}-\lambda\right)+ \beta_h^2 \left(\lambda_h^{(k+1)}-\lambda\right) \leq C \left(h^{1-\varepsilon}+ |\lambda^{(k)}_h-\lambda|^2+ |\lambda^{(k+1)}_h-\lambda|^2\right). 
\end{equation} Taking into account that $\alpha_h^2+\beta_h^2=\|w^{(k)}_h\|^2_{L^2(\Omega)}\to \|u^{(k)}\|^2_{L^2(\Omega)}=1$, we split the rest of the proof into three cases. \medskip {\noindent{\it Case 1}.} There is a positive constant $c$ such that $|\alpha_h|\geq c$ and $ |\beta_h|\geq c$. Since $\lambda^{(k)}_h\to \lambda$, $\lambda^{(k+1)}_h\to \lambda$ we conclude, by \eqref{multival}, that \eqref{ordenmultiaut} follows. \medskip {\noindent{\it Case 2}.} There exist subsequences satisfying $\alpha_{h_n}\to 0$ and $|\beta_{h_n}|\to 1$. Observe that \begin{equation}\label{igualdadalphabeta} \begin{aligned} \left(\alpha_h^2+\beta_h^2\right) \left(\left(\lambda_h^{(k)}-\lambda\right)+ \left(\lambda_h^{(k+1)}-\lambda\right)\right) &=2\left(\alpha_h^2 \left(\lambda_h^{(k)}-\lambda\right)+ \beta_h^2 \left(\lambda_h^{(k+1)}-\lambda\right) \right)\\ &\hspace{1cm}+ \left(\alpha_h^2-\beta_h^2\right) \left(\lambda_h^{(k+1)}-\lambda_h^{(k)}\right). \end{aligned} \end{equation} Since $\alpha_{h_n}\to 0$ and $|\beta_{h_n}|\to 1$, for $n$ large enough $|\alpha_{h_n}|\leq |\beta_{h_n}|$, and therefore by \eqref{igualdadalphabeta}, \eqref{multival} and the fact that $\alpha_h^2+\beta_h^2 \to 1$ we have that \eqref{ordenmultiaut} is true for a subsequence. But since $\{\lambda^{(j)}_h \}$ is monotone in $h$ ($j = k, k+1$), the desired conclusion follows. \medskip {\noindent{\it Case 3}.} There exist subsequences satisfying $\beta_{h_n}\to 0$ and $|\alpha_{h_n}|\to 1$. First, observe that $$ u_h^{(k)}=w_h^{(k)}+(1-\alpha_h)u_h^{(k)}-\beta_h u_h^{(k+1)}. $$ Then, since $\|u_h^{(k+1)}\|_{L^2(\Omega)}=\|u_h^{(k)}\|_{L^2(\Omega)}=1,$ $w_h^{(k)} \to u^{(k)},$ $\alpha_{h_n}\to 1$ and $\beta_{h_n}\to 0,$ we get \begin{equation}\label{convato} u_{h_n}^{(k)}\to u^{(k)} \mbox{ in } L^2(\Omega). 
\end{equation} On the other hand, it follows from the definition of $\Pi_h$ that \[ \alpha_h =\frac{\lambda}{\lambda_h^{(k)}} \left(u^{(k)},u_h^{(k)}\right),\ \ \beta_h =\frac{\lambda}{\lambda_h^{(k+1)}} \left(u^{(k)},u_h^{(k+1)}\right), \] \[ \delta_h =\frac{\lambda}{\lambda_h^{(k)}} \left(u^{(k+1)},u_h^{(k)}\right),\quad \gamma_h = \frac{\lambda}{\lambda_h^{(k+1)}} \left(u^{(k+1)},u_h^{(k+1)}\right). \] By Lemma \ref{lema:ordenmultiple1}, \[ \|u^{(k+1)}-w^{(k+1)}_h\|_{L^2(\Omega)}\leq C h^{\alpha+\nicefrac12-\varepsilon} \] and then, since $\delta_h^2+\gamma_h^2=\|w^{(k+1)}_h\|^2_{L^2(\Omega)}$, \begin{equation}\label{gamadelta} \delta_h^2+\gamma_h^2 \to 1.\end{equation} Finally, by \eqref{convato}, using the orthogonality of the eigenfunctions and \eqref{gamadelta} we have \[ \delta_{h_n}=\frac{\lambda}{\lambda_{h_n}^{(k)}} \left(u^{(k+1)},u_{h_n}^{(k)}\right)\to \left(u^{(k+1)},u^{(k)}\right)=0\quad\text{ and } \quad|\gamma_{h_n}|\to 1. \] Observe that, upon interchanging $u^{(k)}$ and $u^{(k+1)}$, this is just {\it{Case 2}}. This concludes the proof. \end{proof} Our last result follows from \eqref{igualdadauto2}, \eqref{ordenmultautovfun} and \eqref{ordenmultiaut}, in the same fashion as Lemma \ref{conv_V}. \begin{lemma} For any $\varepsilon>0,$ there is a positive constant $C$ independent of $h$ such that \[ \|u^{(j)}-w^{(j)}_h\|_{{\mathbb {V}}}\leq C h^{\nicefrac12-\varepsilon}, \quad j=k,k+1. \] \end{lemma} \section{Numerical results} \label{sec:numerico} In order to illustrate the convergence estimates obtained in Sections \ref{sec.orden} and \ref{sec.orden_multiple}, we present the results of numerical tests for finite element discretizations of one- and two-dimensional eigenvalue problems. Moreover, in the latter case, some examples in domains that do not satisfy the hypotheses of Proposition \ref{prop:regintro} are displayed. These examples provide numerical evidence that the assertion of this proposition still holds true under weaker assumptions about the domain. 
Since in general no closed formula for the eigenvalues of the fractional Laplacian is available, we have estimated the order of convergence by means of a least-squares fitting of the model \[ \lambda^{(k)}_h = \lambda^{(k)} + Ch^\alpha . \] This also allows us to extrapolate approximations of the eigenvalues (in the tables we denote this extrapolated value of $\lambda^{(k)}$ as $\lambda_{ext}^{(k)}$). Throughout this section, the results are compared with those available in the literature. First, we consider one-dimensional problems, which have been widely studied both theoretically and from the numerical point of view. Next, we show some examples in two-dimensional domains: the unit ball, a square and an $L$-shaped domain. As for the ball, the deep results of \cite{DydaKuznetsovKwasnicki} allow us to obtain sharp estimates on the eigenvalues, and thus provide a point of comparison for the validity of the FE implementation. Regarding the square, some estimates for the eigenvalues are found in \cite{Kwasnicki}. The main interest of the $L$-shaped domain is that, although it does not satisfy the ``standard'' requirements for regularity of eigenfunctions to hold, the numerical order of convergence is the same as in problems posed on smooth, convex domains. General estimates for eigenvalues, valid for a class of domains, are obtained by Chen and Song \cite{ChenSong}. In that paper, the authors state upper and lower bounds for eigenvalues of the fractional Laplacian on domains satisfying the exterior cone condition. Calling $\mu^{(k)}$ the $k$-th eigenvalue of the Laplacian with Dirichlet boundary condition on the domain $\Omega$, they prove that there exists a constant $C=C(\Omega)$ such that \begin{equation} \label{eq:chensong} C \left(\mu^{(k)}\right)^s \le \lambda^{(k)} \le \left(\mu^{(k)}\right)^s . \end{equation} If $\Omega$ is a bounded convex domain, then $C$ can be taken as $\nicefrac12$. 
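The fit described above is simple to carry out: for each candidate $\alpha$ on a grid, the model is linear in $(\lambda^{(k)}, C)$, so a small least-squares solve suffices and one keeps the $\alpha$ with the smallest residual. The following sketch (with synthetic data and made-up parameter values, purely for illustration) recovers both the order and the extrapolated eigenvalue:

```python
def fit_order(hs, lams, alphas):
    """Least-squares fit of the model lam_h = lam + C * h**alpha.

    For each candidate alpha the model is linear in (lam, C), so we
    solve the 2x2 normal equations and keep the alpha with the
    smallest residual."""
    best = None
    for a in alphas:
        x = [h ** a for h in hs]
        m = len(hs)
        sx, sxx = sum(x), sum(t * t for t in x)
        sy = sum(lams)
        sxy = sum(t * y for t, y in zip(x, lams))
        det = m * sxx - sx * sx
        lam = (sy * sxx - sx * sxy) / det
        C = (m * sxy - sx * sy) / det
        res = sum((lam + C * t - y) ** 2 for t, y in zip(x, lams))
        if best is None or res < best[0]:
            best = (res, a, lam, C)
    return best[1], best[2], best[3]

# Synthetic data following the model exactly (hypothetical values,
# not taken from the experiments in this paper).
lam_true, C_true, a_true = 2.006, 0.8, 0.97
hs = [0.1, 0.05, 0.025, 0.0125]
lams = [lam_true + C_true * h ** a_true for h in hs]
alphas = [0.5 + 0.01 * i for i in range(101)]
a_fit, lam_ext, C_fit = fit_order(hs, lams, alphas)
# a_fit recovers the order and lam_ext the extrapolated eigenvalue
```

In practice a nonlinear solver can be used instead of the grid search; the grid version above keeps the sketch dependency-free.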
It is noteworthy that, due to the scaling property of the fractional Laplacian, eigenvalues for the dilations of a domain $\Omega$ are obtained by means of $\lambda^{(k)}(\gamma\Omega) = \gamma^{-2s} \lambda^{(k)}(\Omega).$ Since we are working with conforming methods, the discrete eigenvalues not only provide approximations but also upper bounds for the corresponding continuous eigenvalues. This is of special interest in those cases in which theoretical estimates are not sharp, or the non-symmetry of the domain precludes the possibility of developing arguments such as the ones in \cite{DydaKuznetsovKwasnicki}. \subsection{One-dimensional intervals} Eigenvalues for the fractional Laplacian in intervals have been studied by other authors previously. In \cite{ZoiaRossoKardar}, a discretized model of the fractional Laplacian is developed, and a numerical study of eigenfunctions and eigenvalues is implemented for different boundary conditions. In \cite{MR2679702}, the authors deal with one-dimensional problems for $s=\nicefrac12$, and provide an asymptotic expansion for the eigenvalues. Later, Kwa{\'s}nicki \cite{Kwasnicki} extended this work to the whole range $s\in (0,1)$. Namely, he showed the following asymptotic formula for the $k$-th eigenvalue in the interval $(-1,1)$: \begin{equation} \lambda^{(k)} = \left( \frac{k\pi}{2} - \frac{(1-s)\pi}{4}\right)^{2s} + \frac{1-s}{\sqrt{s}} \, O\left(k^{-1}\right) . \label{eq:kwasnicki} \end{equation} Moreover, in that work a method to obtain lower bounds in arbitrary bounded domains is developed, and it is proved that, in one spatial dimension, eigenvalues are simple if $s\ge \nicefrac12$. As eigenvalues are simple and we are working in one dimension, it is not difficult to numerically estimate the order of convergence of eigenfunctions in the $L^2$-norm. 
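As a quick sanity check (ours, not part of the original computations), the leading term of \eqref{eq:kwasnicki} is straightforward to evaluate and already reproduces the reference values of \cite{Kwasnicki} quoted in this section to four digits for moderate $s$:

```python
import math

def kwasnicki_leading(k, s):
    """Leading term of Kwasnicki's asymptotic formula for the k-th
    eigenvalue of the fractional Laplacian on (-1, 1)."""
    return (k * math.pi / 2 - (1 - s) * math.pi / 4) ** (2 * s)

# s = 0.5:  lambda^(1) ~ 3*pi/8  ~ 1.1781,  lambda^(2) ~ 7*pi/8 ~ 2.7489
# s = 0.25: lambda^(2) ~ (13*pi/16)**0.5 ~ 1.5977
```

For small $s$ or small $k$ the remainder term in \eqref{eq:kwasnicki} is no longer negligible, which is consistent with the larger discrepancies observed in that regime.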
To do so, we normalize the discrete eigenfunctions so that $\| u_h^{(k)} \|_{L^2(-1,1)} = 1$, choose their signs appropriately (recall Section \ref{sec.orden}), and compare them with a solution computed on a very fine grid. On the other hand, in \cite{DuoZhang} a numerical study of the fractional Schr\"odinger equation in an infinite potential well in one spatial dimension is performed. The authors find numerically the ground and first excited states and their corresponding eigenvalues for the stationary linear problem, which corresponds to the first two eigenpairs of our equation \eqref{eq:int.autovalores}. In Table \ref{tab:ordenes1d}, our results for the first two eigenpairs are displayed. The extrapolated numerical values are compared with the estimates from \cite{DuoZhang, Kwasnicki}; the orders of convergence are in good agreement with those predicted correspondingly by \eqref{ordenaut} and \eqref{eq:convL2}. \begin{table}[htbp] \label{tab:ordenes1d} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-11} \multicolumn{1}{c}{} & \multicolumn{6}{|c|}{Numerical values} & \multicolumn{4}{c|}{Orders} \\ \hline $s$ & $\lambda_{ext}^{(1)}$ & $\lambda^{(1)}$ \cite{DuoZhang} & $\lambda^{(1)}$ \cite{Kwasnicki} & $\lambda_{ext}^{(2)}$ & $\lambda^{(2)}$ \cite{DuoZhang} & $\lambda^{(2)}$ \cite{Kwasnicki}& $\lambda^{(1)}$ & $\lambda^{(2)}$ & $u^{(1)}$ & $u^{(2)}$ \\ \hline $0.05$& $0.9726$ & $0.9726$ & $0.9809$ & $1.0922$ & $1.0922$ & $1.0913$ & $1.1082$ & $1.1488$ & $0.5507$ & $0.5676$ \\ $0.1$& $0.9575$ & $0.9575$ & $0.9712$ & $1.1965$ & $1.1966$ & $1.1948$ & $1.0706$ & $1.1017$ & $0.6117$ & $0.6250$ \\ $0.25$& $0.9702$ & $0.9702$ & $0.9908$ & $1.6015$ & $1.6016$ & $1.5977$ & $1.0210$ & $1.0375$ & $0.7616$ & $0.7823$ \\ $0.5$& $1.1577$ & $1.1578$ & $1.1781$ & $2.7548$ & $2.7549$ & $2.7488$ & $1.0005$ & $0.9793$ & $0.9605$ & $0.9691$ \\ $0.75$& $1.5975$ & $1.5976$ & $1.6114$ & $5.0598$ & $5.0600$ & $5.0545$ & $0.9983$ & $0.9990$ & $0.9983$ & $0.9984$ \\ $0.9$& $2.0487$ & --- 
& $2.0555$ & $7.5031$ & --- & $7.5003$ & $1.0035$ & $1.0214$ & $0.9989$ & $0.9989$ \\ $0.95$& $2.2481$ & $2.2441$ & $2.2477$ & $8.5958$ & $8.5959$ & $8.5942$ & $1.0352$ & $1.1418$ & $0.9989$ & $0.9989$ \\ \hline \end{tabular} \caption{First two eigenpairs in the interval $(-1,1)$. On the left, the extrapolated numerical values are compared with the results from \cite{DuoZhang} and with the approximation \eqref{eq:kwasnicki}, obtained in \cite{Kwasnicki}. On the right, orders of convergence for eigenvalues and eigenfunctions in the $L^2$-norm (obtained by a least-squares fitting) are displayed.} \end{table} \subsection{Two-dimensional experiments} \label{sub:2d} The theoretical order of convergence for eigenvalues is also attained in the following examples in ${\mathbb {R}}^2$. The implementation for these experiments is based on the code from \cite{AcostaBorthagaray}. \subsubsection*{Unit ball} Let us consider the fractional eigenvalue problem on the two-dimensional unit ball. In \cite{DKKMeijer}, the weighted operator $u \mapsto (-\Delta)^s (\omega \, u)$ is studied, where $\omega (x) = (1-|x|^2)^s_+.$ In particular, explicit formulas for eigenvalues and eigenfunctions are established. Furthermore, in \cite{DydaKuznetsovKwasnicki} the same authors exploit these expressions to obtain two-sided bounds for the eigenvalues of the fractional Laplacian in the unit ball in any dimension. This method provides sharp estimates; however, it depends on the decomposition of the fractional Laplacian as a weighted operator, and the weight $\omega$ is only explicitly known for the unit ball. In Table \ref{tab:ordenes_bola}, our results for the first eigenvalue are compared with those of \cite{DydaKuznetsovKwasnicki} for different values of $s$. This comparison serves as a test for the validity of the code we are employing. 
In addition to the extrapolated value of $\lambda^{(1)}$ and the numerical order of convergence, for every value of $s$ considered we exhibit an upper bound for the first eigenvalue. These outcomes are consistent with those from \cite{DydaKuznetsovKwasnicki} and the theoretical order of convergence given by \eqref{ordenaut}. \begin{table}[htbp] \begin{tabular}{|c|c|c|c|c|} \hline $s$ & ${\lambda^{(1)}}$& $\lambda_{ext}^{(1)}$ & $\lambda_h^{(1)}$ (UB) & Order \\ \hline $0.005$ & $1.00475$ & $1.00475$ & $1.00480$ & $0.9462$ \\ $0.05$ & $1.05095$ & $1.05094$ & $1.05145$ & $0.9455$ \\ $0.25$ & $1.34373$ & $1.34367$ & $1.34626$ & $0.9497$ \\ $0.5$ & $2.00612$ & $2.00607$ & $2.01060$ & $0.9686$ \\ $0.75$ & $3.27594$ & $3.27632$ & $3.28043$ & $1.0092$ \\ \hline \end{tabular} \caption{First eigenvalue in the unit ball in ${\mathbb {R}}^2$. Estimate from \cite{DydaKuznetsovKwasnicki}; extrapolated value of $\lambda^{(1)}$; upper bound obtained by FEM with a meshsize $h\sim 0.02$; numerical order of convergence.} \label{tab:ordenes_bola} \end{table} \medskip \subsubsection*{Square} Eigenvalue estimates for the case in which the domain $\Omega$ is a square in ${\mathbb {R}}^2$ were also addressed in \cite{Kwasnicki}. However, in order to obtain upper bounds, the method proposed in that work depends on having pointwise bounds of the Green function for the fractional Laplacian on $\Omega$. The estimates from \cite{ChenSong, Kwasnicki} are compared with our results in Table \ref{tab:ordenes_cuadrado}, where numerical orders of convergence are also displayed. 
\begin{table}[htbp] \begin{tabular}{|c|c|c|c|c|c|} \hline $s$ & $\lambda^{(1)}$ (LB)& $\lambda^{(1)}$ (UB) & $\lambda_h^{(1)}$ (UB)& $\lambda_{ext}^{(1)}$ & Order \\ \hline $0.05$ & $1.0308^b$ & $1.0831^a$ & $1.0412$ & $1.0405$ & $0.9229$ \\ $0.1$ & $1.0506^b$ & $1.1731^a$ & $1.0895$ & $1.0882$ & $0.9230$ \\ $0.25$ & $1.1587^b$ & $1.4905^a$ & $1.2844$ & $1.2813$ & $0.9283$ \\ $0.5$ & $1.3844^b$ & $2.2214^a$ & $1.8395$ & $1.8344$ & $0.9622$ \\ $0.75$ & $1.6555^a$ & $3.3109^a$ & $2.8921$ & $2.8872$ & $0.9940$ \\ $0.9$ & $2.1034^a$ & $4.2067^a$ & $3.9492$ & $3.9467$ & $1.0654$ \\ $0.95$ & $2.2781^a$ & $4.5562^a$ & $4.4083$ & $4.4062$ & $1.1496$ \\ \hline \multicolumn{6}{l}{\footnotesize \, $^a$See \cite{ChenSong}. $^b$See \cite{Kwasnicki}.} \\ \end{tabular} \caption{First eigenvalue in the square $[-1,1]^2$. Best lower (LB) and upper (UB) bounds known before; upper bound obtained by FEM with a meshsize $h\sim 0.04$; extrapolated value of $\lambda^{(1)}$; numerical order of convergence.} \label{tab:ordenes_cuadrado} \end{table} \medskip \subsubsection*{$L$-shaped domain} To the authors' knowledge, there is no \emph{efficient} method to estimate eigenvalues of the fractional Laplacian if the domain $\Omega$ lacks symmetry. The bound \eqref{eq:chensong} remains valid as long as $\Omega$ satisfies the required assumptions, but the range that estimate provides is quite wide. The main advantage of employing the finite element method is that it is flexible enough to cope with a variety of domains. Moreover, as we are working with conforming approximations, sharp upper bounds for the eigenvalues may be obtained by considering discrete solutions on refined meshes. In Proposition \ref{prop:regintro}, which states that eigenfunctions belong to $\widetilde{H}^{s+\nicefrac12-\varepsilon}(\Omega)$, it was assumed that the domain $\Omega$ was smooth. 
For the Laplacian, in order to prove regularity of solutions, it is customary to assume that $\Omega$ is either smooth or at least convex. In those cases, it is well known that if $f \in H^{r}(\Omega)$ for some $r$, then the solutions of the Dirichlet problem with datum $f$ belong to $H^{r+2}(\Omega)$. However, if the domain has a re-entrant corner, solutions are less regular. This also applies to eigenfunctions: in the $L$-shaped domain $\Omega = [-1,1]^2\setminus [0,1]^2$, the first eigenfunction of the Laplacian is known not to belong to $H^{3/2}(\Omega)$. Surprisingly, numerical evidence indicates that eigenvalues of the fractional Laplacian on this $L$-shaped domain converge with the same order as in the previous examples. This motivates us to conjecture that eigenfunctions and solutions to the Dirichlet equation \eqref{eq:fuente} have the same Sobolev regularity as in smooth domains. Our findings for the first eigenvalue are summarized in Table \ref{tab:ordenes_L}. \begin{table}[htbp] \begin{tabular}{|c|c|c|c|} \hline $s$ & $\lambda_h^{(1)}$ (UB)& $\lambda_{ext}^{(1)}$ & Order $\lambda^{(1)}$ \\ \hline $0.1$ & $1.1434$ & $1.1413$ & $0.9085$ \\ $0.2$ & $1.3386$ & $1.3342$ & $0.9103$ \\ $0.3$ & $1.6025$ & $1.5956$ & $0.9160$ \\ $0.4$ & $1.9593$ & $1.9499$ & $0.9267$ \\ $0.5$ & $2.4440$ & $2.4322$ & $0.9459$ \\ $0.6$ & $3.1072$ & $3.0936$ & $0.9812$ \\ $0.7$ & $4.0228$ & $4.0069$ & $0.9822$ \\ $0.8$ & $5.2994$ & $5.2831$ & $1.0609$ \\ $0.9$ & $7.0975$ & $7.0790$ & $1.1891$ \\ \hline \end{tabular} \caption{First eigenvalue in the $L$-shaped domain $[-1,1]^2\setminus [0,1]^2$. Upper bound obtained by FEM with a meshsize $h\sim 0.04$; extrapolated value of $\lambda^{(1)}$; numerical order of convergence.} \label{tab:ordenes_L} \end{table}
https://arxiv.org/abs/1712.02499
Connecting the q-Multiplicative Convolution and the Finite Difference Convolution
In a recent paper, Brändén, Krasikov, and Shapiro consider root location preservation properties of finite difference operators. To this end, the authors describe a natural polynomial convolution operator and conjecture that it preserves root mesh properties. We prove this conjecture using two methods. The first develops a novel connection between the additive (Walsh) and multiplicative (Grace-Szegö) convolutions, which can be generically used to transfer results from multiplicative to additive. We then use this to transfer an analogous result, due to Lamprecht, which demonstrates logarithmic root mesh preservation properties of a certain $q$-multiplicative convolution operator. The second method proves the result directly using a modification of Lamprecht's proof of the logarithmic root mesh result. We present his original argument in a streamlined fashion and then make the appropriate alterations to apply it to the additive case.
\section{Introduction} Let $\mathbb{C}[x]$ denote the space of polynomials with complex coefficients, and let $\mathbb{C}_n[x]$ denote the subspace of polynomials in $\mathbb{C}[x]$ of degree at most $n$. (We make the analogous definitions for $\mathbb{R}[x]$ and $\mathbb{R}_n[x]$.) The Walsh additive (\cite{walsh}) and Grace-Szeg{\"o} multiplicative (\cite{szego}) polynomial convolutions on $f,g \in \mathbb{C}_n[x]$ have been denoted $\boxplus^n$ and $\boxtimes^n$ respectively (e.g., in \cite{finiteconvolutions}): \[ f \boxplus^n g := \sum_{k=0}^n \partial_x^k f \cdot (\partial_x^{n-k} g)(0) \] \[ f \boxtimes^n g := \sum_{k=0}^n \binom{n}{k}^{-1} (-1)^k f_k g_k x^k \] This notation is suggestive, as these convolutions can be thought of as producing polynomials whose roots are contained in the (Minkowski) sum and product of complex discs containing the roots of the input polynomials. When the inputs have real roots (additive) or non-negative roots (multiplicative), this fact also holds in terms of real intervals containing the roots. In \cite{finitemesh}, the authors show that the additive convolution can only increase root mesh, which is defined as the minimum absolute difference between any pair of roots of a given polynomial. That is, the mesh of the output polynomial is at least as large as the mesh of either of the input polynomials. They use similar arguments to show that, for polynomials with non-negative roots, the multiplicative convolution can only increase logarithmic root mesh. This is similarly defined as the minimum ratio (greater than 1) between any pair of positive roots of a given polynomial. Regarding mesh and logarithmic mesh, there are natural generalized convolution operators which also preserve such properties. The first will be called the $q$-multiplicative convolution, and it was shown to preserve logarithmic root mesh of at least $q$ in \cite{lamprecht}. 
The second will be called the $b$-additive convolution (or finite difference convolution), and the main concern of this paper is to demonstrate that it preserves root mesh of at least $b$. \subsection*{The $q$-Multiplicative Convolution} In \cite{lamprecht}, Lamprecht proves logarithmic mesh preservation properties of $q$-multiplicative convolution. This convolution is defined as follows, where $p_k$ and $r_k$ are the coefficients of $p$ and $r$, respectively. (Note that as $q \to 1$ this limits to the classical multiplicative convolution.) \[ p \boxtimes_q^n r := \sum_{k=0}^n \binom{n}{k}_q^{-1} q^{-\binom{k}{2}} (-1)^k p_k r_k x^k \] Here, $\binom{n}{k}_q$ denotes the $q$-binomial coefficients, to be explicitly defined later. We state his result formally as follows. \begin{definition} Fix $p \in \mathbb{R}[x]$ with all non-negative roots. We say that $p$ is \emph{$q$-log mesh} if the minimum ratio (greater than 1) between any pair of non-zero roots of $p$ is at least $q$. We also say that $p$ is \emph{strictly $q$-log mesh} if the minimum ratio is greater than $q$. In these situations, we write $\lmesh(p) \geq q$ and $\lmesh(p) > q$ respectively. \end{definition} \begin{theorem}[Lamprecht] \label{thm:qlogmesh} Let $p$ and $r$ be polynomials of degree at most $n$ such that $\lmesh(p) \geq q$ and $\lmesh(r) \geq q$, for some $q \in (1,\infty)$. Then, $\lmesh(p \boxtimes_q^n r) \geq q$. \end{theorem} This result is actually an analogue to an earlier result of Suffridge \cite{suffridge} regarding polynomials with roots on the unit circle. In Suffridge's result, $q$ is taken to be an element of the unit circle, and log mesh translates to mean that the roots are pairwise separated by at least the argument of $q$. Roughly speaking, he obtains the same result for the corresponding $q$-multiplicative convolution. Remarkably, the known proofs of his result (even a proof of Lamprecht) differ fairly substantially from Lamprecht's proof of the above theorem. 
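As a concrete illustration of Theorem \ref{thm:qlogmesh} (the example and code are ours, not taken from \cite{lamprecht}), the convolution can be computed directly from the coefficient formula. With $q=2$, $p=(x-1)(x-2)$ and $r=(x-1)(x-3)$, both $2$-log mesh, one obtains $\frac12 x^2 - 4x + 6 = \frac12(x-2)(x-6)$, which is again $2$-log mesh:

```python
def q_binom(n, k, q):
    # Gaussian (q-)binomial coefficient [n choose k]_q
    num = den = 1.0
    for i in range(1, k + 1):
        num *= 1.0 - q ** (n - k + i)
        den *= 1.0 - q ** i
    return num / den

def q_mult_conv(p, r, q, n):
    # coefficient formula for p boxtimes_q^n r:
    # sum_k [n,k]_q^{-1} q^{-binom(k,2)} (-1)^k p_k r_k x^k
    return [(-1.0) ** k * p[k] * r[k]
            / (q_binom(n, k, q) * q ** (k * (k - 1) // 2))
            for k in range(n + 1)]

# p = (x-1)(x-2) and r = (x-1)(x-3), coefficients in ascending order;
# both are 2-log mesh (root ratios 2 and 3)
p = [2.0, -3.0, 1.0]
r = [3.0, -4.0, 1.0]
c0, c1, c2 = q_mult_conv(p, r, q=2.0, n=2)
# result: 0.5 x^2 - 4 x + 6 = 0.5 (x-2)(x-6), root ratio 3 >= q = 2
```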
Additionally, we note here that Lamprecht uses different notation and conventions in \cite{lamprecht}. In particular, he uses $q \in (0,1)$, considers polynomials $p$ with all non-positive roots, and his definition of $\boxtimes_q^n$ does not include the $(-1)^k$ factor. These differences are generally speaking unsubstantial, but it is worth noting that the arguments of \S\ref{sect:connect} seem to require the $(-1)^k$ factor. \subsection*{The $b$-Additive Convolution} In this paper, we show the $b$-additive convolution (or, finite difference convolution) preserves the space of polynomials with root mesh at least $b$. Br{\"a}nd{\'e}n, Krasikov, and Shapiro define the $b$-additive convolution (only for $b=1$) in \cite{finitemesh} as follows. (Note that as $b \to 0$ this limits to the classical additive convolution.) \[ p \boxplus_b^n r := \sum_{k=0}^n \Delta_b^k p \cdot (\Delta_b^{n-k} r)(0) \] Here, $\Delta_b$ is a finite $b$-difference operator, defined as: \[ \Delta_b : p \mapsto \frac{p(x) - p(x-b)}{b} \] Our result then solves the first (and second) conjecture stated in \cite{finitemesh}. We state it formally here. \begin{definition} Fix $p \in \mathbb{R}[x]$ with all real roots. We say that $p$ is \emph{$b$-mesh} if the minimum non-negative difference of any pair of roots of $p$ is at least $b$. We also say that $p$ is \emph{strictly $b$-mesh} if this minimum difference is greater than $b$. In these situations, we write $\mesh(p) \geq b$ and $\mesh(p) > b$ respectively. \end{definition} \begin{theorem} \label{thm:main} Let $p$ and $r$ be polynomials of degree at most $n$ such that $\mesh(p) \geq b$ and $\mesh(r) \geq b$, for some $b \in (0,\infty)$. Then, $\mesh(p \boxplus_b^n r) \geq b$. \end{theorem} As a note, Br{\"a}nd{\'e}n, Krasikov, and Shapiro actually use the \emph{forward} finite difference operator in their definition of the convolution. This is not a problem as our result then differs from their conjecture by a shift of the input polynomials. 
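To make the definition concrete, here is a small self-contained sketch (our own illustration, not part of \cite{finitemesh}) that computes $\boxplus_b^n$ via repeated $b$-differences. With $b=1$, $n=3$, $p=x(x-1)(x-2)$ and $r=x(x-1.5)$, both $1$-mesh, the output works out to $6x^2-33x+42=6(x-2)(x-3.5)$, which has mesh $1.5 \ge b$, in accordance with Theorem \ref{thm:main}:

```python
from math import comb

def shift(p, b):
    # coefficients (ascending) of p(x - b)
    out = [0.0] * len(p)
    for k, c in enumerate(p):
        for j in range(k + 1):
            out[j] += c * comb(k, j) * (-b) ** (k - j)
    return out

def delta(p, b):
    # finite b-difference: (p(x) - p(x - b)) / b
    return [(c - sc) / b for c, sc in zip(p, shift(p, b))]

def b_additive_conv(p, r, b, n):
    # p boxplus_b^n r = sum_k Delta_b^k p * (Delta_b^{n-k} r)(0)
    pad = lambda q: list(q) + [0.0] * (n + 1 - len(q))
    dp, dr = [pad(p)], [pad(r)]
    for _ in range(n):
        dp.append(delta(dp[-1], b))
        dr.append(delta(dr[-1], b))
    out = [0.0] * (n + 1)
    for k in range(n + 1):
        const = dr[n - k][0]   # (Delta_b^{n-k} r)(0) = constant coefficient
        for j, c in enumerate(dp[k]):
            out[j] += c * const
    return out

# p = x(x-1)(x-2) (mesh 1) and r = x(x-1.5) (mesh 1.5), ascending order
p = [0.0, 2.0, -3.0, 1.0]
r = [0.0, -1.5, 1.0]
conv = b_additive_conv(p, r, b=1.0, n=3)
# conv = 6x^2 - 33x + 42 = 6(x-2)(x-3.5): roots 2 and 3.5, mesh 1.5 >= b
```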
\begin{remark} Although the $q$-multiplicative and $b$-additive convolutions preserve $q$-log mesh and $b$-mesh respectively, they do not preserve real-rootedness. \end{remark} \subsection*{An Analytic Connection} Our first method of proof of Theorem \ref{thm:main} will demonstrate a way to pass root properties of the $q$-multiplicative convolution to the $b$-additive convolution. As this is interesting in its own right, we state the most general version of this result here. \begin{theorem} \label{thm:qbancon} Fix $b \geq 0$ and let $p,r$ be polynomials of degree $n$. We have the following, where convergence is uniform on compact sets. \[ \lim_{q \rightarrow 1} (1-q)^n \left[E_{q,b}(p) \boxtimes_{q^b}^n E_{q,b}(r) \right](q^x) = p \boxplus_b^n r \] Note that for $b=0$, this result pertains to the classical convolutions. \end{theorem} Here, the $E_{q,b}$ are certain linear isomorphisms of $\mathbb{C}[x]$, to be explicitly defined below. Notice that uniform convergence allows us to use Hurwitz' theorem to obtain root properties in the limit of the left-hand side. That is, any information about how the $q$-multiplicative convolution acts on roots will transfer to some statement about how the $b$-additive convolution acts on roots. As it turns out, a special case of Lamprecht's result (Theorem 3 from \cite{lamprecht}) will become our result (Theorem \ref{thm:main}) in the limit. We discuss this transfer process in more detail in \S\ref{sect:connect}. \subsection*{Extending Lamprecht's Method} Our second method of proof of Theorem \ref{thm:main} is an extension of the method used by Lamprecht to prove the log mesh result for the $q$-multiplicative convolution. Specifically, he demonstrates that the $q$-multiplicative convolution preserves a root-interlacing property for $q$-log mesh polynomials. More formally, he proves the following result which gives Theorem \ref{thm:qlogmesh} as a corollary. 
\begin{theorem}[Lamprecht Interlacing-Preserving] \label{thm:qinterlacing} Let $f,g \in \mathbb{R}_n[x]$ be $q$-log mesh polynomials of degree $n$ with only negative roots. Let $T_g: \mathbb{R}_n[x] \to \mathbb{R}_n[x]$ be the real linear operator defined by $T_g: r \mapsto r \boxtimes_q^n g$. Then, $T_g$ preserves the set of polynomials whose roots interlace the roots of $f$. \end{theorem} We achieve an analogous result for the $b$-additive convolution using techniques similar to those found in Lamprecht's paper. We state it formally here. \begin{theorem} \label{thm:binterlacing} Let $f,g \in \mathbb{R}_n[x]$ be $b$-mesh polynomials of degree $n$. Let $T_g: \mathbb{R}_n[x] \to \mathbb{R}_n[x]$ be the real linear operator defined by $T_g: r \mapsto r \boxplus_b^n g$. Then, $T_g$ preserves the set of polynomials whose roots interlace the roots of $f$. \end{theorem} In both cases, mesh and log mesh properties can be shown to be equivalent to root interlacing properties ($f(x)$ interlaces $f(x-b)$ for $b$-mesh, and $f(x)$ interlaces $f(q^{-1}x)$ for $q$-log mesh). The above theorems then immediately imply the desired mesh preservation properties for the respective convolutions. We discuss this further in \S\ref{sect:lamprecht}. \section{First Proof Method: The Finite Difference Convolution as a Limit of \texorpdfstring{$q$}{q}-Multiplicative Convolutions} \label{sect:connect} In what follows we establish a general analytic connection between the multiplicative (Grace-Szego) and additive (Walsh) convolutions on polynomials of degree at most $n$. We then extend this connection to the $q$-multiplicative convolution and the $b$-additive convolution (Theorem \ref{thm:qbancon}). Using this connection, we transfer root information results of the multiplicative convolution ($q$ or classical) to the additive convolution ($b$ or classical). 
Specifically, we use this connection to prove Theorem \ref{thm:main}, which is the conjecture of Br{\"a}nd{\'e}n, Krasikov, and Shapiro mentioned above. To begin we state an observation of Vadim Gorin demonstrating an analytic connection in the classical case using matrix formulations of the classical convolutions given in \cite{finiteconvolutions}: \[ \chi(A) \boxtimes^n \chi(B) = \E_P \left[\chi(APBP^T)\right] \] \[ \chi(A) \boxplus^n \chi(B) = \E_P \left[\chi(A + PBP^T)\right] \] Here, $A$ and $B$ are real symmetric matrices, $\chi$ denotes the characteristic polynomial, and the expectations are taken over all permutation matrices. We then write: \[ \begin{split} \lim_{t \to 0} t^{-n} \left[\chi(e^{tA}) \boxtimes^n \chi(e^{tB})\right](tx+1) &= \lim_{t \to 0} t^{-n}\E_P\left[\det\left(txI + I - e^{tA}Pe^{tB}P^T\right)\right] \\ &= \lim_{t \to 0} t^{-n}\E_P\left[\det\left(txI - t(APP^T + PBP^T) + O(t^2)\right)\right] \\ &= \lim_{t \to 0} \E_P\left[\chi\left(A + PBP^T + O(t)\right)\right] \\ &= \chi(A) \boxplus^n \chi(B) \end{split} \] This connection is suggestive and straightforward, but seemingly confined to the classical case. Therefore, we instead state below a slightly modified (but equivalent) version of this observation for the classical convolutions (Theorem \ref{thm:clsancon}) which we are then able to generalize to the $q$-multiplicative and $b$-additive convolutions. \subsection{The Classical Convolutions} We begin by sketching the proof of the connection between the classical additive and multiplicative convolutions. We then state rigorously the more general result for the $q$-multiplicative and $b$-additive convolutions. In this section, many quantities will be defined with $b=0$ in mind (this corresponds to the classical additive convolution), with the more general quantities given in subsequent sections. Further, we will leave the proofs of the lemmas to the generic $b$ case, omitting them here.
To go from the multiplicative world to the additive world, we use a linear map which acts as an exponentiation on roots, and a limiting process which acts as a logarithm. In particular, we will refer to the following algebra endomorphism on $\mathbb{C}[x]$ as our ``exponential map'': \[ E_{q,0}: x \mapsto \frac{1-x}{1-q} \] In what follows, any limiting process will mean uniform convergence on compact sets in $\mathbb{C}$, unless otherwise specified. This will allow us to extract analytic information about roots using the classical Hurwitz' theorem. In particular, the following result hints at the analytic information provided by the exponential map $E_{q,0}$. \begin{proposition} \label{prop:clsexponentation} We have the following for any $p \in \mathbb{C}[x]$. \[ \lim_{q \rightarrow 1} [E_{q,0}(p)](q^x) = p \] \end{proposition} \begin{proof} We first consider $[E_{q,0}(x)](q^x) = \frac{1-q^x}{1-q}$, for which we obtain the following by the generalized binomial theorem: \[ \lim_{q \to 1} [E_{q,0}(x)](q^x) = \lim_{q \to 1} \frac{1-q^x}{1-q} = \lim_{q \to 1} \sum_{m=1}^\infty \binom{x}{m}(q-1)^{m-1} = \binom{x}{1} = x \] To show that convergence here is uniform on compact sets, consider the tail for $|x| \leq M$: \[ \begin{split} \left|\sum_{m=2}^\infty \binom{x}{m} (q-1)^{m-1}\right| &\leq \sum_{m=0}^\infty \left|(q-1)^{m+1} \cdot \frac{x(x-1) \cdots (x-m-1)}{(m+2)!}\right| \\ &\leq |q-1| \sum_{m=0}^\infty |q-1|^m \prod_{k=1}^{m+2} \left(1 + \frac{|x|}{k}\right) \\ &\leq |q-1| (1+M)^2 \sum_{m=0}^\infty \left(|q-1| (1+M)\right)^m \\ &= \frac{|q-1| (1+M)^2}{1 - |q-1| (1+M)} \end{split} \] This limits to zero as $q \to 1$, which proves the desired convergence. Since $E_{q,0}$ is an algebra morphism, we can use the fundamental theorem of algebra to complete the proof. 
Specifically, letting $p(x) = c_0 \prod_k (x - \alpha_k)$ we have: \[ \lim_{q \to 1}\left[E_{q,0}\left(c_0 \prod_k (x-\alpha_k)\right)\right](q^x) = c_0 \prod_k \left(\lim_{q \to 1} [E_{q,0}(x)](q^x) - \alpha_k\right) = c_0 \prod_k (x-\alpha_k) \] \end{proof} We now state our result in the classical case, which gives an analytic connection between the additive and multiplicative convolutions. As a note, many of the analytic arguments used in the proof of this result will have a flavor similar to that of the proof of Proposition \ref{prop:clsexponentation}. \begin{theorem} \label{thm:clsancon} Let $p,r \in \mathbb{C}[x]$ be of degree at most $n$. We have the following. \[ \lim_{q \rightarrow 1} (1-q)^n \left[E_{q,0}(p) \boxtimes^n E_{q,0}(r) \right](q^x) = p \boxplus^n r \] \end{theorem} \subsubsection*{Proof Sketch} We will establish the above identity by calculating it on basis elements. Specifically, we will expand everything into powers of $(1-q)$. To prove the theorem, it then suffices to show that: (1) the negative degree coefficients are all zero, (2) the series has the desired constant term, and (3) the tail of the series converges to zero uniformly on compact sets. Our first step towards establishing this is expanding $q^{kx}$ in terms of powers of $(1-q)$. \begin{remark} Since we will only be concerned with behavior for $q$ near $1$, we will use the notation $q \approx 1$ to indicate there exists some $\epsilon > 0$ such that the statement holds for $q \in (1- \epsilon, 1 + \epsilon)$. \end{remark} \begin{definition} We define a constant, which will help us to simplify the following computations. \[ \alpha_{q,0} := \frac{\ln q}{1-q} \] Note that $\lim_{q \rightarrow 1} \alpha_{q, 0} = -1$. \end{definition} \begin{lemma} \label{lem:clsqexp} Fix $k \in \mathbb{N}_0$. For $q \approx 1$, we have the following.
\[ q^{kx} = \sum_{m=0}^\infty \frac{x^m}{m!} \alpha_{q,0}^m k^m (1-q)^m \] For fixed $q \approx 1$, this series has a finite radius of convergence, and this radius approaches infinity as $q \rightarrow 1$. \end{lemma} Notice that this is not a true power series in $(1-q)$, as $\alpha_{q,0}$ depends on $q$. Using this, we can calculate the series obtained after plugging in specific basis elements. \begin{lemma} \label{lem:clsbasiscomp} Fix $q \approx 1$ in $\mathbb{R}_+$ and $j,k,n \in \mathbb{N}_0$ such that $0 \leq j \leq k \leq n$. We have the following. \[ (1-q)^n \left[(1-x)^j \boxtimes^n (1-x)^k\right](q^x) = \sum_{m=0}^\infty \frac{x^m}{m!} \alpha_{q,0}^m (1-q)^{n+m-j-k} \sum_{i=0}^j \frac{\binom{j}{i}\binom{k}{i}}{\binom{n}{i}}(-1)^i i^m \] \end{lemma} We use interpolation arguments to handle the terms of this series, which are combinatorial in nature. In particular we show that this series has no nonzero negative degree terms, as seen in the following. \begin{proposition} Fix $j,k,m,n \in \mathbb{N}_0$ such that $j \leq k$ and $n+m-j-k \leq 0$. We have the following identity. \[ \sum_{i=0}^j \frac{\binom{j}{i}\binom{k}{i}}{\binom{n}{i}}(-1)^i i^m = \left\{ \begin{array}{ll} (-1)^{n-j-k}\frac{(j)!(k)!}{(n)!} & m = j+k-n \\ 0 & m < j+k-n \end{array} \right. \] \end{proposition} To deal with the tail of the series, we then use crude bounds to get uniform convergence on compact sets. \begin{lemma} Fix $M > 0$, and $j,k,n \in \mathbb{N}_0$ such that $j \leq k \leq n$. For $|x| \leq M$, there exists $\gamma > 0$ such that the following bound holds for $q \in (1-\gamma, 1+\gamma)$. \[ \left|\sum_{m>j+k-n} \frac{x^m}{m!} \alpha_{q,0}^m (1-q)^{n+m-j-k} \sum_{i=0}^j \frac{\binom{j}{i}\binom{k}{i}}{\binom{n}{i}}(-1)^i i^m\right| \leq c_0c_1 \sum_{m=1}^\infty c_2^m |1-q|^m \] Here, $c_0,c_1,c_2$ are independent of $q$. \end{lemma} With this, we can now complete the proof of the theorem by comparing the desired quantity to the constant term in our series in $(1-q)$. 
\begin{proof}[Proof of Theorem \ref{thm:clsancon}.] For $j,k,n \in \mathbb{N}_0$ such that $0 \leq j \leq k \leq n$, we can combine the above results to obtain the following. Recall that $\lim_{q \rightarrow 1} \alpha_{q,0} = -1$. \[ \lim_{q \rightarrow 1} (1-q)^n \left[(1-x)^j \boxtimes^n (1-x)^k\right](q^x) = \frac{j!k!}{n!(j+k-n)!} x^{j+k-n} = x^j \boxplus^n x^k \] By symmetry, this demonstrates the desired result on a basis. Therefore, the proof is complete. \end{proof} \subsection{General Connection Preliminaries} We now prove the previous results in greater generality, which allows us to extend them to the generalized convolutions. First though, we give some preliminary notation. \begin{definition} Fix $q \in \mathbb{R}_+$ and $x \in \mathbb{C}$. We define $(x)_q := \frac{1-q^x}{1-q}$. Note that $\lim_{q \rightarrow 1} (x)_q = x$, using the generalized binomial theorem on $q^x = (1+(q-1))^x$. \end{definition} Specifically, for any $n \in \mathbb{N}$, we have: \[ (n)_q := \frac{1-q^n}{1-q} = 1 + q + q^2 + \cdots + q^{n-1} \] We then extend this notation to $(n)_q! := (n)_q(n-1)_q \cdots (2)_q(1)_q$ and $\binom{n}{k}_q := \frac{(n)_q!}{(k)_q!(n-k)_q!}$. We also define a system of bases of $\mathbb{C}[x]$ which will help us to understand the mesh convolutions. \begin{definition} For $b \geq 0$ and $q \in \mathbb{R}_+$, we define the following bases of $\mathbb{C}[x]$. \[ v_{q,b}^k := \frac{(1-x)(1-q^bx) \cdots (1-q^{(k-1)b}x)}{(1-q)^k} \] \[ \nu_b^k := x(x+b)(x+2b) \cdots (x+(k-1)b) \] \end{definition} We demonstrate the relevance of these bases to the generalized convolutions by giving alternate definitions. Consider a linear map $A_b$ on $\mathbb{C}[x]$ defined via $A_b : \nu_0^k \mapsto \nu_b^k$. That is, $A_b : x^k \mapsto x(x+b)\cdots(x+(k-1)b)$.
We can then define the $b$-additive convolution as follows: \[ p \boxplus_b^n r := A_b(A_b^{-1}(p) \boxplus^n A_b^{-1}(r)) \] That is, the $b$-additive convolution is essentially a change of basis of the classical additive convolution. Note that equivalently, one can conjugate $\partial_x$ by $A_b$ to obtain $\Delta_b : p \mapsto \frac{p(x)-p(x-b)}{b}$ which demonstrates the definition of $\boxplus_b^n$ in terms of finite difference operators. Similarly, the $q$-multiplicative convolution can be seen as a change of basis of the classical multiplicative convolution. Consider a linear map $M_q^{(n)}$ on $\mathbb{C}[x]$ defined via $M_q^{(n)} : \binom{n}{k}x^k \mapsto \binom{n}{k}_q q^{\binom{k}{2}} x^k$, which has the property that $M_{q^b}^{(n)}: (1-x)^n \mapsto (1-q)^n v_{q,b}^n$. We can then define the $q$-multiplicative convolution as follows: \[ p \boxtimes_q^n r := M_q^{(n)}\left((M_q^{(n)})^{-1}(p) \boxtimes^n (M_q^{(n)})^{-1}(r)\right) \] These bases will be used to simplify the proof of the general analytic connection for the mesh (non-classical) convolutions. In what follows they will play the role that the basis elements $x^k$ and $(1-x)^k$ did in the classical proof sketch above. \subsection{Main Result for Mesh Convolutions} We now generalize the results from the classical ($b=0$) setting. First we define a generalized ``exponential map'' using the basis elements defined above. Note for $b > 0$ these are no longer algebra morphisms. \[ E_{q,b}: \nu_b^k \mapsto v_{q,b}^k \] Notice that specialization to $b = 0$ recovers the original ``exponential map''. Also notice that for any $p$, the roots of $E_{q,b}(p)$ approach 1 as $q \to 1$ (multiply the output polynomial by $(1-q)^{\deg(p)}$). In all that follows, previously stated results can be immediately recovered by setting $b=0$. 
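The conjugation claim above can be sanity-checked in exact arithmetic: since $A_b$ sends $x^k$ to $\nu_b^k$ and $\partial_x x^k = k x^{k-1}$, the identity $\Delta_b = A_b\,\partial_x\,A_b^{-1}$ amounts to $\Delta_b\,\nu_b^k = k\,\nu_b^{k-1}$. A short verification over the rationals (Python; helper names are ours):

```python
from fractions import Fraction as Fr

def padd(p, q):
    return [(p[i] if i < len(p) else Fr(0)) + (q[i] if i < len(q) else Fr(0))
            for i in range(max(len(p), len(q)))]

def pmul(p, q):
    out = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            out[i + j] += a * c
    return out

def nu(k, b):
    # coefficients (lowest degree first) of nu_b^k = x (x + b) ... (x + (k-1) b)
    out = [Fr(1)]
    for j in range(k):
        out = pmul(out, [Fr(j) * b, Fr(1)])  # multiply by (x + j b)
    return out

def pshift(p, b):
    # coefficients of p(x - b)
    out, power = [Fr(0)], [Fr(1)]
    for c in p:
        out = padd(out, [c * t for t in power])
        power = pmul(power, [-b, Fr(1)])
    return out

def delta(p, b):
    # Delta_b p = (p(x) - p(x - b)) / b, computed exactly; the leading term cancels
    d = [t / b for t in padd(p, [-t for t in pshift(p, b)])]
    while len(d) > 1 and d[-1] == 0:
        d.pop()
    return d

# Delta_b nu_b^k == k nu_b^{k-1}, mirroring d/dx x^k = k x^{k-1}
b = Fr(3, 2)
checks = [delta(nu(k, b), b) == [k * c for c in nu(k - 1, b)] for k in range(1, 6)]
```

Since $\Delta_b$ acts on the $\nu_b^k$ exactly as $\partial_x$ acts on the $x^k$, both alternate definitions above reduce to the same classical computation in the respective bases.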
\begin{proposition} We obtain the same key relation for the generalized exponential maps: \[ \lim_{q \rightarrow 1} [E_{q,b}(p)](q^x) = p \] \end{proposition} \begin{proof} We compute on basis elements, using Proposition \ref{prop:clsexponentation} in the process: \[ \begin{split} \lim_{q \rightarrow 1} [E_{q,b}(\nu_b^k)](q^x) &= \lim_{q \rightarrow 1} [v_{q,b}^k](q^x) \\ &= \lim_{q \rightarrow 1} \frac{(1-q^x)(1-q^{x+b}) \cdots (1-q^{x+(k-1)b})}{(1-q)^k} \\ &= \prod_{j=0}^{k-1} \lim_{q \rightarrow 1} \frac{1-q^{x+jb}}{1-q} = \prod_{j=0}^{k-1} (x+jb) = \nu_b^k \end{split} \] \end{proof} As in Proposition \ref{prop:clsexponentation}, one can interpret the $E_{q,b}$ maps as a way to exponentiate the roots of a polynomial. The inverse to these maps is given in the previous proposition by plugging in an exponential and limiting, which corresponds to taking the logarithm of the roots. This discussion will be made more precise in \S \ref{sect:applications}. We now state and prove the main result, which gives an analytic link between the $b$-additive and $q$-multiplicative convolutions. We follow the proof sketch of the classical result given above, breaking the following full proof up into more manageable sections. \newtheorem*{qbancon}{Theorem \ref{thm:qbancon}} \begin{qbancon} Fix $b \geq 0$ and let $p,r$ be polynomials of degree $n$. We have the following. \[ \lim_{q \rightarrow 1} (1-q)^n \left[E_{q,b}(p) \boxtimes_{q^b}^n E_{q,b}(r) \right](q^x) = p \boxplus_b^n r \] \end{qbancon} \subsubsection*{Series Expansion} In order to prove this theorem, we first expand the left-hand side of the expression in a series in powers of $(1-q)$. As above, this is not quite a power series in $(1-q)$ as $\alpha_{q,b}$ (which we now define) depends on $q$. \begin{definition} We define the $b$-version of the $\alpha_{q,0}$ constants as follows. \[ \alpha_{q,b} := \left\{ \begin{array}{ll} \frac{-(b)_{q^{-1}}}{qb} & b > 0 \\ \frac{\ln q}{1-q} & b = 0 \end{array} \right.
\] Note that $\lim_{b \rightarrow 0} \alpha_{q,b} = \alpha_{q,0}$ for $q \in \mathbb{R}_+$, and $\lim_{q \rightarrow 1} \alpha_{q,b} = -1$ for fixed $b \geq 0$. \end{definition} We now need to understand how exponential polynomials in $q$ relate to our basis elements. \begin{lemma} \label{lem:qbqexp} Fix $b \geq 0$, and $k \in \mathbb{N}_0$. For $q \approx 1$ in $\mathbb{R}_+$, we have the following. \[ q^{kx} = \sum_{m=0}^\infty \frac{\nu_b^m}{m!} \alpha_{q,b}^m (k)_{q^{-b}}^m (1-q)^m \] For fixed $q \approx 1$, this series has a finite radius of convergence, and this radius approaches infinity as $q \rightarrow 1$. \end{lemma} \begin{proof} For $b > 0$, we use the generalized binomial theorem to compute: \[ \begin{split} q^{kx} = (q^{-bk}-1+1)^{-x/b} &= \sum_{m=0}^\infty \frac{(-b)^{-m}}{m!}x(x+b) \cdots (x+b(m-1)) (q^{-bk}-1)^m \\ &= \sum_{m=0}^\infty \frac{(-b)^{-m} \nu_b^m}{m!} (k)_{q^{-b}}^m (b)_{q^{-1}}^m (q^{-1}-1)^m \\ &= \sum_{m=0}^\infty \frac{\nu_b^m}{m!} \left(-\frac{(b)_{q^{-1}}}{qb}\right)^m (k)_{q^{-b}}^m (1-q)^m \end{split} \] For $b = 0$, manipulating the Taylor series of $q^{kx} = e^{kx \ln q}$ gives the result. For fixed $q \approx 1$, let $\delta > 0$ be small enough such that $|\alpha_{q,b}| < \frac{1+\delta}{q}$ and $(k)_{q^{-b}} < k + \delta$. Consider: \[ |\nu_b^m| = |x(x+b)\cdots(x+(m-1)b)| \leq m!(|x|+b)^m \] From this, we obtain: \[ |q^{kx}| \leq \sum_{m=0}^\infty \left(\frac{(|x|+b)(1+\delta)(k+\delta)}{q}\right)^m |1-q|^m \] It is then easy to see that the radius of convergence of this series limits to infinity as $q \rightarrow 1$. \end{proof} We will now proceed by proving the main result on a basis. To that end, we will prove a number of results related to basis element computations. Most of these are rather tedious and not very illuminating. Perhaps this can be simplified through some more detailed and generalized theory of $q$- and $b$-polynomial operators. 
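Lemma \ref{lem:qbqexp} admits a quick numerical sanity check. The sketch below (our own Python code; it assumes $b > 0$, so that the closed form for $\alpha_{q,b}$ applies) sums the series and compares it against $q^{kx}$:

```python
import math

def qnum(x, q):
    # (x)_q = (1 - q^x) / (1 - q)
    return (1.0 - q ** x) / (1.0 - q)

def rising(x, m, b):
    # nu_b^m evaluated at x: x (x + b) ... (x + (m-1) b)
    out = 1.0
    for j in range(m):
        out *= x + j * b
    return out

def qexp_series(k, x, q, b, terms=80):
    # partial sum of the series in the lemma (b > 0 assumed)
    alpha = -qnum(b, 1.0 / q) / (q * b)           # alpha_{q,b}
    ratio = alpha * qnum(k, q ** (-b)) * (1.0 - q)
    return sum(rising(x, m, b) / math.factorial(m) * ratio ** m
               for m in range(terms))

# e.g. q = 0.9, b = 1, k = 2, x = 1.5: the sum reproduces q^{kx} = 0.9^3
err = abs(qexp_series(2, 1.5, 0.9, 1.0) - 0.9 ** 3.0)
```

Here the series converges rapidly since $|1-q|$ is small; the lemma's finite radius of convergence only becomes a constraint for $q$ farther from $1$.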
\begin{lemma} \label{lem:qbbasiscomp} Fix $q \approx 1$ in $\mathbb{R}_+$, $b \geq 0$, and $j,k,n \in \mathbb{N}_0$ such that $0 \leq j \leq k \leq n$. We have the following. \[ (1-q)^n \left[v_{q,b}^j \boxtimes_{q^b}^n v_{q,b}^k\right](q^x) = \sum_{m=0}^\infty \frac{\nu_b^m}{m!} \alpha_{q,b}^m (1-q)^{n+m-j-k} \sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i (i)_{q^{-b}}^m \] \end{lemma} \begin{proof} We compute: \[ \begin{split} (1-q)^n \left[v_{q,b}^j \boxtimes_{q^b}^n v_{q,b}^k\right](q^x) &= (1-q)^{n-j-k} \sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i q^{ix} \\ &= \sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i \sum_{m=0}^\infty \frac{\nu_b^m}{m!} \alpha_{q,b}^m (1-q)^{n+m-j-k} (i)_{q^{-b}}^m \\ &= \sum_{m=0}^\infty \frac{\nu_b^m}{m!} \alpha_{q,b}^m (1-q)^{n+m-j-k} \sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i (i)_{q^{-b}}^m \end{split} \] \end{proof} \subsubsection*{Q-Lagrange Interpolation} To prove convergence in Theorem \ref{thm:qbancon}, we break up the infinite sum of Lemma \ref{lem:qbbasiscomp} into two pieces. For $n+m-j-k \leq 0$, we use an interpolation argument to obtain the following identity. Note that this generalizes a similar identity (for $q=1$) found in \cite{qlagrange_mathof}. \begin{proposition} \label{prop:qlagrange} Fix $q \approx 1$, $b \geq 0$, and $j,k,m,n \in \mathbb{N}_0$ such that $j \leq k$ and $n+m-k \leq j$. We have the following identity. \[ \sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i (i)_{q^{-b}}^m = \left\{ \begin{array}{ll} (-1)^{n-j-k}q^{b\binom{n}{2}-b\binom{j}{2}-b\binom{k}{2}}\frac{(j)_{q^b}!(k)_{q^b}!}{(n)_{q^b}!} & m = j+k-n \\ 0 & m < j+k-n \end{array} \right. \] \end{proposition} We first give a lemma, and then the proof of the proposition will follow. 
Let $[t^j]p(t)$ denote the coefficient of $p$ corresponding to the monomial $t^j$. \begin{lemma}\label{interpolate} Fix $p \in \mathbb{C}_j[x]$. We have the following identity: \[ (-1)^j q^{-\binom{j}{2}} \cdot [t^j]p(t) = \sum_{i=0}^j p((i)_{q^{-1}}) \frac{(-1)^{i}}{(i)_q! (j-i)_q!} q^{\binom{i}{2}} \] \end{lemma} \begin{proof} Using Lagrange interpolation, the following holds for any polynomial of degree at most $j$: \[ [t^j]p(t) = \sum_{i=0}^j p((i)_q) \frac{(-1)^{j-i}}{(i)_q! (j-i)_q!} q^{-\binom{i}{2}} q^{-i(j-i)} \] Using the identity $(i)_{q^{-1}}! = q^{-\binom{i}{2}} (i)_q!$ (via $(i)_{q^{-1}} = q^{-i+1} (i)_q$) and replacing $q$ by $q^{-1}$ gives: \[ \begin{split} [t^j]p(t) &= \sum_{i=0}^j p((i)_{q^{-1}}) \frac{(-1)^{j-i}}{(i)_q! (j-i)_q!} q^{2\binom{i}{2}} q^{i(j-i)} q^{\binom{j-i}{2}} \\ &= \sum_{i=0}^j p((i)_{q^{-1}}) \frac{(-1)^{j-i}}{(i)_q! (j-i)_q!} q^{\binom{i}{2}} q^{\binom{j}{2}} \end{split} \] The result follows. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:qlagrange}] Consider the polynomial $p(t) = t^m((n)_{q^{-b}}-t)((n-1)_{q^{-b}}-t) \cdots ((k+1)_{q^{-b}}-t)$, which is of degree $m+n-k \leq j$. So, $[t^j]p(t) = (-1)^{n-k}\delta_{m=j+k-n}$. Also, recall the identity $(i)_{q^{-b}}! = q^{-b\binom{i}{2}} (i)_{q^b}!$. Using the previous lemma and replacing $q$ by $q^b$, we obtain: \[ \begin{split} (-1)^{j} q^{-b\binom{j}{2}} \cdot (-1)^{n-k}\delta_{m=j+k-n} &= \sum_{i=0}^{j} p((i)_{q^{-b}}) \frac{(-1)^{i}}{(i)_{q^b}! (j-i)_{q^b}!} q^{b\binom{i}{2}} \\ &= \sum_{i=0}^{j} q^{-b i (n-k)} \frac{(n-i)_{q^{-b}}!}{(k-i)_{q^{-b}}!}\frac{(-1)^{i}}{(i)_{q^b}! (j-i)_{q^b}!} q^{b\binom{i}{2}}(i)_{q^{-b}}^m \\ &= \sum_{i=0}^{j} q^{b \binom{k}{2}} q^{-b \binom{n}{2}} \frac{(n-i)_{q^{b}}!}{(k-i)_{q^{b}}!}\frac{(-1)^{i}}{(i)_{q^b}! 
(j-i)_{q^b}!} q^{b\binom{i}{2}}(i)_{q^{-b}}^m \\ &= \sum_{i=0}^j q^{b \binom{k}{2}} q^{-b \binom{n}{2}} \frac{(n)_{q^b}!}{(j)_{q^b}! (k)_{q^b}!} \cdot \frac{\binom{j}{i}_{q^b} \binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}} q^{b\binom{i}{2}} (-1)^i (i)_{q^{-b}}^m \end{split} \] The result follows. \end{proof} \subsubsection*{Tail of the Series} For $n+m-j-k > 0$, we show that the tail of the infinite series in Lemma \ref{lem:qbbasiscomp} is bounded by a geometric series whose terms tend to $0$ as $q \rightarrow 1$. The proof is somewhat similar to the discussion of convergence in the proof of Lemma \ref{lem:qbqexp}. \begin{lemma} Fix $b \geq 0$, $M > 0$, and $j,k,n \in \mathbb{N}_0$ such that $j \leq k \leq n$. For $|x| \leq M$, there exists $\gamma > 0$ such that the following bound holds for $q \in (1-\gamma, 1+\gamma)$. \[ \left|\sum_{m>j+k-n} \frac{\nu_b^m}{m!} \alpha_{q,b}^m (1-q)^{n+m-j-k} \sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i (i)_{q^{-b}}^m\right| \leq c_0c_1 \sum_{m=1}^\infty c_2^m |1-q|^m \] Here, $c_0,c_1,c_2$ are independent of $q$. \end{lemma} \begin{proof} Fix $n+m-j-k > 0$ with $j \leq k \leq n$ and $q \approx 1$. We have the following bound, where $c_0$ is some positive constant independent of $q$: \[ \left|\sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i (i)_{q^{-b}}^m\right| \leq \sum_{i=0}^j c_0 (i+\delta)^m \leq c_0 (n+\delta)^{m+1} \] For $|x| \leq M$, we have: \[ |\nu_b^m| = |x(x+b)\cdots(x+(m-1)b)| \leq |M(M+b)\cdots(M+(m-1)b)| \leq m!(M+b)^m \] This then implies the following bound on the tail. Let $c_1 := (n+\delta) \big[(1+\delta)(M+b)(n+\delta)\big]^{j+k-n}$ and $c_2 := (1+\delta)(M+b)(n+\delta)$, where small $\delta > 0$ is needed to deal with limiting details.
\[ \begin{split} \Bigg|\sum_{m=j+k+1-n}^\infty \frac{\nu_b^m}{m!} \alpha_{q,b}^m &(1-q)^{n+m-j-k} \sum_{i=0}^j \frac{\binom{j}{i}_{q^b}\binom{k}{i}_{q^b}}{\binom{n}{i}_{q^b}}q^{bi(i-1)/2}(-1)^i (i)_{q^{-b}}^m\Bigg| \\ &\leq \sum_{m=j+k+1-n}^\infty \left|\frac{\nu_b^m}{m!} \alpha_{q,b}^m (1-q)^{n+m-j-k}\right| c_0(n+\delta)^{m+1} \\ &\leq c_0\sum_{m=j+k+1-n}^\infty (1+\delta)^m(M+b)^m |1-q|^{n+m-j-k}(n+\delta)^{m+1} \\ &\leq c_0c_1 \sum_{m=j+k+1-n}^\infty \big[(1+\delta)(M+b)(n+\delta)|1-q|\big]^{n+m-j-k} \\ &= c_0c_1 \sum_{m=1}^\infty c_2^m |1-q|^m \end{split} \] So, for any $\epsilon > 0$ we can select $q$ close enough to 1 such that $|1-q| < \frac{\epsilon}{c_2}$. This implies the above series is geometric with terms bounded by $\epsilon^m$. \end{proof} The above lemma in particular demonstrates that the tail of the series in Lemma \ref{lem:qbbasiscomp} converges to 0 uniformly on compact sets. With this, we can now complete the proof of the theorem. \begin{proof}[Proof of Theorem \ref{thm:qbancon}.] For $j,k,n \in \mathbb{N}_0$ such that $0 \leq j \leq k \leq n$, we can combine the above results. When we expand our limit as a sum of powers of $(1-q)$, we have shown that everything limits to zero except for the constant term. Recall that $\lim_{q \rightarrow 1} \alpha_{q,b} = -1$. \[ \begin{split} \lim_{q \rightarrow 1} (1-q)^n \left[v_{q,b}^j \boxtimes_{q^b}^n v_{q,b}^k\right](q^x) &= \lim_{q \to 1} \frac{\nu_{b}^{j+k-n} \alpha_{q,b}^{j+k-n}}{(j+k-n)!}(-1)^{j+k-n}q^{b\binom{n}{2}-b\binom{j}{2}-b\binom{k}{2}}\frac{(j)_{q^b}!(k)_{q^b}!}{(n)_{q^b}!} \\ &= \frac{j!k!}{n!(j+k-n)!} \nu_b^{j+k-n} \\ &= \nu_b^j \boxplus_b^n \nu_b^k\\ \end{split} \] By symmetry, this demonstrates the desired result on a basis. Therefore, the proof is complete. 
\end{proof} \subsection{Applications To Previous Results} \label{sect:applications} The main motivation for the multiplicative to additive convolution connection was to be able to relate seemingly analogous root information results. The following table outlines the results we proceed to connect. \[ \begin{tabular}{ c | c } Additive Convolution & Multiplicative Convolution \\ \hline Preserves Real Rooted Polynomials & Preserves Positive Rooted Polynomials \\ Additive Max Root Triangle Inequality & Multiplicative Max Root Triangle Inequality \\ Preserves $b$-Mesh & Preserves $q$-Logarithmic Mesh \\ \end{tabular} \] All of these connections have a similar flavor, and rely on the following elementary facts about exponential polynomials. We say $f(x) = \sum_{k=0}^n c_k q^{kx}$ is an \emph{exponential polynomial of degree $n$ with base $q$}. A real number $x$ is a root of $f$ if and only if $q^x$ is a root of $\sum_{k=0}^n c_k x^k$. Because of this we can bootstrap the fundamental theorem of algebra. \begin{definition} We call $\left\{x \in \mathbb{C} : \frac{-\pi}{|\ln(q)|} < \Im(x) < \frac{\pi}{|\ln(q)|}\right\}$ the \emph{principal strip} (with respect to $q$). Let $p(q^x)$ be an exponential polynomial of degree $n$ with base $q$. The number of roots of $p(q^x)$ in the principal strip is the same as the number of roots of $p$ in $\mathbb{C} \setminus (-\infty,0]$. We call the roots in the principal strip the \emph{principal roots}. \end{definition} \begin{lemma} The principal roots of $E_{q,b}(p)[q^x]$ converge to the roots of $p$ as $q \to 1$. In particular, $E_{q,b}(p)[q^x]$ has $\deg(p)$ principal roots for $q \approx 1$. \end{lemma} \begin{proof} This follows from the fact that, as $q \to 1$, $E_{q,b}(p)[q^x]$ converges uniformly on compact sets to $p$ and the principal strip grows towards the whole plane. \end{proof} We can analyze the behavior of this convergence when $p$ is real rooted with distinct roots.
\begin{lemma} \label{lem:realrootconverge} Suppose $p$ has real coefficients and distinct real roots. For $q \approx 1$, we have that $E_{q,b}(p)[q^x]$ has principal roots which are real and distinct (and converging to the roots of $p$). \end{lemma} \begin{proof} Since $p$ has real coefficients, the roots of $E_{q,b}(p)[q^x]$ are either real or come in conjugate pairs. (Consider the fact that $q^{\overline{x}} = \overline{q^x}$.) If $p$ has real distinct roots, the previous lemma implies the principal roots of $E_{q,b}(p)[q^x]$ have distinct real part for $q$ close enough to 1. Therefore, the principal roots of $E_{q,b}(p)[q^x]$ must all be real. \end{proof} If we exponentiate (with base $q$) the principal roots of $E_{q,b}(p)[q^x]$, we get the roots of $E_{q,b}(p)$. So if the principal roots of $E_{q,b}(p)[q^x]$ are real, then the roots of $E_{q,b}(p)$ are positive. Considering the above results, this means that $E_{q,b}$ maps polynomials with distinct real roots to polynomials with distinct positive roots for $q \approx 1$. (In fact, the roots will be near 1.) \subsubsection*{Root Preservation} The most classical results about the roots are the following: \begin{theorem*}[Root Preservation] \leavevmode \begin{itemize} \item If $p,r \in \mathbb{R}_n[x]$ have positive roots, then $p \boxtimes^n r$ has positive roots. \item If $p,r \in \mathbb{R}_n[x]$ have real roots, then $p \boxplus^n r$ has real roots. \end{itemize} \end{theorem*} \noindent Neither of these results is particularly hard to prove, but showing how the additive result follows from the multiplicative serves as a prime example of how our theorem connects results on the roots. \begin{proof}[Proof of Additive from Multiplicative] We can reduce to showing that the additive convolution preserves real rooted polynomials with distinct roots, since every real rooted polynomial is a limit of polynomials with distinct real roots.
By Lemma \ref{lem:realrootconverge}, the roots of $E_{q,0}(p)$ are real, distinct, and exponentials of the principal roots of $E_{q,0}[p](q^x)$ for $q \approx 1$. This implies that $E_{q,0}(p)$ has positive real roots. By the multiplicative result, $E_{q,0}(p) \boxtimes^n E_{q,0}(r)$ has positive real roots, and therefore $[E_{q,0}(p) \boxtimes^n E_{q,0}(r)](q^x)$ has real principal roots. By our main result, $(1-q)^n[E_{q,0}(p) \boxtimes^n E_{q,0}(r)](q^x)$ converges to $p \boxplus^n r$. The real-rootedness of $[E_{q,0}(p) \boxtimes^n E_{q,0}(r)](q^x)$ for $q \approx 1$ then implies $p \boxplus^n r$ is real-rooted. \end{proof} \subsubsection*{Triangle Inequality} The next classical theorem relates to the max root of a given polynomial. Given a real-rooted polynomial $p$, let $\lambda({p})$ denote its max root. Given an exponential polynomial $f$ with principal roots all real, let $\lambda({f})$ denote the largest principal root of $f$. Also, denote $\exp_q(\alpha) := q^\alpha$. \begin{theorem*}[Triangle Inequalities] \leavevmode \begin{itemize} \item Given positive-rooted polynomials $p, r$ we have $\lambda({p \boxtimes^n r}) \leq \lambda({p}) \cdot \lambda({r})$ \item Given real-rooted polynomials $p, r$ we have $\lambda({p \boxplus^n r}) \leq \lambda({p}) + \lambda({r})$ \end{itemize} \end{theorem*} \noindent As before, neither of these have particularly complicated proofs, but we can use the multiplicative result to deduce the additive result in the following. \begin{proof}[Proof of Additive from Multiplicative] As in the previous proof, we can reduce to showing that the result holds for $p,r$ with distinct roots. For this proof, we only consider $q > 1$. By Lemma \ref{lem:realrootconverge}, we have that the roots of $E_{q,0}(p)$ are real, distinct, and exponentials of the principal roots of $E_{q,0}[p](q^x)$ for $q \approx 1$. This implies the roots of $E_{q,0}(p)$ are positive for $q \approx 1$. 
Additionally, notice that $\exp_q(\lambda(f(q^x))) = \lambda(f)$ whenever $f$ is positive-rooted. From the multiplicative result and the fact that $\boxtimes^n$ preserves positive-rootedness, we have the following for $q \approx 1$: \[ \begin{split} \exp_q(\lambda([E_{q,0}(p) \boxtimes^n E_{q,0}(r)](q^x))) &= \lambda({E_{q,0}(p) \boxtimes^n E_{q,0}(r)}) \\ &\leq \lambda({E_{q,0}(p)}) \cdot \lambda({E_{q,0}(r)}) \\ &= \exp_q(\lambda({E_{q,0}[p](q^x)}) + \lambda({E_{q,0}[r](q^x)})) \end{split} \] Therefore, $\lambda([E_{q,0}(p) \boxtimes^n E_{q,0}(r)](q^x)) \leq \lambda(E_{q,0}[p](q^x)) + \lambda(E_{q,0}[r](q^x))$. By our main result, $(1-q)^n[E_{q,0}(p) \boxtimes^n E_{q,0}(r)](q^x)$ converges to $p \boxplus^n r$, and therefore $\lambda([E_{q,0}(p) \boxtimes^n E_{q,0}(r)](q^x))$ converges to $\lambda(p \boxplus^n r)$. Similarly $\lambda({E_{q,0}[p](q^x)})$ converges to $\lambda({p})$, and the result follows. \end{proof} \subsubsection*{Application to Mesh Preservation Conjecture} Recall the log mesh result of Lamprecht in \cite{lamprecht} regarding the $q$-multiplicative convolution. \newtheorem*{qlogmesh}{Theorem \ref{thm:qlogmesh}} \begin{qlogmesh} Fix $q > 1$. Given positive-rooted polynomials $p,r \in \mathbb{R}_n[x]$ with $\lmesh(p), \lmesh(r) \geq q$, we have: \[ \lmesh(p \boxtimes_q^n r) \geq q \] \end{qlogmesh} In \cite{finitemesh}, Br{\"a}nd{\'e}n, Krasikov, and Shapiro conjectured the analogous result for the $b$-additive convolution (for $b=1$). Using our connection we will confirm this conjecture: \newtheorem*{main}{Theorem \ref{thm:main}} \begin{main} Given real-rooted polynomials $p,r \in \mathbb{R}_n[x]$ with $\mesh(p), \mesh(r) \geq b$, we have: \[ \mesh(p \boxplus_b^n r) \geq b \] \end{main} \begin{proof} We will prove this claim for polynomials $p, r$ with $\mesh(p), \mesh(r) > b$. Since we can approximate any polynomial with $\mesh(p) = b$ by polynomials with larger mesh, the result then follows.
By Lemma \ref{lem:realrootconverge}, $E_{q,b}[p](q^x)$ has real roots which converge to the roots of $p$ for $q \approx 1$. Since the roots of $p$ satisfy $\mesh(p) > b$, the principal roots of $E_{q,b}[p](q^x)$ will have mesh greater than $b$ for $q \approx 1$. Further, $\lmesh(E_{q,b}(p)) = \exp_q(\mesh(E_{q,b}[p](q^x))) > q^b$. (All of this discussion holds for $r$ as well.) By our main result, we have:
\[
\lim_{q \rightarrow 1} (1-q)^n \left[E_{q,b}(p) \boxtimes_{q^b}^n E_{q,b}(r) \right](q^x) = p \boxplus_b^n r
\]
By the previous theorem, the $q^b$-multiplicative convolution of $E_{q,b}(p)$ and $E_{q,b}(r)$ has logarithmic mesh at least $q^b$. Precomposition by $q^x$ then yields an exponential polynomial with mesh (of the principal roots) at least $b$. The principal roots of this exponential polynomial then converge to the roots of $p \boxplus_b^n r$, and hence $p \boxplus_b^n r$ has mesh at least $b$.
\end{proof}
\section{Second Proof Method: A Direct Proof using Interlacing} \label{sect:lamprecht}
While the previous framework generically transferred Lamprecht's multiplicative result to prove the conjectured result in the additive realm, one might desire a direct proof to gain insight into the underlying structure of the convolution. In what follows, we first outline the preliminary knowledge required to understand a special case of Lamprecht's argument. Then we outline his approach in the multiplicative case and extend this approach to the additive realm to prove the desired conjecture.
\subsection{Interlacing Preserving Operators}
Given $f,g \in \mathbb{R}[x]$, we say $f \ll g$ iff $f'g-fg' \leq 0$ iff $\left(\frac{f}{g}\right)' \leq 0$ wherever defined. Further, we say $f \ll g$ \emph{strictly} iff $f'g-fg' < 0$. Additionally, it is well known that $f$ and $g$ are real-rooted with interlacing roots iff $f \ll g$ or $g \ll f$. Further, $f$ and $g$ have strictly interlacing roots (no shared roots) iff $f \ll g$ strictly or $g \ll f$ strictly.
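As a quick numerical illustration of the sign criterion (a sketch, not part of the proofs; the helper names are ours), one can sample $W = f'g - fg'$ with exact rational arithmetic. Since $W$ is a polynomial, non-positivity at a dense grid of sample points is strong evidence for $f \ll g$, though not a proof:

```python
from fractions import Fraction as F

def poly_from_roots(roots):
    """Monic polynomial with the given roots, as an ascending coefficient list."""
    p = [F(1)]
    for r in roots:
        p = [F(0)] + p                    # multiply by x
        for i, c in enumerate(p[1:]):
            p[i] -= F(r) * c              # subtract r * (old polynomial)
    return p

def evalp(p, t):
    return sum(c * F(t) ** i for i, c in enumerate(p))

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def ll(f_roots, g_roots, lo=-10, hi=10, steps=400):
    """Heuristic check of f << g: verify W = f'g - f g' <= 0 at sample points."""
    f, g = poly_from_roots(f_roots), poly_from_roots(g_roots)
    fp, gp = deriv(f), deriv(g)
    for k in range(steps + 1):
        t = F(lo) + F(hi - lo) * k / F(steps)
        if evalp(fp, t) * evalp(g, t) - evalp(f, t) * evalp(gp, t) > 0:
            return False
    return True
```

For example, the roots $1 < 2 < 3 < 4$ interlace, so $(x-1)(x-3) \ll (x-2)(x-4)$ but not conversely; and $(x-1)(x-3)$ is $2$-mesh, matching the remark below that $b$-mesh is equivalent to $f \ll f(x-b)$.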
Let $\lambda_f$ denote the largest root of $f$. If $f$ and $g$ are monic, then $f \ll g$ implies $\lambda_f \leq \lambda_g$. We give a short proof of this now. If $f$ has a double root at $\lambda_f$, then interlacing implies $g(\lambda_f) = 0$, and therefore $\lambda_f \leq \lambda_g$. Otherwise, consider that $f \ll g$ implies $f'(\lambda_f) \cdot g(\lambda_f) = (f'g-fg')(\lambda_f) \leq 0$. Since $f$ is monic, we have $f'(\lambda_f) > 0$ which in turn implies $g(\lambda_f) \leq 0$. Since $g$ is monic, this implies the result. Note that this further implies that if $f$ and $g$ are monic and $\deg(f) = \deg(g)-1$, then $f$ and $g$ have interlacing roots iff $f \ll g$. Another classical result allows us to combine interlacing relations. If $f \ll g$ and $f \ll h$, then $f \ll ag+bh$ for any $a,b \in \mathbb{R}_+$. A similar result holds if $g \ll f$ and $h \ll f$. Note also that $f \ll g$ iff $g \ll -f$, and that $af \ll bf$ for all $a,b \in \mathbb{R}$. Finally, the Hermite-Biehler theorem says $af+bg$ is real-rooted for all $a,b \in \mathbb{R}$ iff either $f \ll g$ or $g \ll f$. \begin{remark} A polynomial $f$ with non-negative roots is $q$-log mesh if and only if $f \ll f(q^{-1}x)$ and strictly $q$-log mesh if and only if $f \ll f(q^{-1}x)$ strictly (for $q > 1$). Similarly, a polynomial $f$ with real roots is $b$-mesh if and only if $f \ll f(x-b)$ and strictly $b$-mesh if and only if $f \ll f(x-b)$ strictly (for $b > 0$). \end{remark} Now let $f$ and $g$ be of degree $n$, and suppose $f$ has $n$ simple real roots, $\alpha_1,...,\alpha_n$. By partial fraction decomposition, we have: \[ \frac{g(x)}{f(x)} = c + \sum_{k=1}^n \frac{c_{\alpha_k}}{x-\alpha_k} \] Denoting $f_{\alpha_k}(x) := \frac{f(x)}{(x-\alpha_k)}$, this implies: \[ g(x) = cf(x) + \sum_{k=1}^n c_{\alpha_k} f_{\alpha_k}(x) \] If $g(\alpha_k) = 0$, then $c_{\alpha_k}=0$. 
Otherwise we compute: \[ c_{\alpha_k} = \lim_{x \to \alpha_k} \frac{(x-\alpha_k)g(x)}{f(x)} = \left[\frac{f'(\alpha_k)}{g(\alpha_k)}\right]^{-1} = \left[\left(\frac{f}{g}\right)'(\alpha_k)\right]^{-1} \] This leads to the first result, which is a classical one. \begin{proposition} Fix $f,g \in \mathbb{R}_n[x]$. Suppose $f$ is monic and has $n$ simple real roots, $\alpha_1,...,\alpha_n$. Consider the decomposition: \[ g(x) = cf(x) + \sum_{k=1}^n c_{\alpha_k} f_{\alpha_k}(x) \] Then, $g \ll f$ iff $c_{\alpha_k} \geq 0$ for all $k$, and $f \ll g$ iff $c_{\alpha_k} \leq 0$ for all $k$. \end{proposition} \begin{proof} $(\Rightarrow)$. If $f \ll g$, then $\left(\frac{f}{g}\right)' \leq 0$. This implies $c_{\alpha_k} \leq 0$ for all $k$. If $g \ll f$, then $f \ll -g$. The same argument implies $c_{\alpha_k} \geq 0$ for all $k$. $(\Leftarrow)$. By the above argument, $cf \ll f$ and $f_{\alpha_k} \ll f$ for all $k$. So, if $c_{\alpha_k} \geq 0$ for all $k$, then: \[ g = cf + \sum_{k=1}^n c_{\alpha_k} f_{\alpha_k}(x) \ll f \] A similar argument works to show $f \ll g$ if $c_{\alpha_k} \leq 0$ for all $k$. \end{proof} There is actually another way to state this result, in terms of cones of polynomials. Let $\cone(f_1,...,f_m)$ denote the closure of the positive cone generated by the polynomials $f_1,...,f_m$. \begin{corollary} Let $f \in \mathbb{R}_n[x]$ be a monic polynomial with $n$ simple roots, $\alpha_1,...,\alpha_n$. Then, $\{g \in \mathbb{R}_n[x] : g \ll f\} = \cone(f,-f,f_{\alpha_1},...,f_{\alpha_n})$ and $\{g \in \mathbb{R}_n[x] : f \ll g\} = \cone(f,-f,-f_{\alpha_1},...,-f_{\alpha_n})$. \end{corollary} \begin{proof} Since any $g \in \mathbb{R}_n[x]$ can be written as a linear combination of $f,f_{\alpha_1},...,f_{\alpha_n}$, the result follows from the previous proposition. \end{proof} This immediately yields the following result concerning linear operators preserving certain interlacing relations. 
\begin{definition}
Given a real linear operator $T: \mathbb{R}_n[x] \to \mathbb{R}[x]$ and a real-rooted polynomial $f$, we say that $T$ \emph{preserves interlacing with respect to $f$} if $g \ll f$ implies $T[g] \ll T[f]$ and $f \ll g$ implies $T[f] \ll T[g]$ for all $g \in \mathbb{R}_n[x]$.
\end{definition}
\begin{corollary}\label{interlacing_op_cor}
Fix a real linear operator $T: \mathbb{R}_n[x] \to \mathbb{R}[x]$, and fix $f \in \mathbb{R}_n[x]$. Suppose $f$ is monic with $n$ simple roots, $\alpha_1,...,\alpha_n$, and that $T[f_{\alpha_k}] \ll T[f]$ for all $k$. Then, $T$ preserves interlacing with respect to $f$.
\end{corollary}
\subsection{Lamprecht's Approach}
We now follow Lamprecht's approach to proving that the space of $q$-log mesh polynomials is preserved by the $q$-multiplicative convolution. Here, we are only interested in proving this result for $q$-log mesh polynomials with non-negative roots, which simplifies the proof. (Lamprecht demonstrates this result for a more general class of polynomials.) The main structure of the proof is: (1) establish properties of two distinguished polar derivatives, (2) show how these derivatives relate to the $q$-multiplicative convolution, and (3) use this to prove that the $q$-multiplicative convolution preserves certain interlacing properties. In the next section, we will emulate this method for $b$-mesh polynomials and the $b$-additive convolution.
\subsubsection*{\texorpdfstring{$q$}{q}-Polar Derivatives}
In \cite{lamprecht}, Lamprecht defines the following $q$-derivative operators, which generalize the operators $\partial_x$ and $-\partial_y$ on homogeneous polynomials. Here, $q > 1$ is always assumed, as above. (As a note, Lamprecht uses the $\Delta$ symbol for these derivatives, and actually gives different definitions as his convention is $q \in (0,1)$.)
\[
(d_{q,n} f)(x) := \frac{f(qx) - f(x)}{q^{1-n}(q^n-1)x} ~~~~~~~~~~~~~~~ (d_{q,n}^* f)(x) := \frac{f(qx) - q^nf(x)}{q^n-1}
\]
He then goes on to show that these ``derivative'' operators have preservation properties similar to those of the usual derivative. In particular, he obtains the following.
\begin{proposition}\label{q_deriv_prop}
The operators $d_{q,n}: \mathbb{R}_n[x] \to \mathbb{R}_{n-1}[x]$ and $d_{q,n}^*: \mathbb{R}_n[x] \to \mathbb{R}_{n-1}[x]$ preserve the space of $q$-log mesh polynomials and the space of strictly $q$-log mesh polynomials. Further, we have that $d_{q,n}f \ll f$ and $d_{q,n}^*f \ll d_{q,n}f$.
\end{proposition}
The above result is actually spread across a number of results in Lamprecht's paper. We omit the proof for now, referring the reader to \cite{lamprecht}.
\subsubsection*{Recursive Identities}
Lamprecht then determines the following identities, which are crucial to his inductive proof of the main result of this section. Fix $f \in \mathbb{R}_{n-1}[x]$ and $g \in \mathbb{R}_n[x]$.
\[
f \boxtimes_q^n g = f \boxtimes_q^{n-1} d_{q,n}^*g ~~~~~~~~~~~~~~~ (xf) \boxtimes_q^n g = x(f \boxtimes_q^{n-1} d_{q,n}g)
\]
\subsubsection*{Lamprecht's Proof}
With this, we now state an interesting result about interlacing preservation of the $q$-convolution operator. We will then derive the main result as a corollary.
\newtheorem*{qinterlacing}{Theorem \ref{thm:qinterlacing}}
\begin{qinterlacing}[Lamprecht Interlacing-Preserving]
Let $f,g \in \mathbb{R}_n[x]$ be $q$-log mesh polynomials of degree $n$ with only positive roots. Let $T_g: \mathbb{R}_n[x] \to \mathbb{R}_n[x]$ be the real linear operator defined by $T_g: r \mapsto r \boxtimes_q^n g$. Then, $T_g$ preserves interlacing with respect to $f$.
\end{qinterlacing}
\begin{proof}
We prove the theorem by induction. For $n=1$ the result is straightforward, as $\boxtimes_q^1 \equiv \boxtimes^1$. For $m>1$, we inductively assume that the result holds for $n=m-1$.
By Corollary \ref{interlacing_op_cor} and the fact that $f$ has $n$ simple roots, we only need to show that $T_g[f_{\alpha_k}] \ll T_g[f]$ for all roots $\alpha_k$ of $f$. That is, we want to show $f_{\alpha_k} \boxtimes_q^m g \ll f \boxtimes_q^m g$ for all $k$. By Proposition \ref{q_deriv_prop}, we have that $d_{q,m}g$ and $d_{q,m}^*g$ are $q$-log mesh and $d_{q,m}^*g \ll d_{q,m}g$. Further, $d_{q,m}g$ and $d_{q,m}^*g$ are of degree $m-1$ and have no roots at 0. The inductive hypothesis and symmetry of $\boxtimes_q^n$ then imply:
\[
f_{\alpha_k} \boxtimes_q^{m-1} d^*_{q,m}g \ll f_{\alpha_k} \boxtimes_q^{m-1} d_{q,m}g
\]
The fact that these polynomials have leading coefficients with the same sign means that the max root of $f_{\alpha_k} \boxtimes_q^{m-1} d_{q,m}g$ is larger than that of $f_{\alpha_k} \boxtimes_q^{m-1} d^*_{q,m}g$. Further, since all roots are positive, we obtain:
\[
f_{\alpha_k} \boxtimes_q^{m-1} d_{q,m}^*g \ll x(f_{\alpha_k} \boxtimes_q^{m-1} d_{q,m}g)
\]
By properties of $\ll$, this gives:
\[
f_{\alpha_k} \boxtimes_q^{m-1} d_{q,m}^*g \ll x(f_{\alpha_k} \boxtimes_q^{m-1} d_{q,m}g) - \alpha_k(f_{\alpha_k} \boxtimes_q^{m-1} d_{q,m}^*g)
\]
By the above identities and the fact that $f(x) = (x-\alpha_k)f_{\alpha_k}(x)$, this is equivalent to $f_{\alpha_k} \boxtimes_q^m g \ll f \boxtimes_q^m g$.
\end{proof}
\newtheorem*{qlogmeshcor}{Corollary \ref{thm:qlogmesh}}
\begin{qlogmeshcor}
Let $f,g \in \mathbb{R}_n[x]$ be $q$-log mesh polynomials (with non-negative roots), not necessarily of degree $n$. Then, $f \boxtimes_q^n g$ is $q$-log mesh.
\end{qlogmeshcor}
\begin{proof}
First suppose $f,g$ are of degree $n$ with only positive roots. Since $f \ll f(q^{-1}x)$, the previous theorem implies:
\[
f \boxtimes_q^n g \ll f(q^{-1}x) \boxtimes_q^n g = (f \boxtimes_q^n g)(q^{-1}x)
\]
That is, $f \boxtimes_q^n g$ is $q$-log mesh. Otherwise, suppose $f$ is of degree $m_f \leq n$ with $z_f$ roots at 0 and $g$ is of degree $m_g \leq n$ with $z_g$ roots at 0.
Intuitively, we now add roots ``near 0 and $\infty$'' and limit. Let new polynomials $F$ and $G$ be given as follows: \[ F(x) := f(x) \cdot x^{-z_f}\prod_{j=1}^{z_f} \left(x - \frac{1}{\alpha_j}\right) \cdot \prod_{j=m_f+1}^n \left(\frac{x}{\alpha_j} - 1\right) \] \[ G(x) := g(x) \cdot x^{-z_g}\prod_{j=1}^{z_g} \left(x - \frac{1}{\beta_j}\right) \cdot \prod_{j=m_g+1}^n \left(\frac{x}{\beta_j} - 1\right) \] Here, $\alpha_j$ and $\beta_j$ are any large positive numbers such that $F$ and $G$ are $q$-log mesh polynomials of degree $n$. By the previous argument, $F \boxtimes_q^n G$ is $q$-log mesh. Letting $\alpha_j$ and $\beta_j$ limit to $\infty$ (while preserving $q$-log mesh) implies $F \boxtimes_q^n G \to f \boxtimes_q^n g$ root-wise, which implies $f \boxtimes_q^n g$ is $q$-log mesh. \end{proof} Lamprecht is actually able to remove the degree $n$ with positive roots restriction earlier in the line of argument, albeit at the cost of a more complicated proof. We have elected here to take the simpler route. He also proves similar results for a class of $q$-log mesh polynomials with possibly negative roots, which we omit here. \subsection{\texorpdfstring{$b$}{b}-Additive Convolution} The main structure of Lamprecht's argument revolves around the two ``polar'' $q$-derivatives, $d_{q,n}$ and $d_{q,n}^*$. The key properties of these derivatives are: (1) they preserve the space of $q$-log mesh polynomials, and (2) they recursively work well with the definition of the $q$-multiplicative convolution. So, when extending this argument to the $b$-additive convolution we face an immediate problem: there is only one natural derivative which preserves the space of $b$-mesh polynomials. This stems from the fact that 0 and $\infty$ have special roles in the $q$-multiplicative world, whereas only $\infty$ is special in the $b$-additive world. 
The key idea we introduce then is that given a fixed $b$-mesh polynomial $f$, we can pick a polar derivative with pole ``close enough to $\infty$'' so that it maps $f$ to a $b$-mesh polynomial. The fact that we use a different polar derivative for each fixed input $f$ does not affect the proof method. We now give a few facts about the finite difference operator $\Delta_b$, which plays a crucial role in the definition of the $b$-additive convolution. Recall its definition: \[ (\Delta_{b,n} f)(x) \equiv (\Delta_b f)(x) := \frac{f(x) - f(x-b)}{b} \] (We use the notation $\Delta_{b,n}$ when we want to restrict the domain to $\mathbb{R}_n[x]$, as in Proposition \ref{b_deriv_prop} below.) This operator acts on rising factorial polynomials as the usual derivative acts on monomials. That is, for all $k$: \[ \Delta_b x(x+b)\cdots(x+(k-1)b) = kx(x+b)\cdots(x+(k-2)b) \] This operator has preservation properties similar to that of the usual derivative and the $q$-derivatives. The following result, along with many others regarding mesh and log-mesh polynomials, can be found in \cite{fisk}. \begin{proposition}\label{b_deriv_prop} The operator $\Delta_{b,n}: \mathbb{R}_n[x] \to \mathbb{R}_{n-1}[x]$ preserves the space of $b$-mesh polynomials, and the space of strictly $b$-mesh polynomials. Further, we have $\Delta_b f \ll f$ and $\Delta_b f \ll f(x-b)$. If $f$ is strictly $b$-mesh, then these interlacings are strict. \end{proposition} \subsubsection*{Finding Another Polar Derivative} We now define another ``derivative-like'' operator that is meant to generalize $\partial_y$ and $d_{q,n}^*$. Notice that unlike $\Delta_b$, this operation depends on $n$. \[ (\Delta^*_{b,n} f)(x) := nf(x-b) - (x-b)\Delta_b f(x) \] Unfortunately, this operator does \emph{not} preserve $b$-mesh. However, it does generalize other important properties of $\partial_y$. 
In particular, it maps $\mathbb{R}_n[x]$ to $\mathbb{R}_{n-1}[x]$, and as $b \to 0$ it limits to $\partial_y f$, the polar derivative of $f$ with respect to 0. Further, we have the following results.
\begin{lemma}\label{b_dy_basis_lemma}
Fix $f \in \mathbb{R}_n[x]$ and write $f = \sum_{k=0}^n a_k x(x+b)\cdots(x+(k-1)b)$. Then:
\[
(\Delta_{b,n}^* f)(x+b) = \sum_{k=0}^{n-1} (n-k)a_k x(x+b)\cdots(x+(k-1)b)
\]
\end{lemma}
The next lemma is a generalization of the corollary following it.
\begin{lemma}
Fix monic polynomials $f,g \in \mathbb{R}[x]$ of degree $m$ and $m-1$, respectively, such that $g$ is strictly $b$-mesh and $g \ll f$ strictly. Denote $h_{a,t}(x) := af(x) - (x-t)g(x)$ for $a \geq 1$ and $t > 0$. For all $t$ large enough, we have $g \ll h_{a,t}$ strictly, $h_{a,t} \ll f$ strictly, and $h_{a,t}$ is strictly $b$-mesh.
\end{lemma}
\begin{proof}
Since $f,g$ are monic, we have that $h_{a,t}$ is of degree at most $m$ with positive leading coefficient (for large $t$ if $a=1$). Further, if $\alpha_1 < \cdots < \alpha_{m-1}$ are the roots of $g$ and $\beta_1 < \cdots < \beta_m$ are the roots of $f$, then since $g \ll f$ strictly, for $t$ large we have:
\[
\begin{array}{cc}
h_{a,t}(\alpha_{m-1}) = af(\alpha_{m-1}) < 0 ~~~~~&~~~~~ h_{a,t}(\beta_m) = -(\beta_m-t)g(\beta_m) > 0 \\
h_{a,t}(\alpha_{m-2}) = af(\alpha_{m-2}) > 0 ~~~~~&~~~~~ h_{a,t}(\beta_{m-1}) = -(\beta_{m-1}-t)g(\beta_{m-1}) < 0 \\
h_{a,t}(\alpha_{m-3}) = af(\alpha_{m-3}) < 0 ~~~~~&~~~~~ h_{a,t}(\beta_{m-2}) = -(\beta_{m-2}-t)g(\beta_{m-2}) > 0 \\
\vdots ~~~~~&~~~~~ \vdots \\
\end{array}
\]
The alternating signs imply $h_{a,t}$ has an odd number of roots in the interval $(\alpha_k, \beta_{k+1})$ and an even number of roots in the interval $(\beta_k,\alpha_k)$ for all $1 \leq k \leq m-1$. Since the degree of $h_{a,t}$ is at most $m$, each of these intervals must contain exactly one root and zero roots, respectively.
If $h_{a,t}$ is of degree $m$, then it has one more root which must be real since $h_{a,t} \in \mathbb{R}[x]$. Additionally, since $h_{a,t}$ has positive leading coefficient, this last root must lie in the interval $(-\infty,\beta_1)$ (and not in $(\beta_m,\infty)$). Therefore, $g \ll h_{a,t}$ strictly and $h_{a,t} \ll f$ strictly. Finally, $t^{-1}h_{a,t} \to g$ as $t \to \infty$ coefficient-wise, and therefore also in terms of the zeros. This means that the root in the interval $(\alpha_k,\beta_{k+1})$ will limit to $\alpha_k$ from above (for all $k$). Further, the possible root in the interval $(-\infty,\beta_1)$ will then limit to $-\infty$, as $g$ is of degree $m-1$. Since $g$ is strictly $b$-mesh, this implies $h_{a,t}$ is also strictly $b$-mesh for large enough $t$.
\end{proof}
\begin{corollary}\label{b_pol_deriv_cor}
Let $f \in \mathbb{R}_n[x]$ be strictly $b$-mesh. Then for all $t > 0$ large enough, we have that $(t\Delta_{b,n} + \Delta^*_{b,n})f$ is strictly $b$-mesh and $\Delta_{b,n} f \ll (t\Delta_{b,n} + \Delta^*_{b,n})f$ strictly.
\end{corollary}
\begin{proof}
Consider $(t\Delta_{b,n} + \Delta^*_{b,n})f = nf(x-b) - (x-b-t)\Delta_{b,n}f$. Note that $\Delta_{b,n}f \in \mathbb{R}_{n-1}[x]$ is strictly $b$-mesh and of degree one less than $f$, and $\Delta_{b,n}f \ll f(x-b)$ strictly by Proposition \ref{b_deriv_prop}. Now assume WLOG that $f$ is monic and of degree at least 1. Letting $c$ denote the leading coefficient of $\Delta_{b,n}f$, we have $1 \leq c \leq n$. We can then write:
\[
\frac{1}{c}(t\Delta_{b,n} + \Delta^*_{b,n})f = \frac{n}{c}f(x-b) - (x-b-t)\frac{\Delta_{b,n}f}{c}
\]
Applying the previous lemma to $f(x-b)$ and $\frac{\Delta_{b,n}f}{c}$ with $a = \frac{n}{c}$ gives the result.
\end{proof}
This corollary says that $t\Delta_{b,n} + \Delta^*_{b,n}$ preserves $b$-mesh, even though $\Delta_{b,n}^*$ does not.
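As a concrete sanity check of Corollary \ref{b_pol_deriv_cor} (a computational sketch with our own helper names, not part of the argument), one can compute $(t\Delta_{b,n} + \Delta^*_{b,n})f$ on ascending coefficient lists for a sample strictly $b$-mesh polynomial, say $f = x(x-2)(x-4)$ with $b=1$, $n=3$, $t=100$:

```python
from fractions import Fraction as F

def shift(p, b):
    """Coefficients (ascending) of p(x - b), via Horner in (x - b)."""
    res = []
    for c in reversed(p):
        res = [F(0)] + res
        for i in range(len(res) - 1):
            res[i] -= F(b) * res[i + 1]
        res[0] += F(c)
    return res

def trim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def delta(p, b):
    """Finite difference Delta_b p = (p(x) - p(x-b)) / b."""
    return trim([(F(c) - s) / F(b) for c, s in zip(p, shift(p, b))])

def polar(p, b, n, t):
    """(t*Delta_{b,n} + Delta*_{b,n}) p = n p(x-b) - (x-b-t) Delta_b p."""
    s, d = shift(p, b), delta(p, b)
    xd = [F(0)] + d                          # x * Delta_b p
    m = max(len(s), len(xd))
    pad = lambda q: [F(c) for c in q] + [F(0)] * (m - len(q))
    s, xd, d = pad(s), pad(xd), pad(d)
    return trim([F(n) * s[i] - xd[i] + F(b + t) * d[i] for i in range(m)])

# f = x(x-2)(x-4): strictly 1-mesh (consecutive root gaps equal 2)
f = [F(0), F(8), F(-6), F(1)]
g = polar(f, 1, 3, 100)                      # degree drops to n-1 = 2
```

Here $g$ comes out to $291x^2 - 1461x + 1470$; its discriminant exceeds the square of its leading coefficient, so its two real roots are separated by more than $b = 1$, as the corollary predicts.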
The operator $t\Delta_{b,n} + \Delta^*_{b,n}$ can be thought of as the polar derivative with respect to $t$, since by limiting $b \to 0$ we obtain the classical polar derivative. \subsubsection*{Recursive Identities} The $\Delta_{b,n}^*$ operator is also required to obtain $b$-additive convolution identities similar to Lamprecht's given above. \begin{lemma} Fix $f \in \mathbb{R}_{n-1}[x]$ and $g \in \mathbb{R}_n[x]$. We have: \[ f \boxplus_b^n g = f \boxplus_b^{n-1} \Delta_{b,n}g ~~~~~~~~~~~~~~~ (xf) \boxplus_b^n g = x(f \boxplus_b^{n-1} \Delta_{b,n}g) + f \boxplus_b^{n-1} \Delta^*_{b,n} g \] \end{lemma} \begin{proof} The first identity is straightforward from the definition of $\boxplus_b^n$. As for the second, we compute: \[ \Delta_b^k(xf) = \Delta_b^{k-1}(x\Delta_b f + f(x-b)) = \cdots = x\Delta_b^k f + k\Delta_b^{k-1}f(x-b) \] Notice that $\Delta_b$ commutes with shifting, so this is unambiguous. This implies: \[ \begin{split} (xf) \boxplus_b^n g &= \sum_{k=0}^n (x\Delta_b^k f + k\Delta_b^{k-1}f(x-b)) \cdot (\Delta_b^{n-k} g)(0) \\ &= x(f \boxplus_b^{n-1} \Delta_{b,n}g) + \sum_{k=1}^n k\Delta_b^{k-1}f(x-b) \cdot (\Delta_b^{n-k} g)(0) \\ &= x(f \boxplus_b^{n-1} \Delta_{b,n}g) + \sum_{k=0}^{n-1} \Delta_b^{n-1-k}f(x-b) \cdot ((n-k)\Delta_b^{k} g)(0) \\ &= x(f \boxplus_b^{n-1} \Delta_{b,n}g) + f(x-b) \boxplus_b^{n-1} (\Delta_{b,n}^* g)(x+b) \end{split} \] The last step of the above computation uses Lemma \ref{b_dy_basis_lemma} and the fact that $(\Delta_b^k g)(0)$ picks out the coefficient corresponding to the $k^\text{th}$ rising factorial term. Finally: \[ f(x-b) \boxplus_b^{n-1} (\Delta_{b,n}^* g)(x+b) = (f \boxplus_b^{n-1} (\Delta_{b,n}^* g)(x+b))(x-b) = f \boxplus_b^{n-1} \Delta_{b,n}^* g \] This implies the second identity. \end{proof} With this we can now emulate Lamprecht's proof to prove interlacing preserving properties of the $b$-additive convolution. 
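The two identities used in the proof above, the action of $\Delta_b$ on rising factorials and the product rule $\Delta_b^k(xf) = x\Delta_b^k f + k\Delta_b^{k-1}f(x-b)$, can be checked symbolically on coefficient lists. The following is a minimal sketch (helper names are ours):

```python
from fractions import Fraction as F

def shift(p, b):
    """Coefficients (ascending) of p(x - b), via Horner in (x - b)."""
    res = []
    for c in reversed(p):
        res = [F(0)] + res
        for i in range(len(res) - 1):
            res[i] -= F(b) * res[i + 1]
        res[0] += F(c)
    return res

def trim(p):
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def delta(p, b):
    """Delta_b p = (p(x) - p(x-b)) / b."""
    return trim([(F(c) - s) / F(b) for c, s in zip(p, shift(p, b))])

def delta_k(p, b, k):
    for _ in range(k):
        p = delta(p, b)
    return p

def mulx(p):
    return [F(0)] + [F(c) for c in p]

def add(p, q, a=1):
    """p + a*q on coefficient lists."""
    m = max(len(p), len(q))
    p = [F(c) for c in p] + [F(0)] * (m - len(p))
    q = [F(c) for c in q] + [F(0)] * (m - len(q))
    return trim([p[i] + F(a) * q[i] for i in range(m)])

def rising(k, b):
    """x(x+b)...(x+(k-1)b) as a coefficient list."""
    p = [F(1)]
    for j in range(k):
        p = add(mulx(p), p, a=F(j) * F(b))   # multiply by (x + j*b)
    return p
```

With these helpers, `delta(rising(k, b), b)` equals `k * rising(k-1, b)`, and the product rule holds for any sample polynomial and any $k$.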
\subsubsection*{Lamprecht-Style Proof} \newtheorem*{binterlacing}{Theorem \ref{thm:binterlacing}} \begin{binterlacing} Let $f,g \in \mathbb{R}_n[x]$ be strictly $b$-mesh polynomials of degree $n$. Let $T_g: \mathbb{R}_n[x] \to \mathbb{R}_n[x]$ be the real linear operator defined by $T_g: r \mapsto r \boxplus_b^n g$. Then, $T_g$ preserves interlacing with respect to $f$. \end{binterlacing} \begin{proof} We prove the theorem by induction. For $n=1$ the result is straightforward, as $\boxplus_b^1 \equiv \boxplus^1$. For $m>1$, we inductively assume that the result holds for $n=m-1$. By Corollary \ref{interlacing_op_cor}, we only need to show that $T_g[f_{\alpha_k}] \ll T_g[f]$ for all roots $\alpha_k$ of $f$. That is, we want to show $f_{\alpha_k} \boxplus_b^m g \ll f \boxplus_b^m g$ for all $k$. By Proposition \ref{b_deriv_prop} and Corollary \ref{b_pol_deriv_cor}, we have that $\Delta_{b,m}g$ and $(t\Delta_{b,m}+\Delta_{b,m}^*)g$ are strictly $b$-mesh and $\Delta_{b,m}g \ll (t\Delta_{b,m}+\Delta_{b,m}^*)g$ strictly for large enough $t$. Further, $\Delta_{b,m}g$ and $(t\Delta_{b,m}+\Delta_{b,m}^*)g$ are of degree $m-1$. 
The inductive hypothesis and symmetry of $\boxplus_b^n$ then imply:
\[
f_{\alpha_k} \boxplus_b^{m-1} \Delta_{b,m}g \ll f_{\alpha_k} \boxplus_b^{m-1} (t\Delta_{b,m}+\Delta_{b,m}^*)g
\]
Considering the discussion near the beginning of this section, we also have:
\[
f_{\alpha_k} \boxplus_b^{m-1} \Delta_{b,m}g \ll (x-\alpha_k-t)(f_{\alpha_k} \boxplus_b^{m-1} \Delta_{b,m}g)
\]
By properties of $\ll$, this gives:
\[
\begin{split}
f_{\alpha_k} \boxplus_b^{m-1} \Delta_{b,m}g &\ll (x-\alpha_k-t)(f_{\alpha_k} \boxplus_b^{m-1} \Delta_{b,m}g) + f_{\alpha_k} \boxplus_b^{m-1} (t\Delta_{b,m}+\Delta_{b,m}^*)g \\
&= (x-\alpha_k)(f_{\alpha_k} \boxplus_b^{m-1} \Delta_{b,m}g) + f_{\alpha_k} \boxplus_b^{m-1} \Delta_{b,m}^* g
\end{split}
\]
By the above identities and the fact that $f(x) = (x-\alpha_k)f_{\alpha_k}(x)$, this is equivalent to $f_{\alpha_k} \boxplus_b^m g \ll f \boxplus_b^m g$.
\end{proof}
\newtheorem*{maincor}{Corollary \ref{thm:main}}
\begin{maincor}
Let $f,g \in \mathbb{R}_n[x]$ be strictly $b$-mesh polynomials. Then, $f \boxplus_b^n g$ is $b$-mesh.
\end{maincor}
\begin{proof}
First suppose $f,g$ are of degree $n$. Since $f \ll f(x-b)$ strictly, the previous theorem implies the following:
\[
f \boxplus_b^n g \ll f(x-b) \boxplus_b^n g = (f \boxplus_b^n g)(x-b)
\]
That is, $f \boxplus_b^n g$ is $b$-mesh. If $f,g$ are not of degree $n$, then the result follows by adding new roots and limiting them to $\infty$, in a fashion similar to the proof of Corollary \ref{thm:qlogmesh} given above.
\end{proof}
\section{Conclusion}
\subsubsection*{Extensions of other classical convolution results}
In this paper we investigate connections between the additive and multiplicative convolutions and their mesh generalizations. Looking forward, it is natural to look at other results in the classical case and ask for mesh generalizations.
To the authors' knowledge, there are two classical results which have been extended to mesh analogues: in \cite{finitemesh}, the authors explore extensions of the Hermite-Poulain theorem to the $1$-mesh world, and in \cite{lamprecht}, Lamprecht extends classical results for the multiplicative convolution to the $q$-log mesh world. An important related result in the classical case is the triangle inequality, which we discuss in \S\ref{sect:applications}. To our knowledge, there is not a known generalization of the triangle inequality to the mesh and log mesh cases. If one could establish such a result for the $q$-multiplicative convolution, it would automatically extend to the $b$-additive convolution using our analytic connection.
\subsubsection*{Extensions of other $q$-multiplicative convolution results}
In addition to log mesh preservation, Lamprecht proves other results about the $q$-multiplicative convolution. Here we comment on these and their relation to the mesh world. Beyond the finite degree case, Lamprecht discusses the extension of Laguerre-Polya functions to the $q$-multiplicative world, and then establishes a $q$-version of Polya-Schur multiplier sequences via a power series convolution. Since we are not aware of analogous power series results for the classical additive convolution, we have not explored the connections to the $b$-additive case. Additionally, Lamprecht classifies log-concave sequences in terms of $q$-log mesh polynomials using the Hadamard product and a limiting argument. There might be an analogous result in the mesh world for concave sequences, but it is unclear what would take the place of the Hadamard product. Lamprecht details the classes of polynomials that the $q$-multiplicative convolution preserves. Most of these results come from the presence of two poles in the $q$-multiplicative case, yielding derivative operators which preserve negative- and positive-rootedness respectively.
The $b$-additive case does not have such complications. In our simplification of Lamprecht's argument, we assume the input polynomials to be generic (strictly $b$-mesh), and then limit to obtain the result for all $b$-mesh polynomials. By keeping track of boundary case information, Lamprecht is able to get more precise results about boundary elements of the space of $b$-mesh polynomials. We believe it is likely possible to emulate this in the above proof with more bookkeeping.
\subsubsection*{The analytic connection applied to other known classical results}
There are other results known about the classical multiplicative convolution which we believe could be transferred to the additive convolution using our generic framework. Specifically, in \cite{finiteconvolutions}, Marcus, Spielman, and Srivastava establish a refinement of the triangle inequality for both the additive and multiplicative convolutions. These refinements parallel the well-studied transforms from free probability theory. We have not yet worked out the details of this connection.
\subsubsection*{Further directions for the generic analytic connection}
Finally, it is worth noting that our analytic connection can only transfer results about the multiplicative convolution to the additive convolution. The main obstruction is finding the appropriate analogue to the exponential map. The following limiting connection between exponential polynomials and polynomials motivated our investigation:
\[
\lim_{q\to1} \frac{1 - q^x}{1-q} = x
\]
Finding the appropriate ``logarithmic analogue'' could yield a way to pass results from the additive convolution to the multiplicative convolution. That said, some heuristic evidence suggests that such an analogue might not exist. Above all, our analytic connection still remains rather mysterious. We suspect that there exists a more general theory which provides better intuition for this multiplicative-to-additive connection.
While developing this connection, we found multiple candidate exponential maps which experimentally worked. We settled on the ones introduced in this paper due to their relatively nice combinatorial properties. Ideally, an alternative approach would avoid proving the result on a basis and better explain the role of these ``exponential maps''.
https://arxiv.org/abs/2011.12218
Tverberg's theorem, disks, and Hamiltonian cycles
For a finite set $S$ of points in the plane and a graph with vertices on $S$ consider the disks with diameters induced by the edges. We show that for any odd set $S$ there exists a Hamiltonian cycle for which these disks share a point, and for an even set $S$ there exists a Hamiltonian path with the same property. We discuss high-dimensional versions of these theorems and their relation to other results in discrete geometry.
\section{Introduction}
In 1966, Helge Tverberg proved that \textit{for any set of $(r-1)(d+1)+1$ points in $\mathds{R}^d$ there exists a partition of them into $r$ parts whose convex hulls intersect} \cite{Tverberg:1966tb}. We call these partitions \emph{Tverberg partitions}. Among the variations and generalizations of Tverberg's theorem, two kinds stand out. In the first, we impose additional conditions on the partitions or try to deduce structural properties of the family of all Tverberg partitions of a set. The colorful versions of Tverberg's theorem, or Sierksma's conjecture on the number of Tverberg partitions, fall into this category. In the second, we relax or modify the geometric conditions while not breaking the existence of Tverberg partitions. The topological versions of Tverberg's theorem are an example of such extensions. We recommend \cites{Barany:2016vx, Barany:2018fya, DeLoera:2019jb} and the references therein for the developments around Tverberg's theorem.

This manuscript focuses on a variation of the second kind. For a segment $e$ in $\mathds{R}^d$ with endpoints $x,y$, we denote by $D(e)$ or $D(x,y)$ the closed ball for which $e$ is a diameter. Given a finite set of points in $\mathds{R}^d$, instead of looking at the convex hulls of its subsets, we are interested in the balls spanned by pairs of its points. For any graph $G$ with vertices in $\mathds{R}^d$, we consider a rectilinear drawing of $G$, where each edge is represented by a straight segment.
\begin{definition}\label{def:tverberg-graph}
Let $S$ be a finite set of points in $\mathds{R}^d$. Let $G$ be a graph whose vertex set is $S$ and whose edge set is $E$. We say that $G$ is a \emph{Tverberg graph} for $S$ if
\[
\bigcap_{e\in E}D(e) \neq \emptyset.
\]
\end{definition}
\begin{figure}
\centerline{\includegraphics[width = \textwidth]{tverberg-fig}}
\caption{Two different partitions for the same set of points.}
\end{figure}
A natural way to extend Tverberg's theorem to disks is to ask if for any $2r$ points in $\mathds{R}^d$ there exists a perfect matching that is a Tverberg graph. In the plane, Huemer, P\'erez-Lantero, Seara, and Silveira proved this result \cite{Huemer:2019ji}. They even showed that for any $r$ blue points and any $r$ red points there is a perfect red-blue matching that is a Tverberg graph. This extends the colorful Tverberg theorem to disks. Bereg, Chac\'on-Rivera, Flores-Pe\~naloza, Huemer, and P\'erez-Lantero found a second proof of the monochromatic version \cite{bereg2019maximum}. Definition \ref{def:tverberg-graph} leads us to the following problem.
\begin{problem}\label{prob:main-problem}
Given a finite set of points $S$ in $\mathds{R}^d$, determine the family of Tverberg graphs for $S$.
\end{problem}
There is no need for the graphs from Problem \ref{prob:main-problem} to be matchings. Once $S$ is fixed, any subgraph of a Tverberg graph for $S$ is again a Tverberg graph, so finding the containment-maximal Tverberg graphs is of interest. In this manuscript, we show that the family of Tverberg graphs for a set of points always contains other interesting graphs. We are particularly interested in Hamiltonian cycles and Hamiltonian paths.
\begin{theorem}\label{thm-odd}
Let $S$ be a finite set of points in the plane. If $S$ has odd cardinality there exists a Hamiltonian cycle that is a Tverberg graph for $S$. If $S$ has even cardinality, there exists a Hamiltonian path that is a Tverberg graph for $S$.
\end{theorem}
The planar results of Huemer et al. and of Bereg et al. rely on a particular choice of a perfect matching and then a reduction of the problem with $2r$ points to a problem with six points via Helly's theorem.
Such a reduction does not work for Hamiltonian cycles or paths, since the property of being a Hamiltonian cycle cannot be verified locally. Our methods rely instead on choosing the point of intersection of the disks and then constructing the cycle. We note that the essence of Theorem \ref{thm-odd} is the existence of Hamiltonian cycles when $|S|$ is odd. This result implies the existence of the Hamiltonian path for $|S|$ even, as we show below. \begin{corollary} Let $S$ be a set of $2r$ points in $\mathds{R}^2$. There exists a Hamiltonian path that is a Tverberg graph for $S$. \end{corollary} \begin{proof} We append an additional point $x$ to $S$. By the first part of Theorem \ref{thm-odd}, there exists a Hamiltonian cycle that is a Tverberg graph for $S\cup \{x\}$. By removing $x$ and its two edges, we are left with a Hamiltonian path on $S$, as desired. \end{proof} A Hamiltonian path contains a perfect matching, so this corollary implies the simplest version of ``Tverberg for disks''. The trick of appending an additional point is quite useful in other Tverberg-type problems \cite{Blagojevic:2014js, Blagojevic:2011vh, Blagojevic:2015wya, Frick:2020uj}. If the appended point $x$ is a vertex of $S$, then we get a slightly stronger result. There is a Tverberg graph on $S$ which is the union of two cycles sharing exactly one vertex. The union of the two cycles contains every point in $S$, and the shared vertex may be fixed in advance. The reason our results work in dimension two is that we use an idea similar to Birch's proof of Tverberg's theorem in the plane \cite{Birch:1959ii}. Other Tverberg-type theorems are much better understood in the plane \cite{Barany:1992tx}. In high dimensions we obtain the following result, which provides a clear connection between Tverberg graphs (for disks) and Tverberg's original theorem. \begin{theorem}\label{thm-tverbergapplied} Let $S$ be a finite set of points in $\mathds{R}^d$ such that $|S| \ge (r-1)(d+1)+1$.
There exists a partition of $S$ into $r$ non-empty sets $A_1,\ldots, A_r$ and a Tverberg graph with vertex set $S$ such that every point of $S$ is adjacent to at least one point of each $A_i$. \end{theorem} In particular, this result shows that every finite set in $\mathds{R}^d$ has Tverberg graphs which are dense. \begin{corollary} Let $S$ be a finite set of points in $\mathds{R}^d$. There is a Tverberg graph for $S$ whose minimum degree is greater than or equal to $|S|/(d+1)$. \end{corollary} We set notation and preliminaries in Section \ref{sec:notation} and prove Theorem \ref{thm-odd} in Section \ref{sec:main-proofs}. Finally, we prove Theorem \ref{thm-tverbergapplied} in Section \ref{sec:high-dimensions}. We present open problems throughout the manuscript. \section{Notation and general position assumptions}\label{sec:notation} In our proofs, we assume that our set of points $S$ is in general position. In this paper, we say that a finite set $S$ in $\mathds{R}^2$ is in general position if: \begin{itemize} \item no three points of $S$ are collinear, \item for any $x,y,z \in S$ with $z \not\in\{x,y\}$, the point $z$ is not on the boundary of $D(x,y)$, \item for any three pairs $\{x_1, y_1\}, \{x_2, y_2\}, \{x_3, y_3\}$ of points of $S$ the boundaries of $D(x_1, y_1)$, $D(x_2, y_2)$, and $D(x_3,y_3)$ do not intersect unless $\bigcap_{i=1}^3 \{x_i,y_i\} \neq \emptyset$, and \item if $\{x_1,y_1\}, \{x_2,y_2\}$ are two pairs of points of $S$, then the boundaries of $D(x_1, y_1)$ and $D(x_2,y_2)$ are not tangent. \end{itemize} It is clear that any $n$-tuple of points $S$ can be approximated by a sequence $\{S_i\}$ of $n$-tuples of points in general position. If we can find a particular Tverberg graph $G_i$ for each $S_i$, a subsequence of those graphs will determine the same adjacent pairs. Since the disks are closed, the corresponding limiting graph $G$ will be a Tverberg graph for $S$.
If each $G_i$ satisfies an additional property (such as being regular, a Hamiltonian cycle, or satisfying a bound on its minimum degree), so will $G$. Given a graph $G$, we denote by $E(G)$ its edges and by $V(G)$ its vertices. \section{An odd number of points in the plane}\label{sec:main-proofs} The particular case when $S$ is in convex position showcases some of the obstacles in solving this problem. For two points $x, y \in \mathds{R}^2$ and $\alpha \in (0,\pi)$ we define the $\alpha$-lens $\alpha(x,y)$ as \[ \alpha(x,y) =\{z \in \mathds{R}^2 : \angle xzy \ge \alpha \}. \] If $\alpha = \pi/2$, then $\alpha(x,y) = D(x,y)$. Alpha-lenses have interesting intersection properties \cite{barany1987extension, Barany:1987ef, Magazinov:2016ia}. If $e$ denotes the segment $xy$, we use the notation $\alpha(e) = \alpha(x,y)$. \begin{theorem} Let $S$ be a finite set of points in convex position in the plane. If $|S|$ is odd, then there exists a Hamiltonian cycle $G$ with vertex set $S$ such that \[ \bigcap_{e \in E(G)} D(e) \neq \emptyset. \] \end{theorem} \begin{proof} Let $|S| = 2n+1$. Order the points $x_1, x_2, \ldots, x_{2n+1}$ clockwise along the boundary of the convex hull of $S$. The Hamiltonian cycle is formed by making $x_i$ adjacent to $x_{i+n}$ and $x_{i+n+1}$, where the indices are considered modulo $2n+1$. By construction, any two of the segments intersect, so any three segments contain the sides of a triangle. If $e_1, e_2, e_3$ are three edges of $G$, let $s_1, s_2, s_3$ be the sides of the triangle so that $s_i \subset e_i$ for each $i$. Notice that $D(s_1) \cap D(s_2) \cap D(s_3) \neq \emptyset$ since the foot of the altitude from the largest angle is contained in the opposite side, and therefore it is in each disk. Since $D(s_i) \subset D(e_i)$, we know that $D(e_1) \cap D(e_2) \cap D(e_3) \neq \emptyset$. Finally, by Helly's theorem we obtain that $G$ is a Tverberg graph.
\end{proof} The convex position scenario shows why the case when $|S|$ is odd behaves differently from the case when $|S|$ is even. The proof above works for $\alpha$-lenses with $\alpha = 2\pi / 3$. This is because for a triangle $xyz$ in the plane, either one of the angles is at least $2\pi/3$, or the Fermat-Toricelli point lies inside the triangle. The Fermat-Toricelli point $p$ is the point that minimizes the sum of the distances to the vertices, and has the property that the angles $\angle xpy, \angle ypz, \angle zpx$ are all equal to $2\pi/3$. This implies that the alpha-lenses $\alpha(x,y), \alpha(y,z), \alpha(x,z)$ intersect for $\alpha = 2\pi/3$. For $|S|$ even, one cannot hope for such an extension to $\alpha$-lenses. If $S$ is the set of vertices of a square, for any Hamiltonian cycle $G$, we have $\bigcap_{e\in E(G)}\alpha (e) = \emptyset$ for $\alpha> \pi/2$. The extension to $\alpha$-lenses does not hold for odd sets of points in general. If $S$ is the set of vertices of a square and its center, then for any $\alpha > \pi/2$ the $\alpha$-lenses induced by any Hamiltonian cycle do not all intersect. The general case of Theorem \ref{thm-odd}, when $S$ is not in convex position, is not so simple. We do not have a canonical way to order the points and make a Hamiltonian cycle. We circumvent this problem by creating Hamiltonian cycles ``around a point $p$''. We choose a point $p$ as a candidate for the intersection of the disks. Notice that for any two points $x,y \in S$, we have \[ p \in D(x,y) \iff \angle xpy \ge \pi/2. \] We assume that our set of points $S$ is in general position. For a point $p \in \mathds{R}^2$, we construct a Hamiltonian cycle for $p$ as follows: \begin{itemize} \item If $p \not\in S$: let $C(p)$ be the circle of radius $1$ around $p$. We project radially every point of $S$ onto $C(p)$. We label them $x_1, \ldots, x_{2n+1}$ clockwise around $C(p)$.
We form our Hamiltonian cycle by connecting $x_i$ with $x_{i+n}$ and $x_{i+n+1}$. The cycle does not depend on the choice of $x_1$. We call this a Hamiltonian cycle of type I. See Figure \ref{fig:type1}. \item If $p \in S$: let $C(p)$ be the circle of radius $1$ around $p$. We choose a point $p^* \in C(p)$ to represent $p$. We project radially every other point of $S$ onto $C(p)$. We label the points $x_1, \ldots, x_{2n+1}$ clockwise around $C(p)$ so that $p^* = x_{2n+1}$. We form the Hamiltonian cycle by connecting $x_i$ with $x_{i+n}$ and $x_{i+n+1}$. The cycle only depends on the choice of $p^*$. We call this a Hamiltonian cycle of type II. See Figure \ref{fig:type2}. \end{itemize} \begin{figure} \centerline{\includegraphics{fig-type1.pdf}} \caption{A type I Hamiltonian cycle on five points around a center $p \not\in S$.} \label{fig:type1} \end{figure} \begin{figure} \centerline{\includegraphics{fig-type2.pdf}} \caption{A type II Hamiltonian cycle on five points around a center $p \in S$. The choice of representative $p^* \in C(p)$ is important. } \label{fig:type2} \end{figure} Notice that $p \in \bigcap_{e \in E(G)}D(e)$, where $E(G)$ is the set of edges of the Hamiltonian cycle of type I, if and only if $\angle x_i p x_{i+n} \ge \pi /2$ for each $i\in [2n+1]$. For a Hamiltonian cycle of type II, we only need to check $\angle x_i p x_{i+n} \ge \pi /2$ for $i \in [2n] \setminus \{n+1\}$, since the disks from edges ending at $p$ contain $p$. Even though different points may induce the same cycles, we are checking if $p$ is in the disks induced by all edges of the cycle. In Figure \ref{fig:type1} the answer would be negative since $p \not\in D(x_3,x_5)$, and in Figure \ref{fig:type2} the answer would be positive. Given a point $p \in \mathds{R}^2 \setminus S$ we define $\ell(p)$ as the number of indices $i\in [2n+1]$ such that the angle $\angle x_i p x_{i+n}$ is strictly smaller than $\pi /2$ for the type I Hamiltonian cycle around $p$.
For a point $p \in S$, we define $\ell(p)$ as the minimum, over all Hamiltonian cycles of type II around $p$, of the number of indices $i \in [2n] \setminus\{n+1\}$ such that the angle $\angle x_i p x_{i+n}$ is strictly smaller than $\pi/2$. The parameter $\ell(p)$ is the number of disks that do not contain $p$. If $\ell(p) = 0$ for some $p$, then we have a Hamiltonian cycle as we wanted. For any $k \in \{0,1, \ldots, 2n+1\}$, the set of points $p$ such that $\ell (p) \le k$ is a closed set, as can be checked using sequences. For $k < 2n+1$, it is contained in the union of all disks spanned by pairs of $S$, and is therefore a compact set. Now, for a value $k \in \{0,1,\ldots, 2n+1\}$ we consider $C_k \subset \mathds{R}^2$ as the region of the plane defined by \[ C_k = \{p \in \mathds{R}^2 : \ell(p) \le k \}. \] Let $k$ be the smallest value such that $C_k \neq \emptyset$. We assume $k \ge 1$, and look for a contradiction. In this case, $C_k$ is compact. For a point $p \in C_k \setminus S$ we define a function $f(p)$ as the sum of the $k$ angles $\angle x_i p x_{i+n}$ which are strictly smaller than $\pi / 2$. If $p \in C_k \cap S$, we take the largest such sum among all type II Hamiltonian cycles around $p$, and do not consider the values $i=n+1$, $i=2n+1$ in the sum. \begin{claim}\label{claim:odd1} The function $f$ attains a maximum value in $C_k$. \end{claim} \begin{proof} Notice that $f$ is continuous in $C_k \setminus S$. Let $M = \max \{f(p) : p \in S \cap C_k\}$. If $M$ is not the maximum, then there exists a point $p_0 \in C_k$ such that $f(p_0) > M$. Let $R = \{p \in C_k : f(p) \ge f(p_0)\}$. Let us show that no point of $S$ is in the closure of $R$. If this is not the case, there is a sequence $p_1, p_2, \ldots$ in $R$ that converges to a point $p$ in $S$. The sequence of points of the form \[ q_i = p + \frac{1}{\|p-p_i\|}(p-p_i) \] lies in $C(p)$. Since $C(p)$ is compact, by taking subsequences we can assume that $q_i$ converges to some point $q^*$.
The type I Hamiltonian cycle induced by $p_i$ is eventually equal to the type II Hamiltonian cycle induced by $p$ using the representative $q^*$ for $p$. Therefore, the limit of $f(p_i)$ is equal to the function evaluated at $p$ with representative $q^*$. The value of $f(p)$ is greater than or equal to this value. Since $f(p_i) \ge f(p_0)$ for each $i \ge 1$, this gives $f(p) \ge f(p_0) > M$, contradicting the definition of $M$. Since no point of $S$ is in the closure of $R$, $f$ is continuous in $C_k \setminus S$, and $C_k$ is compact, then $R$ is compact. The function $f$ must have a maximum value in $R$, which is a maximum value over all of $C_k$. \end{proof} The following claim will give us the contradiction we seek, showing that $k=0$. \begin{claim}\label{claim:ood2} If $k \ge 1$, the function $f$ cannot attain a maximum value in $C_k$. \end{claim} \begin{proof} Suppose that $f$ attains its maximum at a point $p \in C_k$. First, consider the case $p \not\in S$. Consider the set of arcs $x_ix_{i+n}$ on $C(p)$ such that $\angle x_i p x_{i+n} \le \pi/2$. Of the two circular arcs determined by $x_i, x_{i+n}$ we are considering the smaller one. There are $k$ arcs which make an angle strictly smaller than $\pi/2$ and at most two that make an angle equal to $\pi/2$ due to the general position assumptions. If we include its extremes, each of these arcs contains at least $n+1$ points $x_i$, possibly more if the extremes $x_i, x_{i+n}$ coincide with other projections. If two of these arcs were disjoint, there would be at least $2n+2$ points in $S$, a contradiction. Therefore, any two of these arcs intersect. For any three of these arcs, their union cannot cover $C(p)$, or at least one of them would make an angle of at least $2\pi/3$. Therefore, any three of these arcs intersect. Now, for each of these arcs $x_ix_{i+n}$, we define the convex set $K_i$ formed by the convex hull of $p$ and the circular arc $x_ix_{i+n}$. Since $p$ is an extreme point of $K_i$, the set $K_i\setminus \{p\}$ is convex.
By Helly's theorem, since any three of the sets of the form $K_i \setminus\{p\}$ intersect, there is a point $y^*$ common to all of them. The radial projection $x^*$ of $y^*$ onto $C(p)$ is in all the arcs $x_ix_{i+n}$ we were considering. If we move $p$ in the direction $x^*-p$ a small enough distance, we remain in $C_k$ and the value of $f(p)$ increases. Therefore, $p$ was not a maximum of $f$. Now, consider the case $p \in S$. We assume without loss of generality that $p$ is the origin, so antipodal points in $C(p)$ are negatives of each other. Let $p^*$ be a representative of $p$ in $C(p)$ that realizes $f(p)$. No arcs $x_i x_j$ for $i,j \neq 2n+1$ have an angle equal to $\pi/2$ due to the general position assumptions. As before, the arcs $x_ix_{i+n}$ which form angles smaller than $\pi/2$ all have a nonempty intersection $Q$. We first show that we can assume $-p^* \in Q$. If $-p^*$ were not a point in the intersection of all the small arcs, let $q \in Q$ be an arbitrary point in that intersection. We replace $p^*$ by $-q$ and reconstruct the Hamiltonian cycle. Denote by $\tilde{x}_1, \ldots, \tilde{x}_{2n}$ the new numbering of the points in $S \setminus \{p\}$. If there is no new arc $\tilde{x}_i \tilde{x}_{i+n}$ that contains $-q$ and whose angle is smaller than $\pi/2$, all the arcs we would consider now are either arcs of the form $x_ix_{i+n}$ that did not contain $p^*$ or a widening of one of such arcs. Therefore, we may assume that there is an arc $\tilde{x}_i\tilde{x}_{i+n}$ that contains $-q$ and has angle smaller than $\pi/2$. Let $A$ be the set of all such arcs. Let $B$ be the set of all arcs $x_jx_{j+n}$ in the first Hamiltonian cycle that had angle smaller than $\pi/2$. We know that $(\cup A) \cap (\cup B) = \emptyset$, because any arc in $B$ contains $q$, any arc in $A$ contains $-q$, and both had angles smaller than $\pi/2$. The arc $\cup A$ contains at least $n+|A|-1$ projections of points of $S$ (not counting $q$).
The arc $\cup B$ contains at least $n+|B|-1$ projections of points of $S$. In total this gives us at least $2n +|A|+|B|-2$. Since there is a total of $2n$ projections of points of $S$ onto $C(p)$, this means that $|A|=|B| = 1$. Therefore, for $k \ge 2$ we can replace $p^*$ by $-q$ without decreasing $f(p)$. If $k=1$, the (single) arc $A$ as defined above contains $n$ of the projections $x_i$ and the arc $B$ contains another $n$ (and therefore must contain $p^*$, or it would need to contain at least $n+1$ of the projections). There are no points of the form $x_j$ outside $A \cup B$. The set $C(p) \setminus (A \cup B)$ is the union of two arcs whose angles sum to more than $\pi$, so one of them must have angle greater than $\pi/2$. Let $p^{**}$ be a point in this arc. We replace $p^*$ by $p^{**}$. When we make the Hamiltonian cycle with this new representative, the arc $B$ either widens or gets removed. In the first case, $f(p)$ increases, and in the second we would have $p \in C_0$. Notice that no new short arc containing $p^{**}$ was created because the two projection points $x_i, x_j$ next to $p^{**}$ form an angle greater than $\pi/2$. This means we can always assume that $-p^*$ is in the intersection of all the arcs of the form $x_{i}x_{i+n}$ with angle smaller than $\pi/2$. Finally, let us show that $\angle x_npp^* > \pi/2$ and $\angle p^*p x_{n+1} > \pi/2$. We show the bound on the first angle, and the second is analogous. If $\angle x_n p p^* \le \pi/2$, notice that the arc $A = x_n p^*$ contains at least $n$ points of the form $x_i$. Take an arc $B$ of the form $x_i x_{i+n}$ with angle smaller than $\pi/2$. This arc contains $-p^*$ and therefore cannot intersect $A$. Moreover, $B$ must contain at least $n+1$ points of the form $x_i$. Then, $A \cup B$ contains at least $2n+1$ such points, but there are only $2n$ of them. With these conditions, we can now move $p$ a small distance in the direction $p-p^*$, reaching a new point $p'$.
Since $p\in S$, when we represent it in $C(p')$ it will be the point in the direction of $p^*$. This means that the canonical Hamiltonian cycle of type I will be the same as the one we had with $p$ when the representative was $p^*$. Since $-p^*$ was in the intersection of all the short arcs, they all widen when we reach $p'$. Therefore, $f(p)$ was not maximal. \end{proof} \section{Results in high dimensions and remarks}\label{sec:high-dimensions} We show how to use Tverberg's theorem to prove Theorem \ref{thm-tverbergapplied}. \begin{proof} Let $S$ be a set of at least $(r-1)(d+1)+1$ points in $\mathds{R}^d$. By Tverberg's theorem, there exists a partition of the points into $r$ sets $A_1, \ldots, A_r$ whose convex hulls intersect. Let $p$ be a point in the intersection of the sets $\conv A_j$ for $j=1,\ldots, r$. Given a point $q \in S$ and a value $1\le j \le r$, consider the half-space \[ H^+ = \{x \in \mathds{R}^d: \langle x, q-p\rangle \le \langle p, q-p\rangle \}. \] By definition, $p \in H^+$. Since $p \in \conv A_j$, the set $A_j$ cannot be contained in the complement of $H^+$, so there is an element $q_j \in A_j \cap H^+$. Then $\langle q_j - p, q-p\rangle \le 0$, so $\angle q p q_j \ge \pi/2$ and therefore $p \in D(q,q_j)$. We construct a graph $G$ with vertices in $S$ and make $\{q,q_j\}$ one of the edges. We repeat this process for each $q \in S$ and each $j =1,\ldots, r$. The graph we constructed satisfies the properties of the theorem. \end{proof} A general unsolved problem is the following. \begin{problem} Let $S$ be a set of points in $\mathds{R}^d$. Determine if there exists a Hamiltonian cycle with vertices on $S$ that is a Tverberg graph for $S$. \end{problem} We suspect that the answer should be positive in the plane, regardless of the parity of $|S|$. In high dimensions we do not even know if, for even $|S|$, there exists a perfect matching that is a Tverberg graph for $S$. As an additional bit of evidence that the answer should be positive in the plane, consider the case of four points in $\mathds{R}^2$.
If the convex hull of the points is a triangle with vertices $x,y,z$, notice that every point inside $\triangle xyz$ is covered at least twice by the sets $D(x,y), D(y,z), D(x,z)$. Assume without loss of generality that the point $w$ is covered by $D(x,y), D(x,z)$. Then, the Hamiltonian cycle $(w,y,x,z,w)$ is a Tverberg graph (the point $w$ is in the intersection of all disks). If the convex hull of the four points is a convex quadrilateral with intersecting diagonals $xy, wz$, let $p$ be the point of intersection of the diagonals. One of the two angles $\angle xpw$ and $\angle wpy$ must be greater than or equal to $\pi/2$. If, without loss of generality, $\angle xpw \ge \pi/2$, then the Hamiltonian cycle $(x,w,z,y,x)$ is a Tverberg graph (the point $p$ is in the intersection of all disks). \begin{bibdiv} \begin{biblist} \bib{barany1987extension}{article}{ author={B\'ar\'any, Imre}, title={An extension of the Erd\H{o}s--Szekeres theorem on large angles}, date={1987}, journal={Combinatorica}, volume={7}, number={2}, pages={161\ndash 169}, } \bib{Barany:2016vx}{article}{ author={B{\'a}r{\'a}ny, Imre}, author={Blagojevi{\'c}, Pavle V.~M.}, author={Ziegler, G{\"u}nter~M.}, title={{Tverberg's Theorem at 50: Extensions and Counterexamples}}, date={2016}, journal={Notices of the American Mathematical Society}, volume={63}, pages={732\ndash 739}, } \bib{bereg2019maximum}{article}{ author={Bereg, Sergey}, author={Chac{\'o}n-Rivera, Oscar}, author={Flores-Pe{\~n}aloza, David}, author={Huemer, Clemens}, author={P{\'e}rez-Lantero, Pablo}, author={Seara, Carlos}, title={On maximum-sum matchings of points}, date={2019}, journal={arXiv preprint arXiv:1911.10610}, volume={[cs.CG]}, } \bib{Blagojevic:2014js}{article}{ author={Blagojevi{\'c}, Pavle V.~M.}, author={Frick, Florian}, author={Ziegler, G{\"u}nter~M.}, title={{Tverberg plus constraints}}, date={2014}, journal={Bulletin of the London Mathematical Society}, volume={46}, number={5}, pages={953\ndash 967}, }
\bib{Birch:1959ii}{article}{ author={Birch, Bryan~John}, title={{On 3N points in a plane}}, date={1959}, journal={Mathematical Proceedings of the Cambridge Philosophical Society}, volume={55}, number={04}, pages={289\ndash 293}, } \bib{Barany:1987ef}{article}{ author={B{\'a}r{\'a}ny, Imre}, author={Lehel, Jeno}, title={{Covering with Euclidean Boxes}}, date={1987}, journal={European Journal of Combinatorics}, volume={8}, number={2}, pages={113\ndash 119}, } \bib{Barany:1992tx}{article}{ author={B{\'a}r{\'a}ny, Imre}, author={Larman, David~G}, title={{A Colored Version of Tverberg's Theorem}}, date={1992}, journal={J. London Math. Soc}, volume={s2-45}, number={2}, pages={314\ndash 320}, } \bib{Blagojevic:2011vh}{article}{ author={Blagojevi{\'c}, Pavle V.~M.}, author={Matschke, Benjamin}, author={Ziegler, G{\"u}nter~M.}, title={{Optimal bounds for a colorful Tverberg-Vrecica type problem}}, date={2011}, journal={Advances in Mathematics}, volume={226}, number={6}, pages={5198\ndash 5215}, } \bib{Blagojevic:2015wya}{article}{ author={Blagojevi{\'c}, Pavle V.~M.}, author={Matschke, Benjamin}, author={Ziegler, G{\"u}nter~M.}, title={{Optimal bounds for the colored Tverberg problem}}, date={2015}, journal={Journal of the European Mathematical Society}, volume={17}, number={4}, pages={739\ndash 754}, } \bib{Barany:2018fya}{article}{ author={B{\'a}r{\'a}ny, Imre}, author={Sober{\'o}n, Pablo}, title={{Tverberg{\textquoteright}s theorem is 50 years old: A survey}}, date={2018}, journal={Bulletin of the American Mathematical Society}, volume={55}, number={4}, pages={459\ndash 492}, } \bib{DeLoera:2019jb}{article}{ author={De~Loera, Jes{\'u}s~A.}, author={Goaoc, Xavier}, author={Meunier, Fr{\'e}d{\'e}ric}, author={Mustafa, Nabil~H.}, title={{The discrete yet ubiquitous theorems of Carath{\'e}odory, Helly, Sperner, Tucker, and Tverberg}}, date={2019}, journal={Bulletin of the American Mathematical Society}, volume={56}, number={3}, pages={1\ndash 97}, } 
\bib{Frick:2020uj}{article}{ author={Frick, Florian}, author={Sober{\'o}n, Pablo}, title={{The topological Tverberg problem beyond prime powers}}, date={2020}, journal={arXiv preprint arXiv:2005.05251}, volume={[math.CO]}, } \bib{Huemer:2019ji}{article}{ author={Huemer, Clemens}, author={P{\'e}rez-Lantero, Pablo}, author={Seara, Carlos}, author={Silveira, Rodrigo~I}, title={{Matching points with disks with a common intersection}}, date={2019}, journal={Discrete Mathematics}, volume={342}, number={7}, pages={1885\ndash 1893}, } \bib{Magazinov:2016ia}{article}{ author={Magazinov, Alexander}, author={Sober{\'o}n, Pablo}, title={{Positive-fraction intersection results and variations of weak epsilon-nets}}, date={2016}, journal={Monatshefte f{\"u}r Mathematik}, volume={183}, number={1}, pages={165\ndash 176}, } \bib{Tverberg:1966tb}{article}{ author={Tverberg, Helge}, title={{A generalization of Radon{\textquoteright}s theorem}}, date={1966}, journal={J. London Math. Soc}, volume={41}, number={1}, pages={123\ndash 128}, } \end{biblist} \end{bibdiv} \end{document}
https://arxiv.org/abs/2009.05000
Primes in short intervals: Heuristics and calculations
We formulate, using heuristic reasoning, precise conjectures for the range of the number of primes in intervals of length $y$ around $x$, where $y\ll (\log x)^2$. In particular we conjecture that the maximum grows surprisingly slowly as $y$ ranges from $\log x$ to $(\log x)^2$. We will show that our conjectures are somewhat supported by available data, though not so well that there may not be room for some modification.
\section{Introduction} We are interested in estimating the maximum and minimum number of primes in a length $y$ sub-interval of $(x,2x]$, denoted by \[ M(x,y):=\max_{X\in (x,2x]} \pi(X+y)-\pi(X) \text{ and } m(x,y):=\min_{X\in (x,2x]} \pi(X+y)-\pi(X) , \] respectively, so that \[ m(x,y)\leq \pi(X+y)-\pi(X) \leq M(x,y) \text{ whenever } x<X\leq 2x, \] and these bounds cannot be improved (by definition). It is widely believed that $m(x,y)=0$ for $y\ll (\log x)^2$ though we do not know the precise value of the implicit constant. However there has been little study of how $m(x,y)$ subsequently grows, or of how $M(x,y)$ behaves for $y\ll (\log x)^{2+o(1)}$. In this article we will conjecture a series of guesstimates for $M(x,y)$ and $m(x,y)$ in different ranges, comparing these estimates to what relevant data we can compute, and discussing some of the issues that prevent us from being too confident of these guesses. The starting point for our investigations came from a comparison of two known observations: Based on the (conjectured) size of admissible sets we believe that there exists a constant $c>0$ such that \[ M(x,y) \sim \frac y{\log y} \] for $y\leq c \log x$, as long as $y\to \infty$ as $x\to \infty$ (see sections 1.1, 4.1, 8.1 and 9.1). On the other hand, based on a modification of Cram\'er's probabilistic model \cite{Cra} for the distribution of primes (which in turn is based on Gauss's observation that the primes have density $\frac 1{\log x}$ around $x$), we believe that \[ M(x,y) \sim \sigma_+(A) \frac y{\log x} \] for $y=(\log x)^A$ with $A>2$, for some constant $\sigma_+(A)>1$, for which $\sigma_+(A)\to 1^+$ as $A\to \infty$ (see sections 1.5, 3.1, and 7.2). 
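The Cram\'er-model heuristic behind the second estimate can be simulated directly: declare each integer $n\in (x,2x]$ ``prime'' independently with probability $1/\log x$ and record the extreme window counts of the resulting random set. The sketch below (ours) implements the plain model only, without the sieve-theoretic corrections discussed later.

```python
import math
import random

def model_max_window(x, y, seed=0):
    """Largest number of model-'primes' in a window (X, X+y] with X in (x, 2x],
    where each integer is 'prime' independently with probability 1/log x."""
    rng = random.Random(seed)
    density = 1.0 / math.log(x)
    flags = [rng.random() < density for _ in range(x, 2 * x + y + 1)]  # n = x..2x+y
    prefix = [0]
    for f in flags:
        prefix.append(prefix[-1] + f)
    # the window (X, X+y] contains the integers X+1, ..., X+y
    return max(prefix[X - x + y + 1] - prefix[X - x + 1] for X in range(x, 2 * x + 1))
```

Averaging such runs over many seeds, and normalizing by $y/\log x$, is one way to get a feel for the constant $\sigma_+(A)$.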
Therefore it seems that in both ranges, $M(x,y)$ is roughly linear in $y$: In particular, \[ M(x,y) \sim \frac y{\log\log x} \text{ for } y \text{ a little smaller than }\log x, \] whereas, if $c_+:=\sigma_+(2)$ then \[ M(x,y)\sim c_+\frac y{\log x}\text{ for } y \text{ a little bigger than } (\log x)^2. \] If true then $ M(x,y)$ has quite different slopes, $\frac 1{\log\log x}$ vs. $\frac {c_+}{\log x}$, in these two different ranges, and so there is a substantial change in behaviour of $M(x,y)$ as $y$ grows from around $ \log x$ to slightly beyond $(\log x)^{2}$. Our main goal is to investigate what happens in-between, though also to give heuristic support for the claims above. At the end-points of this in-between interval, the above claims suggest that \[ M(x,\log x)\sim \frac {\log x}{\log\log x} \text{ whereas } M(x,(\log x)^2) \asymp \log x, \] so $M(x,y)$ does not seem to get much bigger as $y$ grows from $\log x$ to $(\log x)^2$; indeed it grows by only a factor of $\log\log x$. This is very different from before and after this interval: As $y$ goes from $1$ to $\log x$ we expect $M(x,y)$ to grow by a factor of $\asymp \frac {\log x}{\log\log x} $, and as $y$ goes from $(\log x)^2$ to $(\log x)^3$ to grow by a similar factor of $\asymp \log x$ (and indeed for any subsequent interval of multiplicative length $\log x$). This does not seem to have been previously observed. Based on an appropriate heuristic we conjecture that if $1<A<2$ then \[ M(x,(\log x)^A)\sim \frac 1{2-A} \cdot \frac {\log x}{\log\log x}; \] more precisely that if $\log x\leq y=o( (\log x)^2)$ then \begin{equation} \label{eq: Intermediate intervals} M(x,y) \sim \frac {\log x}{\log \left(\tfrac{(\log x)^2} y\right) } . 
\end{equation} We will provide data with $x$ up to $10^{12}$ to support this claim, though it should be noted that, although this is as far as we have been able to compute, these $x$ are still small enough that secondary terms are likely to have a significant impact (see sections 1.2, 8.3, 9.2). For this reason we also look at \[ M(x,2y)/M(x,y) \] because we expect that, as $x\to \infty$, this looks much like $1$ in this range, and $2$ outside this range. However we will compare the data for this ratio to a more precise conjecture. In this article we will argue that there are four ranges of $y$ in each of which we expect different behaviour for $M(x,y)$, namely: \[ y\ll \log x;\ \log x\ll y =o((\log x)^2);\ y\asymp (\log x)^2;\ \text{ and } y/(\log x)^2\to \infty \text{ with } y\leq x. \] We will present these separately in the introduction though there is significant overlap in the theory; and when it comes to presenting data for a given value of $x$ up to which we can compute, it is often unclear where one $y$-interval should end and the next begin. \subsection{Guesstimates for very short intervals: $y\ll \log x$} We believe that if $y\leq \log x$ then \begin{equation} \label{eq: Very short interval} M(x,y) \sim \frac y{\log y} \end{equation} provided $x,y\to \infty$. We will now formulate a more precise conjecture than this for $y\leq (1-o(1)) \log x$: A set of integers $A$ is \emph{admissible} if for every prime $p$ there is a residue class mod $p$ that does not contain any element from the set (otherwise $A$ is \emph{inadmissible}). Let $S(y)$ denote the maximum size of an admissible set $A$ which is a subset of $[1,y]$,\footnote{We say that $A$, and any translate of $A$, has \emph{length} $\leq y$.} so that \[ M(x,y) \leq S(y) \text{ if } x\geq y \] (for if $X<p_1<\dots<p_k\leq X+y$ are primes then $\{ p_1-X,\ldots, p_k-X\}$ is an admissible set).
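Note that admissibility is a finite check: if $p>|A|$ then the $|A|$ elements occupy at most $|A|<p$ residue classes mod $p$, so some class is automatically free. For very small $y$ one can even compute $S(y)$ by exhaustive search, as in the following sketch (ours; serious computations of $S(y)$ require far better methods):

```python
from itertools import combinations

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [False, False] + [True] * (n - 1)
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

def is_admissible(A):
    # A misses some residue class mod every prime; only p <= |A| can fail
    return all(len({a % p for a in A}) < p for p in primes_up_to(len(A)))

def S(y):
    # largest admissible subset of [1, y], by brute force (tiny y only)
    for k in range(y, 0, -1):
        if any(is_admissible(c) for c in combinations(range(1, y + 1), k)):
            return k
    return 0
```

For example $S(3)=2$, realized by the admissible pair $\{1,3\}$, the pattern of a twin-prime pair.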
We believe that if $y\leq (1-o(1))\log x$ then\footnote{The ``$o(1)$'' here can be interpreted as saying that for any fixed $\epsilon>0$, if $x$ is sufficiently large then \eqref{eq: Very short interval2} holds for all $y\leq (1-\epsilon)\log x$.} \begin{equation} \label{eq: Very short interval2} M(x,y) = S(y). \end{equation} These two conjectures are consistent since it is believed that $S(y)\sim \frac y{\log y}$. The data seems to confirm the conjecture \eqref{eq: Very short interval2} for $x=10^k$ for $k=9,10,11$ and $12$: \begin{figure}[H]\centering{ \includegraphics[scale=.5]{max_diff_9_2_42_B1.pdf} \includegraphics[scale=.5]{max_diff_10_2_46_B1.pdf} \newline \includegraphics[scale=.5]{max_diff_11_2_50_B1.pdf} \includegraphics[scale=.5]{max_diff_12_2_56_B1.pdf} } \caption{$M(x,y)$ vs. $S(y)$ for $x=10^k, k=9,\dots,12$ and $y\leq 2\log x$.\newline We observe that $M(x,y)=S(y)$ up to the dashed line at $y=\log x$} \end{figure} \noindent In these graphs, for each $y$ (the horizontal axis), a colored-in dot represents $M(x,y)$, and an empty box represents the value of $S(y)$. In this data, it appears that $M(x,y) = S(y)$ for $y$ up to about $\tfrac 32 \log x$, and then $M(x,y)$ is at worst a little less than $S(y)$ for $y$ between $\tfrac 32 \log x$ and $2 \log x$, for these values of $x$. Although we do believe that $M(x,y) = S(y)$ for all $y\leq (1-\epsilon) \log x$, for all sufficiently large $x$, and perhaps even for all $y\leq \log x$ for all $x$, we do not believe that this should be so for $y>(1+\epsilon) \log x$; rather, the data we see here is an artifact of the relatively small values of $x$ we can compute with. Indeed, if we are wrong about this, if $M(x,y) = S(y)$ for a sequence of $x,y$ with $y>(1+\epsilon) \log x$ and $x$ arbitrarily large, then this would contradict the key conjecture in section 1.2.
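For the modest values of $x$ in our data, $M(x,y)$ and $m(x,y)$ can be computed directly with a sieve and a sliding window; since $y$ is an integer, every real $X\in [n,n+1)$ gives the same count as the integer $X=n$, so scanning integer $X$ suffices. A sketch of such a computation (ours, not the code used to produce the figures):

```python
def window_extremes(x, y):
    """Exact M(x, y) and m(x, y): the max and min of pi(X + y) - pi(X)
    over windows (X, X + y] with X ranging over (x, 2x]."""
    n = 2 * x + y
    sieve = [False, False] + [True] * (n - 1)
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    pi = [0] * (n + 1)  # pi[k] = number of primes <= k
    count = 0
    for k in range(n + 1):
        count += sieve[k]
        pi[k] = count
    counts = [pi[X + y] - pi[X] for X in range(x, 2 * x + 1)]
    return max(counts), min(counts)
```

For instance, `window_extremes(10, 10)` returns `(4, 2)`: the window $(10,20]$ holds the four primes $11, 13, 17, 19$, while $(20,30]$ holds only $23$ and $29$.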
We discuss this heuristic further in section 4, as well as in sections 8.1 and 9.1. \subsection{Intermediate length intervals: $ \log x\leq y=o( (\log x)^2)$} In this range we believe that \eqref{eq: Intermediate intervals} holds: \[ M(x,y) \sim L(x,y) \text{ where } L(x,y):=\frac {\log x}{\log \left(\tfrac{(\log x)^2} y\right) } . \] However, when comparing this prediction to the data, it is not obvious how to interpret ``$o( (\log x)^2)$'' for a given $x$-value. We have made the rather arbitrary choice of $\tfrac 12 (\log x)^2$ as the upper bound for the $y$-range. We have also taken $\tfrac 12 \log x$ as a lower bound, which reflects our uncertainty as to whether things can really be predicted so precisely, though we have marked $\log x$ with a dashed line. \begin{figure}[H]\centering{ \includegraphics[scale=.5]{max_diff_9_8_216_B2_logxmarked.pdf} \includegraphics[scale=.5]{max_diff_10_10_266_B2_logxmarked.pdf} \newline \includegraphics[scale=.5]{max_diff_11_10_320_B2_logxmarked.pdf} \includegraphics[scale=.5]{max_diff_12_12_382_B2_logxmarked.pdf} } \caption{$M(x,y)$ vs. $L(x,y)$ for $x=10^k, k=9,\dots,12$ and $\tfrac 12\log x\leq y\leq \tfrac 12(\log x)^2$. Dashed line at $y=\log x$, which is the end of the range of the $M(x,y)=S(y)$ conjecture.} \end{figure} \noindent Here, for each $y$ (the horizontal axis), a colored-in dot represents $M(x,y)$, and the continuous curve represents $L(x,y)$ (our prediction in \eqref{eq: Intermediate intervals}). Our prediction and the data seem to coincide at $y=\log x$ (where the dashed line is), and again at a point that seems to be slowly increasing (towards $\tfrac 12(\log x)^2$) as $x$ grows. The graph indicates that our prediction provides a pretty good approximation to the data in the whole range, though it is concave up whereas the data itself appears to yield a curve that is concave down. We have no explanation for that.
\subsection{The maximum on longer intervals: $y\asymp (\log x)^2$} Here we mean that $y=t(\log x)^2$ for some fixed value of $t$. In this range we will need to define two implicit functions to formulate our conjectures for $m(x,y)$ and $M(x,y)$: For every given $t>0$ consider the equation \[ u(\log u-\log t-1) +t = 1. \] We will show that for every $t>0$ there is a unique solution $u_+(t)$ with $u_+(t)>t$. If $0<t<1$ there is no solution in $u\in (0,t)$, so we let $u_-(t)=0$. If $t>1$ then there is a unique solution $u_-(t)$ with $0<u_-(t)<t$. We believe that there exist constants $c_-,c_+>0$ such that if $y=t(\log x)^2$ then \begin{equation} \label{eq: LogSquared intervals} m(x,y)\sim u_-(c_-t) \log x \text{ and } M(x,y)\sim u_+(c_+t) \log x. \end{equation} We will see at the end of section 3 that $c_\pm$ are constants that can be defined in terms of sieving intervals. We know that $c_+\geq 1.015\dots$ and $c_-\leq \frac {e^\gamma}2 = 0.890536\dots$, and perhaps both of these inequalities should be equalities.\footnote{We will assume that $c_+ =1.015\dots$ and $c_- = 0.8905\dots$ throughout for the purpose of comparing our conjectures to our data. We will explain the significance of $1.015\dots$ at the end of section 3.} Here is the data for $M(x,y)$ in this range: \begin{figure}[H]\centering{ \includegraphics[scale=.5]{max_diff_9_142_858_B3v.pdf} \includegraphics[scale=.5]{max_diff_10_174_1060_B3v.pdf} \newline \includegraphics[scale=.5]{max_diff_11_212_1278_B3v.pdf} \includegraphics[scale=.5]{max_diff_12_254_1526_B3v.pdf} } \caption{$M(x,y)$ vs. $u_+(1.015 t) \log x$ where $y=t(\log x)^2$ \newline . \hskip .7in for $x=10^k, k=9,\dots,12$ and $\tfrac 13(\log x)^2\leq y\leq 2(\log x)^2$.} \end{figure} Here, for each $y$ (the horizontal axis), a colored-in dot represents $M(x,y)$, and the red curve represents our prediction $u_+(1.015 t) \log x$ where $y=t(\log x)^2$. 
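The implicit functions $u_\pm(t)$ are straightforward to compute numerically: $g(u):=u(\log u-\log t-1)+t$ satisfies $g(t)=0$ and is increasing for $u>t$ and decreasing for $u<t$, so bisection applies on each side. A sketch in Python (function names ours, and using the tentative values $c_+=1.015$, $c_-=0.8905$ adopted in the footnote above):

```python
import math

def g(u, t):
    return u * (math.log(u) - math.log(t) - 1) + t

def u_plus(t, tol=1e-12):
    """Unique root u > t of g(u, t) = 1; g(t, t) = 0 and g is
    increasing for u > t, so plain bisection works."""
    lo, hi = t, max(2 * t, 10.0)
    while g(hi, t) < 1:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid, t) < 1 else (lo, mid)
    return (lo + hi) / 2

def u_minus(t, tol=1e-12):
    """For t > 1, the unique root 0 < u < t (g decreases from t to 0
    there); for t <= 1 no root exists and we set u_-(t) = 0."""
    if t <= 1:
        return 0.0
    lo, hi = tol, t
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid, t) > 1 else (lo, mid)
    return (lo + hi) / 2

# Predicted m(x,y)/log x and M(x,y)/log x at y = t(log x)^2:
for t in (0.5, 1.0, 2.0):
    print(t, round(u_minus(0.8905 * t), 3), round(u_plus(1.015 * t), 3))
```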
It appears that this prediction is too large by a factor of about $35\%$ (and if $c_+ $ is larger than $1.015$ then the red curve will be even further above the data). However, we believe this is a consequence of only calculating up to $x=10^{12}$, and we hope that the data will get closer to our curve as $x$ gets larger.\footnote{One referee asks whether we expect that $u_+(c_+t) \log x\geq M(x,t(\log x)^2)$ will persist for larger $x$; we have no idea how to make predictions that are this precise, and doubt the value of trying to do so given how far out our predictions currently are from the data!} In this range of $y$ it is already well known that the data for the minimum does not yet satisfy the standard conjectures: \subsection{The minimum on longer intervals: $y\asymp (\log x)^2$} The prediction \eqref{eq: LogSquared intervals} implies that if $c_-t<1$ then $m(x,t(\log x)^2)=0$ but not if $c_-t>1$. That is, we conjecture the following lower bound for the maximal gap between consecutive primes: \[ \max_{x<p_n\leq 2x} p_{n+1}-p_n \sim c_-^{-1} ( \log x)^2 \geq 2e^{-\gamma}( \log x)^2; \] and it is plausible that we have equality here. This is larger than Cram\'er's original conjecture (that this maximal gap is $\sim ( \log x)^2$). As we will discuss, Cram\'er's reasoning is flawed by failing to take account of divisibility by small primes (a point originally made by the first author back in \cite{Gr1} and recently reiterated by the in-depth analysis of Banks, Ford and Tao in \cite{BFT}). However the data does not really support either conjecture, as the largest gap between consecutive primes that has been found is about $.9206 (\log x)^2$ (a shortfall of around $22\% $ from $2e^{-\gamma}\approx 1.1229\cdots$).
\medskip \begin{figure}[H]\centering{
\begin{tabular}{|r|r|r|}
\hline
$p_n$ & $p_{n+1}-p_n$ & $(p_{n+1}-p_n)/\log^2 p_n$ \\
\hline
113 & 14 & .6264 \\
1327 & 34 & .6576 \\
31397 & 72 & .6715 \\
370261 & 112 & .6812 \\
2010733 & 148 & .7026 \\
20831323 & 210 & .7395 \\
25056082087 & 456 & .7953 \\
2614941710599 & 652 & .7975 \\
19581334192423 & 766 & .8178 \\
218209405436543 & 906 & .8311 \\
1693182318746371 & 1132 & .9206 \\
\hline
\end{tabular}
} \caption{(Known) record-breaking gaps between primes} \end{figure} \smallskip In \cite{BFT} Banks, Ford and Tao graphed how the maximal gap between primes grows, as compared to the proposed asymptotics $2e^{-\gamma}( \log x)^2, (\log x)^2$ and the more precise $(\log x)(\log x-\log\log x)$. In section 9.5 we discuss the heuristic justification for these conjectures and variants. All such heuristics seem to suggest that the maximal gap between consecutive primes up to $x$ should grow like $ \log x (a\log x+b\log\log x+c)$ for some constants $a,b,c$. The only possibilities for $a$ seem to be $a=1$ or $2e^{-\gamma}$, though there are many possible guesses for $b$ and $c$.
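As a quick consistency check on the data, the last column of the record-gaps table above can be recomputed from the first two (agreeing with the table up to rounding in the final digit):

```python
import math

# Each row of the table is (p_n, p_{n+1} - p_n); the normalized gap in
# the last column is (p_{n+1} - p_n) / (log p_n)^2.
records = [(113, 14), (1327, 34), (31397, 72), (370261, 112),
           (2010733, 148), (20831323, 210), (25056082087, 456),
           (2614941710599, 652), (19581334192423, 766),
           (218209405436543, 906), (1693182318746371, 1132)]
for p, gap in records:
    print(p, gap, round(gap / math.log(p) ** 2, 4))
```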
Here we graph $2e^{-\gamma}( \log x)^2$ and $(\log x)^2$ as well as the best fit functions of the form $ \log x (a\log x+b\log\log x+c)$ where $a=1$ or $2e^{-\gamma}$.\footnote{One referee correctly feels that it is inappropriate to try to fit a justification to the data but, who knows, perhaps some enterprising future researcher will see a clearly good reason for our favourite candidate, $ \log x (2e^{-\gamma}\log x-5\log\log x+6)$.} \begin{figure}[H]\centering{ \includegraphics[scale=1]{Large_gaps_curvefit.pdf} \caption{$\displaystyle\max_{p_n\leq x} (p_{n+1}-p_n)$ vs. Conjectured approximations } } \end{figure} \noindent The data for the largest gap between consecutive primes is substantially smaller than our two predictions. No one has suggested a good reason for this shortfall, though in appendix A we explain how at least some of this shortfall is due to the use of asymptotic estimates for primes and sieves, for relatively small values. In Figure 5, we have also graphed the best fit to the data of curves of the form $ \log x (a\log x+b\log\log x+c)$ with $a=1$ and $2e^{-\gamma}$, and the fit is tight. This suggests that we should be looking harder at possible secondary terms and reasons why they might occur. \bigskip If \eqref{eq: LogSquared intervals} really does hold then $m(x,y)\sim u_-(c_-t) \log x$ for $y=t(\log x)^2$, where $u_-(c_-t)=0$ when $c_-t\leq 1$, but $u_-(c_-t)>0$ for $c_-t>1$. It is of interest to compare this prediction for $m(x,y)$ to the data, and we will assume that $c_-=\frac {e^\gamma}2 = 0.8905 \dots$ for the purpose of comparison: \begin{figure}[H]\centering{ \includegraphics[scale=.5]{min_diff_9_142_858_delta_min.pdf} \includegraphics[scale=.5]{min_diff_10_176_1060_delta_min.pdf} \newline \includegraphics[scale=.5]{min_diff_11_212_1282_delta_min.pdf} \includegraphics[scale=.5]{min_diff_12_254_1528_delta_min.pdf} } \caption{$m(x,y)$ vs. $u_-(0.8905 t) \log x$ where $y=t(\log x)^2$ \newline . 
\hskip .7in for $x=10^k, k=9,\dots,12$ and $\tfrac 13(\log x)^2\leq y\leq 2(\log x)^2$.} \end{figure} For these values of $x$ it appears that the smallest $y$ for which $m(x,y)>0$ is at about $y=\tfrac 34 (\log x)^2$, which is significantly smaller than in the prediction (though the ratio $y/(\log x)^2$ appears to be growing slowly with $x$). This confirms what we saw in the previous two figures when studying $\displaystyle\max_{p_n\leq x} (p_{n+1}-p_n)$. We plotted the maximum $M(x,y)$ vs. our prediction in this same range in Figure 3, and the data there appears to have a similar shape to our prediction. However it is not obvious here whether the data for the minimum, $m(x,y)$, has a similar shape to our prediction. We now compare our predictions for both the maxima and the minima with the data in the range $ \tfrac 13(\log x)^2\leq y\leq 2(\log x)^2$, on the same graph, to get a better sense of how well these fit: \begin{figure}[H]\centering{ \includegraphics[scale=.5]{__max__min_diff_9_142_858_allB3.pdf} \includegraphics[scale=.5]{__max__min_diff_10_174_1060_allB3.pdf} \newline \includegraphics[scale=.5]{__max__min_diff_11_212_1278_allB3.pdf} \includegraphics[scale=.5]{__max__min_diff_12_254_1526_allB3.pdf} } \caption{ $u_-(c_-t)\log x$ vs. $m(x,y)$ vs. $\tfrac y{\log x}$ vs. $M(x,y)$ vs. $u_+(c_+t)\log x$ \newline in ascending order, where $y=t(\log x)^2$ for $x=10^k, k=9,\dots,12$ and \newline . \hskip1.6in $ \tfrac 13(\log x)^2 \leq y\leq 2(\log x)^2$.} \end{figure} We do not know what conclusions to draw from this data! \subsection{Long intervals: $y/(\log x)^2\to \infty$} We believe that there exist continuous functions $0<\sigma_-(A)<1<\sigma_+(A)$ for which $\sigma_-(A),\sigma_+(A)\to 1$ as $A\to \infty$, such that if $y/(\log x)^2\to \infty$ then \begin{equation} \label{eq: logtotheAPredictions} m(x,y) \sim \sigma_-(A) \frac y{\log x} \text{ and } M(x,y) \sim \sigma_+(A) \frac y{\log x} \end{equation} writing $y=(\log x)^A$.
Moreover we should take \[ c_-=\sigma_-(2) \text{ and } c_+=\sigma_+(2) \] above. We will obtain these conjectures from a discussion of sieve theory. At first sight these conjectures seem to be inconsistent with Selberg's result that \[ \pi(x+y)-\pi(x) \sim \frac y{\log x} \] for almost all $x$, assuming that $y/(\log x)^2\to \infty$ (which he proved assuming the Riemann Hypothesis). However the ``almost all'' in the statement allows for exceptions and in 1984, Maier \cite{Mai} exhibited, for all $A>2$, constants $\delta_+(A) ,\delta_-(A)>0$ for which there is an infinite sequence of integers $x_+$ and $x_-$ with \[ m(x_-,y_-) \lesssim \delta_-(A) \frac {y_-}{\log x_-} \text{ and } M(x_+,y_+) \gtrsim \delta_+(A) \frac {y_+}{\log x_+} \] where $y_\pm=(\log x_\pm)^A$. As far as we know it could be that $ \sigma_-(A)=\delta_-(A) $ and $ \sigma_+(A)=\delta_+(A)$ for each $A$, as we will discuss in sections 2.2 and 3. \subsection{Another statistic} The data in sections 1.1 and 1.2 seem to support our conjectures for $M(x,y)$ in the range $y=o((\log x)^2)$, but the data in sections 1.3 and 1.4 for larger $y$ are less encouraging. For this reason it seems appropriate to return to the question of how $M(x,y)$ grows as a function of $y$ in the range $y\asymp (\log x)^2$, and so we examine the ratio \[ r_+(x,y):= M(x,2y)/M(x,y). \] Our \emph{asymptotic predictions} suggest that this looks like $2+o(1)$ if $y\leq \frac 12 \log x$ and if $y/(\log x)^2\to \infty$, and $1+o(1)$ if $\log x\leq y=o((\log x)^2)$. 
For $y\asymp (\log x)^2$ our prediction for $M(x,y)$ is more complicated; indeed if $y=t(\log x)^2$ then we predict that this looks like \[ \rho_+(t):= u_+(2c_+t)/u_+(c_+t) \] and we now compare this new statistic to the data: \begin{figure}[H]\centering{ \includegraphics[scale=.5]{max_diff_9_20_428_delta.pdf} \includegraphics[scale=.5]{max_diff_10_22_530_delta.pdf} \newline \includegraphics[scale=.5]{max_diff_11_24_638_delta.pdf} \includegraphics[scale=.5]{max_diff_12_26_762_delta.pdf} } \caption{$M(10^k,2y)/M(10^k,y)$ for $k=9,\dots,12$ and $ y\leq (\log (10^k))^2$.} \end{figure} We can see that the shape of our prediction looks correct, but it is a little on the low side. What is encouraging is that the fit seems to get better as $k$ grows. \subsection{Summary of conjectures} We now recall in one place the conjectures given above: Fix $\epsilon>0$. If $x$ is sufficiently large and $y\leq (1-\epsilon)\log x$ then \[ M(x,y)=S(y). \] A weaker conjecture claims that if $y\leq (1-o(1))\log x$ and $y\to \infty$ as $x\to \infty$ then \[ M(x,y) \sim \frac y{\log y} . \] If $ \log x\leq y=o( (\log x)^2)$ then \[ M(x,y) \sim L(x,y) \text{ where } L(x,y):=\frac {\log x}{\log \left(\tfrac{(\log x)^2} y\right) } . \] We conjecture that there exist constants $c_-,c_+>0$ such that if $y=t(\log x)^2$ then \[ m(x,y)\sim u_-(c_-t) \log x \text{ and } M(x,y)\sim u_+(c_+t) \log x, \] and we even have tentative guesses about the values of $c_-$ and $c_+$. Moreover this suggests that \[ \max_{x<p_n\leq 2x} p_{n+1}-p_n \sim c_-^{-1} ( \log x)^2. \] Finally, for any fixed $A>2$ we believe that there exist continuous functions $ \sigma_-(A)<1<\sigma_+(A)$ such that if $y=(\log x)^A$ then \[ m(x,y) \sim \sigma_-(A) \frac y{\log x} \text{ and } M(x,y) \sim \sigma_+(A) \frac y{\log x}.
\] \section{Some historical comparisons} \subsection{Best results known for small and large gaps between consecutive primes} Following up the 2013 breakthrough by Yitang Zhang \cite{Zha} on small gaps between primes, Maynard \cite{May1} and Tao \cite{Tao} proved that there are shortish intervals that contain $m$ primes for any fixed $m$. Their remarkable work implies that there exists a constant $c>0$ such that for each $y\geq 2$ we have \[ M(x,y) \geq c \log y \text{ if } x \text{ is sufficiently large}, \] which unfortunately is far smaller than what is conjectured here, in all ranges of $y$. However, before Zhang's work we could only say, for $y\ll \log x$, that $M(x,y)\geq 1$, and after Zhang only that $M(x,y)\geq 2$, so these latest efforts are a significant leap forward in our understanding.\footnote{In \cite{May3} Maynard asks similar questions for integers that are the sum of two squares. He proved unconditionally the remarkably strong result that there are intervals $(X,X+y]$ which contain $\gg \frac{y}{(\log x)^{1/2}} + y^{1/10}$ integers that are the sum of two squares for all $y\geq 1$. This is still much smaller than what is probably the truth for $y\ll (\log x)^c$ but it is at least a power of $y$, as we might conjecture, so far closer to the truth than what is known unconditionally about primes.} Similarly Ford, Green, Konyagin, Maynard and Tao \cite{FGM}, following up on \cite{FGK, May2}, recently showed that \[ m(x,y)=0 \text{ for some } y \gg \frac{\log x \log\log x \log\log\log\log x}{\log\log\log x} , \] and they believe their technique (which consists of looking only at divisibility by small primes) can be extended no further than $y$ as large as $(\log x)(\log\log x)^{2+o(1)}$, which is far smaller than what is conjectured (here and previously).
\subsection{Unusual distribution of primes in intervals} As discussed in section 1.5, Maier \cite{Mai} proved that there can be surprisingly few or many primes in an interval of length $(\log x)^A$ with $A>2$. His proof can be easily modified to express his result in terms of certain sieving constants: Define \[ S(x,y,z) := \#\{ n\in (x,x+y]:\ (n,P(z))=1\} \] where $P(z):=\prod_{p\leq z} p$, and let \begin{align*} S^+(y,z):= \max_{x} S(x,y,z) \text{ and } S^-(y,z) := \min_{x} S(x,y,z). \end{align*} For each fixed $u\geq 1$ we define \begin{align*} \sigma_+(u) :&= \limsup_{z\to \infty} S^+(z^u,z)\bigg/ \bigg\{ \prod_{p\leq z}\bigg( 1-\frac 1p \bigg) \cdot z^u \bigg\} \\ \text{ and } \sigma_-(u) :&= \liminf_{z\to \infty} S^-(z^u,z)\bigg/ \bigg\{ \prod_{p\leq z}\bigg( 1-\frac 1p \bigg) \cdot z^u \bigg\} . \end{align*} We will discuss what we know about the constants $\sigma_-(u)$ and $\sigma_+(u)$ in the next section, although we state here that we believe that both the limsups and the liminfs are actually limits, so that \begin{equation} \label{eq: asymp for S's} S^+(z^u,z) \sim \sigma_+(u)\prod_{p\leq z}\bigg( 1-\frac 1p \bigg) \cdot z^u \text{ and } S^-(z^u,z) \sim \sigma_-(u)\prod_{p\leq z}\bigg( 1-\frac 1p \bigg) \cdot z^u. \end{equation} Maier's proof in \cite{Mai} can be modified to show that for $y=(\log x)^A$ and $z=\epsilon \log x$ we have \[ M(x,y) \geq \{ 1+o_{x\to \infty}(1)\} S^+(y,z) \cdot \frac {e^\gamma \log z}{\log x} \] which implies that there exist arbitrarily large $x$ ($=x_+$) for which \[ M(x,y) \geq \{ 1+o(1)\}\sigma_+(A) \frac{y}{\log x} . \] Analogously, there are arbitrarily large $x$ ($=x_-$) for which \[ m(x,y) \leq \{ 1+o(1)\} \sigma_-(A) \frac{y}{\log x} . \] If, as we believe, \eqref{eq: asymp for S's} holds then these estimates are true for all $x$.
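For small parameters these quantities can be computed directly; the following Python sketch (function names ours) evaluates $S(x,y,z)$ by trial division and normalizes by $\prod_{p\leq z}(1-\tfrac 1p)\cdot y$, as in the definition of $\sigma_\pm$:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def S(x, y, z):
    """#{ n in (x, x+y] : gcd(n, P(z)) = 1 }, by trial division."""
    ps = primes_up_to(z)
    return sum(all(n % p != 0 for p in ps) for n in range(x + 1, x + y + 1))

# Normalize as in the definition of sigma_+/-: divide by prod(1 - 1/p) * y.
y, z = 10**4, 30
G = 1.0
for p in primes_up_to(z):
    G *= 1 - 1 / p
for x in (0, 10**6, 10**9):
    print(x, S(x, y, z), round(S(x, y, z) / (G * y), 3))
```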
In \eqref{eq: logtotheAPredictions} we have conjectured that these bounds are ``best possible''; paraphrasing, we are postulating that Maier's observation about the effect of small prime factors is the key issue in estimating the extreme number of primes in intervals with lengths significantly longer than $(\log x)^2$. In fact our conjectures come from firstly sieving by small primes, and secondly looking at the tail probabilities of the binomial distribution that comes from a probabilistic model which takes account of divisibility by small primes. We will study in Appendix B how well some (relatively small) data for the full distribution compares to reality. \section{Sieve methods and their limitations} Let $\mathcal A$ be a set of integers (of size $y$) to be \emph{sieved} (in our case the integers in the interval $(X,X+y]$), such that \[ \# \{ a\in \mathcal A:\ d|a\} = \frac{g(d)}d y + r(\mathcal A,d) \] where $g(d)$ is a multiplicative function, which is more-or-less $1$ on average over primes $p$ in short intervals (in our case each $g(p)=1$), and the error terms $r(\mathcal A,d)$ are small on average (in our case each $|r(\mathcal A,d)|\leq 1$). The goal in sieve theory is to give upper and lower bounds for \[ S(\mathcal A,z) := \#\{ n\in \mathcal A:\ (n,P(z))=1\}. \] This equals $G(z)y$ ``on average'' where $G(z):= \prod_{p\leq z} (1 -\frac{g(p)}p)$. In 1965, Jurkat and Richert \cite{JR} showed that if $y=z^u$ then \begin{equation} \label{eq: JR theorem} (f(u)+o(1)) \cdot G(z)y \leq S(\mathcal A,z) \lesssim F(u) \cdot G(z)y , \end{equation} where $f(u)= e^\gamma(\omega(u) -\frac{\rho(u)}u)$ and $F(u)= e^\gamma(\omega(u) +\frac{\rho(u)}u)$, and $\rho(u)$ and $\omega(u)$ are the Dickman-de Bruijn and Buchstab functions, respectively.
One can define these functions directly by \[ f(u)=0 \text{ and } F(u)=\frac{2e^\gamma}u \text{ for } 0<u\leq 2 \] (in fact $F(u)=\frac{2e^\gamma}u$ also for $ 2<u\leq 3$) and \[ f(u)= \frac 1u \int_1^{u-1} F(t) dt \text{ and } F(u)=\frac {2e^\gamma} u + \frac 1u \int_2^{u-1} f(t) dt \text{ for all } u\geq 2. \] Iwaniec \cite{Iw1} and Selberg \cite{Se1} showed that this result is ``best possible'' by noting that the sets \[ \mathcal A^\pm =\{ n\leq x: \lambda(n)=\mp 1\} \] where $\lambda(n)$ is Liouville's function (so that $\lambda(\prod_p p^{e_p})= (-1)^{\sum_p e_p}$) satisfy the above hypotheses, with \begin{equation} \label{eq: Selbergexample} S(\mathcal A^-,z) \sim f(u) \cdot G(z) \# \mathcal A^- \text{ and } S(\mathcal A^+,z) \sim F(u) \cdot G(z) \# \mathcal A^+. \end{equation} Since our question (bounding $S(x,y,z)$) is an example of this \emph{linear sieve} we deduce that \[ f(u) \leq \sigma_-(u) \leq 1 \leq \sigma_+(u) \leq F(u), \] and we expect that all of these inequalities are strict. However, in \cite{Gr3}, it is shown that if there are infinitely many ``Siegel zeros'',\footnote{That is, the most egregious putative counterexamples to the Generalized Riemann Hypothesis that cannot be ruled out by current methods.} then, in fact, \[ \sigma_-(u) = f(u) \text{ and } \sigma_+(u) =F(u) \text{ for all } u\geq 1. \] Given that eliminating Siegel zeros seems like an intractable problem for now, we are stuck. However in this paper we are allowed to guess at the truth, though we know too few interesting examples to even take an educated guess as to the true values of $\sigma_-(u) $ and $\sigma_+(u)$. It is useful to note the following: \begin{lemma}\label{lem: SieveBehaviour} $\sigma_+(u)$ is non-increasing, $\sigma_-(u)$ is non-decreasing, and $\sigma_+(u), \sigma_-(u)\to 1$ as $u\to \infty$. \end{lemma} \begin{proof} [Proof of Lemma \ref{lem: SieveBehaviour}] Select $x$ so that $S(x,z^B,z)=S^+(z^B,z)$ is attained.
For $A<B$, partition the interval $(x,x+z^B]$ into $z^{B-A}$ disjoint subintervals of length $z^A$, and select the subinterval with $ \#\{ n\in (X,X+z^A]: (n,P(z))=1\}$ maximal. Therefore \begin{align*} S^+(z^A,z)&\geq \max_{\substack{X=x+jz^A\\ 0\leq j\leq z^{B-A}-1}} \#\{ n\in (X,X+z^A]: (n,P(z))=1\} \\ &\geq \frac 1{z^{B-A}} \#\{ n\in (x,x+z^B]: (n,P(z))=1\} = \frac {S^+(z^B,z)}{z^{B-A}}, \end{align*} so that $\sigma_+(A)\geq \sigma_+(B)$. The analogous proof, with the inequalities reversed, yields the result for $\sigma_-$. The fundamental lemma of the small sieve (see, e.g., \cite{FI}) gives that \[ S(x,z^u,z) =\{ 1+O(u^{-u}) \} \prod_{p\leq z}\bigg( 1-\frac 1p \bigg) \cdot z^u \] so that $\sigma_+(u), \sigma_-(u)=1+O(u^{-u})=1+o_{u\to \infty}(1)$. \end{proof} \subsection{Best bounds known} In Maier's paper he used the well-known fact that for all $u\geq 1$, \[ \#\{ n\leq z^u:\ (n,P(z))=1\} \sim \omega(u) \frac{z^u}{\log z} \] where $\omega(u)$ is the \emph{Buchstab function}, defined by $\omega(u)=\frac 1u$ for $1\leq u\leq 2$, and $(u\omega(u))'=\omega(u-1)$ for all $u\geq 2$. By Lemma \ref{lem: SieveBehaviour} we have \[ \sigma_+(A) =\max_{B\geq A}\sigma_+(B) \geq e^{\gamma} \max_{B\geq A} \omega(B), \] and, similarly, $\sigma_-(A) \leq e^{\gamma} \min_{B\geq A} \omega(B)$. For all we know, it could be that \[ \sigma_+(A) = e^{\gamma} \max_{B\geq A} \omega(B) . \] That is, it could be that the most extreme example of sieving an interval, $S(x,z^A,z)$, occurs where $|x|< z^{O(1)}$, that is when $x$ is very small, but there is little evidence that there are no other intervals with even more extreme behaviour. In \cite{MS}, Maier and Stewart noted that one could obtain smaller upper bounds for $\sigma_-(A)$ for small $A$. Their idea was to construct a sieve based on the ideas used to prove that there are long gaps between primes: Fix $2>u>1$. One first sieves the interval $[1,x]$ where $x=z^u$ with the primes in $(z^{1/v},z]$ where $1\leq v\leq \frac 1{u-1}$.
The integers left are the $z^{1/v}$-smooth integers up to $x$, and the integers of the form $mp\leq x$ for some prime $p\in (z,x]$ (note that $m\leq x/p<x/z=z^{u-1}\leq z^{1/v}$). The number of these is \[ \psi(z^u,z^{1/v}) + \sum_{z<p\leq x} \left[ \frac xp \right] \lesssim x \rho(uv)+ x \sum_{z<p\leq z^u} \frac 1p \sim x(\rho(uv)+\log u). \] Next we sieve ``greedily'' with the primes $\leq z^{1/v}$ so that the number of integers left is \[ \lesssim \prod_{p\leq z^{1/v}}\bigg( 1-\frac 1p \bigg)\cdot x(\rho(uv)+\log u) \sim v(\rho(uv)+\log u) \frac{e^{-\gamma} x}{\log z}. \] We now select $v=v_u\in [1,\frac 1{u-1}]$ to minimize $r_u(v):=v(\rho(uv)+\log u)$. Since \[ r_u'(v)=\rho(uv)+\log u +uv\rho'(uv)=\rho(uv)+\log u -\rho(uv-1) , \] we select $v_u$ so that $r_u'(v_u)=0$. If $u=1+1/\Delta$ with $1/\Delta=o(1)$ then \[ v_u\sim \frac{\log \Delta}{\log\log\Delta} \text{ and so } r_u(v_u)\sim \frac{\log \Delta}{\Delta \log\log\Delta}. \] On the other hand if we use the Buchstab function then we cannot obtain a constant smaller than $e^\gamma/2$. Thus for $1\leq A\leq 2$, we have \[ \sigma_-(A) \leq \min\{ e^\gamma/2, r_A(v_A) \}. \] In \cite{MS} this argument is extended to show that $r_A(v_A)$ is the minimum exactly when $1\leq A\leq 1.50046\dots$. Unfortunately we are only really interested in $\sigma_-(A) $ for $A\geq 2$ in this article. Now $\omega'(u)$ changes sign in every interval of length 1, so $\omega(u)$ has lots of minima and maxima, which occur whenever $\omega(u)=\omega(u-1)$ (since $u\omega'(u)=\omega(u-1)-\omega(u)$). Nonetheless its global minimum occurs at $u=2$ so that $\sigma_-(2) \leq e^{\gamma} \omega(2)=\frac {e^\gamma}2$ (and we saw earlier that the linear sieve bounds give $\sigma_-(2)\geq 0$). We are most interested in $\sigma_+(2)$, which is bounded below by $e^{\gamma} \max_{B\geq 2} \omega(B)$.
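The maximum of $\omega$ here is easy to locate numerically: writing $W(u):=u\omega(u)$, the delay equation becomes $W=1$ on $[1,2]$ and $W'(u)=W(u-1)/(u-1)$ for $u\geq 2$, which can be stepped forward on a grid. A sketch (grid step and range are our choices):

```python
# Solve the Buchstab delay equation numerically: omega(u) = 1/u on [1,2]
# and (u omega(u))' = omega(u-1) for u >= 2.  With W(u) := u*omega(u)
# this is W = 1 on [1,2] and W'(u) = W(u-1)/(u-1), stepped by trapezoid.
h = 1e-4
N = int(3 / h)                       # grid over [1, 4]
u = [1 + i * h for i in range(N + 1)]
W = [1.0] * (N + 1)
k = int(1 / h)                       # index shift corresponding to u - 1
for i in range(k + 1, N + 1):
    d0 = W[i - 1 - k] / (u[i - 1] - 1)
    d1 = W[i - k] / (u[i] - 1)
    W[i] = W[i - 1] + h * (d0 + d1) / 2
omega = [W[i] / u[i] for i in range(N + 1)]
i_max = max(range(k, N + 1), key=lambda i: omega[i])
print(round(u[i_max], 2), round(omega[i_max], 3))   # location and value of max
```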
This maximum occurs at $B\approx 2.75$ with $\omega(B)\approx 0.57$, so that $\sigma_+(2)\geq 1.015\dots $ (and we saw earlier that the linear sieve bounds give $\sigma_+(2)\leq e^\gamma=1.78107\dots$). In section 1.3 we have \[ c_+=\sigma_+(2) \] and took this to be equal to $1.015\dots $ in our computations as this is the best lower bound known on $\sigma_+(2)$. Similarly in section 1.4 we have \[ c_-=\sigma_-(2) \] and took this to be equal to $\frac {e^\gamma}2 $ in our model as this is the best upper bound known on $\sigma_-(2)$. It could be that these are equalities, but there is little evidence either way. \section{Very short intervals ($y\leq \log x$)} If a set of integers $A$ is inadmissible then there exists a prime $p$ which divides $n+a$ for some $a\in A$, for each integer $n$, and so obstructs these from all being simultaneously prime, once $n$ is sufficiently large. On the other hand, Hardy and Littlewood's prime $k$-tuplets conjecture \cite{HL} states that if $A$ is an admissible set then there are infinitely many integers $n$ for which $n+a$ is prime for every $a\in A$, and this seems to be supported by an accumulation of evidence. We are interested in $\pi(n,n+y]$, the number of primes in intervals $(n,n+y]$ of length $y$ (with $y$ small compared to $n$), particularly the minimum, $m(x,y)$, and the maximum, $M(x,y)$, as $n$ varies between $x$ and $2x$. If the primes in $(n,n+y]$ are $\{ n+a: a\in A\}$ with $n>y$, then $A$ is an admissible set, say of size $k$, and therefore \[ \pi(n,n+y]:= \pi(n+y)-\pi(n) = k \leq S(y), \] where $S(y)$ is the maximum size of an admissible set $A$ of length $y$. Moreover this implies that if the prime $k$-tuplets conjecture holds then \[ \max_{n\geq y} \pi(n,n+y] = S(y). \] How large is $S(y)$? One can show that the primes in $(y,2y]$ yield an admissible set and so $S(y)\gtrsim \frac y{\log y}$ (by the prime number theorem).
It is believed that \[ S(y)\sim \frac y{\log y} \] but the best upper bound known is $S(y) \lesssim \frac {2y}{\log y}$ (by the upper bound in \eqref{eq: JR theorem}), and this upper bound seems unlikely to be significantly improved in the foreseeable future (as we again run into the Siegel zero obstruction). Calculations support the believed size of $S(y)$. One interesting theorem, due to Hensley and Richards, is that if $y$ is sufficiently large then $S(y)>\pi(y)$ and so, if the prime $k$-tuplets conjecture is true, then for all sufficiently large $y$ there exist infinitely many intervals of length $y$ that have more primes than the initial interval $[1,y]$. The known values of, and bounds on, $S(y)$ can be found on \texttt{http://math.mit.edu/$\sim$primegaps/} and from there we see that $S(3432)\geq 481> \pi(3432)=480$. Therefore we believe that there are infinitely many intervals of length $3432$ containing exactly 481 primes, more than the 480 primes $\leq 3432$ found at ``the start''. However, finding such an interval (via methods based on this discussion) involves finding a prime $481$-tuple, which would be an extraordinary challenge unless one is very lucky. So assuming the prime $k$-tuplets conjecture we know that $\max_{n\geq y} \pi(n,n+y] = S(y)$ for fixed $y$, and we might expect that $M(x,y) = S(y)$ for $y$ which grows (slowly) with $x$. In sections 4.1 and 8.1 we present two quite different heuristics to suggest that \begin{equation} \label{eq: VSmall.Prediction} M(x,y) = S(y) \text{ for all } y\leq \{ 1-o(1)\} \log x; \end{equation} and we saw, in section 2.1, that this is well supported by the data that we have. By a simple sieving argument Westzynthius showed in the 1930s that for any constant $C>0$ there exist intervals $[x,x+C\log x]$ which do not contain any primes.
This argument is easily modified to show that for any $c>0$ \[ m(x,c\log x):=\min_{X\in (x,2x]} (\pi(X+c\log x)-\pi(X)) =0 \text{ if } x \text{ is sufficiently large.} \] We will give two theoretical justifications for our prediction \eqref{eq: VSmall.Prediction}, supporting the conclusions we have drawn from the data represented in the graphs above. The first is explained in the next section and relies on guessing at what point a given admissible set yields roughly as many prime $k$-tuplets as conjectured. The second, a more traditional approach, is explained in section 8.1, developing the Gauss-Cram\'er heuristic (given in section 6) so that it takes account of divisibility by small primes. \subsection{An explicit prime $k$-tuplets conjecture} For a given admissible set of linear forms $b_jn+a_j,\ j=1,\dots, k$, Hardy and Littlewood \cite{HL} conjectured that \begin{equation} \label{eq: kTuplets} \#\{ x<n\leq 2x:\ \text{ Each } b_jn+a_j \text{ is prime}\} \sim \prod_p \bigg( 1-\frac 1p \bigg)^{-k} \bigg( 1-\frac {\omega(p)}p \bigg) \cdot \frac x{(\log x)^k} , \end{equation} where $\omega(p)$ is the number of $n\pmod p$ for which $p$ divides $\prod_{j=1}^k (b_jn+a_j)$.\footnote{Here \emph{admissible} can be defined to be those $k$-tuples for which every $\omega(p)<p$. A set $A$ is admissible if and only if the set $\{ n+a: a\in A\}$ of linear forms is admissible.} We wish to know for what $x$ are the two sides of \eqref{eq: kTuplets} equal up to a small factor, and for what $x$ can we obtain a good lower bound on the right-hand side. This conjecture is known to be true as $x\to \infty$ for $k=1$ (where we may assume that $1\leq a\leq b-1$). There is a lot of data on primes in arithmetic progressions, and these all suggest that \eqref{eq: kTuplets} holds uniformly for all $x\geq b^{\epsilon}$ for any fixed $\epsilon>0$.\footnote{Surprisingly, no way to prove this is known.
The best we know how to obtain, assuming the Generalized Riemann Hypothesis, is that if $k=1$ then \eqref{eq: kTuplets} holds for all $x\geq b^{1+\epsilon}$, though this can be obtained ``on average'' unconditionally. Linnik's Theorem implies that there exists a constant $\lambda$ such that one can obtain a lower bound on the left-hand side of \eqref{eq: kTuplets} once $x\gg b^\lambda$ (and so there is a prime $\ll b^{\lambda+1}$ in each reduced residue class mod $b$). In 2011, Xylouris \cite{Xyl} showed that we can take $\lambda=4$, the smallest $\lambda$ known to date.} Let $A$ be an admissible set of size $k=S(y)\sim \frac {\alpha y}{\log y}$ (where we believe $\alpha=1$), a subset of the positive integers $\leq y$. Since there are $\ll \frac y{(\log y)^2}$ integers in $A$ that are $<\frac y{\log y}$ (by the sieve), we deduce that $Q:=\prod_{a\in A} a = e^{(\alpha+o(1))y}=k^{(1+o(1))k}$. Now $\omega(p)=k$ for all $p\geq y$ (since no two elements of $A$ can be in the same congruence class mod $p$), so that \[ \prod_{p>y} \bigg( 1-\frac 1p \bigg)^{-k} \bigg( 1-\frac {\omega(p)}p \bigg) = \prod_{p>y} \bigg( 1-\frac 1p \bigg)^{-k} \bigg( 1-\frac {k}p \bigg) = e^{-o(k^2/y)}. \] Otherwise $1\leq \omega(p)\leq \min \{ k,p-1\} $ so that \[ e^{o(k)} = \bigg( \frac{\log 2y}{\log k} \bigg)^k \gg \prod_{y\geq p>k} \bigg( 1-\frac 1p \bigg)^{-k} \bigg( 1-\frac {\omega(p)}p \bigg) \geq e^{-o(k)}. \] For the primes $\leq k$ we have $p-1\geq \omega(p)\geq 1$ and so \[ 1\geq \prod_{p\leq k} \bigg( 1-\frac {\omega(p)}p \bigg) \geq 1\bigg/ \prod_{p\leq k} p=e^{-k+o(k)} . \] Therefore, by Mertens' theorem, we have \[ \prod_p \bigg( 1-\frac 1p \bigg)^{-k} \bigg( 1-\frac {\omega(p)}p \bigg) = (e^{O(1)} \log k)^{k}. \] So there exists a constant $C>0$ such that the right-hand side of \eqref{eq: kTuplets} is $\geq 1$ when $(C\log k)^{k}x>(\log x)^k$. This certainly happens when $x=k^{ck}$ for any fixed $c>1$; that is, $x>Q^{1+\epsilon}$.
One might guess that there is an error term in \eqref{eq: kTuplets} of size $x^{1/2+o(1)}$, in which case we must take $c>2$, that is $x>Q^{2+\epsilon}$, to guarantee that the left-hand side of \eqref{eq: kTuplets} is positive. Now if $\#\{ x<n\leq 2x:\ \text{ Each } n+a \text{ is prime, for each } a\in A\}\geq 1$ then $M(x,y)=S(y)$. From the above we might guess this holds when $x>Q^{1+\epsilon}$ where $Q=e^{(1+o(1))y}$; that is, if $y\leq (1-o(1))\log x$. Indeed we only need the above heuristic discussion to be roughly correct ``on average'' over all such admissible sets, to support the conjecture in \eqref{eq: VSmall.Prediction}.\footnote{This reasoning suggests that even if we are pessimistic, we would simply change the range in \eqref{eq: VSmall.Prediction} to $y\leq (c+o(1))\log x$ for some constant $c\in (0,1)$.} \section{Cram\'er's heuristic} Gauss noted, from calculations of the primes up to 3 million, that the density of primes at around $x$ is about $\frac 1{\log x}$. Cram\'er used this as his basis for a heuristic to make predictions about the distribution of primes: Consider an infinite sequence of independent random variables $(X_n)_{n\geq 3}$ for which \[ \text{Prob}(X_n=1) = \frac 1{\log n} \text{ and } \text{Prob}(X_n=0) =1- \frac 1{\log n}. \] By determining what properties are true with probability $1+o(1)$ for the sequence of $0$'s and $1$'s given by $X_3,X_4,\dots$, Cram\'er suggested that such properties must also be true of the sequence $1,0,1,0,1,0,0,0,1,\dots$ of $0$'s and $1$'s which is characteristic of the odd prime numbers. For example, if $N$ is sufficiently large then \[ S_N:=\sum_{n=3}^N X_n \] has mean $\int_2^N \frac{dt}{\log t}+O(1)$ and roughly the same variance, which suggests the conjecture that $\pi(N)=\int_2^N \frac{dt}{\log t}+O(N^{1/2+o(1)})$; it is known that this conjecture is equivalent to the Riemann Hypothesis. 
So for this particular statistic, Cram\'er's heuristic makes an important prediction and it can be applied to many other problems to make equally suggestive predictions. However Cram\'er's heuristic does have an obvious flaw: Since it treats all the random variables as independent, we have $\text{Prob}(X_n=X_{n+1}=1)\approx \frac 1{(\log n)^2} $, so that \[ \mathbb{E} \bigg( \sum_{n=3}^{N-1} X_nX_{n+1} \bigg)= \int_2^N \frac{dt}{(\log t)^2}+O(N^{1/2+o(1)}) \] with probability $1+o(1)$, which, Cram\'er's heuristic suggests, implies that there are infinitely many prime pairs $n,n+1$. But we have seen that this is not so, as $\{ 0,1\}$ is an inadmissible set. More dramatically, this heuristic would even suggest that $M(x,y)=y$ for all values of $y\leq \{ 1+o(1)\} \log x$. From the previous section we know that this is false because $M(x,y)\leq S(y)$, as every $\pi(n,n+y]$ is restricted by those integers that are divisible by ``small'' primes, that is primes $\leq y^{1+o(1)}$. This heuristic also suggests that the primes are equi-distributed amongst all of the residue classes modulo a given integer $q$, rather than just the reduced classes. It therefore makes sense to modify Cram\'er's probabilistic model for the primes to take account of divisibility by ``small'' primes. The obvious way to proceed is to begin by sieving out the integers $n$ that are divisible by a prime $p\leq z$ (perhaps with $z=y$), and then to apply an appropriate modification of Cram\'er's model to the remaining integers, that is the integers that have no prime factor $\leq z$. The number of such integers up to $x$ is \[ \sim \kappa x \text{ where } \kappa=\kappa(z):=\prod_{p\leq z} \bigg( 1 - \frac 1p \bigg) \] if $z=x^{o(1)}$, and so the density of primes amongst such integers is $\frac 1{\kappa \log x}$. We therefore proceed as follows: Define $P=P(z):=\prod_{p\leq z} p$ so that $\kappa(z)=\frac{\phi(P)}{P}$. 
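As a quick numerical check (our own sketch, not part of the heuristic itself), one can compare $\kappa(z)=\prod_{p\leq z}(1-\tfrac 1p)$ with Mertens' approximation $e^{-\gamma}/\log z$:

```python
# Sketch: compute kappa(z) = prod_{p <= z} (1 - 1/p) and compare it with
# Mertens' approximation e^{-gamma} / log z.
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def kappa(z):
    prod = 1.0
    for p in primes_up_to(z):
        prod *= 1 - 1 / p
    return prod

gamma = 0.5772156649015329  # Euler's constant
z = 10_000
mertens = math.exp(-gamma) / math.log(z)
```

Already at $z=10^4$ the two quantities agree to within a few percent.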
We consider an infinite sequence of independent random variables $(X_n)_{n\geq 3}$ for which $X_n=0$ if $(n,P)>1$; and \[ \text{Prob}(X_n=1) = \frac 1{\kappa \log n} \text{ and } \text{Prob}(X_n=0) =1- \frac 1{\kappa \log n} \text{ if } (n,P)=1. \] With this model we can again accurately predict the prime number theorem (and the Riemann Hypothesis), as well as asymptotics for primes in arithmetic progressions, for prime pairs, and even for admissible prime $k$-tuplets (with $k\leq z$). Moreover, this will allow us to obtain our predictions for maximal and minimal values of $\pi(x,x+y]$ (including the prediction for $y\ll \log x$ that we already deduced from assuming enough uniformity in the prime $k$-tuplets conjecture in section 4.1). If $n\in (x,2x]$ with $(n,P)=1$ then $\text{Prob}(X_n=1) = \frac 1{L}+O( \frac 1{L\log x})$ where $L:=\kappa \log x$, so for convenience we will work with a model where each $\text{Prob}(X_n=1) = \frac 1{L}$. There are, say, $N$ integers in $(X,X+y]$ that are coprime to $P$ where, a priori, $N$ could be any number between $0$ and $y$ (though we can refine that to $0\leq N\leq S^+(y,z) \ll \frac y{\log z}$ by the sieve). We now develop a model where $L$ and $N$ are fixed: \section{The maxima and minima of a binomial distribution} \label{sec: maxmin} Suppose that we have a sequence of independent, identically distributed random variables $X_1,\dots,X_N$ with \[ \mathbb{P}(X_n=1) = \frac 1L \text{ and } \mathbb{P}(X_n=0) =1- \frac 1L, \] where $L$ is large. Let \[ \mathbb Y:=\sum_{n\leq N} X_n. \] Then $ \mathbb Y$ is a \emph{binomially distributed random variable}, which is often denoted $B(N,\tfrac 1L)$. \begin{prop} \label{prop: k-values} Suppose that $N\ll L\log x$, and that $L\to \infty$ as $x\to \infty$. 
If $k_-=k_-(N,L,x)$ is the largest integer for which \[ \mathbb{P} ( \mathbb Y < k_- ) \leq \frac 1{x} \] then \[ k_- = \begin{cases} \, \qquad 0 &\text{ if } N\leq \{ 1+o(1)\} L\log x;\\ \{ \delta_-(\lambda)+o(1)\} \tfrac NL &\text{ if } N= \{ \lambda+o(1)\} L\log x \text{ with } \lambda>1;\\ \end{cases} \] where $\delta_-=\delta_-(t)$ is the smallest positive solution to $ \delta (\log \delta-1)+1 =1/t$. If $k_+=k_+(N,L,x)$ is the smallest integer for which \[ \mathbb{P} ( \mathbb Y \geq k_+ ) \leq \frac 1{x}. \] then \[ k_+= \begin{cases} \, \qquad N &\text{ if } N\leq \frac{\log x}{\log L} ;\\ \{ 1+o(1)\} \frac {\log x}{ \log \big(\tfrac{L\log x}{N} \big) } & \text{ if } \frac{\log x}{\log L} \leq N=o(L\log x);\\ \{ \delta_+(\lambda)+o(1)\} \tfrac NL &\text{ if } N= \{ \lambda+o(1)\} L\log x \text{ with } \lambda>0;\\ \end{cases} \] where $\delta_+=\delta_+(t)$ is the largest positive solution to $ \delta (\log \delta-1)+1 =1/t$. We observe that $k_-\leq k_+\ll \log x$ if $N\ll L\log x$. \end{prop} \begin{proof} From the independent binomial distributions we deduce that if $0\leq k\leq N$ then \[ \mathbb{P}(\mathbb Y=k) = \mathbb{P}\bigg( \sum_{n\leq N} X_n=k\bigg) = \binom Nk \bigg( \frac 1L\bigg)^k \bigg( 1- \frac 1L \bigg)^{N-k}. \] Therefore $\mathbb{P}(\mathbb Y=N) =1/L^N$ and this is $>1/x$ provided $N\leq \tfrac{\log x}{\log L}$. Also $\mathbb{P}(\mathbb Y=0) = ( 1- \frac 1L )^{N}=e^{- N/L+O(N/L^2)}$ which is $>\frac 1x$ for $N\leq \{ L+O(1)\} \log x$.\footnote{To be more precise we obtain $N\leq \frac{\log x}{-\log(1-\frac 1L)} = (L-\tfrac 12-\tfrac 1{12L}+O(\tfrac 1{L^2})) \log x$.} We now estimate the terms in our formula for $\mathbb{P}(\mathbb Y=k)$: \begin{align*} \binom Nk &= \frac{N^k}{k!} \prod_{i=0}^{k-1} \bigg( 1-\frac iN\bigg) = \frac{N^k}{(k/e)^k} k^{O(1)} \exp \bigg( \sum_{i=0}^{k-1} O\bigg( \frac iN \bigg) \bigg) \\ & = \frac{N^k}{(k/e)^k} \exp \bigg( O\bigg( \frac {k^2}{N} +\log k\bigg) \bigg) . \end{align*} by Stirling's formula. 
We also have $( 1- \frac 1L )^{N-k}=\exp( -\frac NL +O(\frac kL+\frac N{L^2}))$, and so \[ \mathbb{P}(\mathbb Y=k) = \bigg( \frac {eN}{kL} \bigg)^k \exp\bigg(- \frac NL +O\bigg( \frac{k^2}N + \log k+ \frac kL + \frac N{L^2} \bigg)\bigg). \] Therefore if $N=o(L\log x)$ and $k=o(\log x)$ then $k^2/N\leq k=o(\log x)$ so that \[ \mathbb{P}(\mathbb Y=k) = \bigg( \frac {eN}{kL}\bigg)^k x^{o(1)}, \] and this equals $x^{-1+o(1)}$ if and only if \[ k \sim \frac {\log x}{ \log (\tfrac{L\log x}{N} ) }. \] Finally we deal with the range $N=\lambda L\log x$ with $\lambda>0$. If $k=\delta \lambda \log x$ with $\delta>0$ then, by the above estimate, \[ \mathbb{P}(\mathbb Y=k)= \bigg( \frac{e\lambda \log x}k \bigg)^k \exp\bigg( -\lambda \log x +O\bigg( \frac{\log x}L \bigg)\bigg) = 1/x^{\lambda(1-\delta \log(e/\delta)) +o(1)}, \] which equals $1/x^{1+o(1)}$ if $\delta=\delta_{\pm}(\lambda)$ so that $\lambda(1-\delta \log(e/\delta))=1$. \end{proof} \begin{remark} There are well-known bounds on the tail of the binomial distribution (see, e.g., \cite{Fell}) which can be used to obtain this last result: \[ \frac 1{\sqrt{8k(1-\tfrac kN)}}\exp\bigg( -N\, \mathbb D \bigg( \frac kN \bigg| \frac 1L \bigg) \bigg) \leq \begin{cases} \mathbb{P} ( \mathbb Y \leq k) &\text{ if } k\leq \frac NL\\ \mathbb{P} ( \mathbb Y \geq k) &\text{ if } k\geq \frac NL \end{cases} \leq \exp\bigg( -N\, \mathbb D \bigg( \frac kN \bigg| \frac 1L \bigg) \bigg) \] where \[ \mathbb D(a|p):= a \log \frac ap + (1-a) \log \frac {1-a}{1-p} \] which is called the \emph{relative entropy} in some circles (this clean upper bound can be obtained by an application of Hoeffding's inequality); the two cases are equivalent since if $k\geq \frac NL$ then $\mathbb D(1-a|1-p)=\mathbb D(a|p)$. 
Using these inequalities we would determine $\delta=\delta(t,L)$ from the functional equation \[ L\, \mathbb D \bigg( \frac \delta L \bigg| \frac 1L \bigg) = \frac 1 t\bigg( 1 + O \bigg( \frac {\log\log x}{ \log x} \bigg) \bigg) , \] which is slightly different, but yields $\delta(t,L)=\delta(t)+O(\tfrac 1{\log \delta(t)} ( \tfrac 1L+ \tfrac {\log\log x}{\log x}))$, a negligible difference in the ranges we are concerned about. \end{remark} \section{Asymptotics} \label{sec: uandv} In section 1.3 we used the solutions $u=u_-\in (0,t)$ and $u=u_+\in (t,\infty)$ to \[ u(\log u -\log t -1)+t=1 \] where $u(t)=t\delta(t)$, and $\delta=\delta_-\in (0,1)$ and $\delta=\delta_+\in (1,\infty)$ are the solutions to \[ f(\delta):=1-\delta \log(e/\delta)=\frac 1t. \] To verify these claims, we note that $f(0)=1, f(1)=0$ and $f(\infty)=\infty$. We have $\frac{df}{d\delta}= \log \delta $ so $f$ (as a function of $\delta$) has its minimum $f(1)=0$ with $f''(\delta)>0$ for all $\delta>0$. Therefore there exists a unique $\delta_-\in (0,1)$ with $f(\delta_-) =1/t$ for all $t>1$ and no such $\delta_-$ otherwise. Moreover $\delta_-(t)$ is an increasing function with limit $1$. Also, there exists a unique $\delta_+>1$ with $f(\delta_+) =1/t$ for all $t>0$. Moreover $\delta_+(t)$ is a decreasing function with limit $1$. We will now show that $ u_+(t)$ is increasing in $t>0$ and $ u_-(t)$ is increasing in $t\geq 1$. Differentiating $f(\delta)=\frac 1t$ we obtain $\log \delta \cdot \frac{d\delta}{dt} = - \frac 1{t^2}$. Therefore \[ \frac d{dt} \log u(t)=\frac d{dt} \log t\delta= \frac{1}{\delta} \frac{d\delta}{dt} +\frac 1t = \frac 1t - \frac 1{t^2\delta \log \delta} = \frac {t\delta \log \delta-1} {t^2\delta \log \delta}= \frac {\delta-1} {t\delta\log \delta} >0 \] for all $\delta>0$. 
We can be more precise about the limits: \subsection{Estimates as $t\to \infty$} Write $\delta= 1+\theta$ so that \[ 1-1/t=(1+\theta)(1-\log (1+\theta)) =1-\frac{\theta^2}2+\frac{\theta^3}6-\frac{\theta^4}{12}+\dots \] Therefore $\theta=\pm \frac{2^{1/2}}{t^{1/2}} + \frac 1{3t}\pm \frac{1}{ 9(2t)^{3/2}} +O(\frac 1{t^2})$ as $t\to \infty$, so that \begin{align*} u_+(t)=t \delta_+(t)&=t+ (2 t)^{1/2} + \frac 1{3 }+ \frac{1}{ 9\cdot 2^{3/2}t^{1/2}} +O(\frac 1{t}) \\ u_-(t)=t \delta_-(t)&=t- (2 t)^{1/2} + \frac 1{3 }- \frac{1}{ 9\cdot 2^{3/2}t^{1/2}} +O(\frac 1{t}), \end{align*} for large $t$. So if $t$ is large and $N=t L \log x$ then, in Proposition \ref{prop: k-values}, \[ k_\pm = \bigg(t \pm (2 t)^{1/2} + \frac 1{3 }- O\bigg( \frac 1{t^{1/2}} \bigg) \bigg) \log x \text{ as } t\to \infty. \] \subsection{Approximating the normal distribution} A random variable given as the sum of enough independent binomial distributions tends to look like the normal distribution, at least at the center of the distribution. However since we are looking here at tail probabilities, the explicit meaning of ``enough'' is larger than we are used to. To be specific, $\mathbb Y$ has mean $\mu:=\tfrac NL$ and variance $\sigma^2=\tfrac NL(1-\tfrac 1L)$, and we expect $\mathbb Y$ will eventually be normally distributed with these parameters. If so, then \[ \mathbb{P}(\mathbb Y<\mu-\tau\sigma), \mathbb{P}(\mathbb Y>\mu+\tau\sigma) \approx \frac 1{\sqrt{2\pi}} \int_{\tau}^\infty e^{-t^2/2} dt\sim \frac{e^{-\tau^2/2}}{\tau \sqrt{2\pi}} \] and if this is $\approx 1/x$ then $\tau\sim \sqrt{2\log x}$. Therefore $\tau\sigma \sim ( 2\tfrac NL \log x)^{1/2}$. Writing $N=\lambda L\log x$ we have $\tau\sigma \sim ( 2\lambda )^{1/2}\log x$. Therefore we might expect the maximum and minimum values of $\mathbb Y$ to be $(\lambda\pm (2 \lambda)^{1/2} +o(1))\log x$. We see from section 7.1 that this is correct as $\lambda\to\infty$ (but not for fixed $\lambda$). 
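The roots $\delta_\pm(t)$ have no closed form, but they are easy to compute; the following sketch (ours, with our own helper names) solves $f(\delta)=1/t$ by bisection and checks the large-$t$ expansion $u_\pm(t)=t\pm\sqrt{2t}+\tfrac 13+O(t^{-1/2})$ derived above.

```python
# Sketch: solve f(delta) = 1 - delta*log(e/delta) = 1/t by bisection for
# delta_-(t) in (0,1) and delta_+(t) in (1,infinity), then compare
# u_pm(t) = t*delta_pm(t) with the expansion t +- sqrt(2t) + 1/3.
import math

def f(d):
    return 1 - d * (1 - math.log(d))  # equals 1 - d*log(e/d)

def solve(lo, hi, target):
    # f is monotone on (0,1) and on (1,infinity), so bisection applies
    for _ in range(200):
        mid = (lo + hi) / 2
        if (f(mid) - target) * (f(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def delta_minus(t):
    return solve(1e-12, 1.0, 1 / t)

def delta_plus(t):
    return solve(1.0, 50.0, 1 / t)

t = 100.0
u_plus = t * delta_plus(t)    # close to t + sqrt(2t) + 1/3
u_minus = t * delta_minus(t)  # close to t - sqrt(2t) + 1/3
```

One can also confirm numerically that $\delta_-$ increases and $\delta_+$ decreases towards $1$, as proved above.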
We can see this issue more simply: If $k=\kappa N/L$ with $\kappa>1$ then the binomial distribution gives \[ \text{Prob} (\mathbb Y\geq k) \asymp \bigg(1-\frac 1L\bigg)^N \binom Nk \frac 1{(L-1)^k} =\exp\bigg( -\frac NL (\kappa(\log \kappa-1)+1+o(1))\bigg) \] and the normal distribution (with the same mean and variance) gives \[ \text{Prob} (\mathbb Y\geq k)=\exp\bigg( -\frac NL ( \tfrac 12(\kappa-1)^2+o(1))\bigg) \] and the main terms here are only the same when $\kappa\to 1^+$. \subsection{Estimates as $t\to 0^+$} In the other direction we obtain estimates for $\delta_\pm(t)$ as $t$ gets smaller. If $t\to 0^+$ then we deduce from $\delta_+ ( \log \delta_+-1)+1=1/t$ that \begin{equation} \label{eq: delta_+estimate} \delta_+(t) = \frac{1/t}{ \log \big( \frac{1/t}{e\log 1/t} \big) } \bigg( 1 + O \bigg( \frac{\log\log 1/t}{( \log 1/t)^2} \bigg) \bigg) \end{equation} so that \[ u_+(t)=t \delta_+(t)= \frac{1}{ \log ( 1/t) } \bigg( 1 + O \bigg( \frac{\log\log 1/t}{ \log 1/t} \bigg) \bigg) \] and therefore \[ k_+ \sim u_+(t) \log x \sim \frac{\log x}{ \log(1/t) } \text{ as } t\to 0^+. \] Combining this with the second estimate for $k_+$ in Proposition 1, we deduce that $k_+(N)$ is a continuous function in $N$ in the range of Proposition 1. If $t\to 1^+$ then writing $t=1+\eta$ with $\eta>0$ small and $\delta_-=1/B$, we deduce from $\delta_- ( 1-\log \delta_-)+1=1/t$ that $\frac{1+\log B}B=\eta+O(\eta^2)$ and so \[ 1/\delta_-=B = (1/\eta) \log (1/\eta) \bigg( 1 + O \bigg( \frac{\log\log 1/\eta}{\log 1/\eta} \bigg) \bigg) . \] This implies that \[ u_-(t)=t \delta_-(t)= \frac{\eta}{ \log (1/\eta) } \bigg( 1 + O \bigg( \frac{\log\log 1/\eta}{\log 1/\eta} \bigg) \bigg) \] and therefore \[ k_- \sim u_-(t) \log x \sim \frac{(t-1)\log x}{\log (\tfrac 1{t-1})} \text{ as } t\to 1^+, \] which $\to 0$ as $ t\to 1^+$. 
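The comparison of the two exponents above can be checked directly (our own sketch):

```python
# Sketch: compare the binomial large-deviation exponent
# kappa*(log(kappa) - 1) + 1 with the normal-approximation exponent
# (kappa - 1)^2 / 2; they agree only as kappa -> 1.
import math

def binom_rate(kappa):
    return kappa * (math.log(kappa) - 1) + 1

def normal_rate(kappa):
    return (kappa - 1) ** 2 / 2
```

For instance, at $\kappa=3$ the binomial exponent is about $1.30$ while the normal one is $2$, whereas at $\kappa=1.001$ the two agree to three decimal places.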
This suggests that $k_-=0$ for $N<\{ 1-o(1)\} L\log x$ but grows like \[ \frac {N-L\log x} {L \log \tfrac{N} {N-L\log x}} \] for a small range near $L\log x$ which we denote by $L\log x<N<\{ 1+o(1)\} L\log x$. \section{Applying the modified Cram\'er heuristic} Here is the general set-up. For some $z\leq y$ define $P=P(z):=\prod_{p\leq z} p$ so that $P(z)=e^{(1+o(1))z}$ by the prime number theorem. For $S(x,y,z) := \#\{ n\in (x,x+y]:\ (n,P(z))=1\}$ (as in section 2.2) we define \[ I(N)=\{ X\in (x,2x]: S(X,y,z) =N\} . \] for each integer $N$ in the range $0\leq N\leq S^+(y,z)$. Our heuristic is that the values \[ \pi(X,X+y] \text{ for } X\in I(N), \] are distributed like the binomially distributed random variable \[ B(N,\tfrac 1L) \text{ where } L=\frac {\phi(P)}P \log x. \] We therefore use Proposition 1 (with $x$ there equal to $\# I(N)$) to predict the value of \[ M_N(x,y) := \max_{X\in I(N)} \pi(X,X+y] \] for each $N$ with $I(N)$ non-empty. From these predictions we obtain our predictions for \[ M(x,y) = \max_N M_N(x,y). \] One can work out the details of this heuristic to make precise conjectures provided we can get a good estimate for $\log \# I(N)$. This is not difficult when $z\leq \epsilon \log x$: For each $m,0\leq m\leq P-1$ we have \[ S(X,y,z)=S(m,y,z) \text{ whenever } X\equiv m \pmod {P(z)}, \] since $(X+j,P)=(m+j,P)$ for all $j$. Moreover these intervals $(X,X+y]$ with $X\equiv m \pmod {P(z)}$ are all disjoint so can be considered to be independent. Therefore if $N=S(m,y,z) $ then $P=P(z)\leq x^{\epsilon+o(1)}$ and so \[ \#I(N)\geq \#\{ X\in (x,2x]: X\equiv m \pmod {P(z)}\} = x/P+O(1) \geq x^{1-\epsilon+o(1)}. \] Hence, when $z$ is this small, the answer given by our heuristic depends only on the extreme values, $S^-(y,z)$ and $S^+(y,z)$. Getting a good estimate for $\log \# I(N)$ is not straightforward if $z$ (and therefore $y$) is significantly larger than $\log x$. 
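The periodicity used above, namely that $S(X,y,z)$ depends only on $X \bmod P(z)$, is easy to verify directly; here is a small sketch of ours (the function names are our own):

```python
# Sketch: S(X, y, z) = #{n in (X, X+y] : gcd(n, P(z)) = 1} depends only
# on the residue of X mod P(z), since (X+j, P) = (m+j, P) when X = m mod P.
import math

def P(z):
    """Product of the primes up to z (trial division; fine for small z)."""
    prod = 1
    for p in range(2, z + 1):
        if all(p % q != 0 for q in range(2, int(p ** 0.5) + 1)):
            prod *= p
    return prod

def S(X, y, z):
    Pz = P(z)
    return sum(1 for n in range(X + 1, X + y + 1) if math.gcd(n, Pz) == 1)
```

For example, with $z=7$ one has $P(7)=210$ and $S(X,y,7)$ repeats with period $210$ in $X$.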
However one expects our heuristic to be more accurate the larger $z$ is, so we have to find the right balance in our selection of $z$. \subsection{Very short intervals ($y\ll \log x$)}\label{applyingmodel-veryshort} If $y\leq \eta \log x$ with $0<\eta<\tfrac 12$ small, then the above discussion suggests taking $z=y$. Hence $S^+(y,z)=S^+(y,y)=S(y)$. For each $m\pmod P$ we apply Proposition 1 with \[ N=S(m,y,y),\ L=\frac {\phi(P)}P \log x, \text{ and } x \text{ replaced by } x^{1-\eta}. \] For given $L$ and $x$, one obtains the largest value of $k_+$ in Proposition 1 when $N$ is as large as possible. This happens here when $N=S(y)$, which we believe is $\sim \frac{y}{\log y}$ and know is no more than twice this. Now $ L\asymp \frac{\log x}{ \log y}$ and Proposition 1 then implies that $k_+=N=S(y)$ as long as $S(y) \leq (1-\eta+o(1)) \frac{\log x}{\log L}$, which should be true for any fixed $\eta < \tfrac 12$ (and at worst for $\eta < \tfrac 13$). This supports the conjecture \eqref{eq: VSmall.Prediction} in a range like $y\leq (\tfrac 12-o(1))\log x$. What about for larger $y$? \subsection{Larger $y$ with a different choice of intervals} For larger $y$, say $ \log x\ll y <(\log x)^A$ with $A>2$, we need to decide how to select our value for $z$. One might guess that the right way to do so is to take $z=y$.\footnote{We do not wish to sieve with primes larger than the length of the interval, since any larger primes cannot divide more than one element in an interval of length $y$, so cannot be helpful in a sieve argument.} That is, to sieve the intervals of length $y$ with all of the primes $\leq z=y$, and then apply the modified Cram\'er model. In this case the sets $\{ j\in [1,y]: (X+j,P)=1\}$ are probably different for every $X\in (x,2x]$ (certainly they do not repeat periodically as in the earlier subsection), which seems difficult to cope with. 
However we do not need to understand these sets so precisely; we only need to understand their size, that is, to have good estimates for $\log \# I(N)$ for each $N$, but even this seems to be out of reach. Therefore this is the less desirable option (though we work through some of the details in Appendix C). In general, we do not know how to get good estimates for $\log \# I(N)$ whenever $z$ is substantially larger than $\log x$. These (for now insurmountable) issues suggest that we should proceed as before, with a smallish value of $z$, so as to recover the sieved sets repeating predictably. Therefore we pre-sieve the intervals of length $y$ with all of the primes $\leq z:=\epsilon \log x$, and then apply the modified Cram\'er model. There might be a substantial difference when sieving with the primes $\leq z$, as opposed to $y$, though we hope not. If there is a substantial difference then this needs further investigation. \subsection{Larger $y$; Predictions by pre-sieving up to $z=o( \log x)$} We pre-sieve with the primes up to $z=\epsilon \log x$ where $\epsilon\to 0$ very slowly as $x\to \infty$. In this case we have seen that we may cut to the chase by taking \[ N_+=S^+(y,z)=:e^{-\gamma} \frac{y}{\log z}c_+ \text{ and } L=\frac{\phi(P)}P \log x\sim e^{-\gamma} \frac{\log x}{\log\log x}. \] \medskip \noindent \textbf{Prediction:\ Pre-sieving up to $z=\epsilon \log x$}: \textsl{ If $\log x\ll y= o( (\log x)^2)$ then \[ M(x,y) = \min\bigg\{ S^+(y,z), \{ 1+o(1)\} \frac {\log x}{ \log \big(\tfrac{(\log x)^2}{y} \big) }\bigg\} . \] If $y=\lambda (\log x)^2$ with $\lambda>0$ then \[ M(x,y) \sim u_+(\lambda c_+) {\log x} \text{ and } m(x,y) \sim u_-(\lambda c_-) {\log x} . \] } \smallskip If $y\asymp \log x$ then this might predict that $M(x,y)=S^+(y,z)>S(y)$ which is obviously false (though not by much) -- in this range it therefore makes sense to sieve up to $z=y$, which will ensure the feasible prediction $M(x,y)=S(y)$ (as we work out in Appendix C). 
If $\lambda$ is large and $y=\lambda (\log x)^2$ then \[ u_+(\lambda c_+) = \lambda c_+ + \sqrt{ 2\lambda c_+ } + O(1) , \] and so $M(x,\lambda (\log x)^2)\sim c_+ \frac y{\log x}$ as $\lambda\to \infty$; and analogously $m(x,\lambda (\log x)^2)\sim c_- \frac y{\log x}$. \begin{proof}[Deduction from the predictions of Proposition 1] We apply Proposition 1 to predict, for each $ 0\leq j\leq P-1$ where $P=P(z)$, \[ M_j(x,y):=\max_{ \substack{X\in (x,2x]\\ X\equiv j \pmod P}} \pi(X+y)-\pi(X) \] and then we guess that $M(x,y)=\max_j M_j(x,y)$. We observe that \[ \# \{ X\in (x,2x]: \ X\equiv j \pmod P\}= \frac xP+O(1)=x^{1-o(1)}\] for each $j$, so we apply Proposition 1 to a set of this size, and the result follows directly. The analogous proof works for $m(x,y)$. \end{proof} \section{Which choices should we make?} We will now distill these discussions, which each yield slightly different predictions. \subsection{Very short intervals ($y\ll \log x$)} In section 1.1, we predicted that if $y\leq c \log x$ then $M(x,y)=S(y)$. This was confirmed by one heuristic in section 4.1, and by a very different heuristic in section 8.1, giving us some confidence in this conclusion. From all three discussions it is not obvious what explicit constant one should take in place of the inexplicit ``$c$''. Our guess is that for any $\epsilon>0$ one has \[ M(x,y)=S(y) \text{ for } y\leq (1-\epsilon) \log x, \] if $x$ is sufficiently large, as well as \[ M(x,y)\sim \frac{\log x}{\log\log x} \text{ for } (1-\epsilon) \log x\leq y\leq (1+o(1)) \log x. \] The ``$o(1)$'' is inexplicit and our methods do not pinpoint the transition more accurately. The data represented in figure 1 appear to more-or-less confirm these predictions. However these small $x$-values do suggest that $c>1$, which we do not believe, since that would force contradictions to our predictions for $M(x,y)$ for larger $y$. 
\subsection{Intermediate length intervals ($\log x\leq y=o( (\log x)^2)$)} In the range $\log x\leq y= o( (\log x)^2)$ we have predicted \eqref{eq: Intermediate intervals} no matter whether we pre-sieve up to $z$ or up to $y$. One can revisit the heuristic arguments above to try to get a more accurate approximation: By \eqref{eq: delta_+estimate} we believe that if $y=\lambda (\log x)^2$ with $\lambda\to 0$ then \[ M(x,y) \text{ is better approximated by } \frac{\log x}{ \log \big( \frac{1/\lambda}{e\log 1/\lambda} \big) } . \] However the data for this prediction is no more compelling than for the less precise prediction $L(x,y)$ in this range, presumably because $x$ is so small. \subsection{Comparatively long intervals ($y/(\log x)^2\to \infty$ with $y\leq x$)} Here we write $y=(\log x)^A$ with $A\geq 2$, with the understanding that if $A=2$ then $y/(\log x)^2\to \infty$. If \eqref{eq: asymp for S's} holds then Proposition 1 suggests that \[ M(x,y) \sim \sigma_+(A) \frac y{\log x} \text{ and } m(x,y) \sim \sigma_-(A) \frac y{\log x} \] which is what we believe. If we were to pre-sieve up to $y$ then Proposition 1 suggests that one should make a similar prediction but with $\sigma_+(A) $ replaced by \[ \max_{x<X\leq 2x} \#\{ j\leq y:\ (X+j,P(y))=1\} \bigg/\frac{\phi(P(y))}{P(y)} y. \] (and $\sigma_-(A) $ by the analogous expression with the min). However we have no idea how to study this ratio in this restricted range for $X$. \subsection{Longish intervals ($y\asymp (\log x)^2$) } In section 1.3 we saw that if $y=\lambda (\log x)^2$ then we should expect that \[ M(x,y) \sim u_+( c_+\lambda) \cdot {\log x} . \] Now $u_+( c_+\lambda) \sim c_+\lambda$ as $\lambda\to \infty$ and so $M(x,y) \sim c_+ \frac y{\log x}$. This implies, letting $\lambda\to \infty$ and comparing this prediction to that in the last subsection, that $c_+=\sigma_+(2) $. 
Following the same heuristic but now focusing on the minimum we see that if $y=\lambda (\log x)^2$ then we should expect that \[ m(x,y) \sim u_-( c_-\lambda) \cdot {\log x} \] for some constant $c_->0$. This analogously yields that $c_-=\sigma_-(2) $. \subsection{More precise guesses for the maximal gap between primes} We can be more precise about our prediction for gaps between primes using the footnote in the proof of Proposition 1. The estimate there is $N\leq (L-\tfrac 12+o(1))\log x$ with $L= \frac {\phi(P)}P \log x$, which suggests that \[ \max_{x<p_n\leq 2x} p_{n+1}-p_n \approx c_-^{-1} \log x \bigg( \log x - \frac 12\frac P{\phi(P)} \bigg) \approx c_-^{-1} \log x \bigg( \log x - \tfrac 12 \log\log x \bigg) . \] Here $P=P(z)$ and $c_-$ depend on $z$. Cadwell \cite{Cad} presented a variant of Cram\'er's model. He took the viewpoint that certain aspects of the distribution of $H:=\pi(2x)-\pi(x)$ primes in $(x,2x]$ can be assumed to be like the distribution of $H$ randomly selected integers in $(x,2x]$. He very elegantly proved that the expected largest gap has length $\frac x{H+1} (\frac 11+ \frac 12 +\dots +\frac 1{H+1})$. This can be used to predict that\footnote{Cadwell's conjecture of $\log x(\log x -\log\log x)$ for the largest prime gap $\leq x$ was briefly mentioned in section 1.4. However since $x/\pi(x)$ is more accurately approximated by $\log x -1$, a famous correction of Legendre's prediction by Gauss, he should have deduced $(\log x-1)(\log x -\log\log x)$ from his model! Here we are looking at gaps in $(x,2x]$ rather than up to $x$, which explains the difference in the constants.} \[ \max_{x<p_n\leq 2x} p_{n+1}-p_n \approx \log (4x/e) (\log x -\log\log x +\gamma). \] It is not clear how to incorporate divisibility by small primes into this argument, particularly working only with those intervals with an unexpectedly small number of integers left unsieved. 
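As a sanity check of ours (not in the paper), one can compare Cadwell's exact expectation for the largest gap among $H$ uniform points in an interval of length $x$ with the prediction displayed above; here $H$ is replaced by the crude density estimate $x/\log 2x$ for $\pi(2x)-\pi(x)$.

```python
# Sketch: compare Cadwell's exact expectation for the largest gap among
# H uniform random points in an interval of length x,
#   (x/(H+1)) * (1 + 1/2 + ... + 1/(H+1)),
# with the prediction log(4x/e) * (log x - log log x + gamma).
import math

gamma = 0.5772156649015329

def cadwell_exact(x, H):
    harmonic = sum(1 / j for j in range(1, H + 2))
    return x / (H + 1) * harmonic

x = 10 ** 6
H = round(x / math.log(2 * x))  # crude stand-in for pi(2x) - pi(x)
exact = cadwell_exact(x, H)
predicted = math.log(4 * x / math.e) * (math.log(x) - math.log(math.log(x)) + gamma)
```

At $x=10^6$ the two values already agree to within a couple of percent.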
There are some similarities in these two conjectural formulas but it is not clear which to choose and on what basis. We did see in Figure 5 that the data suggests that one should subtract a larger multiple of $\log\log x$ in the formulas above but we have not found a believable heuristic to do so, though finding a way to combine the two heuristics would be a good start. \section{Short arithmetic progressions} We can proceed similarly with the distribution of $\pi(qy;q,a)$, the number of primes among the smallest $y$ positive integers in the arithmetic progression $\equiv a \pmod q$, as we vary over reduced residue classes $a \pmod q$ and where $y$ is small compared to $q$. As before we sieve out with the primes $\leq z$ (that do not divide $q$) before trying to find primes. If $P_q(z):=\prod_{p\leq z,\ p\nmid q} p$ then the probability that a random such integer of size $q^{1+o(1)}$ is prime is \[ \sim \frac{ qP_q(z)}{\phi(qP_q(z))} \frac 1{\log q} . \] Now the number of unsieved integers in such an interval of length $y$ is expected to be \[ \frac{\phi(P_q(z))} { P_q(z)} y, \] and so the ``expected'' number of primes is \[ \sim \frac{ q}{\phi(q)} \frac y{\log q} \] (which is what is suggested by the prime number theorem for arithmetic progressions). This set-up allows us to proceed much as in the questions about primes in short intervals. We shall explore this in detail, with copious calculations, in a subsequent article.
https://arxiv.org/abs/2009.05000 -- "Primes in short intervals: Heuristics and calculations" (Number Theory, math.NT)
https://arxiv.org/abs/1902.00051
Shape Analysis, Lebesgue Integration and Absolute Continuity Connections
As shape analysis of the form presented in Srivastava and Klassen's textbook 'Functional and Shape Data Analysis' is intricately related to Lebesgue integration and absolute continuity, it is advantageous to have a good grasp of the latter two notions. Accordingly, in these notes we review basic concepts and results about Lebesgue integration and absolute continuity. In particular, we review fundamental results connecting them to each other and to the kind of shape analysis, or more generally, functional data analysis presented in the aforementioned textbook, in the process shedding light on important aspects of all three notions. Many well-known results, especially most results about Lebesgue integration and some results about absolute continuity, are presented without proofs. However, a good number of results about absolute continuity and most results about functional data and shape analysis are presented with proofs. Actually, most missing proofs can be found in Royden's 'Real Analysis' and Rudin's 'Principles of Mathematical Analysis' as it is on these classic textbooks and Srivastava and Klassen's textbook that a good portion of these notes are based. However, if the proof of a result does not appear in the aforementioned textbooks, nor in some other known publication, or if all by itself it could be of value to the reader, an effort has been made to present it accordingly.
\section{\large Introduction} The concepts of Lebesgue integration and absolute continuity play a major role in the theory of shape analysis or more generally in the theory of functional data analysis of the form presented in~\cite{srivastava}. In fact, well-known connections between Lebesgue integration and absolute continuity are of great importance in the development of functional data and shape analysis of the kind in~\cite{srivastava}. Accordingly, understanding functional data and shape analysis as presented in~\cite{srivastava} requires understanding the basics of Lebesgue integration and absolute continuity, and the connections between them. It is the purpose of these notes to provide a way to do exactly~that. \par In Section~2, we review fundamental concepts and results about Lebesgue integration. Then, in Section~3, we review fundamental concepts and results about absolute continuity, including some results connecting it to Lebesgue integration. Finally, in Section~4, we shed light on some important aspects of functional data and shape analysis of the type in Srivastava and Klassen's textbook \cite{srivastava}, in the process illustrating its dependence on Lebesgue integration, absolute continuity and the connections between them. Accordingly, without page numbers, a table of contents for these notes would be roughly as follows: \\ 1. Introduction\\ 2. Lebesgue Integration\\ Algebras of sets, Borel sets, Cantor set\\ Outer measure\\ Measurable sets, Lebesgue measure\\ Measurable functions, Step functions, Simple functions\\ The Riemann integral\\ The Lebesgue integral\\ The $L^p$ Spaces\\ 3. Absolute Continuity and its Connections to Lebesgue Integration\\ 4. 
Functional Data and Shape Analysis and its Connections to Lebesgue Integration and Absolute Continuity\\ Summary\\ Acknowledgements\\ References\\ Index of Terms \par The material in these notes about Lebesgue integration and absolute continuity is mostly based on Royden's ``Real Analysis"~\cite{royden} and Rudin's ``Principles of Mathematical Analysis"~\cite{rudin}. The fundamental ideas on functional data and shape analysis are mostly from Srivastava and Klassen's ``Functional and Shape Data Analysis"~\cite{srivastava}. An index of terms has been included at the end of the notes. \section{\large Lebesgue Integration} {\bf \large Algebras of sets, Borel sets, Cantor set}\\ \smallskip\\ {\bf Definition 2.1:} A collection $\cal A$ of subsets of a set $X$ is called an {\bf algebra} on~$X$ if for $A$, $B$ in $\cal A$, $A\cup B$ is in $\cal A$, and for $A$ in $\cal A$, $\tilde{A}=X\setminus A$ is in~$\cal A$.\\ \smallskip \\ {\bf Observation 2.1:} From De Morgan's laws if $\cal A$ is an algebra, then for $A$, $B$ in $\cal A$, $A\cap B$ is in~$\cal A$. \\ \smallskip \\ {\bf Definition 2.2:} An algebra $\cal A$ is called a {\bf {\boldmath $\sigma$}-algebra} if the union of every countable collection of sets in $\cal A$ is in~$\cal A$.\\ \smallskip \\ {\bf Observation 2.2:} From De Morgan's laws if $\cal A$ is a $\sigma$-algebra, then the intersection of a countable collection of sets in $\cal A$ is in~$\cal A$. \\ \smallskip \\ {\bf Definition 2.3:} A set of real numbers $O$ is said to be {\bf open} if for each $x\in O$ there is $\delta>0$ such that each number $y$ with $|x-y|<\delta$ belongs to~$O$. A set of real numbers $F$ is said to be {\bf closed} if its complement in {\bf R} is open, i.e., ${\bf R}\setminus F$ is open, where {\bf R} is the set of real numbers. The collection of {\bf Borel} sets is the smallest $\sigma$-algebra on the set {\bf R} of real numbers which contains all open sets of real numbers. 
\\ \smallskip \\ {\bf Observation 2.3:} The collection of Borel sets contains in particular all closed sets, all open intervals, all countable unions of closed sets, all countable intersections of open sets, etc. \\ \smallskip \\ {\bf Proposition 2.1:} Every open set of real numbers is the union of a countable collection of disjoint open intervals. Proof in~\cite{royden}.\\ \smallskip \\ {\bf Proposition 2.2 (Lindel\"{o}f):} Given a collection $\cal C$ of open sets of real numbers, then there is a countable subcollection $\{ O_i\}$ of $\cal C$ with \[ \cup_{O\in \cal C}\, O = \cup_{i=1}^{\infty}\, O_i. \] Proof in~\cite{royden}.\\ \smallskip \\ {\bf Definition 2.4:} A set of real numbers $F$ is said to be {\bf compact} if every open cover of $F$ contains a finite subcover, i.e., if $\cal C$ is a collection of open sets of real numbers such that $F\subseteq \cup_{O\in \cal C}\, O$, then there is a finite subcollection $\{O_i,\ i=1,\ldots,n\}$ of $\cal C$ with $F\subseteq \cup_{i=1}^n\, O_i$.\\ \smallskip\\ {\bf Proposition 2.3 (Heine-Borel):} A set of real numbers $F$ is compact if and only if it is closed and bounded. Proof in~\cite{royden} and \cite{rudin}.\\ \smallskip \\ {\bf Proposition 2.4:} Given a collection ${\cal K}$ of closed sets of real numbers such that at least one of the sets is bounded and the intersection of every finite subcollection of ${\cal K}$ is nonempty, then $\cap_{F\in \cal K}\, F\not = \emptyset$. Proof in~\cite{royden} and~\cite{rudin}.\\ \smallskip \\ {\bf Definition 2.5:} A number $x$ is said to be a {\bf limit point} of a set of real numbers $E$ if every open set that contains $x$ contains a point $y\in E$ with $y\not= x$. \\ \smallskip\\ {\bf Definition 2.6:} A set of real numbers $E$ is said to be {\bf perfect} if it is closed and if every number in $E$ is a limit point of $E$. \\ \smallskip\\ {\bf Proposition 2.5:} A set is closed if and only if every limit point of the set is a point of the set. A nonempty perfect set is uncountable.
Proofs in~\cite{rudin}. \\ \smallskip\\ {\bf Corollary 2.1:} Every interval is uncountable, thus the set of real numbers is uncountable. \\ \smallskip\\ {\bf Observation 2.4:} Let $E_1$ be the union of the intervals $[0,\frac{1}{3}]$, $[\frac{2}{3},1]$ that are obtained by removing the open middle third of the interval $[0,1]$. Let $E_2$ be the union of the intervals $[0,\frac{1}{9}]$, $[\frac{2}{9},\frac{3}{9}]$, $[\frac{6}{9},\frac{7}{9}]$, $[\frac{8}{9},1]$ that are obtained by removing the open middle thirds of the intervals $[0,\frac{1}{3}]$ and $[\frac{2}{3},1]$. Continuing this way, a sequence of compact sets $E_n$ is obtained with $E_n\supset E_{n+1}$ for every positive integer~$n$. The set $\cap_{n=1}^{\infty}\, E_n$, called the {\bf Cantor set}, is compact, nonempty, perfect and thus uncountable, and contains no interval. Proofs in~\cite{rudin}. \\ \smallskip\\ {\bf Definition 2.7:} The {\bf extended real numbers} consist of the real numbers together with the two symbols $-\infty$ and~$+\infty$. The definition of $<$ is extended by declaring that if $x$ is a real number, then $-\infty < x < +\infty$.
The operation $\infty -\infty$ is left undefined, the operation $0\cdot(\pm\infty)$ is defined to be $0$, and the other operations are defined as follows: If $x$ is a real number, then\\ $x+\infty = \infty,\ \ \ \ $ $x-\infty = -\infty,\ \ \ \ $ $x/(+\infty) = x/(-\infty) = 0$,\\ $x\cdot\infty = \infty,\ \ \ \ $ $x\cdot(-\infty) = -\infty\ \ \ \ $ if $x>0$,\\ $x\cdot\infty = -\infty,\ \ \ \ $ $x\cdot(-\infty) = \infty\ \ \ \ $ if $x<0$.\\ Finally\\ $\infty + \infty = \infty,\ \ \ \ $ $-\infty -\infty = -\infty,\ \ \ \ $ $\infty\cdot (\pm\infty) = \pm\infty,\ \ \ \ $ $-\infty\cdot (\pm\infty) = \mp\infty.$ \\ \medskip\\ {\bf \large Outer measure}\\ \smallskip\\ {\bf Definition 2.8:} Given a set $A$ of real numbers, the {\bf outer measure} $m^*A$ of $A$ is the extended real number defined by \[ m^*A = \inf_{A\subseteq \cup I_n} \sum l(I_n), \] where the $\{ I_n\}$ are countable collections of open intervals that cover~$A$, and $l(I_n)$ is the length of the interval~$I_n$. \\ \smallskip\\ {\bf Observation 2.5:} $m^*$ is a set function, $m^* \emptyset = 0$, $m^* A\leq m^* B$ if $A\subseteq B$, and the outer measure of a set consisting of a single point is zero. \\ \smallskip\\ {\bf Proposition 2.6:} $m^*(I) = l(I)$ if $I$ is an interval. Proof in~\cite{royden}.\\ \smallskip \\ {\bf Proposition 2.7:} {\bf Countable subadditivity} of $m^*$: $m^*(\cup A_n) \leq \sum m^*A_n$ for any countable collection $\{ A_n\}$ of sets of real numbers. Proof in~\cite{royden}.\\ \smallskip \\ {\bf Corollary 2.2:} $m^* A = 0$ if $A$ is countable. \\ \smallskip\\ {\bf Observation 2.6:} The Cantor set is an example of an uncountable set with outer measure zero. Proof in~\cite{rudin}. \\ \smallskip\\ {\bf Proposition 2.8:} $m^*$ is translation invariant, i.e., $m^*(E+y) = m^*E$ for any set $E$ of real numbers and any number $y$, where $E+y = \{x+y: x\in E\}$.
\\ \smallskip\\ {\bf Proof:} If $\{I_n\}$ is a countable collection of open intervals that covers $E$, then $\{I_n + y\}$ is a countable collection of open intervals that covers $E+y$. Since $l(I_n) = l(I_n +y)$ for each $n$, then $m^*(E+y)\leq m^*E$. Similarly, if $\{I_n\}$ is a countable collection of open intervals that covers $E+y$, then $\{I_n -y\}$ covers $E$, $l(I_n) = l(I_n -y)$, and therefore $m^*E\leq m^*(E+y)$. Thus, $m^*E = m^*(E+y)$. \\ \smallskip\\ {\bf Proposition 2.9:} For any set $A$ of real numbers and any $\epsilon >0$, there is an open set $O$ with $A\subseteq O$ and $m^*O\leq m^*A + \epsilon$ ($m^*O < m^*A + \epsilon$ if $m^*A < \infty$). In addition, there is a set $G$ that is the intersection of a countable collection of open sets with $A\subseteq G$ and $m^*G = m^*A$. \\ \smallskip\\ {\bf Proof:} From the definition of the outer measure there is a countable collection $\{ I_n\}$ of open intervals that covers~$A$ with $\sum l(I_n)\leq m^*A + \epsilon$. With $O=\cup I_n$, then $O$ is open, $A\subseteq O$, and $m^*O\leq \sum l(I_n)\leq m^*A +\epsilon$, which proves the first part. Now let $k>0$ be an integer. Then from the first part there is an open set $O_k$, $A\subseteq O_k$, with $m^*O_k \leq m^*A + \frac{1}{k}$. With $G=\cap_{k=1}^\infty O_k$, then for any integer $n>0$ we have $A\subseteq G \subseteq O_n$ and therefore $m^*A\leq m^*G \leq m^*O_n \leq m^*A + \frac{1}{n}$. Letting $n\rightarrow\infty $, then $m^*A\leq m^*G \leq m^*A $. Thus $m^*A = m^*G$, which proves the second part. \\ \medskip\\ {\bf \large Measurable sets, Lebesgue measure}\\ \smallskip\\ {\bf Definition 2.9 (Carath\'{e}odory's criterion):} A set $E$ of real numbers is said to be {\bf (Lebesgue) measurable} if for every set $A$ of real numbers, then \[ m^*A = m^*(A\cap E) + m^*(A\cap \tilde {E}),\] $\tilde{E}$ the complement of $E$ in {\bf R}, i.e., $\tilde{E} = {\bf R}\setminus E$, {\bf R} the set of real numbers. 
\\ \smallskip\\ {\bf Observation 2.7:} Clearly $\tilde {E}$ is measurable if and only if $E$ is, and $\emptyset$ and the set {\bf R} of real numbers are measurable. As for an example of a nonmeasurable set, a rather complex one is presented in~\cite{royden}. Finally, note that it is always true that $m^*A \leq m^*(A\cap E) + m^*(A\cap \tilde {E})$, thus $E$ is measurable if and only if for every set $A$ we have $m^*A \geq m^*(A\cap E) + m^*(A\cap \tilde {E})$. \\ \smallskip\\ {\bf Proposition 2.10:} If $m^*E = 0$, then $E$ is measurable. \\ \smallskip\\ {\bf Proof:} For any set $A$, since $A \supseteq A\cap \tilde {E}$, $m^*E = 0$, and $E \supseteq A\cap E$, then \[m^*A \geq m^*(A\cap \tilde {E}) = m^*(A\cap \tilde {E})+m^*(E) \geq m^*(A\cap \tilde {E})+m^*(A\cap E).\] {\bf Proposition 2.11:} The collection of measurable sets is a $\sigma$-algebra on~{\bf R}. Proof in~\cite{royden}. \\ \smallskip\\ {\bf Definition 2.10:} The {\bf Lebesgue measure} $m$ is the set function obtained by restricting the set function $m^*$ to the collection of (Lebesgue) measurable~sets. \\ \smallskip\\ {\bf Proposition 2.12:} {\bf Countable subadditivity} of $m$: $m(\cup A_n) \leq \sum mA_n$ for any countable collection $\{ A_n\}$ of measurable sets (Proposition~2.7).\\ {\bf Countable additivity} of $m$: $m(\cup A_n) = \sum mA_n$ if the sets in $\{ A_n\}$ as above are pairwise disjoint. Proof in~\cite{royden}. \\ \smallskip \\ {\bf Observation 2.8:} It is in the proof of countable additivity of the Lebesgue measure $m$ that Carath\'{e}odory's criterion plays a major role. On the other hand, the importance of the countable additivity of $m$ is immediately apparent in the proofs of the two parts of the following very useful proposition. \\ \smallskip\\ {\bf Proposition 2.13 (Nested sequences of measurable sets Lemma):}\\ 1. 
Given a countable collection of measurable sets $\{E_n\}$ with $E_{n+1}\subseteq E_n$ for each $n$, $mE_1<\infty$, then $ m(\cap_{i=1}^{\infty}\, E_i) = \lim_{n\rightarrow\infty} mE_n. $\\ 2. Given a countable collection of (not necessarily measurable) sets $\{E_n\}$ with $E_{n+1}\supseteq E_n$ for each $n$, then $ m^*(\cup_{i=1}^{\infty}\, E_i) = \lim_{n\rightarrow\infty} m^*E_n. $\\ Proof in~\cite{royden} for the first part. In~\cite{rana} for the second part using Proposition~2.9. As mentioned above, in the proofs of both parts the countable additivity of~$m$ (Proposition~2.12) is~used. \\ \smallskip\\ {\bf Proposition 2.14:} Every Borel set is measurable. Proof in~\cite{royden}. \\ \smallskip\\ {\bf Observation 2.9:} A rather complex example of a measurable set that is not Borel is presented in~\cite{burk}. \\ \smallskip\\ {\bf Proposition 2.15 (Equivalent conditions for a measurable set):} Let $E$ be a set of real numbers. Then the following five conditions are equivalent:\\ i. $E$ is measurable.\\ ii. Given $\epsilon >0$, there is an open set $O$, $O\supseteq E$ with $m^*(O\setminus E)<\epsilon$.\\ iii. Given $\epsilon >0$, there is a closed set $F$, $F\subseteq E$ with $m^*(E\setminus F)<\epsilon$.\\ iv. There is a set $G$ that is the intersection of a countable collection of open sets, $G\supseteq E$ with $m^*(G\setminus E)=0$.\\ v. There is a set $F$ that is the union of a countable collection of closed sets, $F\subseteq E$ with $m^*(E\setminus F)=0$. \\ \smallskip\\ {\bf Proof:} We only prove i $\Leftrightarrow$ ii. Proofs of all cases in~\cite{royden2}.\\ i $\Rightarrow$ ii: $E$ is measurable.\\ Case 1: $mE < \infty$.\\ There exists $O$ open such that $E\subseteq O$, $mO < mE +\epsilon$ (Proposition~2.9). $mE < \infty$ then implies $mO-mE < \epsilon$. Accordingly, \begin{eqnarray*} \epsilon & > & m(O\cap E) + m(O\cap\tilde{E})-mE \\ & = & mE + m(O\setminus E) - mE \\ & = & m(O\setminus E). 
\end{eqnarray*} Case 2: $mE = \infty$.\\ For each integer $n>0$ let $E_n=E\cap((n-1,n]\cup (-n,-n+1])$. Then each $E_n$ is measurable, $mE_n <\infty$, and $E=\cup E_n$. From Case 1 above it follows that there is $O_n$ open such that $E_n\subseteq O_n$ and $m(O_n\setminus E_n)<\epsilon/2^n$. With $O=\cup\, O_n$, then $O$ is open, $E\subseteq O$ and $O\setminus E\subseteq \cup(O_n\setminus E_n)$. Thus, \[ m(O\setminus E) \leq m(\cup (O_n\setminus E_n)) \leq \sum m(O_n\setminus E_n)< \sum \epsilon/2^n =\epsilon. \] ii $\Rightarrow$ i: Given $\epsilon >0$, there is an open set $O$, $O\supseteq E$ with $m^*(O\setminus E)<\epsilon$.\\ $O$ open implies $O$ is measurable. Thus, for any set $A$, we have $m^*A = m^*(A\cap O) + m^*(A\cap \tilde{O})$. Accordingly, \begin{eqnarray*} m^*A + \epsilon & > & m^*A + m^*(O\cap\tilde{E})\\ \ & = & m^*(A\cap O) + m^*(A\cap \tilde{O})+m^*(O\cap\tilde{E})\\ \ & \geq & m^*(A\cap E) + m^*(A\cap \tilde{O}\cap\tilde{E})+m^*(O\cap\tilde{E}\cap A)\\ \ & = & m^*(A\cap E) + m^*(A\cap\tilde{E}). \end{eqnarray*} $\epsilon$ arbitrary implies $m^*A \geq m^*(A\cap E) + m^*(A\cap\tilde{E})$. Thus, $E$ is measurable.\\ \smallskip \\ {\bf Corollary 2.3:} Every (Lebesgue) measurable set is the union of a Borel set and a set of (Lebesgue) measure zero.\\ \smallskip \\ {\bf Proposition 2.16:} The translate of a measurable set is measurable, i.e., if $E$ is a measurable set, then $E+y$ is measurable for any number~$y$.\\ \smallskip\\ {\bf Proof:} For any set $A$, setting $B=A-y$, $F=E+y$, and noting $(B\cap E)+y = A\cap F$, $(B\cap\tilde{E})+y= A\cap\tilde{F}$, by Proposition~2.8, then \[ m^*A=m^*B = m^*(B\cap E) + m^*(B\cap\tilde{E}) = m^*(A\cap F) + m^*(A\cap \tilde{F}). \] Thus, $F=E+y$ is measurable.\\ \medskip\\ {\bf \large Measurable functions, Step functions, Simple functions}\\ \smallskip\\ {\bf Definition 2.11:} Let $f$ be an extended real-valued function defined on a (Lebesgue) measurable set. 
Then $f$ is said to be {\bf (Lebesgue) measurable} if the set $\{x\, |\, f(x)>a\}$ is (Lebesgue) measurable for every real number~$a$.\\ \smallskip\\ {\bf Proposition 2.17 (Equivalent conditions for a measurable function):} Let $f$ be an extended real-valued function of measurable domain. Then the following conditions are equivalent:\\ i. $\{x\, |\, f(x)>a\}$ is measurable for every real number~$a$.\\ ii. $\{x\, |\, f(x)\geq a\}$ is measurable for every real number~$a$.\\ iii. $\{x\, |\, f(x)<a\}$ is measurable for every real number~$a$.\\ iv. $\{x\, |\, f(x)\leq a\}$ is measurable for every real number~$a$.\\ Proof in~\cite{royden} and \cite{rudin}.\\ \smallskip\\ {\bf Observation 2.10:} Conditions ii, iii, iv can be used instead of condition~i to define a measurable function. We note that if a real-valued function $f$ defined on a closed or open interval $I$ is {\bf continuous}~\cite{rudin}, then $f$ is measurable since the set in condition~i is relatively open in~$I$~\cite{rudin}. Also, the restriction of a measurable function to a measurable subset of its domain is measurable.\\ \smallskip\\ {\bf Definition 2.12:} Given real numbers $a$, $b$, $a<b$, and an integer $n>0$, by a {\bf partition} or {\bf subdivision} of~$[a,b]$ we mean a finite set of points $P= \{\xi_0, \xi_1,\ldots,\xi_n\}$ with $a=\xi_0<\xi_1 < \ldots < \xi_n = b$. A function $\psi: [a,b]\rightarrow$ {\bf R} is called a {\bf step function} (on $[a,b]$) if for an integer $n>0$, there are numbers $c_i$, $i=1,\ldots,n$, and a partition or subdivision $P= \{\xi_0, \xi_1,\ldots,\xi_n\}$ of $[a,b]$, such that $\psi(a)=\psi(\xi_0) = c_1$, $\psi(x)=c_i$, $\xi_{i-1}<x\leq\xi_i$, $i=1,\ldots,n$. \\ \smallskip\\ {\bf Definition 2.13:} Given sets $A$, $X$ of real numbers, $A\subseteq X$, the {\bf characteristic function} $\chi_A$ of $A$ on $X$, $\chi_A: X\rightarrow \{0,1\}$, is defined by \[ \chi_A(x) = \left\{ \begin{array}{ll} 1 & x\in A\\ 0 & x\in X\setminus A. \end{array} \right.
\] {\bf Definition 2.14:} Given a measurable set $E$, an integer $n>0$, and for $i=1,\ldots,n$, nonzero numbers $c_i$, measurable sets $E_i\subseteq E$, and characteristic functions $\chi_{E_i}$ of $E_i$ on~$E$, a function $\varphi: E \rightarrow$ {\bf R} defined by $\varphi(x) = \sum_{i=1}^n c_i\chi_{E_i}(x)$ for $x$ in $E$ is called a {\bf simple function} on $E$. \\ \smallskip\\ {\bf Observation 2.11:} Step functions are measurable and a function is simple if and only if it is measurable and assumes only a finite number of values. We note that the representation of a simple function $\varphi$ is not unique. However it does have a so-called {\bf canonical representation}: for some integer $m>0$, $\varphi(x) = \sum_{i=1}^m a_i\chi_{A_i}(x)$ for $x$ in $E$, where $\{a_1,\ldots,a_m\}$ is the set of distinct nonzero values of $\varphi$, and $A_i=\{x\in E:\,\varphi(x) = a_i\}$, $i=1,\ldots,m$. This representation is characterized by the fact that the $a_i$'s are distinct and nonzero, and the $A_i$'s are pairwise disjoint.\\ \smallskip\\ {\bf Proposition 2.18:} If $f$ is a measurable function, then the function $|f|$ is measurable. Proof in~\cite{rudin}.\\ \smallskip\\ {\bf Proposition 2.19:} Let $f$ and $g$ be measurable real-valued (not extended) functions defined on the same domain~$X$, $c$ a constant, and $F$ a continuous real-valued function on~{\bf R}$^2$. Then the function $h$ defined by $h(x) = F(f(x),g(x))$, $x\in X$, is measurable. In particular $f+g$, $fg$, $cf$, $f+c$, $f-g$ are measurable; $f/g$ is measurable if $g\not=0$ on~$X$. Proofs in~\cite{royden} and~\cite{rudin}. \\ \smallskip\\ {\bf Observation 2.12:} For extended real-valued measurable functions $f$ and $g$, the function $fg$ is still measurable. 
However, in order for $f+g$ to be measurable, $f+g$ must be assigned some fixed value at the points where it is undefined, unless these points form a set of measure zero, in which case it makes no difference which values $f+g$ is given on the set.\\ \smallskip\\ {\bf Proposition 2.20:} Let $\{ f_n\}$ be a sequence of measurable functions defined on the same domain~$X$ and let $N>0$ be an integer. The functions defined for each $x\in X$ by $\sup_{n\leq N} f_n(x)$, $\sup_n f_n(x)$, $\limsup_{n\rightarrow\infty} f_n(x)$, $\inf_{n\leq N} f_n(x)$, $\inf_n f_n(x)$, $\liminf_{n\rightarrow\infty} f_n(x)$, are all measurable. Proofs in~\cite{royden} and~\cite{rudin}.\\ \smallskip\\ {\bf Definition 2.15:} Given a set of real numbers $X$, a property associated with points in $X$ is said to hold {\bf almost everywhere} ({\bf a.e.} for short) in $X$ if the set of points in $X$ where the property fails has measure zero.\\ \smallskip\\ {\bf Proposition 2.21:} Let $f$ and $g$ be functions defined on the same measurable domain~$X$. If $f$ is measurable and $f=g$ a.e., then $g$ is measurable.\\ \smallskip\\ {\bf Proof:} Let $a$ be any real number. $f=g$ a.e. means the set $E=\{ x:f(x)\not= g(x)\}$ has measure zero. Thus $E$ and the set $\{ x\in E: g(x)>a\} \subseteq E$ are measurable (Proposition~2.10). $X\setminus E$ must then be measurable and the set $\{ x\in X\setminus E: f(x)>a\}$ is then measurable as $f$ is measurable. Thus, since \[ \{x: g(x)>a\} = \{ x\in X\setminus E: f(x)>a\}\cup \{ x\in E: g(x)>a\} \] it follows that $\{x:g(x)>a\}$ is measurable and therefore $g$ is measurable. \\ \smallskip\\ {\bf Observation 2.13:} The definition of a measurable function and part~1 of Proposition~2.13 are what allow the following proposition to be true. \\ \smallskip\\ {\bf Proposition 2.22 (Egoroff's Theorem):} Let $\{ f_n\}$ be a sequence of measurable functions on a measurable set~$E$, $mE<\infty$, that converge a.e.
to a real-valued (not extended) function $f$ on~$E$, i.e., there is a set $B\subseteq E$ such that $mB=0$, $f_n\rightarrow f$ pointwise on~$E\setminus B$. Then for every $\delta >0$ there is a closed set $F\subseteq E$ such that $m(E\setminus F)<\delta$ and $f_n\rightarrow f$ uniformly on~$F$.\\ \smallskip\\ {\bf Proof:} Let $m>0$ be an integer. For each integer $n>0$ let \[ G_n^m = \{ x\in E: |f_n(x)-f(x)| \geq 1/m\},\] and for each integer $N>0$ set \[ E_N^m = \cup_{n=N}^{\infty}\, G_n^m = \{ x\in E: |f_n(x)-f(x)|\geq 1/m \ \mathrm{for\ some\ } n\geq N\}. \] We note $f$ is measurable (Proposition~2.20 and Proposition~2.21). It follows that each $E_N^m$ is measurable and of finite measure, $E_{N+1}^m \subseteq E_N^m$, and for each $x\in E\setminus B$ there must be some $N$ for which $x\not\in E_N^m$, since $f_n(x)\rightarrow f(x)$. Thus $\cap E_N^m\subseteq B$ and must therefore have measure zero. It follows then that $\lim_{N\rightarrow\infty} mE_N^m=0$ (1 of Proposition~2.13). Hence there exists $N$ such that \[ mE_N^m =m(\{ x\in E: |f_n(x)-f(x)|\geq 1/m \ \mathrm{for\ some\ } n\geq N\}) < \delta/2^{m+1}.\] Letting $A^m=E_N^m$, then $A^m$ is measurable, $mA^m < \delta/2^{m+1}$ and \[E\setminus A^m = \{ x\in E: |f_n(x)-f(x)| < 1/m\ \mathrm{for\ all\ } n\geq N\}. \] Now let $A=\cup_{m=1}^{\infty} A^m$. Then $mA \leq \sum_{m=1}^{\infty}mA^m<\sum_{m=1}^{\infty}\delta/2^{m+1}=\delta/2.$\\ Given $\epsilon>0$ choose integer $m>0$ with $1/m<\epsilon$. For some $N$ and for all $x\in E\setminus A$ then $x\in E\setminus A^m$ and $|f_n(x)-f(x)|<1/m<\epsilon$ for all~$n\geq N$.\\ Thus, $f_n\rightarrow f$ uniformly on~$E\setminus A$.\\ Finally, let $G=E\setminus A$. Clearly $G$ is measurable and~$m(E\setminus G)=mA<\delta/2$.\\ Thus, there exists a closed set $F$, $F\subseteq G$ with $m(G\setminus F)<\delta/2$ (Proposition~2.15). 
It then follows that $m(E\setminus F) = m(E\setminus G)+ m(G\setminus F)<\delta/2 + \delta/2 = \delta$ and $f_n\rightarrow f$ uniformly on~$F$.\\ \smallskip\\ Proof also in \cite{halmos} for spaces and measures more general than {\bf R} and the Lebesgue measure.\\ \smallskip\\ {\bf Observation 2.14:} The assumption $mE<\infty$ is necessary in the above proposition. To see this, let $E=$ {\bf R} and \[ f_n(x) = \left\{ \begin{array}{ll} 1 & x > n\\ 0 & x \leq n. \end{array} \right. \] Clearly $f_n\rightarrow 0$ pointwise on $E$. However, for any integer $N>0$, $0<\epsilon<1$, the set $\{x\in E: f_N(x) \geq \epsilon\}=(N,\infty)$ is of infinite measure. Thus, the uniform convergence of $f_n$ to $0$ as proposed in the proposition can not occur. \\ \smallskip\\ {\bf Definition 2.16:} Given a function $f$ defined on a set $E$, the {\bf positive part $f^+$} of $f$ and the {\bf negative part $f^-$} of $f$ are the functions defined respectively by $f^+(x) = \max\{f(x),0\}$, and $f^-(x) = \max\{-f(x),0\}$, $x\in E$. \\ \smallskip\\ {\bf Proposition 2.23 (Approximation of a measurable function by simple functions):} Let $f$ be a real-valued (not extended) measurable function on a measurable set~$E$. Then there exists a sequence of simple functions $\{s_n\}$ on $E$ such that $s_n\rightarrow f$ pointwise on~$E$. Since $f = f^+ - f^-$, and $f^+$ and $f^-$ are measurable because $f$ is, then $\{s_n\}$ can be chosen so that $s_n= s_n^u - s_n^l$, where $s_n^u$ and $s_n^l$ are simple functions on $E$ such that $s_n^u\rightarrow f^+$, $s_n^l\rightarrow f^-$ pointwise on $E$, and $s_n^u$ and $s_n^l$ increase monotonically to $f^+$ and~$f^-$, respectively. If $f$ is bounded, then $\{s_n\}$ can also be chosen to converge uniformly to~$f$ on~$E$. Proof in~\cite{rudin}. \\ \smallskip\\ {\bf Proposition 2.24 (Lusin's Theorem):} Let $f$ be a real-valued (not extended) measurable function on a measurable set $E$. 
Then given $\epsilon>0$, there exists a closed set $F\subseteq E$ with $m(E\setminus F)<\epsilon$ such that $f|_F$ is continuous. \\ \smallskip\\ {\bf Proof:} First we prove the proposition for $f$ a simple function on~$E$. Accordingly, for some integer $n>0$, assume $f(x) = \sum_{i=1}^{n} c_i\chi_{E_i}(x)$ for $x$ in $E$, $E_i\subseteq E$ for each~$i$, the canonical representation of~$f$. In addition let $E_0 = E\setminus \cup_{i=1}^{n}E_i$. Clearly the $E_i$'s are pairwise disjoint. Given $\epsilon >0$, since each $E_i$ is measurable there exists a closed set $F_i\subseteq E_i$ such that $m(E_i\setminus F_i) < \epsilon/(n+1)$, $i=0,\ldots,n$ (Proposition~2.15). Accordingly, $F=\cup_{i=0}^n\,F_i$ is closed, and \[m(E\setminus F) = m(\cup_{i=0}^n\, E_i\setminus \cup_{i=0}^n\, F_i) =m(\cup_{i=0}^n(E_i\setminus F_i)) = \sum_{i=0}^n m(E_i\setminus F_i)< \epsilon.\] Now, to show $f|_F$ is continuous we show that if $\{x_k\}$, $x$ in $F$ are such that $x_k\rightarrow x$, then $f(x_k)\rightarrow f(x)$.\\ We note that for some unique $j$, $0\leq j\leq n$, it must be that $x\in F_j$. Thus, since $f$ is constant on $F_j$, it then suffices to show that for some integer $K>0$, $x_k$ is in $F_j$ for $k\geq K$. If this is not the case and since there is a finite number of $F_i$'s then for some $l$, $0\leq l\leq n$, $l\not = j$, there is a subsequence $\{x_{k_m}\}$ of $\{x_k\}$ all contained in~$F_l$. But then $x_{k_m}\rightarrow x$ so that $x$ is in $F_l$, a contradiction. \\ \smallskip\\ Now we prove the proposition for a general $f$.\\ Case 1: $mE<\infty.$\\ Let $s_n$ be simple functions such that $s_n\rightarrow f$ pointwise on $E$ (Proposition~2.23). Given $\epsilon>0$, as established above for simple functions on $E$, for each $n$ there exists a closed set $F_n\subseteq E$ with $m(E\setminus F_n) <\epsilon/2^{n+1}$ such that $s_n|_{F_n}$ is continuous. 
In addition, since $mE<\infty$, there exists a closed set $F_0\subseteq E$ such that $m(E\setminus F_0)<\epsilon/2$ and $s_n\rightarrow f$ uniformly on~$F_0$ (Proposition~2.22~(Egoroff's Theorem)).\\ Finally let $F=\cap_{n=0}^{\infty}\,F_n$. Then $F$ is closed and \[m(E\setminus F) = m(\cup_{n=0}^{\infty}\,(E\setminus F_n))\leq \sum_{n=0}^{\infty} m(E\setminus F_n)<\sum_{n=0}^{\infty}\epsilon/2^{n+1}=\epsilon. \] Since $s_n|_{F_n}$ is continuous, so must be $s_n|_F$. And since $s_n \rightarrow f$ uniformly on $F\subseteq F_0$, $f|_F$ must be continuous (proof in \cite{rudin}). \\ \smallskip\\ Case 2: $mE = \infty$.\\ For each integer $n>0$, let $\tilde{E}_n=(n-1,n]\cup (-n,-n+1]$ and $E_n=E\cap\tilde{E}_n$. Then each $E_n$ is measurable, $mE_n <\infty$, and $E=\cup_{n=1}^{\infty} E_n$. From Case~1 above, given $\epsilon>0$, it follows that there is a closed set $F_n\subseteq E_n$ with $m(E_n\setminus F_n)<\epsilon/2^n$ such that $f|_{F_n}$ is continuous. Let $F=\cup_{n=1}^{\infty}\, F_n$. Then \begin{eqnarray*} m(E\setminus F) &=& m(\cup_{n=1}^{\infty}\, E_n\setminus \cup_{n=1}^{\infty}\, F_n) = m(\cup_{n=1}^{\infty}(E_n\setminus F_n))\\ &=& \sum_{n=1}^{\infty} m(E_n\setminus F_n) < \sum_{n=1}^{\infty}\epsilon/2^n=\epsilon. \end{eqnarray*} We show $F$ is closed. Let $x$ be a limit point of~$F$ so that for some sequence $\{x_k\}$ in $F$, $x_k\rightarrow x$. Clearly for some integer $j>0$ it must be that $x\in \tilde{E}_j$. It suffices to show that for some integer $K>0$, $x_k$ is in $F_j$ for~$k\geq K$ so that $x$ is also~in $F_j\subseteq F$. With $\tilde{E}_n=\emptyset$ for $n\leq 0$, from the definition of the $\tilde{E}_n$'s a neighborhood of the point $x$ exists that does not intersect $\tilde{E}_i$, $i>j+1$ or $i<j-1$. Thus, with $F_0=\emptyset$, for $k$ large enough the points $x_k$ can only be in $F_{j-1}$, $F_j$ and $F_{j+1}$.
If it is not the case that $K$ as described exists, then there must be a subsequence $\{x_{k_m}\}$ of $\{x_k\}$, all of it contained in either $F_{j-1}$ or~$F_{j+1}$. But then $x_{k_m} \rightarrow x$ so that $x$ is in either $F_{j-1}$ or $F_{j+1}$, a contradiction. (Actually showing that $x$ is in the union of $F_{j-1}$, $F_j$ and $F_{j+1}$, would have sufficed).\\ Now, to show $f|_F$ is continuous we show that if $\{x_k\}$, $x$ in $F$ are such that $x_k\rightarrow x$, then $f(x_k)\rightarrow f(x)$.\\ We note that for some unique $j>0$, it must be that $x\in F_j$. Thus, since $f$ is continuous on $F_j$, it then suffices to show that for some integer $K>0$, $x_k$ is in $F_j$ for $k\geq K$. If this is not the case, again with $F_0=\emptyset$, an argument, similar to the one used above for proving $F$ is closed, can be used to get the same contradiction that $x$ is in either $F_{j-1}$ or $F_{j+1}$. \\ \medskip\\ {\bf \large The Riemann integral}\\ \smallskip\\ {\bf Definition 2.17:} Let $[a,b]$ be an interval and $f$ a bounded real-valued function defined on~$[a,b]$. Given an integer $n>0$, and $P = \{ \xi_0, \xi_1,\ldots,\xi_n\}$, a partition or subdivision of~$[a,b]$ (Definition 2.12), for $i=1,\ldots,n$, we define \begin{eqnarray*} m_i &=& \inf f(x),\ \xi_{i-1} \leq x\leq \xi_i,\\ M_i &=& \sup f(x),\ \xi_{i-1} \leq x\leq \xi_i,\ \mathrm{and}\\ L(P,f) &=& \sum_{i=1}^n\,m_i(\xi_i-\xi_{i-1}),\\ U(P,f) &=& \sum_{i=1}^n\,M_i(\xi_i-\xi_{i-1}). 
\end{eqnarray*} Then we define the {\bf lower Riemann integral} and the {\bf upper Riemann integral} of $f$ over $[a,b]$, respectively, by \[ {\cal R} \underline{\int_a^b} f(x)dx = \sup_P L(P,f), \] \[ {\cal R} \overline{\int_a^b} f(x)dx = \inf_P U(P,f), \] where the infimum and supremum are taken over all partitions $P$ of~$[a,b]$.\\ If the two are equal, $f$ is said to be {\bf Riemann integrable} over $[a,b]$, and the common value is then called the {\bf Riemann integral} of $f$ over $[a,b]$ and denoted by \[ {\cal R} \int_a^b f(x)dx. \] {\bf Observation 2.15:} Since $f$ is bounded, there exist numbers $m$ and $M$, such that $m\leq f(x)\leq M$, $x\in [a,b]$. Thus, for every partition $P$, it must be that $m(b-a)\leq L(P,f)\leq U(P,f)\leq M(b-a)$, so that the lower and upper Riemann integrals of $f$ over $[a,b]$ are finite numbers. \\ \smallskip\\ {\bf Proposition 2.25:} $ {\cal R} \underline{\int_a^b} f(x)dx \leq {\cal R} \overline{\int_a^b} f(x)dx $. Proof in~\cite{rudin}.\\ \smallskip\\ {\bf Observation 2.16:} Let $\psi: [a,b]\rightarrow$ {\bf R} be a step function so that for an integer $n>0$, there are numbers $c_i$, $i=1,\ldots,n$, and a partition $P= \{\xi_0, \xi_1,\ldots,\xi_n\}$ of $[a,b]$, such that $\psi(a)=\psi(\xi_0) = c_1$, $\psi(x)=c_i$, $\xi_{i-1}<x\leq\xi_i$, \mbox{$i=1,\ldots,n$}. Clearly $L(P,\psi)=U(P,\psi)$ and since $ L(P,\psi) \leq {\cal R} \underline{\int_a^b} \psi(x)dx \leq {\cal R} \overline{\int_a^b} \psi(x)dx \leq U(P,\psi) $ (Proposition~2.25), it must be that $\psi$ is Riemann integrable over $[a,b]$ and ${\cal R} \int_a^b \psi(x)dx = \sum_{i=1}^n c_i(\xi_i-\xi_{i-1})$.\\ From this it is then apparent that \[ {\cal R} \underline{\int_a^b} f(x)dx = \sup_P L(P,f) = \sup_{\psi\leq f} {\cal R} \int_a^b \psi(x)dx, \] \[ {\cal R} \overline{\int_a^b} f(x)dx = \inf_P U(P,f) = \inf_{\psi\geq f} {\cal R} \int_a^b \psi(x)dx, \] where the $\psi$'s are all possible step functions on~$[a,b]$ satisfying the given conditions. 
\\ \smallskip \\ {\bf Definition 2.18:} Given an interval $[a,b]$, let $P= \{\xi_0, \xi_1,\ldots,\xi_n\}$ be a partition of $[a,b]$. The number \[ \mu(P) = \max_{i=1,\ldots,n} (\xi_i-\xi_{i-1}) \] is called the {\bf mesh} of~$P$.\\ Let $f$ be a real-valued function defined on $[a,b]$. Given a partition $P= \{\xi_0, \xi_1,\ldots,\xi_n\}$ of $[a,b]$, a {\bf Riemann sum} of $f$ with respect to $P$ is a sum of the form \[ S(P,f) = \sum_{i=1}^n f(t_i)(\xi_i-\xi_{i-1}),\] where the choice of points $t_1,\ldots,t_n$, $\xi_{i-1} \leq t_i \leq \xi_i$, $i=1,\ldots,n$, is arbitrary. The Riemann sums of $f$ are said to converge to a finite number~$I$ as \mbox{$\mu(P)\rightarrow 0$}, i.e., \[ I = \lim_{\mu(P)\rightarrow 0} S(P,f), \] if given $\epsilon>0$, there exists $\delta>0$ such that for every partition $P$ with mesh \mbox{$\mu(P)<\delta$} it must be that \[ |S(P,f) - I| < \epsilon \] (for every choice of points $t_1,\ldots,t_n$, $\xi_{i-1} \leq t_i \leq \xi_i$, $i=1,\ldots,n$). \\ \smallskip \\ {\bf Proposition 2.26 (Convergence of the Riemann sums of $f$ implies $f$ is bounded):} If $\lim S(P,f)$ exists as $\mu(P)\rightarrow 0$, then $f$ is bounded on~$[a,b]$. Proof in~\cite{pugh} and~\cite{richardson}. \\ \smallskip\\ {\bf Proposition 2.27 (Riemann sums of $f$ converge if and only if $f$ is Riemann integrable):} Let $[a,b]$ be an interval and $f$ a bounded real-valued function defined on $[a,b]$. Then $f$ is Riemann integrable over $[a,b]$ if and only~if \[ I = \lim_{\mu(P)\rightarrow 0} S(P,f) \] exists. If this is the case, then $I$~equals~${\cal R} \int_a^b f(x)dx$. Proof in~\cite{rudin} and~\cite{wade}. \\ \smallskip\\ {\bf Observation 2.17:} Given an interval $[a,b]$ and a set $A\subseteq [a,b]$, ideally the characteristic function $\chi_A$ of $A$ on $[a,b]$, $\chi_A: [a,b]\rightarrow \{0,1\}$, defined by \[ \chi_A(x) = \left\{ \begin{array}{ll} 1 & x\in A\\ 0 & x\in [a,b]\setminus A \end{array} \right.
\] should be (Riemann) integrable over $[a,b]$, especially if $A$ is measurable, and its integral over $[a,b]$ should equal the (outer) measure of~$A$. However, if $A$ is the set of rational numbers in $[a,b]$, which is measurable with~$mA=0$, we see that ${\cal R} \underline{\int_a^b} \chi_A(x)dx = 0$ and ${\cal R} \overline{\int_a^b} \chi_A(x)dx = b-a$, not the ideal situation.\\ \medskip\\ {\bf \large The Lebesgue integral}\\ \smallskip\\ {\bf Definition 2.19:} Given a measurable set $E$, let $\varphi(x) = \sum_{i=1}^m a_i\chi_{A_i}(x)$ be the canonical representation of a simple function $\varphi$ on $E$, where for some integer $m>0$, $\{a_1,\ldots,a_m\}$ is the set of distinct nonzero values of $\varphi$, and $A_i=\{x\in E:\,\varphi(x) = a_i\}$, $i=1,\ldots,m$. We define the {\bf Lebesgue integral} of $\varphi$ over $E$ as the extended real number \[ \int_E \varphi(x)dx = \sum_{i=1}^m a_i mA_i. \] {\bf Observation 2.18:} A consequence of the following two propositions is that if $\varphi(x)= \sum_{i=1}^n c_i\chi_{E_i}(x)$ is any representation of a simple function $\varphi$ on a measurable set $E$, then the Lebesgue integral of $\varphi$ over~$E$ (Definition 2.19) can be computed directly from the representation, i.e., by computing~$\sum_{i=1}^n c_i mE_i$. \\ \smallskip\\ {\bf Proposition 2.28:} Let $\varphi(x)= \sum_{i=1}^n c_i\chi_{E_i}(x)$ be a representation of a simple function $\varphi$ on a measurable set~$E$, with $E_i\cap E_j=\emptyset$ for $i\not=j$ (not necessarily the canonical representation of $\varphi$). Then \[ \sum_{i=1}^n c_i mE_i = \int_E \varphi(x)dx.\] Proof in~\cite{royden} for $E_i$'s of finite measure. Same proof for the general case. \\ \smallskip\\ {\bf Proposition 2.29:} Let $\varphi$ and $\psi$ be simple functions on a measurable set~$E$. 
Then for any real numbers $a$ and $b$ we must have \[ \int_E (a\varphi + b\psi)(x)dx = a\int_E\varphi(x)dx + b\int_E\psi(x)dx, \] and, if $\varphi \geq \psi$ a.e., then $\int_E\varphi(x)dx \geq \int_E\psi(x)dx$. \\ \smallskip \\ Proof in~\cite{royden} using Proposition~2.28 for $E_i$'s of finite measure. Proof essentially the same for the general case. \\ \smallskip\\ {\bf Corollary 2.4:} Let $\varphi(x)= \sum_{i=1}^n c_i\chi_{E_i}(x)$ be any representation of a simple function $\varphi$ on a measurable set~$E$, the $E_i$'s not necessarily pairwise disjoint. Then \[ \sum_{i=1}^n c_i mE_i = \int_E \varphi(x)dx.\] {\bf Proof:} Apply the first part of Proposition~2.29 to $\varphi(x) = \sum_{i=1}^n c_i\chi_{E_i}(x)$. \\ \smallskip\\ {\bf Definition 2.20:} Given a measurable set $E$, let $f$ be a measurable nonnegative function on~$E$. We define the {\bf Lebesgue integral} of $f$ over $E$ as the extended real number \[\int_E f(x)dx = \sup_{\varphi\leq f} \int_E \varphi(x)dx, \] where the $\varphi$'s are all possible simple functions on~$E$ satisfying the given condition. \\ \smallskip\\ {\bf Definition 2.21:} Given a measurable set $E$, let $f$ be a measurable function on~$E$. With $f^+$ and $f^-$ as the positive and negative parts of $f$ (Definition~2.16), we define the {\bf Lebesgue integral} of $f$ over $E$ as the extended real number \[\int_E f(x)dx = \int_E f^+(x)dx - \int_E f^-(x)dx, \] if at least one of the integrals $\int_E f^+(x)dx$, $\int_E f^-(x)dx$ (Definition~2.20) is finite.\\ If $\int_E f(x)dx$ is finite, then $f$ is said to be {\bf Lebesgue integrable} over~$E$. \\ \smallskip\\ {\bf Proposition 2.30:} Let $f$ and $g$ be Lebesgue integrable functions over a measurable set~$E$, and $c$ a real number. Then\\ i. $cf$ is Lebesgue integrable over~$E$ with $\int_E cf(x)dx = c\,\int_E f(x)dx$.\\ ii. $f+g$ is Lebesgue integrable over~$E$ with \[\int_E (f+g)(x)dx = \int_E f(x)dx + \int_E g(x)dx.\] iii.
If $f\leq g$ a.e., then $\int_E f(x)dx \leq \int_E g(x)dx$.\\ iv. If $A,B\subseteq E$ are disjoint measurable sets, then \[ \int_{A\cup B} f(x)dx = \int_A f(x)dx + \int_B f(x)dx. \] Proofs~in \cite{royden} and~\cite{rudin}.\\ \smallskip\\ {\bf Proposition 2.31:} A measurable function $f$ is Lebesgue integrable over $E$ if and only if $|f|$ is Lebesgue integrable over $E$, in which case \[ |\int_E f(x)dx|\leq \int_E |f(x)|dx. \] Also, if $0\leq f\leq g$ on~$E$ and $g$ is Lebesgue integrable over~$E$, then $f$ is Lebesgue integrable over~$E$. In particular, if $|f|\leq g$ and $g$ is Lebesgue integrable, then $|f|$, and therefore $f$, is Lebesgue integrable over~$E$. \\ \smallskip\\ {\bf Proof:} The first part follows from $f=f^+ -f^-$, $|f| = f^+ + f^-$, and~iv of Proposition~2.30. The inequality follows from $f\leq |f|$, $-f\leq |f|$, and~i and~iii of Proposition~2.30. The rest follows from~Definition~2.20. \\ \smallskip\\ {\bf Observation 2.19:} Let $f$ be a measurable function on a measurable set $E$ with $mE$ finite, and let $a$, $b$ be real numbers such that \mbox{$a\leq f(x)\leq b$} for $x\in E$. By looking at $\int_E f^+(x)dx$ and $\int_E f^-(x)dx$ for the different possible signs of $a$ and $b$, it is evident that $a\,mE\leq\int_E f(x)dx\leq b\,mE$. Accordingly, if $f$ is a measurable and bounded function on a measurable set $E$ with $mE$ finite, since then for some $M>0$, $-M\leq f(x) \leq M$ for $x\in E$, it must be that $-M\,mE\leq\int_E f(x)dx\leq M\,mE$, and therefore $f$ is Lebesgue integrable. However, there is more to this situation as the following proposition shows.\\ \smallskip\\ {\bf Proposition 2.32 (Integrable equivalent to measurable):} Let $f$ be a bounded function defined on a measurable set $E$ with $mE$ finite. Let \[ L(f) = \sup_{\varphi\leq f} \int_E \varphi(x)dx, \ \ \ \ U(f)= \inf_{\varphi\geq f} \int_E \varphi(x)dx, \] where the $\varphi$'s are all possible simple functions on~$E$ satisfying the given conditions.
Then $L(f)=U(f)$ if and only if $f$ is measurable. Whenever $L(f)=U(f)$ then $f$ is Lebesgue integrable and $\int_E f(x) dx=L(f)=U(f)$. Proof in~\cite{royden}. \\ \smallskip\\ {\bf Proposition 2.33 (Riemann integrable implies Lebesgue integrable):} Let $f$ be a bounded function on interval $[a,b]$. If $f$ is Riemann integrable over $[a,b]$, then $f$ is measurable and Lebesgue integrable over $[a,b]$~with \[ \int_{[a,b]} f(x)dx = {\cal R} \int_a^b f(x)dx. \] {\bf Proof:} Since step functions are simple functions, then \[ \sup_{\psi\leq f} {\cal R} \int_a^b \psi(x)dx \leq \sup_{\varphi\leq f} \int_{[a,b]} \varphi(x)dx \leq \inf_{\varphi\geq f} \int_{[a,b]} \varphi(x)dx \leq \inf_{\psi\geq f} {\cal R} \int_a^b \psi(x)dx, \] where the $\psi$'s and the $\varphi$'s are all possible step functions and simple functions on~$[a,b]$, respectively, satisfying the given conditions. Since $f$ is Riemann integrable over $[a,b]$, then all the inequalities above are equalities so that $f$ must be measurable and Lebesgue integrable over $[a,b]$ with \[ \int_{[a,b]} f(x)dx = \sup_{\varphi\leq f} \int_{[a,b]} \varphi(x)dx = \inf_{\varphi\geq f} \int_{[a,b]} \varphi(x)dx = {\cal R} \int_a^b f(x)dx \] by Proposition~2.32. \\ \smallskip\\ {\bf Proposition 2.34:} Let $f$ be a measurable function on a measurable set~$E$.\\ i. If $f\geq 0$ on~$E$ and $\int_E f(x)dx = 0$, then $f=0$ a.e. on~$E$.\\ ii. If $f$ is Lebesgue integrable over $E$, then $f$ is finite a.e. on~$E$.\\ \smallskip\\ {\bf Proof:} For each integer $n>0$ let $E_n = \{x\in E: f(x)>1/n \}$.\\ For each $n$, we note $mE_n = 0$ or else $\int_E f(x)dx >0$.\\ Let $A = \{x\in E: f(x) \not=0\}$. Then $A=\cup_{n=1}^{\infty} E_n$. Thus, $mA = m(\cup_{n=1}^{\infty} E_n) \leq \sum_{n=1}^{\infty} m(E_n) = 0$ so that $f=0$~a.e. 
on~$E$, which proves~i.\\ In order to prove ii, for each integer $n>0$ let $E_n = \{x\in E: |f(x)|\geq n \}$.\\ Then $n\cdot mE_n \leq \int_{E_n} |f(x)|dx \leq \int_E |f(x)|dx=C$, so that $mE_n \leq C/n$.\\ Let $A = \{x\in E: |f(x)| = \infty\}$. Then $A=\cap_{n=1}^{\infty} E_n$. Since for each $n$, $A\subseteq E_n$, then $mA\leq mE_n\leq C/n$ so that $mA=0$, which proves~ii. \\ \smallskip\\ {\bf Proposition 2.35 (Lebesgue's criterion for Riemann integrability):} Let $f$ be a bounded function on~$[a,b]$. Then $f$ is Riemann integrable over $[a,b]$ if and only if $f$ is continuous a.e. on~$[a,b]$. Proof in~\cite{rudin}. It involves Proposition~2.33 and i of Proposition~2.34.\\ \smallskip\\ {\bf Observation 2.20:} Function $\chi_A$ in Observation 2.17 with $A$ equal to the set of rational numbers fails the continuity hypothesis of Proposition~2.35 and thus it is not Riemann integrable over~$[a,b]$ as observed there. Actually, it can be easily shown to be nowhere continuous on~$[a,b]$. \\ \smallskip\\ {\bf Proposition 2.36 (Countable additivity of the Lebesgue integral):} Let $\{E_n\}$ be a countable collection of pairwise disjoint measurable sets. Let $E=\cup_{n=1}^{\infty}\, E_n$, and let $f$ be a measurable function on~$E$. Assume either $f\geq 0$ on~$E$ or $f$ is Lebesgue integrable over~$E$. Then \[ \int_E f(x)dx = \sum_{n=1}^{\infty} \int_{E_n} f(x) dx. \] Proof in~\cite{rudin}.\\ \smallskip\\ {\bf Observation 2.21:} If $f$ is a measurable function on a set $E$ with \mbox{$mE=0$}, then $\int_E f(x)dx = 0$. Also, if sets $A$, $E$ are measurable with $A\subseteq E$, and $f$ is Lebesgue integrable over $E$, then it is Lebesgue integrable over~$A$. From all this then, if $f$ and $g$ are functions on a measurable set $E$, $f=g$ a.e. on~$E$, $f$ Lebesgue integrable over $E$, then so is $g$ and $\int_E g(x)dx = \int_E f(x)dx$.
Finally, we note that since integrals over sets of measure zero are zero, throughout these notes, if sets $F$ and $E$ are measurable with $F\subseteq E$, $mF=0$, and $f$ is a function on $E\setminus F$, possibly not defined on part or all of~$F$, $f$~Lebesgue integrable over $E\setminus F$, we say $f$ is Lebesgue integrable over~$E$ with $\int_E f(x)dx = \int_{E\setminus F} f(x)dx$. This makes sense as it is always possible to define $f$ arbitrarily for points in~$F$ so that then $f$ is defined on all of~$E$ and $\int_E f(x)dx = \int_{E\setminus F} f(x)dx + \int_{F} f(x)dx = \int_{E\setminus F}f(x)dx + 0 = \int_{E\setminus F}f(x)dx$. \\ \smallskip\\ {\bf Proposition 2.37 (Lebesgue's Monotone Convergence Theorem):} Let $\{f_n\}$ be an increasing sequence of nonnegative measurable functions on a measurable set~$E$. Let $f$ be defined by $f(x) = \lim_{n\rightarrow\infty}f_n(x)$ for~$x\in E$. Then \[ \int_E f(x)dx = \lim_{n\rightarrow\infty} \int_E f_n(x)dx. \] Proof in~\cite{royden} and~\cite{rudin}. It involves Proposition~2.13.\\ \smallskip\\ {\bf Corollary 2.5:} Let $\{f_n\}$ be a sequence of nonnegative measurable functions on a measurable set~$E$. Let $f$ be defined by $f(x) = \sum_{n=1}^{\infty}\, f_n(x)$ for~$x\in E$. Then \[ \int_E f(x)dx = \sum_{n=1}^{\infty} \int_E f_n(x)dx. \] {\bf Proof:} $\{h_n\}$ defined by $h_n(x) =\sum_{i=1}^{n}\,f_i(x)$ for $x\in E$ is an increasing sequence of nonnegative measurable functions on~$E$.\\ \smallskip\\ {\bf Observation 2.22:} Proposition~2.36 can now be proved more easily. It suffices to prove it for $f\geq 0$ on~$E$. Let $f_n(x) = f(x)\cdot \chi_{E_n}(x)$ for $x\in E$. 
Then $f(x)=\sum_{n=1}^{\infty}\,f_n(x)$ for $x\in E$ and the result follows from Corollary~2.5.\\ \smallskip\\ {\bf Observation 2.23:} The following proposition says that if a nonnegative function is Lebesgue integrable over a measurable set, then the Lebesgue integral of the function over a measurable subset of the set is arbitrarily small if the measure of the subset is small enough. Later we will see that it can be used to show that every indefinite integral is absolutely continuous (indefinite integrals and absolute continuity defined in the next section).\\ \smallskip\\ {\bf Proposition 2.38 (Absolute continuity of the Lebesgue integral):} Let $f$ be a nonnegative Lebesgue integrable function over a measurable set~$E$. Then given $\epsilon>0$ there is $\delta>0$ such that for each measurable set $A\subseteq E$ with $mA<\delta$, then~$\int_A f(x)dx <\epsilon$. Proof in~\cite{royden}. It involves Lebesgue's Monotone Convergence Theorem (Proposition~2.37). \\ \smallskip\\ {\bf Proposition 2.39 (Fatou's Lemma):} Let $\{f_n\}$ be a sequence of nonnegative measurable functions on a measurable set~$E$. Let $f$ be defined by \mbox{$f(x)= \liminf_{n\rightarrow\infty} f_n(x)$} for $x\in E$. Then \[ \int_E f(x)dx \leq \liminf_{n\rightarrow\infty} \int_E f_n(x)dx. \] Proof in~\cite{royden} and~\cite{rudin}. It involves Proposition~2.37.\\ \smallskip\\ {\bf Proposition 2.40 (Lebesgue's Dominated Convergence Theorem):} Let $\{f_n\}$ be a sequence of measurable functions on a measurable set~$E$ such that there is a function $f$ on~$E$ with $f_n\rightarrow f$ pointwise a.e. on~$E$. If there is a function $g$ that is Lebesgue integrable over~$E$ such that $|f_n|\leq g$ on~$E$ for all~$n$, then \[ \int_E f(x)dx = \lim_{n\rightarrow\infty} \int_E f_n(x)dx. \] Proof in~\cite{royden} and~\cite{rudin}. 
It involves Proposition~2.39.\\ \smallskip\\ {\bf Corollary 2.6 (Bounded Convergence Theorem):} Let $\{f_n\}$ be a sequence of measurable functions on a measurable set~$E$ of finite measure such that there is a function $f$ on~$E$ with $f_n\rightarrow f$ pointwise a.e. on~$E$. If there is a real number $M$ such that $|f_n|\leq M$ on~$E$ for all~$n$, then \[ \int_E f(x)dx = \lim_{n\rightarrow\infty} \int_E f_n(x)dx. \] {\bf \large\ \\ The $L^p$ Spaces}\\ \smallskip\\ {\bf Definition 2.22:} Given a real number $p>0$, the $L^p[0,1]$ or $L^p$ {\bf space} is the space of measurable functions on~$[0,1]$ satisfying: the $p$-th power of the absolute value of each function in the space is Lebesgue integrable over~$[0,1]$. Thus, a measurable function~$f$ on~$[0,1]$ is in $L^p$ (the $L^p$ space) if and only if \[ \int_{[0,1]} |f(x)|^p dx < \infty. \] Writing $\int_0^1 |f(x)|^p dx$ instead of $\int_{[0,1]} |f(x)|^p dx$ for $f$ in~$L^p$, we define \[ ||f||_p = \{ \int_0^1 |f(x)|^p dx \}^{1/p} \] and call $||\cdot||_p$ the $L^p$ {\bf norm}, and $||f||_p$ the $L^p$ {\bf norm} of~$f$.\\ Finally, the $L^{\infty}[0,1]$ or $L^{\infty}$ {\bf space} is the space of measurable functions on $[0,1]$ satisfying: each function in the space is bounded on~$[0,1]$ except possibly on a set of measure zero. Thus, a measurable function $f$ on~$[0,1]$ is in~$L^{\infty}$ (the $L^{\infty}$ space) if and only if the essential supremum of $|f|$ on~$[0,1]$ is finite, i.e., \[ \mathrm{ess}\ \sup |f(t)| = \inf \{M: m(\{t:|f(t)|>M\}) = 0\} <\infty. \] We also note $\mathrm{ess}\ \sup |f(t)| = \inf\ \{\sup_{t\in [0,1]} |g(t)|: g=f\ \mathrm{a.e.}\}$. Defining \[ ||f||_{\infty} = \mathrm{ess}\ \sup |f(t)| \] we call $||\cdot||_{\infty}$ the $L^{\infty}$ {\bf norm}, and $||f||_{\infty}$ the $L^{\infty}$ {\bf norm} of~$f$. \\ \smallskip\\ {\bf Observation 2.24:} In the definition of the $L^p$ spaces, the interval~$[0,1]$ was chosen for simplicity. 
Given a real number $p>0$, if $f\in L^p$, then clearly $cf\in L^p$ for any real number~$c$. In addition, if $f,\ g\in L^p$, since $|f+g|^p\leq 2^p(|f|^p + |g|^p)$, then $f+g\in L^p$. Thus, $L^p$ is a linear space and so is $L^{\infty}$.\\ Given $f$ in $L^p$, $0<p\leq\infty$, then the $L^p$ norm of $f$, i.e., $||f||_p$ (Definition~2.22), equals zero if and only if~$f=0$~a.e. on~$[0,1]$. Accordingly, we think of the elements of $L^p$ as equivalence classes of functions, each class composed of functions that are equal to one another~a.e. on~$[0,1]$, and as noted in Observation~2.21, some functions undefined on subsets of $[0,1]$ of measure zero. Thus, assuming there is no distinction between two functions in the same equivalence class, we note that given $p$, $1\leq p\leq \infty$, then the $L^p$ norm $||\cdot||_p$ is indeed a norm since clearly $||cf||_p=|c|\,||f||_p$ for any real number~$c$, and as will be seen below, if $f,\ g\in L^p$, then $||f+g||_p\leq ||f||_p + ||g||_p$. \\ \smallskip\\ {\bf Proposition 2.41 (H\"{o}lder's inequality):} Given $p, q$, $1\leq p,q \leq\infty$, with \mbox{$1/p + 1/q = 1$}, if $f\in L^p$ and $g\in L^q$, then $f\cdot g \in L^1$ and \[ \int_0^1 |(f\cdot g)(x)|dx \leq ||f||_p \cdot ||g||_q\,, \] with equality for $p, q$, $1<p,q<\infty$ if and only if $\alpha|f|^p = \beta|g|^q$~a.e. for nonzero constants $\alpha$ and~$\beta$. Proof in \cite{royden}, \cite{rudin2}. Proof in \cite{rudin} for $p=q=2$. \\ \smallskip\\ {\bf Proposition 2.42 (Minkowski's inequality):} Given $p$, $1\leq p\leq \infty$, if $f, g\in L^p$, then $f+g\in L^p$ and \[ ||f+g||_p \leq ||f||_p + ||g||_p. \] Proof in \cite{royden}, \cite{rudin2}. Proof in \cite{rudin} for $p=2$.\\ \smallskip\\ {\bf Observation 2.25:} For $p=q=2$, H\"{o}lder's inequality becomes {\bf Schwarz's inequality}: \[ \int_0^1 |(f\cdot g)(x)|dx \leq ||f||_2 \cdot ||g||_2 = \{\int_0^1 |f(x)|^2dx\}^{1/2} \cdot \{\int_0^1 |g(x)|^2dx\}^{1/2}.
\] Note all of the above inequalities (H\"{o}lder's, Minkowski's, Schwarz's), in which all integrations are over $[0,1]$, can be generalized by integrating everywhere over a measurable set instead. Proof in~\cite{rudin2}. \\ \smallskip\\ {\bf Definition 2.23:} Given a norm $||\cdot||$ on a linear space $X$, we say $X$ is a {\bf normed linear space with norm}~$||\cdot||$. We say this especially if among all the possible norms that can be defined on~$X$, our current intent is to associate $X$ exclusively with~$||\cdot||$.\\ A sequence $\{x_n\}$ in a normed linear space with norm~$||\cdot||$ is said to {\bf converge in norm} to an element~$x$ in the space if, given~$\epsilon>0$, there is an integer $N>0$ such that for $n\geq N$, then~$||x_n-x||<\epsilon$.\\ A sequence $\{x_n\}$ in a normed linear space with norm~$||\cdot||$ is said to be a {\bf Cauchy sequence} if, given~$\epsilon>0$, there is an integer $N>0$ such that for $n, m\geq N$, then~$||x_n-x_m||<\epsilon$.\\ A normed linear space with norm $||\cdot||$ is called {\bf complete} if every Cauchy sequence in the space converges in norm to an element of the space. \\ \smallskip\\ {\bf Proposition 2.43 (Riesz-Fischer):} Given $p$, $1\leq p\leq \infty$, then $L^p$ is complete. Moreover, if $\{f_n\} \rightarrow f$ in $L^p$, then a subsequence of $\{f_n\}$ converges pointwise to $f$ a.e. on~$[0,1]$. Proof of first part in~\cite{royden}, \cite{royden2}. It involves Proposition~2.37 (Lebesgue's Monotone Convergence Theorem), Proposition~2.39 (Fatou's Lemma), Proposition~2.40 (Lebesgue's Dominated Convergence Theorem) and ii of Proposition~2.34. Proof of last part in~\cite{royden2}. \\ \smallskip\\ {\bf Proposition 2.44 (Density of simple and step functions in $L^p$ space):} Given $p$, $1\leq p\leq \infty$, then the subspace of simple functions on $[0,1]$ in $L^p$ is dense in $L^p$. Given $p$, $1\leq p<\infty$, then the subspace of step functions on $[0,1]$ is dense in~$L^p$. Proof in~\cite{royden2}.
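Before leaving the $L^p$ spaces, H\"{o}lder's and Minkowski's inequalities (Propositions 2.41 and 2.42) can be checked numerically. The sketch below is ours, not part of the notes: it approximates the integrals over $[0,1]$ by midpoint Riemann sums, which is justified for the continuous integrands chosen here since their Riemann and Lebesgue integrals coincide (Proposition 2.33); the particular functions $f$, $g$ and the exponents are arbitrary choices.

```python
# Numerical sanity check of Hölder's and Minkowski's inequalities on [0,1].
# Integrals are approximated by midpoint Riemann sums; for these continuous
# integrands the Riemann and Lebesgue integrals agree (Proposition 2.33).
import math

def integral(f, n=50_000):
    """Midpoint Riemann sum of f over [0,1] with a uniform partition."""
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

def lp_norm(f, p, n=50_000):
    """Approximate ||f||_p on [0,1]."""
    return integral(lambda x: abs(f(x)) ** p, n) ** (1.0 / p)

f = lambda x: math.sin(3 * x) + 1.0   # arbitrary continuous functions
g = lambda x: x ** 2

p, q = 3.0, 1.5                       # conjugate exponents: 1/p + 1/q = 1

# Hölder:  ∫|f·g| ≤ ||f||_p · ||g||_q
holder_lhs = integral(lambda x: abs(f(x) * g(x)))
holder_rhs = lp_norm(f, p) * lp_norm(g, q)
assert holder_lhs <= holder_rhs

# Minkowski:  ||f+g||_p ≤ ||f||_p + ||g||_p
mink_lhs = lp_norm(lambda x: f(x) + g(x), p)
assert mink_lhs <= lp_norm(f, p) + lp_norm(g, p)
```

Since $|f|^p$ and $|g|^q$ are not proportional here, both inequalities hold with a comfortable margin, so the floating-point approximation does not obscure them.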
\section{\large Absolute Continuity and its Connections to\\ Lebesgue Integration} {\bf Definition 3.1:} Let $f$ be a real-valued function defined on an interval~$[a,b]$. Given $x\in [a,b]$, if for some finite number~$I$, \[ I = \lim_{t\rightarrow x}\frac{f(t)-f(x)}{t-x},\ \ \ a<t<b,\ \ \ t\not= x, \] then $f$ is said to be {\bf differentiable} at $x$; a number $f'(x)$ is defined and said to exist by setting $f'(x)$ equal to~$I$; and $f'$ is said to exist at~$x$. Accordingly, $f'$ is a function associated with $f$, called the {\bf derivative} of~$f$, whose domain of definition is the set of points $x$ at which $f'$ exists. If $f'$ exists at every point of a set $E\subseteq [a,b]$, we say $f$ is differentiable on~$E$ or $f'$ exists on $E$.\\ Note that given $x\in [a,b]$, if the limit defining $I$ above equals $\infty$ or $-\infty$ then the convention here is to say that $f$ is not differentiable at~$x$. \\ \smallskip \\ {\bf Proposition 3.1 (Fundamental Theorem of calculus I):} Let $f$ be Riemann integrable over an interval~$[a,b]$. If there is a function $F$ differentiable on~$[a,b]$ such that $F'=f$ on~$[a,b]$, then \[ {\cal R} \int_a^b f(x)dx = F(b)-F(a). \] Proof in~\cite{apostol} and~\cite{rudin}.\\ \smallskip\\ {\bf Proposition 3.2 (Fundamental Theorem of calculus II):} Let $f$ be Riemann integrable over an interval~$[a,b]$. Define a function $F$ by \[ F(x) = {\cal R} \int_a^x f(t)dt,\ \ \ x\in [a,b]. \] Then $F$ is continuous on $[a,b]$, and if $f$ is continuous at~$x\in [a,b]$, then $F$ is differentiable at~$x$ with~$F'(x)=f(x)$. Proof in~\cite{apostol} and~\cite{rudin}.\\ \smallskip\\ {\bf Corollary 3.1 (Differentiability of the Riemann integral - Fundamental Theorem of calculus for continuous functions):}\\ i. If $f$ is Riemann integrable over $[a,b]$ and $F(x) = {\cal R} \int_a^x f(t)dt$, $x\in [a,b]$, then $F'=f$ a.e. on~$[a,b]$.\\ ii. 
If $f$ is continuous on~$[a,b]$, then there is a differentiable function~$F$ on~$[a,b]$ such that $F'=f$ on~$[a,b]$, and ${\cal R} \int_a^x f(t)dt = F(x)$. If $G$ is any differentiable function on~$[a,b]$ such that $G'=f$ on~$[a,b]$, then $G-F= C$, $C$ a constant, and ${\cal R} \int_a^x f(t)dt = G(x)- G(a)$, $G(a) = C$. \\ \smallskip\\ {\bf Proof:} i follows from Proposition~3.2 and Proposition~2.35 (Lebesgue's criterion). First part of ii from Proposition~3.2. Proof in~\cite{rudin} that $G'-F'=0$ on~$[a,b]$ implies $G-F=C$ on~$[a,b]$, $C$ a constant. ${\cal R} \int_a^x f(t)dt = G(x)- G(a)$ from Proposition~3.1. Clearly $G(a)=C$ as ${\cal R} \int_a^x f(t)dt = F(x)$. \\ \smallskip\\ {\bf Definition 3.2:} Let $f$ be a real-valued function defined on an interval~$[a,b]$. Given $x\in [a,b]$, if for some finite number~$I$, $I = \lim_{t\rightarrow x} f(t)$, $a\leq t< x$, then a number $f(x^-)$ called the {\bf left-hand limit} of $f$ at~$x$ is defined by setting $f(x^-)$ equal to~$I$. Similarly, if for some finite number~$I$, $I = \lim_{t\rightarrow x} f(t)$, $x< t\leq b$, then a number $f(x^+)$ called the {\bf right-hand limit} of $f$ at~$x$ is defined by setting $f(x^+)$ equal to~$I$. \\ \smallskip\\ {\bf Observation 3.1:} A function $f$ is continuous at $x\in [a,b]$ if and only if $f(x^-)$ and $f(x^+)$ exist and $f(x)=f(x^-)=f(x^+)$. \\ \smallskip\\ {\bf Proposition 3.3 (\bf Monotonic functions: continuity):} Let $f$ be a monotonic real-valued function on an interval~$[a,b]$. Then $f(x^-)$ and $f(x^+)$ exist for every point $x\in [a,b]$, and the set of points of $[a,b]$ at which $f$ is discontinuous is at most countable. Proof in~\cite{rudin}.\\ \smallskip \\ {\bf Corollary 3.2 (Monotonic surjective $f$ implies $f$ is continuous):} If $f$ is monotonic from $[a,b]$ onto $[c,d]$, then $f$ is continuous on~$[a,b]$. \\ \smallskip\\ {\bf Proof:} Assume $f$ is discontinuous at $x\in [a,b]$.
Since $f(x^-)$ and $f(x^+)$ exist from Proposition~3.3, it must be that $f(x^-)\not=f(x^+)$ so that $y$ exists in $[c,d]$ between $f(x^-)$ and $f(x^+)$, $y\not= f(x)$. But then $y$ can not be in the range of~$f$ as $f$ is monotonic, which contradicts that the range of $f$ is all of~$[c,d]$. \\ \smallskip\\ {\bf Observation 3.2:} A function $f$ is described below from $[0,1]$ into~$[0,1]$ that is strictly increasing on $[0,1]$, discontinuous at each rational number in~$(0,1]$, continuous at each irrational number in~$[0,1]$ and at~zero, $f(0)=0$, $f(1)=1$. \\ \smallskip\\ Let $\{r_n\}_{n=1}^{\infty}$ be an enumeration of the rational numbers in~$(0,1]$.\\ Given $x\in (0,1]$, let $R(x)=\{n: r_n\leq x\}$, and set $R(0)=\emptyset$.\\ Define $f:[0,1]\rightarrow [0,1]$ by $f(0)=0$ and \[ f(x)=\sum_{n\in R(x)} 1/2^n,\ \ \ x\in (0,1]. \] Given $x,x' \in [0,1]$, $x<x'$, then $R(x)\subseteq R(x')$, and since there is a rational number~$r$ such that $x<r<x'$, then $R(x)\not= R(x')$. Thus, it must be that $f(x)<f(x')$ so that $f$ is strictly increasing on~$[0,1]$.\\ \smallskip\\ Since $R(1)$ includes every $n$ then $f(1)=1$.\\ \smallskip\\ Let $x$ be a rational number in $(0,1]$. We show $f$ is discontinuous at~$x$.\\ For some integer $k>0$, $x= r_k$. Thus, $k\in R(x)$ but $k\not\in R(x')$ for every $x'\in [0,1]$, $x'<x$. $R(x')\subseteq R(x)$ then implies $f(x)-f(x')>1/2^k$.\\ Thus, $f$ is discontinuous at $x$ (a rational number in~$(0,1]$).\\ \smallskip\\ Let $x$ be an irrational number in $[0,1]$. We show $f$ is continuous at~$x$.\\ Given $\epsilon>0$, choose integer $N>0$ such that $1/2^N <\epsilon$, and let \[ \delta=\min_{n\leq N}|x-r_n|. \] Given $x'\in [0,1]$, $x'<x$, $|x-x'|<\delta$, then $R(x')\subseteq R(x)$, and if $n\in R(x)\setminus R(x')$, it must be that $x'<r_n<x$ so that $|x-r_n|<\delta$ and thus~$n>N$. 
Accordingly, $f(x)-f(x')\leq \sum_{n=N+1}^{\infty}1/2^n =1/2^N<\epsilon$.\\ Finally, given $x'\in [0,1]$, $x'>x$, $|x'-x|<\delta$, then $R(x)\subseteq R(x')$, and if $n\in R(x')\setminus R(x)$, it must be that $x<r_n\leq x'$ so that $|x-r_n|<\delta$ and thus~$n>N$. Accordingly, $f(x')-f(x)\leq \sum_{n=N+1}^{\infty}1/2^n =1/2^N<\epsilon$.\\ Thus, $f$ is continuous at $x$ (an irrational number in~$[0,1]$) and at zero by an argument similar to the one just used for the case~$x'>x$. \\ \smallskip\\ {\bf Proposition 3.4 (\bf Monotonic functions: differentiability):} Let $f$ be a monotonic real-valued function on an interval~$[a,b]$. Then $f$ is differentiable a.e. on~$[a,b]$, and $f'$ is measurable. If, in addition, $f$ is increasing on~$[a,b]$ (note $f'\geq 0$ where it exists), then $f'$ is Lebesgue integrable over~$[a,b]$, and \[ \int_a^b f'(x) dx \leq f(b)- f(a), \] where we write $\int_a^b f'(x)dx$ instead of $\int_{[a,b]} f'(x)dx$. Proof in~\cite{bruckner} and \cite{royden}. It involves Proposition~2.39 (Fatou's Lemma) and ii of Proposition~2.34. \\ \smallskip\\ {\bf Definition 3.3:} Let $f$ be a real-valued function defined on an interval~$[a,b]$. Given a partition $P= \{x_0, x_1,\ldots,x_n\}$ of $[a,b]$, set $\Delta f_i = f(x_i)-f(x_{i-1})$, $i=1,\ldots,n$, and define \[ V(f;a,b) = \sup_P \sum_{i=1}^n |\Delta f_i|, \] the supremum taken over all partitions $P$ of $[a,b]$.\\ $f$ is said to be of {\bf bounded variation} on~$[a,b]$ if~$V(f;a,b)<\infty$. \\ \smallskip\\ {\bf Proposition 3.5 (Jordan decomposition):} A function $f$ is of bounded variation on $[a,b]$ if and only if it is the difference of two monotonically increasing real-valued functions on~$[a,b]$. Proof in \cite{royden} and~\cite{rudin}.\\ \smallskip\\ {\bf Corollary 3.3:} If $f$ is of bounded variation on $[a,b]$ then $f$ is differentiable a.e. 
on~$[a,b]$, and $f'$ is measurable and Lebesgue integrable over~$[a,b]$.\\ \smallskip\\ {\bf Proof:} By Proposition~3.5, $f = f_1 - f_2$ on~$[a,b]$, where $f_1$ and $f_2$ are monotonically increasing on~$[a,b]$. Thus, by Proposition~3.4, $f'$ is measurable and exists a.e. on~$[a,b]$. Since $|f'|\leq |f_1'| + |f_2'| = f_1' + f_2'$ a.e. on~$[a,b]$, then again by Proposition~3.4, \[ \int_a^b |f'(x)|dx \leq \int_a^b f_1'(x)dx+\int_a^b f_2'(x)dx \leq f_1(b)-f_1(a) + f_2(b)-f_2(a), \] and therefore $f'$ is Lebesgue integrable over $[a,b]$ (Proposition~2.31). \\ \smallskip\\ {\bf Definition 3.4:} Given a Lebesgue integrable function $f$ over $[a,b]$, and a real-valued function $F$ on~$[a,b]$ such that \[ F(x) = F(a) + \int_a^x f(t)dt,\ \ \ x\in [a,b],\] then the function $F$ is said to be an {\bf indefinite integral} of~$f$ over~$[a,b]$. \\ \smallskip\\ {\bf Proposition 3.6 (Indefinite integral of $f$ zero everywhere, then $f$ is zero a.e.):} If $f$ is Lebesgue integrable over $[a,b]$ and $\int_a^x f(t)dt = 0$ for all $x\in [a,b]$, then $f = 0$ a.e. on $[a,b]$. Proof in~\cite{royden}. It involves Proposition~2.15. \\ \smallskip\\ {\bf Proposition 3.7 (Differentiability of the indefinite integral):} Let $f$ be Lebesgue integrable over an interval~$[a,b]$, and $F$ a function such that \[ F(x) = F(a) + \int_a^x f(t)dt,\ \ \ x\in [a,b], \] i.e., an indefinite integral. Then $F'=f$ a.e. on~$[a,b]$.\\ Proof in~\cite{royden}. It involves Proposition~3.6, Corollary~2.6 (Bounded Convergence Theorem), the inequality in Proposition~3.4, and i of Proposition~2.34. \\ \smallskip\\ {\bf Definition 3.5:} A real-valued function $f$ defined on an interval $[a,b]$ is said to be {\bf absolutely continuous} on $[a,b]$ if for every $\epsilon>0$ there is $\delta>0$ such that \[ \sum_{i=1}^n |f(x_i') - f(x_i)| < \epsilon \] for any integer $n>0$ and any disjoint collection of open intervals $(x_i,x_i') \subseteq [a,b]$, $i=1,\ldots,n$, with \[ \sum_{i=1}^n (x_i' - x_i) < \delta.
\] {\bf Proposition 3.8 (Absolutely continuous $f$ is constant if $f'$ is zero a.e.):} If $f$ is absolutely continuous on $[a,b]$ with $f'=0$ a.e. on~$[a,b]$, then $f$ is constant on~$[a,b]$, i.e., $f(x)=f(a)$ for all~$x\in [a,b]$. Proof in~\cite{royden}.\\ \smallskip\\ {\bf Observation 3.3:} Absolutely continuous $\Rightarrow$ {\bf uniformly continuous}~\cite{rudin} $\Rightarrow$ continuous. Moreover, a continuous real-valued function of compact domain is uniformly continuous~\cite{rudin}. Accordingly, a function $f$ called the {\bf Cantor function} from $[0,1]$ onto $[0,1]$ that is continuous, thus uniformly continuous, but not absolutely continuous is described below. This function $f$ is monotonically increasing on~$[0,1]$ and thus differentiable a.e. on~$[0,1]$. Actually, $f'=0$ at points not in the Cantor set (described in Observation~2.4) and does not exist at points in it. Thus, $f'=0$ a.e. on~$[0,1]$, $f$ is not constant on~$[0,1]$, hence $f$ can not be absolutely continuous on~$[0,1]$ by Proposition~3.8. \\ \smallskip\\ For this purpose, we note that given $x\in [0,1]$, $x$ can be expressed in its ternary expansion as $0.a_1a_2a_3\,$$\cdot\cdot\cdot$ so that $x=\sum_{n=1}^{\infty} a_n/3^n$, $a_n\in \{0,1,2\}$. Note $x=1$ is then expressed as~$0.222\,$$\cdot\cdot\cdot$. Similarly, given $y\in [0,1]$, $y$ can be expressed in its binary expansion as $0.b_1b_2b_3\,$$\cdot\cdot\cdot$ so that $y=\sum_{n=1}^{\infty} b_n/2^n$, $b_n\in \{0,1\}$. Note $y=1$ is then expressed as~$0.111\,$$\cdot\cdot\cdot$.\\ \smallskip\\ In Observation 2.4 the Cantor set was identified as $\cap_{n=1}^{\infty} E_n$, where $E_1$ is the union of $[0,1/3]$ and $[2/3,1]$ obtained by removing the open middle third of $[0,1]$, $E_2$ is the union of $[0,1/9]$, $[2/9,3/9]$, $[6/9,7/9]$, $[8/9,1]$ obtained by removing the open middle thirds of $[0,1/3]$ and $[2/3,1]$, and so on. 
Actually, with $E_0=[0,1]$, then at stage~$m$, open intervals of the form $((3k-2)/3^m,(3k-1)/3^m)$, $k\in \{1,\ldots,3^{m-1}\}$, are removed from $E_{m-1}$, if contained in it, to obtain~$E_m$. We note that endpoints of any such intervals have two ternary expansions, and in what follows, only the expansion of any such point that contains no 1's is considered. Fixing one of these removed open intervals, we note it is the open middle third of a closed interval in $E_{m-1}$, all numbers in the closed interval in $E_{m-1}$ having the same first $m-1$ digits in their ternary expansions, none of them equal to~1. Finally, we note numbers in the removed open interval have 1 as the $m^{th}$ digit of their ternary expansions, while numbers in the closed left and right thirds of the closed interval, closed thirds that become part of $E_m$, have 0 and 2, respectively, as the $m^{th}$ digit of their ternary expansions. Thus, the Cantor set is exactly the set of numbers in $[0,1]$ that have no 1's in their ternary expansions. \\ \smallskip\\ An attempt can be made to identify the Cantor function as follows. Recalling that $(1/3,2/3)$ was the open middle third that was removed from $[0,1]$ to obtain~$E_1$, given $x$ in its closure, i.e., in $[1/3,2/3]$, set $f(x) = 1/2$. Again, recalling that $(1/9,2/9)$ and $(7/9,8/9)$ were the open middle thirds that were removed from $[0,1/3]$ and $[2/3,1]$, respectively, to obtain~$E_2$, given $x$ in the closure of $(1/9,2/9)$, i.e., in $[1/9,2/9]$, set $f(x)=1/4$, and given $x$ in the closure of $(7/9,8/9)$, i.e., in $[7/9,8/9]$, set $f(x)=3/4$. Accordingly, $f$ can be identified this way at each stage of the construction of the Cantor set but this is not enough as it has not been identified for points in ``the limit'' that are part of the Cantor~set. \\ \smallskip \\ The Cantor function is properly identified as follows.
Given $x\in [0,1]$ with ternary expansion $0.a_1a_2a_3\,$$\cdot\cdot\cdot$ so that $x=\sum_{n=1}^{\infty} a_n/3^n$, $a_n\in \{0,1,2\}$, let $N$ be the smallest $n$ such that $a_n$ equals~1. If such an $n$ does not exist, i.e., $x$~is in the Cantor set, let~$N=\infty$. With $b_n=a_n/2$ if $n<N$, $b_n=1$ if $n=N$, and $b_n=0$ if $n>N$, let $y$ be the number in $[0,1]$ with binary expansion $0.b_1b_2b_3\,$$\cdot\cdot\cdot$ so that $y=\sum_{n=1}^{\infty} b_n/2^n = \sum_{n=1}^N b_n/2^n$, and set~$f(x)=y$. The function $f$ identified this way is then called the Cantor function.\\ \smallskip\\ {\bf Proposition 3.9:} Let $f$ be the Cantor function. Then $f$ is continuous, thus uniformly continuous, from $[0,1]$ onto~$[0,1]$. In addition, $f$ is monotonically increasing on~$[0,1]$ and thus differentiable a.e. on~$[0,1]$. Actually, $f'=0$ at points not in the Cantor set and does not exist at points in it. \\ \smallskip\\ {\bf Proof:} Given $x_1$, $x_2 \in [0,1]$, $x_1<x_2$, we show $f(x_1)\leq f(x_2)$.\\ Let $0.a_1a_2a_3$$\cdot\cdot\cdot$, $0.c_1c_2c_3$$\cdot\cdot\cdot$ be $x_1$, $x_2$, respectively, in their ternary expansions. Let $0.b_1b_2b_3$$\cdot\cdot\cdot$, $0.d_1d_2d_3$$\cdot\cdot\cdot$ be $f(x_1)$, $f(x_2)$, respectively, in their binary expansions.\\ Let $N_1$ be the smallest $n$ such that $a_n=1$; $N_1=\infty$ if there is no such~$n$.\\ Let $N_2$ be the smallest $n$ such that $c_n=1$; $N_2=\infty$ if there is no such~$n$.\\ Let $N'$ be the smallest $n$ such that $a_n<c_n$.\\ If $N'>N_1$, since then $c_n=a_n$, $n=1,\ldots,N_1$, and, in particular, $c_{N_1}=a_{N_1}=1$, it must be that $N_2=N_1$ so that $b_n=d_n$, $n=1,\ldots,N_1=N_2$, and therefore~$f(x_1)=\sum_{n=1}^{N_1}b_n/2^n=\sum_{n=1}^{N_2}d_n/2^n=f(x_2)$.\\ Similarly if $N'>N_2$, and the case $N'=N_1=N_2$ can not be.\\ If $N'=N_1$ and $N_2>N'$, since $a_{N_1}=1$, it must be that $c_{N_1}=2$ so that $b_n=d_n$, $n=1,\ldots,N_1-1$, $b_{N_1}=d_{N_1}=1$. 
Therefore, $\sum_{n=1}^{N_1}b_n/2^n=\sum_{n=1}^{N_1}d_n/2^n$ thus $f(x_1)=\sum_{n=1}^{N_1}b_n/2^n\leq \sum_{n=1}^{N_2}d_n/2^n=f(x_2)$.\\ If $N'=N_2$ and $N_1>N'$, since $c_{N_2}=1$, it must be that $a_{N_2}=0$ so that $b_n=d_n$, $n=1,\ldots,N_2-1$, $b_{N_2}=0$, $d_{N_2}=1$. Therefore, $\sum_{n=1}^{N_2}b_n/2^n<\sum_{n=1}^{N_2}d_n/2^n$ thus $f(x_1)=\sum_{n=1}^{N_1}b_n/2^n\leq \sum_{n=1}^{N_2}d_n/2^n=f(x_2)$.\\ Finally, if $N_1>N'$, $N_2>N'$, since $a_{N'}<c_{N'}$, it must be that $a_{N'}=0$, $c_{N'}=2$, so that $b_n=d_n$, $n=1,\ldots,N'-1$, $b_{N'}=0$, $d_{N'}=1$. Therefore, $\sum_{n=1}^{N'}b_n/2^n<\sum_{n=1}^{N'}d_n/2^n$ thus $f(x_1)=\sum_{n=1}^{N_1}b_n/2^n\leq \sum_{n=1}^{N_2}d_n/2^n=f(x_2)$. Thus, $f(x_1)\leq f(x_2)$ for all cases and therefore $f$ is monotonically increasing. \\ \smallskip\\ Given $y\in [0,1]$, we show there is $x\in [0,1]$ with~$f(x)=y$.\\ Let $0.b_1b_2b_3$$\cdot\cdot\cdot$ be $y$ in its binary expansion.\\ For each $n$, let $a_n=2b_n$. Then for each $n$, $a_n$ is either zero or two.\\ Let $x$ be the point in $[0,1]$ which in its ternary expansion is $0.a_1a_2a_3$$\cdot\cdot\cdot$.\\ Then $x$ is actually a point in the Cantor set and~$f(x)=y$.\\ Thus, $f$ is onto~$[0,1]$.\\ \smallskip\\ That $f$ is continuous, thus uniformly continuous, on $[0,1]$, now follows from Corollary~3.2. \\ \smallskip\\ Finally, given $x\in [0,1]$, if $x$ is not in the Cantor set, we show $f'(x)=0$. On the other hand, if $x$ is in the Cantor set, we show that $f'(x)$ does not exist.\\ If $x$ is not in the Cantor set, its ternary expansion must contain 1 as one of its digits. Then for some integer $m$, $m>0$, the $m^{th}$ digit of the expansion equals~1 with no previous digits equal to~1. It follows that $x$ must be contained in an open interval of the form $((3k-2)/3^m,(3k-1)/3^m)$, $k\in \{1,\ldots,3^{m-1}\}$. Thus, it suffices to show $f$ is constant on any such interval. 
But this follows immediately since all numbers in the interval have the same first $m$ digits in their ternary expansions with 1 as the $m^{th}$ digit and no previous digits equal to~1.\\ On the other hand, if $x$ is in the Cantor set, its ternary expansion consists of 0's and 2's. Given an integer $n>0$, define $x_n$ to be the number in~$[0,1]$ whose ternary expansion is exactly that of $x$ except at its $n^{th}$ digit. Its $n^{th}$ digit is 0 if the $n^{th}$ digit of $x$ is 2, and it is 2 if that of $x$ is~0. It follows then that $|x_n-x|=2/3^n$ so that $x_n\rightarrow x$. Also, $|f(x_n)-f(x)|=1/2^n$. Thus, since $f$ is monotonically increasing, $(f(x_n)-f(x))/(x_n-x)=|f(x_n)-f(x)|/|x_n-x|=(1/2)(3/2)^n\rightarrow\infty$ so that $f'(x)$ does not~exist. \\Thus, $f'$ does not exist at points in the Cantor set and equals zero otherwise. \\ \smallskip \\ {\bf Corollary 3.4:} The Cantor function is not absolutely continuous on~$[0,1]$. \\ \smallskip\\ {\bf Proof:} Let $f$ be the Cantor function and assume it is absolutely continuous on~$[0,1]$. By Proposition~3.9, $f'=0$ a.e. on $[0,1]$. Thus, by Proposition~3.8, $f$ must be constant on~$[0,1]$, i.e., $f(x)=f(0)$ for all $x\in [0,1]$. But this is a contradiction as for instance $f(0)=0$ and~$f(1)=1$. Thus, $f$ is not absolutely continuous on~$[0,1]$. \\ \smallskip\\ {\bf Observation 3.4:} For the sake of completeness, we analyze the nondifferentiability of the Cantor function $f$ on the Cantor set.\\ Let $x_L$ and $x_R$ be points in the Cantor set that are the left and right endpoints of an open interval $I$ removed at the $m^{th}$ stage of the construction of the Cantor~set. It must then be that in their ternary expansions,~$x_L$ can be expressed as $0.a_1a_2$$\cdot\cdot\cdot$$a_{m-1}0\overline{2}$ (a~bar on a digit means the digit is infinitely repeated), and $x_R$ as $0.a_1a_2$$\cdot\cdot\cdot$$a_{m-1}2\overline{0}$, where the digits $a_1,a_2,\ldots,a_{m-1}$ each equal $0$ or $2$ if~$m>1$, and there are no such digits if~$m=1$.
Given $x$ in the open interval $I$, it must be that in its ternary expansion the $m^{th}$ digit is~1, and if $m>1$, then the first $m-1$ digits are also $a_1,a_2,\ldots,a_{m-1}$. Define for each integer $n$, $n>0$, a number $x_L^n$ in $I$ whose ternary expansion has its first $m$ digits as described above, and all other digits equal to~$0$ except the $(m+n)^{th}$ digit which is~$1$. Then $\lim_{n\rightarrow\infty} x_L^n = x_L$ and $f(x_L^n)=f(x_L)$ for all~$n$ so that $\lim_{n\rightarrow\infty} (f(x_L^n)-f(x_L))/(x_L^n-x_L)=0$. Since for any sequence $\{x_n\}$ in $I$ with $\lim_{n\rightarrow\infty} x_n = x_L$, we have $f(x_n)=f(x_L)$ for all~$n$, it follows that $(f(t)-f(x_L))/(t-x_L)$ has a limit as $t\rightarrow x_L$ from the right side of~$x_L$ and it is~zero. Similarly, define for each integer $n$, $n>0$, a number $x_R^n$ in $I$ whose ternary expansion has its first $m$ digits as described above, and all other digits equal to~$0$ except the $(m+1)^{th},\ldots,(m+n)^{th}$ digits which are~$2$. Then $\lim_{n\rightarrow\infty} x_R^n = x_R$ and $f(x_R^n)=f(x_R)$ for all~$n$ so that $\lim_{n\rightarrow\infty} (f(x_R^n)-f(x_R))/(x_R^n-x_R)=0$. Since for any sequence $\{x_n\}$ in $I$ with $\lim_{n\rightarrow\infty} x_n = x_R$, we have $f(x_n)=f(x_R)$ for all~$n$, it follows that $(f(t)-f(x_R))/(t-x_R)$ has a limit as $t\rightarrow x_R$ from the left side of~$x_R$ and it is~zero.\\ In the proof of Proposition 3.9, given any $x$ in the Cantor set, a sequence $\{x_n\}$ of points in the Cantor set was identified with $x_n\rightarrow x$ and $(f(x_n)-f(x))/(x_n-x)\rightarrow\infty$. We show that with $x_L$, $x_R$ as above, then $(f(t)-f(x_L))/(t-x_L)$ has a limit as $t\rightarrow x_L$ from the left side of~$x_L$ and it is~$\infty$, and $(f(t)-f(x_R))/(t-x_R)$ has a limit as $t\rightarrow x_R$ from the right side of~$x_R$ and it is also~$\infty$. Actually, we only show it for $x_R$ as the proof for $x_L$ can be similarly accomplished.
Accordingly, let $n\geq m$ be an integer such that the ternary expansions of~$t$ and~$x_R$ coincide in the first $n$ digits and the $(n+1)^{th}$ digit of~$t$ is~1 or~2. As mentioned above, all digits of~$x_R$ after the $m^{th}$ digit equal~0. Thus, $f(t)-f(x_R)\geq 1/2^{n+1}$ and $t-x_R \leq 2/3^{n+1} +2/3^{n+2} +\,\cdot\cdot\cdot=1/3^n$, so that \[ \lim_{t\rightarrow x_R^+} \frac{f(t)-f(x_R)}{t-x_R} \geq \lim_{n\rightarrow\infty}(1/2)(3/2)^n=\infty. \] Finally, it is of interest to note that if $x$ is any point in the Cantor set, then at stage~$m$ of the construction of the Cantor set, $x$ is in a closed interval $[a_m,b_m]\subset [0,1]$, where if $0.x_1x_2$$\cdot\cdot\cdot$ is~$x$ in its ternary expansion, then $0.x_1\cdot\cdot\cdot x_m\overline{0}$ is $a_m$ in its ternary expansion, and $0.x_1\cdot\cdot\cdot x_m\overline{2}$ is $b_m$ in its ternary expansion. It follows that $b_m-a_m = \sum_{i=m+1}^{\infty}2/3^i=1/3^m$ and $f(b_m)-f(a_m) = \sum_{i=m+1}^{\infty}1/2^i= 1/2^m$. Thus, with $a_m\leq x\leq b_m$, we have \[ \lim_{m\rightarrow\infty} \frac{f(b_m)-f(a_m)}{b_m-a_m}= \lim_{m\rightarrow\infty}(3/2)^m=\infty. \] If $x= x_L$, $x_L$ as above, then for some $m$, $x=b_m$ and $\lim_{m\rightarrow\infty} (f(x)-f(a_m))/(x-a_m)=\infty$, as expected. Similarly, if $x= x_R$, $x_R$ as above, then for some $m$, $x=a_m$ and $\lim_{m\rightarrow\infty} (f(x)-f(b_m))/(x-b_m)=\infty$, also as expected. As for a point $x$ in the Cantor set that is not an endpoint of an open interval removed at some stage of the construction of the Cantor set, it is easier to see that $\lim_{m\rightarrow\infty} (f(x)-f(a_m))/(x-a_m)=\infty$, and $\lim_{m\rightarrow\infty} (f(x)-f(b_m))/(x-b_m)=\infty$, by looking at the ternary expansions of $x$, $a_m$ and $b_m$. Actually, we only show it for $\{a_m\}$ as the proof for $\{b_m\}$ can be similarly accomplished. Accordingly, let $m>0$ be an integer such that the $(m+1)^{th}$ digit of $x$ in its ternary expansion, i.e., $x_{m+1}$, equals~2.
As mentioned above, the ternary expansions of $x$ and $a_m$ coincide in the first $m$ digits and all digits of $a_m$ after the $m^{th}$ digit equal~0. Thus, $f(x)-f(a_m)\geq 1/2^{m+1}$ and $x-a_m \leq 2/3^{m+1} +2/3^{m+2} +\,\cdot\cdot\cdot=1/3^m$, so that \[ \lim_{m\rightarrow\infty} \frac{f(x)-f(a_m)}{x-a_m} \geq \lim_{m\rightarrow\infty}(1/2)(3/2)^m=\infty. \] Note that $\lim_{m\rightarrow\infty} (f(x)-f(a_m))/(x-a_m)=$ $\lim_{m\rightarrow\infty} (f(x)-f(b_m))/(x-b_m)=\infty$ does not imply that $\lim_{t\rightarrow x} (f(x)-f(t))/(x-t)=\infty$. \\ \smallskip\\ {\bf Observation 3.5:} A function on $[a,b]$ that is a finite linear combination of absolutely continuous functions on~$[a,b]$ is absolutely continuous on~$[a,b]$. The proof is analogous to the proof that a finite linear combination of continuous functions is continuous. In addition, the product of two absolutely continuous functions on $[a,b]$ is absolutely continuous on~$[a,b]$.\\ \smallskip\\ {\bf Proposition 3.10 (Absolutely continuous $f$ implies $f$ is of bounded variation):} If $f$ is absolutely continuous on $[a,b]$, then $f$ is of bounded variation on~$[a,b]$. Proof in~\cite{royden}. \\ \smallskip\\ {\bf Corollary 3.5:} If $f$ is absolutely continuous on $[a,b]$ then $f$ is differentiable a.e. on~$[a,b]$, and $f'$ is measurable and Lebesgue integrable over~$[a,b]$.\\ \smallskip\\ {\bf Proposition 3.11 (Absolute continuity of the indefinite integral):} If $F$ is an indefinite integral over~$[a,b]$, then $F$ is absolutely continuous on~$[a,b]$. \\ \smallskip\\ {\bf Proof:} Assume (Definition 3.4) $F(x) = F(a) + \int_a^x f(t)dt$, $x\in [a,b]$, $f$ is Lebesgue integrable over~$[a,b]$. By Proposition~2.31, $|f|$ is Lebesgue integrable over~$[a,b]$.
Then by Proposition~2.38, given $\epsilon>0$ there is $\delta>0$ such that for each measurable set $A\subseteq [a,b]$ with $m(A)<\delta$, then~$\int_A |f(t)|dt<\epsilon$.\\ Given integer $n>0$ and disjoint open intervals $(x_i,x_i') \subseteq [a,b]$, $i=1,\ldots,n$, with $\sum_{i=1}^n (x_i' - x_i) < \delta$, let $A=\cup_{i=1}^n(x_i,x_i')$. Then $A$ is measurable and $m(A)<\delta$. Thus,~$\int_A |f(t)|dt<\epsilon$. Accordingly, then \begin{eqnarray*} \sum_{i=1}^n |F(x_i') - F(x_i)| &=& \sum_{i=1}^n |\int_a^{x_i'}f(t)dt - \int_a^{x_i}f(t)dt| = \sum_{i=1}^n |\int_{x_i}^{x_i'}f(t)dt|\\ &\leq& \sum_{i=1}^n \int_{x_i}^{x_i'}|f(t)|dt = \int_A |f(t)|dt<\epsilon. \end{eqnarray*} Thus, $F$ is absolutely continuous on $[a,b]$.\\ \smallskip\\ {\bf Proposition 3.12 (Equivalent conditions for an absolutely continuous function):} Given a real-valued function $f$ on $[a,b]$, then the following three conditions are equivalent:\\ i. $f$ is absolutely continuous on $[a,b]$.\\ ii. There exists a Lebesgue integrable function~$g$ over~$[a,b]$ such that\\ $f(x) = f(a) + \int_a^x g(t)dt$, $x\in [a,b]$.\\ (Note that then by Proposition~3.7, $f'=g$ a.e. on~$[a,b]$).\\ iii. $f'$ exists a.e. on $[a,b]$ and is Lebesgue integrable over~$[a,b]$, and\\ $f(x) = f(a) + \int_a^x f'(t)dt$, $x\in [a,b]$.\\ \smallskip\\ {\bf Proof:}\\ iii $\Rightarrow$ ii:\\ Take $g=f'$.\\ ii $\Rightarrow$ i:\\ This is Proposition 3.11.\\ i $\Rightarrow$ iii:\\ By Corollary 3.5, $f'$ exists a.e. on $[a,b]$ and is Lebesgue integrable over~$[a,b]$.\\ Let $G(x) = \int_a^x f'(t)dt$, $x\in [a,b]$. Then $G$ is an indefinite integral of~$f'$ over~$[a,b]$ and by Proposition~3.11, is absolutely continuous on~$[a,b]$, and so is the function $h = f-G$ by Observation~3.5.\\ By Proposition 3.7, $G'=f'$ a.e. on~$[a,b]$. Thus $h'=0$ a.e. 
on~$[a,b]$, and by Proposition~3.8, $h$ is constant on~$[a,b]$, i.e., $f-G=C$ on~$[a,b]$ for some constant~$C$, i.e., $f(x)-\int_a^x f'(t)dt=C$, $x\in [a,b]$.\\ Since $C=f(a)$, it then follows that $f(x) = f(a) + \int_a^x f'(t)dt$, $x\in [a,b]$. \\ \smallskip\\ {\bf Corollary 3.6 (Fundamental Theorem of Lebesgue integral calculus):} Given real-valued functions $f$, $g$ on~$[a,b]$, $f$ absolutely continuous on~$[a,b]$ and $f'=g$ a.e. on~$[a,b]$, then $f(x) = f(a) + \int_a^x g(t)dt$, $x\in [a,b]$. \\ \smallskip\\ {\bf Proposition 3.13 (Fundamental Theorem of Lebesgue integral calculus (Alternate form)):} Given a real-valued function $f$ on $[a,b]$, if $f'(x)$ exists for every $x\in [a,b]$, and $f'$ is Lebesgue integrable over~$[a,b]$, then $f(x) = f(a) + \int_a^x f'(t)dt$, $x\in [a,b]$. Proof in~\cite{rudin2}.\\ \smallskip\\ {\bf Corollary 3.7:} Given a real-valued function $f$ on $[a,b]$, if $f'$ exists everywhere on~$[a,b]$ and $f'$ is Lebesgue integrable over~$[a,b]$, then $f$ is absolutely continuous on~$[a,b]$.\\ \smallskip\\ {\bf Proof:} From Proposition 3.13 and then Proposition 3.12.\\ \smallskip\\ {\bf Proposition 3.14 (Change of variable for Riemann integral):} Given a strictly monotonic continuous function $u$ from an interval~$[a,b]$ onto an interval~$[c,d]$ ($u(a)=c$, $u(b)=d$ if $u$ is strictly increasing, $u(a)=d$, $u(b)=c$ if it is strictly decreasing), with $u'$ Riemann integrable over~$[a,b]$, and a real-valued Riemann integrable function $f$ over~$[c,d]$, then \[ {\cal R} \int_{u(a)}^{u(b)} f(x)dx = {\cal R} \int_a^b f(u(t))u'(t)dt, \] with ${\cal R} \int_{u(a)}^{u(b)} f(x)dx = -{\cal R} \int_c^d f(x)dx$ if $u$ is decreasing. Proof in \cite{apostol} and \cite{rudin}. Note that by Proposition 3.4, $u'$ exists a.e. 
on~$[a,b]$.\\ \smallskip\\ {\bf Proposition 3.15 (Substitution rule for Riemann integral):} Given a function $u$ from an interval $[a,b]$ into an interval $I$ such that $u'(x)$ exists for every $x\in [a,b]$ with $u'$ Riemann integrable over $[a,b]$, and a real-valued continuous function $f$ on $I$, then \[ {\cal R} \int_{u(a)}^{u(b)} f(x)dx = {\cal R} \int_a^b f(u(t))u'(t)dt, \] with ${\cal R} \int_{u(a)}^{u(b)} f(x)dx = -{\cal R} \int_{u(b)}^{u(a)} f(x)dx$ if $u(b) < u(a)$. \\ \smallskip\\ {\bf Proof:} By Proposition 3.2 (Fundamental Theorem of calculus II), with $e$ the left endpoint of $I$, the function $F$ defined by $F(x) = {\cal R} \int_e^x f(t) dt$, $x\in I$, satisfies $F'(x)=f(x)$ for every $x\in I$ since $f$ is continuous on~$I$. From the definition of~$F$, given $c,\,d\in I$, $c$ not necessarily less than~$d$, then $F(d)-F(c) = {\cal R} \int_c^d f(x)dx$. In particular, \[ F(u(b))-F(u(a)) = {\cal R} \int_{u(a)}^{u(b)} f(x)dx.\] Since $u$ is differentiable on $[a,b]$ and $F$ is differentiable on~$I$, the composite function $F\circ u$ is differentiable on~$[a,b]$ and by the usual chain rule of calculus, \mbox{$(F\circ u)'(t)$} $= F'(u(t))u'(t) = f(u(t))u'(t)$, for every $t\in [a,b]$. Thus, since $f(u(t))u'(t)$ is clearly Riemann integrable over~$[a,b]$, by Proposition~3.1 (Fundamental Theorem of calculus I), it must be that \begin{eqnarray*} {\cal R}\int_a^b f(u(t))u'(t)dt &=& {\cal R}\int_a^b (F\circ u)'(t)dt = (F\circ u)(b) - (F\circ u)(a)\\ &=& F(u(b))- F(u(a)) = {\cal R} \int_{u(a)}^{u(b)} f(x)dx. \end{eqnarray*} {\bf Proposition 3.16 (Substitution rule for Lebesgue integral):} Given an absolutely continuous function $u$ from an interval~$[a,b]$ into an interval~$I$, and a real-valued continuous function~$f$ on~$I$, then \[ \int_{u(a)}^{u(b)} f(x)dx = \int_a^b f(u(t))u'(t)dt, \] with $\int_{u(a)}^{u(b)} f(x)dx = -\int_{u(b)}^{u(a)} f(x)dx = -\int_{[u(b),u(a)]} f(x)dx$ if $u(b) < u(a)$. 
\\ \smallskip\\ {\bf Proof:} By Proposition 3.2 (Fundamental Theorem of calculus II), with $e$ the left endpoint of $I$, the function $F$ defined by $F(x) = {\cal R} \int_e^x f(t) dt$, $x\in I$, satisfies $F'(x)=f(x)$ for every $x\in I$ since $f$ is continuous on~$I$. From the definition of~$F$, given $c,\,d\in I$, $c$ not necessarily less than~$d$, then $F(d)-F(c) = {\cal R} \int_c^d f(x)dx$. In particular, \[ F(u(b))-F(u(a)) = {\cal R} \int_{u(a)}^{u(b)} f(x)dx = \int_{u(a)}^{u(b)} f(x)dx, \] where the last equation is by Proposition~2.33 (Riemann implies Lebesgue).\\ Since $u$ is differentiable a.e. on~$[a,b]$ and $F$ is differentiable on $I$, the composite function $F\circ u$ is differentiable a.e. on~$[a,b]$. Indeed it is differentiable at every point at which $u$ is differentiable. Thus, by the usual chain rule of calculus, \mbox{$(F\circ u)'(t)$} $= F'(u(t))u'(t) = f(u(t))u'(t)$, for $t\in [a,b]$ at which $u'$~exists.\\ Finally, we show $F\circ u$ is absolutely continuous on $[a,b]$ in order to use Corollary~3.6 (Fundamental Theorem of Lebesgue integral calculus) with $F\circ u$ as the absolutely continuous function in the hypothesis of the corollary. For this purpose, since $f$ is continuous and $u([a,b])$ is a compact subset of~$I$, assume $|f|\leq M$ on~$u([a,b])$, for some $M>0$. Given $\epsilon>0$, let $\delta>0$ correspond to $\epsilon/M$ in the definition of the absolute continuity of~$u$. Given integer~$n>0$ and disjoint open intervals $(t_i,t'_i)\subseteq [a,b]$, $i=1,\ldots,n$, with $\sum_{i=1}^n(t'_i-t_i)<\delta$, then \begin{eqnarray*} \sum_{i=1}^n|F\circ u(t'_i)-F\circ u(t_i)| &=& \sum_{i=1}^n|{\cal R}\int_{u(t_i)}^{u(t'_i)}f(x)dx|\\ &\leq& \sum_{i=1}^n|{u(t_i)}-{u(t'_i)}|M = M\sum_{i=1}^n|{u(t_i)}-{u(t'_i)}|\\ &<& M\epsilon/M=\epsilon. \end{eqnarray*} Thus, $F\circ u$ is absolutely continuous and by Corollary 3.6, it must be that \begin{eqnarray*} \int_a^b f(u(t))u'(t)dt &=& \int_a^b (F\circ u)'(t)dt = (F\circ u)(b) - (F\circ u)(a)\\ &=& F(u(b))- F(u(a)) = \int_{u(a)}^{u(b)} f(x)dx.
\end{eqnarray*} {\bf Observation 3.6:} Note that in the proof of Proposition~3.16 above, while proving that $F\circ u$ is absolutely continuous on $[a,b]$, we have actually proved that if $u$, $[a,b]$, $I$, $e$ are as given there and $f$ is Lebesgue integrable over $I$ and bounded on $I$, and the function $F$ is defined by $F(x) = \int_e^x f(t) dt$, $x\in I$, then $F\circ u$ is absolutely continuous on~$[a,b]$. At the end of this section, results are presented for carrying out a change of variable in Lebesgue integrals, useful in shape~analysis. \\ \smallskip\\ {\bf Proposition 3.17 (Saks' inequality \cite{saks}):} Given a real-valued function $f$ on $[a,b]$, a real number $r\geq 0$, and $E\subseteq [a,b]$ such that $|f'(x)|\leq r$ for each $x\in E$, then \[ m^*(f(E))\leq r\,m^*(E).\] {\bf Proof:} Given $\epsilon>0$, for every integer $n>0$, define \[E_n=\{x\in E:\, |f(x)-f(y)|<(r+\epsilon)|x-y| \mathrm{\ for\ all\ } y\in [a,b] \mathrm{\ with\ } 0<|x-y|<1/n\}.\] Since $E_n\subseteq E_{n+1}$ for all $n$ and $\cup_{n=1}^{\infty} E_n=E$, then by 2 of Proposition~2.13, $m^*(E)= \lim_{n\rightarrow\infty} m^*(E_n)$. Similarly, since $f(E_n)\subseteq f(E_{n+1})$ for all $n$ and $\cup_{n=1}^{\infty} f(E_n)=f(E)$, then again by 2 of Proposition~2.13, \[ m^*(f(E))= \lim_{n\rightarrow\infty} m^*(f(E_n)).\] Given integer $n>0$, let $\{I_k\}$ be a countable collection of open intervals covering $E_n$, i.e., $E_n\subseteq \cup_{k=1}^{\infty}I_k$, with \[ \sum_{k=1}^{\infty} m(I_k) < m^*(E_n) +\epsilon. \] Note $\{I_k\}$ can be chosen so that $m(I_k)<1/n$ for each~$k$. Then, for each~$k$, given $x$, $x'$ $\in E_n\cap I_k$, $x\not=x'$, from the definition of $E_n$, since $|x-x'|<1/n$, it must be that \[ |f(x)-f(x')| < (r+\epsilon)\,|x-x'| < (r+\epsilon)\,m(I_k). \] Thus,\[ m^* (f(E_n\cap I_k))\leq \sup_{x,x'\in E_n\cap I_k}\,|f(x)-f(x')| \leq (r+\epsilon)\,m(I_k).
\] Since $E_n=\cup_{k=1}^{\infty}\,(E_n\cap I_k)$, then $f(E_n)=\cup_{k=1}^{\infty}\,f(E_n\cap I_k)$, therefore, \begin{eqnarray*} m^*(f(E_n)) &\leq& \sum_{k=1}^{\infty}\,m^*(f(E_n\cap I_k)) \leq \sum_{k=1}^{\infty}\,(r+\epsilon)\,m(I_k)\\ &=& (r+\epsilon)\,\sum_{k=1}^{\infty}\,m(I_k) < (r+\epsilon)\,(m^*(E_n) +\epsilon). \end{eqnarray*} Thus, since as established above $m^*(E)= \lim_{n\rightarrow\infty} m^*(E_n)$ and $ m^*(f(E))= \lim_{n\rightarrow\infty} m^*(f(E_n))$, it must be that \begin{eqnarray*} m^*(f(E))&=& \lim_{n\rightarrow\infty} m^*(f(E_n))\\ &\leq& \lim_{n\rightarrow\infty} (r+\epsilon)\,(m^*(E_n)+\epsilon)\\ &=& (r+\epsilon)\,(m^*(E)+\epsilon). \end{eqnarray*} Hence, since $\epsilon>0$ is arbitrary, it must be that $m^*(f(E)) \leq r\,m^*(E)$. \\ \smallskip\\ {\bf Corollary 3.8:} Given a real-valued function $f$ on~$[a,b]$, let $E$ be a subset of $[a,b]$ on which $f'=0$. Then~$m(f(E))=m^*(f(E))=0$. \\ \smallskip\\ {\bf Proof:} By Proposition 3.17, $m^*(f(E))\leq 0\cdot m^*(E)=0$. Thus, $m(f(E))=0$. \\ \smallskip \\ {\bf Corollary 3.9:} Given a real-valued function $f$ on~$[a,b]$, and $E\subseteq [a,b]$ with $m(E)=0$, such that $f'$ exists on~$E$, then~$m(f(E))=m^*(f(E))=0$. \\ \smallskip\\ {\bf Proof:} Without any loss of generality assume $E\subseteq (a,b)$. 
Given $\epsilon >0$, let $\{I_k\}$ be a countable collection of open intervals covering~$E$, i.e., $E\subseteq \cup_{k=1}^{\infty}I_k$, with $\sum_{k=1}^{\infty} m(I_k) < \epsilon$, $I_k\subseteq (a,b)$ for each~$k$.\\ Given an integer $l\geq 0$, and an integer $k>0$, let \[ E_{l} =\{x\in E:\, f'(x)\ \mathrm{exists\ and\ } |f'(x)|\leq l\} \] and \[ E_{lk} =\{x\in I_k:\, f'(x)\ \mathrm{exists\ and\ } |f'(x)|\leq l\}.\] Note $E_l\subseteq \cup_{k=1}^{\infty}E_{lk}$ so that $f(E_l)\subseteq \cup_{k=1}^{\infty}f(E_{lk})$.\\ By Proposition 3.17, $m^*(f(E_{lk}))\leq l\,m^*(E_{lk})$ for each~$k$, and therefore, \[ m^*(f(E_l))\leq\sum_{k=1}^{\infty} m^*(f(E_{lk}))\leq\sum_{k=1}^{\infty}l\, m^*(E_{lk}) \leq\sum_{k=1}^{\infty}l\,m(I_k)<l\,\epsilon. \] Since $\epsilon>0$ is arbitrary, then $m^*(f(E_l))=0$.\\ Finally, note $f(E_{l+1})\supseteq f(E_l)$ for each~$l$, and $f(E)=\cup_{l=0}^{\infty}\,f(E_l)$.\\ Thus, by 2 of Proposition 2.13, $m^*(f(E))=\lim_{\,l\rightarrow\infty} m^*(f(E_l))=0$. \\ \smallskip\\ {\bf Corollary 3.10 (Saks' Theorem \cite{saks}):} Given a real-valued function $f$ on~$[a,b]$, and $E\subseteq [a,b]$ such that $f'$ exists on~$E$, if $f'=0$ a.e. on~$E$, then~$m(f(E))=0$.\\ \smallskip\\ {\bf Proof:} Let $E_1$, $E_2$ be subsets of $E$, $E=E_1\cup E_2$, $f'=0$ on~$E_1$ and $m(E_2)=0$. By Corollary~3.8, $m(f(E_1))=0$. By Corollary~3.9, $m(f(E_2))=0$. Thus, $m(f(E)) \leq m(f(E_1))+m(f(E_2))= 0$. \\ \smallskip\\ {\bf Observation 3.7:} Let $f$ be the Cantor function and $C$ the Cantor set. Then $f'=0$~a.e. on~$[0,1]$ and~$m(C)=0$. Since $f([0,1])=[0,1]$, by Saks' Theorem (Corollary~3.10), it must be that $f$ is not differentiable at certain points in~$[0,1]$, and since $f(C)=[0,1]$, by Corollary~3.9, it must be that $f$ is not differentiable at certain points in~$C$. 
Of course we know $f$ is not differentiable at any point in~$C$ and $f'=0$ on~$[0,1]\setminus C$ so that by Corollary~3.8, $m(f([0,1]\setminus C))=0$ which makes sense as $f([0,1]\setminus C)$ is countable.\\ The following proposition is the converse of Saks' Theorem (Corollary 3.10): $m(f(E))=0$ implies $f'=0$~a.e. on~$E$. Here again it is assumed that $f'$ exists everywhere on~$E$; however, existence almost everywhere (a.e.) suffices.\\ \smallskip\\ {\bf Proposition 3.18 (Serrin-Varberg's Theorem \cite{serrin}):} Given $f$, a real-valued function on~$[a,b]$, and $E\subseteq [a,b]$ such that $f'$ exists on $E$, if $m(f(E))=0$, then $f'=0$~a.e. on~$E$. \\ \smallskip\\ {\bf Proof:} Let $B=\{x\in E:\, |f'(x)|>0 \}$, and for every integer $n>0$, define \[B_n=\{x\in B:\, |f(x)-f(y)|\geq |x-y|/n \mathrm{\ for\ all\ } y\in [a,b] \mathrm{\ with\ } 0<|x-y|<1/n \}.\] Clearly, we need to show $m(B)=0$, and since $B=\cup_{n=1}^{\infty}\,B_n$, then it suffices to show $m(B_n)=0$ for each~$n$. However, since each $B_n$ can be covered by a countable collection of intervals, each interval of length less than~$1/n$, it then suffices to show that if $I$ is any interval of length less than~$1/n$ and $A=I\cap B_n$, then~$m(A)=0$.\\ For this purpose, given $\epsilon>0$, since $A\subseteq E$ so that $m(f(A))=0$, let $\{I_k\}$ be a countable collection of open intervals covering~$f(A)$, i.e., $f(A)\subseteq \cup_{k=1}^{\infty}\,I_k$, with $\sum_{k=1}^{\infty}\,m(I_k)<\epsilon$. In addition, let $A_k= \{x\in A:\, f(x)\in I_k\}$. Then $A=\cup_{k=1}^{\infty} A_k$, and since for each $k$, $A_k\subseteq A=I\cap B_n$, given $x$, $x'\in A_k$, $x\not=x'$, then $|x-x'|<1/n$, and it must be that \[ |x-x'|\leq n |f(x)-f(x')|< n\,m(I_k).\] Thus, \[ m^*(A_k) \leq \sup_{x,x'\in A_k} |x-x'| \leq n\,m(I_k), \] and then \[ m^*(A)\leq\sum_{k=1}^{\infty} m^*(A_k)\leq \sum_{k=1}^{\infty}n\,m(I_k)= n\sum_{k=1}^{\infty}m(I_k)<n\epsilon. \] Hence, since $\epsilon >0$ is arbitrary, it must be that $m(A)=0$.
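As a quick consistency check of Proposition 3.18, it can be applied to the Cantor function $f$ of Proposition 3.9 with $E=[0,1]\setminus C$, $C$ the Cantor set (a sketch; the conclusion simply recovers part of Proposition 3.9):

```latex
% f: the Cantor function, C: the Cantor set, E = [0,1] \setminus C.
% f is constant on each of the countably many removed open intervals,
% so f(E) is a countable set of dyadic rationals:
\[
  f(E)=\left\{\tfrac12,\;\tfrac14,\;\tfrac34,\;\tfrac18,\;\tfrac38,\;
  \tfrac58,\;\tfrac78,\;\ldots\right\},
  \qquad m(f(E))=0.
\]
% Moreover, f'(x) exists for every x in E, since f is locally constant
% there.  Proposition 3.18 then gives f' = 0 a.e. on E, consistent with
% the direct computation f' = 0 on E in the proof of Proposition 3.9.
```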
\\ \smallskip\\ {\bf Corollary 3.11:} Given a real-valued function $f$ on $[a,b]$, and $E\subseteq [a,b]$ such that $f'$ exists on~$E$, if $f$ is constant on~$E$, then $f'=0$ a.e. on~$E$. \\ \smallskip\\ {\bf Corollary 3.12 (Serrin-Varberg's Theorem (Alternate form)):} Given a real-valued function $f$ on~$[a,b]$, and $E\subseteq [a,b]$ such that $f'$ exists a.e. on $E$, if $m(f(E))=0$, then $f'=0$ a.e.~on~$E$. In particular, if $f$ is of bounded variation on $[a,b]$, and $E\subseteq [a,b]$ with $m(f(E))=0$, then $f'=0$ a.e. on~$E$. \\ \smallskip\\ {\bf Proof:} Let $E_1$, $E_2$ be subsets of $E$, $E=E_1\cup E_2$, $f'$ exists on $E_1$ and~$m(E_2)=0$. Since $m(f(E_1))\leq m(f(E))=0$, then by Proposition~3.18, $f'=0$ a.e. on~$E_1$. Thus, since $m(E_2)=0$, then $f'=0$ a.e. on~$E$. If $f$ is of bounded variation on $[a,b]$, then by Corollary 3.3, $f$ is differentiable a.e. on~$[a,b]$ and therefore on any subset~$E$ of~$[a,b]$. \\ \smallskip\\ {\bf Proposition 3.19 (Measurability of the derivative of a measurable function):} Let $f$ be a real-valued measurable function on~$[a,b]$, and $E$ a measurable subset of~$[a,b]$. If $f'$ exists on~$E$, then $f'$ is a measurable function on~$E$. Proof in~\cite{yeh}.\\ \smallskip\\ {\bf Proposition 3.20:} Let $f$ be a real-valued measurable function on~$[a,b]$, and $E$ a measurable subset of~$[a,b]$. If $f'(x)$ exists for each $x\in E$, then \[ m^*(f(E))\leq \int_{E} |f'(x)|dx. \] {\bf Proof:} Given $\epsilon>0$, for each integer $n>0$, define \[ E_n=\{x\in E: (n-1)\,\epsilon\leq |f'(x)|<n\,\epsilon\}. \] Clearly, the $E_n$'s are pairwise disjoint and $E=\cup_{n=1}^{\infty}E_n$ so that $f(E)=\cup_{n=1}^{\infty}f(E_n)$. By Proposition~3.19, $f'$~is measurable on~$E$. Hence, each set~$E_n$ must be measurable. 
Since $f'(x)$ exists for every $x\in E_n$, by Proposition~3.17, $m^*(f(E_n))\leq n\epsilon\,m(E_n)$ for each~$n$, and given an integer~$N>0$, it must be that $\sum_{n=1}^N (n-1)\epsilon\,m(E_n)\leq \int_E |f'(x)|dx$ by the definition of the Lebesgue integral, so that $\sum_{n=1}^{\infty} (n-1)\epsilon\,m(E_n)\leq \int_E |f'(x)|dx$. Thus, \begin{eqnarray*} m^*(f(E)) &\leq& \sum_{n=1}^{\infty} m^*(f(E_n))\leq \sum_{n=1}^{\infty}n\epsilon\,m(E_n)= \sum_{n=1}^{\infty}((n-1)\epsilon +\epsilon)m(E_n)\\ &=& \sum_{n=1}^{\infty}(n-1)\epsilon\,m(E_n)+ \sum_{n=1}^{\infty}\epsilon\,m(E_n)\leq \int_E |f'(x)|dx +\epsilon\,m(E), \end{eqnarray*} where, in the last step, the countable additivity of $m$ on measurable sets is~used. Since $\epsilon$ is arbitrary and $m(E)<\infty$, then $m^*(f(E))\leq \int_{E} |f'(x)|dx$. \\ \smallskip\\ {\bf Proposition 3.21 (Absolutely continuous $f$ maps zero-measure sets to zero-measure sets~\cite{saks} - Absolutely continuous $f$ maps measurable sets to measurable sets):} Let $f$ be an absolutely continuous function on~$[a,b]$. If $E\subseteq [a,b]$ with $m(E)=0$, then~$m(f(E))=0$. In addition, given any measurable subset $E$ of~$[a,b]$, then~$f(E)$ is measurable. \\ \smallskip\\ {\bf Proof:} Without any loss of generality assume $E\subseteq (a,b)$. Given $\epsilon>0$, let $\delta>0$ correspond to $\epsilon$ in the definition of the absolute continuity of~$f$. Since $m(E)=0$, then by Proposition~2.1, there is a collection $\{I_k\} = \{(a_k,b_k)\}$ of nonoverlapping open intervals covering~$E$, i.e., $E\subseteq \cup_{k=1}^{\infty} I_k$, with $\sum_{k=1}^{\infty}(b_k-a_k)=\sum_{k=1}^{\infty} m(I_k)<\delta$, $I_k\subseteq (a,b)$ for each~$k$.\\ For each $k$, since $f$ is continuous, let $c_k$ and $d_k$ be points in $[a_k,b_k]$ where $f$ attains its minimum and maximum, respectively. 
Assuming without any loss of generality that $c_k<d_k$, then $\{(c_k,d_k)\}$ is a collection of nonoverlapping open intervals, and since again $f$ is continuous, by the intermediate value theorem \cite{rudin}, it follows that \[ f(E)\subseteq f(\cup_{k=1}^{\infty}(a_k,b_k))= \cup_{k=1}^{\infty}f((a_k,b_k)) \subseteq \cup_{k=1}^{\infty}f([c_k,d_k]). \] Thus, \[ m^*(f(E))\leq \sum_{k=1}^{\infty} m(f([c_k,d_k])) = \sum_{k=1}^{\infty}(f(d_k)-f(c_k)). \] Finally, given an integer $N>0$, then it must be that \[ \sum_{k=1}^N (d_k-c_k)< \sum_{k=1}^{\infty}(d_k-c_k)\leq \sum_{k=1}^{\infty}(b_k-a_k)<\delta \] so that $\sum_{k=1}^N(f(d_k)-f(c_k))<\epsilon$. $N$ arbitrary then implies $m^*(f(E))\leq\sum_{k=1}^{\infty}(f(d_k)-f(c_k))\leq\epsilon$. Thus, $m^*(f(E))=0$ since $\epsilon$ is arbitrary.\\ \smallskip\\ Assume now $E$ is a measurable subset of~$[a,b]$.\\ By v of Proposition~2.15, there is a set $F$ that is the union of a countable collection of closed sets, $F\subseteq E$ with $m^*(E\setminus F)=0$, i.e., $E=K\cup(\cup_{n=1}^{\infty}F_n)$, where $m^*(K)=0$, $F=\cup_{n=1}^{\infty}F_n$, and $F_n$ is closed for each~$n$, thus compact (Proposition~2.3). Note then that $f(E)=f(K\cup(\cup_{n=1}^{\infty}F_n))=f(K)\cup(\cup_{n=1}^{\infty}f(F_n))$. Since $m(K)=0$, then $m(f(K))=0$ as just proved above. Thus, $f(K)$~is measurable. Also since $f$~is continuous and $F_n$~is compact for each~$n$, it must be that $f(F_n)$ is compact for each~$n$~\cite{rudin} and thus measurable (Proposition~2.14). It then follows that $f(E)$ is the union of a countable collection of measurable sets and therefore it must be measurable. \\ \smallskip\\ {\bf Proposition 3.22 (Banach-Zarecki Theorem):} Let $f$ be a real-valued function on $[a,b]$. Then $f$ is absolutely continuous on~$[a,b]$ if and only if it satisfies the following three conditions:\\ i. $f$ is continuous on $[a,b]$.\\ ii. $f$ is of bounded variation on $[a,b]$.\\ iii.
$f$ maps sets of measure zero to sets of measure zero.\\ \smallskip\\ {\bf Proof:} The necessity was established in Observation~3.3, Proposition~3.10, and Proposition~3.21. For the sufficiency, assume $f$ satisfies all three conditions. Given $[c,d]\subseteq [a,b]$, we show \[ |f(d)-f(c)|\leq \int_c^d |f'(x)|dx. \] By condition ii, $f$ is of bounded variation on $[a,b]$, so that by Corollary~3.3, $f$~is differentiable a.e. on~$[a,b]$ and therefore on~$[c,d]$. Accordingly, let $E_1$, $E_2$ be subsets of~$[c,d]$, $[c,d] = E_1\cup E_2$, $f'$ exists on~$E_1$ and $m(E_2)=0$. By condition iii, it then must be that $m(f(E_2))=0$.\\ Assume without any loss of generality that $f(c)<f(d)$. By condition i, $f$ is continuous on $[a,b]$, so that by the intermediate value theorem \cite{rudin}, given $y$, $f(c)<y<f(d)$, there must be $x$, $c<x<d$, with $f(x)=y$. Thus, $[f(c),f(d)]\subseteq f([c,d])$, and since $f$ is measurable on $[a,b]$ ($f$ is continuous on $[a,b]$) and $E_1$ is measurable ($E_1=[c,d]\setminus E_2$), by Proposition 3.20, we get \begin{eqnarray*} |f(d)-f(c)|&\leq& m(f([c,d]))=m(f(E_1)\cup f(E_2))\\ &\leq& m^*(f(E_1))+m(f(E_2))=m^*(f(E_1))+0\\ &=& m^*(f(E_1))\leq\int_{E_1}|f'(x)|dx=\int_c^d |f'(x)|dx. \end{eqnarray*} By Corollary 3.3, since $f$ is of bounded variation, it must be that $f'$ is integrable over~$[a,b]$ and so is $|f'|$ by Proposition~2.31. Given $\epsilon>0$, by Proposition~2.38, there is $\delta>0$ such that if $A$ is a measurable set with $m(A)<\delta$, then~$\int_A|f'(x)|dx<\epsilon$. Accordingly, for any integer $n>0$ and any disjoint collection of open intervals $(x_i,x_i') \subseteq [a,b]$, $i=1,\ldots,n$, with $\sum_{i=1}^n (x_i' - x_i) < \delta$, let $A=\cup_{i=1}^n (x_i,x_i')$. Since $m(A)<\delta$, then \[ \sum_{i=1}^n |f(x_i') - f(x_i)|\leq\sum_{i=1}^n\int_{x_i}^{x_i'}|f'(x)|dx =\int_A |f'(x)|dx < \epsilon. \] Thus, $f$ is absolutely continuous.
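The Cantor function is a concrete companion to the Banach-Zarecki theorem: it satisfies conditions i and ii, yet maps the measure-zero Cantor set onto all of $[0,1]$, so condition iii fails, in line with Corollary~3.4. Its digit rule from Proposition~3.9 (halve each ternary digit before the first 1, emit 1 for the first 1 itself, then stop) can be computed directly; a minimal sketch in Python with exact rational arithmetic (an illustration only, not part of the development):

```python
from fractions import Fraction

def cantor(x):
    """Cantor function on [0,1] via Proposition 3.9's rule: read off the
    ternary digits a_n of x, set b_n = a_n/2 before the first digit equal
    to 1, set b_N = 1 for that first 1, and stop.  Exact for x whose
    ternary expansion terminates or hits a 1 within 60 digits; truncated
    (to within 2**-60) otherwise."""
    x = Fraction(x)
    if x == 1:
        return Fraction(1)
    y = Fraction(0)
    for n in range(1, 61):
        x *= 3
        a = int(x)          # n-th ternary digit of the original x
        x -= a
        if a == 1:          # first 1: b_N = 1, all later b_n = 0
            return y + Fraction(1, 2**n)
        y += Fraction(a, 2 * 2**n)   # b_n = a_n / 2
    return y

# f(1/3) = f(2/3) = 1/2: constant on the closure of the first removed interval
assert cantor(Fraction(1, 3)) == Fraction(1, 2)
assert cantor(Fraction(2, 3)) == Fraction(1, 2)
# monotonically increasing on a grid with exact 6-digit ternary expansions
ys = [cantor(Fraction(k, 3**6)) for k in range(3**6 + 1)]
assert all(y0 <= y1 for y0, y1 in zip(ys, ys[1:]))
# 1/4 = 0.0202..._3 lies in the Cantor set and f(1/4) = 1/3 (up to truncation)
assert abs(cantor(Fraction(1, 4)) - Fraction(1, 3)) < Fraction(1, 2**50)
```

The 60-digit cutoff is an arbitrary choice for the sketch; any point of the Cantor set with a non-terminating expansion is only evaluated to within $2^{-60}$.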
\\ \smallskip\\ {\bf Proposition 3.23 (Inverse function theorem):} Let $f$ be a strictly monotonic continuous function on~$[a,b]$. Then $I=f([a,b])$ is a closed interval with endpoints $f(a)$, $f(b)$, and $f^{-1}$, the inverse function of~$f$, exists on~$I$, and is strictly monotonic and continuous on~$I$. Given $x_0\in [a,b]$ such that $f$ is differentiable at~$x_0$ with~$f'(x_0)\not=0$, then $f^{-1}$ is differentiable at $y_0 = f(x_0)$ with \[ (f^{-1})'(y_0) = 1/f'(x_0). \] {\bf Proof:} Without any loss of generality, assume $f$ is increasing on~$[a,b]$. Since $f$ is strictly increasing and continuous, $f$ is one-to-one and by the intermediate value theorem \cite{rudin}, its range is $I=[f(a),f(b)]=f([a,b])$. Thus, $f^{-1}$ exists on~$I$ and is strictly increasing from $I$ onto~$[a,b]$. By Corollary~3.2, $f^{-1}$ is continuous on~$I$. With $y_0=f(x_0)$, $y=f(x)$, by the continuity of~$f^{-1}$, if~$y\rightarrow y_0$, it must be that~$x\rightarrow x_0$. Thus, \[ \frac{f^{-1}(y) -f^{-1}(y_0)}{y-y_0} = \frac{x-x_0}{f(x)-f(x_0)}\rightarrow 1/f'(x_0) \] as $y\rightarrow y_0$, since $f$ is differentiable at~$x_0$, and~$f'(x_0)\not=0$. Hence, $(f^{-1})'(y_0)$ exists and equals~$1/f'(x_0)$. \\ \smallskip\\ {\bf Proposition 3.24 (Zarecki's criterion for an absolutely continuous inverse~\cite{cabada}):} Let $f$ be a monotonic continuous function on~$[a,b]$. Then $I=f([a,b])$ is a closed interval with endpoints $f(a)$, $f(b)$, and $f^{-1}$ exists and is absolutely continuous on~$I$ if and only if $\{x:f'(x)=0\}$ has measure~zero. Whenever $f^{-1}$ is absolutely continuous on the closed interval~$I$, then \[ (f^{-1})'= 1/(f'(f^{-1})) \mathrm{\ a.e.\ on\ } I. \] {\bf Proof:} Without any loss of generality, assume $f$ is increasing on~$[a,b]$. If~$f^{-1}$ exists or if $m^*(\{x:f'(x)=0\})=0$, then $f$ is strictly increasing. Thus, assume $f$ is strictly increasing.
By Proposition~3.23, $I$ is closed, $I=[f(a),f(b)]$, and $f^{-1}$ exists on~$I$ and is strictly increasing and continuous on~$I$.\\ Assume $\{x:f'(x)=0\}$ has measure zero. As already established, $f^{-1}$ is continuous on~$I$, and since it is increasing, it is of bounded variation on~$I$. Thus, by Proposition~3.22, it suffices to show that $f^{-1}$ maps sets of measure zero to sets of measure zero. For this purpose, let $E\subseteq I$ be of measure zero, and $F=f^{-1}(E)$ so that~$f(F)=E$. Since $f$ is increasing, it is of bounded variation on~$[a,b]$ as well, and by Corollary~3.12, $f'=0$ a.e. on~$F$. Accordingly, let $F_1$, $F_2$ be subsets of~$F$, $F=F_1\cup F_2$, $f'=0$~on $F_1$ and~$m(F_2)=0$. Since $F_1\subseteq \{x:f'(x)=0\}$, then~$m(F_1)=0$. Thus,~$m(F)=0$.\\ Assume now $f^{-1}$ is absolutely continuous on~$I$. By Corollary~3.8, since $f'=0$ on $E=\{x:f'(x)=0\}$, then~$m(f(E))=0$. Thus, by Proposition~3.21, $m(E)=m(f^{-1}(f(E)))=0$.\\ Finally, whenever $f^{-1}$ is absolutely continuous on~$I$, define $F_1$, $F_2$, $F_3$, disjoint subsets of~$[a,b]$, $[a,b]=F_1\cup F_2\cup F_3$, as follows. $F_1=\{x:f'(x)\not=0\}$, $F_2=\{x:f'(x)=0\}$, $F_3=\{x:f'(x) \mathrm{\ does\ not\ exist}\}$. Since $f$ is of bounded variation on~$[a,b]$, $f$ is differentiable a.e. on~$[a,b]$, thus, $m(F_3)=0$. Also, since $f^{-1}$ is absolutely continuous, then $m(F_2)=0$ as just proved above. Hence, since by Proposition~3.23, for each $x\in F_1$, $f^{-1}$ is differentiable at $y=f(x)$ with $(f^{-1})'(y) = 1/f'(x)$, then $(f^{-1})'= 1/(f'(f^{-1}))$ a.e. on~$I$. \\ \smallskip\\ {\bf Proposition 3.25 (Composition of absolutely continuous functions):} Let $g$ be an absolutely continuous monotonic function from an interval~$[a,b]$ into an interval~$[c,d]$, and let $f$ be an absolutely continuous function on~$[c,d]$. Then the function $h= f\circ g$ is absolutely continuous on~$[a,b]$.\\ \smallskip\\ {\bf Proof:} Let $\epsilon>0$ be given. 
It follows easily that since $f$ is absolutely continuous, then there is $\rho>0$ such that for any integer $n>0$ and for any nonempty subset $A$ of $\{1,\ldots,n\}$ it must be that $\sum_{i\in A} |f(y_i') - f(y_i)| < \epsilon$ for disjoint open intervals $(y_i,y_i') \subseteq [c,d]$, $i\in A$, with $\sum_{i\in A} (y_i' - y_i) < \rho$. For $\rho$ as just described, since $g$ is also absolutely continuous, then there is $\delta>0$ such that for any integer $n>0$ it must be that $\sum_{i=1}^n |g(x_i') - g(x_i)| < \rho$ for disjoint open intervals $(x_i,x_i') \subseteq [a,b]$, $i=1,\ldots,n$, with $\sum_{i=1}^n (x_i' - x_i) < \delta$. Accordingly, for any integer $n>0$ let $(x_i,x_i') \subseteq [a,b]$, $i=1,\ldots,n$, be any collection of disjoint open intervals with $\sum_{i=1}^n (x_i' - x_i) < \delta$. Setting $A=\{i:\,1\leq i\leq n,\,g(x_i')\not=g(x_i)\}$, if $A\not=\emptyset$, then the collection of open intervals $(g(x_i),g(x_i'))$, $i\in A$, if $g$ is increasing; $(g(x_i'),g(x_i))$, $i\in A$, if $g$ is decreasing; must be pairwise disjoint by the monotonicity of~$g$ with $\sum_{i\in A}|g(x_i') - g(x_i)|<\rho$. Thus, if $A\not=\emptyset$, it must be that $S_A=\sum_{i\in A}|f(g(x_i')) - f(g(x_i))|<\epsilon$. Setting $S_A=0$ if~$A=\emptyset$, and since $|f(g(x_i')) - f(g(x_i))|=0$ for $i\not\in A$, then \[ \sum_{i=1}^n|f(g(x_i')) - f(g(x_i))|= S_A +0 =S_A<\epsilon.\] Thus, $h=f\circ g$ is absolutely continuous on $[a,b]$. \\ \smallskip\\ {\bf Proposition 3.26 (Chain rule \cite{serrin}):} Given real-valued functions $F$, $f$ on~$[c,d]$, $F'=f$ a.e. on~$[c,d]$, and a function $u:[a,b]\rightarrow [c,d]$, $u$ and $F\circ u$ differentiable a.e. on~$[a,b]$, if $F$ maps zero-measure sets to zero-measure sets, then \[(F\circ u)'= (f\circ u)u' \mathrm{\ a.e.\ on\ } [a,b].\] {\bf Proof:} Let $A=\{x\in [c,d]: F'(x)=f(x)\}$, $B=[c,d]\setminus A$, and $C=\{t\in [a,b]:u(t)\in B\}$. Clearly, $mB=0$. Letting $D=[a,b]\setminus C$, since $u$ is differentiable a.e. 
on~$[a,b]$, then it is differentiable a.e. on~$D$. Since $u(D)\subseteq A$ and $F$ is differentiable on~$A$, the composite function $F\circ u$ is differentiable a.e. on~$D$. Indeed it is differentiable exactly at the points in~$D$ where $u$ is differentiable. Thus, by the usual chain rule of calculus, $(F\circ u)'(t) = F'(u(t))u'(t) = f(u(t))u'(t)$, for $t\in D$ at which $u'$ exists, i.e., $(F\circ u)'= (f\circ u)u' \mathrm{\ a.e.\ on\ }D$.\\ Note that if $mC=0$, then the proof is complete.\\ Thus, assuming $mC\not=0$, we show $(F\circ u)'= (f\circ u)u' \mathrm{\ a.e.\ on\ }C$. Note $f\circ u$ is defined on~$C$. Since $u(C)\subseteq B$, then $m(u(C))=mB=0$, and since $u'$ exists a.e. on $C$, then by Corollary~3.12, $u'=0$ a.e. on~$C$ so that $(f\circ u)u'=0$ a.e. on~$C$. In addition, since $F$ maps zero-measure sets to zero-measure sets, then $m(F(u(C)))=0$, and since $(F\circ u)'$ exists a.e. on $C$, again by Corollary~3.12, $(F\circ u)' =0$ a.e. on~$C$. Thus, $(F\circ u)'=0 = (f\circ u)u'$ a.e. on~$C$. \\ \smallskip\\ {\bf Observation 3.8:} With $B$ the subset of~$[c,d]$ of measure zero on which~$F'\not=f$, and $C=\{t\in [a,b]:u(t)\in B\}$, then for the case $mC\not=0$ in the proof of Proposition~3.26 above it was proved that $u'=0$ a.e. on~$C$, and since $f\circ u$ is defined on~$C$, then $(f\circ u)u'=0$ a.e. on $C$ as well. Since it was also proved that $(F\circ u)'=0$ a.e. on~$C$, then it was concluded that $(F\circ u)'=0=(f\circ u)u'$ a.e. on~$C$. However, it can happen that when using the chain rule as described in Proposition~3.26 above and in Corollary 3.13 below, although $F'=f$ on $[c,d]\setminus B$, $f$ may not be defined everywhere on~$B$. Thus, $f\circ u$ may not be defined on $C$ as required in the proof of Proposition~3.26 above when $mC\not=0$. However, what matters here is that both $u'$ and $(F\circ u)'$ are zero a.e. on~$C$. 
Therefore, assuming $mC\not=0$, when computing $(F\circ u)'$ with the chain rule as suggested in Proposition~3.26 above and in Corollary~3.13 below, if $(F\circ u)'$ is set to zero at any point in~$[a,b]$ at which $u'$ is zero (whether or not $f\circ u$ is defined there), and computed or left undefined according to the chain rule elsewhere, then $(F\circ u)'$ so obtained will be correct a.e. on~$[a,b]$. Accordingly, assuming $mC\not=0$, one should keep in mind that if $(F\circ u)'$ is not computed as just suggested, so that $(F\circ u)'$ might be left undefined at points where $f\circ u$ is not defined although $u'$ is zero, one could end up with $(F\circ u)'$ not defined on a set of nonzero measure in~$[a,b]$. Actually, instead of computing $(F\circ u)'$ as just suggested, we do something simpler. Since, as mentioned above, what matters here is that both $u'$ and $(F\circ u)'$ are zero a.e. on~$C$, without any loss of generality, we simply set $f$ equal to 1 at points in $B$ where it is not defined and proceed with the chain rule to compute $(F\circ u)'$, as $f\circ u$ is then defined on~$C$ so that $(f\circ u)u'$ is zero at points in~$C$ where $u'$ is zero, thus zero a.e. on~$C$. More precisely, we define a new function $\hat{f}$ on~$[c,d]$ by setting $\hat{f}$ equal to $f$ at points in $[c,d]$ where $f$ exists, and to 1 where it does not. In what follows, we will refer to $\hat{f}$ as $f$ {\bf extended to all of~$[c,d]$}. This function is then defined everywhere in $[c,d]$, equals $f$ a.e. on~$[c,d]$, and takes the place of $f$ in the chain rule although it is still called $f$ there. Finally, note that above when we say anything about computing $(F\circ u)'$ with the chain rule, it is not $(F\circ u)'$ that is necessarily computed but a function that happens to be equal to $(F\circ u)'$ a.e. on~$[a,b]$. 
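The a.e. chain rule just discussed can be spot-checked numerically. In the sketch below, the choices $F(x)=x^2$ and $u(t)=|t-1/2|$ are assumptions for illustration only; here $F'=f$ everywhere, so the exceptional sets $B$ and $C$ of Proposition~3.26 are empty, and the identity $(F\circ u)'=(f\circ u)u'$ can be checked at sample points away from $t=1/2$, the one point where $u$ fails to be differentiable.

```python
# Numerical spot-check (illustrative only) of the a.e. chain rule
#   (F o u)' = (f o u) u'
# of Proposition 3.26, with the assumed choices F(x) = x^2 (so f = F' = 2x
# everywhere, making B and C empty) and u(t) = |t - 0.5|, which is
# differentiable except at t = 0.5.
F = lambda x: x * x
f = lambda x: 2.0 * x
u = lambda t: abs(t - 0.5)

def dnum(g, t, h=1e-6):
    # Central-difference approximation of g'(t).
    return (g(t + h) - g(t - h)) / (2.0 * h)

pairs = []
for t in (0.1, 0.3, 0.7, 0.9):            # sample points away from t = 0.5
    lhs = dnum(lambda s: F(u(s)), t)      # (F o u)'(t), numerically
    rhs = f(u(t)) * dnum(u, t)            # (f o u)(t) * u'(t)
    pairs.append((lhs, rhs))
print(pairs)
```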
\\ \smallskip\\ {\bf Corollary 3.13 (Chain rule (Alternate form) \cite{serrin}):} Given an absolutely continuous function $F$ on $[c,d]$, and a real-valued function $f$ on $[c,d]$, $F'=f$ a.e. on~$[c,d]$, if $u:[a,b]\rightarrow [c,d]$ is a function such that $u$ and $F\circ u$ are differentiable a.e. on~$[a,b]$, then \[(F\circ u)'= (f\circ u)u' \mathrm{\ a.e.\ on\ } [a,b].\] {\bf Proof:} From Proposition 3.26 and Proposition 3.21. \\ \smallskip\\ {\bf Proposition 3.27 (Change of variable for Lebesgue integral \cite{serrin}):} Given a function $f$, Lebesgue integrable over~$[c,d]$, and a function $u:[a,b]\rightarrow [c,d]$, differentiable a.e. on~$[a,b]$, then the following two conditions are equivalent, where $F(x)=\int_c^x f(t)dt$, $x\in [c,d]$:\\ i. $F\circ u$ is absolutely continuous on $[a,b]$.\\ ii. $(f\circ u)u'$ is Lebesgue integrable over~$[a,b]$ and for all $\alpha,\,\beta\in [a,b]$ it must be that \[ \int_{u(\alpha)}^{u(\beta)} f(x)dx = \int_{\alpha}^{\beta} f(u(t))u'(t)dt, \] with $\int_{u(\alpha)}^{u(\beta)} f(x)dx = -\int_{u(\beta)}^{u(\alpha)} f(x)dx = -\int_{[u(\beta),u(\alpha)]} f(x)dx$ if $u(\beta) < u(\alpha)$, and $\int_{\alpha}^{\beta} f(u(t))u'(t)dt=-\int_{\beta}^{\alpha} f(u(t))u'(t)dt =-\int_{[\beta,\alpha]} f(u(t))u'(t)dt$ if $\beta<\alpha$. \\ \smallskip\\ {\bf Proof:}\\ i $\Rightarrow$ ii:\\ Since $F$ is absolutely continuous on~$[c,d]$ (Proposition 3.11), $F'=f$ a.e. on $[c,d]$ (Proposition~3.7), $u$ is differentiable a.e. on~$[a,b]$, and $F\circ u$, being absolutely continuous on~$[a,b]$, must be differentiable a.e. on~$[a,b]$ (Corollary~3.5), then by Corollary~3.13 (chain rule), $(F\circ u)'= (f\circ u)u'$ a.e. on~$[a,b]$ (here and in the corollaries that follow, without any loss of generality, $f$ is interpreted as $f$ extended to all of~$[c,d]$ (Observation~3.8 about the chain rule)). 
Note, by Corollary~3.5, since $F\circ u$ is absolutely continuous on~$[a,b]$, then $(f\circ u)u'$ is Lebesgue integrable over~$[a,b]$, and by Corollary~3.6 (Fundamental Theorem of Lebesgue integral calculus), applied to the absolutely continuous function $F\circ u$, for all $\alpha,\,\beta\in [a,b]$ it must be that \begin{eqnarray*} \int_{\alpha}^{\beta} f(u(t))u'(t)dt &=& \int_{\alpha}^{\beta} (F\circ u)'(t)dt = (F\circ u)(\beta) - (F\circ u)(\alpha)\\ &=& F(u(\beta))- F(u(\alpha)) = \int_{u(\alpha)}^{u(\beta)} f(x)dx. \end{eqnarray*} ii $\Rightarrow$ i:\\ Since $(f\circ u)u'$ is Lebesgue integrable over~$[a,b]$, and, in particular, for $x\in [a,b]$ \[ F(u(x))-F(u(a)) = \int_{u(a)}^{u(x)} f(s)ds = \int_a^{x} f(u(t))u'(t)dt, \] by Proposition 3.12, $F\circ u = F(u)$ must be absolutely continuous on~$[a,b]$.\\ \smallskip\\ {\bf Corollary 3.14 (Change of variable for Lebesgue integral (Alternate form I) \cite{serrin}):} Given a function $f$, Lebesgue integrable over~$[c,d]$, and a function $u:[a,b]\rightarrow [c,d]$, monotonic and absolutely continuous on~$[a,b]$, then $(f\circ u)u'$ is Lebesgue integrable over~$[a,b]$ and for all $\alpha,\,\beta\in [a,b]$ it must be that \[ \int_{u(\alpha)}^{u(\beta)} f(x)dx = \int_{\alpha}^{\beta} f(u(t))u'(t)dt. \] {\bf Proof:} Since $u$ is clearly differentiable a.e. on $[a,b]$, and $F$, $F(x)=\int_c^x f(t)dt$, $x\in [c,d]$, is absolutely continuous so that the composition $F\circ u$ is absolutely continuous on~$[a,b]$ by Proposition~3.25, then by Proposition~3.27, $(f\circ u)u'$ is Lebesgue integrable over~$[a,b]$ and for all $\alpha,\,\beta\in [a,b]$ it must be that \[ \int_{u(\alpha)}^{u(\beta)} f(x)dx = \int_{\alpha}^{\beta} f(u(t))u'(t)dt. 
\] {\bf Corollary 3.15 (Change of variable for Lebesgue integral (Alternate form II) \cite{serrin}):} Given a function $f$, bounded and measurable on~$[c,d]$, and a function $u:[a,b]\rightarrow [c,d]$, absolutely continuous on~$[a,b]$, then $(f\circ u)u'$ is Lebesgue integrable over~$[a,b]$ and for all $\alpha,\,\beta\in [a,b]$ it must be that \[ \int_{u(\alpha)}^{u(\beta)} f(x)dx = \int_{\alpha}^{\beta} f(u(t))u'(t)dt. \] {\bf Proof:} By Proposition~2.32, $f$ is Lebesgue integrable over~$[c,d]$. Since $u$ is clearly differentiable a.e. on~$[a,b]$, and by Observation~3.6, with $F(x)=\int_c^x f(t)dt$, $x\in [c,d]$, it must be that $F\circ u$ is absolutely continuous on~$[a,b]$, then by Proposition~3.27, $(f\circ u)u'$ is Lebesgue integrable over~$[a,b]$ and for all $\alpha,\,\beta\in [a,b]$ it must be that \[ \int_{u(\alpha)}^{u(\beta)} f(x)dx = \int_{\alpha}^{\beta} f(u(t))u'(t)dt. \] {\bf Corollary 3.16 (Change of variable for Lebesgue integral over a measurable set~\cite{klassen}):} Given $A$, a measurable subset of~$[0,1]$, and a function $\gamma: [0,1]\rightarrow [0,1]$, $\gamma$ absolutely continuous on~$[0,1]$, $\dot{\gamma}>0$ a.e. on~$[0,1]$, $\gamma(0)=0$, $\gamma(1)=1$, then $\gamma^{-1}$ exists and is absolutely continuous on~$[0,1]$, and $\tilde{A}=\gamma^{-1}(A)$ is a measurable subset of~$[0,1]$. Accordingly, given a function~$f$, Lebesgue integrable over~$[0,1]$, then $(f\circ \gamma)\dot{\gamma}$ is Lebesgue integrable over~$\tilde{A}$ and \[ \int_A f(x)dx = \int_{\tilde{A}} f(\gamma(t))\dot{\gamma}(t)dt. \] {\bf Proof:} Clearly $\gamma$ is strictly increasing and thus $\gamma^{-1}$ exists and is absolutely continuous on~$[0,1]$ by Proposition~3.24. 
By Proposition 3.21, $\tilde{A}=\gamma^{-1}(A)$ is then a measurable subset of~$[0,1]$.\\ Define $I_A:[0,1]\rightarrow {\bf R}$ by $I_A(t)=1$ if $t\in A$, $I_A(t)=0$ if $t\in [0,1]\setminus A$, and $I_{\tilde{A}}:[0,1]\rightarrow {\bf R}$ by $I_{\tilde{A}}(t)=1$ if $t\in \tilde{A}$, $I_{\tilde{A}}(t)=0$ if $t\in [0,1]\setminus\tilde{A}$.\\ Note $I_{\tilde{A}}= I_A\circ\gamma$.\\ Also note $I_A\cdot f$ is Lebesgue integrable over~$[0,1]$, since it equals $f$ on~$A$ and~$0$ on~$[0,1]\setminus A$. It follows then by Corollary~3.14 that \[ \int_0^1 I_A(x)f(x)dx = \int_{\gamma(0)}^{\gamma(1)} I_A(x)f(x)dx= \int_0^1 I_A(\gamma(t))f(\gamma(t))\dot{\gamma}(t)dt, \] i.e., \[ \int_Af(x)dx = \int_0^1 I_{\tilde{A}}(t)f(\gamma(t))\dot{\gamma}(t)dt = \int_{\tilde{A}}f(\gamma(t))\dot{\gamma}(t)dt. \] \section{\large Functional Data and Shape Analysis and its\\ Connections to Lebesgue Integration and\\ Absolute Continuity} {\bf Observation 4.1:} In what follows, we review some important aspects of functional data and shape analysis of the type in~\cite{srivastava}, while at the same time pointing out its dependence on Lebesgue integration, absolute continuity and the connections between them. As in~\cite{srivastava} where absolutely continuous functions on~$[0,1]$ are generalized to functions of range ${\bf R}^n$, {\bf R} the set of real numbers, $n$~a positive integer, we consider absolutely continuous functions on~$[0,1]$ but restrict ourselves to those with range in~${\bf R}^1= {\bf R}$. We denote by~$AC[0,1]$ the set of such functions. With two absolutely continuous functions on~$[0,1]$ considered equal if they differ by a constant, we note that the principal goal in~\cite{srivastava} is essentially that of presenting tools for analyzing the shapes of absolutely continuous functions and defining a distance metric for computing a distance between any two of them. 
Specializing to~$AC[0,1]$, a crucial aspect of the approach in~\cite{srivastava} is then that of identifying a bijective correspondence between functions in~$AC[0,1]$ and functions in $L^2[0,1]$, and taking advantage of this correspondence to compute easily the distance between functions in~$AC[0,1]$ (the definition of this distance in terms of the so-called Fisher-Rao metric appears below) by computing the distance between the corresponding functions in~$L^2[0,1]$. Actually, as mentioned above, the goal of this approach is not so much that of computing the distance between functions in~$AC[0,1]$ but of computing the distance between their shapes. More precisely, in this approach, each function in~$AC[0,1]$ is associated with its unique (a.e. on~$[0,1]$) so-called square-root slope function (SRSF) in~$L^2[0,1]$, and vice versa, and a distance metric is defined for computing the distance between the shapes of any two functions in~$AC[0,1]$ in terms of the $L^2$ distances between SRSF's of reparametrizations of the two functions. This distance, although computed in~$L^2[0,1]$, is a measure of how much one of the absolutely continuous functions must be reparametrized (with so-called warping functions) to align as much as possible with the other one. Given two functions in~$AC[0,1]$ that are not equal, the possibility exists that one of them can be reparametrized to align exactly with the other, i.e., to become exactly the other. The set of reparametrization or warping functions thus induces a quotient space of~$L^2[0,1]$. Accordingly, a distance metric is defined in~\cite{srivastava} that computes the distance between any two equivalence classes in the quotient space of $L^2[0,1]$ by the set of warping functions, thus computing the distance between the shapes of the two corresponding functions in~$AC[0,1]$. 
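As a concrete preview of this correspondence (the SRSF is defined precisely in Definition~4.1 below), the following sketch computes $q=\mathrm{sign}(f')\sqrt{|f'|}$ for the illustrative choice $f(t)=t^2$ and checks numerically that $||q||_2^2$ equals the length $\int_0^1 |f'(t)|dt$ and that $f(0)+\int_0^t q|q|$ reconstructs~$f$; the uniform grid and trapezoidal quadrature are assumptions of the sketch.

```python
import math

# Illustrative preview of the SRSF correspondence, with the assumed sample
# function f(t) = t^2 on [0,1], so f'(t) = 2t and q(t) = sign(f') sqrt(|f'|)
# = sqrt(2t).  Then ||q||_2^2 = int_0^1 |f'(t)| dt = 1 (the length of f),
# and f(1) = f(0) + int_0^1 q(s)|q(s)| ds = 1.
n = 100000
h = 1.0 / n
ts = [i * h for i in range(n + 1)]
fprime = [2.0 * t for t in ts]
q = [math.copysign(math.sqrt(abs(d)), d) for d in fprime]

def trapezoid(vals, h):
    # Trapezoidal rule over a uniform grid of values.
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

norm_sq = trapezoid([v * v for v in q], h)     # ||q||_2^2, should be ~1
recon = trapezoid([v * abs(v) for v in q], h)  # f(1) - f(0), should be ~1
print(norm_sq, recon)
```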
\\ \smallskip\\ {\bf Definition 4.1 (SRSF representation of functions \cite{srivastava}):} Given $f\in AC[0,1]$, the real-valued {\bf square-root slope function (SRSF)} $q$ of $f$ is defined for each $t\in [0,1]$ at which $f'$ exists by \[ q(t)= \mathrm{sign} (f'(t))\sqrt{|f'(t)|}. \] {\bf Observation 4.2:} By Corollary 3.5, $f'$ exists a.e. on $[0,1]$. Thus $q$ is defined a.e. on~$[0,1]$. We note that $q$, the SRSF of~$f$, is the $1-$dimensional version of the {\bf square-root velocity function (SRVF)} $q$ of an absolutely continuous function $f$, $f:[0,1]\rightarrow {\bf R}^n$, defined as follows. Let $F: {\bf R}^n\rightarrow {\bf R}^n$ be the continuous map defined by $F(v)=v/\sqrt{|v|}$ if $|v|\not=0$, $F(v)=0$ otherwise, $|\cdot|$ the Euclidean norm. Then the SRVF~$q$ of~$f$, $q:[0,1]\rightarrow {\bf R}^n$, is defined for each $t\in [0,1]$ at which $f'$ exists by \[ q(t)= F(f'(t)) = f'(t)/\sqrt{|f'(t)|} \] if $|f'(t)|\not=0$, $0$ (in ${\bf R}^n$) otherwise.\\ See \cite{lahiri}, \cite{srivastava} for a rigorous development of the SRVF. \\ \smallskip\\ {\bf Proposition 4.1 (Square integrability of SRSF \cite{srivastava}):} Given $f\in AC[0,1]$, the SRSF~$q$ of~$f$ is square-integrable over $[0,1]$, i.e.,~$q\in L^2[0,1]$, with $\int_0^1 |q(t)|^2 dt = \int_0^1 |f'(t)| dt$, i.e., $||q||_2^2$ = length of~$f$. \\ \smallskip\\ {\bf Proof:} By Corollary 3.5, $f'$ is measurable and Lebesgue integrable over~$[0,1]$. Note $h(t) = |q(t)|^2 = |\mathrm{sign} (f'(t))\sqrt{|f'(t)|}|^2 = |f'(t)|$ for each $t\in [0,1]$ at which $f'$ exists. Thus $h$ is measurable and Lebesgue integrable over $[0,1]$ (Proposition~2.31) so that $q\in L^2[0,1]$ and $\int_0^1 |q(t)|^2 dt = \int_0^1 |f'(t)| dt$. \\ \smallskip\\ {\bf Observation 4.3:} As noted in Observation 2.21, a Lebesgue integrable function over a measurable set $E$ can be undefined on a subset of $E$ of measure zero. That can be the case above for functions $f'$ and~$|q|^2$ with $E=[0,1]$ which we know exist a.e. on $[0,1]$. 
However, without any loss of generality, in the spirit of Observation~3.8 about the chain rule, $f'$ and $q$ will eventually be interpreted below as $f'$ and $q$ extended to all of~$[0,1]$. Finally, note the length of $f$ above is measured in~{\bf R}, a $1-$dimensional space. \\ \smallskip\\ {\bf Proposition 4.2 (Reconstruction of an absolutely continuous function from its SRSF \cite{srivastava}):} Given $f\in AC[0,1]$, let $q$ be the SRSF of~$f$. Then for each $t\in [0,1]$ it must be that $f(t) = f(0) +\int_0^t q(s)|q(s)|ds$. \\ \smallskip\\ {\bf Proof:} Note that for each $s\in [0,1]$ at which $f'$ exists, then \begin{eqnarray*} q(s)|q(s)|&=& \mathrm{sign} (f'(s))\sqrt{|f'(s)|} |\mathrm{sign} (f'(s))\sqrt{|f'(s)|}|\\ &=& \mathrm{sign} (f'(s))\sqrt{|f'(s)|} \sqrt{|f'(s)|}\\ &=& \mathrm{sign} (f'(s))|f'(s)| = f'(s). \end{eqnarray*} By Proposition 3.12, for each $t\in [0,1]$ it must be that $f(t) = f(0) +\int_0^t f'(s)ds$. Thus, for each $t\in [0,1]$ it must be that $f(t) = f(0) +\int_0^t q(s)|q(s)|ds$. \\ \smallskip\\ {\bf Proposition 4.3 ($L^2[0,1]$'s equivalence with the set of all SRSF's \cite{srivastava}):} Let $q$ be in $L^2[0,1]$ and $C$ any real number. Let $h(t)=q(t)|q(t)|$ for each $t\in [0,1]$ at which $q$ exists. Then $h$ is defined a.e. on $[0,1]$, $h$ is measurable and Lebesgue integrable over $[0,1]$, and the function $f$ defined for each $t\in [0,1]$ by $f(t) = C +\int_0^t h(s)ds$ is absolutely continuous on~$[0,1]$ with $q$ equal to the SRSF of~$f$ a.e. on~$[0,1]$. \\ \smallskip\\ {\bf Proof:} As $q$ is defined a.e. on $[0,1]$, then so is $h$. In addition, since $|h|=|q|^2$ is measurable and Lebesgue integrable over $[0,1]$, then so is $h$ (Proposition~2.31). By Proposition~3.12, $f$ is then absolutely continuous on~$[0,1]$.\\ Let $\hat{q}$ be the SRSF of $f$. Then for each $t\in [0,1]$ at which $f'$ exists it must be that $\hat{q}(t) = \mathrm{sign}(f'(t))\sqrt{|f'(t)|}$ and $\hat{q}$ is defined a.e. on~$[0,1]$. 
Since by Proposition 3.7, $f'=h$ a.e. on $[0,1]$, then it must also be that $\hat{q}(t) = \mathrm{sign}(h(t))\sqrt{|h(t)|}$ for almost all $t\in [0,1]$.\\ But $\mathrm{sign}(h(t))\sqrt{|h(t)|} = \mathrm{sign}(q(t)|q(t)|)\sqrt{q(t)^2} = \mathrm{sign}(q(t))|q(t)|=q(t)$ for each $t\in [0,1]$ at which $q$ exists and therefore for almost all $t\in [0,1]$. Thus, $q=\hat{q}$ a.e. on~$[0,1]$. \\ \smallskip\\ {\bf Definition 4.2:} Under the composition of functions operation, the {\bf admissible class $\Gamma$ of warping functions} is a semigroup of functions (not every element has an inverse) defined~by \begin{eqnarray*} \Gamma &=& \{\gamma| \gamma: [0,1]\rightarrow [0,1], \ \gamma \ \mathrm{absolutely\ continuous\ on}\ [0,1],\\ &\ &\ \ \dot{\gamma}\geq 0 \mathrm{\ a.e.\ on}\ [0,1],\ \gamma(0)=0,\ \gamma(1)=1\}, \end{eqnarray*} where $\dot{\gamma}$ is the derivative of~$\gamma$.\\ \smallskip\\ The {\bf group $\Gamma_0$ of invertible warping functions}, $\Gamma_0\subset\Gamma$, is defined by \[ \Gamma_0 = \{\gamma|\gamma\in\Gamma,\ \dot{\gamma}>0 \mathrm{\ a.e.\ on\ } [0,1]\}.\] {\bf Observation 4.4:} The functions in $\Gamma$ and $\Gamma_0$ play an important role in functional data and shape analysis as they are used to reparametrize an absolutely continuous function by warping its domain during the process of aligning its shape to the shape of another absolutely continuous function. As demonstrated in \cite{lahiri}, \cite{srivastava}, it is $\Gamma$ and $\Gamma_0$ that induce a quotient space of~$L^2[0,1]$ with a well-defined distance metric. More on this below. We note, given $\gamma\in\Gamma$, since $\gamma$ is continuous, $\gamma(0)=0$, $\gamma(1)=1$, $\gamma:[0,1]\rightarrow [0,1]$, then by the intermediate value theorem \cite{rudin}, $\gamma([0,1])=[0,1]$. 
We note, given $\gamma\in\Gamma_0$, $\gamma$ is strictly increasing, thus has an inverse $\gamma^{-1}$ which is also in~$\Gamma_0$ as $\gamma^{-1}(0)=0$, $\gamma^{-1}(1)=1$, $\gamma^{-1}$ is absolutely continuous on~$[0,1]$ by Proposition~3.24, and $(\gamma^{-1})'>0$ a.e. on~$[0,1]$, also from Proposition~3.24, since $\gamma$ (the inverse of $\gamma^{-1}$) is absolutely continuous on~$[0,1]$.\\ Note, given $\gamma$, $\tilde{\gamma}\in\Gamma$, then $\gamma\circ\tilde{\gamma}$ is absolutely continuous on~$[0,1]$ by Proposition~3.25 and clearly $(\gamma\circ\tilde{\gamma})(0)=0$, $(\gamma\circ\tilde{\gamma})(1)=1$. Accordingly, if $\gamma$, $\tilde{\gamma}\in\Gamma_0$, in order to conclude that $\gamma\circ\tilde{\gamma}\in\Gamma_0$, we prove $(\gamma\circ\tilde{\gamma})'>0$ a.e. on~$[0,1]$. For this purpose let $A=\{t\in [0,1]:\dot{\gamma}(t)>0\}$, $B=[0,1]\setminus A$, $C=\tilde{\gamma}^{-1}(B)$. Clearly, $mB=0$ and since $\tilde{\gamma}^{-1}$ is absolutely continuous on~$[0,1]$ as just proved above, then $mC=0$ by Proposition~3.21. Let $D=[0,1]\setminus C$. Accordingly, we only need to prove $(\gamma\circ\tilde{\gamma})'>0$ a.e. on~$D$. Clearly, $\dot{\tilde{\gamma}}$ exists (and is positive) a.e. on~$D$. Since $\tilde{\gamma}(D)\subseteq A$, and $\dot{\gamma}$ exists (and is positive) on~$A$, then $(\gamma\circ\tilde{\gamma})'$ exists a.e. on~$D$. Indeed it exists exactly at the points in~$D$ where $\dot{\tilde{\gamma}}$ exists. Thus, by the usual chain rule of calculus, $(\gamma\circ\tilde{\gamma})'(t) = \dot{\gamma}(\tilde{\gamma}(t))\dot{\tilde{\gamma}}(t)$ for $t\in D$ at which $\dot{\tilde{\gamma}}$ exists. Since as mentioned above $\dot{\gamma}(\tilde{\gamma}(t))$ exists and is positive for all $t\in D$, and $\dot{\tilde{\gamma}}$ exists and is positive a.e. on~$D$, then $(\gamma\circ\tilde{\gamma})'=(\dot{\gamma}\circ\tilde{\gamma})\dot{\tilde{\gamma}}>0$ a.e. 
on~$D$.\\ Finally, given $\gamma$, $\tilde{\gamma}\in\Gamma$, we show $\gamma\circ\tilde{\gamma}\in\Gamma$. It suffices to show $(\gamma\circ\tilde{\gamma})'\geq 0$ a.e. on~$[0,1]$. For this purpose let $A=\{t\in [0,1]:\dot{\gamma}(t) \geq 0\}$, $B=[0,1]\setminus A$, $C=\{t\in [0,1]:\tilde{\gamma}(t)\in B\}$. Clearly, $mB=0$. Letting $G=[0,1]\setminus C$, then we can show that $(\gamma\circ\tilde{\gamma})'\geq 0$ a.e. on~$G$, in the same manner we showed above for $\gamma$, $\tilde{\gamma}\in\Gamma_0$ that $(\gamma\circ\tilde{\gamma})'>0$ a.e. on~$D$. Thus, in order to complete the proof we show $(\gamma\circ\tilde{\gamma})'=0$ a.e. on~$C$. Since $\tilde{\gamma}(C)\subseteq B$, then $m(\tilde{\gamma}(C)) = mB = 0$, and since $\gamma$ is absolutely continuous on~$[0,1]$, then $m(\gamma(\tilde{\gamma}(C)))=0$ by Proposition~3.21. Note $(\gamma\circ\tilde{\gamma})'$ exists a.e. on~$C$ as $\gamma\circ\tilde{\gamma}$ is absolutely continuous on~$[0,1]$. That $(\gamma\circ\tilde{\gamma})'=0$ a.e. on~$C$ now follows from Corollary~3.12. \\ \smallskip\\ {\bf Proposition 4.4 (SRSF of a warped absolutely continuous function \cite{srivastava}):} Given $f\in AC[0,1]$ and $\gamma\in\Gamma$, then $f\circ\gamma\in AC[0,1]$ and $(f\circ\gamma)(0) = f(0)$. With $q$ the SRSF of~$f$, without any loss of generality, in the spirit of Observation~3.8 about the chain rule, interpreting $f'$ and $q$ as $f'$ and $q$ extended to all of $[0,1]$, it then follows that the SRSF of $f\circ\gamma$ equals $(q\circ\gamma)\sqrt{\dot{\gamma}}$ a.e. on $[0,1]$. \\ \smallskip\\ {\bf Proof:} Clearly, $(f\circ\gamma)(0)=f(\gamma(0))=f(0)$. That $f\circ\gamma\in AC[0,1]$ follows directly from Proposition~3.25. Accordingly, it then follows from Corollary~3.13 (chain rule) that $(f\circ \gamma)' = (f'\circ \gamma)\dot{\gamma}$ a.e. on~$[0,1]$. 
Thus, the SRSF of $f\circ\gamma$, which is defined, for each $t\in[0,1]$ at which $(f\circ\gamma)'$ exists, as $\mathrm{sign}((f\circ\gamma)'(t))\sqrt{|(f\circ\gamma)'(t)|}$, must equal \begin{eqnarray*} \mathrm{sign}((f'\circ\gamma)(t)\dot{\gamma}(t)) \sqrt{|(f'\circ\gamma)(t)\dot{\gamma}(t)|} &=&\mathrm{sign}(f'(\gamma(t)))\sqrt{|f'(\gamma(t))|}\sqrt{\dot{\gamma}(t)}\\ &=& q(\gamma(t))\sqrt{\dot{\gamma}(t)} \end{eqnarray*} for almost all $t\in [0,1]$. \\ \smallskip\\ {\bf Observation 4.5:} Note the SRSF $q$ of $f$ is defined (Definition~4.1) for each $t\in [0,1]$ at which $f'$ exists. However, although the SRSF of $f\circ\gamma$ equals $(q\circ\gamma)\sqrt{\dot{\gamma}}$ a.e. on~$[0,1]$, it is not necessarily true that the SRSF of $f\circ\gamma$ exists at each $t\in [0,1]$ for which $q(\gamma(t))\sqrt{\dot{\gamma}(t)}$ exists, or that, where it does exist, it equals $q(\gamma(t))\sqrt{\dot{\gamma}(t)}$. \\ \smallskip\\ {\bf Observation 4.6:} An isometry is a distance-preserving transformation between two metric spaces. Here we describe in a nonrigorous manner isometries (and differentials as well) in the context of differential and Riemannian geometry. Let $M$, $N$ be spaces and let $\varphi$ be a mapping from $M$ into $N$, with $M$, $N$, $\varphi$ satisfying certain smoothness properties (in the language of differential geometry, $M$ and $N$ are {\bf smooth} or {\bf differentiable manifolds} which are spaces that locally resemble Euclidean, Hilbert or Banach spaces, and $\varphi$ is {\bf differentiable (generalized to smooth manifolds)}; here and in what follows, differentiability in the context of smooth manifolds is assumed to be of all orders). Given $\epsilon>0$, $p\in M$, assume $\alpha:(-\epsilon,\epsilon)\rightarrow M$ can be defined, $\alpha(0)=p$, $\alpha$ a curve in $M$, differentiable (generalized to smooth manifolds) so that $\alpha'(0)$ makes sense. Then $\alpha'(0)$ is considered to be a tangent vector to the curve $\alpha$ at $t=0$, and to $M$ at~$p$. 
Accordingly, the set of all tangent vectors to $M$ at $p$ is called the {\bf tangent space} of $M$ at~$p$ and denoted by~$T_pM$. Similarly, given $q\in N$, the set of all tangent vectors to $N$ at $q$ is called the tangent space of $N$ at $q$ and denoted by~$T_qN$. With $\alpha$ as above, define $\beta:(-\epsilon,\epsilon)\rightarrow N$ by $\beta=\varphi\circ\alpha$. Then $\beta(0)=\varphi(p)$, $\beta$ is a curve in~$N$, and we assume it is differentiable (generalized to smooth manifolds) so that $\beta'(0)$ makes sense and is then in $T_{\varphi(p)}N$. The mapping $d\varphi_p:T_pM\rightarrow T_{\varphi(p)}N$ given by $d\varphi_p(\alpha'(0)) = \beta'(0)$ is a linear mapping called the {\bf differential} of~$\varphi$ at~$p$. Finally, assume there is a correspondence on $M$, smooth in some manner (see below), that associates to each point $p$ in $M$ an inner product $<,>_p$ on the tangent space $T_pM$. Similarly, assume there is a correspondence on $N$, smooth in the same manner, that associates to each point $q$ in $N$ an inner product $<,>_q$ on the tangent space~$T_qN$. If $\varphi$ as above is bijective and satisfies certain smoothness properties (in the language of differential geometry, $\varphi$ is a {\bf diffeomorphism}: $\varphi$ is bijective, and $\varphi$ and $\varphi^{-1}$ are differentiable (generalized to smooth manifolds)), then $\varphi$ is called an {\bf isometry} if $<u,v>_p = <d\varphi_p(u),d\varphi_p(v)>_{\varphi(p)}$, for all $p\in M$ and all $u,v\in T_pM$, where the inner product on the left is the one on $T_pM$ and the inner product on the right is the one on~$T_{\varphi(p)}N$. 
On the other hand, if $\varphi$ is differentiable and satisfies $<u,v>_p = <d\varphi_p(u),d\varphi_p(v)>_{\varphi(p)}$, for all $p\in M$ and all $u,v\in T_pM$, but is not a diffeomorphism, then $\varphi$ is called a~{\bf semi-isometry}.\\ In the language of Riemannian geometry, the smooth correspondence above between points in a smooth manifold and inner products on tangent spaces of the space at the points is called a {\bf Riemannian metric} or {\bf structure}. Smooth manifolds equipped with such a structure are called {\bf Riemannian manifolds}. Using the Riemannian structure, the length of a curve in a Riemannian manifold~$M$ is computed as follows. Given $\alpha:[0,1]\rightarrow M$, a curve or path in~$M$, differentiable (generalized to smooth manifolds) on~$[0,1]$ so that~$\alpha'(t)$ makes sense for $t\in [0,1]$ and is then in the tangent space~$T_{\alpha(t)}M$, the length $L(\alpha)$ of the path $\alpha$ is then given by \[ L(\alpha)=\int_0^1\sqrt{<\alpha'(t),\alpha'(t)>_{\alpha(t)}}dt,\] where $<,>_{\alpha(t)}$ is the inner product on the tangent space $T_{\alpha(t)}M$, and the smoothness of the Riemannian structure is such that $\sqrt{<\alpha'(t),\alpha'(t)>_{\alpha(t)}}$, $t\in [0,1]$, is integrable over~$[0,1]$ so that $L(\alpha)$ is well defined. In addition, given $p$, $q\in M$, the {\bf geodesic distance} $d(p,q)$ between them is defined as the minimum of the lengths of all paths $\alpha$ in $M$, $\alpha:[0,1]\rightarrow M$, differentiable (generalized to smooth manifolds) with $\alpha(0)=p$ and $\alpha(1)=q$, i.e., \[ d(p,q) = \min_{\alpha:[0,1]\rightarrow M,\,\alpha \,\mathrm{differentiable\,on\,}[0,1],\,\alpha(0)\,=\,p,\,\alpha(1)\,=\,q} L(\alpha).\] If a path $\alpha$ exists such that $d(p,q)$ achieves its minimum at $\alpha$, then $\alpha$ is called a {\bf geodesic} in~$M$ between~$p$ and~$q$. We note that geodesics in Euclidean spaces and $L^2[0,1]$ are given by straight lines. 
Thus, for example, given $p$, $q$ in $L^2[0,1]$, then $\alpha:[0,1]\rightarrow L^2[0,1]$ defined by $\alpha(t)=(1-t)p+tq$ for $t\in [0,1]$, is the geodesic between $p$ and $q$ and the distance $d(p,q)$ is \[ \int_0^1 (\int_0^1 |p(s)-q(s)|^2ds)^{1/2}dt = \int_0^1 ||p-q||_2dt = ||p-q||_2.\] Finally, we note that with the distance $d(p,q)$ as defined above, it then follows that an isometry, as defined above, is indeed a distance-preserving transformation. We show this in a nonrigorous manner. Given Riemannian manifolds~$M$, $N$, let $\alpha:[0,1]\rightarrow M$ be a path from $p$ to $q$ in $M$, $\alpha(0)=p$, $\alpha(1)=q$, $\varphi:M\rightarrow N$ an isometry. Let $\beta = \varphi\circ\alpha$. Then $\beta:[0,1]\rightarrow N$, $\beta(0)=\varphi(p)$, $\beta(1)=\varphi(q)$, and $\beta$ is a path from $\varphi(p)$ to $\varphi(q)$ in~$N$. Since for any $t\in [0,1]$ the differential $d\varphi_{\alpha(t)}:T_{\alpha(t)}M\rightarrow T_{\varphi(\alpha(t))}N$ is given by $d\varphi_{\alpha(t)}(\alpha'(t)) = \beta'(t)$, then \begin{eqnarray*} L(\beta)&=&\int_0^1\sqrt{<\beta'(t),\beta'(t)>_{\beta(t)}}dt\\ &=& \int_0^1\sqrt{<d\varphi_{\alpha(t)}(\alpha'(t)), d\varphi_{\alpha(t)}(\alpha'(t))>_{\varphi(\alpha(t))}}dt\\ &=&\int_0^1\sqrt{<\alpha'(t),\alpha'(t)>_{\alpha(t)}}dt=L(\alpha) \end{eqnarray*} since $\varphi$ is an isometry. Similarly, given a path $\beta:[0,1]\rightarrow N$ from $\varphi(p)$ to $\varphi(q)$ in~$N$, there is a path $\alpha:[0,1]\rightarrow M$ from $p$ to~$q$ in~$M$ with $L(\alpha)=L(\beta)$. Thus, $d(p,q)=d(\varphi(p),\varphi(q))$. \\See \cite{carmo1}, \cite{carmo2}, \cite{lee1}, \cite{lee2}, \cite{srivastava} for a more rigorous development of the concepts of smooth manifolds, Riemannian manifolds, differentials, isometries,~etc. \\ \smallskip\\ {\bf Observation 4.7:} In what follows, given $q\in L^2[0,1]$, $\gamma\in\Gamma$, we use $(q,\gamma)$~as short notation for $(q\circ\gamma)\sqrt{\dot{\gamma}}$. 
Here again, without any loss of generality, in the spirit of Observation~3.8 about the chain rule, $q$ is interpreted as $q$ extended to all of $[0,1]$. As it will be shown below, $(q,\gamma)\in L^2[0,1]$ so that without any loss of generality, again in the spirit of Observation~3.8 about the chain rule, $(q,\gamma)$ can be interpreted as $(q,\gamma)$ extended to all of~$[0,1]$, and given $\overline{\gamma}\in\Gamma$, $((q,\gamma),\overline{\gamma})$ can be interpreted as $((q,\gamma),\overline{\gamma})$ extended to all of~$[0,1]$. \\ \smallskip\\ {\bf Proposition 4.5:} Given $q\in L^2[0,1]$ and $\gamma\in\Gamma$, then $(q,\gamma)\in L^2[0,1]$. In addition, given $\overline{\gamma}\in\Gamma$, then $((q,\gamma),\overline{\gamma})=(q,\gamma\circ\overline{\gamma})$ a.e. on~$[0,1]$, and if $\gamma\in\Gamma_0$, then $((q,\gamma),\gamma^{-1}) = q$ a.e. on~$[0,1]$. \\ \smallskip\\ {\bf Proof:} Let $h(t)=|q(t)|^2$ for each $t\in[0,1]$ at which $q$ exists. Then $h$ is Lebesgue integrable over~$[0,1]$ and by Corollary~3.14, $(h\circ\gamma)\dot{\gamma}= |q\circ\gamma|^2\dot{\gamma}$ is Lebesgue integrable over~$[0,1]$. Thus, $(q,\gamma)= (q\circ\gamma)\sqrt{\dot{\gamma}}\in L^2[0,1]$.\\ Now, if $\overline{\gamma}\in\Gamma$, then \begin{eqnarray*} ((q,\gamma),\overline{\gamma})&=&((q\circ\gamma)\sqrt{\dot{\gamma}},\overline{\gamma}) =(((q\circ\gamma)\sqrt{\dot{\gamma}})\circ\overline{\gamma}) \sqrt{\dot{\overline{\gamma}}}\\ &=& ((q\circ\gamma\circ\overline{\gamma})\sqrt{\dot{\gamma}\circ\overline{\gamma}}) \sqrt{\dot{\overline{\gamma}}} = (q\circ(\gamma\circ\overline{\gamma}))\sqrt{(\dot{\gamma}\circ\overline{\gamma}) \dot{\overline{\gamma}}}\\ &=& (q\circ(\gamma\circ\overline{\gamma}))\sqrt{(\gamma\circ\overline{\gamma})'} =(q,\gamma\circ\overline{\gamma}) \end{eqnarray*} a.e. 
on $[0,1]$ using Corollary 3.13 (chain rule) as $\gamma\circ\overline{\gamma}\in AC[0,1]$, by interpreting $\dot{\gamma}$ as $\dot{\gamma}$ extended to all of~$[0,1]$ (Observation~3.8 about the chain rule).\\ Finally, if $\gamma\in\Gamma_0$, then \begin{eqnarray*} ((q,\gamma),\gamma^{-1}) &=& ((q\circ\gamma)\sqrt{\dot{\gamma}},\gamma^{-1}) =(((q\circ\gamma)\sqrt{\dot{\gamma}})\circ\gamma^{-1}) \sqrt{(\gamma^{-1})'}\\ &=& ((q\circ\gamma\circ\gamma^{-1})\sqrt{\dot{\gamma}\circ\gamma^{-1}}) \sqrt{(\gamma^{-1})'} = q\sqrt{(\dot{\gamma}\circ\gamma^{-1})(\gamma^{-1})'}\\ &=& q\sqrt{(\dot{\gamma}\circ\gamma^{-1})/(\dot{\gamma}\circ\gamma^{-1})}=q \end{eqnarray*} a.e. on $[0,1]$ using Proposition 3.24. \\ \smallskip\\ {\bf Definition 4.3:} The {\bf action of $\Gamma_0$ on $L^2[0,1]$} is the operation that takes any element $\gamma\in\Gamma_0$ and any element $q$ of $L^2[0,1]$, and computes $(q,\gamma)=(q\circ\gamma)\sqrt{\dot{\gamma}}$. The {\bf action of $\Gamma$ on $L^2[0,1]$} is similarly defined. \\ \smallskip\\ {\bf Proposition 4.6 (Action of $\Gamma$ on $L^2[0,1]$ is by semi-isometries. Action of $\Gamma_0$ on $L^2[0,1]$ is by isometries \cite{srivastava}):} For each $\gamma\in\Gamma$, let $\varphi^{\gamma}: L^2[0,1]\rightarrow L^2[0,1]$ be defined for $q\in L^2[0,1]$ by $\varphi^{\gamma}(q)=(q,\gamma)=(q\circ\gamma)\sqrt{\dot{\gamma}}$. Then $\varphi^{\gamma}$ is differentiable and \[ <d\varphi^{\gamma}(u),d\varphi^{\gamma}(v)> = <(u,\gamma),(v,\gamma)> = <u,v> \] for any $u$, $v\in L^2[0,1]$, where $<,>$ is the $L^2[0,1]$ inner product and $d\varphi^{\gamma}$ is the differential of $\varphi^{\gamma}$, with $<,>$ and $d\varphi^{\gamma}$ the same at every $q\in L^2[0,1]$. Thus, $\varphi^{\gamma}$ is a semi-isometry and the action of $\Gamma$ on $L^2[0,1]$ is said to be by semi-isometries. If $\gamma\in\Gamma_0$, then $\varphi^{\gamma}$ is a diffeomorphism. Thus, $\varphi^{\gamma}$ is an isometry and the action of $\Gamma_0$ on $L^2[0,1]$ is said to be by isometries. 
\\ \smallskip\\ {\bf Proof:} If $\gamma\in\Gamma$, from Proposition 4.5 it follows that the range of $\varphi^{\gamma}$ is indeed in~$L^2[0,1]$. Since the tangent space of $L^2[0,1]$ at any point is $L^2[0,1]$ itself, it follows that $<,>$ is the same at every~$q\in L^2[0,1]$. Given $u$, $v\in L^2[0,1]$, from Proposition~2.41 (H\"{o}lder's inequality), $u\cdot v\in L^1$. Let $h(s)= u(s)v(s)$ for each $s\in [0,1]$ at which $u$ and $v$ exist. By Corollary~3.14 (change of variable), \begin{eqnarray*} <u,v> &=& \int_0^1u(s)v(s)ds = \int_{\gamma(0)}^{\gamma(1)}h(s)ds =\int_0^1 h(\gamma(t))\dot{\gamma}(t)dt\\ &=& \int_0^1 u(\gamma(t))\sqrt{\dot{\gamma}(t)} v(\gamma(t))\sqrt{\dot{\gamma}(t)}dt\\ &=& <(u,\gamma),(v,\gamma)> = <d\varphi^{\gamma}(u),d\varphi^{\gamma}(v)> \end{eqnarray*} as $\varphi^{\gamma}$ is linear so that it is differentiable and $d\varphi^{\gamma}$ acts on an element of $L^2[0,1]$ the same way $\varphi^{\gamma}$~does. Thus, $d\varphi^{\gamma}$ is the same at every $q\in L^2[0,1]$, and in addition, $\varphi^{\gamma}$ is a semi-isometry and the action of $\Gamma$ on $L^2[0,1]$ is by semi-isometries. If $\gamma\in\Gamma_0$, then $\varphi^{\gamma}$ is a bijection and its inverse~$\varphi^{\gamma^{-1}}$ is linear so that it is differentiable. Thus, $\varphi^{\gamma}$ is a diffeomorphism and therefore an isometry, and the action of $\Gamma_0$ on $L^2[0,1]$ is by~isometries. \\ \smallskip\\ {\bf Corollary 4.1 (Action of $\Gamma_0$ on $L^2[0,1]$ is distance preserving \cite{srivastava}):} Given $q_1$, $q_2\in L^2$, and $\gamma\in\Gamma_0$, then $||q_1-q_2||_2 = ||(q_1,\gamma)-(q_2,\gamma)||_2$. \\ \smallskip\\ {\bf Proof:} $d(q_1,q_2)= ||q_1-q_2||_2$ and $d((q_1,\gamma),(q_2,\gamma))=||(q_1,\gamma)-(q_2,\gamma)||_2$ (Observation~4.6). The action of $\Gamma_0$ on~$L^2[0,1]$ is by isometries (Proposition~4.6). Thus, $d(q_1,q_2)=d((q_1,\gamma),(q_2,\gamma))$ (Observation 4.6), and hence, $||q_1-q_2||_2 = ||(q_1,\gamma)-(q_2,\gamma)||_2$. 
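The distance-preserving property just proved can also be illustrated numerically. The following sketch is our own illustration, not part of the formal development; the grid, the warp $\gamma(t)=t^2$, and the functions $q_1$, $q_2$ are arbitrary choices, and the $L^2$ norms are approximated by the trapezoid rule:

```python
import numpy as np

def trapz(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 1.0, 4001)
gamma = t ** 2        # gamma(0) = 0, gamma(1) = 1, gamma' = 2t > 0 a.e., so gamma is in Gamma_0
dgamma = 2.0 * t

def act(q_fun):
    """(q, gamma) = (q o gamma) sqrt(gamma'), evaluated on the grid t."""
    return q_fun(gamma) * np.sqrt(dgamma)

q1 = lambda s: np.cos(2.0 * np.pi * s)
q2 = lambda s: s - 0.5

d_before = np.sqrt(trapz((q1(t) - q2(t)) ** 2, t))
d_after = np.sqrt(trapz((act(q1) - act(q2)) ** 2, t))
print(abs(d_before - d_after) < 1e-3)  # True up to discretization error
```

The agreement of the two distances is exactly the change-of-variable computation of Observation 4.8 below, carried out discretely.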
\\ \smallskip\\ {\bf Corollary 4.2 (Action of $\Gamma_0$ on $L^2[0,1]$ is norm preserving \cite{srivastava}):} Given $q\in L^2$, and $\gamma\in\Gamma_0$, then $||q||_2 = ||(q,\gamma)||_2$. \\ \smallskip\\ {\bf Observation 4.8 (Action of $\Gamma$ on $L^2[0,1]$ is distance and norm preserving):} Corollary 4.1 and Corollary 4.2 can be shown to hold for all of $\Gamma$ as follows. Given $q_1$, $q_2\in L^2$, and $\gamma\in\Gamma$, then $q_1-q_2\in L^2$ and by Corollary~3.14 (change of variable), \begin{eqnarray*} ||(q_1,\gamma)-(q_2,\gamma)||_2^2 &=& \int_0^1 |q_1(\gamma(t))\sqrt{\dot{\gamma}(t)}- q_2(\gamma(t))\sqrt{\dot{\gamma}(t)}|^2dt\\ &=& \int_0^1 (q_1(\gamma(t))-q_2(\gamma(t)))^2\dot{\gamma}(t)dt\\ &=& \int_{\gamma(0)}^{\gamma(1)} (q_1(s)-q_2(s))^2ds = \int_0^1 (q_1(s)-q_2(s))^2ds\\ &=& ||q_1-q_2||_2^2. \end{eqnarray*} {\bf Definition 4.4:} Let $AC^0[0,1]=\{f: f\in AC[0,1],\, f'>0 \mathrm{\ a.e.\ on\ } [0,1]\}$. The {\bf Fisher-Rao metric} at any $f\in AC^0[0,1]$ is defined as the inner product \[\ll u,v\gg_f\, = \frac{1}{4}\int_0^1 \dot{u}(t)\dot{v}(t)\frac{1}{f'(t)}dt \] for any $u, v\in T_fAC^0[0,1]$.\\ \smallskip\\ {\bf Observation 4.9:} The integral in the definition of the Fisher-Rao metric at $f\in AC^0[0,1]$ is well defined as $u$, $v$ are functions on~$[0,1]$ that are absolutely continuous \cite{schmeding}, hence $\dot{u}$, $\dot{v}$, $f'$ exist a.e. on~$[0,1]$, $f'>0$ a.e. on~$[0,1]$, thus $\dot{u}/\sqrt{f'}$, $\dot{v}/\sqrt{f'}$ exist a.e. on~$[0,1]$ and are in $L^2[0,1]$ (see below), so that $\dot{u}\dot{v}/f'$ is Lebesgue integrable over~$[0,1]$ by Proposition~2.41 (H\"{o}lder's inequality). In addition, this metric, as defined at elements of $AC^0[0,1]$, is known to have the behavior of a Riemannian metric~\cite{srivastava}. In what follows, we assume $AC^0[0,1]$ is endowed with this metric.
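As a small numerical illustration of the Fisher-Rao inner product just defined (a sketch, not part of the formal development; the choices $f(t)=t$ and $u=v=t^2$ are ours, picked so that the closed form $\frac{1}{4}\int_0^1 4t^2dt=1/3$ is available for comparison):

```python
import numpy as np

def trapz(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 1.0, 5001)
df = np.ones_like(t)   # f(t) = t, so f' = 1 > 0 a.e. and f is in AC^0[0,1]
du = 2.0 * t           # u(t) = t^2
dv = 2.0 * t           # v(t) = t^2

# << u, v >>_f = (1/4) * integral over [0,1] of u'(t) v'(t) / f'(t) dt
fisher_rao = 0.25 * trapz(du * dv / df, t)
print(abs(fisher_rao - 1.0 / 3.0) < 1e-6)  # True: closed form is 1/3
```

With $f'=1$ the metric reduces to a rescaled $L^2$ inner product of the derivatives, which is what the discrete computation reproduces.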
\\ \smallskip\\ {\bf Proposition 4.7:} Given $f\in AC^0[0,1]$ and $\gamma\in\Gamma_0$, then $f\circ\gamma\in AC^0[0,1]$ and $(f\circ\gamma)(0)=f(0)$.\\ \smallskip\\ {\bf Proof:} That $f\circ\gamma\in AC[0,1]$ and $(f\circ\gamma)(0)=f(0)$ was established in Proposition~4.4. Accordingly, in order to conclude that $f\circ\gamma\in AC^0[0,1]$ we prove $(f\circ\gamma)'>0$ a.e. on~$[0,1]$. For this purpose let $A=\{t\in [0,1]:f'(t)>0\}$, $B=[0,1]\setminus A$, $C=\gamma^{-1}(B)$. Clearly, $mB=0$ and since $\gamma^{-1}$ is absolutely continuous on~$[0,1]$ (Observation~4.4), then $mC=0$ by Proposition~3.21. Let $D=[0,1]\setminus C$. Accordingly, we only need to prove $(f\circ\gamma)'>0$ a.e. on~$D$. Clearly, $\dot{\gamma}$ exists (and is positive) a.e. on~$D$. Since $\gamma(D)\subseteq A$, and $f'$ exists (and is positive) on~$A$, then $(f\circ\gamma)'$ exists a.e. on~$D$. Indeed it exists exactly at the points in~$D$ where $\dot{\gamma}$ exists. Thus, by the usual chain rule of calculus, $(f\circ\gamma)'(t) = f'(\gamma(t))\dot{\gamma}(t)$ for $t\in D$ at which $\dot{\gamma}$ exists. Since as mentioned above $f'(\gamma(t))$ exists and is positive for all $t\in D$, and $\dot{\gamma}$ exists and is positive a.e. on~$D$, then $(f\circ\gamma)'=(f'\circ\gamma)\dot{\gamma}>0$ a.e. on~$D$. \\ \smallskip\\ {\bf Definition 4.5:} The {\bf action of $\Gamma_0$ on $AC^0[0,1]$} is the operation that takes any element $\gamma\in\Gamma_0$ and any element $f$ of $AC^0[0,1]$, and computes~$f\circ\gamma$.\\ \smallskip\\ {\bf Observation 4.10:} In what follows, two functions in $AC[0,1]$ are considered equal if they differ by a constant. Simpler yet, we assume all functions in $AC[0,1]$ have the same value at zero. Since by Proposition~4.4, if $f\in AC[0,1]$, $\gamma\in\Gamma$, then $f\circ\gamma\in AC[0,1]$ and $(f\circ\gamma)(0)=f(0)$, and since in addition the SRSF of $(f+C)$ is the same for any constant~$C$, the latter is a reasonable assumption. 
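The point of Observation 4.10, that the SRSF is insensitive to translations of $f$, can be checked numerically. In the sketch below (our own illustration), the SRSF is taken to be $f'/\sqrt{|f'|}$ as in the development around Proposition 4.3, the derivative is approximated by finite differences, and the grid, the function $f$, and the constant $C$ are arbitrary choices:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
f = np.sin(3.0 * t) + 4.0 * t   # f' = 3 cos(3t) + 4 > 0, so f is in AC^0[0,1]
C = 7.5                          # arbitrary translation constant

def srsf(f_vals):
    """SRSF f' / sqrt(|f'|), with f' approximated by finite differences."""
    df = np.gradient(f_vals, t)
    return np.sign(df) * np.sqrt(np.abs(df))

# Adding a constant to f does not change its derivative, hence not its SRSF.
print(np.allclose(srsf(f), srsf(f + C)))  # True: the SRSF ignores translations
```

This is why fixing the value of all functions in $AC[0,1]$ at zero, as in Observation 4.10, loses no information at the level of SRSF's.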
\\ \smallskip\\ {\bf Proposition 4.8 (Action of $\Gamma_0$ on $AC^0[0,1]$ with Fisher-Rao metric is by isometries \cite{srivastava}):} For each $\gamma\in\Gamma_0$, let $\varphi^{\gamma}: AC^0[0,1]\rightarrow AC^0[0,1]$ be defined for $f\in AC^0[0,1]$ by $\varphi^{\gamma}(f)=f\circ\gamma$. Then $\varphi^{\gamma}$ is a diffeomorphism and \[ \ll d\varphi_f^{\gamma}(u),d\varphi_f^{\gamma}(v)\gg_{f\circ\gamma} = \ll u\circ \gamma,v\circ\gamma\gg_{f\circ \gamma}= \ll u,v\gg_f\] for any $u$, $v\in T_fAC^0[0,1]$, where $\ll ,\gg_f$ is the inner product that defines the Fisher-Rao metric at~$f$, $\ll ,\gg_{f\circ\gamma}$ is the inner product that defines it at~$f\circ\gamma$, and $d\varphi_f^{\gamma}$ is the differential of $\varphi^{\gamma}$ at~$f$. Thus, $\varphi^{\gamma}$ is an isometry and the action of $\Gamma_0$ on $AC^0[0,1]$ is said to be by~isometries. \\ \smallskip\\ {\bf Proof:} If $\gamma\in\Gamma_0$, from Proposition 4.7, the range of $\varphi^{\gamma}$ is indeed in~$AC^0[0,1]$. As noted in Observation~4.9, given $f\in AC^0[0,1]$, $u$, $v\in T_fAC^0[0,1]$, then $u$, $v$ are functions on~$[0,1]$ that are absolutely continuous \cite{schmeding}, and $h=\dot{u}\dot{v}/f'$ is Lebesgue integrable over~$[0,1]$. By Corollary~3.14 (change of variable), Corollary~3.13 (chain rule) as $u\circ\gamma$, $v\circ\gamma$, $f\circ\gamma\in AC[0,1]$, by interpreting $h$, $\dot{u}$, $\dot{v}$, $f'$ as $h$, $\dot{u}$, $\dot{v}$, $f'$ extended to all of~$[0,1]$ (Observation~3.8 about the chain rule), and noting that all denominators below are greater than zero a.e. 
on~$[0,1]$ (Proposition~4.7 and its proof), then \begin{eqnarray*} \ll u,v\gg_f &=& \frac{1}{4}\int_0^1\dot{u}(s)\dot{v}(s)\frac{1}{f'(s)}ds = \frac{1}{4}\int_{\gamma(0)}^{\gamma(1)}h(s)ds\\ &=& \frac{1}{4}\int_0^1 h(\gamma(t))\dot{\gamma}(t)dt = \frac{1}{4}\int_0^1 \dot{u}(\gamma(t))\dot{v}(\gamma(t))\frac{1}{f'(\gamma(t))} \dot{\gamma}(t)dt\\ &=&\frac{1}{4}\int_0^1 \dot{u}(\gamma(t))\dot{\gamma}(t)\dot{v}(\gamma(t))\dot{\gamma}(t) \frac{1}{f'(\gamma(t))\dot{\gamma}(t)}dt\\ &=&\frac{1}{4}\int_0^1 (u(\gamma(t)))'(v(\gamma(t)))'\frac{1}{(f(\gamma(t)))'}dt\\ &=& \ll u\circ\gamma,v\circ\gamma\gg_{f\circ\gamma} = \ll d\varphi^{\gamma}_f(u),d\varphi^{\gamma}_f(v)\gg_{f\circ\gamma} \end{eqnarray*} as $\varphi^{\gamma}$ is linear so that it is differentiable and $d\varphi^{\gamma}_f$ acts on an element of $T_fAC^0[0,1]$ the same way $\varphi^{\gamma}$~does on an element of~$AC^0[0,1]$. Since $\gamma\in\Gamma_0$, then $\varphi^{\gamma}$ is a bijection and its inverse~$\varphi^{\gamma^{-1}}$ is linear so that it is differentiable. Thus, $\varphi^{\gamma}$ is a diffeomorphism and therefore an isometry, and the action of $\Gamma_0$ on $AC^0[0,1]$ is by~isometries.\\ \smallskip\\ {\bf Proposition 4.9 (Fisher-Rao metric on $AC^0[0,1]$ under SRSF representation becomes $L^2[0,1]$ metric \cite{srivastava}):} Given $f\in AC^0[0,1]$ and $q$ the SRSF of~$f$, define a mapping $F:AC^0[0,1]\rightarrow L^2[0,1]$ by~$F(f)=q=\sqrt{f'}$. Then $F$ is differentiable, and for any $v \in T_fAC^0[0,1]$, it must be that $F_f^*(v)=\dot{v}/(2\sqrt{f'})\in T_qL^2[0,1]= L^2[0,1]$, where $F_f^*$ is the differential of~$F$ at~$f$. Given $v_1$, $v_2\in T_fAC^0[0,1]$, then $<F_f^*(v_1),F_f^*(v_2)>=\ll v_1,v_2\gg_f$, where $<,>$ is the $L^2[0,1]$ inner product and $\ll ,\gg_f$ is the inner product that defines the Fisher-Rao metric at~$f$. 
\\ \smallskip\\ {\bf Proof:} Let $L_0^1[0,1]=\{\hat{f}: \hat{f}\in L^1[0,1],\, \hat{f}>0 \mathrm{\ a.e.\ on\ } [0,1]\}$.\\ Given $\hat{f}\in L_0^1[0,1]$, define a mapping $S: L_0^1[0,1]\rightarrow L^2[0,1]$ by $S(\hat{f}) = \sqrt{\hat{f}}$.\\ In addition, given $f\in AC^0[0,1]$, define a mapping (the derivative mapping) $D:AC^0[0,1]\rightarrow L_0^1[0,1]$ by $D(f)=f'$.\\ With $F$ as defined above, then $F=S\circ D$.\\ Given $v\in T_fAC^0[0,1]$, then $D_f^*(v)=\dot{v}\in T_{f'}L_0^1[0,1]$, where $D_f^*$ is the differential of $D$ at~$f$, as $D$ is linear so that it is differentiable and $D_f^*$ acts on an element of $T_fAC^0[0,1]$ the same way $D$ acts on an element of~$AC^0[0,1]$.\\ Let $s:{\bf R}\rightarrow {\bf R}$ be the mapping defined by $s(x)=\sqrt{x}$, $x\in {\bf R}$, $x>0$. Then $s$ is differentiable for $x>0$, and $s^*(y)=s'(x)y=y/(2\sqrt{x})$ for any $y\in {\bf R}$, where $s^*$ is the differential of~$s$. From this, following closely the definition of the differential of a differentiable function \cite{lee1,srivastava}, it then follows that $S$ is differentiable and given $w\in T_{\hat{f}}L_0^1[0,1]$, then $S_{\hat{f}}^*(w) = w/(2\sqrt{\hat{f}})\in T_{\sqrt{\hat{f}}}L^2[0,1]$, where $S_{\hat{f}}^*$ is the differential of $S$ at~$\hat{f}$.\\ Thus, $F=S\circ D$ is differentiable and its differential $F_f^*:T_fAC^0[0,1]\rightarrow T_qL^2[0,1]$ at $f\in AC^0[0,1]$ is $S_{D(f)}^*\circ D_f^*=S_{f'}^*\circ D_f^*$~\cite{lee1}.\\ Accordingly, given $v\in T_fAC^0[0,1]$, then $F_f^*(v)=S_{f'}^*(D_f^*(v))=S_{f'}^*(\dot{v})=\dot{v}/(2\sqrt{f'})\in T_{\sqrt{f'}}L^2[0,1]=T_qL^2[0,1]=L^2[0,1]$.\\ Finally, given $v_1$, $v_2\in T_fAC^0[0,1]$, then \begin{eqnarray*} <F_f^*(v_1),F_f^*(v_2)>&=&<\dot{v}_1/(2\sqrt{f'}),\dot{v}_2/(2\sqrt{f'})> = \frac{1}{4}\int_0^1 \dot{v}_1(t)\dot{v}_2(t)\frac{1}{f'(t)}dt\\ &=&\ll v_1,v_2 \gg_f. 
\end{eqnarray*} {\bf Observation 4.11 (Distance between functions in $AC[0,1]$):} Given $f_1$, $f_2 \in AC[0,1]$, let $q_1$, $q_2$ be the SRSF's of $f_1$, $f_2$, respectively. We note that computing the distance between $f_1$ and $f_2$ with the Fisher-Rao metric as defined above may not be possible as a path in $AC[0,1]$ from $f_1$ to $f_2$ might contain functions whose derivatives are not positive a.e. on~$[0,1]$. Even if this were not the case, the minimization involved would be nontrivial. Accordingly, motivated by Proposition~4.9 above, the convention is to say that the {\bf Fisher-Rao distance} between $f_1$ and $f_2$ is $d_{FR}(f_1,f_2)=||q_1-q_2||_2$, i.e., the $L^2$ distance between $q_1$ and $q_2$. In addition, since the geodesic from $q_1$ to $q_2$ is a straight line, given $s\in [0,1]$, then $q=(1-s)q_1+sq_2$ is a function on this geodesic, and by Proposition~4.3, a function $f\in AC[0,1]$ can be computed for each $t\in [0,1]$ by $f(t) = C +\int_0^t q(x)|q(x)|dx$, where~$C=f_1(0)=f_2(0)$, with the SRSF of $f$ equal to $q$ a.e. on~$[0,1]$. Doing this for enough functions on the straight line joining $q_1$ and $q_2$, a collection of functions can be obtained in~$AC[0,1]$ that are then said to approximate a geodesic (based on the Fisher-Rao metric) from $f_1$ to~$f_2$. \\ \smallskip\\ {\bf Definition 4.6:} Given $q\in L^2[0,1]$, define the {\bf orbit $[q]_{\Gamma_0}$ of $q$ under $\Gamma_0$} by $[q]_{\Gamma_0}=\{\overline{q}: \overline{q}=(q,\gamma)=(q\circ\gamma)\sqrt{\dot{\gamma}}\ \mathrm{a.e.\ on}\ [0,1], \ \mathrm{some}\ \gamma\in\Gamma_0\}$. Denote by $cl([q]_{\Gamma_0})$ the closure in $L^2[0,1]$ of~$[q]_{\Gamma_0}$. \\ \smallskip\\ {\bf Observation 4.12:} In what follows, given $q_1$, $q_2\in L^2[0,1]$, $q_1\in [q_2]_{\Gamma_0}$ so that $q_1= (q_2,\gamma)$ a.e. on~$[0,1]$ for some $\gamma\in\Gamma_0$, without any loss of generality we may simply say $q_1=(q_2,\gamma)$.
Accordingly, given $q_1$, $q_2\in L^2[0,1]$, $q_1\in [q_2]_{\Gamma_0}$ so that $q_1= (q_2,\gamma)$ for some $\gamma\in\Gamma_0$, then it follows (Proposition~4.5) that $[q_1]_{\Gamma_0}\subseteq [q_2]_{\Gamma_0}$, and $q_2=(q_1,\gamma^{-1})$ so that $[q_2]_{\Gamma_0}\subseteq [q_1]_{\Gamma_0}$ and thus $[q_1]_{\Gamma_0}=[q_2]_{\Gamma_0}$. Using similar arguments, given $q_1$, $q_2\in L^2[0,1]$, an equivalence relation $\,\sim\,$ can be defined and justified on~$L^2[0,1]$ for which $q_1\,\sim\,q_2$ if $q_1$ and $q_2$ are in the same orbit under~$\Gamma_0$. Accordingly, with this equivalence relation a quotient space is obtained which is the set of all orbits of elements of $L^2[0,1]$ under $\Gamma_0$ and which we denote by~$L^2[0,1]/\Gamma_0$. An attempt then can be made as follows to define a distance function~$d$ between elements of $L^2[0,1]/\Gamma_0$ that would make $L^2[0,1]/\Gamma_0$ a metric space. Given $q_1$, $q_2\in L^2[0,1]$, let \begin{eqnarray*} d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0}) &=& \inf_{\gamma_1,\gamma_2\in\Gamma_0} ||(q_1,\gamma_1)-(q_2,\gamma_2)||_2\\ &=& \inf_{\gamma\in\Gamma_0} ||q_1-(q_2,\gamma)||_2 = \inf_{\gamma\in\Gamma_0} ||(q_1,\gamma)-q_2||_2, \end{eqnarray*} where the bottom equations follow from Corollary~4.1 (action of $\Gamma_0$ on $L^2[0,1]$ is distance preserving) again using Proposition~4.5 where appropriate. Of the properties that $d$ must satisfy to be a distance function all have been established \cite{srivastava} except one: $d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})=0$ if and only if~$[q_1]_{\Gamma_0}=[q_2]_{\Gamma_0}$. Unfortunately, as demonstrated in \cite{lahiri}, the orbits as defined are not closed in $L^2[0,1]$, which allows for examples with $[q_1]_{\Gamma_0}\not=[q_2]_{\Gamma_0}$ but $d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})=0$. \\ \smallskip\\ {\bf Proposition 4.10 (\cite{lahiri}):} Given $q_1$, $q_2\in L^2[0,1]$, then $d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})=0$ if and only if~$cl([q_1]_{\Gamma_0})=cl([q_2]_{\Gamma_0})$. 
In particular, if $q_1\in cl([q_2]_{\Gamma_0})$ so that $d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})=0$, then $cl([q_1]_{\Gamma_0})=cl([q_2]_{\Gamma_0})$. \\ \smallskip\\ {\bf Proof:} If $d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})=0$, fix $\overline{q}_1\in [q_1]_{\Gamma_0}$ and note $[\overline{q}_1]_{\Gamma_0}=[q_1]_{\Gamma_0}$ (Observation~4.12) so that $d([\overline{q}_1]_{\Gamma_0},[q_2]_{\Gamma_0}) = d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})=0$. Then~given integer $n>0$, there is $\gamma_n\in\Gamma_0$ such that $||\overline{q}_1-(q_2,\gamma_n)||_2<1/n$. Thus, $\overline{q}_1\in cl([q_2]_{\Gamma_0})$. Since $\overline{q}_1$ is arbitrary in $[q_1]_{\Gamma_0}$ then $[q_1]_{\Gamma_0}\subseteq cl([q_2]_{\Gamma_0})$, thus $cl([q_1]_{\Gamma_0})\subseteq cl([q_2]_{\Gamma_0})$. Similarly, $cl([q_2]_{\Gamma_0})\subseteq cl([q_1]_{\Gamma_0})$, thus \mbox{$cl([q_1]_{\Gamma_0})=cl([q_2]_{\Gamma_0})$}.\\ Assume $cl([q_1]_{\Gamma_0})=cl([q_2]_{\Gamma_0})$. Then, in particular, $q_1\in cl([q_2]_{\Gamma_0})$ so that given integer $n>0$, there is $\gamma_n\in\Gamma_0$ with $||q_1-(q_2,\gamma_n)||_2<1/n$. Thus, $d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0}) = \inf_{\gamma\in\Gamma_0} ||q_1-(q_2,\gamma)||_2=0$. \\ \smallskip\\ {\bf Observation 4.13:} Using arguments similar to those in the proof of Proposition~4.10 above, given $q_1$, $q_2\in L^2[0,1]$, an equivalence relation $\,\sim\,$ can be defined and justified on~$L^2[0,1]$ for which $q_1\,\sim\,q_2$ if $q_1$ and $q_2$ are in the closure of the same orbit under~$\Gamma_0$. Accordingly, with this equivalence relation a quotient space is obtained which is the set of all closures of orbits of elements of $L^2[0,1]$ under $\Gamma_0$ and which we denote by~$L^2[0,1]/\sim\,$. In what follows, we extend the function~$d$ above to the quotient space~$L^2[0,1]/\sim\,$. 
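Before turning to the formal statement of the distance between equivalence classes, here is a crude numerical illustration (our own sketch, not the method used in the literature) of approximating $\inf_{\gamma\in\Gamma_0}||q_1-(q_2,\gamma)||_2$: the infimum over all of $\Gamma_0$ is replaced by a brute-force search over the one-parameter family $\gamma_a(t)=t+at(1-t)$, $|a|<1$, each member of which lies in $\Gamma_0$ since $\gamma_a(0)=0$, $\gamma_a(1)=1$, $\dot{\gamma}_a=1+a(1-2t)>0$. Practical implementations search a much richer family, typically by dynamic programming.

```python
import numpy as np

def trapz(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 1.0, 4001)
q1 = lambda s: np.cos(2.0 * np.pi * s)   # arbitrary test function

def act(q, a):
    """(q, gamma_a) = (q o gamma_a) sqrt(gamma_a') for gamma_a(t) = t + a t (1 - t)."""
    g = t + a * t * (1.0 - t)
    dg = 1.0 + a * (1.0 - 2.0 * t)       # > 0 on [0,1] whenever |a| < 1
    return q(g) * np.sqrt(dg)

# Build q2 as a warped copy of q1, so q1 and q2 lie in the same orbit and
# the true infimum of || q1 - (q2, gamma) ||_2 over Gamma_0 is 0.
b = 0.4
q2_vals = act(q1, b)
q2 = lambda s: np.interp(s, t, q2_vals)

def dist(a):
    return float(np.sqrt(trapz((q1(t) - act(q2, a)) ** 2, t)))

# Brute-force search over the one-parameter family as a stand-in for the
# infimum over Gamma_0 (a = 0 is the identity warp).
grid = np.linspace(-0.9, 0.9, 181)
d_identity = dist(0.0)
d_best = min(dist(a) for a in grid)
print(d_best < d_identity)  # True: warping strictly improves the alignment
```

Since the family does not contain the exact inverse of the warp used to build $q_2$, the search only partially aligns the two functions, but it already decreases the distance relative to applying no warp at all.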
\\ \smallskip\\ {\bf Corollary 4.3 (Distance between equivalence classes in $L^2[0,1]/\sim\,$ \cite{lahiri}, \cite{srivastava}):} Given $q_1$, $q_2\in L^2[0,1]$, let \[d(cl([q_1]_{\Gamma_0}),cl([q_2]_{\Gamma_0})) = \inf_{\overline{q}_1\in cl([q_1]_{\Gamma_0}),\overline{q}_2\in cl([q_2]_{\Gamma_0})} ||\overline{q}_1-\overline{q}_2||_2.\] Then $d(cl([q_1]_{\Gamma_0}),cl([q_2]_{\Gamma_0})) = \inf_{\gamma_1,\gamma_2\in\Gamma_0} ||(q_1,\gamma_1)-(q_2,\gamma_2)||_2= \inf_{\gamma\in\Gamma_0} ||q_1-(q_2,\gamma)||_2 = \inf_{\gamma\in\Gamma_0} ||(q_1,\gamma)-q_2||_2 = d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})$, and $d$ is a distance function between elements of $L^2[0,1]/\sim\,$, so that $L^2[0,1]/\sim\,$ is a metric space with this distance function.\\ \smallskip\\ {\bf Proof:} Note \[\inf_{\overline{q}_1\in cl([q_1]_{\Gamma_0}),\overline{q}_2\in cl([q_2]_{\Gamma_0})} ||\overline{q}_1-\overline{q}_2||_2 = \inf_{\gamma_1,\gamma_2\in\Gamma_0} ||(q_1,\gamma_1)-(q_2,\gamma_2)||_2,\] which holds because the $L^2[0,1]$ norm is continuous, so that taking the infimum over the orbits or over their closures yields the same value. Thus, $d(cl([q_1]_{\Gamma_0}),cl([q_2]_{\Gamma_0})) = \inf_{\gamma_1,\gamma_2\in\Gamma_0} ||(q_1,\gamma_1)-(q_2,\gamma_2)||_2 =\inf_{\gamma\in\Gamma_0} ||q_1-(q_2,\gamma)||_2 =\inf_{\gamma\in\Gamma_0} ||(q_1,\gamma)-q_2||_2 = d([q_1]_{\Gamma_0},[q_2]_{\Gamma_0})$, as previously noted in Observation~4.12.\\ That $d$ is a distance function follows from Proposition 4.10 and results about properties of this distance function in~\cite{srivastava}. \\ \smallskip\\ {\bf Observation 4.14:} Given $f_1$, $f_2\in AC[0,1]$, and $q_1$, $q_2$, the SRSF's of $f_1$, $f_2$, respectively, we note that $q_1$ and $q_2$ remain unchanged after translations of $f_1$ and $f_2$ (by translations we mean $f_1$ and $f_2$ become $f_1+c_1$ and $f_2+c_2$, respectively, for constants $c_1$, $c_2$) so that the distance between the equivalence classes of $q_1$ and $q_2$, defined by $d$ above, is the same before and after the translations. That this is true follows from the definition of the SRSF.
For scalar multiplications of $f_1$ and $f_2$, the distance between the equivalence classes of $q_1$ and $q_2$ before and after the scalar multiplications can be approximated or computed exactly, if possible, by the same elements of $\Gamma_0$ as the following proposition shows. Accordingly, it is customary to normalize $q_1$ and $q_2$ so that $||q_1||_2=||q_2||_2=1$ and then compute the distance between their equivalence classes with $d$ as above, as from the comments just made doing so is compatible with the requirement that the shapes of $f_1$ and $f_2$ be invariant under translation and scalar multiplication. \\ \smallskip\\ {\bf Proposition 4.11 (\cite{srivastava}):} Given $q_1$, $q_2\in L^2[0,1]$, and $\gamma^*$, $\gamma\in\Gamma_0$ for which $||q_1-(q_2,\gamma^*)||_2\leq ||q_1-(q_2,\gamma)||_2$, then $||bq_1-(cq_2,\gamma^*)||_2\leq ||bq_1-(cq_2,\gamma)||_2$, for any $b$, $c$ with $bc>0$. \\ \smallskip\\ {\bf Proof:} With $<,>$ as the $L^2$ inner product, note \[ ||q_1-(q_2,\gamma^*)||_2^2 = ||q_1||_2^2 -2 <q_1,(q_2,\gamma^*)> + ||(q_2,\gamma^*)||_2^2, \] and \[ ||q_1-(q_2,\gamma)||_2^2 = ||q_1||_2^2 -2 <q_1,(q_2,\gamma)> + ||(q_2,\gamma)||_2^2.\] Thus, $||q_1-(q_2,\gamma^*)||_2\leq ||q_1-(q_2,\gamma)||_2$ and $||q_2||_2=||(q_2,\gamma^*)||_2=||(q_2,\gamma)||_2$, $bc>0$, imply $-2bc <q_1,(q_2,\gamma^*)>\ \leq\ -2bc <q_1,(q_2,\gamma)>$.\\ Accordingly, since $||cq_2||_2=||(cq_2,\gamma^*)||_2=||(cq_2,\gamma)||_2$, then \begin{eqnarray*} ||bq_1||_2^2 -2 <bq_1,(cq_2,\gamma^*)> + ||(cq_2,\gamma^*)||_2^2 & &\\ \leq \mathrm{\ \ } ||bq_1||_2^2 -2 <bq_1,(cq_2,\gamma)> + ||(cq_2,\gamma)||_2^2, & & \end{eqnarray*} so that $||bq_1-(cq_2,\gamma^*)||_2\leq ||bq_1-(cq_2,\gamma)||_2$. \\ \smallskip\\ {\bf Observation 4.15:} Figure~1 illustrates an instance of approximately computing $d(cl([q_1]_{\Gamma_0}),cl([q_2]_{\Gamma_0}))$ as expressed in Corollary~4.3 above.
Here $q_1$, $q_2$ are the SRSF's of functions $f_1$, $f_2$, respectively, plotted in the leftmost diagram, $f_1$ in red, $f_2$ in blue, $q_1$, $q_2$ normalized so that $||q_1||_2=||q_2||_2=1$. The distance (about 0.1436) was approximately computed (in about 154 seconds) with {\em adapt-DP}~\cite{bernal}, a fast linear dynamic programming algorithm. The resulting warping function $\gamma\in\Gamma_0$ that approximately minimizes $||q_1-(q_2,\gamma)||_2$ is plotted in the rightmost diagram, and $f_1$ and $f_2\circ\gamma$ are plotted in the middle diagram in which they appear essentially aligned. The functions $f_1$ and $f_2$ were given in the form of sets of 19,693 and 19,763 points, respectively, with nonuniform domains in~$[0,1]$. A copy of {\em adapt-DP} with usage instructions and data files for the same example in Figure~1 can be obtained using~links: \verb+https://doi.org/10.18434/T4/1502501+ \verb+http://math.nist.gov/~JBernal+ \verb+/Fast_Dynamic_Programming.zip+ \begin{figure} \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{chrom_before.eps} & \includegraphics[width=0.3\textwidth]{chrom_after.eps} & \includegraphics[width=0.3\textwidth]{warp_fun.eps} \end{tabular} \end{center} \caption{Function alignment by warping that is obtained from the computation of $d(cl([q_1]_{\Gamma_0}),cl([q_2]_{\Gamma_0}))$.} \end{figure} \\ \smallskip\\ {\bf Definition 4.7:} Given $q\in L^2[0,1]$, define the {\bf orbit $[q]_{\Gamma}$ of $q$ under $\Gamma$} by $[q]_{\Gamma}=\{\overline{q}: \overline{q}=(q,\gamma)=(q\circ\gamma)\sqrt{\dot{\gamma}}\ \mathrm{a.e.\ on}\ [0,1], \ \mathrm{some}\ \gamma\in\Gamma\}$. \\ \smallskip\\ {\bf Observation 4.16:} In what follows, we present results found mostly in~\cite{lahiri} for the purpose of showing that given $q\in L^2[0,1]$, then there exist $w\in L^2[0,1]$, $\gamma\in\Gamma$, such that $q=(w,\gamma)$, $|w|$ constant a.e.
on~$[0,1]$, $cl([w]_{\Gamma_0}) = [w]_{\Gamma}$ so that $q\in cl([w]_{\Gamma_0})$, and thus $cl([q]_{\Gamma_0}) = cl([w]_{\Gamma_0}) = [w]_{\Gamma}$. We note that this result doesn't change how $d(cl([q_1]_{\Gamma_0}),cl([q_2]_{\Gamma_0}))$ in Corollary~4.3 is computed for $q_1$, $q_2\in L^2[0,1]$. It should still be done by computing $\inf_{\gamma\in\Gamma_0} ||q_1-(q_2,\gamma)||_2$ or $\inf_{\gamma\in\Gamma_0} ||(q_1,\gamma)-q_2||_2$ as implied by Corollary~4.3. \\ \smallskip\\ {\bf Proposition 4.12:} $A_0=\{q\in L^2[0,1]: ||q||_2=1, \ q>0\ \mathrm{a.e.\ on} \ [0,1]\}$ has closure in $L^2[0,1]$ equal to $A=\{q\in L^2[0,1]: ||q||_2=1, \ q\geq 0 \ \mathrm{a.e.\ on}\ [0,1]\}$. \\ \smallskip\\ {\bf Proof:} Clearly $A_0\subset A$. Let $\epsilon>0$ be given. Given $q\in A\setminus A_0$, then a measurable subset~$B$ of $[0,1]$ exists, $m(B)>0$, on which~$q=0$. Let $C=[0,1]\setminus B$. Then $q>0$ a.e. on~$C$ and $\int_C q(t)^2dt = \int_0^1 q(t)^2dt=1$.\\ Choose $b>0$, $b<1$ with $1-b<\epsilon/2$, and set $a=((1-b^2)/m(B))^{1/2}$.\\ Define a function $\hat{q}$ on $[0,1]$ by $\hat{q}=bq$ on~$C$ and $\hat{q}=a$ on~$B$. Then $\hat{q}>0$ a.e. on~$[0,1]$ and $\int_0^1\hat{q}(t)^2dt=\int_C(bq(t))^2dt +\int_B a^2 dt$ $=b^2\cdot 1 + a^2m(B) = b^2+(1-b^2) = 1$ so that $\hat{q}\in A_0$.\\ Note $\int_0^1(q(t)-\hat{q}(t))^2dt = \int_C(q(t)-bq(t))^2dt+\int_B a^2dt=(1-b)^2 +a^2m(B) = (1-b)^2+(1-b^2)= 1-2b+b^2+1-b^2=2-2b=2(1-b)<2\epsilon/2=\epsilon$ so that $q$ is in the closure of~$A_0$ in~$L^2[0,1]$ and this is true for every $q$ in $A\setminus A_0$.\\ Finally, if $q\not\in A$, we show $q$ is not in the closure of~$A_0$ in~$L^2[0,1]$. If $||q||_2\not =1$, then clearly $q$ is not in the closure. Thus, assume $||q||_2=1$. Since $q\not\in A$, then a measurable subset $B$ of~$[0,1]$ exists, $m(B)>0$, on which $q<0$. Thus, $\int_B q(t)^2dt >0$ (i of Proposition~2.34) so that $\int_B (q(t)-\hat{q}(t))^2dt$ $>\int_B q(t)^2dt$ for any $\hat{q}\in A_0$.
Thus, $q$ cannot be in the closure of $A_0$ in~$L^2[0,1]$ and $A$ must then be the closure of $A_0$ in~$L^2[0,1]$.\\ \smallskip\\ {\bf Corollary 4.4 (SRSF's of functions in $\Gamma_0$ and $\Gamma$; orbit of the constant function equal to 1 \cite{lahiri}):} $Q_0=\{\overline{q}: \overline{q}= \sqrt{\dot{\gamma}} \ \mathrm{a.e.\ on}\ [0,1],\ \mathrm{some}\ \gamma\in\Gamma_0\}$ has closure in $L^2[0,1]$ equal to $Q=\{\overline{q}: \overline{q}= \sqrt{\dot{\gamma}}\ \mathrm{a.e.\ on} \ [0,1],\ \mathrm{some}\ \gamma\in\Gamma\}$. In addition, with $A_0=\{q\in L^2[0,1]: ||q||_2=1, \ q>0\ \mathrm{a.e.\ on} \ [0,1]\}$, $A=\{q\in L^2[0,1]: ||q||_2=1, \ q\geq 0 \ \mathrm{a.e.\ on}\ [0,1]\}$, and $q_0$ the constant function equal to 1 on~$[0,1]$, then $[q_0]_{\Gamma_0}= Q_0=A_0$ and $cl([q_0]_{\Gamma_0})=Q=A$. \\ \smallskip\\ {\bf Proof:} If $\gamma\in\Gamma_0$ and $\overline{q}=\sqrt{\dot{\gamma}}$ a.e. on~$[0,1]$, then $\int_0^1 \overline{q}(t)^2dt = \int_0^1 \dot{\gamma}(t)dt=\gamma(1)-\gamma(0)=1$. Also $\overline{q}>0$ a.e. on $[0,1]$ since $\gamma\in\Gamma_0$ so that $\overline{q}\in A_0$. Thus, $Q_0\subseteq A_0$. On the other hand, if $q\in A_0$, then $q>0$ a.e. on~$[0,1]$, $q\in L^2[0,1]$, $||q||_2=1$. By Proposition~4.3, $\gamma$ defined for each $t\in [0,1]$ by $\gamma(t) = \int_0^t q(s)|q(s)|ds$ $=\int_0^t q(s)^2ds$ is absolutely continuous on~$[0,1]$ with $q$ equal to the SRSF of $\gamma$ a.e. on~$[0,1]$. Clearly $\gamma(0)=0$, $\gamma(1)= ||q||_2^2=1$, $\dot{\gamma} = q^2$ a.e. on~$[0,1]$, thus $\gamma\in\Gamma_0$ and $\sqrt{\dot{\gamma}}=q$ a.e. on~$[0,1]$ so that $q\in Q_0$. Thus~$Q_0=A_0$. Similarly,~$Q=A$ so that the closure of $Q_0$ in $L^2[0,1]$ is~$Q$ by Proposition~4.12.
Finally, $[q_0]_{\Gamma_0}=\{\overline{q}:\overline{q} = (q_0,\gamma)= (q_0\circ \gamma)\sqrt{\dot{\gamma}}=\sqrt{\dot{\gamma}}\ \mathrm{a.e.\ on}\ [0,1], \ \mathrm{some}\ \gamma\in\Gamma_0 \} = Q_0 = A_0$, and since, as just proved, the closure of $Q_0$ in $L^2[0,1]$ is $Q$, then $cl([q_0]_{\Gamma_0})= Q = A$. \\ \smallskip\\ {\bf Corollary 4.5 (\cite{lahiri}):} With $q_0$ the constant function equal to 1 on~$[0,1]$, given $q_1$, $q_2 \in L^2[0,1]$, $||q_1||_2=||q_2||_2=1$, if either (i) $q_1\geq 0$ a.e. on $[0,1]$ and $q_2\geq 0$ a.e. on $[0,1]$, or (ii) $q_1\leq 0$ a.e. on $[0,1]$ and $q_2\leq 0$ a.e. on $[0,1]$, then in the case of (i) it must be that $cl([q_1]_{\Gamma_0}) = cl([q_2]_{\Gamma_0}) = cl([q_0]_{\Gamma_0})$, and in the case of (ii) it must be that $cl([q_1]_{\Gamma_0}) = cl([q_2]_{\Gamma_0}) = cl([-q_0]_{\Gamma_0})$. In both cases a sequence $\{\gamma_n\}$ exists in $\Gamma_0$ with $(q_1,\gamma_n)\rightarrow q_2$ in~$L^2[0,1]$.\\ \smallskip\\ {\bf Proof:} With $A=\{q\in L^2[0,1]: ||q||_2=1, \ q\geq 0 \ \mathrm{a.e.\ on}\ [0,1]\}$, if (i) is true, then $q_1$, $q_2\in A = cl([q_0]_{\Gamma_0})$ (Corollary~4.4) so that by Proposition~4.10, $cl([q_1]_{\Gamma_0}) = cl([q_2]_{\Gamma_0}) = cl([q_0]_{\Gamma_0})$. On the other hand, if (ii) is true, then using similar arguments as above with $-q_0=-1$ taking the place of~$q_0$, it then follows that $cl([q_1]_{\Gamma_0}) = cl([q_2]_{\Gamma_0}) = cl([-q_0]_{\Gamma_0})$. In both cases $cl([q_1]_{\Gamma_0}) = cl([q_2]_{\Gamma_0})$ implies the existence of~$\{\gamma_n\}$. \\ \smallskip\\ {\bf Corollary 4.6 (\cite{lahiri}):} Given $q_1$, $q_2\in L^2[0,1]$, and a sequence of numbers $t_0=0 <t_1<\ldots <t_n=1$, such that for each $i$, $i=1,\ldots,n$, $\int_{t_{i-1}}^{t_i}q_1(t)^2dt = \int_{t_{i-1}}^{t_i}q_2(t)^2dt$, and either $q_1\geq 0$ and $q_2\geq 0$ a.e. on $[t_{i-1},t_i]$, or $q_1\leq 0$ and $q_2\leq 0$ a.e. 
on $[t_{i-1},t_i]$, then $cl([q_1]_{\Gamma_0})= cl([q_2]_{\Gamma_0})$.\\ \smallskip\\ {\bf Proof:} Given $i$, $1\leq i\leq n$, then a sequence $\{\lambda_n^i\}$ exists of absolutely continuous functions, $\lambda_n^i:[t_{i-1},t_i]\rightarrow [t_{i-1},t_i]$, $\dot{\lambda}_n^i>0$ a.e. on~$[t_{i-1},t_i]$, $\lambda_n^i(t_{i-1})=t_{i-1}$, $\lambda_n^i(t_i)=t_i$ for each $n$ such that $(q_1,\lambda_n^i)\rightarrow q_2$ in~$L^2[t_{i-1},t_i]$. Here $(q_1,\lambda_n^i)$ is understood to be $(q_1\circ\lambda_n^i)\sqrt{\dot{\lambda}_n^i}$ and $L^2[t_{i-1},t_i]$ the set of square-integrable functions over~$[t_{i-1},t_i]$. The existence of $\{\lambda_n^i\}$ is proved along the lines of the proof of Corollary~4.5, with $[t_{i-1},t_i]$ taking the place of $[0,1]$ and the value of $\int_{t_{i-1}}^{t_i} q_1(t)^2dt = \int_{t_{i-1}}^{t_i} q_2(t)^2dt$ not necessarily equal to~1.\\ Finally, define a sequence of functions $\{\gamma_n\}$, $\gamma_n:[0,1]\rightarrow [0,1]$, by setting $\gamma_n(t)=\lambda_n^i(t)$ if $t\in [t_{i-1},t_i]$ for each~$n$. It follows that $\gamma_n$ is absolutely continuous, $\gamma_n(0)=0$, $\gamma_n(1)=1$, $\dot{\gamma}_n>0$ a.e. on~$[0,1]$ for each~$n$. Thus $\{\gamma_n\}\subset \Gamma_0$ and since $(q_1,\lambda_n^i)\rightarrow q_2$ in~$L^2[t_{i-1},t_i]$ for each~$i$, then $(q_1,\gamma_n)\rightarrow q_2$ in $L^2[0,1]$. Thus, $q_2\in cl([q_1]_{\Gamma_0})$ and by Proposition~4.10, then $cl([q_1]_{\Gamma_0})=cl([q_2]_{\Gamma_0})$. \\ \smallskip\\ {\bf Corollary 4.7 (\cite{lahiri}):} Given $q_1$, $q_2\in L^2[0,1]$, and two sequences of numbers $t_0=0 <t_1<\ldots <t_n=1$, $t'_0=0 <t'_1<\ldots <t'_n=1$, such that for each $i$, $i=1,\ldots,n$, $\int_{t_{i-1}}^{t_i}q_1(t)^2dt = \int_{t_{i-1}'}^{t_i'}q_2(t)^2dt$, and either $q_1\geq 0$ a.e. on $[t_{i-1},t_i]$ and $q_2\geq 0$ a.e. on $[t_{i-1}',t_i']$, or $q_1\leq 0$ a.e. on $[t_{i-1},t_i]$ and $q_2\leq 0$ a.e. on $[t_{i-1}',t_i']$, then $cl([q_1]_{\Gamma_0})= cl([q_2]_{\Gamma_0})$.
\\ \smallskip\\ {\bf Proof:} Let $\gamma$ be the piecewise linear element of $\Gamma_0$ for which $\gamma(t_i')=t_i$, $i=0,\ldots,n$, and let~$w=(q_1,\gamma)$. It then follows by Corollary~3.14 (change of variable) that for each $i$, $i=1,\ldots,n$, $\int_{t_{i-1}'}^{t_i'} w(t)^2dt = \int_{t_{i-1}'}^{t_i'} (q_1(\gamma(t)))^2\dot{\gamma}(t)dt = \int_{t_{i-1}}^{t_i} q_1(s)^2ds = \int_{t_{i-1}'}^{t_i'}q_2(t)^2dt$, and since $w\geq 0$ a.e. on~$[t_{i-1}',t_i']$ if $q_1\geq 0$ a.e. on~$[t_{i-1},t_i]$, and $w\leq 0$ a.e. on~$[t_{i-1}',t_i']$ if $q_1\leq 0$ a.e. on~$[t_{i-1},t_i]$, then $w$ and $q_2$ satisfy the hypothesis of Corollary~4.6 for the sequence $\{t_n'\}$ so that $cl([w]_{\Gamma_0})=cl([q_2]_{\Gamma_0})$. Since $w=(q_1,\gamma)$, $\gamma\in\Gamma_0$, then $w\in [q_1]_{\Gamma_0}$ so that by Observation~4.12, $[w]_{\Gamma_0}=[q_1]_{\Gamma_0}$ and therefore $cl([q_1]_{\Gamma_0})= cl([q_2]_{\Gamma_0})$.\\ \smallskip\\ {\bf Proposition 4.13 (\cite{lahiri}):} Given $q\in L^2[0,1]$, then $[q]_{\Gamma}\subseteq cl([q]_{\Gamma_0})$. \\ \smallskip\\ {\bf Proof:} The proposition is first proved for step functions on~$[0,1]$. Accordingly, we assume $q$ is a step function and $\gamma\in\Gamma$.\\ Let $t_0=0<t_1<\ldots <t_n=1$ be the set of numbers that define the partition associated with $q$ as a step function. For each $i$, $i=0,\ldots,n$, let $t_i'\in [0,1]$ be such that $\gamma(t_i')=t_i$ with $t_0'=0$ and~$t_n'=1$. Note $t_0'=0<t_1'<\ldots <t_n'=1$, as $\gamma$ is a nondecreasing function from $[0,1]$ onto~$[0,1]$.\\ Let $w=(q,\gamma)$. It then follows by Corollary~3.14 (change of variable) that for each $i$, $i=1,\ldots,n$, $\int_{t_{i-1}'}^{t_i'} w(t)^2dt = \int_{t_{i-1}'}^{t_i'} (q(\gamma(t)))^2\dot{\gamma}(t)dt = \int_{t_{i-1}}^{t_i} q(s)^2ds$, and either $q\geq 0$ a.e. on $[t_{i-1},t_i]$ and $w\geq 0$ a.e. on $[t_{i-1}',t_i']$, or $q\leq 0$ a.e. on $[t_{i-1},t_i]$ and $w\leq 0$ a.e. on $[t_{i-1}',t_i']$. 
Thus, by Corollary~4.7, $cl([q]_{\Gamma_0})= cl([w]_{\Gamma_0})$ so that, in particular, $w=(q,\gamma)\in cl([q]_{\Gamma_0})$ and therefore, since $\gamma$ is arbitrary in $\Gamma$, then $[q]_{\Gamma}\subseteq cl([q]_{\Gamma_0})$.\\ Finally, we assume $q$ is any function in $L^2[0,1]$ and $\gamma\in\Gamma$. Given $\epsilon>0$, by Proposition~2.44 (density of step functions in $L^p$), there is a step function $v$ on $[0,1]$ such that~$||q-v||_2<\epsilon/3$. As just proved above, $(v,\gamma)\in cl([v]_{\Gamma_0})$ so that for some $\overline{\gamma}\in\Gamma_0$ it must be that $||(v,\gamma)-(v,\overline{\gamma})||_2<\epsilon/3$. Thus, by the triangle inequality together with Corollary~4.1 and Observation~4.8 (action of $\Gamma_0$ and $\Gamma$ is distance preserving) \begin{eqnarray*} ||(q,\gamma)-(q,\overline{\gamma})||_2 &\leq& ||(q,\gamma)-(v,\gamma)||_2 + ||(v,\gamma)-(v,\overline{\gamma})||_2\\ &+& ||(v,\overline{\gamma})-(q,\overline{\gamma})||_2\\ &=& ||q-v||_2 + ||(v,\gamma)-(v,\overline{\gamma})||_2 + ||v-q||_2\\ &<& \epsilon/3 + \epsilon/3 + \epsilon/3 = \epsilon \end{eqnarray*} so that $(q,\gamma)\in cl([q]_{\Gamma_0})$ and therefore, since $\gamma$ is arbitrary in $\Gamma$, then $[q]_{\Gamma}\subseteq cl([q]_{\Gamma_0})$.\\ \smallskip\\ {\bf Proposition 4.14 (Constant-speed parametrization of an absolutely continuous function \cite{stein}):} Given $f\in AC[0,1]$, then there exist $h\in AC[0,1]$, $\gamma\in\Gamma$, such that $|h'| = L = \int_0^1 |f'(t)|dt$ (the length of $f$) a.e. on~$[0,1]$ and~$f=h\circ\gamma$ on~$[0,1]$. \\ \smallskip\\ {\bf Proof:} Given $f\in AC[0,1]$, let $L=\int_0^1 |f'(t)|dt$. If $L=0$ then $f$ is constant on~$[0,1]$ (i of Proposition~2.34, Proposition~3.8). Otherwise, define $\gamma:[0,1]\rightarrow [0,1]$ by $\gamma(t)=(1/L)\int_0^t |f'(s)|ds$ for each $t\in [0,1]$. Accordingly, $\gamma(0)=0$, $\gamma(1)=1$, $\gamma\in AC[0,1]$ by Proposition~3.11, and $\dot{\gamma}(t)=(1/L)|f'(t)|$ a.e. on~$[0,1]$ by Proposition~3.7 so that $\dot{\gamma}\geq 0$ a.e. on~$[0,1]$.
Thus~$\gamma\in\Gamma$.\\ Given $s\in [0,1]$, then for some $t\in [0,1]$ it must be that $\gamma(t)=s$. Define $h:[0,1]\rightarrow {\bf R}$ by~$h(s)=f(t)$. The function $h$ is well defined, for if $s=\gamma(t_1)=\gamma(t_2)$, $t_1<t_2 \in [0,1]$, then $0 = \int_0^{t_2}|f'(x)|dx - \int_0^{t_1}|f'(x)|dx = \int_{t_1}^{t_2} |f'(x)|dx$. Thus, by i of Proposition~2.34, $f'=0$ a.e. on~$[t_1,t_2]$ so that by Proposition~3.8 $f$~is constant on~$[t_1,t_2]$ and, in particular,~$f(t_1)=f(t_2)$.\\ Clearly $h(\gamma(t)) = f(t)$ for each $t\in [0,1]$. Note for $s_1, s_2\in [0,1]$, $s_1<s_2$, then $s_1=\gamma(t_1)$, $s_2=\gamma(t_2)$, $t_1$, $t_2\in [0,1]$, $t_1<t_2$, and \[ |h(s_2)-h(s_1)| = |f(t_2)-f(t_1)|=|\int_{t_1}^{t_2} f'(x)dx|\leq \int_{t_1}^{t_2}|f'(x)|dx =L\cdot (s_2-s_1). \] From this inequality it follows clearly that $h\in AC[0,1]$ (Definition~3.5). Accordingly, $h$ is differentiable a.e. on~$[0,1]$ and $|h'|\leq L$ a.e. on~$[0,1]$ also from the inequality. Note that by Corollary~3.14 (change of variable) and Corollary~3.13 (chain rule), then \[ \int_0^1 |h'(s)|ds = \int_0^1 |h'(\gamma(t))|\dot{\gamma}(t)dt = \int_0^1|f'(t)|dt = L.\] By i of Proposition~2.34, then $|h'|=L$ a.e. on~$[0,1]$. \\ \smallskip\\ {\bf Corollary 4.8:} Given $q\in L^2[0,1]$, then there exist $w\in L^2[0,1]$, $\gamma\in\Gamma$, such that $|w|=\sqrt{L}$ a.e. on~$[0,1]$ and $q=(w,\gamma)$ a.e. on~$[0,1]$, where $L=\int_0^1|f'(t)|dt$ is the length of a function $f\in AC[0,1]$ whose SRSF equals $q$ a.e. on~$[0,1]$. In particular, if $||q||_2=1$ so that $L=1$, then $|w|=1$ a.e. on~$[0,1]$. \\ \smallskip\\ {\bf Definition 4.8:} A function $q\in L^2[0,1]$ is said to be in {\bf standard form} if for measurable subsets $A$, $B$ of~$[0,1]$, with $A\cap B=\emptyset$, $A\cup B=[0,1]$, then \[ q(t) = \left\{ \begin{array}{ll} 1 & \mathrm{for\ } t \in A\\ -1 & \mathrm{for\ } t \in B. \end{array} \right.
\] Clearly, if $q$ is in standard form, then $||q||_2=1$.\\ Let $SF[0,1]=\{ q: q\in L^2[0,1],\ q\ \mathrm{in\ standard\ form}\}.$ \\ \smallskip\\ {\bf Proposition 4.15 (\cite{lahiri}):} Given $q$, $w\in SF[0,1]$, if $q\not=w$ in $L^2$, i.e., if $m(\{t:t\in [0,1], q(t)\not=w(t)\})>0$, then $w\not\in cl([q]_{\Gamma_0})$. Thus $cl([w]_{\Gamma_0})\cap cl([q]_{\Gamma_0}) = \emptyset$.\\ The proof, given in \cite{lahiri}, uses Corollary~3.16 (change of variable for the Lebesgue integral over a measurable set) and Observation~2.25 (Schwarz's inequality over a measurable set). \\ \smallskip\\ {\bf Corollary 4.9 (Uniqueness of constant-speed parametrization):} Given $\tilde{q}\in L^2[0,1]$, $||\tilde{q}||_2=1$, if for $\gamma$, $\tilde{\gamma}\in\Gamma$, and $q$, $w\in SF[0,1]$, $\tilde{q}= (q,\gamma)$ and $\tilde{q}=(w,\tilde{\gamma})$, then $q=w$ a.e. on~$[0,1]$. \\ \smallskip\\ {\bf Proof:} By Proposition 4.13, $[q]_{\Gamma}\subseteq cl([q]_{\Gamma_0})$ and $[w]_{\Gamma}\subseteq cl([w]_{\Gamma_0})$. Thus, $\tilde{q}\in cl([q]_{\Gamma_0})\cap cl([w]_{\Gamma_0})$ so that by Proposition~4.15, $q=w$ a.e. on~$[0,1]$. \\ \smallskip\\ {\bf Proposition 4.16 (\cite{lahiri}):} Given $w\in SF[0,1]$, then $cl([w]_{\Gamma_0})= [w]_{\Gamma}$. \\ \smallskip\\ {\bf Proof:} From Proposition 4.13, we know $[w]_{\Gamma}\subseteq cl([w]_{\Gamma_0})$. Thus, it suffices to show $cl([w]_{\Gamma_0}) \subseteq [w]_{\Gamma}$. For this purpose, let $\tilde{q}$ be in $cl([w]_{\Gamma_0})$. Clearly $||\tilde{q}||_2=1$, and by Corollary~4.8, for some $q\in SF[0,1]$, and some $\gamma\in\Gamma$, it must be that $\tilde{q}=(q,\gamma)$ a.e. on~$[0,1]$. By Proposition~4.13, $[q]_{\Gamma}\subseteq cl([q]_{\Gamma_0})$. Thus, $\tilde{q}\in cl([q]_{\Gamma_0})\cap cl([w]_{\Gamma_0})$ so that by Proposition~4.15, $q=w$ a.e. on~$[0,1]$, and therefore $\tilde{q}$ is in~$[w]_{\Gamma}$. Thus $cl([w]_{\Gamma_0}) \subseteq [w]_{\Gamma}$.
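Proposition 4.14 and Corollary 4.8 are easy to probe numerically. The following Python sketch (all names and the choice $f(t)=t^2$ are our own illustration, not part of \cite{lahiri} or \cite{srivastava}) computes $\gamma(t)=(1/L)\int_0^t|f'(s)|ds$ on a grid, inverts it by interpolation, and checks that $h=f\circ\gamma^{-1}$ has $|h'|=L$ a.e. on $[0,1]$:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100_001)
f = t ** 2                     # f in AC[0,1]
df = 2 * t                     # f'(t)
dt = t[1] - t[0]

# cumulative arc length via the trapezoidal rule: gamma(t) = (1/L) int_0^t |f'|
cum = np.concatenate(([0.0], np.cumsum((np.abs(df[1:]) + np.abs(df[:-1])) / 2) * dt))
L = cum[-1]                    # length of f (here L = 1 exactly)
gamma = cum / L                # gamma(0)=0, gamma(1)=1, nondecreasing

# h = f o gamma^{-1}, computed by inverting gamma via linear interpolation
s = np.linspace(0.0, 1.0, 2_001)
h = np.interp(s, gamma, f)
dh = np.diff(h) / (s[1] - s[0])   # should be constant, equal to L
```

For $f(t)=t^2$ the construction is exact: $L=1$, $\gamma(t)=t^2$, and $h$ is the identity, so the computed `dh` is the constant $L$ up to floating-point error.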
\\ \smallskip\\ {\bf Corollary 4.10 (\cite{lahiri}):} Given $q\in L^2[0,1]$, if $q\not= 0$ a.e. on~$[0,1]$, then $cl([q]_{\Gamma_0})= [q]_{\Gamma}$. \\ \smallskip\\ {\bf Proof:} If $||q||_2=1$, then by Corollary~4.8, for some $w\in SF[0,1]$, and some $\gamma\in\Gamma$, it must be that $q=(w,\gamma)$ a.e. on~$[0,1]$. Since $q\not= 0$ a.e. on~$[0,1]$, then $(w,\gamma)= (w\circ\gamma)\sqrt{\dot{\gamma}}\not= 0$ a.e. on~$[0,1]$, and therefore $\dot{\gamma}\not=0$ a.e. on~$[0,1]$. Thus, $\gamma\in\Gamma_0$ so that $[q]_{\Gamma_0}=[w]_{\Gamma_0}$ and $(q,\gamma^{-1})= ((w,\gamma),\gamma^{-1})= w$ a.e. on~$[0,1]$ (Proposition~4.5). Accordingly, $cl([q]_{\Gamma_0}) = cl([w]_{\Gamma_0})= [w]_{\Gamma} = [(q,\gamma^{-1})]_{\Gamma}$ (Proposition~4.16). Given $\tilde{q}\in [(q,\gamma^{-1})]_{\Gamma}$, then for some $\tilde{\gamma}\in\Gamma$, $\tilde{q}=((q,\gamma^{-1}),\tilde{\gamma})=(q,\gamma^{-1}\circ\tilde{\gamma})$ a.e. on~$[0,1]$ (Proposition~4.5), and since $\gamma^{-1}\circ\tilde{\gamma}\in\Gamma$ (Observation~4.4), then~$\tilde{q}\in [q]_{\Gamma}$. On the other hand, given $\tilde{q}\in [q]_{\Gamma}$, then for some $\tilde{\gamma}\in\Gamma$, $\tilde{q}=(q,\tilde{\gamma})= (q,\gamma^{-1}\circ\gamma\circ\tilde{\gamma})=((q,\gamma^{-1}),\gamma\circ\tilde{\gamma})$ a.e. on~$[0,1]$ (Proposition~4.5), and since $\gamma\circ\tilde{\gamma}\in\Gamma$ (Observation~4.4), then~$\tilde{q}\in [(q,\gamma^{-1})]_{\Gamma}$. Thus $[(q,\gamma^{-1})]_{\Gamma}=[q]_{\Gamma}$ and therefore $cl([q]_{\Gamma_0})= [q]_{\Gamma}$.\\ If $||q||_2\not=1$, then clearly $||q||_2\not=0$, $q/||q||_2\not=0$ a.e. on~$[0,1]$, and as just proved $cl([q/||q||_2]_{\Gamma_0})=[q/||q||_2]_{\Gamma}$. Given $\tilde{q}\in cl([q]_{\Gamma_0})$, then for a sequence $\{\gamma_n\}\subset\Gamma_0$, $(q,\gamma_n)\rightarrow\tilde{q}$ in $L^2$.
Thus $(q/||q||_2,\gamma_n)\rightarrow\tilde{q}/||q||_2$ in $L^2$ implying $\tilde{q}/||q||_2=(q/||q||_2,\gamma)$ for some~$\gamma\in\Gamma$, and therefore $\tilde{q}=(q,\gamma)$ so that $\tilde{q}\in [q]_{\Gamma}$. On the other hand, given $\tilde{q}\in [q]_{\Gamma}$, then for some $\gamma\in\Gamma$, $\tilde{q}=(q,\gamma)$. Thus $\tilde{q}/||q||_2=(q/||q||_2,\gamma)$ implying for a sequence $\{\gamma_n\}\subset\Gamma_0$, $(q/||q||_2,\gamma_n)\rightarrow\tilde{q}/||q||_2$ in $L^2$, and therefore $(q,\gamma_n)\rightarrow\tilde{q}$ in $L^2$ so that $\tilde{q}\in cl([q]_{\Gamma_0})$. Thus $cl([q]_{\Gamma_0})= [q]_{\Gamma}$. \\ \smallskip\\ {\bf Observation 4.17:} As noted in \cite{lahiri}, given $f_1$, $f_2\in AC[0,1]$, and their SRSF's $q_1$, $q_2\in L^2[0,1]$, respectively, if $\tilde{q}_1\in cl([q_1]_{\Gamma_0})$, $\tilde{q}_2\in cl([q_2]_{\Gamma_0})$ exist such that $d(cl([q_1]_{\Gamma_0}),cl([q_2]_{\Gamma_0})) = ||\tilde{q}_1-\tilde{q}_2||_2$, assuming without any loss of generality that $q_1\not=0$ a.e. on~$[0,1]$, $q_2\not=0$ a.e. on~$[0,1]$ (Corollary~4.8), then by Corollary~4.10 above there exist $\gamma_1$, $\gamma_2 \in\Gamma$, such that $\tilde{q}_1=(q_1,\gamma_1)$ and $\tilde{q}_2=(q_2,\gamma_2)$. The pair $\gamma_1$, $\gamma_2$ is called an {\bf optimal matching} for~$f_1$,~$f_2$. In particular, it is proved in~\cite{lahiri} that if at least one of $cl([q_1]_{\Gamma_0})$, $cl([q_2]_{\Gamma_0})$ contains the SRSF of a piecewise linear function, then $\tilde{q}_1$, $\tilde{q}_2$ exist as above and therefore there is an optimal matching for~$f_1$,~$f_2$. This is actually proved in~\cite{lahiri} for absolutely continuous functions $f_1$, $f_2$ with range~${\bf R}^n$. 
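The computational fact behind many of the arguments above is that the action is norm preserving (Corollary 4.1, Observation 4.8): $||(q,\gamma)||_2 = ||q||_2$ for $\gamma\in\Gamma_0$. A minimal numerical check in Python, with the illustrative choices $q(s)=\cos(2\pi s)$ and $\gamma(t)=t^2\in\Gamma_0$ (ours, not from the sources):

```python
import numpy as np

# Check numerically that (q, gamma) = (q o gamma) * sqrt(gamma') preserves
# the L^2 norm on [0,1]; gamma(t) = t^2 satisfies gamma(0)=0, gamma(1)=1,
# gamma' > 0 a.e., so gamma is in Gamma_0.
q = lambda s: np.cos(2 * np.pi * s)
gamma = lambda t: t ** 2
dgamma = lambda t: 2 * t

t = np.linspace(0.0, 1.0, 200_001)
dt = t[1] - t[0]

def l2_sq(vals):
    # trapezoidal rule for int_0^1 vals(t)^2 dt
    v = vals ** 2
    return (v[0] / 2 + v[1:-1].sum() + v[-1] / 2) * dt

warped = q(gamma(t)) * np.sqrt(dgamma(t))   # (q, gamma)(t)
norm_warped = l2_sq(warped)                  # ||(q, gamma)||_2^2
norm_q = l2_sq(q(t))                         # ||q||_2^2  (= 1/2 exactly)
```

The change of variable $u=\gamma(t)$ turns $\int_0^1 q(\gamma(t))^2\dot{\gamma}(t)\,dt$ into $\int_0^1 q(u)^2\,du$, which is what the two computed quantities confirm on the grid.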
\\ \smallskip\\ {\bf\large Summary}\\ \smallskip\\ In order to understand the theory of functional data and shape analysis as presented in Srivastava and Klassen's textbook ``Functional and Shape Data Analysis" \cite{srivastava}, it is important to understand the basics of Lebesgue integration and absolute continuity, and the connections between them. In this survey paper, we have tried to provide a way to do exactly that. We have reviewed fundamental concepts and results about Lebesgue integration and absolute continuity, together with some results connecting the two notions, most of the material borrowed from Royden's ``Real Analysis" \cite{royden} and Rudin's ``Principles of Mathematical Analysis" \cite{rudin}. Additional important material was obtained from Saks' \cite{saks}, and Serrin and Varberg's \cite{serrin} seminal papers. In addition, we have presented fundamental concepts and results about functional data and shape analysis in 1-dimensional space, in the process shedding light on its dependence on Lebesgue integration and absolute continuity, and the connections between them, most of the material borrowed from Srivastava and Klassen's aforementioned textbook. Additional material presented at the end of the paper was obtained from Lahiri, Robinson and Klassen's outstanding manuscript~\cite{lahiri}. \\ \smallskip\\ {\bf\large Acknowledgements} \\ \smallskip\\ I am most grateful to Professor James F. Lawrence of George Mason University and the National Institute of Standards and Technology for the many insightful conversations on the subjects of Lebesgue integration and absolute continuity, and to Professor Eric Klassen of Florida State University for his generosity in always providing answers to my questions about his remarkable work on shape analysis.
https://arxiv.org/abs/1902.00051
Shape Analysis, Lebesgue Integration and Absolute Continuity Connections
https://arxiv.org/abs/math/0607686
The Modulo 1 Central Limit Theorem and Benford's Law for Products
We derive a necessary and sufficient condition for the sum of M independent continuous random variables modulo 1 to converge to the uniform distribution in L^1([0,1]), and discuss generalizations to discrete random variables. A consequence is that if X_1, ..., X_M are independent continuous random variables with densities f_1, ..., f_M, for any base B as M \to \infty for many choices of the densities the distribution of the digits of X_1 * ... * X_M converges to Benford's law base B. The rate of convergence can be quantified in terms of the Fourier coefficients of the densities, and provides an explanation for the prevalence of Benford behavior in many diverse systems.
\section{Introduction} We investigate necessary and sufficient conditions for the distribution of a sum of random variables modulo $1$ to converge to the uniform distribution. This topic has been fruitfully studied by many previous researchers. Our purpose here is to provide an elementary proof of prior results, and explicitly connect this problem to related problems in the Benford's Law literature concerning the distribution of the leading digits of products of random variables. As this question has motivated much of the research on this topic, we briefly describe that problem and its history, and then state our results. For any base $B$ we may uniquely write a positive $x\in\mathbb{R}$ as $x = M_B(x)\cdot B^k$, where $k\in \mathbb{Z}$ and $M_B(x)$ (called the mantissa) is in $[1,B)$. A sequence of positive numbers $\{a_n\}$ is said to be \textbf{Benford base $B$} (or to satisfy Benford's Law base $B$) if the probability that the base-$B$ mantissa of $a_n$ is at most $s$ is $\log_B s$. More precisely, \be \lim_{N \to \infty} \frac{ \#\{n \le N: \text{$1 \le M_B(a_n) \le s$} \} }{N} \ =\ \log_B s. \ee Benford behavior for continuous systems is defined analogously. Thus, in base $10$, the probability of observing a first digit of $j$ is $\log_{10} (j+1) - \log_{10} (j)$, implying that about $30\%$ of the time the first digit is a $1$. Benford's Law was first observed by Newcomb in the 1880s, who noticed that pages of numbers starting with $1$ in logarithm tables were significantly more worn than those starting with $9$. In 1938 Benford \cite{Ben} observed the same digit bias in 20 different lists with over 20,000 numbers in all. See \cite{Hi1,Rai} for a description and history.
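For instance, the leading digits of $2^n$ form a classical Benford sequence base $10$ (because $\log_{10}2$ is irrational, so $n\log_{10}2$ is equidistributed modulo $1$). A quick empirical check in Python (our own illustration, not from the paper):

```python
import math

# Frequencies of the leading digit of 2^n, n = 1..N, versus Benford's law.
N = 2000
counts = [0] * 10
x = 1
for _ in range(N):
    x *= 2
    counts[int(str(x)[0])] += 1   # leading digit of 2^n

freqs = [counts[d] / N for d in range(1, 10)]
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]
max_dev = max(abs(a - b) for a, b in zip(freqs, benford))
```

Already for $N=2000$ the observed digit frequencies agree with $\log_{10}(1+1/d)$ to within about a percent for every digit $d$.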
Many diverse systems have been shown to satisfy Benford's law, ranging from recurrence relations \cite{BrDu} to $n!$ and $\ncr{n}{k}$ ($0 \le k \le n$) \cite{Dia} to iterates of power, exponential and rational maps \cite{BBH,Hi2} to values of $L$-functions near the critical line and characteristic polynomials of random matrix ensembles \cite{KoMi} to iterates of the $3x+1$ Map \cite{KoMi,LS} to differences of order statistics \cite{MN}. There are numerous applications of Benford's Law. It is observed in natural systems ranging from hydrology data \cite{NM} to stock prices \cite{Ley}, and is used in computer science in analyzing round-off errors (see page 255 of \cite{Knu} and \cite{BH}), in determining the optimal way to store numbers\footnote{If the data is distributed according to Benford's Law base $2$, the probability of having to shift the result of multiplying two numbers if the mantissas are written as $0.x_1x_2x_3\cdots$ is about $.38$; if they are written as $x_1.x_2x_3\cdots$ the probability is about $.62$.} \cite{Ha}, and in accounting to detect tax fraud \cite{Nig1,Nig2}. See \cite{Hu} for a detailed bibliography of the field. In this paper we consider the distribution of digits of products of independent random variables, $X_1 \cdots X_M$, and the related questions about probability densities of random variables modulo $1$. Many authors \cite{Sa,ST,AS,Adh,Ha,Tu} have observed that the product (and more generally, any nice arithmetic operation) of two random variables is often closer to satisfying Benford's law than the input random variables; further, that as the number of terms increases, the resulting expression seems to approach Benford's Law. Many of the previous works are concerned with determining exact formulas for the distribution of $X_1 \cdots X_M$; however, to understand the distribution of the digits all we need is to understand $\log_B |X_1 \cdots X_M| \bmod 1$. 
This leads to the equivalent problem of studying sums of random variables modulo $1$. This formulation is now ideally suited for Fourier analysis. The main result is a variant of the Central Limit Theorem, which in this context states that for ``nice'' random variables, as $M\to\infty$ the sum\footnote{That is, we study sums of the form $Y_1 + \cdots + Y_M$. For the standard Central Limit Theorem one studies $\frac{\sum_m Y_m - \E[\sum_m Y_m]}{{\rm StDev}(\sum_m Y_m)}$. We subtract the mean and divide by the standard deviation to obtain a quantity which will be finite as $M\to\infty$; however, sums modulo $1$ are a priori finite, and thus their unscaled value is of interest.} of $M$ independent random variables modulo $1$ tends to the uniform distribution; by simple exponentiation this is equivalent to Benford's Law for the product (see \cite{Dia}). To emphasize the similarity to the standard Central Limit Theorem and the fact that our sums are modulo $1$, we refer to such results as Modulo $1$ Central Limit Theorems. Many authors \cite{Bh,Bo,Ho,JR,Lev,Lo,Ro,Sc1,Sc2,Sc3} have analyzed this problem in various settings and generalizations, obtaining sufficient conditions on the random variables (often identically distributed) as well as estimates on the rate of convergence. Our main result is a proof, using only elementary results from Fourier analysis, of a necessary and sufficient condition for a sum modulo 1 to converge to the uniform distribution in $L^1([0,1])$. We also give a specific example to emphasize the different behavior possible when the random variables are not identically distributed. 
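Before turning to the formal statements, the phenomenon is easy to see by simulation. The following hedged Python sketch (the choice of uniform factors, the sample sizes, and all names are ours): for $X_m$ i.i.d. uniform on $(0,1)$, the leading digit of $X_1\cdots X_M$ is determined by $\log_{10}|X_1|+\cdots+\log_{10}|X_M| \bmod 1$, and its distribution is already very close to Benford's law for moderate $M$.

```python
import numpy as np

rng = np.random.default_rng(0)
M, S = 20, 50_000                      # factors per product, number of samples

# log10 |X_1 ... X_M| mod 1, with X_m i.i.d. uniform on (0,1); working with
# logarithms avoids overflow/underflow in the raw products.
logs = np.log10(rng.random((S, M))).sum(axis=1) % 1.0
digits = np.floor(10.0 ** logs).astype(int)          # leading digit, 1..9

freqs = np.array([(digits == d).mean() for d in range(1, 10)])
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
max_dev = np.abs(freqs - benford).max()
```

The observed deviation from Benford's law is at the level of sampling noise; the distributional error itself decays geometrically in $M$, as quantified by the Fourier-coefficient products below.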
We let $\widehat{g_{m}}(n)$ denote the $n$\textsuperscript{th} Fourier coefficient of a probability density $g_m$ on $[0,1]$: \be \widehat{g_{m}}(n) \ = \ \int_0^1 g_m(x) e^{-2\pi i n x} dx.\ee \begin{thm}[The Modulo $1$ Central Limit Theorem for Independent Continuous Random Variables]\label{thm:mainL1} Let $\{Y_m\}$ be independent continuous random variables on $[0,1)$, not necessarily identically distributed, with densities $\{g_m\}$. A necessary and sufficient condition for the sum $Y_1 + \cdots + Y_M$ modulo $1$ to converge to the uniform distribution as $M\to\infty$ in $L^1([0,1])$ is that for each $n\neq 0$ we have $\lim_{M\to\infty} \widehat{g_1}(n)\cdots \widehat{g_M}(n) = 0$. \end{thm} As Benford's Law is equivalent to the associated base $B$ logarithm being equidistributed modulo $1$ (see \cite{Dia}), from Theorem \ref{thm:mainL1} we immediately obtain the following result on the distribution of digits of a product. \begin{thm}\label{thm:mainbenfL1} Let $X_1, \dots, X_M$ be independent continuous random variables, and let $g_{B,m}$ be the density of $\log_B M_B(|X_m|)$. A necessary and sufficient condition for the distribution of the digits of $X_1 \cdots X_M$ to converge to Benford's Law (base $B$) as $M\to\infty$ in $L^1([0,1])$ is for each $n\neq 0$ that $\lim_{M\to\infty} \widehat{g_{B,1}}(n)\cdots \widehat{g_{B,M}}(n) = 0$. \end{thm} As other authors have noticed, the importance of results such as Theorem \ref{thm:mainbenfL1} is that they give an explanation of why so many data sets follow Benford's Law (or at least a close approximation to it). Specifically, if we can consider the observed values of a system to be the product of many independent processes with reasonable densities, then the distribution of the digits of the resulting product will be close to Benford's Law. We briefly compare our approach with other proofs of results such as Theorem \ref{thm:mainL1} (where the random variables are often taken as identically distributed). 
If the random variables are identically distributed with density $g$, our condition reduces to $|\widehat{g}(n)| < 1$ for $n \neq 0$. For a probability distribution, $|\widehat{g}(n)| = 1$ for $n \neq 0$ if and only if there exists $\alpha \in \mathbb{R}$ such that all the mass is contained in the set $\{\alpha, \alpha + \frac1n, \dots, \alpha + \frac{n-1}n\}$. (As we are assuming our random variables are continuous and not discrete, the corresponding densities are in $L^1([0,1])$ and this condition is not met; in Theorem \ref{thm:discmainL1} we discuss generalizations to discrete random variables.) In other words, the sum of identically distributed random variables modulo $1$ converges to the uniform distribution if and only if the support of the distribution is not contained in a coset of a finite subgroup of the circle group $[0,1)$. Interestingly, Levy \cite{Lev} proved this just one year after Benford's paper \cite{Ben}, though his paper does not study digits. Levy's result has been generalized to other compact groups, with estimates on the rate of convergence \cite{Bh}. Stromberg \cite{Str} proved that\footnote{The following formulation is taken almost verbatim from the first paragraph of \cite{Bh}.} \emph{the $n$-fold convolution of a regular probability measure on a compact Hausdorff group $G$ converges to the normalized Haar measure in the weak-star topology if and only if the support of the distribution is not contained in a coset of a proper normal closed subgroup of $G$.} Our arguments in the proof of Theorem \ref{thm:mainL1} may be generalized to independent discrete random variables, at the cost of replacing $L^1$-convergence with weak convergence. Below $\delta_{\alpha}(x)$ denotes a unit point mass at $\alpha$.
\begin{thm}[Modulo $1$ Central Limit Theorem for Certain Independent Discrete Random Variables]\label{thm:discmainL1} Let $\{Y_m\}$ be independent discrete random variables on $[0,1)$, not necessarily identically distributed, with densities \be g_m(x) \ = \ \sum_{k=1}^{r_m} w_{k,m} \delta_{\alpha_{k,m}}(x), \ \ \ w_{k,m} > 0, \ \ \ \sum_{k=1}^{r_m} w_{k,m} \ = \ 1. \ee Assume that there is a \emph{finite} set $A \subset [0,1)$ such that all $\alpha_{k,m} \in A$. A necessary and sufficient condition for the sum $Y_1 + \cdots + Y_M$ modulo $1$ to converge weakly to the uniform distribution as $M\to\infty$ is that for each $n\neq 0$ we have $\lim_{M\to\infty} \widehat{g_1}(n)\cdots \widehat{g_M}(n) = 0$. \end{thm} In \S\ref{sec:mainproof} we prove Theorem \ref{thm:mainL1} using only elementary facts from Fourier analysis, showing our condition is a consequence of Lebesgue's Theorem (on $L^1$-convergence of the Fej\'{e}r series) and a standard approximation argument. We give an example of distinct densities $\{f_i\}$ with the following properties: (1) for each $i$, if every $X_k$ is independently chosen with density $f_i$ then the sum converges to the uniform distribution; (2) if the $X_k$'s are independent but non-identical, with $X_k$ having distribution $f_k$, then the sum does not converge to the uniform distribution. This example illustrates the difference in behavior when the random variables are not identically distributed: to obtain uniform behavior for the sum it does not suffice for each random variable to satisfy Levy or Stromberg's condition (the distribution is not concentrated on a coset of a finite subgroup of $[0,1)$). 
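The coset obstruction is visible directly in the Fourier coefficients. A small Python check (the locations, weights, and names are our own illustration): a discrete distribution supported on $\{\frac18,\frac58\} = \frac18 + \{0,\frac12\}$, a coset of a finite subgroup of $[0,1)$, has $|\widehat{g}(2)|=1$, so the product $\widehat{g}(2)^M$ never tends to zero and sums of i.i.d. copies cannot equidistribute, even though $|\widehat{g}(1)|<1$:

```python
import cmath

def fourier_coeff(masses, n):
    """n-th Fourier coefficient of a discrete probability on [0,1);
    masses is a list of (location alpha_k, weight w_k) pairs."""
    return sum(w * cmath.exp(-2j * cmath.pi * n * a) for a, w in masses)

# mass on {1/8, 5/8} = 1/8 + {0, 1/2}, a coset of a finite subgroup of [0,1)
g = [(1 / 8, 0.3), (5 / 8, 0.7)]

c1 = abs(fourier_coeff(g, 1))   # = |0.3 - 0.7| = 0.4, strictly below 1
c2 = abs(fourier_coeff(g, 2))   # = 1: the obstruction to uniformity
```

Here $\widehat{g}(2) = e^{-4\pi i\alpha}\bigl(w + (1-w)e^{-2\pi i}\bigr) = e^{-4\pi i\alpha}$ has modulus $1$ regardless of the weights, exactly as the coset criterion predicts.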
We conclude in \S\ref{sec:discrete} by sketching the proof of Theorem \ref{thm:discmainL1}, and in Appendix \ref{sec:alternate} we comment on alternate techniques to prove results such as Theorem \ref{thm:mainbenfL1} (in particular, why our arguments are more general than applying the standard Central Limit Theorem to $\log_B|X_1| + \cdots +\log_B|X_M|$ to analyze the distribution of digits of $|X_1 \cdots X_M|$). \section{Analysis of Sums of Continuous Random Variables}\label{sec:mainproof} We recall some standard facts from Fourier analysis (see for example \cite{SS}). The convolution of two functions in $L^1([0,1])$ is \be (f \ast g)(x) \ = \ \int_0^1 f(y) g(x-y)dy \ = \ \int_0^1 f(x-y)g(y)dy. \ee Convolution is commutative and associative, and the $n$\textsuperscript{th} Fourier coefficient of a convolution is the product of the two $n$\textsuperscript{th} Fourier coefficients. Let $g_1$ and $g_2$ be two probability densities in $L^1([0,1])$. If $Z_i$ is a random variable on $[0,1)$ with density $g_i$, then the density of $Z_1 + Z_2 \bmod 1$ is the convolution of $g_1$ with $g_2$. \begin{defi}[Fej\'{e}r kernel, Fej\'{e}r series] Let $f \in L^1([0,1])$. The $N$\textsuperscript{{\rm th}} Fej\'{e}r kernel is \be F_N(x) \ = \ \sum_{n=-N}^N \left(1 - \frac{|n|}{N}\right) e^{2\pi i n x}, \ee and the $N$\textsuperscript{{\rm th}} Fej\'{e}r series of $f$ is \be T_Nf(x) \ = \ (f \ast F_N)(x) \ = \ \sum_{n=-N}^N \left(1 - \frac{|n|}{N}\right) \widehat{f}(n) e^{2\pi i n x}. \ee The Fej\'{e}r kernels are an approximation to the identity (they are non-negative, integrate to $1$, and for any $\delta \in (0,1/2)$ we have $\lim_{N\to\infty}\int_{\delta}^{1-\delta} F_N(x)dx = 0$).\end{defi} \begin{thm}[Lebesgue's Theorem]\label{thm:fejTnconverg} Let $f\in L^1([0,1])$. As $N\to\infty$, $T_N f$ converges to $f$ in $L^1([0,1])$. \end{thm} \begin{lem}\label{lem:fejconvolve} Let $f, g \in L^1([0,1])$. Then $T_N (f \ast g) = (T_N f) \ast g$.
\end{lem} \begin{proof} The proof follows immediately from the commutative and associative properties of convolution. \end{proof} We can now prove Theorem \ref{thm:mainL1}. \begin{proof}[Proof of Theorem \ref{thm:mainL1}] We first show our condition is sufficient. The density of the sum modulo $1$ is $h_M = g_1 \ast \cdots \ast g_M$. It suffices to show that, for any $\gep > 0$, \be \int_0^1 |h_M(x) - 1| dx \ < \ \gep \ee for all sufficiently large $M$. Using Lebesgue's Theorem (Theorem \ref{thm:fejTnconverg}), choose $N$ sufficiently large so that \be\label{eq:h1Tn} \int_0^1 |h_1(x) - T_N h_1(x)| dx \ < \ \frac{\gep}2. \ee While $N$ was chosen so that \eqref{eq:h1Tn} holds with $h_1$, in fact this $N$ works for \emph{all} $h_M$ (with the same $\gep$). This follows by induction. The base case is immediate (this is just our choice of $N$). Assume now that \eqref{eq:h1Tn} holds with $h_1$ replaced by $h_M$; we must show it holds with $h_1$ replaced by $h_{M+1} = h_M \ast g_{M+1}$. By Lemma \ref{lem:fejconvolve} we have \be T_N h_{M+1} \ = \ T_N (h_M \ast g_{M+1}) \ = \ (T_N h_M) \ast g_{M+1}. \ee This implies \bea & & \int_0^1 |h_{M+1}(x) - T_N h_{M+1}(x)| dx\nonumber\\ & &\ \ \ \ \ \ =\ \int_0^1 |(h_M \ast g_{M+1})(x) - (T_N h_M)\ast g_{M+1}(x)| dx \nonumber\\ & &\ \ \ \ \ \ =\ \int_0^1 \left| \int_0^1 \left(h_M(y) - T_N h_M(y)\right) \cdot g_{M+1}(x-y)\, dy \right| dx \nonumber\\ & &\ \ \ \ \ \ \le \ \int_0^1 \int_0^1 |h_M(y) - T_Nh_M(y)| \cdot g_{M+1}(x-y)dx dy \nonumber\\ & &\ \ \ \ \ \ =\ \int_0^1|h_M(y) - T_Nh_M(y)|dy \cdot 1 \ < \ \frac{\gep}2; \eea the interchange of integration above is justified by the absolute value being integrable in the product measure, and the $x$-integral is $1$ as $g_{M+1}$ is a probability density. To show $h_M$ converges to the uniform distribution in $L^1([0,1])$, we must show $\lim_{M\to\infty}\int_0^1 |h_M(x) - 1|dx = 0$. Let $N$ and $\gep$ be as above.
By the triangle inequality we have \bea \int_0^1 |h_M(x) - 1|dx & \ \le \ & \int_0^1 |h_M(x) - T_N h_M(x)|dx + \int_0^1 |T_Nh_M(x) - 1|dx. \nonumber\\ \eea From our choices of $N$ and $\gep$, $\int_0^1 |h_M(x) - T_N h_M(x)|dx < \gep/2$; thus we need only show $\int_0^1 |T_Nh_M(x) - 1|dx < \gep/2$ to complete the proof. As $\widehat{h_M}(0) = 1$, \bea \int_0^1 |T_Nh_M(x) - 1|dx & \ = \ & \int_0^1 \left|\sum_{n=-N \atop n \neq 0}^N \left(1 - \frac{|n|}{N}\right) \widehat{h_M}(n) e^{2\pi i n x}\right| dx \nonumber\\ & \le & \sum_{n=-N \atop n \neq 0}^N \left(1 - \frac{|n|}{N}\right) |\widehat{h_M}(n)|. \eea However, $\widehat{h_M}(n) = \widehat{g_1}(n) \cdots \widehat{g_M}(n)$, and by assumption tends to zero as $M\to\infty$ (as each $\widehat{g_m}(n)$ is at most $1$ in absolute value, for each $n$ the absolute value of the product is non-increasing in $M$). For fixed $N$ and $\gep$, we may choose $M$ sufficiently large so that $|\widehat{h_M}(n)| < \gep/4N$ whenever $n \neq 0$ and $|n| \le N$. Thus \be \int_0^1 |T_Nh_M(x) - 1|dx \ < \ 2N \cdot \frac{\gep}{4N} \ = \ \frac{\gep}{2}, \ee which implies \be \int_0^1 |h_M(x) - 1| dx \ < \ \gep\ee for $M$ sufficiently large. As $\gep$ is arbitrary, this completes the proof of the sufficiency; we now prove this condition is necessary. Assume for some $n_0 \neq 0$ that $\lim_{M\to\infty} |\widehat{h}_M(n_0)| \neq 0$ (where as always $h_M = g_1 \ast \cdots \ast g_M$). As the $g_m$ are probability densities, $|\widehat{g}_m(n)| \le 1$; thus the sequence $\{|\widehat{h}_M(n)|\}_{M=1}^\infty$ is non-increasing for each $n$, and hence by assumption its limit at $n = n_0$ is some number $c_{n_0} \in (0,1]$. Let $E_M(x) = h_M(x) - 1$; note $\widehat{E}_M(n) = \widehat{h}_M(n)$ for $n\neq 0$. To show $h_M$ does not converge to the uniform distribution on $[0,1]$, it suffices to show that $E_M$ does not converge to the zero function in $L^1([0,1])$. Let $n_0$ be as above.
We have \bea \left|\widehat{h}_M(n_0)\right| \ = \ \left|\widehat{E}_M(n_0)\right| \ = \ \left|\int_0^1 E_M(x) e^{2\pi i n_0 x} dx \right| \ \ge \ c_{n_0} \ > \ 0. \eea As $c_{n_0} \le |\int_0^1 E_M(x) e^{2\pi i n_0 x} dx| \le \int_0^1 |E_M(x)| dx$, and this last integral is at most the sum of the following four integrals, at least one of them is at least $c_{n_0}/4$: \bea \int_{x \in [0,1] \atop {\rm Re}\left(E_M(x)\right) \ge 0} {\rm Re}\left(E_M(x)\right)dx, \ \ & & \ \ \int_{x \in [0,1] \atop {\rm Re}\left(E_M(x)\right) \le 0} {\rm Re}\left(-E_M(x)\right)dx \nonumber\\ \int_{x \in [0,1] \atop {\rm Im}\left(E_M(x)\right) \ge 0} {\rm Im}\left(E_M(x)\right)dx, \ \ & & \ \ \int_{x \in [0,1] \atop {\rm Im}\left(E_M(x)\right) \le 0} {\rm Im}\left(-E_M(x)\right)dx, \eea and $E_M$ cannot converge to the zero function in $L^1([0,1])$; further, we obtain an estimate on the $L^1$-distance between the uniform distribution and $h_M$. \end{proof} The behavior is non-Benford if the conditions of Theorem \ref{thm:mainbenfL1} are violated. It is enough to show that we can find a sequence of densities $g_{B,m}$ such that $\lim_{M\to\infty} \prod_{m=1}^M\widehat{g_{B,m}}(1) \neq 0$. We are reduced to searching for an infinite product that is non-zero; we also need each term to be at most $1$ in absolute value, as the Fourier coefficients of a probability density are dominated by $1$. A standard example is $\prod_m c_m$, where $c_m = \frac{m^2+2m}{(m+1)^2}$; the product telescopes, $\prod_{m=1}^M c_m = \frac{M+2}{2(M+1)}$, so its limit is $1/2$. Thus as long as $|\widehat{g_{B,m}}(1)| \ge \frac{m^2+2m}{(m+1)^2}$, the conclusion of Theorem \ref{thm:mainbenfL1} will not hold for the products of the associated random variables; analogous reasoning yields a sum of independent random variables modulo $1$ which does not converge to the uniform distribution. \begin{exa}[Non-Benford Behavior of Products]\label{exa:nonbenf} Consider \be \twocase{\phi_{m} \ = \ }{m}{{\rm if} $|x - \frac18| \le \frac1{2m}$}{0}{{\rm otherwise;}} \ee $\phi_{m}$ is non-negative and integrates to $1$.
As $m \to \infty$ we have $|\widehat{\phi_{m}}(1)| \to 1$ because the density becomes concentrated at $1/8$ (direct calculation gives $\widehat{\phi_m}(1) = e^{-2\pi i/8} + O(m^{-2})$). Let $X_1, \dots, X_M$ be independent random variables where the associated densities $g_{B,m}$ of $\log_B M(|X_m|)$ are $\phi_{11^m}$. The behavior is non-Benford (see Figure \ref{fig:counterex}). Note, however, that if each $X_m$ had the common distribution $\phi_i$ for any fixed $i$, then in the limit the product would satisfy Benford's law. \begin{figure} \begin{center} \scalebox{.75}{\includegraphics{1000CounterEx1000.eps}} \caption{\label{fig:counterex} Distribution of digits (base 10) of 1000 products $X_1 \cdots X_{1000}$, where $g_{10,m} = \phi_{11^m}$.} \end{center}\end{figure} \end{exa} \begin{rek} Generalizations of Theorem \ref{thm:mainL1} hold for more general sums of random variables. Instead of $Y_1 + \cdots + Y_M$ we may study $\eta_1 Y_1 + \cdots + \eta_M Y_M$, where each $\eta_m$ is a random variable taking values in $\{-1,1\}$; the proof follows from the observation that if $Y_m$ has density $g_m(y)$ then $-Y_m$ has density $g_m(1-y)$ modulo $1$. \end{rek} \section{Analysis of Sums of Discrete Random Variables}\label{sec:discrete} Many results from Fourier analysis do not apply if the random variables are discrete; Lebesgue's Theorem cannot be correct for a point mass, as the density is concentrated on a set of measure zero. Let $\delta_\alpha(x)$ be a unit point mass\footnote{Thus $\delta_\alpha(x)$ is a Dirac delta functional; if $\phi(x)$ is a Schwartz function then $\int_0^1 \delta_\alpha(x)\phi(x)dx$ is defined to be $\phi(\alpha)$.} at $\alpha$.
Its Fourier coefficients are $\widehat{\delta_\alpha}(n) = e^{-2\pi i n \alpha}$, and simple algebra shows that its Fej\'{e}r series is \be F_N \delta_\alpha(x) \ = \ \frac{e^{-2\pi i(N-1)(x-\alpha)}(e^{2\pi i N(x-\alpha)}-1)^2}{(e^{2\pi i(x-\alpha)}-1)^2\ N}.\ee For $x\neq \alpha$, $\lim_{N\to\infty} F_N \delta_\alpha(x)$ $=$ $\delta_\alpha(x)$ $=$ $0$; moreover, for $x$ near $\alpha$ we have $|F_N \delta_\alpha(x)| \sim N$. Instead of convergence in $L^1([0,1])$ we have weak convergence: for any Schwartz function $\phi$, \be \lim_{N\to\infty} \int_0^1 F_N \delta_\alpha(x) \phi(x)dx \ = \ \int_0^1 \delta_\alpha(x) \phi(x)dx \ = \ \phi(\alpha). \ee \begin{proof}[Sketch of the proof of Theorem \ref{thm:discmainL1}] We argue as in Theorem \ref{thm:mainL1}. Note Lemma \ref{lem:fejconvolve} holds if $f$ and $g$ are sums of point masses. Instead of using Lebesgue's Theorem, we use weak convergence: given an $\gep>0$ and a Schwartz function $\phi(x)$, by weak convergence there is an $N$ such that \be\label{eq:genh1Tn} \left| \int_0^1 \left(h_1(x) - T_Nh_1(x)\right) \phi(x)dx \right| \ < \ \frac{\gep}2. \ee This is the generalization of \eqref{eq:h1Tn}. Further, we may assume \eqref{eq:genh1Tn} holds with $\phi(x)$ replaced with $\phi_{\alpha_{k,m}}(x) = \phi(x+\alpha_{k,m})$ for any $\alpha_{k,m} \in A$. \emph{This is only true because $A$ is finite}; while $N=N(\phi)$ depends on $\phi$, as there are only finitely many test functions $\phi_{\alpha_{k,m}}$ we may take $N = \max N(\phi_{\alpha_{k,m}})$. A similar analysis as before shows \eqref{eq:genh1Tn} also holds with $h_1$ replaced by $h_M$.
The key step in the induction is \bea & & \int_0^1 \int_0^1 \left(h_M(y) - T_N h_M(y)\right) g_{M+1}(x-y)\phi(x)\, dx\, dy \nonumber\\ & & \ \ = \ \int_0^1 \left(h_M(y) - T_N h_M(y)\right) \sum_{k=1}^{r_{M+1}} w_{k,M+1} \phi(y+\alpha_{k,M+1})dy \nonumber\\ & & \ \ = \ \sum_{k=1}^{r_{M+1}} w_{k,M+1} \int_0^1 \left(h_M(y) - T_N h_M(y)\right) \phi_{\alpha_{k,M+1}}(y)dy, \eea which, as the $w_{k,M+1}$ sum to $1$, is less than $\gep/2$ in absolute value. Arguing as in the proof of Theorem \ref{thm:mainL1} completes the proof. \end{proof}
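The mechanism behind both proofs (the Fourier coefficients of a convolution modulo $1$ are the products of the coefficients, so repeated convolution flattens the density) is easy to probe numerically. The following is a minimal sketch of ours, not from the paper: a density sampled on a uniform grid of $[0,1)$, circular convolution via the FFT, and the $L^1$-distance to the uniform density; the grid size and the choice of $g$ (uniform on $[0,1/2)$) are arbitrary.

```python
import numpy as np

def conv_mod1(f, g):
    # Circular convolution of two densities sampled on a uniform grid of [0,1):
    # (f*g)(x) = int_0^1 f(y) g(x - y mod 1) dy, approximated by a Riemann sum.
    n = len(f)
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))) / n

n = 1 << 12                        # grid points on [0,1)
x = np.arange(n) / n
g = np.where(x < 0.5, 2.0, 0.0)    # density of a uniform variable on [0, 1/2)

h = g.copy()                       # h_M = M-fold convolution g * ... * g (mod 1)
l1 = []
for M in range(1, 13):
    l1.append(np.mean(np.abs(h - 1.0)))   # L^1 distance to the uniform density
    h = conv_mod1(h, g)

# Convolving with a probability density contracts the L^1 distance to 1,
# and |hat g(n)| < 1 for n != 0 forces that distance to 0.
assert all(l1[i + 1] <= l1[i] + 1e-12 for i in range(len(l1) - 1))
assert l1[0] > 0.9 and l1[-1] < 0.02
```

Since $|\widehat{g}(n)| = 2/(\pi|n|)$ for odd $n$ here, the distance decays geometrically, matching the rate predicted by the Fourier-coefficient bound.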
https://arxiv.org/abs/0802.3874
On low rank perturbation of matrices
The article is devoted to different aspects of the question "What can be done with a matrix by low rank perturbation?" It is proved that one can change a geometrically simple spectrum drastically by a rank 1 perturbation, but the situation is quite different if one restricts oneself to normal matrices. Also, the Jordan normal form of a perturbed matrix is considered. It is proved that, with respect to rank as a distance, all almost unitary matrices are near unitary ones.
\section{Introduction} The article is devoted to different aspects of the question: "What can be done with a complex-valued matrix by a low rank perturbation?"\footnote{The authors would like to thank V.S.Savchenko, J. Moro and F. Dopico for useful comments and references. The work was partially supported by CONACyT grant SEP-25750, and PROMEP grant UASLP-CA-21.} From the works of Thompson \cite{thompson} we know how the Jordan normal form can be changed by a rank $k$ perturbation, see Theorem~\ref{th3}. In particular, it follows that one can do everything with a geometrically simple spectrum by a rank $1$ perturbation, see Corollary~\ref{th1}. But the situation is quite different if one restricts oneself to normal matrices, see Theorem~\ref{th4} and Corollary~\ref{corol_Norm1}. We think that Corollary~\ref{corol_Norm1} may be considered as a finite-dimensional analogue of the conservation of the continuous spectrum under compact perturbations in Hilbert spaces. For unitary and self-adjoint matrices the inequality of Corollary~\ref{corol_Norm1} is the only restriction on "what can be done with a spectrum by a rank $k$ perturbation", see Theorem~\ref{Th_un_ad}. We do not know whether there is an analogue of Theorem~\ref{Th_un_ad} for normal matrices. It is worth mentioning that Corollary~\ref{corol_Norm1} for self-adjoint matrices follows from the Cauchy interlacing theorem \cite{Cauchy}. Theorem~\ref{Th_un_ad} is related to the converse Cauchy interlacing theorem \cite{Fan}. The spectrum of $H_1+H_2$ for known spectra of self-adjoint matrices $H_1$ and $H_2$ has been studied extensively, see \cite{Klyachko} and the bibliography therein. Although the complete set of restrictions on the spectrum of $H_1+H_2$ is known in this situation, we are not sure that there is an easy proof of Theorem~\ref{Th_un_ad} using the results of \cite{Klyachko}.
Although Theorem~\ref{th3} should be known (see, for example, \cite{SSV2}, where Theorem~\ref{th3} is formulated in one direction), we will give a proof here, mainly because our proof fits into a general framework, which is also used in the proof of Theorem~\ref{Th_un_ad}. Let us describe the framework. We equip the set ${\bf C}_{n\times n}$ of all complex $n\times n$-matrices (the set of self-adjoint matrices) with the arithmetic distance $d(A,B)=\mathop{rank}(A-B)$ (see \cite{Che}). The arithmetic distance is geodesic in these cases. Spectral data of matrices, such as Weyr characteristics and spectra (multisets), may also be considered as metric spaces with distances related to the arithmetic distance on matrices, see Section~\ref{sec_dist}. These distances also turn out to be geodesic. Then we prove Theorem~\ref{th3} (Theorem~\ref{Th_un_ad}) for $\mathop{rank}(A-B)=1$, and the general results will follow from Proposition~\ref{prop_geod}. \begin{proposition}\label{prop_geod} Let $X$ and $Y$ be geodesic metric spaces, let $O^X_n(x)$ denote the closed ball of radius $n$ around $x$ in $X$, let $\phi:X\to Y$ be such that $\phi(O^X_1(x))=O^Y_1(\phi(x))$ for all $x\in X$. Then $\phi(O^X_n(x))=O^Y_n(\phi(x))$ for any $n\in {\bf N}$ and $x\in X$. \end{proposition} \begin{proof} The proof is by induction. For $n=1$ there is nothing to prove. Step $n\to n+1$: It follows that $O^X_{n+1}(x)=\bigcup\limits_{z\in O^X_{n}(x)}O^X_1(z)$ ($X$ is geodesic), then $$ \phi(O^X_{n+1}(x))=\bigcup\limits_{z\in O^X_{n}(x)}\phi(O^X_1(z))= \bigcup\limits_{z\in \phi(O^X_{n}(x))}O^Y_1(z)=\bigcup\limits_{z\in O^Y_{n}(\phi(x))}O^Y_1(z)= O^Y_{n+1}(\phi(x)). $$ \end{proof} In Section~\ref{sec_almost} and Section~\ref{sec_com} we use the normalized arithmetic distance $d_r(A,B)=\frac{\mathop{rank}(A-B)}{n}$, where $n$ is the size of the matrices. We are interested in questions of the following type: "Suppose that some matrices almost satisfy certain equations (in the sense of $d_r(\cdot,\cdot)$).
Do there exist matrices close to them that satisfy the equations exactly (uniformly with respect to $n$)?" We manage to answer only the following: close to an almost unitary (self-adjoint) matrix there exists a unitary (self-adjoint) matrix. We do not know if the same is true for normal matrices. (This question has an affirmative answer for the norm distance $dn(A,B)=\|A-B\|$, see \cite{Lin}. It is equivalent to the following: "close to any pair of almost commuting self-adjoint matrices there exists a pair of commuting self-adjoint matrices" (with respect to the distance $dn(\cdot,\cdot)$). It is interesting that there are almost commuting (with respect to $dn(\cdot,\cdot)$) matrices, close to which there are no commuting matrices, \cite{Choi,RuyTerry,Dan}.) A similar question has been studied for operators in Hilbert spaces (Calkin algebras, \cite{Farah}). In Hilbert spaces an operator $a$ is called essentially normal iff $aa^*-a^*a$ is a compact operator. In contrast with Theorem~\ref{th6}, there exists an essentially unitary operator which is not a compact perturbation of a unitary operator (just take the infinite Jordan cell with eigenvalue $0$). There is a complete characterization of compact perturbations of normal operators, see \cite{Farah} and the bibliography therein. Let us return to almost commuting matrices with respect to the normalized arithmetic distance $d_r$. In Section~\ref{sec_com} we show that for any $A\in {\bf C}_{n\times n}$ with simple spectrum there exists an almost commuting matrix which is far from every matrix commuting with $A$. A similar problem for pairs of almost commuting matrices is, as far as we know, open. Precisely, whether for any $\epsilon>0$ there exists $\delta>0$ such that for any $A,B$ with $d_r(AB,BA)<\epsilon$ there exists a pair $(\tilde A,\tilde B)$ satisfying $\tilde A\tilde B=\tilde B\tilde A$ and $d_r(\tilde A,A),d_r(\tilde B,B)<\delta$ ($\delta$ does not depend on the size of the matrices).
We think that low rank perturbations of matrices may be related to sofic groups. The following question seems to be interesting from this point of view (although it seems to go beyond the scope of the present article). One can show that all solutions of the equation $C^{-1}A^{-1}CAC^{-1}AC=A^2$ in finite unitary matrices are trivial in $A$ ($A=E$). On the other hand, it is true that for any $\epsilon >0$ there exist $A,C$ with $d_r(A,E)=d_r(C,E)=1$ and $d_r(C^{-1}A^{-1}CAC^{-1}AC,A^2)<\epsilon$. Is the above assertion still true with the additional requirement $C^4=1$? If not, it gives an example of a non-sofic group. Note. All linear spaces are supposed to be finite dimensional in the rest of the article. ${\bf C}_{n\times n}$ will denote the set of all complex $n\times n$-matrices, ${\bf N}=\{0,1,2,...\}$. \section{Some discrete geodesic spaces.} \label{sec_dist} \subsection{Arithmetic distance on $C_{n\times n}$} \begin{lemma}\label{lm_arith} The arithmetic distance $\mathop{rank}(A-B)$ is geodesic on \begin{itemize} \item the set of all $n\times n$ matrices; \item the set of all self-adjoint $n\times n$ matrices; \item the set of all unitary $n\times n$ matrices. \end{itemize} \end{lemma} \begin{proof} It is clear that a rank $k$ matrix (self-adjoint matrix) may be represented as a sum of $k$ matrices (self-adjoint matrices) of rank $1$. The first two items follow from the fact that the set of matrices (self-adjoint matrices) is closed under addition. For unitary matrices: let $\mathop{rank}(U_1-U_2)=k$, or, equivalently, $\mathop{rank}(E-U_1^{-1}U_2)=k$. It means that, in a proper basis, $U_1^{-1}U_2=\mathop{diag}(\lambda_1,\lambda_2,...,\lambda_k,1,1,...,1)$. Now the sequence $U_1,\;U_1\cdot\mathop{diag}(\lambda_1,1,1,...,1),\;U_1\cdot \mathop{diag}(\lambda_1,\lambda_2,1,1,...,1),\ldots, U_1\cdot\mathop{diag}(\lambda_1,\lambda_2,...,\lambda_k,1,1,...,1)=U_2$ gives us the required geodesic.
\end{proof} \begin{remark} The methods used in the above proof do not apply to normal matrices -- the set of normal matrices is closed under neither addition nor multiplication. In fact, an example from \cite{Fan} hints that the arithmetic distance might not be geodesic on the set of normal matrices. \end{remark} \begin{proposition}\label{prop_mob1} Let $\phi(x)=(ax+b)^{-1}(cx+d)$ be a M\"obius transformation of ${\bf C}_{n\times n}$, defined on $A$, $B$. Then $\mathop{rank}(A-B)=\mathop{rank}(\phi(A)-\phi(B))$. \end{proposition} \begin{proof} A M\"obius transformation is a composition of linear transformations $A\to aA+b$ ($a,b\in {\bf C}$) and taking the inverse $A\to A^{-1}$. These transformations (where defined) clearly preserve the arithmetic distance; for example, $\mathop{rank}(A^{-1}-B^{-1})=\mathop{rank}(A^{-1}(B-A)B^{-1})=\mathop{rank}(A-B)$ ($A^{-1}$ and $B^{-1}$ are of full rank). \end{proof} \subsection{Distance on the spaces of the Weyr characteristics.} Having in mind the Weyr characteristics of complex matrices (see below), we introduce the spaces $\Im_n$ of Weyr characteristics, where $\Im_n$ is the space of functions ${\bf Z}^+\times{\bf C}\to {\bf N},\; (i,\lambda)\to\eta_i(\lambda)$ such that \begin{itemize} \item $\eta_i(\lambda)\neq 0$ for finitely many $(i,\lambda)$ only, and $\sum\limits_{\lambda\in{\bf C}}\sum\limits_{i\in{\bf N}}\eta_i(\lambda)=n$; \item $\eta_i(\lambda)\geq \eta_{i+1}(\lambda)$. \end{itemize} On $\Im_n$ define a metric $d(\eta,\mu)=\max\limits_{(i,\lambda)}\{|\eta_i(\lambda)-\mu_i(\lambda)|\}$. First of all let us note that $d(\cdot,\cdot)$ is indeed a metric. Trivially, $d(\eta,\mu)=0$ implies $\eta=\mu$, and $d(\cdot,\cdot)$ satisfies the triangle inequality, being a supremum (maximum) of semimetrics. It is clear that $d(\mu,\nu)$ is also well defined for $\mu$ and $\nu$ in different spaces of Weyr characteristics (for different $n$).
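As a toy illustration (ours, not from the paper), the metric $d$ on $\Im_n$ can be computed directly from the multiplicity functions, with absent pairs $(i,\lambda)$ treated as $0$. For instance, a single $4\times 4$ Jordan cell with eigenvalue $0$ has $\eta_1=\eta_2=\eta_3=\eta_4=1$, while the zero $4\times 4$ matrix has $\eta_1=4$; the distance between these two characteristics is $3$, matching the rank of the difference of the two matrices.

```python
def weyr_metric(eta, mu):
    # d(eta, mu) = max over (i, lambda) of |eta_i(lambda) - mu_i(lambda)|.
    # A characteristic is stored as a dict mapping (i, lambda) -> eta_i(lambda);
    # missing keys mean eta_i(lambda) = 0.
    keys = set(eta) | set(mu)
    return max((abs(eta.get(k, 0) - mu.get(k, 0)) for k in keys), default=0)

# One 4x4 Jordan cell J_4(0) versus the zero 4x4 matrix.
eta = {(1, 0): 1, (2, 0): 1, (3, 0): 1, (4, 0): 1}   # eta_i(0) = 1, i = 1..4
mu = {(1, 0): 4}                                     # eta_1(0) = 4

assert weyr_metric(eta, mu) == 3
assert weyr_metric(eta, eta) == 0
```

The value $3$ agrees with Theorem~\ref{th3} below: passing from $J_4(0)$ to the zero matrix requires a perturbation of rank exactly $3$.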
We will need the following \begin{proposition}\label{prop_weyr} Let $\mu\in\Im_m$ and $n>m$. Then there exists $\nu\in\Im_n$ such that for any $\eta\in\Im_n$ the inequality $d(\mu,\eta)\geq d(\nu,\eta)$ holds. \end{proposition} \begin{proof} We can proceed as follows. Let $\mu_i(\lambda_0)\neq 0$ and $\mu_{i+1}(\lambda_0)=0$. We can take $\nu_{i+1}(\lambda_0)=\nu_{i+2}(\lambda_0)=...=\nu_{i+n-m}(\lambda_0)=1$ and $\nu_j(\lambda)=\mu_j(\lambda)$ for all other pairs $(j,\lambda)$. \end{proof} \begin{proposition} $\Im_n$ are geodesic metric spaces. \end{proposition} \begin{proof} Let $\eta,\mu\in \Im_n$ with $d(\eta,\mu)=k>1$. It is enough to find $\nu\in\Im_n$ such that either $d(\eta,\nu)=1$ and $d(\nu,\mu)=k-1$, or $d(\eta,\nu)=k-1$ and $d(\nu,\mu)=1$; moreover, by Proposition~\ref{prop_weyr} it is enough to find $\nu\in\Im_m$ for some $m\leq n$. Let $S_+=\{(j,\lambda)\in {\bf Z}^+\times{\bf C}\;|\;\eta_j(\lambda)-\mu_j(\lambda)=k\}$ and $S_-=\{(j,\lambda)\in {\bf Z}^+\times{\bf C}\;|\;\eta_j(\lambda)-\mu_j(\lambda)=-k\}$. Suppose that $|S_+|\geq |S_-|$ (if not, we can interchange $\eta\leftrightarrow\mu$). Now let $$ \nu_i(\lambda)=\left\{\begin{array}{lll} \eta_i(\lambda)-1 & \mbox{if} & (i,\lambda)\in S_+\\ \eta_i(\lambda)+1 & \mbox{if} & (i,\lambda)\in S_-\\ \eta_i(\lambda) & \mbox{if} & (i,\lambda)\not\in S_+\cup S_- \end{array}\right. $$ We have to show that $\nu\in\Im_m$ for $m=n-|S_+|+|S_-|$. It is enough to show that $\nu_{j+1}(\lambda)\leq \nu_j(\lambda)$. Suppose, on the contrary, that $\nu_{j+1}(\lambda)> \nu_j(\lambda)$. There are three possibilities: \begin{description} \item[a)] $\eta_{j+1}(\lambda)=\eta_j(\lambda)$, $(j,\lambda)\in S_+$ and $(j+1,\lambda)\not\in S_+$, but then $k>\eta_{j+1}(\lambda)-\mu_{j+1}(\lambda)\geq \eta_{j+1}(\lambda)-\mu_j(\lambda)= \eta_j(\lambda)-\mu_j(\lambda)=k$, a contradiction.
\item[b)] $\eta_{j+1}(\lambda)=\eta_j(\lambda)$, $(j+1,\lambda)\in S_-$ and $(j,\lambda)\not\in S_-$, but then $-k<\eta_{j}(\lambda)-\mu_{j}(\lambda)\leq \eta_{j}(\lambda)-\mu_{j+1}(\lambda)= \eta_{j+1}(\lambda)-\mu_{j+1}(\lambda)=-k$, a contradiction. \item[c)] $\eta_{j+1}(\lambda)=\eta_j(\lambda)-1$, $(j,\lambda)\in S_+$ and $(j+1,\lambda)\in S_-$, but then $-k=\eta_{j+1}(\lambda)-\mu_{j+1}(\lambda)\geq \eta_{j+1}(\lambda)-\mu_j(\lambda)= \eta_j(\lambda)-\mu_j(\lambda)-1=k-1$, so $-k\geq k-1$ and $1/2\geq k$, a contradiction with $k>1$. \end{description} Now, by construction, $d(\nu,\eta)=1$ and $d(\nu,\mu)=k-1$. \end{proof} \subsection{Distances $\mathop{dc}$ and $\tilde\mathop{dc}$ on finite multisets of complex numbers.} Multisets and operations. The language of multisets is very convenient for dealing with spectra. We will need only finite multisets. For a multiset ${\cal A}$ let $\mathop{set}({\cal A})$ denote the set of elements of ${\cal A}$, forgetting multiplicity. It is clear that a multiset may be considered as its multiplicity function $\chi_{\cal A}:\mathop{set}({\cal A})\to{\bf N}$; for any $x\not\in{\cal A}$ we will suppose $\chi_{\cal A}(x)=0$. (For all cases considered here, $\mathop{set}({\cal A})\subset{\bf C}$, so we can consider $\chi_{\cal A}:{\bf C}\to{\bf N}=\{0,1,2...\}$.) As far as the authors are aware, there are several generalizations of the set-theoretical operations to multisets. We will need the following operations: \begin{itemize} \item Difference of two multisets ${\cal A}\setminus{\cal B}$, $\chi_{{\cal A}\setminus{\cal B}}(x)=\max\{0,\chi_{\cal A}(x)-\chi_{\cal B}(x)\}$. \item Intersection ${\cal A}\cap X$ of a set $X$ and a multiset ${\cal A}$, $$ \chi_{{\cal A}\cap X}(x)=\left\{\begin{array}{lll} \chi_{\cal A}(x),&\mbox{if} & x\in X \\ 0,&\mbox{if} & x\not\in X \end{array}\right. $$ \item Union ${\cal A}\uplus{\cal B}$, $\chi_{{\cal A}\uplus{\cal B}}(x)=\chi_{\cal A}(x)+\chi_{\cal B}(x)$.
\end{itemize} Let $S(a,r)=\{x\in{\bf C}\;: |x-a|\leq r\}$ and ${\cal S}=\{S(a,r)\;|\;a\in{\bf C},\;r\in {\bf R}^+\}$ be the set of circles. For ${\cal A},{\cal B}\subset_M{\bf C}$ let $\mathop{dc}({\cal A},{\cal B})=\max\limits_{S\in{\cal S}}\{||{\cal A}\cap S|-|{\cal B}\cap S||\}$. Let us extend ${\cal S}$ to $\tilde {\cal S}$, which also includes complements of circles and half-planes: $\tilde {\cal S}={\cal S}\cup \{\{x\in{\bf C}\;:\;|x-a|\geq r\}\;|\;a\in{\bf C},\;r\in{\bf R}^+\}\cup \{ \{x\in{\bf C}\;:\;\mathop{Im}(\frac{x-b}{a})\geq 0\}\;|\;a,b\in{\bf C}\}$; and introduce a new metric $\tilde\mathop{dc}$: $\tilde\mathop{dc}({\cal A},{\cal B})=\max\limits_{S\in\tilde{\cal S}}\{||{\cal A}\cap S|-|{\cal B}\cap S||\}$. \begin{proposition} \begin{itemize} \item $\mathop{dc}$ and $\tilde\mathop{dc}$ are metrics on the finite multisubsets of ${\bf C}$. \item $\mathop{dc}({\cal A},{\cal B})=\mathop{dc}({\cal A}\setminus{\cal B},{\cal B}\setminus{\cal A})$, $\tilde\mathop{dc}({\cal A},{\cal B})=\tilde\mathop{dc}({\cal A}\setminus{\cal B},{\cal B}\setminus{\cal A})$. \item If $|{\cal A}|=|{\cal B}|$, then $\tilde\mathop{dc}({\cal A},{\cal B})=\mathop{dc}({\cal A},{\cal B})$. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item The same as for the spaces of Weyr characteristics. \item Let $\sum_S({\cal A},{\cal B})=|{\cal A}\cap S|-|{\cal B}\cap S|= \sum\limits_{x\in S}(\chi_{\cal A}(x)-\chi_{\cal B}(x))$. Then $\sum_S({\cal A}\setminus{\cal B},{\cal B}\setminus{\cal A})= \sum\limits_{x\in S}(\max\{0,\chi_{\cal A}(x)-\chi_{\cal B}(x)\}-\max\{0,\chi_{\cal B}(x)-\chi_{\cal A}(x)\})= \sum\limits_{x\in S}(\chi_{\cal A}(x)-\chi_{\cal B}(x))$. Now the item follows from the definition of $\mathop{dc}$ ($\tilde\mathop{dc}$). \item First of all, since ${\cal A}$ and ${\cal B}$ are finite multisets, for any half-plane $p$ we can find a circle $c$ such that $\sum_p({\cal A},{\cal B})= \sum_c({\cal A},{\cal B})$.
Also for any closed circle $c_c$ there exists an open circle $c_o$ such that $\sum_{c_c}({\cal A},{\cal B})= \sum_{c_o}({\cal A},{\cal B})$. Now, under the assumption of the item, $\sum_S({\cal A},{\cal B})=-\sum_{{\bf C}\setminus S}({\cal A},{\cal B})$ and the result follows. \end{itemize} \end{proof} \begin{proposition}\label{prop_mob} Let $\phi(x)=\frac{ax+b}{cx+d}$ be a M\"obius transformation of ${\bf C}$, defined on $\mathop{set}({\cal A})\cup\mathop{set}({\cal B})$. Then $\tilde\mathop{dc}({\cal A},{\cal B})=\tilde\mathop{dc}(\phi({\cal A}),\phi({\cal B}))$. \end{proposition} \begin{proof} A M\"obius transformation defines a bijection of $\tilde{\cal S}$. \end{proof} We do not know whether the metric $\mathop{dc}$ is geodesic on the multisets with fixed cardinality, but its restriction to any circle or line is: \begin{proposition}\label{lm_geod1} Let $l\subset{\bf C}$ be a circumference or a straight line. Let ${\cal A}, {\cal B}\subset_M l$, $|{\cal A}|=|{\cal B}|=n$ and $\tilde\mathop{dc}({\cal A},{\cal B})=k\geq 2$. Then there exists ${\cal C}\subset_M l$, $|{\cal C}|=n$, such that $\tilde\mathop{dc}({\cal A},{\cal C})=1$ and $\tilde\mathop{dc}({\cal C},{\cal B})=k-1$. \end{proposition} \begin{proof} By Proposition~\ref{prop_mob}, it is enough to prove it for the unit circle. Let us start with the case when $\mathop{set}({\cal A})\cap\mathop{set}({\cal B})=\emptyset$. Let $\Gamma=\mathop{set}({\cal A})\cup\mathop{set}({\cal B})\subset C^1$. Let $|\Gamma|=r$. We order $\Gamma=\{\gamma_0,\gamma_1,...,\gamma_{r-1}\}$ cyclically anticlockwise, indexing by elements of ${\bf Z}_r$.
To construct $\cal C$ we move each element of ${\cal A}$ to the next element in $\Gamma$; precisely, $\mathop{set}({\cal C})\subseteq \Gamma$ and $$ \chi_{\cal C} (\gamma_i)=\max\{0,\chi_{\cal A}(\gamma_i)-1\}+\chi_{\mathop{set}({\cal A})}(\gamma_{i-1}), $$ in other words $$ \chi_{\cal C} (\gamma_i)=\left\{\begin{array}{lll} \chi_{\cal A}(\gamma_i)-1 &\mbox{if} & \gamma_i\in\mathop{set}({\cal A})\;\mbox{and}\;\gamma_{i-1}\not\in\mathop{set}({\cal A}) \\ 1 &\mbox{if} & \gamma_i\not\in\mathop{set}({\cal A})\;\mbox{and}\; \gamma_{i-1}\in\mathop{set}({\cal A}) \\ \chi_{\cal A}(\gamma_i) & \mbox{for} & \mbox{the other cases} \end{array}\right. $$ We will check that $\cal C$ satisfies our needs. For $x,y\in \Gamma$ let $[x,y]$ denote the closed segment of $C^1$ starting from $x$ and going anticlockwise to $y$ (so $[x,y]\cup [y,x]=C^1$). It is clear that for $X,Y\subset_M\Gamma$ one has $\tilde\mathop{dc}(X,Y)=\max\{||X\cap [\alpha,\beta]|-|Y\cap [\alpha,\beta]||\;:\;\alpha,\beta\in\mathop{set}(X)\cup\mathop{set}(Y)\}$. Denote $\sum_{[\alpha,\beta]}(X,Y)=|X\cap [\alpha,\beta]|-|Y\cap [\alpha,\beta]|$. Now, $\tilde\mathop{dc}({\cal A},{\cal C})=\tilde\mathop{dc}({\cal A}\setminus{\cal C},{\cal C}\setminus{\cal A})=1$, since ${\cal A}\setminus{\cal C}$ and ${\cal C}\setminus{\cal A}$ are interlacing sets on $C^1$. Suppose, further, that $\tilde\mathop{dc}({\cal C},{\cal B})=\tilde\mathop{dc}({\cal C}\setminus{\cal B},{\cal B}\setminus{\cal C})\geq k$. Then there exists $[\gamma_i,\gamma_j]$ such that either \begin{enumerate} \item $\sum_{[\gamma_i,\gamma_j]}({\cal C}\setminus{\cal B},{\cal B}\setminus{\cal C}) =\tilde\mathop{dc}({\cal C}\setminus{\cal B},{\cal B}\setminus{\cal C})\geq k$,\\ or \item $\sum_{[\gamma_i,\gamma_j]}({\cal C}\setminus{\cal B},{\cal B}\setminus{\cal C})\leq -k$. \end{enumerate} In the first case we may assume that $\gamma_i,\gamma_j\in{\cal C}\setminus{\cal B}$ and $\gamma_{i-1},\gamma_{j+1}\not\in{\cal C}\setminus{\cal B}$.
Now, changing the interval if necessary (replacing $i$ by $i-1$ and (or) $j$ by $j-1$), we may, keeping $\sum_{[\gamma_i,\gamma_j]}({\cal C}\setminus{\cal B},{\cal B}\setminus{\cal C})$, achieve that $\gamma_i,\gamma_j\in{\cal A}$ and $\gamma_{i-1},\gamma_{j+1}\not\in{\cal A}$. Then $\sum_{[\gamma_i,\gamma_j]}({\cal A},{\cal B})= \sum_{[\gamma_i,\gamma_j]}({\cal C}\setminus{\cal B},{\cal B}\setminus{\cal C})+1\geq k+1$, a contradiction. The second case may be treated similarly. If $\mathop{set}({\cal A})\cap\mathop{set}({\cal B})\neq\emptyset$, then we can find ${\cal C'}$ for ${\cal A}\setminus{\cal B}$ and ${\cal B}\setminus{\cal A}$ and then take ${\cal C}={\cal C'}\uplus X$, where $X={\cal A}\setminus({\cal A}\setminus{\cal B})={\cal B}\setminus({\cal B}\setminus{\cal A})$. \end{proof} \section{On the spectrum of low rank perturbations.} \begin{theorem}[Thompson]\label{tt} Let the $n \times n$ matrix $A$ over a field $F$ have similarity invariants $h_n(A) \mid h_{n-1}(A) \mid \ldots \mid h_1(A)$. Then, as the column $n$-tuple $x$ and the row $n$-tuple $y$ range over all vectors with entries in $F$, the similarity invariants assumed by the matrix $$ B=A+xy $$ are precisely the monic polynomials $h_n(B) \mid \ldots \mid h_1(B)$ over $F$ for which $\deg(h_1(B) \cdots h_n(B))=n$ and $$ \begin{array}{l} h_n(B) \mid h_{n-1}(A) \mid h_{n-2}(B) \mid h_{n-3}(A) \mid \ldots,\\ h_n(A) \mid h_{n-1}(B) \mid h_{n-2}(A) \mid h_{n-3}(B) \mid \ldots. \end{array} $$ \end{theorem} We are going to reformulate Theorem~\ref{tt} for the field ${\bf C}$ using the Weyr characteristic. Let $\eta_m (A, \lambda)$ denote the number of $\lambda$-Jordan blocks in $A$ of size greater than or equal to $m$ ($m \in {\bf N}$): $$ \eta_m(A,\lambda)=\dim \ker(\lambda E - A)^m-\dim \ker(\lambda E - A)^{m-1}. $$ The sequence of numbers $\eta_1(A,\lambda), \ldots, \eta_q(A,\lambda)$ is called the Weyr characteristic for the eigenvalue $\lambda$ of the matrix $A$, see \cite{weyr}.
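The displayed formula for $\eta_m(A,\lambda)$ can be evaluated numerically; the following sketch is ours (the floating-point rank tolerance is an implementation assumption, not part of the paper) and recovers the Weyr characteristic of a matrix built from one $3\times 3$ and one $1\times 1$ Jordan cell for $\lambda=0$.

```python
import numpy as np

def weyr_characteristic(A, lam, tol=1e-8):
    # eta_m(A, lam) = dim ker(lam*E - A)^m - dim ker(lam*E - A)^(m-1),
    # with dim ker computed as n - (numerical rank).
    n = A.shape[0]
    M = lam * np.eye(n) - A
    etas, prev_kdim, P = [], 0, np.eye(n)
    for _ in range(n):
        P = P @ M
        kdim = n - np.linalg.matrix_rank(P, tol=tol)
        if kdim == prev_kdim:   # kernel has stabilized; all further etas are 0
            break
        etas.append(kdim - prev_kdim)
        prev_kdim = kdim
    return etas

# J = one 3x3 Jordan cell J_3(0) plus one 1x1 cell for lambda = 0.
J = np.zeros((4, 4))
J[0, 1] = J[1, 2] = 1.0
assert weyr_characteristic(J, 0.0) == [2, 1, 1]   # 2 blocks, 1 of size >= 2, 1 of size >= 3
```

Here $\eta_1=2$ counts the Jordan blocks, $\eta_2=\eta_3=1$ count those of size at least $2$ and $3$, in agreement with the definition above.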
\begin{theorem} \label{th3} Let $A\in {\bf C}_{n\times n}$ have Weyr invariants $\eta_m (A, \lambda)$. Then, as $R$ ranges over all $n\times n$ complex matrices of rank less than or equal to $k$, the Weyr invariants assumed by the matrix $B=A+R$ are precisely those that satisfy both of the following conditions: \begin{itemize} \item For any $\lambda\in{\bf C}$ and any $m\in{\bf N}$, $$ \mid \eta_m (A, \lambda)-\eta_m (B, \lambda) \mid \leq k. $$ \item $\sum\limits_{\lambda\in{\bf C}}\sum\limits_{m\in{\bf N}}\eta_m(B,\lambda)=n$. \end{itemize} \end{theorem} \begin{proof} It is enough to prove the theorem for $k=1$. Indeed, assume that the theorem is valid for $k=1$. Then we may consider the Weyr characteristic as a map $\psi:{\bf C}_{n\times n}\to \Im_n$ which satisfies the hypothesis of Proposition~\ref{prop_geod}. So Theorem~\ref{th3} follows, since it states exactly that $\psi(O_k(A))=O_k(\psi(A))$. Let us prove Theorem~\ref{th3} for $k=1$. For a given eigenvalue $\lambda \in sp(A)$ the sequence of numbers $$ q_1(A, \lambda) \geq q_2(A, \lambda) \geq ..., $$ corresponding to the sizes of the $\lambda$-Jordan blocks in the Jordan normal form of $A$, is known as the Segre characteristic of $A$ relative to $\lambda$, \cite{weyr}. The similarity invariant factors of $A \in {\bf C}_{n \times n}$ form a sequence of monic polynomials in $x$, $h_n(A) \mid h_{n-1}(A) \mid h_{n-2}(A) \ldots \mid h_{1}(A)$. It is known that $h_i(A)=\prod_{\lambda} (x-\lambda)^{q_i(A, \lambda)}$, where $\lambda \in sp(A)$ and $q_i(A, \lambda)$ is the Segre characteristic corresponding to $\lambda$. So, if $\mathop{rank}(A-B)=1$, by Thompson's theorem \ref{tt} one has \begin{equation}\label{ineq1} \begin{array}{l} q_1(B, \lambda) \geq q_2(A, \lambda) \geq q_3(B, \lambda) \geq \ldots ,\\ q_1(A, \lambda) \geq q_2(B, \lambda) \geq q_3(A, \lambda) \geq \ldots .
\end{array} \end{equation} As the Weyr characteristic is the conjugate partition of the Segre characteristic, we can use Ferrers diagrams to compute $\eta_m(B, \lambda)$ (see \cite{weyr}) as the number of points in column $m$ of the Ferrers diagram of the Segre characteristic of $B$ relative to $\lambda$ (for short, the Ferrers diagram of $(B, \lambda)$). Precisely, the Ferrers diagram for $q(B,\lambda)$ is the set $F^\lambda_B=\{(i,j)\in{\bf Z}^+\times{\bf Z}^+\;|\;j\leq q_i(B,\lambda)\}$, see Figure~\ref{figferrer} (the numbering is from top to bottom and left to right). The Weyr characteristic is related to the Ferrers diagram by the formula $\eta_j(B,\lambda)=|\{(x,y)\in F^\lambda_B\;|\;y=j\}|$. From inequalities (\ref{ineq1}) we have that $q_i(B, \lambda) \geq q_{i+1}(A, \lambda)$. This inequality is equivalent to the statement $\forall i\neq 1\; (i,j)\in F^\lambda_A \to (i-1,j)\in F^\lambda_B$ (Figure~\ref{figferrer}), which is equivalent to the fact that $\eta_j(B, \lambda) \geq \eta_j(A, \lambda)-1$. Similarly, from inequalities (\ref{ineq1}) we observe that $q_i(A, \lambda) \geq q_{i+1}(B, \lambda)$ for $1 \leq i \leq n-1$. Therefore, $\eta_j(A, \lambda) \geq \eta_j(B, \lambda)-1$, and the theorem follows.
\begin{figure} $$ \begin{array}{cccccccccccccccc} \empty & \eta_1(B) & \eta_2 & \eta_3 & \eta_4 & \cdots & \eta_k & & &\eta_1(A) & \eta_2& \eta_3 & \eta_4 & \cdots & \eta_k \\ q_1(B, \lambda) & \bullet & \bullet & \bullet & \bullet & \cdots & \bullet &\hspace{1.cm} & q_2(A, \lambda ) & \bullet & \bullet & \bullet & \bullet & \cdots & \bullet\\ q_2(B, \lambda) & \bullet & \bullet & \bullet & \cdots & \bullet & \hspace{1cm} & & q_3(A, \lambda ) & \bullet & \bullet & \bullet &\cdots & \bullet\\ q_3(B ,\lambda) & \bullet & \bullet & \cdots & \bullet & & \hspace{1cm} & & q_4(A, \lambda ) & \bullet & \bullet & \cdots & \bullet\\ \vdots & & & & & & & & \vdots\\ q_{n-1}(B, \lambda) & \bullet & \cdots & \bullet & & & \hspace{1cm} & & q_n(A, \lambda ) & \bullet & \cdots & \bullet\\ q_{n}(B, \lambda) & \bullet \end{array} $$ \caption{Relation between Ferrers diagrams of $(B, \lambda)$ and $(A, \lambda)$} \label{figferrer} \end{figure} \end{proof} \begin{corollP} \label{th1} If the geometric multiplicity of every eigenvalue $\lambda$ of $A$ (the number of $\lambda$-Jordan cells) is $1$, then for any multiset $M$ of size $n$ there is a rank $1$ matrix $B$ such that $\mathop{sp}(A+B)=M$. \end{corollP} \section{Case of normal matrices.} We will say that a vector $x$ is an $\alpha$-eigenvector if $Ax=\alpha x$. We will denote by ${\cal R}(A,\lambda, \epsilon)$ the space generated by all the $\alpha$-eigenvectors of $A$ with $|\lambda-\alpha|\leq\epsilon$. For the case of normal matrices, the following theorem shows that the difference between the dimension of ${\cal R} (A,\lambda,\epsilon)$ and the dimension of ${\cal R} (B,\lambda,\epsilon)$ is bounded by the rank of the difference $A-B$.
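The bound just described is easy to test numerically in the self-adjoint special case (a sketch of ours; the matrix size, the seed, and the sampled disks are arbitrary choices): a rank-one self-adjoint perturbation changes the number of eigenvalues inside any disk by at most one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = np.diag(np.linspace(-1.0, 1.0, n))   # self-adjoint (hence normal)
v = rng.standard_normal(n)
B = A + np.outer(v, v)                   # rank(A - B) = 1, still self-adjoint

eigA = np.linalg.eigvalsh(A)
eigB = np.linalg.eigvalsh(B)

# For every disk S(a, r), the eigenvalue counts of A and B inside it
# differ by at most rank(A - B) = 1.
for _ in range(500):
    a, r = rng.uniform(-4, 4), rng.uniform(0, 4)
    cA = int(np.sum(np.abs(eigA - a) <= r))
    cB = int(np.sum(np.abs(eigB - a) <= r))
    assert abs(cA - cB) <= 1
```

In the self-adjoint case this is exactly the Cauchy interlacing phenomenon mentioned in the introduction; a disk meets the real line in an interval, and interlaced eigenvalue counts on intervals differ by at most one.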
\begin{theorem} \label{th4} If $A$ and $B$ are normal matrices, then for any $\lambda$, and for any $\epsilon \geq 0$, $$\left| \mathop{dim}( {\cal R} (A,\lambda,\epsilon)) - \mathop{dim} ( {\cal R} (B,\lambda,\epsilon)) \right| \leq \mathop{rank}(A-B)$$ \end{theorem} Let $X^\perp$ be the orthogonal complement of a subspace $X$ and $P_X$ the orthogonal projection onto $X$. \begin{lemma}\label{lm_ort} Let $N:L\to L$ be a normal operator and let $X$ be a subspace of $L$ such that $\|(N-\lambda)x\|\leq\epsilon\|x\|$ for all $x\in X$. Then (we write ${\cal R} (\lambda,\epsilon)$ for ${\cal R} (N,\lambda,\epsilon)$): \begin{enumerate} \item $P_{{\cal R}(\lambda,\epsilon)}x\neq 0$ for any $x\in X$, $x\neq 0$. \item $\|P_{{\cal R}(\lambda,a\epsilon)}x\|\geq \sqrt{1-\frac{1}{a^2}}\|x\|$ for any $x\in X$. \item $\mathop{dim}({\cal R}(\lambda,\epsilon))\geq\mathop{dim}(X)$ \end{enumerate} \end{lemma} \begin{proof} It is clear that (1) implies (3). (1) Let $e_1,e_2,...,e_n$ be an orthonormal eigenbasis for $N$, and let $\lambda_1,\lambda_2,...,\lambda_n$ be the corresponding eigenvalues ($Ne_i=\lambda_ie_i$). Let $x=\alpha_1e_1+\alpha_2e_2+...+\alpha_ne_n\in X$ with $\|x\|^2=\sum\limits_{i=1}^n|\alpha_i|^2=1$. Let $x=x_1+x_2$ where $x_1\in {\cal R}(\lambda,\epsilon)$ and $x_2\in{\cal R}^\perp(\lambda,\epsilon)$. Now, $\|(N-\lambda)x\|^2=\sum\limits_{i=1}^n|\alpha_i|^2|\lambda_i-\lambda|^2\leq\epsilon^2$ implies that $\sum\limits_{i\;|\;|\lambda_i-\lambda|>\epsilon}|\alpha_i|^2<1$. So, $P_{{\cal R}(\lambda,\epsilon)}(x)=x_1=\sum\limits_{i\;|\;|\lambda_i-\lambda|\leq\epsilon}\alpha_ie_i\neq 0$. (2) Similarly, $$ \sum\limits_{i\;|\;|\lambda_i-\lambda|>a\epsilon}|\alpha_i|^2<\frac{1}{a^2}\;\;\mbox{and}\;\; \sum\limits_{i\;|\;|\lambda_i-\lambda|\leq a\epsilon}|\alpha_i|^2\geq 1-\frac{1}{a^2}, $$ so $\|P_{{\cal R}(\lambda,a\epsilon)}x\|\geq \sqrt{1-\frac{1}{a^2}}\|x\|$. \end{proof} Now we are ready to prove Theorem~\ref{th4}.
Let $\mathop{rank}(A-B)=r$ and set $X= {\cal R}(A,\lambda,\epsilon)\cap\ker(A-B)$; then $\mathop{dim}(X)\geq \mathop{dim}({\cal R}(A,\lambda,\epsilon))-r$ and $A|_X=B|_X$, so $\|(B-\lambda)|_X\|\leq\epsilon$ and, by Lemma~\ref{lm_ort}, $\mathop{dim}({\cal R}(B,\lambda,\epsilon))\geq \mathop{dim}({\cal R}(A,\lambda,\epsilon))-r$. By symmetry, we get Theorem~\ref{th4}. Theorem~\ref{th4} implies that any disk in the complex plane containing $m$ spectral points of $A$ (resp.\ $B$) must contain at least $m-k$ spectral points of $B$ (resp.\ $A$), where $k=\mathop{rank}(A-B)$. So, we have \begin{corollP}\label{corol_Norm1} If $A$ and $B$ are normal matrices then $\mathop{dc}(\mathop{sp}(A),\mathop{sp}(B))\leq \mathop{rank}(A-B)$. \end{corollP} \begin{proof} It is just a reformulation of Theorem~\ref{th4}. \end{proof} Does the condition of Corollary~\ref{corol_Norm1} describe all spectra accessible by a rank $k$ perturbation? We are going to show that the answer is ``yes'' for self-adjoint and unitary matrices. \begin{theorem}\label{Th_un_ad} Let $A$ be a self-adjoint (unitary) $n\times n$-matrix. Let ${\cal B}\subset_M{\bf R}$ (${\cal B}\subset_M C^1$), $|{\cal B}|=n$. Then there exists a self-adjoint (unitary) matrix $B$ such that $\mathop{sp}(B)={\cal B}$ and $\mathop{rank}(A-B)=\mathop{dc}(\mathop{sp}(A),{\cal B})$. \end{theorem} In fact, the following more general result holds: \begin{theorem}\label{Th_un_ad2} Let ${l}\subset{\bf C}$ be a circumference or a straight line. Let $A$ be a normal $n\times n$-matrix, $\mathop{sp}(A)\subset_M{l}$. Let ${\cal B}\subset_M{l}$, $|{\cal B}|=n$. Then there exists a normal matrix $B$ such that $\mathop{sp}(B)={\cal B}$ and $\mathop{rank}(A-B)=\mathop{dc}(\mathop{sp}(A),{\cal B})$. \end{theorem} \begin{itemize} \item It is enough to prove Theorem~\ref{Th_un_ad2} for self-adjoint matrices. Indeed, let $\mathop{sp}(A),{\cal B}\subset_M{l}$, $|{\cal B}|=n$ for a circle (line) $l\subset{\bf C}$.
Then there exists a M\"obius transformation $\phi$, defined on $\mathop{sp}(A)\cup{\cal B}$, which map $l$ to the real line. Then $\phi(A)$ is a self-adjoint matrix and we can apply Theorem~\ref{Th_un_ad} to $\phi(A)$ and $\phi({\cal B})$ to find $\tilde B$ with $\mathop{sp}(\tilde B)=\phi({\cal B})$ and $\mathop{rank}(\phi(A)-\tilde B)=\tilde\mathop{dc}(\phi(\mathop{sp}(A)),\phi({\cal B}))$. Now take $B=\phi^{-1}(\tilde B)$ and results follows, for the M\"obius transformations conserve arithmetic distance on $C_{n\times n}$ and the distance $\tilde\mathop{dc}$ on multisets (Proposition~\ref{prop_mob1} and Proposition~\ref{prop_mob}). \item It is enough to prove Theorem~\ref{Th_un_ad2} for $\mathop{dc}(\mathop{sp}(A),{\cal B})=1$ and the rest will follow from Proposition~\ref{prop_geod}, Proposition~\ref{lm_geod1} and Lemma~\ref{lm_arith}. \item Also w.l.g. we may assume that $\mathop{set}(\mathop{sp}(A))\cap\mathop{set}({\cal B})=\emptyset$. For if $X=\mathop{sp}(A)\setminus(\mathop{sp}(A)\setminus{\cal B})$ we can write $A=A_1\oplus A_2$ with $\mathop{sp}(A_1)=X$ and $\mathop{sp}(A_2)=\mathop{sp}(A)\setminus X$. We can find $B_2$ with $\mathop{sp}(B_2)={\cal B}\setminus X$ and $\mathop{rank}(A_2-B_2)=1$. Now, take $B=A_1\oplus B_2$. \item Let ${\cal A},{\cal B}\subset_M{\bf R}$, $\mathop{set}({\cal A})\cap\mathop{set}({\cal B})=\emptyset$, $|{\cal A}|=|{\cal B}|$ and $\mathop{dc}({\cal A},{\cal B})=1$. Then, in fact, ${\cal A}$ and ${\cal B}$ are interlacing sets. It means that if ${\cal A}=\{\alpha_1,\alpha_2,...,\alpha_n\}$, ${\cal B}=\{\beta_1,\beta_2,...,\beta_n\}$ then $\alpha_1<\beta_1<\alpha_2<\beta_2<...$ or $\beta_1<\alpha_1<\beta_2<\alpha_2<...$. \end{itemize} So, we need to prove only \begin{lemma}\label{lm_rank1} Let $A\in{\bf C}_{n\times n}$ be a self-adjoint matrix with a simple spectrum. Let ${\cal B}\subset {\bf R}$ with $|{\cal B}|=n$. 
If $\mathop{sp}(A)$ and ${\cal B}$ are interlacing, then there exists a self-adjoint matrix $B$ with $\mathop{sp}(B)={\cal B}$ and $\mathop{rank}(A-B)=1$. \end{lemma} Let $\mathop{sp}(A)={\cal A}=\{\alpha_1,\alpha_2,...,\alpha_n\}$ and ${\cal B}=\{\beta_1,\beta_2,...,\beta_n\}$. As $A$ and $B$ can be put in diagonal form $\tilde A=\mathop{diag}(\alpha_1,\alpha_2,...,\alpha_n)$ and $\tilde B=\mathop{diag}(\beta_1,\beta_2,...,\beta_n)$ by unitary transformations, and unitary transformations map (by conjugation) self-adjoint matrices to self-adjoint matrices, Lemma~\ref{lm_rank1} is equivalent to the fact that, under our assumptions on ${\cal A}$ and ${\cal B}$, the equation \begin{equation} \label{eq_un} \tilde A X- X\tilde B=R \end{equation} has a solution $(X,R)$ with $X$ unitary and $R$ of rank $1$. Before solving Eq.~\ref{eq_un}, let us introduce some notation and prove a proposition. For a finite ${\cal A}\subset{\bf R}$ let $P_{\cal A}(\lambda)= \prod_{\alpha\in{\cal A}}(\lambda-\alpha)$. Let ${\cal A}$ and ${\cal B}$ be finite subsets of ${\bf R}$ of equal cardinality. It follows by interpolation that there exists a unique $x:{\cal A}\to{\bf R}$ such that \begin{equation}\label{eq_interpolation} P_{\cal B}=P_{\cal A}-\sum_{\alpha\in{\cal A}}x_\alpha P_{{\cal A}\setminus\{\alpha\}} \end{equation} (we write $x_\alpha$ rather than $x(\alpha)$). Moreover, $x_\alpha=P_{\cal B}(\alpha)/P_{{\cal A}\setminus\{\alpha\}}(\alpha)$. Comparing the signs of $P_{\cal B}(\alpha)$ and $P_{{\cal A}\setminus\{\alpha\}}(\alpha)$ we immediately get \begin{proposition}\label{prop_interlace} If ${\cal A}$ and ${\cal B}$ are interlacing, then all $x_\alpha$ in Eq.~\ref{eq_interpolation} have the same sign. \end{proposition} \begin{remark} In fact, the converse of this proposition also holds; see Lemma~1.20 of \cite{Fisk}. \end{remark} Now let us go back to the solutions of Eq.~\ref{eq_un}.
For fixed $R=\{r_{ij}\}$ the equation has the unique solution $X=\{x_{ij}\}$ with $x_{ij}=\frac{r_{ij}}{\alpha_i-\beta_j}$. Now, suppose that $\mathop{rank}(R)=1$ or, equivalently, $r_{ij}=y_iz_j$ for some $y,z\in{\bf C}^n$, $y,z\neq 0$. When is the matrix $X$ unitary? Exactly when its columns (rows) are orthonormal, that is, \begin{equation}\label{eq_unit} z_jz^*_k\sum_{i}\frac{|y_i|^2}{(\alpha_i-\beta_j)(\alpha_i-\beta_k)}=\delta_{jk} \end{equation} It follows that $z_j\neq 0$ for all $j=1,...,n$; interchanging the roles of rows and columns, we get the same for $y$. So, the difficult part is to guarantee that the l.h.s.\ of Eq.~\ref{eq_unit} is $0$ for $j\neq k$. Substituting the identity $$ \frac{|y_i|^2}{(\alpha_i-\beta_j)(\alpha_i-\beta_k)}=\frac{1}{\beta_j-\beta_k} \left(\frac{|y_i|^2}{\alpha_i-\beta_j}-\frac{|y_i|^2}{\alpha_i-\beta_k}\right) $$ into Eq.~\ref{eq_unit} and multiplying it by $P_{\cal A}(\beta_k)P_{\cal A}(\beta_j)(\beta_j-\beta_k)/(z_jz^*_k)$ we get (after some elementary transformations): $$ P_{\cal A}(\beta_k)\sum_{i=1}^n|y_i|^2P_{{\cal A}\setminus\{\alpha_i\}}(\beta_j)= P_{\cal A}(\beta_j)\sum_{i=1}^n|y_i|^2P_{{\cal A}\setminus\{\alpha_i\}}(\beta_k), $$ which implies that there exists $c$ such that $$ \sum_{i=1}^n|y_i|^2P_{{\cal A}\setminus\{\alpha_i\}}(\beta_j)=cP_{\cal A}(\beta_j) $$ for all $j$. Equivalently, ${\cal B}$ is the set of roots of the polynomial $$ \Phi(\lambda)=\sum_{i=1}^n|y_i|^2P_{{\cal A}\setminus\{\alpha_i\}}(\lambda)-cP_{\cal A}(\lambda), $$ so $|y_i|^2=P_{\cal B}(\alpha_i)/cP_{{\cal A}\setminus\{\alpha_i\}}(\alpha_i)$. Choosing $c=1$ or $c=-1$, we get, by Proposition~\ref{prop_interlace}, that $|y_i|^2$ is well defined. Now take, for example, $y_i=|y_i|$. From Eq.~\ref{eq_unit} with $j=k$ we can find $|z_j|^2$. Then, taking $z_j=|z_j|$, we get the desired solution of Eq.~\ref{eq_un}.
\section{Almost unitary operators are near unitary operators with respect to normalized arithmetic distance}\label{sec_almost} For $A,B \in C_{n \times n}$, let $d_r(A,B)$ be the normalized arithmetic distance: $$ d_r(A,B)=\frac{\mathop{rank}(A-B)}{n} $$ The matrix $A$ is called an $\alpha$-self-adjoint matrix if $d_r(A, A^*)=\alpha$, where $A^*$ denotes the adjoint of $A$. The matrix $A$ is called an $\alpha$-unitary matrix if $d_r(A^*A,E)=\alpha$. The following theorems say that ``near'' any $\alpha$-self-adjoint matrix there exists a self-adjoint matrix $S$, and that ``near'' any $\alpha$-unitary matrix there exists a unitary matrix $U$ (for small $\alpha$). \begin{theorem} \label{th5} For any $A \in {\bf C}_{n \times n}$ there exists a self-adjoint matrix $S$ ($S=S^*$) such that $d_r(A,S) \leq d_r(A,A^*)$. \end{theorem} \begin{proof} Take $S=\frac{1}{2}(A+A^*)$. \end{proof} \begin{theorem} \label{th6} For any $A \in {\bf C}_{n \times n}$ there exists a unitary matrix $U$ ($U^*U=E$), such that $d_r(A,U)\leq d_r(A^*A,E)$.
\end{theorem} A good illustration of this theorem is given by $0$-Jordan cells: $$ \left( \begin{array}{ccccccc} 0 & 0 & \cdots & 0 &0 & \cdots & 0\\ 1 & 0 & \cdots & 0 & 0& \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 0 & 0 & \cdots &0\\ 0 & 0 & \cdots & 1 & 0 & \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 0 & 0& \cdots & 0\\ \end{array} \right) \left( \begin{array}{ccccccc} 0 & 1 & \cdots & 0 &0 & \cdots & 0\\ 0 & 0 & \cdots & 0 & 0& \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 0 & 1 & \cdots &0\\ 0 & 0 & \cdots & 0& 0 & \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 0 & 0& \cdots & 0\\ \end{array} \right)= \left( \begin{array}{ccccccc} 0 & 0 & \cdots & 0 &0 & \cdots & 0\\ 0 & 1 & \cdots & 0 & 0& \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 1 & 0 & \cdots &0\\ 0 & 0 & \cdots & 0 & 1 & \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 0 & 0& \cdots & 1\\ \end{array} \right), $$ but the matrix $$ \left( \begin{array}{ccccccc} 0 & 0 & \cdots & 0 &0 & \cdots & 1\\ 1 & 0 & \cdots & 0 & 0& \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 0 & 0 & \cdots &0\\ 0 & 0 & \cdots & 1 & 0 & \cdots &0\\ \multicolumn{7}{c}\dotfill\\ 0 & 0 & \cdots & 0 & 0& \cdots & 0\\ \end{array} \right) $$ is unitary. \begin{proof} Let $\mathop{rank}(A^*A-E)=r$; then there exists a subspace $X\subset L$ with $\mathop{dim}(X)=n-r$ such that $A^*A|_X=E|_X$. Consider $A|_X \,:\, X\to Y=A(X)$. Under the assumptions of the theorem $A^*(Y)=X$; it follows that $(A|_X)^*=A^*|_Y:Y\to X$, so $A|_X\, :\,X\to Y$ is a unitary operator. Choose any unitary operator $B: X^\perp\to Y^\perp$ ($B^*B=E_{X^\perp}$). Then $U=A|_X\oplus B$ proves the theorem. \end{proof} It is not clear whether this proof can be adapted to normal matrices: a unitary operator from one unitary space to another is well defined, but how should one define a normal operator between different unitary spaces?
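The $0$-Jordan-cell illustration can be verified directly; the sketch below (the size $n=5$ is an arbitrary choice of mine) checks that the lower shift $J$ satisfies $\mathop{rank}(J^*J-E)=1$ and that the cyclic shift, which is unitary, differs from $J$ in rank $1$.

```python
import numpy as np

# Numeric check of the 0-Jordan-cell illustration; n = 5 is arbitrary.
n = 5
J = np.eye(n, k=-1)          # 0-Jordan cell: ones on the subdiagonal
U = J.copy()
U[0, n - 1] = 1.0            # cyclic shift: a unitary matrix

rank_defect = np.linalg.matrix_rank(J.T @ J - np.eye(n))   # rank(J*J - E)
distance = np.linalg.matrix_rank(J - U)                    # arithmetic distance
```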
Question: if we define $\alpha$-normal matrices analogously to $\alpha$-self-adjoint and $\alpha$-unitary matrices, do the analogues of Theorems~\ref{th5} and~\ref{th6} hold for normal matrices? \section{Almost commuting matrices}\label{sec_com} \begin{theorem}\label{th_com} For every $4 \leq n \in {\bf N}$ and every $A\in{\bf C}_{n \times n}$ with simple spectrum there exists $X\in {\bf C}_{n\times n}$ such that $d_r(AX,XA) \leq 2/n$ and, for any matrix $B$ commuting with $A$, $d_r(B,X) \geq \frac{1}{2}$. \end{theorem} Before starting the proof of the theorem we need some facts. \begin{proposition}\label{propol} Let $\{\lambda_1,...,\lambda_k\}$ and $\{\alpha_1,...,\alpha_k\}$ be two disjoint sets. Then the matrix $M=[x_{ij}]$ with $x_{ij}=\frac{1}{\alpha_i-\lambda_j}$ is nonsingular. \end{proposition} \begin{proof} The matrix $M$ has the form $$M= \left( \begin{array}{cccc} \frac{1}{\alpha_1-\lambda_1} & \frac{1}{\alpha_1-\lambda_2} & \cdots & \frac{1}{\alpha_1-\lambda_k} \\ \frac{1}{\alpha_2-\lambda_1} & \frac{1}{\alpha_2-\lambda_2} & \cdots & \frac{1}{\alpha_2-\lambda_k} \\ \multicolumn{4}{c}\dotfill\\ \frac{1}{\alpha_k-\lambda_1} & \frac{1}{\alpha_k-\lambda_2} & \cdots & \frac{1}{\alpha_k-\lambda_k} \\ \end{array} \right) $$ Let $P(\lambda)=(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots (\lambda-\lambda_k)$. Multiplying each row $j$ of the matrix $M$ by $P(\alpha_j)$, we obtain a matrix of the form $$ \tilde M= \left( \begin{array}{cccc} P_1(\alpha_1) & P_2(\alpha_1) & \cdots & P_k(\alpha_1) \\ P_1(\alpha_2) & P_2(\alpha_2) & \cdots & P_k(\alpha_2) \\ \multicolumn{4}{c}\dotfill\\ P_1(\alpha_k) & P_2(\alpha_k) & \cdots & P_k(\alpha_k) \\ \end{array} \right) $$ where $P_j(\lambda)=\frac{P(\lambda)}{\lambda-\lambda_j}$. This matrix is nonsingular if and only if $M$ is nonsingular.
We prove that the matrix $\tilde M$ is nonsingular by showing that the following system of linear equations has a unique solution: $$ \left( \begin{array}{cccc} P_1(\alpha_1) & P_2(\alpha_1) & \cdots & P_k(\alpha_1) \\ P_1(\alpha_2) & P_2(\alpha_2) & \cdots & P_k(\alpha_2) \\ \multicolumn{4}{c}\dotfill\\ P_1(\alpha_k) & P_2(\alpha_k) & \cdots & P_k(\alpha_k) \\ \end{array} \right) \left( \begin{array}{c} a_1\\ a_2 \\ \vdots \\ a_k \\ \end{array} \right)= \left( \begin{array}{c} b_1\\ b_2 \\ \vdots \\ b_k \\ \end{array} \right) $$ So we have to solve the system $\sum_{i=1}^k a_iP_i(\alpha_j)=b_j$. Consider the polynomial $\Phi(\lambda)=\sum_{i=1}^k a_iP_i(\lambda)$ and note that $\Phi(\lambda_i)=a_iP_i(\lambda_i)$, because $P_i(\lambda_j)=0$ for $i\neq j$. We have $k$ values $b_j$; therefore, we can use Lagrange interpolation to find the unique polynomial $\Phi(\lambda)$ of degree at most $k-1$ such that $\Phi(\alpha_j)=b_j$, and then we can compute the values $a_i=\frac{\Phi(\lambda_i)}{P_i(\lambda_i)}$ ($P_i(\lambda_i)\neq 0$ since all the $\lambda_i$ are distinct). \end{proof} Now we are ready to prove Theorem~\ref{th_com}. \begin{proof} Consider the matrix equation \begin{equation}\label{ax} AX-XA=\{c_{ij}\}, \end{equation} with $c_{ij}=(i+j) \bmod 2$. This matrix has the following form $$AX-XA= \left( \begin{array}{cccccc} 0 & 1 & 0 & 1 & \cdots & (1+n) \bmod 2 \\ 1 & 0 & 1 & 0 & \cdots & n \bmod 2 \\ 0 & 1 & 0 & 1 & \cdots & (1+n) \bmod 2 \\ 1 & 0 & 1 & 0 & \cdots & n \bmod 2 \\ \multicolumn{6}{c}\dotfill\\ (n+1) \bmod 2 & \multicolumn{4}{c}\dotfill & 0 \end{array} \right) $$ Let $A$ be a diagonal matrix with simple spectrum, $A=\mathop{diag}(\lambda_1,\lambda_2,...,\lambda_n)$.
One matrix $X$ that satisfies Eq.~\ref{ax} is $X = \{x_{ij}\}$ with \\ \begin{equation*} x_{ij}=\begin{cases} \frac{c_{ij}}{\lambda_i-\lambda_j} & \text{for $i \neq j$} \\ 0 & \text{for $i=j$} \\ \end{cases} \end{equation*} Every matrix $B$ that commutes with $A$ is necessarily a diagonal matrix $B=\mathop{diag}(b_1,b_2,...,b_n)$; then $X-B=\{x^*_{ij}\}$ with \begin{equation*} x^*_{ij}=\begin{cases} \frac{c_{ij}}{\lambda_i-\lambda_j} & \text{for $i \neq j$} \\ -b_i & \text{for $i=j$} \\ \end{cases} \end{equation*} If we delete from $X-B$ the odd columns and the even rows, we obtain a submatrix $X'$ of size $\lfloor \frac{n}{2} \rfloor \times \lfloor \frac{n}{2} \rfloor$ of the form \begin{equation*} x'_{ij}= \frac{1}{\lambda_{i^*}-\lambda_{j^*}} \end{equation*} with $i^*=2i-1$, $j^*=2j$. By Proposition~\ref{propol} this matrix is nonsingular. Therefore, $\mathop{rank}(X-B)\geq \lfloor \frac{n}{2} \rfloor$, and we obtain $d_r(X,B)\geq \frac{1}{2}.$ \end{proof}
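The proof can be replayed numerically; the sketch below uses the arbitrary choices $n=6$ and spectrum $1,\ldots,n$ and checks that $AX-XA$ is the rank-$2$ checkerboard matrix and that the Cauchy-type submatrix of $X-B$ is nonsingular.

```python
import numpy as np

# Numeric sketch of the proof; n = 6 and the simple spectrum 1, ..., n
# are arbitrary choices of mine.
n = 6
lam = np.arange(1.0, n + 1)
A = np.diag(lam)

# the checkerboard right-hand side c_ij = (i + j) mod 2 (1-indexed);
# its rows alternate between two patterns, so it has rank 2
C = np.array([[(i + j) % 2 for j in range(1, n + 1)]
              for i in range(1, n + 1)], dtype=float)

# the solution X of AX - XA = C given in the proof
X = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            X[i, j] = C[i, j] / (lam[i] - lam[j])

# deleting the odd columns and the even rows (1-indexed) leaves the
# Cauchy-type submatrix 1/(lam_{2i-1} - lam_{2j}), nonsingular by
# Proposition propol; hence rank(X - B) >= n/2 for every diagonal B
sub = X[np.ix_(range(0, n, 2), range(1, n, 2))]
```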
{ "timestamp": "2008-09-02T02:36:20", "yymm": "0802", "arxiv_id": "0802.3874", "language": "en", "url": "https://arxiv.org/abs/0802.3874", "abstract": "The article is devoted to different aspects of the question \"What can be done with a matrix by low rank perturbation?\" It is proved that one can change a geometrically simple spectrum drastically by a rank 1 permutation, but the situation is quite different if one restricts oneself to normal matrices. Also, the Jordan normal form of a perturbed matrix is considered. It is proved that with respect to rank as a distance all almost unitary matrices are near unitary.", "subjects": "Functional Analysis (math.FA)", "title": "On low rank perturbation of matrices", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9830850906978877, "lm_q2_score": 0.8397339716830605, "lm_q1q2_score": 0.825529947714139 }
https://arxiv.org/abs/1904.10507
Fekete's lemma for componentwise subadditive functions of two or more real variables
We prove an analogue of Fekete's subadditivity lemma for functions of several real variables which are subadditive in each variable taken singly. This extends both the classical case for subadditive functions of one real variable, and a result in a previous paper by the author. While doing so, we prove that the functions with the property mentioned above are bounded on every closed and bounded subset of their domain. The argument follows that of Chapter 6 in E. Hille's 1948 textbook.
\section{Introduction} A real-valued function $f$ defined on a semigroup $(S,\cdot)$ is \emph{subadditive} if \begin{equation} \label{eq:subadd} f(x \cdot y) \leq f(x) + f(y) \end{equation} for every $x,y\in{S}$. Examples of subadditive functions include the absolute value of a complex number; the ceiling of a real number (smallest integer not smaller than it); the cardinality of a finite subset of a given set; and the length of a word over an alphabet. Subadditive functions have applications in many fields including information theory \cite{lm1995}, economics, and combinatorics. In real analysis, subadditive functions are much more interesting than additive ones: for example, every additive, Lebesgue measurable function of one real variable is linear (cf. \cite{sierp1920}), but the characteristic function of the set of irrational numbers is subadditive, Lebesgue measurable, not linear, and everywhere discontinuous. A classical result in mathematical analysis, \emph{Fekete's lemma} \cite{fekete1923} states that, if $f$ is a real-valued subadditive function of one positive integer or positive real variable, then $f(x)/x$ converges, for $x\to+\infty$, to its greatest lower bound. This simple fact has a huge number of applications in many fields, including symbolic dynamics (cf. \cite[Chapter 4]{lm1995}) and the theory of neural networks (see \cite{guerratoninelli}). Reusing a metaphor from \cite{capobianco2008}, Fekete's lemma says that for a sequence of independent observations, the \emph{average information per observation} converges to its greatest lower bound. Given the importance and ubiquity of Fekete's lemma, we wonder if similar results may hold for functions of many variables. Oddly, the mathematical literature seems to contain generalizations where, in almost all cases, the function in the limit actually depends again on a single variable, which is sometimes a real number, sometimes a finite set; many of these are closer to a corollary than to an extension.
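As a quick numeric illustration of Fekete's lemma (the function below is a toy example of mine, not one from the paper), take $f(n)=n+\sqrt{n}$: it is subadditive because $\sqrt{m+n}\leq\sqrt{m}+\sqrt{n}$, and $f(n)/n=1+1/\sqrt{n}$ decreases to its greatest lower bound $1$.

```python
import math

# Toy illustration of Fekete's lemma: f(n) = n + sqrt(n) is subadditive
# and f(n)/n = 1 + 1/sqrt(n) decreases to inf_n f(n)/n = 1.
f = lambda n: n + math.sqrt(n)

# subadditivity f(m + n) <= f(m) + f(n), checked on a small grid
subadditive = all(f(m + n) <= f(m) + f(n)
                  for m in range(1, 60) for n in range(1, 60))

# the averages f(n)/n approach the greatest lower bound from above
ratios = [f(10 ** k) / 10 ** k for k in range(7)]
```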
Of course there are practical reasons for this: for one, division by a vector with many components is undefined. But maybe we could look for a different \emph{type} of limit, or even for a different \emph{flavor} of subadditivity. With the aim of understanding which of the above could be feasible, we note that, if \begin{math} S = S_1 \times S_2 \end{math} is a product semigroup, we can also consider the case of a function which is subadditive in \emph{each variable, however given the other}. That is, instead of requiring \begin{math} f(x_1 y_1, x_2 y_2) \leq f(x_1, x_2) + f(y_1, y_2) \end{math} for every $x_1$, $x_2$, $y_1$, and $y_2$, we could demand that: \begin{enumerate} \item \label{it:subadd-componentwise-x1} \begin{math} f(x_1 y_1, x_2) \leq f(x_1, x_2) + f(y_1, x_2) \end{math} for every $x_1$, $x_2$, and $y_1$; and \item \label{it:subadd-componentwise-x2} \begin{math} f(x_1, x_2 y_2) \leq f(x_1, x_2) + f(x_1, y_2) \end{math} for every $x_1$, $x_2$, and $y_2$. \end{enumerate} The two requirements above, even together, do not imply subadditivity as a function defined on the product semigroup, nor does the latter imply the former: see Example \ref{ex:subadd-separate-not-joint}. Oddly again, this multivariate ``componentwise subadditivity'' seems not to have been addressed very often in the literature. In this paper, we state and prove an extension of Fekete's lemma to componentwise subadditive functions of $d\geq2$ real variables. We state a special case as an example, leaving the full statement to Section \ref{sec:fekete-d}. \begin{proposition} \label{prop:fekete-2d-positives} Let $f$ be a function of two positive real variables which is subadditive in each of them, however given the other. For every $\delta>0$ there exists $R>0$ such that, if both $x_1>R$ and $x_2>R$, then: \begin{displaymath} \frac{f(x_1, x_2)}{x_1 \cdot x_2} < \inf_{x_1, x_2 > 0} \frac{f(x_1, x_2)}{x_1 \cdot x_2} + \delta \,.
\end{displaymath} In addition, \begin{displaymath} \lim_{x_1 \to +\infty} \lim_{x_2 \to +\infty} \frac{f(x_1, x_2)}{x_1 \cdot x_2} = \lim_{x_2 \to +\infty} \lim_{x_1 \to +\infty} \frac{f(x_1, x_2)}{x_1 \cdot x_2} = \inf_{x_1, x_2 > 0} \frac{f(x_1, x_2)}{x_1 \cdot x_2} \,. \end{displaymath} \end{proposition} That is: if the componentwise subadditive function $f(x_1,x_2)$ is considered as a \emph{net} on the \emph{directed set} of pairs of positive reals with the \emph{product ordering} where $(x_1,x_2)\leq(y_1,y_2)$ if and only if $x_1\leq{y_1}$ and $x_2\leq{y_2}$, then the \emph{simultaneous limit} on this directed set of the net $\dfrac{f(x_1,x_2)}{x_1\cdot{x_2}}$ is its greatest lower bound. This is a generalization of the original statement, in which the function depends on several \emph{independent} real variables, both notions of subadditivity and limit are extended, and the original lemma is a special case for $d=1$. The double limit is also remarkable, because multiple limits need not commute, let alone coincide with a simultaneous limit. A similar statement for functions defined on $d$-tuples of positive integers (instead of reals) was proved in \cite{capobianco2008}; see also \cite{lmp2013} for an application. The argument presented there, however, relies on a hidden hypothesis of boundedness on compact subsets, which comes for free in the integer setting (where compact subsets are precisely the finite subsets) but must be proved in the new one, and \emph{cannot} be inferred from boundedness in each variable however given the others (see Example \ref{ex:bound-unbound}). By adapting the proof of \cite[Theorem 6.4.1]{hille1948} we obtain the following result: componentwise subadditive functions defined on suitable regions of $\ensuremath{\mathbb{R}}^d$ are indeed bounded on compact subsets.
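A quick numeric sketch (my own toy example, not the Example cited above) shows how the two notions of subadditivity differ: $f(x_1,x_2)=\sqrt{x_1x_2}$ is subadditive in each variable separately, yet not subadditive on the product semigroup.

```python
import math

# Toy example: f(x1, x2) = sqrt(x1 * x2) is subadditive in each variable
# separately -- for fixed x2 it is sqrt(x2) * sqrt(x1), a nonnegative
# multiple of the subadditive square root -- but it is NOT subadditive
# on the product semigroup (R_+ x R_+, +).
f = lambda x1, x2: math.sqrt(x1 * x2)

grid = [0.5 * k for k in range(1, 15)]
componentwise = all(f(a + b, c) <= f(a, c) + f(b, c) + 1e-12
                    for a in grid for b in grid for c in grid)

# joint subadditivity fails at x = (4, 1), y = (1, 4):
joint_lhs = f(4 + 1, 1 + 4)      # sqrt(25) = 5
joint_rhs = f(4, 1) + f(1, 4)    # 2 + 2 = 4
```

Note also that for this $f$ the averages $f(x_1,x_2)/(x_1x_2)=1/\sqrt{x_1x_2}$ converge, in the simultaneous sense of the proposition above, to their greatest lower bound $0$.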
For $d=2$ and positive variables the statement goes as follows: \begin{proposition} \label{prop:subadd-bounded-1q} In the hypotheses of Proposition \ref{prop:fekete-2d-positives}, the function $f$ is bounded on $[a_1,b_1]\times[a_2,b_2]$ for every $0<a_1<b_1$ and $0<a_2<b_2$. \end{proposition} The paper is organized as follows. Section \ref{sec:background} provides the theoretical background. In Section \ref{sec:subadd-componentwise} we introduce componentwise subadditivity and explain how it is different from subadditivity in the product semigroup. In Section \ref{sec:boundedness} we adapt the argument from \cite[Theorem 6.4.1]{hille1948} to prove that componentwise subadditive functions of $d$ real variables are bounded on compact subsets of $\ensuremath{\mathbb{R}}^d$. In Section \ref{sec:fekete-d} we state, prove, and discuss the main theorem: boundedness will have a crucial role in the proof. Section \ref{sec:owlemma} is a discussion on how the beautiful \emph{Ornstein-Weiss lemma} \cite{ornstein-weiss}, an important result on subadditive functions defined on finite subsets of groups of a certain class which includes $\ensuremath{\mathbb{Z}}$ and $\ensuremath{\mathbb{R}}$, \emph{is not} an extension of Fekete's lemma. \section{Background} \label{sec:background} Throughout the paper, the subsets of $\ensuremath{\mathbb{R}}^d$ and the real-valued functions of real variables are presumed to be Lebesgue measurable. We also let real-valued functions take the value either $+\infty$ or $-\infty$, but not both. We denote by $\ensuremath{\mathbb{R}}$, $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$, and $\ensuremath{\ensuremath{\mathbb{R}}_{-}}$ the sets of real numbers, positive real numbers, and negative real numbers, respectively. Similarly, we denote by $\ensuremath{\mathbb{Z}}$, $\ensuremath{\integers_{+}}$, and $\ensuremath{\integers_{-}}$ the sets of integers, positive integers, and negative integers, respectively.
All these sets are considered as additive semigroups (groups in the case of $\ensuremath{\mathbb{R}}$ and $\ensuremath{\mathbb{Z}}$). If $m$ and $n$ are integers and $m\leq{n}$ we denote the \emph{slice} \begin{math} \{ m, m+1, \ldots, n-1, n \} = \left[ m, n \right] \cap \ensuremath{\mathbb{Z}} \end{math} as $\slice{m}{n}$. If $X$ is a set, we denote by $\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(X)$ the set of its finite subsets. If the sets $X$ and $Y$ where the variable $x$ and the expression $E(x)$ take values are irrelevant or clear from the context, we denote by \begin{math} \lambdaxt{x}{E(x)} \end{math} the function that associates to each value $\overline{x}$ taken by $x$ the value $E(\overline{x})$. For example, \begin{math} \lambdaxt{x}{1} \end{math} is the constant function taking value 1 everywhere. A \emph{directed set} is a partially ordered set \begin{math} \ensuremath{\mathcal{U}} = (U, \preceq) \end{math} with the following additional property: for every $x,y\in{U}$ there exists $z\in{U}$ such that $x\preceq{z}$ and $y\preceq{z}$. Every totally ordered set is a directed set, and so is the family of \emph{decompositions} \begin{math} x = \left\{ a = x_0 < x_1 < \ldots < x_n = b \right\} \end{math} of the compact interval $[a,b]$ with the partial order $x\preceq{y}$ iff for every $i$ there exists $j$ such that $x_i=y_j$. A function $f$ defined on $U$ is also called a \emph{net} on $\ensuremath{\mathcal{U}}$. If $Y$ is the codomain of $f$, a \emph{subnet} of $f$ is a net \begin{math} g : V \to Y \end{math} on a directed set \begin{math} \ensuremath{\mathcal{V}} = (V, \leq) \end{math} together with a function \begin{math} \phi : V \to U \end{math} such that: \begin{enumerate} \item \begin{math} f \circ \phi = g \end{math}; and \item for every $x\in{U}$ there exists $y\in{V}$ such that, if $z\in{V}$ and $z\geq{y}$, then $\phi(z)\succeq{x}$. 
\end{enumerate} For example, a subsequence \begin{math} \{ x_{n_k} \}_{k \geq 1} \end{math} of a sequence \begin{math} \{ x_{n} \}_{n \geq 1} \end{math} of real numbers is a subnet, with \begin{math} V = U = \ensuremath{\integers_{+}} \end{math} and \begin{math} \phi(k) = n_k \end{math}. If \begin{math} \ensuremath{\mathcal{U}} = (U, \preceq) \end{math} is a directed set and $f:U\to\ensuremath{\mathbb{R}}$ is a function, the \emph{lower limit} and the \emph{upper limit} of $f$ in $\ensuremath{\mathcal{U}}$ are the values \begin{equation} \label{eq:liminf-net} \liminf_{x \to \ensuremath{\mathcal{U}}} f(x) = \sup_{x \in U} \inf_{y \succeq x} f(y) \end{equation} and \begin{equation} \label{eq:limsup-net} \limsup_{x \to \ensuremath{\mathcal{U}}} f(x) = \inf_{x \in U} \sup_{y \succeq x} f(y) \,, \end{equation} respectively. The following chain of inequalities holds: \begin{equation} \label{eq:liminf-limsup} \inf_{x \in U} f(x) \leq \liminf_{x \to \ensuremath{\mathcal{U}}} f(x) \leq \limsup_{x \to \ensuremath{\mathcal{U}}} f(x) \leq \sup_{x \in U} f(x) \,. \end{equation} Moreover, if \begin{math} \ensuremath{\mathcal{V}} = (V, \leq) \end{math} is a directed set and \begin{math} g : V \to \ensuremath{\mathbb{R}} \end{math} is a subnet of $f$, then: \begin{equation} \label{eq:liminf-limsup-subnet} \liminf_{x \to \ensuremath{\mathcal{U}}} f(x) \leq \liminf_{y \to \ensuremath{\mathcal{V}}} g(y) \leq \limsup_{y \to \ensuremath{\mathcal{V}}} g(y) \leq \limsup_{x \to \ensuremath{\mathcal{U}}} f(x) \,. \end{equation} If \begin{math} \liminf_{x \to \ensuremath{\mathcal{U}}} f(x) = \limsup_{x \to \ensuremath{\mathcal{U}}} f(x) \end{math}, their common value $L$ is called the \emph{limit} of $f$ in $\ensuremath{\mathcal{U}}$, and we say that $f$ \emph{converges} to $L$ in $\ensuremath{\mathcal{U}}$.
This is equivalent to the following: for every $\varepsilon>0$ there exists \begin{math} x_\varepsilon \in U \end{math} such that \begin{math} |f(x) - L| < \varepsilon \end{math} for every \begin{math} x \succeq x_\varepsilon \end{math}. In this case, every subnet of $f$ also converges to $L$. The \emph{ordered product} of a family \begin{math} \{ (X_i, \leq_i) \}_{i \in I} \end{math} of ordered sets is the ordered set $(X,\leq_\Pi)$ where \begin{math} X = \prod_{i \in I} X_i \end{math} and the \emph{product ordering} $\leq_\Pi$ is defined as: \begin{equation} \label{eq:product-order} x \leq_\Pi y \Longleftrightarrow x_i \leq y_i \;\; \textrm{for every } i \in I \,. \end{equation} If each $(X_i,\leq_i)$ is a directed set, then so is $(X,\leq_\Pi)$. For $d\geq2$ and $w\in\ensuremath{\{0,1\}}^d$ the \emph{orthant} denoted by $w$ is the directed set \begin{math} \ensuremath{\mathcal{R}}_w = \prod_{i=1}^d (X_i, \leq_i) \end{math} where \begin{math} (X_i, \leq_i) = (\ensuremath{\ensuremath{\mathbb{R}}_{+}}, \leq) \end{math} if $w_i=0$ and \begin{math} (X_i, \leq_i) = (\ensuremath{\ensuremath{\mathbb{R}}_{-}}, \geq) \end{math} if $w_i=1$. For example, $\ensuremath{\mathcal{R}}_{10}$ is the open second quadrant of the Cartesian plane, with \begin{math} (x_1, x_2) \leq (y_1, y_2) \end{math} if and only if $x_1\geq{y_1}$ and $x_2\leq{y_2}$. In particular, the \emph{main orthant} of $\ensuremath{\mathbb{R}}^d$, corresponding to $w=0^d$, is \begin{math} \ensuremath{\ensuremath{\mathcal{R}}_{+}}^d = \left( \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d, \leq_\Pi \right) \end{math}. 
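The ordering conventions just introduced are simple to encode; the helper functions below (the names are my own, not the paper's) check directedness of the product ordering on $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^2$ and the reversed first coordinate in the orthant $\ensuremath{\mathcal{R}}_{10}$.

```python
# Sketch of the product ordering <=_Pi and of the orthant R_10.
def leq_prod(x, y):
    """x <=_Pi y in the main orthant: componentwise comparison."""
    return all(xi <= yi for xi, yi in zip(x, y))

def leq_R10(x, y):
    """Ordering of the orthant R_10: the first coordinate is reversed."""
    return x[0] >= y[0] and x[1] <= y[1]

# directedness of the ordered product: the componentwise max of two
# elements is a common upper bound of both
x, y = (1.0, 5.0), (3.0, 2.0)
z = tuple(map(max, x, y))        # (3.0, 5.0)
```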
Note that, if \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} is a net on $\ensuremath{\ensuremath{\mathcal{R}}_{+}}^d$ and \begin{math} \{ x_{i, n} \}_{n \geq 1} \end{math}, $i\in\slice{1}{d}$, are sequences of positive reals such that \begin{math} \lim_{n \to \infty} x_{i,n} = +\infty \end{math} for every $i\in\slice{1}{d}$, then \begin{math} g(n) = f \left( x_{1,n}, \ldots, x_{d,n} \right) \end{math} is a subnet of $f$: consequently, if $f$ converges to $L\in\ensuremath{\mathbb{R}}$ in $\ensuremath{\ensuremath{\mathcal{R}}_{+}}^d$, then $g(n)$ converges to $L$ for $n\to\infty$. \section{Componentwise subadditivity} \label{sec:subadd-componentwise} In the literature, subadditivity is most often studied in functions of a single variable, which may sometimes be a vector rather than a scalar. But in some cases, it is of interest to consider functions of $d$ independent variables, which are subadditive when considered as functions of only one of them, however given the remaining ones. \begin{definition} \label{def:subadd-componentwise} Let \begin{math} S_1, \ldots, S_d \end{math} be semigroups, let \begin{math} S = \prod_{i=1}^d S_i \end{math}, and let \begin{math} f : S \to \ensuremath{\mathbb{R}} \end{math}. Given $i\in\slice{1}{d}$, we say that $f$ is \emph{subadditive in $x_i$ independently of the other variables} if, however given \begin{math} x_j \in S_j \end{math} for every \begin{math} j \in \slice{1}{d} \setminus \{ i \} \end{math}, the function \begin{math} \lambdaxt{x_i}{f(x_1, \ldots, x_i, \ldots, x_d)} \end{math} is subadditive on $S_i$. We say that $f$ is \emph{componentwise subadditive} if it is subadditive in each variable independently of the others.
\end{definition} \begin{example} \label{ex:subadd-componentwise-immediate} If \begin{math} f_1 : S_1 \to \ensuremath{\mathbb{R}} \end{math} and \begin{math} f_2 : S_2 \to \ensuremath{\mathbb{R}} \end{math} are both subadditive and nonnegative, then \begin{math} f : S_1 \times S_2 \to \ensuremath{\mathbb{R}} \end{math} defined by \begin{math} f(x_1, x_2) = f_1(x_1) \cdot f_2(x_2) \end{math} is componentwise subadditive. If one of $f_1$ and $f_2$ takes negative values, then $f$ might not be componentwise subadditive. For example, $f_1(x_1)=-x_1$ is subadditive on $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$, because it is linear, and $f_2(x_2)=\sqrt{x_2}$ is also subadditive on $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$, because it is nondecreasing and \begin{math} x_2 + y_2 < \left( \sqrt{x_2} + \sqrt{y_2} \right)^2 \end{math} for every $x_2,y_2>0$; but for any fixed $x_1>0$, the function \begin{math} \lambdaxt{x_2}{-x_1 \sqrt{x_2}} \end{math} is not subadditive on $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$. \end{example} \begin{example} (cf. \cite[Section 3]{capobianco2008}) \label{ex:subadd-subshift-logoutput} Let $d$ be a positive integer and let $A$ be a finite set with $a\geq2$ elements, considered as a discrete space. The \emph{translation} by $v\in\ensuremath{\mathbb{Z}}^d$ is the function \begin{math} \sigma_v : A^{\ensuremath{\mathbb{Z}}^d} \to A^{\ensuremath{\mathbb{Z}}^d} \end{math} defined by \begin{math} \sigma_v(c)(x) = c(x+v) \end{math} for every $x\in\ensuremath{\mathbb{Z}}^d$. A $d$-dimensional \emph{subshift} on $A$ is a subset $X$ of $A^{\ensuremath{\mathbb{Z}}^d}$ which is closed in the product topology and invariant by translation, that is, if $c\in{X}$, then $\sigma_v(c)\in{X}$ for every $v\in\ensuremath{\mathbb{Z}}^d$.
Given $d$ positive integers $n_1,\ldots,n_d$, an \emph{allowed pattern} of sides $n_1,\ldots,n_d$ for $X$ is a function \begin{math} p : \prod_{i=1}^d \slice{1}{n_i} \to A \end{math} such that there exists $c\in{X}$ for which the restriction of $c$ to $\prod_{i=1}^d\slice{1}{n_i}$ coincides with $p$. Let $\ensuremath{\mathcal{A}}_X(n_1,\ldots,n_d)$ be the number of allowed patterns for $X$ of sides $n_1,\ldots,n_d$: then \begin{equation} \label{eq:subadd-ca-logoutput} f(n_1, \ldots, n_d) = \log_{a} \ensuremath{\mathcal{A}}_X(n_1, \ldots, n_d) \;\; \textrm{for every} \; n_1, \ldots, n_d \in \ensuremath{\integers_{+}} \end{equation} is componentwise subadditive, because every allowed pattern of sides \begin{math} n_1 + m_1, n_2, \ldots, n_d \end{math} can be obtained by joining an allowed pattern of sides $n_1,n_2,$ $\ldots,n_d$ with an allowed pattern of sides $m_1,n_2,\ldots,n_d$, but joining two such allowed patterns does not necessarily produce an allowed pattern; similarly for the other $d-1$ coordinates. This works because $X$ is invariant by translations. \end{example} Componentwise subadditivity is very different from subadditivity with respect to the operation of the product semigroup. Already with $d=2$, if \begin{math} f : S_1 \times S_2 \to \ensuremath{\mathbb{R}} \end{math} is subadditive, then for every \begin{math} x_1, y_1 \in S_1 \end{math} and \begin{math} x_2, y_2 \in S_2 \end{math} we have: \begin{equation} \label{eq:subadditive-2} f(x_1 y_1, x_2 y_2) \leq f(x_1, x_2) + f(y_1, y_2) \,, \end{equation} while if $f$ is componentwise subadditive, then for every \begin{math} x_1, y_1 \in S_1 \end{math} and \begin{math} x_2, y_2 \in S_2 \end{math} we have the more complex upper bound: \begin{equation} \label{eq:subadditive-2-componentwise} f(x_1 y_1, x_2 y_2) \leq f(x_1, x_2) + f(x_1, y_2) + f(y_1, x_2) + f(y_1, y_2) \,. 
\end{equation} If $f$ is nonnegative, then (\ref{eq:subadditive-2}) implies (\ref{eq:subadditive-2-componentwise}), which however is weaker than the conditions of Definition \ref{def:subadd-componentwise}; if $f$ is nonpositive, then (\ref{eq:subadditive-2-componentwise}) implies (\ref{eq:subadditive-2}). In general, however, neither implies the other. \begin{example} \label{ex:subadd-separate-not-joint} By our discussion in Example \ref{ex:subadd-componentwise-immediate}, the function \begin{math} f(x_1, x_2) = \sqrt{x_1 x_2} \end{math} is componentwise subadditive on $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^2$. However, $f$ is not subadditive, because \begin{math} f(3, 3) = 3 > 2\sqrt{2} = f(1, 2) + f(2, 1) \end{math}. \end{example} \begin{example} \label{ex:subadd-ca-surjective} The function (\ref{eq:subadd-ca-logoutput}) of Example \ref{ex:subadd-subshift-logoutput} is not, in general, subadditive. For example, for $d=2$ and $X=A^{\ensuremath{\mathbb{Z}}^2}$ every pattern is allowed, so \begin{math} f(n_1, n_2) = n_1 n_2 \end{math}: but if $n_1$, $n_2$, $m_1$, and $m_2$ are all positive, then \begin{math} (n_1 + m_1) (n_2 + m_2) > n_1 n_2 + m_1 m_2 \end{math}. \end{example} Although componentwise subadditivity is very different from subadditivity in the product semigroup, Fekete's lemma can tell us something important for the case of positive integer or real variables. \begin{lemma} \label{lem:positive-variables} Let \begin{math} S = \prod_{i=1}^d S_i \end{math} with each $S_i$ being either $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$ or $\ensuremath{\integers_{+}}$, and let $f:S\to\ensuremath{\mathbb{R}}$ be componentwise subadditive. Having fixed $k\in\slice{1}{d-1}$, let \begin{math} i, j_1, \ldots, j_k \in \slice{1}{d} \end{math} be pairwise different.
For any fixed values of the remaining variables, the function \begin{math} h : S_i \to \ensuremath{\mathbb{R}} \cup \{ -\infty \} \end{math} defined by the multiple limit: \begin{equation} \label{eq:positive-variables} h(x_i) = \lim_{x_{j_1} \to +\infty} \ldots \lim_{x_{j_k} \to +\infty} \dfrac{f(x_1, \ldots, x_d)}{\prod_{l=1}^k x_{j_l}} \end{equation} is subadditive. \end{lemma} \begin{proof} It is sufficient to prove the thesis for $k=1$; the general case follows by repeated application. To simplify notation, let $j=j_1$. Fix the values of $x_s$ for \begin{math} s \in \slice{1}{d} \setminus \{ i, j \} \end{math}. By hypothesis, for every $x_i\in{S_i}$ the function \begin{math} \lambdaxt{x_j}{f(x_1, \ldots, x_d)} \end{math} is subadditive, so by Fekete's lemma \begin{math} h(x_i) = \lim_{x_j \to \infty} \frac{f(x_1, \ldots, x_d)}{x_j} \end{math} exists. But for every $x_i,x_i',x_j>0$ we have: \begin{displaymath} \frac{f(\ldots, x_i + x_i', \ldots )}{x_j} \leq \frac{f(\ldots, x_i, \ldots)}{x_j} + \frac{f(\ldots, x_i', \ldots)}{x_j} \,, \end{displaymath} so it must be \begin{math} h(x_i + x_i') \leq h(x_i) + h(x_i') \end{math} too. Note that it is crucial for the proof that $x_j>0$. \end{proof} The following observation is crucial for the next sections. \begin{proposition} \label{prop:subadditive-componentwise-orthant} Let \begin{math} w = w_1 \ldots w_d \end{math} be a binary word of length $d$ and let \begin{math} f : \ensuremath{\mathbb{R}}_w \to \ensuremath{\mathbb{R}} \end{math}. For every $i\in\slice{1}{d}$ let \begin{math} x_{w,i} = (-1)^{w_i} x_i \in \ensuremath{\ensuremath{\mathbb{R}}_{+}} \end{math}, and let \begin{math} f_w : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} be defined by \begin{math} f_w(x_{w,1}, \ldots, x_{w,d}) = f(x_1, \ldots, x_d) \end{math}. The following are equivalent: \begin{enumerate} \item $f(x_1,\ldots,x_d)$ is componentwise subadditive in $\ensuremath{\mathbb{R}}_w$.
\item \begin{math} f_w(x_{w,1}, \ldots, x_{w,d}) \end{math} is componentwise subadditive in $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^d$. \end{enumerate} The same holds if $\ensuremath{\mathbb{R}}_w$ and $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^d$ are replaced with \begin{math} \ensuremath{\mathbb{Z}}_w = \ensuremath{\mathbb{R}}_w \cap \ensuremath{\mathbb{Z}}^d \end{math} and $\ensuremath{\integers_{+}}^d$, respectively. \end{proposition} \section{% Componentwise subadditive functions of $d$ real variables are bounded on compacts% } \label{sec:boundedness} In \cite{capobianco2008} we prove the following: \begin{proposition}[% {Fekete's lemma in $\ensuremath{\integers_{+}}^d$; \cite[Theorem 1]{capobianco2008}}% ] \label{prop:fekete-positiveint-d} Let \begin{math} \ensuremath{\mathcal{U}} = \left( \ensuremath{\integers_{+}}^d, \leq_\Pi \right) \end{math} and let \begin{math} f : \ensuremath{\integers_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive. Then: \begin{equation} \label{eq:fekete-positiveint-d} \lim_{(x_1, \ldots, x_d) \to \ensuremath{\mathcal{U}}} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} = \inf_{x_1, \ldots, x_d \in \ensuremath{\integers_{+}}} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,. \end{equation} \end{proposition} \begin{example} \label{ex:fekete-entropy} With the notation of Example \ref{ex:subadd-subshift-logoutput} and $\ensuremath{\mathcal{U}}$ as in Proposition \ref{prop:fekete-positiveint-d}, the value: \begin{equation} \label{eq:fekete-entropy} h(X) = \lim_{(x_1, \ldots, x_d) \to \ensuremath{\mathcal{U}}} \frac{\log_a \ensuremath{\mathcal{A}}_X(x_1, \ldots, x_d)}{x_1 \cdots x_d} \end{equation} is well defined, and is called the \emph{entropy} of the subshift $X$. For $d=1$ this coincides with \cite[Definition 4.1.1]{lm1995}. \end{example} We try to reuse the argument from \cite{capobianco2008} to prove Proposition \ref{prop:fekete-2d-positives}. Fix $s,t>0$. 
Every $x>0$ large enough can be written uniquely as \begin{math} x = qs + r \end{math} with $q$ a positive integer and \begin{math} r \in \left[ s, 2s \right) \end{math}, and every $y>0$ large enough can be written uniquely as \begin{math} y = mt + p \end{math} with $m$ a positive integer and \begin{math} p \in \left[ t, 2t \right) \end{math}. By subadditivity, \begin{eqnarray*} \frac{f(x, y)}{x \cdot y} & \leq & \frac{q}{x \cdot y} f(s, y) + \frac{1}{x \cdot y} f(r, y) \\ & \leq & \frac{q}{x} \cdot \frac{m}{y} \cdot f(s, t) \\ & & + \frac{q}{x} \cdot \frac{1}{y} \cdot f(s, p) + \frac{1}{x} \cdot \frac{m}{y} \cdot f(r, t) \\ & & + \frac{1}{x} \cdot \frac{1}{y} \cdot f(r, p) \,. \end{eqnarray*} Consider the four summands on the right-hand side of the last inequality. By construction, \begin{math} \lim_{x \to +\infty} q / x = 1 / s \end{math} and \begin{math} \lim_{y \to +\infty} m / y = 1 / t \end{math}: therefore, the first summand converges to \begin{math} f(s, t) / st \end{math} for \begin{math} (x, y) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^2 \end{math}. Now, by \cite[Theorem 6.4.1]{hille1948}, a subadditive function of one positive real variable is bounded in every compact subset of $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$. Then \begin{math} \lambdaxt{p}{f(s, p)} \end{math} is bounded on \begin{math} \left[ t, 2t \right] \end{math} and \begin{math} \lambdaxt{r}{f(r, t)} \end{math} is bounded on \begin{math} \left[ s, 2s \right] \end{math}: consequently, the second and third summands vanish for \begin{math} (x, y) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^2 \end{math}. But the fourth summand presents a problem. What we know is that \begin{math} \lambdaxt{x}{f(x, y)} \end{math} is bounded in $[s,2s]$ for every $y\in[t,2t]$, and \begin{math} \lambdaxt{y}{f(x, y)} \end{math} is bounded in $[t,2t]$ for every $x\in[s,2s]$.
This is, in general, \emph{strictly less} than $f$ being bounded in \begin{math} [s, 2s] \times [t, 2t] \end{math}: which is what we actually need to show that the fourth summand vanishes when $x$ and $y$ both grow arbitrarily large! \begin{example}[{suggested by Arthur Rubin}] \label{ex:bound-unbound} Let \begin{math} h : \ensuremath{\ensuremath{\mathbb{R}}_{+}} \to \ensuremath{\mathbb{R}} \end{math} be such that $h(t)$ is the denominator of the representation of $t$ as an irreducible fraction if $t$ is rational, and 0 if $t$ is irrational. Then \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^2 \to \ensuremath{\mathbb{R}} \end{math} defined by \begin{math} f(x, y) = \min(h(x), h(y)) \end{math} satisfies the following conditions: \begin{enumerate} \item for every $x\in[1,2]$, the function $\lambdaxt{y}{f(x,y)}$ is bounded in $[1,2]$; \item for every $y\in[1,2]$, the function $\lambdaxt{x}{f(x,y)}$ is bounded in $[1,2]$. \end{enumerate} However, $f$ is not bounded in \begin{math} [1, 2] \times [1, 2] \end{math}, because \begin{math} f(1 + 1/n, 1 + 1/n) = n \end{math} for every $n\in\ensuremath{\integers_{+}}$. On the other hand, $h(4)=1$ and \begin{math} h(\pi) = h(4-\pi) = 0 \end{math}, so $f$ is neither subadditive nor componentwise subadditive in $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^2$. \end{example} We could overcome this issue if a boundedness result such as \cite[Theorem 6.4.1]{hille1948} held for componentwise subadditive functions. Fortunately, such a result does hold, and we can follow the same idea as Hille's proof. Given \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} and \begin{math} t_1, \ldots, t_d \in \ensuremath{\ensuremath{\mathbb{R}}_{+}} \end{math}, let: \begin{equation} \label{eq:levelset-d} V_{t_1, \ldots, t_d, k} = \{ (x_1, \ldots, x_d) \in \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \mid 0 < x_i < t_i \; \forall i \in \slice{1}{d}, \; f(x_1, \ldots, x_d) \geq k \} \,.
\end{equation} Under our hypothesis that $f$ is measurable, the set defined in (\ref{eq:levelset-d}) is measurable as well. The next statement is the cornerstone of our argument. For Lemma \ref{lem:subadd-levelset-d} and Theorem \ref{thm:subadd-bounded-1q}, the symbol $\mu$ and the word ``measure'' denote the $d$-dimensional Lebesgue measure. \begin{lemma} \label{lem:subadd-levelset-d} Let \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive. Then for every \begin{math} t_1, \ldots, t_d \in \ensuremath{\ensuremath{\mathbb{R}}_{+}} \end{math}, \begin{equation} \label{eq:subadd-levelset-d} \mu \left( V_{t_1, \ldots, t_d, \frac{f(t_1, \ldots, t_d)}{2^d}} \right) \geq \frac{t_1 \cdots t_d}{2^d} \,. \end{equation} \end{lemma} \begin{proof} Let $V$ be the set whose measure appears on the left-hand side of (\ref{eq:subadd-levelset-d}). For every $i\in\slice{1}{d}$, given \begin{math} x_i \in (0, t_i) \end{math}, let \begin{math} y_i^{(0)} = x_i \end{math} and \begin{math} y_i^{(1)} = t_i - x_i \end{math}. Then for every $w\in\ensuremath{\{0,1\}}^d$ the transformation \begin{displaymath} \lambdaxt{ (x_1, \ldots, x_d) }{ \left( y_1^{(w_1)}, \ldots, y_d^{(w_d)} \right) } \end{displaymath} is a measure-preserving continuous involution, hence the set: \begin{displaymath} V_w = \left\{ \left( y_1^{(w_1)}, \ldots, y_d^{(w_d)} \right) \mid (x_1, \ldots, x_d) \in V \right\} \end{displaymath} is measurable and satisfies \begin{math} \mu(V_w) = \mu(V) \end{math}. Note that $V=V_{0^d}$. By repeatedly applying subadditivity, once in each variable, we arrive at: \begin{equation} \label{eq:levelset-subadd} f(t_1, \ldots, t_d) \leq \sum_{w \in \ensuremath{\{0,1\}}^d} f \left( y_1^{(w_1)}, \ldots, y_d^{(w_d)} \right) \,. \end{equation} For example, for $d=2$ we have: \begin{eqnarray*} f(t_1, t_2) & \leq & f(x_1, t_2) + f(t_1 - x_1, t_2) \\ & \leq & f(x_1, x_2) + f(x_1, t_2 - x_2) \\ & & + f(t_1 - x_1, x_2) + f(t_1 - x_1, t_2 - x_2) \,.
\end{eqnarray*} Since (\ref{eq:levelset-subadd}) holds, for every \begin{math} (x_1, \ldots, x_d) \in \prod_{i=1}^d (0, t_i) \end{math} at least one of the $2^d$ summands on its right-hand side must be no smaller than \begin{math} \dfrac{f(t_1, \ldots, t_d)}{2^d} \end{math}: hence \begin{math} \bigcup_{w \in \ensuremath{\{0,1\}}^d} V_w = \prod_{i=1}^d (0, t_i) \end{math}, so: \begin{displaymath} t_1 \cdots t_d \leq \sum_{w \in \ensuremath{\{0,1\}}^d} \mu(V_w) = 2^d \, \mu(V) \,. \end{displaymath} \end{proof} From Lemma \ref{lem:subadd-levelset-d} follows: \begin{theorem} \label{thm:subadd-bounded-1q} Let \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive. Then $f$ is bounded in every compact subset of $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^d$. \end{theorem} \begin{proof} It is sufficient to prove the thesis for every compact hypercube of the form $H=[a,b]^d$ with $0<a<b$. We proceed by contradiction, following the argument from \cite[Theorem 6.4.1]{hille1948}. First, suppose that $f$ is unbounded from above in $H$. Then for every $n\geq1$ and $i\in\slice{1}{d}$ there exists \begin{math} x_{i, n} \in [a, b] \end{math} such that \begin{math} f(x_{1, n}, \ldots, x_{d, n}) \geq 2^d n \end{math}. Let \begin{math} W_{t_1, \ldots, t_d} = V_{t_1, \ldots, t_d, \frac{f(t_1, \ldots, t_d)}{2^d}} \end{math} be the set appearing in (\ref{eq:subadd-levelset-d}). By construction, for every $n\geq1$ we have \begin{displaymath} W_{x_{1,n}, \ldots, x_{d,n}} \subseteq V_{b, \ldots, b, n} \,, \end{displaymath} and by Lemma \ref{lem:subadd-levelset-d}, \begin{displaymath} \mu \left( W_{x_{1,n}, \ldots, x_{d,n}} \right) \geq \frac{x_{1,n} \cdots x_{d,n}}{2^d} \geq \left( \frac{a}{2} \right)^d \,. \end{displaymath} Now, the sets \begin{math} V_{b, \ldots, b, n} \end{math} are measurable and form a nonincreasing sequence, so \begin{math} V = \bigcap_{n \geq 1} V_{b, \ldots, b, n} \end{math} is measurable and \begin{math} \mu(V) \geq\left( a/2 \right)^d \end{math}: in particular, $V$ cannot be empty.
But for \begin{math} (x_1, \ldots, x_d) \in V \end{math} it must be \begin{math} f(x_1, \ldots, x_d) \geq n \end{math} for every $n\geq1$: which is impossible. Next, suppose that $f$ is unbounded from below in $H$. Then for every $n\geq1$ and $i\in\slice{1}{d}$ there exists \begin{math} x_{i, n} \in [a, b] \end{math} such that \begin{math} f(x_{1, n}, \ldots, x_{d, n}) \leq -n \end{math}: up to passing to a subsequence, we may assume that \begin{math} \lim_{n \to \infty} x_{i, n} = x_i \in [a, b] \end{math} exists for every $i\in\slice{1}{d}$. Let \begin{math} s = \min(a, 1) \end{math}, \begin{math} t = b + 4 \end{math}, and \begin{math} J = [s,t]^d \end{math}: then every point \begin{math} (z_1, \ldots, z_d) \end{math} where each $z_i$ belongs to either $[a,b]$ or $[1,4]$ belongs to $J$. Let now $y_i\in[1,4]$ for every $i\in\slice{1}{d}$ and \begin{displaymath} M = \sup \{ f(z_1, \ldots, z_d) \mid (z_1, \ldots, z_d) \in J \} \,, \end{displaymath} which is a real number by the first part of the proof. By applying subadditivity in each variable, for such \begin{math} y_1, \ldots, y_d \end{math} and $n$ we obtain \begin{displaymath} \label{eq:subadd-bounded-1q} f(y_1 + x_{1, n}, \ldots, y_d + x_{d, n}) \leq (2^d-1)M - n \,, \end{displaymath} because $-n$ is an upper bound for \begin{math} f(x_{1, n}, \ldots, x_{d, n}) \end{math} and $M$ is an upper bound for the other $2^d-1$ summands. For example, for $d=2$ we have: \begin{eqnarray*} f(y_1 + x_{1, n}, y_2 + x_{2, n}) & \leq & f(y_1, y_2) + f(y_1, x_{2, n}) \\ & & + f(x_{1, n}, y_2) + f(x_{1, n}, x_{2, n}) \\ & \leq & 3M - n \,.
\end{eqnarray*} But for every $n$ such that \begin{math} |x_{i,n} - x_i| \leq 1 \end{math} we have \begin{math} [x_i + 2, x_i + 3] \subseteq [x_{i,n} + 1, x_{i,n} + 4] \end{math}: calling \begin{displaymath} K = \prod_{i=1}^d [x_i + 2, x_i + 3] \subseteq J \,, \end{displaymath} for every $n$ large enough every element of $K$ can be written in the form \begin{math} (y_1 + x_{1,n}, \ldots, y_d + x_{d,n}) \end{math} for suitable \begin{math} y_1, \ldots, y_d \in [1, 4] \end{math}. For every \begin{math} (z_1, \ldots, z_d) \in K \end{math} it must then be \begin{math} f(z_1, \ldots, z_d) \leq (2^d-1)M - n \end{math} for every $n$ large enough: which is impossible. \end{proof} If $f:\ensuremath{\mathbb{R}}_w\to\ensuremath{\mathbb{R}}$ is bounded on the compact subsets of $\ensuremath{\mathbb{R}}_w$, then $f_w$ as defined in Proposition \ref{prop:subadditive-componentwise-orthant} is bounded on the compact subsets of $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^d$; and vice versa. From Theorem \ref{thm:subadd-bounded-1q} and Proposition \ref{prop:subadditive-componentwise-orthant} follows: \begin{corollary} \label{cor:subadd-bounded-Rw} Let $w\in\ensuremath{\{0,1\}}^d$ and let \begin{math} f : \ensuremath{\mathbb{R}}_w \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive. Then $f$ is bounded in every compact subset of $\ensuremath{\mathbb{R}}_w$. \end{corollary} In turn, Corollary \ref{cor:subadd-bounded-Rw} allows us to prove: \begin{theorem} \label{thm:subadd-bounded} Let \begin{math} f : \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive. Then $f$ is bounded in every compact subset of $\ensuremath{\mathbb{R}}^d$.
\end{theorem} \begin{proof} It is sufficient to exhibit finitely many open sets \begin{math} V_1, \ldots, V_n \end{math} such that $f$ is bounded on the compacts of each $V_i$ and: \begin{displaymath} \ensuremath{\mathbb{R}}^d = \left( \bigcup_{w \in \ensuremath{\{0,1\}}^d} \ensuremath{\mathbb{R}}_w \right) \cup \left( \bigcup_{i=1}^n V_i \right) \,. \end{displaymath} We give the argument for $d=3$: the ideas for arbitrary $d\geq1$ are similar. Let \begin{math} I = [-1/2, 1/2] \end{math} and \begin{math} U = [-3/2, -1/2] \cup [1/2, 3/2] \end{math}. We start by proving that $f$ is bounded in every compact subset of the open set \begin{displaymath} Z_{00} = \{ (x, y, z) \in \ensuremath{\mathbb{R}}^3 \mid x>0, y>0 \} = \ensuremath{\mathbb{R}}_{000} \cup \ensuremath{\mathbb{R}}_{001} \cup D_{00} \,, \end{displaymath} where \begin{math} D_{00} = \{ (x, y, z) \in \ensuremath{\mathbb{R}}^3 \mid x>0, y>0, z=0 \} \end{math} is the first quadrant of the $XY$ plane. To do this, we only need to show that $f$ is bounded in every set of the form \begin{math} H = [a, b] \times [a, b] \times I \end{math} with $0<a<b$. Let \begin{math} V = [a, b] \times [a, b] \times U \end{math}: if $(x,y,z)\in{H}$, then $(x,y,z-1)$ and $(x,y,z+1)$ are both in $V$. Let $T$ and $t$ be an upper bound and a lower bound for $f$ in $V$, respectively: then for every $(x,y,z)\in{H}$, \begin{displaymath} f(x, y, z) \leq f(x, y, z-1) + f(x, y, 1) \leq 2T \end{displaymath} and \begin{displaymath} f(x, y, z) \geq f(x, y, z+1) - f(x, y, 1) \geq t - T \,. \end{displaymath} By similar arguments, $f$ is bounded in every compact subset of every subset of $\ensuremath{\mathbb{R}}^3$ which is the union of two adjacent orthants and the corresponding ``quadrant''. As each open orthant borders three others along one ``quadrant'', there are \begin{math} \dfrac{8 \cdot 3}{2} = 12 \end{math} such subsets.
We now show that $f$ is bounded in every compact subset of the open ``upper demispace'' \begin{math} Z_0 = \{ (x, y, z) \in \ensuremath{\mathbb{R}}^3 \mid z > 0 \} \end{math}. To do so, it will suffice to show that $f$ is bounded in every set of the form \begin{math} L = I \times I \times [a, b] \end{math} with $0<a<b$. Let \begin{math} W = U \times U \times [a, b] \end{math} and let $S$ and $s$ be an upper bound and a lower bound for $f$ in $W$, respectively: then for every $(x,y,z)\in{L}$, \begin{eqnarray*} f(x, y, z) & \leq & f(x-1, y, z) + f(1, y, z) \\ & \leq & f(x-1, y-1, z) + f(x-1, 1, z) + f(1, y-1, z) + f(1, 1, z) \\ & \leq & 4S \end{eqnarray*} and, expanding \begin{math} f(1, y, z) \end{math} once more because its argument need not lie in $W$, \begin{eqnarray*} f(x, y, z) & \geq & f(x+1, y, z) - f(1, y, z) \\ & \geq & f(x+1, y+1, z) - f(x+1, 1, z) - f(1, y, z) \\ & \geq & f(x+1, y+1, z) - f(x+1, 1, z) - f(1, y-1, z) - f(1, 1, z) \\ & \geq & s - 3S \,. \end{eqnarray*} Similarly, $f$ is bounded in each of the other five open ``demispaces''. To conclude the proof, we only need to show that $f$ is bounded in \begin{math} K = I \times I \times I \end{math}. Let \begin{math} E = U \times U \times U \end{math} and let $M$ and $m$ be an upper bound and a lower bound for $f$ in $E$, respectively: then for every $(x,y,z)\in{K}$, \begin{eqnarray*} f(x, y, z) & \leq & f(x-1, y, z) + f(1, y, z) \\ & \leq & f(x-1, y-1, z) + f(x-1, 1, z) + f(1, y-1, z) + f(1, 1, z) \\ & \leq & f(x-1, y-1, z-1) + f(x-1, y-1, 1) \\ & & + f(x-1, 1, z-1) + f(x-1, 1, 1) \\ & & + f(1, y-1, z-1) + f(1, y-1, 1) \\ & & + f(1, 1, z-1) + f(1, 1, 1) \\ & \leq & 8M \end{eqnarray*} and, expanding \begin{math} f(x+1, 1, z) \end{math} and \begin{math} f(1, y, z) \end{math} once more in the variables still ranging in $I$, \begin{eqnarray*} f(x, y, z) & \geq & f(x+1, y, z) - f(1, y, z) \\ & \geq & f(x+1, y+1, z) - f(x+1, 1, z) - f(1, y, z) \\ & \geq & f(x+1, y+1, z+1) - f(x+1, y+1, 1) - f(x+1, 1, z) - f(1, y, z) \\ & \geq & f(x+1, y+1, z+1) - f(x+1, y+1, 1) \\ & & - f(x+1, 1, z-1) - f(x+1, 1, 1) \\ & & - f(1, y-1, z-1) - f(1, y-1, 1) \\ & & - f(1, 1, z-1) - f(1, 1, 1) \\ & \geq & m - 7M \,. \end{eqnarray*} \end{proof} Note that the argument of Lemma \ref{lem:subadd-levelset-d} also works if $f$ is subadditive, rather than componentwise subadditive. In this case, however, the denominator in (\ref{eq:subadd-levelset-d}) and in the thesis is 2 rather than $2^d$.
A more complex variant of it can then be stated, where $f$ is a function of $k$ variables $x_i$, the $i$th taking values in $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^{d_i}$: and the denominator would then be $2^k$. From this, a generalization of Theorem \ref{thm:subadd-bounded-1q} to the case of componentwise subadditive functions of $k$ variables, the $i$th of which takes values in $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^{d_i}$, can be derived. \section{% Fekete's lemma for componentwise subadditive functions of $d$ real variables% } \label{sec:fekete-d} We can now state and prove the main result of this paper. \begin{theorem}[Fekete's lemma in $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^d$] \label{thm:fekete-positives-d} Let $d\geq1$ and let \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive. Then: \begin{equation} \label{eq:fekete-positives-d} \lim_{(x_1, \ldots, x_d) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^{d}} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} = \inf_{x_1, \ldots, x_d \in \ensuremath{\ensuremath{\mathbb{R}}_{+}}} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,, \end{equation} which can be $-\infty$. In addition, for every permutation $\sigma$ of $\slice{1}{d}$, \begin{equation} \label{eq:fekete-positives-d-multiplelimit} \lim_{x_{\sigma(1)} \to +\infty} \ldots \lim_{x_{\sigma(d)} \to +\infty} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} = \lim_{(x_1, \ldots, x_d) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^{d}} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,, \end{equation} regardless of any of the limits being finite or (negatively) infinite. \end{theorem} The proof of (\ref{eq:fekete-positives-d}) is similar to that of \cite[Theorem 1]{capobianco2008}, with an important change; for the reader's convenience, we report it in full.
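As a quick numerical illustration (in no way a substitute for the proof): for the componentwise subadditive function $f(x_1,x_2)=\sqrt{x_1 x_2}$ of Example \ref{ex:subadd-separate-not-joint}, the ratio $f(x_1,x_2)/(x_1 x_2)$ equals $1/\sqrt{x_1 x_2}$, whose infimum over the main orthant is $0$, and indeed the ratio decreases to $0$ as both variables grow. A sketch in Python, with names of our choosing:

```python
import math

# Numerical illustration (not a proof): for the componentwise subadditive
# f(x1, x2) = sqrt(x1 * x2), the ratio f(x1, x2) / (x1 * x2) equals
# 1 / sqrt(x1 * x2), so it decreases toward its infimum 0 along the
# main orthant, in accordance with Fekete's lemma in the plane.

def f(x1, x2):
    return math.sqrt(x1 * x2)

ratios = [f(t, t) / (t * t) for t in (1.0, 10.0, 100.0, 1000.0)]
print(ratios)  # [1.0, 0.1, 0.01, 0.001]
assert all(a > b for a, b in zip(ratios, ratios[1:]))  # decreasing toward 0
```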
The proof of (\ref{eq:fekete-positives-d-multiplelimit}) relies on (\ref{eq:fekete-positives-d}), the original Fekete's lemma, and the following Lemma \ref{lem:multiple-inf}; the argument remains valid for the case of positive integer variables. \begin{lemma} \label{lem:multiple-inf} Let $u$ be a real-valued function of $d\geq1$ variables $x_i$, of arbitrary nature. Then for every permutation $\sigma$ of $\slice{1}{d}$, \begin{equation} \label{eq:multiple-inf} \inf_{x_{\sigma(1)}} \ldots \inf_{x_{\sigma(d)}} u(x_1, \ldots, x_d) = \inf_{x_1, \ldots, x_d} u(x_1, \ldots, x_d) \,. \end{equation} \end{lemma} \begin{proof}[Sketch of proof.] Let $L$ and $R$ be the left- and right-hand side of (\ref{eq:multiple-inf}), respectively. It is easy to see that $L\geq{R}$. Let now $\delta>0$: there exist $z_1,\ldots,z_d$ such that \begin{math} u(z_1, \ldots, z_d) < R + \delta \end{math}, so a fortiori $L<R+\delta$ too; as $\delta>0$ is arbitrary, $L\leq{R}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:fekete-positives-d}] For every \begin{math} i \in \{ 1, \ldots, d \} \end{math} and \begin{math} x_1, \ldots, x_d \in \ensuremath{\ensuremath{\mathbb{R}}_{+}}, \end{math} if $x_i=qt+r$ with $q\in\ensuremath{\integers_{+}}$ and $r,t\in\ensuremath{\ensuremath{\mathbb{R}}_{+}}$, then: \begin{displaymath} f(x_1, \ldots, x_i, \ldots, x_d) \leq q \cdot f(x_1, \ldots, t, \ldots, x_d) + f(x_1, \ldots, r, \ldots, x_d) \;. \end{displaymath} Fix \begin{math} t_1, \ldots, t_d \in \ensuremath{\ensuremath{\mathbb{R}}_{+}} \end{math}. For every $i\in\slice{1}{d}$ and $x_i\geq{2t_i}$ there exist unique $q_i\in\ensuremath{\integers_{+}}$ and \begin{math} r_i \in \left[ t_i, 2 t_i \right) \end{math} such that \begin{math} x_i = q_i t_i + r_i \end{math>.
For every $i\in\slice{1}{d}$ let \begin{math} y_i^{(0)} = r_i \end{math} and \begin{math} y_i^{(1)} = t_i \end{math}: by repeatedly applying subadditivity, once in each variable, we find: \begin{equation} \label{eq:fekete-d-estimate} f(x_1, \ldots, x_d) \leq \sum_{w \in \ensuremath{\{0,1\}}^d} q_1^{w_1} \cdots q_d^{w_d} \cdot f \left( y_1^{(w_1)}, \ldots, y_d^{(w_d)} \right) \;. \end{equation} Now, on the right-hand side of (\ref{eq:fekete-d-estimate}), each occurrence of $f$ has some of its arguments chosen from the $t_i$'s and the others chosen from the $r_i$'s, is multiplied by the $q_i$'s corresponding to the $t_i$'s, and is bounded from above by the constant \begin{displaymath} M = \sup \{ f(y_1, \ldots, y_d) \mid y_i \in [t_i,2t_i] \; \forall i \in \slice{1}{d} \} \,, \end{displaymath} which is finite by Theorem \ref{thm:subadd-bounded-1q}. Such boundedness is crucial for the proof: it came for free in the case of positive integer variables treated in \cite{capobianco2008}, but had to be proved for positive real variables. By dividing both sides of (\ref{eq:fekete-d-estimate}) by $x_1\cdots{x_d}$ we get: \begin{equation} \label{eq:fekete-d-estimate2} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \leq \frac{q_1 \cdots q_d}{x_1 \cdots x_d} f(t_1,\ldots,t_d) + M \cdot \sum_{w \in \ensuremath{\{0,1\}}^d \setminus \{1^d\}} \frac{q_1^{w_1} \cdots q_d^{w_d}}{x_1\cdots x_d} \,, \end{equation} where $1^d$ is the binary word of length $d$ where all the bits are 1. By construction, \begin{math} \lim_{x_i \to \infty} q_i / x_i = 1/t_i \end{math}.
Given $\varepsilon>0$, choose \begin{math} x_1^{(\varepsilon)}, \ldots, x_d^{(\varepsilon)} \in \ensuremath{\ensuremath{\mathbb{R}}_{+}} \end{math} such that, if \begin{math} x_i \geq x_i^{(\varepsilon)} \end{math} for each $i\in\slice{1}{d}$, then the following hold: \begin{enumerate} \item \begin{math} \dfrac{q_1 \cdots q_d}{x_1 \cdots x_d} \, f(t_1, \ldots, t_d) < \dfrac{f(t_1, \ldots, t_d)}{t_1 \cdots t_d} + \dfrac{\varepsilon}{2^d} \end{math}; \item \begin{math} M \cdot \dfrac{q_1^{w_1} \cdots q_d^{w_d}}{x_1\cdots x_d} < \dfrac{\varepsilon}{2^d} \end{math} for every \begin{math} w \in \ensuremath{\{0,1\}}^d \setminus \{ 1^d \} \end{math}. \end{enumerate} This is possible because \begin{math} \lim_{x_i \to \infty} q_i / x_i = 1/t_i \end{math} for every $i$, and because if \begin{math} w \neq 1^d \end{math}, then at least one of the $q_i^{w_i}$ equals 1, so that the corresponding ratio vanishes as the $x_i$'s grow. For such \begin{math} x_1, \ldots, x_d \end{math}, by (\ref{eq:fekete-d-estimate2}): \begin{displaymath} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} < \frac{f(t_1, \ldots, t_d)}{t_1 \cdots t_d} + \varepsilon \;. \end{displaymath} As $\varepsilon>0$ is arbitrary, it must be: \begin{displaymath} \limsup_{(x_1, \ldots,x_d) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^d} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \leq \frac{f(t_1, \ldots, t_d)}{t_1 \cdots t_d} \,. \end{displaymath} But the $t_i$'s are also arbitrary, hence: \begin{eqnarray*} \limsup_{(x_1, \ldots, x_d) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^d} \frac{f(x_1 ,\ldots, x_d)}{x_1 \cdots x_d} & \leq & \inf_{t_1, \ldots, t_d \in \ensuremath{\ensuremath{\mathbb{R}}_{+}}} \frac{f(t_1, \ldots, t_d)}{t_1 \cdots t_d} \\ & \leq & \liminf_{(x_1, \ldots, x_d) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^d} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \;, \end{eqnarray*} which yields (\ref{eq:fekete-positives-d}). Now, by Lemma \ref{lem:positive-variables}, for every choice of \begin{math} i, j_1, \ldots, j_k \in \slice{1}{d} \end{math} all different, and for any fixed values of the remaining variables, the function (\ref{eq:positive-variables}) is subadditive.
Then (\ref{eq:fekete-positives-d-multiplelimit}) follows from Lemma \ref{lem:multiple-inf} by repeated application of the original Fekete's lemma: \begin{eqnarray*} \lim_{x_{\sigma(1)} \to +\infty} \ldots \lim_{x_{\sigma(d)} \to +\infty} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} & = & \inf_{x_{\sigma(1)} > 0} \lim_{x_{\sigma(2)} \to +\infty} \ldots \lim_{x_{\sigma(d)} \to +\infty} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \\ & = & \ldots \\ & = & \inf_{x_{\sigma(1)} > 0} \ldots \inf_{x_{\sigma(d)} > 0} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \\ & = & \inf_{x_1, \ldots, x_d > 0} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \\ & = & \lim_{(x_1, \ldots, x_d) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^{d}} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,. \end{eqnarray*} \end{proof} From Theorem \ref{thm:fekete-positives-d} and Proposition \ref{prop:subadditive-componentwise-orthant} follows: \begin{theorem} \label{thm:fekete-Rd} Let $d\geq1$, let $w,w'\in\ensuremath{\{0,1\}}^d$ and let \begin{math} f : \ensuremath{\mathbb{R}}_w \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive. \begin{enumerate} \item \label{it:fekete-Rd-even} If $w$ contains evenly many 1s, then: \begin{equation} \lim_{(x_1, \ldots, x_d) \to \ensuremath{\mathcal{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} = \inf_{(x_1, \ldots, x_d) \in \ensuremath{\mathbb{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \end{equation} is not $+\infty$, but can be $-\infty$. \item \label{it:fekete-Rd-odd} If $w$ contains oddly many 1s, then: \begin{equation} \lim_{(x_1, \ldots, x_d) \to \ensuremath{\mathcal{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} = \sup_{(x_1, \ldots, x_d) \in \ensuremath{\mathbb{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \end{equation} is not $-\infty$, but can be $+\infty$.
\item \label{it:fekete-Rd-all} Suppose now $w$ contains evenly many 1s, $w'$ differs from $w$ in exactly one coordinate $i$, and $f$ is defined and componentwise subadditive in \begin{math} \ensuremath{\mathbb{R}}_w \cup \ensuremath{\mathbb{R}}_{w'} \cup U_{w,w'} \end{math}, where: \begin{displaymath} U_{w,w'} = \{ x \in \ensuremath{\mathbb{R}}^d \mid x_i=0, (x_1, \ldots, x_{i-1}, 1, x_{i+1}, \ldots, x_d) \in \ensuremath{\mathbb{R}}_w \cup \ensuremath{\mathbb{R}}_{w'} \} \end{displaymath} is the boundary between $\ensuremath{\mathbb{R}}_w$ and $\ensuremath{\mathbb{R}}_{w'}$. Then: \begin{equation} \lim_{(x_1, \ldots, x_d) \to \ensuremath{\mathcal{R}}_{w'}} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \leq \lim_{(x_1, \ldots, x_d) \to \ensuremath{\mathcal{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,; \end{equation} consequently, both limits are finite. \end{enumerate} \end{theorem} For $d=1$ we recover \cite[Theorem 6.6.1]{hille1948}. To prove Theorem \ref{thm:fekete-Rd}, we make use of the following result, whose proof we leave to the reader. \begin{lemma} \label{lem:subadd-monoid} Let $S$ be a semigroup and \begin{math} f : S \to \ensuremath{\mathbb{R}} \end{math} be a subadditive function. If $S$ is a monoid with identity $e$, then $f(e)\geq0$. If, in addition, $S$ is a group, then \begin{math} f(x) + f(x^{-1}) \geq 0 \end{math} for every $x\in{S}$. \end{lemma} \begin{proof}[Proof of Theorem \ref{thm:fekete-Rd}] For \begin{math} (x_1, \ldots, x_d) \in \ensuremath{\mathcal{R}}_w \end{math} and $i\in\slice{1}{d}$ let $x_{w,i}$ and $f_w$ be defined as in Proposition \ref{prop:subadditive-componentwise-orthant}.
If $w$ contains evenly many 1s, then \begin{math} x_1 \cdots x_d = x_{w,1} \cdots x_{w,d} \end{math} and: \begin{eqnarray*} \lim_{x \to \ensuremath{\mathcal{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} & = & \lim_{x_w \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^d} \frac{f_w(x_{w,1}, \ldots, x_{w,d})}{x_{w,1} \cdots x_{w,d}} \\ & = & \inf_{x_w \in \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d} \frac{f_w(x_{w,1}, \ldots, x_{w,d})}{x_{w,1} \cdots x_{w,d}} \\ & = & \inf_{x \in \ensuremath{\mathbb{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,. \end{eqnarray*} If $w$ contains oddly many 1s, then \begin{math} x_1 \cdots x_d = - x_{w,1} \cdots x_{w,d} \end{math} and: \begin{eqnarray*} \lim_{x \to \ensuremath{\mathcal{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} & = & - \lim_{x_w \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^d} \frac{f_w(x_{w,1}, \ldots, x_{w,d})}{x_{w,1} \cdots x_{w,d}} \\ & = & - \inf_{x_w \in \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d} \frac{f_w(x_{w,1}, \ldots, x_{w,d})}{x_{w,1} \cdots x_{w,d}} \\ & = & \sup_{x \in \ensuremath{\mathbb{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,. \end{eqnarray*} Suppose now $w$ has evenly many 1s and $w'$ differs from $w$ only in component $i$, and $f$ is defined and componentwise subadditive in \begin{math} \ensuremath{\mathbb{R}}_w \cup \ensuremath{\mathbb{R}}_{w'} \cup U_{w,w'} \end{math}. Then for every \begin{math} x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d \end{math} the function \begin{math} \lambdaxt{x_i}{f(x_1, \ldots, x_i, \ldots, x_d)} \end{math} is subadditive on $\ensuremath{\mathbb{R}}$; hence, by Lemma \ref{lem:subadd-monoid}, for every $x_i\in\ensuremath{\mathbb{R}}$, \begin{displaymath} f(x_1, \ldots, x_i, \ldots, x_d) + f(x_1, \ldots, -x_i, \ldots, x_d) \geq 0 \,.
\end{displaymath} Then: \begin{eqnarray*} & & \lim_{x \to \ensuremath{\mathcal{R}}_w} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} - \lim_{x' \to \ensuremath{\mathcal{R}}_{w'}} \frac{f(x'_1, \ldots, x'_d)}{x'_1 \cdots x'_d} \\ & = & \lim_{x \to \ensuremath{\mathcal{R}}_w} \frac{f(x_1, \ldots, x_i, \ldots, x_d)}{x_1 \cdots x_d} + \lim_{x \to \ensuremath{\mathcal{R}}_{w}} \frac{f(x_1, \ldots, -x_i, \ldots, x_d)}{x_1 \cdots x_d} \\ & = & \lim_{x \to \ensuremath{\mathcal{R}}_w} \frac{ f(x_1, \ldots, x_i, \ldots, x_d) + f(x_1, \ldots, -x_i, \ldots, x_d) }{ x_1 \cdots x_d } \end{eqnarray*} is nonnegative. The last step is valid because the two limits on the second line are either finite or $-\infty$. \end{proof} As every subnet of a convergent net converges to the same limit, we get: \begin{corollary} \label{cor:fekete-subsequence} Let \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^d \to \ensuremath{\mathbb{R}} \end{math} be componentwise subadditive and let $T$ be either $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$ or $\ensuremath{\integers_{+}}$. For every \begin{math} i \in \slice{1}{d} \end{math} let \begin{math} x_i(t) : T \to \ensuremath{\ensuremath{\mathbb{R}}_{+}} \end{math} satisfy \begin{math} \lim_{t \to +\infty} x_i(t) = +\infty \end{math}. Then: \begin{equation} \label{eq:fekete-subsequence} \lim_{t \to +\infty} \frac{f(x_1(t), \ldots, x_d(t))}{x_1(t) \cdots x_d(t)} = \inf_{x_1, \ldots, x_d > 0} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,, \end{equation} and also \begin{equation} \label{eq:fekete-subsequence-inf} \inf_{t \in T} \frac{f(x_1(t), \ldots, x_d(t))}{x_1(t) \cdots x_d(t)} = \inf_{x_1, \ldots, x_d > 0} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,. \end{equation} In particular, \begin{equation} \label{eq:fekete-n-d} \lim_{n \to \infty} \frac{f(n, \ldots, n)}{n^d} = \inf_{n \geq 1} \frac{f(n, \ldots, n)}{n^d} = \inf_{x_1, \ldots, x_d > 0} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} \,.
\end{equation} \end{corollary} \begin{proof}[Sketch of proof] We only remark that (\ref{eq:fekete-subsequence-inf}) follows from (\ref{eq:fekete-positives-d}) and: \begin{eqnarray*} \inf_{x_1, \ldots, x_d > 0} \frac{f(x_1, \ldots, x_d)}{x_1 \cdots x_d} & \leq & \inf_{t \in T} \frac{f(x_1(t), \ldots, x_d(t))}{x_1(t) \cdots x_d(t)} \\ & \leq & \liminf_{t \to \infty} \frac{f(x_1(t), \ldots, x_d(t))}{x_1(t) \cdots x_d(t)} \,. \end{eqnarray*} \end{proof} Note that, in general, even if $f$ is componentwise subadditive on $\ensuremath{\ensuremath{\mathbb{R}}_{+}}^d$, \begin{math} \lambdaxt{t}{f(t, \ldots, t)} \end{math} is not subadditive on $\ensuremath{\ensuremath{\mathbb{R}}_{+}}$: a simple example is \begin{math} f(x_1, x_2) = x_1 \cdot x_2 \end{math}. This provides further evidence that Theorems \ref{thm:fekete-positives-d} and \ref{thm:fekete-Rd} are not special cases of \cite[Theorem 6.6.1]{hille1948}. A real-valued function defined on a semigroup $(S,\cdot)$ is \emph{superadditive} if it satisfies \begin{math} f(x \cdot y) \geq f(x) + f(y) \end{math} for every $x,y\in{S}$. As $f$ is superadditive if and only if $-f$ is subadditive, an analogue of Theorem \ref{thm:fekete-positives-d} holds for componentwise superadditive functions, provided one swaps the roles of $\inf$ and $\sup$ and those of $-\infty$ and $+\infty$. If $f$ is superadditive in some variables and subadditive in other variables, however, Theorem \ref{thm:fekete-positives-d} does not hold. \begin{example} \label{ex:superadd-subadd} The function \begin{math} f : \ensuremath{\ensuremath{\mathbb{R}}_{+}}^2 \to \ensuremath{\mathbb{R}} \end{math} defined by \begin{math} f(x_1, x_2) = x_1^2 \sqrt{x_2} \end{math} is superadditive in $x_1$ and subadditive in $x_2$, and \begin{math} f(x_1, x_2) / x_1 x_2 = x_1 / \sqrt{x_2} \end{math}. 
But \begin{math} \lim_{(x_1, x_2) \to \ensuremath{\ensuremath{\mathcal{R}}_{+}}^2} f(x_1, x_2) / x_1 x_2 \end{math} does not exist, because for every $y,R>0$ there exist \begin{math} x_1, x_2 > R \end{math} such that \begin{math} x_1 / \sqrt{x_2} = y \end{math}. Also, \begin{math} \lim_{x_1 \to \infty} \lim_{x_2 \to \infty} \dfrac{f(x_1, x_2)}{x_1 \cdot x_2} = 0 \end{math} but \begin{math} \lim_{x_2 \to \infty} \lim_{x_1 \to \infty} \dfrac{f(x_1, x_2)}{x_1 \cdot x_2} = +\infty \end{math}. \end{example} As a final remark for this section, the following statement appears in the literature as an extension to arbitrary dimension of \cite[Theorem 6.1.1]{hille1948}: \begin{proposition}[{cf. \cite[Theorem 16.2.9]{kuczma2009}}] \label{prop:fekete-d-literature} Let \begin{math} f : \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}} \end{math} be subadditive \emph{in the variable $\mathbf{x}\in\ensuremath{\mathbb{R}}^d$}. Then for every $\mathbf{x}\in\ensuremath{\mathbb{R}}^d$ the following limit exists: \begin{displaymath} L_{\mathbf{x}} = \lim_{t \to +\infty} \frac{f(t\mathbf{x})}{t} \,. \end{displaymath} \end{proposition} This, however, is not so much an extension as a \emph{corollary}. If \begin{math} f : \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}} \end{math} satisfies \begin{math} f(\mathbf{x} + \mathbf{y}) \leq f(\mathbf{x}) + f(\mathbf{y}) \end{math} for every \begin{math} \mathbf{x}, \mathbf{y} \in \ensuremath{\mathbb{R}}^d \end{math}, then obviously \begin{math} g_{\mathbf{x}}(t) = f(t\mathbf{x}) \end{math} satisfies \begin{math} g_{\mathbf{x}}(s + t) \leq g_{\mathbf{x}}(s) + g_{\mathbf{x}}(t) \end{math} for every $s,t>0$; and $L_{\mathbf{x}}$ is simply the limit of $g_{\mathbf{x}}(t)/t$ according to \cite[Theorem 6.1.1]{hille1948}. On the other hand, Theorem \ref{thm:fekete-Rd} is an extension.
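The failure of a joint limit in this example is easy to observe numerically. The following Python sketch (an illustration, not part of the text) evaluates the ratio $f(x_1,x_2)/x_1x_2=x_1/\sqrt{x_2}$ along the two iterated limits and along a curve on which it is constant:

```python
import math

# f(x1, x2) = x1^2 * sqrt(x2) is superadditive in x1, subadditive in x2,
# and f(x1, x2) / (x1 * x2) = x1 / sqrt(x2)
def ratio(x1, x2):
    return x1**2 * math.sqrt(x2) / (x1 * x2)

# iterated limit with x2 -> infinity first: for fixed x1 the ratio tends to 0
inner = [ratio(3.0, 10.0**n) for n in range(1, 7)]
assert all(b < a for a, b in zip(inner, inner[1:]))

# iterated limit with x1 -> infinity first: for fixed x2 the ratio diverges
outer = [ratio(10.0**n, 3.0) for n in range(1, 7)]
assert all(b > a for a, b in zip(outer, outer[1:]))

# along the curve x1 = sqrt(x2) the ratio is constantly 1, so no joint limit exists
assert abs(ratio(100.0, 10000.0) - 1.0) < 1e-12
```

The three behaviors (decay to $0$, divergence to $+\infty$, and a constant value along $x_1=\sqrt{x_2}$) together rule out any joint limit over $\ensuremath{\ensuremath{\mathcal{R}}_{+}}^2$.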
\section{A comparison with the Ornstein-Weiss lemma} \label{sec:owlemma} A group $G$ is \emph{amenable} if there exist a directed set \begin{math} \ensuremath{\mathcal{U}} = (U, \preceq) \end{math} and a net \begin{math} \{ F_x \}_{x \in U} \end{math} of finite nonempty subsets of $G$ such that: \begin{equation} \label{eq:folner} \lim_{x \to \ensuremath{\mathcal{U}}} \frac{| g F_x \setminus F_x |}{|F_x|} = 0 \;\; \textrm{for every } g \in G \,. \end{equation} A net as in (\ref{eq:folner}) is called a \emph{(left) F{\o}lner net} on the group $G$, after the Danish mathematician Erling F{\o}lner, who introduced such nets in \cite{folner1955}. Every abelian group is amenable: for a proof, see \cite[Chapter 4]{csc10}. \begin{proposition}[Ornstein-Weiss lemma; cf. \cite{ornstein-weiss}] \label{prop:ornstein-weiss} Let $G$ be an amenable group and let \begin{math} f : \ensuremath{\ensuremath{\mathcal{P}}\Fcal}(G) \to \ensuremath{\mathbb{R}} \end{math} be a function which: \begin{enumerate} \item is subadditive with respect to set union, that is, \begin{math} f(A \cup B) \leq f(A) + f(B) \end{math} for every \begin{math} A, B \in \ensuremath{\ensuremath{\mathcal{P}}\Fcal}(G) \end{math}; and \item satisfies \begin{math} f(A) = f(gA) \end{math} for every $A\in\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(G)$ and $g\in{G}$. \end{enumerate} Then for every directed set \begin{math} \ensuremath{\mathcal{U}} = (U, \preceq) \end{math} and every left F{\o}lner net \begin{math} \ensuremath{\mathcal{F}} = \{ F_x \}_{x \in U} \end{math} on $G$, \begin{equation} \label{eq:ornstein-weiss} L = \lim_{x \to \ensuremath{\mathcal{U}}} \frac{f(F_x)}{|F_x|} \end{equation} exists, and does not depend on the choice of $\ensuremath{\mathcal{U}}$ and $\ensuremath{\mathcal{F}}$. \end{proposition} The Ornstein-Weiss lemma says that, for ``well-behaved'' functions on amenable groups, a notion of \emph{asymptotic average} is well defined.
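The independence of the limit (\ref{eq:ornstein-weiss}) from the F{\o}lner net can be illustrated on $G=\ensuremath{\mathbb{Z}}$. The following Python sketch (an illustration, not part of the text) uses the function $f(A)=|A|+(\text{number of maximal runs of }A)$, our own choice, which is easily checked to be subadditive under union (each run of $A\cup B$ contains the run of $A$ or of $B$ holding its leftmost element) and translation invariant; two different F{\o}lner sequences yield the same asymptotic average:

```python
def blocks(A):
    # number of maximal runs of consecutive integers in the finite set A
    A = sorted(A)
    return sum(1 for i, a in enumerate(A) if i == 0 or A[i - 1] != a - 1)

def f(A):
    # f(A) = |A| + (number of runs): subadditive under union and
    # invariant under translation of A
    return len(A) + blocks(A)

# two different Folner sequences on Z yield the same asymptotic average, 1
avg1 = [f(range(1, n + 1)) / n for n in (10, 100, 1000)]
avg2 = [f(range(-n, n + 1)) / (2 * n + 1) for n in (10, 100, 1000)]
assert abs(avg1[-1] - 1) < 0.01 and abs(avg2[-1] - 1) < 0.01
```

Here the common value of the limit is $1$, and the term $\mathrm{blocks}(A)/|A|$ is exactly the boundary-type contribution that the F{\o}lner condition forces to vanish.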
A detailed proof of Proposition \ref{prop:ornstein-weiss} is given by F. Krieger in \cite{krieger2010}. \begin{example} \label{ex:folner-entropy} Let $G$ be an amenable group and let $A$ be a finite set with $a\geq2$ elements. The \emph{shift} by $g\in{G}$ is the function \begin{math} \sigma_g : A^G \to A^G \end{math} defined by \begin{math} \sigma_g(c)(x) = c(g \cdot x) \end{math} for every $c\in{A^G}$ and $x\in{G}$. The notions of subshift and of allowed pattern with support $S\in\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(G)$ are extended naturally from those of Example \ref{ex:subadd-subshift-logoutput}. Denoting by $\ensuremath{\mathcal{A}}_X(S)$ the number of allowed patterns for $X$ with support $S$, and with the convention that the unique \emph{empty pattern} $e:\emptyset\to{A}$ appears in every configuration, we have for every $S,T\in\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(G)$: \begin{displaymath} \ensuremath{\mathcal{A}}_X(S) \leq \ensuremath{\mathcal{A}}_X(S \cup T) \leq \ensuremath{\mathcal{A}}_X(S) \cdot \ensuremath{\mathcal{A}}_X(T \setminus S) \leq \ensuremath{\mathcal{A}}_X(S) \cdot \ensuremath{\mathcal{A}}_X(T) \,. \end{displaymath} Indeed, every allowed pattern on $S$ (resp., $T\setminus{S}$) can be extended to at least one allowed pattern on $S\cup{T}$ (resp., $T$); moreover, each allowed pattern on $S\cup{T}$ is determined by its restrictions to $S$ and $T\setminus{S}$, although joining an allowed pattern over $S$ and an allowed pattern over $T\setminus{S}$ does not necessarily yield an allowed pattern on $S\cup{T}$. Hence, \begin{math} f(S) = \log_a \ensuremath{\mathcal{A}}_X(S) \end{math} is subadditive on $\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(G)$, and clearly satisfies $f(gS)=f(S)$ for every $g\in{G}$ and $S\in\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(G)$.
The \emph{entropy} of $X$ can then be defined as: \begin{equation} \label{eq:entropy-subshift} h(X) = \lim_{x \to \ensuremath{\mathcal{U}}} \frac{\log_a \ensuremath{\mathcal{A}}_X(F_x)}{|F_x|} \end{equation} where \begin{math} \ensuremath{\mathcal{U}} = (U, \preceq) \end{math} is an arbitrary directed set and \begin{math} \{ F_x \}_{x \in U} \end{math} is an arbitrary F{\o}lner net on $G$. \end{example} As the sets \begin{math} E_{x_1, \ldots, x_d} = \prod_{i=1}^d \slice{1}{x_i} \end{math} with \begin{math} x_1, \ldots, x_d \in \ensuremath{\integers_{+}} \end{math} constitute a F{\o}lner net on $\ensuremath{\mathbb{Z}}^d$, defining the entropy of a $d$-dimensional subshift according to either Example \ref{ex:fekete-entropy} or Example \ref{ex:folner-entropy} yields the same result. Nevertheless, the Ornstein-Weiss lemma does not generalize Fekete's lemma, nor is it possible to prove the latter from the former, as the limit (\ref{eq:ornstein-weiss}) is only ensured to exist, not to coincide with any specific value. In addition, even if \begin{math} f : \ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{R}} \end{math} is subadditive, the ``natural'' conversion \begin{equation} \label{eq:subadd-ZtoPF} g(A) = \ifthenelse{A \neq \emptyset}{f(|A|)}{0}{,} \end{equation} where $|A|$ is the number of elements of $A$, is invariant by translations, but need not be subadditive on $\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(\ensuremath{\mathbb{Z}})$, the main reason being that $|A\cup{B}|$ need not equal $|A|+|B|$. Moreover, while invariance by translations is essential in the Ornstein-Weiss lemma, a translate of a subadditive function need not be subadditive. \begin{example} \label{ex:ZtoPF-not} The function \begin{math} f(n) = n \!\! \mod 2 \end{math} is easily seen to be subadditive on $\ensuremath{\mathbb{Z}}$.
But the function $g$ defined from $f$ by (\ref{eq:subadd-ZtoPF}) is not subadditive on $\ensuremath{\ensuremath{\mathcal{P}}\Fcal}(\ensuremath{\mathbb{Z}})$, because if \begin{math} U = \{ 1, 2 \} \end{math} and \begin{math} V = \{ 2, 3 \} \end{math}, then $g(U\cup{V})=1$ and $g(U)=g(V)=0$. Note that $h(n)=f(n+1)$ is not subadditive, because $h(1)=0$ but $h(2)=1$. \end{example} \section{Conclusions} \label{sec:conclusions} We have discussed an extension of the notion of subadditivity in the case of many independent variables. In this context, we have proved a nontrivial extension of the classical Fekete's lemma to the case of functions of $d\geq1$ real variables, which recovers the original statement for $d=1$, and which is more general than other extensions already present in the literature. While doing so, we have also proved that these componentwise subadditive functions satisfy the important property of being bounded on compact subsets, the case $d=1$ being already known from the literature. We believe that our results can be of interest for researchers in economics, theory of dynamical systems, and mathematical analysis. \bibliographystyle{plain}
https://arxiv.org/abs/2103.08498
On the nilradical of a Leibniz algebra
The purpose of this short note is to correct an error which appears in the literature concerning Leibniz algebras $L$: namely, that $N(L/I)=N(L)/I$ where $N(L)$ is the nilradical of $L$ and $I$ is the Leibniz kernel.
\section{Introduction} \medskip An algebra $L$ over a field $F$ is called a {\em Leibniz algebra} if, for every $x,y,z \in L$, we have \[ [x,[y,z]]=[[x,y],z]-[[x,z],y]. \] In other words, the right multiplication operator $R_x : L \rightarrow L : y\mapsto [y,x]$ is a derivation of $L$. As a result, such algebras are sometimes called {\it right} Leibniz algebras, and there is a corresponding notion of {\it left} Leibniz algebra. Clearly, the opposite of a right Leibniz algebra is a left Leibniz algebra, so, for our purposes, it does not matter which is used. Every Lie algebra is a Leibniz algebra and every Leibniz algebra satisfying $[x,x]=0$ for every element is a Lie algebra. \par Put $I=span\{x^2:x\in L\}$. Then $I$ is an ideal of $L$ and $L/I$ is a Lie algebra called the {\em liesation} of $L$. We define the following series: \[ L^1=L,\; L^{k+1}=[L^k,L] \hbox{ and } L^{(1)}=L,\; L^{(k+1)}=[L^{(k)},L^{(k)}] \hbox{ for all } k=1,2, \ldots \] Then $L$ is {\em nilpotent} (resp. {\em solvable}) if $L^n=0$ (resp. $ L^{(n)}=0$) for some $n \in \N$. The {\em nilradical}, $N(L)$, (resp. {\em radical}, $R(L)$) is the largest nilpotent (resp. solvable) ideal of $L$. \par In \cite{gorb} it is claimed that $R(L/I)=R(L)/I$ and $N(L/I)=N(L)/I$. However, whereas the former is clearly true, the latter is false in general. The claim appears in the proof of \cite[Proposition 4]{gorb}, which has two corollaries. Although this paper appears only to have been published on arXiv, it has been quite widely cited. Moreover, the results following from this assertion have been quoted in \cite[Proposition 4.4]{dms}, \cite[Propositions 3.3, 3.4 and Corollaries 3.5,3.6]{rak} and referenced to \cite{gorb}, and the same incorrect assertion is used to prove two results in a recent book (\cite[Proposition 2.1 and 2.2]{aor}). The results which Gorbatsevich uses this assertion to prove are true, as are \cite[Proposition 2.1 and 2.2]{aor}.
The purpose of this short note is to give a correct characterisation of $N(L/I)$ and to provide or reference proofs for the results in which the incorrect assertion is used. \par Throughout, $L$ will denote a finite-dimensional (right) Leibniz algebra over a field $F$. The {\em Frattini ideal} of $L$ is the largest ideal of $L$ contained in every maximal subalgebra of $L$. We will denote algebra direct sums by $\oplus$ and vector space direct sums by $\dot{+}$. \section{The nilradical} The literature concerning Leibniz algebras is quite diverse and a number of results have been duplicated, so the survey articles \cite{feld}, \cite{gorb} and the new book \cite{aor} are useful. The nilradical of a Leibniz algebra is a well-defined object. \begin{theor} The sum of two nilpotent ideals of a Leibniz algebra is nilpotent. \end{theor} \begin{proof} See \cite[Theorem 5.14]{feld} or \cite[Lemma 1.5]{schunck}. \end{proof} \begin{coro} Any Leibniz algebra $L$ has a maximal nilpotent ideal containing all nilpotent ideals of $L$. \end{coro} \begin{proof} A valid proof of this can also be found as \cite[Corollary 4]{bosko}. Note that the proof given in \cite[Proposition 1]{gorb} and \cite[Proposition 2.1]{aor} is incorrect. \end{proof} \medskip Next, we show that $N(L/I)\neq N(L)/I$ in general. \begin{ex} Let $ L = Fx + Fx^2$, where $[x^2,x] = x^2$, be the two-dimensional solvable cyclic Leibniz algebra. Then $I = Fx^2 = N(L)$, while $L/I$ is the nilradical of $L/I$; hence $N(L/I)\neq N(L)/I$. \end{ex} The fact that $N(L)=I$ is not significant in the above example as the following class of algebras shows. \begin{ex} Let $L=Fx_1+\ldots +Fx_r+Fx_{r+1}+\ldots +Fx_n+Fy$ where $[x_i,y]=x_i$ for $r+1\leq i\leq n$ and all other products are zero. Then $I=Fx_{r+1}+\ldots +Fx_n$, $N(L)=Fx_1+\ldots +Fx_n$ and $N(L/I)=L/I$. \end{ex} Nor is the fact that $N(L/I)=L/I$ significant in the above examples, as can be seen by taking the direct sum of them with a simple Lie algebra. Next we look for more information on $N(L/I)$.
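The claims made about the two-dimensional cyclic Leibniz algebra can be verified mechanically from its structure constants. The following Python sketch (an illustration, not part of the note) works in the basis $e_1=x$, $e_2=x^2$ and checks the right Leibniz identity on all basis triples, that the squares span $Fx^2$, and that the lower central series stabilizes at $Fx^2$ while the derived series vanishes:

```python
def bracket(a, b):
    # bilinear bracket of L = Fx + Fx^2 in the basis e1 = x, e2 = x^2:
    # [e1, e1] = e2, [e2, e1] = e2, [e1, e2] = [e2, e2] = 0
    table = {(0, 0): [0, 1], (1, 0): [0, 1], (0, 1): [0, 0], (1, 1): [0, 0]}
    out = [0, 0]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                out[k] += a[i] * b[j] * table[(i, j)][k]
    return out

basis = [[1, 0], [0, 1]]

# right Leibniz identity [x,[y,z]] = [[x,y],z] - [[x,z],y] on all basis triples
for x in basis:
    for y in basis:
        for z in basis:
            lhs = bracket(x, bracket(y, z))
            rhs = [u - v for u, v in zip(bracket(bracket(x, y), z),
                                         bracket(bracket(x, z), y))]
            assert lhs == rhs

# the square of a*x + b*x^2 is (a^2 + a*b) x^2, so I = span{v^2 : v in L} = Fx^2
for a in range(-3, 4):
    for b in range(-3, 4):
        assert bracket([a, b], [a, b]) == [0, a * a + a * b]

# L^2 = Fx^2 and [x^2, x] = x^2, so L^k = Fx^2 for all k >= 2: L is not nilpotent;
assert bracket([0, 1], [1, 0]) == [0, 1]
# but [x^2, x^2] = 0, so L^{(3)} = [Fx^2, Fx^2] = 0: L is solvable.
assert bracket([0, 1], [0, 1]) == [0, 0]
```

This confirms at once that $L$ is a Leibniz algebra, that $I=Fx^2$ is its nilradical, and that $L$ is solvable but not nilpotent.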
First note the following. \begin{lemma}\label{1} If $I\subseteq \phi(L)$ then $N(L/I)=N(L)/I$. \end{lemma} \begin{proof} Clearly $N(L)/I\subseteq N(L/I)=K/I,$ say. But $K$ is nilpotent, by \cite[Theorem 5.5]{barnes}, so $K\subseteq N(L)$ and the two sides are equal. \end{proof} \begin{lemma}\label{2} If $I \not \subseteq \phi(L)$ then there is a subalgebra $B$ of $L$ such that $L=I+B$ and $I\cap B\subseteq \phi(B)$. \end{lemma} \begin{proof} This is \cite[Lemma 7.1]{frat}. \end{proof} \medskip Then, using the same notation as in Lemma \ref{2}, we have the following. \begin{theor} The nilradical of $L/I$ is $N(L/I)=(I+N(B))/I$ and this is the same as $N(L)/I$ if and only if $R_n|_I$ is nilpotent for all $n\in N(B)$. \end{theor} \begin{proof} We have \[ N\left(\frac{L}{I}\right)\cong N\left(\frac{B}{I\cap B}\right)=\frac{N(B)}{I\cap B}=\frac{N(B)}{I\cap N(B)}\cong \frac{I+N(B)}{I}. \] But $(I+N(B))/I\subseteq N(L/I)$, so equality results. \par Now, $I+N(B)=N(L)$ if and only if $N(B)$ acts nilpotently on the right on $I$. \end{proof} \section{Some results where $N(L/I)=N(L)/I$ was used} First we have the following analogue of a well-known result for Lie algebras. The only references to this in the literature of which we are aware are \cite[Proposition 4]{gorb} and \cite[Proposition 2.2]{aor}. However, the proof in each case is incorrect, though the result is true. \begin{propo}\label{3} Let $L$ be a Leibniz algebra over a field of characteristic zero, $R$ be its radical and $N$ its nilradical. Then $[L,R]\subseteq N$. \end{propo} \begin{proof} Since $N(L/\phi(L))=N(L)/\phi(L)$ we can assume that $L$ is $\phi$-free. Then $L=$Asoc$(L)\dot{+} V$, where $V=S\oplus Z(V)$ and $S$ is a semisimple Lie algebra, by \cite[Corollary 2.9]{stit}. Also $R=$Asoc$(L)+Z(V)$ and $N=$Asoc$(L)$, by \cite[Theorem 2.4]{stit}, from which the result is clear. \end{proof} \medskip The above result has the following corollary which appears in several places in the literature.
It occurs as \cite[Corollaries 5 and 6]{gorb} and \cite[Corollaries 2.2 and 2.3]{aor}, but the proofs are incorrect. It appears with correct proofs as \cite[Corollary 6.8]{feld}, \cite[Corollary 3]{pak}, \cite[Theorem 4]{ao}, \cite[Theorem 2]{ns} and \cite[Theorem 2.6]{barnes}. \begin{coro}(\cite[Corollary 5]{gorb}) With the same notation as in Proposition \ref{3}, $[R,R]\subseteq N$; in particular, $[R,R]$ is nilpotent; in fact, $L$ is solvable if and only if $[L,L]$ is nilpotent. \end{coro}
https://arxiv.org/abs/2006.14228
Primitive point packing
A point in the $d$-dimensional integer lattice $\mathbb{Z}^d$ is primitive when its coordinates are relatively prime. Two primitive points are multiples of one another when they are opposite, and for this reason, we consider half of the primitive points within the lattice, the ones whose first non-zero coordinate is positive. We solve the packing problem that asks for the largest possible number of such points whose absolute values of any given coordinate sum to at most a fixed integer $k$. We present several consequences of this result at the intersection of geometry, number theory, and combinatorics. In particular, we obtain an explicit expression for the largest possible diameter of a lattice zonotope contained in the hypercube $[0,k]^d$ and, conjecturally of any lattice polytope in that hypercube.
\section{Introduction}\label{PPP.sec.0} Lattice polytopes appear in many branches of mathematics, as for instance in algebraic geometry where they are associated with certain toric varieties. It is noteworthy that, in relation with the study of these and similar objects, methods from combinatorics and algebraic geometry have been beneficial to both fields \cite{AdiprasitoHuhKatz2018,BogartHaaseHeringLorenzNillPaffenholzRoteSantosSchenck2015,DickensteinDiRoccoPiene2009,Huh2012,Pommersheim1993,Stanley1980}. Another branch of mathematics where lattice polytopes turn up is combinatorial optimization. In particular, they encode the feasible domain of a number of optimization problems \cite{AprileCevallosFaenza2018,DelPiaMichini2016,Naddef1989,NillZiegler2011}. In this other context, an important quantity is the largest diameter a polytope can possibly have in terms of a fixed combinatorial or geometric property. Here, by the diameter of a polytope, we mean the diameter of the graph made up of its vertices and edges. This quantity is tightly linked to the complexity of pivoting methods for solving the corresponding optimization problem and it has been extensively studied, for instance as a function of the dimension and number of facets of the polytope \cite{KalaiKleitman1992,KleeWalkup1967,Naddef1989,Sukegawa2019}. However, as the disproval of the Hirsch conjecture \cite{Santos2012} shows, its behavior remains elusive. Yet another field where lattice polytopes appear, lies at the intersection of geometry and number theory \cite{Andrews1963,BrionVergne1997,Ehrhart1967a,Ehrhart1967b}. For instance, the density of primitive points---points whose coordinates are relatively prime---within the lattice $\mathbb{Z}^d$ is equal to $1/\zeta(d)$, where $\zeta$ denotes Riemann's zeta function \cite{HardyWright1938,KranakisPocchiola1994,Nymann1970}. 
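The density $1/\zeta(d)$ of primitive points is already visible in small exhaustive counts. The following Python sketch (an illustration, not part of the paper) counts the primitive points in $[1,N]^2$ and compares the resulting fraction with $1/\zeta(2)=6/\pi^2\approx0.6079$:

```python
from math import gcd, pi

# fraction of primitive points among the lattice points of [1, N]^2;
# for d = 2 the density of primitive points is 1/zeta(2) = 6/pi^2 = 0.6079...
N = 1000
primitive = sum(1 for a in range(1, N + 1)
                  for b in range(1, N + 1) if gcd(a, b) == 1)
density = primitive / N**2
assert abs(density - 6 / pi**2) < 0.01
```

The error of this finite count is of order $(\log N)/N$, so already at $N=1000$ the empirical density agrees with $6/\pi^2$ to about three decimal places.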
From that observation, a sharp estimate for the largest possible diameter of a lattice zonotope contained in the hypercube $[0,k]^d$ when $d$ is fixed and $k$ grows large has been derived recently \cite{DezaPourninSukegawa2020}. The case of zonotopes---polytopes obtained as a Minkowski sum of line segments---is particularly interesting in relation to the problem of the diameter of polytopes. Indeed, it is conjectured that the largest possible diameter of a lattice polytope contained in the hypercube $[0,k]^d$ is always achieved by a zonotope \cite{DezaManoussakisOnn2018}. In the case of lattice zonotopes, the problem can be reformulated as follows in terms of primitive points. Let $\mathbb{P}^d$ be the set of the primitive points contained in $\mathbb{Z}^d$. Consider a subset $\mathcal{X}$ of $\mathbb{P}^d$ all of whose elements have a positive first non-zero coordinate, and call $$ \kappa(\mathcal{X})=\max\left\{\sum_{x\in\mathcal{X}}|x_i|:1\leq{i}\leq{d}\right\}\!\mbox{,} $$ where the coordinates of a point $x$ from $\mathbb{R}^d$ are denoted by $x_1$ to $x_d$. In other words, $\kappa(\mathcal{X})$ is the sum over all the points in $\mathcal{X}$ of the absolute value of one of their coordinates for which that sum is the greatest. Using this quantity, we can ask the question of \emph{how large the cardinality of $\mathcal{X}$ can be under the requirement that $\kappa(\mathcal{X})$ does not exceed a given integer $k$}. Incidentally, the answer to this primitive point packing problem at the intersection of geometry, number theory, and combinatorics coincides with the largest possible diameter of a lattice zonotope contained in the hypercube $[0,k]^d$. In this article, we settle this problem completely by giving an explicit expression for its solution, thus also solving the question on the largest possible diameter of lattice zonotopes and, conjecturally, the more general one about lattice polytopes. Let us describe our main results in more detail.
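The definitions can be made concrete on a tiny instance. The following Python sketch (an illustration, not part of the paper) enumerates the primitive points of the $1$-norm ball $B(2,2)$ whose first non-zero coordinate is positive, and evaluates $\kappa$ on that set:

```python
from math import gcd
from functools import reduce
from itertools import product

def half_primitive_points(d, p):
    # primitive points of the 1-norm ball B(d, p) whose first
    # non-zero coordinate is positive
    pts = []
    for x in product(range(-p, p + 1), repeat=d):
        if sum(map(abs, x)) <= p and reduce(gcd, map(abs, x)) == 1:
            if next(c for c in x if c != 0) > 0:
                pts.append(x)
    return pts

def kappa(X):
    # the largest, over the coordinates, of the sums of absolute values over X
    d = len(X[0])
    return max(sum(abs(x[i]) for x in X) for i in range(d))

X = half_primitive_points(2, 2)
assert sorted(X) == [(0, 1), (1, -1), (1, 0), (1, 1)]
assert kappa(X) == 3
```

This four-point set has $\kappa(X)=3$, which matches the closed form $\delta_z(2,3)=\lfloor(3+1)\cdot2/2\rfloor=4$ recalled below for $k<2d$.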
Following the notation introduced in \cite{DezaPourninSukegawa2020}, we refer to the solution to our packing problem as $$ \delta_z(d,k)=\max_{\mathcal{X}\subseteq\mathbb{P}^d_\circ}\left\{|\mathcal{X}|:\kappa(\mathcal{X})\leq{k}\right\}\!\mbox{.} $$ Here $\mathbb{P}^d_\circ$ is the subset of $\mathbb{P}^d$ made up of the points whose first non-zero coordinate is positive. Note that an expression for $\delta_z(d,k)$ is already known when $k$ is less than $2d$. Indeed, it is proven in \cite{DezaManoussakisOnn2018} that, in this case, \begin{equation}\label{PPP.sec.0.eq.0} \delta_z(d,k)=\left\lfloor\frac{(k+1)d}{2}\right\rfloor\!\mbox{.} \end{equation} Denote by $B(d,p)$ the ball of radius $p$ for the $1$-norm, centered on the origin of $\mathbb{R}^d$. Note that this ball is a cross-polytope, the polytope dual to the hypercube. Our first result, which we prove in Section \ref{PPP.sec.1}, provides the cardinality of the intersection of $B(d,p)$ with $\mathbb{P}^d_\circ$ and the value of $\kappa$ at that intersection. \begin{thm}\label{PPP.sec.0.thm.1} For any positive integers $d$ and $p$, \begin{itemize} \item[(i)] $\!\displaystyle\left| B(d,p)\cap\mathbb{P}^d_\circ\right|=\frac{1}{2}\sum_{j=1}^{d}2^j{d\choose{j}}\sum_{i=j}^pc_\psi(i,j)$, and \item[(ii)] $\displaystyle\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)=\frac{1}{2d}\sum_{j=1}^{d}2^j{d\choose{j}}\sum_{i=j}^pic_\psi(i,j)$, \end{itemize} where $$ \displaystyle c_\psi(p,d)=\frac{1}{(d-1)!}\sum_{i=1}^ds(d,i)J_{i-1}(p)\mbox{.} $$ \end{thm} In the statement of this theorem, $s(d,i)$ stands for Stirling's numbers of the first kind and $J_i(p)$ for Jordan's totient function, whose precise definitions will be recalled in Section \ref{PPP.sec.1}. The number of lattice points contained in a lattice polytope is related to certain invariants of the associated toric variety \cite{Oda1988} and expressions for that number have been obtained in particular cases.
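Statement (i) of the theorem can be sanity-checked against a brute-force enumeration. The following Python sketch (an illustration, not part of the paper) implements $c_\psi$ from signed Stirling numbers and Jordan totients, and compares the resulting count with a direct enumeration of the primitive points of $B(d,p)$ for small $d$ and $p$ (in this range the sum defining $c_\psi$ is divisible by $(d-1)!$, so exact integer division is used):

```python
from math import comb, factorial, gcd
from functools import reduce
from itertools import product

def stirling1(n, k):
    # signed Stirling numbers of the first kind
    if n == 0:
        return 1 if k == 0 else 0
    return stirling1(n - 1, k - 1) - (n - 1) * stirling1(n - 1, k)

def jordan(k, n):
    # Jordan totient J_k(n) = n^k prod_{q | n} (1 - q^{-k});
    # in particular J_0(1) = 1 and J_0(n) = 0 for n > 1
    result, m, q = n**k, n, 2
    while q * q <= m:
        if m % q == 0:
            result -= result // q**k
            while m % q == 0:
                m //= q
        q += 1
    if m > 1:
        result -= result // m**k
    return result

def c_psi(p, d):
    s = sum(stirling1(d, i) * jordan(i - 1, p) for i in range(1, d + 1))
    return s // factorial(d - 1)

def count_formula(d, p):
    # statement (i): cardinality of B(d, p) intersected with the half-lattice
    return sum(2**j * comb(d, j) * sum(c_psi(i, j) for i in range(j, p + 1))
               for j in range(1, d + 1)) // 2

def count_brute(d, p):
    n = 0
    for x in product(range(-p, p + 1), repeat=d):
        if sum(map(abs, x)) <= p and reduce(gcd, map(abs, x)) == 1:
            n += next(c for c in x if c != 0) > 0
    return n

for d in (1, 2, 3):
    for p in (1, 2, 3, 4):
        assert count_formula(d, p) == count_brute(d, p)
```

For instance, $c_\psi(p,2)=\varphi(p)$ for $p>1$, consistent with the fact that $c_\psi(p,d)$ counts the primitive points with $d$ positive coordinates and $1$-norm $p$.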
For instance, a formula for the number of lattice points contained in the dilates of the cross-polytope $B(d,1)$ by a positive integer is known \cite{BeckRobins2015}. In this light, it is noteworthy that Theorem \ref{PPP.sec.0.thm.1} provides an expression for the number of primitive points contained in these dilates. Indeed, the intersection of $B(d,p)$ with $\mathbb{P}^d$ is twice as large as its intersection with $\mathbb{P}^d_\circ$. Theorem \ref{PPP.sec.0.thm.1} also solves our primitive point packing problem for special values of $k$. Indeed, it is shown in \cite{DezaPourninSukegawa2020} that the intersection of $B(d,p)$ with $\mathbb{P}^d_\circ$ is the unique subset of $\mathbb{P}^d_\circ$ of cardinality $\delta_z(d,k)$ when $$ k=\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)\!\mbox{.} $$ Hence, for these values of $k$, Theorem~\ref{PPP.sec.0.thm.1} provides an expression for $\delta_z(d,k)$. Now consider the piecewise-linear map $k\mapsto\lambda(d,k)$ that interpolates $\delta_z(d,k)$ linearly between any two consecutive such values of $k$. It follows from Theorem~\ref{PPP.sec.0.thm.1} that, when $k$ is between $\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)$ and $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$, the slope of the interpolation is $d/p$. More precisely, for these values of $k$, $$ \frac{\lambda(d,k)-\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|}{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}=\frac{d}{p}\mbox{.} $$ We will prove throughout Sections \ref{PPP.sec.3}, \ref{PPP.sec.3.5}, and \ref{PPP.sec.4} that $\delta_z(d,k)$ is equal to $\lfloor\lambda(d,k){\rfloor}$ except for a few notable but infrequent cases.
\begin{thm}\label{PPP.sec.0.thm.2} For any fixed $d$, the maps $k\mapsto\delta_z(d,k)$ and $k\mapsto\lfloor\lambda(d,k){\rfloor}$ coincide, except on an infinite subset $\mathbb{E}$ of $\mathbb{N}\mathord{\setminus}\{0\}$ such that $$ \lim_{k\rightarrow\infty}\frac{\left|\mathbb{E}\cap[1,k]\right|}{k^{1/(d-1)}}=0\mbox{.} $$ In addition, $k\mapsto\delta_z(d,k)$ coincides, on $\mathbb{E}$, with $k\mapsto\lfloor\lambda(d,k){\rfloor}-1$. \end{thm} Note that the set $\mathbb{E}$ of exceptions depends on the dimension. We will determine it explicitly in every dimension: it is given at the end of Section \ref{PPP.sec.3.5} when $d>2$, and at the end of Section \ref{PPP.sec.4} when $d=2$. The dependence of $\mathbb{E}$ on $d$ is consistent over all the dimensions above $2$ but, as we shall see, its form and its density within $\mathbb{N}$ are slightly different in the $2$-dimensional case. The proof of Theorem \ref{PPP.sec.0.thm.2} will be split into first establishing an upper bound on $\delta_z(d,k)$, and then proving that this bound is exact by constructing sets of primitive points that achieve it. The upper bound is proven in Section \ref{PPP.sec.3}. The general machinery we use to show that this bound is sharp is given in Section~\ref{PPP.sec.3.5} together with sets of primitive points that achieve it in all dimensions greater than $2$. The construction for the $2$-dimensional case is given in Section \ref{PPP.sec.4}. Recall that, when $1\leq{k}<2d$, our expression for $\delta_z(d,k)$ simplifies to (\ref{PPP.sec.0.eq.0}). Note, in particular, that the bounds of this range for $k$ are $\left|B(d,1)\cap\mathbb{P}^d_\circ\right|$ and $\left|B(d,2)\cap\mathbb{P}^d_\circ\right|$. We report in Table~\ref{PPP.sec.3.tab.1} the first values of $\delta_z(d,k)$ such that $k$ is at least $2d$. 
In this table, the bold numbers are the values of $\delta_z(d,k)$ that coincide, for some integer $p$, with the cardinality of $B(d,p)\cap\mathbb{P}^d_\circ$ and the starred values are the ones such that $k$ belongs to $\mathbb{E}$. Note that, when $d$ is equal to $3$, the smallest number in $\mathbb{E}$ is $135$ and already lies well outside the table. \begin{table}[!t] \begin{center} \begin{tabular}{>{\centering}p{0.7cm}cccccccccccccccccc} \multirow{2}{*}{$d$}& \multicolumn{15}{c}{$k-2d$}\\ \cline{2-16} & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$ & $14$\\ \hline $2$ & $4$ & $5$ & $6$ & $6$ & $7$ & ${\bf 8}$ & $8$ & $8^\star$ & $9$ & $10$ & $10$ & $10^\star$ & $11$ & ${\bf 12}$ & $12$\\ $3$ & $10$ & $11$ & $12$ & $13$ & $14$ & $15$ & $16$ & $17$ & $18$ & $19$ & $20$ & $21$ & $22$ & $23$ & $24$\\ $4$ & $17$ & $18$ & $20$ & $21$ & $22$ & $24$ & $25$ & $26$ & $28$ & $29$ & $30$ & $32$ & $33$ & $34$ & $36$\\ $5$ & $26$ & $28$ & $30$ & $31$ & $33$ & $35$ & $36$ & $38$ & $40$ & $41$ & $43$ & $45$ & $46$ & $48$ & $50$\\ $6$ & $38$ & $40$ & $42$ & $44$ & $46$ & $48$ & $50$ & $52$ & $54$ & $56$ & $58$ & $60$ & $62$ & $64$ & $66$\\ \end{tabular} \end{center} \caption{Some values of $\delta_z(d,k)$.} \label{PPP.sec.3.tab.1} \end{table} We will also study the sets of points that solve our primitive point packing problem. Surprisingly, there exist values of $d$ and $k$ such that $\delta_z(d,k)$ is not achieved by a set of primitive points sandwiched, in terms of inclusion, between $B(d,p-1)\cap\mathbb{P}^d_\circ$ and $B(d,p)\cap\mathbb{P}^d_\circ$, where $p$ is the smallest integer such that $k$ is less than $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$. We will obtain, in Section \ref{PPP.sec.5}, the following characterization of the values of $d$ and $k$ such that this phenomenon occurs. Note, in particular, that this phenomenon only happens when $d$ is equal to $2$. 
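The unstarred entries of Table \ref{PPP.sec.3.tab.1} can be recovered from $\lfloor\lambda(d,k)\rfloor$ alone, and the starred ones are smaller by exactly $1$, in accordance with Theorem \ref{PPP.sec.0.thm.2}. The Python sketch below (helper names are ours) computes the interpolation data by brute force, using the identity $d\,\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)=\sum\|x\|_1$, valid for these $C$-invariant sets and established in Section \ref{PPP.sec.1}.

```python
from functools import reduce
from itertools import product
from math import gcd

def half_ball_stats(d, p_max):
    """counts[p], kappas[p] = |B(d,p) ∩ P^d_∘| and κ(B(d,p) ∩ P^d_∘) for p = 0..p_max,
    using d·κ = sum of the 1-norms over these C-invariant sets."""
    counts, kappas, total, norms = [0], [0], 0, 0
    for p in range(1, p_max + 1):
        for x in product(range(-p, p + 1), repeat=d):
            if (sum(map(abs, x)) == p and reduce(gcd, x) == 1
                    and next(c for c in x if c != 0) > 0):
                total, norms = total + 1, norms + p
        counts.append(total)
        kappas.append(norms // d)
    return counts, kappas

def floor_lambda(d, k, counts, kappas):
    """⌊λ(d,k)⌋: find the smallest p with k < κ(B(d,p) ∩ P^d_∘), then interpolate."""
    p = next(p for p in range(1, len(kappas)) if k < kappas[p])
    return counts[p - 1] + (k - kappas[p - 1]) * d // p
```

For $d=2$ this reproduces the first table row except at the two starred columns $k=11$ and $k=15$, where `floor_lambda` gives $9$ and $11$, one more than the values $8$ and $10$ reported for $\delta_z(2,k)$.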
\begin{thm}\label{PPP.sec.0.thm.4} Consider the smallest integer $p$ such that $k<\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$. If $d$ is greater than $2$, then any subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ of cardinality $\delta_z(d,k)$ such that $\kappa(\mathcal{X})$ does not exceed $k$ must satisfy \begin{equation}\label{PPP.sec.0.thm.4.eq.0} B(d,p-1)\cap\mathbb{P}^d_\circ\subset\mathcal{X}\subset{B(d,p)\cap\mathbb{P}^d_\circ}\mbox{.} \end{equation} If, on the other hand, $d$ is equal to $2$, then there exists a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $|\mathcal{X}|=\delta_z(d,k)$ and $\kappa(\mathcal{X})\leq{k}$ that satisfies (\ref{PPP.sec.0.thm.4.eq.0}) if and only if $p=2$, or $p$ is odd, or $p/2$ is even, or the following two conditions hold. \begin{itemize} \item[(i)] $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ is distinct from $p/2+1$, \item[(ii)] $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k$ is distinct from $p/2-1$. \end{itemize} \end{thm} We further show in Section \ref{PPP.sec.5} that the uniqueness result from \cite{DezaPourninSukegawa2020} that we mention above singles out every value of $k$ such that there is a unique set of primitive points that solves our packing problem. \begin{thm}\label{PPP.sec.0.thm.5} There exists a unique subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $|\mathcal{X}|=\delta_z(d,k)$ and $\kappa(\mathcal{X})\leq{k}$ if and only if $k=\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$ for some $p$. \end{thm} We conclude the article by gathering in Section \ref{PPP.sec.1.5} a number of asymptotic estimates. Among them, we derive from Theorem \ref{PPP.sec.0.thm.1} asymptotic estimates for $\left|B(d,p)\cap\mathbb{P}^d_\circ\right|$ in terms of $\kappa(B(d,p)\cap\mathbb{P}^d_\circ)$ when $p$ is fixed and $d$ goes to infinity. This complements the result from \cite{DezaPourninSukegawa2020} on the asymptotic behavior of $\delta_z(d,k)$. 
We also obtain an exact asymptotic estimate of the number of primitive points of $1$-norm $p$ contained in the positive orthant $]0,+\infty[^d$, for any fixed $d$, when $p$ goes to infinity. Note that these points are a special case of integer compositions \cite{MacMahon1893}, the compositions of $p$ into $d$ relatively prime integers. \section{The number of primitive points in a cross-polytope}\label{PPP.sec.1} Consider the isometry $\sigma$ of $\mathbb{R}^d$ that permutes the coordinates cyclically as $$ \sigma: \left[ \begin{array}{c} x_1\\ x_2\\ \vdots\\ x_{d-1}\\ x_d \end{array} \right] \mapsto \left[ \begin{array}{c} x_d\\ x_1\\ \vdots\\ x_{d-2}\\ x_{d-1} \end{array} \right]\!\mbox{.} $$ Further consider the map $\tau$ that sends a vector $x$ of $\mathbb{R}^d$ to $\sigma(x)$ when $x_d$ is non-negative and to $-\sigma(x)$ otherwise. In the remainder of the article, $C$ denotes the cyclic group generated by $\tau$. Observe that $C$ has order $d$. For any point $x$ contained in $\mathbb{R}^d$, we denote by $C\mathord{\cdot}x$ the orbit of $x$ under the action of $C$ on $\mathbb{R}^d$. Note that all the elements of $C\mathord{\cdot}x$ have the same $1$-norm and, when $x$ is primitive, $C\mathord{\cdot}x$ is a subset of $\mathbb{P}^d$. In addition, observe that the intersection of $B(d,p)$ and $\mathbb{P}^d_\circ$ is invariant under the action of $C$ on $\mathbb{R}^d$. We first recall a proposition on the relation between $\kappa(\mathcal{X})$ and the $1$-norms of the elements of $\mathcal{X}$, that is used implicitly in \cite{DezaPourninSukegawa2020}. \begin{prop}\label{PPP.sec.3.prop.1} For any subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$, $$ \kappa(\mathcal{X})\geq\frac{1}{d}\sum_{x\in\mathcal{X}}\|x\|_1\mbox{,} $$ with equality if and only if $\mathcal{X}$ is invariant under the action of $C$ on $\mathbb{R}^d$. 
\end{prop} Now denote by $c_\psi(p,d)$ the number of primitive points of $1$-norm $p$ and dimension $d$ contained in the positive orthant: $$ c_\psi(p,d)=\left|\left\{x\in\mathbb{P}^d\cap\left]0,+\infty\right[^d:\|x\|_1=p\right\}\right|\!\mbox{.} $$ This quantity makes it possible to express as follows the sum over the intersection $B(d,p)\cap\mathbb{P}^d$ of any map that is well-behaved relative to the $1$-norm. \begin{lem}\label{PPP.sec.1.lem.1} For any map $f:\mathbb{N}\rightarrow\mathbb{R}$ and any positive integer $p$, $$ \sum{f\!\left(\|x\|_1\right)}=\sum_{j=1}^{d}2^j{d\choose{j}}\sum_{i=j}^pf(i)c_\psi(i,j)\mbox{,} $$ where the sum in the left-hand side is over the primitive points $x$ in $B(d,p)$. \end{lem} \begin{proof} Consider the sphere $S(d,i)$ of radius $i$ for the $1$-norm, centered at the origin of $\mathbb{R}^d$. In other words, $S(d,i)$ is the boundary of $B(d,i)$. Let us first count the number of primitive points contained in $S(d,i)$. As $B(d,i)$ is a cross-polytope, $c_\psi(i,j)$ is precisely the number of primitive points contained in the relative interior of one of its faces of dimension $j-1$. We therefore immediately obtain $$ \left| S(d,i)\cap\mathbb{P}^d\right|=\sum_{j=1}^d2^j{d\choose{j}}c_\psi(i,j)\mbox{,} $$ where the coefficient of $c_\psi(i,j)$ in the right-hand side is the number of the faces of dimension $j-1$ of a cross-polytope. Now observe that, by construction, the intersection of $B(d,p)$ with $\mathbb{P}^d$ is partitioned into the sets $S(d,i)\cap\mathbb{P}^d$ where $i$ ranges from $1$ to $p$. Hence, $$ \sum{f\!\left(\|x\|_1\right)}=\sum_{i=1}^{p}f(i)\sum_{j=1}^d2^j{d\choose{j}}c_\psi(i,j)\mbox{,} $$ where the sum in the left-hand side is over the primitive points $x$ in $B(d,p)$. Exchanging the two sums in the right-hand side and noticing that $c_\psi(i,j)$ vanishes when $i$ is less than $j$ completes the proof. 
\end{proof} Note that the cardinality of the intersection $B(d,p)\cap\mathbb{P}^d$ can be obtained from Lemma \ref{PPP.sec.1.lem.1} by using, for $f$, the map that sends every natural integer to $1$. By symmetry, the cardinality of $B(d,p)\cap\mathbb{P}^d_\circ$ is half of that quantity. In addition, since the intersection of $B(d,p)$ with $\mathbb{P}^d_\circ$ is invariant under the action of $C$ on $\mathbb{R}^d$, it follows from Proposition \ref{PPP.sec.3.prop.1} that $$ d\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)=\sum\|x\|_1\mbox{,} $$ where the sum in the right-hand side is over the points $x$ in that intersection. As above, that sum is half the sum of the $1$-norms of the points in $B(d,p)\cap\mathbb{P}^d$, which can be obtained from Lemma \ref{PPP.sec.1.lem.1} by using, for $f$, the identity map on $\mathbb{N}$. According to this discussion, we obtain the following. \begin{prop}\label{PPP.sec.1.prop.1} For any positive integers $d$ and $p$, \begin{itemize} \item[(i)] $\!\displaystyle\left| B(d,p)\cap\mathbb{P}^d_\circ\right|=\frac{1}{2}\sum_{j=1}^{d}2^j{d\choose{j}}\sum_{i=j}^pc_\psi(i,j)$, and \item[(ii)] $\displaystyle\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)=\frac{1}{2d}\sum_{j=1}^{d}2^j{d\choose{j}}\sum_{i=j}^pic_\psi(i,j)$. \end{itemize} \end{prop} It is noteworthy that, as a consequence of this proposition, $$ \kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)=\frac{p\!\left|S(d,p)\cap\mathbb{P}^d_\circ\right|}{d}\mbox{,} $$ where $S(d,p)$ is the sphere of radius $p$ for the $1$-norm, centered at the origin of $\mathbb{R}^d$. We now derive a formula for $c_\psi(p,d)$. Let us denote by $J_q$ Jordan's totient function; that is, for any two positive integers $p$ and $q$, $$ J_q(p)=p^q\prod_{n|p}\left(1-\frac{1}{n^q}\right)\!\mbox{,} $$ where the product is over the prime divisors $n$ of $p$. We also denote by $s(d,i)$ Stirling's numbers of the first kind. 
Let us recall that these numbers can be computed using the recurrence $$ s(d+1,i)=-ds(d,i)+s(d,i-1)\mbox{,} $$ with $s(i,i)=1$ for all $i\in\mathbb{N}$ and $s(d,0)=0$ when $d$ is positive. \begin{thm}\label{PPP.sec.1.thm.1} $\displaystyle c_\psi(p,d)=\frac{1}{(d-1)!}\sum_{i=1}^ds(d,i)J_{i-1}(p)$. \end{thm} \begin{proof} First observe that $$ {p-1\choose{d-1}}=\sum_{q|p}c_\psi\!\left(\frac{p}{q},d\right)\!\mbox{.} $$ Indeed, the left-hand side of this equality is the number of the lattice points of $1$-norm $p$ contained in the positive orthant $]0,+\infty[^d$ or, equivalently, the number of compositions of $p$ into $d$ positive integers. Dividing each of these points by the greatest common divisor of its coordinates results in a set of primitive points whose number is the right-hand side of the equality. By M{\"o}bius' inversion formula, \begin{equation}\label{PPP.sec.1.thm.1.eq.1} c_\psi(p,d)=\sum_{q|p}\mu(q){p/q-1\choose{d-1}}\mbox{,} \end{equation} where $\mu$ denotes M{\"o}bius' function. By definition of Stirling's numbers of the first kind, the following holds when $d$ is positive. \begin{equation}\label{PPP.sec.1.thm.1.eq.2} {p/q-1\choose{d-1}}=\frac{1}{(d-1)!}\sum_{i=1}^ds(d,i)\!\left(\frac{p}{q}\right)^{i-1}\mbox{.} \end{equation} Combining (\ref{PPP.sec.1.thm.1.eq.1}) with (\ref{PPP.sec.1.thm.1.eq.2}) and permuting the sums yields $$ c_\psi(p,d)=\frac{1}{(d-1)!}\sum_{i=1}^ds(d,i)\sum_{q|p}\mu(q)\!\left(\frac{p}{q}\right)^{i-1}\mbox{.} $$ Since Jordan's totient function satisfies $$ \sum_{q|p}J_i(q)=p^i\mbox{,} $$ M{\"o}bius' inversion formula provides the desired result. \end{proof} Theorem \ref{PPP.sec.0.thm.1} immediately follows from Proposition \ref{PPP.sec.1.prop.1} and Theorem \ref{PPP.sec.1.thm.1}. \section{Upper bounds on $\delta_z(d,k)$}\label{PPP.sec.3} We first establish the following upper bound, and provide some information on the subsets of $\mathbb{P}^d_\circ$ that achieve it in a special case. 
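Beforehand, the two counting identities at the heart of the proof of Theorem \ref{PPP.sec.1.thm.1} are easy to confirm computationally: primitive compositions can be counted directly and compared with the M{\"o}bius-inverted binomial. A small Python sketch (the function names are ours):

```python
from functools import reduce
from math import comb, gcd

def compositions(p, d):
    """All compositions of p into d positive integers."""
    if d == 1:
        return [(p,)] if p >= 1 else []
    return [(a,) + rest for a in range(1, p) for rest in compositions(p - a, d - 1)]

def mobius(n):
    """Möbius' function μ(n), by trial division."""
    result, m, q = 1, n, 2
    while q * q <= m:
        if m % q == 0:
            m //= q
            if m % q == 0:
                return 0  # n has a squared prime factor
            result = -result
        q += 1
    return -result if m > 1 else result

def c_psi_direct(p, d):
    """Compositions of p into d relatively prime positive integers."""
    return sum(1 for c in compositions(p, d) if reduce(gcd, c) == 1)

def c_psi_inverted(p, d):
    """The Möbius-inverted form (PPP.sec.1.thm.1.eq.1)."""
    return sum(mobius(q) * comb(p // q - 1, d - 1)
               for q in range(1, p + 1) if p % q == 0)
```

Both the binomial identity opening the proof and the inverted form can then be checked over small ranges of $p$ and $d$.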
\begin{lem}\label{PPP.sec.3.lem.0.5} Consider a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$. If $\kappa(\mathcal{X})$ is at most $k$, then $\mathcal{X}$ has cardinality at most $\lfloor\lambda(d,k)\rfloor$. If, in addition, \begin{itemize} \item[(i)] $\mathcal{X}$ has cardinality exactly $\lfloor\lambda(d,k)\rfloor$, \item[(ii)] $d$ is a proper divisor of $p$, \item[(iii)] $p/d$ is a divisor of $k-\kappa(B(d,p-1)\cap\mathbb{P}^d_\circ)$, \end{itemize} where $p$ is the smallest integer such that $k<\kappa(B(d,p)\cap\mathbb{P}^d_\circ)$, then $$ B(d,p-1)\cap\mathbb{P}^d_\circ\subset\mathcal{X}\subset{B(d,p)\cap\mathbb{P}^d_\circ}\mbox{.} $$ \end{lem} \begin{proof} Assume that $\kappa(\mathcal{X})\leq{k}$. We further assume without loss of generality that $|\mathcal{X}|$ is equal to $\delta_z(d,k)$. Let $p$ be the smallest integer such that $k$ is less than $\kappa(B(d,p)\cap\mathbb{P}^d_\circ)$. It follows that $\kappa(B(d,p-1)\cap\mathbb{P}^d_\circ)\leq{k}$ and therefore, $$ \left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|\leq\delta_z(d,k)\mbox{.} $$ Since the right-hand side of this inequality is also the cardinality of $\mathcal{X}$, we can consider a subset $\mathcal{X}^\star$ of $\mathcal{X}$ with the same cardinality as $B(d,p-1)\cap\mathbb{P}^d_\circ$. We can also require that the sum of the $1$-norms of the points in $\mathcal{X}^\star$ is as small as possible. We then consider a bijection $\psi:B(d,p-1)\cap\mathbb{P}^d_\circ\rightarrow\mathcal{X}^\star$. We can assume without loss of generality that the restriction of $\psi$ to $B(d,p-1)\cap\mathcal{X}$ is the identity. By this assumption, the $1$-norm of an element $y$ of $B(d,p-1)\cap\mathbb{P}^d_\circ$ is at most that of its image by $\psi$. Indeed, either $\psi(y)$ coincides with $y$ or it is outside of $B(d,p-1)$. In the latter case, the $1$-norm of $\psi(y)$ is at least $p$, while the $1$-norm of $y$ is at most $p-1$. 
According to Proposition \ref{PPP.sec.3.prop.1}, $$ \kappa(\mathcal{X})\geq\frac{1}{d}\sum_{x\in\mathcal{X}}\|x\|_1 $$ and, as $B(d,p-1)\cap\mathbb{P}^d_\circ$ is invariant under the action of $C$ on $\mathbb{R}^d$, $$ \kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)=\frac{1}{d}\sum\|y\|_1\mbox{,} $$ where the sum in the right-hand side of this equality is over the points $y$ contained in $B(d,p-1)\cap\mathbb{P}^d_\circ$. Since $\psi$ cannot make $1$-norms decrease, $$ \kappa(\mathcal{X})\geq\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)+\frac{1}{d}\sum_{\substack{x\in\mathcal{X}\\x\not\in\mathcal{X}^\star}}\|x\|_1\mbox{.} $$ In addition, if $B(d,p-1)\cap\mathbb{P}^d_\circ$ and $\mathcal{X}^\star$ do not coincide, then this inequality is strict. As all the points in $\mathcal{X}\mathord{\setminus}\mathcal{X}^\star$ have $1$-norm at least $p$, \begin{equation}\label{PPP.sec.3.lem.0.5.eq.1} \kappa(\mathcal{X})\geq\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)+p\frac{|\mathcal{X}|-\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|}{d}\mbox{.} \end{equation} Note that this inequality is strict as soon as a point in $\mathcal{X}\mathord{\setminus}\mathcal{X}^\star$ has $1$-norm greater than $p$ or $B(d,p-1)\cap\mathbb{P}^d_\circ$ does not coincide with $\mathcal{X}^\star$. As $\kappa(\mathcal{X})\leq{k}$ and $|\mathcal{X}|=\delta_z(d,k)$, we obtain from (\ref{PPP.sec.3.lem.0.5.eq.1}) the desired upper bound on $|\mathcal{X}|$. Now assume that $d$ is a proper divisor of $p$ and that $p/d$ is a divisor of $k-\kappa(B(d,p-1)\cap\mathbb{P}^d_\circ)$. Observe that, in this case, $\lambda(d,k)$ is an integer. Therefore, if $\mathcal{X}$ has cardinality $\lfloor\lambda(d,k)\rfloor$, then (\ref{PPP.sec.3.lem.0.5.eq.1}) must be an equality. In this case, as we discussed above, $B(d,p-1)\cap\mathbb{P}^d_\circ$ must coincide with $\mathcal{X}^\star$. In other words, $B(d,p-1)\cap\mathbb{P}^d_\circ$ is a subset of $\mathcal{X}$. 
In addition, all the points in $\mathcal{X}\mathord{\setminus}\mathcal{X}^\star$ must have $1$-norm at most $p$. It follows that $\mathcal{X}$ is a subset of $B(d,p)\cap\mathbb{P}^d_\circ$. \end{proof} It is an immediate consequence of Lemma \ref{PPP.sec.3.lem.0.5} that $\delta_z(d,k)\leq\lfloor\lambda(d,k)\rfloor$ for all $d$ and $k$. We can refine this bound as follows. \begin{lem}\label{PPP.sec.3.lem.1} Consider the smallest integer $p$ such that $k<\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$. If $d$ is a proper divisor of $p$ and either \begin{itemize} \item[(i)] $k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)$ is equal to $p/d$, or \item[(ii)] $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-k$ is equal to $p/d$, \end{itemize} then $\delta_z(d,k)$ cannot be equal to $\lfloor\lambda(d,k)\rfloor$. \end{lem} \begin{proof} Consider a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $\kappa(\mathcal{X})$ is at most $k$. Assume that $\mathcal{X}$ has cardinality exactly $\lfloor\lambda(d,k)\rfloor$. Hence, by the definition of $\lambda(d,k)$, \begin{equation}\label{PPP.sec.3.lem.1.eq.1} |\mathcal{X}|=\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+\left\lfloor\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor\!\mbox{.} \end{equation} Further assume that $d$ is a proper divisor of $p$ and let us reach a contradiction in each of the two special cases considered in the statement of the lemma. We first treat the case when \begin{equation}\label{PPP.sec.3.lem.1.eq.2} k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)=\frac{p}{d}\mbox{.} \end{equation} In this case, by Lemma \ref{PPP.sec.3.lem.0.5}, $B(d,p-1)\cap\mathbb{P}^d_\circ$ is a subset of $\mathcal{X}$, which in turn is contained in $B(d,p)$. 
Combining (\ref{PPP.sec.3.lem.1.eq.1}) with (\ref{PPP.sec.3.lem.1.eq.2}), we obtain that $$ |\mathcal{X}|=\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+1\mbox{.} $$ In other words, $\mathcal{X}$ contains exactly one element $x$ of $1$-norm $p$. According to (\ref{PPP.sec.3.lem.1.eq.2}) and to Proposition \ref{PPP.sec.3.prop.1}, the absolute value of all the coordinates of $x$ must be equal to $p/d$. Since $d$ is a proper divisor of $p$, $x$ cannot be primitive. This contradiction shows that $\delta_z(d,k)$ cannot coincide with $\lfloor\lambda(d,k)\rfloor$. Now, assume that $d$ is a proper divisor of $p$ and that \begin{equation}\label{PPP.sec.3.lem.1.eq.3} \kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-k=\frac{p}{d}\mbox{.} \end{equation} Recall that, according to Proposition \ref{PPP.sec.1.prop.1}, \begin{equation}\label{PPP.sec.3.lem.1.eq.4} \kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)=\frac{p\!\left|S(d,p)\cap\mathbb{P}^d_\circ\right|}{d}\mbox{.} \end{equation} Combining the equalities (\ref{PPP.sec.3.lem.1.eq.1}), (\ref{PPP.sec.3.lem.1.eq.3}), and (\ref{PPP.sec.3.lem.1.eq.4}) yields $$ |\mathcal{X}|=\left|B(d,p)\cap\mathbb{P}^d_\circ\right|-1\mbox{.} $$ Again, it follows from Lemma \ref{PPP.sec.3.lem.0.5} that $B(d,p-1)\cap\mathbb{P}^d_\circ$ is a subset of $\mathcal{X}$, and $\mathcal{X}$ a subset of $B(d,p)\cap\mathbb{P}^d_\circ$. Hence, $\mathcal{X}$ is obtained by removing a single point $x$ of $1$-norm $p$ from $B(d,p)\cap\mathbb{P}^d_\circ$. According to (\ref{PPP.sec.3.lem.1.eq.3}) and to Proposition~\ref{PPP.sec.3.prop.1}, the absolute value of all the coordinates of $x$ must be equal to $p/d$ and, since $d$ is a proper divisor of $p$, $x$ cannot be a primitive point. It follows from this contradiction that $\delta_z(d,k)$ is not equal to $\lfloor\lambda(d,k)\rfloor$. \end{proof} In the $2$-dimensional case, we can further refine our bound as follows. 
\begin{lem}\label{PPP.sec.3.lem.2} Consider the smallest integer $p$ such that $k<\kappa(B(2,p)\cap\mathbb{P}^2_\circ)$. If $p$ is a multiple of $4$ and $k-\kappa(B(2,p-1)\cap\mathbb{P}^2_\circ)$ is an odd multiple of $p/2$, then $\delta_z(2,k)$ cannot be equal to $\lfloor\lambda(2,k)\rfloor$. \end{lem} \begin{proof} Assume that $p$ is a multiple of $4$ and $k-\kappa(B(2,p-1)\cap\mathbb{P}^2_\circ)$ is an odd multiple of $p/2$. Consider a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ such that $\kappa(\mathcal{X})$ is not greater than $k$. We proceed as in the proof of Lemma \ref{PPP.sec.3.lem.1} by assuming that \begin{equation}\label{PPP.sec.3.lem.2.eq.1} |\mathcal{X}|=\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+\left\lfloor\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}\right\rfloor\!\mbox{,} \end{equation} and we aim for a contradiction. According to Lemma \ref{PPP.sec.3.lem.0.5}, $\mathcal{X}$ admits $B(2,p-1)\cap\mathbb{P}^2_\circ$ as a subset and is itself a subset of $B(2,p)\cap\mathbb{P}^2_\circ$. Denote by $\mathcal{Y}$ the set of the points of $1$-norm $p$ contained in $\mathcal{X}$. Since $k-\kappa(B(2,p-1)\cap\mathbb{P}^2_\circ)$ is an odd multiple of $p/2$, it follows from (\ref{PPP.sec.3.lem.2.eq.1}) that the cardinality of $\mathcal{Y}$ is odd. It also follows from (\ref{PPP.sec.3.lem.2.eq.1}) that $$ |\mathcal{Y}|=\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}\mbox{.} $$ Therefore, as $\kappa(\mathcal{X})\leq{k}$ and as the points in $\mathcal{Y}$ have $1$-norm $p$, $$ \kappa(\mathcal{X})\leq\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+\frac{1}{2}\sum_{x\in\mathcal{Y}}\|x\|_1\mbox{.} $$ Observe that, by Proposition~\ref{PPP.sec.3.prop.1}, the opposite inequality holds. Hence, according to the same proposition, $\mathcal{X}$ is invariant under the action of $C$ on $\mathbb{R}^2$. Since $\mathcal{X}\mathord{\setminus}\mathcal{Y}$ is also invariant under that action, then so is $\mathcal{Y}$. 
Therefore, $$ \sum_{x\in\mathcal{Y}}|x_1|=\frac{p|\mathcal{Y}|}{2}\mbox{.} $$ However, since $p$ is a multiple of $4$, the right-hand side of this equality is even. As $|\mathcal{Y}|$ is odd, there must exist a point $x$ in $\mathcal{Y}$ such that $x_1$ is even. Then, since the $1$-norm of $x$ is even, both coordinates of $x$ must be even. It follows that $x$ cannot be primitive, a contradiction. \end{proof} \section{An expression for $\delta_z(d,k)$ when $d\geq3$}\label{PPP.sec.3.5} In this section, we build a subset of $\mathbb{P}^d_\circ$ that achieves the bound on $\delta_z(d,k)$ provided by Lemmas~\ref{PPP.sec.3.lem.0.5} and \ref{PPP.sec.3.lem.1} when $d>2$ and for a number of values of $k$ when $d=2$. We will distinguish two cases depending on whether or not $d$ divides the smallest integer $p$ such that $k<\kappa(B(d,p)\cap\mathbb{P}^d_\circ)$. Throughout the section, we assume that $k$ and $d$ (and therefore $p$) are fixed. Let us first address the case when $d$ is not a proper divisor of $p$. Note that, in this case, our construction will be done for all $d\geq2$. We introduce, for the purpose of the construction, $d$ sets that we denote $\mathcal{S}_1$ to $\mathcal{S}_d$, made up of points of $1$-norm $p$ from $\mathbb{P}^d_\circ$. Denote by $r$ the remainder of the integer division of $p$ by $d$ and recall that $\lfloor{p/d}\rfloor$ is the quotient of this division. As a consequence, the lattice point $x$ such that $x_i=\lceil{p/d}\rceil$ when $i\leq{r}$ and $x_i=\lfloor{p/d}\rfloor$ otherwise has $1$-norm $p$. Moreover, $\lceil{p/d}\rceil$ and $\lfloor{p/d}\rfloor$ are relatively prime as they differ by $1$, and $x$ is necessarily primitive. For any integer $j$ such that $1\leq{j}\leq{d}$, denote $$ \mathcal{S}_j=\{\tau^{ir}(x):1\leq{i}\leq{j}\}\mbox{.} $$ As announced, $\mathcal{S}_j$ is made up of $j$ points of $1$-norm $p$ from $\mathbb{P}^d_\circ$. 
Moreover, \begin{equation}\label{PPP.sec.3.5.eq.1} \kappa\!\left(\mathcal{S}_j\right)=\left\lceil\frac{jp}{d}\right\rceil\!\mbox{.} \end{equation} Using the sets $\mathcal{S}_1$ to $\mathcal{S}_d$, we prove the following. \begin{lem}\label{PPP.sec.3.5.lem.1} If $d$ is not a divisor of $p$, then there exists a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $\kappa(\mathcal{X})$ is at most $k$ and $|\mathcal{X}|$ is equal to $\lfloor\lambda(d,k)\rfloor$. \end{lem} \begin{proof} Assume that $d$ is not a divisor of $p$ and consider a point $x$ of $1$-norm $p$ in $\mathbb{P}^d_\circ$. Since $C$ has order $d$, $C\mathord{\cdot}x$ has cardinality at most $d$. In particular, the $1$-norms of the elements of $C\mathord{\cdot}x$ sum to at most $dp$. Observe that $C\mathord{\cdot}x$ coincides with $\mathcal{S}_d$ when $x$ is the unique element of $\mathcal{S}_1$, and that $\mathcal{S}_d$ has cardinality exactly $d$. Further observe that the points of $1$-norm $p$ in $\mathbb{P}^d_\circ$ are partitioned into the orbits of their elements under the action of $C$ on $\mathbb{R}^d$. Hence, one can take the union of some of these orbits in order to build a set $\mathcal{Y}$ of points of $1$-norm $p$ from $\mathbb{P}^d_\circ$ disjoint from $\mathcal{S}_d$, invariant under the action of $C$ on $\mathbb{R}^d$, and that satisfies $$ \frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}-d<\frac{1}{p}\sum_{x\in\mathcal{Y}}\|x\|_1\leq\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\mbox{.} $$ In this case, the integer $j$ defined as \begin{equation}\label{PPP.sec.3.5.lem.1.eq.1} j=\left\lfloor\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor-\frac{1}{p}\sum_{x\in\mathcal{Y}}\|x\|_1 \end{equation} is non-negative and less than $d$. Denote $$ \mathcal{X}=\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)\cup\mathcal{Y}\cup\mathcal{S}_j\mbox{,} $$ with the convention that $\mathcal{S}_0$ is the empty set. 
By construction, both $\mathcal{Y}$ and the intersection of $B(d,p-1)$ with $\mathbb{P}^d_\circ$ are invariant under the action of $C$ on $\mathbb{R}^d$. In particular, according to Proposition \ref{PPP.sec.3.prop.1}, \begin{equation}\label{PPP.sec.3.5.lem.1.eq.2} \kappa(\mathcal{X})=\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)+\frac{1}{d}\sum_{x\in\mathcal{Y}}\|x\|_1+\kappa(\mathcal{S}_j)\mbox{.} \end{equation} However, it follows from (\ref{PPP.sec.3.5.lem.1.eq.1}) that \begin{equation}\label{PPP.sec.3.5.lem.1.eq.3} \frac{1}{d}\sum_{x\in\mathcal{Y}}\|x\|_1\leq{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)-\frac{jp}{d}}\mbox{.} \end{equation} In addition, by (\ref{PPP.sec.3.5.eq.1}), \begin{equation}\label{PPP.sec.3.5.lem.1.eq.4} \kappa(\mathcal{S}_j)<\frac{jp}{d}+1\mbox{.} \end{equation} As this inequality is strict and both $\kappa(\mathcal{X})$ and $k$ are integers, we obtain $\kappa(\mathcal{X})\leq{k}$ by combining (\ref{PPP.sec.3.5.lem.1.eq.2}), (\ref{PPP.sec.3.5.lem.1.eq.3}), and (\ref{PPP.sec.3.5.lem.1.eq.4}). Now recall that all the points in $\mathcal{Y}$ have $1$-norm $p$ and $\mathcal{S}_j$ has cardinality $j$. As a consequence, the cardinality of $\mathcal{X}$ can be decomposed as \begin{equation}\label{PPP.sec.3.5.lem.1.eq.5} |\mathcal{X}|=\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+\frac{1}{p}\sum_{x\in\mathcal{Y}}\|x\|_1+j\mbox{.} \end{equation} Combining (\ref{PPP.sec.3.5.lem.1.eq.1}) and (\ref{PPP.sec.3.5.lem.1.eq.5}) yields $$ |\mathcal{X}|=\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+\left\lfloor\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor\!\mbox{.} $$ The right-hand side of this equality is $\lfloor\lambda(d,k)\rfloor$, as desired. \end{proof} We now address the case when $d$ is a divisor of $p$. 
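Before turning to that case, the sets $\mathcal{S}_j$ used in the preceding proof lend themselves to a numerical sanity check. The definition of $\kappa$ precedes this section; the Python sketch below assumes the reading $\kappa(\mathcal{X})=\max_i\sum_{x\in\mathcal{X}}|x_i|$, the largest coordinatewise sum of absolute values, which is consistent with Proposition \ref{PPP.sec.3.prop.1}. That assumption, and all function names, are ours; the check is run on parameters for which $p$ is not a multiple of $d$.

```python
from functools import reduce
from math import gcd

def tau(x):
    """The map τ: shift the coordinates cyclically, negating when the last one is negative."""
    y = (x[-1],) + x[:-1]
    return y if x[-1] >= 0 else tuple(-c for c in y)

def S(d, p, j):
    """The set S_j = {τ^(ir)(x) : 1 ≤ i ≤ j}, where r = p mod d and x is the base
    point with r coordinates equal to ⌈p/d⌉ followed by d - r equal to ⌊p/d⌋."""
    r = p % d
    y = tuple(p // d + 1 if i < r else p // d for i in range(d))
    out = set()
    for _ in range(j):
        for _ in range(r):  # apply τ r more times to reach the next τ^(ir)(x)
            y = tau(y)
        out.add(y)
    return out

def kappa(X):
    """Assumed definition of κ (not restated in this section): max column sum of |x_i|."""
    d = len(next(iter(X)))
    return max(sum(abs(x[i]) for x in X) for i in range(d))
```

For instance, `S(3, 4, 2)` is $\{(1,2,1),(1,1,2)\}$, whose $\kappa$ under this reading is $3=\lceil 8/3\rceil$, matching (\ref{PPP.sec.3.5.eq.1}).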
When $d$ divides $p$, the construction of a subset of $\mathbb{P}^d_\circ$ of the right cardinality relies on the following lemma. \begin{lem}\label{PPP.sec.3.5.lem.2} Assume that $d$ is a divisor of $p$ and that \begin{equation}\label{PPP.sec.3.5.lem.2.eq.0} \kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)+\frac{2p}{d}\leq{k}<\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-\frac{p}{d}\mbox{.} \end{equation} Let $\mathcal{Y}$ be the union of the orbits under the action of $C$ on $\mathbb{R}^d$ of some points of $1$-norm $p$ from $\mathbb{P}^d_\circ$. If $|\mathcal{Y}|$ is at least $d+3$ and, whenever $2\leq{j}\leq|\mathcal{Y}|-2$, there exists a subset $\mathcal{S}$ of $j$ elements of $\mathcal{Y}$ satisfying \begin{equation}\label{PPP.sec.3.5.lem.2.eq.0.5} \kappa(\mathcal{S})=\frac{jp}{d}\mbox{,} \end{equation} then there exists a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $\kappa(\mathcal{X})\leq{k}$ and $|\mathcal{X}|=\lfloor\lambda(d,k)\rfloor$. \end{lem} \begin{proof} Assume that $\mathcal{Y}$ has cardinality at least $d+3$. Consider the set of the points of $1$-norm $p$ contained in $\mathbb{P}^d_\circ$. Observe that this set is partitioned into the orbits of its elements under the action of $C$ on $\mathbb{R}^d$. Moreover, these orbits have cardinality at most $d$. 
As in addition, $|\mathcal{Y}|\geq{d+3}$, one can take the union of some of these orbits in order to build a set $\mathcal{Z}$ disjoint from $\mathcal{Y}$ that is invariant under the action of $C$ on $\mathbb{R}^d$ and that satisfies the inequalities $$ \frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}-|\mathcal{Y}|+1<\frac{1}{p}\sum_{x\in\mathcal{Z}}\|x\|_1\leq\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}-2\mbox{.} $$ As an immediate consequence, the integer $j$ defined as \begin{equation}\label{PPP.sec.3.5.lem.2.eq.1} j=\left\lfloor\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor-\frac{1}{p}\sum_{x\in\mathcal{Z}}\|x\|_1 \end{equation} is such that $2\leq{j}\leq{|\mathcal{Y}|-2}$. Assume that there exists a subset $\mathcal{S}$ of $j$ elements of $\mathcal{Y}$ that satisfies (\ref{PPP.sec.3.5.lem.2.eq.0.5}) and consider the set $$ \mathcal{X}=\left[B(d,p-1)\cap\mathbb{P}^d_\circ\right]\cup\mathcal{Z}\cup\mathcal{S}\mbox{.} $$ From there on, the argument is the same as in the proof of Lemma \ref{PPP.sec.3.5.lem.1}. According to Proposition \ref{PPP.sec.3.prop.1}, \begin{equation}\label{PPP.sec.3.5.lem.2.eq.2} \kappa(\mathcal{X})=\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)+\frac{1}{d}\sum_{x\in\mathcal{Z}}\|x\|_1+\kappa(\mathcal{S})\mbox{.} \end{equation} Moreover, it follows from (\ref{PPP.sec.3.5.lem.2.eq.1}) that \begin{equation}\label{PPP.sec.3.5.lem.2.eq.3} \frac{1}{d}\sum_{x\in\mathcal{Z}}\|x\|_1\leq{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)-\frac{jp}{d}}\mbox{.} \end{equation} Hence, we obtain $\kappa(\mathcal{X})\leq{k}$ by combining (\ref{PPP.sec.3.5.lem.2.eq.0.5}), (\ref{PPP.sec.3.5.lem.2.eq.2}), and (\ref{PPP.sec.3.5.lem.2.eq.3}). 
Now, since all the elements of $\mathcal{Z}$ have $1$-norm $p$, and since $\mathcal{S}$ has cardinality $j$, \begin{equation}\label{PPP.sec.3.5.lem.2.eq.4} |\mathcal{X}|=\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+\frac{1}{p}\sum_{x\in\mathcal{Z}}\|x\|_1+j\mbox{.} \end{equation} Combining (\ref{PPP.sec.3.5.lem.2.eq.1}) with (\ref{PPP.sec.3.5.lem.2.eq.4}) provides the equality $$ |\mathcal{X}|=\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+\left\lfloor\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor\!\mbox{,} $$ whose right-hand side is precisely $\lfloor\lambda(d,k)\rfloor$. \end{proof} We now build a subset of $\mathbb{P}^d_\circ$ that achieves, when $d$ is greater than $2$ and a divisor of $p$, the upper bound on $\delta_z(d,k)$ provided by Lemmas \ref{PPP.sec.3.lem.0.5} and \ref{PPP.sec.3.lem.1}. \begin{lem}\label{PPP.sec.3.5.lem.3} Assume that $d$ is greater than $2$ and a divisor of $p$. If \begin{itemize} \item[(i)] $k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)$ is distinct from $p/d$, \item[(ii)] $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-k$ is distinct from $p/d$, \end{itemize} then there exists a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ satisfying $\kappa(\mathcal{X})\leq{k}$ and $|\mathcal{X}|=\lfloor\lambda(d,k)\rfloor$. Otherwise, there also exists a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $\kappa(\mathcal{X})\leq{k}$, but whose cardinality is $\lfloor\lambda(d,k)\rfloor-1$ instead of $\lfloor\lambda(d,k)\rfloor$. \end{lem} \begin{proof} Assume that $d$ is a divisor of $p$. Consider the point $a$ of $\mathbb{Z}^d$ whose first coordinate is $p/d-1$, whose second coordinate is $p/d+1$, and all of whose other coordinates are $p/d$. Further consider the point $b$ obtained from $a$ by exchanging the first two coordinates and the point $c$ obtained from $b$ by exchanging the second and third coordinates. 
As $d$ is greater than $2$, the three points $a$, $b$, and $c$ each admit two coordinates that are consecutive integers. Therefore, they are primitive. Note, in addition, that they have $1$-norm $p$. Consider the set $$ \mathcal{Y}=(C\mathord{\cdot}a)\cup(C\mathord{\cdot}b)\cup(C\mathord{\cdot}c)\mbox{.} $$ We are going to show that $\mathcal{Y}$ satisfies the requirements of Lemma \ref{PPP.sec.3.5.lem.2}. Observe that the orbits of $a$, $b$, and $c$ under the action of $C$ on $\mathbb{R}^d$ have cardinality $d$. Further observe that these orbits are pairwise disjoint if $d$ is greater than $3$. Therefore, in this case, $\mathcal{Y}$ has cardinality $3d$. If, however, $d$ is equal to $3$, then $\mathcal{Y}$ has cardinality only $6$ because $C\mathord{\cdot}c$ coincides with $C\mathord{\cdot}a$. Further observe that all the points contained in $\mathcal{Y}$ have non-negative coordinates. Now, consider an integer $j$ such that $2\leq{j}\leq|\mathcal{Y}|-2$. If $j$ is even and at most $|\mathcal{Y}|/2$, denote $$ \mathcal{S}_j=\left\{\tau^i(a):1\leq{i}\leq\frac{j}{2}\right\}\!\cup\!\left\{\tau^i(b):1\leq{i}\leq\frac{j}{2}\right\}\!\mbox{.} $$ By construction, $\mathcal{S}_j$ is a subset of $j$ elements of $\mathcal{Y}$. Observe that $\mathcal{Y}$ is a subset of $[0,+\infty[^d$ and, therefore, so is $\mathcal{S}_j$. Further observe that all the coordinates of $a+b$ are equal to $2p/d$. Hence, all the coordinates of the sum of the points contained in $\mathcal{S}_j$ are equal to $jp/d$. As a consequence, \begin{equation}\label{PPP.sec.3.5.lem.3.eq.1} \kappa(\mathcal{S}_j)=\frac{jp}{d}\mbox{.} \end{equation} Now assume that $j$ is odd and at most $|\mathcal{Y}|/2$. We will distinguish two cases depending on whether $d$ is equal to $3$ or greater than $3$. If $d$ is equal to $3$, then $j$ must be equal to $3$, as this is the only odd number that is both greater than~$2$ and at most $|\mathcal{Y}|/2$.
In this case, we denote $$ \mathcal{S}_j=\left\{a,\tau(a),\tau^2(a)\right\}\mbox{,} $$ and if $d$ is greater than $3$, we denote $$ \mathcal{S}_j=\left\{\tau^i(a):1\leq{i}\leq\frac{j+1}{2}\right\}\!\cup\!\left\{\tau^i(b):1\leq{i}\leq\frac{j-3}{2}\right\}\!\cup\!\left\{\tau^{(j-1)/2}(c)\right\}\mbox{.} $$ Again, $\mathcal{S}_j$ is a subset of $j$ elements of $\mathcal{Y}$. Moreover, all the coordinates of the sum of its elements are equal to $jp/d$. Hence $\mathcal{S}_j$ satisfies~(\ref{PPP.sec.3.5.lem.3.eq.1}). In the case when $j$ is greater than $|\mathcal{Y}|/2$, denote $$ \mathcal{S}_j=\mathcal{Y}\mathord{\setminus}\mathcal{S}_{|\mathcal{Y}|-j}\mbox{.} $$ By construction, all the coordinates of the sum of the points contained in $\mathcal{Y}$ are equal to $|\mathcal{Y}|p/d$ and all the coordinates of the sum of the points contained in $\mathcal{S}_{|\mathcal{Y}|-j}$ are equal to $(|\mathcal{Y}|-j)p/d$. Hence, $\mathcal{S}_j$ satisfies (\ref{PPP.sec.3.5.lem.3.eq.1}) in this case as well. As a consequence, when (\ref{PPP.sec.3.5.lem.2.eq.0}) holds, it follows from Lemma \ref{PPP.sec.3.5.lem.2} that there exists a subset of $\mathbb{P}^d_\circ$ of the desired cardinality, whose image by $\kappa$ is at most $k$. In the remainder of the proof, we assume that (\ref{PPP.sec.3.5.lem.2.eq.0}) does not hold, and we review several cases depending on the values of $p/d$ and $k$. In each of these cases, we exhibit a subset of $\mathbb{P}^d_\circ$ with the desired properties. Recall that $$ \lambda(d,k)=\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\mbox{.} $$ Observe that, when the difference $k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)$ is less than $p/d$, or when that difference is exactly $p/d$ and $p$ is greater than $d$, the set $B(d,p-1)\cap\mathbb{P}^d_\circ$ has the desired properties.
Moreover, if $p$ is equal to $d$ and $$ \frac{p}{d}\leq{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}<\frac{2p}{d}\mbox{,} $$ then the union of $B(d,p-1)\cap\mathbb{P}^d_\circ$ with the singleton that contains the point of $\mathbb{R}^d$ all of whose coordinates are equal to $1$ has the desired cardinality, and its image by $\kappa$ is at most $k$. Similarly, if $p$ is greater than $d$ and $$ \frac{p}{d}<{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}<\frac{2p}{d}\mbox{,} $$ then the set $\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)\cup\{a\}$, where $a$ is the point defined at the beginning of the proof, is a subset of $\mathbb{P}^d_\circ$ with the desired properties. Now, if $p$ is equal to $d$ and $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-k$ is at most $p/d$, then the set obtained from $B(d,p)\cap\mathbb{P}^d_\circ$ by removing the point all of whose coordinates are equal to $1$ has the desired cardinality and its image by $\kappa$ is at most $k$. Similarly, if $p>d$ and $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-k$ is less than $p/d$, then removing from $B(d,p)\cap\mathbb{P}^d_\circ$ the point $a$ results in a subset of $\mathbb{P}^d_\circ$ of the right cardinality and image by $\kappa$. Finally, assume that $p$ is greater than $d$ and that $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-k$ is equal to $p/d$. In that case, $p/d$ is a divisor of $k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)$. Hence, $$ \left\lfloor\frac{k-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor\!-1=\left\lfloor\frac{k-1-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor\!\mbox{.} $$ Note in particular that $k-1-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)$ cannot be a multiple of $p/d$. Therefore, the set that we have already built, of cardinality $$ \left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|+\left\lfloor\frac{k-1-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)}{p/d}\right\rfloor\!
$$ whose image by $\kappa$ is at most $k-1$, has the desired properties. \end{proof} Assume that $d$ is greater than $2$ and consider the set $$ \mathbb{E}=\bigcup\left\{\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)+\frac{p}{d},\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-\frac{p}{d}\right\}\mbox{,} $$ where the union is over the positive integers $p$ that admit $d$ as a proper divisor. Note in particular that $\mathbb{E}$ depends on the dimension. With this notation, Theorem \ref{PPP.sec.0.thm.2} is a consequence, for all dimensions above $2$, of Lemmas \ref{PPP.sec.3.lem.1}, \ref{PPP.sec.3.5.lem.1}, and \ref{PPP.sec.3.5.lem.3}, except for the part on the density of $\mathbb{E}$ within $\mathbb{N}$. The following result completes the proof of Theorem \ref{PPP.sec.0.thm.2} when $d>2$. \begin{lem}\label{PPP.sec.3.5.lem.4} If $d>2$, then $\displaystyle\lim_{k\rightarrow\infty}\frac{\left|\mathbb{E}\cap[1,k]\right|}{k^{1/(d+1)}}=\frac{\left[4(d+1)!\zeta(d)\right]^{1/(d+1)}}{d}$. \end{lem} \begin{proof} Consider the smallest integer $p$ such that $k<\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$ and observe that the cardinality of $\mathbb{E}\cap[1,k]$ is at most twice the number of multiples of $d$ less than or equal to $p$ and at least that quantity minus $4$. Therefore, $$ \frac{2p/d-4}{\!\left[\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)\right]^{1/(d+1)}}\leq\frac{\left|\mathbb{E}\cap[1,k]\right|}{k^{1/(d+1)}}\leq\frac{2p/d}{\!\left[\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)\right]^{1/(d+1)}}\mbox{.} $$ However, according to Theorem 4.2 from \cite{DezaPourninSukegawa2020}, $$ \lim_{p\rightarrow\infty}\frac{\left[\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)\right]^{1/(d+1)}}{p}=\left(\frac{2^{d-1}}{(d+1)!\zeta(d)}\right)^{1/(d+1)}\mbox{,} $$ and the desired result follows. \end{proof} Note that the asymptotic estimate of the density of $\mathbb{E}$ within $\mathbb{N}$ provided by Lemma~\ref{PPP.sec.3.5.lem.4} is exact.
This estimate is two orders lower than the upper bound in the statement of Theorem \ref{PPP.sec.0.thm.2}. Indeed, we shall see in the next section that $\mathbb{E}$ is denser in dimension $2$. However, if the statement of Theorem \ref{PPP.sec.0.thm.2} is restricted to dimensions above $2$, the upper bound on the density of $\mathbb{E}$ can be replaced by the exact estimate from Lemma~\ref{PPP.sec.3.5.lem.4}. \section{An expression for $\delta_z(2,k)$}\label{PPP.sec.4} This section is devoted to proving the $2$-dimensional case of Theorem \ref{PPP.sec.0.thm.2}. We will assume throughout the section that $d$ is equal to $2$, that $k$ is fixed, and that $p$ denotes the smallest integer such that $k$ is less than $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)$. Observe that, when $p$ is odd, the value of $\delta_z(2,k)$ is provided by Lemmas~\ref{PPP.sec.3.lem.0.5} and~\ref{PPP.sec.3.5.lem.1}. Therefore, we only have to treat the case when $p$ is even. Moreover, note that Lemmas \ref{PPP.sec.3.lem.0.5}, \ref{PPP.sec.3.lem.1}, and \ref{PPP.sec.3.lem.2} provide the desired upper bound on $\delta_z(2,k)$ for all $k$. Hence, we only need to build subsets of $\mathbb{P}^2_\circ$ that achieve that bound for even $p$. We first consider the case when $p$ is a multiple of $4$. \begin{lem}\label{PPP.sec.4.lem.1} Assume that $p$ is a multiple of $4$. If $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ is not an odd multiple of $p/2$, then there exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ satisfying $\kappa(\mathcal{X})\leq{k}$ and $|\mathcal{X}|=\lfloor\lambda(2,k)\rfloor$. Otherwise, there also exists such a subset of $\mathbb{P}^2_\circ$, but whose cardinality is $\lfloor\lambda(2,k)\rfloor-1$ instead of $\lfloor\lambda(2,k)\rfloor$. \end{lem} \begin{proof} Consider the point $a$ such that $a_1=p/2-1$ and $a_2=p/2+1$. Note that, since $p/2$ is even, $a$ is primitive. Moreover, $a$ has $1$-norm $p$.
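For concreteness (an added illustration), when $p=4$, the point in question is $a=(1,3)$: its coordinates are relatively prime, its $1$-norm is $4$, and its orbit under the action of $C$ on $\mathbb{R}^2$ is $$ C\mathord{\cdot}a=\{(1,3),(3,1)\}\mbox{.} $$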
Observe that the set of the points of $1$-norm $p$ from $\mathbb{P}^2_\circ$ is partitioned by the orbits of its elements under the action of $C$ on $\mathbb{R}^2$. Moreover, since $p$ is greater than $2$, the orbit of any of these elements under the action of $C$ on $\mathbb{R}^2$ has cardinality exactly $2$. Therefore, one can take the union of some of these orbits in order to build a set $\mathcal{Z}$ of points of $1$-norm $p$ from $\mathbb{P}^2_\circ$ that is invariant under the action of $C$ on $\mathbb{R}^2$, that is disjoint from $C\mathord{\cdot}a$ and that satisfies \begin{equation}\label{PPP.sec.4.lem.1.eq.1} \frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}-2<\frac{1}{p}\sum_{x\in\mathcal{Z}}\|x\|_1\leq\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}\mbox{.} \end{equation} Consider the difference \begin{equation}\label{PPP.sec.4.lem.1.eq.2} g=\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}-\frac{1}{p}\sum_{x\in\mathcal{Z}}\|x\|_1\mbox{.} \end{equation} It follows from (\ref{PPP.sec.4.lem.1.eq.1}) that $0\leq{g}<2$. If $g\leq1$, consider the set $$ \mathcal{X}=\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)\cup\mathcal{Z}\mbox{.} $$ By construction, $\mathcal{X}$ is invariant under the action of $C$ on $\mathbb{R}^2$. In addition, as the elements of $\mathcal{Z}$ all have $1$-norm $p$, it follows from Proposition \ref{PPP.sec.3.prop.1} that \begin{equation}\label{PPP.sec.4.lem.1.eq.3} \kappa(\mathcal{X})=\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+\frac{1}{2}\sum_{x\in\mathcal{Z}}\|x\|_1\mbox{.} \end{equation} However, as $g\geq0$, it follows from (\ref{PPP.sec.4.lem.1.eq.2}) that $$ \frac{1}{2}\sum_{x\in\mathcal{Z}}\|x\|_1\leq{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}\mbox{.} $$ Combining this with (\ref{PPP.sec.4.lem.1.eq.3}) shows that $\kappa(\mathcal{X})\leq{k}$.
Now recall that $$ \lambda(2,k)=\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}\mbox{.} $$ Since all the elements of $\mathcal{Z}$ have $1$-norm $p$, \begin{equation}\label{PPP.sec.4.lem.1.eq.3.5} |\mathcal{X}|=\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+\frac{1}{p}\sum_{x\in\mathcal{Z}}\|x\|_1\mbox{.} \end{equation} As $0\leq{g}\leq1$, combining (\ref{PPP.sec.4.lem.1.eq.2}) with (\ref{PPP.sec.4.lem.1.eq.3.5}) shows that $|\mathcal{X}|=\lfloor\lambda(2,k)\rfloor$ when $g$ is less than $1$. Moreover, when $g$ is equal to $1$, we obtain that $\mathcal{X}$ has cardinality $\lfloor\lambda(2,k)\rfloor-1$. As $g$ is equal to $1$ if and only if $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ is an odd multiple of $p/2$, the lemma is proven in this case. Now assume that $g$ is greater than $1$ and denote $$ \mathcal{X}=\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)\cup\mathcal{Z}\cup\{a\}\mbox{.} $$ According to Proposition \ref{PPP.sec.3.prop.1}, \begin{equation}\label{PPP.sec.4.lem.1.eq.4} \kappa(\mathcal{X})=\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+\frac{1}{2}\sum_{x\in\mathcal{Z}}\|x\|_1+\frac{p}{2}+1\mbox{.} \end{equation} As $g$ is greater than $1$, (\ref{PPP.sec.4.lem.1.eq.2}) yields $$ \frac{1}{2}\sum_{x\in\mathcal{Z}}\|x\|_1<k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)-\frac{p}{2}\mbox{.} $$ Since this inequality is strict, combining it with (\ref{PPP.sec.4.lem.1.eq.4}) shows that $\kappa(\mathcal{X})\leq{k}$. Finally, as all the elements of $\mathcal{Z}$ have $1$-norm $p$, $$ |\mathcal{X}|=\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+\frac{1}{p}\sum_{x\in\mathcal{Z}}\|x\|_1+1\mbox{.} $$ As $g$ is greater than $1$, combining this with (\ref{PPP.sec.4.lem.1.eq.2}) shows that $|\mathcal{X}|=\lfloor\lambda(2,k)\rfloor$. \end{proof} Let us now treat the case when $p$ is even and $p/2$ is odd.
The sub-cases when $$ k\in\left\{\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+\frac{p}{2}+1,\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-\frac{p}{2}+1\right\} $$ turn out to be particularly interesting. In these two sub-cases, the subsets $\mathcal{X}$ of $\mathbb{P}^2_\circ$ such that $|\mathcal{X}|=\delta_z(2,k)$ and $\kappa(\mathcal{X})\leq{k}$ cannot be nested between $B(2,p-1)\cap\mathbb{P}^2_\circ$ and $B(2,p)\cap\mathbb{P}^2_\circ$ with respect to inclusion. Let us begin with the case when $k$ is equal to $\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+p/2+1$. \begin{prop}\label{PPP.sec.4.prop.1} If $p$ is even, $p/2$ is odd, and $p>2$, then there exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ of cardinality $\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+1$ such that $$ \kappa(\mathcal{X})\leq\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+\frac{p}{2}+1\mbox{.} $$ In addition, for any such subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$, either \begin{itemize} \item[(i)] some point in $B(2,p-1)\cap\mathbb{P}^2_\circ$ is not contained in $\mathcal{X}$, or \item[(ii)] some point in $\mathcal{X}$ is not contained in $B(2,p)\cap\mathbb{P}^2_\circ$. \end{itemize} \end{prop} \begin{proof} Let $a$ be the point in $\mathbb{Z}^2$ such that $a_1=p/2$ and $a_2=p/2+1$. Note that $a$ belongs to $\mathbb{P}^2_\circ$ and consider the set $$ \mathcal{X}=\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)\cup\{a\}\mbox{.} $$ Note that $|\mathcal{X}|=\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+1$. Recall that $B(2,p-1)\cap\mathbb{P}^2_\circ$ is invariant under the action of $C$ on $\mathbb{R}^2$. As, in addition, $\max\{|a_1|,|a_2|\}=p/2+1$, $$ \kappa(\mathcal{X})\leq\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+\frac{p}{2}+1\mbox{,} $$ as desired.
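As an added illustration, when $p=6$, the point above is $a=(3,4)$: it is primitive, it has $1$-norm $7$, and it satisfies $$ \max\{|a_1|,|a_2|\}=4=\frac{p}{2}+1\mbox{,} $$ so that adjoining it to $B(2,5)\cap\mathbb{P}^2_\circ$ increases the image by $\kappa$ by at most $4$.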
Now observe that, if a subset $\mathcal{X}$ of $B(2,p)\cap\mathbb{P}^2_\circ$ satisfies that inequality and admits $B(2,p-1)\cap\mathbb{P}^2_\circ$ as a subset then, by Proposition \ref{PPP.sec.3.prop.1}, the only point $z$ of $1$-norm $p$ contained in $\mathcal{X}$ must satisfy \begin{equation}\label{PPP.sec.4.prop.1.eq.1} \max\{|z_1|,|z_2|\}\leq\frac{p}{2}+1\mbox{.} \end{equation} Since $z$ has $1$-norm $p$, the two coordinates of $z$ cannot have the same absolute value because $z$ is primitive and $p$ is greater than $2$. Hence, according to (\ref{PPP.sec.4.prop.1.eq.1}), the absolute values of the coordinates of $z$ must be $p/2-1$ and $p/2+1$. However, as $p/2$ is odd, these numbers are both even. As a consequence, $z$ cannot be primitive, a contradiction. \end{proof} The argument for the proof of the following proposition is similar to that in the proof of Proposition \ref{PPP.sec.4.prop.1}, and we only sketch it. \begin{prop}\label{PPP.sec.4.prop.2} If $p$ is even, $p/2$ is odd, and $p>2$, then there exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ of cardinality $\left|B(2,p)\cap\mathbb{P}^2_\circ\right|-1$ such that $$ \kappa(\mathcal{X})\leq\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-\frac{p}{2}+1\mbox{.} $$ In addition, for any such subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$, either \begin{itemize} \item[(i)] some point in $B(2,p-1)\cap\mathbb{P}^2_\circ$ is not contained in $\mathcal{X}$, or \item[(ii)] some point in $\mathcal{X}$ is not contained in $B(2,p)\cap\mathbb{P}^2_\circ$. \end{itemize} \end{prop} \begin{proof} Consider the set $\mathcal{X}$ obtained by removing from the intersection of $B(2,p)$ and $\mathbb{P}^2_\circ$ the point whose first coordinate is $p/2-1$ and whose second coordinate is $p/2$.
This set has cardinality $|B(2,p)\cap\mathbb{P}^2_\circ|-1$ and satisfies \begin{equation}\label{PPP.sec.4.prop.2.eq.1} \kappa(\mathcal{X})\leq\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-\frac{p}{2}+1\mbox{,} \end{equation} as desired. Moreover, using an argument similar to that of the proof of Proposition \ref{PPP.sec.4.prop.1}, we show that any subset $\mathcal{X}$ of $B(2,p)\cap\mathbb{P}^2_\circ$ that satisfies (\ref{PPP.sec.4.prop.2.eq.1}), has cardinality $|B(2,p)\cap\mathbb{P}^2_\circ|-1$, and admits $B(2,p-1)\cap\mathbb{P}^2_\circ$ as a subset must miss a point from $B(2,p)\cap\mathbb{P}^2_\circ$ whose coordinates have absolute values $p/2-1$ and $p/2+1$. However, as $p/2$ is odd, this point cannot be primitive. \end{proof} The following lemma is a consequence of the two propositions above. \begin{lem}\label{PPP.sec.4.lem.2} Assume that $p$ is even and that $p/2$ is odd. Further assume that either $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)<p$ or $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k\leq{p/2}$. If \begin{itemize} \item[(i)] $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ is distinct from $p/2$, \item[(ii)] $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k$ is distinct from $p/2$, \end{itemize} then there exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ satisfying $\kappa(\mathcal{X})\leq{k}$ and $|\mathcal{X}|=\lfloor\lambda(2,k)\rfloor$. Otherwise, there also exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ such that $\kappa(\mathcal{X})\leq{k}$, but whose cardinality is $\lfloor\lambda(2,k)\rfloor-1$ instead of $\lfloor\lambda(2,k)\rfloor$.
\end{lem} \begin{proof} Assume that $p$ is even and $p/2$ is odd and recall that $$ \lambda(2,k)=\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}\mbox{.} $$ Let us first treat the case when $$ k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)<p\mbox{.} $$ Observe that, if $k$ is less than $\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+p/2$ or $k$ is equal to this quantity but $p$ is greater than $2$, then we can take $\mathcal{X}=B(2,p-1)\cap\mathbb{P}^2_\circ$. If $p=2$ and $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)\geq{p/2}$, observe that \begin{equation}\label{PPP.sec.4.lem.2.eq.1} \left\lfloor\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}\right\rfloor\!=1\mbox{.} \end{equation} Let $\mathcal{X}$ be the union of $B(2,p-1)\cap\mathbb{P}^2_\circ$ with the singleton that contains the point whose two coordinates are equal to $1$. This set has cardinality $\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+1$ and satisfies $\kappa(\mathcal{X})\leq\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+p/2$, as desired. Therefore, the lemma holds in this case. If $p>2$ and $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)>p/2$, then (\ref{PPP.sec.4.lem.2.eq.1}) also holds. Therefore, a set $\mathcal{X}$ with the desired properties is provided by Proposition \ref{PPP.sec.4.prop.1}. Now assume that $$ \kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k\leq\frac{p}{2}\mbox{.} $$ If $p$ is equal to $2$, then consider the set $\mathcal{X}$ obtained from $B(2,p)\cap\mathbb{P}^2_\circ$ by removing the point whose two coordinates are equal to $1$. This set has cardinality $\left|B(2,p)\cap\mathbb{P}^2_\circ\right|-1$, as desired, and satisfies $\kappa(\mathcal{X})\leq\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-p/2$. Therefore, the lemma holds in this case. 
If $p>2$ and $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k<p/2$, a set $\mathcal{X}$ with the desired properties is provided by Proposition \ref{PPP.sec.4.prop.2}, and if $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k=p/2$, then consider any point $x$ of $1$-norm $p$ in $B(2,p)\cap\mathbb{P}^2_\circ$. Any set obtained by removing the two elements of $C\mathord{\cdot}x$ from $B(2,p)\cap\mathbb{P}^2_\circ$ will have the desired properties. \end{proof} We are ready to build the announced subsets of $\mathbb{P}^2_\circ$. The construction relies on Lemma \ref{PPP.sec.4.lem.2} when $k$ is close to $\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ or to $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)$, and on Lemma \ref{PPP.sec.3.5.lem.2} otherwise. We also use the property that, if $n$ is an odd integer and $m$ a positive integer, then $n-2^m$ and $n+2^m$ are relatively prime (indeed, their greatest common divisor is odd and divides $2^{m+1}$, hence equals $1$). \begin{lem}\label{PPP.sec.4.lem.3} Assume that $p$ is even and that $p/2$ is odd. If \begin{itemize} \item[(i)] $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ is distinct from $p/2$, \item[(ii)] $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k$ is distinct from $p/2$, \end{itemize} then there exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ satisfying $\kappa(\mathcal{X})\leq{k}$ and $|\mathcal{X}|=\lfloor\lambda(2,k)\rfloor$. Otherwise, there also exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ such that $\kappa(\mathcal{X})\leq{k}$, but whose cardinality is $\lfloor\lambda(2,k)\rfloor-1$ instead of $\lfloor\lambda(2,k)\rfloor$. \end{lem} \begin{proof} Assume that $p$ is even and that $p/2$ is odd. We will split the proof into three cases. The first case, when $k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ is less than $p$ or $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-k$ is at most $p/2$, is taken care of by Lemma \ref{PPP.sec.4.lem.2}.
Now assume that \begin{equation}\label{PPP.sec.4.lem.3.eq.1} \kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+p\leq{k}<\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-\frac{p}{2}\mbox{,} \end{equation} and that $p$ is at least $10$. In this case, we are going to use Lemma \ref{PPP.sec.3.5.lem.2}. Consider the point $a$ of $\mathbb{R}^2$ whose first coordinate is $p/2-2$ and whose second coordinate is $p/2+2$. Further consider the point $b$ obtained from $a$ by negating the second coordinate and the point $c$ whose first coordinate is $p/2+4$ and whose second coordinate is $p/2-4$. These points all have $1$-norm $p$ and, as $p/2$ is odd, they are primitive. Since the orbits of these points under the action of $C$ on $\mathbb{R}^2$ have cardinality $2$, and since these orbits are pairwise disjoint, the set $$ \mathcal{Y}=(C\mathord{\cdot}a)\cup(C\mathord{\cdot}b)\cup(C\mathord{\cdot}c) $$ has cardinality $6$. Let us consider an integer $j$ such that $2\leq{j}\leq4$. We will exhibit a subset $\mathcal{S}$ of $j$ elements of $\mathcal{Y}$ that satisfies (\ref{PPP.sec.3.5.lem.2.eq.0.5}), as required by Lemma~\ref{PPP.sec.3.5.lem.2}. If $j$ is equal to $2$, we take for $\mathcal{S}$ the set $\{a,\tau(a)\}$. If $j$ is equal to $3$, we take $\mathcal{S}=\{a,b,c\}$, and if $j$ is equal to $4$, we take $\mathcal{S}=\{a,\tau(a),b,\tau(b)\}$. By the choice of $a$, $b$, and $c$, $\mathcal{S}$ satisfies (\ref{PPP.sec.3.5.lem.2.eq.0.5}). Therefore, according to Lemma \ref{PPP.sec.3.5.lem.2}, there exists a subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ such that $\kappa(\mathcal{X})\leq{k}$ and $|\mathcal{X}|=\lfloor\lambda(2,k)\rfloor$, as desired. Now assume that (\ref{PPP.sec.4.lem.3.eq.1}) still holds, but that $p=6$.
Recall that $$ \lambda(2,k)=\left|B(2,p-1)\cap\mathbb{P}^2_\circ\right|+\frac{k-\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}{p/2}\mbox{.} $$ It follows from Theorem \ref{PPP.sec.0.thm.1} that $$ \kappa\!\left(B(2,6)\cap\mathbb{P}^2_\circ\right)-\kappa\!\left(B(2,5)\cap\mathbb{P}^2_\circ\right)=12\mbox{.} $$ Hence, according to (\ref{PPP.sec.4.lem.3.eq.1}), $6\leq{k-\kappa\!\left(B(2,5)\cap\mathbb{P}^2_\circ\right)}\leq8$. In this range, $$ \left\lfloor\frac{k-\kappa\!\left(B(2,5)\cap\mathbb{P}^2_\circ\right)}{6/2}\right\rfloor\!=2\mbox{.} $$ Therefore, the subset $\mathcal{X}$ of $\mathbb{P}^2_\circ$ made up of the points from $B(2,5)\cap\mathbb{P}^2_\circ$ together with the two primitive points of $1$-norm $6$ whose coordinates are $1$ and $5$ satisfies the desired properties as it has cardinality $\left|B(2,5)\cap\mathbb{P}^2_\circ\right|+2$ and $$ \kappa(\mathcal{X})\leq\kappa\!\left(B(2,5)\cap\mathbb{P}^2_\circ\right)+6\mbox{.} $$ Finally, note that no other case needs to be treated. In particular, when $p$ is equal to $2$, it follows from Theorem \ref{PPP.sec.0.thm.1} that $\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)=1$ and $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)=3$. Therefore, in this case, (\ref{PPP.sec.4.lem.3.eq.1}) is never satisfied. \end{proof} For any positive integer $p$, let $I_p$ be the set of the odd multiples of $p/2$ that lie between $\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)$ and $\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)$. Denote $$ \mathbb{E}_e=\bigcup_{p=1}^\infty{I_{4p}}\mbox{.} $$ Further consider the set $$ \mathbb{E}_o=\bigcup\left\{\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)+\frac{p}{2},\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)-\frac{p}{2}\right\}\!\mbox{,} $$ where the union ranges over the even numbers $p$ greater than $2$ such that $p/2$ is odd. Denote $\mathbb{E}=\mathbb{E}_e\cup\mathbb{E}_o$. 
With this notation, the expression for $\delta_z(2,k)$ provided by Theorem \ref{PPP.sec.0.thm.2} is a consequence of Lemmas \ref{PPP.sec.3.lem.0.5}, \ref{PPP.sec.3.lem.1}, \ref{PPP.sec.3.lem.2}, \ref{PPP.sec.3.5.lem.1}, \ref{PPP.sec.4.lem.1}, and \ref{PPP.sec.4.lem.3}. We now estimate the density of $\mathbb{E}$ within $\mathbb{N}$ in the $2$-dimensional case, which completes the proof of Theorem \ref{PPP.sec.0.thm.2}. \begin{lem}\label{PPP.sec.4.lem.4} If $d$ is equal to $2$, then $\displaystyle\lim_{k\rightarrow\infty}\frac{\left|\mathbb{E}\cap[1,k]\right|}{k}=0$. \end{lem} \begin{proof} Observe that there are exactly $$ \frac{\kappa\!\left(B(2,i)\cap\mathbb{P}^2_\circ\right)-\kappa\!\left(B(2,i-1)\cap\mathbb{P}^2_\circ\right)}{i} $$ odd multiples of $i/2$ between $\kappa\!\left(B(2,i-1)\cap\mathbb{P}^2_\circ\right)$ and $\kappa\!\left(B(2,i)\cap\mathbb{P}^2_\circ\right)$. According to Theorem \ref{PPP.sec.0.thm.1}, this quantity is precisely $J_1(i)$. Hence, when $d=2$, $$ \left|\mathbb{E}\cap[1,k]\right|\leq\sum_{i=1}^pJ_1(i)\mbox{,} $$ where $p$ is the smallest integer such that $k<\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)$. Therefore, \begin{equation}\label{PPP.sec.4.lem.4.eq.1} \frac{\left|\mathbb{E}\cap[1,k]\right|}{k}\leq\frac{1}{\kappa\!\left(B(2,p-1)\cap\mathbb{P}^2_\circ\right)}\sum_{i=1}^pJ_1(i)\mbox{.} \end{equation} Now recall that $J_1(i)$ is Euler's totient function (see, for instance, \cite{HardyWright1938}). In particular, we have the following estimate: $$ \lim_{p\rightarrow\infty}\frac{1}{p^2}\sum_{i=1}^pJ_1(i)=\frac{3}{\pi^2}\mbox{.} $$ In addition, according to Theorem 4.2 from \cite{DezaPourninSukegawa2020}, $$ \lim_{p\rightarrow\infty}\frac{\kappa\!\left(B(2,p)\cap\mathbb{P}^2_\circ\right)}{p^3}=\frac{1}{3\zeta(2)}\mbox{.} $$ Hence, letting $k$ go to infinity in (\ref{PPP.sec.4.lem.4.eq.1}) proves the lemma.
\end{proof} \section{The geometry of packed primitive point sets}\label{PPP.sec.5} In this section, we gather results on which sets of primitive points solve our packing problem. The first one is Theorem \ref{PPP.sec.0.thm.4}, announced in the introduction, which we prove using the constructions from the previous two sections. \begin{proof}[Proof of Theorem \ref{PPP.sec.0.thm.4}] First observe that all the subsets $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $|\mathcal{X}|=\delta_z(d,k)$ and $\kappa(\mathcal{X})\leq{k}$ constructed in the proofs of Lemmas \ref{PPP.sec.3.5.lem.1}, \ref{PPP.sec.3.5.lem.2}, and \ref{PPP.sec.3.5.lem.3} satisfy the double inclusion \begin{equation}\label{PPP.sec.0.thm.4.eq.1} B(d,p-1)\cap\mathbb{P}^d_\circ\subset\mathcal{X}\subset{B(d,p)\cap\mathbb{P}^d_\circ}\mbox{,} \end{equation} where $p$ is the smallest integer such that $k$ is less than $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$. Hence, the theorem holds when $d>2$, as well as when $d=2$ and $p$ is odd. Now assume that $d=2$ and that $p$ is even. Observe that the subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $|\mathcal{X}|=\delta_z(d,k)$ and $\kappa(\mathcal{X})\leq{k}$ constructed in the proof of Lemma \ref{PPP.sec.4.lem.1} also satisfies (\ref{PPP.sec.0.thm.4.eq.1}). The same holds for the construction given in the proof of Lemma \ref{PPP.sec.4.lem.3} except when $p/2$ is odd, $p$ is greater than $2$, and either $$ k=\kappa\!\left(B(2,p-1)\cap\mathbb{P}^d_\circ\right)+\frac{p}{2}+1\mbox{,} $$ or $$ k=\kappa\!\left(B(2,p)\cap\mathbb{P}^d_\circ\right)-\frac{p}{2}+1\mbox{,} $$ the cases treated by Propositions \ref{PPP.sec.4.prop.1} and \ref{PPP.sec.4.prop.2}. According to these propositions, a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $|\mathcal{X}|=\delta_z(d,k)$ and $\kappa(\mathcal{X})\leq{k}$ cannot satisfy (\ref{PPP.sec.0.thm.4.eq.1}) in these cases, and the theorem follows.
\end{proof} Let us turn our attention to the uniqueness of the subsets of $\mathbb{P}^d_\circ$ that solve our primitive point packing problem. Recall that we need to prove that uniqueness is not achieved apart from the cases already identified in \cite{DezaPourninSukegawa2020}. It is noteworthy that this will not require any knowledge of the value of $\delta_z(d,k)$. Indeed, the general idea used to prove that several subsets of $\mathbb{P}^d_\circ$ solve our packing problem is to consider such a subset and to replace some of the points it contains without affecting its cardinality or increasing its image by $\kappa$. We will do this by means of the following proposition. \begin{prop}\label{PPP.sec.5.prop.1} For any point $z$ in $\mathbb{P}^d_\circ$, there exist a set $\mathcal{K}$ of $d$ points from $\mathbb{P}^d_\circ$, all of $1$-norm $\|z\|_1$, and a partition of $\mathcal{K}$ into $d/|C\mathord{\cdot}z|$ subsets, each of cardinality $|C\mathord{\cdot}z|$, such that, if $\mathcal{L}$ is any of these subsets, then $$ \kappa(C\mathord{\cdot}z)=\sum_{x\in\mathcal{L}}|x_i| $$ for every integer $i$ satisfying $1\leq{i}\leq{d}$. In addition, there exists a point of $\mathbb{P}^d_\circ$ whose orbit under the action of $C$ on $\mathbb{R}^d$ admits $\mathcal{L}$ as a subset. \end{prop} \begin{proof} Consider a point $z$ in $\mathbb{P}^d_\circ$ and note that, when $C\mathord{\cdot}z$ has cardinality $d$, we can take $\mathcal{K}=C\mathord{\cdot}z$. Assume that $|C\mathord{\cdot}z|$ is less than $d$. We can require without loss of generality that the first coordinate of $z$ is non-zero by exchanging $z$ for a point from $C\mathord{\cdot}z$ with that property. Pick an integer $j$ such that $$ 2\leq{j}\leq\frac{d}{|C\mathord{\cdot}z|}\mbox{.} $$ Consider the point $y$ such that $y_i=-z_i$ when $i$ is equal to $(j-1)|C\mathord{\cdot}z|$ and $y_i=z_i$ otherwise.
Denote $$ \mathcal{L}_j=\left\{\tau^i(y):0\leq{i}<|C\mathord{\cdot}z|\right\}\!\mbox{.} $$ By construction, $$ \kappa(C\mathord{\cdot}z)=\sum_{x\in\mathcal{L}_j}|x_i|\mbox{.} $$ Further denote $\mathcal{L}_1=C\mathord{\cdot}z$ and call $\mathcal{K}$ the union of $\mathcal{L}_1$ to $\mathcal{L}_{d/|C\mathord{\cdot}z|}$. By construction, $\mathcal{K}$ and its partition into the sets $\mathcal{L}_i$ have the desired properties. \end{proof} We now prove Theorem \ref{PPP.sec.0.thm.5}. Recall that, according to this theorem, there is a unique subset of $\mathbb{P}^d_\circ$ that solves our primitive point packing problem if and only if $k$ coincides with $\kappa(B(d,p)\cap\mathbb{P}^d_\circ)$ for some integer $p$. \begin{proof}[Proof of Theorem \ref{PPP.sec.0.thm.5}] When $k$ coincides with $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$ for some integer $p$, the result is provided by Corollary 3.2 from \cite{DezaPourninSukegawa2020}. Assume that $$ k\neq\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right) $$ for all $p$ and consider a subset $\mathcal{X}$ of $\mathbb{P}^d_\circ$ such that $|\mathcal{X}|=\delta_z(d,k)$ and $\kappa(\mathcal{X})\leq{k}$. Let us prove that there is another such subset of $\mathbb{P}^d_\circ$. If $\mathcal{X}$ is not invariant under the action of $C$ on $\mathbb{R}^d$, then there exists an integer $i$ such that $\tau^i(\mathcal{X})\neq\mathcal{X}$. However, $\tau^i(\mathcal{X})$ and $\mathcal{X}$ have the same cardinality and the same image by $\kappa$. Therefore, the theorem holds in this case, and we assume from now on that $\mathcal{X}$ is invariant under the action of $C$ on $\mathbb{R}^d$. We review two cases. Assume first that $\mathcal{X}$ is equal to $B(d,p)\cap\mathbb{P}^d_\circ$ for some integer $p$. By assumption, $k$ must then be greater than $\kappa(\mathcal{X})$.
Consider the point $a$ in $\mathcal{X}$ whose first coordinate is equal to $p-1$, whose second coordinate is equal to $1$, and all of whose other coordinates, if any, are equal to $0$. Observe that adding $1$ to any of the coordinates of $a$ other than the second one results in a primitive point $b$ of $1$-norm $p+1$. In addition, since $\mathcal{X}$ is invariant under the action of $C$ on $\mathbb{R}^d$, the equality $$ \sum_{x\in\mathcal{X}}|x_i|=\kappa(\mathcal{X}) $$ holds for every integer $i$ such that $1\leq{i}\leq{d}$. Hence, $$ \mathcal{Y}=(\mathcal{X}\mathord{\setminus}\{a\})\cup\{b\} $$ satisfies $\kappa(\mathcal{Y})=\kappa(\mathcal{X})+1$. Now recall that $k$ is greater than $\kappa(\mathcal{X})$. Therefore, $\mathcal{Y}$ is a subset of $\mathbb{P}^d_\circ$ distinct from $\mathcal{X}$ such that $|\mathcal{Y}|=\delta_z(d,k)$ and $\kappa(\mathcal{Y})\leq{k}$, as desired. Now assume that $\mathcal{X}$ is distinct from $B(d,p)\cap\mathbb{P}^d_\circ$ for every integer $p$. In this case, there exists a point $y$ in $\mathcal{X}$ and a point $z$ in $\mathbb{P}^d_\circ\mathord{\setminus}\mathcal{X}$ such that $\|z\|_1\leq\|y\|_1$. As $\mathcal{X}$ is invariant under the action of $C$ on $\mathbb{R}^d$, it must admit $C\mathord{\cdot}y$ as a subset and be disjoint from $C\mathord{\cdot}z$. Hence, if the orbits of $y$ and $z$ under the action of $C$ on $\mathbb{R}^d$ have the same cardinality, the set $$ \mathcal{Y}=(\mathcal{X}\mathord{\setminus}C\mathord{\cdot}y)\cup{C\mathord{\cdot}z} $$ is a subset of $\mathbb{P}^d_\circ$ distinct from $\mathcal{X}$ such that $|\mathcal{Y}|=\delta_z(d,k)$ and $\kappa(\mathcal{Y})\leq{k}$, as desired. Assume that $C\mathord{\cdot}y$ and $C\mathord{\cdot}z$ do not have the same cardinality.
According to Proposition \ref{PPP.sec.5.prop.1}, there exists a set $\mathcal{K}_y$ of $d$ points from $\mathbb{P}^d_\circ$ with the same $1$-norm as $y$, and a partition $\mathcal{P}_y$ of $\mathcal{K}_y$ into $d/|C\mathord{\cdot}y|$ subsets, all of cardinality $|C\mathord{\cdot}y|$, such that any of these subsets $\mathcal{L}$ satisfies $\kappa(C\mathord{\cdot}y)=\kappa(\mathcal{L})$. In addition, $\mathcal{L}$ is contained in the orbit of some point from $\mathbb{P}^d_\circ$ under the action of $C$ on $\mathbb{R}^d$. Therefore, either $\mathcal{L}$ is a subset of $\mathcal{X}$, or it is disjoint from it. If any of the sets $\mathcal{L}$ in $\mathcal{P}_y$ is disjoint from $\mathcal{X}$, then the set $$ \mathcal{Y}=(\mathcal{X}\mathord{\setminus}C\mathord{\cdot}y)\cup\mathcal{L} $$ is distinct from $\mathcal{X}$. Yet, it satisfies $|\mathcal{Y}|=\delta_z(d,k)$ and, as $\kappa(C\mathord{\cdot}y)$ coincides with $\kappa(\mathcal{L})$, so do $\kappa(\mathcal{Y})$ and $\kappa(\mathcal{X})$. Therefore, in this case the theorem holds. Now assume that $\mathcal{K}_y$ is a subset of $\mathcal{X}$. We proceed in the same way with $C\mathord{\cdot}z$. More precisely, by Proposition \ref{PPP.sec.5.prop.1}, there exists a set $\mathcal{K}_z$ made up of $d$ points from $\mathbb{P}^d_\circ$ with the same $1$-norm as $z$ and a partition of it into $d/|C\mathord{\cdot}z|$ subsets of cardinality $|C\mathord{\cdot}z|$ such that, if $\mathcal{L}$ is one of these subsets, then for every $i$, \begin{equation}\label{PPP.sec.0.thm.5.eq.1} \kappa(C\mathord{\cdot}z)=\sum_{x\in\mathcal{L}}|x_i|\mbox{.} \end{equation} Again, $\mathcal{L}$ is either a subset of $\mathcal{X}$ or disjoint from it. If any of the subsets in the partition of $\mathcal{K}_z$, say $\mathcal{L}$, is a subset of $\mathcal{X}$, then the set $$ \mathcal{Y}=(\mathcal{X}\mathord{\setminus}\mathcal{L})\cup{C\mathord{\cdot}z} $$ is distinct from $\mathcal{X}$.
Moreover, by construction, it satisfies $|\mathcal{Y}|=\delta_z(d,k)$. Further note that, since (\ref{PPP.sec.0.thm.5.eq.1}) holds for every $i$, $\kappa(\mathcal{Y})$ must be equal to $\kappa(\mathcal{X})$. As a consequence, the theorem also holds in this case. Assume that $\mathcal{K}_z$ is disjoint from $\mathcal{X}$. In this case the set $$ \mathcal{Y}=(\mathcal{X}\mathord{\setminus}\mathcal{K}_y)\cup\mathcal{K}_z $$ has cardinality $\delta_z(d,k)$. Moreover, by construction, for every $i$, $$ \frac{d\kappa(C\mathord{\cdot}y)}{|C\mathord{\cdot}y|}=\sum_{x\in\mathcal{K}_y}|x_i|\mbox{.} $$ According to Proposition \ref{PPP.sec.3.prop.1}, the left-hand side of this equality is precisely the $1$-norm of $y$. Similarly, for every $i$, $$ \|z\|_1=\sum_{x\in\mathcal{K}_z}|x_i|\mbox{.} $$ Now recall that $\|z\|_1\leq\|y\|_1$. As a consequence, $\kappa(\mathcal{Y})\leq\kappa(\mathcal{X})$. \end{proof} \section{Asymptotic estimates}\label{PPP.sec.1.5} In this section we provide sharp asymptotic estimates for some of the quantities we studied in the previous sections. We begin with the asymptotic behavior of $c_\psi(p,d)$ when $d$ is fixed and $p$ grows large. Recall that this quantity is the number of primitive points of $1$-norm $p$ contained in the positive orthant $]0,+\infty[^d$ or, in combinatorial terms, the number of compositions of $p$ into $d$ relatively prime integers. Let us first observe that there is no such asymptotic estimate in the $2$-dimensional case. Indeed, according to Theorem \ref{PPP.sec.1.thm.1}, when $p\geq2$, $$ c_\psi(p,2)=J_1(p)\mbox{.} $$ Recall that $J_1(p)$ coincides with Euler's totient function. In particular, we have the following, where $\gamma$ stands for Euler's constant (see for instance \cite{HardyWright1938}). \begin{prop}\label{PPP.sec.1.prop.2} The following equalities hold.
\begin{enumerate} \item[(i)] $\displaystyle\liminf_{p\to\infty}\frac{c_\psi(p,2)}{p}\log\log p=e^{-\gamma}$, \item[(ii)] $\displaystyle\limsup_{p\to\infty}\frac{c_\psi(p,2)}{p}=1$. \end{enumerate} \end{prop} Let us now consider dimensions greater than $2$. It is an immediate consequence of Theorem \ref{PPP.sec.0.thm.1} that \begin{equation}\label{PPP.sec.1.5.eq.1} \left|S(d,p)\cap\mathbb{P}^d_\circ\right|=\frac{1}{2}\sum_{j=1}^{d}2^j{d\choose{j}}c_\psi(p,j)\mbox{.} \end{equation} Moreover, it follows from Theorem 2.2 of \cite{DezaPourninSukegawa2020} that \begin{equation}\label{PPP.sec.1.5.eq.2} \lim_{p\to\infty}\frac{\left|B(d,p)\cap\mathbb{P}^d_\circ\right|}{p^d}=\frac{2^{d-1}}{d!\zeta(d)}\mbox{,} \end{equation} where $\zeta$ stands for Riemann's zeta function. These two equalities provide the asymptotic behavior of $c_\psi(p,d)$ when $d$ is fixed and $p$ grows large. \begin{thm}\label{PPP.sec.1.5.thm.1} When $d\geq3$, $\displaystyle\lim_{p\to\infty}\frac{c_\psi(p,d)}{p^{d-1}}=\frac{1}{(d-1)!\zeta(d)}$. \end{thm} \begin{proof} Assume that $d\geq3$. According to (\ref{PPP.sec.1.5.eq.1}), \begin{equation}\label{PPP.sec.1.5.thm.1.eq.1} \frac{c_\psi(p,d)}{p^{d-1}}=\frac{\left|S(d,p)\cap\mathbb{P}^d_\circ\right|}{2^{d-1}p^{d-1}}-\sum_{j=1}^{d-1}\frac{1}{2^{d-j}}{d\choose{j}}\frac{c_\psi(p,j)}{p^{d-1}}\mbox{.} \end{equation} Moreover, it follows from (\ref{PPP.sec.1.5.eq.2}) that \begin{equation}\label{PPP.sec.1.5.thm.1.eq.2} \displaystyle\lim_{p\to\infty}\frac{\left|S(d,p)\cap\mathbb{P}^d_\circ\right|}{p^{d-1}}=\frac{2^{d-1}}{(d-1)!\zeta(d)}\mbox{.} \end{equation} Using (\ref{PPP.sec.1.5.thm.1.eq.1}) and (\ref{PPP.sec.1.5.thm.1.eq.2}), the theorem can be established by induction on $d$. Indeed, recall that $c_\psi(p,1)=0$ when $p\geq2$.
In addition, by Proposition \ref{PPP.sec.1.prop.2}, $$ \displaystyle\lim_{p\to\infty}\frac{c_\psi(p,2)}{p^2}=0\mbox{.} $$ Hence, combining (\ref{PPP.sec.1.5.thm.1.eq.1}) and (\ref{PPP.sec.1.5.thm.1.eq.2}) proves the theorem when $d$ is equal to $3$. Now assume that $d$ is greater than $3$ and that the theorem holds for all dimensions less than $d$ and greater than $2$. In this case, by induction, $$ \displaystyle\lim_{p\to\infty}\frac{c_\psi(p,j)}{p^{d-1}}=0 $$ when $3\leq{j}<d$, and according to Proposition \ref{PPP.sec.1.prop.2}, $$ \displaystyle\lim_{p\to\infty}\frac{c_\psi(p,2)}{p^{d-1}}=0\mbox{.} $$ As $c_\psi(p,1)=0$ when $p$ is greater than $1$, we just need to combine (\ref{PPP.sec.1.5.thm.1.eq.1}) and (\ref{PPP.sec.1.5.thm.1.eq.2}) in order to get the desired asymptotic estimate. \end{proof} We also give the asymptotic behavior of $\left|B(d,p)\cap\mathbb{P}^d_\circ\right|$ and $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$ when $p$ is fixed and $d$ grows large. More precisely, we prove the following. \begin{lem}\label{PPP.sec.1.5.lem.1} $\displaystyle\lim_{d\to\infty}\frac{\left|B(d,p)\cap\mathbb{P}^d_\circ\right|}{d^p}=\frac{2^{p-1}}{p!}$ and $\displaystyle\lim_{d\to\infty}\frac{\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)}{d^{p-1}}=\frac{2^{p-1}}{(p-1)!}$. \end{lem} \begin{proof} Observe that, when $d>p$, $c_\psi(p,d)=0$. In this case, (\ref{PPP.sec.1.5.eq.1}) yields \begin{equation}\label{PPP.sec.1.5.lem.1.eq.1} \frac{\left|B(d,p)\cap\mathbb{P}^d_\circ\right|}{d^p}=\frac{\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|}{d^p}+\frac{1}{2}\sum_{j=1}^p\frac{2^j}{d^p}{d\choose{j}}c_\psi(p,j)\mbox{.} \end{equation} The announced asymptotic behavior for $\left|B(d,p)\cap\mathbb{P}^d_\circ\right|$ can be obtained from this equality by induction on $p$. Indeed, as observed in \cite{DezaManoussakisOnn2018} (see Theorem 3.2 therein), $\left|B(d,2)\cap\mathbb{P}^d_\circ\right|=d^2$, which provides the base case for the induction.
Now assume that $p$ is greater than $2$ and note that the limit $$ \lim_{d\to\infty}\frac{1}{d^p}{d\choose j} $$ is equal to $0$ when $j<p$ and to $1/p!$ when $j=p$. Moreover, by induction, $$ \lim_{d\to\infty}\frac{\left|B(d,p-1)\cap\mathbb{P}^d_\circ\right|}{d^p}=0\mbox{.} $$ Since, in addition, $c_\psi(p,p)=1$, letting $d$ grow large in (\ref{PPP.sec.1.5.lem.1.eq.1}) results in the desired asymptotic estimate for $\left|B(d,p)\cap\mathbb{P}^d_\circ\right|$ when $p$ is fixed and $d$ goes to infinity. Now recall that, as a consequence of Proposition \ref{PPP.sec.1.prop.1}, $$ \kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)-\kappa\!\left(B(d,p-1)\cap\mathbb{P}^d_\circ\right)=\frac{p\!\left|S(d,p)\cap\mathbb{P}^d_\circ\right|}{d}\mbox{.} $$ By another induction on $p$, this equality, together with our asymptotic estimate for $\left|B(d,p)\cap\mathbb{P}^d_\circ\right|$, provides the announced estimate for $\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)$. Note that the base case is given, again, by Theorem 3.2 from \cite{DezaManoussakisOnn2018}. \end{proof} The following estimate is an immediate consequence of Lemma \ref{PPP.sec.1.5.lem.1}. \begin{thm}\label{PPP.sec.1.5.thm.2} $\displaystyle\lim_{d\to\infty}\frac{1}{d}\frac{\left|B(d,p)\cap\mathbb{P}^d_\circ\right|}{\kappa\!\left(B(d,p)\cap\mathbb{P}^d_\circ\right)}=\frac{1}{p}$. \end{thm}
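These counts are easy to check by brute force for small parameters. The sketch below is purely illustrative and is not part of the argument above; it assumes only the combinatorial description of $c_\psi(p,d)$ as the number of compositions of $p$ into $d$ relatively prime positive integers, and recovers $c_\psi(p,2)=J_1(p)$ on small inputs.

```python
from itertools import product
from math import gcd
from functools import reduce

def c_psi(p, d):
    # Number of compositions of p into d relatively prime positive integers.
    count = 0
    for head in product(range(1, p), repeat=d - 1):
        last = p - sum(head)
        if last >= 1 and reduce(gcd, head + (last,)) == 1:
            count += 1
    return count

# c_psi(p, 2) agrees with Euler's totient for p >= 2:
assert c_psi(10, 2) == 4   # phi(10) = 4
assert c_psi(7, 2) == 6    # phi(7) = 6
# In dimension 3, only the composition (2, 2, 2) of 6 fails coprimality:
assert c_psi(6, 3) == 9
# For a prime p every composition is coprime, so c_psi(7, 3) = C(6, 2):
assert c_psi(7, 3) == 15
```

Such exhaustive enumeration runs in time $O(p^{d-1})$, so it is only suited to verifying the closed formulas on small instances.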
https://arxiv.org/abs/2206.06981
Generalized graph splines and the Universal Difference Property
We study the generalized graph splines introduced by Gilbert, Tymoczko, and Viel and focus on an attribute known as the Universal Difference Property (UDP). We prove that paths, trees, and cycles satisfy UDP. We explore UDP on graphs pasted at a single vertex and use Prüfer domains to illustrate that not every edge labeled graph satisfies UDP. We show that UDP must hold for any edge labeled graph over a ring $R$ if and only if $R$ is a Prüfer domain. Lastly, we prove that UDP is preserved by isomorphisms of edge labeled graphs.
\section{Introduction}\label{intro} Splines are perhaps best known for their usage in analysis and for their applications in finding approximate solutions to differential equations, but splines also appear in a variety of other contexts including geometry and topology. To unify these various notions of splines, Gilbert, Tymoczko, and Viel defined generalized splines on graphs in \cite{Gilbert}, and it is these generalized graph splines we consider here. \begin{definition} Given a graph $G= (V,E)$ and a ring $R$, an \textit{edge labeling of $G$} is a function $\alpha: E \to \mathcal{I}(R)$, where $\mathcal{I}(R)$ is the set of all ideals in $R$. The pair $(G,\alpha)$ denotes a graph $G$ with edge labeling $\alpha$. \end{definition} \begin{definition} A \textit{generalized spline} on $(G,\alpha)$ over a ring $R$ is a function $\rho: V \to R$ such that for each edge $uv$, the difference $\rho(u) - \rho(v)$ is an element of $\alpha(uv)$. \end{definition} Note that when working with a graph $G$ and a ring $R$, an edge labeling of $G$ labels the edges of $G$ with ideals in $R$, while a generalized spline labels the vertices of $G$ with elements of $R$. Simply put, edges are labeled with ideals, while vertices are labeled with ring elements. The ring $R$ is called the base ring for the edge labeled graph $(G, \alpha)$, and henceforth we shall refer to generalized splines simply as splines. When working with an edge labeled graph $(G,\alpha)$ with $|V(G)|=n$, we often number the vertices of $G$ as $v_1, v_2, \ldots, v_n$ and denote a spline $\rho$ on $G$ by $\rho=\left(\rho(v_1), \rho(v_2),\ldots, \rho(v_n)\right)$. \begin{ex}\label{spline on 3cycle} Figure \ref{3cycle} illustrates an edge labeled graph with base ring $\mathbb{Z}$. We see that $f=(2,16, 26)$ is a spline on the edge labeled $3$-cycle. 
\begin{figure}[H] \begin{center} \scalebox{0.2}{\includegraphics{fig1.pdf}} \caption{An edge labeled $3$-cycle} \label{3cycle} \end{center} \end{figure} \end{ex} One of the primary results of \cite{Gilbert} is given in Proposition \ref{space of splines is a ring and an R-module}. \begin{proposition}\label{space of splines is a ring and an R-module} \cite{Gilbert} Fix a ring $R$ and a graph $G$ with edge labeling $\alpha$. The set of all splines on $(G,\alpha)$ is both a ring and an $R$-module. We denote this set by $R_{(G, \alpha)}$. \end{proposition} The ring structure of $R_{(G,\alpha)}$ arises in algebraic and geometric topology. The equivariant cohomology ring of a GKM-manifold $X$ corresponds to the generalized spline ring on the moment graph of $X$ over the multivariate polynomial ring with complex coefficients \cite{GKM}. Given an edge labeled graph $(G,\alpha)$ over a ring $R$, a central question in spline theory is whether the set of splines $R_{(G, \alpha)}$ is free as an $R$-module. When it is free, often the next task is to find a basis for $R_{(G,\alpha)}$. The module structure of $R_{(G,\alpha)}$ and the freeness and bases of $R_{(G,\alpha)}$ have been studied by many mathematicians~\cite{Alt2,Alt,Bow,Tym, Dip,Gjo,Hand,Mah,Phi}. Specifically, in addition to Proposition \ref{space of splines is a ring and an R-module}, another central result of \cite{Gilbert} is the description of a basis for the space of splines on an edge labeled tree. Work has been done to find a basis for the space of splines on an edge labeled cycle, over $\mathbb{Z}$ in \cite{Bow} and \cite{Hand} and over $\mathbb{Z}/m\mathbb{Z}$ by Bowden and Tymoczko in \cite{Tym}. Anders, Crans, Foster-Greenwood, Mellor, and Tymoczko investigated graphs admitting only constant splines in \cite{Anders}. It is worth noting that the freeness of $R_{(G,\alpha)}$ depends on the graph $G$ and the base ring $R$, but it does not depend on the ordering of the vertices of $G$.
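Over $\mathbb{Z}$, every ideal is principal, so the spline condition reduces to a divisibility check on each edge. The sketch below is only an illustration of the definition; the $3$-cycle and its edge generators used here are hypothetical and are not the labels of Figure \ref{3cycle}.

```python
def is_spline(vertex_labels, edges):
    # edges: list of triples (u, v, g), meaning that the edge uv carries
    # the ideal <g> of Z; the spline condition asks that
    # vertex_labels[u] - vertex_labels[v] lie in <g>, i.e. be divisible by g.
    return all((vertex_labels[u] - vertex_labels[v]) % g == 0
               for u, v, g in edges)

# A hypothetical edge labeled 3-cycle over Z:
edges = [(0, 1, 3), (1, 2, 5), (0, 2, 2)]
assert is_spline([0, 3, 8], edges)
assert is_spline([7, 10, 15], edges)    # shifting by a constant preserves splines
assert not is_spline([0, 1, 0], edges)  # 1 - 0 does not lie in <3>
```

As the second assertion suggests, constant tuples are always splines, and adding a constant to every vertex label preserves the spline condition.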
In Section \ref{UDP on graphs over PIDs}, we utilize a special type of spline, known as a flow-up class, which we define here. \begin{definition}\label{defn flow-up class} Let $(G,\alpha)$ be an edge labeled graph. Given a vertex $v_i\in V(G)$, an \textit{$i$-th flow-up class $f^{(i)}$} is a spline in $R_{(G,\alpha)}$ such that $f^{(i)}(v_i)\neq 0$ and $f^{(i)}(v_j)=0$ for all $j<i$. The entry $f^{(i)}(v_i)$ is the \textit{leading entry} of $f^{(i)}$. \end{definition} \begin{definition}\label{defn flow-up basis} Let $(G, \alpha)$ be an edge labeled graph. A basis for $R_{(G,\alpha)}$ consisting of flow-up classes is a \textit{flow-up basis} for $R_{(G,\alpha)}$. \end{definition} In \cite{Alt}, Alt{\i}nok and Sar{\i}o\u{g}lan showed that a flow-up basis must exist for $R_{(G,\alpha)}$ whenever the base ring is a principal ideal domain, and for a cycle over a PID, they outlined a process for finding the flow-up basis. Returning to Example \ref{spline on 3cycle}, we see that examples of flow-up classes on the graph in Figure \ref{3cycle} are $f^{(1)}=(1,1,1), f^{(2)}=(0,14,0)$, and $f^{(3)}=(0,0,4)$. Throughout this paper we use the following notation: for any $r \in R$, we let $\langle r \rangle$ denote the principal ideal of $R$ generated by $r$. For connected vertices $u$ and $w$ in a graph $G$, we denote the set of all paths in $G$ from $u$ to $w$ by $\mathcal{P}_{(u,w)}$. In \cite{Anders2}, Anders, Crans, Foster-Greenwood, Mellor, and Tymoczko proved the following theorem, noting that it enables us to compare the values of a spline on vertices that are in the same connected component of a graph. \begin{theorem}\label{thm2.1}\cite{Anders2} Suppose $u, w \in V(G)$ for a graph $(G, \alpha)$, and $P = \langle u, v_1, \dots, v_n, w\rangle$ is a path from $u$ to $w$. Let $\alpha(P) = \alpha(uv_1) + \alpha(v_{1}v_2) + \dots + \alpha(v_nw)$. If $\rho \in R_{(G, \alpha)}$, then $\rho(u) - \rho(w) \in \alpha(P)$. 
Moreover, if $P_1, P_2, \dots, P_m$ are paths from $u$ to $w$, then $\rho(u) - \rho(w) \in \bigcap_{i \in [m]} \alpha(P_i)$. \end{theorem} \begin{proof} Consider a graph $(G, \alpha)$ with $u, w \in V(G)$, and let $P = \langle u, v_1, \dots, v_n, w\rangle$ be a path from $u$ to $w$. Let $\alpha(P)$ be defined as above, and let $\rho$ be a spline on $(G, \alpha)$. Then \[ \begin{split} \rho(u) - \rho(w) &= \rho(u) - \rho(v_1) + \rho(v_1) - \cdots - \rho(v_n) + \rho(v_n) - \rho(w) \\ &= (\rho(u) - \rho(v_1)) + (\rho(v_1) - \rho(v_2)) + \cdots + (\rho(v_{n-1}) - \rho(v_n)) + (\rho(v_n) - \rho(w)). \end{split} \] Note that $\rho(u) - \rho(v_1) \in \alpha(uv_1),\; \rho(v_n) - \rho(w) \in \alpha(v_nw)$, and $\forall\, i \in [n-1]$, $\rho(v_i) - \rho(v_{i+1}) \in \alpha(v_iv_{i+1})$. Thus $\rho(u) - \rho(w) \in \alpha(P)$. Now let $P_1, \dots, P_m$ be paths between $u$ and $w$. We know that $\forall\, i \in [m]$, $\rho(u) - \rho(w) \in \alpha(P_i)$, so $\rho(u) - \rho(w) \in \bigcap_{i \in [m]} \alpha(P_i)$. \end{proof} The authors of \cite{Anders2} also asked when the converse to Theorem \ref{thm2.1} holds and posed the following question. \begin{?}\label{when does UDP hold} Let $(G, \alpha)$ be an edge labeled graph. Let $u, w$ be connected vertices in $(G, \alpha)$. Let $P_1, P_2, \dots, P_m$ be the paths in $G$ from $u$ to $w$. For each $i \in [m]$, let $\alpha(P_i)$ be the sum of the ideals labeling the edges in the path $P_i$. Let $x \in \bigcap_{i \in [m]} \alpha(P_i)$. Under what conditions does there exist a spline $\rho$ on $(G, \alpha)$ such that $\rho(u) - \rho(w) = x$? \end{?} To answer this question, the second, third, fourth, fifth, and seventh authors defined the Universal Difference Property, as described below.
\begin{definition} Let $(G, \alpha)$ be an edge labeled graph. If for every pair of connected vertices $u$ and $w$, with paths $P_1, P_2, \dots, P_m$ in $G$ from $u$ to $w$, and every $x \in \bigcap_{i \in [m]} \alpha(P_i)$ there exists a spline $\rho$ on $(G, \alpha)$ such that $\rho(u) - \rho(w) = x$, then we say that $G$ satisfies the \textit{Universal Difference Property}, abbreviated UDP. \end{definition} In Section \ref{paths trees cycles}, we show that UDP holds for edge labeled paths, trees, and cycles over any base ring $R$. In Section \ref{pasting graphs and counterexample}, we consider taking two graphs on which UDP is satisfied and pasting them at a single vertex. We determine a necessary and sufficient condition for UDP to hold on such a pasted graph. We then turn our attention to Prüfer domains and find an example of an edge labeled, pasted graph over a Prüfer domain that does not satisfy UDP. In Section \ref{UDP on graphs over PIDs}, we utilize zero paths and flow-up classes to show that UDP must be satisfied on an edge labeled graph over a principal ideal domain. We conclude in Section \ref{UDP and isomorphisms} by showing that the Universal Difference Property is a structural property of an edge labeled graph; that is, the Universal Difference Property is preserved by an isomorphism of edge labeled graphs. \section{Paths, Trees, and Cycles}\label{paths trees cycles} In this section we answer Question \ref{when does UDP hold} for some basic types of graphs. We begin with a lemma that will be used both in this section and in Section \ref{pasting graphs and counterexample}. \begin{lemma}\label{LEMMA} Suppose a graph $(G, \alpha)$ satisfies the Universal Difference Property. Let $u, w \in V(G)$ and let $x \in \bigcap_{P \in \mathcal{P}_{(u, w)}} \alpha(P)$. Let $\varphi$ be a spline on $G$ such that $\varphi(u) - \varphi(w) = x$. Let $v \in V(G)$ and $r \in R$. Then there exists a spline $\rho$ on $G$ such that $\rho(u) - \rho(w) = x$ and $\rho(v) = r$. \end{lemma} \begin{proof} Suppose $(G, \alpha)$ satisfies the Universal Difference Property.
Let $u, w\in V(G)$. Let $x \in \bigcap_{P \in \mathcal{P}_{(u, w)}} \alpha(P)$. Then there exists a spline $\varphi$ on $G$ such that $\varphi(u) - \varphi(w) = x$. Let $v \in V(G)$ and $r \in R$. Consider the value $s = r - \varphi(v) \in R$. Create a new function $\rho$ on $V(G)$ such that $\forall \,z \in V(G),\, \rho(z) = \varphi(z) + s$. Note that $\rho$ is a spline on $G$ because $\varphi$ was. Furthermore, $\rho(u) - \rho(w) = \varphi(u) + s - \varphi(w)-s = x$. Finally, note that $\rho(v) = r$, as desired. \end{proof} This means that we always have one free variable when we are manually building splines. It is often convenient to set this free variable to be 0. Now we will show that the Universal Difference Property must hold on any edge labeled path. \begin{theorem} \label{theorem:3.1} The Universal Difference Property holds when $G$ is a path. \end{theorem} \begin{proof} Let $G$ be a path and let $u, w \in V(G)$. Because $G$ itself is a path, there is a unique path from $u$ to $w$. Denote it by $P_1 = \langle u, v_1, v_{2}, \ldots, v_{n}, w\rangle$. We have $\bigcap_{P \in \mathcal{P}_{(u, w)}} \alpha(P) = \alpha(P_1)$. Let $x \in \alpha(P_1)$. Then by definition $$x = \alpha_1 + \dots + \alpha_{n+1}, $$ where $\alpha_1 \in \alpha(uv_1),\, \alpha_{n+1} \in \alpha(v_nw),\, \text{ and }\alpha_{\ell} \in \alpha(v_{\ell - 1} v_{\ell})\text{ for all }2 \leq \ell \leq n$. Let $r \in R$ be arbitrary and set $\rho(w) = r$. Now let $\rho(v_n) = \rho(w) + \alpha_{n+1}$. Note $\rho(v_n) - \rho(w) = \alpha_{n+1} \in \alpha(v_nw)$ by construction. For each $i \in [n-1]$, set $\rho(v_i) = \rho(w) + \sum_{j = i+1}^{n+1} \alpha_j$. Then for any $i \in [n-1]$, \[ \rho(v_i) - \rho(v_{i+1}) = \alpha_{i+1} \in \alpha(v_iv_{i+1}). \] Finally, set $\rho(u) = \rho(w) + \sum_{j \in [n+1]} \alpha_j$. We have $\rho(u) - \rho(v_1) = \alpha_1 \in \alpha(uv_1)$.
We can then find a spline for the rest of the path $G$ by giving the label $\rho(u)$ to all vertices coming before $u$ in the path $G$ and giving all of the vertices in $G$ after $w$ the label $\rho(w)$. Finally, note that \[ \rho(u) - \rho(w) = \rho(w) + \sum_{j \in [n+1]} \alpha_j - \rho(w) = \sum_{j \in [n+1]} \alpha_j = x. \] We conclude that the Universal Difference Property holds when $G$ is a path. \end{proof} \begin{example} Consider a path $P$ on $10$ vertices. Suppose our ring is $\mathbb{Z}$ and the edges of $P$ are labeled as in Figure \ref{fig:path}. Note that there is only one path $P_1$ from $u$ to $w$. It has edge labels $\langle 4 \rangle, \langle 6 \rangle, \langle 2 \rangle, \langle 8 \rangle, \langle 12 \rangle,$ and $\langle 20 \rangle$. Thus $\alpha(P_1) = \langle 2 \rangle$. Suppose $x$ is arbitrarily chosen to be 64. We can use the process described in the proof of Theorem \ref{theorem:3.1} to build a spline $\rho$ such that $\rho(u) - \rho(w) = x$. Since we can write $64=8+12+4+8+12+20$, using the algorithm described in the proof of Theorem \ref{theorem:3.1}, we can label the vertices in the path from $u$ to $w$ as $64, 56, 44, 40, 32, 20, 0$, respectively. All other vertices in $P$ are given the same label as their nearest neighbor in $P_1$. Note that by construction we have $\rho(u)-\rho(w)=64-0=x$. \begin{figure}[H] \begin{center} \scalebox{0.2}{\includegraphics{fig2.pdf}} \caption{Path example} \label{fig:path} \end{center} \end{figure} \end{example} \begin{theorem} The Universal Difference Property holds when $G$ is a tree. \end{theorem} \begin{proof} Let $G$ be a tree and $u, w\in V(G)$. Because $G$ is a tree, there exists a unique path $P_1$ between $u$ and $w$. Let $x \in \alpha(P_1)$ be arbitrary. One can create a spline $\rho$ on $G$ as follows. As in Theorem \ref{theorem:3.1}, create a spline on $P_1$ such that $\rho(u) - \rho(w) = x$. Then let $\mathcal{U}$ be the set of vertices in $G$ that remain unlabeled.
For any $v \in \mathcal{U}$, choose $z \in V(P_1)$ such that the unique path between $v$ and $z$ does not contain any elements of $P_1$ other than $z$. Assign $\rho(v) = \rho(z)$. We claim that this creates a spline. Indeed, let $a, b \in V(G)$ be adjacent. If $a, b \in V(P_1)$, by Theorem \ref{theorem:3.1} we know that $\rho(a) - \rho(b) \in \alpha(ab)$. If $a \in V(P_1)$ and $b \notin V(P_1)$, then $\rho(a) - \rho(b) = \rho(a) - \rho(a) = 0 \in \alpha(ab)$. The case where $a \notin V(P_1), b \in V(P_1)$ is similar. If $a, b \notin V(P_1)$, then $\rho(a) = \rho(b)$ because two adjacent vertices in $G$ not in $P_1$ must be closest to the same vertex on $P_1$. Otherwise, there would be a cycle in $G$. Then $\rho(a) - \rho(b) = 0 \in \alpha(ab)$. Because we have considered all cases, we may conclude that $\rho$ is a spline on the tree $G$. Since $\rho(u) - \rho(w) = x$ by construction, we see that UDP holds on the tree $G$. \end{proof} \begin{example} Suppose our ring is $\mathbb{Z}$. Consider the labeled tree in Figure \ref{tree_example} below. The path $P_1$ from $u$ to $w$ has edge labels $\langle 5 \rangle, \langle 5 \rangle, \langle 11 \rangle$, and $\langle 12 \rangle$. Thus $\alpha(P_1) = \langle 1 \rangle$. Suppose $x$ is arbitrarily chosen to be $53$. We can write $53 = 15 + 15 + 11 + 12$, and so, using the algorithm described in the proof of Theorem \ref{theorem:3.1}, we can label the vertices in the path from $u$ to $w$ as $53, 38, 23, 12$, and $0$, respectively. All other vertices on the tree are then given the same label as their nearest neighbor in $P_1$. Notice by construction we have $\rho(u) - \rho(w) = 53 - 0 = x$. \begin{figure}[H] \begin{center} \scalebox{0.2}{\includegraphics{fig3.pdf}} \caption{Tree example} \label{tree_example} \end{center} \end{figure} \end{example} \begin{theorem}\label{UDP on cycles} The Universal Difference Property holds when $G$ is a cycle. \end{theorem} \begin{proof} Let $G$ be a cycle. 
There will be exactly two internally disjoint paths between any two distinct vertices $u$ and $w$. Denote them as \[ P_1 = \langle u, v_1, \dots, v_n, w \rangle \] and \[ P_2 = \langle u, s_1, \dots, s_k, w \rangle. \] Let $x \in \alpha(P_1) \cap \alpha(P_2)$. Then we know that \begin{equation} x = \alpha_1 + \dots + \alpha_{n+1} \end{equation} and \begin{equation} x = \beta_1 + \dots + \beta_{k+1}, \end{equation} for $\alpha_i, \beta_j$ in the appropriate ideals of paths $P_1, P_2$, respectively. We construct a function $\rho: V \to R$ as follows. Let $r$ be an arbitrary element of the ring. Set $\rho(w) = r$ and let $\rho(v_n) =\rho(w)+ \alpha_{n+1}$. Note $\rho(v_n) - \rho(w) = \alpha_{n+1} \in \alpha(v_nw)$ by construction. For each $i \in [n-1]$, set $\rho(v_i) = \rho(w) + \sum_{j = i+1}^{n+1} \alpha_j$. Then for any $i \in [n-1]$, \[ \rho(v_{i}) - \rho(v_{i+1}) = \left(\rho(w) + \sum_{j = i+1}^{n+1} \alpha_j\right)-\left(\rho(w) + \sum_{j = i+2}^{n+1} \alpha_j\right) = \alpha_{i+1}. \] Finally, set $\rho(u) = \rho(w) + \sum_{j \in [n+1]} \alpha_j$, and observe that $\rho(u) - \rho(v_1) = \alpha_1 \in \alpha(uv_1)$ and $\rho(u) - \rho(w) = \sum_{i \in [n+1]} \alpha_i = x$. Assign labels to the vertices $s_1, \dots, s_k$ by following the same process. However, now we have $\rho(u) = r + \sum_{i \in [n+1]} \alpha_i$ and $\rho(u) = r + \sum_{j \in [k+1]} \beta_j$. For this vertex labeling to be a function, these two values must be equal; indeed they are since the sums in these equations are (1) and (2), respectively, which both equal $x$. Thus $\rho$ is a spline on the cycle with $\rho(u)-\rho(w)=x$, and the Universal Difference Property holds for cycles. \end{proof} \begin{example} Suppose our ring is $\mathbb{Z}$. Consider the edge labeling of the cycle in Figure \ref{fig:hendecagon} below. There are two paths between $u$ and $w$. 
The path $P_1$ has edge labels $\langle 2 \rangle, \langle 6 \rangle, \langle 8 \rangle, \langle 18 \rangle$, and $\langle 2 \rangle$. The path $P_2$ has edge labels $\langle 3 \rangle, \langle 5 \rangle, \langle 12 \rangle, \langle 11 \rangle, \langle 4 \rangle$, and $\langle 3 \rangle$. We have $\alpha(P_1) \cap \alpha(P_2) = \langle 2 \rangle \cap \langle 1 \rangle = \langle 2 \rangle$. Let $x = 48$. As described in the proof of Theorem \ref{UDP on cycles}, we can write $x$ in two different ways: $x = 8 + 12 + 8 + 18 + 2$ and $x = 0 + 15 + 12 + 11 + 4 + 6$. Labeling the vertices as shown in Figure \ref{fig:hendecagon} produces a spline $\rho$ on the cycle such that $\rho(u) - \rho(w) = 48 - 0 = x$. \end{example} \begin{figure}[H] \begin{center} \scalebox{0.23}{\includegraphics{fig4.pdf}} \caption{Cycle example} \label{fig:hendecagon} \end{center} \end{figure} \section{Pasting Graphs and an Example Where UDP Does Not Hold}\label{pasting graphs and counterexample} In this section, we examine when the Universal Difference Property holds on graphs pasted at a single vertex. Specifically, we develop a necessary and sufficient condition for UDP to hold on an edge labeled graph constructed by pasting together at a single vertex two edge labeled graphs on which UDP holds. Finally, we introduce Prüfer domains to allow us to provide an example of a graph on which UDP does not hold. Our first step toward the necessary and sufficient condition is the following lemma. \begin{lemma} \label{lemma:5.1} Let $G_1$ and $G_2$ be connected, edge labeled graphs satisfying the Universal Difference Property. Suppose $V(G_1) \cap V(G_2) = \{ z\}$. Let $G = G_1 \cup G_2$, where $G$ is created by pasting $G_1$ and $G_2$ at $z$. If $u$ and $w$ are both in $V(G_1)$ or both in $V(G_2)$, then for all $x \in \bigcap_{P \in \mathcal{P}_{(u, w)}} \alpha(P)$, there exists a spline $\rho$ on $G$ such that $\rho(u) - \rho(w) = x$.
\end{lemma} \begin{proof} Assume without loss of generality that $u, w \in V(G_1)$. Observe that no path from $u$ to $w$ can include an element of $V(G_2)$ other than $z$. If a path from $u$ to $w$ were to leave $G_1$, it would have to pass through $z$ by construction; but then to reach $w$, the path would have to return to $z$, violating the definition of a path. Thus each path from $u$ to $w$ involves only vertices in $G_1$. Since $G_1$ satisfies the Universal Difference Property, for any $x \in \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P)$ there exists a spline $\rho$ on $G_1$ such that $\rho(u) - \rho(w) = x$. Extend $\rho$ to the remaining vertices of $G$ by labeling all vertices in $G_2$ with $\rho(z)$. Then $\rho$ is a spline on $G$ satisfying $\rho(u) - \rho(w) = x$. \end{proof} Lemma \ref{lemma:5.1} addresses the case where $u$ and $w$ both belong to $V(G_1)$ or both belong to $V(G_2)$. What if exactly one of $u$ and $w$ is in $V(G_1)$ and the other is in $V(G_2)$? The next theorem addresses this case: it provides two conditions, either of which guarantees that UDP holds on $G_1\cup G_2$. \begin{theorem}\label{theorem:5.2} Let $G_1$ and $G_2$ be connected, edge labeled graphs satisfying the Universal Difference Property. Suppose $V(G_1) \cap V(G_2) = \{ z\}$. Let $G = G_1 \cup G_2$, where $G$ is created by pasting $G_1$ and $G_2$ at $z$. If, for all $u,w \in V(G_1 \cup G_2)$, either \begin{equation}\label{uw in uz} \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha (P) \subseteq \bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha (P) \end{equation} or \begin{equation}\label{uw in wz} \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha (P) \subseteq \bigcap_{P \in \mathcal{P}_{(w,z)}} \alpha (P) \end{equation} holds, then $(G_1 \cup G_2,\alpha)$ satisfies the Universal Difference Property. \end{theorem} \begin{proof} Let $u, w \in V(G)$. The case where $u$ and $w$ are both in $V(G_1)$ or $u$ and $w$ are both in $V(G_2)$ is covered in Lemma \ref{lemma:5.1}.
It remains to consider the case when $u$ and $w$ lie in different graphs $G_1$ and $G_2$. Assume without loss of generality that $u \in V(G_1)$ and $w \in V(G_2)$ with $u$ and $w$ both distinct from $z$. Suppose \eqref{uw in uz} holds and let $x \in \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P)$. Then $x \in \bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P)$ by assumption. Since $G_1$ satisfies the Universal Difference Property, there exists a spline $\rho$ on $(G_1, \alpha|_{E(G_1)})$ such that $\rho(u) - \rho(z) = x$; note that every path from $u$ to $w$ must pass through $z$. Now extend the domain of $\rho$ to $G_1 \cup G_2$ by mapping every element in $V(G_2)$ to $\rho(z)$. It is clear that $\rho$ is a spline on $G_1 \cup G_2$ and since $w \in V(G_2)$, we have $\rho(u) - \rho(w) = \rho(u) - \rho(z) = x$ as desired. A similar argument proves the case when \eqref{uw in wz} holds. \end{proof} \begin{example} Suppose our ring is $\mathbb{Z}$. Consider the graph with edge labeling shown below in Figure \ref{pastingg}. Note that $\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P) = \langle 2 \rangle$ and $\bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P) = \langle 2 \rangle$. If we let $x = 38$, then we can use the process described in Theorem \ref{theorem:5.2} to create a spline satisfying $\rho(u) - \rho(w) = 38 - 0 = x$. \begin{figure}[H] \begin{center} \scalebox{0.2}{\includegraphics{fig5.pdf}} \caption{Example of an edge labeled graph created by pasting at a single vertex two edge labeled subgraphs each satisfying UDP} \label{pastingg} \end{center} \end{figure} \end{example} The next lemma gives set containments that must hold for any graph $G=G_1\cup G_2$ pasted at a single vertex, regardless of whether $G_1$ and $G_2$ satisfy UDP. In the proof of this next lemma, we see that the containments that are the reverses of \eqref{uw in uz} and \eqref{uw in wz} must always hold. \begin{lemma} \label{prop:subset} Suppose $V(G_1) \cap V(G_2) = \{ z\}$.
Let $G = G_1 \cup G_2$, where $G$ is created by pasting $G_1$ and $G_2$ at $z$. Suppose $u \in V(G_1)$ and $w \in V(G_2)$. Then $$\left(\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P)\right) + \left(\bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P)\right) \subseteq \left(\bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P)\right).$$ \end{lemma} \begin{proof} Let $\mathcal{P}_{(u,w)} = \{P_1, P_2, \ldots, P_m\}$. For each $P_i$, let $P_i'$ denote the subpath of $P_i$ from $u$ to $z$. Since $\alpha(P_i')$ is a sum of a subset of the edge ideals whose sum is $\alpha(P_i)$, we have $\alpha(P_i') \subseteq \alpha(P_i)$ for every $1 \leq i \leq m$. Moreover, each $P_i'$ is itself a path from $u$ to $z$, so $$\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P) \subseteq \bigcap_{i=1}^m \alpha(P_i') \subseteq \bigcap_{i=1}^m \alpha(P_i) = \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P).$$ By symmetry we also have $$\bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P) \subseteq \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P).$$ Thus $$\left(\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P) \right) + \left(\bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P) \right) \subseteq \left(\bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P) \right).$$ \end{proof} We are now ready to state and prove our main result on pasted graphs, Theorem \ref{pasting}, which gives a necessary and sufficient condition for UDP to hold on an edge labeled graph constructed by pasting together at a single vertex two edge labeled graphs on which UDP holds. Theorem \ref{theorem:5.2} is in fact a consequence of Theorem \ref{pasting}: if either hypothesis of Theorem \ref{theorem:5.2} holds, then together with Lemma \ref{prop:subset} it forces Equation \eqref{importantequation} to be satisfied. \begin{theorem}\label{pasting} Let $G_1$ and $G_2$ be connected graphs such that the Universal Difference Property holds for each and $V(G_1) \cap V(G_2) = \{ z\}$. Let $G = G_1 \cup G_2$, where $G$ is created by pasting $G_1$ and $G_2$ at $z$.
Then $G$ satisfies the Universal Difference Property if and only if for all $u \in V(G_1), w \in V(G_2)$, we have that \begin{equation} \left(\bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P) \right) = \left(\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P) \right) + \left(\bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P) \right). \label{importantequation} \end{equation} \end{theorem} \begin{proof} Let $u, w \in V(G)$. The case where $u, w \in V(G_1)$ or $u, w \in V(G_2)$ is covered in Lemma \ref{lemma:5.1}. Hence, it remains to consider cases where, without loss of generality, $u \in V(G_1)$ and $w \in V(G_2)$. We will prove the forward direction by considering the contrapositive. Assume that Equation \eqref{importantequation} does not hold. By Lemma \ref{prop:subset}, we know it must be that the left side of (\ref{importantequation}) is not a subset of the right side. Then consider $x \in \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P)$ such that for all $s \in \bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P)$ and for all $t \in\bigcap_{P \in \mathcal{P}_{(z, w)}} \alpha(P)$, $x \neq s + t$. Suppose for the sake of contradiction that there is a spline $\rho$ on $G$ such that $\rho(u) - \rho(w) = x$. By Theorem \ref{thm2.1}, we know that $\rho(u) - \rho(z) \in \left(\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P) \right)$; similarly, $\rho(z) - \rho(w)\in \left(\bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P) \right)$. Let $s=\rho(u)-\rho(z)$ and $t=\rho(z)-\rho(w)$. Then \begin{align*} x &= \rho(u) - \rho(w) \\ &= \rho(u) - \rho(z) + \rho(z) - \rho(w) \\ &= s + t. \end{align*} This yields a contradiction, since we chose $x$ such that it would be impossible to write it in this form. Hence our assumption was incorrect; there is no spline on $G$ such that $\rho(u) - \rho(w) = x$. Therefore the Universal Difference Property does not hold in this case. Assume that Equation \eqref{importantequation} holds. Let $x \in\bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P)$. 
Then there exists $s \in\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P)$ and $t \in \bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P)$ such that $x = s + t$. Because $s \in\bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P)$, and $G_1$ satisfies the Universal Difference Property, there is a spline $\rho_{1}$ on $G_1$ such that $\rho_{1}(u) - \rho_{1}(z)= s$. Similarly, there is a spline $\rho_2$ on $G_2$ such that $\rho_2(z) - \rho_2(w) = t$, and by Lemma \ref{LEMMA} there is such a spline $\rho_2$ also satisfying $\rho_2(z) = \rho_1(z)$ since $\rho_1(z) \in R$. Now define the function $\rho$ on all of $G$ by \[ \rho(v) = \begin{cases} \rho_{1}(v), & v \in G_1 \\ \rho_{2}(v), & v \in G_2\text{.} \end{cases}\] Note that $\rho$ is well defined, since $\rho_1(z) = \rho_2(z)$, and is indeed a spline, since every edge of $G$ lies entirely in $G_1$ or entirely in $G_2$. Furthermore, \[ \rho(u) - \rho(w) = \rho_{1}(u) - \rho_{1}(z) + \rho_{2}(z) - \rho_{2}(w) = s + t = x. \] Therefore, combining this result with Lemma \ref{lemma:5.1}, we know that for any $u, w\in V(G)$ and all $x \in \bigcap_{P \in \mathcal{P}_{(u, w)}} \alpha(P)$, there is a spline $\rho$ such that $\rho(u) - \rho(w) = x$ provided that (\ref{importantequation}) is true. Hence UDP is satisfied precisely when Equation \eqref{importantequation} holds. \end{proof} \begin{example} Below is an example of a pasted graph $(G,\alpha)$ satisfying Equation \eqref{importantequation}. \begin{figure}[H] \begin{center} \scalebox{0.2}{\includegraphics{fig6.pdf}} \caption{Example of a pasted graph that satisfies Equation \eqref{importantequation}} \label{fig:bowtie} \end{center} \end{figure} We have $$\bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P) = \langle 1 \rangle ,\quad \bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P) = \langle 2 \rangle, \quad\text{and}\quad \bigcap_{P \in \mathcal{P}_{(w,z)}} \alpha(P) = \langle 3 \rangle,$$ and $\langle 2 \rangle + \langle 3 \rangle = \langle 1 \rangle$. Thus by Theorem \ref{pasting}, UDP holds for $(G, \alpha)$.
\end{example} Next we will exhibit an edge labeled graph $(G, \alpha)$ that satisfies the hypotheses of Theorem \ref{pasting} but does not satisfy Equation \eqref{importantequation}. This will be our first example of a graph on which the Universal Difference Property does not hold. Before considering this example, we need to define a Prüfer domain. \begin{definition} \cite{Jensen} A \textit{Prüfer domain} is a commutative ring without zero divisors in which every non-zero finitely generated ideal is invertible. \end{definition} \begin{proposition} \label{prop:5.10}\cite{Jensen} Let $R$ be a domain. The following conditions are equivalent: \begin{enumerate} \item $R$ is a Prüfer domain. \item If $I$, $J$, and $K$ are non-zero ideals of $R$, then $I + (J \cap K) = (I + J) \cap (I + K)$. \item If $I$, $J$, and $K$ are non-zero ideals of $R$, then $I \cap (J + K) = (I \cap J) + (I \cap K)$. \end{enumerate} \end{proposition} We now use a base ring that is not a Prüfer domain to exhibit a family of pasted graphs on which the Universal Difference Property does not hold. \begin{theorem} Let $R$ be an integral domain that is not a Prüfer domain. Then there exists an edge labeled graph $(G, \alpha)$ over $R$ that does not satisfy UDP. \end{theorem} \begin{proof} Because $R$ is not a Prüfer domain, by Proposition \ref{prop:5.10} there exist non-zero ideals $I, J, K$ of $R$ such that $I + (J \cap K) \neq (I + J) \cap (I + K)$. Let $G_1$ and $G_2$ be edge labeled cycles such that $V(G_1) \cap V(G_2) = \{z\}$. Let $G$ be the edge labeled graph formed by pasting $G_1$ and $G_2$ at $z$. Fix $u \in V(G_1)$ and $w \in V(G_2)$ such that neither $u$ nor $w$ is $z$. By Theorem \ref{pasting}, to show that our graph does not satisfy UDP, it suffices to show that Equation \eqref{importantequation} does not hold. Let $P_1, P_2$ be the two paths in $G_1$ from $u$ to $z$. Label the edges of $P_1$ with ideals of $R$ such that $\alpha(P_1) = I$, and label the edges of $P_2$ with ideals of $R$ such that $\alpha(P_2) = I$.
Now, turning to $G_2$, let $P_3$ and $P_4$ be the paths in $G_2$ from $z$ to $w$. Label the edges of $P_3$ with ideals of $R$ such that $\alpha(P_3) = J$, and label the edges of $P_4$ with ideals of $R$ such that $\alpha(P_4) = K$. This labeling is depicted in Figure \ref{fig:generalnotprufer}. Now \[ \bigcap_{P \in \mathcal{P}_{(u, w)}} \alpha(P) = (I + J) \cap (I + K), \] while \[ \bigcap_{P \in \mathcal{P}_{(u, z)}} \alpha(P) = I\] and \[ \bigcap_{P \in \mathcal{P}_{(z, w)}} \alpha(P) = J \cap K. \] Thus \begin{equation*} \bigcap_{P \in \mathcal{P}_{(u, z)}} \alpha(P) + \bigcap_{P \in \mathcal{P}_{(z, w)}} \alpha(P) =I + (J \cap K) \neq(I + J) \cap (I + K)=\bigcap_{P \in \mathcal{P}_{(u, w)} } \alpha(P). \label{notequality} \end{equation*} Hence Equation \eqref{importantequation} does not hold on this graph, and by Theorem $\ref{pasting}$, $(G, \alpha)$ does not satisfy UDP. \end{proof} \begin{figure}[H] \begin{center} \scalebox{0.2}{\includegraphics{fig7.pdf}} \caption{A graph labeled with certain ideals from a non-Prüfer domain} \label{fig:generalnotprufer} \end{center} \end{figure} \begin{example} For a more specific example, let $R = \mathbb{Z}[x]$ and $(G,\alpha)$ be as in Figure \ref{fig:notprufer}. Note that $\mathbb{Z}[x]$ is not a Prüfer domain. We have \begin{equation*} \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P) = \left(\langle 3 \rangle + \langle x + 3 \rangle\right) \cap \left(\langle 2 \rangle + \langle x - 3 \rangle\right) \cap \left(\langle 3 \rangle + \langle x - 3 \rangle\right) \cap \left(\langle 2 \rangle + \langle x + 3 \rangle\right), \end{equation*} while \begin{equation*} \bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P) = \langle 3 \rangle \cap \langle 2 \rangle = \langle 6 \rangle \end{equation*} and \begin{equation*} \bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P) = \langle x + 3 \rangle \cap \langle x - 3 \rangle = \langle x^2 - 9 \rangle. 
\end{equation*} \noindent Note that $x + 3 \in \bigcap_{P \in \mathcal{P}_{(u,w)}} \alpha(P)$ but $x + 3 \notin \bigcap_{P \in \mathcal{P}_{(u,z)}} \alpha(P)$ and $x + 3 \notin \bigcap_{P \in \mathcal{P}_{(z,w)}} \alpha(P)$. Furthermore, $x + 3 \notin \langle 6 \rangle + \langle x^2 - 9 \rangle$, and so by Theorem \ref{pasting}, UDP does not hold on the edge labeled graph in Figure \ref{fig:notprufer}. \begin{figure}[H] \begin{center} \scalebox{0.2}{\includegraphics{fig8.pdf}} \caption{A graph with edge ideals of $\mathbb{Z}[x]$, a non-Prüfer domain} \label{fig:notprufer} \end{center} \end{figure} \end{example} \section{UDP on Graphs Over Principal Ideal Domains}\label{UDP on graphs over PIDs} In this section, we will prove that the Universal Difference Property holds for any edge labeled graph over a principal ideal domain. We will do this using zero paths and flow-up classes. We define zero paths below and introduce the relevant notation. Recall that the definitions for flow-up class and flow-up basis were stated in Section \ref{intro} as Definitions \ref{defn flow-up class} and \ref{defn flow-up basis}. \begin{definition} A path that connects $u$ to $v$ is a \textit{$v$-path of $u$} and is denoted $P_{(u,v)}$, as before. We continue to denote the set of all paths from $u$ to $v$ by $\mathcal{P}_{(u,v)}$. If $v$ is labeled zero, then $P_{(u,v)}$ is called a \textit{zero path of $u$}. An arbitrary zero path of $u$ is denoted $P_{(u,0)}$. The set of all zero paths of $u$ is denoted $\mathcal{P}_{(u,0)}$. \end{definition} Given $P_{(u,v)}$, a $v$-path of $u$, let $\left(P_{(u,v)}\right)$ denote the greatest common divisor of generators of the edge labels of $P_{(u,v)}$ (over a principal ideal domain each edge label is a principal ideal, so we may fix a generator for each), and let $\left[P_{(u,v)}\right]$ denote the least common multiple of these generators. Let $P_{1,(u,v)}, P_{2,(u,v)}, \ldots, P_{k,(u,v)}$ be all of the $v$-paths of $u$.
We use the notations \begin{align*} \left(\mathcal{P}_{(u,v)}\right)&=\left\{\left(P_{1,(u,v)}\right), \left(P_{2,(u,v)}\right),\ldots,\left(P_{k,(u,v)}\right)\right\},\\ \left[\mathcal{P}_{(u,v)}\right]&=\operatorname{lcm}\left\{\left(P_{1,(u,v)}\right), \left(P_{2,(u,v)}\right),\ldots,\left(P_{k,(u,v)}\right)\right\}. \end{align*} \begin{example} Consider the edge labeled $3$-cycle in Figure \ref{3cycle}. Label $v_1$ as $0$. Then $v_2$ has two zero paths, and they are $P_{1,(v_2,0)}=v_2v_1$ and $P_{2,(v_2,0)}=v_2v_3v_1$. We have $\mathcal{P}_{(v_2,0)}=\left\{P_{1,(v_2,0)}, P_{2,(v_2,0)}\right\}$, $\left(\mathcal{P}_{(v_2,0)}\right)=\{7, (2,4)\}=\{7,2\}$, and $\left[\mathcal{P}_{(v_2,0)}\right]=\operatorname{lcm}\left\{7,(2,4)\right\}=\operatorname{lcm}\left\{7,2\right\}=14$. Additionally, we have $\alpha\left(P_{1,(v_2,0)}\right)=\langle 7\rangle$ and $\alpha\left(P_{2,(v_2,0)}\right)=\langle 2\rangle +\langle 4\rangle$. \end{example} Let $(G,\alpha)$ be an edge labeled graph over a principal ideal domain and $v_i\in V(G)$. In \cite{Alt}, Alt{\i}nok and Sar{\i}o\u{g}lan showed that there is an $i$-th flow-up class $f^{(i)}$ in $R_{(G,\alpha)}$ with leading element $f^{(i)}(v_i)=\left[\mathcal{P}_{(v_i,0)}\right]$. They also showed that for any edge labeled graph $(G,\alpha)$ over a principal ideal domain, the set $R_{(G,\alpha)}$ has a flow-up basis. The existence of this flow-up basis does not depend on the ordering of the vertices in $G$. In \cite{Alt}, Alt{\i}nok and Sar{\i}o\u{g}lan used zero trails to show the freeness of $R_{(G,\alpha)}$ for arbitrary graphs over principal ideal domains. According to Lemma 3.6 in \cite{Alt} and the remark immediately following, the results we need from \cite{Alt} can be achieved by only considering zero paths.
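Over $\mathbb{Z}$ these path invariants are ordinary gcd/lcm computations: a sum of principal ideals is generated by the gcd of the generators, and an intersection by their lcm. The following Python sketch (assuming the labels $7$, $2$, $4$ from the $3$-cycle example above) recomputes $(P_{1,(v_2,0)})$, $(P_{2,(v_2,0)})$, and $[\mathcal{P}_{(v_2,0)}]$:

```python
from math import gcd, lcm
from functools import reduce

# Generators of the edge labels along the two zero paths of v_2 in the
# 3-cycle example: <7> on v2v1, and <2>, <4> on v2v3 and v3v1.
P1, P2 = [7], [2, 4]

# Over Z, <a_1> + ... + <a_k> = <gcd(a_1,...,a_k)>, and an intersection
# of principal ideals is generated by the lcm of the generators.
p1 = reduce(gcd, P1)    # (P_1) = 7
p2 = reduce(gcd, P2)    # (P_2) = gcd(2, 4) = 2
bracket = lcm(p1, p2)   # [P_{(v_2,0)}] = lcm(7, 2) = 14

print(p1, p2, bracket)  # 7 2 14
```

This matches the values $\{7,2\}$ and $14$ computed in the example.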
Here we work with zero paths because the definition of UDP involves paths between two vertices, and we will state the necessary result from \cite{Alt} in terms of zero paths. \begin{theorem}[\cite{Alt}, Theorem 3.8]\label{ThmFromAlt} Let $(G, \alpha)$ be an edge labeled graph over a principal ideal domain. Fix $v_i$ with $i>1$ and assume that all vertices $v_j$ with $j<i$ are labeled zero. Then there exists a flow-up class $f^{(i)}$ on $(G, \alpha)$ with leading entry $f^{(i)}(v_i)=\left[\mathcal{P}_{(v_i,0)}\right]$. \end{theorem} Before showing that UDP must hold on any edge labeled graph over a principal ideal domain, we need some algebraic equalities. Let $R$ be a principal ideal domain and $I$ and $J$ be nontrivial ideals in $R$. Since $R$ is a PID, there exist $r_1, r_2 \in R$ such that $I=\langle r_1\rangle$ and $J=\langle r_2\rangle$. Then $I+J=\langle r_1\rangle +\langle r_2\rangle=\langle\operatorname{gcd}(r_1, r_2)\rangle$, and $I\cap J=\langle r_1\rangle \cap \langle r_2\rangle=\langle \operatorname{lcm}(r_1, r_2)\rangle$. These equalities lead us to the following proposition. \begin{proposition}\label{PathIdealEqualities} Let $(G,\alpha)$ be an edge labeled graph over a principal ideal domain $R$. Let $u$ and $v$ be connected vertices in $G$ with $P_{(u,v)}=uv_1v_2\ldots v_nv$ a $v$-path of $u$. Then $\alpha\left(P_{(u,v)}\right)=\left\langle\left(P_{(u,v)}\right)\right\rangle$. Moreover, we have $\displaystyle\bigcap_{P\in\mathcal{P}_{(u,v)}}\alpha(P)=\left\langle\left[\mathcal{P}_{(u,v)}\right]\right\rangle$. \end{proposition} \begin{proof} For the first result, we have \[ \alpha\left(P_{(u,v)}\right)=\alpha(uv_1)+\alpha(v_1v_2)+\cdots+\alpha(v_nv)=\left\langle\operatorname{gcd}\left(\alpha(uv_1), \alpha(v_1v_2),\ldots,\alpha(v_nv)\right)\right\rangle=\left\langle\left(P_{(u,v)}\right)\right\rangle.
\] For the second result, if $\mathcal{P}_{(u,v)}=\left\{P_{1,(u,v)}, P_{2,(u,v)},\ldots, P_{k,(u,v)}\right\}$, we have \begin{align*} \bigcap_{P\in\mathcal{P}_{(u,v)}}\alpha(P)=\bigcap_{i=1}^k \alpha\left(P_{i,(u,v)}\right)&=\alpha\left(P_{1,(u,v)}\right)\cap\alpha\left(P_{2,(u,v)}\right)\cap\cdots\cap\alpha\left(P_{k,(u,v)}\right)\\ &=\left\langle\left(P_{1,(u,v)}\right)\right\rangle\cap\left\langle\left(P_{2,(u,v)}\right)\right\rangle\cap\cdots\cap\left\langle\left(P_{k,(u,v)}\right)\right\rangle\\ &=\left\langle\operatorname{lcm}\left\{\left(P_{1,(u,v)}\right),\left(P_{2,(u,v)}\right),\ldots,\left(P_{k,(u,v)}\right)\right\}\right\rangle\\ &=\left\langle\left[\mathcal{P}_{(u,v)}\right]\right\rangle. \end{align*} \end{proof} We are now prepared to prove the main result of this section. \begin{theorem} The Universal Difference Property holds for any edge labeled graph over a principal ideal domain. \end{theorem} \begin{proof} Let $(G,\alpha)$ be an edge labeled graph over a principal ideal domain $R$. Let $u$ and $v$ be connected vertices in $G$. If needed, reorder the vertices of $G$ so that $u=v_1$ and $v=v_2$, and label $v_1$ with $0$; the zero paths of $v=v_2$ are then precisely the paths from $v$ to $u$, so that $\mathcal{P}_{(v,0)}=\mathcal{P}_{(u,v)}$. We are guaranteed that a second flow-up class $f^{(2)}=\left(0,f_2^{(2)}, \ldots,f_n^{(2)}\right)$ exists; see the remark at the bottom of page 4 in \cite{Alt}. By Theorem \ref{ThmFromAlt}, we can find such a second flow-up class with $f_2^{(2)}=\left[\mathcal{P}_{(v_2,0)}\right]=\left[\mathcal{P}_{(v,0)}\right]$. Let $x\in\bigcap_{P\in\mathcal{P}_{(v,0)}}\alpha(P)=\left\langle\left[\mathcal{P}_{(v_2,0)}\right]\right\rangle=\left\langle f_2^{(2)}\right\rangle$. Then $x=r\cdot f_2^{(2)}$ for some $r\in R$. Because the space of splines on $(G,\alpha)$ is an $R$-module, as noted in Proposition \ref{space of splines is a ring and an R-module}, we know that $g=-r\cdot f^{(2)}$ is a spline on $(G,\alpha)$.
Since we have the spline $g=\left(0,-rf_2^{(2)}, \ldots,-rf_n^{(2)}\right)=\left(0,-x,\ldots,-rf_n^{(2)}\right)$ with $g(u)-g(v)=0-(-x)=x$, we see that the Universal Difference Property holds on the graph $(G,\alpha)$. \end{proof} \begin{example} As an example, consider the edge labeled complete graph $K_4$ in Figure \ref{k41}. Order the vertices of $K_4$ as in Figure \ref{k42}. We label $v_1$ with $0$ as in Figure \ref{k43} and then consider the zero paths of $v_2$. They are \begin{align*} P_{1,(v_2,0)}&=v_2v_1,\\ P_{2,(v_2,0)}&=v_2v_4v_1,\\ P_{3,(v_2,0)}&=v_2v_3v_1,\\ P_{4,(v_2,0)}&=v_2v_3v_4v_1,\\ P_{5,(v_2,0)}&=v_2v_4v_3v_1. \end{align*} \begin{figure}[!htb] \centering \begin{minipage}{.35\textwidth} \centering \scalebox{0.2}{\includegraphics{fig9.pdf}} \caption{} \label{k41} \end{minipage}% \begin{minipage}{0.35\textwidth} \centering \scalebox{0.2}{\includegraphics{fig10.pdf}} \caption{} \label{k42} \end{minipage}% \begin{minipage}{.35\textwidth} \centering \scalebox{0.2}{\includegraphics{fig11.pdf}} \caption{} \label{k43} \end{minipage} \end{figure} \end{example} Therefore we have \begin{align*} \alpha\left(P_{1,(v_2,0)}\right)&=\langle 6\rangle,\\ \alpha\left(P_{2,(v_2,0)}\right)&=\langle 2\rangle +\langle 4\rangle=\left\langle \operatorname{gcd}(2,4)\right\rangle=\langle 2\rangle,\\ \alpha\left(P_{3,(v_2,0)}\right)&=\langle 9\rangle +\langle 3\rangle=\left\langle \operatorname{gcd}(9,3)\right\rangle=\langle 3\rangle,\\ \alpha\left(P_{4,(v_2,0)}\right)&=\langle 9\rangle +\langle 5\rangle+\langle 4 \rangle=\left\langle \operatorname{gcd}(9,5,4)\right\rangle=\langle 1\rangle,\\ \alpha\left(P_{5,(v_2,0)}\right)&=\langle 2\rangle +\langle 5\rangle+\langle 3 \rangle=\left\langle \operatorname{gcd}(2,5,3)\right\rangle=\langle 1\rangle \end{align*} and \[ \bigcap_{k=1}^5 \alpha\left(P_{k,(v_2,0)}\right)=\langle 6\rangle \cap \langle 2 \rangle \cap \langle 3 \rangle \cap\langle 1\rangle \cap \langle
1\rangle=\left\langle\operatorname{lcm}(6,2,3,1,1)\right\rangle=\langle 6\rangle. \] Let $x\in\bigcap_{k=1}^5 \alpha\left(P_{k,(v_2,0)}\right)=\langle 6\rangle$. If, for example, $x=24$, then our aim is to construct a spline $f$ on the $K_4$ graph in Figure \ref{k42} such that $f(v_1)-f(v_2)=24$. The existence of a second flow-up class $f^{(2)}$ with leading entry $f^{(2)}(v_2)=6$ is guaranteed by Proposition \ref{PathIdealEqualities} and Theorem \ref{ThmFromAlt}. Such a second flow-up class can be chosen as $f^{(2)}=(0,6,15,0)$. Since $x=24=4\cdot6$, we consider the spline $g=-4\cdot f^{(2)}=(0,-24,-60,0)$ and note that $g(v_1)-g(v_2)=0-(-24)=24$. A Bezout domain is an integral domain in which every finitely generated ideal is principal. Every PID is a Bezout domain, but the converse is not true in general. It can be shown by the same techniques as in \cite{Alt} that for any edge labeled graph $(G,\alpha)$ whose edges are labeled by finitely generated ideals over a Bezout domain, the set $R_{(G,\alpha)}$ has a flow-up basis. As a consequence, we see that UDP holds for any edge labeled graph with finitely generated edge labels over a Bezout domain. \section{UDP Is a Structural Property of an Edge Labeled Graph}\label{UDP and isomorphisms} In this section we show that the Universal Difference Property is a structural property of an edge labeled graph; that is, the Universal Difference Property is preserved by an isomorphism of edge labeled graphs. We begin with a definition. \begin{definition}\label{def1} \cite{Gilbert} Let $(G,\alpha)$ and $(G',\alpha')$ be edge labeled graphs with base ring $R$. A \textit{homomorphism of edge labeled graphs} $\varphi: (G,\alpha) \to (G',\alpha')$ is a graph homomorphism $\varphi_1: G \to G'$ together with a ring automorphism $\varphi_2:R \to R$ so that for each edge $e \in E(G)$ we have $\varphi_2(\alpha(e)) = \alpha'(\varphi_1(e))$.
An \textit{isomorphism of edge labeled graphs} is a homomorphism of edge labeled graphs whose underlying graph homomorphism is in fact an isomorphism. \end{definition} We are now ready to state and prove the main result of this section. \begin{theorem}\label{SIUDP} Let $(G,\alpha)$ and $(G',\alpha')$ be isomorphic edge labeled graphs. If the Universal Difference Property holds on $(G,\alpha)$, then it holds on $(G',\alpha')$. \end{theorem} \begin{proof} Let $u',w'\in V(G')$, and let $P_{1}',P_{2}',\ldots,P_{n}'$ be the paths in $G'$ from $u'$ to $w'$. Let $y\in\bigcap_{i=1}^{n}\alpha'(P_{i}')$. Let $\varphi_{1}:G\rightarrow G'$ be the graph isomorphism and $\varphi_{2}:R\rightarrow R$ be the ring automorphism from Definition \ref{def1}. Let $u,w\in V(G)$ be such that $\varphi_{1}^{-1}(u')=u$ and $\varphi_{1}^{-1}(w')=w$. Finally, let $\alpha$ and $\alpha'$ be the edge labelings of $G$ and $G'$, respectively. We know there must be a $z\in R$ such that $\varphi_{2}(z)=y$. Consider the path $P_{i}'$ from $u'$ to $w'$. Let $e_{1}',e_{2}',\ldots,e_{k}'$ be the edges in the path $P_{i}'$. We can write $y$ as a sum of elements from the ideals along $P_{i}'$, say $y=a_{1}'+a_{2}'+\cdots+a_{k}'$, where $a_{j}'\in\alpha'(e_{j}')$ for $1\leq j\leq k$. Then we have \begin{align*} z&=\varphi_{2}^{-1}(y)\\ &=\varphi_{2}^{-1}(a_{1}'+a_{2}'+\dots+a_{k}')\\ &=\varphi_{2}^{-1}(a_{1}')+\varphi_{2}^{-1}(a_{2}')+\dots+\varphi_{2}^{-1}(a_{k}'). \end{align*} Consider the element $a_{j}' \in \alpha '(e_j ')$. We know there exists an edge $e_j$ in $G$ such that $\varphi_1 (e_j) = e_j '$. Then $a_j ' \in \alpha '(e_j ') = \alpha '(\varphi_1 (e_j)) = \varphi_2 (\alpha(e_j))$ by Definition \ref{def1}. Then $\varphi_{2}^{-1} (a_j ') \in \alpha(e_j)$. Let $\varphi_{2}^{-1} (a_j ') = a_j$.
Doing this for $1\leq j\leq k$, we obtain \begin{align*} z&=\varphi_{2}^{-1}(y)\\ &=\varphi_{2}^{-1}(a_{1}'+a_{2}'+\dots+a_{k}')\\ &=\varphi_{2}^{-1}(a_{1}')+\varphi_{2}^{-1}(a_{2}')+\dots+\varphi_{2}^{-1}(a_{k}')\\ &= a_1 + a_2 + \dots + a_k. \end{align*} Recalling that $e_1, e_2, \dots, e_k$ is a path in $G$ from $u$ to $w$, we have $z$ as a sum of elements of the ideals labeling the edges along this path. Since vertex adjacency is preserved by the graph isomorphism, every path from $u$ to $w$ in $G$ corresponds to one of the paths $P_i'$ from $u'$ to $w'$ in $G'$. Hence $z\in\bigcap_{P \in \mathcal{P}_{(u, w)}} \alpha(P)$. Since $G$ satisfies the Universal Difference Property, there is a spline $\rho\in R_G$ such that $\rho(u)-\rho(w)=z$. Then \begin{align}\label{proofend} y =\varphi_{2}(z)=\varphi_{2}(\rho(u)-\rho(w))=\varphi_{2}(\rho(u))-\varphi_{2}(\rho(w)).\end{align} For any $v' \in V(G')$, let $v$ be the element of $G$ such that $\varphi_1(v) = v'$. Define a vertex labeling $\gamma$ on $G'$ by $\gamma(v') = \varphi_2(\rho(\varphi_1^{-1}(v'))) = \varphi_2(\rho(v))$. We will show that $\gamma$ is a spline on $G'$ and that $\gamma(u') - \gamma(w') = y$. Let $s'$ and $t'$ be adjacent vertices in $G'$. Then there exist vertices $s$ and $t$ in $V(G)$ such that $\varphi_1(s) = s'$ and $\varphi_1(t) = t'$. Then \begin{align*} \gamma(s') - \gamma(t') &= \varphi_2(\rho(\varphi_1^{-1}(s'))) - \varphi_2(\rho(\varphi_1^{-1}(t'))) \\ &= \varphi_2(\rho(s)) - \varphi_2(\rho(t)) \\ &= \varphi_2(\rho(s) - \rho(t)) \\ &\in \varphi_2(\alpha(st)), \end{align*} but $\varphi_2(\alpha(st)) = \alpha'(\varphi_1(st)) = \alpha'(s't')$ and so $\gamma$ is indeed a spline on $G'$. Lastly, we have \begin{align*} \gamma(u')-\gamma(w')&=\varphi_{2}(\rho(u))-\varphi_{2}(\rho(w))\\ &=y\quad \text{by Equation \eqref{proofend}}. \end{align*} \noindent Thus $(G',\alpha')$ satisfies UDP. 
\end{proof} \section{Acknowledgments} The authors wish to thank Professor Julianna Tymoczko for her thoughtful feedback and questions. The second author thanks the Institute for Computational and Experimental Research in Mathematics and the American Institute of Mathematics for their generous support of the Research Experience for Undergraduate Faculty (REUF) program, in which she participated in Summers 2017 and 2018. The second, third, fourth, and fifth authors acknowledge support from the University of Texas at Tyler Research Experience for Undergraduates, NSF Grant DMS-1659221, in which the third, fourth, and fifth authors participated during Summer 2019, under the mentorship of the second author. The second and seventh authors acknowledge support from NSF Grant HRD-1826745, Louis Stokes STEM Pathways and Research Alliance: University of Texas System LSAMP, in which the seventh author participated as an undergraduate researcher under the mentorship of the second author in the Summer Research Academy 2019. The first and sixth authors are supported by Hacettepe University Scientific Research Projects Coordination Unit, Project Number FHD-2022-19802.
https://arxiv.org/abs/1311.3874
An Algorithm to Solve the Equal-Sum-Product Problem
A recursive algorithm is constructed which finds all solutions to a class of Diophantine equations connected to the problem of determining ordered n-tuples of positive integers satisfying the property that their sum is equal to their product. An examination of the use of Binary Search Trees in implementing the algorithm into a working program is given. In addition an application of the algorithm for searching possible extra exceptional values of the equal-sum-product problem is explored after demonstrating a link between these numbers and the Sophie Germain primes.
\section{Introduction} Suppose we are asked to consider the following three arithmetic identities \[ 2+2=4\mbox{ ,}\hspace{1.7cm}1+2+3=6\mbox{ ,}\hspace{1.7cm}1+1+2+2+2=8\mbox{ .} \] What can we say is a feature common to each of the three identities? Looking at the second equality we might first think that we are dealing with the property of perfect numbers, namely that the number $6$ is equal to the sum of its proper divisors $1$, $2$ and $3$, but neither $4$ nor $8$ is a perfect number. However if each of the right-hand sides is expressed as a product in the following manner \[ 2+2=2\cdot2\mbox{ ,}\hspace{1.7cm}1+2+3=1\cdot2\cdot3\mbox{ ,}\hspace{1.7cm}1+1+2+2+2=1\cdot1\cdot2\cdot2\cdot2\mbox{ ,} \] then we can see at once that the original identities express the fact that three sets of numbers $\{ 2,2\}$, $\{ 1,2,3\}$ and $\{1,1,2,2,2\}$, have the property that the sum of their elements is equal to their respective products. In view of these sets, one is naturally drawn to question whether it is possible to find for each integer $n\geq 2$, all sets of $n$ positive integers having the equal-sum-product property. We shall refer to this problem as the Equal-Sum-Product problem in $n$ variables, or the ESP-Problem in $n$ variables for short. Determining a set of $n$ positive integers satisfying the equal-sum-product property is equivalent to solving the following equation \begin{equation} x_{1}x_{2}\cdots x_{n}=x_{1}+x_{2}+\cdots +x_{n}\mbox{ ,}\label{eq:2ex} \end{equation} in positive integers $x_{i}$, where without loss of generality we may assume $x_{1}\leq x_{2}\leq\cdots\leq x_{n}$. Equations in which only positive integer solutions are sought are referred to as Diophantine equations, after the mathematician Diophantos who lived in Alexandria around 300 A.D. When examining Diophantine equations the following three questions naturally arise: Does the equation have a solution? Are there only finitely many solutions? Is it possible to determine all solutions?
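For small $n$ the last question can be explored directly by exhaustive search. The sketch below (not the algorithm developed in this paper) enumerates nondecreasing $n$-tuples, relying on the boundedness result of Ecker and of Kurlandchik and Nowicki, recalled in the next paragraph, that every component of a solution of (\ref{eq:2ex}) is at most $n$:

```python
from itertools import combinations_with_replacement
from math import prod

def esp_solutions(n):
    """All nondecreasing n-tuples of positive integers whose sum equals
    their product.  Components are bounded above by n (Ecker;
    Kurlandchik and Nowicki), which keeps the search finite."""
    return [t for t in combinations_with_replacement(range(1, n + 1), n)
            if sum(t) == prod(t)]

print(esp_solutions(5))
# [(1, 1, 1, 2, 5), (1, 1, 1, 3, 3), (1, 1, 2, 2, 2)]
```

For $n=4$ and $n=6$ this search returns only the trivial solution $(1,\ldots,1,2,n)$, consistent with $4$ and $6$ belonging to the set of exceptional values discussed in Section 4.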
In the case of the Diophantine equation in (\ref{eq:2ex}), the first two questions have been independently answered in the affirmative by M. W. Ecker in \cite{ecker} and by L. Kurlandchik and A. Nowicki in \cite{kurland}. In particular the first question was easily answered via the observation that for any $n\geq 2$ the $n$-tuple $(\underbrace{1,1,\ldots ,1}_{(n-2)1's},2,n)$ is clearly a solution of~(\ref{eq:2ex}). For the second question, the finiteness of solutions followed from a boundedness result, namely that for each $n\geq 2$, when (\ref{eq:2ex}) is satisfied by an $n$-tuple $(x_{1},x_{2},\ldots ,x_{n})$ the largest component can be at most $n$, thus yielding an extremely large but finite upper bound of $n^{n}$ for the number of solutions. Despite the vast search space of solutions, we will show in this paper that the third question can also be answered in the affirmative in the case of (\ref{eq:2ex}), by constructing an algorithm which generates all solutions to the ESP-Problem in $n$ variables. To help construct the algorithm, it first will be necessary to determine the structure of the solution set $S(n)$. In particular, by making use of existing results again found in \cite{ecker}, \cite{kurland} we will show that $\displaystyle S(n)=\cup_{r=2}^{\lfloor\log_{2}n\rfloor +1}S_{r}(n)$, where $S_{r}(n)$ is the set of solutions of (\ref{eq:2ex}) having precisely $n-r$ unit and $r$ non-unit components, that is, of the form $(\underbrace{1,1,\ldots ,1,}_{(n-r)1's}x_{1},\dots ,x_{r})$. For notational convenience the solutions contained in $S_{r}(n)$ will be denoted by $(x_{1},x_{2},\ldots,x_{r};n-r)$, with $x_{i}\geq 2$.\\ \\ From this description of the solution set $S(n)$ the basic function of the algorithm can be explained.
Fundamentally the algorithm will be recursive in nature, as it shall generate the solutions in each set $S_{r}(n)$, for $r=3,\ldots ,\lfloor\log_{2}n\rfloor +1$, from those found in $S_{r-1}(r+j)$, for $j=2^{r-2}-r,\ldots ,\left\lfloor \frac{n-3r+2}{2}\right\rfloor$. Specifically we will show that to generate each set $S_{r}(n)$, it suffices to examine the solutions $(x_{1},x_{2},\ldots ,x_{r-1};j+1)\in S_{r-1}(r+j)$, for $j=2^{r-2}-r,\ldots ,\left\lfloor \frac{n-3r+2}{2}\right\rfloor$, which satisfy the divisibility condition $(x_{1}x_{2}\cdots x_{r-1}-1)|(x_{1}+x_{2}+\cdots +x_{r-1}+n-r)$, and construct $S_{r}(n)$ as the set of elements of the form $(x_{1},x_{2},\ldots ,x_{r-1},w;n-r)$, where $w:=(x_{1}+x_{2}+\cdots +x_{r-1}+n-r)/(x_{1}x_{2}\cdots x_{r-1}-1)$. Given that the set $S_{2}(n)$ can be determined from the explicit formula in (\ref{eq:1ex}), we can see that when $r-1>2$ the algorithm must repeatedly apply the recursive procedure to generate each set $S_{r-1}(r+j)$. This presents a problem: before each set $S_{r}(n)$ can be constructed, one must first keep track of a specific sequence of intermediary sets through descending values of $r$ down to $r=2$, and then determine an associated group of ``base'' sets $S_{2}(\cdot)$ via (\ref{eq:1ex}). One must then apply the respective divisibility tests in the reverse sequence order to construct each of the intermediary sets, before the set $S_{r}(n)$ can finally be determined. How, then, are we to store, search and retrieve the solutions contained in these intermediary sets? As shall be seen later, this question is answered by the use of a commonly occurring data structure known as a Binary Search Tree, which can efficiently search and retrieve stored data in the form of nodes within a tree structure.
In our case, the intermediary sets and the set $S_{r}(n)$ will form the nodes of an evolving Binary Search Tree, as pictured in Figures 1, 2, and 3.\\ \\ Although a complexity analysis will not be performed here, one can at least conclude from the recursive procedure described above that the algorithm must terminate for each input $n > 2$. We now briefly outline the structure of the remainder of the paper. In Section 2, we begin by proving a number of preliminary results leading to the structure of the solution set $S(n)$, and then introduce the main features of the Binary Search Tree. In Section 3 our task will be to establish the recursive procedure, which will then be formulated into the pseudo code of Algorithm 3.1. This will be followed by an examination of the Binary Search Trees used to implement the algorithm as a working program, which the interested reader can access at \cite{program1}. Finally in Section 4, we investigate a theoretical application of Algorithm 3.1 to a well known conjecture connected with the Diophantine equation in (\ref{eq:2ex}). This conjecture asserts that the only integers $n>2$ for which the $n$-tuple $(2,n;n-2)$ is the unique solution to (\ref{eq:2ex}) are those contained in the set $E=\{ 2,3,4,6,24,114,174,444\}$, the so-called set of exceptional values. As reported in \cite{ecker}, despite extensive computer searches of all positive integers less than $10^{10}$, no new elements of $E$ have been revealed. Although we will not resolve the conjecture here, we shall, using an argument based on the above recursive procedure, prove that if $n>2$ is an element of $E$, then $n-1$ must be a Sophie Germain prime, that is, both $n-1$ and $2(n-1)+1$ are prime. Equipped with this result together with the scarcity of the Sophie Germain primes, we can see that Algorithm 3.1 provides a basis for a more refined computer search for possible extra elements of $E$.
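The Sophie Germain condition just mentioned is cheap to test by machine. As an illustrative sketch (ours, not part of the program at \cite{program1}), simple trial division suffices at the scale of the known elements of $E$:

```cpp
#include <cassert>

// Trial-division primality test; adequate for the magnitudes involved here.
bool is_prime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Checks whether n - 1 is a Sophie Germain prime, i.e. whether both
// n - 1 and 2(n - 1) + 1 are prime -- the necessary condition proved
// in Section 4 for elements n > 2 of the exceptional set E.
bool predecessor_is_sophie_germain(long long n) {
    return is_prime(n - 1) && is_prime(2 * (n - 1) + 1);
}
```

Every element of $E\setminus\{2\}$ passes this test, while a candidate such as $n=15$ fails at once since $14$ is composite.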
\section{Preliminaries} In this section we shall first establish some preliminary technical lemmas that will be needed in Section 3. In addition, a brief overview of the data structure known as a Binary Search Tree, which will be used for the implementation of the algorithm, shall also be given. To begin we note that the first of the required lemmas was proved in \cite{ecker},\cite{kurland}, but we present here an alternative proof based on a divisibility argument, which will later form the basis for the construction of the algorithm. The result states that in any solution to the ESP-Problem in $n\geq 3$ variables, there must be at least one unit component. \begin{lem} The ESP-Problem in two variables has exactly one solution, namely $\{ (x_{1},x_{2})\in\mathbb{N}\times\mathbb{N}:x_{1}x_{2}=x_{1}+x_{2}\} =\{ (2,2)\}$. Moreover, if $(x_{1},x_{2},\ldots ,x_{n})$ is a solution to the ESP-Problem in $n\geq 3$ variables, then there must exist at least one $i\in\{ 1,2,\ldots ,n\}$ such that $x_{i}=1$. \end{lem} \begin{proof} Suppose $x_{1}x_{2}=x_{1}+x_{2}$, for some positive integers $x_{1},x_{2}$. Then as $x_{1}|(x_{1}+x_{2})$, we have $x_{1}|x_{2}$ and so $x_{1}\leq x_{2}$. Similarly $x_{2}\leq x_{1}$. Consequently $x_{1}=x_{2}$, and so $x_{1}^{2}=2x_{1}$, which yields that either $x_{1}=2$ or $x_{1}=0$; but as $x_{1}$ is a positive integer we conclude $x_{1}=x_{2}=2$. Next we show that, apart from the two-variable case, the ESP-Problem in $n\geq 3$ variables cannot have solutions $(x_{1},x_{2},\ldots ,x_{n})$ in which $x_{i}\geq 2$ for all $1\leq i\leq n$.
This can be proved using the following argument: assuming $x_{i}\geq 2$ for all $1\leq i\leq n$ and recalling that $x_{i}\leq x_{i+1}$, observe that $x_{1}x_{2}\cdots x_{n}\geq 2^{n-1}x_{n}$ while $n x_{n} \geq x_{1}+x_{2}+\cdots +x_{n}$; but as $2^{n-1}>n$ for $n\geq 3$ we deduce that $x_{1}x_{2}\cdots x_{n}> x_{1}+x_{2}+\cdots +x_{n}$, and so the $n$-tuple $(x_{1},x_{2},\ldots,x_{n})$ cannot be a solution of (\ref{eq:2ex}). \end{proof}\\ In view of the previous result we introduce, for notational convenience, the following definition for classifying solutions of the ESP-Problem in $n$ variables according to the number of non-unit components present within the $n$-tuple. \begin{defy} For given integers $n\geq r\geq 2$, let $(x_{1},x_{2},\ldots ,x_{r} ;n-r)$ denote an $n$-tuple having $r$ non-unit components and satisfying equation (\ref{eq:2ex}), and set \[ S_{r}(n)=\{ (x_{1},x_{2},\ldots ,x_{r} ; n-r)\in \mathbb{N}^{n}: \prod_{j=1}^{r} x_{j} =\sum_{j=1}^{r} x_{j} + n-r\}\mbox{ .} \] \end{defy} Clearly from Definition 2.1 we observe that $S_{2}(n)\not=\emptyset$, as the $n$-tuple $(2,n;n-2)$ is an element of $S_{2}(n)$ for all $n\geq 2$. In the next result a characterization of the set $S_{2}(n)$ will be given in terms of the divisors of $n-1$. \begin{lem} For an integer $n\geq 2$, the set $S_{2}(n)$ has the following explicit form: \begin{equation} S_{2}(n)=\left\{ \left( d+1,\frac{n-1}{d}+1;n-2\right) :d| (n-1), d\leq\sqrt{n-1}\right\}.\label{eq:1ex} \end{equation} \end{lem} \begin{proof} Suppose $(x_{1},x_{2};n-2)\in S_{2}(n)$, with $x_{1}\leq x_{2}$. Then upon rearrangement of $x_{1}+x_{2}+n-2=x_{1}x_{2}$, we find $n-1=(x_{1}-1)(x_{2}-1)$. Setting $d:=x_{1}-1$, we see that $d$ is a divisor of $n-1$ with $d\leq\sqrt{n-1}$ and $x_{2}-1=(n-1)/d$; conversely, every such divisor $d$ yields the solution $(d+1,(n-1)/d+1;n-2)$, and so $S_{2}(n)$ is of the form stated in (\ref{eq:1ex}).
\end{proof}\\ A variation of the following result was proved in \cite{ecker},\cite{kurland}, and gives a lower bound on $r$ that ensures $S_{r}(n)=\emptyset$. We present the proof here for completeness; it uses the fact that any solution $(x_{1},x_{2},\ldots ,x_{r};n-r)\in S_{r}(n)$ has the property that the common equal-sum-product value is at most $2n$, with equality holding if and only if $r=2$ and $(x_{1},x_{2},\ldots ,x_{r};n-r)=(2,n;n-2)$. This result incidentally appeared in the form of a problem for the Polish Mathematical Olympiad in 1990 (see \cite{kurland}). \begin{lem} For an integer $n\geq 2$, if $r>\lfloor \log_{2}(n)\rfloor +1$, then $S_{r}(n)=\emptyset$. \end{lem} \begin{proof} Suppose $S_{r}(n)\not=\emptyset$ for some $n\geq 2$. Now in \cite{ecker} it was shown that any $(x_{1},x_{2},\ldots ,x_{r};n-r)\in S_{r}(n)$ has the property that the common equal-sum-product value is bounded above by $2n$. Consequently, as $2^{r}\leq \prod_{l=1}^{r}x_{l}=x_{1}+\cdots +x_{r}+n-r\leq 2n$, we deduce that $2^{r-1}\leq n$, from which one finds $r\leq \log_{2}(n)+1$. As $r$ is an integer the previous inequality implies $r\leq \lfloor\log_{2}(n)\rfloor +1$. \end{proof} \\ For any integer $n>2$, as the number of non-unit components in a solution $(x_{1},x_{2},\ldots ,x_{r};n-r)\in S(n)$ is equal to $r\geq 2$, we are guaranteed that at least two integers, say $x',x''\in \{x_{1},x_{2},\ldots ,x_{r}\}$, are such that $x''\geq x'\geq 2$. Now if $x''$ is the largest non-unit component of a solution $(x_{1},x_{2},\ldots ,x_{r};n-r)\in S(n)$, then from the above upper bound for the common equal-sum-product value we find $2x''\leq \prod_{i=1}^{r}x_{i}\leq 2n$, that is $x''\leq n$. Consequently each of the non-unit components in a solution $(x_{1},x_{2},\ldots , x_{r};n-r)\in S_{r}(n)$ can only assume the $n-1$ integer values in the set $\{ 2,\ldots ,n\}$, and so $|S_{r}(n)| \leq (n-1)^{r}$.
Thus one deduces from Lemma 2.3 that the reduced search space for the ESP-Problem in $n$ variables is $(n-1)^{2}+(n-1)^{3}+\cdots +(n-1)^{\lfloor \log_{2}(n)\rfloor +1}=O((n-1)^{\lfloor \log_{2}(n)\rfloor+1})$. In view of this bound, we can see that an exhaustive search for solutions to the ESP-Problem is impractical for large $n$, and hence the need for an algorithmic solution to this problem. \\ \\ As shall be seen in Section 3, one of the main operations performed by the algorithm to recursively generate the solution sets $S_{r}(n)$ will be to search previously constructed and stored sets of the form $S_{r-1}(\cdot )$, for the eventual purpose of applying a divisibility test. To make this searching operation practicable, we first impose on the solution sets a partial order, defined by $S_{r_{1}}(n_{1})<S_{r_{2}}(n_{2})$ when either $n_{1}<n_{2}$, or $n_{1}=n_{2}$ and $r_{1}<r_{2}$. This ordering can best be summarized using the Iverson bracket notation (see \cite[p.24]{graham}) as follows \begin{equation} [S_{r_{1}}(n_{1})<S_{r_{2}}(n_{2})]=[n_{1}=n_{2}][r_{1}<r_{2}] + [n_{1}<n_{2}]\mbox{ ,}\label{parord} \end{equation} where the square bracket $[P]$ evaluates to 1 if the statement $P$ is true and 0 otherwise. Note that, under this ordering, $S_{r_{1}}(n_{1})=S_{r_{2}}(n_{2})$ if and only if $r_{1}=r_{2}$ and $n_{1}=n_{2}$. In addition to the partial order, the second device we shall employ to facilitate the searching operation of the algorithm will be the storage of the solution sets $S_{r}(n)$ as nodes in a Binary Search Tree. A Binary Search Tree, denoted $T$, is an abstract data structure used for the storage, retrieval and deletion of nodes representing the elements of a finite set $S$ on which a partial order is defined.
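In C++, the ordering in (\ref{parord}) amounts to lexicographic comparison of the key pairs $(n,r)$, so a comparator can be sketched in a few lines (our notation; the identifiers are illustrative, not taken from the paper's program):

```cpp
#include <cassert>
#include <utility>

// A set S_r(n) is identified by its key pair (n, r).
using Key = std::pair<long long, int>;

// Mirrors (parord): [n1 = n2][r1 < r2] + [n1 < n2].
bool precedes(const Key& a, const Key& b) {
    if (a.first == b.first) return a.second < b.second;  // [n1 = n2][r1 < r2]
    return a.first < b.first;                            // [n1 < n2]
}
```

Since this coincides with the default lexicographic ordering of \texttt{std::pair}, an ordered container such as \texttt{std::map} keyed by $(n,r)$ stores the sets in exactly the order of (\ref{parord}) with no custom comparator needed.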
More precisely, a Binary Search Tree is defined as follows: \begin{defy} A Binary Search Tree (BST) is a data structure $T$ for a finite partially ordered set $S$, and is defined as follows: If $S=\emptyset$ then $T$ is a null tree, otherwise $T$ consists of a node storing some element $x\in S$ and two Binary Search Trees such that:\\ \\ (1) $Left(T)$ stores elements of $S$ which are less than $x$.\\ \\ (2) $Right(T)$ stores elements of $S$ which are greater than $x$.\\ \\ The node storing $x$ is called the root node of the tree. \end{defy} To illustrate the concept of a Binary Search Tree and how it will be used for the storage of the solution sets $S_{r}(n)$ during the execution of the algorithm, consider the following set $S=\{ S_{2}(2),S_{3}(4),S_{3}(5),S_{3}(6),S_{2}(7),S_{4}(15)\}$, whose elements have been placed in ascending order, as specified by the ordering in (\ref{parord}). If for example we select the element $S_{3}(5)\in S$ as the root node of the BST, then $Left(T)$ would be a BST consisting of nodes contained in the subset $\{ S_{2}(2),S_{3}(4)\}$, while $Right(T)$ would also be a BST consisting of nodes contained in the remaining subset $\{ S_{3}(6),S_{2}(7),S_{4}(15)\}$. By further selecting $S_{2}(2)$ and $S_{2}(7)$ as the root nodes of $Left(T)$ and $Right(T)$ respectively, the elements of $S$ can be stored within the BST pictured in Figure 1. \\ \\ Defining the height of a BST as the total number of levels below the root node, observe that the tree structure in Figure 1 has a height of $2$. Clearly from Definition 2.2 one can see that there are many possible BSTs that can be constructed to store the elements, as nodes, of a finite partially ordered set $S$. By using standard algorithms for the transformation of BSTs (see \cite[pp. 458-481]{knuth}), a BST can have its height reduced while maintaining the original ordering of the set $S$.
A BST is said to be balanced when its height is reduced to a minimum. One major advantage of a balanced BST having $n$ nodes is that the operations of searching and insertion of a node can be performed in $O(\log n)$ time in the worst case, compared with a linear list where the same operations take $O(n)$ time in the worst case. This efficiency is the reason for our use of such a data structure in the implementation of the algorithm in Section 3. \\ \begin{figure}[htb] \centering {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{3}(5) $} child { node [circle,draw] (a) {$ S_{2}(2) $} child { [draw=white] node [draw=white] (b) {$ $} } child { node [circle,draw] (c) { $ S_{3}(4) $ } } } child { node [circle,draw] (d) { $ S_{2}(7) $ } child { node [circle,draw] (e) { $ S_{3}(6) $ } } child { node [circle,draw] (f) { $S_{4}(15)$ } } }; \end{tikzpicture} } \caption*{Figure 1} \end{figure} \section{Algorithm Construction and Implementation} We now come to the heart of the paper, where an algorithm will be constructed which recursively generates all solutions of the ESP-Problem in $n$ variables, and where an examination of the algorithm's implementation shall also be given. To begin, in view of Lemma 2.3, we can readily deduce that the complete solution set of the ESP-Problem in $n$ variables is given by \[ S(n)=\bigcup_{r=2}^{m+1}S_{r}(n)\mbox{ ,} \] where $m=\lfloor \log_{2} n\rfloor$. Applying (\ref{eq:1ex}) of Lemma 2.2, it is a straightforward procedure to construct the set $S_{2}(n)$ for any integer $n\geq 2$. However in what follows we will first describe a general process by which each set $S_{r}(n)$, for $r=3,\ldots ,m+1$ and $n\geq 3$, can be recursively generated from the elements of the sets $S_{r-1}(r+j)$, for values of $j$ to be specified shortly.
This process, established in Step 1, will form the basis for the construction of Algorithm 3.1 described in Step 2 (Part A), while in Step 2 (Part B) the implementation of Algorithm 3.1 using balanced BSTs will be examined, leading to a working program in the form of a JavaScript web page (see \cite{webpage}) for the generation of all solutions to the ESP-Problem in $n>2$ variables.\\ \\ {\bf Step 1: Recursive generation of $S_{r}(n)$.}\\ Suppose $S_{r}(n)\not=\emptyset$ for $r\geq 3$; then there must exist $r$ integers greater than unity such that $x_{1}x_{2}\cdots x_{r}=x_{1}+x_{2}+\cdots +x_{r}+n-r$. Solving for $x_{r}$ we find that \begin{equation} x_{r}=\frac{x_{1}+x_{2}+\cdots +x_{r-1}+n-r}{x_{1}x_{2}\cdots x_{r-1}-1}\in\mathbb{N}\mbox{ ,}\label{eq:1} \end{equation} and so $(x_{1}x_{2}\cdots x_{r-1}-1)|(x_{1}+x_{2}+\cdots + x_{r-1}+n-r)$. Now as $x_{r}\geq 2$, we have $2(x_{1}x_{2}\cdots x_{r-1}-1)\leq x_{1}+x_{2}+\cdots +x_{r-1}+n-r$, and so $(x_{1}x_{2}\cdots x_{r-1}-1)<x_{1}+x_{2}+\cdots +x_{r-1}+n-r$. Consequently there must exist a $k\in\mathbb{N}$ such that $x_{1}x_{2}\cdots x_{r-1}-1=(x_{1}+x_{2}+\cdots +x_{r-1}+n-r)-k$, or equivalently, after setting ${\bf j=n-r-k}$, \begin{equation} x_{1}x_{2}\cdots x_{r-1}=x_{1}+x_{2}+\cdots +x_{r-1}+(j+1)\mbox{ ,}\label{eq:2} \end{equation} with $j+1\geq 0$ by Lemma 2.1. Thus $(x_{1},x_{2},\ldots ,x_{r-1};j+1)\in S_{r-1}(r+j)$.
Conversely, assume $(x_{1},x_{2},\ldots ,x_{r-1};j+1)\in S_{r-1}(r+j)$, with ${\bf r+j<n}$, is such that $x_{r}:=(\sum_{l=1}^{r-1}x_{l}+n-r)/(\prod_{l=1}^{r-1}x_{l}-1)\in\mathbb{N}$; then by definition of $x_{r}$ we have $(x_{1},x_{2},\ldots ,x_{r-1},x_{r};n-r)\in S_{r}(n)$, noting here that such an $x_{r}\geq 2$, since by the assumption $\prod_{l=1}^{r-1}x_{l}=\sum_{l=1}^{r-1}x_{l}+j+1$ we have \begin{eqnarray*} x_{r}:=\frac{\sum_{l=1}^{r-1}x_{l}+n-r}{\prod_{l=1}^{r-1}x_{l}-1} = \frac{\sum_{l=1}^{r-1}x_{l}+n-r}{\sum_{l=1}^{r-1}x_{l}+j} & = & \frac{(\sum_{l=1}^{r-1}x_{l}+j)+n-r-j}{\sum_{l=1}^{r-1}x_{l}+j} \nonumber\\ & = & 1+\frac{n-r-j}{\sum_{l=1}^{r-1}x_{l}+j}>1\mbox{ .}\\ \end{eqnarray*} Thus to generate $S_{r}(n)$, it is necessary and sufficient to find solutions $(x_{1},x_{2},\ldots ,x_{r-1};j+1)\in S_{r-1}(r+j)$, with $j+1 \geq 0$, satisfying the divisibility condition in (\ref{eq:1}), and construct $S_{r}(n)$ as the set containing elements of the form $(x_{1},x_{2},\ldots ,x_{r-1},x_{r};n-r)$, with $x_{r}$ defined as in (\ref{eq:1}). We next give upper and lower bounds on $j$ necessary for $S_{r-1}(r+j)$ both to be non-empty and to have elements that satisfy the divisibility condition in (\ref{eq:1}).
Recall from Lemma 2.3 that for $S_{r}(n)\not=\emptyset$ necessarily $r\leq\lfloor\log_{2}(n)\rfloor +1$; consequently if $S_{r-1}(r+j)\not=\emptyset$, then one finds $r-1\leq\lfloor\log_{2}(r+j)\rfloor +1\leq\log_{2}(r+j)+1$, which upon rearrangement yields that \begin{equation} 2^{r-2}-r\leq j\mbox{ .}\label{eq:3} \end{equation} Recalling from above that if $(x_{1},x_{2},\dots ,x_{r-1};j+1)\in S_{r-1}(r+j)$, we again have \begin{equation} \frac{\sum_{l=1}^{r-1}x_{l}+n-r}{\prod_{l=1}^{r-1}x_{l}-1} =1+\frac{n-r-j}{\sum_{l=1}^{r-1}x_{l}+j}\mbox{ .}\label{eq:4} \end{equation} Now the right-hand side of (\ref{eq:4}) cannot be an integer greater than $1$ if $n-r-j<\sum_{l=1}^{r-1}x_{l}+j$. As $x_{l}\geq 2$ gives $\sum_{l=1}^{r-1}x_{l}\geq 2(r-1)$, this inequality is certainly satisfied when $n-r-j<2(r-1)+j$, that is when \begin{equation} j>\frac{n-3r+2}{2}\Leftrightarrow j>\left\lfloor \frac{n-3r+2}{2}\right\rfloor\mbox{ ,}\label{eq:5} \end{equation} as $j\in\mathbb{Z}$. Thus for $S_{r-1}(r+j)$ to be non-empty and to have elements satisfying the divisibility condition of (\ref{eq:1}), we must necessarily have \begin{equation} 2^{r-2}-r\leq j\leq \left\lfloor\frac{n-3r+2}{2}\right\rfloor\mbox{ ,}\label{eq:6} \end{equation} assuming that $2^{r-2}-r\leq\left\lfloor\frac{n-3r+2}{2}\right\rfloor$.\\ \\ {\bf NB:}\,\, If $2^{r-2}-r > \lfloor \frac{n-3r+2}{2} \rfloor$, then every admissible $j \geq 2^{r-2}-r$ exceeds $\lfloor \frac{n-3r+2}{2} \rfloor$, so no element in $S_{r-1}(r+j)$ can satisfy the divisibility condition in (\ref{eq:1}) and $S_{r}(n) = \emptyset $.
\\ \\ Thus, excluding the case $2^{r-2}-r>\left\lfloor\frac{n-3r+2}{2}\right\rfloor$, in which $S_{r}(n)=\emptyset$, we can conclude that to construct each $S_{r}(n)$, for $r=3,\ldots ,m+1$ and $n\geq 3$, it suffices to examine the elements $(x_{1},x_{2},\ldots, x_{r-1};j+1)\in S_{r-1}(r+j)$, for the values of $j$ in (\ref{eq:6}) which satisfy the divisibility condition \[ w=1+\frac{n-r-j}{\sum_{l=1}^{r-1}x_{l}+j}\in\mathbb{N}\mbox{ ,} \] and then construct $S_{r}(n)$ from the set of elements of the form $(x_{1},x_{2},\ldots ,x_{r-1},w;n-r)$.\\ \\ {\bf Step 2 Part A: Construction of Algorithm}\\ We now formalize the recursive generation of the sets $S_{r}(n)$ in Step 1 into the pseudo code of Algorithm 3.1. To assemble the required algorithm, recall that the solution set of the ESP-Problem in $n$ variables is $S(n)=\bigcup_{r=2}^{m+1}S_{r}(n)$, in which the set $S_{2}(n)$ is explicitly given in (\ref{eq:1ex}), while from Step 1 each set $S_{r}(n)$ for $r=3,\ldots ,m+1$ is of the form \begin{equation} \bigcup_{j=2^{r-2}-r}^{\lfloor\frac{n-3r+2}{2}\rfloor}\{ (x_{1},x_{2},\ldots ,x_{r-1},w;n-r):(x_{1},x_{2},\ldots ,x_{r-1};j+1)\in S_{r-1}(r+j),w=1+\frac{n-r-j}{\sum_{l=1}^{r-1}x_{l}+j}\in\mathbb{N}\}\mbox{ ,}\label{eq:7} \end{equation} provided $\lfloor\frac{n-3r+2}{2}\rfloor\geq 2^{r-2}-r$, with $S_{r}(n)=\emptyset$ otherwise. Now to effect the construction of each set $S_{r}(n)$ via (\ref{eq:7}), we define a recursive procedure denoted $calc\_shell(n,r)$ in a typical programming language such as C++.
Introducing a generic data structure, denoted $T$ and initially assigned $T:=\emptyset$, for storing and accessing all previously constructed sets $S_{r-1}(j+r)$ for $2^{r-2}-r\leq j\leq \lfloor\frac{n-3r+2}{2}\rfloor$, the procedure $calc\_shell(n,r)$ will first test whether the set $S_{r}(n)$ is already contained in $T$, and exit if ($S_{r}(n)\in T$) is true.\\ \\ When ($S_{r}(n)\in T$) is false, then after initially assigning $S_{r}(n):=\emptyset$, if $r=2$ the procedure constructs $S_{r}(n)$ using the explicit formulation in (\ref{eq:1ex}), inserts $S_{r}(n)$ into $T$ via the assignment $T:=T\cup S_{r}(n)$, and exits. However if $r>2$, then the procedure $calc\_shell(n,r)$, as dictated by (\ref{eq:7}), enters into a {\scriptsize FOR} Loop indexed by $j=\lfloor\frac{n-3r+2}{2}\rfloor...2^{r-2}-r$. This loop either terminates immediately, if $2^{r-2}-r>\lfloor\frac{n-3r+2}{2}\rfloor$, resulting in the assignment $S_{r}(n):=\emptyset$, or proceeds for each $j$ to access the elements of the set $S_{r-1}(r+j)$ from $calc\_shell(j+r,r-1)$ if non-empty, and tests the divisibility condition $w=1+\frac{n-r-j}{\sum_{l=1}^{r-1}x_{l}+j}\in\mathbb{N}$, which if true inserts the solution $(x_{1},x_{2},\ldots ,x_{r-1},w;n-r)$ into the set $S_{r}(n)$, and otherwise proceeds to the next value of the index $j$. Once this {\scriptsize FOR} Loop is completed, the resulting set $S_{r}(n)$ is inserted into the data structure $T$ via the assignment $T:=T\cup S_{r}(n)$. Finally, to construct the complete solution set $S(n)$ of the ESP-Problem, each recursive procedure $calc\_shell(n,i)$ is executed within the procedure $calc\_solution(n)$, which consists of a {\scriptsize FOR} Loop indexed by $i=m+1...2$.
The above algorithm assembly is summarized in pseudo code as follows:\\ \\ {\bf Algorithm 3.1}\\ \\ {\bf Initialization:} $ T := \emptyset $ \\ \\ Construct the set $S_{r}(k)$\\ $ procedure \hspace*{2mm} calc\_shell(k,r)$\\ \mbox{ }\hspace{1cm}If $S_{r}(k) \in T $ then\\ \mbox{ }\hspace{1cm} exit \\ \mbox{ }\hspace{1cm}$S_{r}(k):=\emptyset$ \\ \mbox{ }\hspace{1cm}If $r = 2 $ then $S_{2}(k):=\left\{ \left( d+1,\frac{k-1}{d}+1;k-2\right) :d| (k-1), d\leq\sqrt{k-1}\right\}$ \\ \mbox{ }\hspace{2cm}$ T := T \cup S_{2}(k) $ \\ \mbox{ }\hspace{2cm}exit \\ \mbox{ }\hspace{1cm}For $ j = \lfloor\frac{k-3r+2}{2}\rfloor \ldots 2^{r-2}-r $ \\ \mbox{ }\hspace{2cm}If $2^{r-2}-r>\lfloor\frac{k-3r+2}{2}\rfloor$ then $S_{r}(k):=\emptyset\,\,\mbox{and}\,\, T:=T\cup S_{r}(k)$\\ \mbox{ }\hspace{2cm}exit \\ \mbox{ }\hspace{2cm} call $calc\_shell(j+r,r-1) $ \\ \mbox{ }\hspace{2cm} For each element $(x_{1},x_{2},\ldots ,x_{r-1};j+1)\in S_{r-1}(j+r)$ such that \\ \[ w=1+\frac{k-r-j}{\sum_{l=1}^{r-1}x_{l} +j}\in\mathbb{N} \] \mbox{ }\hspace{3cm} then $S_{r}(k):=S_{r}(k)\cup\{ (x_{1},x_{2},\ldots ,x_{r-1},w;k-r)\}$\\ \mbox{ }\hspace{1cm}$ T := T \cup S_{r}(k) $ \\ \\ Construct the solution set $S(n)$\\ $ procedure \hspace*{2mm} calc\_solution(n)$\\ \mbox{ }\hspace{1cm}For $ i= \lfloor\log_{2}(n)\rfloor +1 \ldots 2 $ \\ \mbox{ }\hspace{2cm} $ calc\_shell(n,i) $ \\ \mbox{}\\ {\bf Step 2 Part B: Implementation of Algorithm}\\ To illustrate both the function of Algorithm 3.1 and the need for balanced Binary Search Trees (BST) in its implementation, we first examine the construction of the solution set for the ESP-Problem in the case of $n=15$ as follows.\\ \\ {\bf Example 3.1:} As $2^{3}<15<2^{4}$ we have $m=\lfloor\log_{2}15\rfloor =3$ and so $S(15)=\cup_{r=2}^{4}S_{r}(15)$. After initializing $T:=\emptyset$, Algorithm 3.1 executes each procedure $calc\_shell(15,i)$ in descending order of $i=4,3,2$, to construct $S_{i}(15)$, initially assigned as empty.
Beginning with $calc\_shell(15,4)$, this procedure must first call on each $calc\_shell(j+4,3)$ in descending order of $j=2,1,0$, for the construction of $S_{3}(6)$, $S_{3}(5)$, $S_{3}(4)$ respectively, before invoking the divisibility test \begin{equation} (x_{1},x_{2},x_{3};j+1)\in S_{3}(j+4)\,\,\mbox{is such that}\,\, w=1+\frac{11-j}{\sum_{l=1}^{3}x_{l}+j}\in\mathbb{N}\mbox{ ,}\label{eq:8} \end{equation} which if true results in the insertion of $(x_{1},x_{2},x_{3},w;11)$ into $S_{4}(15)$, and otherwise leaves $S_{4}(15):=\emptyset$. At this stage we similarly find that the execution of each $calc\_shell(k,3)$ for $k=6,5$ must call on $calc\_shell(j+3,2)$ for $j=-1$ to produce, via (\ref{eq:1ex}), the set $S_{2}(2)=\{ (2,2;0)\}$, before invoking the divisibility test \begin{equation} (2,2;0)\in S_{2}(2)\,\,\mbox{ is such that}\,\, w=1+\frac{k-3+1}{4-1}\in\mathbb{N}\mbox{ ,}\label{eq:9} \end{equation} which if true results in the insertion of $(2,2,w;k-3)$ into $S_{3}(k)$, and otherwise leaves $S_{3}(k):=\emptyset$. Consequently upon substituting $k=6,5$ into (\ref{eq:9}) we deduce that $S_{3}(6)=\emptyset$ while $S_{3}(5)=\{ (2,2,2;2)\}$. However the execution of $calc\_shell(4,3)$ must result in $S_{3}(4):=\emptyset$, as $2^{r-2}-r>\lfloor\frac{k-3r+2}{2}\rfloor$ for $(k,r)=(4,3)$. Thus by substituting the only solution $(2,2,2;2)=(x_{1},x_{2},x_{3};j+1)\in S_{3}(j+4)$, corresponding to $j=1$, into (\ref{eq:8}), we find $w=1+\frac{10}{7}\not\in\mathbb{N}$ and so $S_{4}(15):=\emptyset$.
Thus, as yet, $T$ contains no solutions of the ESP-Problem in $15$ variables.\\ \\ In like manner the execution of $calc\_shell(15,3)$, to construct $S_{3}(15)$, will call on $calc\_shell(j+3,2)$ in descending order of $j=4,3,2,1,0,-1$, and produces again via (\ref{eq:1ex}) the sets $S_{2}(7)=\{(7,2;5),(4,3;5)\},S_{2}(6)=\{(6,2;4)\},S_{2}(5)=\{(5,2;3),(3,3;3)\},S_{2}(4)=\{(4,2;2)\},S_{2}(3)=\{(3,2;1)\},S_{2}(2)=\{ (2,2;0)\}$, which after substituting into the divisibility test \[ (x_{1},x_{2};j+1)\in S_{2}(j+3)\,\,\mbox{is such that}\,\, w=1+\frac{12-j}{\sum_{l=1}^{2}x_{l}+j}\in\mathbb{N}\mbox{ ,} \] yields that $S_{3}(15)=\emptyset$, as for each corresponding value of $j$ we find $w\not\in\mathbb{N}$; so $T$ still contains no solutions in $15$ variables. Finally the execution of $calc\_shell(15,2)$ produces via (\ref{eq:1ex}) the only non-empty set, namely $S_{2}(15)=\{ (15,2;13), (8,3;13)\}$, for insertion into $T$. Thus we conclude $S(15)=\{ (15,2;13), (8,3;13)\}$.\\ \\ From Example 3.1 we can see that even for a value as small as $n=15$, prior to the construction of each set $S_{r}(n)$, for $r=\lfloor\log_{2}n\rfloor +1\ldots 3$, the procedure $calc\_shell(n,r)$ must first call on itself a number of times in descending procedural calls of the form $calc\_shell(j+r,r-1)$, for $j=\lfloor\frac{n-3r+2}{2}\rfloor \ldots 2^{r-2}-r$, in order to construct the sets $S_{r-1}(r+j)$. However before these latter sets can be constructed, a similar process of descending procedural calls must be repeatedly applied until we reach the construction of the ``base'' sets, namely the $S_{2}(\cdot)$ given explicitly in (\ref{eq:1ex}). Once these base sets have been determined, then by applying the respective divisibility tests in the reverse order to the sequence of procedural calls described above, one can finally construct each set $S_{r}(n)$, and obtain the complete solution set $S(n)$.
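For concreteness, the whole of Step 1 together with Algorithm 3.1 can be condensed into a short C++ sketch. This is our illustration rather than the program of \cite{program1}: the store $T$ is realized as an ordered \texttt{std::map} keyed by $(k,r)$, whose default ordering respects (\ref{parord}), in place of a hand-built BST, and each generated tuple is sorted into ascending order so that permuted duplicates collapse.

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <set>
#include <vector>

using Tuple = std::vector<int>;          // non-unit components x_1 <= ... <= x_r
using SolutionSet = std::vector<Tuple>;

// Floor of a/2, valid for negative a as well (C++ '/' truncates toward zero).
int half_floor(int a) { return (a >= 0) ? a / 2 : -((-a + 1) / 2); }

// The store T: memoized sets S_r(k), keyed by (k, r).
std::map<std::pair<int, int>, SolutionSet> T;

// Construct S_r(k) following Step 1 / Algorithm 3.1.
const SolutionSet& calc_shell(int k, int r) {
    auto key = std::make_pair(k, r);
    auto it = T.find(key);
    if (it != T.end()) return it->second;          // already in T: exit
    std::set<Tuple> S;                             // kept sorted; removes duplicates
    if (r == 2) {
        // Base sets via (eq:1ex): (d+1, (k-1)/d + 1) for d | (k-1), d <= sqrt(k-1).
        for (int d = 1; d * d <= k - 1; ++d)
            if ((k - 1) % d == 0) S.insert({d + 1, (k - 1) / d + 1});
    } else {
        int lo = (1 << (r - 2)) - r;               // 2^{r-2} - r, cf. (eq:3)
        int hi = half_floor(k - 3 * r + 2);        // floor((k - 3r + 2)/2), cf. (eq:5)
        for (int j = lo; j <= hi; ++j)
            for (const Tuple& t : calc_shell(r + j, r - 1)) {
                int s = 0;
                for (int x : t) s += x;
                // Divisibility test: w = 1 + (k - r - j)/(s + j) must be an integer.
                if ((k - r - j) % (s + j) == 0) {
                    Tuple u = t;
                    u.push_back(1 + (k - r - j) / (s + j));
                    std::sort(u.begin(), u.end());
                    S.insert(u);
                }
            }
    }
    return T[key] = SolutionSet(S.begin(), S.end());
}

// S(n) = union of S_r(n), r = 2, ..., floor(log2 n) + 1.
SolutionSet calc_solution(int n) {
    SolutionSet all;
    for (int r = 2; (1 << (r - 1)) <= n; ++r)      // r <= floor(log2 n) + 1
        for (const Tuple& t : calc_shell(n, r)) all.push_back(t);
    return all;
}
```

Running \texttt{calc\_solution(15)} reproduces Example 3.1: $S_{3}(15)$ and $S_{4}(15)$ come out empty, and the only solutions found are those with non-unit components $\{2,15\}$ and $\{3,8\}$.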
\\ \\ Clearly, the first problem we must overcome to achieve a practicable implementation of Algorithm 3.1 is to efficiently store and retrieve solutions within the respective sequence of intermediary sets that are needed for the recursive generation of each set $S_{r}(n)$, for $r=\lfloor\log_{2}n\rfloor +1\ldots 3$. Moreover, as illustrated by Example 3.1, we should avoid the unnecessary reconstruction of existing intermediary sets, which may arise when constructing the sets $S_{r}(n)$ for lower values of $r$. Finally one will also require a practical method for determining the base sets $S_{2}(\cdot )$ using (\ref{eq:1ex}). This problem will be addressed last, and in the interim we shall assume that one can readily construct the set $S_{2}(n)$ for any integer $n>2$.\\ \\ Beginning with the problem of storage and retrieval, recall from Section 2 that a finite collection of solution sets such as $S=\{ S_{2}(2),S_{3}(4),S_{3}(5),S_{3}(6),S_{2}(7),S_{4}(15)\}$ can be inserted as nodes in a BST which preserves the ordering in (\ref{parord}). When applying Algorithm 3.1, by tracking the execution of all descending procedural calls required in the construction of each set $S_{r}(n)$, we first determine a specific collection of base sets $S_{2}(\cdot)$, which are constructed in descending order with respect to (\ref{parord}). Once determined, these base sets can then be inserted into an evolving BST, in which their placement into the left or right subtree will depend upon their relative ordering via (\ref{parord}) with respect to the current root node. Upon re-balancing, the process of retrieving the required solutions for applying the respective divisibility tests can then be executed, and the resulting intermediary sets, whether empty or not, can be progressively inserted into an evolving BST, culminating with the insertion of each set $S_{r}(n)$.
Once all the sets $S_{r}(n)$ have been inserted into the final balanced BST, the complete solution set $S(n)$ can then be formed by retrieval of the elements from those sets $S_{r}(n)$ which are non-empty.\\ \\ One advantage in using a balanced BST is that we can search through $m$ nodes in $O(\log (m))$ time, thus making the process of searching and retrieval efficient. This efficiency can further be exploited to search effectively for pre-existing intermediary sets, thus avoiding unnecessary reconstruction. A secondary advantage in using BSTs is that these data structures are readily implementable in a programming language such as C++, where the operations of storing, searching and retrieval of nodes can be performed using the existing Standard Template Library supported within C++. We now illustrate the process outlined above by displaying a subsequence of evolving balanced BSTs leading to the construction of $S(15)$.\\ \\ From Example 3.1 recall that the algorithm began with the construction of $S_{4}(15)$ by first executing three descending procedural calls, to construct the intermediary sets $S_{3}(6)$, $S_{3}(5)$, $S_{3}(4)$, which in turn could not be determined until the base set $S_{2}(2)$ was constructed. Once this was achieved, the sequence of insertions into a BST began with $S_{2}(2)$ chosen as the initial root node, followed by the insertion of the set $S_{3}(6)$ to the right of $S_{2}(2)$. Next, after inserting $S_{3}(5)$ to the right of $S_{3}(6)$ and then re-balancing, the root node of the new BST becomes $S_{3}(5)$. After insertion of the remaining set $S_{3}(4)$ to the right of $S_{2}(2)$, the BST can now be searched for the solutions used in the divisibility test to construct $S_{4}(15)$, which is then placed to the right of $S_{3}(6)$ in the BST.
This initial sequence of evolving BST is illustrated below in Figure 2.\\ \begin{figure}[htb] \centering {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{2}(2) $} child { [draw=white] node [draw=white] (a) {$ $} child { [draw=white] node [draw=white] (b) {$ $} } }; \end{tikzpicture} } {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{2}(2) $} child { [draw=white] node [draw=white] (a) {$ $} } child { node [circle,draw] (b) {$ S_{3}(6) $} }; \end{tikzpicture} } {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{3}(5) $} child { node [circle,draw] (a) {$ S_{2}(2) $} } child { node [circle,draw] (b) {$ S_{3}(6) $} }; \end{tikzpicture} } {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{3}(5) $} child { node [circle,draw] (a) {$ S_{2}(2) $} child { [draw=white] node [draw=white] (c) {$ $} } child { node [circle,draw] (d) { $ S_{3}(4) $ } } } child { node [circle,draw] (b) {$ S_{3}(6) $} }; \end{tikzpicture} } {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{3}(5) $} child { node [circle,draw] (a) {$ S_{2}(2) $} child { [draw=white] node [draw=white] (b) {$ $} } child { node [circle,draw] (g) { $ S_{3}(4) $ } } } child { node [circle,draw] (j) { $ S_{3}(6) $ } child { [draw=white] node [draw=white] (k) { } } child { node [circle,draw] (l) { $ S_{4}(15) $ } } }; \end{tikzpicture} } {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{3}(5) $} child { node [circle,draw] (a) {$ S_{2}(2) $} child { [draw=white] node [draw=white] (b) {$ $} } child { node [circle,draw] (c) { $ S_{3}(4) $ } } } child { node [circle,draw] (d) { $ S_{2}(7) $ } child { node [circle,draw] (e) { $ S_{3}(6) $ } } child { node [circle,draw] (f) { $ S_{4}(15) $ } } }; \end{tikzpicture} } \caption*{Figure 2} \end{figure} \\ Continuing on, recall the algorithm had to construct the set $S_{3}(15)$ by executing six descending procedural calls to determine the base sets $S_{2}(7),\ldots ,S_{2}(2)$. 
As $S_{2}(2)$ is already present in the previous BST, this set is not re-constructed. Consequently, in like manner to the above, a sequence of node insertions and re-balancing continues until all of the previous base sets have been inserted into the BST, leading, after another divisibility test, to the insertion of the set $S_{3}(15)$. The resulting penultimate BST is shown on the left of Figure 3. Upon inserting the base set $S_{2}(15)$ to the left of $S_{3}(15)$, the final BST is pictured on the right of Figure 3. From this BST, the complete solution set $S(15)$ can now be retrieved from those non-empty nodes contained within the list $\{S_{4}(15),S_{3}(15),S_{2}(15)\}$.\\ \vspace{2cm} In implementing Algorithm 3.1, one could also have taken advantage of the constant (amortized $O(1)$) complexity of searching and retrieval in a data structure such as a hash table, but this was not chosen because hash tables require {\em a priori} estimates for the number of intermediary sets needed, which we do not have; moreover, a BST is easier to implement in comparison.
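To make the preceding concrete, a balanced-BST cache of intermediary sets can be sketched in C++ with std::map, which is typically implemented as a red-black tree (a self-balancing BST). The type and function names below are illustrative assumptions, not those of the paper's actual program:

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// One solution tuple of the ESP problem, and a set S_r(n) of such tuples
// (illustrative representations, not the paper's actual types).
using Solution = std::vector<std::uint64_t>;
using SolutionSet = std::vector<Solution>;

// Cache of intermediary sets keyed by (r, n).  std::map is typically a
// red-black tree, so both operations below take O(log m) time when m
// sets are stored.
static std::map<std::pair<int, std::uint64_t>, SolutionSet> cache;

// Search the tree for a previously constructed S_r(n), avoiding
// unnecessary reconstruction.
bool lookup(int r, std::uint64_t n, SolutionSet& out) {
    auto it = cache.find({r, n});
    if (it == cache.end()) return false;  // not yet constructed
    out = it->second;
    return true;
}

// Insert a newly constructed S_r(n); the tree re-balances itself.
void store(int r, std::uint64_t n, const SolutionSet& s) {
    cache[{r, n}] = s;
}
```

With such a cache, the recursion of Algorithm 3.1 would consult `lookup` before each descending procedural call and invoke `store` once a set has been built.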
\\ \begin{figure}[htb] \centering {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{3}(5) $} child { node [circle,draw] (a) {$ S_{3}(4) $} child { node [circle,draw] (b) {$ S_{2}(3) $} child { node [circle,draw] (l) {$ S_{2}(2) $} } child { node [circle,draw] (k) {$ S_{2}(4) $} } } child { node [circle,draw] (c) { $ S_{2}(5) $ } } } child { node [circle,draw] (d) { $ S_{2}(7) $ } child { node [circle,draw] (e) { $ S_{3}(6) $ } child { node [circle,draw] (h) { $ S_{2}(6) $ } } child { [draw=white] node [draw=white] (i) { } } } child { node [circle,draw] (f) { $ S_{4}(15) $ } child { node [circle,draw] (m) { $ S_{3}(15) $ } } child { [draw=white] node [draw=white] (n) { } } } }; \end{tikzpicture} } {\scalefont{0.5} \begin{tikzpicture}[] \node [circle,draw] (z){$ S_{3}(5) $} child { node [circle,draw] (a) {$ S_{3}(4) $} child { node [circle,draw] (b) {$ S_{2}(3) $} child { node [circle,draw] (l) {$ S_{2}(2) $} } child { node [circle,draw] (k) {$ S_{2}(4) $} } } child { node [circle,draw] (c) { $ S_{2}(5) $ } } } child { node [circle,draw] (d) { $ S_{2}(7) $ } child { node [circle,draw] (e) { $ S_{3}(6) $ } child { node [circle,draw] (h) { $ S_{2}(6) $ } } child { [draw=white] node [draw=white] (i) { } } } child { node [circle,draw] (f) { $ S_{3}(15) $ } child { node [circle,draw] (m) { $ S_{2}(15) $ } } child { node [circle,draw] (n) { $ S_{4}(15) $ } } } }; \end{tikzpicture} } \\ \caption*{Figure 3} \end{figure} \\ To conclude, we now address how the ``base'' sets $S_{2}(\cdot)$ were constructed within the C++ program. From (\ref{eq:1ex}) it is clear that, in order that $S_{2}(n)$ can be determined for large $n>2$, one needs to perform a factorization of the form $n-1=d\,((n-1)/d)$ over all divisors $d$ of $n-1$ with $d\leq\sqrt{n-1}$.
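A minimal trial-division sketch of this construction (our illustrative helper, not the LIDIA/GMP-based routine the program actually used): each divisor $d\leq\sqrt{n-1}$ of $n-1$ contributes the pair $(d+1,\,(n-1)/d+1)$, with the accompanying count $n-2$ of 1's left implicit.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Build the pairs (d+1, (n-1)/d + 1) of S_2(n) for n > 2, one for each
// factorization n-1 = d * ((n-1)/d) with d <= sqrt(n-1).
std::vector<std::pair<std::uint64_t, std::uint64_t>> buildS2(std::uint64_t n) {
    std::vector<std::pair<std::uint64_t, std::uint64_t>> s2;
    const std::uint64_t m = n - 1;
    for (std::uint64_t d = 1; d * d <= m; ++d) {
        if (m % d == 0) s2.emplace_back(d + 1, m / d + 1);
    }
    return s2;
}
```

For $n=15$ this yields $(2,15)$, the basic solution, and $(3,8)$: indeed $3+8+\underbrace{1+\cdots+1}_{13}=24=3\cdot 8$.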
This was made practical via the use of the number theory library LIDIA (see \cite{lidia}), which provided the necessary factorization algorithm, together with GMP (see \cite{gmp}), a free library for high-precision integer arithmetic, used to compute and output the numerical values of the terms $\frac{n-1}{d}+1$ and $d+1$ needed to construct each of the elements of the set $S_{2}(n)$. \\ \section{Searching for Extra Exceptional Values} As mentioned previously, the Diophantine equation in (\ref{eq:2ex}) always has a solution for $n\geq 2$, as the ordered $n$-tuple $(2,n;n-2)$ satisfies the arithmetic identity \[ \underbrace{1+1+\cdots +1}_{(n-2) 1's}+2+n=\underbrace{1\cdot 1\cdots 1}_{(n-2)1's}\cdot2\cdot n\mbox{ .} \] The ordered $n$-tuple $(2,n;n-2)$ was referred to in \cite{ecker} as the ``basic solution''. It has been noted in \cite{ecker},\cite{kurland} that for some values of $n\geq 2$ the basic solution is the only solution to the ESP-Problem in $n$ variables, that is, $S(n)=\{ (2,n;n-2)\}$. These special values of $n$ were termed in \cite{ecker} the exceptional values, of which $n=2$ is clearly one such value, via Lemma 2.1. From the structure of the solution set $S(n)$, one can see that for an integer $n>2$ to be an exceptional value, necessarily $S_{2}(n)=\{ (2,n;n-2)\}$. Now in view of (\ref{eq:1ex}), as the cardinality of $S_{2}(n)$ is equal to the number of factorizations $n-1=ab$ with $1\leq a\leq b$, we deduce that $S_{2}(n)=\{ (2,n;n-2)\}$ if and only if $n-1$ is prime. Thus when searching for exceptional values of $n>2$, it suffices to concentrate the search on those $n>2$ for which $n-1$ is prime. Armed with this information, the authors in \cite{ecker},\cite{kurland}, via an exhaustive computer search of all primes less than $10^{10}$, uncovered the list of exceptional values contained in the set $E=\{ 2,3,4,6,24,114,174,444\}$.
As stated earlier, it is still an open conjecture as to whether the set $E$ is complete. In the following result, we provide a stricter necessary condition for an integer $n>2$ to be an element of $E$. Recall that a prime $p$ is a Sophie Germain prime if $2p+1$ is also a prime. \begin{theo} If an integer $n>2$ is an exceptional value of the ESP-Problem in $n$ variables, then $n-1$ must be a Sophie Germain prime number. \end{theo} \begin{proof} If $n>2$ is an exceptional value, then necessarily $n-1$ is a prime number and the set $S_{3}(n)=\emptyset$. Recalling that the recursive generation of the set $S_{r}(n)$ is summarized in equation (\ref{eq:7}), observe after setting $r=3$ that \begin{equation} S_{3}(n)=\bigcup_{j=-1}^{\lfloor\frac{n-7}{2}\rfloor}\{ (x_{1},x_{2},w;n-3): (x_{1},x_{2};j+1)\in S_{2}(j+3), w=1+\frac{n-3-j}{x_{1}+x_{2}+j}\in\mathbb{N}\}\mbox{ .}\label{eq:10} \end{equation} Under this assumption, all the sets within the union must be empty, and so from (\ref{eq:10}) we deduce, upon substituting the basic solution $(2,j+3;j+1)\in S_{2}(j+3)$ into the expression for $w$, that $w:=\frac{j+2+n}{2j+5}\not\in\mathbb{N}$, for $j=-1,\ldots ,\lfloor\frac{n-7}{2}\rfloor$. As $2j+5$ is an odd integer, the previous conclusion further implies that $2w\not\in\mathbb{N}$, and so observe \begin{eqnarray} 2w:=\frac{2j+4+2n}{2j+5} & = & \frac{2j+5+(2n-1)}{2j+5} \nonumber\\ & = & 1 +\frac{2n-1}{2j+5}\not\in\mathbb{N}\mbox{ ,}\label{eq:11} \end{eqnarray} for $j=-1,\ldots ,\lfloor\frac{n-7}{2}\rfloor$. Recalling that $n-1$ is prime and so $n$ is an even integer, one finds that $\lfloor\frac{n-7}{2}\rfloor =\lfloor\frac{n-6}{2}-\frac{1}{2}\rfloor=\frac{n-6}{2}-1$; consequently, as $j$ runs over $-1,\ldots ,\lfloor\frac{n-7}{2}\rfloor$, the quantity $2j+5$ assumes the values of all the odd integers $3,5,\ldots ,n-3$, and for $n\geq 7$ one has $\sqrt{2n-1}<n-3<n$.
As $n-1$ clearly does not divide $2n-1=2(n-1)+1$, we deduce from (\ref{eq:11}) that $2n-1$ must be a prime number, and so by definition $n-1$ is a Sophie Germain prime number. \end{proof} It is conjectured that the supply of Sophie Germain primes is infinite, and that the number of Sophie Germain primes less than or equal to $n$ is $O(\frac{n}{(\ln n)^{2}})$ (see \cite[p. 165]{riben}). Regardless of the validity of this conjecture, we can see that Theorem 1, together with the working program in \cite{program1}, provides us with a refined searching scheme for extra elements of $E$. Indeed, all that is required is to take each known Sophie Germain prime number $p$ and input $n=p+1$ into this program; if a non-basic solution is uncovered then the algorithm terminates, else we can conclude that $n=p+1$ is an extra exceptional value. To make this searching scheme practicable, one would need access to sufficiently large computing power, in view of the size of the largest known Sophie Germain prime number to date. As of April $2012$, the largest known Sophie Germain prime number, discovered by P. Bliedung using a distributed PrimeGrid search, is $18543637900515\times 2^{666667}-1$, which comprises $200701$ decimal digits (see \cite{blied}). Apart from this computational investigation, a final alternative perspective on Theorem 1 is that if it were possible to prove that the set $E$ is infinite, then the infinitude of Sophie Germain primes would follow at once. However, the consensus of the authors is that this is very unlikely, as the Sophie Germain prime conjecture is a sufficiently deep problem of number theory that settling it by such an elementary approach would be at best wishful thinking.
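As a quick sanity check, the necessary condition of Theorem 1 can be tested against the known exceptional values; a brief sketch with illustrative helper names (trial-division primality testing suffices at this scale):

```cpp
#include <cstdint>

// Trial-division primality test, adequate for small inputs.
bool isPrime(std::uint64_t n) {
    if (n < 2) return false;
    for (std::uint64_t d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Necessary condition of Theorem 1 for n > 2: n-1 must be a Sophie
// Germain prime, i.e. both n-1 and 2(n-1)+1 are prime.
bool satisfiesTheorem1(std::uint64_t n) {
    return isPrime(n - 1) && isPrime(2 * (n - 1) + 1);
}
```

Every element of $E\setminus\{2\}$ passes this test; the condition is not sufficient, however, since for instance $n=12$ passes it but is not an exceptional value.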
https://arxiv.org/abs/2111.02635
The 3x+1 Problem: An Overview
This paper is an overview and survey of work on the 3x+1 problem, also called the Collatz problem, and generalizations of it. It gives a history of the problem. It addresses two questions: (1) What can mathematics currently say about this problem? (as of 2010). (2) How can this problem be hard, when it is so easy to state?
\section{Introduction} The $3x+1$ problem concerns the following innocent seeming arithmetic procedure applied to integers: If an integer $x$ is odd then ``multiply by three and add one", while if it is even then ``divide by two". This operation is described by the {\em Collatz function} $$ C(x) = \left\{ \begin{array}{cl} 3x+1 & \mbox{if}~ x \equiv 1~~ (\bmod ~2 ) , \\ ~~~ \\ \displaystyle\frac{x}{2} & \mbox{if} ~~x \equiv 0~~ (\bmod~2) . \end{array} \right. $$ The $3x+1$ problem, which is often called the {\em Collatz problem}, concerns the behavior of this function under iteration, starting with a given positive integer $n$. \smallskip {\bf $3x+1$ Conjecture.} {\em Starting from any positive integer $n$, iterations of the function $C(x)$ will eventually reach the number $1$. Thereafter iterations will cycle, taking successive values $1, 4, 2, 1,...$.} \smallskip This problem goes under many other names, including the {\em Syracuse problem}, {\em Hasse's algorithm}, {\em Kakutani's problem} and {\em Ulam's problem}. \smallskip A commonly used reformulation of the $3x+1$ problem iterates a different function, the {\em $3x+1$ function}, given by $$ T(x) = \left\{ \begin{array}{cl} \displaystyle\frac{3x+1}{2} & \mbox{if}~ x \equiv 1~~ (\bmod ~2 ) , \\ ~~~ \\ \displaystyle\frac{x}{2} & \mbox{if} ~~x \equiv 0~~ (\bmod~2) . \end{array} \right. $$ From the viewpoint of iteration the two functions are simply related; iteration of $T(x)$ simply omits some steps in the iteration of the Collatz function $C(x)$. The relation of the $3x+1$ function $T(x)$ to the Collatz function $C(x)$ is that: $$ T(x) = \left\{ \begin{array}{cl} C( C(x)) & \mbox{if}~ x \equiv 1~~ (\bmod ~2 ) ~, \\ ~~~ \\ C(x) & \mbox{if} ~~x \equiv 0~~ (\bmod~2) ~. \end{array} \right. $$ As it turns out, the function $T(x)$ proves more convenient for analysis of the problem in a number of significant ways, as first observed independently by Riho Terras (\cite{Ter76}, \cite{Ter79}) and by C. J. 
Everett \cite{Ev77}. \smallskip The $3x+1$ problem has fascinated mathematicians and non-mathematicians alike. It has been studied by mathematicians, physicists, and computer scientists. It remains an unsolved problem, which appears to be extremely difficult. \smallskip This paper aims to address two questions: \smallskip % % \begin{enumerate} \item[(1)] {\em What can mathematics currently say about this problem?}\medskip \item[(2)] {\em How can this problem be hard, when it is so easy to state? } \end{enumerate} % % \smallskip To address the first question, this overview discusses the history of work on the problem. Then it describes generalizations of the problem, and lists the different fields of mathematics on which the problem impinges. It gives a brief summary of the current strongest results on the problem. \smallskip Besides the results summarized here, this volume contains more detailed surveys of mathematicians' understanding of the $3x+1$ problem and its generalizations. These cover both rigorously proved results and heuristic predictions made using probabilistic models. The book includes several survey articles, it reprints several early papers on the problem, with commentary, and it presents an annotated bibliography of work on the problem and its generalizations. \smallskip To address the second question, let us remark first that the true level of difficulty of any problem can only be determined when (and if) it is solved. Thus there can be no definitive answer regarding its difficulty. The track record on the $3x+1$ problem so far suggests that this is an extraordinarily difficult problem, completely out of reach of present day mathematics. Here we will only say that part of the difficulty appears to reside in an inability to analyze the pseudorandom nature of successive iterates of $T(x)$, which could conceivably encode very difficult computational problems. We elaborate on this answer in \S7. \smallskip Is the $3x+1$ problem an important problem? 
Perhaps not for its individual sake, where it merely stands as a challenge. It seems to be a prototypical example of an extremely simple to state, extremely hard to solve, problem. A middle of the road viewpoint is that this problem is representative of a large class of problems, concerning the behavior under iteration of maps that are expanding on part of their domain and contracting on another part of their domain. This general class of problems is of definite importance, and is currently of great interest as an area of mathematical (and physical) research; for some perspective, see Hasselblatt and Katok \cite{HK95}. Progress on general methods of solution for functions in this class would be extremely significant. \smallskip This overview describes where things currently stand on the $3x+1$ problem and how it relates to various fields of mathematics. For a detailed introduction to the problem, see the following paper of Lagarias \cite{Lag85} (in this volume). In \S2 we give some history of the problem; this presents some new information beyond that given in \cite{Lag85} . Then in \S3 we give a flavor of the behavior of the $3x+1$ iteration. In \S4 we discuss various frameworks for generalizing the problem; typically these concern iterations of functions having a similar appearance to the $3x+1$ function. In \S5 we review areas of research: these comprise different fields of mathematics and computer science on which this problem impinges. In \S6 we summarize the current best results on the problem in various directions. In \S7 we discuss the hardness of the $3x+1$ problem. In \S8 we describe some research directions for future progress. In \S9 we address the question: ``Is the $3x+1$ problem a good problem?" In the concluding section \S10 we offer some advice on working on $3x+1$-related problems. \section{History and Background} The $3x+1$ problem circulated by word of mouth for many years. It is generally attributed to Lothar Collatz. 
He has stated (\cite{Col80}) that he took lecture courses in 1929 with Edmund Landau and Fritz von Lettenmeyer in G\"{o}ttingen, and courses in 1930 with Oskar Perron in Munich and with Issai Schur in Berlin, the latter course including some graph theory. He was interested in graphical representations of iteration of functions. In his notebooks in the 1930's he formulated questions on iteration of arithmetic functions of a similar kind (cf. \cite[p.\,3]{Lag85}). Collatz is said by others to have circulated the problem orally at the International Congress of Mathematicians in Cambridge, Mass. in 1950. Several people whose names were subsequently associated with the problem gave invited talks at this International Congress, including H. S. M. Coxeter, S. Kakutani, and S. Ulam. Collatz \cite{Col86} (in this volume) states that he described the $3x+1$ problem to Helmut Hasse in 1952 when they were colleagues at the University of Hamburg. Hasse was interested in the problem, and wrote about it in lecture notes in 1975 (\cite{Ha75}). Another claimant to having originated the $3x+1$ problem is Bryan Thwaites \cite{Th85}, who asserts that he came up with the problem in 1952. Whatever its true origin, the $3x+1$ problem was already circulating at the University of Cambridge in the late 1950's, according to John H. Conway and to Richard Guy \cite{Guy09}.
The results that could be proved appeared pathetically weak, so that it could seem damaging to one's professional reputation to publish them. In some mathematical circles it might have seemed in bad taste even to show interest in such a problem, which appears d\'{e}class\'{e}. \smallskip During the 1960's, various problems related to the $3x+1$ problem appeared in print, typically as unsolved problems. This included one of the original problems of Collatz from the 1930's, which concerned the behavior under iteration of the function $$U(2n)= 3n, ~U(4n+1)= 3n+1, ~U(4n+3)= 3n+2. $$ The function $U(n)$ defines a permutation of the integers, and the question concerns whether the iterates of the value $n=8$ form an infinite set. This problem was raised by Murray Klamkin \cite{Klm63} in 1963 (see Lagarias \cite[p.\, 3]{Lag85}), and remains unsolved. Another such problem, posed by Raymond Queneau, a founder of the French mathematical-literary group Oulipo (Ouvroir de litt\'{e}rature potentielle), concerns allowable rhyming patterns generalizing those used in poems by the 12th-century troubadour, Arnaut Daniel. This problem turns out to be related to a $(3x+1)$-like function whose behavior under iteration is exactly analyzable; see Roubaud \cite{Rou69}. Concerning the $3x+1$ problem itself, during the 1960's large computations were done testing the truth of the conjecture. These reportedly verified the conjecture for all $n \le 10^{9}$. \smallskip To my knowledge, the $3x+1$ problem first appeared in print in 1971, in the written version of a 1970 lecture by H. S. M. Coxeter \cite{Cox71} (in this volume). It was presented there ``as a piece of mathematical gossip." In 1972 it appeared in six different publications, including a Scientific American column by Martin Gardner \cite{Gar72} that gave it wide publicity. Since then there has been a steady stream of work on it, now amounting to several hundred publications.
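The three cases of Collatz's permutation $U$ above can be written as a single function of $x$: for even $x=2n$, $U(x)=3x/2$; for $x=4n+1$, $U(x)=3\lfloor x/4\rfloor+1$; and for $x=4n+3$, $U(x)=3\lfloor x/4\rfloor+2$. An illustrative sketch (ours, not from the survey):

```cpp
#include <cstdint>

// The original Collatz permutation: U(2n)=3n, U(4n+1)=3n+1, U(4n+3)=3n+2.
std::uint64_t U(std::uint64_t x) {
    if (x % 2 == 0) return 3 * (x / 2);      // x = 2n   -> 3n
    if (x % 4 == 1) return 3 * (x / 4) + 1;  // x = 4n+1 -> 3n+1
    return 3 * (x / 4) + 2;                  // x = 4n+3 -> 3n+2
}
```

The orbit of $8$ begins $8, 12, 18, 27, 20, 30, 45, 34, \ldots$; whether it is infinite remains open.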
\smallskip Stanislaw Ulam was one of many who circulated the problem; the name ``Ulam's problem" has been attached to it in some circles. He was a pioneer in ergodic theory and very interested in iteration of functions and their study by computer; he formulated many problem lists (e.g. \cite{Ulam64}, \cite{Ulam-CC}). A collaborator, Paul Stein \cite[p. 104]{Ste89}, wrote about Ulam: \begin{quotation} Stan was not a number theorist, but he knew many number-theoretical facts. As all who knew him well will remember, it was Stan's particular pleasure to pose difficult, though simply stated, questions in many branches of mathematics. Number theory is a field particularly vulnerable to the ``Ulam treatment," and Stan proposed more than his share of hard questions; not being a professional in the field, he was under no obligation to answer them. \end{quotation} \noindent Ulam's long term collaborator C. J. Everett \cite{Ev77} wrote one of the early papers about the $3x+1$ problem in 1977. \smallskip The $3x+1$ problem can also be formulated in the backwards direction, as that of determining the smallest set $S_0$ of integers containing $1$ which is closed under the affine maps $x \mapsto 2x$ and $3x+2 \mapsto 2x+1$, where the latter map may only be applied to inputs $3x+2$ whose output $2x+1$ will be an integer. The $3x+1$ conjecture then asserts that $S_0$ will be the set of all positive integers. This connects the $3x+1$ problem with problems on sets of integers which are closed under the action of affine maps. Problems of this sort were raised by Isard and Zwicky \cite{IZ70} in 1970. In 1970-1971 David Klarner began studying sets of integers closed under iteration of affine maps, leading to joint work with Richard Rado \cite{KR74}, published in 1974. 
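The backwards formulation above is easy to experiment with: starting from $1$, repeatedly apply $x \mapsto 2x$, and apply $3x+2 \mapsto 2x+1$ (equivalently $y \mapsto (2y-1)/3$ whenever $y \equiv 2 \pmod 3$). A hedged sketch; the cap `limit` is our device and must be chosen large enough that the intermediate values of a chain are not cut off:

```cpp
#include <cstdint>
#include <queue>
#include <set>

// Generate the elements of S_0 reachable from 1 without exceeding `limit`,
// where S_0 is the smallest set containing 1 that is closed under the
// maps x -> 2x and 3x+2 -> 2x+1.
std::set<std::uint64_t> backwardOrbit(std::uint64_t limit) {
    std::set<std::uint64_t> s;
    std::queue<std::uint64_t> q;
    s.insert(1);
    q.push(1);
    while (!q.empty()) {
        std::uint64_t y = q.front();
        q.pop();
        // The doubling map x -> 2x.
        if (2 * y <= limit && s.insert(2 * y).second) q.push(2 * y);
        // The map 3x+2 -> 2x+1 applies when y has the form 3x+2;
        // the result (2y-1)/3 is always smaller than y.
        if (y % 3 == 2) {
            std::uint64_t z = (2 * y - 1) / 3;
            if (s.insert(z).second) q.push(z);
        }
    }
    return s;
}
```

With `limit = 10000` this recovers every $n \le 100$, consistent with the $3x+1$ conjecture; the bound suffices here since, e.g., the trajectory of $27$ peaks at $4616$ under $T$.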
Interaction of Klarner and Paul Erd\H{o}s at the University of Reading in 1971 led to the formulation of a (solved) Erd\H{o}s prize problem: Does the smallest set $S_1$ of integers containing $1$ and closed under the affine maps $x \mapsto 2x+1, x \mapsto 3x+1$ and $x \mapsto 6x+1$ have a positive (lower asymptotic) density? This set $S_1$ was proved to have zero density by D. J. Crampin and A. J. W. Hilton (unpublished), according to Klarner \cite{Kla82}. The solvers collected $\pounds 10$ from Erd\H{o}s (\cite{Hil10}). Later Klarner \cite[p. 47]{Kla82} formulated a revised problem: \smallskip {\bf Klarner's Integer Sequence Problem. } {\em Does the smallest set of integers $S_2$ containing $1$ and closed under the affine maps $x \mapsto 2x, x \mapsto 3x+2$ and $x \mapsto 6x+3$ have a positive (lower asymptotic) density?} \smallskip \noindent This problem remains unsolved; see the paper of Guy \cite{Guy83} (in this volume) and accompanying editorial commentary. \smallskip Much early work on the problem appeared in unusual places, some of it in technical reports, some in problem journals. The annotated bibliography given in this book \cite{Lag-B1} covers some of this literature, see also its sequel \cite{Lag-B2}. Although the problem began life as a curiosity, its general connection with various other areas of mathematics, including number theory, dynamical systems and theory of computation, have made it a respectable topic for mathematical research. A number of very well known mathematicians have contributed results on it, including John H. Conway \cite{Con72} and Yakov G. Sinai \cite{Sin03a}, \cite{Sin03b}. \section{$3x+1$ Sampler} The fascination of the $3x+1$ problem involves its simple definition and the apparent complexity of its behavior under iteration: there seems to be no simple relation between the input value $n$ and the iterates of $n$. 
Exploration of its structure has led to the formulation of a web of subsidiary conjectures about the behavior of iterates of the $3x+1$ function and generalizations; these include conjectures (C1)--(C5) listed in \S8. Many of these conjectures seem to be extremely difficult problems as well, and their exploration has led to much further research. Since other papers in this volume give much more information on this complexity, here we give only a brief sampler of $3x+1$ function behavior. % % \subsection{Plots of Trajectories } By the {\em trajectory} of $x$ under a function $T$, we mean the forward orbit of $x$, that is, the sequence of its forward iterates \\ $( x, T(x), T^{(2)}(x), T^{(3)}(x), ...)$. Figure \ref{fig21} displays the $3x+1$-function iterates of $n=649$ plotted on a standard scale. We see an irregular series of increases and decreases, leading to the name ``hailstone numbers" proposed by Hayes \cite{Hay84}, as hailstones form by repeated upward and downward movements in a thunderhead. \begin{figure}[h] \centering \includegraphics[width=3.5in]{Figure21.pdf} \caption{Trajectory of $n=649$ plotted on standard vertical scale} \label{fig21} \end{figure} \smallskip To gain insight into a problem it helps to choose an appropriate scale for picturing it. Here it is useful to view long trajectories on a logarithmic scale, i.e., to plot $\log T^{(k)}(n)$ versus $k$. Figure \ref{fig22} displays the iterates of $ n_0 = 100\lfloor \pi \cdot 10^{35} \rfloor $ on such a scale. Using this scale we see a decrease at a certain geometric rate to the value of $1$, indicated by the trajectory having roughly a constant slope. This is characteristic of most long trajectories. As explained in \S3.3, a probabilistic model predicts that most trajectories plotted on a logarithmic scale will stay close to a line of constant slope $\frac{1}{2} \log \frac{3}{4} \sim -0.14384,$ thus taking about $6.95212 \log n$ steps to reach $1$.
This line is pictured as the dotted line in Figure \ref{fig22}. This trajectory takes $529$ steps to reach $n=1$, while the probabilistic model predicts about $600$ steps will be taken. \begin{figure} \centering \includegraphics[width=3.5in]{Figure22.pdf} \caption{Trajectory of $n_0=100 \lfloor \pi \cdot 10^{35}\rfloor$ plotted on a logarithmic vertical scale $y= \log T^{(x)}(n_0)$ (natural logarithm), $x$ an integer. The dotted line is a probability model prediction for a ``random" trajectory for this size $N$.} \label{fig22} \end{figure} \smallskip On the other hand, plots of trajectories suggest that iterations of the $3x+1$ function also seem to exhibit pseudo-random features, i.e. the successive iterates of a random starting value seem to increase or decrease in an unpredictable manner. From this perspective there are some regularities of the iteration that appear (only) describable as statistical in nature: they are assertions about the majority of trajectories in ensembles of trajectories rather than about individual trajectories. \\ % % \subsection{Patterns} Close examination of the iterates of the $3x+1$ function $T(x)$ for different starting values reveals a myriad of internal patterns. A simple pattern is that the initial iterates of $n=2^m-1$ are $$ T^{(k)}(2^m-1) = 3^{k} \cdot 2^{m-k} -1, ~~\mbox{for} ~1 \le k \le m. $$ In particular, $T^{(m)}(2^m -1) = 3^m -1$; this example shows that the iteration can sometimes reach values arbitrarily larger than the initial value, either on an absolute or a relative scale, even if, as conjectured, the iterates eventually reach $1$. Other patterns include the appearance of occasional large clusters of consecutive numbers which all take exactly the same number of iterations to reach the value $1$. Some of these patterns are easy to analyze, others are more elusive. 
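The functions $C(x)$ and $T(x)$ from \S1, together with the total stopping time statistic tabulated below (the number of $T$-iterations needed to reach $1$), are immediate to implement; an illustrative sketch, which assumes, as conjectured, that the trajectory does reach $1$:

```cpp
#include <cstdint>

// The Collatz function C(x): 3x+1 for odd x, x/2 for even x.
std::uint64_t C(std::uint64_t x) {
    return (x % 2 == 1) ? 3 * x + 1 : x / 2;
}

// The 3x+1 function T(x): (3x+1)/2 for odd x, x/2 for even x.
std::uint64_t T(std::uint64_t x) {
    return (x % 2 == 1) ? (3 * x + 1) / 2 : x / 2;
}

// Total stopping time: the number of T-iterations needed to reach 1,
// counting the start as the 0-th iterate.  Loops forever if the 3x+1
// conjecture fails for n.
int totalStoppingTime(std::uint64_t n) {
    int k = 0;
    while (n != 1) {
        n = T(n);
        ++k;
    }
    return k;
}
```

Note that $T(x)=C(C(x))$ for odd $x$ and $T(x)=C(x)$ for even $x$, and the pattern $T^{(k)}(2^m-1)=3^{k}\cdot 2^{m-k}-1$ can be checked directly.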
\smallskip Table 2.1 presents data on iterates of the $3x+1$ function $T(x)$ for $n= n_0+ m$, $0 \le m = 10j +k \le 99$, with $$ n_0 = 100\lfloor \pi \cdot 10^{35} \rfloor = 31,415,926,535,897,932,384,626,433,832,795,028,800. $$ Here $\sigma_{\infty} (n)$ denotes the {\em total stopping time} for $n$, which counts the number of iterates of the $3x+1$-function $T(x)$ needed to reach $1$ starting from $n$, counting $n$ as the $0$-th iterate. This number is the same as the number of even numbers appearing in the trajectory of the Collatz function before first reaching $1$. \medskip \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & $j=0$ & $j=1$ & $j=2$ & $j=3$ & $j=4$ & $j=5$ & $j=6$ & $j=7$ & $j=8$ & $j=9$ \\ \hline $k=0$& 529& 529& 529 & 678& 529 & 529& 846 & 529& 846& 846\\ $k=1$ & 659& 659& 529 & 678& 659 & 529 & 846& 529& 529& 529\\ $k=2$ & 846& 529& 659 & 529& 529 & 529& 659& 846& 529& 659\\ $k=3$ & 846& 529& 659 & 846 & 659 & 529 & 659& 846 & 529 & 659 \\ $k=4$ & 659& 659& 659 & 846& 678& 529& 846& 846& 846& 659\\ $k=5$ & 659& 659& 846 & 846& 678& 529 & 529& 529 & 846& 659 \\ $k=6$ & 659& 529& 659 & 846& 678& 846& 529& 846& 659& 846 \\ $k=7$ & 529& 529& 659 & 846& 659& 659& 529& 846& 659& 529\\ $k=8$ & 529& 678& 659& 846& 529& 846& 529 & 529& 846& 846 \\ $k=9$ & 529& 678& 659& 659& 529& 529& 529& 529& 659 & 846\\ \hline \end{tabular} \bigskip {\sc Table}~2.1.~~Values of total stopping time $\sigma_\infty (n)$ for $n= n_0+10j +k,$ with $n_0 := 100\lfloor \pi \cdot 10^{35} \rfloor = 31, 415, 926,535, 897,932, 384,626,433, 832, 795, 028 ,800.$ \bigskip \end{center} We observe that the total stopping time function takes only a few different values, namely: 529, 659, 678 and 846, and these four values occur intermixed in a somewhat random-appearing way, but with some regularities. Note that around $n_0 \sim 3.14 \times 10^{37}$ the predicted ``average size" of a trajectory is $6.95212 \log n_0\approx 600$. 
In the data here we also observe ``jumps'' between the occurring values of size on the order of $100$. \smallskip This is not a property of just this starting value. In Table 2.2 we give similar data for blocks of $100$ near $n= 10^{35}$ and $10^{36}$, respectively. Again we observe that there are also four or five values occurring, but now they are different values. In this table we present data on two other statistics: the {\em frequency} statistic gives the number of occurrences of each value, and the {\em $1$-ratio} statistic denotes the fraction of odd iterates occurring in the given trajectory up to and including when $1$ is reached. It is an experimental fact that all sequences in the table having the same total stopping time also have the same $1$-ratio. In the first two blocks the value $\sigma_{\infty}(n)=481$ (resp. $351$) that occurs with frequency $1$ is that for the initial value $n =10^{35}$ (resp. $n=10^{36}$) in the given interval; these initial values are unusual in being divisible by a high power of $2$. Probabilistic models for the $3x+1$-function iteration predict that even and odd iterates will initially occur with equal frequency, so we may anticipate the $1$-ratio values to be relatively close to $0.5$. \begin{center} \begin{tabular}{|c|c|c||c|c|c||c|c|c|} \hline ~ & (a)~ $10^{35}$ & ~& ~& (b)~ $10^{36}$ &~&~& (c)~$n_0$ & \\ \hline $ \sigma_\infty (n)$ & freq. & $1$-ratio & ~$ \sigma_\infty (n)$ & freq.
& $1$-ratio & $ \sigma_\infty (n)$ & freq.& $1$-ratio \\ \hline 481& 1 & 0.47817&351 & 1 & 0.41594 & 529 & 38& 0.48204\\ 508 & 19& 0.48622 & 467& 72& 0.46895&659&28&0.51138 \\ 573 & 49& 0.50261& 508 & 21& 0.48228&678&7&0.51474\\ 592& 10& 0.50675 &519& 6 & 0.48554&846 & 27& 0.53782\\ 836 & 21& 0.54306 &~& ~ &~ &&&\\ \hline \end{tabular} \bigskip {\sc Table}~2.2~~Values of total stopping time, their frequencies, and $1$-ratio for \\ (a) $10^{35} \leq n \leq 10^{35}+99$, ~~~(b) $10^{36} \leq n \leq 10^{36}+99$, (c) $n_0 \leq n \leq n_0 +99$. \bigskip\\ \end{center} The data in Table 2.2 suggests the following heuristic: as $n$ increases only a few values of $\sigma_{\infty} (n)$ locally occur over short intervals; there is then a slow variation in which values of $\sigma_{\infty} (n)$ occur. However these local values are separated from each other by relatively large ``jumps" in size. We stress that this is a purely empirical observation, nothing like this is rigorously proved! Our heuristic did not quantify what is a ``short interval" and it did not quantify what ``relatively large jumps" should mean. Even the existence of finite values for $\sigma_{\infty}(n)$ in the tables presumes the $3x+1$ conjecture is true for all numbers in the table. % % \subsection{Probabilistic Models} A challenging feature of the $3x+1$ problem is the huge gap between what can be observed about its behavior in computer experiments and what can be rigorously proved. Attempts to understand and predict features of empirical experimentation have led to the following curious outcome: {\em the use of probabilistic models to describe a deterministic process! } This gives another theme of research on this problem: the construction and analysis of probabilistic and stochastic models for various aspects of the iteration process. 
\smallskip A basic probabilistic model of iterates of the $3x+1$ function $T(x)$ proposes that most trajectories for $3x+1$ iterates have equal numbers of even and odd iterates, and that the parities of successive iterates behave in some sense like independent coin flips. A key observation of Terras \cite{Ter76} and Everett \cite{Ev77}, leading to this model, is that the initial iterates of the $3x+1$ function have this property (see Lagarias \cite[Lemma B]{Lag85}). This probabilistic model suggests that most trajectories plotted on a logarithmic vertical scale should appear close to a straight line having negative slope equal to $\frac{1}{2} \log \frac{3}{4} \approx -0.14384$, and should thus take about $6.95212 \log n$ steps to reach $1$. \smallskip The corresponding behavior of iterates of the Collatz function $C(x)$ is more complicated. The allowed patterns of even and odd Collatz function iterates always have an even iterate following each odd iterate. Probabilistic models taking this into account are more complicated to formulate and analyze than the corresponding model for the $3x+1$ function; this is a main reason for studying the $3x+1$ function rather than the Collatz function. Use of the probabilistic model above allows the heuristic inference that Collatz iterates will be even about two-thirds of the time. \smallskip A variety of fairly complicated stochastic models, many of which are rigorously analyzable (as probability models), have now been formulated to model various aspects of these iterations, see Kontorovich and Lagarias \cite{KL09} (in this volume). Rigorous results for such models lead to heuristic predictions for the statistical behavior of iterates of generalized $3x+1$ maps. The model above predicts the behavior of ``most" trajectories. A small number of trajectories may exhibit quite different behavior. One may consider those trajectories that seem to attain a maximal value of some iterate $T^{(k)}(n)$ compared to $n$.
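Before turning to extremal trajectories, the basic model above is easy to probe numerically. The sketch below is our own illustration (the function names are ours, not part of the published computations): it iterates the $3x+1$ function $T(x)$ and computes the total stopping time $\sigma_\infty(n)$, which the model predicts to be about $6.95212 \log n$ for typical $n$.

```python
import math

def T(x):
    """One step of the 3x+1 function: (3x+1)/2 if x is odd, x/2 if x is even."""
    return (3 * x + 1) // 2 if x % 2 == 1 else x // 2

def total_stopping_time(n):
    """sigma_inf(n): the number of T-steps needed for n to reach 1."""
    steps = 0
    while n != 1:
        n = T(n)
        steps += 1
    return steps

def predicted_steps(n):
    """The heuristic prediction 6.95212 * log(n) from the coin-flip model."""
    return 6.95212 * math.log(n)
```

For instance, our code gives $\sigma_\infty(27)=70$ against a prediction of about $6.95212 \log 27 \approx 22.9$, a reminder that the model describes typical behavior and that small outliers such as $n=27$ exist.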
Here a probabilistic model (see \cite[Sec. 4.3]{KL09} in this volume) predicts that the statistic $$ \rho(n) := \frac{\log \left( \max_{k \ge 1} T^{(k)}(n)\right)}{\log n} $$ should satisfy $\rho(n) \le 2 + o(1)$ as $n \to \infty$. Figure \ref{fig23c} offers a plot of the trajectory for the value $n_1=1\,980\,976\,057\,694\,878\,447$, which attains the largest value of the statistic $\rho(n)$ over $1 \le n \le 10^{18}$; this value was found by Oliveira e Silva \cite[Table 6]{OeS09} (in this volume). This example has $\rho(n_1) \approx 2.04982$. Probabilistic models suggest that the extremal trajectories of this form will approach a characteristic shape which consists of two line segments: the first of about $7.645 \log n$ steps of slope about $0.1308$ up to the maximal value of about $2 \log n$, the second of about $13.905 \log n$ steps of slope about $-0.1438$ down to $0$, taking $21.55 \log n$ steps in all. This shape is indicated by the dotted lines on Figure \ref{fig23c} for comparison purposes. \begin{figure} \centering \includegraphics[width=3.5in]{Figure23v2.pdf} \caption{Extremal trajectory $n_1=1\,980\,976\,057\,694\,878\,447$ given in Oliveira e Silva's Table 6.} \label{fig23c} \end{figure} Another prediction of such stochastic models, relevant to the $3x+1$ conjecture, is that the number of iterations required for a positive integer $n$ to iterate to $1$ under the $3x+1$ function $T(x)$ is at most $41.677647 \log n$ (see \cite{LW92}, \cite[Sect. 4]{KL09}). In particular, such models predict, in a quantitative form, that there will be no divergent trajectories. \smallskip These stochastic models can be generalized to model the behavior of many generalized $3x+1$ functions, and they make qualitatively different predictions depending on the function. For example, such models predict that no orbit of iteration of the $3x+1$ function ``escapes to infinity" (divergent trajectory).
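The statistic $\rho(n)$ can be computed directly from its definition. The sketch below is our own illustration (the names `peak` and `rho` are ours); since Python integers have arbitrary precision, it works even for large inputs such as $n_1$ above.

```python
import math

def T(x):
    """The 3x+1 function: (3x+1)/2 if x is odd, x/2 if x is even."""
    return (3 * x + 1) // 2 if x % 2 == 1 else x // 2

def peak(n):
    """max over k >= 1 of T^(k)(n): the largest iterate seen before reaching 1."""
    best = 0
    while n != 1:
        n = T(n)
        best = max(best, n)
    return best

def rho(n):
    """The statistic rho(n) = log(max_k T^(k)(n)) / log(n)."""
    return math.log(peak(n)) / math.log(n)
```

In our computation the trajectory of $27$ peaks at $4616$, so $\rho(27) = \log 4616/\log 27 \approx 2.56$; the bound $\rho(n) \le 2 + o(1)$ is asserted only for large $n$.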
However for the {\em $5x+1$ function} given by $$ T_5(x) = \left\{ \begin{array}{cl} \displaystyle\frac{5x+1}{2} & \mbox{if}~ x \equiv 1~~ (\bmod ~2 ), \\ ~~~ \\ \displaystyle\frac{x}{2} & \mbox{if} ~~x \equiv 0~~ (\bmod~2), \end{array} \right. $$ similar stochastic models predict that almost all orbits should ``escape to infinity" (\cite[Sect. 8]{KL09}). These predictions are supported by experimental computer evidence, but it remains an unsolved problem to prove that there exists even one trajectory for the $5x+1$ problem that ``escapes to infinity". \smallskip There remains considerable research to be done on further developing stochastic models. The experiments on the $3x+1$ iteration reported above in \S3.2 exhibit some patterns not yet explained by stochastic models. In particular, the behaviors of total stopping times observed in Tables 2.1 and 2.2, and the heuristic presented there, have not yet been justified by suitable stochastic models. \section{Generalized $3x+1$ functions} \hspace{\parindent} The original work on the $3x+1$ problem viewed it as a problem in number theory. Much of the more recent work views it as an example of a special kind of discrete dynamical system, as exemplified by the lecture notes volume of G. J. Wirsching \cite{Wir98}. As far as generalizations are concerned, a very useful class of functions has proved to be the set of generalized Collatz functions which are defined below. These possess both number-theoretical and dynamic properties; the number-theoretic properties have to do with the existence of $p$-adic extensions of these maps for various primes $p$. \smallskip At present the $3x+1$ problem is most often viewed as a discrete dynamical system of an arithmetical kind. It can then be treated as a special case, within the framework of a general class of such dynamical systems. But what should be the correct degree of generality in such a class? 
\smallskip There is significant interest in exploring the behavior of dynamical systems of an arithmetic nature, since these may be viewed as ``toy models" of more complicated dynamical systems arising in mathematics and physics. There are a wide variety of interesting arithmetic dynamical systems. The book of Silverman \cite{Sil07} studies the iteration of algebraic maps on algebraic varieties. The book of Schmidt \cite{Sch95} considers dynamical systems of algebraic origin, meaning ${\Bbb Z}^d$-actions on compact metric groups, using ergodic theory and symbolic methods. The book of Furstenberg \cite{Fur81} considers various well structured arithmetical dynamical systems; for a further development see Glasner \cite{Gla03}. The generalized $3x+1$ functions studied in this book provide another distinct type of arithmetic discrete dynamical system. \smallskip We present a taxonomy of several classes of functions which represent successive generalizations of the $3x+1$ function. The simplest generalization of the $3x+1$ function is the $3x+k$ function, which is defined for $k \equiv 1$ or $ 5 ~(\bmod 6)$, by $$ T_{3,k}(x) = \left\{ \begin{array}{cl} \displaystyle\frac{3x+k}{2} & \mbox{if}~ x \equiv 1~~ (\bmod ~2 ) ~, \\ ~~~ \\ \displaystyle\frac{x}{2} & \mbox{if} ~~x \equiv 0~~ (\bmod~2) ~. \end{array} \right. $$ The generalization of the $3x+1$ conjecture to this situation is twofold: first, that under iteration every orbit becomes eventually periodic, and second, that there are only a finite number of cycles (periodic orbits). This class of functions occurs in the study of cycles of the $3x+1$ function (Lagarias \cite{Lag90}). Note that the $3x+1$ function $T(x)$ can be extended to be well defined on the set of all rational numbers having odd denominator, and a rescaling of any $T$-orbit of such a rational number $r=\frac{n}{k}$ to clear its denominator $k$ will give an orbit of the map $T_{3, k}$. 
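The rescaling just described can be checked on a tiny example. The sketch below is our own illustration (the cycle $\{1,4,2\}$ of $T_{3,5}$, matching the rational cycle $\frac{1}{5} \to \frac{4}{5} \to \frac{2}{5} \to \frac{1}{5}$ of $T$, is our worked example, not taken from the text); for a rational with odd denominator, the parity used by $T$ is that of the numerator.

```python
from fractions import Fraction

def T3k(x, k):
    """The 3x+k function (k = 1 or 5 mod 6): (3x+k)/2 if x odd, x/2 if x even."""
    return (3 * x + k) // 2 if x % 2 == 1 else x // 2

def T_rational(r):
    """T extended to rationals with odd denominator; parity is read off the numerator."""
    if r.numerator % 2 == 1:
        return (3 * r + 1) / 2
    return r / 2
```

Here $T_{3,5}$ cycles $1 \to 4 \to 2 \to 1$, and multiplying the rational cycle of $T$ through by the denominator $5$ recovers exactly this integer cycle.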
Thus, integer cycles of the $3x+k$ function correspond to rational cycles of the $3x+1$ function having denominator $k$. \smallskip To further generalize, let $d \ge 2$ be a fixed integer and consider the function defined for integer inputs $x$ by \beql{121} f(x) = \frac{a_i x + b_i}{d} ~~~\mbox{if}~~x \equiv i~~(\bmod ~d), ~~0 \le i \le d-1, \end{equation} where $\{ (a_i, b_i): 0 \le i \le d-1\}$ is a collection of integer pairs. Such a function is called {\em admissible} if the integer pairs $(a_i, b_i)$ satisfy the condition \beql{130} i a_i + b_i \equiv 0 ~~(\bmod~d) ~~~\mbox{for}~~0 \le i \le d-1. \end{equation} This condition is necessary and sufficient for the map $f(x)$ to take integers to integers. These functions $f(x)$ have been called {\em generalized Collatz functions,} or {\em $RCWA$ functions} (Residue-Class-Wise Affine functions). Generalized Collatz functions have the nice feature that they have a unique continuous extension to the space ${\Bbb Z}_d$ of $d$-adic integers in the sense of Mahler \cite{Ma61}. \smallskip An important subclass of generalized Collatz functions are those of {\em relatively prime type}. These are the subclass of generalized Collatz functions for which \beql{122} \gcd ( a_0 a_1 \cdots a_{d-1}, d) =1. \end{equation} This class includes the $3x+1$ function $T(x)$ but not the Collatz function $C(x)$ itself. It includes the $5x+1$ function $T_5(x)$, which as mentioned above appears to have quite different long-term dynamics on the integers ${\Bbb Z}$ than does the $3x+1$ function. Functions in this class have the additional property that their unique extension to the $d$-adic integers ${\Bbb Z}_d$ has the $d$-adic Haar measure as an invariant measure. This permits ergodic theory methods to be applied to their study, see the survey paper of Matthews \cite[Thm. 6.2]{Mat09} (in this volume) for many examples. 
\smallskip As a final generalization, one may consider the class of integer-valued functions, which when restricted to residue classes $(\bmod \, d)$ are given by a polynomial $P_i(x)$ for each class $i~(\bmod \, d).$ Members of this class of functions have arisen in several places in mathematics. They are now widely called {\em quasi-polynomial functions} or {\em quasi-polynomials}. Quasi-polynomials appear in commutative algebra and algebraic geometry, in describing the Hilbert functions of certain semigroups, in a well known theorem of Serre, see Bruns and Herzog \cite[pp. 174--175]{BH93} and Bruns and Ichim \cite{BI07}. In another direction, functions that count the number of lattice points inside dilated rational polyhedra have been shown to be quasi-polynomial functions (on the positive integers), starting with work of Ehrhart \cite{Ehr76}, see Beck and Robins \cite{BR07} and Barvinok \cite[Chap. 18]{Bar08}. They also have recently appeared in differential algebra in connection with $q$-holonomic sequences, see Garoufalidis \cite{Ga10}. Such functions were introduced in group theory by G. Higman in 1960 \cite{Hi60} under the name PORC functions (polynomial on residue class functions). Higman's motivating problem was the enumeration of $p$-groups, cf. Evseev \cite{Ev08}. The class of all quasi-polynomial functions is closed under addition and pointwise multiplication, and forms a commutative ring under these operations. \smallskip We arrive at the following taxonomy of function classes of increasing generality: \begin{eqnarray*} \{3x+1 ~\mbox{function} ~ T(x) \} & \subset & \{3x+k ~\mbox{functions}~ T_{3,k}(x) \} \\ & \subset& \{ \mbox{generalized Collatz ~functions~of~relatively~prime~type} \} \\ &\subset & \{ \mbox{generalized Collatz~functions}\} \\ &\subset & \{ \mbox{quasi-polynomial ~functions}\} . 
\end{eqnarray*} For applications in mathematical logic, it has proved useful to further widen the definition of generalized Collatz functions to allow {\em partially defined functions}. Such functions are obtained by dropping the admissibility condition \eqn{130}; they map integers to rational numbers having denominator dividing $d$. Since such a function cannot be iterated further once a non-integer value is encountered, we adopt the convention that in this circumstance the calculation stops in a special ``undefined" state. This framework allows the encoding of partially-defined (recursive) functions. One can use this convention to also define composition of partially defined functions. \section{Research Areas} \hspace{\parindent} Work on the $3x+1$ problem cuts across many fields of mathematics. Six basic areas of research on the problem are: (1) {\em number theory}: analysis of periodic orbits of the map; (2) {\em dynamical systems}: behavior of generalizations of the $3x+1$ map; (3) {\em ergodic theory}: invariant measures for generalized maps; (4) {\em theory of computation}: undecidable iteration problems; (5) {\em stochastic processes and probability theory}: models yielding heuristic predictions for the behavior of iterates; and (6) {\em computer science}: algorithms for computing iterates and statistics, and explicit computations. We treat these in turn.\\ (1) {\em Number Theory} \\ The connection with number theory is immediate: the $3x+1$ problem is a problem in arithmetic, whence it belongs to elementary number theory. Indeed it is classified as an unsolved problem in number theory by R. K. Guy \cite[Problem E16]{Guy04}. The study of cycles of the $3x+1$ map leads to problems involving exponential Diophantine equations.
The powerful work of Baker and Masser--W\"{u}stholz on linear forms in logarithms gives information on the non-existence of cycles of various lengths having specified patterns of even and odd iterates. A class of generalized $3x+1$ functions has been defined in a number theory framework, in which arithmetic operations on the domain of integers are replaced with such operations on the ring of integers of an algebraic number field, or by function field analogues such as a polynomial ring with coefficients in a finite field. Number-theoretic results are surveyed in the papers of Lagarias \cite{Lag85} and Chamberland \cite{Cha07} in this volume. \\ (2) {\em Dynamical Systems} \\ The theory of discrete dynamical systems concerns the behavior of functions under iteration; that of continuous dynamical systems concerns flows or solutions to differential equations. The $3x+1$ problem can be viewed as iterating a map, and is therefore a discrete dynamical system on the state space ${\Bbb Z}$. This viewpoint was taken in Wirsching \cite{Wir98}. The important operation for iteration is {\em composition of functions}. One can formulate iteration and composition questions in the general context of universal algebra, cf. Lausch and N\"{o}bauer \cite[Chap. 4.5]{LN73}. In the taxonomy above, the classes of generalized $3x+1$ functions and quasi-polynomial functions are each closed under addition and composition of functions. The iteration properties of the first three classes of functions above have been studied in connection with the $3x+1$ problem and the theory of computation. However, the iteration of general quasi-polynomial functions remains an unexplored research area. \smallskip Viewing the problem this way suggests that it would be useful in the study of the $3x+1$ function to extend it to dynamical systems on larger domains, including the real numbers ${\Bbb R}$ and the complex numbers ${\Bbb C}$.
Other extensions include defining analogous functions on the ring ${\Bbb Z}_2$ of $2$-adic integers, or, for generalized $3x+1$ maps, on a ring of $d$-adic integers, for a value of $d$ determined by the function. When one considers generalized $3x+1$ functions on larger domains, a wide variety of behaviors can occur. These topics are considered in the papers of Chamberland \cite{Cha07} and Matthews \cite{Mat09} in this volume. For a general framework on topological dynamics see Akin \cite{Ak93}. \\ (3) {\em Ergodic Theory} \\ The connection with ergodic theory arises as an outgrowth of the dynamical systems viewpoint, but adds the requirement of the presence of an invariant measure. It was observed early on that there are finitely additive measures on the integers which are preserved by the $3x+1$ map. Extensions of generalized $3x+1$ functions to $d$-adic integers lead to maps preserving standard (countably additive) measures. For example, the (unique continuous) extension of the $3x+1$ map to the $2$-adic integers has $2$-adic measure as an invariant measure, and the map is ergodic with respect to this measure. Ergodic theory topics are considered in the surveys of Matthews \cite{Mat09} and Kontorovich and Lagarias \cite{KL09} in this volume. An interesting open problem is to classify all invariant measures for generalized $3x+1$ functions on the $d$-adic integers. \\ (4) {\em Mathematical Logic and the Theory of Computation} \\ The connection to logic and the theory of computation starts with the result of Conway that there is a generalized $3x+1$ function whose iteration can simulate a universal computer. Conway \cite{Con72} exhibited an unsolvable iteration problem for a particular generalized $3x+1$ function: starting with a given input which is a positive integer $n$, decide whether or not some iterate of this map with this input is ever a power of $2$.
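For the $3x+1$ function itself, questions of this power-of-$2$ type are easy to test empirically on individual inputs (Conway's undecidability concerns the general class, not this search). A sketch with our own function names:

```python
def T(x):
    """The 3x+1 function: (3x+1)/2 if x is odd, x/2 if x is even."""
    return (3 * x + 1) // 2 if x % 2 == 1 else x // 2

def is_power_of_two(m):
    """True when m is a positive power of 2 (including 2^0 = 1)."""
    return m > 0 and (m & (m - 1)) == 0

def first_power_of_two_iterate(n, max_steps=10**6):
    """First power of 2 among T(n), T^(2)(n), ...; None if not found in time."""
    for _ in range(max_steps):
        n = T(n)
        if is_power_of_two(n):
            return n
    return None
```

In our run the trajectory of $27$ first meets a power of $2$ at $8$, after which it falls straight to $1$.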
In this connection note that the $3x+1$ problem can be reformulated as asserting that, starting from any positive integer $n$, some iterate $C^{(k)}(n)$ of the Collatz function (or of the $3x+1$ function) is a power of $2$. It turns out that iteration of $3x+1$-like functions had already been considered in the late 1960's in connection with understanding the power of some logical theories; these involved partially defined functions taking integers to integers (with undefined output for some integers), cf. Isard and Zwicky \cite{IZ70}. More recently such functions have arisen in studying the computational power of ``small'' Turing machines that are too small to encode a universal computer. These topics are surveyed in the paper of Michel and Margenstern \cite{MM09} in this volume.\\ (5) {\em Probability Theory and Stochastic Processes}\\ A connection to probability theory and stochastic processes arises when one attempts to model the behavior of the $3x+1$ iteration on large sets of integers. This leads to heuristic probabilistic models for the iteration, which allow predictions of its behavior. Some authors have argued that the iteration can be viewed as a kind of pseudo-random number generator, viewing the input as being given by a probability distribution, and then asking how this probability distribution evolves under iteration. In the reverse direction, one can study trees of inverse iterates (the map is many-to-one, so its inverse gives rise to a unary-binary tree of inverse iterates). Here one can ask about the structure of such trees whose root node is an integer picked from some probability distribution. One can model this by a stochastic model of random tree growth, e.g. a branching random walk. These topics are surveyed in the paper of Kontorovich and Lagarias \cite{KL09} in this volume.
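The unary-binary structure of these trees can be made explicit. In the sketch below (our own illustration), each node $m$ has the preimage $2m$ under $T$, plus the second preimage $(2m-1)/3$ whenever that is an odd integer.

```python
def T(x):
    """The 3x+1 function: (3x+1)/2 if x is odd, x/2 if x is even."""
    return (3 * x + 1) // 2 if x % 2 == 1 else x // 2

def preimages(m):
    """Preimages of m under T: always 2m, plus (2m-1)/3 when it is an odd integer."""
    out = [2 * m]
    if (2 * m - 1) % 3 == 0:
        x = (2 * m - 1) // 3
        if x % 2 == 1:
            out.append(x)
    return out

def inverse_tree_level(root, depth):
    """All nodes at the given depth of the tree of inverse iterates of root."""
    level = [root]
    for _ in range(depth):
        level = [x for m in level for x in preimages(m)]
    return level
```

For example, $8$ has the two preimages $16$ and $5$, while $1$ has only the preimage $2$: each node has one or two children, a unary-binary tree.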
\\ (6) {\em Computer Science: Machine Models, Parallel and Distributed Computation} \\ In 1987 Conway \cite{Con87} (in this volume) formalized the Fractran model of computation as a universal computer model, based on his earlier work related to the $3x+1$ problem. This computational model is related to the register machine (or counter machine) model of Marvin Minsky (\cite{Min61}, \cite[Sect. 11.1]{Min67}). Both of these machine models have recently been seen as relevant to developing models of computation using chemical reaction networks, and to biological computation, see Soloveichik et al. \cite{SCWB08} and Cook et al. \cite{CSWB10}. The need to carry out computer experiments to test the $3x+1$ conjecture, and to explore various properties and patterns of the $3x+1$ iteration, leads to other questions in computation. One has the research problem of developing efficient algorithms for computing on a large scale, using either parallel computers or a distributed computer system. The $3x+1$ conjecture has been tested up to a very large value of $n$, see the paper of Oliveira e Silva \cite{OeS09} in this volume. The computational method used in \cite{OeS09} to obtain record results can be parallelized. Various large scale computations for the $3x+1$ problem have used distributed computing, cf. Roosendaal \cite{Roo}. \\ \section{Current Status} We give a brief summary of the current status of the problem, which further elaborates answers to the two questions raised in the introduction. \subsection{Where does research currently stand on the $3x+1$ problem?} The $3x+1$ problem remains unsolved, and a solution remains unapproachable at present. To quote a still valid dictum of Paul Erd\H{o}s (\cite[p. 3]{Lag85}) on the problem: \begin{quotation} ``Mathematics is not yet ready for such problems." \end{quotation} \smallskip Research has established various ``world records", all of which rely on large computer calculations (together with various theoretical developments).
\begin{enumerate} \item[(W1)] The $3x+1$ conjecture has now been verified for all $n < 20 \times 2^{58} \approx 5.7646 \times 10^{18}$ (Oliveira e Silva \cite{OeS09} (in this volume)). \item[(W2)] The trivial cycle $\{1, 2\}$ is the only cycle of the $3x+1$ function on the positive integers having period length less than $10,439,860,591$. It is also the only cycle containing fewer than $6,586,818,670$ odd integers (Eliahou \cite[Theorem 3.2]{Eli93}\footnote{This number is the bound $(21,0)$ given in \cite[Table 2]{Eli93}. The smaller values in Table 2 are now ruled out by the computations in item (W1) above.}). \item[(W3)] Infinitely many positive integers $n$ take at least $6.143 \log n$ steps to reach $1$ under iteration of the $3x+1$ function $T(x)$ (Applegate and Lagarias \cite{AL03}). \item[(W4)] The positive integer $n$ having the largest currently known value of the constant $C$ such that it takes $C \log n$ iterations of the $3x+1$ function $T(x)$ to reach $1$ is $n=7,219,136,416,377,236,271,195$, with $C\approx 36.7169$ (Roosendaal \cite[$3x+1$ Completeness and Gamma records]{Roo}). \item[(W5)] The number of integers $1 \le n \le X$ that iterate to $1$ is at least $X^{0.84}$, for all sufficiently large $X$ (Krasikov and Lagarias \cite{KL03}). \end{enumerate} There has also been considerable progress on showing the nonexistence of various kinds of periodic points for the $3x+1$ function, see Brox \cite{Br00} and Simons and de Weger \cite{SdW05}. These results are based on number-theoretic methods involving Diophantine approximation. \\ \subsection{Where does research stand on generalizations of the $3x+1$ problem?} It has proved fruitful to view the $3x+1$ problem as a special case of wider classes of functions. These function classes appear naturally as the correct level of generality for basic results on iteration; this resulted in the taxonomy of function classes given in \S3. There are some general results for these classes and many unsolved problems.
\smallskip The $3x+k$ problem seems to be the correct level of generality for studying rational cycles of the $3x+1$ function (\cite{Lag90}). There are extensive results on cycles of the $3x+1$ function, and the methods generally apply to the $3x+k$ function as well, see the survey of Chamberland \cite{Cha07} (in this volume). \smallskip The class of generalized $3x+1$ functions of relatively prime type is a very natural class from the ergodic theory viewpoint, since this is the class on which the $d$-adic extension of the function has $d$-adic Haar measure as an invariant measure. The paper of Matthews \cite{Mat09} (in this volume) reports general ergodicity results and raises many questions about such functions. \smallskip The class of generalized Collatz functions has the property that all functions in it have a unique continuous extension to the domain of $d$-adic integers ${\Bbb Z}_d$. This general class is known to contain undecidable iteration problems, as discussed in the paper of Michel and Margenstern \cite{MM09} (in this volume). The dynamics of general functions in this class is only starting to be explored; many interesting examples are given in the paper of Matthews \cite{Mat09} (in this volume). An interesting area worthy of future development is that of determining the existence and structure of invariant Borel measures for such functions on ${\Bbb Z}_d$, and determining whether there is some relation of their structure to undecidability of the associated iteration problem. \subsection{How can this be a hard problem, when it is so easy to state?} Our answer is that there are two different mechanisms yielding hard problems, either or both of which may apply to the $3x+1$ problem. The first is ``pseudorandomness"; this involves a connection with ergodic theory. The second is ``non-computability". Both of these are discussed in detail in this volume. 
\smallskip The ``ergodicity" connection has been independently noted by a number of people, see for example Lagarias \cite{Lag85} (in this volume) and Akin \cite{Ak04}. The unique continuous extension of the $3x+1$ map $T(x)$ to the $2$-adic integers ${\Bbb Z}_2$ gives a function which is known to be ergodic in a strong sense, with respect to the $2$-adic measure. It is topologically and metrically conjugate to the shift map, which is a maximum entropy map. The iterates of the shift function are completely unpredictable in the ergodic theory sense: for a random starting point, the parity of the $n$-th iterate is, for each $n$, a fair ``coin flip" random variable. The $3x+1$ problem concerns the behavior of iterating this function on the set of integers ${\Bbb Z}$, which is a dense subset of ${\Bbb Z}_2$ having $2$-adic measure zero. The difficulty is then in finding and understanding non-random regularities in the iterates when restricted to ${\Bbb Z}$. Various probabilistic models are discussed in the paper of Kontorovich and Lagarias \cite{KL09} (in this volume). Empirical evidence seems to indicate that the $3x+1$ function on the domain ${\Bbb Z}$ retains the ``pseudorandomness" property on its initial iterates until the iterates enter a periodic orbit. This supports the $3x+1$ conjecture and at the same time deprives us of any obvious mechanism to prove it, since mathematical arguments exploit the existence of structure, rather than its absence. \smallskip A connection of a generalized Collatz function to ``non-computability" was made by Conway \cite{Con72} (in this volume), as already mentioned. Conway's undecidability result indicates that the $3x+1$ problem could be close to the unsolvability threshold. It is currently unknown whether the $3x+1$ problem is itself undecidable; moreover, no method is currently known to approach this question.
The survey of Michel and Margenstern \cite{MM09} (in this volume) describes many results on generalized $3x+1$ functions that exhibit undecidable or difficult-to-decide iteration problems. The $3x+1$ function might conceivably belong to a smaller class of generalized $3x+1$ functions that evades the undecidability results obtained by encoding universal computers. Even so, it conceivably might encode an undecidable problem, arising by another (unknown) mechanism. As an example, could the following question be undecidable: ``Is there any positive integer $n$ such that $T^{(k)}(n) >1$ for $1 \le k \le 100 \log n$?" \section{Hardness of the $3x+1$ problem}\label{sec7} \smallskip Our viewpoint on hard problems has evolved since 1900, starting with Hilbert's program in logic and proof theory and benefiting from developments in the theory of computation. Starting in the 1920's, Emil Post uncovered great complexity in studying some very simple computational problems, now called ``Post Tag Systems". A {\em Tag system} in the class $TS(\mu, \nu)$ consists of a set of rules for transforming words using letters from an alphabet ${\cal A} = \{ a_1, ..., a_{\mu} \} $ of $\mu$ symbols, a deletion number (or shift number) $\nu \ge 1$, and a set of $\mu$ production rules $$ a_j \mapsto w_j:= a_{j,1} a_{j,2} \cdots a_{j, n_j},~~~ 1 \le j \le \mu, $$ in which the output $w_j$ is a finite string (or word) of length $n_j$ in the alphabet ${\cal A}$. Starting from an initial string $S$, a Tag system looks at the leftmost symbol of $S$, call it $a_j$, then attaches to the right end of the string the word $w_j$, and finally deletes the first $\nu$ symbols of the resulting string $S w_j$, thus obtaining a new string $S'$. Here the ``tag" is the word $w_j$ attached to the end of the string, and the iteration halts if a word of length less than $\nu$ is encountered.
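The rewriting step just described is a one-liner. The sketch below is our own code (the names are ours); it implements one Tag-system step and encodes Post's rules $0 \mapsto 00$, $1 \mapsto 1101$ with deletion number $\nu=3$.

```python
def tag_step(word, rules, nu):
    """One Tag-system step: append the production for the leftmost symbol,
    then delete the first nu symbols.  Returns None when the system halts
    because the word is shorter than nu."""
    if len(word) < nu:
        return None
    return (word + rules[word[0]])[nu:]

# Post's rules for his problem in TS(2,3): 0 -> 00, 1 -> 1101, nu = 3.
POST_RULES = {"0": "00", "1": "1101"}
```

For example, one step from the word `100` appends `1101` to get `1001101` and then deletes three symbols, leaving `1101`.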
The {\em halting problem} is the question of deciding whether for an arbitrary initial word $S$, iteration eventually reaches the empty word. The {\em reachability problem} is that of deciding whether, given words $S$ and $\tilde{S}$, starting from word $S$ will ever produce word $\tilde{S}$ under iteration. The halting problem is a special case of the reachability problem. Post \cite{Pos65} reports that in 1920--1921 he found a complete decision procedure\footnote{Post did not publish his proof. A decision procedure for both problems is outlined in De Mol \cite{DM07}.} for the case $\mu=2, \nu=2$, i.e. the class $TS(2,2)$. He then tried to solve the case $\mu=2, \nu>2$, without success. He reported \cite[p. 372]{Pos65} that the special case $\mu=2, \nu=3$ with ${\cal A} =\{0, 1\}$ and the two production rules \beql{730a} 0 \mapsto w_0=00, ~~~ 1 \mapsto w_1=1101 \end{equation} already seemed to be an intractable problem. We shall term this problem \medskip {\bf Post's Original Tag Problem.} {\em Is there a recursive decision procedure for the halting problem for the Tag system in $TS(2, 3)$ given by the rules $0 \mapsto 00$ and $1 \mapsto 1101$?} \smallskip \noindent Leaving this question aside, Post considered the parameter range $\mu>2, \nu=2$. He wrote \cite[p. 373]{Pos65}: \begin{quotation} For a while the case $\nu=2, \, \mu>2$ seemed to be more promising, since it seemed to offer a greater chance of a finitely graded series of problems. But when this possibility was explored in the early summer of 1921, it rather led to an overwhelming confusion of classes of cases, with the solution of the corresponding problem depending more and more on problems of ordinary number theory.
Since it had been our hope that the known difficulties of number theory would, as it were, be dissolved in the particularities of this more primitive form of mathematics, the solution of the general problem of ``tag" appeared hopeless, and with it our entire program of the solution of finiteness problems. \end{quotation} \smallskip \noindent Discouraged by this, Post reversed course and went on to obtain a ``Normal Form Theorem" (\cite{Pos43}), published in the 1940's, showing that a general logical problem could be reduced to a form slightly more complicated than Tag Systems. In 1961 Marvin Minsky \cite{Min61} proved that Post Tag Systems were undecidable problems in general. In the next few years Hao Wang \cite{Wang63}, J. Cocke and M. Minsky \cite{Coc64} and S. Ju. Maslov \cite{Mas64} independently showed undecidability for the subclass of Post Tag Systems consisting of those with $\nu=2$, thus showing that Post was right to quit trying to solve problems in that class. At present the recursive solvability or unsolvability in the class $TS(2, \nu)$ remains open for all $\nu>2$. Post's original tag problem, which is the halting problem for one special function in $TS(2,3)$, is still unsolved, see Liesbeth De Mol \cite{DM06}, \cite[p. 93]{DM08}, and for further work \cite{DM07}, \cite{DM09}. \smallskip Recently De Mol showed that the $3x+1$ problem can be encoded as a reachability problem for a tag system in $TS(3,2)$ (\cite[Theorem 2.1]{DM08}). This tag system encodes the $3x+1$ function, and the reachability problem is: \smallskip {\bf $3x+1$ Tag Problem.} {\em Consider the tag system $T_C$ in $TS(3,2)$ with alphabet ${\cal A}=\{ 0, 1, 2\}$, deletion number $\nu=2$, and production rules $$ 0\mapsto 12, \, 1 \mapsto 0, \, 2 \mapsto 000.
$$ For each $n \ge 1$, if one starts from the configuration $S= 0^n$, will the tag system iteration for $T_C$ always reach state $\tilde{S}=0$?} \smallskip In 1931 Kurt G\"{o}del \cite{God31} established the existence of undecidable problems: certain propositions are undecidable in any logical system complicated enough to include elementary number theory. This result showed that Hilbert's proof theory program could not be carried out. Developments in the theory of computation showed that one of G\"{o}del's incompleteness results corresponded to the unsolvability of the halting problem for Turing machines. This was based on the existence of a universal Turing machine that could simulate any computation, and in his 1937 foundational paper Alan Turing \cite{Tur37} already showed that one could be constructed of quite modest size. \smallskip We now have a deeper appreciation of exactly how simple a problem can be and still simulate a universal computer. Amazingly simple problems of this sort have been found in recent years. Some of these involve cellular automata, a model of computation developed by John von Neumann and Stanislaw M. Ulam in the 1950's. One of these problems concerns the possible behavior of a very simple one-dimensional nearest neighbor cellular automaton, Rule 110, using a nomenclature introduced by Wolfram \cite{Wol83}, \cite{Wol84}. This rule was conjectured by Wolfram to give a universal computer (\cite[Table 15]{Wol87}, \cite[pp. 575--577]{Wol94}). It was proved to be weakly universal by M. Cook (see Cook \cite{Co04}, \cite{Co09}). Here weakly universal means that the initial configuration of the cellular automaton is required to be ultimately periodic, rather than finite. Another is John H. Conway's game of ``Life," first announced in 1970 in Martin Gardner's column in Scientific American (Gardner \cite{Gar70}), which is a two-dimensional cellular automaton, having nearest neighbor interaction rules of a particularly simple nature. 
Its universality as a computer was later established, see Berlekamp, Conway and Guy \cite[Chap. 25]{BCG04}. Further remarks on the size of universal computers are given in the survey of Michel and Margenstern \cite{MM09} (in this volume). \smallskip There are, however, reasons to suspect that the $3x+1$ function is not complicated enough to be universal, i.e. to allow the encoding of a universal computer in its input space. First of all, it is so simple to state that there seems very little room in it to encode the elementary operations needed to create a universal computer. Second, the $3x+1$ conjecture asserts that the iteration halts on the domain of all positive integer inputs, so for each integer $n$, the value $F(n)$ of the largest integer observed before visiting $1$ is recursive. To encode a universal computer, one needs to represent all recursive functions, including functions that grow far faster than any given recursive function $F(n)$. It is hard to imagine how such functions could be encoded here as questions about the iteration, without enlarging the domain of inputs. Third, the $3x+1$ function possesses the feature that there is a nice (finitely additive) invariant measure on the integers, with respect to which it is completely mixing under iteration. This is the measure that assigns mass $\frac{1}{2^n}$ to each complete arithmetic progression $(\bmod~ 2^n)$, for each $n \ge 1$. This fundamental observation was made in 1976 by Terras \cite{Ter76}, and independently by Everett \cite{Ev77} in 1977; see Lagarias \cite[Theorem B]{Lag85} for a precise statement. This ``mixing property" seems to fight against the amount of organization needed to encode a universal computer in the inputs. We should caution that this observation by itself does not rule out the possibility that, despite this mixing property, a universal computer could be encoded in a very thin set of input values (of ``measure zero"), compatible with an invariant measure. 
It just makes it seem difficult to do. Indeed, the 1972 encoding of a universal computer in the iteration of a certain generalized $3x+1$ function found by Conway \cite{Con72} (in this volume) has the undecidability encoded in the iteration of a very thin set of integers. However Conway's framework is different from the $3x+1$ problem in that the halting function he considers is partially defined. \smallskip Even if iteration of the $3x+1$ function is not universal, it could still potentially be unsolvable. Abstractly, there may exist in an axiomatic system statements $F(n)$, where $F$ is a predicate on the positive integers, such that $F(1), F(2), F(3), \ldots$ are each provable in the system, but the statement $(\forall n) \,F(n)$ is not provable within the system. For example, one can let $F(n)$ encode the statement that no contradiction in the system is obtainable by a proof of length at most $n$. If the system is consistent, then $F(1), F(2), \ldots$ will all individually be provable. The statement $(\forall n)\, F(n)$ then encodes the consistency of the system. But the consistency of a system sufficiently complicated to include elementary number theory cannot be proved within the system, according to G\"{o}del's second incompleteness theorem. \smallskip The pseudo-randomness or ``mixing" behavior of the $3x+1$ function also seems to make it extremely resistant to analysis. If one could rigorously show that a sufficient amount of mixing is guaranteed to occur, in a controlled number of iterations in terms of the input size $n$, then one could settle part of the $3x+1$ conjecture, namely prove the non-existence of divergent trajectories. Here we have the fundamental difficulty of proving in effect that the iterations actually do have an explicit pseudo-random property. Besides this difficulty, there remains a second fundamental difficulty: solving the number-theoretic problem of ruling out the existence of an enormously long non-trivial cycle of the $3x+1$ function. 
This problem also seems unapproachable at present by known methods of number theory. However the finite cycles problem does admit proof of partial results, showing the nonexistence of non-trivial cycles having particular patterns of even and odd iterates. \smallskip A currently active and important general area of research concerns the construction of pseudo-random number generators: these are deterministic recipes that produce apparently random outputs (see Knuth \cite[Chap. 3]{Kn81}). More precisely, one is interested in methods that take as input $n$ truly random bits and deterministically produce as output $n+1$ ``random-looking" bits. These bits are to be ``random-looking" in the sense that they appear random with respect to a given family of statistical tests, and the output is then said to be pseudo-random with respect to this family of tests. Deciding whether pseudo-random number generators exist for statistical tests in various complexity classes is now seen as a fundamental question in computer science, related to the $P=NP$ problem, see for example Goldreich \cite{Gol01}, \cite{Gol10}. It may be that resolving the issue of the pseudo-random character of the $3x+1$ iteration will require shedding light on the general existence problem for pseudo-random number generators. \smallskip All we can say at present is that the $3x+1$ problem appears very hard indeed. It now seems less surprising than it might have once seemed that a problem as simple-looking as this one could be genuinely difficult, and inaccessible to known methods of attack. \section{Future Prospects} We observe first that further improvements are surely possible on the ``world records" (W1)--(W5) above. 
In particular, concerning (W3), it seems scandalous that it is not known whether or not there are infinitely many positive integers $n$ which iterate to $1$ under the $3x+1$ map $T(x)$ and take at least the ``average" number $\frac{2}{\log 4/3} \log n \approx 6.95212 \log n $ steps to do so. Here the stochastic models for the $3x+1$ iteration predict that at least half of all positive integers should have this property! These ``world records" are particularly worth improving if they can shed more light on the problem. This could be the case for world record (W5), where there is an underlying structure for obtaining lower bounds on the exponent, which involves an infinite family of nonlinear programs of increasing complexity (\cite{KL03}). \smallskip Analysis of the $3x+1$ problem has resulted in the formulation of a large set of ``easier'' problems. At first glance some of these seem approachable, but they also remain unsolved, and are apparently difficult. As samples, these include: \begin{enumerate} \item[(C1)] ({\em Finite Cycles Conjecture}) Does the $3x+1$ function have finitely many cycles (i.e. finitely many purely periodic orbits on the integers)? This is conjectured to be the case. \item[(C2)] ({\em Divergent Trajectories Conjecture-1}) Does the $3x+1$ function have a divergent trajectory, i.e., an integer starting value whose iterates are unbounded? This is conjectured {\em not} to be the case. \item[(C3)] ({\em Divergent Trajectories Conjecture-2}) Does the $5x+1$ function have a divergent trajectory? This is conjectured to be the case. \item[(C4)] ({\em Infinite Permutations-Periodic Orbits Conjecture}) If a generalized Collatz function permutes the integers and is not globally of finite order, is it true that it has only finitely many periodic orbits? The original Collatz function $U(n)$, which is a permutation, was long ago conjectured to have finitely many cycles. 
A conjecture of this kind, imposing extra conditions on the permutation, was formulated by Venturini \cite[p. 303 top]{Ven97}. \item[(C5)] ({\em Infinite Permutations-Zero Density Conjecture}) If a generalized Collatz function permutes the integers, is it true that every orbit has a (natural) density? Under some extra hypotheses one may conjecture that all such orbits have density zero; compare Venturini \cite[Sec. 6]{Ven97}. \end{enumerate} \smallskip Besides these conjectures, there also exist open problems which may be more accessible. One of the most intriguing of them concerns establishing lower bounds for the number $\pi_1(x)$ of integers less than $x$ that get to $1$ under the $3x+1$ iteration. As mentioned earlier it is known (\cite{KL03}) that there is a positive constant $c_0$ such that $$ \pi_1(x) > c_0 x^{0.84}. $$ It remains an open problem to show that for each $\epsilon>0$ there exists a positive constant $c(\epsilon)$ such that $$ \pi_1(x) > c(\epsilon) x^{1- \epsilon}. $$ Many other specific, but difficult, conjectures for study can be found in the papers in this volume, starting with the problems listed in Guy \cite{Guy83}. \smallskip We now raise some further research directions, related to the papers in this volume. A first research direction is to extend the class of functions for which the Markov models of Matthews \cite{Mat09} can be analyzed. Matthews shows that the class of generalized $3x+1$ functions of relatively prime type (\cite[Sec. 2]{Mat09}) is analyzable. He formulates some conjectures for exploration. It would be interesting to characterize the possible $d$-adic invariant measures for arbitrary generalized Collatz functions. It may be necessary to restrict to subclasses of such functions in order to obtain nice characterizations. \smallskip A second research direction concerns the class of generalized $3x+1$ functions whose iterations extended to the set of $d$-adic integers are ergodic with respect to the $d$-adic measure, cf. 
Matthews \cite[Sec. 6]{Mat09}. \smallskip {\bf Research Problem.} {\em Does the class of generalized Collatz functions of relatively prime type contain a function which is ergodic with respect to the standard $d$-adic measure, whose iterations can simulate a universal computer? Specifically, could it have an unsolvable iteration problem of the form: ``Given positive integers $(n, m)$ as input, does there exist $k$ such that the $k$-th iterate $T^{(k)}(n)$ equals $m$?" Or does ergodicity of the iteration preclude the possibility of simulating universal computation?} \smallskip A third research direction concerns the fact that generalized Collatz functions have now been found in many other mathematical structures, especially if one generalizes further to integer-valued functions that are piecewise polynomial on residue classes $(\bmod~d)$. These functions are the quasi-polynomial functions noted above, and they show up in a number of algebraic contexts, particularly in counting lattice points in various regions. It may prove worthwhile to study the iteration of various special classes of quasi-polynomial functions arising in these algebraic contexts. \smallskip At this point in time, in view of the intractability of problems (C1)--(C5) it also seems a sensible task to formulate a new collection of even simpler ``toy problems", which may potentially be approachable. These may involve either changing the problem or importing it into new contexts. For example, there appear to be accessible open problems concerning variants of the problem acting on finite rings (Hicks et al. \cite{HMYZ08}). Another promising recent direction is the connection of these problems with generating sets for multiplicative arithmetical semigroups, noted by Farkas \cite{Far05}. This has led to a family of more accessible problems, where various results can be rigorously established (\cite{AL06}). 
Here significant unsolved problems remain concerning the structure of such arithmetical semigroups. Finally it may prove profitable to continue the study, initiated by Klarner and Rado \cite{KR74}, of sets of integers (or integer vectors) closed under the action of a finitely generated semigroup of affine maps. \section{Is the $3x+1$ problem a ``good" problem?} There has been much discussion of what constitutes a good mathematical problem. We cannot do better than to recall the discussion of Hilbert \cite{Hi00} in his famous 1900 problem list. On the importance of problems he said (\cite[p. 437]{Hi00}): \begin{quotation} The deep significance of certain problems for the advance of mathematical science in general, and the important role they play in the work of the individual investigator, are not to be denied. As long as a branch of science offers an abundance of problems, so long is it alive; a lack of problems foreshadows extinction or the cessation of independent development. Just as every human undertaking pursues certain objects, so also mathematical research requires its problems. It is also by the solution of problems that the investigator tests the temper of his steel; he finds new methods and new outlooks, and gains a wider and freer horizon. \end{quotation} \smallskip Hilbert puts forward three criteria that a good mathematical problem ought to satisfy: \begin{quotation} It is difficult and often impossible to judge the value of a problem correctly in advance; for the final award depends upon the gain which science obtains from the problem. Nevertheless we can ask whether there are general criteria which mark a good mathematical problem. An old French mathematician said: ``A mathematical theory is not to be considered complete until you have made it so clear that you can explain it to the first man that you meet on the street." 
This clearness and ease of comprehension, here insisted on for a mathematical theory, I should still more demand for a mathematical problem if it is to be perfect; for what is clear and easily comprehended attracts, the complicated repels us. Moreover a mathematical problem should be difficult in order to entice us, but not completely inaccessible, lest it mock at our efforts. It should be to us a guide post on the mazy paths to hidden truths, and ultimately a reminder of our pleasure in its successful solution. \end{quotation} \smallskip From the viewpoint of the Hilbert criteria for a good problem, we see that: \medskip (1) The $3x+1$ problem is a clear, simply stated problem; \medskip (2) The $3x+1$ problem is a difficult problem; \medskip (3) The $3x+1$ problem initially seems accessible, in that it possesses a fairly intricate internal structure. \smallskip But -- and it is a big ``but" -- the evidence so far suggests that obtaining a proof of the $3x+1$ problem is inaccessible! Not only does this goal appear inaccessible, but various simplified conjectures derived from it appear to be completely inaccessible in their turn, leading to a regress: the formulation of a series of simpler and simpler, yet equally inaccessible, problems, namely conjectures (C1)--(C5) listed in \S8. \smallskip We conclude that the $3x+1$ problem comes close to being a ``perfect" problem in the Hilbert sense. However it seems to fail the last of Hilbert's requirements: It mocks our efforts! It is possible to work hard on this problem to no result. It is definitely a dangerous problem! It could well be that the $3x+1$ problem remains out of human reach. But maybe not. Who knows? \section{Working on the $3x+1$ problem} Whether or not the $3x+1$ problem is a ``good" problem, it is not going away, due to its extreme accessibility. It offers a large and tantalizing variety of patterns in computer experiments. This problem stands as a mathematical challenge for the 21st century. 
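Such computer experiments are easy to begin at small scale. The following Python sketch (illustrative only; the function names are ours) iterates the accelerated $3x+1$ map $T(x)=x/2$ for even $x$ and $T(x)=(3x+1)/2$ for odd $x$, and records total stopping times of the kind discussed under world record (W3) above:

```python
def T(x):
    """One step of the accelerated 3x+1 map: x/2 if x is even, (3x+1)/2 if odd."""
    return x // 2 if x % 2 == 0 else (3 * x + 1) // 2

def total_stopping_time(n):
    """Number of T-iterations needed to reach 1 (assumes the trajectory halts)."""
    steps = 0
    while n != 1:
        n = T(n)
        steps += 1
    return steps

# Powers of two halve straight down to 1 in exactly k steps.
assert total_stopping_time(2**10) == 10

# Every n up to 10^4 reaches 1 -- consistent with the 3x+1 conjecture,
# which has been verified computationally far beyond this range.
assert all(total_stopping_time(n) >= 1 for n in range(2, 10**4))
```

Pushing such searches many orders of magnitude further, and tracking statistics such as $\sigma_\infty(n)/\log n$, is how the world records mentioned in \S8 were obtained.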
\smallskip In working on this problem, the most cautious advice, following Richard Guy \cite{Guy83}, is: \begin{quotation} {\em Don't try to solve these problems! } \end{quotation} \noindent But, as Guy said \cite[p. 35]{Guy83}, some of you may be already scribbling, in spite of the warning! \medskip We also note that Paul Erd\H{o}s said, in conversation, about its difficulty (\cite{Erd90}): \smallskip \begin{quotation} {\em ``Hopeless. Absolutely hopeless."} \end{quotation} \smallskip \noindent In Erd\H{o}s-speak, this means that there are no known methods of approach which give any promise of solving the problem. For other examples of Erd\H{o}s's use of the term ``hopeless" see Erd\H{o}s and Graham \cite[pp. 1, 27, 66, 105]{EG80}. \smallskip At this point we may recall further advice of David Hilbert \cite[p.~442]{Hi00} about problem solving: \begin{quotation} If we do not succeed in solving a mathematical problem, the reason frequently consists in our failure to recognize the more general standpoint from which the problem before us appears only as a single link in a chain of related problems. After finding this standpoint, not only is this problem frequently more accessible to our investigation, but at the same time we come into possession of a method that is applicable to related problems. \end{quotation} \smallskip \noindent The quest for generalization cuts in two directions, for Hilbert also says \cite[p.~442]{Hi00}: \begin{quotation} He who seeks for methods without having a definite problem in mind seeks for the most part in vain. \end{quotation} \smallskip Taking this advice into account, researchers have treated many generalizations of the $3x+1$ problem, which are reported on in this volume. One can consider searching for general methods that apply to a large variety of related iterations. Such general methods as are known give useful information, and answer some questions about iterates of the $3x+1$ function. 
Nevertheless it is fair to say that they do not begin to answer the central question: \smallskip {\em What is the ultimate fate under iteration of such maps over all time?} \smallskip My personal viewpoint is that the $3x+1$ problem is somewhat dangerous, and that it is prudent not to focus on resolving the $3x+1$ conjecture as an immediate goal. Rather, one might first look for more structure in the problem. Also one might profitably view the problem as a ``test case", to which one may from time to time apply new results arising from the ongoing development of mathematics. When new theories and new methods are discovered, the $3x+1$ problem may be used as a testbed to assess their power, whenever circumstances permit. To conclude, let us remind ourselves, following Hilbert \cite[p. 438]{Hi00}: \begin{quotation} The mathematicians of past centuries were accustomed to devote themselves to the solution of difficult particular problems with passionate zeal. They knew the value of difficult problems. \end{quotation} \smallskip \noindent The $3x+1$ problem stands before us as a beautifully simple question. It is hard to resist exploring its structure. We should not exclude it from the mathematical universe just because we are unhappy with its difficulty. It is a fascinating and addictive problem. \bigskip \paragraph{\bf Acknowledgments.} I am grateful to Michael Zieve and Steven J. Miller each for detailed readings and corrections. Marc Chamberland, Alex Kontorovich, and Keith R. Matthews also made many helpful comments. The figures are due to Alex Kontorovich, adapted from work on \cite{KL09}. I thank Andreas Blass for useful comments on incompleteness results and algebraic structures. The author was supported by NSF Grants DMS-0500555 and DMS-0801029.
https://arxiv.org/abs/2111.02635
The 3x+1 Problem: An Overview
This paper is an overview and survey of work on the 3x+1 problem, also called the Collatz problem, and generalizations of it. It gives a history of the problem. It addresses two questions: (1) What can mathematics currently say about this problem? (as of 2010). (2) How can this problem be hard, when it is so easy to state?
https://arxiv.org/abs/1810.02532
Sharp error bounds for Ritz vectors and approximate singular vectors
We derive sharp bounds for the accuracy of approximate eigenvectors (Ritz vectors) obtained by the Rayleigh-Ritz process for symmetric eigenvalue problems. Using information that is available or easy to estimate, our bounds improve the classical Davis-Kahan $\sin\theta$ theorem by a factor that can be arbitrarily large, and can give nontrivial information even when the $\sin\theta$ theorem suggests that a Ritz vector might have no accuracy at all. We also present extensions in three directions, deriving error bounds for invariant subspaces, singular vectors and subspaces computed by a (Petrov-Galerkin) projection SVD method, and eigenvectors of self-adjoint operators on a Hilbert space.
\section{Introduction} It is well known that the eigenvector corresponding to a near-multiple eigenvalue is ill-conditioned. Specifically, the classical Davis-Kahan theory~\cite{daviskahan} implies that the condition number of eigenvectors of symmetric or Hermitian matrices is $1/{\rm gap}$, where ${\rm gap}$ is the smallest distance between the particular eigenvalue and the other eigenvalues. For example, if $(\widehat\lambda,\widehat x)$ with $\|\widehat x\|=1$ is an approximation to an exact eigenpair $(\lambda,x)$ of a symmetric matrix $A$ with residual $\|r\|=\|A\widehat x- \widehat\lambda\widehat x\|$, then the Davis-Kahan $\sin\theta$ theorem gives the error bound for $\widehat x$~\cite{daviskahan},\cite[Ch.~11]{parlettsym}: \begin{equation} \label{eq:classical} \sin\angle(x,\widehat x)\leq \frac{\|r\|}{{\rm gap}_c}, \end{equation} where $\angle(x,\widehat x) = \mbox{acos}\frac{|\widehat x^Tx|}{\|\widehat x\|\|x\|}$ and ${\rm gap}_c$ is the distance between $\widehat\lambda$ and the eigenvalues of $A$ other than $\lambda$ (where the subscript stands for ``classical''). Here and throughout, $\|\cdot\|$ for vectors denotes the standard Euclidean norm. In view of the bound~\eqref{eq:classical}, it is commonly believed that if ${\rm gap}_c$ is smaller than the residual $\|r\|$, then we cannot guarantee any accuracy in the computed eigenvector $\widehat x$. In this work, we partly challenge this belief. Namely, we examine the accuracy of eigenvectors obtained by the Rayleigh-Ritz process (R-R), the most widely-used process for computing partial (usually extremal) eigenpairs of large-scale symmetric/Hermitian matrices, and show that~\eqref{eq:classical} can be improved---often significantly, and by a factor that can be arbitrarily large---using quantities that are readily available (or can be estimated cheaply) after the computation. 
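To make the classical bound~\eqref{eq:classical} concrete, here is a small numerical sketch (our own toy example; the matrix and perturbation are ad hoc choices, not taken from the paper). It perturbs an exact eigenvector of a diagonal matrix toward a nearby eigenvector, forms the Rayleigh quotient and residual, and checks the Davis-Kahan $\sin\theta$ estimate:

```python
import numpy as np

# Diagonal test matrix: lambda = 1 is the target eigenvalue, 1.1 is nearby.
A = np.diag([1.0, 1.1, 3.0, 5.0])
x = np.array([1.0, 0.0, 0.0, 0.0])   # exact unit eigenvector for lambda = 1

# Perturb x toward the nearby eigenvector, normalize, take Rayleigh quotient.
xh = np.array([1.0, 0.2, 0.0, 0.0])
xh /= np.linalg.norm(xh)
lamh = xh @ A @ xh

r = A @ xh - lamh * xh                                   # residual A*xh - lamh*xh
gap_c = min(abs(lamh - mu) for mu in (1.1, 3.0, 5.0))    # distance to other eigenvalues
sin_theta = np.sqrt(1.0 - (x @ xh) ** 2)                 # sin of angle(x, xh)

# Davis-Kahan sin(theta) bound: sin angle(x, xh) <= ||r|| / gap_c
assert sin_theta <= np.linalg.norm(r) / gap_c + 1e-12
```

In this example the bound is nearly attained, consistent with the observation below that~\eqref{eq:classical} is essentially tight when no information beyond $\|r\|$ and ${\rm gap}_c$ is available.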
Of course, the classical Davis-Kahan bound is tight in general: In the absence of additional information other than $\|r\|$ and ${\rm gap}_c$, we cannot improve~\eqref{eq:classical}, in that there exist examples for which the bound~\eqref{eq:classical} is essentially tight. However, when $(\widehat\lambda,\widehat x)$ is a computed approximate eigenpair (Ritz pair) obtained by R-R, there is usually abundant additional information available that~\eqref{eq:classical} does not use: most importantly, the residual $r$ is orthogonal to the trial subspace, which is rich in the eigenspace corresponding to not only $\lambda$ but also eigenvalues close to $\widehat\lambda$. Moreover, since the trial subspace in R-R usually contains approximations to nearby eigenpairs (e.g. when looking for the smallest eigenvalues), a bound can be computed for ${\rm Gap}$ (which we call the ``big Gap''), which is roughly the distance between the Ritz value $\widehat\lambda$ and eigenvalues not approximated by the Ritz values; see~\eqref{eq:Gapgapdef} for the precise definition. These are the crucial properties that allow us to improve the Davis-Kahan bound~\eqref{eq:classical}---in other words, we take into account the matrix structure generated automatically by R-R to derive sharp bounds for the Ritz vector error. Our results essentially show that up to a modest constant, the ${\rm gap}_c$ in \eqref{eq:classical} can be replaced by ${\rm Gap}$, which is usually much wider, thus improving classical results. Another way to understand our results is via (structured) perturbation theory: while an eigenvector has condition number $1/{\rm gap}_c$ if a general perturbation is allowed, R-R imposes a structure in the perturbation that reduces the \emph{structured} condition number to $1/{\rm Gap}$. Qualitatively speaking, the fact that the accuracy of Ritz vectors depends on ${\rm Gap}$ rather than ${\rm gap}_c$ was pointed out by Ovtchinnikov~\cite[Thm.~4]{ovtchinnikov2006cluster}. 
However, the bounds there involve quantities that are unavailable and difficult to estimate, such as the projector onto an exact eigenspace. Our bounds are easy to compute or estimate, using information that is available after a typical computation of approximate eigenpairs via the R-R process. Our bounds are also tight, in that they cannot be improved without additional information. In addition, we extend the results in three ways. First, we obtain error bounds for invariant subspaces (spanned by more than one eigenvector) computed by R-R. This gives an answer to one of the open problems suggested in Davis-Kahan's classical paper. Second, we derive their SVD variants, establishing tight bounds for the quality of approximate singular vectors and singular subspaces associated with the largest singular values, obtained by a (Petrov-Galerkin) projection method. Finally, we generalize the error bounds to eigenvectors of self-adjoint operators on a Hilbert space. \emph{Notation.} $\lambda(A)$ denotes the spectrum (set of eigenvalues) of a symmetric matrix $A$. $\sigma(A)=\{\sigma_i(A)\}_{i=1}^{\min(m,n)}$ is the set of singular values of $A\in\mathbb{C}^{m\times n}$, where $\sigma_1(A)\geq \sigma_2(A)\geq \cdots \geq \sigma_{\min}(A)=\sigma_{\min(m,n)}(A)\geq 0$. $I_n$ denotes the $n\times n$ identity matrix. $Q_\perp\in\mathbb{C}^{n\times (n-k)}$ is the orthogonal complement of $Q\in\mathbb{C}^{n\times k}$. Quantities involved in the R-R process wear a hat (e.g. $\widehat\lambda,\widehat x,\widehat X$), and those with tildes are auxiliary objects for the analysis. Norms $\|\cdot\|$ without subscripts denote the spectral norm equal to the largest singular value, which for vectors is the Euclidean norm. We write $\uniinv{\cdot}$ for inequalities that hold for any fixed unitarily invariant norm. Inequalities involving $\|\cdot\|_{2,F}$ hold for the spectral and Frobenius norms, but not necessarily for any unitarily invariant norm. 
We denote by $\mathcal{A}$ a self-adjoint operator on a Hilbert space, $\lambda(\mathcal{A})$ its spectrum, and $\|\mathcal{A}\|$ its spectral (operator) norm. We drop the subscript $i$ in $\widehat\lambda_i,\lambda_i$ when this can be done without causing confusion. We always normalize eigenvectors and Ritz vectors to have unit norm $\|x\|=\|\widehat x\|=1$. Unless otherwise stated, for definiteness we assume that the Ritz values $\widehat\lambda_1,\ldots\widehat\lambda_k$ approximate the smallest eigenvalues of $A$ (and accordingly the Ritz values are arranged in increasing order $\widehat\lambda_1\leq \widehat\lambda_2\leq \cdots \leq\widehat\lambda_k$). This is a typical situation in applications, and clearly the discussion covers the case where the largest eigenvalues are sought (if necessary by working with $-A$). A less common but still important case is when interior eigenvalues are desired, for example those lying in an interval~(e.g.~\cite{li2016thick}). Our results are applicable to this case also; one subtlety here is that some care is needed in estimating ${\rm Gap}_i$, since the Ritz values tend to contain outliers in this case. \section{Setup}\label{sec:setup} \subsection{Big (good) ${\rm Gap}$, small (bad) ${\rm gap}$}\label{sec:RR} Let $A\in\mathbb{C}^{n\times n}$ be the (large) Hermitian matrix whose partial eigenvalues are sought, and let $Q\in\mathbb{C}^{n\times k} (n\geq k$, usually $n\gg k)$ be a trial subspace with orthonormal columns $Q^*Q=I_k$ (obtained e.g. via Lanczos, LOBPCG, Jacobi-Davidson or the generalized Davidson method~\cite{baitemplates}). Following standard practice, for a matrix with orthonormal columns $Q$, we identify the matrix $Q$ with its column space $\mbox{Span}(Q)$. R-R obtains approximate eigenvalues (Ritz values) and eigenvectors (Ritz vectors) as follows. \begin{enumerate} \item Compute the $k\times k$ matrix $Q^{\ast}AQ$. \item Compute the eigendecomposition $Q^{\ast}AQ=\Omega\widehat \Lambda \Omega^\ast$. 
$(\widehat\lambda_1,\ldots,\widehat\lambda_k)=\mbox{diag}(\widehat \Lambda)$ are the Ritz values, and $\widehat X:=[\widehat x_1,\ldots,\widehat x_k] = Q\Omega$ are the Ritz vectors. \end{enumerate} The Ritz pairs $(\widehat\lambda_i,\widehat x_i)$ thus obtained satisfy $\widehat x_i\in \mbox{span}(Q)$ for all $i$, and since $Q^*(AQ\Omega-Q\Omega\widehat\Lambda)=Q^*AQ\Omega-\Omega\widehat\Lambda= \Omega\widehat \Lambda \Omega^\ast\Omega-\Omega\widehat\Lambda=0$ by construction, we have---crucially for this work---the orthogonality between $Q$ and the residuals $A\widehat x_i-\widehat\lambda_i\widehat x_i\perp Q$, for every $1\leq i\leq k$. Throughout we assume $k\geq 2$; indeed when $k=1$ there is no room for improvement upon Davis-Kahan. Underlying R-R is a matrix of particular structure: Let $Q_\perp$ be the orthogonal complement of $Q$, such that $[Q\ Q_\perp]$ is a square unitary matrix (and hence so is $[Q\Omega, Q_\perp]$), and consider the unitary transformation applied to $A$ \begin{equation} \label{eq:At} \widetilde A:= [Q\Omega,Q_\perp]^*A[Q\Omega,Q_\perp]= \begin{bmatrix} \quad \begin{matrix} \widehat\lambda_1 && \\ &\ddots& \\ &&\widehat\lambda_k \end{matrix}\quad \begin{matrix} \mbox{---}r_1^T\mbox{---}\\ \vdots \\ \mbox{---}r_k^T\mbox{---} \end{matrix}\quad \quad \\ \begin{matrix} |& &| \\ r_1&\cdots &r_k \\ |& &| \end{matrix} \begin{matrix} & & \\ & \Scale[2.5]{A_3} \\& \end{matrix} \end{bmatrix}. \end{equation} Here $R=(Q_\perp)^* AQ\Omega=[r_1,r_2,\ldots,r_k]$; we use the subscript $3$ in $A_3$ because later we partition the $(1,1)$ block further into two pieces. Suppose $(\lambda_i,\widetilde x_i)$ is an exact eigenpair of $\widetilde A$ such that $\widetilde A\widetilde x_i=\lambda_i\widetilde x_i$. Denote by $z_i\in\mathbb{C}^{n-k}$ the vector of the bottom $n-k$ elements of $\widetilde x_i$, and by $w_i$ the $i$th element, and by $y_i\in\mathbb{C}^{k-1}$ the first $k$ elements of $\widetilde x_i$, except the $i$th. 
For example when $i=1$, we have $\widetilde x_1=\left[\begin{smallmatrix}w_1\\ y_1\\z_1\end{smallmatrix}\right]$. Then since $x_i=[\widehat X,Q_\perp]\widetilde x_i$ is the corresponding eigenvector of $A$, and $(\widehat\lambda_i,\widehat x_i)$ is a Ritz pair with $\widehat x_i =[\widehat X,Q_\perp]e_i$ where $e_i$ is the $i$th column of the identity $I_n$, it follows that $\cos\angle(x_i,\widehat x_i)=|e_i^T\widetilde x_i|=|w_i|$, and hence
\begin{equation} \label{eq:sinx1xh1} \sin\angle(x_i,\widehat x_i)=\sqrt{\|y_i\|^2+\|z_i\|^2}. \end{equation}
This is a key fact in the forthcoming analysis.

Fundamental in this work is the distinction between the ``big gap'' ${\rm Gap}_i$ and the ``small gap'' ${\rm gap}_i$, defined for $i=1,\ldots,k$ by
\begin{equation} \label{eq:Gapgapdef} {\rm Gap}_i:=\min|\lambda_i-\lambda(A_3)| ,\qquad {\rm gap}_i:=\min_{j\in\{1,\ldots,k\}\backslash i}|\lambda_i-\widehat\lambda_j|. \end{equation}
Intuitively, ${\rm Gap}_i$ measures the distance between the target $\lambda_i$ and the undesired eigenvalues, whereas ${\rm gap}_i$ measures that between $\lambda_i$ and all the other Ritz values\footnote{\label{foot1}There is a subtle difference between ${\rm Gap}_i,{\rm gap}_i$ in~\eqref{eq:Gapgapdef} and ${\rm gap}_c$ in~\eqref{eq:classical}, in that the former is the difference between an exact desired eigenvalue and approximate undesired eigenvalues, whereas ${\rm gap}_c$ is the opposite. Our forthcoming analysis uses~\eqref{eq:Gapgapdef}, and we return to this difference in Remark~\ref{rem:gapdef}. Also note that we do not explicitly assume that ${\rm Gap}_i\geq {\rm gap}_i$; the bounds continue to hold without such assumptions, although when they do not hold, the bounds can be worse than Davis-Kahan.}, including those approximating the desired eigenvalues (e.g., $\lambda_{2}$). For example when $R\rightarrow 0$, we have ${\rm Gap}_i\rightarrow|\lambda_i-\lambda_{k+1}|$; by contrast ${\rm gap}_i \rightarrow \min(|\lambda_i-\lambda_{i+1}|,|\lambda_i-\lambda_{i-1}|)$.
Typically ${\rm Gap}_i\geq {\rm gap}_i$ holds, and indeed we usually have ${\rm Gap}_i\gg {\rm gap}_i$. We illustrate this in Figure~\ref{fig:gapGap} for $i=1$. Throughout the paper, it is helpful to consider the case $i=1$, where the target eigenpair is the smallest one\footnote{When the target eigenpairs are the smallest or largest ones, it also becomes easier to obtain lower bounds for ${\rm gap}_1$ and ${\rm Gap}_i$. For example, since by the Courant--Fischer minimax theorem $\lambda_i\leq \widehat\lambda_i$, we have ${\rm gap}_1\geq \min_{j\in\{1,\ldots,k\}\backslash 1}|\widehat\lambda_1-\widehat\lambda_j|$, which involves only computed quantities. Similarly, we have ${\rm Gap}_i\geq \min|\widehat\lambda_i-\lambda(A_3)|$.}.
\begin{figure}[htbp]
\centering
\footnotesize
\begin{tikzpicture}[decoration=brace]
\draw (0,0) -- (8,0);
\draw (.5,-0.44) -- (.5,0.1) node[anchor=south] {$\lambda_1$};
\draw (.7,-0.44) -- (.7,0.1) node[anchor=south] {\ $\lambda_2$};
\draw (1.3,0.1) node[anchor=south] {\ $\cdots$};
\draw (2,-0.44) -- (2,0.1) node[anchor=south] {\ $\lambda_k$};
\draw[decorate, yshift=0.3ex] (2.2,0) -- node[above=0ex] {$\mbox{eig}(A_3) =\mbox{eig}(Q_\perp^*AQ_\perp) $} (8,0);
\draw[blue,thick,<->] (.5,-0.4) -- (.7,-0.4) node[anchor=north] {\ \scriptsize $\mbox{gap}_1$};
\draw[red,thick,<->] (.5,-0.2) -- (2.2,-0.2) node[anchor=north,xshift=-4.5ex] {\scriptsize $\mbox{Gap}_1$};
\draw[thick,<->] (2,-0.4) -- (2.2,-0.4) node[anchor=north] {\ \scriptsize $\mbox{gap}_k$};
\end{tikzpicture}
\normalsize
\caption{Illustration of the typical situation when the smallest eigenvalues are sought, and $R$ is small enough that $\widehat\lambda_i\approx \lambda_i$. While the small gap is ${\rm gap}_i=\min_{j\neq i}|\lambda_i-\widehat\lambda_j|\approx \min_{j\neq i}|\widehat\lambda_i-\widehat\lambda_j|$, the big gap ${\rm Gap}_i=\min|\lambda_i-\lambda(A_3)|$ is much bigger.
}
\label{fig:gapGap}
\end{figure}

In addition to ${\rm gap}_i$ and ${\rm Gap}_i$, some of the bounds we derive involve $|\lambda_i-\widehat\lambda_j|$ for a fixed $j\in\{1,\ldots,k\}\backslash i$. These lie between ${\rm gap}_i$ and ${\rm Gap}_i$. Recalling~\eqref{eq:At}, the information clearly available after R-R consists of the Ritz pairs $(\widehat\lambda_i,\widehat x_i)$ for $i=1,\ldots, k$ and the norms of the individual columns $r_i$, since these equal the residual norms $\|r_i\|=\|A\widehat x_i-\widehat\lambda_i \widehat x_i\|$. In addition, one can reasonably expect that an estimate (or better yet, a lower bound) is available for ${\rm Gap}_i$ for each $i$, or at least for small $i$: when the smallest $k$ eigenpairs are sought, the trial subspace $Q$---assuming it has been chosen appropriately by the algorithm used---is expected to be rich in the eigenspace corresponding to those eigenvalues. It then follows from standard eigenvalue perturbation theory that $A_3$ contains only eigenvalues that are roughly at least as large as $\lambda_{k+1}(A)$ (up to $\|R\|$, or indeed~$\frac{\|R\|^2}{{\rm gap}}$~\cite{rcli05}). Therefore, although the exact value of $\lambda_{k+1}(A)$ is unknown, we can use the knowledge of the Ritz values $\widehat\lambda_i$ to estimate ${\rm Gap}_i$, for example ${\rm Gap}_i \approx |\widehat\lambda_i-\widehat\lambda_{k+1}|$ or ${\rm Gap}_i \gtrsim |\widehat\lambda_i-\widehat\lambda_{k}|$; we use the latter, approximate lower bound in our experiments. Similarly, one can estimate ${\rm gap}_i$ for example as ${\rm gap}_i\approx \min_{j\neq i}|\widehat\lambda_i-\widehat\lambda_j|$. In practice, an important feature of the residuals is that they are typically graded: $\|r_1\|\ll\|r_2\|\ll\cdots\ll \|r_k\|$. This is because the extremal eigenvalues converge much faster than interior ones, a fact deeply connected with polynomial (and rational) approximation theory~\cite[\S~33]{trefbau}. We derive bounds (e.g.
Theorem~\ref{thm:eigrefine}) that respect this property, and hence give sharp bounds in practical situations.

We note that previous bounds exist that involve the big ${\rm Gap}_i$ rather than ${\rm gap}_i$; most notably (aside from Ovtchinnikov's result~\cite{ovtchinnikov2006cluster} mentioned in the introduction) Davis-Kahan's generalized $\sin\theta$ theorem, which bounds the angles between subspaces of different dimensions~\cite[Thm.~6.1]{daviskahan}. In that bound, however (in addition to comparing e.g. a vector and a subspace rather than two vectors), the numerator is the norm of the entire residual matrix $\|R\|$ rather than that of the $i$th column $\|r_i\|$. The bounds we derive essentially show that, up to a small constant, (i) the small ${\rm gap}_i$ in~\eqref{eq:classical} can be replaced by the big ${\rm Gap}_i$, and (ii) the numerator can be taken to be the $i$th column norm $\|r_i\|$. These combined give a massively improved error bound for $\widehat x_i$, especially for small values of $i$. The next section illustrates the first aspect, and the second will be covered in Section~\ref{sec:eigsharp}.

\section{$2\times 2$ partitioning}\label{sec:2by2}
We will derive three error bounds for Ritz vectors; the first, obtained in this section, is simple and vividly illustrates the roles of ${\rm gap}$ and ${\rm Gap}$, but is not sharp in practical settings. In Section~\ref{sec:eigsharp} we derive two more bounds that are sharper in practice. Here we consider a simplified $2\times 2$ block partitioning of~\eqref{eq:At} where
\begin{equation} \label{eq:Aef} \widetilde A (= [Q\Omega,Q_\perp]^*A[Q\Omega,Q_\perp])= \begin{bmatrix} \widehat \Lambda_1 & R^*\\R& A_3 \end{bmatrix}, \end{equation}
where $\widehat \Lambda_1=\mbox{diag}(\widehat\lambda_1,\widehat \Lambda_2)\in\mathbb{R}^{k\times k}$ and $\|R\|=\|A\widehat X-\widehat X\widehat \Lambda_1\|$ are computed quantities. In other words, here we do not distinguish the columns $r_i$ of $R$ but treat $\|R\|$ as a single residual term.
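As a concrete check of this setup, the following minimal NumPy sketch (the Hermitian test matrix and trial subspace here are hypothetical random ones, not data from the paper) performs the R-R process, verifies that the residuals are orthogonal to $Q$, and confirms the arrowhead structure of~\eqref{eq:At}:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 5

# Hypothetical Hermitian (here real symmetric) test matrix A.
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

# Trial subspace: Q has orthonormal columns (random, for illustration).
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Rayleigh-Ritz: eigendecomposition of the k-by-k matrix Q^* A Q.
lam_hat, Omega = np.linalg.eigh(Q.T @ A @ Q)  # Ritz values, ascending
X_hat = Q @ Omega                             # Ritz vectors

# Residuals A*xhat_i - lamhat_i*xhat_i; each column is orthogonal to span(Q).
Res = A @ X_hat - X_hat * lam_hat

# Arrowhead structure of (eq:At): diagonal (1,1) block, residual columns r_i.
Q_full, _ = np.linalg.qr(Q, mode='complete')
Q_perp = Q_full[:, k:]                        # orthonormal basis of span(Q)^perp
U = np.hstack([X_hat, Q_perp])
At = U.T @ A @ U
r_norms = np.linalg.norm(At[k:, :k], axis=0)  # ||r_i|| = residual norms
```

Here \texttt{r\_norms} recovers the quantities $\|r_i\|=\|A\widehat x_i-\widehat\lambda_i\widehat x_i\|$ used throughout, directly from the transformed matrix.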
Below, we derive bounds for $\sin\angle(x_i,\widehat x_i)$ applicable to $i=1,\ldots,k$. In our analysis, we assume that $\widehat\lambda_i$ is the $(1,1)$ element of $\widetilde A$; this simplifies the discussion and loses no generality, as we can permute the leading $k\times k$ block of $\widetilde A$. Moreover, we drop the subscript $i$ in the remainder of this section for simplicity.

\begin{theorem}\label{thm:mainscalar}
Let $A$ be a Hermitian matrix as in~\eqref{eq:At}, for which $(\widehat\lambda,\widehat x)$ is a Ritz pair with $\widehat\lambda=\widehat\lambda_1$.
Let $(\lambda,x)$ be an eigenpair of $A$, and let ${\rm Gap}=\min|\lambda-\lambda(A_3)|$ and ${\rm gap}=\min|\lambda-\lambda(\widehat \Lambda_2)|$. Then writing $R_{2}=[r_2,r_3,\ldots,r_k]$, we have
\begin{equation} \label{eq:boundvec} \sin\angle(x,\widehat x )\leq \frac{\|R\|}{{\rm Gap}}\sqrt{1+\frac{\|R_{2}\|^2}{{\rm gap}^2}} \quad \left( \leq \frac{\|R\|}{{\rm Gap}}\Big(1+\frac{\|R_{2}\|}{{\rm gap}}\Big) \right). \end{equation}
\end{theorem}
Note that clearly $\|R_{2}\|\leq \|R\|$, so the result implies $\sin\angle(x,\widehat x )\leq \frac{\|R\|}{{\rm Gap}}\sqrt{1+\frac{\|R\|^2}{{\rm gap}^2}} \leq \frac{\|R\|}{{\rm Gap}}(1+\frac{\|R\|}{{\rm gap}})$.
\begin{proof}
Let $\widetilde x=\begin{bmatrix} w\\ y\\z\end{bmatrix}$ be an eigenvector of $\widetilde A$ as in~\eqref{eq:Aef} such that $\widetilde A \begin{bmatrix} w\\ y\\z \end{bmatrix}=\lambda \begin{bmatrix} w\\ y\\z \end{bmatrix}$, with $w\in\mathbb{C}$, $y\in\mathbb{C}^{k-1}$ and $z\in\mathbb{C}^{n-k}$. Then since $\sin\angle(x,\widehat x ) = \left\| \begin{bmatrix} y\\z \end{bmatrix} \right\|$ from~\eqref{eq:sinx1xh1}, the goal is to bound $\|y\|$ and $\|z\|$. The bottom part of $\widetilde A\widetilde x=\lambda \widetilde x$ gives
\[ (\lambda I_{n-k}-A_3) z =R\begin{bmatrix} w\\y \end{bmatrix} , \]
from which we obtain
\begin{equation} \label{eq:dkspecial} \|z\|\leq \|(\lambda I_{n-k}-A_3)^{-1}\| \left\|R\begin{bmatrix} w\\y \end{bmatrix} \right\| \leq \|(\lambda I_{n-k}-A_3)^{-1}\|\|R\| = \frac{\|R\|}{{\rm Gap}} . \end{equation}
Note that the denominator is ${\rm Gap}$, not ${\rm gap}$. We also note that the final bound is Davis-Kahan's generalized $\sin\theta$ theorem, in which subspaces of different sizes ($\widehat x$ and $[x_1,\ldots,x_k]$) are compared (and the perturbation is off-diagonal); in fact, we can also obtain $\|z\|\leq \frac{\|R\|}{{\rm Gap}} \left\| \big[ \begin{smallmatrix} w\\y \end{smallmatrix} \big] \right\|$, which is the generalized $\tan\theta$ theorem.
From the second block of $\widetilde A\widetilde x=\lambda \widetilde x$ we have
\[ (\lambda I_{k-1} -\widehat \Lambda_{2})y = R_{2}^*z, \]
and since $\|(\lambda I_{k-1} -\widehat \Lambda_{2})y\|\geq {\rm gap} \|y\|$, we obtain the important bound
\begin{equation} \label{eq:ybound1} \|y\| \leq \frac{\|R_{2}\|}{{\rm gap}}\|z\|. \end{equation}
Combining with~\eqref{eq:dkspecial} we obtain
\begin{equation} \label{eq:y2knya} \|y\| \leq \frac{\|R\|\|R_{2}\|}{{\rm gap}\cdot {\rm Gap}} . \end{equation}
Therefore, we conclude that
\[ \sin\angle(x,\widehat x )^2 = \left\| \begin{bmatrix} y\\z \end{bmatrix} \right\|^2 \leq \frac{\|R\|^2}{{\rm Gap}^2}\left( 1+\frac{\|R_{2}\|^2}{{\rm gap}^2} \right), \]
giving~\eqref{eq:boundvec}.
\end{proof}

We make several remarks regarding the theorem.
\begin{remark}[Qualitative behavior of bounds]
Theorem~\ref{thm:mainscalar} shows that
\begin{itemize}
\item if $\|R\|\leq {\rm gap}$, then $\sin\angle(x,\widehat x )\lesssim \frac{\|R\|}{{\rm Gap}}$.
\item if $\|R\|\geq {\rm gap}$, then $\sin\angle(x,\widehat x )\lesssim \frac{\|R\|^2}{{\rm gap}\cdot {\rm Gap}}$.
\end{itemize}
Note how Davis-Kahan's bound is insufficient to explain these: when $\|R\|\leq {\rm gap}$, we improve the bound~\eqref{eq:classical} by a factor ${\rm gap}/{\rm Gap}$, which is typically $\ll 1$. Moreover, when $\|R\|\geq {\rm gap}$, classical results suggest $\widehat x $ may have no accuracy at all.
Nonetheless, Theorem~\ref{thm:mainscalar} shows that there is still a nontrivial bound for $\sin\angle(x,\widehat x )$ as long as $\|R\|\lesssim \sqrt{{\rm gap}\cdot {\rm Gap}}\ (\gg {\rm gap})$. These results are particularly relevant when only low-accuracy solutions are available, so that $\|R\|$ is much larger than working precision.
\end{remark}

\begin{remark}[Effect of finite precision arithmetic]\label{rem:finite}
Crucial in the above argument is that $\widehat \Lambda_1$ has zero off-diagonal elements. In practice, in finite-precision arithmetic, the Rayleigh-Ritz process inevitably results in $\widehat \Lambda_1$ in~\eqref{eq:Aef} with off-diagonal elements that are $O(u)$ instead of $0$, due to roundoff errors (assuming for simplicity $\|A\|=O(1)$). It is therefore important to address how they affect the bounds. As mentioned in the introduction, classical perturbation theory shows that these $O(u)$ terms can perturb $\widehat x $ by up to $O(u/{\rm gap})$. Since the off-diagonal $O(u)$ elements in $\widehat \Lambda_1$ indeed lie in the directions that perturb the eigenvector the most (we return to this in Section~\ref{sec:eigpert}), to account for roundoff errors we need to add the term $O(u/{\rm gap})$ to the bound~\eqref{eq:boundvec}. This remark becomes important especially when $\|R\|$ is small, so that $O(u/{\rm gap})$ is not negligible relative to $\frac{\|R\|}{{\rm Gap}}$. In other words, the folklore that eigenvectors cannot be computed with precision higher than $u/{\rm gap}$ is true; what we refute is the belief that the bound $\|R\|/{\rm gap}$ (or $\|r\|/{\rm gap}$) is sharp---our result shows that when $\|R\|> {\rm gap}$ but $\|R\|< {\rm Gap}$, Rayleigh-Ritz computes eigenvectors to much higher accuracy than $\|R\|/{\rm gap}$ suggests.
\end{remark}

\begin{remark}[Different partitionings]\label{rem:partition}
We can obtain different bounds depending on where we partition; that is, we can invoke the bound~\eqref{eq:boundvec} with $k\leftarrow k'$ for some $k'\leq k$. Each choice of $k'$ gives a different bound, since each gives different values of $\|R\|$ and ${\rm Gap}$ (along with ${\rm gap}$, though its dependence on $k'$ is usually much less significant). If the computational cost is not a concern, one can compute all possible partitionings and take the smallest bound obtained. However, the bounds in Section~\ref{sec:eigsharp} are often still better in practice.
\end{remark}

\begin{remark}[Proof via generalized Davis-Kahan and Saad]\label{rem:dksaad}
The result~\eqref{eq:boundvec} can also be derived by combining (i) Saad's bound~\cite[Thm.~4.6]{saadbookeig}, which bounds $\sin\angle(\widehat x_i,x_i)$ relative to $\sin\angle(Q,x_i)$, the angle between the desired eigenvector and the trial subspace, and (ii) the generalized Davis-Kahan $\sin\theta$ theorem~\cite{daviskahan}, in which two subspaces of different dimensions are compared. Here we presented a first-principles derivation, as we use the same line of arguments to derive improved and generalized bounds in the forthcoming sections. Also noteworthy is Knyazev's paper~\cite{knyazevmc97}, which generalizes Saad's bound to subspaces. He also shows that Ritz vectors contain quadratically small components in the eigenvectors approximated by the other Ritz vectors. This is essentially captured in~\eqref{eq:ybound1}, which indicates $\|y\|=O(\|R\|^2)$ (absorbing the gaps in the constant). We revisit this phenomenon for subspaces in Section~\ref{sec:RRangles}.
\end{remark}

While we will not repeat them, Remarks~\ref{rem:finite} and \ref{rem:partition} are relevant throughout the paper.
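As a quick numerical sanity check of Theorem~\ref{thm:mainscalar}, one can build a matrix of the form~\eqref{eq:Aef} with a diagonal leading block and verify the bound~\eqref{eq:boundvec} directly. The following is a NumPy sketch with hypothetical sizes and gaps (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 12, 3
gap0 = 1e-3                                  # hypothetical small gap between Ritz values

lam_hat = np.array([0.0, gap0, 2 * gap0])    # diagonal of Lambda_hat_1
A3 = np.diag(1.0 + rng.random(n - k))        # eig(A3) in (1,2): big Gap
R = 1e-4 * rng.standard_normal((n - k, k))   # small residual block

At = np.block([[np.diag(lam_hat), R.T], [R, A3]])
evals, evecs = np.linalg.eigh(At)
j = int(np.argmin(np.abs(evals - lam_hat[0])))
lam, x = evals[j], evecs[:, j]               # exact eigenpair closest to lam_hat[0]

# xhat = e_1 by construction, so sin angle(x, xhat) = norm of x with x[0] removed.
sin_angle = np.linalg.norm(x[1:])

Gap = np.min(np.abs(lam - np.diag(A3)))      # min |lam - eig(A3)|
gap = np.min(np.abs(lam - lam_hat[1:]))      # min |lam - other Ritz values|
R2 = R[:, 1:]
bound = (np.linalg.norm(R, 2) / Gap) * np.sqrt(1 + np.linalg.norm(R2, 2) ** 2 / gap ** 2)
```

With these numbers, \texttt{sin\_angle} stays below \texttt{bound}, as the theorem guarantees; shrinking \texttt{gap0} further illustrates that the bound degrades through the $\|R_2\|/{\rm gap}$ term only.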
\subsection{Experiments}\label{sec:exp}
To illustrate Theorem~\ref{thm:mainscalar}, we conduct the following experiment; throughout, all experiments were carried out in MATLAB version R2017a using IEEE double precision arithmetic with unit roundoff $u\approx 1.1\times 10^{-16}$. Let
\[ A = \begin{bmatrix} \widehat \Lambda_1 & R^*\\R & A_3 \end{bmatrix}\in\mathbb{R}^{n\times n} \]
where $n=10$ (the precise value of $n$ is insignificant), $\widehat \Lambda_1 = -\big[ \begin{smallmatrix} 1+{\rm gap}& \\ & 1 \end{smallmatrix}\big]$ and $A_3\succeq 0$, so that ${\rm Gap}\geq 1$. We take $R\in\mathbb{R}^{(n-2)\times 2}$ to be randomly generated matrices using MATLAB's {\tt randn} function, scaled so that $\|R\|$ is fixed to a value $10^{-i}$, for $i=0,\ldots,15$. For each $i$, we generate 100 such matrices $R$, and find the largest value of $\sin\angle(x,\widehat x )$ from the 100 runs (note that $\widehat x=[1,0,\ldots,0]^T$ by construction). These are shown as ``observed'' in Figure~\ref{fig:exp}, along with (i) the classical bound $\|R\|/{\rm gap}$, (ii) the new bound~\eqref{eq:boundvec}, and (iii) the bound $u/{\rm gap}$. In view of Remark~\ref{rem:finite}, $\sin\angle(x,\widehat x )$ is bounded by the maximum of~\eqref{eq:boundvec} and (a small multiple of) $u/{\rm gap}$. Of course, we always have the trivial bound $\sin\angle(x,\widehat x )\leq 1$, so putting these together, we have the following bound in finite-precision arithmetic:
\begin{equation} \label{eq:finalbound} \sin\angle(x,\widehat x )\leq \min\left(1,\max\Big(O\big(\frac{u}{{\rm gap}}\big),\frac{\|R\|}{{\rm Gap}}\sqrt{1+\frac{\|R\|^2}{{\rm gap}^2}}\Big)\right). \end{equation}
We observe in Figure~\ref{fig:exp} that this is indeed the case, and that the new bound~\eqref{eq:boundvec} gives remarkably sharp bounds for the observed values of $\sin\angle(x,\widehat x )$ (when it is not dominated by $u/{\rm gap}$, and gives a nontrivial bound $\leq 1$).
This is despite the fact that we are plotting the looser bound $\frac{\|R\|}{{\rm Gap}}\sqrt{1+\frac{\|R\|^2}{{\rm gap}^2}}$, i.e., \eqref{eq:boundvec} with $\|R_2\|$ replaced by $\|R\|$, since using $\|R_2\|$ would make the bound depend on the particular random instance of $R$. As discussed above, the new bound~\eqref{eq:boundvec} has two asymptotic behaviors: $\approx \|R\|/{\rm Gap}$ when $\|R\|\leq {\rm gap}$, and $\approx \|R\|^2/({\rm gap}\cdot {\rm Gap})$ when $\|R\|\geq {\rm gap}$. This can be seen in the plots as the change of slope in the new bound around $\|R\|\approx {\rm gap}$. From the plots with ${\rm gap}=10^{-3}$ and $10^{-5}$, we see that this transition also reflects the observed values of $\sin\angle(x,\widehat x )$ quite accurately. In all cases, the classical Davis-Kahan bound $\|R\|/{\rm gap}$ tends to be a severe overestimate (and $\|r\|/{\rm gap}$ as in~\eqref{eq:classical} is not much different); the new bound can provide nontrivial information (a bound smaller than 1) even when the Davis-Kahan bound is useless with $\|R\|/{\rm gap}>1$, and the difference between Davis-Kahan and the new bound widens as ${\rm gap}$ becomes smaller.
\begin{figure}[htpb]
\hspace{-4mm}
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=.95\textwidth]{figs/RReigvecfixgapRR1e-01.pdf}
\end{minipage}
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=.95\textwidth]{figs/RReigvecfixgapRR1e-03.pdf}
\end{minipage}\\
\vspace{2mm}
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=.95\textwidth]{figs/RReigvecfixgapRR1e-05.pdf}
\end{minipage}
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=.95\textwidth]{figs/RReigvecfixgapRR1e-10.pdf}
\end{minipage}
\caption{Illustration of our bound~\eqref{eq:boundvec} (dashed red; with $R_2$ replaced by $R$), varying ${\rm gap}$ (upper-left: ${\rm gap}=10^{-1}$, upper-right: ${\rm gap}=10^{-3},$ lower-left: ${\rm gap}=10^{-5}$, lower-right: ${\rm gap}=10^{-10}$).
Observe how sharp~\eqref{eq:boundvec} is, relative to the classical Davis-Kahan bound $\|R\|/{\rm gap}$. When the new bound falls below $u/{\rm gap}$, the bound in finite-precision arithmetic is the maximum of the new bound and $u/{\rm gap}$ (dashed black, constant line); see~\eqref{eq:finalbound}.
}
\label{fig:exp}
\end{figure}

\section{Improved error bounds for Ritz vectors}\label{sec:eigsharp}
\label{sec:improve}
The above experiments illustrate the sharpness of the bound~\eqref{eq:boundvec} given the information $\|R\|=\|[r_1,\ldots,r_k]\|$ and $\widehat\lambda_i$, along with $\min(\mbox{eig}(A_3))$. When applied in practice, however, we find that the bound~\eqref{eq:boundvec} is usually a severe overestimate, as we illustrate in Section~\ref{sec:exp2}. The reason is that it does not distinguish $r_1$ from $r_k$ (say), while typically we have $\|r_1\|\ll \|r_k\|$, reflecting the difference in speed with which each Ritz pair converges, with the extremal ones usually converging first. As noted in Section~\ref{sec:RR}, after R-R one also has information on the individual norms $\|r_i\|=\|A\widehat x_i-\widehat\lambda_i\widehat x_i\|$. Here we derive bounds that are essentially sharp, using all this information. We shall show that if the $\|r_i\|$ are sufficiently small, then $\sin\angle(x,\widehat x )\lesssim \frac{\|r_1\|}{{\rm Gap}_1}$. This is usually a massive improvement over~\eqref{eq:boundvec}, and essentially sharp: we cannot improve the bound below $\frac{\|r_1\|}{{\rm Gap}_1}$. The argument is similar to Theorem~\ref{thm:mainscalar} but requires more elaborate manipulations. The strategy is the same: bound $\|y\|$ in terms of $\|z\|$, and use this to bound $\left\|\big[ \begin{smallmatrix} y\\z \end{smallmatrix} \big]\right\|$.
\begin{theorem}\label{thm:eigrefine}
In the setting of Theorem~\ref{thm:mainscalar},
\begin{itemize}
\item If ${\rm Gap}>\frac{\|R_{2}\|^2}{{\rm gap}}$, then
\begin{equation} \label{eq:sin22} \sin\angle(x,\widehat x ) \leq \frac{\|r_1\|}{{\rm Gap}-\frac{\|R_{2}\|^2}{{\rm gap}}}\sqrt{1+\frac{\|R_{2}\|^2}{{\rm gap}^2}} . \end{equation}
\item If ${\rm Gap}>\sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}$, then
\begin{equation} \label{eq:sin2indiv} \sin\angle(x,\widehat x ) \leq \frac{\|r_1\|}{ {\rm Gap}-\sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}} \sqrt{1+ \left(\sum_{i=2}^{k}\frac{\|r_{i}\|}{|\lambda -\widehat\lambda_{i}|}\right)^2}. \end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
We first prove~\eqref{eq:sin22}. The main idea is to improve the bound~\eqref{eq:dkspecial} on $\|z\|$. As before, we have $ (\lambda I_{k-1} -\widehat \Lambda_{2})y = R_{2}^*z,$ so $\|y\|\leq \frac{\|R_{2}\|\|z\|}{{\rm gap}}$. We also have
\[ (\lambda I_{n-k}-A_3) z =[r_1\ R_{2}] \begin{bmatrix} w\\y \end{bmatrix}. \]
This gives $(\lambda I_{n-k}-A_3) z -R_{2} y=r_1w$, hence $\|(\lambda I_{n-k}-A_3) z\| -\|R_{2} y\|\leq |w|\,\|r_1\|$. Using $\|y\|\leq \frac{\|R_{2}\|\|z\|}{{\rm gap}}$ we obtain
\[ \|(\lambda I_{n-k}-A_3) z\| - \frac{\|R_{2}\|^2\|z\|}{{\rm gap}} \leq |w|\,\|r_1\|. \]
Noting that $\sigma_{\min}(\lambda I_{n-k}-A_3)={\rm Gap}$, we have $\|(\lambda I_{n-k}-A_3) z\|\geq {\rm Gap} \|z\|$, hence
\[ \Big({\rm Gap}-\frac{\|R_{2}\|^2}{{\rm gap}}\Big)\|z\| \leq |w|\,\|r_1\|. \]
Using the assumption ${\rm Gap}>\frac{\|R_{2}\|^2}{{\rm gap}}$ and the trivial bound $|w|\leq 1$ we obtain
\[ \|z\| \leq \frac{\|r_1\|}{{\rm Gap}-\frac{\|R_{2}\|^2}{{\rm gap}}}. \]
This together with $\|y\|\leq \frac{\|R_{2}\|\|z\|}{{\rm gap}}$ yields
\[ \sin\angle(x,\widehat x ) = \left\| \begin{bmatrix} y\\z \end{bmatrix} \right\| \leq \frac{\|r_1\|}{{\rm Gap}-\frac{\|R_{2}\|^2}{{\rm gap}}}\sqrt{1+\frac{\|R_{2}\|^2}{{\rm gap}^2}}, \]
giving~\eqref{eq:sin22}.
The remaining task is to prove~\eqref{eq:sin2indiv}. The idea is to improve the bound~\eqref{eq:ybound1} on $\|y\|$, or rather on its individual entries, using
\[ (\lambda I_{k-1} -\widehat \Lambda_{2})y = R_{2}^*z. \]
Writing $y=[y_2,\ldots,y_k]^T$, the $i$th ($i=2,\ldots,k$) element gives $(\lambda -\widehat\lambda_{i})y_{i} = r_{i}^*z$, hence
\begin{equation} \label{eq:ithy} |y_i|= \frac{|r_{i}^*z|}{|\lambda -\widehat\lambda_{i}|} \leq \frac{\|r_{i}\|\|z\|}{|\lambda -\widehat\lambda_{i}|},\quad i=2,\ldots, k. \end{equation}
We also have
\[ (\lambda I_{n-k}-A_3) z =[r_1,r_2,\ldots,r_k] \begin{bmatrix} w\\y_2\\\vdots\\y_{k} \end{bmatrix}. \]
This gives $(\lambda I_{n-k}-A_3) z -R_{2} y=r_1w$, and
\begin{equation} \label{eq:rindiv} \|(\lambda I_{n-k}-A_3) z\| -\sum_{i=2}^{k}|y_i|\,\|r_{i}\|\leq |w|\,\|r_1\|, \end{equation}
so using~\eqref{eq:ithy} we obtain
\[\|(\lambda I_{n-k}-A_3) z\| - \sum_{i=2}^{k}\frac{\|r_{i}\|^2\|z\|}{|\lambda -\widehat\lambda_{i}|} \leq |w|\,\|r_1\| .\]
Again using $\|(\lambda I_{n-k}-A_3) z\|\geq{\rm Gap}\|z\|$, we therefore obtain
\[ \Big({\rm Gap}- \sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}\Big)\|z\| \leq |w|\,\|r_1\|. \]
Hence, using the assumption ${\rm Gap}> \sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}$ and the trivial bound $|w|\leq 1$ we obtain
\[ \|z\| \leq \frac{\|r_1\|}{ {\rm Gap}-\sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}}. \]
The fact $\sin\angle(x,\widehat x ) = \left\| \begin{bmatrix} y\\z \end{bmatrix} \right\| $ together with~\eqref{eq:ithy} completes the proof of \eqref{eq:sin2indiv}.
\end{proof}
Note that since the bounds $\|y\|\leq \frac{\|R_{2}\|\|z\|}{{\rm gap}}$ and~\eqref{eq:ithy} are both valid, in both bounds~\eqref{eq:sin22} and~\eqref{eq:sin2indiv} the term with the square root can be replaced with the minimum, that is, $\sqrt{1+\min\bigg(\frac{\|R_{2}\|^2}{{\rm gap}^2},\big(\sum_{i=2}^{k}\frac{\|r_{i}\|}{|\lambda -\widehat\lambda_{i}|}\big)^2\bigg)}$.
This applies also to the bounds to follow, but for brevity we do not repeat this remark. We also note that the bounds~\eqref{eq:sin22} and \eqref{eq:sin2indiv} are not comparable. The bound \eqref{eq:sin22} involves the small ${\rm gap}$, which \eqref{eq:sin2indiv} avoids to some extent by using the individual residuals $\|r_i\|$; however, the heavy use of triangle inequalities leading to~\eqref{eq:rindiv} suggests that \eqref{eq:sin2indiv} can still be a significant overestimate. The ``sharpest'' bound one can obtain would be via directly bounding the norm $\|y\| = \|(\lambda I_{k-1} -\widehat \Lambda_{2})^{-1}R_{2}^*z\|$. Nonetheless, experiments suggest~\eqref{eq:sin2indiv} is often a good bound, as we illustrate now.

\subsection{Experiments}\label{sec:exp2}
We illustrate Theorem~\ref{thm:eigrefine} with experiments more practical than those in Section~\ref{sec:exp}. We let $A\in\mathbb{R}^{1000\times 1000}$ be the classical tridiagonal matrix with $2$ on the diagonal and $-1$ on the super- and subdiagonals; this is a 1D Laplacian matrix, obtained by a finite difference discretization. We then run the LOBPCG algorithm~\cite{lobpcg} to compute the smallest eigenpair with a random initial guess, working with a $k=50$-dimensional subspace. Figure~\ref{fig:lob} (left) shows the convergence of $\sin\angle(\widehat x_1,x_1)$ along with four bounds: Davis-Kahan's $\sin\angle(\widehat x_1,x_1)\leq \frac{\|r_1\|}{{\rm gap}}$, \eqref{eq:sin22}, \eqref{eq:sin2indiv}, and \eqref{eq:boundvec} from the previous section. Some data are missing for \eqref{eq:sin22} and \eqref{eq:sin2indiv} in the early steps, as they violated the assumption ${\rm Gap}>\frac{\|R_{2}\|^2}{{\rm gap}}$ or ${\rm Gap}>\sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}$; note that these assumptions can be checked inexpensively.
To estimate ${\rm Gap}$ and ${\rm gap}$ we used the available quantities ${\rm Gap} \gtrsim |\widehat\lambda_1-\widehat\lambda_{k}|$ and ${\rm gap} \approx |\widehat\lambda_1-\widehat\lambda_{2}|$; the plots look nearly identical if the exact values are used.
\begin{figure}[htbp]
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=0.95\textwidth]{figs/RitzveclobpcgRR1000.pdf}
\end{minipage}
\begin{minipage}[t]{0.49\hsize}
\includegraphics[width=0.99\textwidth]{figs/residualslobpcg1000.pdf}
\end{minipage}
\caption{Left: convergence of $\sin\angle(\widehat x_1,x_1)$ (shown as ``exact''), and its bounds~\eqref{eq:sin22}, \eqref{eq:sin2indiv} and the Davis-Kahan bound~\eqref{eq:classical}. Right: scatterplot of $\widehat\lambda_i$ vs. the residuals $\|r_i\|=\|A\widehat x_i-\widehat\lambda_i\widehat x_i\|$ for $i=1,2,\ldots,k=50$, after 20 LOBPCG iterations. Note how the $\|r_i\|$ are graded: $\|r_i\|\ll \|r_j\|$ for $i\ll j$. }
\label{fig:lob}
\end{figure}
We make several observations. First, \eqref{eq:sin2indiv} gave sharp bounds for $\sin\angle(\widehat x_1,x_1)$ when applicable. For example, after eight LOBPCG iterations, Davis-Kahan's $\sin\theta$ theorem gives bounds $>1$, suggesting $\widehat x_1$ may have no accuracy at all; nonetheless, \eqref{eq:sin2indiv} correctly shows that it has accuracy at least $\lesssim 10^{-3}$. Second, the bound \eqref{eq:boundvec} is poor throughout, because it takes the entire residual matrix norm $\|R\|$ in the numerator, without respecting the fact that the residuals $\|r_i\|=\|A\widehat x_i-\widehat\lambda_i\widehat x_i\|$ are typically graded and hence $\|r_1\|\ll \|R\|$, as illustrated in Figure~\ref{fig:lob} (right). Finally, the asymptotic behavior of the bounds as $\|R\|\rightarrow 0$ (many LOBPCG steps) is also in stark contrast: up to first order in $\|R\|$, \eqref{eq:sin22} and \eqref{eq:sin2indiv} are $\frac{\|r_1\|}{{\rm Gap}}$, whereas Davis-Kahan involves the smaller gap, giving $\frac{\|r_1\|}{{\rm gap}}$.
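The bounds of Theorem~\ref{thm:eigrefine} are inexpensive to evaluate once the Ritz data are available. The following NumPy sketch (with hypothetical graded residuals, mimicking the grading seen in Figure~\ref{fig:lob}, right) compares the Davis-Kahan bound, \eqref{eq:boundvec}, and \eqref{eq:sin2indiv} on a synthetic matrix of the form~\eqref{eq:At}:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 5

lam_hat = np.linspace(0.0, 0.4, k)            # hypothetical Ritz values
A3 = np.diag(2.0 + rng.random(n - k))         # undesired spectrum, far away (big Gap)
# Graded residual columns ||r_1|| << ... << ||r_k||, as typical in practice.
R = rng.standard_normal((n - k, k))
R /= np.linalg.norm(R, axis=0)
R *= 10.0 ** np.arange(-8.0, -3.0)            # ||r_i|| = 1e-8, ..., 1e-4

At = np.block([[np.diag(lam_hat), R.T], [R, A3]])
evals, evecs = np.linalg.eigh(At)
lam, x = evals[0], evecs[:, 0]                # exact eigenpair nearest lam_hat[0]
sin_angle = np.linalg.norm(x[1:])             # since xhat = e_1

r = np.linalg.norm(R, axis=0)                 # individual residual norms ||r_i||
Gap = np.min(np.abs(lam - np.diag(A3)))
gap = np.min(np.abs(lam - lam_hat[1:]))
d = np.abs(lam - lam_hat[1:])                 # |lam - lamhat_i| for i = 2, ..., k

dk_bound = r[0] / gap                         # Davis-Kahan-style bound (eq:classical)
b_2x2 = (np.linalg.norm(R, 2) / Gap) * np.sqrt(1 + np.linalg.norm(R[:, 1:], 2)**2 / gap**2)
b_indiv = (r[0] / (Gap - np.sum(r[1:]**2 / d))) * np.sqrt(1 + np.sum(r[1:] / d)**2)
```

In this graded setting, \texttt{b\_indiv} (i.e.,~\eqref{eq:sin2indiv}) is far smaller than both \texttt{dk\_bound} and \texttt{b\_2x2}, while still bounding \texttt{sin\_angle}, reflecting the observations above.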
\subsection{Structured condition number}\label{sec:eigpert}
Here we interpret Theorem~\ref{thm:eigrefine} from the standpoint of perturbation theory. Namely, we regard $R$ in~\eqref{eq:At} as a perturbation to the block diagonal matrix $\widetilde A_0:=\mbox{diag}(\widehat\lambda_1,\ldots,\widehat\lambda_k,A_3)$, which has as eigenvectors (among others) $e_1,\ldots,e_k$, the first $k$ canonical vectors. Examining $\angle(x_i,\widehat x_i)$ is equivalent to examining how much the eigenvector $e_i$ of $\widetilde A_0$ gets perturbed by $R$. In the opening we mentioned that the condition number of an eigenvector is $1/{\rm gap}_i$. That is, there exists a perturbation $E$ such that $\widetilde A_0+E$ has an eigenvector $\widehat e_i$ with
\begin{equation} \label{eq:eihatei} \sin\angle(e_i,\widehat e_i)=\frac{\|E\|}{{\rm gap}_i} + O(\|E\|^2). \end{equation}
Yet, the two bounds in Theorem~\ref{thm:eigrefine} show that writing $R:=\widetilde A-\widetilde A_0$ (slightly and harmlessly abusing notation), we have
\begin{equation} \label{eq:eihateiGap} \sin\angle(e_i,\widehat e_i)=\frac{\|r_i\|}{{\rm Gap}_i} + O(\|R\|^2) . \end{equation}
Note the two changes, both potentially significant: first, ${\rm gap}$ is replaced by ${\rm Gap}$. Second, the norm of the entire perturbation $\|E\|$ is replaced by the individual $\|r_i\|$, the perturbation only in the $i$th column of $\widetilde A_0$.

An explanation of this effect can be given via structured perturbation analysis. In R-R, the perturbation $R$ in \eqref{eq:At} is highly structured in two ways: the nonzero pattern, and the grading of $\|r_i\|$. For example, the perturbation $E$ that would perturb the eigenvector $e_1$ the most is one supported on the $(1,2)$ and $(2,1)$ elements of $\widetilde A_0$, as these connect the eigenvalues $\lambda_1$ and $\lambda_2$, resulting in the (unstructured) condition number $1/|\lambda_1-\lambda_2|=1/{\rm gap}_1$. However, these elements are forced to be zero by the R-R construction.
Within the structured perturbation allowed in R-R, $e_1$ is perturbed most by the $(k+1,1)$ and $(1,k+1)$ elements, assuming for the moment that $A_3$ has been diagonalized. These elements connect the eigenvalues $\lambda_1$ and $\min(\lambda(A_3))\approx \lambda_{k+1}$, resulting in the structured condition number $1/{\rm Gap}_1$. Regarding the grading of $\|r_i\|$, the $r_j$ ($j\neq i$) terms have no effect on $\widehat e_i$ up to $O(\|r_j\|^2)$, making $r_i$ the only term that affects the leading term in~\eqref{eq:eihateiGap}. \section{Bounds for invariant subspaces}\label{sec:RRangles} We now turn to bounding errors for invariant subspaces spanned by more than one eigenvector. Subspaces are the natural object in many applications; moreover, it is sometimes necessary to work with subspaces instead of individual eigenvectors, namely when multiple or near-multiple eigenvalues are present. For example, if ${\rm gap}=O(u)$, none of the above bounds would be useful, as the $O(\frac{u}{{\rm gap}})$ term in~\eqref{eq:finalbound} due to roundoff errors is always present. Below we derive bounds that give useful information in such cases. We briefly recall the definition of angles between subspaces. The angles $\{\theta_i\}_{i=1}^{k_1}$ between two subspaces spanned by $X\in\mathbb{C}^{n\times k_1},Y\in\mathbb{C}^{n\times k_1}$ with orthonormal columns are defined by $\theta_i=\mbox{acos}(\sigma_i(X^*Y))$ and denoted by $\angle(X,Y)$; they are known as the canonical angles or principal angles~\cite[Thm.~6.4.3]{golubbook4th}. Equivalently, we have $\sin\theta_i=\sigma_i(X_\perp^*Y)$ (as can be verified e.g. via the CS decomposition~\cite[Thm.~2.5.2]{golubbook4th}), which is what we use below (and used above for $k_1=1$ to obtain~\eqref{eq:sinx1xh1}).
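The two equivalent characterizations of the canonical angles are easy to check numerically; the following sketch (with arbitrary random subspaces, purely for illustration) compares $\mathrm{acos}(\sigma_i(X^*Y))$ with the angles recovered from $\sigma_i(X_\perp^*Y)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k1 = 50, 3
X, _ = np.linalg.qr(rng.standard_normal((n, k1)))   # orthonormal basis of X
Y, _ = np.linalg.qr(rng.standard_normal((n, k1)))   # orthonormal basis of Y

# Canonical angles via cosines: cos(theta_i) = sigma_i(X^* Y)
cosines = np.linalg.svd(X.T @ Y, compute_uv=False)
angles = np.arccos(np.clip(cosines, 0.0, 1.0))

# Equivalent characterization via sines: sin(theta_i) = sigma_i(X_perp^* Y)
Qc, _ = np.linalg.qr(X, mode='complete')
Xperp = Qc[:, k1:]                                  # orthonormal basis of X^perp
sines = np.linalg.svd(Xperp.T @ Y, compute_uv=False)
```

The two singular-value computations return the angles in opposite orders (cosines descending, sines descending), so the sets must be compared after sorting.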
To clarify the situation, rewrite~\eqref{eq:At} as \begin{equation} \label{reftantheta} \widetilde A:= [\widehat X_{1}\ \widehat X_{2}\ \widehat X_3]^\ast A[\widehat X_{1}\ \widehat X_{2}\ \widehat X_3]= \begin{bmatrix} \widehat \Lambda_1&0& R^\ast _1\\ 0&\widehat \Lambda_2& R^\ast _2\\ R_1& R_2& A_3 \end{bmatrix}, \end{equation} where $[\widehat X_{1}\ \widehat X_{2}\ \widehat X_3]$ is an orthogonal matrix, with $[\widehat X_{1}\ \widehat X_{2}] = Q\Omega$, $\widehat X_{1}\in\mathbb{C}^{n\times k_1}, \widehat X_{2}\in\mathbb{C}^{n\times (k-k_1)}, $ and $\widehat X_3\in\mathbb{C}^{n\times (n-k)}$. Our goal is to bound $\unii{\sin\angle(\widehat X_1,X_1)}$ from above, where $X_1\in\mathbb{C}^{n\times k_1}$ is a matrix of $k_1$ exact eigenvectors of $A$, i.e., $AX_1=X_1\Lambda_1$. Defining $\widetilde X_1=[\widehat X_{1}\ \widehat X_{2}\ \widehat X_3]^*X_1$, we have $\widetilde A \widetilde X_1=\widetilde X_1\Lambda_1$, so the columns of $\widetilde X_1$ are eigenvectors of $\widetilde A$. With the partitioning $\widetilde X_1 = \begin{bmatrix} W\\ Y\\ Z \end{bmatrix}$ with $W\in\mathbb{C}^{k_1\times k_1}, Y\in\mathbb{C}^{(k-k_1)\times k_1}, Z\in\mathbb{C}^{(n-k)\times k_1}$, it therefore follows that $\uniinv{\sin\angle(\widehat X_1,X_1)}= \uniinv{[\widehat X_2\ \widehat X_3]^*X_1}= \uniinv{ \begin{bmatrix} Y\\ Z \end{bmatrix}}$. This extends~\eqref{eq:sinx1xh1}, and is a key identity in the forthcoming analysis. Sometimes we deal with the angles between subspaces of different dimensions, say $[\widehat X_1\ \widehat X_2]\in\mathbb{C}^{n\times k}$ and $X_1\in\mathbb{C}^{n\times k_1}$ with $k_1\leq k$. In this case the angles are defined via $\sin\theta_i=\sigma_i(X_1^*([\widehat X_1\ \widehat X_2]_\perp))$ for $i=1,\ldots,k_1$. Here is the extension of the previous bounds to invariant subspaces. Note that ${\rm gap}$ and ${\rm Gap}$ are redefined; we use the same notation, as they reduce to the same values when $k_1=1$.
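Both the block structure of~\eqref{reftantheta} and the key identity are straightforward to verify numerically. The following sketch (illustrative sizes, not from the paper) forms $\widetilde A$ by Rayleigh-Ritz, checks that its leading $k\times k$ block is diagonal, and compares the singular values of the $[Y;Z]$ block with the sines of the canonical angles computed independently:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, k1 = 40, 8, 3

A = rng.standard_normal((n, n)); A = (A + A.T) / 2
w, V = np.linalg.eigh(A)                    # ascending eigenvalues

# Orthonormal trial basis roughly spanning the leading invariant subspace
Q, _ = np.linalg.qr(V[:, -k:] + 1e-2 * rng.standard_normal((n, k)))

# Rayleigh-Ritz: Xhat holds the Ritz vectors, leading ones first
theta, Om = np.linalg.eigh(Q.T @ A @ Q)
Xhat = Q @ Om[:, ::-1]

# Complete [Xhat1 Xhat2 Xhat3] to an orthogonal matrix and form Atilde
Qc, _ = np.linalg.qr(Xhat, mode='complete')
U = np.hstack([Xhat, Qc[:, k:]])
At = U.T @ A @ U

# Leading k x k block of Atilde is diagonal (the Ritz values)
offdiag = At[:k, :k] - np.diag(np.diag(At[:k, :k]))

X1 = V[:, -k1:]                             # exact leading eigenvectors
YZ = (U.T @ X1)[k1:, :]                     # the [Y; Z] block of Xtilde_1

# Sines of the canonical angles, computed independently from the cosines
cosines = np.linalg.svd(Xhat[:, :k1].T @ X1, compute_uv=False)
sines = np.sqrt(np.clip(1.0 - cosines**2, 0.0, None))
```

Note that the agreement of $\sigma([Y;Z])$ with the sines is an exact algebraic identity; it holds regardless of how accurate the trial subspace is.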
\begin{theorem}\label{thm:mainsubspace} Let $A,\widetilde A$ be as in~\eqref{reftantheta}, with $(\widehat \Lambda_{1},\widehat X_1)$ being $k_1$ Ritz pairs. Let $(\Lambda_1,X_1)$ be a set of $k_1$ exact eigenpairs $AX_1=X_1\Lambda_1$. Let ${\rm Gap}=\min|\lambda(\Lambda_1)-\lambda(A_3)|$ and ${\rm gap}=\min|\lambda(\Lambda_1)-\lambda(\widehat \Lambda_2)|$. Then writing $R=[R_1\ R_2]:=[R_1\ r_{k_1+1},\ldots,r_{k}]\in\mathbb{C}^{n\times k}$ where $R_1\in\mathbb{C}^{n\times k_1}$, we have \begin{equation} \label{eq:boundvecsubspace} \uniinv{\sin\angle(X,\widehat X)} \leq \frac{\uniinv{R}}{{\rm Gap}}(1+\frac{\|R_2\|}{{\rm gap}}), \qquad \|\sin\angle(X,\widehat X)\|_{2,F}\leq \frac{\|R\|_{2,F}}{{\rm Gap}}\sqrt{1+\frac{\|R_2\|^2}{{\rm gap}^2}}. \end{equation} Moreover, if ${\rm Gap}>\frac{\|R_2\|^2}{{\rm gap}}$ then \begin{align} \left\|\sin\angle(X,\widehat X)\right\|_{2,F} \label{eq:sin22subF}&\leq \frac{\|R_1\|_{2,F}}{{\rm Gap}-\frac{\|R_2\|^2}{{\rm gap}}}\sqrt{1+\frac{\|R_2\|^2}{{\rm gap}^2}}, \end{align} and if ${\rm Gap}>\sum_{i=k_1+1}^{k}\frac{\|r_{i}\|^2}{\min|\lambda(\Lambda_1) -\widehat\lambda_{i}|} $ then \begin{equation} \label{eq:sin22subindF} \left\|\sin\angle(X,\widehat X)\right\|_{2,F} \leq \frac{\|R_1\|_{2,F}}{ {\rm Gap}-\sum_{i=k_1+1}^{k}\frac{\|r_{i}\|_{2}^2}{\min|\lambda(\Lambda_1) -\widehat\lambda_{i}|} }\sqrt{1+ \left(\sum_{i=k_1+1}^{k}\frac{\|r_{i}\|}{\min|\lambda(\Lambda_1) -\widehat\lambda_{i}|}\right)^2}. \end{equation} \end{theorem} \begin{proof} The proof mimics that of Theorem~\ref{thm:eigrefine}, extending the discussion from vectors to subspaces. Let $\widetilde X_1 = [\widehat X_1,\widehat X_2,\widehat X_3]^*X_1= \begin{bmatrix} W\\ Y\\Z \end{bmatrix}\in\mathbb{C}^{n\times k_1}$ be an invariant subspace of $\widetilde A$ such that \begin{equation} \label{eq:Atblock} \widetilde A \begin{bmatrix} W\\ Y\\Z \end{bmatrix}= \begin{bmatrix} W\\ Y\\Z \end{bmatrix}\Lambda_1.
\end{equation} Then the bottom part of the equation gives \begin{equation} \label{eq:Zeqn} Z\Lambda_1 -A_3 Z =[R_1\ R_2] \begin{bmatrix} W\\Y \end{bmatrix}=R\begin{bmatrix} W\\Y \end{bmatrix}. \end{equation} Using a well-known bound for Sylvester equations~(e.g.~\cite[Lem.~2]{li2015convergence}, \cite[Ch.~V]{stewart-sun:1990}), along with the fact $\min|\lambda(\Lambda_1)-\lambda(A_3)|={\rm Gap}$, we obtain \begin{equation} \label{eq:dkspecialsub} \uniinv{Z} \leq \frac{\uniinv{R}\left\|\begin{bmatrix} W\\Y\end{bmatrix}\right\|}{{\rm Gap} } \leq \frac{\uniinv{R}}{{\rm Gap}}, \end{equation} where for the last inequality we used the fact $\uniinv{XY}\leq \uniinv{X}\|Y\|$~\cite[Cor. 3.5.10]{hornjohntopics}. As in~\eqref{eq:dkspecial}, this is the generalized Davis-Kahan $\sin\theta$ theorem. From the second block of~\eqref{eq:Atblock} we have \begin{equation}\label{eq:Ylam} Y\Lambda_1 -\widehat \Lambda_{2}Y = R_2^*Z, \end{equation} hence again from the Sylvester equation bound \begin{equation} \label{eq:Ynrmuniinv} \uniinv{Y} \leq \frac{\|R_2\|\uniinv{Z}}{{\rm gap}}. \end{equation} Together with~\eqref{eq:dkspecialsub} we obtain \begin{equation} \label{eq:y2knyasub} \uniinv{Y} \leq \frac{\uniinv{R}\|R_2\|}{{\rm gap}\cdot {\rm Gap}} . \end{equation} Therefore, we conclude that \[ \uniinv{\sin\angle(X,\widehat X)}= \uniinv{ \begin{bmatrix} Y\\Z \end{bmatrix}} \leq \frac{\uniinv{R}}{{\rm Gap}}(1+\frac{\|R_2\|}{{\rm gap}}), \] the first inequality in~\eqref{eq:boundvecsubspace}. For the spectral and Frobenius norms, using the stronger inequality \begin{equation} \label{eq:2Fget} \left\| \begin{bmatrix} A\\B \end{bmatrix}\right\|_{2,F}\leq \sqrt{\left\|A \right\|_{2,F}^2 +\left\|B\right\|_{2,F}^2 } , \end{equation} we obtain the second result in~\eqref{eq:boundvecsubspace}. We next prove~\eqref{eq:sin22subF}. \ignore{ We first improve the bound~\eqref{eq:dkspecialsub}.
As before we have \begin{equation} \label{eq:Ylam} Y\Lambda_1 -\widehat \Lambda_{2}Y = R_2^*Z, \end{equation} so $\|Y\|\leq \frac{\|R_2\|\|Z\|}{{\rm gap}}$ by the bound for Sylvester's equation. Now \[ Z\Lambda_1 -A_3 Z =[R_1\ R_2] \begin{bmatrix} W\\Y \end{bmatrix}, \] } From~\eqref{eq:Zeqn} we obtain $\uniinv{Z\Lambda_1 -A_3 Z } -\uniinv{R_2Y}\leq \uniinv{R_1W}$, hence using~\eqref{eq:Ynrmuniinv} we have \[ \uniinv{Z\Lambda_1 -A_3 Z }- \frac{\|R_2\|^2\uniinv{Z}}{{\rm gap}} \leq \uniinv{R_1W}. \] Again using the Sylvester equation bound $\uniinv{Z\Lambda_1 -A_3 Z}\geq {\rm Gap} \uniinv{Z}$, we therefore obtain \[ ({\rm Gap}-\frac{\|R_2\|^2}{{\rm gap}})\uniinv{Z} \leq \uniinv{R_1W}. \] Hence using the assumption ${\rm Gap}>\frac{\|R_2\|^2}{{\rm gap}}$ and the trivial bound $\|W\|\leq 1$ along with $\uniinv{R_1W}\leq \uniinv{R_1}\|W\|$, we obtain \[ \uniinv{Z} \leq \frac{\uniinv{R_1}}{{\rm Gap}-\frac{\|R_2\|^2}{{\rm gap}}}. \] Finally, using~\eqref{eq:2Fget} again we obtain \[ \left\|\sin\angle(X,\widehat X )\right\|_{2,F} = \left\| \begin{bmatrix} Y\\Z \end{bmatrix} \right\|_{2,F} \leq \frac{\|R_1\|_{2,F}}{{\rm Gap}-\frac{\|R_2\|^2}{{\rm gap}}}\sqrt{1+\frac{\|R_2\|^2}{{\rm gap}^2}}, \] giving~\eqref{eq:sin22subF}. It remains to establish \eqref{eq:sin22subindF}. Taking the $i$th row of~\eqref{eq:Ylam} gives $y_{i}\Lambda_1 -\widehat\lambda_{k_1+i}y_{i} = r_{k_1+i}^*Z$ for $i=1,\ldots,k-k_1$, where $y_{i}$ is the $i$th row of $Y$. Hence \begin{equation} \label{eq:ithysub} \uniinv{y_{i}}\leq \frac{\|r_{k_1+i}\|\uniinv{Z}}{\min|\lambda(\Lambda_1) -\widehat\lambda_{k_1+i}|}, \quad i=1,\ldots,k-k_1. \end{equation} We also have $ Z\Lambda_1 -A_3 Z =[R_1,r_{k_1+1},\ldots,r_k] \begin{bmatrix} W\\y_1\\\vdots\\y_{k-k_1} \end{bmatrix}. 
$ This gives $(Z\Lambda_1 -A_3 Z)-[r_{k_1+1},\ldots,r_k]\begin{bmatrix} y_1\\\vdots\\y_{k-k_1} \end{bmatrix}=R_1W$, so using~\eqref{eq:ithysub} we obtain \begin{equation} \label{eq:rindivsubspace} \uniinv{Z\Lambda_1 -A_3 Z} - \sum_{i=k_1+1}^{k}\frac{\|r_{i}\|^2\uniinv{Z}}{ \min|\lambda(\Lambda_1) -\widehat\lambda_{i}|} \leq \uniinv{R_1W} . \end{equation} Since $\uniinv{Z\Lambda_1 -A_3 Z }\geq {\rm Gap}\,\uniinv{Z}$ as before, this gives \[ ({\rm Gap}- \sum_{i=k_1+1}^{k}\frac{\|r_{i}\|^2}{\min|\lambda(\Lambda_1) -\widehat\lambda_{i}|} )\uniinv{Z} \leq \uniinv{R_1W}\leq \uniinv{R_1}\|W\|. \] Hence using the assumption ${\rm Gap}> \sum_{i=k_1+1}^{k}\frac{\|r_{i}\|^2}{\min|\lambda(\Lambda_1) -\widehat\lambda_{i}|}$ and the trivial bound $\|W\|\leq 1$ we obtain \[ \uniinv{Z} \leq \frac{\uniinv{R_1}}{ {\rm Gap}-\sum_{i=k_1+1}^{k}\frac{\|r_{i}\|^2}{\min|\lambda(\Lambda_1) -\widehat\lambda_{i}|} }. \] We use the fact $\uniinv{\sin\angle(X,\widehat X)} = \uniinv{ \begin{bmatrix} Y\\Z \end{bmatrix}} $ together with~\eqref{eq:ithysub} to complete the proof of \eqref{eq:sin22subindF}, again using~\eqref{eq:2Fget}. \end{proof} \ignore{ Then the bottom part of the equation gives \[ Z\Lambda - FZ =RY. \] Using a well-known bound for Sylvester's equation we obtain \begin{equation} \label{eq:dkspecial} \|Z\|\leq \|R\|/{\rm Gap} . \end{equation} Note that the denominator is ${\rm Gap}$, not ${\rm gap}$. Note that the last bound is Davis-Kahan when the perturbation is off-diagonal. Now, write $Y=\begin{bmatrix} Y_1\\Y_2\end{bmatrix}$ where $Y_1$ contains the desired part. Then from the second part of the first block of $A \begin{bmatrix} Y\\Z \end{bmatrix}=\lambda \begin{bmatrix} Y\\Z \end{bmatrix}$ we obtain (writing $A_1 = \begin{bmatrix} A_{11}& 0\\ 0& A_{22}\end{bmatrix}$; crucially, the off-diagonal blocks are zero due to Rayleigh-Ritz) \[ Y_2\Lambda - A_{22}Y_2 = R^*Z, \] from which we obtain the important bound \[ \|Y_2\| \leq \frac{\|R\|}{{\rm gap}}\|Z\|.
\] Combining with~\eqref{eq:dkspecial} we obtain \[ \|Y_2\| =\|R\|^2/({\rm gap}\cdot {\rm Gap}). \] Therefore, we conclude that \[ \sin\angle(X,\widehat x )^2 = \left\| \begin{bmatrix} Y_2\\Z \end{bmatrix} \right\|^2 \leq \frac{\|R\|^4}{{\rm gap}^2\cdot {\rm Gap}^2}+\frac{\|R\|^2}{{\rm Gap}^2}, \] } Four remarks are in order. \begin{remark}[Vector vs. subspace bounds] The $\|\cdot\|_{2,F}$ bounds in Theorem~\ref{thm:mainsubspace} reduce to the vector bounds in the previous sections by taking $k_1=1$. Thus they can be regarded as proper generalizations. \end{remark} \begin{remark}[Bounds for unitarily invariant norms] Just like the two bounds in~\eqref{eq:boundvecsubspace}, \eqref{eq:sin22subF} and~\eqref{eq:sin22subindF} have their counterparts applicable to any unitarily invariant norm, obtained by replacing the final factors of the form $\sqrt{1+b^2}$ by $1+b$. The proof is identical up to the very end, where we use the bound $\uniinv{\begin{bmatrix}A\\B\end{bmatrix}}\leq \uniinv{A}+\uniinv{B}$ instead of~\eqref{eq:2Fget}. \end{remark} \begin{remark}[Question 10.3 by Davis-Kahan] At the end of their landmark paper, Davis and Kahan~\cite{daviskahan} suggest four open problems. Among them, Question 10.3 asks for an extension of their theorems to the case where $\mathbb{C}^n$ is split into three (instead of two) subspaces in two ways, $X_1,X_2,X_3$ (exact eigenspaces) and $\widehat X_1,\widehat X_2,\widehat X_3$ (approximate ones). Namely, using information such as the Ritz values and residuals, can one bound the subspace angles? We argue that the above results give an answer---the setting in~\eqref{reftantheta} is precisely in this form, and Theorem~\ref{thm:mainsubspace} gives sharp bounds for $\unii{\sin\angle(X_1,\widehat X_1)}$.
\end{remark} \begin{remark}[On the definition of {\rm gap}]\label{rem:gapdef} As mentioned in footnote~\ref{foot1}, there is a difference in what is measured between our ${\rm gap},{\rm Gap}$ in~\eqref{eq:Gapgapdef} and ${\rm gap}_c$ in the classical treatments~\cite{daviskahan},\cite[Ch.~11]{parlettsym}. For example,~\eqref{eq:boundvecsubspace} reduces to $\frac{\|r_1\|}{{\rm Gap}}$ when $k_1=k=1$ (hence $R_2$ is empty), but ${\rm Gap}$ here is the difference between $\lambda$ (an exact desired eigenvalue) and $\lambda(A_3)$ (approximations to undesired eigenvalues), and the same can be said of ${\rm gap}$ in~\eqref{eq:Gapgapdef}. By contrast, ${\rm gap}_c$\ in \eqref{eq:classical} is the distance between the approximate desired eigenvalue $\widehat\lambda$ and exact undesired eigenvalues. It turns out that for~\eqref{eq:classical}, both gaps are applicable---indeed, we can obtain~\eqref{eq:classical} from Theorem~\ref{thm:mainsubspace}, taking $k_1=k=n-1$: note that $\|\sin\angle(x,\widehat x)\|= \|\widehat x^TX_\perp\|= \|\sin\min\angle(X_\perp,\widehat X_\perp)\|$ (this can be verified e.g. via the CS decomposition~\cite[Thm.~2.5.2]{golubbook4th}; here $[x,X_\perp]$ and $[\widehat x,\widehat X_\perp]$ are orthogonal), and invoke~\eqref{eq:boundvecsubspace} taking $X\leftarrow X_\perp$, $\widehat X\leftarrow \widehat X_\perp$, and $\Lambda_2$ empty. Then ${\rm Gap}$ in \eqref{eq:boundvecsubspace} becomes ${\rm gap}$ in \eqref{eq:classical}, and since the residuals are related by $R=r_1^T$, their norms are the same, $\|R\|=\|r_1\|$, so \eqref{eq:boundvecsubspace} reduces precisely to~\eqref{eq:classical}. In other words, the bound~\eqref{eq:classical} holds regardless of which definition of {\rm gap}\ is used. However, the analysis in this paper uses ${\rm gap},{\rm Gap}$ in \eqref{eq:Gapgapdef}, and we have not proved that our theorems hold with ${\rm gap}_c$.
\end{remark} \begin{remark}[Proof techniques] The reader may also have noticed that the proofs above are essentially repeated applications of well-known norm inequalities in matrix analysis. One might then wonder why they give stronger results than previous ones. The answer appears to lie in~\eqref{eq:At}---the simple but crucial unitary transformation from $A$ to $\widetilde A$ that reduces the task to bounding $\|Y\|,\|Z\|$, as in \eqref{eq:sinx1xh1}. By contrast, most classical results work with $A$ and start from the residual equation $A\widehat X_1-\widehat X_1\widehat \Lambda_1=R$ and derive bounds on $\angle(X_1,\widehat X_1)$: for example, the Davis-Kahan $\sin\theta$ theorem can be obtained essentially by left-multiplying by $X_3^T$, taking $X_2$ empty. Knyazev~\cite[Sec.~4]{knyazevmc97} employed ingenious techniques to obtain (among others) essentially~\eqref{eq:y2knyasub}. Obtaining ``sharper'' bounds like~\eqref{eq:rindivsubspace} in a similar manner appears to be highly challenging. Once we reformulate the problem as in~\eqref{eq:At}--\eqref{eq:sinx1xh1}, the derivation becomes significantly simpler (in the author's opinion). \end{remark} \section{SVD} We now present an SVD analogue of Theorem~\ref{thm:mainsubspace}, deriving bounds for the accuracy of singular vectors and singular subspaces obtained by a Petrov-Galerkin projection method. Such methods proceed as follows: project $A$ onto lower-dimensional trial subspaces spanned by $\widehat U\in\mathbb{C}^{m\times k_m}, \widehat V\in\mathbb{C}^{n\times k_n}$ having orthonormal columns (for how to choose $\widehat U,\widehat V$ see e.g.~\cite{baitemplates,halko2011finding,wu2017primme_svds}), compute the SVD of the small $k_m\times k_n$ matrix $\widehat U^*A\widehat V = \widetilde U\widehat \Sigma \widetilde V^*$ and obtain an approximate economical SVD as $A\approx (\widehat U \widetilde U)\widehat \Sigma (\widehat V\widetilde V)^*$, which is of rank $\leq \min(k_m,k_n)$.
Some of the columns of $\widehat U \widetilde U$ and $\widehat V \widetilde V$ then approximate the exact left and right singular vectors of $A$. Our goal is to quantify their accuracy. We focus on the most frequently encountered case where an approximate SVD is sought, that is, the leading singular vectors are being approximated. \begin{theorem}\label{thm:svdmain} Let $A\in\mathbb{C}^{m\times n}$ with $m\geq n$, of the form \begin{equation} \label{eq:svdbasis} [\widehat U_1\ \widehat U_2\ \widehat U_3]^{\ast}A[\widehat V_1\ \widehat V_2\ \widehat V_3]= \begin{bmatrix} \widehat \Sigma_1&0& R_1\\ 0&\widehat \Sigma_2& R_2\\ S_1& S_2& A_3 \end{bmatrix}=:\widetilde A, \end{equation} where $[\widehat U_1\ \widehat U_2\ \widehat U_3]$ and $[\widehat V_1\ \widehat V_2\ \widehat V_3]$ are square unitary, and $\widehat \Sigma_1\in\mathbb{R}^{k_1\times k_1}$, $\widehat \Sigma_2\in\mathbb{R}^{(k_m-k_1)\times (k_n-k_1)}$ with $\big[\begin{smallmatrix}\widehat \Sigma_1 & \\& \widehat \Sigma_2 \end{smallmatrix} \big]$ equal to $\big[\begin{smallmatrix}{\rm diag}(\widehat\sigma_1,\widehat\sigma_2,\ldots,\widehat\sigma_k) \\ 0\end{smallmatrix} \big]$ if $k_m\geq k_n=k$, and $\big[\begin{smallmatrix}{\rm diag}(\widehat\sigma_1,\widehat\sigma_2,\ldots,\widehat\sigma_k)\ 0_{k\times (k_n-k)}\end{smallmatrix} \big]$ if $k=k_m< k_n$. Let $(\Sigma_1,U_1,V_1)$ be the set of $k_1$ leading singular triplets of $A$. Define ${\rm Gap}=\min(\sigma(\widehat \Sigma_1)-\sigma(A_3))$ and ${\rm gap} = \sigma_{\min}(\Sigma_1)-\|\widehat \Sigma_2\|$, and suppose that ${\rm Gap},{\rm gap}>0$. Write $[S_1\ S_2]=S$, $\big[ \begin{smallmatrix} R_1\\R_2 \end{smallmatrix} \big]=R$ and for brevity define $\uni{\Theta}:=\max(\unii{\sin\angle(U_1,\widehat U_1)},\unii{\sin\angle(V_1,\widehat V_1)})$. Then we have \begin{equation} \label{eq:boundvecsvd} \uni{\Theta}\leq \frac{\max(\uni{R},\uni{S})}{{\rm Gap}}\left(1+\frac{\max(\|R_2\|,\|S_2\|)}{{\rm gap}}\right). 
\end{equation} Moreover, provided that ${\rm Gap}>\frac{\max(\|S_2\|,\|R_2\|)^2}{{\rm gap}}$, we have \begin{align} \uni{\Theta}\leq \label{eq:sin22subSVD} \frac{\max(\uni{S_1},\uni{R_1})}{{\rm Gap}-\frac{\max(\|S_2\|,\|R_2\|)^2}{{\rm gap}}} \left(1+\frac{\max(\|R_2\|,\| S_2\|) }{{\rm gap}}\right). \end{align} Finally, define $k':=\max(k_m-k_1,k_n-k_1)$, and denote by $r_{2i}^T$ the $i$th row of $R_{2}$ and by $s_{2i}$ the $i$th column of $S_{2}$ (setting $r_{2i}=0$ for $i>k_m-k_1$ and $s_{2i}=0$ for $i>k_n-k_1$, and $\widehat\sigma_{k_1+i}=0$ for $i>\min(k_m,k_n)-k_1$). If ${\rm Gap}>\sum_{i=1}^{k'} \frac{ \max(\|r_{2i}\|,\|s_{2i}\|)^2}{\sigma_{\min}(\Sigma_1)-\widehat\sigma_{k_1+i}}$, then \begin{equation} \label{eq:sin22subindSVD} \uni{\Theta}\leq \frac{ \max(\uni{S_1},\uni{R_1})}{{\rm Gap}-\sum_{i=1}^{k'} \frac{ \max(\|r_{2i}\|,\|s_{2i}\|)^2}{\sigma_{\min}(\Sigma_1)-\widehat\sigma_{k_1+i}}} \left(1+\sum_{i=1}^{k'} \frac{ \max(\|r_{2i}\|,\|s_{2i}\|)}{\sigma_{\min}(\Sigma_1)-\widehat\sigma_{k_1+i}} \right) . \end{equation} \end{theorem} Though not displayed for brevity, slightly improved bounds for $\|\cdot\|_{2,F}$ analogous to those in Theorem~\ref{thm:mainsubspace} are available for each bound above. The derivation is again the same, using the inequality $\left\|\big[\begin{smallmatrix}A\\B \end{smallmatrix}\big]\right\|_{2,F}\leq \sqrt{\left\|A \right\|_{2,F}^2+\left\|B\right\|_{2,F}^2}$. \ignore{ We follow a standard approach for extending results in symmetric eigenvalue problems, the use of the Jordan-Wielandt matrix $ \big[ \begin{smallmatrix} 0& \widetilde A\\ \widetilde A^*&0 \end{smallmatrix} \big]$, whose eigenvalues are $\pm \sigma_i(\widetilde A)$ for $i=1,\ldots,n$ and $m-n$ copies of zero.
The eigenvector matrix is $ \big[ \begin{smallmatrix} \widetilde U& \widetilde U & \widetilde U_0\\ \widetilde V& -\widetilde V& 0 \end{smallmatrix} \big]$~\cite[Thm.~I.4.2]{stewart-sun:1990}, where $\widetilde A=\widetilde U\Sigma \widetilde V^*$ is the SVD and $\widetilde U_0$ is the null space of $\widetilde A^*$. The Jordan-Wielandt matrix $J$ for $\widetilde A$, and its permuted version $P^TJP$ (taking the row/column blocks in the order $1,2,4,5,3,6$) is \[ J = \begin{bmatrix} \Scale[2]{0} & \begin{matrix} \widehat \Sigma_1&0& R_1\\ 0&\widehat \Sigma_2& R_2\\ S_1& S_2& A_3 \end{matrix} \\ \begin{matrix} \widehat \Sigma_1&0& R_1\\ 0&\widehat \Sigma_2& R_2\\ S_1& S_2& A_3 \end{matrix} & \Scale[2]{0} \end{bmatrix},\qquad P^TJP = \begin{bmatrix} \Scale[2]{0} & \begin{matrix} \widehat \Sigma_1&0\\ 0 & \widehat \Sigma_2 \end{matrix}& \begin{matrix} 0& S_1^*\\ 0 & S_2^* \end{matrix}\\ \begin{matrix} \widehat \Sigma_1&0\\ 0 & \widehat \Sigma_2 \end{matrix} & \Scale[2]{0} & \begin{matrix} R_1&0\\ R_2 & 0 \end{matrix} \\ \begin{matrix} 0&0\\ S_1 & S_2 \end{matrix} & \begin{matrix} R_1^* & R_2^*\\ 0&0 \end{matrix} & \begin{matrix} 0 & A_3^*\\ A_3& 0 \end{matrix} \end{bmatrix}. \] Let the top-left $2k\times 2k$ part of $P^TJP$ have eigenvalue decomposition $W\mbox{diag}(\widehat \Sigma_1,\widehat \Sigma_2,-\widehat \Sigma_1,-\widehat \Sigma_2)W^T$; we can take $W =\frac{1}{\sqrt{2}}\big[ \begin{smallmatrix} I& I \\ I& -I \end{smallmatrix} \big] $. 
Then \[ \begin{bmatrix} W& \\ & I_{m+n-2k} \end{bmatrix} P^TJP\mbox{diag} \begin{bmatrix} W^T& \\ & I_{m+n-2k} \end{bmatrix} = \begin{bmatrix} \begin{matrix} \widehat \Sigma_1&0\\ 0 & \widehat \Sigma_2 \end{matrix} &\Scale[2]{0} & \begin{matrix} \frac{1}{\sqrt{2}}R_1&\frac{1}{\sqrt{2}}S_1^*\\ \frac{1}{\sqrt{2}}R_2 & \frac{1}{\sqrt{2}}S_2^* \end{matrix}\\ \Scale[2]{0} & \begin{matrix} -\widehat \Sigma_1&0\\ 0 & -\widehat \Sigma_2 \end{matrix} & \begin{matrix} -\frac{1}{\sqrt{2}}R_1&\frac{1}{\sqrt{2}}S_1^*\\ -\frac{1}{\sqrt{2}}R_2 & \frac{1}{\sqrt{2}}S_2^* \end{matrix} \\ \begin{matrix} \frac{1}{\sqrt{2}}R_1^*&\frac{1}{\sqrt{2}}R_2^*\\ \frac{1}{\sqrt{2}}S_1 & \frac{1}{\sqrt{2}}S_2 \end{matrix} & \begin{matrix} -\frac{1}{\sqrt{2}}R_1^*&-\frac{1}{\sqrt{2}}R_2^*\\ \frac{1}{\sqrt{2}}S_1 & \frac{1}{\sqrt{2}}S_2 \end{matrix} & \begin{matrix} 0 & A_3^*\\ A_3& 0 \end{matrix} \end{bmatrix}. \] This matrix is now in the form~\eqref{eq:At} (taking the top-left $k\times k$ part as the matrix of Ritz values). Hence we can invoke the results in the previous sections, to give } \begin{proof} Let $\left(\Sigma_1,\widetilde U_1,\widetilde V_1\right)$ be a set of exact singular triplets of $\tilde A$, i.e., $\tilde A\widetilde V_1=\widetilde U_1\Sigma_1$ and $\widetilde U_1^*\tilde A=\Sigma_1\widetilde V_1^*$. 
Write $\widetilde V_1=\begin{bmatrix}\widetilde V_{11}\\\widetilde V_{21}\\\widetilde V_{31} \end{bmatrix}, \widetilde U_1=\begin{bmatrix}\widetilde U_{11} \\\widetilde U_{21}\\\widetilde U_{31} \end{bmatrix}$, so that \begin{equation} \label{eq:Av1} \begin{bmatrix} \widehat \Sigma_1&0& R_1\\ 0&\widehat \Sigma_2& R_2\\ S_1& S_2& A_3 \end{bmatrix} \begin{bmatrix}\widetilde V_{11}\\\widetilde V_{21}\\\widetilde V_{31} \end{bmatrix}= \begin{bmatrix}\widetilde U_{11} \\\widetilde U_{21}\\\widetilde U_{31} \end{bmatrix}\Sigma_1, \end{equation} and \begin{equation} \label{eq:Au} \begin{bmatrix}\widetilde U_{11}^{\ast} \ \widetilde U_{21}^{\ast}\ \widetilde U_{31}^{\ast} \end{bmatrix} \begin{bmatrix} \widehat \Sigma_1&0& R_1\\ 0&\widehat \Sigma_2& R_2\\ S_1& S_2& A_3 \end{bmatrix}= \Sigma_1\begin{bmatrix}\widetilde V_{11}^{\ast}\ \widetilde V_{21}^{\ast}\ \widetilde V_{31}^{\ast} \end{bmatrix} . \end{equation} As in the previous sections, we have the crucial identities \begin{equation} \label{eq:u1andv1} \uni{\sin\angle(U_1,\widehat U_1)}=\uni{ \begin{bmatrix}\widetilde U_{21} \\\widetilde U_{31} \end{bmatrix}}, \quad \uni{\sin\angle(V_1,\widehat V_1)}=\uni{ \begin{bmatrix}\widetilde V_{21} \\\widetilde V_{31} \end{bmatrix}}. \end{equation} To prove the theorem we first bound $\unii{\widetilde U_{21}}$ with respect to $\unii{\widetilde U_{31}}$, and similarly bound $\unii{\widetilde V_{21}}$ with respect to $\unii{\widetilde V_{31}}$. From the second block of~\eqref{eq:Av1} we obtain \begin{equation} \label{eq:forsvd1} \widehat \Sigma_2\widetilde V_{21}+ R_2\widetilde V_{31}=\widetilde U_{21}\Sigma_1, \end{equation} and the second block of \eqref{eq:Au} gives \begin{equation} \label{eq:forsvd2} \widetilde U_{21}^{\ast}\widehat \Sigma_2+\widetilde U_{31}^{\ast} S_2=\Sigma_1\widetilde V_{21}^{\ast}. 
\end{equation} Taking norms and using the triangle inequality and the fact $\sigma_{\min}(X)\uniinv{Y}\leq \uniinv{XY}\leq \|X\|\uniinv{Y}$ (the lower bound holds if $X\in\mathbb{C}^{m\times n}, m\geq n$) in \eqref{eq:forsvd1} and \eqref{eq:forsvd2}, we obtain \begin{equation} \label{eq:del1} \begin{split} \uniinv{\widetilde U_{21}}\sigma_{\min}(\Sigma_1)-\uniinv{\widetilde V_{21}}\|\widehat \Sigma_2\|&\leq \uniinv{ R_2\widetilde V_{31}}, \\ \uniinv{\widetilde V_{21}}\sigma_{\min}(\Sigma_1)-\uniinv{\widetilde U_{21}}\|\widehat \Sigma_2\|&\leq \uniinv{\widetilde U_{31}^{\ast} S_2}. \end{split} \end{equation} By adding the first inequality times $\sigma_{\min}(\Sigma_1)$ and the second inequality times $\|\widehat \Sigma_2\|$, we eliminate the $\unii{\widetilde V_{21}}$ term, and recalling the assumption $\sigma_{\min}(\Sigma_1)>\|\widehat \Sigma_2\|$ we obtain \begin{equation} \nonumber \uni{\widetilde U_{21}}\leq \frac{ \sigma_{\min}(\Sigma_1)\unii{R_2\widetilde V_{31}}+ \|\widehat \Sigma_2\|\unii{\widetilde U_{31}^{\ast}S_2}}{(\sigma_{\min}(\Sigma_1))^2-\|\widehat \Sigma_2\|^2}. \end{equation} Eliminating $\unii{\widetilde U_{21}}$ from~\eqref{eq:del1} similarly yields \begin{equation} \nonumber \uni{\widetilde V_{21}}\leq \frac{ \sigma_{\min}(\Sigma_1)\unii{\widetilde U_{31}^{\ast}S_2}+ \|\widehat \Sigma_2\|\unii{R_2\widetilde V_{31}}}{(\sigma_{\min}(\Sigma_1))^2-\|\widehat \Sigma_2\|^2}. \end{equation} Combining these two inequalities we obtain \begin{align} \max(\uni{\widetilde U_{21}},\uni{\widetilde V_{21}})&\leq \frac{\max(\unii{\widetilde U_{31}^{\ast}S_2},\unii{R_2\widetilde V_{31}}) }{\sigma_{\min}(\Sigma_1)-\|\widehat \Sigma_2\|}\nonumber\\ &\leq \frac{\max(\unii{\widetilde U_{31}},\unii{\widetilde V_{31}}) \max(\|R_2\|,\|S_2\|) }{\sigma_{\min}(\Sigma_1)-\|\widehat \Sigma_2\|}\nonumber\\ &= \frac{\max(\|R_2\|,\| S_2\|) }{{\rm gap}} \max(\uni{\widetilde U_{31}},\uni{\widetilde V_{31}}).
\label{eq:21to31sig} \end{align} Together with \eqref{eq:u1andv1} it follows that \begin{align} \max&(\uni{\sin\angle(U_1,\widehat U_1)},\uni{\sin\angle(V_1,\widehat V_1)})\nonumber=\max\left( \uni{ \begin{bmatrix}\widetilde U_{21} \\\widetilde U_{31} \end{bmatrix}}, \uni{ \begin{bmatrix}\widetilde V_{21} \\\widetilde V_{31} \end{bmatrix}}\right)\nonumber\\ &\leq\max( \unii{ \widetilde U_{21}} +\unii{ \widetilde U_{31}}, \unii{ \widetilde V_{21}} +\unii{ \widetilde V_{31}}) \nonumber\\ &\leq (1+\frac{\max(\|R_2\|,\| S_2\|) }{{\rm gap}}) \max(\uni{ \widetilde U_{31}},\uni{ \widetilde V_{31}}).\label{eq:sinuuvv} \end{align} The remaining task is to bound $\max(\unii{ \widetilde U_{31}},\unii{ \widetilde V_{31}})$. The bottom block of~\eqref{eq:Av1} gives \begin{equation} \label{eq:Ut31S1S2etc} S_1\widetilde V_{11}+ S_2\widetilde V_{21}+A_3\widetilde V_{31}=\widetilde U_{31} \Sigma_1. \end{equation} Hence recalling that $[S_1\ S_2]=S$ we have \begin{equation} \label{eq:simple1} \sigma_{\min}(\Sigma_1)\Uniinv{\widetilde U_{31} }\leq \Uniinv{S}+\|A_3\|\Uniinv{\widetilde V_{31}}. \end{equation} Similarly, from the last block of~\eqref{eq:Au} \begin{equation} \label{eq:eq:Ut31S1S2etc2} \widetilde U_{11}^{\ast}R_1+ \widetilde U_{21}^{\ast}R_2+ \widetilde U_{31}^{\ast} A_3=\Sigma_1\widetilde V_{31}^{\ast}, \end{equation} we obtain \begin{equation} \label{eq:simple2} \sigma_{\min}(\Sigma_1)\Uniinv{\widetilde V_{31} }\leq \Uniinv{ R} +\|A_3\|\Uniinv{\widetilde U_{31}} \end{equation} We multiply~\eqref{eq:simple1} by $\sigma_{\min}(\Sigma_1)$ and~\eqref{eq:simple2} by $\|A_3\|$, and add them to eliminate the $\unii{\widetilde V_{31}}$ terms, to obtain \begin{align*} (\sigma_{\min}(\Sigma_1)^2-\|A_3\|^2)\Uniinv{\widetilde U_{31}} &\leq \|A_3\|\Uniinv{R} +\sigma_{\min}(\Sigma_1)\Uniinv{S} \\ &\leq (\sigma_{\min}(\Sigma_1)+\|A_3\|)\max(\Uniinv{R},\Uniinv{S}). 
\end{align*} Hence, since $\sigma_{\min}(\Sigma_1)-\|A_3\|\geq \sigma_{\min}(\widehat\Sigma_1)-\|A_3\|\geq{\rm Gap}>0$ (note $\sigma_{\min}(\Sigma_1)\geq\sigma_{\min}(\widehat\Sigma_1)$, as the singular values of the compression $\widehat U^*A\widehat V$ cannot exceed those of $A$), we have \[ \Uniinv{\widetilde U_{31}}\leq \frac{\max(\Uniinv{R},\Uniinv{S})}{{\rm Gap}}. \] Eliminating the $\unii{\widetilde U_{31}}$ terms from \eqref{eq:simple1} and \eqref{eq:simple2} yields the same bound for $\unii{\widetilde V_{31}}$, hence \[ \max(\Uniinv{\widetilde U_{31}},\Uniinv{\widetilde V_{31}})\leq \frac{\max(\Uniinv{R},\Uniinv{S})}{{\rm Gap}}. \] Combine this with~\eqref{eq:21to31sig} and~\eqref{eq:u1andv1} to obtain~\eqref{eq:boundvecsvd}. We next prove~\eqref{eq:sin22subSVD}. From~\eqref{eq:Ut31S1S2etc} we also obtain \begin{equation} \label{eq:Ut311} \sigma_{\min}(\Sigma_1)\Uniinv{\widetilde U_{31}}\leq \Uniinv{S_1}+\|S_2\|\Uniinv{\widetilde V_{21}}+\|A_3\|\Uniinv{\widetilde V_{31}} , \end{equation} and from~\eqref{eq:eq:Ut31S1S2etc2}, \begin{equation} \label{eq:Vt311} \sigma_{\min}(\Sigma_1)\Uniinv{\widetilde V_{31}} \leq \Uniinv{R_1}+\Uniinv{\widetilde U_{21}}\|R_2\|+\Uniinv{\widetilde U_{31}}\|A_3\|. \end{equation} Again eliminate the $\|\widetilde V_{31}\|$ terms by multiplying~\eqref{eq:Ut311} by $\sigma_{\min}(\Sigma_1)$ and~\eqref{eq:Vt311} by $\|A_3\|$, and adding them: \begin{align*} (\sigma_{\min}&(\Sigma_1)^2-\|A_3\|^2)\Uniinv{\widetilde U_{31}}\\ &\leq \sigma_{\min}(\Sigma_1)(\Uniinv{S_1}+\|S_2\|\Uniinv{\widetilde V_{21}} ) +\|A_3\|(\Uniinv{R_1}+\|R_2\|\Uniinv{\widetilde U_{21}}) \\ &\leq (\sigma_{\min}(\Sigma_1)+\|A_3\|)(\max(\Uniinv{S_1},\Uniinv{R_1})+\max(\|S_2\|,\|R_2\|) \max(\Uniinv{\widetilde U_{21}},\Uniinv{\widetilde V_{21}})).
\end{align*} Therefore, using~\eqref{eq:21to31sig} we obtain \begin{align} \uni{\widetilde U_{31}} &\leq \frac{ \max(\uni{S_1},\uni{R_1})+\max(\|S_2\|,\|R_2\|) \max(\uni{\widetilde U_{21}},\uni{\widetilde V_{21}})}{\sigma_{\min}(\Sigma_1)-\|A_3\|}\nonumber\\ &\leq \frac{1}{{\rm Gap}}\nonumber \left(\max(\uni{S_1},\uni{R_1})+\frac{\max(\|S_2\|,\|R_2\|)^2}{{\rm gap}}\max(\uni{\widetilde U_{31}},\uni{\widetilde V_{31}})\right). \end{align} As before, eliminating $\unii{\widetilde U_{31}}$ from~\eqref{eq:Ut311} and~\eqref{eq:Vt311} yields the same bound for $\unii{\widetilde V_{31}}$, hence \begin{align*} \max(\Uniinv{\widetilde U_{31}},\Uniinv{\widetilde V_{31}}) &\leq \frac{1}{{\rm Gap}} \left(\max(\Uniinv{S_1},\Uniinv{R_1})+\frac{\max(\|S_2\|,\|R_2\|)^2}{{\rm gap}}\max(\Uniinv{\widetilde U_{31}},\Uniinv{\widetilde V_{31}})\right). \end{align*} Therefore using the assumption ${\rm Gap}>\frac{\max(\|S_2\|,\|R_2\|)^2}{{\rm gap}}$ we obtain \begin{align*} \max(\Uniinv{\widetilde U_{31}},\Uniinv{\widetilde V_{31}}) &\leq \frac{\max(\Uniinv{S_1},\Uniinv{R_1})}{{\rm Gap}-\frac{\max(\|S_2\|,\|R_2\|)^2}{{\rm gap}}}. \end{align*} Together with~\eqref{eq:sinuuvv} we obtain~\eqref{eq:sin22subSVD}. The remaining task is to establish~\eqref{eq:sin22subindSVD}. For this, we revisit \eqref{eq:forsvd1}, \eqref{eq:forsvd2}, and now bound the individual $\widetilde U_{21i}$, the $i$th row of $\widetilde U_{21}$. 
We obtain \[ \Uniinv{\widetilde U_{21i}}\sigma_{\min}(\Sigma_1)-\Uniinv{\widetilde V_{21i}}\widehat\sigma_{k_1+i}\leq \Uniinv{ r_{2i}^T\widetilde V_{31}}, \quad \Uniinv{\widetilde V_{21i}}\sigma_{\min}(\Sigma_1)-\Uniinv{\widetilde U_{21i}}\widehat\sigma_{k_1+i}\leq \Uniinv{ \widetilde U_{31}^*s_{2i}}. \] Eliminating $\widetilde V_{21i}$ and $\widetilde U_{21i}$ as before gives \begin{equation} \nonumber \Uniinv{\widetilde U_{21i}}\leq \frac{ \sigma_{\min}(\Sigma_1)\Uniinv{r_{2i}^T\widetilde V_{31}}+ \widehat\sigma_{k_1+i}\Uniinv{\widetilde U_{31}^{\ast}s_{2i}}}{(\sigma_{\min}(\Sigma_1))^2-\widehat\sigma_{k_1+i}^2}, \end{equation} \begin{equation} \nonumber \Uniinv{\widetilde V_{21i}}\leq \frac{ \sigma_{\min}(\Sigma_1)\Uniinv{\widetilde U_{31}^{\ast}s_{2i}}+ \widehat\sigma_{k_1+i}\Uniinv{r_{2i}^T\widetilde V_{31}}}{(\sigma_{\min}(\Sigma_1))^2-\widehat\sigma_{k_1+i}^2}. \end{equation} Note that when $k_m\neq k_n$, $(\widetilde U_{21i},r_{2i})$ or $(\widetilde V_{21i},s_{2i})$ is empty for large $i$; by taking $\widehat\sigma_{k_1+i}=0$ for such $i$ the argument carries over. We therefore have for every $i$ \begin{equation} \label{eq:ut21i} \max(\Uniinv{\widetilde U_{21i}},\Uniinv{\widetilde V_{21i}})\leq \frac{ \max(\|s_{2i}\|,\|r_{2i}\|) \max(\Uniinv{\widetilde U_{31}},\Uniinv{\widetilde V_{31}}) }{\sigma_{\min}(\Sigma_1)-\widehat\sigma_{k_1+i}}. \end{equation} From~\eqref{eq:Ut31S1S2etc} we also obtain \begin{equation} \label{eq:Ut311final} \sigma_{\min}(\Sigma_1)\Uniinv{\widetilde U_{31}}\leq \|A_3\|\Uniinv{\widetilde V_{31}}+\Uniinv{S_1}+\sum_{i=1}^{k_n-k_1}\|s_{2i}\|\Uniinv{\widetilde V_{21i}} , \end{equation} and from~\eqref{eq:eq:Ut31S1S2etc2}, \begin{equation} \label{eq:Vt311final} \sigma_{\min}(\Sigma_1)\Uniinv{\widetilde V_{31}} \leq \Uniinv{\widetilde U_{31}}\|A_3\|+\Uniinv{R_1}+ \sum_{i=1}^{k_m-k_1}\Uniinv{\widetilde U_{21i}}\|r_{2i}\|.
\end{equation} Hence eliminating $\unii{\widetilde U_{31}}$ and then $\unii{\widetilde V_{31}}$, and using~\eqref{eq:ut21i} gives \begin{align*} \max(&\unii{\widetilde U_{31}},\unii{\widetilde V_{31}})\\ \leq & \frac{1}{{\rm Gap}} \left(\max(\uni{S_1},\uni{R_1})+ \sum_{i=1}^{k'}\frac{\max(\|r_{2i}\|,\|s_{2i}\|)^2}{\sigma_{\min}(\Sigma_1)-\widehat\sigma_{k_1+i}}\max(\Uniinv{\widetilde U_{31}},\Uniinv{\widetilde V_{31}})\right). \end{align*} Thus by the assumption ${\rm Gap}>\sum_{i=1}^{k'}\frac{\max(\|r_{2i}\|,\|s_{2i}\|)^2}{\sigma_{\min}(\Sigma_1)-\widehat\sigma_{k_1+i}}$ we have \[ \max(\Uniinv{\widetilde U_{31}},\Uniinv{\widetilde V_{31}})\leq \frac{ \max(\uni{S_1},\uni{R_1})}{{\rm Gap}-\sum_{i=1}^{k'}\frac{\max(\|r_{2i}\|,\|s_{2i}\|)^2}{\sigma_{\min}(\Sigma_1)-\widehat\sigma_{k_1+i}}}. \] Finally, the bound~\eqref{eq:sin22subindSVD} follows from combining this with~\eqref{eq:ut21i} and~\eqref{eq:u1andv1}. \end{proof} We note that $R$ or $S$ in the above theorem is allowed to be empty, as in the case where a one-sided projection is employed. This includes the popular randomized SVD algorithm~\cite{halko2011finding}. We make two more remarks. \begin{remark}[Other approaches for the SVD] A standard approach to extending results in symmetric eigenvalue problems to the SVD is to use the Jordan-Wielandt matrix, for example as in~\cite[Sec.~3]{rcli05}. As pointed out in~\cite{nakatsukasa2017accuracy}, this has the slight downside of introducing spurious eigenvalues at 0. Moreover, the bounds we obtained via the Jordan-Wielandt matrix were less clean and weaker than Theorem~\ref{thm:svdmain}. Another approach is to work with the Gram matrix $A^*A$, but this unnecessarily squares the singular values and modifies ${\rm gap}$ and ${\rm Gap}$. For these reasons, we have chosen to work directly with the SVD equations.
\end{remark} \begin{remark}[Proof of \eqref{eq:boundvecsvd} via Wedin and \cite{nakatsukasa2017accuracy}] As in Remark~\ref{rem:dksaad}, a proof for~\eqref{eq:boundvecsvd} can be given by combining Wedin's result (the SVD analogue of Davis-Kahan) and~\cite{nakatsukasa2017accuracy} (SVD analogue of Saad's result). The sharper bounds~\eqref{eq:boundvecsvd} and~\eqref{eq:sin22subSVD} cannot be obtained this way. \end{remark} \ignore{ \subsubsection{Refined bounds for singular subspaces}As in Theorem~\ref{thm:eigrefine}, we can refine the bounds by using individual information about $\|R_i\|$. \begin{theorem}\label{thm:svdrefined} In the setting of Theorem~\ref{thm:svdmain}, $\max(\|\sin\angle(U_1,\widehat U_1)\|,\|\sin\angle(V_1,\widehat V_1)\|)$ is bounded from above by \begin{equation} \label{eq:sin22subSVD} \frac{\max(\|R_1\|,\|S_1\|)}{{\rm Gap}- \frac{\max(\|R_2\|,\|\widetilde \Sigma_2\|)^2}{{\rm gap}}}\sqrt{1+\frac{\max(\|R_2\|,\|\widetilde \Sigma_2\|)^2}{{\rm gap}^2}}, \end{equation} and \begin{equation} \label{eq:sin22subindSVD} \frac{\max(\|R_1\|,\|S_1\|)}{ {\rm Gap}-\max\left(\sum_{i=2}^{k}\frac{\|R_{i}\|^2}{|\sigma -\widehat\sigma_{i}|}, \sum_{i=2}^{k}\frac{\|S_{i}\|^2}{|\sigma -\widehat\sigma_{i}|} \right) }\sqrt{1+ \max\left(\sum_{i=2}^{k}\frac{\|R_{i}\|}{|\sigma -\widehat\sigma_{i}|}, \sum_{i=2}^{k}\frac{\|S_{i}\|}{|\sigma -\widehat\sigma_{i}|} \right)^2}. \end{equation} \end{theorem} We omit the proof as it is long but not enlightening, which combines techniques in the proofs of Theorems~\ref{thm:svdmain} and \ref{thm:eigrefine}. } \section{Eigenvectors of a self-adjoint operator}\label{sec:hilbert} So far we have specialized to finite-dimensional matrices as the analysis is elementary and the situation is more transparent. 
In this final section, as in~\cite{knyazevmc97,ovtchinnikov2006cluster}, we extend the discussion to the infinite-dimensional case, where the matrix is generalized to a self-adjoint operator $\mathcal{A}:\mathcal{H}\rightarrow \mathcal{H}$ on a Hilbert space $\mathcal{H}$ with inner product $\left<\cdot,\cdot\right>$. Unlike the previous studies, which assumed the operators are bounded, our discussion allows $\mathcal{A}$ to be unbounded, and is thus applicable for example to differential operators such as $\mathcal{A}u=u''$; in this case, we assume that $\mathcal{A}$ is densely defined, as is customary. Let $Q$ be a $k$-dimensional subspace of $\mathcal{H}$ with orthonormal basis $q_1,\ldots,q_k$. In the Rayleigh-Ritz process for $\mathcal{A}$, we compute the $k\times k$ matrix $A_1$ with $(i,j)$ element $\left<q_i,\mathcal{A} q_j\right>$ and its eigenvalue decomposition $A_1 = \Omega\mbox{diag}(\widehat\lambda_1,\ldots,\widehat\lambda_k) \Omega^\ast$ to obtain the Ritz values $\widehat\lambda_1,\ldots,\widehat\lambda_k$ and Ritz vectors $[q_1,\ldots,q_k]\Omega$. Denote by $Q_1,Q_2\subseteq\mathcal{H}$ the resulting Ritz subspaces corresponding to disjoint sets of eigenvalues of $A_1$ (we have $Q=Q_1\oplus Q_2$), and let $Q_3$ be the (infinite-dimensional) orthogonal complement of $Q$, so that $\mathcal{H}=Q_1\oplus Q_2\oplus Q_3$ is an orthogonal direct sum. For simplicity, we treat the case where $Q_1$ is one-dimensional (subspace versions can be obtained, generalizing Section~\ref{sec:RRangles}). That is, let $(\widehat\lambda,\widehat u)$ be a Ritz pair with $\widehat u=q_1$, and suppose that $\mathcal{A}u = \lambda u$; note that this is an assumption, as a self-adjoint operator may not have any eigenvalue (e.g.~\cite[Ch.~9]{hunter}), although the spectrum is always nonempty. The goal is to bound $\sin\angle(\widehat u,u)$. Denote by $\mathcal{P}_i$ the orthogonal projector onto each subspace $Q_i$.
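As a concrete finite-dimensional illustration of the Rayleigh-Ritz process just described, the following sketch (our own toy example in Python; the $4\times 4$ matrix, the trial basis, and all numbers are illustrative choices, not taken from the paper) forms the small matrix $A_1$ with entries $\left<q_i,\mathcal{A} q_j\right>$ and reads off the Ritz values:

```python
import math

# Toy self-adjoint "operator": a 4x4 diagonal matrix (illustrative choice).
A = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 2.0, 0.0, 0.0],
     [0.0, 0.0, 5.0, 0.0],
     [0.0, 0.0, 0.0, 6.0]]

# Orthonormal trial basis q_1, q_2; here it spans the invariant subspace
# of the two smallest eigenvalues (again a hypothetical choice).
q1 = [1.0, 0.0, 0.0, 0.0]
q2 = [0.0, 1.0, 0.0, 0.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Rayleigh-Ritz: A_1 has (i,j) entry <q_i, A q_j>.
Q = [q1, q2]
A1 = [[dot(qi, matvec(A, qj)) for qj in Q] for qi in Q]

# Eigenvalues of the symmetric 2x2 matrix A1 (closed form) = Ritz values.
a, b, c = A1[0][0], A1[0][1], A1[1][1]
center = (a + c) / 2.0
radius = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
ritz_values = sorted([center - radius, center + radius])

print(ritz_values)
```

Because the trial space in this toy example happens to be invariant under $A$, the Ritz values coincide with exact eigenvalues; in general they are only approximations, which is what the bounds in this section quantify.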
We define $\mathcal{A}_{ij}:=\mathcal{P}_i\mathcal{A} \mathcal{P}_j$. Then the R-R process forces $\mathcal{A}_{12}=0, \mathcal{A}_{21}=0$. Note that $\mathcal{A}_{13}^*=\mathcal{A}_{31}, \mathcal{A}_{23}^*=\mathcal{A}_{32}$ (where $*$ denotes the adjoint of the operators), and these terms represent the residuals, hence we write $\|R_1\|=\|\mathcal{A}_{31}\|$ and $\|R\|=\|\mathcal{A}_{31}+\mathcal{A}_{32}\|$. Also define $\|R_2\|=\|\mathcal{A}_{32}\| (=\|\mathcal{A}_{23}\|)$, and $\|r_i\|=\|\mathcal{A}_{32}\mathcal{P}_{2,i}\|(=\|\mathcal{P}_{2,i}\mathcal{A}_{23}\|)$ for $i=2,\ldots,k$, where $\mathcal{P}_{2,i}$ is the $1$-dimensional projection onto the $i$th Ritz vector. The quantities ${\rm gap}$ and ${\rm Gap}$ are defined by ${\rm gap}=\min|\widehat\lambda-\lambda(\mathcal{A}_{22})|$, ${\rm Gap}=\min|\widehat\lambda-\lambda(\mathcal{A}_{33})|$, in which $\lambda(\mathcal{A}_{ii})$ denotes the spectrum of the restriction of $\mathcal{A}_{ii}$ to $Q_i$. \begin{theorem}\label{thm:hilbert} Under the above assumptions and notation, \begin{equation} \label{eq:boundvecselfadjoint} \sin\angle(u,\widehat u)\leq \frac{\|R\|}{{\rm Gap}}\sqrt{1+\frac{\|R_2\|^2}{{\rm gap}^2}} \quad \left( \leq \frac{\|R\|}{{\rm Gap}}(1+\frac{\|R_2\|}{{\rm gap}}) \right). \end{equation} Moreover, if ${\rm Gap}>\frac{\|R_2\|^2}{{\rm gap}}$, then \begin{equation} \label{eq:sin22_inf} \sin\angle(u,\widehat u) \leq \frac{\|R_1\|}{{\rm Gap}-\frac{\|R_2\|^2}{{\rm gap}}}\sqrt{1+\frac{\|R_2\|^2}{{\rm gap}^2}} , \end{equation} and if ${\rm Gap}>\sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}$, then \begin{equation} \label{eq:sin2indiv_inf} \sin\angle(u,\widehat u) \leq \frac{\|R_1\|}{ {\rm Gap}-\sum_{i=2}^{k}\frac{\|r_{i}\|^2}{|\lambda -\widehat\lambda_{i}|}} \sqrt{1+ \left(\sum_{i=2}^{k}\frac{\|r_{i}\|}{|\lambda -\widehat\lambda_{i}|}\right)^2}.
\end{equation} \end{theorem} \begin{proof} Writing $u=\sum_{i=1}^3 u_i$ with $u_i\in Q_i$ (and normalizing $\|u\|=1$), taking the $Q_i$-components of $\mathcal{A}u=\lambda u$ gives \begin{subequations} \begin{align} \lambda u_1&=\mathcal{A}_{11}u_1+ \mathcal{A}_{13}u_3 , \label{eq:Aihilba}\\ \lambda u_2&=\mathcal{A}_{22}u_2+ \mathcal{A}_{23}u_3 , \label{eq:Aihilbb}\\ \lambda u_3&=\mathcal{A}_{31}u_1+ \mathcal{A}_{32}u_2+\mathcal{A}_{33}u_3 .\label{eq:Aihilbc} \end{align} \end{subequations} Our goal is to bound $\sin\angle(u,\widehat u)=\sqrt{\|u_2\|^2+\|u_3\|^2}$. We first derive~\eqref{eq:boundvecselfadjoint}, an analogue of Theorem~\ref{thm:mainscalar}. By~\eqref{eq:Aihilbc}, we have \[\|(\mathcal{A}_{33}-\lambda)u_3\|= \|\mathcal{A}_{31}u_1+\mathcal{A}_{32}u_2\|=\|(\mathcal{A}_{31}+\mathcal{A}_{32})u\|\leq \|\mathcal{A}_{31}+\mathcal{A}_{32}\|=\|R\|.\] Together with the fact $\|(\mathcal{A}_{33}-\lambda)u_3\|\geq {\rm Gap} \|u_3\|$ (\cite[\S~V.3.5]{kato1995perturbation}; to see this, note that $v=(\mathcal{A}_{33}-\lambda)u_3$ implies $(\mathcal{A}_{33}-\lambda)^{-1}v=u_3$, hence $\|u_3\|\leq \|(\mathcal{A}_{33}-\lambda)^{-1}\|\|v\|\leq \|v\|/{\rm Gap}$), we obtain \begin{equation} \label{eq:u3} \|u_3\|\leq \frac{\|R\|}{{\rm Gap}} . \end{equation} Now~\eqref{eq:Aihilbb} gives $(\mathcal{A}_{22}-\lambda)u_2=- \mathcal{A}_{23}u_3 $. Since $\|\mathcal{A}_{23}\|=\|\mathcal{A}_{23}^*\|=\|\mathcal{A}_{32}\|= \|R_2\|$, and $\|(\mathcal{A}_{22}-\lambda)u_2\|\geq {\rm gap} \|u_2\|$, we thus have $\|u_2\|\leq \frac{\|R_2\|}{{\rm gap}} \|u_3\|$. Using this and~\eqref{eq:u3}, we obtain~\eqref{eq:boundvecselfadjoint}. We now turn to~\eqref{eq:sin2indiv_inf}; the proof of~\eqref{eq:sin22_inf} is similar and omitted. As in Theorem~\ref{thm:mainsubspace}, the idea is to improve the estimate of $\|u_2\|$ using \eqref{eq:Aihilbb}.
Projecting it onto $\mathcal{P}_{2,i}$ gives $\mathcal{P}_{2,i}(\mathcal{A}_{22}-\lambda)u_2+ \mathcal{P}_{2,i}\mathcal{A}_{23}u_3 =0$ for $i=2,\ldots,k$, and by assumption $\mathcal{P}_{2,i}\mathcal{A}_{22}=\widehat\lambda_i\mathcal{P}_{2,i}$, so $(\widehat\lambda_i-\lambda)\mathcal{P}_{2,i}u_2+ \mathcal{P}_{2,i}\mathcal{A}_{23}u_3 =0, $ hence \begin{equation} \label{eq:Piu2} \|\mathcal{P}_{2,i}u_2\|\leq \frac{\|\mathcal{P}_{2,i}\mathcal{A}_{23}u_3\|}{|\widehat\lambda_i-\lambda|} \leq \frac{\|\mathcal{P}_{2,i}\mathcal{A}_{23}\|\|u_3\|}{|\widehat\lambda_i-\lambda|} = \frac{\|r_i\|\|u_3\|}{|\widehat\lambda_i-\lambda|}, \end{equation} where we used $\|r_i\|=\|\mathcal{P}_{2,i}\mathcal{A}_{23}\|$ for the final equality. The inequality~\eqref{eq:Piu2} holds for $i=2,\ldots,k$. Now since $\mathcal{A}_{32} = \mathcal{A}_{32}\mathcal{P}_{2}= \sum_{i=2}^k\mathcal{A}_{32}\mathcal{P}_{2,i} $, we can rewrite~\eqref{eq:Aihilbc} as $\mathcal{A}_{31}u_1+ \sum_{i=2}^k \mathcal{A}_{32}\mathcal{P}_{2,i}u_2=-(\mathcal{A}_{33}-\lambda)u_3$. Therefore \begin{align*} \|(\mathcal{A}_{33}-\lambda)u_3\|&\leq \|\mathcal{A}_{31}u_1\|+ \sum_{i=2}^k\|\mathcal{A}_{32}\mathcal{P}_{2,i} u_2\|\leq \|R_1\|+ \sum_{i=2}^k\|\mathcal{A}_{32}\mathcal{P}_{2,i}\|\|\mathcal{P}_{2,i} u_2\| \\ &\leq \|R_1\|+ \sum_{i=2}^k\frac{\|r_i\|^2\|u_3\|}{|\widehat\lambda_i-\lambda|} , \end{align*} where we used~\eqref{eq:Piu2} and $\|r_i\|=\|\mathcal{A}_{32}\mathcal{P}_{2,i}\|$. Together with $\|(\mathcal{A}_{33}-\lambda)u_3\|\geq {\rm Gap}\|u_3\|$ we obtain \begin{equation} \label{eq:u3bound} \|u_3\|\leq \frac{\|R_1\|}{{\rm Gap}-\sum_{i=2}^k\frac{\|r_i\|^2}{|\widehat\lambda_i-\lambda|}} .
\end{equation} Finally,~\eqref{eq:Piu2} together with $\|u_2\|^2=\sum_{i=2}^k\|\mathcal{P}_{2,i}u_2\|^2$ gives $\|u_2\|^2\leq \left(\sum_{i=2}^k\frac{\|r_i\|}{|\widehat\lambda_i-\lambda|}\right)^2 \|u_3\|^2$, so \[ \sin\angle(u,\widehat u) = \sqrt{\|u_2\|^2+\|u_3\|^2} \leq \frac{\|R_1\|}{{\rm Gap}-\sum_{i=2}^k\frac{\|r_i\|^2}{|\widehat\lambda_i-\lambda|}}\sqrt{1+\left(\sum_{i=2}^k\frac{\|r_i\|}{|\widehat\lambda_i-\lambda|}\right)^2 }, \] completing the proof of~\eqref{eq:sin2indiv_inf}. \end{proof} \subsection{Experiments: Sturm-Liouville eigenvalue problem}\label{sec:SL} \ignore{ We illustrate Theorem~\ref{thm:hilbert} with a simple Sturm-Liouville eigenvalue problem (e.g.~\cite[\S~3.5]{folland1992fourier}) \begin{equation} \label{eq:SL} \mathcal{A}u = u''=\lambda u,\qquad u(0) = u(\pi)=0,\qquad u\in\mathcal{H}=H^{2}(0,\pi). \end{equation} $\mathcal{A}$ is an unbounded self-adjoint operator, with a full set of (infinitely many) orthonormal eigenfunctions, with explicit solutions $(\lambda,u)=(n^2,\sin(nx))$ for $n\in\mathbb{N}$. We attempt to compute the smallest eigenpairs (which correspond to the smoothest eigenfunctions) by taking the trial subspace to be \begin{equation} \label{eq:trialQ} Q(x) = \mbox{Orth}(x(\pi-x)[P_0(x),P_1(x),\ldots,P_{k-1}(x)]) \end{equation} where $P_i(x)$ are the Legendre polynomials, orthogonal on $[0,\pi]$. Here Orth is an operator that outputs the orthonormal column space; here it simply applies a scalar scaling to each column. $Q$ is called a quasimatrix~\cite{trefethen2010householder} of size $\infty\times k$. Figure~\ref{fig:SR} (left) shows the basis functions~\eqref{eq:trialQ} before and after the orthonormalization for $k=7$.
Having defined the subspace $Q$, we can perform R-R, namely compute the matrix $\mathcal{A}_1=Q^T\mathcal{L}Q=Q^TQ''$ and its eigenvalue decomposition $\mathcal{A}_1=\Omega\widehat \Lambda \Omega^\ast$, and take the Ritz values $\widehat\lambda=\mbox{diag}(\widehat \Lambda)$ and the Ritz vectors $\widehat u$ to be the columns of $Q\Omega$, each of which is a function. Figure~\ref{fig:SR} shows the convergence to the smallest eigenfunction $\angle(u,\widehat u)$ and its bounds, as in Figure~\ref{fig:lob}. As before, our bound~\eqref{eq:sin22_inf} gives tighter estimates of the actual error, although here Davis-Kahan also gives reasonably good bounds since ${\rm gap}$ is not very small here. Note that the error $\angle(u,\widehat u)$ and bounds all have an odd-even pattern; roughly, they go down only when the degree becomes even; this reflects the fact that the eigenfunction $\sin(x)$ is an even function about $x=\pi/2$. \begin{figure}[htpb] \begin{minipage}[t]{0.5\hsize} \includegraphics[width=0.9\textwidth]{figs/basis_funcsRleg.pdf} \end{minipage} \begin{minipage}[t]{0.5\hsize} \includegraphics[width=1.0\textwidth]{figs/Ritzvec_SR_RRleg.pdf} \end{minipage} \caption{ Left: Basis functions before (top) and after (bottom) orthonormalization. Right: Convergence of $\angle(u,\widehat u)$ and its bounds. } \label{fig:SR} \end{figure} Finally, in Figure~\ref{fig:SR_res} we illustrate the behavior of the residual function $\mathcal{A}\widehat u-\widehat\lambda \widehat u$ as $k$ varies. We make two observations. First, clearly the norm $\|\mathcal{A}\widehat u-\widehat\lambda \widehat u\|$ decays as $k$ increases, essentially like the right plot in Figure~\ref{fig:SR}. The second and more interesting observation is that the residuals appear to become more and more oscillatory (non-smooth) as $k$ grows. This is a general tendency, and can be explained as follows. 
As emphasized repeatedly in this paper, R-R forces the residual to be orthogonal to $Q$, which contains the ``smoothest'' functions. Consequently, in the Legendre expansion of $\mathcal{A}\widehat u-\widehat\lambda \widehat u=\sum_{i=0}^\infty c_iP_i(x)$, $|c_i|$ are small for $i< k$; they are bounded roughly by $\|u_2\|$, which is $O(\|\mathcal{A}\widehat u-\widehat\lambda\widehat u\|^2)$, by~\eqref{eq:Piu2}; this also reflects the main result in~\cite{knyazevmc97}. By growing $k$, the residual becomes orthogonal to more and more of these smoothest functions, and therefore its graph becomes more oscillatory. \begin{figure}[htpb] \begin{center} \includegraphics[width=0.99\textwidth]{figs/Ritzvec_SRR_res.pdf} \end{center} \caption{ Residual function $\mathcal{A}\widehat u-\widehat\lambda \widehat u$ for $k=3, 6$ and $9$. } \label{fig:SR_res} \end{figure} } We illustrate Theorem~\ref{thm:hilbert} with a simple Sturm-Liouville eigenvalue problem (e.g.~\cite[\S~3.5]{folland1992fourier}) \begin{equation} \label{eq:SL} \mathcal{A}u = u''=\lambda u,\qquad u'(0) = \alpha u(0), \quad u'(\pi) = \beta u(\pi), \qquad u\in\mathcal{H}=H^{2}(0,\pi). \end{equation} $\mathcal{A}$ is an unbounded self-adjoint operator, with a full set of (infinitely many) orthonormal eigenfunctions. Here we take $\alpha = 1, \beta = -1$. The exact eigenvalues are $\lambda_i=-\nu_i^2$, where $\nu_i$ are the solutions of $\tan\pi\nu = 2\nu/(\nu^2-1)$, with corresponding eigenfunctions $\nu_i\cos\nu_i x+\alpha\sin\nu_i x$~\cite[\S~3.5]{folland1992fourier}. We attempt to compute the eigenpairs with the smoothest eigenfunctions, i.e., eigenpairs closest to 0. To do this, a natural idea is to take low-degree polynomials. We take the trial subspace to be the $k$-dimensional subspace of polynomials $p$ of degree up to $k+1$ that satisfy the two boundary conditions $p'(0) = \alpha p(0)$ and $p'(\pi) = \beta p(\pi)$. Figure~\ref{fig:SRFolland} (left) shows the basis functions obtained in this way, for $k=7$.
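The characterization of the exact eigenpairs just stated is easy to verify numerically. The following throwaway sketch (the bisection bracket $[1.05, 1.45]$ and all variable names are our own choices) finds a root $\nu$ of $\tan\pi\nu = 2\nu/(\nu^2-1)$ and checks that $u(x)=\nu\cos\nu x+\alpha\sin\nu x$ satisfies both Robin boundary conditions with $\alpha=1$, $\beta=-1$:

```python
import math

alpha, beta = 1.0, -1.0  # Robin boundary parameters from the text

# Eigenvalue condition: tan(pi*nu) = 2*nu/(nu^2 - 1).
def f(nu):
    return math.tan(math.pi * nu) - 2.0 * nu / (nu ** 2 - 1.0)

# Bisection on a bracket where f is continuous and changes sign
# (the interval [1.05, 1.45] avoids the poles of both terms).
lo, hi = 1.05, 1.45
for _ in range(100):
    midpt = 0.5 * (lo + hi)
    if f(lo) * f(midpt) <= 0.0:
        hi = midpt
    else:
        lo = midpt
nu = 0.5 * (lo + hi)

# Candidate eigenfunction u(x) = nu*cos(nu*x) + alpha*sin(nu*x),
# whose derivative is u'(x) = -nu^2*sin(nu*x) + alpha*nu*cos(nu*x);
# its eigenvalue is lambda = -nu^2 since u'' = -nu^2 * u.
def u(x):
    return nu * math.cos(nu * x) + alpha * math.sin(nu * x)

def du(x):
    return -nu ** 2 * math.sin(nu * x) + alpha * nu * math.cos(nu * x)

residual_left = du(0.0) - alpha * u(0.0)     # zero identically, by construction
residual_right = du(math.pi) - beta * u(math.pi)  # zero exactly when nu solves f(nu)=0
print(nu, residual_left, residual_right)
```

The left condition $u'(0)=\alpha u(0)$ holds for every $\nu$ by the choice of $u$; the right condition is equivalent to the transcendental equation, which is what the bisection enforces.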
Such computations can be done conveniently using Chebfun~\cite{chebfunguide}. Having defined the subspace $Q$, we can perform R-R to obtain the Ritz vectors (which are functions in $\mathcal{H}$ here), along with the Ritz values. Figure~\ref{fig:SRFolland} shows the convergence of $\angle(u,\widehat u)$ to the eigenfunction $u$ for the smallest eigenpair and its bounds, analogous to Figure~\ref{fig:lob}. As in that experiment, our bound~\eqref{eq:sin22_inf} gives tighter bounds for the actual error, although here Davis-Kahan also performs well, since ${\rm gap}$ is not very small. \begin{figure}[htpb] \begin{minipage}[t]{0.49\hsize} \includegraphics[width=0.9\textwidth]{figs/basis_funcsFolland.pdf} \end{minipage} \begin{minipage}[t]{0.49\hsize} \includegraphics[width=1.0\textwidth]{figs/Ritzvec_SR_RFolland.pdf} \end{minipage} \caption{ Left: Basis functions for projection subspace $Q$, satisfying $u'(0)=u(0),u'(\pi)=-u(\pi)$. Right: Convergence of $\angle(u,\widehat u)$ and its bounds. } \label{fig:SRFolland} \end{figure} Finally, in Figure~\ref{fig:SRFolland_res} we illustrate the behavior of the residual function $\mathcal{A}\widehat u-\widehat\lambda \widehat u$ as $k$ varies. Note that $\widehat u$ is determined up to a sign flip $\pm 1$; here we chose $\widehat u(1)>0$. We make two observations. First, evidently the norm $\|\mathcal{A}\widehat u-\widehat\lambda \widehat u\|$ decays rapidly as $k$ increases, essentially like the right plot in Figure~\ref{fig:SRFolland}. The second and more interesting observation is that the residuals appear to become more and more oscillatory (non-smooth) as $k$ grows. This is a typical phenomenon, and can be explained as follows. As emphasized repeatedly in this paper, R-R forces the residual to be orthogonal to $Q$, which contains the ``smoothest'' functions. 
Consequently, in the Legendre expansion of the residual $\mathcal{A}\widehat u-\widehat\lambda \widehat u=\sum_{i=0}^\infty c_iP_i(x)$, $|c_i|$ are small for $i< k$; they are bounded roughly by $\|u_2\|$, which is $O(\|\mathcal{A}\widehat u-\widehat\lambda\widehat u\|^2)$ by~\eqref{eq:Piu2}. This also reflects the main result in~\cite{knyazevmc97}; recall Remark~\ref{rem:dksaad}. As $k$ grows, the residual becomes orthogonal to more and more of these smoothest functions, and therefore becomes more oscillatory. \begin{figure}[htpb] \begin{center} \includegraphics[width=1.0\textwidth]{figs/Ritzvec_SRFolland_res2.pdf} \end{center} \caption{ Residual function $\mathcal{A}\widehat u-\widehat\lambda \widehat u$ for $k=3, 6$ and $9$. } \label{fig:SRFolland_res} \end{figure} \subsection*{Acknowledgements} I am grateful to Andrew Knyazev for helpful discussions and for bringing~\cite{ovtchinnikov2006cluster} to my attention, and to Mayuko Yamashita for help with Section~\ref{sec:hilbert}.
https://arxiv.org/abs/1810.02532
Sharp error bounds for Ritz vectors and approximate singular vectors
We derive sharp bounds for the accuracy of approximate eigenvectors (Ritz vectors) obtained by the Rayleigh-Ritz process for symmetric eigenvalue problems. Using information that is available or easy to estimate, our bounds improve the classical Davis-Kahan $\sin\theta$ theorem by a factor that can be arbitrarily large, and can give nontrivial information even when the $\sin\theta$ theorem suggests that a Ritz vector might have no accuracy at all. We also present extensions in three directions, deriving error bounds for invariant subspaces, singular vectors and subspaces computed by a (Petrov-Galerkin) projection SVD method, and eigenvectors of self-adjoint operators on a Hilbert space.
https://arxiv.org/abs/1704.07022
Note on the union-closed sets conjecture
The union-closed sets conjecture states that if a family of sets $\mathcal{A} \neq \{\emptyset\}$ is union-closed, then there is an element which belongs to at least half the sets in $\mathcal{A}$. In 2001, D. Reimer showed that the average set size of a union-closed family, $\mathcal{A}$, is at least $\frac{1}{2} \log_2 |\mathcal{A}|$. In order to do so, he showed that all union-closed families satisfy a particular condition, which in turn implies the preceding bound. Here, answering a question raised in the context of T. Gowers' polymath project on the union-closed sets conjecture, we show that Reimer's condition alone is not enough to imply that there is an element in at least half the sets.
\section{Introduction} Given the set $[n]= \{1, \ldots, n\}$ and a family ${\mathcal A} \subseteq 2^{[n]}$ we say ${\mathcal A}$ is union-closed if for $A, B \in {\mathcal A}$ we have $A \cup B \in {\mathcal A}$. The Union-Closed Sets Conjecture, due to P. Frankl \cite{Rival}, states that if ${\mathcal A} \subseteq 2^{[n]}$ is union-closed and ${\mathcal A} \neq \{\emptyset\}$ then there is some element of $[n]$ which belongs to at least half the sets in ${\mathcal A}$. One method of approaching this conjecture is to look at the average frequency of an element or, equivalently, the average set size. The following theorem of D. Reimer \cite{Reimer} was thus motivated by, and can be shown to follow from, the union-closed sets conjecture. \begin{theorem}\label{Reimer} If ${\mathcal A} \subseteq 2^{[n]}$ is union-closed, then \begin{equation}\frac{\sum_{A \in {\mathcal A}} |A|}{|{\mathcal A}|} \ge \frac{\log_2|{\mathcal A}|}{2}.\end{equation} \end{theorem} We will say that ${\mathcal F} \subseteq 2^{[n]}$ is a filter if $G \supseteq F$ and $F \in {\mathcal F}$ implies $G \in {\mathcal F}$. Additionally, for $A \subseteq B \subseteq [n]$ define $[A,B]\coloneqq \{C : A \subseteq C \subseteq B\}$. In order to prove Theorem \ref{Reimer}, Reimer introduced the following criterion for a family ${\mathcal A} \subseteq 2^{[n]}$:\\ \begin{defn}\label{def} We say ${\mathcal A} \subseteq 2^{[n]}$ satisfies \emph{Condition 1} if there exists a filter $\mathcal{F} \subseteq 2^{[n]}$ and a bijection $A \mapsto F_A$ from ${\mathcal A}$ to ${\mathcal F}$ satisfying: \begin{enumerate} \item $A \subseteq F_A$ for all $A \in {\mathcal A}$ \item For distinct $A, B \in {\mathcal A}$ we have $[A, F_A] \cap [B, F_B] = \emptyset$. \end{enumerate} \end{defn} Reimer's proof of Theorem \ref{Reimer} consists of two steps. He first shows that every union-closed family ${\mathcal A}$ satisfies Condition 1. He then shows that Condition 1 implies Theorem \ref{Reimer}. In 2016, T.
Gowers began a polymath project focused on the union-closed sets conjecture. In the comments on the initial post I. Balla first proposed the conjecture below. Gowers reiterates this conjecture in his second post focused on strengthenings of the union-closed sets conjecture. In the comments there is a discussion of a possible counterexample, and it is stated that all families with ground set of size at most 5 and a random sample of families with ground set of size at most 12 have been confirmed to satisfy the conjecture \cite{Gowers}. \begin{conj}\label{conj} Assume ${\mathcal A}\subseteq 2^{[n]}$ satisfies Condition 1. Then there is an element $x\in [n]$ in at least half the sets of ${\mathcal A}$. \end{conj} As Reimer showed that all union-closed families satisfy Condition 1, this conjecture is clearly a strengthening of the union-closed sets conjecture. The purpose of this note is to show that Conjecture \ref{conj} is false. \section{Counterexample} In what follows we will always have ${\mathcal A}$ and $ {\mathcal F}$ as in Definition \ref{def}. \begin{note} \label{equiv} An equivalent way of stating the second part of Condition 1 is that at least one of $A \setminus F_B$ or $B \setminus F_A$ is non-empty. \end{note} We will use the following notation: \begin{itemize} \item ${\mathcal A}_x = \{A \in {\mathcal A} : x \in A\}$ \item $A_0$ is the set for which $F_{A_0}=[n]$ \item $A_i$ is the set for which $F_{A_i} = [n]\setminus \{i\}$ for $i \in [n]$ \item $B_{i,j}$ is the set for which $F_{B_{i,j}}= [n]\setminus \{i,j\}$ for $i \neq j \in [n]$ \end{itemize} Before giving the counterexample we will briefly describe how we found it and indicate why no smaller example is possible. The following observation was our starting point. \begin{fact} \label{n-1} Assume ${\mathcal A}$ satisfies Condition 1. If every set in ${\mathcal F}$ has size at least $n-1$ then there is an element in at least half of the sets of ${\mathcal A}$.
\end{fact} \begin{proof} Without loss of generality assume ${\mathcal F}= \{[n]\} \cup \{[n]\setminus \{i\}: i \in [k]\}$. Hence, $|{\mathcal F}|=|{\mathcal A}| = k+1$. By Note \ref{equiv} we know that $[k] \subseteq A_0$. Now we will view each $A_i$ as a vertex labelled $i$ in a digraph, $D$, on vertex set $[k]$, with $(i,j)$ an edge exactly when $i \in A_j$. Again by Note \ref{equiv} we know that $D$ must contain a tournament. Furthermore, the number of sets containing $i$ is simply the out-degree of $i$ plus 1 (since $i \in A_0$). Since $D$ has $k$ vertices and contains a tournament it has maximum out-degree at least $\frac{k-1}{2}$. Hence there is always an element in at least $\frac{k+1}{2}$ members of ${\mathcal A}$. \end{proof} We first observe that if $n$ is the smallest integer such that there is a counterexample to Conjecture \ref{conj} on $[n]$ and ${\mathcal A}$ is such a counterexample, then ${\mathcal F}$ must contain all sets of size $n-1$. To see this suppose instead that the elements of ${\mathcal F}$ of size $n-1$ are $[n]\setminus \{i\}$ for $i \in [k]$ with $k<n$. Since ${\mathcal F}$ is a filter we have $\{k+1, \ldots, n\} \subseteq F$ for all $F \in {\mathcal F}$, implying that the condition in Note \ref{equiv} is not affected if we replace each $X \in {\mathcal A} \cup {\mathcal F}$ by $X\setminus \{k+1, \ldots,n\}$. This produces a counterexample on a smaller set, contradicting the minimality of $n$. Restrict ${\mathcal A}$ to ${\mathcal A}'\coloneqq\{A_i\}_{i=0}^n$. If $n$ is even then there exists $x\in [n]$ with $|{\mathcal A}'_x| \ge \frac{n+2}{2}$. Hence we need at least two sets in ${\mathcal A} \setminus {\mathcal A}'$. (If $n$ is odd similar reasoning shows that there must be at least three sets in ${\mathcal A} \setminus {\mathcal A}'$.) In our example we will take $n$ to be even and ${\mathcal F}$ to consist of $[n] \setminus \{1,2\}$ and $ [n] \setminus\{3,4\}$ along with all sets of size at least $n-1$. 
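The counting step in the proof of Fact~\ref{n-1} — a tournament on $k$ vertices has $\binom{k}{2}$ edges in total, so some vertex has out-degree at least $\frac{k-1}{2}$ — can be sanity-checked on randomly oriented tournaments (a throwaway sketch; the random construction and names below are ours, not from the paper):

```python
import random

random.seed(0)

def random_tournament(k):
    """Orient each pair {i, j} at random; out[i] holds the out-neighbours of i."""
    out = {i: set() for i in range(k)}
    for i in range(k):
        for j in range(i + 1, k):
            if random.random() < 0.5:
                out[i].add(j)
            else:
                out[j].add(i)
    return out

checks = []
for k in range(1, 30):
    out = random_tournament(k)
    total = sum(len(s) for s in out.values())
    max_out = max(len(s) for s in out.values())
    # The out-degrees sum to C(k,2), so the maximum is at least (k-1)/2.
    checks.append(total == k * (k - 1) // 2 and 2 * max_out >= k - 1)

print(all(checks))
```

Of course this property is a pigeonhole fact, not a statistical one; the random orientation is only a convenient way to exercise it.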
Thus $|{\mathcal F}|=|{\mathcal A}|=n+3$, $A_0= [n]$, and we want to arrange that $|{\mathcal A}_x| \le \frac{n}{2}+1$ for all $x \in [n]$. We will use the same digraph, $D$, as in the proof of Fact \ref{n-1} (with $(i,j)$ an edge if and only if $i \in A_j$). Note that the $B_{i,j}$'s do not affect the digraph. By Note \ref{equiv} we know that if $B_{i,j} \in {\mathcal A}$ then $i \in A_j$ and $j \in A_i$. Therefore, the sum of the out-degrees in $D$ must be at least $\frac{n^2-n}{2}+2$. Without loss of generality $1 \in B_{3,4}$, since $B_{1,2}$ and $B_{3,4}$ must satisfy the condition of Note \ref{equiv}. Additionally, if $B_{1,2}= \emptyset$ then to satisfy Note \ref{equiv} all other sets in ${\mathcal A}$ must contain 1 or 2. However, $A_0$ contains both 1 and 2, so one of 1 or 2 must appear in at least half the sets, contradicting that ${\mathcal A}$ is a counterexample. Hence, $B_{1,2}$ and $B_{3,4}$ are both non-empty, so we must have at least $2$ vertices of out-degree no more than $\frac{n}{2}-1$ and the rest of out-degree no more than $\frac{n}{2}$. (If $|{\mathcal A} \setminus {\mathcal A}'| >2$ then we get even more ``extra" degrees and the following lower bound on $n$ increases.) Thus we have the inequality $2(\frac{n}{2}-1)+(n-2)(\frac{n}{2}) \ge \frac{n^2-n}{2}+2$, i.e. $n \ge 8$. When $n$ is odd similar consideration gives $n \ge 13$; so, since our example does indeed use $n=8$ it is of the smallest possible size. \begin{counter} Here we will take our universe to be $[8]$. 
Our family ${\mathcal A}$ consists of the following 11 sets: \begin{itemize} \item $A_0 = [8]$ \item $A_1= \{2, 4, 6, 7, 8\}$ \item $A_2= \{1,3,5,8\}$ \item $A_3= \{1,4,7,8\}$ \item $A_4 = \{2,3,5,6\}$ \item $A_5 = \{1, 3, 7\}$ \item $A_6 = \{2, 3,5\}$ \item $A_7 = \{2, 4, 6\}$ \item $A_8 = \{4, 5, 6, 7\}$ \item $B_{1,2} = \{8\}$ \item $B_{3,4} = \{1\}$ \end{itemize} \end{counter} We (or our computers) can easily check that the requirement in Note \ref{equiv} is satisfied and that each element appears in exactly 5 sets. \\ \\ {\bf Acknowledgment:} I would like to thank Jeff Kahn for suggesting this problem. \bibliographystyle{amsplain}
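The check delegated to "our computers" above is a few lines of code. The following sketch (our own encoding of the eleven pairs $(A, F_A)$ from the counterexample; variable names are not from the paper) verifies Condition 1, the criterion of Note~\ref{equiv}, and that every element of $[8]$ lies in exactly 5 of the 11 sets:

```python
full = set(range(1, 9))
pairs = [  # (A, F_A) pairs from the counterexample
    (full,            full),            # A_0
    ({2, 4, 6, 7, 8}, full - {1}),      # A_1
    ({1, 3, 5, 8},    full - {2}),      # A_2
    ({1, 4, 7, 8},    full - {3}),      # A_3
    ({2, 3, 5, 6},    full - {4}),      # A_4
    ({1, 3, 7},       full - {5}),      # A_5
    ({2, 3, 5},       full - {6}),      # A_6
    ({2, 4, 6},       full - {7}),      # A_7
    ({4, 5, 6, 7},    full - {8}),      # A_8
    ({8},             full - {1, 2}),   # B_{1,2}
    ({1},             full - {3, 4}),   # B_{3,4}
]

# Condition 1(1): A is contained in F_A.
contained = all(a <= f for a, f in pairs)

# Note 2 (equivalent form of Condition 1(2)): for distinct members,
# A \ F_B or B \ F_A is non-empty.
intervals_disjoint = all(
    (a - fb) or (b - fa)
    for i, (a, fa) in enumerate(pairs)
    for j, (b, fb) in enumerate(pairs)
    if i != j
)

# Frequencies: each element of [8] lies in exactly 5 of the 11 sets.
freq = {x: sum(x in a for a, _ in pairs) for x in full}
print(contained, intervals_disjoint, freq)
```

Since $|{\mathcal A}| = 11$, an element would need to appear in at least 6 sets to lie in half of them, so frequency 5 for every element is exactly what makes this a counterexample.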
https://arxiv.org/abs/1203.2295
Techniques for Solving Sudoku Puzzles
Solving Sudoku puzzles is one of the most popular pastimes in the world. Puzzles range in difficulty from easy to very challenging; the hardest puzzles tend to have the most empty cells. The current paper explains and compares three algorithms for solving Sudoku puzzles. Backtracking, simulated annealing, and alternating projections are generic methods for attacking combinatorial optimization problems. Our results favor backtracking. It infallibly solves a Sudoku puzzle or deduces that a unique solution does not exist. However, backtracking does not scale well in high-dimensional combinatorial optimization. Hence, it is useful to expose students in the mathematical sciences to the other two solution techniques in a concrete setting. Simulated annealing shares a common structure with MCMC (Markov chain Monte Carlo) and enjoys wide applicability. The method of alternating projections solves the feasibility problem in convex programming. Converting a discrete optimization problem into a continuous optimization problem opens up the possibility of handling combinatorial problems of much higher dimensionality.
\section{Introduction} As all good mathematical scientists know, a broad community has contributed to the invention of modern algorithms. Computer scientists, applied mathematicians, statisticians, economists, and physicists, to name just a few, have made lasting contributions. Exposing students to a variety of perspectives outside the realm of their own disciplines sharpens their instincts for modeling and arms them with invaluable tools. In this spirit, the current paper discusses techniques for solving Sudoku puzzles, one of the most popular pastimes in the world. One could make the same points with more serious applications, but it is hard to imagine a more beguiling introduction to the algorithms featured here. Sudoku diagrams are special cases of the Latin squares long familiar in experimental design and, as such, enjoy interesting mathematical and statistical properties \cite{BaiCamCon2008}. The complicated constraints encountered in solving Sudoku puzzles have elicited many clever heuristics that amateurs use to good effect. Here we examine three generic methods with broader scientific and societal applications. The fact that one of these methods outperforms the other two is mostly irrelevant. No two problem categories are completely alike, and it is best to try many techniques before declaring a winner. The three algorithms tested here are simulated annealing, alternating projections, and backtracking. Simulated annealing is perhaps the most familiar to statisticians. It is the optimization analog of MCMC (Markov chain Monte Carlo) and has been employed to solve a host of combinatorial problems. The method of alternating projections was first proposed by von Neumann \cite{Neu1950} to find a feasible point in the intersection of a family of hyperplanes. Modern versions of alternating projections more generally seek a point in the intersection of a family of closed convex sets. 
Backtracking is a standard technique taken from the toolkits of applied mathematics and computer science. Backtracking infallibly finds all solutions of a Sudoku puzzle or determines that no solution exists. Its Achilles heel of excessive computational complexity does not come into play with Sudoku puzzles because they are, despite appearances, relatively benign computationally. Sudoku puzzles are instances of the satisfiability problem in computer science. As problem size increases, such problems are combinatorially hard and often defy backtracking. For this reason alone, it is useful to examine alternative strategies. In a typical Sudoku puzzle, there are 81 cells arranged in a 9-by-9 grid, some of which are occupied by numerical clues. See Figure~\ref{fig:sample_game}. The goal is to fill in the remaining cells subject to the following three rules: \begin{figure} \centering \begin{tikzpicture}[scale=0.75] \draw[black!50] (0,0) grid (9,9); \draw[black!100] (0,0) rectangle (3,3); \draw[black!100] (3,0) rectangle (6,3); \draw[black!100] (6,0) rectangle (9,3); \draw[black!100] (0,3) rectangle (3,6); \draw[black!100] (3,3) rectangle (6,6); \draw[black!100] (6,3) rectangle (9,6); \draw[black!100] (0,6) rectangle (3,9); \draw[black!100] (3,6) rectangle (6,9); \draw[black!100] (6,6) rectangle (9,9); \node[regular polygon, regular polygon sides=4] at (3.5,0.5) {$5$}; \node[regular polygon, regular polygon sides=4] at (5.5,0.5) {$6$}; \node[regular polygon, regular polygon sides=4] at (6.5,0.5) {$8$}; \node[regular polygon, regular polygon sides=4] at (7.5,0.5) {$2$}; \node[regular polygon, regular polygon sides=4] at (3.5,1.5) {$3$}; \node[regular polygon, regular polygon sides=4] at (5.5,1.5) {$4$}; \node[regular polygon, regular polygon sides=4] at (7.5,1.5) {$1$}; \node[regular polygon, regular polygon sides=4] at (8.5,1.5) {$9$}; \node[regular polygon, regular polygon sides=4] at (1.5,2.5) {$8$}; \node[regular polygon, regular polygon sides=4] at (4.5,2.5) {$2$}; 
\node[regular polygon, regular polygon sides=4] at (5.5,2.5) {$1$}; \node[regular polygon, regular polygon sides=4] at (4.5,3.5) {$7$}; \node[regular polygon, regular polygon sides=4] at (8.5,3.5) {$1$}; \node[regular polygon, regular polygon sides=4] at (0.5,4.5) {$2$}; \node[regular polygon, regular polygon sides=4] at (3.5,4.5) {$8$}; \node[regular polygon, regular polygon sides=4] at (4.5,4.5) {$6$}; \node[regular polygon, regular polygon sides=4] at (7.5,4.5) {$7$}; \node[regular polygon, regular polygon sides=4] at (8.5,4.5) {$4$}; \node[regular polygon, regular polygon sides=4] at (0.5,5.5) {$7$}; \node[regular polygon, regular polygon sides=4] at (2.5,5.5) {$4$}; \node[regular polygon, regular polygon sides=4] at (3.5,5.5) {$1$}; \node[regular polygon, regular polygon sides=4] at (5.5,5.5) {$5$}; \node[regular polygon, regular polygon sides=4] at (6.5,5.5) {$2$}; \node[regular polygon, regular polygon sides=4] at (1.5,6.5) {$3$}; \node[regular polygon, regular polygon sides=4] at (2.5,6.5) {$2$}; \node[regular polygon, regular polygon sides=4] at (3.5,6.5) {$9$}; \node[regular polygon, regular polygon sides=4] at (6.5,6.5) {$1$}; \node[regular polygon, regular polygon sides=4] at (7.5,6.5) {$4$}; \node[regular polygon, regular polygon sides=4] at (1.5,7.5) {$4$}; \node[regular polygon, regular polygon sides=4] at (0.5,8.5) {$1$}; \node[regular polygon, regular polygon sides=4] at (1.5,8.5) {$5$}; \node[regular polygon, regular polygon sides=4] at (2.5,8.5) {$7$}; \node[regular polygon, regular polygon sides=4] at (3.5,8.5) {$6$}; \node[regular polygon, regular polygon sides=4] at (4.5,8.5) {$4$}; \node[regular polygon, regular polygon sides=4] at (7.5,8.5) {$8$}; \end{tikzpicture} \caption{Sample Puzzle\label{fig:sample_game}} \end{figure} \begin{itemize} \item[1.] Each integer between 1 and 9 must appear exactly once in a row, \item[2.] Each integer between 1 and 9 must appear exactly once in a column, \item[3.] 
Each integer between 1 and 9 must appear exactly once in each of the 3-by-3 subgrids. \end{itemize} Solving a Sudoku game is a combinatorial task of intermediate complexity. The general problem of filling in an incomplete $n^2 \times n^2$ grid with $n \times n$ subgrids belongs to the class of NP-complete problems \cite{YatSet2003}. These problems are conjectured to increase in computational complexity at an exponential rate in $n$. Nonetheless, a well planned exhaustive search can work quite well for a low value of $n$ such as $9$. For larger values of $n$, brute force, no matter how cleverly executed, is simply not an option. In contrast, simulated annealing and alternating projections may yield good approximate solutions and partially salvage the situation. In the rest of this paper, we describe the three methods for solving Sudoku puzzles and compare them on a battery of puzzles. The puzzles range in difficulty from pencil and paper exercises to hard benchmark tests that often defeat the two approximate methods. Our discussion reiterates the rationale for equipping students with the best computational tools. \section{Three methods for solving Sudoku} \label{sec:methods} \subsection{Backtracking} Backtracking systematically grows a partial solution until it becomes a full solution or violates a constraint \cite{Ski2008}. In the latter case it backtracks to the next permissible partial solution and begins the growing process anew. The advantage of backtracking is that a block of potential solutions can be discarded en masse. Backtracking starts by constructing for each empty Sudoku cell $(i,j)$ a list $L_{ij}$ of compatible digits. This is done by scanning the cell's row, column, and subgrid. The empty cells are then ordered by the cardinalities of the lists $|L_{ij}|$. For example in Figure~\ref{fig:sample_game}, two cells $(7,4)$ and $(9,5)$ possess lists $L_{74}=\{7\}$ and $L_{95}=\{9\}$ with cardinality 1 and come first. 
Next come cells such as $(1,6)$ with $L_{16}=\{2,3\}$, $(1,7)$ with $L_{17}=\{3,9\}$, and $(1,9)$ with $L_{19}=\{2,3\}$ whose lists have cardinality 2. Finally come cells such as $(2,9)$ with $L_{29}=\{2,3,5,6,7\}$ whose lists have maximum cardinality 5. Partial solutions are character strings such as $s_{74}s_{95}s_{16}s_{17}s_{19}$ taken in dictionary order with the alphabet at cell $(i,j)$ limited to the list $L_{ij}$. In dictionary order a string such as $7939$ is treated as coming after a string such as $79232$. Backtracking starts with the string $7$ by taking the only element of $L_{74}$, grows it to $79$ by taking the only element of $L_{95}$, grows it to $792$ by taking the first element of $L_{16}$, grows it to $7923$ by taking the first element of $L_{17}$, and finally grows it to $79232$ by taking the first element of $L_{19}$. At this juncture a row violation occurs, namely a 2 in both cells $(1,6)$ and $(1,9)$. Backtracking discards all strings beginning with $79232$ and moves on to the string $79233$ by replacing the first element of $L_{19}$ by the second element of $L_{19}$. This leads to another row violation with a 3 in both cells $(1,7)$ and $(1,9)$. Backtracking moves back to the string $7929$ by discarding the fifth character of $79233$ and replacing the first element of $L_{17}$ by its second element. This sets the stage for another round of growing. Backtracking is also known as depth first search. In this setting the strings are viewed as nodes of a tree as depicted in Figure~\ref{fig:backtracking}. Generating strings in dictionary order constitutes a tree traversal that systematically eliminates subtrees and moves down and backs up along branches. Because pruning large subtrees is more efficient than pruning small subtrees, ordering of cells by cardinality compels the decision tree to have fewer branches at the top. We use the C code from Skiena and Revilla \cite{SkiRev2003} implementing backtracking on Sudoku puzzles. 
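The growing-and-pruning loop described above is compact enough to sketch in full. The fragment below is not the Skiena--Revilla C implementation used in our experiments; it is a minimal illustration of the same most-constrained-cell-first strategy, where `grid` holds a 9-by-9 puzzle with 0 marking empty cells.

```python
def candidates(grid, i, j):
    """Digits compatible with row i, column j, and the enclosing 3x3 subgrid."""
    used = set(grid[i]) | {grid[r][j] for r in range(9)}
    bi, bj = 3 * (i // 3), 3 * (j // 3)
    used |= {grid[r][c] for r in range(bi, bi + 3) for c in range(bj, bj + 3)}
    return [d for d in range(1, 10) if d not in used]

def solve(grid):
    """Fill the most constrained empty cell first; backtrack on any dead end."""
    empties = [(i, j) for i in range(9) for j in range(9) if grid[i][j] == 0]
    if not empties:
        return True                       # full solution reached
    # Order by list cardinality, as in the text, so the tree is narrow on top.
    i, j = min(empties, key=lambda cell: len(candidates(grid, *cell)))
    for d in candidates(grid, i, j):
        grid[i][j] = d
        if solve(grid):
            return True
        grid[i][j] = 0                    # backtrack
    return False                          # prune this whole subtree
```

Recomputing the candidate lists at every call is wasteful but keeps the sketch short; a production implementation maintains the lists incrementally.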
Backtracking has the virtue of finding all solutions when multiple solutions exist. Thus, it provides a mechanism for validating the correctness of puzzles. \begin{figure} \centering \begin{tikzpicture}[level/.style={sibling distance=75mm/#1}, scale=0.75] \node (z){79} child {node (a) {792} child {node (b) {7923} child {node(c) {79232} child{node(d) {Infeasible} child [grow=left] {node (q) {\quad\quad\quad\quad} edge from parent[draw=none] child [grow=up] {node (r) {$\ldots\VE{s}{19}$ \quad\quad\quad\quad} edge from parent[draw=none] child [grow=up] {node (s) {$\ldots\VE{s}{17}$ \quad\quad\quad\quad} edge from parent[draw=none] child [grow=up] {node (t) {$\dots\VE{s}{16}$ \quad\quad\quad\quad} edge from parent[draw=none] child [grow=up] {node (t) {$\VE{s}{74}\VE{s}{95}$ \quad\quad\quad\quad} edge from parent[draw=none] }}}}} } } child {node (e) {79233} child {node (f) {Infeasible} }} } child {node (g) {7929} child {node (h) {$\vdots$}}} } child {node (i) {793} child {node (j) {$\vdots$}} } ; \end{tikzpicture} \caption{Backtracking on the puzzle shown in Figure~\ref{fig:sample_game}. Starting from $\VE{s}{74}\VE{s}{95} = 79$, the algorithm attempts and fails to grow the solution beyond $\VE{s}{74}\VE{s}{95}\VE{s}{16}\VE{s}{17}\VE{s}{19} = 79232$. After failing to grow the solution beyond $\VE{s}{74}\VE{s}{95}\VE{s}{16}\VE{s}{17}\VE{s}{19} = 79233$, all partial solutions beginning with 7923 are eliminated from further consideration. The algorithm starts anew by attempting to grow $\VE{s}{74}\VE{s}{95}\VE{s}{16}\VE{s}{17} = 7929$. } \label{fig:backtracking} \end{figure} \subsection{Simulated Annealing} Simulated annealing \cite{Cer1985, KirGelVec1983, PreTeuVet2007} attacks a combinatorial optimization problem by defining a state space of possible solutions, a cost function quantifying departures from the solution ideal, and a positive temperature parameter. For a satisfiability problem, it is sensible to equate cost to the number of constraint violations. 
Solutions then correspond to states of zero cost. Each step of annealing operates by proposing a move to a new randomly chosen state. Proposals are Markovian in the sense that they depend only on the current state of the process, not on its past history. Proposed steps that decrease cost are always accepted. Proposed steps that increase cost are taken with high probability in the early stages of annealing when temperature is high and with low probability in the late stages of annealing when temperature is low. Inspired by models from statistical physics, simulated annealing is designed to sample the state space broadly before settling down at a local minimum of the cost function. For the Sudoku problem, a state is a $9 \times 9$ matrix (board) of integers drawn from the set $\{1,\ldots,9\}$. Each integer appears nine times, and all numerical clues are respected. Annealing starts from any feasible board. The proposal stage randomly selects two different cells without clues. The corresponding move swaps the contents of the cells, thus preserving all digit counts. To ensure that the most troublesome cells are more likely to be chosen for swapping, we select cells non-uniformly with probability proportional to $\exp(i)$ for a cell involved in $i$ constraint violations. Let $\M{b}$ denote a typical board, $c(\M{b})$ its associated cost, and $n$ the current iteration index. At temperature $\tau_n$, we decide whether to accept a proposed neighboring board $\M{b}$ by drawing a random deviate $U$ uniformly from $[0,1]$. If $U$ satisfies \begin{eqnarray*} U & \le & \min\left\{\exp\left(\left[c(\M{b}_n)-c(\M{b})\right]/\tau_n\right),1\right\}, \end{eqnarray*} then we accept the proposed move and set $\M{b}_{n+1}=\M{b}$. Otherwise, we reject the move and set $\M{b}_{n+1}=\M{b}_n$. Thus, the greater the increase in the number of constraint violations, the less likely the move is made to a proposed state. 
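The acceptance rule takes only a few lines. The sketch below is illustrative rather than a transcript of our Fortran code; `cost_old` and `cost_new` count constraint violations of the current and proposed boards, and `tau` is the current temperature $\tau_n$.

```python
import math
import random

def accept(cost_old, cost_new, tau, rng=random):
    """Metropolis acceptance: downhill and cost-neutral moves are always
    taken; an uphill move is taken with probability
    exp([cost_old - cost_new]/tau), which matches the rule in the text
    since that exponential exceeds 1 for downhill moves."""
    if cost_new <= cost_old:
        return True
    return rng.random() < math.exp((cost_old - cost_new) / tau)
```

A geometric cooling schedule then multiplies `tau` by a constant slightly below one after every fixed number of iterations, so uphill moves become progressively rarer.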
Also, the higher the temperature, the more likely a move is made to an unfavorable state. The final ingredient of simulated annealing is the cooling schedule. In general, the temperature parameter $\tau$ starts high and slowly declines to 0, where only favorable or cost neutral moves are taken. Typically temperature is lowered at a slow geometric rate. \subsection{Alternating Projections} The method of alternating projections relies on projection operators. In the projection problem, one seeks the closest point $\V{x}$ in a set $C \subset \mathbb{R}^d$ to a point $\V{y} \in \mathbb{R}^d$. Distance is quantified by the usual Euclidean norm $\lVert \V{x} - \V{y} \rVert.$ If $\V{y}$ already lies in $C$, then the problem is trivially solved by setting $\V{x} = \V{y}$. It is well known that a unique minimizer exists whenever the set $C$ is closed and convex \cite{Lan2004}. We will denote the projection operator taking $\V{y}$ to $\V{x}$ by $P_C(\V{y}) = \V{x}$. Given a finite collection of closed convex sets with a nonempty intersection, the alternating projection algorithm finds a point in that intersection. Consider the case of two closed convex sets $A$ and $B$. The method recursively generates a sequence $\V{y}_n$ by taking $\V{y}_0 = \V{y}$ and $\V{y}_{n+1} = P_A(\V{y}_{n})$ for $n$ even and $\V{y}_{n+1} = P_B(\V{y}_{n})$ for $n$ odd. Figure~\ref{fig:alternating_projections} illustrates a few iterations of the algorithm. As suggested by the picture, the algorithm does indeed converge to a point in $A \cap B$ \cite{CheGol1959}. For more than two closed convex sets with nonempty intersection, the method of alternating projections cycles through the projections in some fixed order. Convergence occurs in this more general case as well based on some simple theory involving paracontractive operators \cite{ElsKolNeu1992}. The limit is not guaranteed to be the closest point in the intersection to the original point $\V{y}$. 
The related but more complicated procedure known as Dykstra's algorithm \cite{Dyk1983} finds this point. \begin{figure}[t] \centering \includegraphics[scale=0.6]{Alternating_Projection} \caption{Alternating projections find a point in $A \cap B$, where $A$ and $B$ are closed convex sets. The initial point is $\V{y}$. The sequence of points $\V{y}_n$ is generated by alternating projection onto $A$ with projection onto $B$.} \label{fig:alternating_projections} \end{figure} It is easy to construct some basic projection operators. For instance, projection onto the rectangle $R = \{\V{x} \in \mathbb{R}^d : a_i \le x_i \le b_i \: \mbox{for all} \: i\}$ is achieved by defining $\V{x}=P_R(\V{y})$ to have components $x_i=\min\{\max\{a_i,y_i\},b_i\}$. This example illustrates a more general rule; namely, if $A$ and $B$ are two closed convex sets, then projection onto the Cartesian product $A \times B$ is effected by the Cartesian product operator $(\V{x},\V{y}) \mapsto [P_A(\V{x}),P_B(\V{y})]$. When $A$ is an entire Euclidean space, $P_A(\V{x})$ is just the identity map. Projection onto the hyperplane \begin{eqnarray*} H & = & \{\V{y} \in \mathbb{R}^d : \V{v}\Tra\V{y} = c\} \end{eqnarray*} is implemented by the operator \begin{eqnarray*} P_H(\V{x}) & = & \V{x} - \frac{\V{v}\Tra\V{x} - c}{\lVert \V{v} \rVert^2} \V{v} . \end{eqnarray*} Projection onto the unit simplex $U = \left\{\V{x} \in \mathbb{R}^d: \sum_{i=1}^d x_i = 1, \; x_i \ge 0 \: \forall i\right\}$ is more subtle. Fortunately there exist fast algorithms for this purpose \cite{DucShaSin2008, Mic1986}. In either the alternating projection algorithm or Dykstra's algorithm, it is advantageous to reduce the number of participating convex sets to the minimum possible consistent with fast projection. For instance, it is better to take the unit simplex $U$ as a whole rather than as an intersection of the halfspaces $\{\V{x}: x_i \ge 0\}$ and the affine subspace $\{\V{x}: \sum_{i=1}^d x_i =1\}$. 
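To make the discussion concrete, here is a small numerical illustration of alternating projections built from the box and hyperplane operators just described. The two sets are toy choices for the sake of the example, not the Sudoku constraint sets.

```python
import numpy as np

# Two closed convex sets with nonempty intersection (illustrative choices):
# the box B = [0,1]^2 and the hyperplane H = {x : x_1 + x_2 = 3/2}.

def project_box(y, a, b):
    """Componentwise clipping: x_i = min(max(a, y_i), b)."""
    return np.clip(y, a, b)

def project_hyperplane(y, v, c):
    """P_H(y) = y - (v'y - c) / ||v||^2 * v."""
    return y - (v @ y - c) / (v @ v) * v

v, c = np.ones(2), 1.5
y = np.array([3.0, -2.0])            # initial point y_0
for _ in range(100):                 # alternate the two projections
    y = project_hyperplane(project_box(y, 0.0, 1.0), v, c)
# y now lies, to numerical precision, in the intersection of B and H.
```

The iterates trace the zigzag path suggested by Figure~\ref{fig:alternating_projections}, with the distance to the intersection shrinking geometrically in this example.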
Because our alternating projection algorithm for solving Sudoku puzzles relies on projecting onto several simplexes, it is instructive to derive the Duchi et al \cite{DucShaSin2008} projection algorithm. Consider minimization of a convex smooth function $f(\V{x})$ over $U$. The Karush-Kuhn-Tucker stationarity condition involves setting the gradient of the Lagrangian \begin{eqnarray*} {\cal L}(\V{x},\lambda,\V{\mu}) & = & f(\V{x})+ \lambda \Big(\sum_{i=1}^d x_i-1\Big) - \sum_{i=1}^d \mu_i x_i \end{eqnarray*} equal to $\V{0}$. This is stated in components as the Gibbs criterion \begin{eqnarray*} 0 & = & \frac{\partial}{\partial x_i} f(\V{x})+ \lambda - \mu_i \end{eqnarray*} for multipliers $\mu_i \ge 0$ obeying the complementary slackness conditions $\mu_i x_i = 0$. For the choice $f(\V{x}) = \frac{1}{2}\|\V{x}-\V{y}\|^2$, the Gibbs condition can be solved in the form \begin{eqnarray*} \VE{x}{i} & = & \begin{cases} \VE{y}{i}-\lambda & \VE{x}{i} > 0 \\ \VE{y}{i}-\lambda+\mu_i & \VE{x}{i} = 0. \end{cases} \end{eqnarray*} If we let $I_+ = \{i: \VE{x}{i}>0\}$, then the equality constraint \begin{eqnarray*} 1 & = & \sum_{i \in I_+} \VE{x}{i} \:\;\, = \:\;\, \sum_{i \in I_+} \VE{y}{i} - |I_+| \lambda \end{eqnarray*} implies \begin{eqnarray*} \lambda & = & \frac{1}{|I_+|} \Big(\sum_{i \in I_+} \VE{y}{i} - 1 \Big) . \end{eqnarray*} The catch, of course, is that we do not know $I_+$. The key to avoid searching over all $2^d$ subsets is the simple observation that the $\VE{x}{i}$ and $\VE{y}{i}$ are consistently ordered. Suppose on the contrary that $\VE{y}{i}< y_j$ and $x_j < \VE{x}{i}$. For small $s>0$ substitute $\VE{x}{j}+s$ for $x_j$ and $\VE{x}{i}-s$ for $\VE{x}{i}$. The objective function $f(\V{x}) = \frac{1}{2}\lVert \V{x}-\V{y} \rVert^2$ then changes by the amount \begin{equation*} \frac{1}{2} \Big[(\VE{x}{i}-s-\VE{y}{i})^2+(x_j+s-y_j)^2-(\VE{x}{i}-\VE{y}{i})^2-(x_j-y_j)^2 \Big] = s(\VE{y}{i}-y_j+x_j-\VE{x}{i})+s^2 , \end{equation*} which is negative for $s$ small. 
Let $\VE{w}{i}$ denote the $i$th largest entry of $\V{y}$. Then the Gibbs condition implies that $\VE{w}{1} \geq \VE{w}{2} \geq \ldots \geq \VE{w}{\lvert I_+ \rvert} > \lambda$. Thus, to determine $\lambda$ we seek the largest $k$ such that \begin{equation*} \VE{w}{k} > \frac{1}{k} \left ( \sum_{i=1}^k \VE{w}{i} - 1 \right) \end{equation*} and set $\lambda$ equal to the right hand side of this inequality. With $\lambda$ in hand, the Gibbs condition implies that $\VE{x}{i} = \max \{\VE{y}{i} - \lambda, 0\}$. It follows that projection onto $U$ can be accomplished in $O(d \log d)$ operations dominated by sorting. Algorithm~\ref{alg:project_simplex} displays pseudocode for projection onto $U$. \begin{algorithm}[t] \begin{algorithmic}[0] \State $\V{w} \leftarrow \textsc{sort\_descending}(\V{y}).$ \State $k \leftarrow \max \left \{ j : \VE{w}{j} > \frac{1}{j} \left( \sum_{i=1}^j \VE{w}{i} - 1 \right)\right \}$ \State $\lambda \leftarrow \frac{1}{k} \left( \sum_{i=1}^{k} \VE{w}{i} - 1 \right)$ \State $\VE{x}{i} \leftarrow \max\{\VE{y}{i} - \lambda, 0\}$. \end{algorithmic} \caption{\, \textsc{Projection onto simplex}} \label{alg:project_simplex} \end{algorithm} Armed with these results, we now describe how to solve a continuous relaxation of Sudoku by the method of alternating projections. In the relaxed version of the problem, we imagine generating candidate solutions by random sampling. Each cell $(i,j)$ is assigned a sampling distribution $\TE{p}{ijk}= \Pr(S_{ij}=k)$ for choosing a random deviate $S_{ij} \in \{1,\ldots,9\}$ to populate the cell. If a numerical clue $k$ occupies cell $(i,j)$, then we set $p_{ijl}=1$ for $l=k$ and 0 otherwise. A matrix of sampled deviates $\M{S}$ constitutes a candidate solution. It seems reasonable to demand that the average puzzle obey the constraints. 
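The simplex projection of Algorithm~\ref{alg:project_simplex} above translates almost line for line into code. The following sketch is an illustration, not our Fortran implementation; it vectorizes the search for the largest admissible $k$.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto {x : sum(x) = 1, x >= 0},
    following the O(d log d) scheme of Algorithm 1."""
    w = np.sort(y)[::-1]                  # sort entries in descending order
    css = np.cumsum(w)                    # running sums of sorted entries
    ks = np.arange(1, y.size + 1)
    # Largest k with w_k > (sum_{i<=k} w_i - 1)/k; k = 1 always qualifies,
    # so the selection below is never empty.
    k = ks[w > (css - 1.0) / ks][-1]
    lam = (css[k - 1] - 1.0) / k
    return np.maximum(y - lam, 0.0)
```

The cost is dominated by the sort, matching the $O(d \log d)$ bound stated in the text.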
Once we find a feasible 3-dimensional tensor $\T{P} = (\TE{p}{ijk})$ obeying the constraints, a good heuristic for generating an integer solution $\Mhat{S}$ is to put \begin{eqnarray*} \hat{s}_{ij} & = & \underset{k \in \{ 1, \ldots, 9 \}}{\arg\max} \; \TE{p}{ijk}. \end{eqnarray*} In other words, we impute the most probable integer to each unknown cell $(i,j)$. It is easy to construct counterexamples where imputation of the most probable integer from a feasible tensor $\T{P}$ of the relaxed problem fails to solve the Sudoku puzzle. In any case, the remaining agenda is to specify the constraints and the corresponding projection operators. The requirement that each digit appear in each row on average once amounts to the constraint $\sum_{j=1}^9 \TE{p}{ijk} = 1$ for all $i$ and $k$ between 1 and 9. There are 81 such constraints. The requirement that each digit appear in each column on average once amounts to the constraint $\sum_{i=1}^9 \TE{p}{ijk} = 1$ for all $j$ and $k$ between 1 and 9. Again, there are 81 such constraints. The requirement that each digit appear in each subgrid on average once amounts to the constraint $\sum_{i=1}^3 \sum_{j=1}^3 \TE{p}{a+i,b+j,k} = 1$ for all $k$ between 1 and 9 and all $a$ and $b$ chosen from the set $\{0,3,6\}$. This contributes another 81 constraints. Finally, the probability constraints $\sum_{k=1}^9 \TE{p}{ijk} = 1$ for all $i$ and $j$ between 1 and 9 contribute 81 more affine constraints. Hence, there are a total of $324$ affine constraints on the $9^3 = 729$ parameters. In addition there are 729 nonnegativity constraints $\TE{p}{ijk} \ge 0$. Every numerical clue voids several constraints. For example, if the digit $7$ is mandated for cell (9,2), then we must take $\TE{p}{927} = 1$, $\TE{p}{92k} = 0$ for $k \ne 7$, $\TE{p}{i27} = 0$ for all $i \not = 9$, $\TE{p}{9j7} = 0$ for all $j \not = 2$, and $\TE{p}{ij7} = 0$ for all other pairs $(i,j)$ in the (3,1) subgrid. 
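The bookkeeping behind these 324 affine constraints amounts to enumerating index sets over the tensor $\T{P}$. A sketch of that enumeration (illustrative; the variable names and 0-based indexing are ours, not the paper's):

```python
from itertools import product

# Each constraint is the set of (i, j, k) indices whose probabilities
# p_{ijk} must sum to one.
rows  = [[(i, j, k) for j in range(9)] for i, k in product(range(9), repeat=2)]
cols  = [[(i, j, k) for i in range(9)] for j, k in product(range(9), repeat=2)]
boxes = [[(a + i, b + j, k) for i, j in product(range(3), repeat=2)]
         for a, b, k in product((0, 3, 6), (0, 3, 6), range(9))]
cells = [[(i, j, k) for k in range(9)] for i, j in product(range(9), repeat=2)]

constraints = rows + cols + boxes + cells   # 4 families of 81 = 324 in total
# A clue digit k at cell (i, j) fixes the nine probabilities p_{ij.}, so
# those variables can be eliminated from every constraint containing them.
```

Each constraint set indexes exactly nine probabilities, so each of the 324 projections is a projection onto a nine-dimensional simplex.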
In carrying out alternating projection, we eliminate the corresponding variables. With this proviso, we cycle through the simplex projections summarized in Algorithm~\ref{alg:project_simplex}. The process is very efficient but slightly tedious to code. For the sake of brevity we omit the remaining details. All code used to generate the subsequent results is available at \url{https://github.com/echi/Sudoku}, and we direct the interested reader there. \section{Comparisons} \label{sec:experiments} We generated test puzzles from code available online \cite{Wan2007} and discarded puzzles that could be completely solved by filling in entries directly implied by the initial clues. This left 87 easy puzzles, 130 medium puzzles, and 100 hard puzzles. We also downloaded an additional 95 very hard benchmark puzzles \cite{Coc2012,Ste2005}. In simulated annealing, the temperature $\tau$ was initialized to 200 and lowered by a factor of 0.99 after every 50 steps. We allowed at most $2 \times 10^{5}$ iterations and reset the temperature to 200 if a solution had not been found after $10^{5}$ iterations. For the alternating projection algorithm, we commenced projecting from the origin $\bf 0$. \begin{table}[th] \centering \begin{tabular}{lcccc} \hline \hline & Alt. Projection & Sim. Annealing & Backtracking & Number of Puzzles\\ \hline Easy & 0.85 & 1.00 & 1.00 & 87 \\ Medium & 0.89 & 1.00 & 1.00 & 130 \\ Hard & 0.72 & 0.97 & 1.00 & 100 \\ Top 95 & 0.41 & 0.03 & 1.00 & 95 \\ \hline \end{tabular} \caption{Success rates for solving puzzles of varying difficulty.} \label{tab:comparisons} \end{table} \begin{table}[th] \centering \begin{tabular}{clccc} \toprule \midrule & & \multicolumn{3}{c}{CPU Time (sec)} \\ \cmidrule(r){3-5} & & Alt. Projection & Sim. 
Annealing & Backtracking \\ \midrule \multicolumn{1}{c}{\multirow{4}{*}{Easy}} & \multicolumn{1}{l}{Minimum} & 0.032 & 0.006 & 0.007 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Median} & 0.041 & 0.021 & 0.008 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Mean} & 0.052 & 0.112 & 0.008 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Maximum} & 0.237 & 0.970 & 0.009 \\ \\ \multicolumn{1}{c}{\multirow{4}{*}{Medium}} & \multicolumn{1}{l}{Minimum} & 0.032 & 0.007 & 0.007 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Median} & 0.051 & 0.037 & 0.008 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Mean} & 0.062 & 0.231 & 0.008 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Maximum} & 0.269 & 3.36 & 0.010 \\ \\ \multicolumn{1}{c}{\multirow{4}{*}{Hard}} & \multicolumn{1}{l}{Minimum} & 0.033 & 0.008 & 0.008 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Median} & 0.110 & 0.753 & 0.008 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Mean} & 0.159 & 1.104 & 0.009 \\ \multicolumn{1}{l}{} & \multicolumn{1}{l}{Maximum} & 0.525 & 7.204 & 0.031 \\ \bottomrule \end{tabular} \caption{Summary statistics on the run times for different methods on puzzles of varying difficulty. For the alternating projection and simulated annealing techniques, only successfully solved puzzles are included in the statistics.} \label{tab:run_times} \end{table} Backtracking successfully solved all puzzles. Table~\ref{tab:comparisons} shows the fraction of puzzles the two heuristics were able to successfully complete. Table~\ref{tab:run_times} records summary statistics for the CPU time taken by each method for each puzzle category. All computations were done on an iMac computer with a 3.4 GHz Intel Core i7 processor and 8 GB of RAM. We implemented the alternating projection and simulated annealing algorithms in Fortran 95. For backtracking we relied on the existing implementation in C. 
The comparisons show that backtracking performs best, and for the vast majority of $9 \times 9$ Sudoku problems it is probably going to be hard to beat. Simulated annealing finds the solution except for a handful of the most challenging paper and pencil problems, but its maximum run times are unimpressive. While alternating projection does not perform as well on the pencil and paper problems compared to the other two algorithms, it does not do terribly either. Moreover, we see hints of the tables turning on the hard puzzles. \begin{figure}[t] \centering \begin{tikzpicture}[scale=0.75] \draw[black!50] (0,0) grid (9,9); \draw[black!100] (0,0) rectangle (3,3); \draw[black!100] (3,0) rectangle (6,3); \draw[black!100] (6,0) rectangle (9,3); \draw[black!100] (0,3) rectangle (3,6); \draw[black!100] (3,3) rectangle (6,6); \draw[black!100] (6,3) rectangle (9,6); \draw[black!100] (0,6) rectangle (3,9); \draw[black!100] (3,6) rectangle (6,9); \draw[black!100] (6,6) rectangle (9,9); \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (0.5,8.5) {$4$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (1.5,8.5) {$1$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (2.5,8.5) {$7$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (3.5,8.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (4.5,8.5) {$6$}; \node[regular polygon, regular polygon sides=4,fill=gray!70,scale=0.75] at (5.5,8.5) {$2$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (6.5,8.5) {$8$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (7.5,8.5) {$3$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (8.5,8.5) {$5$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (0.5,7.5) {$6$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (1.5,7.5) {$3$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (2.5,7.5) {$2$}; 
\node[regular polygon, regular polygon sides=4,scale=0.75] at (3.5,7.5) {$1$}; \node[regular polygon, regular polygon sides=4,fill=gray!70,scale=0.75] at (4.5,7.5) {$5$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (5.5,7.5) {$8$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (6.5,7.5) {$7$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (7.5,7.5) {$4$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,7.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (0.5,6.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (1.5,6.5) {$5$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (2.5,6.5) {$8$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (3.5,6.5) {$7$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (4.5,6.5) {$3$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (5.5,6.5) {$4$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (6.5,6.5) {$6$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (7.5,6.5) {$1$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,6.5) {$2$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (0.5,5.5) {$8$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (1.5,5.5) {$2$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (2.5,5.5) {$5$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (3.5,5.5) {$4$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (4.5,5.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (5.5,5.5) {$7$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (6.5,5.5) {$3$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (7.5,5.5) {$6$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,5.5) {$1$}; \node[regular polygon, regular polygon 
sides=4,scale=0.75] at (0.5,4.5) {$3$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (1.5,4.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (2.5,4.5) {$1$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (3.5,4.5) {$5$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (4.5,4.5) {$8$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (5.5,4.5) {$6$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (6.5,4.5) {$4$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (7.5,4.5) {$2$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,4.5) {$7$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (0.5,3.5) {$7$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (1.5,3.5) {$4$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (2.5,3.5) {$6$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (3.5,3.5) {$3$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (4.5,3.5) {$1$}; \node[regular polygon, regular polygon sides=4,fill=gray!70,scale=0.75] at (5.5,3.5) {$2$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (6.5,3.5) {$5$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (7.5,3.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,3.5) {$8$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (0.5,2.5) {$2$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (1.5,2.5) {$8$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (2.5,2.5) {$9$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (3.5,2.5) {$6$}; \node[regular polygon, regular polygon sides=4,fill=gray!70,scale=0.75] at (4.5,2.5) {$5$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (5.5,2.5) {$3$}; \node[regular polygon, regular polygon 
sides=4,scale=0.75] at (6.5,2.5) {$1$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (7.5,2.5) {$7$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,2.5) {$4$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (0.5,1.5) {$5$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (1.5,1.5) {$7$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (2.5,1.5) {$3$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (3.5,1.5) {$2$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (4.5,1.5) {$4$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (5.5,1.5) {$1$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (6.5,1.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (7.5,1.5) {$8$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,1.5) {$6$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (0.5,0.5) {$1$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (1.5,0.5) {$6$}; \node[regular polygon, regular polygon sides=4,fill=gray!20,scale=0.75] at (2.5,0.5) {$4$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (3.5,0.5) {$8$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (4.5,0.5) {$7$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (5.5,0.5) {$9$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (6.5,0.5) {$2$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (7.5,0.5) {$5$}; \node[regular polygon, regular polygon sides=4,scale=0.75] at (8.5,0.5) {$3$}; \end{tikzpicture} \caption{A typical local minimum that traps simulated annealing in a top 95 puzzle. Clues are shaded light gray. There are two column constraint violations caused by the cells shaded dark gray. 
The local minimum is deep in the sense that all one-step swaps result in further constraint violations. \label{fig:top95_1}} \end{figure} \begin{figure} \centering \includegraphics[scale=0.75]{Comparison_top95} \caption{Scatterplot of solution times for the top 95 benchmark problems.} \label{fig:top} \end{figure} Simulated annealing struggles mightily on the 95 benchmark puzzles. Closer inspection of individual puzzles reveals that these very hard puzzles admit many local minima with just a few constraint violations. Figure~\ref{fig:top95_1} shows a typical local minimum that traps the simulated annealing algorithm. Additionally, something curious is happening in Figure~\ref{fig:top}, which plots CPU solution times for alternating projection versus backtracking. Points below the dashed line indicate puzzles that the method of alternating projections solves more efficiently than backtracking. It appears that when the method of alternating projections finds correct solutions to very hard problems, it tends to find them more quickly than backtracking. \section{Discussion} \label{sec:discussion} It goes almost without saying that students of the mathematical sciences should be taught a variety of solution techniques for combinatorial problems. Because Sudoku puzzles are easy to state and culturally neutral, they furnish a good starting point for the educational task ahead. It is important to stress the contrast between exact strategies that scale poorly with problem size and approximate strategies that adapt more gracefully. The performance of the alternating projection algorithm on the benchmark tests suggests it may have a role in solving much harder combinatorial problems. Certainly, the electrical engineering community takes this attitude, given the close kinship of Sudoku puzzles to problems in coding theory \cite{ErlChaGor2009, GunMoo2012, MooGun2006, MooGunKup2009}.
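The constraint-violation objective that drives the simulated annealing comparison above is easy to make concrete. The following minimal Python sketch (our own illustration; the function names and the scoring convention are ours, not the paper's code) counts, for a completely filled grid, how many digits are missing from each row, column, and block, and shows why a swap inside a block, like the one in Figure~\ref{fig:top95_1}, can preserve the block constraints while violating rows and columns.

```python
def violations(grid):
    """Count constraint violations of a completely filled 9x9 grid.

    Each row, column, and 3x3 block should contain the digits 1-9
    exactly once; every missing digit in a unit counts as one violation.
    (Illustrative only -- the paper's implementation may differ.)
    """
    units = []
    units += [list(row) for row in grid]                          # rows
    units += [[grid[r][c] for r in range(9)] for c in range(9)]   # columns
    units += [[grid[r0 + i][c0 + j] for i in range(3) for j in range(3)]
              for r0 in (0, 3, 6) for c0 in (0, 3, 6)]            # blocks
    return sum(9 - len(set(u)) for u in units)

# A valid solution via the standard "shifted rows" pattern.
solved = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(violations(solved))   # -> 0: a correct solution has no violations

# Swapping two cells within a block keeps every block valid but
# introduces row and column violations.
bad = [row[:] for row in solved]
bad[0][0], bad[1][1] = bad[1][1], bad[0][0]
print(violations(bad))      # -> 4: two row and two column violations
```

A local minimum such as the one in Figure~\ref{fig:top95_1} is a grid where this count is small but positive and no single within-block swap decreases it.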
One can argue that algorithm development has assumed a dominant role within the mathematical sciences. Three inter-related trends are feeding this phenomenon. First, computing power continues to grow. Execution times are dropping, and computer memory is getting cheaper. Second, good computing simply tempts scientists to tackle larger data sets. Third, certain fields, notably communications, imaging, genomics, and economics generate enormous amounts of data. All of these fields create problems in combinatorial optimization. For instance, modern DNA sequencing is still challenged by the phase problem of discerning the maternal and paternal origin of genetic variants. Computation is also being adopted more often as a means of proving propositions. The claim that at least 17 numerical clues are needed to ensure uniqueness of a Sudoku solution has apparently been proved using intelligent brute force \cite{McGTugCiv2012}. Mathematical scientists need to be aware of computational developments outside their classical application silos. Importing algorithms from outside fields is one of the quickest means of refreshing an existing field. \section*{Acknowledgments} We thank Peter Cock for helpful comments and pointing us to the source and author of the ``Top 95" puzzles. \bibliographystyle{siammod}
https://arxiv.org/abs/1901.06096
Repeated minimizers of $p$-frame energies
For a collection of $N$ unit vectors $\mathbf{X}=\{x_i\}_{i=1}^N$, define the $p$-frame energy of $\mathbf{X}$ as the quantity $\sum_{i\neq j} |\langle x_i,x_j \rangle|^p$. In this paper, we connect the problem of minimizing this value to another optimization problem, so giving new lower bounds for such energies. In particular, for $p<2$, we prove that this energy is at least $2(N-d) p^{-\frac p 2} (2-p)^{\frac {p-2} 2}$ which is sharp for $d\leq N\leq 2d$ and $p=1$. We prove that for $1\leq m<d$, a repeated orthonormal basis construction of $N=d+m$ vectors minimizes the energy over an interval, $p\in[1,p_m]$, and demonstrate an analogous result for all $N$ in the case $d=2$. Finally, in connection, we give conjectures on these and other energies.
\section{Introduction}\label{sec:intro} Let $\mathbf{A}=A_{i,j}$ be an $N\times N$ real matrix of rank less than or equal to $d$, and with ones along the diagonal. The $p$-frame energy of the matrix $\mathbf{A}$ is denoted $$E_p(\mathbf{A})=\sum\limits_{i\neq j} |A_{ij}|^p.$$ An interesting question is what the optimizing matrices for $E_p(\mathbf{A})$ are for fixed $p$, $N$, and $d$. Bukh and Cox in \cite{buk18} recently studied the question of bounding $E_{\infty}(\mathbf{A})=\max_{i\neq j}|A_{ij}|$ and its consequences. One special case of this problem concerns matrices associated with unit vectors, $\mathbf{X}=\{x_i\}_{i=1}^N\subset \mathbb R^d$, in which case $\mathbf{A}$ is the Gram matrix of $\mathbf{X}$ and so is additionally symmetric and positive semi-definite. When $p$ is an even natural number and $N$ is sufficiently large, sharp bounds for energies in the real and complex case follow from the work of Sidelnikov and Welch \cite{sid74,wel74}. Several related results and conjectures on minimizers of $p$-frame energies were formulated by Ehler and Okoudjou in \cite{ehl12}. Using the fact that for $d=N$ the minimizer of $E_p(\mathbf{A})$ for $p=2$ is an orthonormal basis, they show that whenever $N$ is divisible by $d$ and $p\in(0,2)$, a repeated orthonormal basis is the unique minimizer. Since the value of the $p$-frame energy does not change when a vector is replaced by its opposite, uniqueness in this scenario is meant up to this symmetry. Alternatively, uniqueness can be understood by considering the energy on the projective space $\mathbb {RP}^{d-1}$ (as is done in \cite{bil19a}), where the points of $\mathbb {RP}^{d-1}$ may be identified with lines through the origin in $\mathbb R^d$. The problem of minimizing the 1-frame energy was also posed in \cite{mat17}, where it was conjectured that for any $N$, the repeated orthonormal basis is the unique minimizer.
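Since every off-diagonal entry of the Gram matrix of a repeated orthonormal basis is $0$ or $1$, its $p$-frame energy does not depend on $p$ and simply counts ordered pairs of vectors pointing along the same line. A minimal Python computation of this value (our own illustration, not code from the paper):

```python
def repeated_basis_energy(N, d):
    """p-frame energy of N unit vectors obtained by cycling through an
    orthonormal basis of R^d.  All off-diagonal Gram entries are 0 or 1,
    so the energy counts ordered pairs of parallel vectors and is the
    same for every p > 0.  (Illustrative only.)"""
    q, r = divmod(N, d)
    # r directions appear q+1 times each; the other d-r appear q times
    return r * (q + 1) * q + (d - r) * q * (q - 1)

print(repeated_basis_energy(6, 2))  # -> 12, i.e. N(N-2)/2 for even N, d = 2
print(repeated_basis_energy(7, 2))  # -> 18, i.e. (N-1)^2/2 for odd N, d = 2
print(repeated_basis_energy(5, 4))  # -> 2,  i.e. 2(N-d) for d < N <= 2d
```

For $d=2$ these are exactly the conjectured minimal values, and for $N=d+m$ with $m<d$ the value $2m=2(N-d)$ reappears in the sharp bounds studied below.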
In 1959, Fejes T\'{o}th posed the following question \cite{fej59}: what is the largest sum of non-obtuse angles formed by $N$ lines in $\mathbb{R}^d$? The conjecture states that for any $N$ the maximum is uniquely attained on a collection of $N$ lines spanned by the vectors of a repeated orthonormal basis; it is resolved only for $d=2$ and for very few cases of $N$ when $d=3$. The asymptotic result for $d=3$ is wide open (see \cite{bil19} and \cite{fod16} for recent progress). We note that the conjecture about $E_1$ from \cite{mat17} immediately follows from the conjecture of Fejes T\'{o}th, since $\arccos t \geq \frac {\pi} 2 (1-t)$ for $t\in[0,1]$, with equality holding precisely at $t=0$ and $t=1$. In this paper, we develop new methods for finding lower bounds on $E_p(\mathbf{A})$, based on the framework of Bukh and Cox from \cite{buk18}. Doing so allows us to prove new general bounds for $E_p(\mathbf{A})$ when $p<2$. Such bounds are sharp in some cases, in particular, for $p=1$ and $N\in [d+1, 2d]$. We also give sharp bounds for $N=d+m$ and $p\in[1,2\log\frac{2m+1}{2m}/\log\frac{m+1}{m}]$, thus partially confirming a conjecture from \cite{che19}. Although everything in Sections \ref{sec:aux}--\ref{sec:new} is formulated in the real case, all observations and proofs there work for matrices over the complex numbers or quaternions without any changes. Our methods work for general matrix optimization problems, so most of our results will be formulated for matrices. However, we slightly abuse terminology by talking about vector sets instead of their Gram matrices when speaking of $E_p(\mathbf{A})$. In Section \ref{sec:2dim}, we prove that the $p$-frame energy of unit vectors in the plane is minimized by repeated orthonormal bases for any number of vectors if $p\in[1,1.3]$. In Section \ref{sec:disc} we discuss possible generalizations of the results of the paper and the motivations behind them.
\section{Auxiliary problems and tight frames}\label{sec:aux} Bukh and Cox in \cite{buk18} introduced a method for deriving new packing bounds for projective codes. In our related approach we use the notion of a {\it tight frame}. A tight frame is a finite collection of vectors $\{y_i\}_{i=1}^N$ in $\mathbb R^d$ with the property that \begin{equation}\label{eq:tightframe} \sum\limits_{i=1}^N \langle x, y_i \rangle^2=A \|x\|^2 \end{equation} holds for any $x\in\mathbb{R}^d$ and some constant $A>0$. Applying the tight frame condition (\ref{eq:tightframe}) to each vector of an orthonormal basis of $\mathbb{R}^d$ and summing (that is, taking the trace of both sides), one can conclude that $\sum_{i=1}^N \|y_i\|^2=Ad$. It is convenient to use the normalization $A=\frac{1}{d}$ for frames as above, so that the squared Hilbert-Schmidt norm of the $d\times N$ matrix $\mathbf{Y}$, with column vectors $\{ y_{i} \}_{i=1}^N$, is normalized to be $1$, \begin{equation}\label{eq:normalize} \|\mathbf{Y}\|^2_{\text{HS}}= \sum\limits_{i=1}^N \|y_i\|^2=1. \end{equation} In the next two lemmas we collect tools for computing new lower bounds for discrete $p$-frame energies. The first makes a connection between kernels of matrices and associated tight frames. We introduce the notation $f_{c,p}(t)=\left(\frac{t}{c-t} \right)^{\frac{p}{2}}$, to be used in the second lemma. We also introduce the optimization problem \begin{equation}\label{eq:mcpn} M(c,p,N)=\min\left\{\sum_{i=1}^N f_{c,p}(t_i) \;\middle|\; \sum_{i=1}^N t_i = 1,\ t_i\in[0,c)\right\}, \end{equation} where $p>0$ and $c>1/N$. Since $f_{c,p}$ is continuous on $[0,c)$ and tends to $+\infty$ as $t\to c$, the minimum is attained, so this problem is well defined. \begin{lemma}\label{lem:frame} For any real $N\times N$ matrix $\mathbf{A}$ of rank $d$, $N\geq d+1$, with unit diagonal elements, there exists a tight frame $\{y_1, y_2,\ldots, y_N\}\subset\mathbb{R}^{N-d}$ with frame constant $\frac 1 {N-d}$ such that $\text{Ker\,}\mathbf{A}$ consists of all vectors of the form $(\langle y, y_1\rangle,\ldots,\langle y, y_N\rangle )$ with $y\in\mathbb{R}^{N-d}$.
\end{lemma} \begin{proof} $\text{Ker\,}\mathbf{A}$ is $(N-d)$-dimensional, so there is an injective linear mapping $L:\mathbb{R}^{N-d}\rightarrow\mathbb{R}^{N}$ whose image is $\text{Ker\,}\mathbf{A}$. Each of the $N$ components of $L$ is a linear functional, so it may be represented as $L_i (y)=\langle y,z_i \rangle$. We note that for any non-singular mapping $D:\mathbb{R}^{N-d}\rightarrow\mathbb{R}^{N-d}$, the image of the mapping $L\circ D$ is $\text{Ker\,}\mathbf{A}$ as well. The quadratic form $\sum_{i=1}^N \langle y,z_i\rangle^2$ is positive definite, so by choosing a suitable $D$ we can transform $\{z_1,z_2,\ldots,z_N\}$ into $\{y_1,y_2,\ldots,y_N\}$ so that $\sum_{i=1}^N \langle y,y_i\rangle^2 = \frac 1 {N-d} \langle y,y \rangle$. Thus we obtain the condition (\ref{eq:tightframe}), and $\{y_i\}_{i=1}^N$ is a tight frame. \end{proof} The construction in Lemma \ref{lem:frame} is due to Bukh and Cox \cite{buk18}, who used it for obtaining new packing bounds for projective codes. It can also be interpreted as a tight frame representative of a Gale dual to the matrix $\mathbf{A}$ (see \cite{gla19} for more details about this interpretation). \begin{lemma}\label{lem:min} For any real $N\times N$ matrix $\mathbf{A}$ of rank $d$, $N\geq d+1$, with unit diagonal elements, $$E_p(\mathbf{A})\geq M\left(\frac 1 {N-d}, p, N\right)\ \text{ if }\ 1\leq p\leq 2,$$ $$E_p(\mathbf{A})\geq (N-1)^{1-\frac p 2} M\left(\frac 1 {N-d}, p, N\right)\ \text{ if }\ p\geq 2.$$ \end{lemma} \begin{proof} By Lemma \ref{lem:frame}, there exists a tight frame $\{y_1, y_2,\ldots, y_N\}\subset\mathbb{R}^{N-d}$ such that $\text{Ker\,}\mathbf{A}$ is the set of all vectors $(\langle y, y_1\rangle,\ldots,\langle y, y_N\rangle )$ for $y\in\mathbb{R}^{N-d}$ and $\sum_{i=1}^N \|y_i\|^2 = 1$.
Taking $y=y_i$ and using the kernel condition for row $i$, we get $$\langle y_i, y_i \rangle + \sum_{j\neq i} A_{ij} \langle y_i, y_j \rangle = 0.$$ Then for any $1\leq i \leq N$, $$ \langle y_i, y_i \rangle \leq \sum_{j\neq i} \vert A_{ij}\vert \vert\langle y_i, y_j \rangle\vert \leq \left( \sum_{j\neq i}|A_{ij}|^p\right)^{\frac 1 p} \left( \sum_{j\neq i}\vert\langle y_i, y_j \rangle\vert^q\right)^{\frac 1 q},$$ by H\"{o}lder's inequality for $\frac 1 p + \frac 1 q =1$ (for $q=\infty$, $\left(\sum_{j\neq i}\vert\langle y_i, y_j \rangle\vert^q\right)^{\frac 1 q}=\max_{j\neq i}\vert\langle y_i, y_j \rangle\vert$). By monotonicity of norms $\|\cdot \|_p$ in $p$ and H\"{o}lder's inequality (for vectors in $\mathbb{R}^{N-1}$), $\Vert x \Vert_q \leq \Vert x \Vert_2$ for $q\geq 2$, while $\Vert x \Vert_q \leq (N-1)^{\frac 1 q - \frac 1 2}\Vert x \Vert_2$ when $q\leq 2$. Hence, $$\langle y_i, y_i \rangle \leq \left( \sum_{j\neq i}|A_{ij}|^p\right)^{\frac 1 p} \left( \sum_{j\neq i}\langle y_i, y_j \rangle^2\right)^{\frac 1 2}\ \text{ if }p\leq 2,\text{ and},$$ $$\langle y_i, y_i \rangle \leq (N-1)^{\frac 1 2 - \frac 1 p} \left( \sum_{j\neq i}|A_{ij}|^p\right)^{\frac 1 p} \left( \sum_{j\neq i}\langle y_i, y_j \rangle^2\right)^{\frac 1 2}\ \text{ if }p\geq 2.$$ At this point we use the tight frame condition for $y_i$, i.e. $\sum_{j\neq i}\langle y_i, y_j \rangle^2 = \frac 1 {N-d} \langle y_i, y_i \rangle - \langle y_i, y_i \rangle^2$, and denote $\langle y_i, y_i \rangle$ by $t_i$ to arrive finally at the inequalities: $$ \left( \sum_{j\neq i}|A_{ij}|^p\right)^{\frac 1 p} \geq \left(\frac {t_i} {\frac 1 {N-d} - t_i} \right)^{\frac 1 2}\ \text{ if }p\leq 2,\text{ and},$$ $$ \left( \sum_{j\neq i}|A_{ij}|^p\right)^{\frac 1 p} \geq (N-1)^{\frac 1 p - \frac 1 2} \left(\frac {t_i} {\frac 1 {N-d} - t_i} \right)^{\frac 1 2}\ \text{ if }p\geq 2.$$ Taking powers, noting $\sum_{i=1}^N t_i=1$, and summing these inequalities over all $i$, we obtain the conclusion of the lemma. 
\end{proof} \section{New lower bounds for the $p$-frame energy}\label{sec:new} As a first application of Lemma \ref{lem:min}, we give a new proof of the result from \cite{okt07} (see also Proposition 3.1 in \cite{ehl12}). \begin{proposition}\label{prop:p>2} For any $p\geq 2$ and any real $N\times N$ matrix $\mathbf{A}$ of rank $d$, $N\geq 2$, with unit diagonal elements, $$E_p(\mathbf{A})\geq N(N-1) \left( \frac {N-d} {d(N-1)} \right)^{\frac p 2}.$$ \end{proposition} \begin{proof} For $N=d$, the right-hand side is 0. We assume $N\geq d+1$ for the rest of the proof. For $p\geq 2$, $f_{c,p} (t) =\left(\frac {t} {c-t}\right)^{\frac p 2}$ is convex on $[0,c)$. By Jensen's inequality, the minimum in (\ref{eq:mcpn}) is attained at $t_1=\dots=t_N=\frac 1 N$, which implies $$M\left(\frac 1 {N-d}, p, N\right) = N f_{\frac 1 {N-d},p} \left(\frac 1 N\right) = N \left(\frac {N-d} {d}\right)^{\frac p 2}.$$ Together with Lemma \ref{lem:min} this completes the proof. \end{proof} It is straightforward to check that for $p>2$, equality in Proposition \ref{prop:p>2} holds precisely when $\mathbf{Y}=\{y_1,\ldots,y_N\}$ is a tight frame in $\mathbb{R}^{N-d}$ such that $\|y_i\|$ is constant over all $i$ and $\vert\langle y_i,y_j\rangle\vert$ is constant over all $i\neq j$. In other words, equality holds in Proposition \ref{prop:p>2} if and only if $\mathbf{Y}$ is an \textit{equiangular tight frame} (ETF). When $\mathbf{Y}$ is an ETF, the matrix $\mathbf{A}$ is the Gram matrix of the $d$-dimensional ETF known as the \textit{Naimark complement} or \textit{Gale dual} of $\mathbf{Y}$ (see, for instance, \cite{cas13} and \cite{coh16} for more details about Naimark complements and Gale duality of equiangular tight frames). ETFs have several interesting properties, but the two most fundamental are that they are precisely the systems of unit vectors achieving equality in the {\it Welch bound}, and that the maximum size $N$ of such systems is limited by {\it Gerzon's bound}.
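As an illustrative numerical check of Proposition \ref{prop:p>2} (our own code, not from the paper): for $d=2$ and $N=3$, three unit vectors in the plane at mutual angle $120^\circ$ form an equiangular tight frame, so the bound is attained with equality.

```python
import math

def p_frame_energy(vectors, p):
    """Sum of |<x_i, x_j>|^p over ordered pairs i != j (illustrative)."""
    total = 0.0
    for i, x in enumerate(vectors):
        for j, y in enumerate(vectors):
            if i != j:
                total += abs(sum(a * b for a, b in zip(x, y))) ** p
    return total

# Three unit vectors at mutual angle 120 degrees: an ETF for d = 2, N = 3.
mb = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
      for k in range(3)]

N, d, p = 3, 2, 4.0
energy = p_frame_energy(mb, p)
bound = N * (N - 1) * ((N - d) / (d * (N - 1))) ** (p / 2)
print(round(energy, 10), round(bound, 10))  # -> 0.375 0.375
```

For configurations of unit vectors that do not form an equiangular tight frame the inequality is strict.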
The Welch bound gives a lower bound for the {\it coherence} of a system of unit vectors: for unit vectors $\{\varphi_{i}\}_{i=1}^N\subset \mathbb R^{d}$ or $\mathbb C^{d}$, $$\max\limits_{i\neq j} |\langle \varphi_i,\varphi_j \rangle|\geq \sqrt{\frac{N-d}{d(N-1)}}.$$ This is one of several bounds which limit how spread out the one-dimensional subspaces corresponding to the vectors may be \cite{wel74}; see \cite{lev82} for similar bounds and their derivation from a linear programming approach. Gerzon's bound \cite{lem73} limits the size of an ETF, and states, in the real and complex case respectively, $$N\leq \binom{d+1}{2}\ \text{ and }\ N\leq d^2.$$ We call ETFs attaining this bound {\it maximal}. Maximal ETFs in real spaces $\mathbb R^d$ may exist only when $d=1, 2, 3$ or $(2m+1)^2-2$ for some natural $m$. Maximal real ETFs are known to exist only for $d=1, 2, 3, 7, 23$. More details on the existence of maximal ETFs and the improvement of Gerzon's bound for $d\neq (2m+1)^2-2$ are available in \cite{gla18}. The existence of complex maximal ETFs in every dimension is a famous open conjecture \cite{zau99}. Since maximal ETFs attain the Welch bound, their coherence $\alpha$ must satisfy $\alpha^2=\frac 1 {d+2}$ in the real case and $\alpha^2=\frac 1 {d+1}$ in the complex case. We discuss maximal ETFs further in connection with Theorem~\ref{thm:cont-energy} in Section~\ref{sec:disc}. Using the lemmas from the previous section, we now give another observation on the relation between optimizing $E_p(\mathbf{A})$ and the problem $M(c,p,N)$. \begin{theorem}\label{thm:gen} For any $1\leq p< 2$ and any real $N\times N$ matrix $\mathbf{A}$ of rank $d$ with unit diagonal elements, $$E_p(\mathbf{A})\geq \frac {2(N-d)}{p^{\frac p 2} (2-p)^{\frac {2-p} 2}}.$$ \end{theorem} \begin{proof} For $N=d$, the right-hand side is 0. We assume $N\geq d+1$ for the rest of the proof. The function $f_{\frac 1 {N-d}, p} (t) / t$ is minimized on $(0,\frac 1 {N-d})$ at $t_0=\frac {2-p} {2(N-d)}$.
Hence, $$f_{\frac 1 {N-d}, p} (t)\geq \frac {2(N-d)} {p^{\frac p 2} (2-p)^{\frac {2-p} 2}}\, t,\ \text{ and so }$$ $$M\left(\frac 1 {N-d}, p, N\right)\geq \frac {2(N-d)} {p^{\frac p 2} (2-p)^{\frac {2-p} 2}} \sum_{i=1}^N t_i = \frac {2(N-d)} {p^{\frac p 2} (2-p)^{\frac {2-p} 2}}.$$ The final bound then follows from Lemma \ref{lem:min}. \end{proof} Taking $p=1$ in Theorem \ref{thm:gen}, we get $E_1(\mathbf{A})\geq 2(N-d)$. For $N$ in the range $d+1\leq N \leq 2d$, Theorem \ref{thm:gen} thus yields the bound conjectured in \cite{mat17}. We formulate it here as a simple statement about angles between lines in Euclidean spaces, where it is understood that such angles are restricted to lie in $[0,\pi/2]$. \begin{corollary}\label{cor:1-frame} The sum of cosines of all pairwise angles between $N$ lines in $\mathbb{R}^d$ is at least $N-d$. For $N\in[d,2d]$, the bound is sharp and the unique minimizer is the set of $N$ lines forming a repeated orthonormal basis. \end{corollary} As hinted in the discussion in \cite{mat17}, Corollary \ref{cor:1-frame} may also be proven by induction. For the sake of completeness, we provide such a proof here as well. \begin{proof}[Alternative proof of Corollary \ref{cor:1-frame}] We choose a unit vector in the direction of each of the $N$ lines and form the $N\times N$ Gram matrix of the chosen vectors. The matrix has rank no greater than $d$, so any $(d+1)\times(d+1)$ principal submatrix is singular; since its diagonal entries are equal to 1, the Gershgorin circle theorem applied to the eigenvalue 0 yields at least one row whose sum of absolute values of off-diagonal entries is at least 1. The inductive step consists in finding one such row and applying the inductive hypothesis to the $(N-1)\times (N-1)$ principal submatrix obtained by deleting this row and the corresponding column. \end{proof} We do not know how to extend this short proof to the more general problem of finding lower bounds for the 1-frame energy of matrices that is covered by Theorem \ref{thm:gen}.
The proof of Corollary \ref{cor:1-frame} does not seem to work for non-symmetric matrices. We also note that Theorem \ref{thm:gen} implies the same bound $2(N-d)$ for $E_p(\mathbf{A})$ when $p\in(0,1)$. For the case of $N=d+1$, Chen, Gonzales, Goodman, Kang, and Okoudjou \cite{che19} posed a conjecture on the minimum of the $p$-frame energy for all $p\in(0,2)$. They conjectured that a global minimum is necessarily formed by $k+1$ unit vectors whose endpoints form a regular $k$-dimensional simplex and $N-k-1$ vectors that are pairwise orthogonal and orthogonal to the subspace of the regular simplex. In particular, their conjecture states that for $p<\frac {\ln 3} {\ln 2}\approx 1.58496$, the minimum is 2 and is attained only on the repeated orthonormal basis with $d+1$ vectors. We study the more general problem of $d+m$ vectors, $1\leq m < d$, and, using Lemma \ref{lem:min}, prove that the repeated orthonormal basis minimizes $E_p$ for $p \in [1,p_m]$. In particular, for $m=1$, our results confirm the conjecture from \cite{che19} for $p$ in the range $1\leq p\leq 2(\frac {\ln 3} {\ln 2}-1)\approx 1.16993$. \begin{theorem}\label{thm:d+1} For $p\in[1,2\log{\frac{2m+1}{2m}}/\log{\frac{m+1}{m}}]$, $1\leq m< d$, and a real $(d+m)\times (d+m)$ matrix $\mathbf{A}$ of rank $d$ with ones along the diagonal, $$E_{p}(\mathbf{A})=\sum\limits_{i\neq j}|A_{i,j}|^{p} \geq 2m.$$ \end{theorem} The following lemma is used in the proof of Theorem \ref{thm:d+1}. \begin{lemma} Set $\alpha=\frac 1 m \left(\frac{1}{2}-\frac{p}{4}\right)$. For $p\in[1,2]$, the minimum in $M(\frac{1}{m},p,N)$ is attained at $(t_{j})_{j=1}^N$ of the form \begin{enumerate} \item[(i)] $t_1=\dots=t_{k}=\frac{1}{k},\ t_{k+1}=\dots=t_{N}=0$, where $\frac{1}{k}\geq \alpha$, or, \item[(ii)] $t_1=\dots=t_k=x$, $t_{k+1}=1-kx$, $t_{k+2}=\dots=t_{N}=0$, where $x\geq \alpha$, $0<1-kx<\alpha$.
\end{enumerate} \end{lemma} \begin{proof} Computing the second derivative of $f_{1/m,p}(t)$, we see it is concave on $[0,\alpha]$ and convex on $[\alpha,\frac 1 m)$, where $\alpha=\frac 1 m \left(\frac{1}{2}-\frac{p}{4}\right)$. All $t_i$ lying in $[0,\alpha]$ may be moved to the endpoints of the interval, except for at most one number, while keeping their sum constant and without increasing the sum of values of $f_{1/m,p}$ (this follows, for instance, from the Karamata inequality, see \cite[p.~89]{har88}). After this we may apply Jensen's inequality to all numbers from $[\alpha,\frac 1 m)$ and assume they are all equal. The resulting minimizer is then necessarily of one of the two types listed in the statement of the lemma. \end{proof} We now proceed with the proof of Theorem \ref{thm:d+1}. \begin{proof} \indent Set $p_m=2\log{\frac{2m+1}{2m}}/\log{\frac{m+1}{m}}$ and $q_{m}=\frac{p_m}{2}$. Clearly, it is sufficient to prove the lower bound for $p=p_m$ only. We use Lemma \ref{lem:min} and show that $M(\frac{1}{m},p_m,N)\geq 2m$. Consider the first case in the above lemma, so that $$t_1=\dots=t_{k}=\frac{1}{k},\ t_{k+1}=\dots=t_{N}=0,$$ where $\frac{1}{k}\geq \alpha=\frac 1 m \left(\frac{1}{2}-\frac{p_m}{4}\right)$. In this case, we need to minimize the value $$kf_{1/m,p_m}\left(\frac{1}{k}\right)=k\left( \frac {m} {k-m} \right)^{\frac{p_m}{2}}.$$ The real function \begin{equation}\label{eq:fm} F_m(x)=x\left( \frac {m} {x-m} \right)^{\frac{p_m}{2}} \end{equation} for $x>m$ has exactly one local minimum. The degree $p_m$ was specifically chosen so that $F_m(2m)=F_m(2m+1)=2m$. Then by Rolle's theorem, the local minimum of $F_m(x)$ lies in $[2m,2m+1]$. The minimum of $F_m(x)$ over natural values of $x$, $x>m$, is, therefore, attained at $2m$ and $2m+1$ and is equal to $2m$.
\indent In the second case, $x<\frac{1}{k}$ and $x\geq \alpha=\frac 1 m \left(\frac{1}{2}-\frac{p_m}{4}\right)$. It is straightforward to show that $p_m<\frac {4m+2} {4m+1}$ for all natural $m$. Subsequently $k<\frac 1 {\alpha} < 4m+1$ so that $k$ can take (integer) values only in $[m,4m]$. To show $E_{p_m}\geq 2m$ it suffices then to show for all $m\leq j\leq 4m$, and all $x$ in $I=(\frac{1}{j+1},\frac{1}{j})$ that the function \begin{align*} g_{j}(x) = j \left( \frac{mx}{1-mx} \right)^{q_{m}}+\left(\frac{m(1-jx)}{1-m(1-jx)}\right)^{q_{m}}, \end{align*} satisfies $g_{j}(x)\geq 2m$. This will be demonstrated using properties specific to $g_{j}(x)$, namely that the function has at most one critical point, $g_{j}^{\prime}(x)=0$, inside the interval $I$. Taking derivatives, \begin{align*} g^{\prime}_{j}(x)&=q_{m} jm \left[ \left( \frac{mx}{1-mx} \right)^{q_{m}-1} \frac{1}{(1-mx)^2}- \left(\frac{m(1-jx)}{1-m(1-jx)}\right)^{q_{m}-1} \frac{1}{(1+m(-1+jx))^2} \right], \end{align*} so that $g'_{j}(x)=0$ gives \begin{align*} \left( \frac{x(1+m(-1+jx))}{(1-mx)(1-jx)} \right)^{q_{m}-1} &= \frac{(1-mx)^2}{(1+m(-1+jx))^2} \\ \left( \frac{x(1+m(-1+jx))}{(1-mx)(1-jx)} \right)^{q_{m}+1} &= \frac{x^2}{(1-jx)^2} \\ \frac{1+m(-1+jx)}{1-mx} &= \left( \frac{x}{(1-jx)} \right)^{\frac{2}{q_{m}+1}-1} \\ \frac{1-mx}{1+m(-1+jx)} &= \left( \frac{x}{(1-jx)} \right)^{1-\frac{2}{q_{m}+1}}. \end{align*} \vspace{1 mm} Calling the function on the left in the above expression $f(x)$ and the function on the right $g(x)$, \begin{align*} f^{\prime \prime}(x) = \frac{2j(1+j-m)m^2}{(1+m(-1+jx))^3}>0\ \text{ on }\ I, \end{align*} while letting $\gamma=1-\frac{2}{q_{m}+1}$, \begin{align*} g^{\prime \prime}(x) = \frac{\gamma (\frac{x}{1-jx})^\gamma (\gamma-1+2jx)}{x^2(jx-1)^2}<0\ \text{ on }\ I, \end{align*} since $\gamma<0$. Thus $f(x)$ is convex on $I$, while $g(x)$ is concave on $I$. 
Since $f(\frac{1}{j+1})=g(\frac{1}{j+1})$ and $f'(\frac{1}{j+1})\leq g'(\frac{1}{j+1})$ when $j<4m$, it follows that $f(x)=g(x)$ at exactly one point $x\in I$ ($x\neq \frac{1}{j+1},\frac{1}{j}$). Note that when $j=4m$ there are no critical points in $I$. Now, \begin{align*} g_{j}'\left(\frac{1}{j+1}\right)=0\ \text{ and }\ \lim\limits_{x\rightarrow \frac{1}{j}} g_{j}'\left(x\right)=-\infty. \end{align*} \indent Thus the critical points correspond to local maxima of $g_{j}(x)$, and it suffices to check the value of $g_{j}(x)$ at the endpoints of $I$ for each $m\leq j\leq 4m$ to establish the desired lower bound. These values are \begin{align*} g_{j}\left(\frac{1}{j+1}\right)=(1+j)\left(\frac{m}{1+j-m}\right)^{q_m}=F_m(j+1),\ \text{ and }\ g_{j}\left(\frac{1}{j}\right)=j\left(\frac{m}{j-m}\right)^{q_m}=F_m(j), \end{align*} where $F_m$ is the function defined in equation (\ref{eq:fm}). The minimal value of $F_m(x)$ on natural numbers, $x>m$, as we established earlier, is precisely $2m$. \end{proof} Following the proofs of Theorem \ref{thm:d+1} and Lemma \ref{lem:min}, it is easy to check that the only minimizer is the repeated orthonormal basis. Theorem \ref{thm:d+1} was first announced by the second author in \cite{par19}. \section{Minimizing energy in two dimensions}\label{sec:2dim} In this section, we study the problem of minimizing the $p$-frame energy for collections of unit vectors in the plane. In particular, we show that the repeated orthonormal basis is the minimizer for $p\leq 1.3$. As one of the tools in our proof, we use the solution of the Fejes T\'{o}th problem mentioned in Section \ref{sec:intro}. \begin{theorem}\label{thm:fej-2} Let $x_1, x_2, \ldots, x_N$ be unit vectors in the plane.
Then, $$\sum\limits_{i,j=1}^N \arccos|\langle x_i,x_j \rangle|\leq \frac{\pi N^2 }{4}, \text{ if } N \text{ is even},$$ $$\sum\limits_{i,j=1}^N \arccos|\langle x_i,x_j \rangle|\leq \frac{\pi (N^2-1)}{4}, \text{ if } N \text{ is odd}.$$ \end{theorem} Theorem \ref{thm:fej-2} was proven in \cite{fod16}. Several alternative proofs were also obtained in \cite{bil19}. \begin{theorem}\label{thm:dim-2} Let $\mathbf{A}$ be a Gram matrix of $N$ unit vectors in the plane. Then for $p\in (0,1.3]$, $E_p(\mathbf{A})\geq N(N-2)/2$ if $N$ is even and $E_p(\mathbf{A})\geq (N-1)^2/2$ if $N$ is odd. \end{theorem} \begin{proof} Assume $\mathbf{A}$ is the Gram matrix of unit vectors $x_1, x_2, \ldots, x_N$ in the plane. For any $p\in [0,2]$, the lower bound on $E_p$ for even $N$ follows from the inequality $|t|^p\geq t^2$ for $|t|\leq 1$, together with the fact that a repeated orthonormal basis is one of the minimizers of $E_2$ when the number of vectors is divisible by the dimension (see \cite{ehl12,sid74,ven01}). It is sufficient to prove the lower bound for $p=1.3$, so we consider only this case for the rest of the proof. For odd $N$, we split our proof into two parts: 1) the angle between each pair of vectors is sufficiently far from $\pi/2$; 2) some pair of vectors is almost orthogonal. For the first case, we assume that all angles $\arccos|\langle x_i, x_j\rangle |$, $1\leq i,j\leq N$, are no greater than $1.34$.
It is straightforward to check that for any $t\in[0,1.34]$, $$\cos^{1.3} t \geq \frac 2 {\pi} \left(\frac \pi 2 - t\right).$$ Summing these inequalities for all $t=\arccos|\langle x_i, x_j\rangle|$, $1\leq i,j \leq N$, $i\neq j$, we obtain $$E_{1.3} (\mathbf{A})\geq N^2 - N - \frac 2 \pi \sum\limits_{i,j=1}^N \arccos|\langle x_i,x_j \rangle|.$$ Using the solution of the Fejes T\'{o}th problem from Theorem \ref{thm:fej-2}, we conclude $$E_{1.3} (\mathbf{A})\geq N^2 - N - \frac 2 \pi (N^2-1) \frac \pi 4 = \frac {(N-1)^2} 2.$$ For the second case, we assume that the largest angle among $\arccos|\langle x_i, x_j\rangle |$, $1\leq i,j\leq N$, is at least $1.34$. Without loss of generality, let one such angle be $\arccos|\langle x_1, x_2\rangle |$. Our proof will be by induction on odd numbers $N$. The statement of the theorem is clearly true for $N=1$. Let $N$ be an odd number greater than 1. We will now show that for any $i$, $3\leq i\leq N$, $|\langle x_1, x_i\rangle |^{1.3}+|\langle x_2, x_i\rangle |^{1.3}\geq 1$. We can always switch a vector to its opposite without changing the total energy, so we may assume that $x_i$ lies in the angle formed by $x_1$ and $x_2$. Assume the angle between $x_1$ and $x_2$ is $\vartheta$, and that $x_i$ forms angles $\phi$ and $\vartheta-\phi$ with $x_1$ and $x_2$, respectively. Note that both $\phi$ and $\vartheta-\phi$ must be less than $\pi/2$, since otherwise one of them would be closer to $\pi/2$ than $\vartheta$. Without loss of generality, $\phi\leq\vartheta/2$. There are two possible options: $\vartheta\leq \pi/2$ and $\vartheta>\pi/2$. For the first option, $$|\langle x_1, x_i\rangle |^{1.3}+|\langle x_2, x_i\rangle |^{1.3} = \cos^{1.3} \phi + \cos^{1.3} (\vartheta-\phi) \geq \cos^{1.3} \phi + \cos^{1.3} \left(\frac \pi 2-\phi\right) \geq 1.$$ For the second option, $\vartheta\leq\pi-1.34$ because $\vartheta=\pi-\arccos|\langle x_1, x_2\rangle |$.
The angle $\vartheta$ is the one closest to $\pi/2$ among all angles formed by the vectors. In particular, $\vartheta-\phi$ cannot be closer to $\pi/2$, so $\vartheta-\phi\leq \pi - \vartheta$. This condition can be rewritten as $\vartheta-\phi \leq \frac {\pi-\phi} 2$. For the next step, we bound $\cos^{1.3} \phi + \cos^{1.3} \left(\vartheta-\phi\right)$ from below by keeping $\phi$ fixed and increasing $\vartheta-\phi$ as much as possible while preserving the conditions $\vartheta-\phi\leq\pi-\phi-1.34$ and $\vartheta-\phi\leq \frac{\pi-\phi} 2$. While increasing $\vartheta$, at some point one of these two inequalities becomes an equality. The two possibilities correspond to two cases, depending on the value of $\phi$. For the first case, assume $\frac {\pi-\phi} 2 > 1.34$, i.e. $\phi<\pi-2.68$. Then $$\cos^{1.3} \phi + \cos^{1.3} (\vartheta-\phi)\geq \cos^{1.3} \phi + \cos^{1.3} \left(\frac {\pi-\phi} 2\right) = \cos^{1.3} \phi + \sin^{1.3} \frac \phi 2.$$ The function $\cos^{1.3} \phi + \sin^{1.3} \frac \phi 2$ is at least 1 for $\phi\in[0, \pi-2.68]$, so the first case is covered. For the second case, we assume $\frac {\pi-\phi} 2 \leq 1.34$, i.e. $\phi\geq\pi-2.68$. We know that $\phi\leq\vartheta/2\leq\frac {\pi-1.34} 2$. Using the inequality $\vartheta\leq\pi-1.34$ again, we get $\vartheta-\phi\leq\pi-1.34-\phi$. This implies $$\cos^{1.3} \phi + \cos^{1.3} (\vartheta-\phi)\geq \cos^{1.3} \phi + \cos^{1.3} (\pi-1.34-\phi).$$ The function $\cos^{1.3} \phi + \cos^{1.3} (\pi-1.34-\phi)$ is at least 1 for $\phi\in [\pi-2.68,\frac {\pi-1.34} 2]$.
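The three elementary scalar inequalities invoked in this proof can be confirmed on a fine grid; below is a minimal numerical sanity check (not part of the formal argument; plain Python):

```python
import math

def min_on_grid(f, a, b, n=20000):
    """Minimum of f over n+1 equally spaced points of [a, b]."""
    return min(f(a + (b - a) * i / n) for i in range(n + 1))

p = 1.3
# cos^p(t) >= (2/pi)(pi/2 - t) on [0, 1.34]
h1 = min_on_grid(lambda t: math.cos(t) ** p - (2 / math.pi) * (math.pi / 2 - t),
                 0.0, 1.34)
# cos^p(phi) + sin^p(phi/2) >= 1 on [0, pi - 2.68]  (first case)
h2 = min_on_grid(lambda t: math.cos(t) ** p + math.sin(t / 2) ** p - 1,
                 0.0, math.pi - 2.68)
# cos^p(phi) + cos^p(pi - 1.34 - phi) >= 1 on [pi - 2.68, (pi - 1.34)/2]  (second case)
h3 = min_on_grid(lambda t: math.cos(t) ** p + math.cos(math.pi - 1.34 - t) ** p - 1,
                 math.pi - 2.68, (math.pi - 1.34) / 2)
```

All three grid minima are nonnegative (up to floating-point error at the exact-equality endpoints); the first inequality is within roughly $2\cdot 10^{-5}$ of equality at $t=1.34$, which suggests why the constant $1.34$ is nearly the largest usable.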
Overall, we conclude that $|\langle x_1, x_i\rangle |^{1.3}+|\langle x_2, x_i\rangle |^{1.3} = \cos^{1.3} \phi + \cos^{1.3} (\vartheta-\phi)\geq 1.$ Then, by the induction hypothesis applied to the unit vectors $x_3, \ldots, x_N$, $$E_{1.3} (\mathbf{A})\geq \frac {(N-3)^2} 2 + 2\sum\limits_{i=3}^N (|\langle x_2, x_i\rangle |^{1.3}+|\langle x_1, x_i\rangle |^{1.3})$$ $$ \geq \frac {(N-3)^2} 2 + 2(N-2) = \frac {(N-1)^2} 2.$$ \end{proof} We do not know how to extend this proof to the general case of matrices of rank 2. The value $p=1.3$ is not the best possible. One can impose all the conditions necessary for the proof of Theorem \ref{thm:dim-2} to work and optimize over $p$. The numerical value obtained this way is approximately $1.317$. \section{Discussion}\label{sec:disc} Recently, the conjecture from \cite{che19} mentioned above was proved in \cite{xu19}. In particular, the range of $p$ on which the orthonormal basis plus a vector minimizes in Theorem~\ref{thm:d+1} can be increased to $p=\log{3}/\log{2}$. The behavior of the corresponding maximal values of $p$, below which similar constructions are expected to be minimizers, is suggested in the following conjecture (appearing first in \cite{par19}). \begin{conj} \label{conj:onb} Let $N=m+kd$ points be given in $\mathbb{S}^{d-1}$, with $1\leq m<d$, $d\geq 2$, and a Gram matrix $\mathbf{A}\in\mathbb R^{N\times N}$. Then there is a value of $p_0$, independent of the dimension $d$ and the excess $m$, such that the repeated orthonormal sequence $\{e_{j \mod d} \}_{j=1}^N$ minimizes $E_p$ over all size $N$ systems of unit vectors (with value $E_p(\mathbf{A})=d(k^2-k)+2k$) for $p<p_0$ and the minimum value of $E_p(\mathbf{A})$ satisfies $E_p(\mathbf{A})<d(k^2-k)+2k$ when $p>p_0$. Further, $p_0=p_0(k)$ satisfies $p_0(k)\rightarrow 2$ as $k\rightarrow \infty$. \end{conj} Theorem \ref{thm:dim-2} can be interpreted as an improvement of Theorem~\ref{thm:d+1} from the previous section, and as partial progress towards the above conjecture.
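As a numerical illustration of Theorem \ref{thm:dim-2} (a sanity check, not part of the argument; plain Python): the repeated orthonormal basis in the plane attains the stated lower bounds, and random planar configurations do not beat them.

```python
import math, random

def p_frame_energy(angles, p):
    """E_p of the unit vectors (cos a, sin a): sum of |cos(a_i - a_j)|^p, i != j."""
    return sum(abs(math.cos(a - b)) ** p
               for i, a in enumerate(angles) for j, b in enumerate(angles) if i != j)

def dim2_bound(N):
    """Lower bound from Theorem thm:dim-2 (p in (0, 1.3])."""
    return N * (N - 2) / 2 if N % 2 == 0 else (N - 1) ** 2 / 2

# repeated orthonormal basis: angles 0 and pi/2, repeated as evenly as possible
for N in range(2, 12):
    onb = [0.0 if i % 2 == 0 else math.pi / 2 for i in range(N)]
    assert abs(p_frame_energy(onb, 1.3) - dim2_bound(N)) < 1e-9

# random configurations never beat the bound
rng = random.Random(0)
for _ in range(50):
    N = rng.randrange(2, 10)
    pts = [rng.uniform(0, math.pi) for _ in range(N)]
    assert p_frame_energy(pts, 1.3) >= dim2_bound(N) - 1e-9
```

The even-$N$ count is $2\cdot\frac{N}{2}(\frac{N}{2}-1)=N(N-2)/2$ ordered pairs of repeated vectors, and the odd-$N$ count is $(k+1)k+k(k-1)=2k^2=(N-1)^2/2$ for $N=2k+1$, matching the bounds exactly.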
The parameter $p_0=1.3$ in Theorem \ref{thm:dim-2} is the same for all values of $k$. It might be interesting to find an elementary argument which shows that $p_0(k)\rightarrow 2$ in dimension $2$, let alone generally. Theorem~\ref{thm:d+1} and Conjecture~\ref{conj:onb} appear to be examples of a more general phenomenon. The direct analogs/extensions of orthonormal bases are regular simplices in the non-projective setting and maximal ETFs in projective spaces. In the first case, the analogous potential function is $f^{\Delta}_{d,p}(t)=|t+\frac 1 d|^p$. In the second case, one can view the potential function $f(t)=|t|^{p}$ as an instance of the more general function $f_{\alpha,p}(t)=|t^2-\alpha^2|^{p}$, and the orthonormal basis as an example of an equiangular tight frame ($|\langle x,y \rangle|=\alpha$ for $x\neq y$) with coherence $\alpha=0$. The problem of minimizing the energies $E^{\Delta}_{d,p}$ and $E_{\alpha,p}$, associated with $f^{\Delta}_{d,p}$ and $f_{\alpha,p}$, respectively, for $p$ close to $0$ might be expected to pick out repeated regular simplices and repeated ETFs with coherence $\alpha$. We conjecture generally: \begin{conj} \label{conj:etf} \begin{enumerate} \item[(i)] Let $\mathbf{A}\in\mathbb R^{N\times N}$ be the Gram matrix of $N=m+k(d+1)$ points given in $\mathbb{S}^{d-1}$, with $1\leq m<d+1$, $d\geq 2$, and let $\{\varphi_i\}_{i=1}^l\subset \mathbb R^{d}$ be the maximal regular simplex in $\mathbb{S}^{d-1}$. Then there is a value of $p_0>0$, such that the repeated regular simplex $\{\varphi_{j \mod l} \}_{j=1}^N$ minimizes $E^{\Delta}_{d,p}$ over all size $N$ systems of unit vectors for $p<p_0$. \item[(ii)] Let $\mathbf{A}\in\mathbb R^{N\times N}$ be the Gram matrix of $N=m+kl$ points given in $\mathbb{S}^{d-1}$, with $1\leq m<l$, $d\geq 2$, and $l$ the size of an ETF with coherence $\alpha$, $\{\varphi_i\}_{i=1}^l\subset \mathbb R^{d}$.
Then there is a value of $p_0>0$, such that the repeated ETF sequence $\{\varphi_{j \mod l} \}_{j=1}^N$ minimizes $E_{\alpha,p}$ over all size $N$ systems of unit vectors for $p<p_0$. \end{enumerate} \end{conj} Conjectures \ref{conj:onb} and \ref{conj:etf} emphasize the possibility of repeated configurations minimizing $p$-frame or $p$-frame-type energies among sets of $N$ points for all large enough $N$. This property may be seen as a strong version of stability of an optimizing set. The collections of unit vectors $\Phi=\{\varphi_i\}_{i=1}^l\subset \mathbb R^{d}$ that, for all $N\geq l$, have a repeated set $\{\varphi_{j \mod l} \}_{j=1}^N$ minimizing the energy defined by the potential function $f$ among all $N\times N$ Gram matrices $\mathbf{A}$, also have the property that the uniform distribution over $\Phi$ must solve the problem \begin{equation}\label{eq:conten}\min\limits_{\mu\in\mathcal{P}(\mathbb S^{d-1})} I_f(\mu)=\min\limits_{\mu\in\mathcal{P}(\mathbb S^{d-1})} \int_{\mathbb S^{d-1}}\int_{\mathbb S^{d-1}} f(\langle x, y\rangle)d\mu(x)d\mu(y),\end{equation} where $\mathcal{P}(\mathbb S^{d-1})$ denotes the set of Borel probability measures $\mu$ on $\mathbb S^{d-1}$, i.e., those with $\mu(\mathbb S^{d-1})=1$. For the case of $p$-frame energies, {\it tight designs} are examples of configurations such that uniform distributions over them minimize $I_f(\mu)$ over ranges of $p$ \cite{bil19a}. This behavior is slightly different from that conjectured above, as repeated tight designs of size $l$ can generally only be expected to minimize the discrete energies when $N=kl$. The existence of a $p_{0}$ in these conjectures might be expected in connection with ideas from the field of {\it compressed sensing}.
One should expect that, as $p\rightarrow 0$, the solution to this minimization problem is a repeated ETF (since, upon vectorizing the Gram matrices and considering their difference, the sparsest difference arises this way), but Conjecture~\ref{conj:etf} strengthens this to say that the solution is a repeated ETF already for $p$ sufficiently small. In connection with these observations, we collect further support for the above conjectures for maximal ETFs and the regular simplex, by showing that they minimize the associated continuous energies (\ref{eq:conten}). The second part of the theorem below also holds with the coherence replaced by the corresponding value for a complex maximal ETF; these are known alternatively as {\it symmetric informationally complete positive operator-valued measures} (SIC-POVMs, see \cite{fuchs17} for more details). We give the proof only in the real case, but the same type of argument applies in the complex case. \begin{theorem} \label{thm:cont-energy} The following statements hold: \begin{enumerate} \item[(i)] The uniform distribution over the vertices of a regular simplex $\{\varphi_i\}_{i=1}^{d+1}\subset \mathbb S^{d-1}$ minimizes the continuous energy $I^{\Delta}_{d, p}$ for $f^{\Delta}_{d,p}(t)=|t+\frac{1}{d}|^p$ for all $p\in(0,2]$. \item[(ii)] Whenever a maximal ETF $\{\varphi_i\}_{i=1}^M\subset \mathbb S^{d-1}$ exists (with coherence $\alpha^2=\frac{1}{d+2}$), the uniform distribution over its points minimizes the continuous energy $I_{\alpha,p}$ for $f_{\alpha,p}(t)=|t^2-\alpha^2|^p$ for all $p\in(0,2]$. \end{enumerate} \begin{proof} Note that the inequalities \begin{equation}\label{eq:ineqs} \left|\frac{t+\frac{1}{d}}{1+\frac{1}{d}}\right|^p \geq \left|\frac{t+\frac{1}{d}}{1+\frac{1}{d}}\right|^2\ \text{ and }\ \left|\frac{t^2-\frac{1}{d+2}}{1-\frac{1}{d+2}}\right|^p \geq \left|\frac{t^2-\frac{1}{d+2}}{1-\frac{1}{d+2}}\right|^2, \end{equation} hold for $t\in[-1,1]$.
The first inequality implies $I^{\Delta}_{d,p}\geq (1+\frac 1 d)^{p-2} I^{\Delta}_{d,2}$ and the equality holds for the uniform distribution over the regular simplex. The second inequality implies $I_{\alpha,p}\geq (1-\frac 1 {d+2})^{p-2} I_{\alpha,2}$ and the equality holds for the uniform distribution over a maximal ETF. It is sufficient then to prove the theorem for $p=2$. For the proof in the case $p=2$, we use the notion of {\it positive definite functions} on unit spheres. A function of inner products is positive definite on $\mathbb S^{d-1}$ if for any collection of points $\{ x_{i}\}_{i=1}^N \subset \mathbb S^{d-1}$ and sequence of complex numbers $\{c_{i}\}_{i=1}^N\subset\mathbb C$ it is true that $$\sum\limits_{i,j=1}^{N} c_i\overline{c_j}f(\langle x_i, x_j \rangle)\geq 0.$$ This condition implies that for any positive definite $f$ and any measure $\mu$ on $\mathbb S^{d-1}$, $$I_f(\mu)=\int_{\mathbb S^{d-1}}\int_{\mathbb S^{d-1}} f(\langle x, y\rangle)d\mu(x)d\mu(y)\geq 0.$$ Positive definite functions on spheres were characterized by Schoenberg \cite{sch41}. In particular, $t$, $t^2-\frac 1 d$, and $t^4-\frac 6 {d+4} t^2+ \frac 3 {(d+2)(d+4)}$ (Gegenbauer polynomials of degrees 1, 2, 4) are positive definite functions on $\mathbb S^{d-1}$. Since $(t+\frac 1 d)^2= (t^2-\frac 1 d) + \frac 2 d t + \frac {d+1} {d^2}$, $$I^{\Delta}_{d,2} (\mu) \geq \frac {d+1} {d^2}.$$ It is easy to check that for the uniform distribution over the regular simplex, $I^{\Delta}_{d,2}$ is precisely $\frac {d+1} {d^2}$. Using $$\left(t^2-\frac 1 {d+2}\right)^2 = \left(t^4-\frac 6 {d+4} t^2+ \frac 3 {(d+2)(d+4)}\right) + \frac {4(d+1)} {(d+2)(d+4)} \left(t^2-\frac 1 d\right) + \frac {2(d+1)} {d(d+2)^2},$$ we conclude that $$I_{\alpha,2} (\mu) \geq \frac {2(d+1)} {d(d+2)^2}.$$ Again it is easy to check that the equality is attained on the uniform distribution over the maximal ETF. 
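For concreteness, the simplex value can be computed directly: the vertices of a regular simplex inscribed in $\mathbb S^{d-1}$ satisfy $\langle \varphi_i, \varphi_j\rangle = -\frac{1}{d}$ for $i\neq j$, so for the uniform measure $\mu_\Delta$ on $\{\varphi_i\}_{i=1}^{d+1}$ only the diagonal terms survive:

```latex
\begin{align*}
I^{\Delta}_{d,2}(\mu_\Delta)
  &= \frac{1}{(d+1)^2}\sum_{i,j=1}^{d+1}\Big(\langle \varphi_i,\varphi_j\rangle+\tfrac{1}{d}\Big)^{2}
   = \frac{1}{(d+1)^2}\,(d+1)\Big(1+\tfrac{1}{d}\Big)^{2}
   = \frac{d+1}{d^2}.
\end{align*}
```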
\end{proof} \end{theorem} Using the design conditions, one can also show that the configurations from Theorem \ref{thm:cont-energy} are the unique minimizers (up to the uniqueness of maximal ETFs in the second case) for the corresponding energies when $p\in(0,2)$, similarly to how it is done in \cite{bil19a} for the general case of tight designs and $p$-frame energies. \section{Acknowledgments} The authors would like to thank Wei-Hsuan Yu for fruitful discussions, Boris Bukh and Chris Cox for sharing their manuscript and discussing their results, and Kasso Okoudjou for talking about his and his coauthors' research in this direction and informing us about their conjectures. This paper is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while the first author was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Spring 2018 semester. The second author was supported in part by a grant from the US National Science Foundation, DMS-1600693. \bibliographystyle{amsalpha}
https://arxiv.org/abs/1902.09450
Asymptotic complements in the integers
Let $W\subseteq \mathbb{Z}$ be a non-empty subset of the integers. A nonempty set $C\subseteq \mathbb{Z}$ is said to be an asymptotic complement to $W$ if $W+C$ contains all the integers except possibly a set of finite size. $C$ is said to be a minimal asymptotic complement if $C$ is an asymptotic complement, but $C\setminus \lbrace c\rbrace$ is not an asymptotic complement for any $c\in C$. Asymptotic complements have been studied in the context of representations of integers since the time of Erdős, Hanani, Lorentz and others, while the notion of minimal asymptotic complements is due to Nathanson. In this article, we study minimal asymptotic complements in $\mathbb{Z}$ and deal with a problem of Nathanson on their existence and their inexistence.
\section{Introduction} \subsection{Background and Motivation} Let $(G,\cdot)$ be a group where $\cdot$ denotes its group operation. If $A, B$ are two nonempty subsets of $G$, then we define the product set $A\cdot B$ as $$A\cdot B := \{a\cdot b \,|\, a\in A, b\in B\}.$$ Henceforth, we will omit the symbol ``$\,\cdot\,$'' and denote $A\cdot B$ by $AB$. If $A$ (resp. $B$) contains only one element $a$ (resp. $b$), then $AB$ is denoted by $aB$ (resp. $Ab$). Given two nonempty subsets $W,C$ of $G$, the set $C$ is called an asymptotic complement to $W$ if there exists a finite subset $F$ of $G$ such that $W C = G\setminus F$. The set $C$ is called a minimal asymptotic complement to $W$ if $C$ is an asymptotic complement to $W$, but $C\setminus \lbrace c\rbrace$ is not for every $c\in C$. In the context of abelian groups, the composition law is denoted by the symbol ``$+$'' and we will refer to sets of the form $A+B$ as sumsets instead of product sets. Asymptotic complements have been studied for a long time in the context of representations of the integers. Indeed, one of the earliest instances of asymptotic complements can be found in the work of Lorentz \cite{Lorentz54}, who showed that given an infinite set $W\subseteq \mathbb{N}$, there exists a set $C\subseteq \mathbb{N}$ of asymptotic density zero such that $W+C$ covers ``almost all'' the positive integers\footnote{By ``almost all'' we mean all except a set of finite size.}. This proved a conjecture of Erd\H{o}s (that each infinite subset $W$ of $\ensuremath{\mathbb{N}}$ has an asymptotic complement of asymptotic density zero). Erd\H{o}s himself studied a number of questions related to the density of asymptotic complements for particular subsets $W$ of the positive integers. See \cite{Erdos54}, \cite{ErdosSomeUnsolved57} etc.
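To illustrate the definitions in $(\ensuremath{\mathbb{Z}},+)$: for $W=2\ensuremath{\mathbb{Z}}$, the set $C=\{0,1\}$ is a complement (hence an asymptotic complement), and it is a minimal asymptotic complement, since dropping either element leaves an entire residue class uncovered. A small window check (an illustrative sketch, not from the paper):

```python
# W = even integers, C = {0, 1}; check coverage on a finite window of Z.
W = set(range(-300, 301, 2))          # truncation of 2Z
window = set(range(-100, 101))

def covered(window, W, C):
    """Part of `window` hit by the sumset W + C."""
    return window & {w + c for w in W for c in C}

assert covered(window, W, {0, 1}) == window                            # a complement
assert covered(window, W, {0}) == {n for n in window if n % 2 == 0}    # drop 1: odds missed
assert covered(window, W, {1}) == {n for n in window if n % 2 != 0}    # drop 0: evens missed
```

Since each residue class missed here is infinite, neither $\{0\}$ nor $\{1\}$ is even an asymptotic complement to $2\ensuremath{\mathbb{Z}}$.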
Some more recent results on asymptotic complements can be found in \cite{WolkeOnProbErdosAddNoTh, FangChenAddCo1, FangChenAddCo2, FangChenAddCo3} etc. The notion of minimal asymptotic complements was introduced by Nathanson in 2011 in the context of minimal bases (see \cite[\S 5]{NathansonAddNT4}). He also asked several questions related to the existence and inexistence of minimal asymptotic complements. \begin{questionIntro} \cite[Problem 14]{NathansonAddNT4}\label{nathansonprob14} Let $W$ be a finite or infinite set of integers. \begin{enumerate}[(a)] \item Does there exist a minimal asymptotic complement to $W$? \item Does each asymptotic complement to $W$ contain a minimal asymptotic complement? \end{enumerate} \end{questionIntro} \begin{questionIntro} \cite[Problem 16]{NathansonAddNT4}\label{nathansonprob16} Let $G$ be an infinite group, and let $W$ be a finite or infinite subset of $G$. \begin{enumerate}[(a)] \item Does there exist a minimal asymptotic complement to $W$? \item Does each asymptotic complement to $W$ contain a minimal asymptotic complement? \end{enumerate} \end{questionIntro} In the same article, Nathanson also introduced the concept of minimal complements. Given two nonempty subsets $W,C$ of a group $G$, the set $C$ is called a complement to $W$ if $W C = G$. The set $C$ is called a minimal complement to $W$ if $C$ is a complement to $W$, but $C\setminus \lbrace c\rbrace$ is not for every $c \in C$. Henceforth, by a complement we shall always mean a complement in the above sense; if we mean a set-theoretic complement, we shall state it explicitly. \subsection{Statements of results} In this article, we shall mainly concentrate on Question \ref{nathansonprob14} and derive sufficient conditions for the existence and the inexistence of minimal asymptotic complements in $\mathbb{Z}$ (see Theorems \ref{Thm.A}, \ref{Thm.B}, \ref{Thm.C} and \ref{Thm.D}). Moreover, some of our results hold in the full generality of arbitrary infinite groups.
Thus, we deal partially with Question \ref{nathansonprob16} as well (see Theorems \ref{Thm.A}, \ref{Thm.B}). First, we consider finite subsets and subsets whose set-theoretic complement is finite. \begin{theoremIntro} \label{Thm.A} Let $G$ be an infinite group and $W\subseteq G$. \begin{enumerate} \item If $W$ is finite and nonempty, then no asymptotic complement of $W$ contains a minimal asymptotic complement and $W$ does not admit any minimal asymptotic complement. \item If $G\setminus W$ is finite and nonempty\footnote{If $G\setminus W$ is empty, then any singleton subset of $G$ is a minimal complement and minimal asymptotic complement to $W$.}, then $W$ admits a minimal complement of cardinality two and the singleton subsets of $G$ are precisely the minimal asymptotic complements of $W$. \end{enumerate} \end{theoremIntro} Theorem \ref{Thm.A}(1) answers each part of Questions \ref{nathansonprob14}, \ref{nathansonprob16} in the negative when $W$ is finite, while Theorem \ref{Thm.A}(2) answers each part of Questions \ref{nathansonprob14}, \ref{nathansonprob16} in the affirmative when $W$ has finite set-theoretic complement in $G$. The above result is proved in Section \ref{Sec:Struc}. Next, we consider subsets which are subgroups or translates of subgroups, and subsets which contain subgroups. \begin{theoremIntro} \label{Thm.B} Let $G$ be an infinite group. \begin{enumerate} \item Let $W$ be a (left or right) translate of an infinite subgroup of $G$. Then any asymptotic complement of $W$ contains a minimal asymptotic complement. \item Suppose $G\setminus W$ is infinite and $W$ contains an infinite subgroup of $G$ of finite index in $G$. Then $W$ admits a minimal asymptotic complement and a minimal complement in $G$. In particular, the union of a nonzero subgroup of $\ensuremath{\mathbb{Z}}$ and a finite subset of $\ensuremath{\mathbb{Z}}$ admits a minimal asymptotic complement.
\end{enumerate} \end{theoremIntro} Theorem \ref{Thm.B}(1) answers each part of Questions \ref{nathansonprob14}, \ref{nathansonprob16} in the affirmative when $W$ is a translate of an infinite subgroup, while Theorem \ref{Thm.B}(2) answers Questions \ref{nathansonprob14}(a), \ref{nathansonprob16}(a) in the affirmative when $W$ has infinite set-theoretic complement and contains an infinite subgroup of finite index. The above result is proved in Section \ref{Sec:Struc}. Moreover, we show that eventually periodic subsets (see Definition \ref{Def.2.4}) of $\ensuremath{\mathbb{Z}}$ do not admit any minimal asymptotic complement (see Lemma \ref{Lemma:EventuPeriNoMinAsymCom}). \begin{remark} The group $G$ in the statements of Theorems \ref{Thm.A}, \ref{Thm.B} is not assumed to be abelian. \end{remark} Finally, we concentrate on infinite subsets $W\subseteq \mathbb{Z}$ having less structure. In addition, we compare the seemingly close concepts of minimal complements and minimal asymptotic complements and show that even in the case of $\mathbb{Z}$ (which is a totally ordered abelian group), the existence of one does not automatically imply the existence of the other (see Lemmas \ref{Lemma:SufInfYesMac}, \ref{Lemma:SufInfNoMac}, \ref{Lemma:BddBelowYesMCNoMac}). In the following, $\ensuremath{\mathbb{Z}}_{?x}$ denotes the set $\{n\in \ensuremath{\mathbb{Z}} \,|\, n?x\}$ for $?\in \{<, \leq, >, \geq\}$ and $x\in \ensuremath{\mathbb{Z}}$. More generally, for a subset $A$ of $\ensuremath{\mathbb{Z}}$, $A_{?x}$ denotes the set $\{n\in A \,|\, n?x\}$ for $?\in \{<, \leq, >, \geq\}$ and $x\in \ensuremath{\mathbb{Z}}$. A nonempty subset $X$ of $\ensuremath{\mathbb{Z}}$ is said to be an interval if for any two elements $x_1, x_2\in X$ with $x_1\leq x_2$, the set $X$ contains every integer $n$ satisfying $x_1\leq n\leq x_2$. The results below are proved in Section \ref{Sec:General}.
\begin{theoremIntro} \label{Thm.C} Let $I_1, J_1, I_2, J_2, \cdots$ be nonempty finite intervals in $\ensuremath{\mathbb{Z}}$ such that the following conditions hold. \begin{enumerate}[(i)] \item For any positive integer $k$, $$\min J_k = \max I_k+1, \quad \min I_{k+1} = \max J_k + 1.$$ \item The cardinalities $\# I_k$ of the sets $I_k$ form a strictly increasing sequence $\{\# I_k\}_{k\geq 1}$. \item The cardinalities $\# J_k$ of the sets $J_k$ form a strictly increasing sequence $\{\# J_k\}_{k\geq 1}$. \end{enumerate} Then each of the following sets \begin{enumerate} \item $\cup_{k=1}^\infty I_k$, \item $(\ensuremath{\mathbb{Z}}_{< \min I_1})\cup (\cup_{k=1}^\infty I_k)$, \item $(\ensuremath{\mathbb{Z}}_{< \min I_1} \setminus F)\cup (\cup_{k=1}^\infty I_k)$ for any nonempty finite subset $F$ of $\ensuremath{\mathbb{Z}}$, \item $( \{x\in \ensuremath{\mathbb{Z}}_{< \min I_1} \,|\, x\equiv a\ensuremath{\mathrm{\;mod\;}} n\})\cup (\cup_{k=1}^\infty I_k)$ for any two integers $a,n$ with $n\neq 0$, \end{enumerate} admits a minimal complement but no minimal asymptotic complement in $\ensuremath{\mathbb{Z}}$. \end{theoremIntro} \begin{theoremIntro} \label{Thm.D} Let $W$ be a bounded below infinite subset of $\ensuremath{\mathbb{Z}}$ such that the differences of consecutive elements of $\ensuremath{\mathbb{Z}}_{\geq 1} \setminus W$ tend to $\infty$. Then $W$ admits neither a minimal complement nor a minimal asymptotic complement. \end{theoremIntro} Note that if a subset $C$ of a group $G$ is a minimal asymptotic complement to a nonempty subset $W$ of $G$, then for any $g\in G$, the set $C$ (resp. $g^{-1} C$) is a minimal asymptotic complement to $gW$ (resp. $Wg$). Thus any statement (for instance, the above results) about the existence of a minimal asymptotic complement of a particular subset $W$ of a group $G$ also remains valid for any of its left or right translates.
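The interval systems in Theorem \ref{Thm.C} are easy to generate explicitly; the following sketch uses the concrete choice $\# I_k = \# J_k = k$ (any strictly increasing lengths satisfying (i)--(iii) work):

```python
def intervals(K, start=1):
    """First K pairs (I_k, J_k) of consecutive finite intervals with
    #I_k = #J_k = k (one concrete choice satisfying Theorem C)."""
    Is, Js, a = [], [], start
    for k in range(1, K + 1):
        Is.append(list(range(a, a + k)))     # I_k, with min I_k = a
        a += k                               # min J_k = max I_k + 1
        Js.append(list(range(a, a + k)))     # J_k
        a += k                               # min I_{k+1} = max J_k + 1
    return Is, Js

Is, Js = intervals(5)
W = sorted(n for I in Is for n in I)         # truncation of the set in Theorem C(1)
# hypotheses (i)-(iii) hold for this choice:
for k in range(5):
    assert Js[k][0] == Is[k][-1] + 1
    if k + 1 < 5:
        assert Is[k + 1][0] == Js[k][-1] + 1
assert all(len(Is[k + 1]) > len(Is[k]) for k in range(4))
assert all(len(Js[k + 1]) > len(Js[k]) for k in range(4))
```

Here the $I_k$ are the "blocks" of the set $W$ and the $J_k$ are the gaps between them, both of which grow without bound.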
\section{Primer on Minimal complements} In this section, we collect some known results on the existence of minimal complements in $\mathbb{Z}$, which will help us in comparing them with minimal asymptotic complements in the subsequent sections. Nathanson showed that any nonempty finite subset of $\mathbb{Z}$ admits a minimal complement. In fact, he showed the following stronger result. \begin{theorem}[{\cite[Theorem 8]{NathansonAddNT4}}] \label{Thm.2.1} Let $W$ be a nonempty, finite subset of the integers $\ensuremath{\mathbb{Z}}$. Every complement to $W$ in $\ensuremath{\mathbb{Z}}$ contains a minimal complement to $W$. \end{theorem} Chen--Yang were the first to give conditions under which infinite subsets of $\mathbb{Z}$ do or do not admit minimal complements. They proved the following results. \begin{theorem}[{\cite[Theorem 1]{ChenYang12}}] \label{CY12Thm1} Let $W$ be a set of integers such that $\inf W = -\infty$ and $\sup W = +\infty$. Then $W$ admits a minimal complement in $\ensuremath{\mathbb{Z}}$. \end{theorem} \begin{theorem}[{\cite[Theorem 2]{ChenYang12}}] \label{CY12Thm2} Let $W = \{1 = w_{1} < w_{2} < \cdots \}$ be a set of integers and $\overline W = \ensuremath{\mathbb{Z}}_{\geq 1} \setminus W = \{ \bar w _1< \bar w_2 < \cdots \}$. \begin{enumerate}[(a)] \item If $\limsup_{i\rightarrow +\infty}(w_{i+1} - w_{i}) = +\infty$, then there exists a minimal complement to $W$. \item If $\lim_{i\rightarrow \infty}( \bar{w}_{i+1} - \bar{w}_{i}) = +\infty,$ then there does not exist a minimal complement to $W$. \end{enumerate} \end{theorem} Later, Kiss--S\'andor--Yang \cite{KissSandorYangJCT19} introduced the notion of eventually periodic sets and gave necessary and sufficient conditions for such sets to admit minimal complements. \begin{definition} \label{Def.2.4} Let $A$ be a nonempty bounded below subset of the set of integers $\ensuremath{\mathbb{Z}}$.
If there exists a positive integer $T$ such that $a + T \in A$ for all sufficiently large integers $a \in A$, then $A$ is called \textnormal{eventually periodic with period} $T$. \end{definition} However, we shall see in the following sections that minimal asymptotic complements might behave quite differently. \section{Minimal asymptotic complement of structured sets} \label{Sec:Struc} In this section we prove Theorem \ref{Thm.A} and Theorem \ref{Thm.B}. We also show that no eventually periodic subset of $\mathbb{Z}$ admits a minimal asymptotic complement in $\mathbb{Z}$ (see Lemma \ref{Lemma:EventuPeriNoMinAsymCom}). \begin{proof} [Proof of Theorem \ref{Thm.A}] Suppose $W$ is finite and nonempty. To prove Theorem \ref{Thm.A}(1), it is enough to show that for any finite subset $F$ of any asymptotic complement $C$ of $W$ in $G$, the set $C\setminus F$ is also an asymptotic complement of $W$ in $G$. Indeed, for such a finite set $F$, the set $WC$ is equal to $(W(C\setminus F)) \cup WF$. Since $G\setminus WC$ and $WF$ are finite, it follows that $C\setminus F$ is also an asymptotic complement of $W$ in $G$. This proves Theorem \ref{Thm.A}(1). Suppose $G\setminus W$ is finite and nonempty. Note that there exist elements $g_1, g_2, \cdots, g_n$ in $G$ such that $G\setminus W$ is equal to $\{g_1, \cdots, g_n\}$. Since $G$ is infinite and $G\setminus W$ is finite, it follows that $W$ is infinite. Since there are only finitely many products of the form $g_i g_j^{-1}$, the infinite set $W$ contains an element $x$ which is not of the form $g_i g_j^{-1}$ for any $i, j$. Thus for each $i$, the element $x^{-1} g_i$ does not belong to $\{g_1, \cdots, g_n\} = G\setminus W$, i.e., $x^{-1} g_i$ belongs to $W$. So $xW$ contains $g_1, g_2, \cdots, g_n$. Hence $\{e, x\}$ is a minimal complement to $W$. Since $G\setminus W$ is finite, it follows that any singleton subset of $G$ is an asymptotic complement of $W$. Consequently, the singleton subsets of $G$ are precisely the minimal asymptotic complements of $W$.
\end{proof} We contrast Theorem \ref{Thm.A}(1) with Theorem \ref{Thm.2.1} and \cite[Theorem A]{MinComp1}, which ensure the existence of minimal complements of nonempty finite sets in arbitrary groups. \begin{proof} [Proof of Theorem \ref{Thm.B}] First, we prove that any asymptotic complement of an infinite subgroup $H$ of $G$ contains a minimal asymptotic complement. Let $H$ be an infinite subgroup of a group $G$. Let $C$ be an asymptotic complement of $H$ in $G$. Consider the equivalence relation $\sim$ on $C$ defined by: $c_1 \sim c_2$ if $c_1c_2^{-1} \in H$. Let $C'$ denote a nonempty subset of $C$ consisting of pairwise inequivalent elements such that each element of $C$ is equivalent to some element of $C'$. Note that $HC$ is equal to $HC'$, and $HC$ is the union of a collection of right cosets of $H$. Since $G\setminus HC$ is finite and $H$ is infinite, it follows that $HC$ is equal to $G$ (this is clear from the fact that $HC$ is the union of certain right cosets of $H$ and $H$ is infinite). Consequently, $C$ is a complement of $H$ in $G$. Since $HC, HC'$ are equal, the set $C'$ is a complement of $H$ in $G$. Since the elements of $C'$ are inequivalent under $\sim$, it follows that $Hx, Hy$ are disjoint for any two distinct elements $x, y \in C'$. The equality $G = HC'$ implies that $G$ is equal to the union of the pairwise disjoint subsets of the form $Hx$ for $x$ varying in $C'$. Since $H$ is infinite, these subsets are all infinite. Thus $G\setminus (H (C'\setminus F))$ is infinite for any nonempty finite subset $F$ of $C'$. Consequently, the set $C'$ is a minimal asymptotic complement of $H$ in $G$. Let $g$ be an element of $G$ and $D$ be an asymptotic complement of $gH$ in $G$. Then $D$ is also an asymptotic complement of $H$. Hence $D$ is a complement of $H$ in $G$ and it contains a minimal asymptotic complement of $H$. Consequently, $D$ is a complement of $gH$ in $G$ and it contains a minimal asymptotic complement of $gH$.
If $E$ is an asymptotic complement of $Hg$ in $G$, then $E$ is an asymptotic complement of the subgroup $g^{-1} Hg$ in $G$. Hence $E$ is a complement of $g^{-1} Hg$ and it contains a minimal asymptotic complement of $g^{-1} H g$. So $E$ is a complement of $Hg$ and it contains a minimal asymptotic complement of $H g$. Theorem \ref{Thm.B}(1) follows from above. To prove Theorem \ref{Thm.B}(2), assume that $G \setminus W$ is infinite and $W$ contains a subgroup $H$ of $G$ having finite index in $G$. Since $H$ admits a finite subset of $G$ as a complement, it follows that $W$ also admits a finite subset $C$ of $G$ as a complement and as an asymptotic complement. Note that the subsets of $C$ which are asymptotic complements (resp. complements) to $W$ form a nonempty finite set $\ensuremath{\mathcal{F}}$ (resp. $\ensuremath{\mathcal{F}}'$) which is partially ordered with respect to containment and any minimal element of $\ensuremath{\mathcal{F}}$ (resp. $\ensuremath{\mathcal{F}}'$) is a minimal asymptotic complement (resp. minimal complement) of $W$ in $G$. Since each of the partially ordered sets $\ensuremath{\mathcal{F}}, \ensuremath{\mathcal{F}}'$ is finite and nonempty, they contain minimal elements. So $W$ admits a minimal complement and a minimal asymptotic complement in $G$. Consequently, the union of a nonzero subgroup of $\ensuremath{\mathbb{Z}}$ and a finite subset of $\ensuremath{\mathbb{Z}}$ admits a minimal complement. \end{proof} \begin{lemma} \label{Lemma:EventuPeriNoMinAsymCom} Let $W$ be an eventually periodic subset of $\ensuremath{\mathbb{Z}}$ (see Definition \ref{Def.2.4}). Then $W$ admits no minimal asymptotic complement in $\ensuremath{\mathbb{Z}}$. \end{lemma} \begin{proof} On the contrary, let us assume that there exists a minimal asymptotic complement to $W$ in $\ensuremath{\mathbb{Z}}$. Let $C$ be such a minimal asymptotic complement. Since $W$ is bounded below, the set $C$ is infinite. Let $T$ denote a period of $W$. 
So $C$ contains at least two distinct elements $a, b$ with $a < b$ which are congruent modulo $T$. Since $W$ is eventually periodic, for some finite subset $\ensuremath{\mathscr{W}}$ of $W$, it follows that $(b-a) + (W\setminus \ensuremath{\mathscr{W}}) \subseteq W$, which implies that $b + (W\setminus \ensuremath{\mathscr{W}})$ is contained in $a + W$. Note that \begin{align*} W + C & = ( W + b) \cup ( W+ (C \setminus \{b\})) \\ & = (\ensuremath{\mathscr{W}} + b) \cup ((W\setminus \ensuremath{\mathscr{W}}) + b) \cup ( W+ (C \setminus \{b\})) \\ & \subseteq (\ensuremath{\mathscr{W}} + b) \cup (W + a) \cup ( W+ (C \setminus \{b\})) \\ & \subseteq (\ensuremath{\mathscr{W}} + b) \cup ( W+ (C \setminus \{b\})) , \end{align*} where the final containment follows since $a$ lies in $C \setminus \{b\}$. This implies that $W + C = (\ensuremath{\mathscr{W}} + b) \cup ( W+ (C \setminus \{b\}))$. Since $\ensuremath{\mathscr{W}}$ is finite, it follows that the set-theoretic complement of $W + (C \setminus \{b\})$ in $\ensuremath{\mathbb{Z}}$ is finite. Hence $C\setminus \{b\}$ is an asymptotic complement of $W$ in $\ensuremath{\mathbb{Z}}$, which contradicts the minimality of $C$. Consequently, $W$ admits no minimal asymptotic complement in $\ensuremath{\mathbb{Z}}$. \end{proof} We contrast Lemma \ref{Lemma:EventuPeriNoMinAsymCom} with the work of Kiss--S\'andor--Yang \cite[Theorems 2,3]{KissSandorYangJCT19} who showed that there exist eventually periodic sets in $\mathbb{Z}$ admitting minimal complements. \section{Minimal asymptotic complements of general sets} \label{Sec:General} In this section, we consider infinite subsets $W\subseteq \mathbb{Z}$ which have ``less'' structure. We shall see that the existence of minimal asymptotic complements is a trickier concept in this scenario. Let $W$ be a subset of $\ensuremath{\mathbb{Z}}$ such that $\sup W = +\infty$ and $\inf W= -\infty$. Then a minimal complement of $W$ exists by Theorem \ref{CY12Thm1}. 
However, a minimal asymptotic complement of $W$ may or may not exist (even if we impose the condition that $\ensuremath{\mathbb{Z}}\setminus W$ contains arbitrarily large gaps), as illustrated by Lemmas \ref{Lemma:SufInfYesMac}, \ref{Lemma:SufInfNoMac}. \begin{lemma} \label{Lemma:SufInfYesMac} Let $W$ denote the subset of $\ensuremath{\mathbb{Z}}$ consisting of integers which are not primes. Then $W$ admits a minimal complement and a minimal asymptotic complement. \end{lemma} \begin{proof} Note that $\{0, 1, -1\}$ is a minimal complement of $W$ and $\{0,1\}$ is a minimal asymptotic complement of $W$. \end{proof} \begin{remark} The above Lemma can also be seen as a consequence of Theorem \ref{Thm.B}(2) (since $W$ contains the subgroup $4\ensuremath{\mathbb{Z}}$ and $\ensuremath{\mathbb{Z}}\setminus W$ is infinite). \end{remark} On the other hand, the next lemma gives an example of a subset $W$ of $\ensuremath{\mathbb{Z}}$ with $\sup W = +\infty$ and $\inf W= -\infty$ which admits no minimal asymptotic complement. \begin{lemma} \label{Lemma:SufInfNoMac} Let $W$ denote the subset of $\ensuremath{\mathbb{Z}}$ obtained by taking the union of $\ensuremath{\mathbb{Z}}_{\leq 3}$ and \begin{align*} & \bigcup _{k=1}^\infty \left( ((1+2)+(2+2^2 ) + (3 + 2^3) + \cdots + (k + 2^k) ) + [1, k+1] \right) \\ & = \bigcup _{k=1}^\infty \left[\dfrac{k(k+1)}{2} + 2^{k+1}-2+1, \dfrac{k(k+1)}{2} + 2^{k+1}-2+k+1 \right] \\ & = \bigcup _{k=1}^\infty \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] \\ & = \{4, 5\} \cup \{10, 11, 12\} \cup \{21, 22, 23, 24\}\cup \{41, 42, 43, 44, 45\} \cup \cdots . \end{align*} The set $W$ admits a minimal complement in $\ensuremath{\mathbb{Z}}$. For any three elements $a, b, c$ in an asymptotic complement $C$ of $W$ with $a<b<c$, the set $C\setminus \{b\}$ is also an asymptotic complement to $W$. Moreover, $W$ admits no minimal asymptotic complement in $\ensuremath{\mathbb{Z}}$. 
\end{lemma} \begin{proof} The set $W$ admits a minimal complement in $\ensuremath{\mathbb{Z}}$ by Theorem \ref{CY12Thm1}. Note that for any two integers $u,v$ with $0 \leq u\leq v$, it follows that $$u+ \{x, x+1, \cdots, y\} \subseteq \{0,v\} + \{x, x+1, \cdots, y\} $$ for any two integers $x,y$ with $y-x\geq v$. Indeed, the set $$\{0,v\} + \{x, x+1, \cdots, y\} = \{x, x+1, \cdots, y\} \cup \{v + x, v+ x+1, \cdots, v+y\} $$ consists of the integers $k$ satisfying $x \leq k \leq v + y$ (since $v+x \leq y$) and the elements of the set $u+ \{x, x+1, \cdots, y\}$ lie between $x$ and $v+y$ (since $u + x \geq x, u + y \leq v+y$). Since $a<b<c$ are integers, \begin{align*} &(b-a)+ \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] \\ &\subseteq \{0,c-a\} + \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] \end{align*} holds for any integer $k > c-a$. Consequently, for any integer $k>c-a$, we obtain \begin{align*} &b+ \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] \\ &\subseteq \{a,c\} + \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] . \end{align*} Also note that for any $w\in W$ with $w\leq -(b-a)$, we have $$w + b-c \leq - (b-a) +b-c = a-c < 0,$$ which implies $w + b-c\in W$ (since $W$ contains $\ensuremath{\mathbb{Z}}_{\leq 3}$), and hence $$b+ w = c + (w + b-c) \in c+ W \subseteq \{a, c\} + W.$$ Consequently, \begin{align*} & b + \left(W \setminus \left[-(b-a), \dfrac{(c-a-1)(c-a+2)}{2} + 2^{c-a+1} + c-a \right] \right) \\ & \subseteq \{a,c\}+ W \\ & \subseteq (C\setminus \{b\} ) + W. 
\end{align*} This implies that \begin{align*} C + W & = ((C\setminus \{b\} ) + W) \cup (b + W) \\ & = ((C\setminus \{b\} ) + W) \\ & \qquad \cup \left(b + \left(W \setminus \left[-(b-a), \dfrac{(c-a-1)(c-a+2)}{2} + 2^{c-a+1} + c-a\right] \right) \right) \\ & \qquad \cup \left(b + \left(W\cap \left[-(b-a), \dfrac{(c-a-1)(c-a+2)}{2} + 2^{c-a+1} + c-a\right]\right) \right) \\ & = ((C\setminus \{b\} ) + W) \\ & \qquad \cup \left(b + \left(W\cap \left[-(b-a), \dfrac{(c-a-1)(c-a+2)}{2} + 2^{c-a+1} + c-a\right]\right) \right). \end{align*} Thus the set-theoretic complement of $(C\setminus \{b\} ) + W$ in $C+W$ is a finite set. Since the set-theoretic complement of $C+W$ in $\ensuremath{\mathbb{Z}}$ is finite, it follows that the set-theoretic complement of $(C\setminus \{b\} ) + W$ in $\ensuremath{\mathbb{Z}}$ is also finite. So $C\setminus\{b\}$ is also an asymptotic complement to $W$. We claim that each asymptotic complement of $W$ contains at least three distinct elements. Since $\ensuremath{\mathbb{Z}} \setminus W$ is infinite, no singleton subset of $\ensuremath{\mathbb{Z}}$ is an asymptotic complement to $W$. If a two-element subset $S = \{s, t\}$ of $\ensuremath{\mathbb{Z}}$ with $s<t$ is an asymptotic complement to $W$, then $(-s) + S$ is also an asymptotic complement to $W$. Hence we may assume $S$ is equal to $\{0,u\}$ with $u>0$. Let $k_0\geq 1$ be an integer such that $2^{k_0+ 1}> u$. For every integer $k\geq 1$, the integer $\frac{(k-1)(k+2)}{2} + 2^{k+1} + k + 1 + u$ does not belong to $W+u$. Moreover, for any integer $k\geq k_0$, the integer $\frac{(k-1)(k+2)}{2} + 2^{k+1} + k + 1 + u$ does not belong to $W$. 
Otherwise, for some integer $k'\geq k_0$, the integer $\frac{(k'-1)(k'+2)}{2} + 2^{k'+1} + k' + 1 + u $ belongs to $$ \bigcup _{k> k'} \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right], $$ which implies $$ \frac{(k'-1)(k'+2)}{2} + 2^{k'+1} + k' + 1 + u \geq \dfrac{(k'+1-1)(k'+1+2)}{2} + 2^{k'+1+1} ,$$ which yields $u \geq 2^{k'+1} \geq 2^{k_0+1}$. This contradicts the inequality $2^{k_0+ 1}> u$. So for any integer $k\geq k_0$, the integer $\frac{(k-1)(k+2)}{2} + 2^{k+1} + k + 1 + u$ does not belong to $\{0,u\} + W = S + W$. Hence $S$ is not an asymptotic complement to $W$. This proves the claim that each asymptotic complement of $W$ contains at least three elements. Hence $W$ admits no minimal asymptotic complement in $\ensuremath{\mathbb{Z}}$. \end{proof} The natural question now is what happens if the set $W$ is bounded from above or from below. In particular, let $W$ satisfy the condition of Theorem \ref{CY12Thm2}(a) (i.e. the difference of consecutive elements of $W$ is not bounded above). We shall see that even in this case a minimal asymptotic complement might not exist. \begin{lemma} \label{Lemma:BddBelowYesMCNoMac} Let $W$ denote the subset \begin{align*} & \bigcup _{k=1}^\infty \left( ((1+2)+(2+2^2 ) + (3 + 2^3) + \cdots + (k + 2^k)) + [1, k+1] \right) \\ & = \bigcup _{k=1}^\infty \left[\dfrac{k(k+1)}{2} + 2^{k+1}-2+1, \dfrac{k(k+1)}{2} + 2^{k+1}-2+k+1 \right] \\ & = \bigcup _{k=1}^\infty \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] \\ & = \{4, 5\} \cup \{10, 11, 12\} \cup \{21, 22, 23, 24\}\cup \{41, 42, 43, 44, 45\} \cup \cdots \end{align*} of $\ensuremath{\mathbb{Z}}$. The set $W$ admits a minimal complement in $\ensuremath{\mathbb{Z}}$. For any three elements $a, b, c$ in an asymptotic complement $C$ of $W$ with $a<b<c$, the set $C\setminus \{b\}$ is also an asymptotic complement to $W$. Moreover, $W$ admits no minimal asymptotic complement in $\ensuremath{\mathbb{Z}}$. 
\end{lemma} \begin{proof} The set $W$ admits a minimal complement in $\ensuremath{\mathbb{Z}}$ by Theorem \ref{CY12Thm2}(a). Let $a, b, c$ be elements of an asymptotic complement $C$ of $W$ with $a<b<c$. Following the same argument as in the proof of Lemma \ref{Lemma:SufInfNoMac}, it follows that $$ b+ \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] $$ is contained in $$ \{a,c\} + \left[\dfrac{(k-1)(k+2)}{2} + 2^{k+1}, \dfrac{(k-1)(k+2)}{2} + 2^{k+1} + k\right] $$ for any $k>c-a$. This implies \begin{align*} & b + \left(W \setminus \left[4, \dfrac{(c-a-1)(c-a+2)}{2} + 2^{c-a+1} + c-a\right] \right) \\ & \subseteq \{a,c\}+ W \\ & \subseteq (C\setminus \{b\} ) + W. \end{align*} Using a similar argument as in the proof of Lemma \ref{Lemma:SufInfNoMac}, it follows that \begin{align*} C + W & = ((C\setminus \{b\} ) + W) \\ & \qquad \cup \left(b + \left(W\cap \left[4, \dfrac{(c-a-1)(c-a+2)}{2} + 2^{c-a+1} + c-a\right] \right)\right), \end{align*} and hence $C\setminus \{b\}$ is an asymptotic complement to $W$. To prove that $W$ does not admit a minimal asymptotic complement, it suffices to prove that any asymptotic complement of $W$ contains at least three elements, which can be proved using a similar argument as in the proof of Lemma \ref{Lemma:SufInfNoMac}. \end{proof} \begin{remark} Similarly, it can be proved that there are subsets of $\ensuremath{\mathbb{Z}}$ bounded from above which admit minimal complements, but admit no minimal asymptotic complements. For instance, we could consider $-W = \{-w\,|\,w\in W\}$ where $W$ is as in Lemma \ref{Lemma:BddBelowYesMCNoMac} and use the fact that multiplication by $-1$ is an automorphism of the group $\ensuremath{\mathbb{Z}}$. \end{remark} We shall now prove Theorem \ref{Thm.C}, which is a general result about the existence of minimal complements and the inexistence of minimal asymptotic complements. 
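Before turning to the proof, we note that the closed-form interval endpoints used in Lemmas \ref{Lemma:SufInfNoMac} and \ref{Lemma:BddBelowYesMCNoMac} can be cross-checked against their defining partial sums; a short numerical sketch (the function names are ours):

```python
# Cross-check the two descriptions of the k-th interval used above:
# the partial-sum form ((1+2)+(2+2^2)+...+(k+2^k)) + [1, k+1] and the
# closed form [(k-1)(k+2)/2 + 2^(k+1), (k-1)(k+2)/2 + 2^(k+1) + k].
def interval_partial_sum(k):
    s = sum(j + 2**j for j in range(1, k + 1))
    return list(range(s + 1, s + k + 2))

def interval_closed_form(k):
    lo = (k - 1) * (k + 2) // 2 + 2**(k + 1)
    return list(range(lo, lo + k + 1))

assert all(interval_partial_sum(k) == interval_closed_form(k) for k in range(1, 30))
assert interval_closed_form(1) == [4, 5]
assert interval_closed_form(2) == [10, 11, 12]
assert interval_closed_form(4) == [41, 42, 43, 44, 45]
```

The asserted values reproduce the initial blocks $\{4,5\}$, $\{10,11,12\}$, $\{41,\ldots,45\}$ displayed in the lemmas above.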
\begin{proof}[Proof of Theorem \ref{Thm.C}] We refer to the sets as in part (1), (2), (3), (4) of Theorem \ref{Thm.C} as the first, second, third, fourth set respectively. The first set, i.e., $\cup_{k=1}^\infty I_k$, admits a minimal complement in $\ensuremath{\mathbb{Z}}$ by Theorem \ref{CY12Thm2}(a). The second, third and the fourth sets also admit minimal complements in $\ensuremath{\mathbb{Z}}$ by Theorem \ref{CY12Thm1}. Let $a, b, c$ be three elements in $\ensuremath{\mathbb{Z}}$ with $a<b<c$. From condition (ii), it follows that $\# I_k\geq k$ for any $k\geq 1$. So for any $k\geq c-a$, it follows that $(b-a) + I_k \subseteq \{0, c-a\} + I_k$, which gives $b + I_k \subseteq \{a, c\} + I_k$, and hence \begin{equation} \label{Eqn:abc} b + \cup_{k=c-a}^\infty I_k \subseteq \{a, c\} + \cup_{k=c-a}^\infty I_k \subseteq \{a, c\} + \cup_{k=1}^\infty I_k. \end{equation} Note that for integers $x, u, v$ with $u<v$, \begin{equation} \label{Eqn:lambdauv} u + \ensuremath{\mathbb{Z}}_{\leq x } \subseteq v + \ensuremath{\mathbb{Z}}_{\leq x }. \end{equation} Also note that by condition (iii), any asymptotic complement of any one of the first, second, third and the fourth set is infinite. For any three distinct elements $a, b, c$ in an asymptotic complement $C$ of $\cup_{k=1}^\infty I_k$ with $a<b<c$, we obtain from Equation \eqref{Eqn:abc} that $$ b + \cup_{k=c-a}^\infty I_k \subseteq \{a, c\} + \cup_{k=1}^\infty I_k \subseteq (C\setminus \{b\}) + \cup_{k=1}^\infty I_k .$$ Consequently, $\cup_{k=1}^\infty I_k$ does not admit any minimal asymptotic complement. 
For any three distinct elements $a, b, c$ in an asymptotic complement $C$ of the second set with $a<b<c$, we obtain from Equations \eqref{Eqn:abc}, \eqref{Eqn:lambdauv} that $$ b + \cup_{k=c-a}^\infty I_k \subseteq \{a, c\} + \cup_{k=1}^\infty I_k \subseteq (C\setminus \{b\}) + \cup_{k=1}^\infty I_k, $$ $$ b + \ensuremath{\mathbb{Z}}_{< \min I_1} \subseteq c + \ensuremath{\mathbb{Z}}_{< \min I_1} \subseteq (C\setminus \{b\}) + \ensuremath{\mathbb{Z}}_{< \min I_1} .$$ So $$b + \left((\ensuremath{\mathbb{Z}}_{< \min I_1})\cup (\cup_{k=c-a}^\infty I_k) \right) \subseteq \{a, c\} + \left( (\ensuremath{\mathbb{Z}}_{< \min I_1})\cup (\cup_{k=1}^\infty I_k) \right) .$$ Consequently, $(\ensuremath{\mathbb{Z}}_{< \min I_1})\cup (\cup_{k=1}^\infty I_k)$ does not admit any minimal asymptotic complement. For any three distinct elements $a, b, c$ in an asymptotic complement $C$ of the third set with $a<b<c$, we obtain from Equations \eqref{Eqn:abc}, \eqref{Eqn:lambdauv} that $$ b + \cup_{k=c-a}^\infty I_k \subseteq \{a, c\} + \cup_{k=1}^\infty I_k \subseteq (C\setminus \{b\}) + \cup_{k=1}^\infty I_k, $$ $$ b + \ensuremath{\mathbb{Z}}_{< \min \{\min F, \min I_1\}} \subseteq c + \ensuremath{\mathbb{Z}}_{< \min \{\min F, \min I_1\}} \subseteq (C\setminus \{b\}) + ( \ensuremath{\mathbb{Z}}_{< \min I_1} \setminus F) .$$ So $$b + \left((\ensuremath{\mathbb{Z}}_{< \min \{\min F, \min I_1\}})\cup (\cup_{k=c-a}^\infty I_k) \right) \subseteq \{a, c\} + \left( (\ensuremath{\mathbb{Z}}_{< \min I_1} \setminus F )\cup (\cup_{k=1}^\infty I_k) \right) .$$ Consequently, $(\ensuremath{\mathbb{Z}}_{< \min I_1}\setminus F)\cup (\cup_{k=1}^\infty I_k)$ does not admit any minimal asymptotic complement. Since any asymptotic complement $C$ to the fourth set is infinite, it contains three distinct elements $a<b<c$ which are congruent modulo $n$. 
From Equation \eqref{Eqn:abc} it follows that $$ b + \cup_{k=c-a}^\infty I_k \subseteq \{a, c\} + \cup_{k=1}^\infty I_k \subseteq (C\setminus \{b\}) + \cup_{k=1}^\infty I_k. $$ Since $b<c$ and $b$ is congruent to $c$ modulo $n$, we obtain \begin{align*} b + \{x\in \ensuremath{\mathbb{Z}}_{< \min I_1} \,|\, x\equiv a\ensuremath{\mathrm{\;mod\;}} n\} & \subseteq c + \{x\in \ensuremath{\mathbb{Z}}_{< \min I_1} \,|\, x\equiv a\ensuremath{\mathrm{\;mod\;}} n\}\\ & \subseteq (C\setminus \{b\}) + \{x\in \ensuremath{\mathbb{Z}}_{< \min I_1} \,|\, x\equiv a\ensuremath{\mathrm{\;mod\;}} n\}. \end{align*} So \begin{align*} &b + \left(( \{x\in \ensuremath{\mathbb{Z}}_{< \min I_1} \,|\, x\equiv a\ensuremath{\mathrm{\;mod\;}} n\})\cup (\cup_{k=c-a}^\infty I_k) \right) \\ &\subseteq \{a, c\} + \left( ( \{x\in \ensuremath{\mathbb{Z}}_{< \min I_1} \,|\, x\equiv a\ensuremath{\mathrm{\;mod\;}} n\})\cup (\cup_{k=1}^\infty I_k) \right). \end{align*} Consequently, $( \{x\in \ensuremath{\mathbb{Z}}_{< \min I_1} \,|\, x\equiv a\ensuremath{\mathrm{\;mod\;}} n\})\cup (\cup_{k=1}^\infty I_k)$ does not admit any minimal asymptotic complement. \end{proof} Lemma \ref{Lemma:BddBelowYesMCNoMac} and Theorem \ref{Thm.C}(1) give examples of bounded below subsets of $\ensuremath{\mathbb{Z}}$, and for each of them, the difference of consecutive elements is not bounded above, i.e., the hypothesis of Theorem \ref{CY12Thm2}(a) holds. Now consider the situation where the hypothesis of Theorem \ref{CY12Thm2}(a) does not hold, i.e., the difference of consecutive elements is bounded. Such a bounded below subset of $\ensuremath{\mathbb{Z}}$ may or may not admit a minimal complement, as mentioned in \cite[Theorem 4, Remark 2]{KissSandorYangJCT19}. Theorem \ref{Thm.D} considers such subsets which do not admit any minimal complement. Now let us show the Theorem. \begin{proof} [Proof of Theorem \ref{Thm.D}] The set $W$ admits no minimal complement in $\ensuremath{\mathbb{Z}}$ by Theorem \ref{CY12Thm2}(b). 
Let $w_1<w_2<w_3<\cdots$ denote the elements of $W$, and $v_1 < v_2 < v_3 <\cdots $ denote the elements of $\ensuremath{\mathbb{Z}}_{\geq w_1}\setminus W$. Let $i'$ be a positive integer such that $v_{i+1} - v_i\geq 2$ for all $i\geq i'$. Let $C$ be an asymptotic complement of $W$ in $\ensuremath{\mathbb{Z}}$. Since $W$ is bounded below, the set $C$ is infinite. Let $a, b, c$ be three elements of $C$ such that $a<b<c$. Let $i''$ be a positive integer such that $i''\geq i'$ and $v_{i+1} - v_i\geq c-a$ for all $i\geq i''$. For any $i\geq i''$, we obtain $$ b-a + (v_i, v_{i+1}) \subseteq (v_i, v_{i+1}) \cup ((c-a) + (v_{i}, v_{i+1})) \subseteq \{0, c-a\} + (v_i, v_{i+1}) , $$ which gives $$ b + (v_i, v_{i+1}) \subseteq \{a, c\} + (v_i, v_{i+1}) \subseteq \{a, c\} + W . $$ Since $i''\geq i'$, it follows that $v_{i+1} - v_i\geq 2$ for all $i\geq i''$ and hence $W_{\geq v_{i''}}$ is equal to $(v_{i''}, v_{i''+1}) \cup (v_{i''+1}, v_{i''+2}) \cup (v_{i''+2}, v_{i''+3}) \cup (v_{i''+3}, v_{i''+4})\cup \cdots$. So the set $b + W_{\geq v_{i''}}$ is contained in $\{a, c\} + W$. Since $W$ is bounded below, it follows that $C\setminus \{b\}$ is also an asymptotic complement of $W$. Hence $W$ admits no minimal asymptotic complement. \end{proof} We conclude with the following corollary. \begin{corollary} Let $W$ denote the set of all positive integers which do not belong to $$\bigcup _{k=1}^\infty [10k^2, 10k(k+1)].$$ The set $W$ does not admit a minimal complement in $\ensuremath{\mathbb{Z}}$ and $W$ admits no minimal asymptotic complement in $\ensuremath{\mathbb{Z}}$. Moreover, for any three elements $a, b, c$ in an asymptotic complement $C$ of $W$ with $a<b<c$, the set $C\setminus \{b\}$ is also an asymptotic complement to $W$. \end{corollary} \begin{proof} Note that the set $W$ as above satisfies the hypothesis of Theorem \ref{Thm.D}. Hence the first part follows. The second part follows from the proof of Theorem \ref{Thm.D}. 
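For concreteness, the first few blocks of the set $W$ of the Corollary can be enumerated directly; a short sketch (the function name is ours, and the check is restricted to the window $[1,130)$):

```python
# Enumerate the set W of the Corollary: positive integers lying outside
# every interval [10k^2, 10k(k+1)], k >= 1.
def in_W(n):
    if n < 1:
        return False
    k = 1
    while 10 * k * k <= n:
        if n <= 10 * k * (k + 1):
            return False          # n lies in the excluded interval for this k
        k += 1
    return True

W = [n for n in range(1, 130) if in_W(n)]
assert W[:9] == list(range(1, 10))            # block before the gap [10, 20]
assert 10 not in W and 20 not in W and 21 in W
assert 60 not in W and 61 in W                # gap [40, 60]
assert 120 not in W and 121 in W              # gap [90, 120]
```

This merely makes the block structure of $W$ explicit; the Corollary itself follows from Theorem \ref{Thm.D} as stated above.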
\end{proof} \vspace*{1mm} \section{Acknowledgements} We wish to thank the anonymous reviewer for the valuable comments and suggestions. The first author would like to acknowledge the fellowship of the Erwin Schr\"odinger International Institute for Mathematics and Physics (ESI) and would also like to thank the Fakult\"at f\"ur Mathematik, Universit\"at Wien where a part of the work was carried out. The second author would like to acknowledge the Initiation Grant from the Indian Institute of Science Education and Research Bhopal, and the INSPIRE Faculty Award from the Department of Science and Technology, Government of India.
https://arxiv.org/abs/1302.6597
Independence of l-adic representations of geometric Galois groups
Let k be an algebraically closed field of arbitrary characteristic, let K/k be a finitely generated field extension and let X be a separated scheme of finite type over K. For each prime ell, the absolute Galois group of K acts on the ell-adic etale cohomology modules of X. We prove that this family of representations varying over ell is almost independent in the sense of Serre, i.e., that the fixed fields inside an algebraic closure of K of the kernels of the representations for all ell become linearly disjoint over a finite extension of K. In doing this, we also prove a number of interesting facts on the images and ramification of this family of representations.
\section{Introduction} Let $G$ be a profinite group and $L$ a set of prime numbers. For every $\ell\in L$ let $G_\ell$ be a profinite group and $\rho_\ell: G\to G_\ell$ a homomorphism. Denote by $$\rho: G\to \prod\limits_{\ell\in L} G_\ell$$ the homomorphism induced by the $\rho_\ell$. Following the notation in \cite{bible}, we call the family $(\rho_\ell)_{\ell\in L}$ {\em independent} if $\rho(G)=\prod\limits_{\ell\in L} \rho_\ell(G)$. The family $(\rho_\ell)_{\ell\in L}$ is said to be {\em almost independent} if there exists an open subgroup $H$ of $G$ such that $\rho(H)=\prod_{\ell\in L} \rho_\ell(H)$. The main examples of such families of homomorphisms arise as follows: Let $K$ be a field {with algebraic closure $\wt K$ and absolute Galois group $\mathop{\rm Gal}\nolimits(K)=\Aut(\wt K/K)$. Let $X/K$ be a separated algebraic scheme\footnote{A scheme $X/K$ is algebraic if the structure morphism $X\to\mathop{{\rm Spec}}\nolimits K$ is of finite type (cf.~\cite[Def.~6.4.1]{EGAI}).} and denote by ${\mathbb{L}}$ the set of all prime numbers}. For every $q\in {\mathbb{N}}$ and every $\ell\in {\mathbb{L}}\smallsetminus \{\Char(K)\}$ we shall consider the representations $$\rho_{\ell, X}^{(q)}: \mathop{\rm Gal}\nolimits(K)\to \Aut_{{\mathbb{Q}}_\ell}(H^q(X_{\wt K}, {\mathbb{Q}}_\ell))\quad \mbox{and} \quad \rho_{\ell, X, c}^{(q)}: \mathop{\rm Gal}\nolimits(K)\to \Aut_{{\mathbb{Q}}_\ell}(H^q_c(X_{\wt K}, {\mathbb{Q}}_\ell))$$ of {$\mathop{\rm Gal}\nolimits(K)$} on the \'etale cohomology groups $H^q(X_{\wt K}, {\mathbb{Q}}_\ell)$ and $H_c^q(X_{\wt K}, {\mathbb{Q}}_\ell)$. The following independence result has recently been obtained. \begin{thm}\label{Serre-Gajda-Petersen} Let $K$ be a finitely generated extension of ${\mathbb{Q}}$ and $X/K$ a separated algebraic scheme. Then the families $(\rho_{\ell, X}^{(q)})_{\ell\in {\mathbb{L}}}$ and $(\rho_{\ell, X, c}^{(q)})_{\ell\in {\mathbb{L}}}$ are almost independent. 
\end{thm} The proof of this statement in the important special case $\mathrm{trdeg}(K/{\mathbb{Q}})=0$ is due to Serre (cf.~\cite{bible}). The case $\mathrm{trdeg}(K/{\mathbb{Q}})>0$ was worked out in \cite{gp}, answering a question of Serre (cf.~\cite{bible}, \cite{serre1994}) and Illusie \cite{illusie}. {The present article is concerned with a natural variant of Theorem~\ref{Serre-Gajda-Petersen} that grew out of the study of independence of families over fields of positive characteristic. For $K$ a finitely generated extension of ${\mathbb{F}}\,\!{}_p$ it has long been known, e.g.~\cite{Igusa} or \cite{devicpink}, that the direct analogue of Theorem~\ref{Serre-Gajda-Petersen} is false: If $\varepsilon_\ell: \mathop{\rm Gal}\nolimits({\mathbb{F}}\,\!{}_p)\to {\mathbb{Z}}_\ell^\times$ denotes the $\ell$-adic cyclotomic character that describes the Galois action on $\ell$-power roots of unity, then it is elementary to see that the family $(\varepsilon_\ell)_{\ell\in{\mathbb{L}}\smallsetminus\{ p\}}$ is not almost independent. It follows from this that for every abelian variety $A/K$, if we denote by $\sigma_{\ell, A}: \mathop{\rm Gal}\nolimits(K)\to \Aut_{{\mathbb{Q}}_\ell}(T_\ell(A))$ the representation of $\mathop{\rm Gal}\nolimits(K)$ on the $\ell$-adic Tate module of $A$, then $(\sigma_{\ell, A})_{\ell\in{\mathbb{L}}\smallsetminus \{ p\}}$ is {\em not} almost independent. One is thus led to study independence over the compositum $\wt{\mathbb{F}}_p K$ obtained from the field $K$ by adjoining all roots of unity. Having gone that far, it is then natural to study independence over any field $K$ that is finitely generated over an arbitrary algebraically closed field~$k$. Our main result is the following independence theorem.} \begin{thm} \label{main} {\bf (cf.~Theorem \ref{mainmain})} Let $k$ be an algebraically closed field of characteristic $p\ge 0$. Let $K/k$ be a finitely generated extension. Let $X/K$ be a separated algebraic scheme. 
Then the families $(\rho_{\ell, X}^{(q)}{|_{\mathop{\rm Gal}\nolimits(K)}})_{\ell\in {\mathbb{L}}\smallsetminus\{p\}}$ and $(\rho_{\ell, X, c}^{(q)}{|_{\mathop{\rm Gal}\nolimits(K)}})_{\ell\in {\mathbb{L}}\smallsetminus \{p\}}$ are almost independent. \end{thm} It will be clear that many techniques of the present article rely on \cite{bible}. Also, some of the key results of \cite{gp} will be important. The new methods in comparison with the previous results are the following: (i) The analysis of the target of our Galois representations, reductive algebraic groups over ${\mathbb{Q}}_\ell$, will be based on a structural result by Larsen and Pink (cf.~\cite{lp2011}) and no longer, as for instance in \cite{bible}, on extensions of results by Nori (cf.~\cite{nori1987}). This greatly facilitates the passage from $\mathop{\rm Gal}\nolimits(K)$ to $\mathop{\rm Gal}\nolimits(K\wt k)$ when studying their image under $\rho_{\ell,X,?}^{(q)}$. (ii) Since we also deal with cases of positive characteristic, ramification properties will play a crucial role in obtaining the necessary finiteness properties of fundamental groups. The results on alterations by de Jong (cf.~\cite{dejong}) will obviously be needed. However, we were unable to deduce all needed results from there, despite some known semistability results that follow from~\cite{dejong}. Instead we carry out a reduction to the case where $K$ is absolutely finitely generated and where $X/K$ is smooth and projective (this again uses \cite{dejong}). (iii) In the latter case, we use a result by Kerz-Schmidt-Wiesend (cf.~\cite{kerzschmidt}) that allows one to control ramification on $X$ by controlling it on all smooth curves on~$X$. Since $X$ is smooth, results of Deligne show that the semisimplifications of $\rho_{\ell,X,?}^{(q)}$ form a pure and strictly compatible system. 
On curves, we can then apply the global Langlands correspondence proved by Lafforgue (\cite{Lafforgue}) to obtain the required ramification properties of $(\rho_{\ell,X,?}^{(q)})_{\ell\in {\mathbb{L}}\smallsetminus \{p\}}$. Part (i) is carried out in Section~\ref{groupconcepts}. Results on fundamental groups and first results on ramification are the theme of Section~\ref{FundGps}; here parts of (ii) are carried out and we also refine some results from \cite{kerzschmidt}. Section~\ref{indep-sec} provides the basic independence criterion on which our proof of Theorem~\ref{main} ultimately rests. Section~\ref{Reductions} performs the reductions mentioned in (ii). The ideas described in (iii) are completed in Section~\ref{TheProof}, where a slightly more precise form of Theorem~\ref{main} is proved. We would like to point out that an alternative method for part (ii) of our approach could be based on a recent unpublished result by Orgogozo which proves a global semistable reduction theorem (cf.~\cite[2.5.8. Prop.]{orgo}). In February 2013, when our paper was complete, we were informed by Anna Cadoret that, together with Akio Tamagawa, she had proven our Theorem \ref{main} by a different method, cf.~\cite{cata}. {\bf Acknowledgments:} G.B. thanks the Fields Institute for a research stay in the spring of 2012 during which part of this work was written. He also thanks Adam Mickiewicz University in Pozna{\' n} for making possible a joint visit of the three authors in the fall of 2012. He is supported by a grant of the DFG within the SPP 1489. W.G. thanks the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University for hospitality during a research visit in January 2012 shortly after this project had been started. He was partially supported by the Alexander von Humboldt Foundation and a research grant of the National Centre of Sciences of Poland. S.P. 
thanks the Mathematics Department at Adam Mickiewicz University for hospitality and support during several research visits. \section{Notation}\label{notation} For a field $K$ with algebraic closure $\wt K$, we denote by $K_s\subset \wt K$ a separable closure. Then $\mathop{\rm Gal}\nolimits(K)$ is equivalently defined as $\mathop{\rm Gal}\nolimits(K_s/K)$ and as $\Aut(\wt K/K)$, since any field automorphism of $K_s$ fixing $K$ has a unique extension to $\wt K$. If $E/K$ is an arbitrary field extension, and if $\wt K$ is chosen inside $\wt E$, then there is a natural isomorphism $\Aut(\wt K/\wt K\cap E)\stackrel\simeq\longrightarrow \Aut(\wt K E/ E)$. Composing its inverse with the natural restriction $\mathop{\rm Gal}\nolimits(E)\to\Aut(E\wt K/E)$ one obtains a canonical map which we denote $\mathrm{res}_{E/K}: \mathop{\rm Gal}\nolimits(E)\to \mathop{\rm Gal}\nolimits(K)$. If $E/K$ is algebraic, then $\mathrm{res}_{E/K}$ is injective and we often identify $\mathop{\rm Gal}\nolimits(E)$ with the subgroup $\mathrm{res}_{E/K}(\mathop{\rm Gal}\nolimits(E))=\mathop{\rm Gal}\nolimits(E\cap \wt K)$ of $\mathop{\rm Gal}\nolimits(K)$. Let $G$ be a profinite group. A {\em normal series} in $G$ is a sequence $$G\triangleright N_1\triangleright N_2\triangleright \cdots \triangleright N_s=\{e\}$$ of closed subgroups such that each $N_i$ is normal in $G$. A $K$-variety $X$ is a scheme $X$ that is integral, separated and algebraic over $K$. Let $S$ be a normal connected scheme with function field $K$. A separable algebraic extension $E/K$ is said to be {\em unramified along $S$} if for every finite extension $F/K$ inside $E$ the normalization of $S$ in $F$ is \'etale over $S$. We usually consider $S$ as a scheme equipped with the generic geometric base point $s: \mathop{{\rm Spec}}\nolimits(\widetilde{K})\to S$ and denote by $\pi_1(S):=\pi_1(S, s)$ the \'etale fundamental group of $S$. 
If $\Omega$ denotes the maximal extension of $K$ in $K_s$ which is unramified along $S$, then $\pi_1(S)$ can be identified with the Galois group $\mathop{\rm Gal}\nolimits(\Omega/K)$. A homomorphism $\rho: \mathop{\rm Gal}\nolimits(K)\to H$ is said to be {\em unramified along $S$} if the fixed field $K_s^{\ker(\rho)}$ is unramified along $S$. If $E/K$ is an arbitrary algebraic extension, then $\rho|_{\mathop{\rm Gal}\nolimits(E)}$ stands for $\rho\circ \mathrm{res}_{E/K}$. \section{Concepts from group theory}\label{groupconcepts} In this section, we prove a structural result on compact profinite subgroups of linear algebraic groups over $\wt{\mathbb{Q}}_\ell$ (cf.~Theorem~\ref{lp-main}) that will be crucial for the proof of the main theorem of this article. It is a consequence of a variant (cf.~Proposition~\ref{lp0}) of a theorem of Larsen and Pink (cf.~\cite[Thm.~0.2, p.~1106]{lp2011}). The proof of~Proposition~\ref{lp0} makes strong use of the results and methods in~\cite{lp2011}, and in particular does not depend on the classification of finite simple groups. \smallskip \begin{defi} \label{sigmadef} For $c\in {\mathbb{N}}$ we denote by $\Sigma_\ell(c)$ the class of profinite groups $M$ which possess a normal series by open subgroups $$M\triangleright I\triangleright P\triangleright \{1\}$$ such that $M/I$ is a finite product of finite simple groups of Lie type in characteristic $\ell$, the group $I/P$ is finite abelian of order prime to $\ell$ with $[I:P]\le c$, and $P$ is a pro-$\ell$ group. \end{defi} \begin{defi} \label{jordef} For $d\in {\mathbb{N}}$ and $\ell$ a prime we denote by $\mathrm{Jor}_\ell(d)$ the class of finite groups $H$ which possess a normal abelian subgroup $N$ of order prime to $\ell$ and of index $[H:N]\le d$. We define $\mathrm{Jor}(d)$ as the union of the $\mathrm{Jor}_\ell(d)$ over all primes $\ell$.
\end{defi} \begin{defi} \label{nbddgp-def} A profinite group $G$ is called {\em $n$-bounded at $\ell$} if there exist closed compact subgroups $G_1\subset G_2\subset {\rm GL}_n(\wt{\mathbb{Q}}_\ell)$ such that $G_1$ is normal in $G_2$ and $G\cong G_2/G_1$. \end{defi} The following is the main result of this section. \begin{thm}\label{lp-main} For every $n\in {\mathbb{N}}$ there exists a constant $J'(n)$ (independent of $\ell$) such that the following holds: For any prime $\ell$, any group $G$ that is $n$-bounded at $\ell$ lies in a short exact sequence $$1\to M\to G \to H \to 1$$ such that $M$ is open normal in $G$ and lies in $\Sigma_\ell(2^n)$ and $H$ lies in $\mathrm{Jor}_\ell(J'(n))$. \end{thm} We state an immediate corollary: \begin{coro}\label{lp-cor} Let $G$ be $n$-bounded at $\ell$ and define $G_\ell^+$ as the normal hull of all pro-$\ell$ Sylow subgroups of $G$. Then for $\ell>J'(n)$, the group $G_\ell^+$ is an open normal subgroup of the group $M$ from Theorem~\ref{lp-main}, of index at most~$2^n$. \end{coro} In the remainder of this section we shall give a proof of Theorem~\ref{lp-main}. Moreover we shall derive some elementary permanence properties of the classes~$\Sigma_\ell(c)$ and~$\mathrm{Jor}_\ell(d)$. \smallskip The content of the following lemma is presumably well-known. \begin{lemm} For every $r\in {\mathbb{N}}$, every algebraically closed field $F$ and every semisimple algebraic group $G$ of rank $r$ the center $Z$ of $G$ satisfies $|Z(F)|\le 2^r$.\label{centfin} \end{lemm} \begin{proof} Lacking a precise reference, we include a proof for the reader's convenience. Observe first that the center $Z$ is a finite (cf.~\cite[I.6.20, p. 43]{testerman}) diagonalizable algebraic group. Let $T$ be a maximal torus of $G$. Denote by $X(T)=\mathop{\rm Hom}\nolimits(T, {\mathbb{G}}_m)$ the character group of $T$ and by $\Phi\subset X(T)$ the set of roots of $G$. Then ${\mathcal{R}}=(X(T)\otimes {\mathbb{R}}, \Phi)$ is a root system.
Let $P={\mathbb{Z}} \Phi$ be the root lattice and $Q$ the weight lattice of this root system. Then $P\subset X(T)\subset Q$. The center $Z$ of $G$ is the kernel of the adjoint representation (cf.~\cite[I.7.12, p. 49]{testerman}). Hence $Z=\bigcap_{\chi\in \Phi} \ker(\chi)$ and there is an exact sequence $$0\to Z\to T\to \prod_{\chi\in \Phi} {\mathbb{G}}_m$$ where the right hand map is induced by the characters $\chi: T\to {\mathbb{G}}_m$ ($\chi\in \Phi$). We apply the functor $\mathop{\rm Hom}\nolimits(-, {\mathbb{G}}_m)$ and obtain an exact sequence $$\prod_{\chi\in \Phi} {\mathbb{Z}}\to X(T)\to \mathop{\rm Hom}\nolimits(Z, {\mathbb{G}}_m)\to 0$$ The cokernel of the left hand map is $X(T)/P$. Thus $|Z(F)|\le [X(T):P]\le [Q:P]$. Furthermore, the root system ${\mathcal{R}}$ decomposes into a direct sum $${\mathcal{R}}=\bigoplus_{i=1}^s (E_i, \Phi_i)$$ of indecomposable root systems ${\mathcal{R}}_i:=(E_i, \Phi_i)$. Let $r_i=\dim(E_i)$ be the rank of ${\mathcal{R}}_i$. Let $P_i$ be the root lattice and $Q_i$ the weight lattice of ${\mathcal{R}}_i$. Note that by definition $P=\oplus_iP_i$ and $Q=\oplus_iQ_i$. It follows from the classification of indecomposable root systems that $|Q_i/P_i|\le 2^{r_i}$ (cf.~\cite[Table 9.2, p. 72]{testerman}) for all $i$. Hence $|Z(F)|\le|Q/P|\le 2^{r_1}2^{r_2}\cdots 2^{r_s}=2^r$ as desired. \end{proof} \begin{rema} The semisimple algebraic group $({\rm SL}_{2, {\mathbb{C}}})^r$ has rank $r$ and its center $(\mu_2)^r$ has exactly $2^r$ ${\mathbb{C}}$-rational points. Hence the bound of Lemma \ref{centfin} cannot be improved. \end{rema} The following result is an adaptation of the main result of \cite{lp2011} by Larsen and Pink.
\begin{prop} \label{LPProp}For every $n\in{\mathbb{N}}$, there exists a constant $J'(n)$ such that for every field $F$ of positive characteristic $\ell$ and every finite subgroup $\Gamma$ of $GL_{n}(F)$, there exists a normal series $$\Gamma\triangleright L\triangleright M\triangleright I\triangleright P\triangleright \{1\}$$ of $\Gamma$ with the following properties: \begin{enumerate} \item[i)] $[\Gamma:L]\le J'(n)$. \item[ii)] The group $L/M$ is abelian of order prime to $\ell$. \item[iii)] The group $M/I$ is a finite product of finite simple groups of Lie type in characteristic $\ell$. \item[iv)] The group $I/P$ is abelian of order prime to $\ell$ and $[I:P]\le 2^n$. \item[v)] $P$ is an $\ell$-group. \end{enumerate}\label{lp0} Furthermore the constant $J'(n)$ is the same as in \cite[Thm.~0.2, p. 1106]{lp2011}. \end{prop} {\em Proof.} We can assume that $F$ is algebraically closed. Let $J'(n)$ be the constant from \cite[Thm.~0.2, p. 1106]{lp2011}. Larsen and Pink construct in the proof of their Theorem \cite[Thm.~0.2, p. 1155--1156]{lp2011} a smooth algebraic group $G$ over $F$ containing $\Gamma$ and normal subgroups $\Gamma_i$ of $\Gamma$ such that there is a normal series $$\Gamma\triangleright\Gamma_1\triangleright \Gamma_2 \triangleright \Gamma_3\triangleright\{1\}$$ and such that $[\Gamma:\Gamma_1]\le J'(n)$, $\Gamma_1/\Gamma_2$ is a product of finite simple groups of Lie type in characteristic $\ell$, $\Gamma_2/\Gamma_3$ is abelian of order prime to $\ell$ and $\Gamma_3$ is an $\ell$-group. Let $R$ be the unipotent radical of the connected component $G^\circ$ of $G$. The proof of Larsen and Pink shows that $\Gamma_1\triangleleft G^\circ(F)$, $\Gamma_3=\Gamma\cap R(F)$ and $\Gamma_2/\Gamma_3$ is contained in $\ol{Z}(F)$ where $\ol{Z}$ denotes the center of the reductive group $\ol{G}:=G^\circ/R$. Let $\ol{D}=[\ol{G}, \ol{G}]$ be the derived group of $\ol{G}$ and $D=[G^\circ, G^\circ]R$. 
Now define $L=\Gamma_1$, $M=\Gamma_1\cap D(F)$, $I=\Gamma_2\cap D(F)$ and $P=\Gamma_3$. These groups are normal in $\Gamma$, because $D(F)$ is characteristic in $G^\circ(F)$ and because $\Gamma_1, \Gamma_2, \Gamma_3$ are normal in $\Gamma$. The group $L/M$ is a subgroup of the abelian group $G^\circ(F)/{D(F)}$. The group $M/I$ is a normal subgroup of $\Gamma_1/\Gamma_2$, hence it is a product of finite simple groups of Lie type in characteristic $\ell$. The group $I/P$ is a subgroup of $\Gamma_2/\Gamma_3$, hence $I/P$ is abelian of order prime to $\ell$. Furthermore $I/P=I/\Gamma_3$ is a subgroup of $\ol{G}(F)$ which lies in $\ol{D}(F)$ and in $\ol{Z}(F)$. Thus $I/P$ lies in the center $\ol{Z}(F)\cap \ol{D}(F)$ of the semisimple group $\ol{D}(F)$. It follows by Lemma \ref{centfin} that $[I:P]\le 2^{\mathrm{rk}(\ol{D})}$. It remains to show that $\mathrm{rk}(\ol{D})\le n$. Let $T$ be a maximal torus of $\ol{D}$ and denote by $\pi: G^\circ \to \ol{G}$ the canonical projection. Then the algebraic group $B:=\pi^{-1}(T)$ sits in an exact sequence $$0\to R\to B\to T\to 0$$ and $B$ is connected smooth and solvable, because $R$ and $T$ have these properties. The above exact sequence splits (cf.~\cite[XVII.5.1]{SGA3}); hence $B$ contains a copy of $T$. This copy is contained in a maximal torus $T'$ of ${\rm GL}_{n, F}$. Thus $n=\dim(T')\ge \dim(T)=\mathrm{rk}(\ol{D})$ as desired.\hfill $\Box$ \begin{proof}[Proof of Theorem~\ref{lp-main}] Suppose $G$ is $n$-bounded at $\ell$, so that it is a quotient $G_2/G_1$ with $G_i\subset {\rm GL}_n(\wt{\mathbb{Q}}_\ell)$. By Lemma~\ref{sigmalemm} below, it will suffice to prove the theorem for $G_2$. Thus we assume that $G$ is a compact profinite subgroup of ${\rm GL}_n(\wt{\mathbb{Q}}_\ell)$. By compactness of $G$ and a Baire category type argument (cf.~\cite[proof of Cor.~5]{dickinson}) the group $G$ is contained in ${\rm GL}_n(E)$ for some finite extension $E$ of ${\mathbb{Q}}_\ell$.
Let ${\cal O}_E$ be the ring of integers of the local field $E$. Again by compactness of $G$ one can then find an ${\cal O}_E$-lattice in $E^n$ that is stable under $G$. Hence we may assume that $G$ is a closed subgroup of ${\rm GL}_n({\cal O}_E)$. Let $\mathfrak p$ be the maximal ideal of the local ring ${\cal O}_E$ and let ${\mathbb{F}}={\cal O}_E/\mathfrak p$ be its residue field. The kernel $\mathcal{K}$ of the canonical map $p: {\rm GL}_n({\cal O}_E)\to {\rm GL}_n({\mathbb{F}})$ is a pro-$\ell$ group. Hence $Q=\mathcal{K}\cap G$ is pro-$\ell$ and open normal in $G$. We now apply Proposition \ref{lp0} to the subgroup $G/Q$ of ${\rm GL}_n({\mathbb{F}})\subset {\rm GL}_n(F)$ with $F=\overline{{\mathbb{F}}\,\!{}}\cong\overline{{\mathbb{F}}\,\!{}}_\ell$. This yields a normal series $$G\triangleright L\triangleright M\triangleright I\triangleright P\triangleright Q\triangleright \{1\}$$ such that the group $G/M$ lies in $\mathrm{Jor}_\ell(J'(n))$, and the group $M$ lies in $\Sigma_\ell(2^n)$ -- for the latter use that $Q$ is pro-$\ell$ and normal in $G$ and $P/Q$ is a finite $\ell$-group. \end{proof} The following lemma records a useful permanence property of groups in~$\Sigma_\ell(e)$ and~$\mathrm{Jor}_\ell(e)$. \begin{lemm} \label{sigmalemm} Fix any $e\in{\mathbb{N}}$. Then for any prime number $\ell$ the following holds: \begin{enumerate} \item[(a)] If $H'\unlhd H$ is a normal subgroup of some $H\in\mathrm{Jor}_\ell(e)$, then $H'$ and $H/H'$ lie in $\mathrm{Jor}_\ell(e)$. \item[(b)] If $M'\unlhd M$ is a closed normal subgroup of some $M\in\Sigma_\ell(e)$, then $M'$ and $M/M'$ lie in~$\Sigma_\ell(e)$. \end{enumerate} \end{lemm} If $M'$ in part (b) of the lemma were not assumed to be normal in $M$, then clearly $M'$ would not need to lie in $\Sigma_\ell(e)$. \begin{proof} We only give the proof of (b), the proof of (a) being similar but simpler. Let $M$ be in $\Sigma_\ell(e)$ and consider a normal series $M\triangleright I\triangleright P\triangleright \{1\}$ as in Definition \ref{sigmadef}.
Then $L:=M/I$ is isomorphic to a product $L_1\times \cdots \times L_s$ for certain finite simple groups of Lie type $L_i$ in characteristic $\ell$. Suppose $M'$ is a closed normal subgroup of $M$ and define $\ol{M'}=M'I/I$. By Goursat's Lemma the groups $\ol{M'}$ and $L/\ol{M'}$ are products of some of the $L_i$. From this it is straightforward to see that both $M'$ and $M/M'$ lie in $\Sigma_\ell(e)$. \end{proof} The following corollary is immediate from Lemma~\ref{sigmalemm}(b): \begin{coro}\label{sigmal-cor} Fix a constant $c\in{\mathbb{N}}$. Let $G$ be a profinite group, and for each $\ell\in L$ let $\rho_\ell\colon G\to G_\ell$ be a homomorphism of profinite groups such that $\mathop{{\rm Im}}\nolimits(\rho_\ell)\in\Sigma_\ell(c)$ for all $\ell\in L$. Then for any closed normal subgroup $N\unlhd G$ one has $\mathop{{\rm Im}}\nolimits(\rho_\ell|_N)\in\Sigma_\ell(c)$ for all $\ell\in L$. In particular, if $H\le G$ is an open subgroup, then the above applies to any open normal subgroup $N\unlhd G$ that is contained in~$H$. \end{coro} \section{Fundamental groups: finiteness properties and ramification} \label{FundGps} The purpose of this section is to recall some finiteness properties of fundamental groups and to provide some basic results on ramification. Regarding the latter we draw on results of Kerz--Schmidt and Wiesend (cf.~\cite{kerzschmidt}) and of de Jong on alterations (cf.~\cite{dejong}). We begin with a finiteness result of which a key part is from \cite{gp}. \begin{prop} \label{kl} Suppose that either $k$ is a finite field and $S$ is a smooth proper $k$-variety or that $k$ is a number field and $S$ is a smooth $k$-variety, and denote by $K=k(S)$ the function field of $S$. For $d\in{\mathbb{N}}$, let $\mathcal{M}_d$ be the set of all finite Galois extensions $E/K$ inside $\widetilde{K}$ such that $\mathop{\rm Gal}\nolimits(E/K)$ lies in $\mathrm{Jor}(d)$ and such that $E$ is unramified along $S$.
Then there exists a finite Galois extension $K'/K$ which is unramified along $S$ such that $E\subset \widetilde{k}K'$ for every $E\in \mathcal{M}_d$. \end{prop} \begin{proof} Let $\Omega=\prod_{E\in\mathcal{M}_d} E$ be the compositum of all the fields in $\mathcal{M}_d$. For every $E\in \mathcal{M}_d$ the group $\mathop{\rm Gal}\nolimits(E/K)$ lies in $\mathrm{Jor}(d)$ and hence there is a finite Galois extension $E'/K$ inside $E$ such that $[E':K]\le d$ and such that $E/E'$ is abelian. Define $$\Omega'=\prod_{E\in \mathcal{M}_d} E'.$$ Then $\Omega/\Omega'$ is abelian. Let $\kappa$ (resp. $k_0$) be the algebraic closure of $k$ in $\Omega$ (resp. in $K$). It suffices to prove the following {\bf Claim.} The extension $\Omega/\kappa K$ is finite. In fact, once this is shown, it follows that the finite separable extension $\Omega/\kappa K$ has a primitive element $\omega$. Then $\Omega=\kappa K(\omega)$ and $K(\omega)/K$ is a finite separable extension. Let $K'$ be the normal closure of $K(\omega)/K$ in $\Omega$. Then $\tilde{k}K'\supset \kappa K'\supset \kappa K(\omega)=\Omega$ as desired. In the case where $k$ is a number field the claim has been shown in \cite[Proposition 2.2]{gp}. Assume from now on that $k$ is finite. It remains to prove the claim in that case. The structure morphism $S\to\mathop{{\rm Spec}}\nolimits(k)$ of the smooth scheme $S$ factors through $\mathop{{\rm Spec}}\nolimits(k_0)$ and $S$ is a geometrically connected $k_0$-variety. The profinite group $\pi_1(S\times_{k_0} \mathop{{\rm Spec}}\nolimits({\widetilde{k}}))$ is topologically finitely generated (cf.~\cite[Thm.~X.2.9]{SGA1}) and $\mathop{\rm Gal}\nolimits(k_0)\cong \hat{{\mathbb{Z}}}$. Thus it follows by the exact sequence (cf.~\cite[Thm.~IX.6.1]{SGA1}) $$1\to \pi_1(S\times_{k_0} \mathop{{\rm Spec}}\nolimits({\widetilde{k}}))\to \pi_1(S)\to \mathop{\rm Gal}\nolimits(k_0) {\to 1}$$ that $\pi_1(S)$ is topologically finitely generated.
Thus there are only finitely many extensions of $K$ in $\tilde{K}$ of degree $\le d$ which are unramified along $S$. It follows that $\Omega'/K$ is a {\em finite} extension. If we denote by $S'$ the normalization of $S$ in $\Omega'$, then $S'\to S$ is finite and \'etale, hence $S'/k$ is a smooth proper variety. Furthermore $\Omega/\Omega'$ is abelian and unramified along $S'$. Hence $\Omega/\kappa \Omega'$ is finite by Katz-Lang (cf.~\cite[Thm.~2, p. 306]{katzlang}). As $\Omega'/K$ is finite, it follows that $\Omega/\kappa K$ is finite. \end{proof} Our next aim is to introduce several notions of ramification, refining those of \cite{kerzschmidt}, which are useful for covers of general schemes. We fix some terminology for curves. A curve $C$ over a field $k$ is a smooth (but not necessarily projective) $k$-variety of dimension $1$. By $k(C)$ we denote its function field, and by $P(C)$ the smooth projective model of $k(C)$ (which is unique up to isomorphism). Note that $P(C)$ contains $C$, and we set $\partial C:=P(C)\smallsetminus C$. If $\Char(k)=p\neq \ell$, then an \'etale cover $C'\to C$ is {\sl $\ell$-tame} if for any point $x\in\partial C$ with valuation $v_x$ of $k(C)$, the extension $k(C')/k(C)$ is at most tamely ramified\footnote{This includes the requirement that all extensions of all residue fields are separable.} and for any point $y$ of $\partial C'$ above $x$, the ramification index of $y$ over $x$ is a power of~$\ell$. The cover $C'\to C$ is {\sl tame} if the extension $k(C')/k(C)$ is at most tamely ramified. \begin{defi}\label{ramif-def} Let $f\colon T\to S$ be an \'etale morphism of regular varieties over a field $k$ of characteristic $p\ge0$. Let $\ell$ be a prime different from~$p$ and denote by $E$ and $K$ the function fields of $T$ and $S$, respectively.
We define: \begin{enumerate} \item[(a)] The cover $T/S$ (given by $f$) is {\em curve $\ell$-tame} if for all $k$-morphisms $\varphi\colon C\to S$ with $C$ a smooth curve over $k$, the base changed cover $f_C\colon C\times_ST\to C$ is $\ell$-tame. (The cover is {\em curve-tame} if for all $\varphi$ the cover $f_C$ is tame.) \item[(b)] The extension $E/K$ is {\em divisor $\ell$-tame} if for all discrete rank $1$ valuations $v$ of $K$ and $w$ of $E$ above $v$, the extension $E_w/K_v$ of the completions at $w$ and $v$, respectively, is tame and the ramification indices are powers of $\ell$. (The extension $E/K$ is {\em divisor tame} if for all discrete rank $1$ valuations $v$ the extension $E_w/K_v$ is tame.) \item[(c)] If $S$ is an open subscheme of a regular projective scheme $\overline S$ such that $D=\overline S\smallsetminus S$ is a normal crossings divisor (NCD), then $f$ is {\em $\ell$-tame} if condition (b) holds for all valuations defined by the generic points of $D$. (The cover is {\em tame} if the extension $E/K$ is divisor tame at all valuations defined by the generic points of $D$.) \end{enumerate} We extend the above three notions to profinite \'etale covers by saying that such a cover is {\em pro-$\ell$ curve tame}, {\em pro-$\ell$ divisor tame} or {\em pro-$\ell$ tame} if the respective condition holds for all subcovers of finite degree.
\end{defi} We note right away that if in the above definition $T\to S$ is finite \'etale and Galois, then condition (b) is equivalent to the following condition (cf.~\cite[Rem.~3.4]{kerzschmidt}): \begin{enumerate} \item[(b')] For all discrete rank $1$ valuations $v$ of $K$ and $w$ of $E$ above $v$, the ramification group $I_w$ inside $\mathop{\rm Gal}\nolimits(E_w/K_v)$ is an $\ell$-group.\footnote{If $k(w)$ and $k(v)$ denote the respective residue fields, then $I_w$ is defined as the kernel of the canonical homomorphism $\Aut(E_{w}/K_{v})\to \Aut(k(w)/k(v))$.} \end{enumerate} Note also that since $\ell$ is different from $p$, condition (b') implies that the residue extension of $E_w/K_v$ is separable. \begin{Def}\label{tamelyramrep-def} {\sl Suppose $k$ is a field and $K/k$ is a finitely generated extension. We call a homomorphism $\rho\colon \mathop{\rm Gal}\nolimits(K)\to G$ of profinite groups {\em $\ell$-tame over $k$} if:\begin{enumerate} \item[(a)] it factors via $\pi_1(S)$ for some regular variety $S$ over $k$ with function field $k(S)=K$ and \item[(b)] the fixed field $E:=(K_s)^{\mathop{\rm Ker}\nolimits(\rho)}$ is pro-$\ell$ divisor tame over~$K$. \end{enumerate}} \end{Def} \begin{Rem} \label{divtame-rem} In \cite[p.~12]{kerzschmidt} $f\colon T\to S$ is defined to be {\sl divisor tame} if for any normal compactification $\overline S$ of $S$ and any point $s\in \overline S\smallsetminus S$ with $\codim_{\ol S}s=1$, the rank $1$ valuation $v_s$ on $K$ defined by $s$ is tamely ramified in $E/K$. We claim that this condition is equivalent to the one we give (assuming that $E/K$ arises via some generically \'etale $f$): Clearly our notion of divisor tameness implies that of \cite{kerzschmidt}. For the converse we follow closely the argument in the proof of (ii)$\Rightarrow$(iii) of \cite[Thm.~4.4]{kerzschmidt}, though with different references. Let $w$ be a valuation of $E$ that is trivial on $k$ and denote by $v$ its restriction to $K$.
Let $\overline S$ be a normal compactification of $S$, which exists by the theorem of Nagata \cite{Luetkebohmert}. By \cite[Prop.~6.4]{Vaquie}, there exists a blow-up $\overline S'$ of $\overline S$ with center outside $S$ such that $v$ is the valuation of a codimension $1$ point $s\in \overline S'$. By normalization, we may further assume that $\overline S'$ is normal. Both operations, blow-up and normalization, leave $S$ unchanged, and so we may take for $\overline S$ a normal compactification of $S$ that contains a codimension $1$ point $s$ with valuation $v=v_s$. If $\overline T$ denotes the normalization of $\overline S$ in $E$, then $E=k(\overline T)$ and $w=v_{t}$ for some codimension $1$ point $t$ of $\overline T$ above $s$. But then the divisor tameness of \cite{kerzschmidt} implies that $w/v$ is at most tamely ramified. \end{Rem} The following result is a variant of parts of \cite[Thm.~4.4]{kerzschmidt}: \begin{Prop}\label{KS-variant} Let $k,S,T,K,E,f$ be as in Definition~\ref{ramif-def}. Then the following hold: \begin{enumerate}\advance\itemsep by -.4em \item[(a)] The cover $T/S$ is curve-tame if and only if the extension $E/K$ is divisor tame. \item[(b)] Suppose $E/K$ is Galois. Then the cover $T/S$ is curve-$\ell$-tame if and only if the extension $E/K$ is divisor $\ell$-tame. \item[(c)] If $S$ is an open subscheme of a regular projective scheme $\overline S$ such that $D=\overline S\smallsetminus S$ is a NCD, then both conditions from (b) are equivalent to $T/S$ being $\ell$-tame. \end{enumerate} The assertions (a)--(c) extend in an obvious manner to profinite covers. \end{Prop} \begin{proof} In light of Remark~\ref{divtame-rem}, part (a) of the proposition follows directly from the equivalence (i)$\Leftrightarrow$(ii) in \cite[Thm.~4.4]{kerzschmidt}. For the proof of (b), suppose first that $T/S$ is curve-$\ell$-tame and assume that $E/K$ is not divisor $\ell$-tame. By (a) we know that $E/K$ is divisor tame. So let $w$ be a valuation of $E$ at which $E/K$ is not divisor $\ell$-tame.
Denote by $K_1\subset K_2\subset E$ extensions of $K$ such that $E/K_1$ is totally ramified (and Galois) at $w$ and $K_2/K_1$ is of prime degree $\ell'\neq\ell$. As in Remark~\ref{divtame-rem}, there exists a normal compactification $\overline S$ of $S$ and a codimension $1$ point $s$ of $\overline S\smallsetminus S$ that has a preimage $t$ in the normalization $\overline T$ of $\overline S$ in $E$ with $v_t=w$. We define $T_i,\overline T_i$, $i=1,2$, as the normalizations of $S$ or $\overline S$ in $K_i/K$, respectively. We claim that $T/T_1$ is curve-$\ell$-tame. To see this, observe first that, as with curve-tameness, it is a simple matter of drawing a suitable commutative diagram to see that curve-$\ell$-tameness is stable under base change. In particular the base change $T\times_ST_1\to T_1$ is curve-$\ell$-tame. However, considering the commutative fiber product diagram $$\xymatrix@C+2pc{T\ar[d]\ar[dr]\ar@{-->}@/^/[r]^s&\ar[l] T\times_ST_1 \ar[d]\\ S&\ar[l] T_1\rlap{,}\\ }$$ we see that there is a canonical splitting $s\colon T\to T\times_ST_1$ over $T_1$. Hence $T$ is a connected component of $T\times_ST_1$ and as such the restriction $T\to T_1$ of the morphism $T\times_ST_1\to T_1$ inherits curve-$\ell$-tameness. Having the claim at our disposal, the hypothesis $[K_2:K_1]=\ell'$ yields that for any curve $C_1$ mapping to $T_1$, the induced cover $C_1\times_{T_1}T_2\to C_1$ is everywhere unramified along $\partial C_1$. Now $\overline T_1$ is regular in codimension $1$, hence the regular locus $W_1$ contains $T_1$ as well as the divisor corresponding to $w|_{K_1}$. Let $W_2$ be its preimage in $\overline T_2$. Now by \cite[Prop.~4.1]{kerzschmidt}, which can be paraphrased as: {\em curve-unramifiedness implies unramifiedness over a regular base}, it follows that $W_2\to W_1$ is \'etale. But then $K_2/K_1$ is unramified at $w$, a contradiction. For the converse of (b) suppose that $E/K$ is divisor $\ell$-tame.
We assume that there is a $k$-morphism $C\to S$ for $C$ a smooth curve such that $\pi\colon C\times_ST\to C$ is not $\ell$-tame along $\partial C$. Since $\mathop{\rm Gal}\nolimits(E/K)$ acts faithfully on $C\times_ST\to C$, by passing to a subgroup and thus an intermediate extension of $E/K$ we may assume that $C\times_ST$ is irreducible. Since then $\mathop{\rm Gal}\nolimits(E/K)$ is also the Galois group of the cover $\pi$, some further straightforward reductions allow us to assume that $[E:K]=\ell'\neq\ell$ for some prime $\ell'$ (which by (a) is different from $p$), and that $\mathop{\rm Gal}\nolimits(E/K)\cong{\mathbb{Z}}/\ell'$ is the inertia group above some valuation of $k(C)$. Following the argument in the proof of \cite[Thm.~4.4]{kerzschmidt} (v)$\Rightarrow$(i), we can find a discrete rank $d=\dim S$ valuation of $E$ that is ramified of order $\ell'$ (via a Parshin chain through the image of $\mathop{{\rm Spec}}\nolimits k(C)$). But \cite[Lem.~3.5]{kerzschmidt} says that $E/K$ is ramified at a discrete rank $d$ valuation if and only if it is ramified at a discrete rank $1$ valuation. We reach a contradiction, because by hypothesis $E/K$ is divisor $\ell$-tame while $[E:K]=\ell'$ is prime to $\ell$, so that $E/K$ is unramified at all discrete rank $1$ valuations. Finally, we prove (c). It is clear that divisor $\ell$-tameness implies $\ell$-tameness. The proof that $\ell$-tameness implies curve $\ell$-tameness follows from the argument given in \cite[Prop.~4.2]{kerzschmidt}: there it is shown that tameness implies curve tameness. Consider a curve $C$ over $k$ and a morphism $\varphi\colon C\to S$ over $k$. Then $\varphi$ extends to a morphism $\ol\varphi\colon P(C)\to \overline S$. Denote by $\ol{\varphi(C)}$ the closure of $\varphi(C)$ in $\overline S$. The ramification of $T\times_SC\to C$ occurs precisely at those points of $P(C)$ that under $\ol\varphi$ map to $D\cap \ol{\varphi(C)}$. To analyze the ramification, the proof of \cite[Prop.~4.2]{kerzschmidt} appeals to Abhyankar's lemma.
In the notation of {\sl loc.~cit.}, the ramification is then governed by indices $n_i$, $i=1,\ldots, r$, that are prime to $p$. By the $\ell$-tameness of $T\to S$, the $n_i$ must all be powers of~$\ell$. But then {\sl loc.~cit.} implies that $T\times_SC\to C$ is $\ell$-tame, and this completes the proof. \end{proof} Our formulation of divisor tameness transfers easily under rather general field extensions: \begin{Lem}\label{CodimOneLTameIsInherited} Suppose that $\Char(k)=p>0$ and consider the following inclusions of fields: $$\xymatrix{K \ar@{^{ (}->}[r] & K'\\ \ar@{_{ (}->}[u] k \ar@{^{ (}->}[r]& k'\ar@{_{ (}->}[u]\\ }$$ If $E/K$ is Galois and divisor $\ell$-tame over $k$ (or pro-$\ell$ divisor tame), then so is $EK'/K'$ over~$k'$. \end{Lem} \begin{proof} It clearly suffices to prove the lemma in the case where $E/K$ is finite Galois. Then $E':=EK'$ is finite Galois over~$K'$. Let $w'$ be any discrete rank one valuation of $E'$ trivial on $k'$ and denote by $w$ its restriction to $E$, by $v'$ the restriction to $K'$ and by $v$ the restriction to $K$. We need to show that $[w'({E'}^*):v'({K'}^*)]$ is a power of $\ell$ and that the residue extension is separable. The latter can be taken care of at once: The extension $E/K$ is finite separable. Hence so is $E'/K'$, because a primitive element of $E/K$ will be such an element for $E'/K'$. For the same reason, separability is preserved by the extension of completions $E'_{w'}/K'_{v'}$. Now by the Cohen structure theorem, the extension of residue fields is a subextension of $E'_{w'}/K'_{v'}$, and as such it must be separable. It remains to consider the index of the value groups. Suppose first that $v(K^*)=0$. Then we must have $w(E^*)=0$, since otherwise, if some $\alpha\in E$ satisfied $w(\alpha)\neq0$, the family $(\alpha^n)_{n\in{\mathbb{Z}}}$ would be linearly independent over $K$, contradicting the finiteness of $E/K$.
This means that under the residue map of $E'$, the subfield $E$ is mapped injectively to the residue field of $E'$ at $w'$. But then $E/K$ contributes only a residue field extension to $E'/K'$, and thus $w'({E'}^*)=v'({K'}^*)$. Suppose now that $w$ is non-trivial, so that by the above $v$ is non-trivial as well. We pass to the completions and note that $E_w/K_v$ and $E'_{w'}/K'_{v'}$ remain Galois extensions. By the Cohen structure theorem, $K_v$ now contains the residue field $k(v)$, and $E_w$ the residue field $k(w)$. In particular, $F=k(w) K_v$ is an unramified extension of $K_v$ and $E_w/F$ is totally ramified. We may thus consider these two cases separately. Suppose first that $E_w/K_v$ is unramified. Then $E'_{w'}=K'_{v'}k(w)$ where clearly $k(w)$ defines a separable extension of the residue field of $K'_{v'}$. Hence $E'_{w'}/K'_{v'}$ is unramified. We conclude $w'({E'}^*)=v'({K'}^*)$, which completes the argument. Suppose now that $E_w/K_v$ is totally ramified. By our hypothesis, the extension $E/K$ is tamely ramified at $w$ with ramification index a power of $\ell$. It follows that $E_w/K_v$ is a Galois extension with $\mathop{\rm Gal}\nolimits(E_w/K_v)$ an $\ell$-group. Now $\mathop{\rm Gal}\nolimits(E'_{w'}/K'_{v'})$ injects into $\mathop{\rm Gal}\nolimits(E_w/K_v)$ because $E'=K'E$, and thus $[E'_{w'}:K'_{v'}]$ is a power of $\ell$. Since the order of $w'({E'}^*)/v'({K'}^*)$ divides the local degree $[E'_{w'}:K'_{v'}]$, the proof is complete. \end{proof} Combining ramification properties with finiteness properties of fundamental groups, we obtain the following criterion for a family of representations of $\mathop{\rm Gal}\nolimits(K)$ with images in $\mathrm{Jor}_\ell(d)$, or with abelian images of bounded order, to become trivial over $\mathop{\rm Gal}\nolimits(K'\wt k)$ for some finite $K'/K$. \begin{prop}\label{FiniteAndRamif-prop} Let $k$ be a field and let $S/k$ be a normal $k$-variety with function field $K$.
Suppose $(\rho_\ell\colon \pi_1(S)\to G_\ell)_{\ell\in L}$ is a family of continuous homomorphisms such that, if $\Char(k)>0$, each $\rho_\ell$ is $\ell$-tame. Under either of the following two conditions there exists a finite extension $K'$ of $K$ such that for all $\ell\in L$ we have $\rho_\ell(\mathop{\rm Gal}\nolimits(K'\wt k))=\{1\}$. \begin{enumerate} \item[(a)] The field $k$ is finite or a number field and there exists a constant $d\in{\mathbb{N}}$ such that for each $\ell\in L$ the group $\mathop{{\rm Im}}\nolimits(\rho_\ell)$ lies in $\mathrm{Jor}_\ell(d)$. \item[(b)] The field $k$ is algebraically closed and there exists a constant $c\in{\mathbb{N}}$ such that for each $\ell\in L$ the group $\mathop{{\rm Im}}\nolimits(\rho_\ell)$ is abelian of order at most~$c$. \end{enumerate} \end{prop} \begin{proof} First we replace $K$ by a finite Galois extension and $S$ by its normalization in this extension, so that we can assume that $\rho_\ell(\pi_1(S))=\{1\}$ for all $\ell \le d$ or $\ell\le c$, respectively. Next we apply the result of de Jong on alterations (cf.~\cite[Thm.~4.1, 4.2]{dejong}). It provides us with a finite extension $k'$ of $k$, a smooth projective geometrically connected $k'$-variety $T'$, a non-empty open subvariety $S'$ of $T'$ and an alteration $f: S'\to S$, such that furthermore $D':=T'\smallsetminus S'$ is a normal crossings divisor. We define $K'$ to be the function field of~$S'$, so that $K'/K$ is finite. (If $k$ is perfect, we could also assume that $K'/K$ is separable.) Next observe that, if $\Char(k)$ is positive, then the (divisor) $\ell$-tameness of $\rho_\ell$ implies the (divisor) $\ell$-tameness of $\rho_\ell|_{\pi_1(S')}$ by Lemma~\ref{CodimOneLTameIsInherited}, and thus, by Proposition~\ref{KS-variant}, for each $\ell$ the extension $(K'_s)^{\mathop{\rm Ker}\nolimits(\rho_\ell|_{\pi_1(S')})}$ of $K'$ is $\ell$-tame.
Because of the first reduction step in the previous paragraph, this implies that each $\rho_\ell|_{\pi_1(S')}$ is unramified at the generic points of $D'$. Purity of the branch locus (cf.~\cite[X.3.1]{SGA1}) now implies that all $\rho_\ell|_{\pi_1(S')}$ factor via $\pi_1(T')$. Now, in case (a), the assertion of the proposition follows from Proposition~\ref{kl}. In case (b) we use that, since $k$ is algebraically closed, the fundamental group $\pi_1(T')$ is topologically finitely generated (cf.~\cite[Thm.~X.2.9]{SGA1}), and that furthermore, if $\Char(k)=0$, the same holds true for $\pi_1(S')$ (cf.~\cite[II.2.3.1]{SGA7}). Hence in the former case $\pi_1(T')^{\rm{ab}}/c\pi_1(T')^{\rm{ab}}$ and in the latter case $\pi_1(S')^{\rm{ab}}/c\pi_1(S')^{\rm{ab}}$ are finite. This completes the proof in case~(b). \end{proof} \section{An independence criterion} \label{indep-sec} From now on, let $k$ be any field, let $K/k$ be a finitely generated field extension and let $L$ be a set of prime numbers not containing $p := \Char(k)$. For every $\ell\in L$ let $G_\ell$ be a profinite group and let $\rho_\ell: \mathop{\rm Gal}\nolimits(K)\to G_\ell$ be a continuous homomorphism. If for all $\ell\in L$ the groups $\mathop{{\rm Im}}\nolimits(\rho_\ell)$ are $n$-bounded at $\ell$, then by Theorem~\ref{lp-main} we have a short exact sequence $1\to M_\ell\to \mathop{{\rm Im}}\nolimits(\rho_\ell)\to H_\ell\to 1$ with $H_\ell\in \mathrm{Jor}_\ell(d)$ for $d=J'(n)$ and $M_\ell\in\Sigma_\ell(2^n)$. At the end of the previous section we have seen that a combination of tameness of ramification and results on fundamental groups allows one to control the $H_\ell$ in a uniform manner. In this section we shall show how to control the $M_\ell$ in a uniform manner, provided one has uniform control on the ramification. We begin by introducing the necessary concepts and then give the result.
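As a basic example of the $n$-boundedness hypothesis above, note that if $\rho_\ell\colon\mathop{\rm Gal}\nolimits(K)\to{\rm GL}_n({\mathbb{Z}}_\ell)$ is a continuous representation, then
$$\mathop{{\rm Im}}\nolimits(\rho_\ell)\subset {\rm GL}_n({\mathbb{Z}}_\ell)\subset {\rm GL}_n(\wt{\mathbb{Q}}_\ell)$$
is a compact subgroup, and hence $\mathop{{\rm Im}}\nolimits(\rho_\ell)$ is $n$-bounded at $\ell$ in the sense of Definition~\ref{nbddgp-def}: take $G_1=\{1\}$ and $G_2=\mathop{{\rm Im}}\nolimits(\rho_\ell)$.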
\medskip To $(\rho_\ell)_{\ell\in L}$ we attach the family $(\wt\rho_\ell)_{\ell\in L}$ by defining each $\wt\rho_\ell$ as the composite homomorphism $$\wt\rho_\ell\colon\mathop{\rm Gal}\nolimits({K}) \buildrel\rho_\ell\over\longrightarrow \mathop{{\rm Im}}\nolimits(\rho_\ell)\to \mathop{{\rm Im}}\nolimits(\rho_\ell)/Q_\ell$$ where $Q_\ell$ is the maximal normal pro-$\ell$ subgroup of $\mathop{{\rm Im}}\nolimits(\rho_\ell)$. Note that if $\rho_\ell$ is an $\ell$-adic representation, then $\wt\rho_\ell$ is essentially the semisimplification of the mod $\ell$ reduction of $\rho_\ell$. \begin{Def}\label{rsconditions} {\sl \begin{enumerate} \item The family $(\rho_\ell)_{\ell\in L}$ is said to satisfy the condition ${\cal R}(k)$, if there exist a finite extension $k'$ of $k$, a finite extension $K'/Kk'$ and a smooth $k'$-variety $U'$ with function field $K'$ such that for every $\ell\in L$ the homomorphism $\wt\rho_\ell |_{\mathop{\rm Gal}\nolimits(K')}$ is unramified along~$U'$. \item The family $(\rho_\ell)_{\ell\in L}$ is said to satisfy the condition ${\cal S}(k)$, if it satisfies ${\cal R}(k)$ and if one can choose the field $K'$ for ${\cal R}(k)$ such that each $\wt\rho_\ell|_{\mathop{\rm Gal}\nolimits(K')}$ is $\ell$-tame. \end{enumerate}} \end{Def} The condition ${\cal R}(k)$ says that each member $\wt\rho_\ell$ is, up to pro-$\ell$ ramification, potentially generically \'etale in a uniform way. The condition ${\cal S}(k)$ is a kind of semistability condition. \begin{exam} \label{abvarex} {\em Set $L={\mathbb{L}}\smallsetminus \{\Char(k)\}$ and let $A/K$ be an abelian variety. For every $\ell\in L$ denote by $\sigma_{\ell}\colon \mathop{\rm Gal}\nolimits(K)\to \Aut_{{\mathbb{Z}}_\ell}(T_\ell(A))$ the representation of $\mathop{\rm Gal}\nolimits(K)$ on the $\ell$-adic Tate module $T_\ell(A)=\invlim_{i\in{\mathbb{N}}} A[\ell^i]$. There exists a finite extension $k'$ of $k$ and a finite separable extension $K'/k'K$ such that $K'$ is the function field of some smooth $k'$-variety $V'$.
By the spreading-out principles of \cite{EGAIV3} there exists a non-empty open subscheme $U'$ of $V'$ and an abelian scheme ${\cal A}$ over $U'$ with generic fibre $A$. This implies (cf.~\cite[IX.2.2.9]{SGA7}) that $\sigma_\ell$ is unramified along $U'$ for every $\ell\in L$. Hence the family $(\sigma_\ell)_{\ell\in L}$ satisfies condition ${\cal R}(k)$. In order to obtain also ${\cal S}(k)$, we choose an odd prime $\ell_0\in L$, and we require the field $K'$ above to be finite separable over $k'K(A[\ell_0])$. Now let $v'$ be any discrete valuation of $K'$ which is trivial on $k'$, and let $R_{v'}$ be the discrete valuation ring of $v'$. Let $\mathcal{N}_{v'}/\mathop{{\rm Spec}}\nolimits(R_{v'})$ be the N\'eron model of $A$ over $R_{v'}$. The condition $K'\supset K(A[\ell_0])$ forces $\mathcal{N}_{v'}$ to be semistable (cf.~\cite[IX.4.7]{SGA7}). This in turn implies that the image under $\sigma_{\ell}$ of the inertia group at $v'$ is unipotent (and hence pro-$\ell$) for {\em every} $\ell\in L$ (cf.~\cite[IX.3.5]{SGA7}). It follows that the family $(\sigma_\ell)_{\ell\in L}$ satisfies condition ${\cal S}(k)$.} \end{exam} The following is the main independence criterion of this section: \begin{prop}\label{geometricLemm} Let $k$ be an algebraically closed field and let $K/k$ be a finitely generated extension. Suppose that $(\rho_\ell)_{\ell\in L}$ is a family of representations of $\mathop{\rm Gal}\nolimits(K)$ that satisfies the following conditions: \begin{enumerate} \item The family $(\rho_\ell)_{\ell\in L}$ satisfies ${\cal R}(k)$ if $\Char(k)=0$ and ${\cal S}(k)$ if $\Char(k)>0$. \item There exists a constant $c\in{\mathbb{N}}$ such that for all $\ell\in L$ one has $\mathop{{\rm Im}}\nolimits(\rho_\ell)\in\Sigma_\ell(c)$. \end{enumerate} Then there exists a finite abelian Galois extension $E/K$ with the following properties.
\begin{enumerate} \item[(i)] For every $\ell\in L$ the group $\rho_\ell(\mathop{\rm Gal}\nolimits(E))$ lies in $\Sigma_\ell(c)$ and is generated by its $\ell$-Sylow subgroups, and for $\ell> c$ it is generated by the $\ell$-Sylow subgroups of $\rho_\ell(\mathop{\rm Gal}\nolimits(K))$. \item[(ii)] The group $\mathop{\rm Gal}\nolimits(E)$ is a characteristic subgroup of~$\mathop{\rm Gal}\nolimits(K)$. \item[(iii)] The restricted family $(\rho_\ell|_{\mathop{\rm Gal}\nolimits(E)})_{\ell\in L\smallsetminus\{2,3\}}$ is independent and $(\rho_\ell)_{\ell\in L}$ is almost independent. \end{enumerate} \end{prop} \begin{proof} We can assume that $\rho_\ell$ is surjective for all $\ell\in L$. Denote by $G_\ell^+$ the normal subgroup of $G_\ell$ which is generated by the pro-$\ell$ Sylow subgroups of $G_\ell$. Then $\ol{G_\ell}:=G_\ell/G_\ell^+$ is a finite group of order prime to $\ell$. As $G_\ell$ lies in $\Sigma_\ell(c)$, so does its quotient $\ol G_\ell$ by Lemma~\ref{sigmalemm}(b). Now any group in $\Sigma_\ell(c)$ of order prime to $\ell$ is abelian of order at most~$c$, and thus the latter holds for~$\ol G_\ell$. Let $K_\ell^+$ be the fixed field in $K_s$ of the kernel of the map $\pi_\ell\circ \rho_\ell$, where $\pi_\ell\colon G_\ell\to \ol{G_\ell}$ denotes the canonical projection, so that $\mathop{\rm Gal}\nolimits(K_\ell^+/K)\cong \ol{G_\ell}$. Then the compositum $E=\prod_{\ell\in L} K_\ell^+$ is an abelian extension of $K$ such that $\mathop{\rm Gal}\nolimits(E/K)$ is annihilated by~$c!$. From Proposition~\ref{FiniteAndRamif-prop}(b) we see that $E/K$ is finite. Assertion (ii) is now straightforward: By definition of the $K_\ell^+$ the subgroups $\mathop{\rm Gal}\nolimits(K_\ell^+)$ of $\mathop{\rm Gal}\nolimits(K)$ are characteristic and hence so is their intersection~$\mathop{\rm Gal}\nolimits(E)$. We turn to the proof of (i): For every $\ell\in L$, by (ii) the group $\rho_\ell(\mathop{\rm Gal}\nolimits(E))$ is normal in $G_\ell$, and hence it lies in $\Sigma_\ell(c)$ by Lemma~\ref{sigmalemm}.
By construction $\rho_\ell(\mathop{\rm Gal}\nolimits(E))\subset \rho_\ell(\mathop{\rm Gal}\nolimits(K_\ell^+))=G_\ell^+$ and $N_\ell:=G_\ell^+/\rho_\ell(\mathop{\rm Gal}\nolimits(E))$ is abelian and annihilated by $c!$. We claim that (1) $N_\ell$ is an $\ell$-group, so that $N_\ell$ is trivial if $\ell>c$, and that (2) $\rho_\ell(\mathop{\rm Gal}\nolimits(E))$ is generated by its pro-$\ell$ Sylow subgroups. We argue by contradiction and assume that (1) or (2) fails. If (2) fails, then $\rho_\ell(\mathop{\rm Gal}\nolimits(E))$ has a finite simple quotient of order prime to $\ell$. Because $\rho_\ell(\mathop{\rm Gal}\nolimits(E))$ lies in $\Sigma_\ell(c)$, this simple quotient has to be abelian of prime order $\ell'$ different from $\ell$. Again by (ii), the Galois closure over $K$ of the fixed field of this $\ell'$-extension is a solvable extension. Denote by $F$ either this solvable extension if (2) fails, or the extension of $K_\ell^+$ whose Galois group is canonically isomorphic to $N_\ell$ if (1) fails. In either case $F/K$ is Galois and solvable, and we have a canonical surjection $\pi_\ell'\colon G_\ell\ontoover{\ } \mathop{\rm Gal}\nolimits(F/K)$. Arguing as in the first paragraph, it follows that $I_\ell$ surjects onto $\mathop{\rm Gal}\nolimits(F/K)$. By construction $\mathop{\rm Gal}\nolimits(F/K_\ell^+)$ is not an $\ell$-group. It follows from the definition of $K_\ell^+$ that the normal subgroup $\pi_\ell'(P_\ell)\subset \mathop{\rm Gal}\nolimits(F/K)$ is a proper subgroup of $\mathop{\rm Gal}\nolimits(F/K_\ell^+)$. But then its fixed field is a proper extension of $K_\ell^+$ which is at once Galois and of a degree over $K$ that is prime to $\ell$. This contradicts the definition of $K_\ell^+$, and thus (1) and (2) hold. This in turn completes the proof of~(i). We now prove (iii). Denote by $\Xi_\ell$ the class of those finite groups which are either a finite simple group of Lie type in characteristic $\ell$ or isomorphic to ${\mathbb{Z}}/(\ell)$. 
The conditions in (i) imply that every simple quotient of $\rho_\ell(\mathop{\rm Gal}\nolimits(E))$ lies in $\Xi_\ell$. But now for any $\ell,\ell'\ge5$ such that $\ell\neq\ell'$ one has $\Xi_{\ell}\cap \Xi_{\ell'}=\emptyset$ (cf.~\cite[Thm.~5]{bible}, \cite{artin}, \cite{KLMS}). The first part of (iii) now follows from \cite[Lemme 2]{bible}. The second part follows from the first, the definition of almost independence and from \cite[Lemme 3]{bible}. \end{proof} \section{Reduction steps} \label{Reductions} Let $k$, $K$, $L$, $p$ and $(\rho_\ell)_{\ell\in L}$ be as at the beginning of Section~\ref{indep-sec}. In the previous two sections we have described ramification properties of $(\rho_\ell)_{\ell\in L}$ and properties of $(\rho_\ell(\mathop{\rm Gal}\nolimits(K)))_{\ell\in L}$ that were essential for controlling in a uniform way the groups $H_\ell$ and $M_\ell$ that occur in $\rho_\ell(\mathop{\rm Gal}\nolimits(K))$ as in Theorem~\ref{lp-main}. The aim of this section is to explain how these properties, for a general pair $(K,k)$ as in our target Theorem~\ref{main}, can be reduced to the case of a pair where $k$ is the prime field and $K$ is finitely generated over it. Moreover we shall explain how one can reduce the proof of our target theorem to the case where $X$ is a smooth and projective variety over~$K$. \medskip \begin{Lem}\label{FamilyRestriction} Suppose we have a commutative diagram of fields $$\xymatrix{K \ar@{^{ (}->} [r]&K'\\ k \ar@{^{ (}->} [r]\ar@{_{ (}->} [u]&k'\ar@{_{ (}->} [u]\\ }$$ such that $K'$ is finite over $Kk'$. Then the following properties hold true: \begin{enumerate}\advance\itemsep by -.5em \item[(i)] If $(\rho_\ell)_{\ell\in L}$ satisfies ${\cal R}(k)$, then $(\rho_\ell |_{\mathop{\rm Gal}\nolimits({K'})})_{\ell\in L}$ satisfies ${\cal R}(k')$. \item[(ii)] If $\Char(k)>0$ and $(\rho_\ell)_{\ell\in L}$ satisfies ${\cal S}(k)$, then $(\rho_\ell|_{\mathop{\rm Gal}\nolimits({K'})})_{\ell\in L}$ satisfies ${\cal S}(k')$.
\item[(iii)] If there exists a constant $c\in{\mathbb{N}}$ such that for all $\ell\in L$, the group $\rho_\ell(\mathop{\rm Gal}\nolimits({K\wt k}))$ lies in $\Sigma_\ell(c)$, then there exists a finite Galois extension $E'/K'$ such that for all $\ell\in L$, the group $\rho_\ell(\mathop{\rm Gal}\nolimits(E'\wt k'))$ lies in $\Sigma_\ell(c)$. \end{enumerate} \end{Lem} \begin{proof} Considering the diagram $$\xymatrix{K \ar@{=}[r]&\ar@{^{ (}->} [r]K&k'K \ar@{^{ (}->} [r]&K'\\ k \ar@{^{ (}->} [r]\ar@{_{ (}->} [u]&k'\cap K\ar@{_{ (}->} [u]\ar@{^{ (}->} [r]&k'\ar@{=}[r]\ar@{_{ (}->} [u]&k'\rlap{,}\ar@{_{ (}->} [u]\\ }$$ it suffices to prove the lemma in the following three particular cases: (a) $K'=K$, (b) $Kk'=K'$ and $K\cap k'=k$ (base change), (c) $k=k'$ and $K'$ is finite over $K$. For the proof of (iii) note that in case (a) the group $\mathop{\rm Gal}\nolimits({K'\wt k'})$ is a closed normal subgroup of $\mathop{\rm Gal}\nolimits({K\wt k})$ and in case (b) the canonical homomorphism $\mathop{\rm Gal}\nolimits({K'\wt k'})\to\mathop{\rm Gal}\nolimits({K\wt k})$ is surjective. In either case we take $E'=K'$. Finally in case (c) we define $E'$ as the Galois closure of $K'$ over $K$. Then in cases (a) and (c) assertion (iii) follows from Corollary~\ref{sigmal-cor} with $H=\mathop{\rm Gal}\nolimits(E')$ and~$G=\mathop{\rm Gal}\nolimits(K)$. Case (b) is obvious. In all cases, the proof of (ii) is immediate from Lemma~\ref{CodimOneLTameIsInherited}, once (i) is proved. Therefore it remains to prove (i). We first consider case (a). By replacing $k$, $k'$ and $K$ several times by finite extensions, we can successively achieve the following, where in each step the previous property is preserved: First, using de Jong's result on alterations (cf.~\cite[Thm.~4.1, 4.2]{dejong}), there exists a smooth projective scheme $X/k'$ whose function field is $K$.
Second, by the spreading out principle, there exists an affine scheme $U'$ over $k$ whose function field is $k'$ and a smooth projective $U'$-scheme ${\cal X}$ whose function field is $K$. Third, by hypothesis ${\cal R}(k)$ there exists a smooth $k$-scheme $U$ whose function field is $K$ such that all $\wt\rho_\ell$ factor via $\pi_1(U)$. By shrinking $U$ we may assume it to be affine. Also we choose an affine open subscheme ${\cal V}$ of ${\cal X}$. The corresponding coordinate rings we denote by $R$ and ${\cal R}$, respectively. Both of these rings are finitely generated over $k$. Since the fraction field of both is $K$, by inverting a suitable element $g\neq0$ of $R$ we have $R[g^{-1}]\supset {\cal R}$, and similarly we can find $0\neq f\in{\cal R}$ such that ${\cal R}[f^{-1}]\supset R$. Inverting both elements shows that we can find an affine open subscheme $V$ of both $U$ and ${\cal V}$. In particular, the function field of $V$ is $K$, the scheme $V$ is smooth over $k$ and over $U'$ and the representations $\wt\rho_\ell$ all factor via $\pi_1(V)$. The following diagram displays the situation: $$\xymatrix{ &\ar@{_{ (}->} [dl]V\ar@{^{ (}->} [dr]\ar[ddr]&V'\ar@{-->}[ddr]\ar@{-->}[l]&&\\ U\ar[ddr]& &\ar[d]{\cal X}&X\ar[l]\ar[d]&\ar[l]\mathop{{\rm Spec}}\nolimits K\ar[dl]\\ &&U'\ar[dl]&\ar[l]\mathop{{\rm Spec}}\nolimits k'\ar[dll]\\ &\mathop{{\rm Spec}}\nolimits k&\\ }$$ Define $V'$ as the base change $V\times_{U'} \mathop{{\rm Spec}}\nolimits k'$, so that $V'$ is smooth and affine over $\mathop{{\rm Spec}}\nolimits k'$. Now if $W\to U$ is any \'etale Galois cover, then the base change $W\times_{U}V'$ is an \'etale Galois cover of $V'$. We deduce that $\wt\rho_\ell$ factors via $\pi_1(V')$ for all~$\ell$, and thus we have verified ${\cal R}(k')$ for~$(\wt\rho_\ell)_{\ell\in L}$. Case (b): This is a base change. Therefore if we replace $k$ by a finite extension, then $U/k$ becomes a smooth variety with function field a finite separable extension of $Kk'$.
But then $U\times_{\mathop{{\rm Spec}}\nolimits k}\mathop{{\rm Spec}}\nolimits k'$ is smooth over $k'$ and its function field is a finite separable extension of $K'=Kk'$. From this (i) is immediate. Case (c): To see ${\cal R}(k)$ over $K'$, let $k'\supset k$ and $K''\supset Kk'$ be finite extensions such that there exists a smooth $k'$-scheme $U$ with function field $K''$ such that all $\wt\rho_\ell$ factor via $\pi_1(U)$. Let $U'$ be the normalization of $U$ in $K'K''$. Now choose $k''\supset k'$ and $K'''\supset K'K''$ finite such that there is a smooth $k''$-scheme $U''$ with function field $K'''$ and a finite morphism to $U'$. Then ${\cal R}(k)$ over $K'$ is verified by~$U''$. \end{proof} The following is a standard lemma from algebraic geometry about models of schemes over finitely generated fields. \begin{Lem}\label{descendfg} Let $k$ be a field, $K/k$ be finitely generated and $X$ be a separated algebraic scheme over $K$. Then there exists an absolutely finitely generated field $K_0\subset K$ and a separated algebraic scheme $X_0$ over $K_0$ such that $kK_0=K$ and $X_0\otimes_{K_0}K=X$. If in addition $X/K$ is smooth and projective, then one can choose $X_0$ and $K_0$ in such a way that $X_0/K_0$ is smooth and projective. \end{Lem} \begin{proof} Let $\mathfrak{K}$ be the set of all finitely generated subfields of $K$. Then $K=\bigcup_{K'\in \mathfrak{K}} K'$ and $\mathop{{\rm Spec}}\nolimits(K)=\mathop{\varprojlim}\limits_{K'\in \mathfrak{K}} \mathop{{\rm Spec}}\nolimits(K')$. There exists $K'\in \mathfrak{K}$ and a separated algebraic $K'$-scheme $X_0$ such that $X=X_{0, K}$ (cf.~\cite[8.8.2]{EGAIV3} and \cite[8.10.5(v)]{EGAIV3}). If $X/K$ is projective, then one can choose $K'$ and $X_0$ in such a way that $X_0/K'$ is projective (cf.~\cite[8.10.5(xiii)]{EGAIV3}). If $X/K$ is smooth, then $X_0/K'$ is smooth. Furthermore there exist $x_1,\cdots, x_t\in K$ such that $K=k(x_1,\cdots, x_t)$. Define $K_0:=K'(x_1,\cdots, x_t)$.
Then $kK_0=K$, the field $K_0$ is finitely generated and the base change $X_{0, K_0}$ of $X_0$ to $K_0$ has the desired properties. \end{proof} For a separated algebraic scheme $X$ over $K$ and any $\ell\in{\mathbb{L}}\smallsetminus \{\Char(k)\}$ we denote by $\rho_{\ell,X}$ the representation of $\mathop{\rm Gal}\nolimits(K)$ on $\bigoplus_{q\ge0}\big(H^q_c(X_{\wt K},{\mathbb{Q}}_\ell)\oplus H^q(X_{\wt K},{\mathbb{Q}}_\ell)\big)$. \begin{Cor}\label{FirstReduction} Suppose that for all absolutely finitely generated fields $K_0$ with field of constants $k_0$ and for all schemes $X_0$ that are separated algebraic over $K_0$, the following conditions are true: \begin{enumerate} \item The family $(\rho_{\ell,X_0})_{\ell\in L}$ satisfies ${\cal R}(k_0)$ if $\Char(k_0)=0$ and ${\cal S}(k_0)$ if $\Char(k_0)>0$. \item There exists a constant $c\in{\mathbb{N}}$ and a finite extension $E_0$ of $K_0$ such that for all $\ell\in L$ one has $\rho_{\ell,X_0}(\mathop{\rm Gal}\nolimits({E_0\wt k_0}))\in\Sigma_\ell(c)$. \end{enumerate} Let $k,K,X$ be as in Theorem~\ref{main}. Then there exists a finite extension $E/K$ such that the assertions and conclusions of Proposition~\ref{geometricLemm} hold for $E\wt k$, and in particular Theorem~\ref{main} holds. \end{Cor} \begin{proof} Let $k$ be any field, let $K$ over $k$ be a finitely generated extension field and let $X$ be a separated algebraic scheme over $K$. By Lemma~\ref{descendfg}, we can find $K_0\subset K$ absolutely finitely generated and $X_0$ a separated algebraic scheme over $K_0$ such that $X=X_0\otimes_{K_0}K$, and moreover if $X$ is smooth and/or projective over $K$, then the same can be assumed for $X_0$ over~$K_0$. Next we replace $K_0$ by $E_0$ as guaranteed by our hypotheses and $K$ by $E:=E_0K$.
Now Lemma~\ref{FamilyRestriction} yields a finite Galois extension $E'/E$ such that $(\rho_{\ell,X}|_{\mathop{\rm Gal}\nolimits(E'\wt k)})_{\ell\in L}$ satisfies ${\cal R}(\wt k)$ if $\Char(k)=0$ and ${\cal S}(\wt k)$ if $\Char(k)>0$ and in addition that for all $\ell\in L$, the image $\rho_\ell(\mathop{\rm Gal}\nolimits(E'\wt k))$ lies in $\Sigma_\ell(c)$. By Proposition~\ref{geometricLemm}, there exists a finite extension $E''$ of $E'$ such that the assertions and conclusions of Proposition~\ref{geometricLemm} hold for $E''\wt k$. \end{proof} We now come to the second reduction step. \begin{Def} \label{sssdefi} {\sl For a representation $\rho_\ell\colon\mathop{\rm Gal}\nolimits({K})\to{\rm GL}_n({\mathbb{Q}}_\ell)$ we denote by $\rho_\ell^\mathrm{sss}$ its {\em strict semisimplification}, i.e., the direct sum over the irreducible subquotients of $\rho_\ell$ where each isomorphism type occurs with multiplicity one.} \end{Def} Note that $\mathop{{\rm Im}}\nolimits(\rho_\ell^\mathrm{sss})=\mathop{{\rm Im}}\nolimits(\rho_\ell^\ssi)$ where $\rho_\ell^\ssi$ denotes the usual semisimplification of $\rho_\ell$. \begin{Lem}\label{SemisimplifAndS(k)} For every $\ell\in L$ let $\rho_\ell$ and $\rho_\ell'$ be representations $\mathop{\rm Gal}\nolimits(K)\to {\rm GL}_n({\mathbb{Q}}_\ell)$. Suppose that one of the following two assertions is true: \begin{enumerate}\advance\itemsep by -.5em \item $\rho_\ell^\mathrm{sss}=(\rho_\ell')^\mathrm{sss}$ for all $\ell\in L$, or \item $\rho_\ell$ is a direct summand of $\rho_\ell'$ for all $\ell\in L$.
\end{enumerate} Then the following hold: \begin{enumerate}\advance\itemsep by -.5em \item[(i)] if the family $(\rho'_\ell)_{\ell\in L}$ satisfies ${\cal R}(k)$ then so does $(\rho_\ell)_{\ell\in L}$; \item[(ii)] if the family $(\rho'_\ell)_{\ell\in L}$ satisfies ${\cal S}(k)$ then so does $(\rho_\ell)_{\ell\in L}$; \item[(iii)] for any $\ell\in L$, if $\rho_\ell'(\mathop{\rm Gal}\nolimits({K\wt k}))$ lies in $\Sigma_\ell(c)$ then so does $\rho_\ell(\mathop{\rm Gal}\nolimits({K\wt k}))$. \end{enumerate} \end{Lem} Note that condition (a) is symmetric, so that under (a) each of (i)--(iii) is an equivalence. \begin{proof} The proof of (i)--(iii) under hypothesis (a) is an immediate consequence of the simple fact that the kernel of $\mathop{{\rm Im}}\nolimits(\rho_\ell)\to\mathop{{\rm Im}}\nolimits(\rho_\ell^\ssi)=\mathop{{\rm Im}}\nolimits(\rho_\ell^\mathrm{sss})$ is a pro-$\ell$-group. Assertions (i) and (ii) under hypothesis (b) are trivial. For (iii) note that hypothesis (b) implies that $\rho_\ell(\mathop{\rm Gal}\nolimits({K\wt k}))$ is a quotient of $\rho_\ell'(\mathop{\rm Gal}\nolimits({K\wt k}))$, and so we can apply Lemma~\ref{sigmalemm}. \end{proof} The following important result is taken from the S\'eminaire Bourbaki talk of Berthelot on de Jong's alteration technique (cf.~\cite[Thm.~6.3.2]{Berthelot}). \begin{Thm}\label{Berthelot} Let $k$ be a field and $K$ be a finitely generated extension field of~$k$. Let $X$ be a separated algebraic scheme over $K$. Then there exists a finite extension $k'/k$, a finite separable extension $K'/Kk'$ and a finite set of smooth projective varieties $\{Y_i\}_{i=1,\ldots,m}$ over $K'$ such that for all $\ell\in {\mathbb{L}}\smallsetminus \{\Char(k)\}$ the representation $(\rho_{\ell,X}|_{\mathop{\rm Gal}\nolimits({K'})})^\mathrm{sss}$ is a direct summand of $\big(\bigoplus_i\rho_{\ell,Y_i}\big)^\mathrm{sss}$.
\end{Thm} Before giving the proof of Theorem~\ref{Berthelot}, let us state the following immediate consequence of Theorem~\ref{Berthelot}, Lemma~\ref{SemisimplifAndS(k)} and Corollary~\ref{FirstReduction}: \begin{Cor}\label{SecondReduction} Suppose that for all absolutely finitely generated fields $K_0$ with field of constants $k_0$ and for all smooth projective $K_0$-varieties $X_0/K_0$, the following conditions are true: \begin{enumerate} \item The family $(\rho_{\ell,X_0})_{\ell\in L}$ satisfies ${\cal R}(k_0)$ if $\Char(k_0)=0$ and ${\cal S}(k_0)$ if $\Char(k_0)>0$. \item There exists a constant $c\in{\mathbb{N}}$ and a finite extension $E_0$ of $K_0$ such that for all $\ell\in L$ one has $\rho_{\ell,X_0}(\mathop{\rm Gal}\nolimits({E_0\wt k_0}))\in\Sigma_\ell(c)$. \end{enumerate} Let $k,K,X$ be as in Theorem~\ref{main}. Then there exists a finite extension $E/K$ such that the assertions and conclusions of Proposition~\ref{geometricLemm} hold for $E\wt k$, and in particular Theorem~\ref{main} holds. \end{Cor} \begin{proof}[Proof of Theorem~\ref{Berthelot}] For completeness we provide details of the proof in \cite{Berthelot}. For $?\in\{c,\emptyset\}$ we denote by $\rho_{\ell,X,?}$ the representation of $\mathop{\rm Gal}\nolimits(K)$ on $\bigoplus_{q\ge0}\big(H^q_?(X_{\wt K},{\mathbb{Q}}_\ell)\big)$. It suffices to prove the theorem separately for the families $(\rho_{\ell,X,?})_{\ell\in L}$. We also note that whenever it is convenient, we are allowed (by passing from $K$ to a finite extension) to assume that $X$ is geometrically reduced over $K$. This is so because $H^q_?(X_{\wt K},{\mathbb{Q}}_\ell)\cong H^q_?(X_{\wt K,{\rm red}},{\mathbb{Q}}_\ell)$ for any $q\in{\mathbb{Z}}$ and $?\in \{\emptyset,c\}$. We first consider the case of cohomology with compact supports. The proof proceeds by induction on $\dim X$. We may assume, by passing from $K$ to a finite extension, that $X$ is geometrically reduced. Then $X_{\wt K}$ is generically smooth. 
After passing from $K$ to a finite extension, we can find a dense open subscheme $U\subset X$ that is smooth over $K$. By the long exact cohomology sequence with supports (cf.~\cite[Rem.~III.1.30]{milne}) we have for any $\ell$ an exact sequence $$\ldots\longrightarrow H^i_c(U_{\wt K},{\mathbb{Q}}_\ell)\longrightarrow H_c^i(X_{\wt K},{\mathbb{Q}}_\ell)\longrightarrow H^i_c((X\smallsetminus U)_{\wt K},{\mathbb{Q}}_\ell)\longrightarrow\ldots ,$$ so that for all $\ell$ the representation $\rho_{\ell,X,c}^\mathrm{sss}$ is a direct summand of $\rho_{\ell,U,c}^\mathrm{sss}\oplus\rho_{\ell,X\smallsetminus U,c}^\mathrm{sss}$. By the induction hypothesis, it thus suffices to treat the case that $U$ is smooth over $K$. By the induction hypothesis it is also sufficient to replace $U$ by any smaller dense open subscheme, and it is clearly also sufficient to treat the case where $U$ is in addition geometrically irreducible. By de Jong's theorem on alterations (cf.~\cite[Thms.~4.1, 4.2]{dejong}), after passing from $K$ to a finite extension, we can find a smooth projective scheme $Y$, an open subscheme $U'$ of $Y$ and an alteration $\pi\colon U'\to U$. By replacing $K$ yet another time by a finite extension, we can assume that $U'\to U$ is generically finite \'etale. Now we pass to an open subscheme $V$ of $U$ and to $V':=\pi^{-1}(V)\subset U'$ such that $V'\to V$ is finite \'etale. By the induction hypothesis applied to $Y\smallsetminus V'$ and again the long exact cohomology sequence for cohomology with support, we find that the assertion of the theorem holds true for $(\rho_{\ell,V',c})_{\ell\in L}$. From now on $\pi$ denotes the restriction to $V'$ and ${\mathbb{Q}}_{\ell,X}$ will be the constant sheaf ${\mathbb{Q}}_\ell$ on any scheme $X$.
Since $\pi$ is finite \'etale, say of degree $d$, there exists a trace morphism $\Trace_{\pi}\colon \pi_* {\mathbb{Q}}_{\ell,V'}\to{\mathbb{Q}}_{\ell,V}$ whose composition with the canonical morphism ${\mathbb{Q}}_{\ell,V}\to \pi_* {\mathbb{Q}}_{\ell,V'}$ is multiplication by~$d$ (cf.~\cite[Lem.~V.1.12]{milne}). In particular, the constant sheaf ${\mathbb{Q}}_{\ell,V}$ is a direct summand of $\pi_* {\mathbb{Q}}_{\ell,V'}$. Since $H_c^i(V'_{\wt K},{\mathbb{Q}}_\ell)\cong H_c^i(V_{\wt K},\pi_*{\mathbb{Q}}_{\ell,V'})$, we deduce that for every $\ell$ the representation $\rho_{\ell,V,c}^\mathrm{sss}$ is a direct summand of $\rho_{\ell,V',c}^\mathrm{sss}$, and this completes the induction step. Now we turn to the case $?=\emptyset$. The case when $X$ is smooth over $K$ but not necessarily projective is reduced, by Poincar\'e duality, to the case of compact supports: one has $H^q(X_{\wt K},{\mathbb{Q}}_\ell)\cong H^{2d-q}_c(X_{\wt K},{\mathbb{Q}}_\ell(d))^\vee$ for $d=\dim X$ (cf.~\cite[Cor.~VI.11.12]{milne}). Suppose now that $X$ is an arbitrary separated algebraic scheme over $K$. By what we said above, we may assume that $X$ is geometrically reduced. Again we perform an induction on $\dim X$. The first step is a reduction to the case where $X$ is irreducible, which may be thought of as an induction in itself. Suppose $X=X_1\cup X_2$ where $X_1$ is an irreducible component of $X$ and $X_2$ is the closure of $X\smallsetminus X_1$. Consider the canonical morphism $f\colon X_1\sqcup X_2\to X$. It yields a short exact sequence of sheaves \begin{equation}\label{sesforcomponents} 0\longrightarrow {\mathbb{Q}}_{\ell,X}\longrightarrow f_*{\mathbb{Q}}_{\ell,X_1\sqcup X_2}\longrightarrow {\cal F} \longrightarrow 0 \end{equation} where ${\cal F}$ is a sheaf on $X$. We claim that ${\cal F}\cong {\mathbb{Q}}_{\ell,X_1\cap X_2}$.
To see this observe first that if we compute the pullback of the sequence along the open immersion $j\colon X\smallsetminus (X_1\cap X_2) \hookrightarrow X$, then ${\cal F}$ vanishes and the morphism on the left becomes an isomorphism. In particular, ${\cal F}$ is supported on $X_1\cap X_2$. To compute the pullback along the closed immersion $i\colon X_1\cap X_2\hookrightarrow X$ we may apply proper base change, since $f$ is proper. But now the restriction of $f$ to $X_0:=X_1\cap X_2$ is simply the trivial double cover $X_0 \sqcup X_0\ontoover{\ } X_0$, so that $i^*{\cal F}\cong {\mathbb{Q}}_{\ell,X_0}$. This proves the claim because ${\cal F}\cong i_*i^*{\cal F}$, as ${\cal F}$ is supported on $X_1\cap X_2$. By an inductive application of the long exact cohomology sequences to sequences like~(\ref{sesforcomponents}), it suffices to prove the theorem for schemes $X$ that are geometrically integral and separated algebraic over $K$. In this case, the proof follows by resolving $X$ by a smooth hypercovering $X_\bullet$; see \cite[p.~51]{dejong}, \cite[6.2.5]{deligne} and the proof of \cite[Thm.~6.3.2]{Berthelot}. Since the hypercovering yields a spectral sequence that computes the cohomology of $X$ in terms of the cohomologies of the smooth $X_i$, and this for all $\ell$, and since only those $X_i$ with $i\le 2\dim X$ contribute to the cohomology of $X$, the induction step is complete: we have reduced the case of arbitrary $X$ to lower dimensions and to smooth~$X_i$. \end{proof} \section{Proof of the main theorem}\label{TheProof} In this final section, we study the family $(\rho_{\ell,X})_{\ell\in L}$ in the particular case where $K$ is absolutely finitely generated with field of constants $k$. We shall establish property ${\cal R}(k)$ and, if $\Char(k)>0$, also property ${\cal S}(k)$. This will use Deligne's purity results from Weil I (cf.~\cite[Thm.~1.6]{deligneweil1}) as well as the global Langlands conjecture for function fields proved by Lafforgue (cf.~\cite{Lafforgue}).
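For orientation, the shape of the purity statement to be invoked (a paraphrase of \cite[Thm.~1.6]{deligneweil1}; the precise notation for the model and the Frobenius elements is only set up in the next proposition) is the following:

```latex
% Purity of weight j (paraphrase): every reciprocal root alpha of the
% characteristic polynomial of a Frobenius element Fr_{u'} acting on H^j
% is an algebraic number all of whose complex absolute values satisfy
\[
  |\alpha| \;=\; |k(u')|^{j/2},
\]
% where k(u') is the finite residue field at the closed point u'.
```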
\medskip Let $k$ be a finite field and $U$ a smooth $k$-variety. For every closed point $u\in U$ let $k(u)$ be the (finite) residue field of $u$, and let $D(u)\subset \pi_1(U)$ be the corresponding decomposition group (defined only up to conjugation). Denote by $\mathrm{Fr}_u\in D(u)$ the preimage under the canonical isomorphism $D(u)\cong \mathop{\rm Gal}\nolimits(k(u))$ of the map $\sigma_u: x\mapsto x^{|k(u)|}$, so that within $\pi_1(U)$, the automorphism $\mathrm{Fr}_u$ is also defined only up to conjugation. \begin{Prop}\label{basechange} Suppose $K$ is a finitely generated field of characteristic $p\ge 0$. Let $k$ be the field of constants of $K$. Let $X/K$ be a smooth projective scheme. \begin{enumerate} \item[a)] There exists a finite separable extension $K'/K$ and a smooth $k$-variety $U'$ with function field $K'$ such that for each $j\ge 0$ and every $\ell\in{\mathbb{L}}\smallsetminus \{p\}$, the representation $\rho_{\ell,X}^{(j)}|_{\mathop{\rm Gal}\nolimits(K')}$ of $\mathop{\rm Gal}\nolimits(K')$ is unramified along $U'$. \item[b)] If $p>0$, then the family of representations $(\rho_{\ell,X}^{(j)}|_{\mathop{\rm Gal}\nolimits(K')})_{\ell\in{\mathbb{L}}\smallsetminus \{p\}}$ is strictly compatible and pure of weight $j$, that is: For every closed point $u'\in U'$ the characteristic polynomial $p_{u'}(T)$ of $\rho_{\ell,X}^{(j)}(\mathrm{Fr}_{u'})$ has integral coefficients, is independent of $\ell\in {\mathbb{L}}\smallsetminus \{p\}$, and the reciprocal roots of $p_{u'}(T)$ all have absolute value $|k(u')|^{j/2}$. \end{enumerate} \end{Prop} \begin{proof} Note that $k$ is perfect. There exists a finite separable extension $E/K$ such that $X_E$ splits up into a disjoint union of geometrically connected smooth projective $E$-varieties. Thus, after replacing $K$ by a finite separable extension, we can assume that $X/K$ is geometrically connected. Let $(S_1,\cdots, S_r)$ be a separating transcendence base of $K/k$.
Identify $k(S_1,\cdots, S_r)$ with the function field of ${\mathbb{A}}^r_k$ and let $S$ be the normalization of ${\mathbb{A}}^r_k$ in $K$. Then $S$ is a normal $k$-variety with function field $K$. There exists an alteration $S'\to S$ such that $S'/k$ is a smooth variety and the function field $K'$ of $S'$ is a finite separable extension of $K$ (cf.~\cite[Thm.~4.1, Remark 4.2]{dejong}). Let $\mathfrak{A}$ be the set of all affine open subschemes of $S'$. Then $\mathop{{\rm Spec}}\nolimits(K')=\mathop{\varprojlim}\limits_{U'\in \mathfrak{A}} U'$. There exists $U'\in \mathfrak{A}$ and a projective $U'$-scheme $f: \mathcal{X}'\to U'$ such that $\mathcal{X}'\times_{U'} \mathop{{\rm Spec}}\nolimits({K'})=X_{K'}$ (cf.~\cite[8.8.2]{EGAIV3} and \cite[8.10.5(v) and (xiii)]{EGAIV3}). By the theorem of generic smoothness, after shrinking $U'$ and $\mathcal{X'}$, we can assume that $f$ is smooth. Furthermore, after removing the support of $f_*({\cal O}_{\mathcal{X}'})$ from $U'$ and shrinking $\mathcal{X}'$ accordingly, we can assume that $f$ has geometrically connected fibres. Let $j\ge 0$ and $u'\in U'$. Define $k':=k(u')$ and $X_{u'}:=\mathcal{X}'\times_{U'} \mathop{{\rm Spec}}\nolimits(k')$. Then for every $\ell\in{\mathbb{L}}\smallsetminus\{p\}$ the \'etale sheaf $R^j f_* {\mathbb{Z}}_\ell$ is lisse and compatible with any base change (cf.~\cite[VI.2., VI.4]{milne}). Thus $\rho_{\ell,X}^{(j)}|_{\mathop{\rm Gal}\nolimits(K')}$ factors through $\pi_1(U')$ and a) holds true. Furthermore it follows that $H^j(X_{\wt K}, {\mathbb{Q}}_\ell)$ can be identified with $H^j(X_{u', \wt{k'}}, {\mathbb{Q}}_\ell)$ in a way compatible with the Galois actions. Assume $p>0$. Part b) then follows by applying Deligne's theorem on the Weil conjectures (cf.~\cite[Thm.~1.6]{deligneweil1}) to~$X_{u'}$. \end{proof} \begin{Lem} Let the notation be as in the previous proposition and suppose we are in the situation of part b), so that $k$ is a finite field.
Fix $q\in{\mathbb{Z}}$ and denote by $n$ the dimension of $H^q(X_{\wt K},{\mathbb{Q}}_\ell)$, which is independent of $\ell\neq p$. Then the following hold: \begin{enumerate} \item[(a)] For any smooth curve $C$ over $k$ and morphism $\varphi\colon C\to U'$, there exist irreducible cuspidal automorphic representations $(\pi_{C,i})_{i=1,\ldots,m_\varphi}$ of ${\rm GL}_{n_i}({\mathbb{A}}_{k(C)})$ such that $\sum_in_i=n$~and for all $\ell$ the representation $(\varphi_*\circ \rho_{X,\ell}^{(q)})^\ssi$ for $\varphi_*\colon \pi_1(C)\to\pi_1(U')$ agrees with the $\ell$-adic representation $\oplus_i \rho_{\ell,\pi_{C,i}}$ attached to $\pi_{C,i}$ via the global Langlands correspondence in \cite[p.~2]{Lafforgue}. \item[(b)] If for some prime $\ell_0\ge3$ there is a ${\mathbb{Z}}_{\ell_0}$-lattice $L$ of the ${\mathbb{Q}}_{\ell_0}$-representation space underlying $\rho^{(q)}_{\ell_0,X}$ that is stabilized by $\mathop{\rm Gal}\nolimits({K'})$ and such that $\mathop{\rm Gal}\nolimits({K'})$ acts trivially on $L/\ell_0 L$, then in (a) for all $\varphi\colon C\to U'$ the representations $\pi_{C,i}$ are semistable\footnote{We call an automorphic representation $\pi$ of ${\rm GL}_n$ {\sl semistable} at a place $v$ if under the bijective local Langlands correspondence between local representations and Frobenius semi-simplified Weil-Deligne parameters, the Weil-Deligne parameter of $\pi_v$ has underlying Weil-representation which is unramified. This definition is for instance used in \cite[\S~2]{Rajan}. In {\sl loc.~cit.}\ on page~8 it is also pointed out that by \cite{bz}, this notion of semistability is equivalent to that of the automorphic representation having an Iwahori fixed vector.} at all places of $P(C)\smallsetminus C$. In particular, for all $\ell\neq p$, the representation $\rho^{(q)}_{\ell,X}$ satisfies ${\cal S}(k)$. \end{enumerate} \end{Lem} \begin{proof} Let $C$ be a smooth curve over $k$ and $\varphi\colon C\to U'$ a morphism over $k$.
By Proposition~\ref{basechange}, the family of representations $(\varphi_*\circ \rho_{X,\ell}^{(q)})^\ssi,$ where ${\ell\neq p},$ is pure of weight $q$, semisimple and strictly compatible as a family of representations of $\pi_1(C)$. By the main theorem of \cite[p.~2]{Lafforgue}, each $(\varphi_*\circ \rho_{X,\ell}^{(q)})^\ssi$, being pure and semisimple, gives rise to a list of irreducible cuspidal automorphic representations via the global Langlands correspondence.\footnote{Lafforgue in \cite{Lafforgue} describes only a correspondence between ${\mathbb{Q}}_\ell$-sheaves of weight $0$, i.e., Galois representations to ${\rm GL}_n$ whose determinant has finite image, and automorphic representations with finite order central character. A general statement is given in \cite[\S~0 and 0.1]{Deligne}.} By the strict compatibility and the bijectivity of the correspondence on simple objects, this list is the same for all $\ell$ (up to permutation). This proves~(a). Let now $\ell_0$ be as in (b). For each $\varphi\colon C\to U'$ as in (a) consider the representation $\varphi_*\circ \rho_{X,\ell_0}^{(q)}$ as an action on the lattice $L$, which is trivial modulo $\ell_0L$. Any filtration of $L\otimes_{{\mathbb{Z}}_{\ell_0}}{\mathbb{Q}}_{\ell_0}$ preserved by the action of $\mathop{\rm Gal}\nolimits(k(C))$ induces a filtration of $L$. Denote by $L_C$ the induced lattice for $(\varphi_*\circ \rho_{X,\ell_0}^{(q)})^\ssi$. Then it follows that the induced action of $\mathop{\rm Gal}\nolimits(k(C))$ on $L_C/\ell_0 L_C$ is trivial. Since $\ell_0>2$, this implies that the series of the logarithm converges on the image of $(\varphi_*\circ \rho_{X,\ell_0}^{(q)})^\ssi$. This image being pro-$\ell_0$, a standard argument (cf.~\cite[Cor.~4.2.2]{Tate}) shows that the Weil-Deligne representation at any place of $P(C)$ attached to $(\varphi_*\circ \rho_{X,\ell_0}^{(q)})^\ssi$ has trivial underlying Weil-representation.
But then by the compatibility of the global and local Langlands correspondence (cf.~\cite[Thme.~VII.3, Cor.~VII.5]{Lafforgue}), this means that all $\pi_{C,i}$ are semistable at the places in $P(C)\smallsetminus C$ (they are unramified at all other places). We now apply the same argument to all $\ell\neq p$ in reverse, to deduce that all ramification of $(\varphi_*\circ \rho_{X,\ell}^{(q)})^\ssi$ is $\ell$-tame for all $\ell \neq p$. The point simply is that in a family of Galois representations arising from a set of automorphic forms, the Weil-Deligne representation at a place of $P(C)$ is independent of~$\ell\neq p$. Finally from Proposition~\ref{KS-variant}(b), which is a variant of a result of Kerz-Schmidt-Wiesend, we deduce that for all $\ell\neq p$, the representation $\rho^{(q)}_{\ell,X}$ is divisor $\ell$-tame. Combined with Proposition~\ref{basechange}(b), this establishes~${\cal S}(k)$. \end{proof} \begin{Rem} We would like to point out the parallel between the proof of ${\cal S}(k)$ in part (b) of the previous lemma and in Example~\ref{abvarex}. In the proof of (b) we selected a prime $\ell_0$ and enlarged $K$ so that $\mathop{\rm Gal}\nolimits(K)$ would act trivially on $L/\ell_0L$ via $\rho_{\ell_0}$ for a $\mathop{\rm Gal}\nolimits(K)$-stable lattice $L$. Then we could use the uniformity provided by automorphic representations (after restricting the $\rho_\ell$ to any curve) to deduce from this that all ramification was semistable in a sense. In Example~\ref{abvarex} we selected a prime $\ell_0$ and enlarged $K$ so that it contains $K(A[\ell_0])$. This requirement is equivalent to asking that $\mathop{\rm Gal}\nolimits(K)$ acts trivially on the quotient $T_{\ell_0}(A)/\ell_0T_{\ell_0}(A)$ for the lattice $T_{\ell_0}(A)$ from the Tate module. Then we used the uniformity provided by the N\'eron model ${\cal N}$ of the abelian variety $A$ over any discrete valuation ring in $K$ to deduce ${\cal S}(k)$.
The semistability of ${\cal N}$ follows from the condition at the single prime $\ell_0$, in the same way that the semistability of the automorphic representations was deduced from a single prime~$\ell_0$. \end{Rem} \begin{Cor}\label{CharPFamilyIsPSS} Suppose $K$ is absolutely finitely generated and of characteristic $p>0$. Suppose $X/K$ is smooth projective. Then $(\rho_{\ell,X})_{\ell\in {\mathbb{L}}\smallsetminus\{p\}}$ satisfies~${\cal S}(k)$. \end{Cor} \begin{proof} Fix a prime $\ell_0\ge3$. Choose a $\mathop{\rm Gal}\nolimits(K)$-stable lattice $L$ underlying the representation $\rho_{\ell_0,X}$. Replace $K$ by a finite extension such that $\mathop{\rm Gal}\nolimits(K)$ acts trivially on $L/\ell_0 L$ via $\rho_{\ell_0,X}$. Now apply part (b) of the previous lemma to deduce the corollary. \end{proof} \begin{Thm} \label{mainmain} Let $k'$ be a field of characteristic $p\ge 0$ and let $K'/k'$ be a finitely generated extension. Let $L={\mathbb{L}}\smallsetminus \{p\}$. Let $X'/k'$ be a separated algebraic scheme. Then there exists a finite extension $E'/K'$ with the following properties: \begin{enumerate} \item[(i)] For every $\ell\in L$ the group $\rho_{\ell, X'}(\mathop{\rm Gal}\nolimits(\wt {k'}E'))$ lies in $\Sigma_\ell(2^n)$ and is generated by its $\ell$-Sylow subgroups. \item[(ii)] The family $(\rho_{\ell, X'}|_{\mathop{\rm Gal}\nolimits(\wt{k'} E')})_{\ell\in L\smallsetminus\{2,3\}}$ is independent and $(\rho_{\ell, X'}|_{\mathop{\rm Gal}\nolimits(\wt{k'} K')})_{\ell\in L}$ is almost independent. \end{enumerate} \end{Thm} \begin{proof} It suffices to establish conditions (a) and (b) of Corollary~\ref{SecondReduction}. For this, let $K$ be absolutely finitely generated, let $k$ be its field of constants, and let $X/K$ be smooth projective. In this case we have proven in Proposition~\ref{basechange} that ${\cal R}(k)$ holds and in Corollary~\ref{CharPFamilyIsPSS} that ${\cal S}(k)$ holds for $k$ of positive characteristic. This verifies condition~(a) of Corollary~\ref{SecondReduction}.
For condition~(b) define $n$ to be the dimension of $\rho_{\ell,X}$. Since $X/K$ is smooth projective, this definition is independent of the chosen $\ell$. Then all images $\rho_{\ell,X}(\mathop{\rm Gal}\nolimits(K))$ are $n$-bounded at $\ell$. Hence by Theorem~\ref{lp-main}, for each $\ell$ there exists a short exact sequence $$1\to M_\ell \to \rho_{\ell,X}(\mathop{\rm Gal}\nolimits(K)) \to H_\ell \to 1$$ with $M_\ell$ in $\Sigma_\ell(2^n)$ and $H_\ell$ in $\mathrm{Jor}_\ell(J'(n))$, for the constant $J'(n)$ from Theorem~\ref{lp-main}. Consider the induced representations $\tau_\ell$ defined as the composite $\mathop{\rm Gal}\nolimits(K)\stackrel{\rho_{\ell,X}}\to \rho_{\ell,X}(\mathop{\rm Gal}\nolimits(K)) \to H_\ell$. Since $\rho_{\ell,X}$ satisfies ${\cal R}(k)$ if $\Char(k)=0$ and ${\cal S}(k)$ if $\Char(k)>0$, the hypotheses of Proposition~\ref{FiniteAndRamif-prop} are met if we replace $\mathop{\rm Gal}\nolimits(K)$ by $\pi_1(U')$ with $U'$ from Proposition~\ref{basechange}. Hence there exists a finite extension $K'$ of $K$ such that $\tau_\ell(\mathop{\rm Gal}\nolimits(K'\wt{k}))$ is trivial for all $\ell\neq p$. But this shows that $\rho_{\ell,X}(\mathop{\rm Gal}\nolimits(K'\wt k))$ lies in $\Sigma_\ell(2^n)$ for all $\ell\neq p$, which proves condition~(b) of Corollary~\ref{SecondReduction} and completes the proof. \end{proof} \begin{Rem} Under the hypothesis of the above theorem, Corollary~\ref{SecondReduction} also tells us that ${\cal R}(k)$ holds for $(\rho_{\ell,X})_{\ell\in{\mathbb{L}}}$ if $\Char(k)=0$ and that ${\cal S}(k)$ holds for $(\rho_{\ell,X})_{\ell\in{\mathbb{L}}}$ if $\Char(k)=p>0$. \end{Rem}
https://arxiv.org/abs/1412.0380
Fixed points for weak contractions in metric type spaces
\begin{abstract} In this article, we prove some fixed point theorems in metric type spaces; this is a generalization of some results previously proved in \cite{niyi-gaba}. In particular, we give some coupled common fixed point theorems under weak contractions. These results extend well-known similar results existing in the literature.
\end{abstract} \section*{\small Keywords: Metric Type Spaces; Fixed Point.} \rule{5.75in}{.01in} \section{Introduction} The Banach contraction principle is a fundamental result in fixed point theory. Due to its importance, several authors have obtained many interesting extensions and generalizations, from single fixed points to coupled fixed points. In \cite{niyi-gaba}, Olaniyi et al. gave a brief justification of the origin of metric type spaces and considered them a suitable framework in which to generalize some fixed point theorems. Metric type spaces generalize the notion of metric spaces, even though both structures are very close. In this article, we prove fixed point theorems in metric type spaces. We also give coupled fixed point results. For the convenience of the reader, we shall recall some definitions; for a detailed expos\'e of the definitions and examples, the interested reader is referred to \cite{niyi-gaba}. \section{Preliminaries} \begin{dfn} Let $X$ be a nonempty set, and let the function $D:X\times X \to [0,\infty)$ satisfy the following properties: \begin{itemize} \item[(D1)] $D(x,x)=0$ for any $x \in X$; \item[(D2)] $D(x,y)=D(y,x)$ for any $x,y\in X$; \item[(D3)] $D(x,y) \leq \alpha \big( D(x,z_1)+D(z_1,z_2)+\cdots+D(z_n,y) \big)$ for any points $x,y,z_i\in X,\ i=1,2,\ldots, n$, where $n\geq 1$ is a fixed natural number and $\alpha\geq 1$ is a constant. \end{itemize} The triplet $(X,D,\alpha)$ is called a \textbf{metric type space}. \end{dfn} The concepts of Cauchy sequence and convergence for a sequence in a metric type space are defined in the same way as for a metric space; the interested reader can find the definitions in \cite{kam}. Moreover, for $\alpha=1$ we recover the classical notion of a metric, hence metric type spaces generalize metric spaces. It is worth mentioning that if $(X,D,\alpha)$ is a metric type space, then for any $\beta \geq \alpha$, $(X,D,\beta)$ is also a metric type space.
Hence, in the sequel we shall denote $(X,D,\alpha)$ simply by $(X,D)$ when there is no confusion. \begin{ex}(Compare \cite{niyi-gaba}) Let $X=\{ a,b,c \}$ and let the mapping $D:X\times X \to [0,\infty)$ be defined by $D(a,b)=1/5,\ D(b,c)=1/4,\ D(a,c)=1/2$, $D(x,x)=0$ for any $x \in X$ and $D(x,y)=D(y,x)$ for any $x,y\in X$. Since \[ \frac{1}{2} = D(a,c)> D(a,b)+D(b,c)=\frac{9}{20}, \] we conclude that $(X,D)$ is not a metric space. Nevertheless, with $\alpha=2$, it is very easy to check that $(X,D,2)$ is a metric type space. \end{ex} \begin{dfn} A subset $S$ of a metric type space $(X,D,\alpha)$ is said to be \textbf{totally bounded} if given $\varepsilon>0$ there exists a finite set of points $\{s_1,s_2,\cdots,s_n \} \subseteq S$, called an $\varepsilon$-net, such that given any $s\in S$ there exists $i \in \{ 1,2,\cdots,n\}$ for which $D(s,s_i)\leq \varepsilon$. \end{dfn} \begin{dfn} Let $(X,D,\alpha)$ be a metric type space. If every sequence $(x_n)$ in $X$ has a subsequence $(x_{n_k})$ that converges in $X$, then $X$ is called a {\bf sequentially compact} metric type space. \end{dfn} \vspace*{0.5cm} Some topological properties concerning metric type spaces can be read in \cite{niyi-gaba}. We recall some of these results, as we will make use of them. \begin{lem}(Compare \cite{niyi-gaba}) Every metric type space $(X,D,\alpha)$ is Hausdorff. \end{lem} \begin{lem}(Compare \cite{niyi-gaba}) A Cauchy sequence $(x_n)$ in a metric type space $(X,D,\alpha)$ is always bounded. \end{lem} \begin{lem}(Compare \cite{niyi-gaba}) Let $(x_n)$ be a Cauchy sequence in a metric type space $(X,D,\alpha)$. Then $(x_n)$ converges to $x$ if and only if it has a subsequence that converges to $x$. \end{lem} \section{First Fixed Point Theorems} In this section, we prove some fixed point results in metric type spaces. In particular, we generalize the contractive conditions in the literature by replacing the constants with functions.
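Before proceeding, note that the three-point example from the preliminaries can be verified mechanically. The following Python snippet is only an illustration (not part of the formal development): it confirms that $D$ violates the ordinary triangle inequality yet satisfies (D3) with $\alpha=2$.

```python
from itertools import product

# Illustration only: the three-point space from the example above fails
# the ordinary triangle inequality but satisfies (D3) with alpha = 2.
_D = {('a', 'b'): 1/5, ('b', 'c'): 1/4, ('a', 'c'): 1/2}

def d(x, y):
    """Symmetric extension of the table above, with d(x, x) = 0."""
    if x == y:
        return 0.0
    return _D.get((x, y), _D.get((y, x)))

points = ['a', 'b', 'c']

# The ordinary triangle inequality fails: 1/2 > 1/5 + 1/4.
assert d('a', 'c') > d('a', 'b') + d('b', 'c')

# The relaxed inequality with alpha = 2 holds for every triple.
alpha = 2
assert all(d(x, y) <= alpha * (d(x, z) + d(z, y))
           for x, y, z in product(points, repeat=3))
print("(X, D, 2) is a metric type space")
```

Enumerating all triples, rather than only the failing one, is what certifies the metric type inequality on this finite space.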
First, we state the following useful lemma. \begin{lem}(Compare \cite{niyi-gaba})\label{cochi1} Let $(y_n)$ be a sequence in a metric type space $(X,D,\alpha)$ such that \begin{equation}\label{cochi} D(y_n,y_{n+1}) \leq \lambda D(y_{n-1},y_n) \end{equation} for some $\lambda>0$ with $\lambda <\frac{1}{\alpha}$. Then $(y_n)$ is Cauchy. \end{lem} We are now in a position to state the main fixed point theorem of this section. \begin{theor}\label{theorem1} Let $(X,D,\alpha)$ be a complete metric type space and let $f: X \to X$ be a continuous function. Suppose that there exist functions $\eta, \lambda,\zeta,\mu,\xi:X \to [0,1)$ which satisfy the following for $x,y \in X:$ \begin{enumerate} \item[(1)] $g(f(x)) \leq g(x)$ whenever $g \in \{\eta, \lambda,\zeta,\mu,\xi\} ;$ \item[(2)] $\eta(x)+\lambda(x)+\zeta(x)+\mu(x)+2\alpha \xi(x)<1;$ \item[(3)] $D(f(x),f(y)) \leq H(x) * V(x,y)$ where $H(x)$ and $V(x,y)$ are vectors in $\mathbb{R}^5$ with $H(x):=[\eta(x),\lambda(x),\zeta(x),\mu(x),\xi(x)]$, $V(x,y):= [D(x,y),D(x,f(x)),D(y,f(y)),D(f(x),y),D(x,f(y))]$ and $*$ the usual inner product on $\mathbb{R}^5$. \end{enumerate} Then, $f$ has a unique fixed point. \end{theor} \noindent{\bfseries Proof. } Let $x_0 \in X$ be arbitrary and fixed, and we consider the sequence $x_{n+1}=fx_n=f^{n+1}x_0, $ for all $n\in \mathbb{N}$. 
If we take $x=x_{n-1}$ and $y=x_n$ in $(3)$ we have \begin{align*} D(x_n,x_{n+1}) & = D(fx_{n-1},fx_n)\\ & \leq H(x_{n-1}) * V(x_{n-1},x_n) \\ & = H(fx_{n-2}) * V(x_{n-1},x_n). \end{align*} Since for all $n \in \mathbb{N}$, $V(x_{n-1},x_n)=[D(x_{n-1},x_n),D(x_{n-1},x_n),D(x_{n},x_{n+1}),0,D(x_{n-1},x_{n+1})]$, the above becomes \begin{align*} D(x_n,x_{n+1}) & \leq H(fx_{n-2}) * V(x_{n-1},x_n)\\ & \leq H(x_{n-2}) * V(x_{n-1},x_n) \\ & \leq \eta(x_{n-2})D(x_{n-1},x_n)+\lambda(x_{n-2})D(x_{n-1},x_n)+\zeta(x_{n-2})D(x_n,x_{n+1})\\ & + \alpha \xi(x_{n-2})[D(x_{n-1},x_n)+D(x_n,x_{n+1})] \\ & \vdots \\ & \leq \eta(x_0)D(x_{n-1},x_n)+\lambda(x_0)D(x_{n-1},x_n)+\zeta(x_0)D(x_n,x_{n+1})\\ & + \alpha \xi(x_0)[D(x_{n-1},x_n)+D(x_n,x_{n+1})]. \\ \end{align*} So $$D(x_n,x_{n+1}) \leq \frac{\eta(x_0)+\lambda(x_0)+\alpha \xi(x_0)}{1-\zeta(x_0)-\alpha \xi(x_0)}D(x_{n-1},x_n) . $$ Thus, by Lemma \ref{cochi1}, $(x_n)$ is a Cauchy sequence in $X$. By the completeness of $X$ and the continuity of $f$, there exists $x^* \in X$ such that $x_n \to x^*$ and $x_{n+1}=f(x_n)\to f(x^*)$. Since $X$ is Hausdorff, $f(x^*)=x^*$. \vspace*{0.5cm} \underline{ Uniqueness} If $y^*$ is a fixed point of $f$, then \begin{align*} D(x^*,y^*)= D(fx^*,fy^*) & \leq \eta(x^*)D(x^*,y^*) + \lambda(x^*)D(x^*,fx^*) \\ & + \zeta(x^*)D(y^*,fy^*) + \mu(x^*) D(fx^*,y^*) \\ & + \xi(x^*)D(x^*,fy^*) \\ & = \eta(x^*)D(x^*,y^*) + \mu(x^*) D(x^*,y^*) + \xi(x^*)D(x^*,y^*) \\ & = (\eta(x^*) + \mu(x^*) + \xi(x^*))D(x^*,y^*). \end{align*} Therefore $D(x^*,y^*)=0$, i.e. $x^*=y^*$, since $\eta(x^*) + \mu(x^*) + \xi(x^*)<1$. \begin{cor}\label{cor1} Let $(X,D,\alpha)$ be a complete metric type space and let $f: X \to X$ be a continuous function.
Suppose that there exist functions $\eta, \lambda,\zeta,\mu,\xi:X \to [0,1)$ which satisfy the following for $x,y \in X:$ \begin{enumerate} \item[(1)] $g(f(x)) \leq g(x)$ whenever $g \in \{\eta, \lambda,\zeta,\mu,\xi\} ;$ \item[(2)] $\eta(x)+2\lambda(x)+(1+2\alpha)\mu(x)<1;$ \item[(3)] $D(f(x),f(y)) \leq \eta(x)D(x,y)+\lambda(x)(D(x,f(x))+D(y,f(y)))+\\ \mu(x)(D(f(x),y)+D(x,f(y))) $. \end{enumerate} Then, $f$ has a unique fixed point. \end{cor} \noindent{\bfseries Proof. } We obtain this result by applying Theorem \ref{theorem1} with $\zeta(x)=\lambda(x)$ and $\xi(x)=\mu(x)$. \begin{cor}\label{cor2} Let $(X,D,\alpha)$ be a complete metric type space and let $f: X \to X$ be a continuous function. Suppose that there exist constants $a,\beta,\gamma,k,l\geq 0$ with $a+\beta+\gamma+k+2\alpha l< 1$ such that for all $x,y \in X:$ $D(f(x),f(y)) \leq aD(x,y)+ \beta D(x,f(x))+ \gamma D(y,f(y))+kD(f(x),y)+lD(x,f(y))$. Then, $f$ has a unique fixed point. \end{cor} \noindent{\bfseries Proof. } We obtain this result by applying Theorem \ref{theorem1} with $ H(x)=(a,\beta,\gamma,k,l)$. \begin{cor}\label{cor3} Let $(X,D,\alpha)$ be a complete metric type space, let $f: X \to X$ be a continuous function, and suppose $$D(f(x),f(y)) \leq \beta [D(x,f(x))+ D(y,f(y))]$$ for all $x,y \in X$ and some $\beta\in [0,1/2)$. Then, $f$ has a unique fixed point. \end{cor} \begin{cor}\label{cor4} Let $(X,D,\alpha)$ be a complete metric type space, let $f: X \to X$ be a continuous function, and suppose $$D(f(x),f(y)) \leq \beta [D(f(x),y)+ D(x,f(y))]$$ for all $x,y \in X$ and some $\beta\in [0,1/(1+2\alpha))$. Then, $f$ has a unique fixed point. \end{cor} \begin{cor}\label{cor5} Let $(X,D,\alpha)$ be a complete metric type space, let $f: X \to X$ be a continuous function, and suppose $$D(f(x),f(y)) \leq aD(x,y)+ \beta D(f(x),y)$$ for all $x,y \in X$, where $a+ \beta < 1$. Then, $f$ has a unique fixed point.
\end{cor} The next corollary is a generalization of the Banach contraction principle. \begin{cor}\label{cor6} Let $(X,D,\alpha)$ be a complete metric type space and let $f: X \to X$ be a continuous function such that $$D(f(x),f(y)) \leq aD(x,y)$$ for all $x,y \in X$, where $a < 1$. Then, $f$ has a unique fixed point. \end{cor} We conclude this section with the following theorem. \begin{theor}\label{theorem2} Let $(X,D,\alpha)$ be a complete metric type space and let $f: X \to X$ be a continuous function such that for all $x,y \in X$ \begin{equation}\label{star} \varepsilon(x,y) \leq s D(x,y) +t D(x,f^2(x)) \tag{*} \end{equation} with $$\varepsilon(x,y)=a D(f(x),f(y)) + \beta D(x,f(x))+ \gamma D(y,f(y)) + k D(x,f(y)) + lD(y,f(x)),$$ and where $s\geq a\geq l,$ $\gamma\geq k\geq t,$ $a+k>0$ and $0\leq (s-l)/(a+k)<1/\alpha$. Then, $f$ has a fixed point. Moreover, if $\alpha>1$, the fixed point is unique. \end{theor} \noindent{\bfseries Proof. } For simplicity, we write $fx:=f(x)$ in what follows. Let $x_0 \in X$ be arbitrary and fixed, and consider the sequence $x_{n+1}=fx_n=f^{n+1}x_0, $ for all $n\in \mathbb{N}$. If we take $x=x_{n-1}$ and $y=x_n$ in \eqref{star} we have \begin{align} a D(f(x_{n-1}),f(x_{n})) &+ \beta D(x_{n-1},f(x_{n-1}))+ \gamma D(x_{n},f(x_{n})) \nonumber \\ &+ k D(x_{n-1},f(x_{n})) + lD(x_{n},f(x_{n-1})) \nonumber \\ &\leq s D(x_{n-1},x_{n}) +t D(x_{n-1},f^2(x_{n-1})). \end{align} Rewriting this inequality as \begin{align} a D(x_{n},x_{n+1}) &+ \beta D(x_{n-1},x_{n})+ \gamma D(x_{n},x_{n+1}) \nonumber \\ &+ k D(x_{n-1},x_{n+1}) + l D(x_{n},x_{n}) \nonumber \\ &\leq s D(x_{n-1},x_{n}) +t D(x_{n-1},x_{n+1}) \end{align} implies that \begin{equation}\label{alter} (a+\gamma)D(x_{n},x_{n+1}) +(k-t)D(x_{n-1},x_{n+1}) \leq (s-\beta) D(x_{n-1},x_{n}) .
\end{equation} Since $k\geq t$, we have \[ (a+\gamma)D(x_{n},x_{n+1}) \leq (s-\beta) D(x_{n-1},x_{n}), \] which entails that \[ D(x_{n},x_{n+1}) \leq \frac{s-\beta}{a+\gamma} D(x_{n-1},x_{n}) \leq \frac{s-l}{a+k} D(x_{n-1},x_{n}). \] Thus, by Lemma \ref{cochi1}, $(x_n)$ is a Cauchy sequence in $X$. By the completeness of $X$ and the continuity of $f$, there exists $x^* \in X$ such that $x_n \to x^*$ and $x_{n+1}=f(x_n)\to f(x^*)$. Since $X$ is Hausdorff, $f(x^*)=x^*$. \vspace*{0.5cm} \underline{Uniqueness} We assume here that $\alpha>1$. If $y^*$ is a fixed point of $f$, then for $x=x^*$ and $y=y^*$ in \eqref{star}, we obtain \begin{align} a D(f(x^*),f(y^*)) &+ \beta D(x^*,f(x^*))+ \gamma D(y^*,f(y^*)) + k D(x^*,f(y^*)) + lD(y^*,f(x^*)) \nonumber \\ &\leq s D(x^*,y^*) +t D(x^*,f^2(x^*)). \nonumber \end{align} Hence, \[ (a+k)D(x^*,y^*) +l D(x^*,y^*) \leq s D(x^*,y^*) \Longleftrightarrow D(x^*,y^*) \leq \frac{s-l}{a+k} D(x^*,y^*), \] which is impossible unless $D(x^*,y^*)=0$, i.e. $x^*=y^*$. \begin{rem} Observe in \eqref{alter} that the term $D(x_{n-1},x_{n+1})$ can be split via the triangle inequality. This yields \[ (a+\gamma)D(x_{n},x_{n+1}) \leq (k-t) \alpha [D(x_{n-1},x_{n})+ D(x_n, x_{n+1}) ] + (s-\beta) D(x_{n-1},x_{n}), \] i.e. \[ [a+\gamma - (k-t)\alpha ] D(x_n, x_{n+1}) \leq [(k-t) \alpha + s-\beta] D(x_{n-1},x_{n}) \] or \[ D(x_n, x_{n+1}) \leq \frac{(k-t) \alpha + s-\beta}{a+\gamma -(k-t)\alpha } D(x_{n-1},x_{n}). \] In this case, we would require that $\frac{(k-t) \alpha + s-\beta}{a+\gamma -(k-t)\alpha } < 1/\alpha$, which implies that $(s-l)/(a+k)<1/\alpha$. Hence there is no loss of generality if we do not use the triangle inequality at this stage. \end{rem} The next sections present a generalization of the results proved above; namely, we replace the set of one-variable functions $\eta, \lambda,\zeta,\mu,\xi:X \to [0,1)$ by a set of functions of two variables $\eta, \lambda,\zeta,\mu,\xi:X\times X \to [0,1)$, to which we add an additional condition.
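The mechanism behind Lemma \ref{cochi1} can be illustrated numerically. The data below are an assumption chosen for illustration, not an example from the paper: on $\mathbb{R}$ the function $D(x,y)=(x-y)^2$ is a metric type space with $\alpha=2$, and the map $f(x)=x/2+1$ satisfies $D(f(x),f(y))=\tfrac14 D(x,y)$, so \eqref{cochi} holds with $\lambda=1/4<1/\alpha$.

```python
# Illustration only (assumed example, not from the paper): b-metric
# D(x, y) = (x - y)^2 on the reals (alpha = 2) and f(x) = x/2 + 1,
# for which D(f(x), f(y)) = D(x, y)/4, i.e. lambda = 1/4 < 1/alpha.
def D(x, y):
    return (x - y) ** 2

def f(x):
    return x / 2 + 1  # unique fixed point x* = 2

lam = 0.25
x_prev, x = 0.0, f(0.0)  # y_0 = 0, y_1 = f(y_0) = 1

for _ in range(50):
    x_next = f(x)
    # successive distances contract by the factor lambda, as in (cochi)
    assert D(x, x_next) <= lam * D(x_prev, x) + 1e-15
    x_prev, x = x, x_next

# the Cauchy orbit converges to the fixed point x* = 2
assert abs(x - 2) < 1e-12
print("orbit converged to", x)
```

Each step halves the distance to the fixed point, so the squared distances contract by exactly $\lambda=1/4$, which is what the assertion inside the loop checks.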
\section{Fixed point of generalized contraction mappings} \begin{theor}\label{res1} Let $(X,D,K)$ be a complete metric type space and let $T: X \to X$ be a self mapping on $X$ such that for each $x,y \in X$ \begin{align}\label{Equation1} D(Tx,Ty)\leq \ &\alpha(x,y) D(x,y) + \beta(x,y)D(x,Tx)+ \gamma(x,y)D(y,Ty) \nonumber \\ &+\delta(x,y)[D(x,Ty)+D(y,Tx)] \end{align} where $\alpha,\beta,\gamma,\delta$ are functions from $X\times X$ to $[0,1)$ such that \begin{equation}\label{Equation2} \lambda = \sup \{\alpha(x,y)+\beta(x,y)+\gamma(x,y) + 2K\delta(x,y): x,y \in X\}<1. \end{equation} Then $T$ has a unique fixed point, say $x^*$, and the orbit $\{T^nx\}$ converges to $x^*$ for any $x\in X$. Moreover, for each $x\in X$, \[ D(T^nx,x^*)\ \leq \frac{\lambda^n}{1-\lambda}D(x,Tx). \] \end{theor} \noindent{\bfseries Proof. } Let $x \in X$ and let $(x_n)$ be the sequence defined by $x_0=x, \ x_1=Tx_0, \ \cdots, x_{n+1}=Tx_n$. In the estimates below we suppress the arguments of $\alpha,\beta,\gamma,\delta$. From \eqref{Equation1}, \begin{align*} D(x_n,x_{n+1})= D(Tx_{n-1},Tx_n)\leq \ &\alpha D(x_{n-1},x_{n}) + \beta D(x_{n-1},x_{n}) + \gamma D(x_{n},x_{n+1}) \nonumber \\ &+\delta [D(x_{n-1},x_{n+1})+D(x_{n},x_n)], \end{align*} or \begin{align}\label{Equation3} D(x_n,x_{n+1})= D(Tx_{n-1},Tx_n)\leq \ &\alpha D(x_{n-1},x_{n}) + \beta D(x_{n-1},x_{n}) + \gamma D(x_{n},x_{n+1}) \nonumber \\ &+\delta D(x_{n-1},x_{n+1}). \end{align} By the triangle inequality, we have \begin{align}\label{Equation4} D(x_{n-1},x_{n+1})& \leq K [D(x_{n-1},x_{n})+D(x_{n},x_{n+1}) ] \nonumber \\ &\leq 2K \max \{ D(x_{n-1},x_{n}),D(x_{n},x_{n+1})\}. \end{align} By \eqref{Equation4}, inequality \eqref{Equation3} becomes \begin{align*} D(x_{n},x_{n+1})& \leq (\alpha+ \beta+\gamma) \max \{ D(x_{n-1},x_{n}),D(x_{n},x_{n+1})\}\\ &+ 2K\delta \max \{ D(x_{n-1},x_{n}),D(x_{n},x_{n+1})\}. \end{align*} Then \[ D(x_{n},x_{n+1}) \leq \lambda \max \{ D(x_{n-1},x_{n}),D(x_{n},x_{n+1})\}.
\] Since $\lambda<1$, it follows that \begin{equation} D(x_{n},x_{n+1}) \leq \lambda D(x_{n-1},x_{n}). \end{equation} By induction, we obtain \[ D(x_{n},x_{n+1}) \leq \lambda D(x_{n-1},x_{n}) \leq \lambda^2 D(x_{n-2},x_{n-1})\leq \cdots \leq \lambda^n D(x,Tx). \] By the triangle inequality, for $m>n$ we get \begin{align*} D(x_n,x_m)&\leq K[D(x_{n},x_{n+1})+D(x_{n+1},x_{n+2})+\cdots + D(x_{m-1},x_{m})]\\ &\leq K[\lambda^n +\lambda^{n+1}+\cdots +\lambda^{m-1} ]D (x,Tx)\\ & \leq \frac{\lambda^n}{1-\lambda}KD (x,Tx), \end{align*} that is, \begin{equation}\label{Equation7} D(x_n,x_m)\leq \frac{\lambda^n}{1-\lambda}KD (x,Tx). \end{equation} Letting $m,n \to\infty$ in \eqref{Equation7} implies that $(x_n)$ is a Cauchy sequence. Since $(X,D,K)$ is complete, there exists $x^*\in X$ such that \begin{equation}\label{Equation8} \underset{n\to \infty}{\lim}x_n=x^*. \end{equation} We now show that $T(x^*)=x^*$. From the contractive condition, we know that \begin{align*} D(Tx^*,Tx_n) & \leq \alpha D(x^*,x_n) + \beta D(x^*,Tx^*) +\gamma D(x_n,Tx_n)\\ & + \delta [D(x^*,Tx_n)+D(x_n,Tx^*)] \\ & \leq \lambda \max \{ D(x^*,x_n) , D(x^*,Tx^*),D(x_n,x_{n+1}), D(x^*,x_{n+1}),D(x_n,Tx^*) \}. \end{align*} As $n\to \infty$, we obtain \begin{equation} D(Tx^*,x^*)\leq \lambda D(x^*,Tx^*). \end{equation} Since $\lambda<1$, we conclude that $D(Tx^*,x^*)=0$, i.e. $T(x^*)=x^*$. For uniqueness, assume $y^* \in X$ is a fixed point of $T$. Then, we have \begin{align*} D(x^*,y^*)&=D(Tx^*,Ty^*)\\ & \leq \alpha D(x^*,y^*) + \beta D(x^*,Tx^*)+ \gamma D(y^*,Ty^*) \nonumber \\ &+\delta[D(x^*,Ty^*)+D(y^*,Tx^*)] \\ & \leq (\alpha +2\delta)D(x^*,y^*) \leq \lambda D(x^*,y^*), \end{align*} and $D(x^*,y^*)=0$ since $\lambda<1$. Taking the limit in \eqref{Equation7} as $m\to \infty$, we get $D(T^nx,x^*)\ \leq \frac{\lambda^n}{1-\lambda}D(x,Tx)$ for each $n$. The proof is complete. Mappings which satisfy \eqref{Equation1} and \eqref{Equation2} are called {\bf generalized contractions}.
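The a priori estimate in Theorem \ref{res1} can be checked on a simple assumed example (an illustration only): take $K=1$, $\alpha(x,y)\equiv\lambda$ constant and $\beta=\gamma=\delta\equiv 0$, so that \eqref{Equation1} reduces to the Banach condition $D(Tx,Ty)\leq \lambda D(x,y)$.

```python
# Illustration only (assumed example): plain contraction on the reals
# with D(x, y) = |x - y|, K = 1.  The orbit T^n(x) obeys the a priori
# bound D(T^n x, x*) <= lam^n / (1 - lam) * D(x, Tx) from the theorem.
lam = 0.4

def T(x):
    return lam * x + 3  # fixed point x* = 3 / (1 - lam) = 5

x_star = 5.0
x0 = 0.0
d0 = abs(x0 - T(x0))    # D(x, Tx)

x = x0
for n in range(1, 30):
    x = T(x)
    # a priori error estimate at step n
    assert abs(x - x_star) <= lam ** n / (1 - lam) * d0 + 1e-9

print("error after 29 steps:", abs(x - x_star))
```

For an affine contraction the bound is attained with equality at every step, so the iteration exercises the estimate as sharply as possible.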
From the proof of Theorem \ref{res1}, we have \begin{cor} Let $(X,D,K)$ be a complete metric type space and let $T: X \to X$ be a self mapping on $X$ such that for each $x,y \in X$ \begin{equation}\label{Equation10} D(Tx,Ty)\leq \lambda \max\left\lbrace D(x,y),D(x,Tx),D(y,Ty),\frac{1}{2}[D(x,Ty)+D(y,Tx)]\right\rbrace \end{equation} where $\lambda \in (0,1)$. Then $T$ has a unique fixed point. \end{cor} \begin{cor} Let $(X,D,K)$ be a complete metric type space and let $T: X \to X$ be a self mapping on $X$ such that for each $x,y \in X$ \begin{equation}\label{Equation1m} D(Tx,Ty)\leq \lambda_1 D(x,Tx)+ \lambda_2 D(y,Ty)+ \lambda_3 D(x,Ty)+\lambda_4 D(y,Tx) \end{equation} where $\lambda_i \in [0,1)$ for $i=1,2,3,4$ and $\lambda_1 +\lambda_2+K(\lambda_3+\lambda_4)< \min\{ 1,2/K\}$. Then $T$ has a unique fixed point. \end{cor} \section{Common fixed point of generalized contraction mappings} Given a nonempty set $X$ and a family $\{T_\alpha\}_{\alpha\in J}$ of self mappings on $X$, where $J$ is an index set, a point $u\in X$ is called a common fixed point for the family $\{T_\alpha\}_{\alpha\in J}$ if and only if $u=T_\alpha(u)$ for each $\alpha$. \begin{theor} Let $(X,D,K)$ be a complete metric type space and $\{T_\alpha\}_{\alpha\in J}$ a family of self mappings on $X$. Suppose there exists a fixed $\beta\in J$ such that for each $\alpha\in J$ \begin{equation}\label{Equation18} D(T_\alpha x,T_\beta y)\leq \lambda \max \left\lbrace D(x,y),D(x,T_\alpha x),D(y,T_\beta y),\frac{1}{2}[ D(x,T_\beta y)+ D(y,T_\alpha x)]\right\rbrace \end{equation} for some $\lambda=\lambda(\alpha) \in [0,1)$ with $\lambda K<1$ and for all $x,y \in X$. Then all the $T_\alpha$ have a common fixed point, which is the unique fixed point of each $T_\alpha$, $\alpha\in J$. \end{theor} \noindent{\bfseries Proof. } Let $\alpha \in J$ and $x\in X$ be arbitrary. We consider the sequence defined by $x_0=x,\ x_{2n+1}=T_\alpha x_{2n}, \ x_{2n+2}=T_\beta x_{2n+1}$.
From \eqref{Equation18} we get \begin{align*} D(x_{2n+1}, x_{2n+2}) &= D(T_\alpha x_{2n},T_\beta x_{2n+1})\\ & \leq \lambda \max \left\lbrace \begin{array}{ll} D(x_{2n}, x_{2n+1}), D(x_{2n}, x_{2n+1}), D(x_{2n+1}, x_{2n+2}),\\ \frac{1}{2} [D(x_{2n}, x_{2n+2})+D(x_{2n+1}, x_{2n+1})] \end{array} \right\rbrace . \end{align*} Since \begin{equation}\label{Equation19} D(x_{2n}, x_{2n+2}) \leq K [ D(x_{2n}, x_{2n+1})+D(x_{2n+1}, x_{2n+2})], \end{equation} we have \begin{align*} \frac{1}{2} D(x_{2n}, x_{2n+2}) & \leq \frac{K}{2} [D(x_{2n}, x_{2n+1})+D(x_{2n+1}, x_{2n+2}) ] \\ & \leq K \max\{ D(x_{2n}, x_{2n+1}),D(x_{2n+1}, x_{2n+2})\}, \end{align*} and therefore \[ D(x_{2n+1}, x_{2n+2}) \leq \lambda K \max\{ D(x_{2n}, x_{2n+1}),D(x_{2n+1}, x_{2n+2})\}. \] Hence, as $\lambda K<1$, \[ D(x_{2n+1}, x_{2n+2}) \leq \lambda K D(x_{2n}, x_{2n+1}). \] Similarly, we get that $D(x_{2n}, x_{2n+1}) \leq \lambda K D(x_{2n-1}, x_{2n})$. Thus for any $n\geq 1$ we have \begin{equation}\label{Equation20} D(x_{n},x_{n+1}) \leq \lambda K D(x_{n-1},x_{n})\leq (\lambda K)^2 D(x_{n-2},x_{n-1})\leq \cdots \leq (\lambda K)^{n} D(x_{0},x_{1}). \end{equation} From \eqref{Equation20} and the triangle inequality, for $m>n$ we get \begin{align*} D(x_n,x_m) & \leq K[D(x_{n},x_{n+1})+D(x_{n+1},x_{n+2})+\cdots + D(x_{m-1},x_{m})]\\ & \leq K ( (\lambda K)^{n} D(x_{0},x_{1}) +(\lambda K)^{n+1} D(x_{0},x_{1}) + \cdots + (\lambda K)^{m-1} D(x_{0},x_{1}) ) \\ & \leq K \left[ (\lambda K)^{n} +(\lambda K)^{n+1} + \cdots + (\lambda K)^{m-1}\right] D(x_{0},x_{1}) \\ & \leq K \frac{(\lambda K)^n}{1-\lambda K} D(x_{0},x_{1}). \end{align*} If we let $n \to \infty$, we conclude that $(x_n)$ is a Cauchy sequence in $X$. By the completeness of $X$ there exists $x^* \in X$ such that $x_n \to x^*$.
From \eqref{Equation18}, we have \begin{align*} D(T_\beta x^*,x_{2n+1}) &= D(T_\alpha x_{2n},T_\beta x^*)\\ & \leq \lambda \max \left\lbrace \begin{array}{ll} D(x^*, x_{2n}), D(x^*, T_\beta x^*), D(x_{2n}, x_{2n+1}),\\ \frac{1}{2} [D(x^*, x_{2n+1})+D(x_{2n}, T_\beta x^*)] \end{array} \right\rbrace . \end{align*} Taking the limit as $n\to \infty$, we get $D(T_\beta x^*, x^*)\leq \lambda D(T_\beta x^*, x^*)$. Therefore $D(T_\beta x^*, x^*)=0$, i.e. $T_\beta x^*= x^*$. Moreover, from \eqref{Equation18} with $x=y=x^*$ and $T_\beta x^*= x^*$, we have \[ D(T_\alpha x^*, x^*) = D(T_\alpha x^*,T_\beta x^*) \leq \lambda(\alpha) \max \left\lbrace D(T_\alpha x^*, x^*), \frac{1}{2}D(T_\alpha x^*, x^*)\right\rbrace \] and hence $T_\alpha x^*= x^*$ for all $\alpha \in J$. Thus, all $T_\alpha$ have a common fixed point. Suppose that $w$ is a fixed point of $T_\beta$; it follows as above that $w$ is a common fixed point for all $T_\alpha$. Thus, from \eqref{Equation18} we have $D(x^*,w)=D(T_\alpha w, T_\beta x^*)\leq \lambda D(w,x^*)$, and so $x^*=w$. Thus, $x^*$ is the unique common fixed point of $\{T_\alpha\}_{\alpha\in J}$. This completes the proof. \section{Weak contractions} In this section, we present some common fixed point results under contractive conditions for $w$-compatible mappings in metric type spaces. We start with the following definition. \begin{dfn}(Compare \cite{ce}) Let $X$ be a non empty set. An element $(x,y) \in X\times X$ is called: \begin{enumerate} \item[(E1)] a \textbf{coupled fixed point} of the mapping $F:X\times X \to X$ if $F(x,y)=x$ and $F(y,x)=y$; \item[(E2)] a \textbf{coupled coincidence point} of the mappings $F:X\times X \to X$ and $g:X \to X$ if $F(x,y)=g(x)$ and $F(y,x)=g(y)$. In this case $(gx,gy)$ is called the \textbf{coupled point of coincidence}; \item[(E3)] a \textbf{coupled common fixed point} of the mappings $F:X\times X \to X$ and $g:X \to X$ if $F(x,y)=gx=x$ and $F(y,x)=gy=y$. \end{enumerate} \end{dfn} \begin{dfn} Let $X$ be a non empty set.
The mappings $F:X\times X \to X$ and $g:X \to X$ are said to be \textbf{$w$-compatible} if $g(F(x,y))=F(gx,gy)$ whenever $F(x,y)=g(x)$ and $F(y,x)=g(y)$. Observe that if $(x,y)$ is a coupled common fixed point of the mappings $F:X\times X \to X$ and $g:X \to X$, then so is $(y,x)$. \end{dfn} \begin{theor}\label{thm2.2} Let $(X,D,K)$ be a metric type space and $F:X\times X \to X$ and $g:X \to X$ be maps such that for all $x,y,u,v \in X$ \begin{align}\label{Eq2.1} D(F(x,y),F(u,v)) & \leq \alpha_1 D(gx,gu) + \alpha_2 D(F(x,y),gx) +\alpha_3 D(gy,gv) \nonumber \\ & + \alpha_4 D(F(u,v),gu) +\alpha_5 D(F(x,y),gu) \nonumber \\ & + \alpha_6 D(F(u,v),gx), \end{align} where $\alpha_i$ for $i=1,\cdots ,6$ are nonnegative constants with \begin{equation}\label{Eq2.2} 2K(\alpha_1+\alpha_3)+(K+1)(\alpha_2+\alpha_4)+(K^2+K)(\alpha_5+\alpha_6)<2. \end{equation} If $F(X\times X)\subset g(X) $ and $g(X)$ is complete, then $F$ and $g$ have a coupled coincidence point in $X$. \end{theor} \noindent{\bfseries Proof. } Let $x_0,y_0 \in X$ be arbitrary and set \[ g(x_1)=F(x_0,y_0),\ g(y_1)=F(y_0,x_0),\cdots , \ g(x_{n+1})= F(x_n,y_n),\ g(y_{n+1})=F(y_n,x_n). \] This construction is always possible since $F(X\times X)\subset g(X) $. From \eqref{Eq2.1}, we have \begin{align}\label{Eq2.3} D(gx_{n+1},gx_n) & = D(F(x_n,y_n),F(x_{n-1},y_{n-1})) \nonumber \\ & \leq \alpha_1 D(gx_n,gx_{n-1}) + \alpha_2 D(F(x_n,y_n),gx_n) +\alpha_3 D(gy_n,gy_{n-1}) \nonumber \\ & \ \ \ + \alpha_4 D(F(x_{n-1},y_{n-1}),gx_{n-1}) +\alpha_5 D(F(x_n,y_n),gx_{n-1}) \nonumber \\ &\ \ \ + \alpha_6 D(F(x_{n-1},y_{n-1}),gx_n) \nonumber \\ & \leq \alpha_1 D(gx_n,gx_{n-1}) + \alpha_2 D(gx_{n+1},gx_n) +\alpha_3 D(gy_n,gy_{n-1}) \nonumber \\ & \ \ \ + \alpha_4 D(gx_n,gx_{n-1}) +\alpha_5 D(gx_{n+1},gx_{n-1})+ \alpha_6 D(gx_n,gx_n) \nonumber \\ & \leq \alpha_1 D(gx_n,gx_{n-1}) + \alpha_2 D(gx_{n+1},gx_n) +\alpha_3 D(gy_n,gy_{n-1}) \nonumber \\ & \ \ \ + \alpha_4 D(gx_n,gx_{n-1}) + K \alpha_5 [D(gx_{n+1},gx_n)+ D(gx_n,gx_{n-1})].
\end{align} It follows that \begin{equation}\label{Eq2.4} (1-\alpha_2-K\alpha_5)D(gx_{n+1},gx_n) \leq (\alpha_1+\alpha_4+K\alpha_5)D(gx_n,gx_{n-1}) + \alpha_3 D(gy_n,gy_{n-1}). \end{equation} Similarly, we have \begin{equation}\label{Eq2.5} (1-\alpha_2-K\alpha_5)D(gy_{n+1},gy_n) \leq (\alpha_1+\alpha_4+K\alpha_5)D(gy_n,gy_{n-1}) + \alpha_3 D(gx_n,gx_{n-1}). \end{equation} From \[ D(gx_{n+1},gx_n) = D(F(x_n,y_n),F(x_{n-1},y_{n-1}))= D(F(x_{n-1},y_{n-1}),F(x_n,y_n)) = D(gx_n,gx_{n+1}), \] and applying \eqref{Eq2.1} to $D(gx_n,gx_{n+1})$, we derive that \begin{equation}\label{Eq2.7} (1-\alpha_4-K\alpha_6)D(gx_n,gx_{n+1}) \leq (\alpha_1+\alpha_2+K\alpha_6)D(gx_{n-1},gx_{n}) +\alpha_3 D(gy_{n-1},gy_n) \end{equation} and \begin{equation}\label{Eq2.8} (1-\alpha_4-K\alpha_6)D(gy_n,gy_{n+1}) \leq (\alpha_1+\alpha_2+K\alpha_6)D(gy_{n-1},gy_{n}) +\alpha_3 D(gx_{n-1},gx_n). \end{equation} Now, by adding up \eqref{Eq2.4} and \eqref{Eq2.5} and setting $D_n = D(gx_n,gx_{n+1}) + D(gy_n,gy_{n+1})$, we obtain \begin{equation}\label{Eq2.9} (1-\alpha_2-K\alpha_5) D_n \leq (\alpha_1+\alpha_3+\alpha_4+K\alpha_5)D_{n-1}. \end{equation} Similarly, adding up \eqref{Eq2.7} and \eqref{Eq2.8}, we obtain \begin{equation}\label{Eq2.10} (1-\alpha_4-K\alpha_6) D_n \leq (\alpha_1+\alpha_2+\alpha_3+K\alpha_6)D_{n-1}. \end{equation} Finally, adding up \eqref{Eq2.9} and \eqref{Eq2.10}, we have \begin{equation}\label{Eq2.11} [2-\alpha_2-\alpha_4-K(\alpha_5+\alpha_6)] D_n \leq [2\alpha_1+\alpha_2+2\alpha_3+\alpha_4+ K(\alpha_5+\alpha_6)]D_{n-1}. \end{equation} Hence, for all $n,$ \begin{equation}\label{Eq2.12} 0\leq D_n \leq \lambda D_{n-1} \leq \lambda^2 D_{n-2}\leq \cdots \leq \lambda^n D_0, \end{equation} where, by \eqref{Eq2.2}, \begin{equation}\label{Eq2.13} \lambda = \frac{2\alpha_1+\alpha_2+2\alpha_3+\alpha_4+ K(\alpha_5+\alpha_6)}{2-\alpha_2-\alpha_4-K(\alpha_5+\alpha_6)}< \frac{1}{K}.
\end{equation} If $D_0=0$ then $(x_0,y_0)$ is a coupled coincidence point of $F$ and $g$. Now, assume that $D_0>0$. From Lemma \ref{cochi1}, and since $\lambda< 1/K$, we conclude that $(gx_n)$ and $(gy_n)$ are Cauchy sequences in $X$. Since $g(X)$ is complete, there exist $x^*,y^* \in X$ such that $gx_n\to gx^*$ and $gy_n\to gy^*$. Now, we prove that $F(x^*,y^*)=gx^*$ and $F(y^*,x^*)=gy^*$. Indeed, using \eqref{Eq2.1}, \begin{align}\label{Eq2.16} D(F(x^*,y^*),gx^*) & \leq K [ D(F(x^*,y^*),gx_{n+1})+ D(gx_{n+1},gx^*) ] \nonumber \\ & = K [ D(F(x^*,y^*),F(x_n,y_n))+ D(gx_{n+1},gx^*) ] \nonumber \\ & \leq K [ \alpha_1 D(gx^*,gx_n)+\alpha_2 D(F(x^*,y^*),gx^*)+\alpha_3 D(gy^*,gy_n) \nonumber \\ & \ \ + K \alpha_4[D(gx_{n+1},gx^*)+D(gx^*,gx_n)] \nonumber \\ & \ \ + K \alpha_5 [ D(F(x^*,y^*),gx^*)+D(gx^*,gx_n)] \nonumber \\ & \ \ + \alpha_6 D(gx_{n+1},gx^*) + D(gx_{n+1},gx^*) ]. \end{align} Therefore \begin{align}\label{Eq2.17} D(F(x^*,y^*),gx^*) & \leq \frac{K\alpha_1+K^2(\alpha_4+\alpha_5)}{1-K\alpha_2-K^2\alpha_5}\, D(gx^*,gx_n) \nonumber \\ & \ \ + \frac{K+K^2\alpha_4 + K \alpha_6}{1-K\alpha_2-K^2\alpha_5}\, D(gx_{n+1},gx^*) \nonumber \\ & \ \ + \frac{K\alpha_3}{1-K\alpha_2-K^2\alpha_5}\, D(gy^*,gy_n). \end{align} Since $gx_n\to gx^*$ and $gy_n\to gy^*$, letting $n\to\infty$ gives $D(F(x^*,y^*),gx^*)=0$, i.e.\ $F(x^*,y^*)=gx^*$. Similarly, we can get $F(y^*,x^*)=gy^*$. Therefore $(x^*,y^*)$ is a coupled coincidence point for $F$ and $g$. \begin{theor}\label{thm2.3} Let $(X,D,K)$ be a metric type space and $F:X\times X \to X$ and $g:X \to X$ be maps which satisfy the conditions of Theorem \ref{thm2.2}. If $F$ and $g$ are $w$-compatible, then $F$ and $g$ have a unique coupled common fixed point, which belongs to the diagonal of $X$. \end{theor} \noindent{\bfseries Proof. } Step 1: We first establish the uniqueness of the coupled point of coincidence. Suppose that $F(x^*,y^*)=gx^*$, $F(y^*,x^*)=gy^*$, $F(a^*,b^*)=ga^*$, $F(b^*,a^*)=gb^*$ for some $x^*,y^*,a^*,b^* \in X$.
From \eqref{Eq2.1}, we have \[ D(ga^*,gx^*)= D(F(a^*,b^*),F(x^*,y^*) ) \leq (\alpha_1+\alpha_5+\alpha_6)D(ga^*,gx^*)+\alpha_3 D(gb^*,gy^*) \] and \[ D(gb^*,gy^*)= D(F(b^*,a^*),F(y^*,x^*) ) \leq (\alpha_1+\alpha_5+\alpha_6)D(gb^*,gy^*)+\alpha_3 D(ga^*,gx^*). \] Adding up the above two inequalities, we get \[ D(ga^*,gx^*)+ D(gb^*,gy^*) \leq (\alpha_1+\alpha_3+\alpha_5+\alpha_6)[D(ga^*,gx^*)+ D(gb^*,gy^*)]. \] Since $2K(\alpha_1+\alpha_3)+(K+1)(\alpha_2+\alpha_4)+(K^2+K)(\alpha_5+\alpha_6)<2$ and $K\geq1$, we have $\alpha_1+\alpha_3+\alpha_5+\alpha_6<1$, so $D(ga^*,gx^*)+ D(gb^*,gy^*)=0$, i.e.\ $ ga^*=gx^*$ and $gb^*=gy^*$. A similar computation establishes that $ga^*=gy^*$ and $gb^*=gx^*$. Thus $ga^*=gb^*$ and $(gx^*,gx^*)$ is the unique coupled point of coincidence of $F$ and $g$. Step 2: Now let $g(x^*)=z$. Then we have $z=g(x^*)=F(x^*,x^*)$. By the $w$-compatibility of $F$ and $g$, we have \[ g(z)=g(g(x^*))=g(F(x^*,x^*))=F(g(x^*),g(x^*))=F(z,z). \] Thus $(gz,gz)$ is a coupled point of coincidence of $F$ and $g$, and by the uniqueness established in Step 1, $z=gz=F(z,z)$. Consequently $(z,z)$ is the unique coupled common fixed point of $F$ and $g$. \begin{cor}\label{cor2.4} Let $(X,D,K)$ be a metric type space and $F:X\times X \to X$ and $g:X \to X$ be maps such that for all $x,y,u,v \in X$ \begin{align}\label{Eq2.21} D(F(x,y),F(u,v)) & \leq \alpha [ D(gx,gu) + D(F(x,y),gx) ] \nonumber \\ & + \beta [ D(gy,gv) + D(F(u,v),gu) ] \nonumber \\ & + \gamma [ D(F(x,y),gu) + D(F(u,v),gx)], \end{align} where $\alpha, \beta$ and $\gamma$ are nonnegative constants with \begin{equation}\label{Eq2.22} (3K+1)(\alpha+\beta)+2(K^2+K)\gamma <2. \end{equation} If $F(X\times X)\subset g(X) $ and $g(X)$ is complete, then $F$ and $g$ have a coupled coincidence point in $X$. Also if $F$ and $g$ are $w$-compatible, then $F$ and $g$ have a unique coupled common fixed point, which belongs to the diagonal of $X$. \end{cor} \noindent{\bfseries Proof.
} It follows from Theorems \ref{thm2.2} and \ref{thm2.3} by setting $\alpha_1=\alpha_2=\alpha,$ $\alpha_3=\alpha_4=\beta$ and $\alpha_5=\alpha_6=\gamma$. \begin{cor}\label{cor2.5} Let $(X,D,K)$ be a metric type space and $F:X\times X \to X$ and $g:X \to X$ be maps such that for all $x,y,u,v \in X$ \begin{equation}\label{Eq2.23} D(F(x,y),F(u,v)) \leq \alpha D(gx,gu) + \beta D(gy,gv), \end{equation} where $\alpha, \beta$ are nonnegative constants with $\alpha+\beta <1/K.$ If $F(X\times X)\subset g(X) $ and $g(X)$ is complete, then $F$ and $g$ have a coupled coincidence point in $X$. Also if $F$ and $g$ are $w$-compatible, then $F$ and $g$ have a unique coupled common fixed point, which belongs to the diagonal of $X$. \end{cor} \noindent{\bfseries Proof. } It follows from Theorems \ref{thm2.2} and \ref{thm2.3} by setting $\alpha_1=\alpha,$ $\alpha_3=\beta$ and $\alpha_2=\alpha_4=\alpha_5=\alpha_6=0$. \begin{cor}\label{cor2.6} Let $(X,D,K)$ be a metric type space and $F:X\times X \to X$ and $g:X \to X$ be maps such that for all $x,y,u,v \in X$ \begin{equation}\label{Eq2.24} D(F(x,y),F(u,v)) \leq \alpha D(F(x,y),gu) + \beta D(F(u,v),gx), \end{equation} where $\alpha, \beta$ are nonnegative constants with $\alpha+\beta <2/(K^2+K).$ If $F(X\times X)\subset g(X) $ and $g(X)$ is complete, then $F$ and $g$ have a coupled coincidence point in $X$. Also if $F$ and $g$ are $w$-compatible, then $F$ and $g$ have a unique coupled common fixed point, which belongs to the diagonal of $X$. \end{cor} \noindent{\bfseries Proof. } It follows from Theorems \ref{thm2.2} and \ref{thm2.3} by setting $\alpha_5=\alpha,$ $\alpha_6=\beta$ and $\alpha_1=\alpha_2=\alpha_3=\alpha_4=0$. We give here an example to illustrate the above results. \begin{ex} Let $X=[0,1]$ and define on $X\times X$ the map $D$ given by $D(x,y)=|x-y|^2$.
One can check that $D$ does not satisfy the triangle inequality, but that $(X,D,2)$ is a metric type space: the relaxed triangle inequality $|x-z|^2 \leq 2 (|x-y|^2+ |y-z|^2)$ follows from the elementary bound $(a+b)^2\leq 2(a^2+b^2)$. Define the map $F:X\times X \to X$ by $F(x,y)=(x+y)/4$ and $g(x)=x$. For $\alpha=\beta=1/8$, it is easy to check that $F$ and $g$ satisfy condition \eqref{Eq2.23} and that $\alpha+\beta = 1/4 \in [0,1/K)=[0,1/2)$; indeed, \begin{equation*} D(F(x,y),F(u,v)) \leq \frac{1}{8} [D(x,u) + D(y,v)]. \end{equation*} We apply Corollary \ref{cor2.5} to deduce the existence of a coupled common fixed point for $F$ and $g$, which in the present case is $(0,0)$. \end{ex}
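The claims in this example can also be verified numerically. The following Python snippet is a minimal sketch (the tolerance term only absorbs floating-point round-off):

```python
import random

random.seed(0)
D = lambda x, y: abs(x - y) ** 2          # the metric type map of the example
F = lambda x, y: (x + y) / 4              # g is the identity map

for _ in range(1000):
    x, y, z, u, v = (random.random() for _ in range(5))
    # relaxed triangle inequality with constant K = 2, from (a+b)^2 <= 2(a^2+b^2):
    assert D(x, z) <= 2 * (D(x, y) + D(y, z)) + 1e-12
    # the contraction condition with alpha = beta = 1/8:
    assert D(F(x, y), F(u, v)) <= (D(x, u) + D(y, v)) / 8 + 1e-12

# alpha + beta = 1/4 < 1/K = 1/2, and (0,0) is the coupled common fixed point
assert F(0, 0) == 0
# D itself fails the ordinary triangle inequality, e.g. x=0, y=1/2, z=1:
assert D(0, 1) > D(0, 0.5) + D(0.5, 1)
```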
https://arxiv.org/abs/1412.0380
Fixed points for weak contractions in metric type spaces
In this article, we prove some fixed point theorems in metric type spaces. This article is just a generalization of some results previously proved in \cite{niyi-gaba}. In particular, we give some coupled common fixed point theorems under weak contractions. These results extend well-known similar results existing in the literature.
https://arxiv.org/abs/1603.03526
Cycles in graphs of fixed girth with large size
Consider a family of graphs having a fixed girth and a large size. We give an optimal lower asymptotic bound on the number of even cycles of any constant length, as the order of the graphs tends to infinity.
\section{Introduction} All graphs we consider in this article are simple graphs. We denote the size of $G$ by $e(G)$ and the order of $G$ by $v(G)$. A $j$-path in $G$ is a path of length $j$ in $G$. A $j$-cycle in $G$ is a cycle of length $j$ in $G$, and it is called an even cycle if $j$ is even. The {\em girth} of a graph $G$ is the length of a shortest cycle in $G$. For $x\in{V(G)}$, let $\Gamma^k_G(x)=\Gamma^k(x)$ denote the set of vertices of $G$ having distance exactly $k$ from the vertex $x$. In the following, the big-$O$ notation $f(n)=O(g(n))$ is understood as $n\to\infty$, where $n$ denotes the order of a graph; the same applies to $\Theta$. It is easy to see that a graph having large girth cannot have too many edges. The famous Erd\H{o}s girth conjecture asserts the existence of graphs of any given girth with a size of maximum possible order. \begin{conj}[Erd\H{o}s girth conjecture] For any positive integer $m$, there exist a constant $c>0$ depending only on $m$ and a family of graphs $\{G_n\}$ such that $v(G_n)=n$, $e(G_n)\geq{c}n^{1+1/m}$ and $\text{girth}(G_n)>2m$. \end{conj} Indeed, such a size is maximum by the result of Bondy and Simonovits \cite{BONSIM}, in which an explicit constant is given. They showed that a graph $G_n$ of order $n$ with $\text{girth}(G_n)>2m$ has fewer than $100mn^{1+1/m}$ edges. This conjecture has been proved true for $m=1,2,3,5$. See \cite{REI}, \cite{BRO}, \cite{BEN} and \cite{WEN}. For a general $m$, Sudakov and Verstra\"ete \cite{SUDVER} showed that if such graphs exist, then they contain at least one cycle of any even length between and including $2m+2$ and $Cn$, for some constant $C>0$. For $\ell>m$, by counting the number of $(2\ell-2m)$-paths, one can show that the number of $2\ell$-cycles in such graphs has an order not greater than $O(n^{2\ell/m})$, provided that $\deg_{G_n}(x)=\Theta(n^{1/m})$ for any vertex $x$ in $G_n$.
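As an aside, the notion of girth used throughout can be made concrete with a short breadth-first-search computation; the snippet below is only an illustration on small sample graphs, not part of the argument:

```python
from collections import deque

def girth(adj):
    """Length of a shortest cycle in an undirected graph given as an
    adjacency list {v: set of neighbours}; float('inf') if acyclic."""
    best = float("inf")
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:                 # tree edge: extend the BFS tree
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif parent[u] != w:              # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
assert girth(C5) == 5 and girth(C6) == 6
# adding the chord {0, 2} to the 5-cycle creates a triangle:
C5_chord = {i: set(C5[i]) for i in range(5)}
C5_chord[0].add(2); C5_chord[2].add(0)
assert girth(C5_chord) == 3
```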
We will see in Section \ref{reduction} that in the asymptotic case, one can assume this without loss of generality. This suggests the definition of {\em almost regularity} given in Section \ref{reduction}. In this article, we give a lower bound on the number of $2\ell$-cycles when $\ell=O(1)$, and conclude that the number of $2\ell$-cycles is $\Theta(n^{2\ell/m})$. The precise statement is the following. \begin{thm} \label{ourthm} For any real number $c>0$ and integers $M,m$ with $M>m\geq2$, there exist a constant $\alpha>0$ and an integer $N$ such that if $\{G_n\}$ is a family of graphs satisfying $v(G_n)=n$, $e(G_n)\geq{c}n^{1+1/m}$ and $\text{girth}(G_n)>2m$, then for $n\geq{N}$ and $m+1\leq\ell\leq{M}$, the number of $2\ell$-cycles in $G_n$ is at least $\alpha{n^{2\ell/m}}$. \end{thm} We proceed as follows. In Section 2, we show that by adjusting the threshold $N$ in our theorem, we can further assume that the graphs have some nice properties, namely that they are bipartite and almost regular. In Section 3, we count the number of short even cycles in $G_n$ up to length $4m$. Finally, in Section 4, we extend the argument to longer cycles, completing the proof of the main theorem. \section{Reduction to a simpler case} \label{reduction} In this section, we show that it suffices to consider only bipartite graphs which are almost regular, defined as follows. \begin{defn} Suppose $\{G_n\}$ is a family of graphs with $v(G_n)=n$ and $\text{girth}(G_n)>2m$, we say that $\{G_n\}$ is {\em almost regular} if there exist $c_1,c_2>0$ such that $c_1n^{1/m}\leq\deg(x)\leq{c_2}n^{1/m}$ for any vertex $x\in{V(G_n)}$. \end{defn} It is a well-known fact that any graph has a bipartite subgraph with at least half of its edges. It remains to construct subgraphs whose maximum and minimum degrees are of order $n^{1/m}$.
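As an illustration of the reduction technique used below (repeatedly deleting vertices of degree below a threshold, as in the proof of the first lemma of this section), here is a minimal Python sketch on small sample graphs:

```python
def prune(adj, t):
    """Repeatedly delete vertices of degree < t; returns the (possibly
    empty) subgraph in which every remaining vertex has degree >= t."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    low = [v for v in adj if len(adj[v]) < t]
    while low:
        v = low.pop()
        if v not in adj:
            continue
        for w in adj.pop(v):                          # remove v and its edges
            adj[w].discard(v)
            if len(adj[w]) < t:                       # w may now fall below t
                low.append(w)
    return adj

# a 4-vertex path pruned with threshold 2 disappears entirely,
# while a 4-cycle (minimum degree 2) survives intact
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert prune(path, 2) == {}
cycle = {i: {(i - 1) % 4, (i + 1) % 4} for i in range(4)}
assert all(len(nbrs) == 2 for nbrs in prune(cycle, 2).values())
```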
To achieve this, we repeatedly apply a theorem of Bondy and Simonovits \cite{BONSIM}, which states that if an $n$-vertex graph $G_n$ has girth larger than $2m$, then $G_n$ has fewer than $100mn^{1+1/m}$ edges. First, we delete vertices of small degree. \begin{lem} \label{degreelowerbound} For any real number $c>0$ and integer $m\geq2$, there exists a constant $\beta>0$ such that any bipartite graph $G$ of order $n$, size at least $cn^{1+1/m}$ and girth larger than $2m$ has a subgraph $H$ having at least $\beta{n}$ vertices, at least $\frac{9c}{10}n^{1+1/m}$ edges and minimum degree at least $\frac{c}{10}n^{1/m}$. \end{lem} \begin{proof} Let $G=H_0$. For $i\geq1$, inductively define $H_i$ to be the subgraph of $H_{i-1}$ induced by all vertices having degree at least $\frac{c}{10}n^{1/m}$ in $H_{i-1}$. Then for all $i$ we have $e(H_i)\geq{e(H_{i+1})}$ and \begin{equation*}\begin{split} e(H_i)&\geq{e(H_0)}-(v(H_0)-v(H_i))\dfrac{c}{10}n^{1/m}\\ &\geq{cn^{1+1/m}}-n\dfrac{c}{10}n^{1/m}\\ &=\dfrac{9c}{10}n^{1+1/m}, \end{split}\end{equation*} and so there exists some $j\geq0$ such that $H_i=H_j$ for all $i\geq{j}$. Set $H=H_j$, which has girth larger than $2m$ and size at least $\frac{9c}{10}n^{1+1/m}$. Then, \begin{equation*} \dfrac{9c}{10}n^{1+1/m}<100m\cdot{v(H)}^{1+1/m}, \end{equation*} or $v(H)\geq\beta{n}$, where \begin{equation*} \beta=\left(\dfrac{9c}{1000m}\right)^{m/(m+1)}. \end{equation*} \end{proof} \begin{lem} \label{degreeupperbound} For every real number $c>0$ and integer $m\geq2$, there exists a constant $\gamma>0$ such that any bipartite graph $G$ of order $n$, size at least $cn^{1+1/m}$ and girth larger than $2m$ has a subgraph $H$ having at least $\frac{n}2$ vertices, at least $\frac{c}4n^{1+1/m}$ edges and maximum degree at most $\gamma{n}^{1/m}$. \end{lem} \begin{proof} For any $\gamma>0$, let $S_{\gamma}$ be the set of vertices of $G$ having degree at least $\gamma{n}^{1/m}$, and let $T_{\gamma}$ be the remaining vertices of $G$.
We want to find a $\gamma$ so large that $H$ can be chosen as the subgraph $G[T_\gamma]$ of $G$ induced by $T_\gamma$. It suffices to find $\gamma$ large enough so that $e(G[S_\gamma])<e(G)/4$, $e(T_\gamma,S_\gamma)<e(G)/2$ and $v(G[T_\gamma])\geq{n}/2$. Since both $G$ and $G[S_{\gamma}]$ have girth larger than $2m$, we can apply the result of \cite{BONSIM} twice to obtain \begin{equation*}\begin{split} e(G[S_{\gamma}])&<100m|S_{\gamma}|^{1+1/m}\\ &\leq100m\left(\dfrac{2e(G)}{\gamma{n}^{1/m}}\right)^{1+1/m}\\ &\leq100m\left(\dfrac{2\cdot100mn^{1+1/m}}{\gamma{n}^{1/m}}\right)^{1+1/m}\\ &=\left(\dfrac{2}{\gamma}\right)^{1+1/m}(100m)^{2+1/m}n^{1+1/m}. \end{split}\end{equation*} To satisfy the first condition, we choose $\gamma$ large enough so that $e(G[S_\gamma])<\frac{e(G)}{4}<\frac{100m}{4}n^{1+1/m}$, or \begin{equation} \label{e(S)} \gamma>2\cdot100m\cdot4^{m/(m+1)}>400m. \end{equation} The second condition can be obtained via its contrapositive. Suppose $e(S_\gamma,T_\gamma)\geq\frac{c}{2}n^{1+1/m}$. Let $G_\gamma$ be the subgraph of $G$ spanned by the edges in $E(S_\gamma,T_\gamma)$. Applying Lemma \ref{degreelowerbound} to $G_\gamma$ (with $c/2$ in place of $c$), we get a subgraph $H_\gamma$ of $G_\gamma$ having $\nu\geq\left(\frac{9c}{2000m}\right)^{m/(m+1)}n$ vertices and minimum degree at least $\frac{c}{20}n^{1/m}$. Now, we consider the subgraph $G'_\gamma$ of $G$ obtained by deleting the edges in $E(T_\gamma)$. Then, in $G'_\gamma$, every vertex in $S_\gamma\cap{V(H_\gamma)}$ still has degree at least $\gamma{n}^{1/m}$ and every vertex in $T_\gamma\cap{V(H_\gamma)}$ has degree at least $\frac{c}{20}n^{1/m}$.
Note that any $m$-path in $G'_\gamma$ has at most $\lfloor{m/2}\rfloor$ internal vertices in $T_\gamma$, therefore the number of $m$-paths in $G'_\gamma$ is at least \begin{equation*} \dfrac12\nu\left(\dfrac{c}{20}n^{1/m}\right)^{\lfloor{m/2}\rfloor}\left(\gamma{n}^{1/m}\right)^{\lceil{m/2}\rceil}\geq\dfrac12\left(\dfrac{9c}{2000m}\right)^{m/(m+1)}\left(\dfrac{c\gamma}{20}\right)^{\lfloor{m/2}\rfloor}n^2. \end{equation*} But the number of $m$-paths in $G$ cannot be larger than $n^2$, since otherwise there would be a pair of vertices that are the endpoints of two distinct $m$-paths, contradicting that the girth of $G$ is larger than $2m$. Hence, \begin{equation*} \dfrac12\left(\dfrac{9c}{2000m}\right)^{m/(m+1)}\left(\dfrac{c\gamma}{20}\right)^{\lfloor{m/2}\rfloor}<1, \end{equation*} or \begin{equation*} \gamma<\dfrac{20}{c}\left(2\left(\dfrac{2000m}{9c}\right)^{m/(m+1)}\right)^{1/\lfloor{m/2}\rfloor}<\dfrac{40}{c}\left(\dfrac{2000m}{9c}\right)^{3/(m+1)}. \end{equation*} Therefore, if \begin{equation*} \label{e(S,T)} \gamma>\dfrac{40}{c}\left(\dfrac{2000m}{9c}\right)^{3/(m+1)}, \end{equation*} then $e(S_\gamma,T_\gamma)<\frac{c}{2}n^{1+1/m}\leq\frac{e(G)}{2}$. Finally, since $|S_\gamma|\leq\frac{200m}{\gamma}n$, the third condition is fulfilled by \eqref{e(S)}. This finishes the proof. \end{proof} \section{Counting short cycles} \label{countingshortcycles} From now on, we suppose that $G$ is a bipartite $n$-vertex graph having at least $cn^{1+1/m}$ edges with girth larger than $2m$, such that for some constants $c_1,c_2>0$, there holds $c_1n^{1/m}\leq\deg_G(x)\leq{c_2}n^{1/m}$ for any vertex $x\in{V(G)}$. In this section, we give a lower bound on the number of $2\ell$-cycles in $G$, for each $m+1\leq\ell\leq2m$. We first sketch the idea. Let $x$ be a vertex in $G$. Suppose we have a path of odd length $k$ in $\Gamma_G^m(x)\cup\Gamma_G^{m+1}(x)$ with endpoints $w_0\in\Gamma_G^m(x)$ and $w_k\in\Gamma_G^{m+1}(x)$.
For each neighbor $y$ in $\Gamma_G^m(x)$ of $w_k$, the four paths joining $x$ to $w_0$, $w_0$ to $w_k$, $w_k$ to $y$, and $y$ to $x$ form a closed walk, which contains a cycle, as shown in Figure \ref{idea}. We show in Section \ref{generic case} that generically these paths are internally disjoint, i.e. the length of the cycle is $2m+k+1$. Then we count in Section \ref{subgraph} the number of such paths and the number of neighbours of $w_k$. Finally, we obtain the desired lower bound in Section \ref{shortcycleslowerbound}. \begin{figure}[ht!] \centering \includegraphics[width=80mm]{w0wkyx.JPEG} \caption{The four paths form a closed walk, which contains a cycle} \label{idea} \end{figure} \subsection{Internally disjoint closed walk} \label{generic case} Note that for distinct $i,j\leq{m+1}$, we know that $\Gamma^i_G(x)\cap\Gamma^j_G(x)$ is empty because $G$ is bipartite and has girth larger than $2m$. In particular, the subgraph $G_x$ of $G$ induced by the vertices $\Gamma^m_G(x)\cup\Gamma^{m+1}_G(x)$ is bipartite with bipartition $\{\Gamma^m_G(x),\Gamma^{m+1}_G(x)\}$. Hence, any path of odd length in $G_x$ has one endpoint in $\Gamma^m_G(x)$ and the other endpoint in $\Gamma^{m+1}_G(x)$. For $1\leq{i}\leq{m}$ and any vertex $w\in\Gamma_G^i(x)$, there is a unique $(x,w)$-path $P_w^x$ of length $i$ in $G$. Note that for any two vertices $y_1,y_2\in\Gamma_G^m(x)$, the intersection of the paths $P_{y_1}^x$ and $P_{y_2}^x$ must be a path, of which $x$ is an endpoint. The following lemma guarantees that if $w_0\in\Gamma_G^m(x)$ and $w_k\in\Gamma_G^{m+1}(x)$, there is at most one neighbour $u\in\Gamma_{G_x}^1(w_k)$ so that $P_{w_0}^x$ and $P_u^x$ intersect internally, see Figure \ref{intersectinternally}. \begin{figure}[ht!] 
\centering \includegraphics[width=80mm]{xw0wk.JPEG} \caption{Other neighbours of $w_k$ give internally disjoint paths} \label{intersectinternally} \end{figure} \begin{lem} \label{disjointpath} Suppose that two vertices $y_1,y_2\in\Gamma_G^m(x)$ share a common neighbour $w\in\Gamma_G^{m+1}(x)$. Then the paths $P_{y_1}^x$ and $P_{y_2}^x$ are internally disjoint. \end{lem} \begin{proof} \begin{figure}[ht!] \centering \includegraphics[width=80mm]{xvy1wy2.png} \caption{A cycle of length at most $2m$ formed} \label{v} \end{figure} Suppose the paths $P_{y_1}^x$ and $P_{y_2}^x$ intersect internally, then their intersection must be a path of length $L\geq1$, with endpoints $x$ and $v$, for some $v\in\Gamma_G^L(x)$. Thus, the union of the paths $P_{y_1}^x\backslash{P_v^x}$, $P_{y_2}^x\backslash{P_v^x}$ and the edges $(y_2,w)$, $(w,y_1)$ is a cycle of length $2(m-L)+2\leq2m$ in $G$, as in Figure \ref{v}, a contradiction. \end{proof} Now, let $P=(w_0,w_1,\ldots,w_k)$ be a path of odd length $k\leq2m-1$ in $G_x$ with $w_0\in\Gamma_G^m(x)$ and $w_k\in\Gamma_G^{m+1}(x)$. Note that $V(P)\cap\Gamma_{G_x}^1(w_k)=\{w_{k-1}\}$ as $\text{girth}(G)>2m$. Let $y\in\Gamma_{G_x}^1(w_k)$. As shown in Figure \ref{cycle2m+k+1}, the four paths $P_{w_0}^x$, $P$, $(w_k,y)$ and $P_y^x$ contain a cycle of length $2m+k+1$, with at most two exceptions, namely $y=w_{k-1}$ and $y=u$. \begin{figure}[ht!] \centering \includegraphics[width=80mm]{xw0wk-1wkuy.JPEG} \caption{Most neighbours of $w_k$ give cycles of length $2m+k+1$} \label{cycle2m+k+1} \end{figure} \subsection{Number of paths in $G_x$} \label{subgraph} Note that in $G_x$, the minimum degree can be as small as $1$. Instead of counting the number of paths of a given length in $G_x$, we work with a subgraph of $G_x$ having large minimum degree. We adopt the result from Section \ref{reduction}. It is easy to see that $v(G_x)\leq{n}$ and \begin{equation*}\begin{split} e(G_x)\geq\big|\Gamma_G^m(x)\big|c_1n^{1/m}\geq\big(c_1n^{1/m}\big)^{m+1}=c_1^{m+1}n^{1+1/m}.
\end{split}\end{equation*} Using Lemma \ref{degreelowerbound}, we obtain a bipartite subgraph $H_x$ of $G_x$ having order at least $ \left(\frac{9c_1^{m+1}}{1000m}\right)^{m/(m+1)}n$, size at least $\frac{9c_1^{m+1}}{10}n^{1+1/m}$, and $\frac{c_1^{m+1}}{10}n^{1/m}\leq\deg_{H_x}(u)\leq{c_2}n^{1/m}$ for any vertex $u\in{V(H_x)}$, with bipartition $\{A_x,B_x\}$, where $A_x=\Gamma_G^m(x)\cap{V(H_x)}$ and $B_x=\Gamma_G^{m+1}(x)\cap{V(H_x)}$. \begin{lem} \label{kpath} Let $k$ be an odd number satisfying $1\leq{k}\leq{2m-1}$. The number of $k$-paths in $G_x$ is at least $$\dfrac{9c_1^{(m+1)(k+1)}}{10^{k+1}c_2}n^{1+k/m}.$$ \end{lem} \begin{proof} The result follows from $$|B_x|\geq\dfrac{e(H_x)}{c_2n^{1/m}}\geq\dfrac{9c_1^{m+1}}{10c_2}n$$ and the fact that the number of $k$-paths in $G_x$ is at least $$|B_x|\left(\dfrac{c_1^{m+1}}{10}n^{1/m}\right)^k.$$ \end{proof} \subsection{Lower bound on the number of short cycles} \label{shortcycleslowerbound} The work in the preceding sections allows us to find a lot of cycles in $G$. It is clear that a $2\ell$-cycle can be counted at most $2\ell$ times, as each vertex of the cycle can play the role of $x$ once. We are ready to give a lower bound on the number of short even cycles, up to length $4m$, in $G$. \begin{prop} \label{smallcycles} Let $m$ be a positive integer. Let $G$ be a bipartite $n$-vertex graph having girth larger than $2m$ and $c_1n^{1/m}\leq\deg_G(v)\leq{c_2}n^{1/m}$ for any vertex $v\in{V(G)}$, for some $c_1,c_2>0$. Then for $m+1\leq{\ell}\leq{2m}$, the number of $2\ell$-cycles in $G$ is at least $\alpha_{\ell}{n^{2\ell/m}}$, where \begin{equation*} \alpha_\ell=\dfrac{9}{2\ell{c_2}}\left(\dfrac{c_1^{m+1}}{10}\right)^{2\ell-2m+1}>0.
\end{equation*} \end{prop} \begin{proof} Using Lemma \ref{kpath} with $k=2\ell-2m-1$ and the observation above, the number of $2\ell$-cycles in $G$ is at least \begin{equation*}\begin{split} &\quad\,\,\dfrac{v(G)}{2\ell}\left(\min_{x\in{V(G)}}\deg(H_x)\right)\left(\min_{x\in{V(G)}}\text{number of }(2\ell-2m-1)\text{-paths in }H_x\right)\\ &\geq\dfrac{n}{2\ell}\left(\dfrac{c_1^{m+1}}{10}n^{1/m}\right)\left(\dfrac{9c_1^{(2\ell-2m)(m+1)}}{10^{2\ell-2m}c_2}n^{(2\ell-m-1)/m}\right)\\ &=\dfrac{9}{2\ell{c_2}}\left(\dfrac{c_1^{m+1}}{10}\right)^{2\ell-2m+1}n^{2\ell/m}. \end{split}\end{equation*} \end{proof} \section{Proof of main theorem} To count the number of longer cycles, we observe that $H_x$ has all the nice properties we wanted, namely bipartite, almost regular and large girth, we can apply Proposition \ref{smallcycles} to $H_x$ and get many short cycles in $H_x$. From them, we obtain a lot of paths in $H_x$, and each of them corresponds to many longer cycles in $G$ as in Section \ref{countingshortcycles}. These longer cycles give many longer paths in $H_x$ and again, each of these paths corresponds to many even longer cycles in $G_x$. Eventually, we have Theorem \ref{ourthm}. For simplicity, we will assume that $m$ is even from now on. For odd $m$, one can proceed similarly. Changing the parameters in Proposition \ref{smallcycles}, the number of $2\ell$-cycles in $H_x$ is at least \begin{equation*} \dfrac{9}{2\ell{c_2}}\left(\dfrac{c_1^{(m+1)^2}}{10^{m+2}}\right)^{2\ell-2m+1}n^{2\ell/m}, \end{equation*} for $2\ell\in{L_0}:=\{3m,3m+2,3m+4,\ldots,4m\}$, and so the number of paths of length $2\ell-m\in\{2m,2m+2,2m+4,\ldots,3m\}$ in $H_x$ is at least \begin{equation*} \dfrac{9}{c_2}\left(\dfrac{c_1^{(m+1)^2}}{10^{m+2}}\right)^{2\ell-2m+1}n^{2\ell/m}. 
\end{equation*} Then, for $2\ell\in{L_0}$, the number of $((2\ell-m)+2m+2)$-cycles in $G$ is at least \begin{equation*}\begin{split} &\quad\,\dfrac{n}{2\ell+m+2}\dfrac{c_1^{2(m+1)}}{100}n^{2/m}\dfrac{9}{c_2}\left(\dfrac{c_1^{(m+1)^2}}{10^{m+2}}\right)^{2\ell-2m+1}n^{2\ell/m}\\ &=\dfrac{9}{(2\ell+m+2)c_2}\dfrac{c_1^{2(m+1)}}{100}\left(\dfrac{c_1^{(m+1)^2}}{10^{m+2}}\right)^{2\ell-2m+1}n^{(2\ell+m+2)/m}, \end{split}\end{equation*} that is, for $2\ell\in{L_1}:=\{4m+2,4m+4,\ldots,5m+2\}$, the number of $2\ell$-cycles in $G$ is at least $\alpha_\ell{n^{2\ell/m}}$, where $\alpha_\ell$ is a positive constant depending only on $m$, $c_1$, $c_2$ and $\ell$. Repeating the same argument with the sets $L_j:=\{3m+j(m+2),3m+j(m+2)+2,\ldots,4m+j(m+2)\}$ completes the proof of Theorem \ref{ourthm}.
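As a closing sanity check (an illustration, not part of the proof), the algebraic simplification that produced the constant $\alpha_\ell$ in Proposition \ref{smallcycles} can be spot-checked numerically; the values of $n$, $c_1$, $c_2$ below are arbitrary:

```python
# Check that
#   (n/2l) * (c1^(m+1)/10 * n^(1/m))
#         * (9 c1^((2l-2m)(m+1)) / (10^(2l-2m) c2) * n^((2l-m-1)/m))
# equals (9/(2l c2)) * (c1^(m+1)/10)^(2l-2m+1) * n^(2l/m),
# as claimed in the proof of the proposition on short cycles.
from math import isclose

def lhs(n, m, l, c1, c2):
    return (n / (2 * l)) * (c1 ** (m + 1) / 10 * n ** (1 / m)) * (
        9 * c1 ** ((2 * l - 2 * m) * (m + 1)) / (10 ** (2 * l - 2 * m) * c2)
        * n ** ((2 * l - m - 1) / m))

def rhs(n, m, l, c1, c2):
    return (9 / (2 * l * c2)) * (c1 ** (m + 1) / 10) ** (2 * l - 2 * m + 1) \
        * n ** (2 * l / m)

for m in (2, 3, 4):
    for l in range(m + 1, 2 * m + 1):           # the range of Proposition 3
        assert isclose(lhs(1000.0, m, l, 0.5, 2.0),
                       rhs(1000.0, m, l, 0.5, 2.0), rel_tol=1e-9)
```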
https://arxiv.org/abs/2202.04722
On the stability of unevenly spaced samples for interpolation and quadrature
Unevenly spaced samples from a periodic function are common in signal processing and can often be viewed as a perturbed equally spaced grid. In this paper, we analyze how the uneven distribution of the samples impacts the quality of interpolation and quadrature. Starting with equally spaced nodes on $[-\pi,\pi)$ with grid spacing $h$, suppose the unevenly spaced nodes are obtained by perturbing each uniform node by an arbitrary amount $\leq \alpha h$, where $0 \leq \alpha < 1/2$ is a fixed constant. We prove a discrete version of the Kadec-1/4 theorem, which states that the nonuniform discrete Fourier transform associated with perturbed nodes has a bounded condition number independent of $h$, for any $\alpha < 1/4$. We go on to show that unevenly spaced quadrature rules converge for all continuous functions and interpolants converge uniformly for all differentiable functions whose derivative has bounded variation when $0 \leq \alpha < 1/4$. Though, quadrature rules at perturbed nodes can have negative weights for any $\alpha > 0$, we provide a bound on the absolute sum of the quadrature weights. Therefore, we show that perturbed equally spaced grids with small $\alpha$ can be used without numerical woes. While our proof techniques work primarily when $0 \leq \alpha < 1/4$, we show that a small amount of oversampling extends our results to the case when $1/4 \leq \alpha < 1/2$.
\section{Introduction}\label{sec:introduction} In signal processing, function approximation, and econometrics, unevenly spaced time series data naturally occur. For example, natural disasters occur at irregular time intervals~\cite{quan}, observational astronomy takes measurements of celestial bodies at times determined by cloud coverage and planetary configurations~\cite{vio}, clinical trials may monitor health diagnostics at irregular time intervals~\cite{stahl}, and wireless sensors only record information when a state changes to conserve battery life~\cite{gowrishankar}. In most applications, the samples can be viewed as obtained from perturbed equally spaced nodes. Therefore, a common approach to deal with unevenly spaced samples is to first transform the data into equally spaced observations using some form of low-order interpolation~\cite{eckner2012algorithms}. However, transforming data in this way can introduce a number of significant and hard-to-quantify biases~\cite{scholes1977estimating,rehfeld2011comparison}. Ideally, unevenly spaced time series are analyzed in their original form, and there seem to be few theoretical results on this in approximation theory~\cite{austintrefethen}. Suppose that there is an unknown $2\pi$-periodic function $f:[-\pi,\pi)\rightarrow \mathbb{C}$ that is sampled at $2N+1$ unevenly spaced nodes, and one would like to recover $f$ via interpolation or compute integrals involving $f$. How much does the uneven distribution of the samples impact the quality of interpolation or quadrature? To make progress on this question, we assume that the unevenly spaced nodes can be viewed as perturbed equally spaced nodes.
That is, we have acquired samples $f_{-N},\ldots,f_{N}$ from $f$ at nodes that are perturbed from equally spaced nodes, i.e., \begin{equation} f_j = f(\tilde{x}_j), \qquad \tilde{x}_j = (j + \delta_j)h, \qquad -N \leq j \leq N, \label{eq:UnevenSamples} \end{equation} where $h = 2\pi/(2N+1)$ is the grid spacing and $\delta_j$ is the perturbation of $jh$ with $|\delta_j|\leq \alpha$ for some $0 \leq \alpha < 1/2$ (see~\cref{fig.perturb}). We call the nodes $\tilde{x}_{-N},\ldots,\tilde{x}_N$ a set of $\alpha$-perturbed nodes. Here, we assume that $\alpha < 1/2$ so that nodes cannot coalesce. When $\alpha = 0$ the nodes are equally spaced and we are in a classical setting of approximation theory. In particular, when $\alpha=0$, one can use the fast Fourier transform (FFT)~\cite{cooley1965algorithm} to compute interpolants that converge rapidly to $f$~\cite{wright2015extension}. Moreover, the associated quadrature estimate is the trapezoidal rule, which is numerically stable and can be geometrically convergent for computing integrals involving $f$~\cite{hunter,trefethenweide}. Surprisingly, there has been far less theoretical attention on the case when $\alpha\neq 0$, despite it appearing in numerous applications and many encouraging numerical observations~\cite{austin,trefethenweide}. A notable exception is Austin and Trefethen's work~\cite{austintrefethen}, where they showed that interpolants and quadrature rules at perturbed grids converge when the underlying function $f$ is twice continuously differentiable and $\alpha<1/2$. Using a discrete analogue of the Kadec-1/4 theorem~\cite{kadec}, we strengthen these results when $\alpha<1/4$.
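For concreteness, the setup in~\cref{eq:UnevenSamples} is easy to simulate. The following NumPy sketch (our illustration; the paper's own snippets use Chebfun/MATLAB) draws each $\delta_j$ uniformly at random from $[-\alpha,\alpha]$, which is one arbitrary way of producing $\alpha$-perturbed nodes:

```python
import numpy as np

def perturbed_nodes(N, alpha, seed=0):
    """alpha-perturbed nodes tilde_x_j = (j + delta_j) * h on [-pi, pi),
    with |delta_j| <= alpha drawn uniformly at random (one arbitrary choice)."""
    rng = np.random.default_rng(seed)
    h = 2 * np.pi / (2 * N + 1)                      # grid spacing
    j = np.arange(-N, N + 1)
    delta = alpha * (2 * rng.random(2 * N + 1) - 1)  # |delta_j| <= alpha
    return (j + delta) * h

N, alpha = 100, 0.2
x = perturbed_nodes(N, alpha)
h = 2 * np.pi / (2 * N + 1)
# alpha < 1/2 keeps each node inside its own window, so nodes cannot coalesce:
# consecutive nodes satisfy x_{j+1} - x_j >= (1 - 2*alpha) * h > 0
assert np.all(np.diff(x) > 0)
assert np.all(np.abs(x - np.arange(-N, N + 1) * h) <= alpha * h)
```

Since $\alpha<1/2$, consecutive nodes satisfy $\tilde{x}_{j+1}-\tilde{x}_j\geq(1-2\alpha)h>0$, which the assertions above confirm.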
\begin{figure} \centering \begin{tikzpicture} \draw[black,thick] (0,0)--(9,0); \draw[black,thick] (0,-.2)--(0,.2); \filldraw (.5,0) circle (2pt); \filldraw (1.5,0) circle (2pt); \filldraw (2.5,0) circle (2pt); \filldraw (3.5,0) circle (2pt); \filldraw (4.5,0) circle (2pt); \filldraw (5.5,0) circle (2pt); \filldraw (6.5,0) circle (2pt); \filldraw (7.5,0) circle (2pt); \filldraw (8.5,0) circle (2pt); \draw[black,thick] (9,-.25) arc (-15:15:1); \draw[red,ultra thick] (.5-0.333,0)--(.5+0.333,0); \draw[red,ultra thick] (1.5-0.333,0)--(1.5+0.333,0); \draw[red,ultra thick] (2.5-0.333,0)--(2.5+0.333,0); \draw[red,ultra thick] (3.5-0.333,0)--(3.5+0.333,0); \draw[red,ultra thick] (4.5-0.333,0)--(4.5+0.333,0); \draw[red,ultra thick] (5.5-0.333,0)--(5.5+0.333,0); \draw[red,ultra thick] (6.5-0.333,0)--(6.5+0.333,0); \draw[red,ultra thick] (7.5-0.333,0)--(7.5+0.333,0); \draw[red,ultra thick] (8.5-0.333,0)--(8.5+0.333,0); \node at (0,-.5) {$-\pi$}; \node at (9,-.5) {$\pi$}; \node at (4.5,-.3) {$0$}; \node at (5.5,-.3) {$h$}; \node at (6.5,-.3) {$2h$}; \node at (7.5,-.3) {$3h$}; \node at (8.5,-.3) {$4h$}; \node at (3.5,-.3) {$-h$}; \node at (2.5,-.3) {$-2h$}; \node at (1.5,-.3) {$-3h$}; \node at (.5,-.3) {$-4h$}; \node at (4.5,.5) {$h=\frac{2\pi}{9}$}; \end{tikzpicture} \caption{Nine equally spaced nodes (black dots) on $[-\pi,\pi)$. The intervals (red) show where the unevenly spaced function samples can be when $\alpha =1/3$ in~\cref{eq:UnevenSamples}.}\label{fig.perturb} \end{figure} There are several aspects of unevenly spaced samples that we investigate: \begin{itemize}[leftmargin=*] \item {\em Conditioning of a nonuniform discrete Fourier transform.} The analogue of the FFT for unevenly spaced nodes is the nonuniform discrete Fourier transform (NUDFT). There are many variants of the NUDFT, but we focus on the one that is closely related to~\cref{eq:UnevenSamples} and common in signal processing~\cite{bagchi2012nonuniform}.
Let $N$ be an integer and $\underline{c} = \left(c_{-N},\ldots,c_{N}\right)^\top$ be a vector. The NUDFT task related to~\cref{eq:UnevenSamples} is to compute the vector $\underline{f} = \left(f_{-N},\ldots,f_{N}\right)^\top$, defined by the following sums: \begin{equation} f_j = \sum_{k=-N}^{N} c_k e^{-i \tilde{x}_j k}, \qquad -N\leq j\leq N. \label{eq:NUDFT} \end{equation} As~\cref{eq:NUDFT} involves $2N+1$ sums, each with $2N+1$ terms, the naive algorithm for computing $\underline{f}$ requires $\mathcal{O}(N^2)$ operations; however, there are efficient algorithms that require only $\mathcal{O}(N\log N)$ operations~\cite{barnett2019parallel}. The NUDFT task has an inverse for any perturbed grid with $0\leq \alpha<1/2$, which aims to recover the vector $\underline{c}$ from $\underline{f}$. We show that for $\alpha<1/4$, the NUDFT task and its inverse have a condition number that can be bounded independently of $N$ (see~\cref{cor:NUDFTconditioning}). \item {\em Trigonometric interpolation.} A trigonometric polynomial $q$ of degree $\leq n$ is a function defined on $[-\pi,\pi)$ of the form \begin{equation} q(x) = \sum_{k=-n}^n c_k e^{ikx}, \qquad c_k \in \C. \label{eq:trigpoly} \end{equation} We denote the space of all trigonometric polynomials of degree $\leq n$ by $\mathcal{T}_n$. The goal of trigonometric interpolation is to find coefficients $c_{-n},\ldots,c_{n}$ such that $q$ interpolates $f$ at the samples, i.e., $q(\tilde{x}_j) = f_j$ for $-N\leq j\leq N$. Since the order of summation in~\cref{eq:trigpoly} can be reversed (replace $k$ by $-k$), the coefficients can be computed with an inverse NUDFT when $n = N$. We show that interpolants at $\alpha$-perturbed nodes converge to $f$ for all differentiable functions whose derivative has bounded variation when $0\leq \alpha<1/4$ (see~\cref{thm.interpolatemax} and~\cref{thm.interpolateoptimal}).
\item {\em Exact quadrature.} A quadrature rule is a method for numerical integration that approximates the integral of $f$ by a weighted sum of the function's samples, i.e., \begin{equation} I = \int_{-\pi}^\pi f(x) dx \approx \tilde{I}_N= \sum_{j=-N}^{N} \tilde{w}_j f(\tilde{x}_j). \label{eq:quadratureRule} \end{equation} Here, $\tilde{x}_{-N},\ldots,\tilde{x}_N$ are called the quadrature nodes and $\tilde{w}_{-N},\ldots,\tilde{w}_N$ the quadrature weights. Given quadrature nodes, it is often desirable to design the quadrature weights so that~\cref{eq:quadratureRule} is exact for all trigonometric polynomials in $\mathcal{T}_n$ for some $n$. When $n = N$, the exactness condition uniquely defines a quadrature rule for any $\alpha<1/2$. While quadrature rules at $\alpha$-perturbed nodes can have negative weights for any $\alpha>0$ (see~\cref{thm.negative}), we show that the absolute sum of the weights, i.e., $\sum_{j=-N}^N |\tilde{w}_j|$, is bounded independently of $N$ for $\alpha<1/4$ (see~\cref{thm.bounded}), which shows that the quadrature rules are numerically stable. We provide an explicit upper bound on $|I-\tilde{I}_N|$ and conclude that $\lim_{N\rightarrow\infty} \tilde{I}_N = I$ for all continuous periodic functions when $\alpha<1/4$. \item {\em Marcinkiewicz--Zygmund inequalities.} The stability of quadrature and interpolation is closely connected to so-called Marcinkiewicz--Zygmund (MZ) inequalities~\cite{mhaskar, grochenig}. We show that the following MZ inequality holds for all $q\in\mathcal{T}_N$: \begin{equation} \frac{(1-\varphi_\alpha)^2}{2\pi}\! \int_{-\pi}^\pi |q(x)|^2 dx \leq \frac{1}{2N+1} \sum_{j=-N}^{N} \abs{q(\tilde{x}_j)}^2 \leq \frac{(1+\varphi_\alpha)^2}{2\pi}\!\int_{-\pi}^\pi |q(x)|^2 dx, \label{eq.MZp} \end{equation} where $\tilde{x}_{-N},\ldots,\tilde{x}_N$ are any $\alpha$-perturbed nodes in~\cref{eq:UnevenSamples} with $\alpha<1/4$ and $\varphi_\alpha = 1-\cos(\pi\alpha)+\sin(\pi\alpha)$.
Other MZ inequalities at perturbed nodes are found in~\cite{marzoseip,ortega}. We use the MZ inequality in~\cref{eq.MZp} to derive explicit error bounds and the rate of convergence of quadrature rules and interpolants at $\alpha$-perturbed nodes when $0\leq \alpha < 1/4$. \end{itemize} While our proof techniques work primarily for $0 \leq \alpha < 1/4$ when $N = n$ (see~\cref{sec:MZbounded}), we show that oversampling, i.e., $N>n$, allows us to extend our results to the case when $1/4\leq \alpha <1/2$. By oversampling by a factor of $1+\epsilon$, i.e., $N = \lceil (1+\epsilon)n\rceil$ for any $\epsilon>0$, we show that the same convergence results for interpolation and quadrature carry over to when $1/4\leq \alpha <1/2$ (see~\cref{sec:oversample}), improving on a result by Mhaskar, Narcowich, and Ward~\cite[Cor.~4.1]{mhaskar}. Many results in this paper are motivated by Austin's thesis~\cite{austin}. In particular, for $0 \leq \alpha < 1/4$, we prove Conjecture~3.10 of~\cite{austin} on the $2$-norm Lebesgue constant (see~\cref{cor.MZkadec}) and Conjecture~3.14 on the absolute sum of the quadrature weights (see~\cref{thm.bounded}). Our results for interpolation also confirm Conjecture 3.7. Moreover, we provide an answer to questions regarding the signs of quadrature weights raised in section~3.5.2 of~\cite{austin} (see~\cref{thm.negative} and~\cref{thm.thetaN}). The paper is structured as follows. In~\cref{sec:kadec}, we prove a discrete version of the Kadec-$1/4$ theorem that leads to a condition number bound on the nonuniform discrete Fourier transform and discuss its consequences. In~\cref{sec:interpolation} and~\cref{sec:quad}, we study interpolation and quadrature rules at $\alpha$-perturbed nodes. In~\cref{sec:MZbounded}, we look at further consequences of MZ inequalities. Finally, in~\cref{sec:oversample}, we investigate the oversampling setting.
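Before moving on, the NUDFT task in~\cref{eq:NUDFT} can be made concrete with a short NumPy sketch (our illustration, not the paper's code) that evaluates the sums naively in $\mathcal{O}(N^2)$ operations. A one-hot coefficient vector serves as a sanity check: $\underline{c}=\underline{e}_k$ should give $f_j = e^{-i\tilde{x}_j k}$.

```python
import numpy as np

def nudft(c, x):
    """Naive O(N^2) evaluation of f_j = sum_{k=-N}^{N} c_k * exp(-i * x_j * k),
    the NUDFT task, for coefficients c = (c_{-N}, ..., c_N) and nodes x."""
    N = (len(c) - 1) // 2
    k = np.arange(-N, N + 1)
    return np.exp(-1j * np.outer(x, k)) @ c

N, alpha = 16, 0.2
h = 2 * np.pi / (2 * N + 1)
rng = np.random.default_rng(1)
x = (np.arange(-N, N + 1) + alpha * (2 * rng.random(2 * N + 1) - 1)) * h

# Sanity check: c = e_k (one-hot at mode k = 3) gives f_j = exp(-i * 3 * x_j)
c = np.zeros(2 * N + 1)
c[N + 3] = 1.0
assert np.allclose(nudft(c, x), np.exp(-3j * x))
```

The fast $\mathcal{O}(N\log N)$ algorithms cited above compute the same sums; the dense evaluation here is only for exposition.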
\section{The Kadec-$\mathbf{1/4}$ theorem and its consequences}\label{sec:kadec} In sampling theory, the Kadec-$1/4$ theorem shows that the Fourier modes $\left\{e^{i\lambda_k x}\right\}$ for $k\in\mathbb{Z}$ form a Riesz basis when $|\lambda_k-k|\leq \alpha$ and $0\leq \alpha<1/4$~\cite{kadec}, which in signal processing means that one can recover a square-integrable function from its inner products with $\left\{e^{i\lambda_k x}\right\}$. While the standard setting for Kadec's theorem applies to perturbing an infinite number of Fourier wave numbers from integers $k$ to $\lambda_k$, we show that there is a discrete analogue when perturbing a finite number of nodes. It has consequences for the condition number of the NUDFT and MZ inequalities. For an integer $N$ and $\alpha$-perturbed nodes $\{\tilde{x}_j\}$, let $F$ be the DFT matrix given by \[ F_{jk} = e^{-2\pi i jk/(2N+1)}, \qquad -N\leq j,k\leq N, \] and $\tilde{F}$ be the NUDFT matrix given by \begin{equation} \tilde{F}_{jk} = e^{-i \tilde{x}_j k} = e^{-2\pi i (j + \delta_j) k/(2N+1)}, \qquad -N\leq j,k\leq N. \label{eq:NUDFTmatrix} \end{equation} The matrix $\tilde{F}$ is of interest because the sums in~\cref{eq:NUDFT} can be neatly written as the matrix-vector product $\tilde{F}\underline{c} = \underline{f}$. Therefore, the action of $\tilde{F}$ on a vector is equivalent to evaluating a trigonometric polynomial of degree $N$ at $\tilde{x}_{-N},\ldots,\tilde{x}_N$. This can be performed in $\mathcal{O}(N\log N)$ operations~\cite{ruiz2018nonuniform} and in Chebfun (a MATLAB package for computing with functions)~\cite{driscoll2014chebfun} can be implemented as follows: \begin{verbatim} N = 1e4; alpha = 0.1; h = 2*pi/(2*N+1); tilde_x = (-N:N)'*h + alpha*(2*rand(2*N+1,1)-1)*h; c = randn(2*N+1,1); Fc = exp(1i*tilde_x*N).*chebfun.nufft(c,tilde_x/(2*pi)); \end{verbatim} An alternative algorithm that has even faster execution times is available in the FINUFFT package~\cite{barnett2019parallel}.
We now show that $\tilde{F}$ and $F$ are relatively close in the sense of their distance in the spectral norm. Our proof follows from a similar strategy to the proof of Kadec's theorem~\cite{young}. \begin{theorem}\label{thm.kadec} Suppose that $\abs{\delta_j} \leq \alpha <1/4$ for all $-N\leq j\leq N$ in~\cref{eq:NUDFTmatrix}, then \begin{equation}\label{eq.Kadec} \left\|F - \tilde{F}\right\|_2 \leq \varphi_\alpha\|F\|_2, \qquad \varphi_\alpha = 1-\cos(\pi\alpha)+\sin(\pi\alpha), \end{equation} where $\|\cdot\|_2$ is the spectral norm. \end{theorem} \begin{proof} Since $\|F - \tilde{F}\|_2 = \|F^\top - \tilde{F}^\top\|_2$ and $\|F\|_2 = \|F^\top\|_2$, where superscript `$\top$' denotes the matrix transpose, we prove that $\|F^\top - \tilde{F}^\top\|_2\leq \varphi_\alpha \|F^\top\|_2$. Let $\underline{c} = (c_{-N}, \ldots, c_{N})^\top$ be a vector of unit length so that $\|\underline{c}\|_2 = 1$. We have \begin{equation} (F^\top-\tilde{F}^\top)\underline{c} = \sum_{j=-N}^{N} e^{-ij\underline{t}} \circ \left(\underline{\mathbbm{1}} - e^{-i\delta_j\underline{t}}\right)c_j, \label{eq:NUDFTdifference} \end{equation} where $\underline{t} = (t_{-N},\ldots,t_{N})^\top$ with $t_k = 2\pi k/(2N+1)$, `$\circ$' is the Hadamard product denoting entry-by-entry multiplication between vectors, $\underline{\mathbbm{1}}$ is the column vector of all ones, and the exponential function of a vector is applied entrywise. 
As in~\cite{young}, for each $-N\leq j \leq N$, we write \begin{align*} \underline{\mathbbm{1}} - e^{-i\delta_j\underline{t}} &= \underbrace{\left(1 - \frac{\sin (\pi\delta_j)}{\pi\delta_j}\right)\underline{\mathbbm{1}}}_{A_j} + \underbrace{\sum_{\ell=1}^\infty \frac{(-1)^\ell 2\delta_j \sin (\pi\delta_j)}{\pi(\ell^2 - \delta_j^2)} \cos(\ell\underline{t})}_{B_j} \\ &\qquad -\underbrace{i\sum_{\ell=1}^\infty \frac{(-1)^\ell 2\delta_j \cos (\pi\delta_j)}{\pi(\ell - 1/2)^2 - \pi\delta_j^2} \sin\left(\left(\ell-\frac{1}{2}\right)\underline{t}\right)}_{C_j}, \end{align*} where $\sin$ and $\cos$ of a vector are applied entrywise. By substituting this expression into~\cref{eq:NUDFTdifference}, we can bound $\| (F^\top-\tilde{F}^\top)\underline{c} \|_2$ by separately bounding $\sum_{j=-N}^{N} A_j\circ e^{-ij\underline{t}}c_j, \sum_{j=-N}^{N} B_j\circ e^{-ij\underline{t}}c_j,$ and $\sum_{j=-N}^{N} C_j\circ e^{-ij\underline{t}}c_j$. \noindent \textbf{Bounding $\mathbf{\sum_{j=-N}^{N} A_j\circ e^{-ij\underline{t}}c_j}$.} Since $\abs{\delta_j} \leq \alpha < 1/4$, from elementary calculus, we have that $\max_j \left(1 - \sin(\pi\delta_j) / (\pi\delta_j)\right) \leq 1 - \sin(\pi\alpha)/(\pi\alpha)$. Also, since $\{e^{-ij\underline{t}}\}_{j=-N}^{N}$ is an orthogonal set and $A_j$ is a constant vector, $\{A_j \circ e^{-ij\underline{t}}c_j\}_{j=-N}^{N}$ is a set of orthogonal vectors. Hence, by the Pythagorean Theorem, we have \begin{equation}\label{eq.controlA} \begin{aligned} &\norm{\sum_{j=-N}^{N} A_j \circ e^{-ij\underline{t}}c_j}_2^2\!= \!\! \sum_{j=-N}^{N} \!\!\norm{A_j \circ e^{-ij\underline{t}}c_j}_2^2 \leq\! \left(1-\frac{\sin(\pi\alpha)}{\pi\alpha}\right)^2 \!\!\!\!\sum_{j=-N}^{N} \!\!\norm{e^{-ij\underline{t}}c_j}_2^2 \\ &\qquad\qquad\qquad =\left(1-\frac{\sin(\pi\alpha)}{\pi\alpha}\right)^2 \norm{\sum_{j=-N}^{N} e^{-ij\underline{t}}c_j}_2^2 \leq \left(1-\frac{\sin(\pi\alpha)}{\pi\alpha}\right)^2\! \|F^\top\|_2^2.
\end{aligned} \end{equation} \noindent \textbf{Bounding $\mathbf{\sum_{j=-N}^{N} B_j\circ e^{-ij\underline{t}}c_j}$.} Let $D_{\ell,j} = \left((-1)^\ell 2\delta_j \sin (\pi\delta_j)\right)/\left(\pi(\ell^2 - \delta_j^2)\right)$. Since $|D_{\ell,j}| = \mathcal{O}(\ell^{-2})$ and $\abs{\cos(\ell t_k) e^{-ijt_k}c_j}\leq 1$, we have by the Fubini--Tonelli Theorem that \begin{equation*} \sum_{j=-N}^{N} \!\!B_j \circ e^{-ij\underline{t}} c_j = \!\!\sum_{j=-N}^{N}\!\! \left[\sum_{\ell = 1}^\infty D_{\ell,j}\cos(\ell \underline{t})\right] \circ e^{-ij\underline{t}} c_j = \sum_{\ell = 1}^\infty \sum_{j=-N}^{N} \!\!\left[D_{\ell,j}\cos(\ell \underline{t}) \circ e^{-ij\underline{t}}c_j\right]. \end{equation*} Define $\underline{v}^\ell = \sum_{j=-N}^{N} \left[D_{\ell,j}\cos(\ell \underline{t}) \circ e^{-ij\underline{t}}c_j\right]$. Since $|\cos(\ell t_k) e^{-ijt_k}c_j|\leq 1$, there exists $C > 0$, independent of $\ell$, such that $\norm{\underline{v}^\ell}_2 \leq C\max_j \abs{D_{\ell,j}}$. Since $\max_j \abs{D_{\ell,j}} = \mathcal{O}(\ell^{-2})$, we have $\sum_{\ell = 1}^\infty \norm{\underline{v}^\ell}_2 < \infty$. 
Hence, by Minkowski's inequality for integrals, we have $\norm{\sum_{\ell = 1}^\infty \underline{v}^\ell}_2 \leq \sum_{\ell = 1}^\infty \norm{\underline{v}^\ell}_2$ and \begin{align*} \norm{\sum_{j=-N}^{N} B_j \circ e^{-ij\underline{t}}c_j}_2 &= \norm{\sum_{\ell = 1}^\infty \underline{v}^\ell}_2 \leq \sum_{\ell = 1}^\infty \norm{\underline{v}^\ell}_2 = \sum_{\ell = 1}^\infty \norm{\cos(\ell\underline{t}) \circ \sum_{j=-N}^{N} D_{\ell,j}e^{-ij\underline{t}}c_j}_2\\ &\stackrel{(*)}{\leq} \sum_{\ell = 1}^\infty \norm{\cos(\ell\underline{t})}_\infty \norm{\sum_{j=-N}^{N} D_{\ell,j}e^{-ij\underline{t}}c_j}_2 \\ & \stackrel{(**)}{\leq} \sum_{\ell = 1}^\infty \max_{j}\abs{D_{\ell,j}}\norm{\sum_{j=-N}^{N} e^{-ij\underline{t}}c_j}_2 \leq \|F^\top\|_2\sum_{\ell = 1}^\infty \max_{j}\abs{D_{\ell,j}}, \end{align*} where ($*$) follows by applying H\"older's inequality and ($**$) holds due to the orthogonality of $\{e^{-ij\underline{t}}c_j\}_{j=-N}^N$. Since $\sum_{\ell = 1}^\infty 2\alpha/[\pi(\ell^2 - \alpha^2)]$ is the partial fraction expansion of $[1/(\pi \alpha) - \cot(\pi\alpha)]$ and $|D_{\ell,j}|$ is maximized when $|\delta_j| = \alpha$, we have \begin{equation*} \sum_{\ell=1}^\infty \max_{j}\abs{D_{\ell,j}} \leq \sum_{\ell=1}^\infty \frac{2\alpha \sin(\pi\alpha)}{\pi(\ell^2-\alpha^2)} = \frac{\sin(\pi\alpha)}{\pi\alpha} - \cos(\pi\alpha). \end{equation*} Hence, we find that \begin{equation}\label{eq.controlB} \norm{\sum_{j=-N}^{N} B_j \circ e^{-ij\underline{t}}c_j}_2 \leq \left(\frac{\sin(\pi\alpha)}{\pi\alpha} - \cos(\pi\alpha)\right)\|F^\top\|_2. \end{equation} \noindent \textbf{Bounding $\mathbf{\sum_{j=-N}^{N} C_j\circ e^{-ij\underline{t}}c_j}$.} Let $E_{\ell,j} \!= \!\!\left((-1)^\ell 2\delta_j\!\cos (\pi\delta_j)\right)\!/\!\!\left(\pi[(\ell-1/2)^2 \!- \delta_j^2]\right)$. Then, we have that \begin{equation*} \norm{\sum_{j=-N}^{N} \!\!C_j \circ e^{-ij\underline{t}}c_j}_2\!\! 
\leq \!\sum_{\ell = 1}^\infty \norm{\sin\!\left(\!\left(\ell-\frac{1}{2}\right)\!\underline{t}\right) \!\circ\!\!\! \sum_{j=-N}^{N} \!\!E_{\ell,j}e^{-ij\underline{t}}c_j}_2 \!\!\!\leq \!\|F^\top\|_2\!\sum_{\ell = 1}^\infty \!\max_{j}\abs{E_{\ell,j}}. \end{equation*} Since $\sum_{\ell = 1}^\infty 2\alpha/(\pi[(\ell-1/2)^2-\alpha^2])$ is the partial fraction expansion of $\tan(\pi\alpha)$ and $|E_{\ell,j}|$ is maximized when $|\delta_j| = \alpha$, we find that \begin{equation*} \sum_{\ell = 1}^\infty \max_{j}\abs{E_{\ell,j}} \leq \sum_{\ell = 1}^\infty \frac{2\alpha\cos(\pi\alpha)}{\pi((\ell-1/2)^2 - \alpha^2)} = \sin(\pi\alpha). \end{equation*} Hence, we conclude that \begin{equation}\label{eq.controlC} \norm{\sum_{j=-N}^{N} C_j \circ e^{-ij\underline{t}}c_j}_2 \leq \sin(\pi\alpha) \|F^\top\|_2. \end{equation} The statement of the theorem follows by combining \cref{eq.controlA}, \cref{eq.controlB}, and \cref{eq.controlC}. \end{proof} Since $F/\sqrt{2N+1}$ is a unitary matrix, we find that $\|F\|_2 = \sqrt{2N+1}$. Therefore,~\cref{thm.kadec} shows that $\|\tilde{F}-F\|_2\leq \varphi_\alpha \sqrt{2N+1}$. It is not immediately obvious that~\cref{thm.kadec} is a discrete analogue of Kadec's theorem. We can write $\tilde{F} = F + E$ with $\|E\|_2\leq \varphi_\alpha \|F\|_2$, so we have \begin{equation} \|\tilde{F}\|_2 \leq (1+\varphi_\alpha)\|F\|_2. \label{eq:NUDFTnorm} \end{equation} Moreover, since all the singular values of $F$ are $\sqrt{2N+1}$, for $0\leq \alpha<1/4$ (as this ensures that $\varphi_\alpha<1$) we find the following bound on the inverse NUDFT by Weyl's inequality: \begin{equation} \|\tilde{F}^{-1}\|_2 \leq \frac{1}{\norm{F}_2 - \norm{E}_2} \leq \frac{1}{(1-\varphi_\alpha)\|F\|_2}. \label{eq:NUDFTinversenorm} \end{equation} Hence,~\cref{thm.kadec} gives the following discrete version of Kadec's theorem, which is an analogue of the frame bound for Riesz bases in sampling theory. 
\begin{corollary} Under the same assumptions as~\cref{thm.kadec}, we have \begin{equation} (1-\varphi_\alpha)^2\|\underline{c}\|_2^2 \leq \frac{1}{2N+1}\|\tilde{F}\underline{c}\|_2^2 \leq (1+\varphi_\alpha)^2\|\underline{c}\|_2^2, \qquad \underline{c}\in\mathbb{C}^{2N+1}, \label{eq:discretekadec} \end{equation} where $\varphi_\alpha$ is given in~\cref{eq.Kadec}. \end{corollary} \begin{proof} The inequalities follow immediately from~\cref{eq:NUDFTnorm},~\cref{eq:NUDFTinversenorm}, and the definition of the spectral norm. \end{proof} When $\alpha\geq1/4$, we believe that there is no constant $C_1>0$ that is independent of $N$ such that $C_1\|\underline{c}\|_2^2\leq \tfrac{1}{2N+1}\|\tilde{F}\underline{c}\|_2^2$. This is because Levinson showed that $\left\{e^{i\lambda_k x}\right\}$ for $k\in\mathbb{Z}$ does not always form a Riesz basis when $|\lambda_k-k|\geq 1/4$~\cite{kadec}. Instead, since $\tilde{F}^{-1}$ exists for any perturbed nodes with $1/4\leq \alpha < 1/2$ (as $\tilde{F}$ is a Vandermonde matrix), such a constant $C_1>0$ exists for each fixed $N$, but the best constant may tend to $0$ as $N\rightarrow \infty$. Therefore, $\alpha = 1/4$ is less of a significant threshold in the discrete setting than in sampling theory. From~\cite[Chapt.~4]{austin}, it is likely that one can show that $C_1 = \Omega(N^{-4\alpha})$;\footnote{For two functions $g_1(N)$ and $g_2(N)$, one writes $g_1(N) = \Omega(g_2(N))$ if there is a constant $C>0$ that is independent of $N$ such that $g_1(N)\geq Cg_2(N)$ for all $N$.} however, this is not asymptotically tight. We do not know how to improve the lower bounds on $C_1$ for $1/4\leq \alpha<1/2$. \subsection{The condition number of a NUDFT matrix}\label{sec:NUDFT} Of course,~\cref{eq:discretekadec} also has something to say about the condition number of the NUDFT matrix in~\cref{eq:NUDFTmatrix}. While there are many different versions of a NUDFT matrix, we expect that similar bounds can be derived for their condition numbers.
\begin{corollary} \label{cor:NUDFTconditioning} Under the same assumptions as~\cref{thm.kadec}, the condition number of $\tilde{F}$ can be bounded independently of $N$: \begin{equation*} \kappa_2(\tilde{F}) = \norm{\tilde{F}}_2\norm{\tilde{F}^{-1}}_2 \leq \frac{1+\varphi_\alpha}{1-\varphi_\alpha}, \end{equation*} where $\varphi_\alpha$ is given in~\cref{eq.Kadec}. \end{corollary} \begin{proof} The bound follows immediately from~\cref{eq:NUDFTnorm} and~\cref{eq:NUDFTinversenorm}. \end{proof} Many fast algorithms for computing $\tilde{F}^{-1}\underline{f}$ rely on a Krylov solver and an $\mathcal{O}(N\log N)$ complexity matrix-vector product for $\smash{\tilde{F}}$ and $\smash{\tilde{F}^\top}$~\cite{dutt1993fast}. The number of required iterations of the Krylov solver depends on $\kappa_2(\tilde{F})$. \Cref{cor:NUDFTconditioning} shows that for perturbed equally spaced grids with a fixed constant $0\leq \alpha<1/4$, the number of iterations is independent of $N$. This means that these Krylov-based algorithms for computing $\tilde{F}^{-1}\underline{f}$ only require $\mathcal{O}(N\log N)$ operations when $0\leq \alpha<1/4$, which theoretically justifies previous numerical observations~\cite[Fig.~4]{ruiz2018nonuniform}. In Chebfun~\cite{driscoll2014chebfun}, a Krylov-based inverse NUDFT is implemented in the \texttt{chebfun.inufft} command~\cite{ruiz2018nonuniform}: \begin{verbatim} N = 1e4; alpha = 0.1; h = 2*pi/(2*N+1); tilde_x = (-N:N)'*h + alpha*(2*rand(2*N+1,1)-1)*h; fx = randn(2*N+1,1); cfs = chebfun.inufft(exp(-1i*tilde_x*N).*fx,tilde_x/(2*pi)) \end{verbatim} However, we believe that Krylov-based algorithms do not have an $\mathcal{O}(N\log N)$ complexity when $1/4\leq \alpha <1/2$. Instead, in the regime $1/4\leq \alpha<1/2$, other $\mathcal{O}(N\log N)$ algorithms can be used for the inverse NUDFT~\cite[Sect.~4]{dutt1995fast}, though we are not aware of any publicly available code. 
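Both the bound in~\cref{thm.kadec} and the condition number bound in~\cref{cor:NUDFTconditioning} are straightforward to check numerically. The following NumPy sketch (our illustration, not from the paper) forms $F$ and $\tilde{F}$ densely for a random $\alpha$-perturbed grid and verifies both bounds; recall that all singular values of $F$ equal $\sqrt{2N+1}$.

```python
import numpy as np

N, alpha = 50, 0.2
rng = np.random.default_rng(0)
idx = np.arange(-N, N + 1)
delta = alpha * (2 * rng.random(2 * N + 1) - 1)       # |delta_j| <= alpha

# DFT matrix F_{jk} = exp(-2*pi*i*j*k/(2N+1)) and its NUDFT perturbation
F = np.exp(-2j * np.pi * np.outer(idx, idx) / (2 * N + 1))
Ft = np.exp(-2j * np.pi * np.outer(idx + delta, idx) / (2 * N + 1))

phi = 1 - np.cos(np.pi * alpha) + np.sin(np.pi * alpha)
# Discrete Kadec bound: ||F - Ft||_2 <= phi_alpha * sqrt(2N+1)
assert np.linalg.norm(F - Ft, 2) <= phi * np.sqrt(2 * N + 1)
# Condition number bound: kappa_2(Ft) <= (1 + phi_alpha) / (1 - phi_alpha)
assert np.linalg.cond(Ft) <= (1 + phi) / (1 - phi)
```

For $\alpha=0.2$ the bound gives $\kappa_2(\tilde{F})\leq(1+\varphi_\alpha)/(1-\varphi_\alpha)\approx 8.04$; the observed condition number for a random perturbation is typically much smaller, as the bound covers the worst case.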
\subsection{Marcinkiewicz--Zygmund inequalities}\label{sec:MZ} \Cref{eq:discretekadec} can also be rephrased as an MZ inequality at perturbed nodes. While more general MZ inequalities are available in the literature (see~\cite{marzoseip}) and will be discussed in~\cref{sec:MZbounded}, the inequalities here have explicit bounds. \begin{corollary}\label{cor.MZkadec} Let $\tilde{x}_{-N},\ldots,\tilde{x}_N$ be $\alpha$-perturbed nodes with $0\leq\alpha<1/4$. The following MZ inequalities hold: \begin{equation} \label{eq:MZinequalities} \frac{(1\!-\!\varphi_\alpha)^2}{2\pi} \int_{-\pi}^\pi \! |q(x)|^2 dx \leq \sum_{j=-N}^{N} \! \frac{\abs{q(\tilde{x}_{j})}^2}{2N\!+\!1} \leq \frac{(1\!+\!\varphi_\alpha)^2}{2\pi} \int_{-\pi}^\pi \! |q(x)|^2 dx, \quad q \!\in\! \mathcal{T}_N, \end{equation} where $\varphi_\alpha$ is given in~\cref{eq.Kadec}. \end{corollary} \begin{proof} Let $q(x) = \sum_{k=-N}^N c_k e^{ikx}=\sum_{k=-N}^N c_{-k} e^{-ikx}$ be a trigonometric polynomial in $\mathcal{T}_N$. Note that we have the matrix equation $\tilde{F}\underline{c} = \underline{q}$ with $\tilde{F}$ in~\cref{eq:NUDFTmatrix}, where $\underline{c} = (c_N, \ldots, c_{-N})^\top$ and $\underline{q} = (q(\tilde{x}_{-N}), \ldots, q(\tilde{x}_N))^\top$. The result follows from~\cref{eq:discretekadec}, the definition of $\|\underline{q}\|_2^2$, and Parseval's Theorem, i.e., $\norm{\underline{c}}_2^2 = \tfrac{1}{2\pi}\int_{-\pi}^\pi |q(x)|^2dx$. \end{proof} The MZ inequalities in~\cref{eq:MZinequalities} confirm that the 2-norm Lebesgue constant, denoted by $\tilde{\Lambda}_{2N+1}^{(2)}$ (see~\cite[sect.~3.4.1]{austin}), is bounded in $N$ when $0 \leq \alpha < 1/4$. In particular, we have $\tilde{\Lambda}_{2N+1}^{(2)} \leq 1/(1-\varphi_\alpha)$ for all $0 \leq \alpha < 1/4$ and integer $N$. The MZ inequalities are also useful for studying quadrature weights and the rate of convergence of quadrature rules and interpolation at perturbed nodes~\cite{grochenig,chui,konstantin}.
By calculating the explicit constants in these bounds, we can give explicit error estimates, as opposed to asymptotic convergence bounds (see~\cref{sec:interpolation,sec:quad}). \section{Interpolation at unevenly spaced samples}\label{sec:interpolation} At equally spaced nodes, trigonometric interpolation enjoys rapid convergence. Suppose that $f:[-\pi,\pi)\rightarrow \mathbb{C}$ is a continuous periodic function and that $q_N(x)$ is its trigonometric interpolant in $\mathcal{T}_N$ at the equally spaced nodes $x_j = jh$ for $-N\leq j\leq N$, where $h = 2\pi/(2N+1)$. If $f$ is $\sigma\geq 1$ times differentiable and $f^{(\sigma)}$ is of bounded variation $V$ on $[-\pi,\pi)$, then~\cite[Thm.~4.2]{wright2015extension} \begin{equation} \|f - q_N\|_{L^\infty} \leq \frac{2V}{\pi \sigma N^{\sigma}}, \label{eq:diffFunctions} \end{equation} where throughout this paper, $\norm{\cdot}_{L^r} = \norm{\cdot}_{L^r([-\pi,\pi))}$ is the standard $L^r([-\pi,\pi))$ norm induced by the Lebesgue measure on $[-\pi,\pi)$. If instead $f$ is analytic with $|f(z)| \leq M$ in the open strip of half-width $\rho_0>0$ along the real axis in the complex plane, then one has exponential convergence~\cite[Thm.~4.2]{wright2015extension}: \begin{equation} \|f - q_N\|_{L^\infty} \leq \frac{4Me^{-\rho_0 N}}{e^{\rho_0}-1}. \label{eq:analyticFunctions} \end{equation} One wonders if trigonometric interpolants at perturbed equally spaced nodes enjoy the same kind of rapid convergence. When $\alpha<1/4$, we can show that one loses at most a factor of $\sqrt{N}$ in the convergence rate via the MZ inequality in~\cref{eq:MZinequalities}. \begin{theorem}\label{thm.interpolatemax} Let $f: [-\pi,\pi) \rightarrow \C$ be a continuous periodic function and $0 \leq \alpha < 1/4$.
For any $\alpha$-perturbed nodes of size $2N+1$, if $\tilde{q}_N \in \mathcal{T}_N$ is the corresponding interpolant of $f$, then \[ \norm{f-\tilde{q}_N}_{L^\infty} \leq \left(1 + \frac{\sqrt{2N+1}}{\cos(\pi\alpha) - \sin(\pi\alpha)}\right) \min_{q\in\mathcal{T}_N} \norm{f-q}_{L^\infty}. \] \end{theorem} \begin{proof} First, note that for any trigonometric polynomial $q \in \mathcal{T}_N$ we have by Young's inequality and Parseval's Theorem that \[ \norm{q}_{L^\infty} = \frac{1}{2\pi}\norm{p * q}_{L^\infty} \leq \frac{1}{2\pi} \norm{p}_{L^2} \norm{q}_{L^2} = \sqrt{\frac{2N+1}{2\pi}} \norm{q}_{L^2}, \] where $(p\ast q)(x) = \int_{-\pi}^\pi p(x-s)q(s)ds$ and $p(x) = \sum_{j=-N}^N e^{ijx}$. If $\tilde{x}_{-N},\ldots,\tilde{x}_N$ are the perturbed nodes and $q$ is any polynomial in $\mathcal{T}_N$, then, by~\cref{eq:MZinequalities}, we have \begin{align*} \norm{f-\tilde{q}_N}_{L^\infty} &\leq \norm{f-q}_{L^\infty} + \norm{q-\tilde{q}_N}_{L^\infty} \\ &\leq \norm{f-q}_{L^\infty} + \sqrt{\frac{2N+1}{2\pi}} \norm{q-\tilde{q}_N}_{L^2} \\ &\leq \norm{f-q}_{L^\infty} + \frac{1}{1-\varphi_\alpha} \sqrt{\sum_{j=-N}^N \abs{q(\tilde{x}_j) - f(\tilde{x}_j)}^2} \\ &\leq \left(1 + \frac{1}{1-\varphi_\alpha} \sqrt{2N+1}\right) \norm{f-q}_{L^\infty}, \end{align*} where $\varphi_\alpha$ is given by~\cref{eq.Kadec}. The result follows by selecting $q$ to be the best fit polynomial in $\mathcal{T}_N$ to $f$ in $\|\cdot\|_{L^\infty}$ and noting that $1-\varphi_\alpha = \cos(\pi\alpha) - \sin(\pi\alpha)$. \end{proof} Using~\cref{eq:diffFunctions} and~\cref{eq:analyticFunctions} along with the same smoothness conditions, we find that for any $0\leq \alpha<1/4$ and $\rho <\rho_0$, we have \begin{equation} \|f - \tilde{q}_N\|_{L^\infty} =\mathcal{O}(N^{1/2-\sigma}), \quad \|f - \tilde{q}_N\|_{L^\infty} = \mathcal{O}(e^{-\rho N}), \label{eq:FirstConvergenceInterpolation} \end{equation} respectively. 
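These convergence rates are visible in practice. The following NumPy sketch (our illustration; the paper's examples use Chebfun) builds the interpolant by solving the $(2N+1)\times(2N+1)$ linear system $q(\tilde{x}_j)=f(\tilde{x}_j)$, which is well-conditioned for $\alpha<1/4$ by~\cref{cor:NUDFTconditioning}, and observes an error near machine precision for an analytic $f$ already at a modest $N$:

```python
import numpy as np

N, alpha = 32, 0.2
h = 2 * np.pi / (2 * N + 1)
rng = np.random.default_rng(0)
x = (np.arange(-N, N + 1) + alpha * (2 * rng.random(2 * N + 1) - 1)) * h

f = lambda t: np.exp(np.sin(t))           # analytic and 2*pi-periodic
k = np.arange(-N, N + 1)
A = np.exp(1j * np.outer(x, k))           # A[j, k] = exp(i * k * x_j)
c = np.linalg.solve(A, f(x))              # interpolation conditions q(x_j) = f(x_j)

xx = np.linspace(-np.pi, np.pi, 1000)
qN = np.exp(1j * np.outer(xx, k)) @ c     # evaluate the interpolant q_N
assert np.max(np.abs(qN - f(xx))) < 1e-8  # tiny uniform error already at N = 32
```

A dense solve costs $\mathcal{O}(N^3)$ operations and is used here only for transparency; the Krylov-based inverse NUDFT discussed in~\cref{sec:NUDFT} achieves the same result in $\mathcal{O}(N\log N)$ operations.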
This means that with unevenly spaced samples, one can safely use $\tilde{q}_N$ as a surrogate for $f$, and the price to pay for perturbed samples is minimal when $\alpha$ is small. In~\cref{thm.interpolateoptimal}, we show that the convergence rates in~\cref{eq:FirstConvergenceInterpolation} can be slightly improved. Convergence results of interpolants at $\alpha$-perturbed nodes in the $L^2$ norm are also possible to derive from the MZ inequality in~\cref{eq:MZinequalities} by using~\cite[Thm~2.2]{grochenig}. One can easily do unevenly spaced trigonometric interpolation in Chebfun via the \texttt{chebfun.interp1} command. For example, below we compute a trigonometric polynomial that interpolates $\cos(x)$ at $\alpha$-perturbed nodes: \begin{verbatim} N = 1e2; alpha = 0.1; h = 2*pi/(2*N+1); tilde_x = (-N:N)'*h + alpha*(2*rand(2*N+1,1)-1)*h; qN = chebfun.interp1(tilde_x, cos(tilde_x), 'periodic', [-pi pi]); \end{verbatim} Instead of using an optimal complexity algorithm based on the NUDFT, this code uses the so-called trigonometric barycentric formula~\cite{wright2015extension}. This approach has some advantages as there is a convenient way to update the formula when a new interpolation node becomes available. \section{Quadrature at unevenly spaced samples}\label{sec:quad} Another important computational task is approximating integrals from knowledge of function samples at unevenly spaced nodes. While the quadrature rule at equally spaced nodes is the trapezoidal rule that enjoys perfect stability, positive quadrature weights, and rapid convergence, we wonder how much perturbing the nodes changes things. We are particularly interested in the signs of the quadrature weights~\cite{filbir,mhaskar}, the boundedness of the absolute sum of the quadrature weights~\cite{austin,mhaskar}, and the convergence rate of quadrature rules at perturbed nodes~\cite{austintrefethen,trefethenweide,grochenig}. 
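Before examining the signs of the weights, note that exactness on $\mathcal{T}_N$ is equivalent to the moment equations $\sum_{j=-N}^N \tilde{w}_j e^{ik\tilde{x}_j} = 2\pi\delta_{k0}$ for $-N\leq k\leq N$, since the integrals of the basis modes $e^{ikx}$ over $[-\pi,\pi)$ vanish except at $k=0$. The following NumPy sketch (our illustration, not the paper's code) computes the weights this way and checks two consequences: at unperturbed nodes the trapezoidal weights are recovered, and at perturbed nodes the weights still sum to $2\pi$.

```python
import numpy as np

def quad_weights(x):
    """Weights making sum_j w_j q(x_j) exact for all q in T_N, found by
    solving the moment equations sum_j w_j exp(i*k*x_j) = 2*pi*delta_{k0}."""
    N = (len(x) - 1) // 2
    k = np.arange(-N, N + 1)
    A = np.exp(1j * np.outer(k, x))      # A[k, j] = exp(i * k * x_j)
    rhs = np.zeros(2 * N + 1, dtype=complex)
    rhs[N] = 2 * np.pi                   # the k = 0 moment: integral of 1 is 2*pi
    return np.linalg.solve(A, rhs).real  # the weights are real for real nodes

N = 20
h = 2 * np.pi / (2 * N + 1)
x_equi = np.arange(-N, N + 1) * h
assert np.allclose(quad_weights(x_equi), h)    # trapezoidal rule recovered

rng = np.random.default_rng(0)
x = x_equi + 0.2 * (2 * rng.random(2 * N + 1) - 1) * h
w = quad_weights(x)
assert np.isclose(w.sum(), 2 * np.pi)          # exactness for q = 1
```

The coefficient matrix of the moment equations is the conjugate transpose of $\tilde{F}$ in~\cref{eq:NUDFTmatrix}, so for $\alpha<1/4$ the solve inherits the condition number bound of~\cref{cor:NUDFTconditioning}.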
\subsection{Are quadrature weights at perturbed nodes nonnegative?} It is highly desirable to use quadrature rules for which all the weights are nonnegative, as such rules are perfectly stable. The absolute condition number of the integral $\int_{-\pi}^\pi f(x) dx$ is $2\pi$, while the absolute condition number of the quadrature rule in~\cref{eq:quadratureRule} is $\sum_{j=-N}^N |\tilde{w}_j|$. Since~\cref{eq:quadratureRule} is exact, we know that $\smash{\sum_{j=-N}^N \tilde{w}_j = \int_{-\pi}^\pi 1 dx = 2\pi}$ and hence the quadrature rule is perfectly stable when $\tilde{w}_j\geq 0$ for $-N\leq j\leq N$. Since the trapezoidal rule has all nonnegative weights, one might expect that quadrature rules at $\alpha$-perturbed nodes also have nonnegative weights for sufficiently small $\alpha>0$. Surprisingly, we find that this is not the case and now give a quadrature rule at $\alpha$-perturbed nodes with a negative weight for any fixed $\alpha>0$. For any $\alpha>0$ and integer $N$, consider the following perturbed grid where the nodes are maximally perturbed in an alternating fashion (see~\cref{fig:NegativeWeights}): \begin{equation} \tilde{x}_j = \begin{cases} (j-\alpha)h, & -N\leq j \leq -1, \ j \text{ even},\\ (j+\alpha)h, & -N\leq j \leq -1, \ j \text{ odd},\\ 0, & j = 0,\\ (j-\alpha)h, & 1\leq j \leq N, \ j \text{ odd},\\ (j+\alpha)h, & 1\leq j \leq N, \ j \text{ even},\\ \end{cases} \qquad h = \frac{2\pi}{2N+1}.
\label{eq:perturbedQuadrature}
\end{equation}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[black,thick] (0,0)--(9,0);
\draw[black,thick] (0,-.2)--(0,.2);
\filldraw (.5,0) circle (2pt);
\filldraw (1.5,0) circle (2pt);
\filldraw (2.5,0) circle (2pt);
\filldraw (3.5,0) circle (2pt);
\filldraw[red] (4.5,0) circle (2pt);
\filldraw (5.5,0) circle (2pt);
\filldraw (6.5,0) circle (2pt);
\filldraw (7.5,0) circle (2pt);
\filldraw (8.5,0) circle (2pt);
\draw[black,thick] (9,-.25) arc (-15:15:1);
\draw [stealth-, thick] (.5-.333,.2) -- (.5,.2);
\draw [stealth-, thick] (1.5+.333,.2) -- (1.5,.2);
\draw [stealth-, thick] (2.5-.333,.2) -- (2.5,.2);
\draw [stealth-, thick] (3.5+.333,.2) -- (3.5,.2);
\draw [stealth-, thick] (5.5-.333,.2) -- (5.5,.2);
\draw [stealth-, thick] (6.5+.333,.2) -- (6.5,.2);
\draw [stealth-, thick] (7.5-.333,.2) -- (7.5,.2);
\draw [stealth-, thick] (8.5+.333,.2) -- (8.5,.2);
\filldraw[red] (.5-.333,0) circle (2pt);
\filldraw[red] (1.5+.333,0) circle (2pt);
\filldraw[red] (2.5-.333,0) circle (2pt);
\filldraw[red] (3.5+.333,0) circle (2pt);
\filldraw[red] (5.5-.333,0) circle (2pt);
\filldraw[red] (6.5+.333,0) circle (2pt);
\filldraw[red] (7.5-.333,0) circle (2pt);
\filldraw[red] (8.5+.333,0) circle (2pt);
\node at (0,-.5) {$-\pi$};
\node at (9,-.5) {$\pi$};
\node at (4.5,-.3) {$0$};
\node at (5.5,-.3) {$h$};
\node at (6.5,-.3) {$2h$};
\node at (7.5,-.3) {$3h$};
\node at (8.5,-.3) {$4h$};
\node at (3.5,-.3) {$-h$};
\node at (2.5,-.3) {$-2h$};
\node at (1.5,-.3) {$-3h$};
\node at (.5,-.3) {$-4h$};
\node at (4.5,.5) {$h=\frac{2\pi}{9}$};
\end{tikzpicture}
\caption{For any $\alpha>0$ and sufficiently large $N$, a quadrature rule at $\alpha$-perturbed nodes with negative weights can be constructed by maximally perturbing equally spaced nodes in an alternating fashion, as depicted.}
\label{fig:NegativeWeights}
\end{figure}
Associated with the perturbed nodes in~\cref{eq:perturbedQuadrature} is an exact quadrature rule such that
\begin{equation}
\int_{-\pi}^\pi q(x) dx= \sum_{j=-N}^N \tilde{w}_j q(\tilde{x}_j), \qquad q\in\mathcal{T}_N. \label{eq:perturbedQuadratureRule} \end{equation} We now show that $\tilde{w}_0$ is negative if $N$ is taken to be sufficiently large. This means that the quadrature rule in~\cref{eq:perturbedQuadratureRule} is unfortunately not perfectly stable. \begin{theorem}\label{thm.negative} For any $\alpha > 0$ and sufficiently large even integer $N$, the quadrature weight $\tilde{w}_0$ in~\cref{eq:perturbedQuadratureRule} associated with the perturbed nodes in~\cref{eq:perturbedQuadrature} is negative. \end{theorem} \begin{proof} Let $\ell_0$ be the trigonometric Lagrange polynomial for $\tilde{x}_0$ associated with the perturbed nodes in~\cref{eq:perturbedQuadrature}, i.e.,~\cite{henrici} \begin{equation} \ell_0(x) = \prod_{j=-N,j\neq 0}^{N}\frac{\sin\left(\frac{x-\tilde{x}_j}{2}\right)}{\sin\left(\frac{\tilde{x}_0-\tilde{x}_j}{2}\right)}. \label{eq:lagrange} \end{equation} Note that $\ell_0\in\mathcal{T}_N$ satisfies $\ell_0(\tilde{x}_j) = 0$ for $-N\leq j\leq N$ and $j\neq 0$ as well as $\ell_0(\tilde{x}_0) = 1$. Therefore, using the fact that~\cref{eq:perturbedQuadratureRule} is an exact quadrature rule, we have \[ \tilde{w}_{0} = \sum_{j=-N}^N \tilde{w}_j \ell_0(\tilde{x}_j) = \int_{-\pi}^\pi \ell_0(x) dx = \frac{2\pi}{2N+1} \sum_{j=-N}^{N} \ell_0( jh ), \] where in the last equality we used the fact that the trapezoidal rule at equally spaced nodes is exact. Now, we have \begin{equation*} \tilde{w}_{0} = \frac{2\pi}{2N+1}\left(1 + \sum_{\substack{j=-N\\j \neq 0}}^{N} \ell_0(jh)\right). 
\end{equation*}
Since $N$ is an even integer, by~\cref{lem.Ly} and the fact that $(2/\pi)x\leq \sin(x)\leq x$ for $x \in [0,\pi/2]$, we have
\begin{equation*}
\begin{aligned}
\sum_{\substack{j=-N\\j \neq 0}}^{N} \ell_0(jh) \leq -2\sum_{j=1}^N \frac{\sin\left(\alpha h/2\right)}{\sin\left((jh + \alpha h)/2\right)} \leq -\frac{4}{\pi} \sum_{j=1}^N \frac{\alpha h}{jh + \alpha h} \rightarrow -\infty
\end{aligned}
\end{equation*}
as $N \rightarrow \infty$. Hence, for a sufficiently large even $N$, we have $\tilde{w}_0 < 0$.
\end{proof}
\cref{thm.negative} tells us that a perturbation of equally spaced nodes, no matter how small, can cause the corresponding quadrature rule to have a negative weight. However, when $\alpha$ is small, $N$ must be extremely large for this to happen for the perturbed nodes in~\cref{eq:perturbedQuadrature}. In fact, the proof of~\cref{thm.negative} requires that $N$ is even and so large that
\begin{equation*}
1 -\frac{4\alpha}{\pi} \sum_{j=1}^N\frac{1}{j+\alpha} < 0,
\end{equation*}
which is ensured when $N\geq \exp({\tfrac{\pi}{4\alpha}+\tfrac{1}{2}})$.\footnote{Note that $\sum_{j =1}^{N}1/(j+\alpha) > \sum_{j=1}^N 1/(j+1) >\log(N) - 1/2$.} Thus, while quadrature at $\alpha$-perturbed nodes can have negative weights when $\alpha=1/5$ and $N$ is an even integer $\geq 84$, for $\alpha = 10^{-2}$ negative weights are only ensured when $N$ is even and $N>2.12\times 10^{34}$! One may wonder if there are more devilish perturbations than in~\cref{eq:perturbedQuadrature} that guarantee negative weights for much smaller $N$. In~\cref{section:lowerbound}, we show that there are no such perturbations, in the sense that $\log(N) = \Omega(1/\alpha)$ is required for a quadrature rule at $\alpha$-perturbed nodes to have a negative weight. The analysis in~\cref{section:lowerbound} is quite explicit. For example, we know that when $\alpha=10^{-2}$, all quadrature rules at $\alpha$-perturbed nodes with $N<9.4\times 10^6$ have positive weights.
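The $\alpha=1/5$ case is easy to check numerically. The following NumPy sketch (our own check, not part of the paper's codes) builds the alternating maximal perturbation in~\cref{eq:perturbedQuadrature} with $N=84$ and solves for the quadrature weights:

```python
import numpy as np

# Alternating maximal perturbation from the construction above, alpha = 1/5.
# For even N >= 84 the central weight w_0 is negative.
N, alpha = 84, 0.2
h = 2 * np.pi / (2 * N + 1)
j = np.arange(-N, N + 1)
shift = np.where(j % 2 == 0, alpha, -alpha)   # +alpha for even j, -alpha for odd j
shift[j < 0] *= -1                            # mirrored signs on the negative side
shift[j == 0] = 0.0                           # the node at 0 is unperturbed
x = (j + shift) * h

F = np.exp(-1j * np.outer(x, j))              # square NUDFT matrix
e0 = np.zeros(2 * N + 1)
e0[N] = 1.0
w = np.linalg.solve(F.T, 2 * np.pi * e0).real
print(w[N] < 0)                               # True: the central weight is negative
print(w.sum())                                # still 2*pi, as exactness demands
```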
So, when $\alpha$ is small, such quadrature rules are perfectly stable for all practical $N$; for larger perturbations, however, numerical instability becomes a genuine concern.
\subsection{Bounding the absolute sum of the quadrature weights}\label{sec:BoundedAbsoluteSum}
To analyze the stability of quadrature rules at $\alpha$-perturbed nodes further, we examine the absolute sum of the quadrature weights, as this is the absolute condition number of the quadrature rule. Since we are considering exact quadrature rules, there is a close connection between the NUDFT matrix, $\tilde{F}$, in~\cref{eq:NUDFTmatrix} and the quadrature weights. In fact, since $e^{ijx}\in\mathcal{T}_N$ for $-N\leq j\leq N$, the quadrature rule must integrate them exactly. This means that the following linear system must be satisfied:
\begin{equation}
\tilde{F}^\top \underline{\tilde{w}} = 2\pi \underline{e}_0, \qquad \underline{e}_0 = \left(0,\ldots,0,1,0,\ldots,0\right)^\top,
\label{eq:linearSystemForWeights}
\end{equation}
where $\underline{\tilde{w}} = (\tilde{w}_{-N},\ldots,\tilde{w}_{N})^\top$ is the vector of quadrature weights. Consequently, the quadrature weights associated with $\alpha$-perturbed nodes are simple to compute:
\begin{verbatim}
N = 1e3; alpha = 0.1; h = 2*pi/(2*N+1);
tilde_x = (-N:N)'*h + alpha*(2*rand(2*N+1,1)-1)*h;
tF = exp(-1i*tilde_x*(-N:N));
e0 = zeros(2*N+1,1); e0(N+1)=1;
w = tF'\(2*pi*e0);
\end{verbatim}
There are also $\mathcal{O}(N\log N)$ algorithms for solving the linear system in~\cref{eq:linearSystemForWeights} as it is equivalent to an inverse NUDFT (type I)~\cite{ruiz2018nonuniform}. We can also use the results in~\cref{sec:kadec} to get a bound on $\sum_{j=-N}^N|\tilde{w}_j|$.
\begin{theorem}\label{thm.bounded}
Let $0 \leq \alpha < 1/4$ and let $N$ be a positive integer.
For any quadrature rule at $\alpha$-perturbed nodes that is exact for $\mathcal{T}_N$, we have
\begin{equation*}
\sum_{j=-N}^{N} \abs{\tilde{w}_j} \leq \frac{2\pi}{\cos(\pi\alpha) - \sin(\pi\alpha)},
\end{equation*}
where $\tilde{w}_{-N},\ldots,\tilde{w}_N$ are the quadrature rule's weights.
\end{theorem}
\begin{proof}
Let $q(x) = \sum_{k=-N}^N c_k e^{ikx}$ be the trigonometric polynomial in $\mathcal{T}_N$ such that
\[
q(\tilde{x}_{j}) = \text{sgn}(\tilde{w}_j), \qquad -N \leq j \leq N,
\]
where $\text{sgn}(w) = -1$ if $w < 0$ and $\text{sgn}(w) = 1$ for $w\geq 0$. Then, by~\cref{cor.MZkadec} and H\"{o}lder's inequality, we have
\begin{align*}
\sum_{j=-N}^{N} \abs{\tilde{w}_j} &= \sum_{j=-N}^{N} \tilde{w}_jq(\tilde{x}_j) = \int_{-\pi}^\pi q(x) dx \leq \int_{-\pi}^\pi |q(x)| dx \leq \sqrt{2\pi}\left(\int_{-\pi}^\pi |q(x)|^2 dx\right)^{1/2} \\
&\leq \frac{\sqrt{2\pi}}{\cos(\pi\alpha) - \sin(\pi\alpha)} \sqrt{\frac{2\pi}{2N+1}\sum_{j=-N}^{N}\abs{q(\tilde{x}_{j})}^2} = \frac{2\pi}{\cos(\pi\alpha) - \sin(\pi\alpha)}.
\end{align*}
\end{proof}
While quadrature rules at $\alpha$-perturbed nodes can have negative weights for any $\alpha>0$,~\cref{thm.bounded} shows that they are relatively stable for small $\alpha$. For example, when $\alpha = 1/5$, while there is a negative weight for even $N\geq 84$ with the nodes in~\cref{eq:perturbedQuadrature}, the condition number of the quadrature rule is $\leq 4.53\times (2\pi)$.
\subsection{Convergence rates of quadrature rules at perturbed nodes}\label{sec:QuadratureConvergence}
By P\'{o}lya's celebrated theorem~\cite{polya},~\cref{thm.bounded} tells us that quadrature rules at $\alpha$-perturbed nodes with $0\leq \alpha <1/4$ converge whenever the integrand is continuous.
That is, for any continuous periodic function $f:[-\pi,\pi)\rightarrow \mathbb{C}$ we have
\[
\tilde{I}_N = \sum_{j=-N}^N \tilde{w}_j f( \tilde{x}_j) \rightarrow I = \int_{-\pi}^\pi f(x)dx \quad \text{ as } \quad N\rightarrow\infty
\]
provided that $|\tilde{x}_j - jh| < \alpha h$ for $-N\leq j\leq N$ and $0\leq \alpha<1/4$, where $h = 2\pi/(2N+1)$. In fact, together with~\cite[Thm.~2.7]{grochenig} and~\cref{cor.MZkadec}, we have that
\[
\abs{\tilde{I}_N(f) - I(f)} \leq \frac{4\pi}{\cos(\pi\alpha) - \sin(\pi\alpha)} \min_{q\in \mathcal{T}_N} \norm{f-q}_{L^\infty}
\]
for any continuous periodic function $f$. Therefore, if $f$ is $\sigma\geq 1$ times differentiable and $f^{(\sigma)}$ is of bounded variation $V$ on $[-\pi,\pi)$, then provided $0\leq \alpha<1/4$ (see~\cref{eq:diffFunctions}),
\begin{equation}
\abs{\tilde{I}_N(f) - I(f)} \leq \frac{8V\pi}{(\cos(\pi\alpha) - \sin(\pi\alpha))\sigma} N^{-\sigma}.
\label{eq:QuadDiffFunctions}
\end{equation}
Alternatively, if $f$ is analytic with $|f(z)|\leq M$ in the open strip of half-width $\rho_0>0$ along the real axis in the complex plane, we have (see~\cref{eq:analyticFunctions})
\[
\abs{\tilde{I}_N(f) - I(f)} \leq \frac{16 M \pi}{\cos(\pi\alpha) - \sin(\pi\alpha)} \frac{e^{-\rho_0N}}{e^{\rho_0}-1}.
\]
Theorem~1.1 of~\cite{austintrefethen} also provides convergence rates on $\abs{\tilde{I}_N(f) - I(f)}$ for quadrature rules associated with perturbed nodes. While their convergence rates hold for any $0\leq \alpha<1/2$, our rates are a strict improvement for differentiable functions whenever $0\leq \alpha<1/4$. In particular, when $\alpha$ is close to $1/4$, the convergence rate in~\cref{eq:QuadDiffFunctions} is almost an order of $N$ improvement.
\section{Further consequences of MZ inequalities}\label{sec:MZbounded}
MZ inequalities are closely connected to the convergence of interpolation and quadrature.
Up to now, we have only explored MZ inequalities using the $L^2$ norm, even though MZ inequalities in other $L^r$ norms are also useful for $1\leq r\leq \infty$. The beauty of focusing on the $L^2$-based MZ inequalities is that we derived explicit constants in the bounds (see~\cref{eq:GeneralMZ}). Nevertheless, $L^r$-based MZ inequalities are extensively studied, and a significant amount is known at perturbed nodes~\cite{ortega,marzoseip,lubinsky}.
\begin{proposition}\label{prop:NoMZinequality}
Fix $1\leq r < \infty$ and constants $C_1,C_2>0$. For any $0<\alpha<1/2$ such that $\alpha\geq \min\{1/(2r), (r-1)/(2r)\}$, there exists an integer $N$ and a set of $\alpha$-perturbed nodes $\tilde{x}_{-N},\ldots,\tilde{x}_N$ such that the following inequalities do not hold:
\begin{equation}
C_1 \int_{-\pi}^\pi |q(x)|^r dx \leq \frac{1}{2N+1} \sum_{j=-N}^{N} \abs{q(\tilde{x}_{j})}^r \leq C_2 \int_{-\pi}^\pi |q(x)|^r dx.
\label{eq:GeneralMZ}
\end{equation}
Conversely, if $\alpha< \min\{1/(2r), (r-1)/(2r)\}$, then there exist constants $C_1,C_2>0$ (depending on $r$) such that~\cref{eq:GeneralMZ} holds for all $\alpha$-perturbed nodes and all $q\in\mathcal{T}_N$.
\end{proposition}
\begin{proof}
A proof of this statement can be found in~\cite[Thm.~1.1]{marzoseip} and~\cite[Thm.~5]{ortega}.
\end{proof}
When $r = 2$,~\cref{eq:GeneralMZ} are the MZ inequalities found in~\cref{eq:MZinequalities} and we know that one can take $C_1 = (1-\varphi_\alpha)^2/(2\pi)$ and $C_2 = (1+\varphi_\alpha)^2/(2\pi)$.~\Cref{prop:NoMZinequality} tells us that~\cref{cor.MZkadec} is sharp in the sense that for any $\alpha\geq 1/4$ the constants $C_1$ and $C_2$ must depend on $N$. As $r$ varies over $1\leq r<\infty$, the value of $\min\{1/(2r), (r-1)/(2r)\}$ attains its maximum value of $1/4$ at $r = 2$. This tells us that MZ inequalities cannot help us extend our results in this paper to the $1/4\leq\alpha<1/2$ regime.
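To illustrate the $r=2$ case numerically, the sketch below (our own check; the one-sided constant $(\cos\pi\alpha-\sin\pi\alpha)^2/(2\pi)$ is the one used in the proof of~\cref{thm.bounded}) compares the discrete mean of $|q|^2$ at randomly $\alpha$-perturbed nodes with the $L^2$ norm computed via Parseval's theorem:

```python
import numpy as np

# r = 2 MZ check at alpha-perturbed nodes, alpha = 0.2 < 1/4 (a sketch).
rng = np.random.default_rng(1)
N, alpha = 64, 0.2
h = 2 * np.pi / (2 * N + 1)
k = np.arange(-N, N + 1)
x = k * h + alpha * (2 * rng.random(2 * N + 1) - 1) * h

c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)
q = np.exp(1j * np.outer(x, k)) @ c        # q(x_j) = sum_k c_k e^{i k x_j}

discrete = np.mean(np.abs(q) ** 2)         # (1/(2N+1)) * sum_j |q(x_j)|^2
parseval = np.sum(np.abs(c) ** 2)          # (1/(2 pi)) * int |q|^2
Cl = (np.cos(np.pi * alpha) - np.sin(np.pi * alpha)) ** 2
print(discrete >= Cl * parseval)           # True: the lower MZ inequality holds
print(discrete / parseval)                 # an O(1) ratio, independent of N
```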
\modifyadd{On the other hand, the failure of MZ inequalities when $1/4\leq\alpha<1/2$ does not seem to cause problems for interpolation and quadrature. In fact, for each $N$, one can find a set of $1/4$-perturbed nodes that do not satisfy an MZ inequality for any $L^r$ norm but the associated exact quadrature rule has a bounded absolute sum of its weights~\cite{marzoseip}. Hence, while $\alpha = 1/4$ is a theoretically important threshold for NUDFT conditioning and MZ inequalities, we find no indication that it is a threshold for interpolation and quadrature, which agrees with the findings in~\cite{austintrefethen}.} However, we can use~\cref{prop:NoMZinequality} to improve the rates of convergence of interpolants at perturbed nodes when $0\leq \alpha<1/4$, and derive a convergence rate that depends on $\alpha$ for differentiable functions.
\begin{theorem}\label{thm.interpolateoptimal}
Let $f:[-\pi,\pi)\rightarrow \mathbb{C}$ be a continuous periodic function and $0 \leq \alpha < 1/4$. For any set of $2N+1$ $\alpha$-perturbed nodes, let $\tilde{q}_N\in\mathcal{T}_N$ be the corresponding interpolant of $f$. Then, for any $\alpha<\alpha_0 < 1/2$, we have
\begin{equation}\label{eq.interpoptimal}
\norm{f-\tilde{q}_N}_{L^\infty} = \mathcal{O}(N^{2\alpha_0}) \min_{q \in \mathcal{T}_N} \norm{f-q}_{L^\infty}.
\end{equation}
\end{theorem}
\begin{proof}
Without loss of generality, assume $\alpha < (1-2\alpha_0)/2$.\footnote{If $\alpha \geq (1-2\alpha_0)/2$, then one can pick some $\alpha_1$ such that $\alpha < \alpha_1 < \alpha_0$ and $\alpha < (1-2\alpha_1)/2$. Since~\cref{eq.interpoptimal} holds if $\alpha_0$ is replaced by $\alpha_1$, it also holds for $\alpha_0$.} Let $r = 1/(2\alpha_0)$ so that~\cref{eq:GeneralMZ} holds and let $r' = 1/(1-2\alpha_0)$ so that $1/r + 1/r' = 1$.
For any $q \in \mathcal{T}_N$, by Young's inequality, we have
\begin{equation*}
\norm{q}_{L^\infty} = \frac{1}{2\pi}\norm{p * q}_{L^\infty} \leq \frac{1}{2\pi} \norm{p}_{L^{r'}} \norm{q}_{L^r} \leq C (2N+1)^{1/r} \norm{q}_{L^r},
\end{equation*}
where $C$ is a constant independent of $N$ and $p(x) = \sum_{j=-N}^N e^{ijx}$. Here, we also used the fact that the $L^{r'}$ norm of $p$ is $\mathcal{O}(N^{1/r})$~\cite[Lem. 2.1]{anderson}. By~\cref{eq:GeneralMZ}, we have
\begin{align*}
\norm{f-\tilde{q}_N}_{L^\infty} &\leq \norm{f-q}_{L^\infty} + \norm{q-\tilde{q}_N}_{L^\infty} \\
&\leq \norm{f-q}_{L^\infty} + C (2N+1)^{1/r} \norm{q-\tilde{q}_N}_{L^r} \\
&\leq \norm{f-q}_{L^\infty} + C C_1^{-1/r} \left(\sum_{j=-N}^N \abs{q(\tilde{x}_j) - f(\tilde{x}_j)}^r\right)^{1/r} \\
&\leq \left(1+ C C_1^{-1/r} (2N+1)^{1/r}\right) \norm{f-q}_{L^\infty},
\end{align*}
where $C_1>0$ is independent of $N$. The result follows by selecting $q$ to be the closest polynomial in $\mathcal{T}_N$ to $f$ in $\|\cdot\|_{L^\infty}$.
\end{proof}
By controlling the Lebesgue constant, Austin and Trefethen proved that if $f$ has $\sigma$ derivatives and $\sigma > 4\alpha$, then $\norm{f-\tilde{q}_N}_{L^\infty}= \mathcal{O}(N^{4\alpha-\sigma})$ and $\abs{\tilde{I}_N(f) - I(f)} = \mathcal{O}(N^{4\alpha-\sigma})$~\cite{austintrefethen}. They also conjectured that the factor $4\alpha$ in the exponent can be improved to $2\alpha$. \Cref{thm.interpolateoptimal} proves that the convergence rate for interpolants is arbitrarily close to the conjectured rate: when $f$ has $\sigma$ derivatives, we know that $\min_{q \in \mathcal{T}_N} \norm{f-q}_{L^\infty} = \mathcal{O}(N^{-\sigma})$, so~\cref{eq.interpoptimal} yields $\norm{f-\tilde{q}_N}_{L^\infty} = \mathcal{O}(N^{2\alpha_0-\sigma})$ for any $\alpha_0>\alpha$. The convergence rate for quadrature rules is even better than conjectured (see~\cref{sec:QuadratureConvergence}).
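The rates above are easy to observe in practice. As a quick numerical illustration (our own example: $f(x) = 1/(2+\cos x)$, which is analytic in a strip and has $I(f) = 2\pi/\sqrt{3}$), quadrature at randomly $\alpha$-perturbed nodes converges geometrically:

```python
import numpy as np

# Geometric convergence of quadrature at alpha-perturbed nodes for an
# analytic integrand (a sketch; f and the parameters are illustrative).
rng = np.random.default_rng(2)
alpha = 0.2
I_exact = 2 * np.pi / np.sqrt(3.0)        # integral of 1/(2 + cos x) over [-pi, pi)
for N in (5, 10, 20):
    h = 2 * np.pi / (2 * N + 1)
    k = np.arange(-N, N + 1)
    x = k * h + alpha * (2 * rng.random(2 * N + 1) - 1) * h
    F = np.exp(-1j * np.outer(x, k))
    e0 = np.zeros(2 * N + 1)
    e0[N] = 1.0
    w = np.linalg.solve(F.T, 2 * np.pi * e0).real
    err = abs(w @ (1.0 / (2.0 + np.cos(x))) - I_exact)
    print(N, err)                         # errors decay roughly like (2 - sqrt(3))^N
```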
\section{Oversampling with unevenly spaced samples}\label{sec:oversample}
Most of the results in the paper so far assumed that $0\leq \alpha<1/4$, as MZ inequalities may not hold for all $\alpha$-perturbed nodes when $\alpha\geq 1/4$ (see~\cref{prop:NoMZinequality}). In this section, we briefly discuss oversampling, where one assumes that there are more unevenly spaced samples than the number of coefficients in the desired trigonometric approximant. A small amount of oversampling allows us to \modifyadd{use MZ inequalities to} extend our results to the $1/4\leq \alpha <1/2$ regime. The oversampling regime is studied in approximation theory~\cite{grochenig}, including the higher dimensional setting~\cite{mhaskar}. Here, we restate existing results in terms of our notation and present sharper statements in one dimension. Let $0\leq \alpha<1/2$ and let $\varepsilon>0$ be an oversampling rate. Suppose one has $\alpha$-perturbed samples of $f$ at $\tilde{x}_{-N},\ldots,\tilde{x}_N$. Since we are oversampling, we want to find a trigonometric polynomial of degree $\leq n$ (see~\cref{eq:trigpoly}) that ``fits'' the samples of the function as best as possible, where $n$ is an integer such that $n \leq \lfloor(1-\varepsilon) N\rfloor$. Since there is typically no trigonometric polynomial of degree $\leq n$ that interpolates all the samples, a common approach is to compute an approximant by solving a least-squares problem. To do this, we first construct a tall-skinny NUDFT matrix given by
\begin{equation}
\tilde{F}_{jk} = e^{-i\tilde{x}_jk}, \qquad -N\leq j\leq N, \quad -n\leq k\leq n,
\label{eq:RectangularNUDFT}
\end{equation}
and then solve $\tilde{F}\underline{c} = \underline{f}$ for the vector $\underline{c}$ in the least-squares sense, where $\underline{f} = \left(f(\tilde{x}_{-N}),\ldots,f(\tilde{x}_N)\right)^\top$. The least-squares approximant to $f$ can be formed as
\begin{equation}
\tilde{q}_n(x) = \sum_{k=-n}^n c_{k}e^{-ikx}.
\label{eq:LSQ}
\end{equation}
For notational consistency, we continue to assume that there is an odd number of samples; however, all results in this section hold for an even number of samples too.
\subsection{Oversampled Marcinkiewicz--Zygmund inequalities}
In~\cref{sec:kadec}, we saw that the MZ inequality is closely related to the NUDFT condition number via Parseval's theorem. Below, we state the MZ inequality in the oversampling case and then discuss its consequences for the condition number of $\tilde{F}$ in~\cref{eq:RectangularNUDFT}. Let $0\leq \alpha <1/2$ and $\varepsilon>0$. Suppose that $\tilde{x}_{-N},\ldots,\tilde{x}_N$ are $\alpha$-perturbed nodes and $n = \lfloor(1-\varepsilon) N\rfloor$. Then, it is easy to verify that
\begin{equation*}
\liminf_{R \rightarrow \infty} \left(\liminf_{N \rightarrow \infty} \frac{\min_{x \in [-\pi,\pi)} \abs{\{\tilde{x}_j\}_{j=-N}^{N} \cap (x,x+R/n)}}{R} \right) \geq \frac{1}{(1-\varepsilon)\pi}.
\end{equation*}
When this condition is satisfied, Ortega-Cerd\`a and Saludes~\cite{ortega} proved that the MZ inequality in~\cref{eq:GeneralMZ} holds for any $1 \leq r < \infty$ and $q\in\mathcal{T}_n$, where the constants $C_1, C_2 > 0$ are independent of $N$, $n$, and $\tilde{x}_j$, but possibly depend on $\alpha$, $\varepsilon$, and $r$.
\subsection{The condition number of a rectangular NUDFT matrix}
Let $\tilde{F}$ be the NUDFT matrix given in~\cref{eq:RectangularNUDFT}. Then, for any vector $\underline{c} \in \C^{2n+1}$, we can rewrite~\cref{eq:GeneralMZ} for $r = 2$ using Parseval's theorem to obtain
\begin{equation*}
2\pi C_1 \norm{\underline{c}}_2^2 \leq \frac{1}{2N+1} \norm{\tilde{F}\underline{c}}_2^2 \leq 2\pi C_2 \norm{\underline{c}}_2^2,
\end{equation*}
where $C_1,C_2>0$ are constants that only depend on $\alpha$ and $\varepsilon$. This immediately gives us a bound on the spectral norm of $\tilde{F}$, i.e., $\|\tilde{F}\|_2 \leq \sqrt{2\pi C_2(2N+1)}$.
We also have \begin{equation*} \norm{\tilde{F}^\dagger \underline{f}}_2^2 \leq \frac{1}{2\pi C_1(2N+1)} \norm{\tilde{F}\tilde{F}^\dagger \underline{f}}_2^2 \leq \frac{1}{2\pi C_1(2N+1)} \norm{\underline{f}}_2^2, \qquad \underline{f} \in \C^{2N+1}, \end{equation*} where $\tilde{F}^\dagger$ is the Moore--Penrose pseudoinverse of $\tilde{F}$. Here, the second inequality follows from the fact that $\|\tilde{F}\tilde{F}^\dagger\|_2 = 1$. We find that $\|\tilde{F}^\dagger\|_2 \leq 1/\sqrt{2\pi C_1(2N+1)}$ and hence, the condition number of $\tilde{F}$ is bounded independently of $N$ and $n$: \[ \kappa_2(\tilde{F}) = \norm{\tilde{F}}_2 \norm{\tilde{F}^\dagger}_2 \leq \sqrt{\frac{C_2}{C_1}}. \] We find this a remarkable bound because as soon as there is a small amount of oversampling, the condition number of $\tilde{F}$ can be bounded independently of $N$ and $n$, even in the $1/4\leq \alpha<1/2$ regime. In contrast, we believe that when $N = n$, the condition number of $\tilde{F}$ grows slowly with $N$ when $1/4\leq \alpha<1/2$. \subsection{Convergence of least-squares approximation at unevenly spaced samples} Let $f:[-\pi,\pi)\rightarrow \mathbb{C}$ be a continuous periodic function and let $\tilde{q}_n$ be the least-squares approximant in~\cref{eq:LSQ}. Then, by definition of $\tilde{q}_n$, we have that $\sum_{j=-N}^N \abs{f(\tilde{x}_j) - \tilde{q}_n(\tilde{x}_j)}^2 \leq \sum_{j=-N}^N \abs{f(\tilde{x}_j) - q(\tilde{x}_j)}^2$ for any $q \in \mathcal{T}_n$. 
Hence, we find that \begin{align*} &\sum_{j=-N}^N \abs{q(\tilde{x}_j) - \tilde{q}_n(\tilde{x}_j)}^2 \leq \sum_{j=-N}^N (\abs{q(\tilde{x}_j) - f(\tilde{x}_j)} + \abs{f(\tilde{x}_j) - \tilde{q}_n(\tilde{x}_j)})^2 \\ &= \sum_{j=-N}^N (\abs{q(\tilde{x}_j) - f(\tilde{x}_j)}^2 + \abs{f(\tilde{x}_j) - \tilde{q}_n(\tilde{x}_j)}^2 + 2\abs{q(\tilde{x}_j) - f(\tilde{x}_j)} \abs{f(\tilde{x}_j) - \tilde{q}_n(\tilde{x}_j)}) \\ &\leq 4 \sum_{j=-N}^N \abs{q(\tilde{x}_j) - f(\tilde{x}_j)}^2, \end{align*} where the last inequality follows from the Cauchy--Schwarz inequality. By a similar argument to the proof of~\cref{thm.interpolatemax}, we find that \[ \left\|f - \tilde{q}_n\right\|_{L^\infty} \leq \left(1 + \sqrt{\frac{2(2n+1)}{C_1\pi}}\right) \min_{q\in\mathcal{T}_n} \|f - q\|_{L^\infty}. \] The reader can now use their favorite bounds on $\min_{q\in\mathcal{T}_n} \|f - q\|_{L^\infty}$ (see~\cref{eq:diffFunctions,eq:analyticFunctions}). \subsection{Nonexact quadrature rules} There also exists a quadrature rule that is exact for all $q \in \mathcal{T}_n$ such that~\cite[Thm.~2.7]{grochenig} \[ \abs{\tilde{I}_n(f) - I(f)} \leq 2\pi \left(1 + \sqrt{\frac{C_2}{C_1}}\right) \min_{q\in \mathcal{T}_n} \norm{f-q}_{L^\infty}, \quad \tilde{I}_n(f) = \sum_{j=-N}^N \tilde{w}_j f(\tilde{x}_j) \] for all continuous periodic $f$. In Lemma~3.6 of~\cite{grochenig}, it is shown that the quadrature weights $\tilde{w}_{j}$ are the least-squares solution of the underdetermined system $\tilde{F}^\top \underline{\tilde{w}} = 2\pi \underline{e}_0$, where $\underline{e}_0$ is the zero vector except at its central entry is $1$. \begin{comment} \begin{figure}\label{fig.oversample} \begin{center} \includegraphics[scale=0.32]{oversample.pdf} \caption{Numerically computed quadrature rules at $\alpha$-perturbed nodes defined in~\cref{eq:perturbedQuadrature}. (a) The absolute sum of the quadrature weights when $M_N = N$. (b) The absolute sum of the quadrature weights when $M_N = \ceil{1.05N}$. 
(c) The minimum of the quadrature weights when $M_N = N$. (d) The minimum of the quadrature weights when $M_N = \ceil{1.05N}$.}
\end{center}
\end{figure}
\end{comment}
From numerical experiments, we observe that while oversampling usually avoids negative weights and reduces the absolute sum of the weights, the behavior is much less predictable as $n$ and $N$ vary. We know that oversampling by a minimal amount gives us good rates of convergence for any $\alpha < 1/2$. A natural question to ask is how much we need to oversample before we can guarantee that the quadrature weights are non-negative. Again, this problem was studied in \cite{mhaskar} in higher dimensions. A translation of their one-dimensional technique states that if $n\leq N/\pi$, then for every $N$ there exists a quadrature rule, $\tilde{I}_n$, that is exact for $q \in \mathcal{T}_n$ and has non-negative weights.
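As an illustration of the oversampling discussion (our own sketch, with illustrative parameters): with $20\%$ oversampling, the least-squares fit in~\cref{eq:LSQ} remains accurate and the rectangular NUDFT matrix stays well conditioned even for $\alpha = 0.45$, well beyond the $1/4$ threshold:

```python
import numpy as np

# Oversampled least squares at alpha-perturbed nodes with alpha = 0.45.
rng = np.random.default_rng(3)
N, n, alpha = 60, 48, 0.45                # n = (1 - eps) N with eps = 0.2
h = 2 * np.pi / (2 * N + 1)
x = np.arange(-N, N + 1) * h + alpha * (2 * rng.random(2 * N + 1) - 1) * h
k = np.arange(-n, n + 1)
F = np.exp(-1j * np.outer(x, k))          # tall-skinny NUDFT, (2N+1) x (2n+1)

f = np.cos(3 * x) + np.sin(x)             # a test function that lies in T_n
c, *_ = np.linalg.lstsq(F, f.astype(complex), rcond=None)

xx = np.linspace(-np.pi, np.pi, 2001)
q = np.exp(-1j * np.outer(xx, k)) @ c     # the least-squares approximant
err = np.max(np.abs(q - (np.cos(3 * xx) + np.sin(xx))))
s = np.linalg.svd(F, compute_uv=False)
print(err)                                # ~ machine precision: f is recovered
print(s[0] / s[-1])                       # condition number, bounded in N and n
```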
https://arxiv.org/abs/1507.00827
Estimating the number of communities in networks by spectral methods
Community detection is a fundamental problem in network analysis with many methods available to estimate communities. Most of these methods assume that the number of communities is known, which is often not the case in practice. We study a simple and very fast method for estimating the number of communities based on the spectral properties of certain graph operators, such as the non-backtracking matrix and the Bethe Hessian matrix. We show that the method performs well under several models and a wide range of parameters, and is guaranteed to be consistent under several asymptotic regimes. We compare this method to several existing methods for estimating the number of communities and show that it is both more accurate and more computationally efficient.
\section{Introduction} \input{introduction} \section{Preliminaries} \input{preliminaries} \section{Spectral estimates of the number of communities} \input{methods} \section{Consistency} \input{consistency} \section{Numerical results}\label{sec: simulation} \input{simulation} \section{Discussion}\label{sec: discussion} \input{discussion} \subsection{Estimating $K$ from the non-backtracking matrix}\label{sec: NB} Under the SBM, the informative eigenvalues of the non-backtracking matrix are real-valued and separated from the bulk of radius $\|\tilde{B}\|^{1/2}$ \cite{Bordenave.et.al2015non-backtracking}. Therefore we can estimate $K$ by counting the number of real eigenvalues of $\tilde{B}$ that are at least $\|\tilde{B}\|^{1/2}$. We denote this method by NB (for non-backtracking). As shown by numerical results in Section~\ref{sec: simulation}, this estimate of $K$ also works under the DCSBM. When the network is balanced (communities have similar sizes and edge densities), NB performs well; however, the accuracy of NB drops if the communities are unbalanced in either size or edge density. Computationally, since $\tilde{B}$ is not symmetric, computing the eigenvalues of $\tilde{B}$ is more demanding for large networks. Thus we focus instead on the Bethe Hessian matrix, which is symmetric. \subsection{Estimating $K$ from the Bethe Hessian matrix}\label{sec: BH} The number of communities corresponds to the number of negative eigenvalues of $H(r)$; the challenge is in choosing an appropriate value of $r$. It was argued in \cite{Saade&Krzakala&Zdeborova2014} that when $r=\|\tilde{B}\|^{1/2}$, the informative eigenvalues of $H(r)$ are negative, while the bulk are positive; by \cite{Krzakala.et.al2013spectral}, $\|\tilde{B}\|$ can be approximated by $\tilde{d}$ from \eqref{eq: non-bactracking spectral norm}. Following these results, we first choose $r$ to be $r_m = \tilde{d}^{1/2}$ and denote the corresponding method by BHm. 
Simulations show that using $r=r_m$ and $r=\|\tilde{B}\|^{1/2}$ produces similar results; we choose $r = r_m$ because computing $r_m$ is less demanding than computing $\|\tilde{B}\|^{1/2}$. Another choice of $r$ is $r_a = \sqrt{(d_1+\cdots+d_n)/n}$, which was proposed in \cite{Saade&Krzakala&Zdeborova2014} for recovering the community structure under the SBM; we denote the corresponding method by BHa. We have found that when the network is balanced, NB, BHm, and BHa perform similarly; when the network is unbalanced, BHa produces better results. Both BHm and BHa tend to underestimate the number of communities, especially when the network is unbalanced. In that setting, some informative eigenvalues of $H(r)$ become positive, although they may still be far from the bulk. Based on this observation, we correct BHm and BHa by also using positive eigenvalues of $H(r)$ that are much closer to zero than to the bulk. Namely, we sort the eigenvalues of $H(r)$ in non-increasing order $\rho_1\ge \rho_2\ge\cdots\ge\rho_n$, and estimate $K$ by
\begin{equation}\label{eq: K correction}
\hat{K} = \max\{k: t\rho_{n-k+1} \le \rho_{n-k} \},
\end{equation}
where $t>0$ is a tuning parameter. Note that if $\rho_{n-k_0+1}<0$ then $\hat{K} \ge k_0$ because $t\rho_{n-k_0+1} \le \rho_{n-k_0+1} \le \rho_{n-k_0}$; therefore the number of negative eigenvalues of $H(r)$ is always bounded above by $\hat{K}$. Heuristically, if the bulk follows the semi-circular law and $\rho_{n-k}\ge 0$ is given, then the probability that $0 \le \rho_{n-k+1} \le \rho_{n-k}/t$ is less than $1/t$. When $1/t$ is sufficiently small, we may suspect that $\rho_{n-k+1}$ is an informative eigenvalue. In practice we find that $t\in [4,6]$ works well; we will set $t=5$ for all computations in this paper. Simulations show that $\hat{K}$ performs well, especially for unbalanced networks. The resulting methods are denoted by BHmc and BHac, respectively. We will also use BH to refer to all the methods that use the Bethe Hessian matrix.
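The basic BH estimate is only a few lines of code. The following sketch (ours, in Python rather than the code used for the experiments) runs BHm on a toy graph with two planted communities, namely two $20$-cliques joined by a single edge:

```python
import numpy as np

# BHm on a toy two-community graph: two 20-cliques joined by one edge.
m = 20
A = np.kron(np.eye(2), np.ones((m, m)) - np.eye(m))
A[0, m] = A[m, 0] = 1.0                 # bridge edge between the cliques

d = A.sum(axis=1)
r = np.sqrt(np.sum(d ** 2) / np.sum(d) - 1.0)      # r_m = tilde{d}^{1/2}
H = (r ** 2 - 1.0) * np.eye(2 * m) - r * A + np.diag(d)

K_hat = int(np.sum(np.linalg.eigvalsh(H) < 0))     # number of negative eigenvalues
print(K_hat)   # 2: one informative (negative) eigenvalue per community
```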
\subsection{The non-backtracking matrix}\label{sec: nonbacktracking}
Let $m$ be the number of edges in the undirected network. To construct the non-backtracking matrix $B$, we represent the edge between node $i$ and node $j$ by two directed edges, one from $i$ to $j$ and the other from $j$ to $i$. The $2m \times 2m$ matrix $B$, indexed by these directed edges, is defined by
\begin{equation*}
B_{i \ra j, k \ra l} = \left\{
\begin{array}{ll}
1 & \hbox{if } j = k \text{ and } i \neq l \\
0 & \hbox{otherwise.}
\end{array}
\right.
\end{equation*}
It is well-known \cite{Angel&Friedman&Hoory2007}\cite{Krzakala.et.al2013spectral} that the spectrum of $B$ consists of $\pm 1$ and the eigenvalues of a $2n\times 2n$ matrix
\begin{equation*}
\tilde{B} = \left(
\begin{array}{cc}
0_n & D - I_n \\
-I_n & A \\
\end{array}
\right).
\end{equation*}
Here $0_n$ is the $n\times n$ matrix of all zeros, $I_n$ is the $n\times n$ identity matrix, and $D = \mathrm{diag}(d_i)$ is the $n \times n$ diagonal matrix with the degrees $d_i$ on the diagonal. It was observed in \cite{Krzakala.et.al2013spectral} that if a network has $K$ communities then the $K$ largest eigenvalues in magnitude of $\tilde{B}$ are real-valued and well separated from the bulk, which is contained in a circle of radius $\|\tilde{B}\|^{1/2}$. We will refer to these $K$ eigenvalues as the informative eigenvalues of $\tilde{B}$. It was also shown in \cite{Krzakala.et.al2013spectral} that the spectral norm of the non-backtracking matrix is approximated by
\begin{equation}\label{eq: non-bactracking spectral norm}
\tilde{d} = \Big(\sum_{i=1}^n d_i\Big)^{-1} \Big(\sum_{i=1}^n d_i^2\Big) - 1.
\end{equation}
For the special case of the SBM, \cite{Bordenave.et.al2015non-backtracking} proved that the leading eigenvalues of $\tilde{B}$ concentrate around the non-zero eigenvalues of $\bar{A}$ and that the bulk is contained in a circle of radius $\|\tilde{B}\|^{1/2}$, and used the corresponding leading eigenvectors to recover the community labels.
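For a concrete illustration (our own sketch), one can form $\tilde{B}$ for a small graph and count the real eigenvalues that exceed $\tilde{d}^{1/2}$, as the NB method does; on two $20$-cliques joined by a single edge this recovers $K=2$:

```python
import numpy as np

# Informative eigenvalues of tilde{B} on a toy two-community graph.
m = 20
A = np.kron(np.eye(2), np.ones((m, m)) - np.eye(m))
A[0, m] = A[m, 0] = 1.0                     # bridge edge between the two cliques
n = 2 * m
d = A.sum(axis=1)
Bt = np.block([[np.zeros((n, n)), np.diag(d) - np.eye(n)],
               [-np.eye(n), A]])            # the 2n x 2n matrix tilde{B}
d_tilde = np.sum(d ** 2) / np.sum(d) - 1.0  # approximates ||tilde{B}||
mu = np.linalg.eigvals(Bt)
is_real = np.abs(mu.imag) < 1e-8
K_hat = int(np.sum(is_real & (mu.real >= np.sqrt(d_tilde))))
print(K_hat)   # 2: two real eigenvalues outside the bulk circle
```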
\subsection{The Bethe Hessian matrix}\label{sec: bethe hessian} The Bethe Hessian matrix is defined as \begin{equation}\label{eq: bethe hessian} H(r) = (r^2 -1) I - r A + D, \end{equation} where $r \in \R$ is a parameter. In graph theory, the determinant of $H(r)$ is the Ihara-Bass formula for the graph zeta function. It vanishes if $r$ is an eigenvalue of the non-backtracking matrix \cite{Hashimoto1989,Bass1992,Angel&Friedman&Hoory2007}. Saade et al.\ \cite{Saade&Krzakala&Zdeborova2014} used the Bethe Hessian for community detection. Under the SBM, they argued that the best choice of $r$ is $r_c=\pm\sqrt{\l_n}$, with the sign determined by whether the network is assortative or disassortative; for a more general network, the corresponding choice is $|r_c| = \|\tilde{B}\|^{1/2}$. For assortative sparse networks with $K$ communities and bounded $\l_n$, they showed that the $K$ eigenvalues of $H(r_c)$ whose corresponding eigenvectors encode the community structure are negative, while the bulk eigenvalues of $H(r_c)$ are positive. Thus, the number of negative eigenvalues of $H(r_c)$ corresponds to the number of communities. We will refer to these negative eigenvalues of $H(r_c)$ as informative eigenvalues. \subsection{Synthetic networks} To generate test case networks, we fix the label vector $c \in \{1,...,K\}^n$ so that $c_i = k$ if $n(\pi_1+\cdots+\pi_{k-1}) < i \le n(\pi_1+\cdots+\pi_k)$, where $\pi_k$ denotes the proportion of nodes in community $k$ and the empty sum is taken to be zero. The label matrix $Z \in \R^{n\times K}$ encodes $c$ by representing each node with a row of $K$ elements, exactly one of which is equal to 1 and the rest are equal to 0: $Z_{ik}={\mathbf 1}(c_i=k)$. Let $\tilde{P}$ be a $K \times K$ matrix with diagonal $w=(w_1,...,w_K)$ and off-diagonal entries $\beta$, and let $M = Z\tilde{P}Z^T$. Under the stochastic block model, we generate $A$ according to an edge probability matrix $\bar{A} = \E A$ proportional to $M$; the average degree $\l_n$ is controlled by appropriately rescaling $M$. 
The parameter $w$ controls the relative edge densities within communities, and $\beta$ controls the out-in probability ratio. Smaller values of $\beta$ and larger values of $\lambda_n$ make the problem easier. For the DCSBM, we generate the degree parameters $\theta_i$ from a distribution that takes two values, $\P(\theta=1)=1-\gamma$ and $\P(\theta=0.2)=\gamma$. The parameter $\gamma$ controls the fraction of ``hubs'' (the high-degree nodes, which make up a $1-\gamma$ fraction of the network), and setting $\gamma = 0$ gives back the regular SBM. Given $\theta=(\theta_1,\dots,\theta_n)$, the edges are generated independently with probabilities $\bar{A} = \E A$ proportional to $\mathrm{diag}(\theta)M\mathrm{diag}(\theta)$, where $\mathrm{diag}(\theta)$ is a diagonal matrix with $\theta_i$'s on the diagonal. The number of nodes is set to $n=1200$, the out-in probability ratio $\beta = 0.2$, and we vary the average degree $\l_n$, weights $w$, and community sizes. We consider three different values for the number of communities, $K = 2$, 4, and 6. For each setting, we generate $200$ replications of the network and record the accuracy, defined as the fraction of times a method correctly estimates the true number of communities $K$. The methods NCV and VLH require a pre-specified set of $K$ values to choose from; we use the set $\{1,2,...,8\}$ for synthetic networks and $\{1,2,...,15\}$ for real-world networks. We start by varying the average degree $\l_n$, which controls the overall difficulty of the problem, and keeping all community sizes equal. Figure~\ref{fig:sparsity} shows the performance of all methods when all edge density weights are also equal, $w_i=1$ for all $1\le i \le K$; in Figure~\ref{fig:unbalanced w}, $w=(1,2)$ for $K=2$, $w=(1,1,2,3)$ for $K=4$, and $w=(1,1,1,1,2,3)$ for $K=6$, resulting in communities with varying edge density. In all figures, the top row corresponds to the SBM ($\gamma = 0$) and the bottom row to the DCSBM ($\gamma=0.9$, which means that 10\% of nodes are hubs). 
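The generator just described can be sketched in a few lines of Python (the function name and interface below are ours, and the rescaling to a target expected average degree $\l_n$ is our reading of the text, not the original code):

```python
import numpy as np

def generate_network(n, pi, w, beta, lam, gamma=0.0, rng=None):
    """Generate an adjacency matrix from the (DC)SBM described above.
    pi: community proportions; w: within-community weights; beta:
    off-diagonal entries of P_tilde; lam: target expected average
    degree; gamma: fraction of nodes with degree parameter 0.2
    (gamma = 0 gives the plain SBM)."""
    rng = np.random.default_rng(rng)
    K = len(pi)
    sizes = np.round(n * np.asarray(pi)).astype(int)
    sizes[-1] = n - sizes[:-1].sum()
    c = np.repeat(np.arange(K), sizes)          # label vector
    P = np.full((K, K), beta, dtype=float)
    np.fill_diagonal(P, w)
    M = P[np.ix_(c, c)]                         # = Z P_tilde Z^T
    theta = np.where(rng.random(n) < gamma, 0.2, 1.0)
    W = np.outer(theta, theta) * M              # diag(theta) M diag(theta)
    # rescale so the expected average degree equals lam
    W *= lam * n / (W.sum() - np.trace(W))
    W = np.clip(W, 0, 1)
    A = np.triu((rng.random((n, n)) < W).astype(int), 1)
    return A + A.T, c
```

With $n=600$, equal community proportions, and $\l_n=15$, the realized average degree concentrates tightly around the target.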
In general, we see that when everything is balanced (Figure~\ref{fig:sparsity}), all spectral methods perform fairly similarly and outperform both cross-validation (NCV) and the BIC-type criterion (VLH). Also, for larger $K$, and especially under the DCSBM, the corrected versions are slightly better than the uncorrected ones, and the best Bethe Hessian based methods are better than the non-backtracking estimator. For networks with equal size communities but different edge densities within communities (Figure~\ref{fig:unbalanced w}), cross-validation performs poorly, while VLH improves relative to the balanced case. For larger $K$ the spectral methods also become distinguishable, with all BH methods dominating NB, and the corrected versions providing improvement. Overall, BHac is the best spectral method, comparable to VLH for the SBM, and best overall for the DCSBM, where VLH is not applicable. \begin{figure}[!ht] \centering \includegraphics[trim=110 10 100 10,clip,width=1\textwidth]{sparsity_comparison}\\ \caption{The accuracy of estimating $K$ as a function of the average degree. All communities have equal sizes, and $w_i=1$ for all $1\le i \le K$.} \label{fig:sparsity} \end{figure} \begin{figure}[!ht] \centering \includegraphics[trim=110 10 100 10,clip,width=1\textwidth]{unbalanced_w_comparison}\\ \caption{The accuracy of estimating $K$ as a function of the average degree. All communities have equal sizes; $w=(1,2)$ for $K=2$, $w=(1,1,2,3)$ for $K=4$, and $w=(1,1,1,1,2,3)$ for $K=6$.} \label{fig:unbalanced w} \end{figure} Communities of different sizes present a challenge for community detection methods in general, and the presence of relatively small communities makes the problem of estimating $K$ difficult. To test the sensitivity of all the methods to this factor, we change the proportions of nodes falling into each community, setting $\pi_1 = r/K$, $\pi_K = (2-r)/K$, and $\pi_i = 1/K$ for $2 \le i \le K-1$, and varying $r$ in the range $[0.2, 1]$. 
As $r$ increases, the community sizes become more similar, and are all equal when $r=1$. Figure \ref{fig:community_ratio} shows the performance of all methods as a function of $r$. The top row corresponds to the SBM ($\gamma = 0$), the bottom row to the DCSBM ($\gamma=0.9$), and the within-community edge density parameters are $w_i = 1$ for all $1 \le i \le K$. Here we see that VLH is less sensitive to $r$ than the spectral methods, but unfortunately it is not available under the DCSBM. Cross-validation is still dominated by spectral methods except for very small values of $r$, where all methods perform poorly. The corrections still provide a slight improvement for Bethe Hessian based methods, although all spectral methods perform fairly similarly in this case. \begin{figure}[!ht] \centering \includegraphics[trim=110 10 100 10,clip,width=1\textwidth]{community_ratio_comparison}\\ \caption{The accuracy of estimating $K$ as a function of the community-size ratio $r$: $\pi_1 = r/K$, $\pi_K = (2-r)/K$, and $\pi_i=1/K$ for $2 \le i \le K-1$. In all plots, $w_i=1$ for $1\le i \le K$; the average degrees are $\l_n = 10$ (left), $15$ (middle), and $20$ (right).} \label{fig:community_ratio} \end{figure} \subsection{Real world networks} Finally, we test the proposed methods on several popular network datasets. In the college football network \cite{Girvan&Newman2002}, nodes represent 115 US college football teams, and edges represent the games played in 2000. Communities are the 12 conferences that the teams belong to. The political books network \cite{Newman2006}, compiled around 2004, consists of 105 books about US politics; an edge indicates that the two books were frequently purchased together on Amazon. Communities are ``conservative'', ``liberal'', or ``neutral'', labelled manually based on content. The dolphin network \cite{Lusseau2003} is a social network of 62 dolphins, with edges representing social interactions, and communities based on a split which happened after one dolphin left the group. 
Similarly, the karate club network \cite{Zachary1977} is a social network of 34 members of a karate club, with edges representing friendships, and communities based on a split following a dispute. Finally, the political blog network \cite{Adamic05}, collected around 2004, consists of blogs about US politics, with edges representing web links, and communities are manually assigned as ``conservative'' or ``liberal''. For this dataset, as is commonly done in the literature, we only consider its largest connected component of 1222 nodes. Table~\ref{Table:real data} shows the estimated number of communities in these networks. All spectral methods estimate the correct number of communities for dolphins and the karate club, and do a reasonable job for the college football and political books data. For political blogs, all methods except NCVdc and VLH estimate a much larger number of communities, suggesting that these estimates correspond to smaller sub-communities with more uniform degree distributions that have been previously detected by other authors. We also found that the VLH method was highly dependent on the tuning parameter, and the estimates of NCVbm and NCVdc varied noticeably from run to run due to their use of random partitions. \begin{table}[!ht] \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{l|ccccccccc} \hline \textbf{Dataset} & NB & BHm & BHmc & BHa & BHac & NCVbm & NCVdc & VLH & Truth\\ \hline \text{College football } & 10 & 10 & 10 & 10 & 10 & 14 & 13 & 9 & 12\\ \text{Political books } & 3 & 3 & 4 & 4 & 4 & 8 & 2 & 6 & 3\\ \text{Dolphins} & 2 & 2 & 2 & 2 & 2 & 4 & 3 & 2 & 2\\ \text{Karate club} & 2 & 2 & 2 & 2 & 2 & 3 & 3 & 4 & 2\\ \text{Political blogs} & 8 & 7 & 8 & 7 & 8 & 10 & 2 & 1 & 2\\ \hline \end{tabular} \caption{Estimates of the number of communities in real-world networks.} \label{Table:real data} \end{table}
https://arxiv.org/abs/1507.00827
Estimating the number of communities in networks by spectral methods
Community detection is a fundamental problem in network analysis with many methods available to estimate communities. Most of these methods assume that the number of communities is known, which is often not the case in practice. We study a simple and very fast method for estimating the number of communities based on the spectral properties of certain graph operators, such as the non-backtracking matrix and the Bethe Hessian matrix. We show that the method performs well under several models and a wide range of parameters, and is guaranteed to be consistent under several asymptotic regimes. We compare this method to several existing methods for estimating the number of communities and show that it is both more accurate and more computationally efficient.
https://arxiv.org/abs/1001.1406
Some experiments with integral Apollonian circle packings
Bounded Apollonian circle packings (ACP's) are constructed by repeatedly inscribing circles into the triangular interstices of a configuration of four mutually tangent circles, one of which is internally tangent to the other three. If the original four circles have integer curvature, all of the circles in the packing will have integer curvature as well. In \cite{ll}, Sarnak proves that there are infinitely many circles of prime curvature and infinitely many pairs of tangent circles of prime curvature in a primitive integral ACP. In this paper, we give a heuristic, backed up by numerical data, for the number of circles of prime curvature less than $x$, and the number of ``kissing primes,'' or {\it pairs} of tangent circles of prime curvature less than $x$, in a primitive integral ACP. We also provide experimental evidence towards a local to global principle for the curvatures in primitive integral ACPs.
\section{Introduction}\label{intro} Start with four mutually tangent circles, one of them internally tangent to the other three as in Fig.~\ref{circles}. One can inscribe into each of the curvilinear triangles in this picture a unique circle (the uniqueness follows from an old theorem of Apollonius of Perga, circa 200 BC). If one continues inscribing circles in this way, the resulting picture is called an Apollonian circle packing (ACP). A key aspect of studying such packings is to consider the radii of the circles which come up in a given ACP. However, since these radii become small very quickly, it is more convenient to study the {\it curvatures} of the circles, or the reciprocals of the radii. Studied in this way, ACP's possess the beautiful number-theoretic property that all of the circles in an ACP have integer curvature if the initial four have integer curvature. The number theory associated with these integral ACP's has been investigated extensively in \cite{Apollo}, \cite{Fuchs}, and \cite{oh}. \begin{figure}[H] \centering \includegraphics[height = 30 mm]{packingcircles} \caption{Packing Circles}\label{circles} \end{figure} Central to the results in these papers is Descartes' theorem, which says that the curvatures $(v_1,v_2,v_3,v_4)$ of any four mutually tangent circles satisfy what is called the Descartes equation, \begin{equation}\label{descartes} F(v_1,v_2,v_3,v_4)= 2(v_1^2+v_2^2+v_3^2+v_4^2)-(v_1+v_2+v_3+v_4)^2 =0, \end{equation} where a circle which is internally tangent to the other three is defined to have negative curvature (see \cite{Coxeter} for a proof). Given this formula, we may assign to every set of $4$ mutually tangent circles in an integral packing $P$ a vector $\mathbf v\in \mathbb Z^4$ of the circles' curvatures. We use Descartes' equation to express any integral ACP as an orbit of a subgroup of the orthogonal group $\textrm{O}_F(\mathbb Z)$ acting on $\mathbf v$. 
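As a quick illustration (a sketch we add here, separate from the computations described later in the paper), the Descartes form, and the two solutions for a fourth circle tangent to three given mutually tangent circles, can be checked numerically. Since $F$ is quadratic in the fourth curvature, the two roots are $v_1+v_2+v_3 \pm 2\sqrt{v_1v_2+v_1v_3+v_2v_3}$:

```python
import math

def descartes(v1, v2, v3, v4):
    """The Descartes quadratic form F; it vanishes on the curvatures
    of four mutually tangent circles."""
    return 2 * (v1**2 + v2**2 + v3**2 + v4**2) - (v1 + v2 + v3 + v4)**2

def fourth_curvatures(v1, v2, v3):
    """Solve F(v1, v2, v3, x) = 0 for x: the curvatures of the two
    circles tangent to the given three."""
    s = v1 + v2 + v3
    root = math.sqrt(v1 * v2 + v1 * v3 + v2 * v3)
    return s - 2 * root, s + 2 * root
```

For example, `fourth_curvatures(2, 2, 3)` returns $(-1, 15)$: the outer circle of curvature $-1$ and the inscribed circle of curvature $15$, and indeed $F(-1,2,2,3)=0$.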
This subgroup, called the Apollonian group, is specified in \cite{Apollo}, and we denote it by $A$. It is a group on the four generators \begin{equation}\label{gens} \small{ S_1=\left( \begin{array}{llll} -1&2&2&2\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{array} \right)\quad S_2=\left( \begin{array}{llll} 1&0&0&0\\ 2&-1&2&2\\ 0&0&1&0\\ 0&0&0&1\\ \end{array} \right)} \end{equation} \begin{equation*} \small{ S_3=\left( \begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 2&2&-1&2\\ 0&0&0&1\\ \end{array} \right)\quad S_4=\left( \begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 2&2&2&-1\\ \end{array} \right),} \end{equation*} derived by fixing three of the coordinates of $\mathbf v$ and solving $F(\mathbf v)=0$ for the fourth. Note that each $S_i$ is of order $2$ and determinant $-1$. Also, $S_i$ fixes all but the $i$th coordinate of $\mathbf v\in \mathbb Z^4$, producing a new curvature in the $i$th coordinate. In their paper \cite{Apollo}, the five authors Graham, Lagarias, Mallows, Wilks, and Yan ask several fundamental questions about the curvatures in a given integer ACP, which have mostly been resolved in \cite{Fuchs}, \cite{Fuchs1}, \cite{FuchsB}, and \cite{oh}. In particular, they make some observations about the congruence classes of curvatures which occur in any given ACP, and suggest a ``strong density'' conjecture: every sufficiently large integer satisfying these congruence conditions should appear as a curvature in the packing. In \cite{FuchsB}, the first author and Bourgain prove a weaker conjecture of Graham et al.\ of this flavor: the integers appearing as curvatures in a given ACP make up a positive fraction of $\mathbb N$. Proving the ``strong density'' conjecture would be significantly more difficult. In this paper, we use the $p$-adic description of the Apollonian orbit from \cite{Fuchs1} to formulate this conjecture precisely and provide strong experimental evidence in Section~\ref{locglobal} in support of it. 
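Concretely, $S_i$ replaces $v_i$ by twice the sum of the other three coordinates minus $v_i$. Starting from a root quadruple and applying only curvature-increasing moves, one can enumerate all curvatures below a given bound; the following Python sketch does this (it is an illustration we add here, not the Java/Matlab code used for our experiments):

```python
def curvatures_up_to(root, bound):
    """Return the sorted set of integers occurring as curvatures
    below `bound` in the packing with the given root quadruple.
    S_i replaces v_i by 2*(sum of the other three) - v_i; along
    strictly increasing moves the new curvatures only grow, so a
    branch whose new coordinate reaches `bound` can be pruned."""
    curvs = {v for v in root if v < bound}
    seen = {tuple(root)}
    stack = [tuple(root)]
    while stack:
        v = stack.pop()
        for i in range(4):
            new = 2 * (sum(v) - v[i]) - v[i]   # action of S_i
            if v[i] < new < bound:
                w = v[:i] + (new,) + v[i + 1:]
                if w not in seen:
                    seen.add(w)
                    curvs.add(new)
                    stack.append(w)
    return sorted(curvs)
```

For the {\it Bugeye} root quadruple $(-1,2,2,3)$ this produces the familiar initial curvatures $-1, 2, 3, 6, 11, 14, 15, 18, \dots$ of that packing.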
Our conjecture is specified further in the case of two different ACP's in Section~\ref{locglobal}. It is stated generally here. \begin{conj} {\bf Local to Global Principle for ACP's: }\label{LG} Let $P$ be an integral ACP and let $P_{24}$ be the set of residue classes mod $24$ of curvatures in $P$. Then there exists $X_P\in\mathbb Z$ such that any integer $x>X_P$ whose residue mod $24$ lies in $P_{24}$ is in fact a curvature of a circle in $P$. \end{conj} We note that $X_P$ above depends on the packing $P$ under consideration. In this paper, we investigate two ACP's which we call the {\it Bugeye} packing $P_B$ and the {\it Coins} packing $P_C$. These packings are represented by the right action of the Apollonian group on $(-1,2,2,3)$ and $(-11,21,24,28)$, respectively (see Fig.~\ref{bugcoin} for a picture). In the case of $P_B$, our data suggests that $X_{P_B}$ exists and is $\leq 10^6$, as we find no integers $x>10^6$ which violate the above conjecture. The data for $P_C$, however, suggests that $X_{P_C}$ exists, but that it is $>10^8$. Namely, there are integers $x>10^8$ in certain residue classes in the set $P_{24}$ which do {\it not} appear as curvatures in the packing we consider. We explain this further in Section~\ref{locglobal}. \begin{figure}[H] \centering \includegraphics[height = 65 mm]{bug} \qquad\qquad \includegraphics[height = 65 mm]{coins} \caption{{\it Bugeye} and {\it Coins} packings}\label{bugcoin} \end{figure} Another interesting problem regarding ACP's is counting circles of prime curvature in a given packing. Sarnak proves in \cite{ll} that there are infinitely many circles of prime curvature in any packing. In light of this, we give a heuristic in Section~\ref{primeACP} for the weighted prime count $\psi_P(x)$: \begin{equation} \psi_P(x) = \mathop{\sum_{a(C) \leq x}}_{a(C) \;\text{prime}} \log \bigl(a(C)\bigr) \label{intropsi} \end{equation} where $C$ is a circle in the packing $P$ and $a(C)$ is its curvature. 
This count is closely related to the number $\pi_P(x)$ of prime curvatures less than $x$ in a packing $P$ (see Remark~\ref{katsremark}). We confirm experimentally that our heuristic holds for the packings $P_B$ and $P_C$. We note that our heuristic does not depend on the chosen packing $P$ -- in fact, it yields the correct count of prime curvatures for all of the packings we checked. We summarize it in the following conjecture: \begin{conj}\label{pconj} Let $N_P(x)$ be the number of circles in a packing $P$ of curvature less than $x$, and let $\psi_P(x)$ be as in (\ref{intropsi}). Then as $x \rightarrow \infty$, $$\psi_P(x) \sim L(2, \chi_4) \cdot N_P(x)$$ where $L(2, \chi_4)= 0.9159\dots$ is the value of the Dirichlet $L$-series at $2$ with character $\chi_4(p)= 1$ for $p\equiv 1\;(4)$ and $\chi_4(p)=-1$ for $p\equiv 3\;(4)$.\end{conj} Sarnak also shows in \cite{maa} that there are infinitely many pairs of tangent circles of prime curvature (we call these {\it kissing primes}). We address the question of counting kissing primes in $P$ via the weighted sum $\psi_P^{(2)}(x)$: \begin{equation}\label{introsum2} \psi_P^{(2)}(x)\quad =\mathop{\sum_{(C,\,C')\in S}}_{a(C),\, a(C')<x}\log (a(C))\cdot \log (a(C')) \end{equation} where $S$ is the set of unordered pairs of tangent circles $(C,C')$ of prime curvature in a packing $P$, and $a(C)$ and $a(C')$ denote their respective curvatures. In this case, it is less obvious what the relation is between $\psi_P^{(2)}(x)$ and the number $\pi_P^{(2)}(x)$ of kissing prime circles in a packing $P$ both of whose curvatures are less than $x$. We therefore stick with $\psi_P^{(2)}(x)$ in our computation: \begin{conj}\label{tpconj} Let $\psi_P^{(2)}(x)$ be as in (\ref{introsum2}), and let $N_P(x)$ be the number of circles in a packing $P$ of curvature less than $x$. 
Then $$\psi_P^{(2)}(x)\sim c\cdot {L^2(2,\chi_4)}\cdot N_P(x),$$ where $c= 1.646\dots$ is given by $$2 \cdot \prod_{p\equiv 3\,(4)}\left(1-\frac{2}{p(p-1)^2}\right).$$ \end{conj} These heuristics are computed by counting primes in orbits of the Apollonian group, which is possible due to recent results of Bourgain, Gamburd, and Sarnak in \cite{BoGaSa}, as well as the recent asymptotic count of Kontorovich and Oh in \cite{oh} for the number $N_P(x)$. Our computer experiments were conducted using Java and Matlab, and the programs are available at http://www.math.princeton.edu/\~{}ksanden/ElenaKatCode.html. A brief description of our algorithm and a discussion of its running time can be found in Section~\ref{algorithm}. \subsection{Arithmetic structure of the Apollonian group and its orbit}\label{Prelim} Since all of the computations and claims in this paper concern the orbit $\mathcal O$ of the Apollonian group $A$ acting on a vector $\mathbf v\in \mathbb Z^4$, we recall the description of the orbit modulo $d$ for any integer $d$ from \cite{Fuchs1}. We use this description throughout Sections~\ref{primeACP} and \ref{locglobal}. \begin{thm}(Fuchs): \label{padicorbit} Let $\mathcal O$ be an orbit of $A$ acting on a root quadruple\footnote[2]{A root quadruple of a packing $P$ is essentially the $4$-tuple of the curvatures of the largest four circles in $P$. It is well defined and its properties are discussed in \cite{Apollo}.} of a packing, and let $\mathcal O_d$ be the reduction of this orbit modulo an integer $d>1$. Let $C=\{\mathbf{v} \not = \mathbf{0} \,| F(\mathbf{v})=0\}$ denote the cone without the origin, and let $C_d$ be $C$ over $\mathbb Z/d\mathbb Z$: $$C_d=\{\mathbf{v}\in\mathbb Z/d\mathbb Z\, | \mathbf{v}\not \equiv \mathbf{0}\, (d),\, F(\mathbf{v})\equiv 0\, (d)\}$$ Write $d=d_1d_2$ with $(d_2,6)=1$ and $d_1 = 2^n3^m$ where $n,m \geq 0$. Write $d_1=v_1v_2$ where $v_1=\gcd(24,d_1)$. 
Then \begin{itemize} \item[(i)] The natural projection $\mathcal O_d\longrightarrow \mathcal O_{d_1}\times \mathcal O_{d_2}$ is surjective. \item[(ii)]Let $\pi:C_{d_1}\rightarrow C_{v_1}$ be the natural projection. Then $\mathcal O_{d_1} = \pi^{-1}(\mathcal O_{v_1})$. \item[(iii)]The natural projection $\mathcal O_{d_2} \longrightarrow \prod_{p^r||d_2}\mathcal O_{p^{r}}$ is surjective and $\mathcal O_{p^r}= C_{p^r}$. \end{itemize} \end{thm} \noindent This result is obtained by analyzing the reduction modulo $d$ of the inverse image of the Apollonian group $A$ in the spin double cover of $SO_F$. We note that Theorem~\ref{padicorbit} implies that the orbit $\mathcal O$ of $A$ has multiplicative structure in reduction modulo $d=\prod_{p^r||d}p^r$ and that it is completely characterized by its reduction mod $24$, or by $\mathcal O_{24}$ in our notation. This explains the dependence on $P_{24}$ in Conjecture~\ref{LG}. \vspace{0.3in} \noindent {\bf Acknowledgements:} We thank Peter Sarnak, Alex Kontorovich, and Kevin Wayne for many insightful conversations and helpful suggestions. \section{Prime number theorems for ACP's}\label{primeACP} In \cite{BoGaSa}, Bourgain et al.\ construct an affine linear sieve that gives lower and upper bounds for prime and almost-prime points in the orbits of certain groups. In this section, we use their analysis to predict precise asymptotics on the number of prime curvatures less than $x$, as well as the number of pairs of tangent circles of prime curvature less than $x$ in a given primitive Apollonian packing $P$. The conditions associated with the affine linear sieve for $A$ are verified in \cite{BoGaSa}. We recall the setup below. Let $a_n=\#\{\mbox{circles of curvature }n {\mbox{ in a bounded packing }}P\}$, and note that $a_n$ is finite since the number of circles of any given radius can be bounded in terms of the area of the outermost circle. 
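Before proceeding, we note that the constants appearing in Conjectures~\ref{pconj} and \ref{tpconj} are easy to evaluate numerically. The Python sketch below (ours, for illustration) computes $L(2,\chi_4)$, which is Catalan's constant, from its alternating series, and truncates the Euler product defining $c$ at a finite bound:

```python
def L2_chi4(terms=10**5):
    """L(2, chi_4) = sum_{k>=0} (-1)^k / (2k+1)^2 (Catalan's
    constant); the alternating-series error is bounded by the
    first omitted term."""
    return sum((-1) ** k / (2 * k + 1) ** 2 for k in range(terms))

def kissing_constant(limit=10**5):
    """Truncation of c = 2 * prod_{p = 3 (4)} (1 - 2/(p(p-1)^2)),
    taken over primes p < limit (found by a simple sieve)."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, limit, p)))
    prod = 1.0
    for p in range(3, limit, 4):       # candidates congruent to 3 mod 4
        if sieve[p]:
            prod *= 1 - 2 / (p * (p - 1) ** 2)
    return 2 * prod
```

The series for $L(2,\chi_4)$ converges very quickly, and the truncated product stabilizes to several digits already for modest bounds, since the tail terms are of size $2/p^3$.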
We consider $1\leq n\leq x$ and note that the sum $$\sum_n a_n = N_P(x),$$ where $N_P(x)$ is the number of circles of curvature less than $x$ and is determined by the asymptotic formula in \cite{oh} (see Theorem~\ref{oh}). Key to obtaining our asymptotics is computing the averages of progressions mod $d$ of curvatures less than $x$, where $d>1$ ranges over positive square-free integers of suitable size. To this end, we define $$X_d= \sum_{n \equiv 0\,(d)}a_n$$ and introduce a multiplicative density function $\beta(d)$ for which $$X_d=\beta(d)\cdot N_P(x) +r(A,d)$$ where the remainder $r(A,d)$ is small according to the results in \cite{BoGaSa}. In the case of ACP's, we define $\beta(d)$ as follows. Let $\mathcal O$ be an integral orbit of $A$, and let $\mathcal O_d$ be the reduction of $\mathcal O$ modulo $d$ for a square-free positive integer $d$. Then \begin{equation}\label{betadef} \beta_j(d)=\dfrac{\#\{\mathbf v \in \mathcal O_d \, | \, v_j = 0\}} {\#\{\mathbf v \in \mathcal O_d\}} \end{equation} where $v_j$ is the $j$th coordinate of $\mathbf v$. We recall from Theorem~\ref{padicorbit} that the orbit $\mathcal O_d$ has a multiplicative structure which carries over to the function $\beta_j$ so that $$\beta_j(d)=\prod_{p|d}\beta_j(p).$$ Thus in order to evaluate $\beta_j(d)$ for arbitrary square-free $d$, we need only determine $\beta_j(p)$ for $p$ prime. This is summarized in the following lemma. \begin{lemma}\label{betathm} Let $d = \prod p_i$ be the prime factorization of a square-free integer $d>1$. Then \begin{itemize} \item[(i)] $\beta_j(d)=\prod\beta_j(p_i)$ for $1\leq j\leq 4$. 
\item[(ii)] For $p\not =2$, we have $$\beta_j(p) = \beta_k(p) \mbox{ for $1\leq j,k \leq 4.$}$$ \item[(iii)] For any orbit $\mathcal O$ there exist two coordinates, $i$ and $j$, such that $$\beta_i(2)=\beta_j(2)=1,$$ $$\beta_k(2)=0 {\mbox{ for $k\not= i, j$.}}$$ We say that the $i$th and $j$th coordinates are even throughout the orbit, while the other two coordinates are odd throughout the orbit. \item[(iv)] For $p\not=2$, let $\beta(p)=\beta_i(p)$ for $1\leq i \leq 4$. Then \begin{equation}\label{bvalue} \beta(p)= \left\{ \begin{array}{ll} \frac{1}{p+1}& \mbox{for $p\equiv 1$ mod $4$} \\ \frac{p+1}{p^2+1} & \mbox{for $p\equiv 3$ mod $4$}\\ \end{array} \right. \\ \end{equation} \end{itemize} \end{lemma} \begin{proof} The statements in (i) and (ii) follow from Theorem~\ref{padicorbit}. Let $\mathbf v$ be the root quadruple (the quadruple of the smallest curvatures) of the packing $P$. To show (iii), note that any quadruple in a primitive integral ACP consists of two even and two odd curvatures (see \cite{sand} for a discussion). Without loss of generality, assume $\mathbf v = (1,1,0,0)$ mod $2$, so $i=1$ and $j=2$ in this case. Since the Apollonian group is trivial modulo $2$, we have that every vector in the orbit is of the form $(1,1,0,0)$ mod $2$, so we have what we want. To prove (iv), we use results in \cite{Fuchs1} and recall from Theorem~\ref{padicorbit} that $\mathcal O_p$ is the cone $C_p$ for $p>3$. Thus the numerator of $\beta(p)$ is $$\#\{\mathbf{v} \in \mathcal O_p \, | \, v_j = 0\}= \#\{(v_1,v_2,v_3)\in\mathbb F_p^3-\{{\bf 0}\}\, | \, F(v_1,v_2,v_3,0)=0\}$$ where $F$ is the Descartes quadratic form and $p>3$. So the numerator counts the number of nontrivial solutions to the ternary quadratic form obtained by setting one of the $v_i$ in the Descartes form $F(\mathbf v)$ to $0$. 
Similarly we have that the denominator of $\beta(p)$ is $$\#\{\mathbf{v} \in \mathcal O_p\}=\#\{(v_1,v_2,v_3,v_4)\in\mathbb F_p^4-\{{\bf 0}\} \, | \, F(v_1,v_2,v_3,v_4)=0\}$$ where $p>3$. So the denominator counts the number of nontrivial solutions to the Descartes form. The number of nontrivial solutions to ternary and quaternary quadratic forms over finite fields is well known (see \cite{Cassels}, for example). Namely, \begin{equation}\label{orbit} \#\{(v_1,v_2,v_3,v_4)\in\mathbb F_p^4-\{{\bf 0}\} \, | \, F(v_1,v_2,v_3,v_4)=0\} = \left\{ \begin{array}{ll} p^3+p^2-p-1& \mbox{for $p\equiv 1$ mod $4$} \\ p^3-p^2+p-1& \mbox{for $p\equiv 3$ mod $4$}\\ \end{array} \right. \\ \end{equation} for $p>3$, and \begin{equation}\label{orbitbeta} \#\{(v_1,v_2,v_3)\in\mathbb F_p^3-\{{\bf 0}\} \, | \, F(v_1,v_2,v_3,0)=0\}= p^2-1\mbox{ for all odd primes $p$}. \end{equation} Combining (\ref{orbit}) and (\ref{orbitbeta}), we obtain the expression in (\ref{bvalue}) for $p>3$. For $p=3$, we compute $\mathcal O_p$ explicitly and find that there are two possible orbits of $A$ modulo $3$, which are illustrated via finite graphs in Fig.~\ref{mod31} and Fig.~\ref{mod32}. Both of these orbits consist of $10$ vectors $\mathbf v\in \mathbb Z^4$. In both orbits, for each $1\leq i\leq 4$, exactly $4$ of the $10$ vectors have $v_i=0$. Thus $\beta(3)=\frac{4}{10}=\frac{2}{5}$ as desired. \end{proof} \begin{figure}[H] \centering \includegraphics[height = 54 mm]{mod31} \caption{Orbit I modulo $3$}\label{mod31} \end{figure} \vspace{0.3in} \begin{figure}[H] \centering \includegraphics[height = 54 mm]{mod32} \caption{Orbit II modulo $3$}\label{mod32} \end{figure} In the following two sections, we use this setup to produce a precise heuristic for the number of circles of prime curvature as well as the number of pairs of tangent circles of prime curvature less than $x$ in a given ACP. 
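The counts (\ref{orbit}) and (\ref{orbitbeta}) are easy to confirm by brute force for small primes. The following Python sketch (ours, for illustration) enumerates solutions of the Descartes form over $\mathbb F_p$:

```python
from itertools import product

def beta_counts(p):
    """Count nonzero solutions of F(v) = 0 mod p with v_1 = 0
    (the numerator of beta(p)) and all nonzero solutions (the
    denominator); for odd p the count is the same whichever
    coordinate is set to zero."""
    def F(v):
        return (2 * sum(x * x for x in v) - sum(v) ** 2) % p
    sols = [v for v in product(range(p), repeat=4)
            if any(v) and F(v) == 0]
    return sum(1 for v in sols if v[0] == 0), len(sols)
```

For $p=5\equiv 1\,(4)$ this gives $24/144 = 1/(p+1)$, and for $p=7\equiv 3\,(4)$ it gives $48/300 = (p+1)/(p^2+1)$, matching (\ref{bvalue}).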
\subsection{Predicting the prime number theorem for ACP's}\label{theory} In order to compute the number of prime curvatures in an ACP as proposed in Conjecture~\ref{pconj}, we use the setup above paired with properties of the Moebius function to pick out primes in the orbit of $A$ (see (\ref{Lam})). We use the asymptotic in \cite{oh} for the number $N_P(x)$ of curvatures less than $x$ in a given packing $P$: \begin{thm}(Kontorovich, Oh): \label{oh} Given a bounded Apollonian circle packing $P$, there exists a constant $c_P>0$ which depends on the packing, such that as $x\rightarrow\infty$, $$N_P(x) \sim c_P\cdot x^{\delta},$$ where $\delta = 1.30568\dots$ is the Hausdorff dimension of the limit set of $A$ acting on hyperbolic space. \end{thm} For the purpose of our computations, we will need a slightly stronger statement of Theorem~\ref{oh}, as we will sum over each coordinate of the points in the orbit of $A$ separately. Namely, each circle in the packing is uniquely represented in the orbit $\mathcal O$ as a maximal coordinate of a vector $\mathbf v$ in $\mathbb Z^4$. We would like to know how many circles there are of curvature less than $x$ that are represented in this way in the $i$th coordinate of a vector in the orbit. We denote this by $N^{(i)}_P(x)$: \begin{equation}\label{coordinate} N^{(i)}_P(x)=\sum_{\stackrel{\mathbf v\in\mathcal O}{v_i^{\ast}\leq x}} 1, \end{equation} where $v_i^{\ast}$ denotes the $i$th coordinate of $\mathbf v\in \mathbb Z^4$ which is also a maximal coordinate of $\mathbf v$\footnote[3]{It is possible that there is more than one $i$ for which the $i$th coordinate is maximal}. To this end we have \begin{lemma}\label{sullivan} Let $N^{(i)}_P(x)$ and $N_P(x)$ be as above. Then \begin{equation} N^{(1)}_P(x)\sim N^{(2)}_P(x)\sim N^{(3)}_P(x)\sim N^{(4)}_P(x)\sim \frac{N_P(x)}{4} \end{equation} as $x$ approaches infinity. 
\end{lemma} \begin{proof} The computation in \cite{oh} of the main term in the asymptotics in Theorem~\ref{oh} relies on the Patterson-Sullivan measure on the limit set of the Apollonian group $A$. In order to prove Lemma~\ref{sullivan}, we show that this measure is invariant under transformations on the coordinates of a vector $\mathbf v$ in an orbit $\mathcal O$ of $A$. To this end, let $G$ be the group of permutations of the coordinates $v_1, \dots, v_4$ of a vector $\mathbf v\in \mathcal O$. The group $G$ is finite and its elements can be realized as $4$ by $4$ integer matrices. For example, the matrix \begin{equation*} M=\left( \begin{array}{cccc} 0&1&0&0\\ 1&0&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{array} \right) \end{equation*} switches the first and second coordinates by left action on $\mathbf v^{\textrm{T}}$. Let $$L=\langle G,A\rangle$$ be the group of $4$ by $4$ matrices generated by the Apollonian group $A$ together with $G$, and note that each element of $G$ normalizes $A$. For example, if $M$ is as above, we have $M^{-1}S_1M = S_2$, where $S_1$ and $S_2$ are as in (\ref{gens}). Similarly, any element of $G$ switching the $i$th and $j$th coordinate of $\mathbf v$ conjugates $S_i$ to $S_j$ in this way. Thus $L/A$ is finite, and so $L$ is a finite extension of $A$. In particular, this implies that the Patterson-Sullivan measure for $L$ is the same as for $A$. Since $L$ is precisely an extension of $A$ by the permutations of the coordinates of $\mathbf v$, we have that the Patterson-Sullivan measure is invariant under $G$. Together with Theorem~\ref{oh} and its proof in \cite{oh}, this proves the lemma. \end{proof} Since we are interested in counting the points in $\mathcal O$ for which $v_i^{\ast}$ is prime, we sum over points for which $v_i^{\ast}$ is $0$ modulo some square-free $d$. It is convenient to count primes in the orbit of $A$ with a logarithmic weight. 
To this end, we consider the function \[ \Lambda(n)= \left\{ \begin{array}{ll} \log p& \mbox{if $n = p^l$} \\ 0 & \mbox{otherwise} \\ \end{array} \right. \] for which it is well known that \begin{equation}\label{Lam} \Lambda(n) = -\sum_{d|n}{\mu (d) \log d}, \end{equation} where $\mu(d)$ is the Moebius function. Using this we write down a concrete expression for the number of prime curvatures less than $x$ in a packing $P$ counted with a logarithmic weight: \begin{lemma}\label{primtm} Let $v_i^{\ast}$ be the $i$th coordinate of a vector $\mathbf v$ in $\mathcal O$ such that $v_i^{\ast}$ is the maximal coordinate of $\mathbf v$, and let $\psi_P(x)$ be as before. Then \begin{equation}\label{lambdasum1} \psi_P(x)=\sum_{1\leq i\leq 4}\sum_{\stackrel{\mathbf v\in \mathcal O}{v_i^{\ast}\leq x}}\Lambda(v_i^{\ast}) + \textrm{O}(x). \end{equation} \end{lemma} The sum in (\ref{lambdasum1}) is a count of all circles whose curvatures are {\it powers} of primes. Including powers of primes in our count will not affect the final answer significantly. Namely, let $N_P^{\Box}(x)$ be the number of circles in a packing $P$ whose curvatures are less than $x$ and perfect squares. Note that $$N_P^{\Box}(x)=\textrm{O}(x).$$ This is insignificant compared to the count of all curvatures in Theorem~\ref{oh}, so the sum in (\ref{lambdasum1}) is the correct one to consider. Denote by $D<N_P(x)$ the level distribution from the analysis in \cite{BoGaSa} -- i.e. our moduli $d$ are taken to be less than $D$. 
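The identity (\ref{Lam}) is easy to check directly for small $n$; the following Python sketch (purely illustrative, not part of the computations reported here) compares the definition of $\Lambda$ with the Moebius-sum expression:

```python
from math import log

def mu(n):
    # Moebius function via trial division
    val, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n has a square factor
            val = -val
        p += 1
    return -val if n > 1 else val

def Lam(n):
    # Lambda(n) = log p if n = p^l, and 0 otherwise
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
        p += 1
    return log(n) if n > 1 else 0.0

def Lam_moebius(n):
    # the right-hand side of (Lam): -sum over d | n of mu(d) log d
    return -sum(mu(d) * log(d) for d in range(1, n + 1) if n % d == 0)
```

The two functions agree for all $n$, which is the identity used throughout this section.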
We combine Lemma~\ref{primtm}, Lemma~\ref{sullivan}, and the expression for $\Lambda(n)$ in (\ref{Lam}) to get \begin{eqnarray}\label{sumprimes1} \psi_P(x)&=&-\sum_{1\leq i\leq 4}\sum_{\stackrel{\mathbf v\in \mathcal O}{v_i^{\ast}\leq x}}\sum_{d|v_i^{\ast}}\mu(d)\log d+\textrm{O}(x)\nonumber \\ &=& -\sum_{1\leq i\leq 4} \sum_{d\leq D}\mu(d)\log d\hspace{-0.1in}\sum_{\stackrel{\mathbf v\in\mathcal O,\; v_i^{\ast}\leq x}{v_i^{\ast}\equiv 0\;(d)}}\hspace{-0.1in}1 - \sum_{1\leq i\leq 4} \sum_{d> D}\mu(d)\log d\hspace{-0.1in}\sum_{\stackrel{\mathbf v\in\mathcal O,\; v_i^{\ast}\leq x}{v_i^{\ast}\equiv 0\;(d)}}\hspace{-0.1in}1 \, +\textrm{O}(x) \end{eqnarray} Assuming that the Moebius function $\mu(d)$ above becomes random as $d$ grows, the sum over $d>D$ in (\ref{sumprimes1}) is negligible, and we omit it below. We proceed by rewriting the sum over $d\leq D$ in (\ref{sumprimes1}) using the density function $\beta(d)$ in (\ref{betadef}). Recall that the analysis in \cite{BoGaSa} and \cite{oh} gives us that $$ \sum_{n \equiv 0\,(d)}a_n = \beta(d)\cdot c_P x^{\delta} + r(A,d)$$ where $r(A,d)$ is small on average. In particular, $$\sum_{d\leq D}r(A,d) = \textrm{O}(x^{\delta-\epsilon_0})$$ for some $\epsilon_0>0$. Paired with the assumption that $\mu$ is random, this evaluation of the remainder term allows us to rewrite (\ref{sumprimes1}) as follows: \begin{eqnarray}\label{sumprimes} &&{-\sum_{1\leq i\leq 4} N_P^{(i)}(x)}\sum_{d\leq D}\beta_i(d)\mu(d)\log d+\textrm{O}(x^{\delta-\epsilon_0})\nonumber \\ &=& -\frac{N_P(x)}{4}\sum_{1\leq i\leq 4}\sum_{d\leq D}\beta_i(d)\mu(d)\log d +\textrm{O}(x^{\delta-\epsilon_0})\\\nonumber \end{eqnarray} To compute the innermost sum in the final expression above, note that \begin{equation}\label{infinity} \sum_{d\leq D}\beta_i(d)\mu(d)\log d= \sum_{d>0}\beta_i(d)\mu(d)\log d- \sum_{d> D}\beta_i(d)\mu(d)\log d. 
\end{equation} Assuming once again that the sum over $d>D$ is insignificant due to the conjectured randomness of the Moebius function, we have that the sum over $d\leq D$ in (\ref{infinity}) can be approximated by the sum over all $d$. With this in mind, the following lemma yields the heuristic in Conjecture~\ref{pconj}. \begin{lemma}\label{infinitesumcalc} Let $\beta_i(d)$ be as before. We have $$\sum_{1\leq i\leq 4}\sum_{d>0}\beta_i(d)\mu(d)\log d = 4\cdot L(2,\chi_4)$$ where $L(2,\chi_4)= 0.91597\dots$ is the value of the Dirichlet $L$-function at $2$ with character \begin{eqnarray*} \chi_4(p) = \begin{cases} 1 & \text{if $p \equiv 1$ (mod 4)} \\ -1 & \text{if $p \equiv 3$ (mod 4)} \\ 0 & \text{if $p = 2$} \end{cases} \end{eqnarray*} \end{lemma} \begin{proof} We introduce a function \begin{equation*} f(s) = \sum_d \beta_i(d) \mu(d) d^{-s}, \end{equation*} and note that its derivative at $0$ is, up to sign, precisely the sum we wish to compute: \begin{equation*}\label{Fprimezero} f'(0) = -\sum_d \beta_i(d) \mu(d) \log d. \end{equation*} Since the functions $\beta$, $\mu$, and $d^{-s}$ are all multiplicative, we may rewrite $f(s)$ as an Euler product and obtain \begin{eqnarray*} f(s) &=& \prod_p \left( 1 - \beta_i(p) p^{-s} \right)\\&=& \prod_p (1 - p^{-s-1})\cdot \frac{1 - \beta_i(p) p^{-s}}{1 - p^{-s-1}}\\&=& \zeta^{-1}(s+1) \cdot\prod_p \frac{1-\beta_i(p)p^{-s}}{1-p^{-s-1}}\\ &=& \zeta^{-1}(s+1)\cdot H(s), \\ \end{eqnarray*} where $H(s) = \prod_p (1-\beta_i(p)p^{-s})({1-p^{-s-1}})^{-1}$ is holomorphic in $\Re (s) > -1/2$. Differentiating, we obtain $$f'(0) = -\zeta'(1)\zeta^{-2}(1) \cdot H(0) +\zeta^{-1}(1)\cdot H'(0) = H(0)$$ since $-\zeta'(1)\zeta^{-2}(1) = 1$ and $\zeta^{-1}(1)=0$. Thus it remains to compute $$H(0) = \prod_p \frac{1-\beta_i(p)}{1-p^{-1}}.$$ Part (iii) of Lemma~\ref{betathm} says $\beta_i(2)=\beta_j(2) = 1$ for two coordinates $1\leq i,j\leq 4$. 
For these two coordinates $1 - \beta_i(2) = 0$ and so $H(0) = 0$. Otherwise $\beta_i(2) = 0$ and we have \begin{eqnarray*} H(0) &=& \frac{1}{1-\frac{1}{2}}\cdot \prod_{p \equiv 1\, (4)} \left( 1 - \frac{1}{p+1} \right) \frac{1}{1 - p^{-1}} \; \prod_{p \equiv 3 \, (4)} \left(1 - \frac{p+1}{p^2+1} \right)\frac{1}{1 - p^{-1}}\\ &=& 2 \cdot \prod_{p \equiv 1 \,(4)} \frac{p^2}{p^2 - 1} \prod_{p \equiv 3 \,(4)}\frac{p^2}{p^2+1}\\ &=& 2\cdot L(2,\chi_4). \label{productanswer} \end{eqnarray*} Thus the sum we wish to compute is $4\cdot L(2,\chi_4)$, as desired. \end{proof} Lemma~\ref{infinitesumcalc} implies that the contribution to the sum in (\ref{sumprimes}) of the two coordinates that are even throughout the orbit is $0$, while the contribution of the other two coordinates is $$\frac{N_P(x)}{4}\cdot 4\cdot L(2,\chi_4) = N_P(x)\cdot L(2,\chi_4),$$ yielding the predicted result in Conjecture~\ref{pconj}. \begin{rmk}\label{katsremark} It is well known that $\pi_P(x) \sim \frac{\psi_P(x)}{\log x}$ as $x \rightarrow \infty$. Thus Conjecture~\ref{pconj} can also be stated in terms of $\pi_P(x)$: $$ \pi_P(x) \sim \frac{L(2,\chi_4)\cdot N_P(x)}{\log x}.$$ \end{rmk} \begin{figure}[H] \centering \includegraphics[height = 72 mm]{primes} \caption{Prime Number Heuristic for $P_C$}\label{pntfigure} \end{figure} \begin{figure}[H] \centering \includegraphics[height = 72 mm]{primesB} \caption{Prime Number Heuristic for $P_B$}\label{pntfigureB} \end{figure} Since our computations rely on the multiplicativity inherent to the reduction mod $d$ of the Apollonian group $A$, our heuristic is independent of the chosen packing $P$ in which we count prime curvatures. This is confirmed by our data: Fig.~\ref{pntfigure} and Fig.~\ref{pntfigureB} above show the graphs of \begin{equation}\label{graphp} y= \frac{\psi_{P}(x)}{N_{P}(x)} \end{equation} where $P=P_C$ and $P_B$ as in Section~\ref{intro} and $x\leq 10^8$. 
In both cases, (\ref{graphp}) converges to $y=L(2,\chi_4)$ as predicted in Conjecture~\ref{pconj}. \vspace{0.5in} \subsection{Predicting a prime number theorem for kissing primes}\label{theory2} In this section, we use the analysis in \cite{BoGaSa}, as well as the conjectured randomness of the Moebius function to arrive at the heuristic in Conjecture~\ref{tpconj} for the number of kissing primes, i.e. pairs of tangent circles both of prime curvature less than $x$. With the same notation as in Section~\ref{theory}, we would now like to count the points in $\mathcal O$ for which $v_i^{\ast}$ and $v_j$ are prime for some $j\not=i$, so we sum over points for which either $v_i^{\ast}$ or $v_j$ is $0$ modulo some square-free $d$. To do this we need the total number of pairs of mutually tangent circles of curvature less than $x$ in a packing $P$. If $N_P(x)$ is the number of circles of curvature up to $x$ as specified by Theorem~\ref{oh}, it is not difficult to see that \begin{equation} \#\{{\mbox{pairs of mutually tangent circles of curvature }}a\leq x\}=3\cdot N_P(x) \end{equation} since each circle of curvature $a$ in the packing is tangent to a distinct triple of circles of curvature $\leq a$. This can be seen via a simple proof by induction. We again employ the function $\Lambda(n)$ in order to write down a concrete expression for the number of kissing primes less than $x$ in a packing $P$. \begin{lemma}\label{twintm} Let $v_i^{\ast}$ and $v_j$ be two distinct coordinates of a vector $\mathbf v$ in $\mathcal O$, where $v_i^{\ast}$ denotes the maximum coordinate of $\mathbf v$, and let $\psi_P^{(2)}(x)$ be as before. Then \begin{equation}\label{lambdasumm} \psi_P^{(2)}(x)=\sum_{\stackrel{\mathbf v\in \mathcal O}{v_i^{\ast}\leq x}}\sum_{j\not=i}\Lambda(v_i^{\ast})\Lambda(v_j) + \textrm{O}(x). 
\end{equation} \end{lemma} Again, the sum in (\ref{lambdasumm}) is a count of all mutually tangent pairs of circles whose curvatures are {\it powers} of primes, but by a similar argument to that in Section~\ref{theory} we have that including powers of primes in our count does not affect the final answer significantly. Note that in order to evaluate (\ref{lambdasumm}) we introduce in (\ref{referg}) a function which counts points in $\mathcal O$ for which {\it two} of the coordinates are $0$ modulo $p$. Denote by $D<x$ the level distribution from the analysis in \cite{BoGaSa} -- i.e. the moduli $d>1$ in the computations below may be taken to be less than $D$. We rewrite the expression in (\ref{lambdasumm}) using (\ref{Lam}) and get \begin{eqnarray}\label{sum11} \qquad&&\sum_{\stackrel{1\leq i,j\leq 4}{i\not=j}}\sum_{\stackrel{\mathbf v\in \mathcal O}{v_i^{\ast}\leq x}}\Biggl(\sum_{d_i|v_i^{\ast}}\mu(d_i)\log d_i\sum_{d_j|v_j}\mu(d_j)\log d_j\Biggr)\\&=& \sum_{\stackrel{1\leq i,j\leq 4}{i\not=j}}\left(\Sigma^{-}+\Sigma^{+}\right)+\textrm{O}(x) \end{eqnarray} where $$\Sigma^{-} = \sum_{d_i\leq D}\sum_{d_j\leq D}\mu(d_i)\mu(d_j)\log d_i \log d_j \sum_{\stackrel{\mathbf v\in\mathcal O,\; v_i^{\ast}\leq x}{v_i^{\ast}\equiv 0\;(d_i),\; v_j\equiv 0\;(d_j)}}1,$$ and $$\Sigma^{+} = \sum_{\max(d_i,d_j)> D}\mu(d_i)\mu(d_j)\log d_i \log d_j \sum_{\stackrel{\mathbf v\in\mathcal O,\; v_i^{\ast}\leq x}{v_i^{\ast}\equiv 0\;(d_i),\; v_j\equiv 0\;(d_j)}}1.$$ As in Section~\ref{theory}, we omit $\Sigma^{+}$ in (\ref{sum11}) under the assumption that $\mu$ behaves randomly for large values of $d$. 
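The expansion of $\Lambda(v_i^{\ast})\Lambda(v_j)$ into the double Moebius sum in (\ref{sum11}) can also be checked numerically; a small illustrative sketch:

```python
from math import log

def mu(n):
    # Moebius function via trial division
    val, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            val = -val
        p += 1
    return -val if n > 1 else val

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def lambda_product(m, n):
    # Lambda(m) * Lambda(n) expanded as in (sum11):
    #   sum over d_i | m and d_j | n of mu(d_i) mu(d_j) log(d_i) log(d_j)
    return sum(mu(di) * mu(dj) * log(di) * log(dj)
               for di in divisors(m) for dj in divisors(n))
```

For example, $m=8$ and $n=9$ are prime powers and give $\log 2\cdot\log 3$, while any $m$ with two distinct prime factors contributes $0$.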
Along with the results about the remainder term in the sieve in \cite{BoGaSa}, the expression in (\ref{sum11}) becomes \begin{equation}\label{sum1} \frac{N_P(x)}{4}\sum_{\stackrel{1\leq i,j\leq 4}{i\not=j}}\sum_{[d_i,d_j]\leq D'}\beta_i\left(\frac{d_i}{(d_i,d_j)}\right)\beta_j\left(\frac{d_j}{(d_i,d_j)}\right)g((d_i,d_j))\mu(d_i)\mu(d_j)\log d_i\log d_j+\textrm{O}(x^{\delta-\epsilon_0}), \end{equation} where $\beta_i(d)$ is as before, $[d_i,d_j]$ is the least common multiple of $d_i$ and $d_j$, and $(d_i,d_j)$ is their gcd. The function $g$ above is the ratio \begin{equation}\label{referg} g((d_i,d_j)) = \frac{\#\{\mathbf v\in {\mathcal O_{(d_i,d_j)}} \, | \, v_i\equiv 0 \, ((d_i,d_j)) \mbox{ and }v_j\equiv 0 \, ((d_i,d_j))\}}{\#\{\mathbf v\in{\mathcal O_{(d_i,d_j)}}\}} \end{equation} where $(d_i,d_j)$ is square-free in our case. Note that $g(d)$ is multiplicative outside of the primes $2$ and $3$ by Theorem~\ref{padicorbit}, so $$g(d)=\prod_{p|d}g(p)$$ and we need only to compute $g(p)$ for $p$ prime in evaluating the sum above. \begin{lemma}\label{gthm} Let $g(p)$ be as before where $p$ is a prime. Then \begin{itemize} \item[(i)] $g(2)=\left\{ \begin{array}{ll} 1& \mbox{if both $v_i$ and $v_j$ are even} \\ 0& \mbox{if at least one of $v_i$ or $v_j$ is odd} \\ \end{array} \right.$ \item[(ii)] $ g(p)= \left\{ \begin{array}{ll} \frac{1}{(p+1)^2}& \mbox{for $p \equiv 1$ mod $4$} \\ \frac{1}{p^2+1} & \mbox{for $p\equiv 3$ mod $4$} \\ \end{array} \right.$ \end{itemize} \end{lemma} \begin{proof} To prove (ii), we note that Theorem~\ref{padicorbit} implies that the numerator of $g(p)$ is $$\#\{\mathbf v\in {\mathcal O_p} \, | \, v_1\equiv 0 \, (p) \mbox{ and }v_2\equiv 0 \, (p)\} = \#\{(v_1,v_2)\in\mathbb F_p^2-\{{\bf 0}\}\, | \, F(v_1,v_2,0,0)=0\}$$ for $p>3$. 
Thus it is the number of non-trivial solutions to a binary quadratic form with determinant $0$ (the Descartes form in (\ref{descartes}) with two of the $v_i$, $v_j$ set to $0$), and so $$\#\{\mathbf v\in \mathcal O_p | v_1\equiv 0 \mbox{ mod }p \mbox{ and }v_2\equiv 0 \mbox{ mod }p\} = p-1$$ for all $p>3$ (see \cite{Cassels}, for example). In the case $p=3$, we observe that in both of the possible orbits of $A$ mod $3$ in Figures~\ref{mod31} and \ref{mod32} we have $g(3)=\frac{1}{10}$ as desired. Part (i) follows from the structure of the orbit $\mathcal O_2$ as observed in the proof of Lemma~\ref{betathm}. \end{proof} Denote by $\nu(d_i,d_j)$ the gcd of $d_i$ and $d_j$ (we write just $\nu$ from now on and keep in mind that $\nu$ depends on $d_i$ and $d_j$). Note that $$\beta\left(\frac{[d_i,d_j]}{\nu}\right)=\beta_i\left(\frac{d_i}{\nu}\right)\beta_j\left(\frac{d_j}{\nu}\right),$$ where $\frac{d_i}{\nu}$ and $\frac{d_j}{\nu}$ are relatively prime, so we rewrite the expression in (\ref{sum1}) below: \begin{equation}\label{desired} \frac{N_P(x)}{4}\sum_{\stackrel{1\leq i,j\leq 4}{i\not=j}}\sum_{[d_i,d_j]\leq D'}\beta_i\left(\frac{d_i}{\nu}\right)\beta_j\left(\frac{d_j}{\nu}\right)g(\nu)\mu(d_i)\mu(d_j)\log d_i\log d_j. \end{equation} To compute the sum in (\ref{desired}), we use a similar argument to that in Section~\ref{theory}. We note that the inner sum in the expression above is equal to \begin{equation} \sum_{d_i>0}\sum_{d_j>0} \beta_i\left(\frac{d_i}{\nu}\right)\beta_j\left(\frac{d_j}{\nu}\right)g(\nu)\mu(d_i)\mu(d_j)\log d_i\log d_j - \sum_{[d_i,d_j]> D'}\beta_i\left(\frac{d_i}{\nu}\right)\beta_j\left(\frac{d_j}{\nu}\right)g(\nu)\mu(d_i)\mu(d_j)\log d_i\log d_j \end{equation} where we assume the sum over $[d_i,d_j]> D'$ is insignificant by the conjectured randomness of the Moebius function, and thus the sum over all $d_i$ and $d_j$ is a good heuristic for the sum in (\ref{desired}). 
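The values of $g(p)$ in Lemma~\ref{gthm} can also be checked by enumerating the reduced orbit directly. The following Python sketch is illustrative only; it assumes, as Theorem~\ref{padicorbit} guarantees for $p>3$, that the orbit of $A$ mod $p$ is the full set of nonzero solutions of the Descartes form, and it uses the standard generator matrices of (\ref{gens}):

```python
from fractions import Fraction

def generators():
    # S_i negates v_i and adds twice the sum of the other coordinates,
    # as in (gens)
    S = []
    for i in range(4):
        M = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
        M[i] = [2] * 4
        M[i][i] = -1
        S.append(M)
    return S

def orbit_mod(v0, m):
    # enumerate the (finite) orbit of v0 under S_1,...,S_4 modulo m
    S = generators()
    seen = {tuple(x % m for x in v0)}
    stack = list(seen)
    while stack:
        v = stack.pop()
        for M in S:
            w = tuple(sum(M[r][c] * v[c] for c in range(4)) % m
                      for r in range(4))
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

p = 5
orbit = orbit_mod((-1, 2, 2, 3), p)   # root quadruple of the packing P_B
num = sum(1 for v in orbit if v[0] == 0 and v[1] == 0)
g_p = Fraction(num, len(orbit))       # should equal 1/(p+1)^2 for p = 5
```

For $p=5\equiv 1\,(4)$ this should recover $g(5)=1/36$ on an orbit of $(p-1)(p+1)^2=144$ vectors; replacing $p$ by $7\equiv 3\,(4)$ should similarly recover $1/(7^2+1)=1/50$.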
We compute the infinite sum in the following lemma and obtain the heuristic in Conjecture~\ref{tpconj}. \begin{lemma}\label{infinitesum2calc} Let $\beta_i(d)$, $\beta_j(d)$, and $g(\nu)$ be as before. We have $$\sum_{\stackrel{1\leq i,j\leq 4}{i\not=j}}\sum_{d_i>0}\sum_{d_j>0} \beta_i\left(\frac{d_i}{\nu}\right)\beta_j\left(\frac{d_j}{\nu}\right)g(\nu)\mu(d_i)\mu(d_j)\log d_i\log d_j = 8 \cdot L^2(2,\chi_4)\cdot \prod_{p\equiv 3\,(4)}\left(1-\frac{2}{p(p-1)^2}\right).$$ \end{lemma} \begin{proof} We introduce the function \begin{equation*} f(s_i,s_j) = \sum_{d_i}\sum_{d_j} \beta_i\left(\frac{d_i}{\nu}\right)\beta_j\left(\frac{d_j}{\nu}\right)g(\nu)\mu(d_i)\mu(d_j)d_i^{-s_i}d_j^{-s_j}, \end{equation*} and note that \begin{equation*} \left.\frac{\partial^2 f(s_i,s_j)}{\partial s_i\partial s_j}\right|_{(0,0)}=\sum_{d_i}\sum_{d_j}\beta_i\left(\frac{d_i}{\nu}\right)\beta_j\left(\frac{d_j}{\nu}\right)g(\nu)\mu(d_i)\mu(d_j)\log d_i\log d_j, \end{equation*} which is a good heuristic for the sum in (\ref{desired}) as $x$ tends to infinity. The difficulty in computing this is the interaction of $d_i$ and $d_j$ in $g(\nu)$. To this end, write $d_i=\nu e_i$ and $d_j=\nu e_j$ where $(e_i,e_j)=1$. This gives us the following formula for $f(s_i,s_j)$. 
\begin{eqnarray} f(s_i,s_j)&=& \sum_{\nu}g(\nu)\sum_{(e_i,e_j)=1}\beta_i(e_i)\beta_j(e_j)\mu(\nu e_i)\mu(\nu e_j)(\nu e_i)^{-s_i}(\nu e_j)^{-s_j}\nonumber\\&=& \sum_{\nu}g(\nu)\nu^{-(s_i+s_j)}\sum_{e_i, e_j}\sum_{m|(e_i,e_j)} \mu(m)\beta_i(e_i)\beta_j(e_j)\mu(\nu e_i)\mu(\nu e_j) e_i^{-s_i}e_j^{-s_j}\\ &=& \sum_{\nu}g(\nu)\nu^{-(s_i+s_j)}\sum_m\mu(m)\mu^2(\nu m)m^{-(s_i+s_j)}\nonumber \\ &&\cdot \sum_{\stackrel{(b_i,\nu m)=1}{(b_j,\nu m)=1}}\beta_i(mb_i)\beta_j(mb_j)\mu(b_i)\mu(b_j) b_i^{-s_i}b_j^{-s_j}\nonumber \\ &=& \sum_{\nu}g(\nu)\mu^2(\nu)\nu^{-(s_i+s_j)}\sum_{(m,\nu)=1}\mu(m)\beta_i(m)\beta_j(m)m^{-(s_i+s_j)}\,A(\nu, m, s_i, s_j)\nonumber\\\nonumber \end{eqnarray} where \begin{eqnarray} A(\nu, m, s_i, s_j)&=&\sum_{\stackrel{(b_i,\nu m)=1}{(b_j,\nu m)=1}}\beta_i(b_i)\beta_j(b_j)\mu(b_i)\mu(b_j)b_i^{-s_i}b_j^{-s_j} \nonumber \\&=& \prod_{\stackrel{p_i|\nu m}{p_j|\nu m}}\left((1-\beta_i(p_i)p_i^{-s_i})(1-\beta_j(p_j)p_j^{-s_j})\right)^{-1}\nonumber\\ &&\cdot \prod_{p_i,p_j}(1-\beta_i(p_i)p_i^{-s_i})(1-\beta_j(p_j)p_j^{-s_j}) \\&=& P_{\nu}(s_i,s_j)\cdot P_m(s_i,s_j)\nonumber \\ &&\cdot \zeta^{-1}(s_i+1)\prod_{p_i}\frac{1-\beta_i(p_i)p_i^{-s_i}}{1-p_i^{-s_i-1}}\zeta^{-1}(s_j+1)\prod_{p_j}\frac{1-\beta_j(p_j)p_j^{-s_j}}{1-p_j^{-s_j-1}}\nonumber \\&=& P_{\nu}(s_i,s_j)\cdot P_m(s_i,s_j)\cdot \zeta^{-1}(s_i+1)\zeta^{-1}(s_j+1)\cdot B(s_i,s_j),\nonumber\\\nonumber \end{eqnarray} where $$P_{\nu}(s_i,s_j) = \prod_{\stackrel{p_i|\nu}{p_j|\nu}}\left((1-\beta_i(p_i)p_i^{-s_i})(1-\beta_j(p_j)p_j^{-s_j})\right)^{-1},$$ $$P_m(s_i,s_j) = \prod_{\stackrel{p_i|m}{p_j|m}}\left((1-\beta_i(p_i)p_i^{-s_i})(1-\beta_j(p_j)p_j^{-s_j})\right)^{-1},$$ $$B(s_i,s_j) = \prod_{p_i}\frac{1-\beta_i(p_i)p_i^{-s_i}}{1-p_i^{-s_i-1}}\prod_{p_j}\frac{1-\beta_j(p_j)p_j^{-s_j}}{1-p_j^{-s_j-1}}.$$ We write $$C(\nu,s_i,s_j) = g(\nu)\nu^{-(s_i+s_j)},$$ $$D(m, s_i,s_j) = \mu(m)\beta_i(m)\beta_j(m)m^{-(s_i+s_j)}$$ which gives us \begin{eqnarray} f(s_i,s_j)&=&\sum_{\nu}C(\nu, s_i, s_j)\cdot 
P_{\nu}(s_i,s_j)\sum_{(m,\nu)=1}D(m,s_i,s_j)\cdot P_m(s_i,s_j)\nonumber \\&&\cdot B(s_i,s_j)\cdot \zeta^{-1}(s_i+1)\zeta^{-1}(s_j+1). \end{eqnarray} Note that the sums over $\nu$ and $m$, as well as $B(s_i,s_j)$, converge and are holomorphic. We now compute the desired derivative. Write $$G_{i,j}(\nu,m,s_i,s_j)=\sum_{\nu}C(\nu, s_i, s_j)\cdot P_{\nu}(s_i,s_j)\sum_{(m,\nu)=1}D(m,s_i,s_j)\cdot P_m(s_i,s_j).$$ Then we have \begin{eqnarray}\label{shiit2} \left.\frac{\partial^2 f(s_i,s_j)}{\partial s_i\partial s_j}\right|_{(0,0)}=\zeta^{-2}(1)\left.\frac{\partial^2 (G_{i,j}(\nu,m,s_i,s_j))}{\partial s_i\partial s_j}\right|_{(0,0)} +\, G_{i,j}(\nu,m,0,0)\cdot B(0,0)\left(\frac{\zeta'(1)}{\zeta^2(1)}\right)^2\\ \nonumber \end{eqnarray} Since $\zeta^{-2}(1)=0$, we need not compute the partial derivative of $G$, while $\left(\frac{\zeta'(1)}{\zeta^2(1)}\right)^2=1$ as in Section~\ref{theory}. If $\beta_i(2)=\beta_j(2)=0$ in the expression for $B$, we have that $B(0,0) = 4\cdot L^2(2,\chi_4)$, since each of the two products in $B(0,0)$ equals $H(0) = 2\cdot L(2,\chi_4)$ as computed in Section~\ref{theory}. This holds for the sums in (\ref{desired}) over $(v_i^{\ast}, v_j)$ and $(v_j^{\ast}, v_i)$ where the $i$th and $j$th coordinates in our orbit are everywhere odd. The contribution to (\ref{desired}) from the terms where $v_i^{\ast}$ or $v_j$ is even is $0$: \begin{lemma}\label{zero} Let $\mathcal O$ be an orbit of the Apollonian group. Given that the $i$-th or $j$-th coordinate of each vector $\mathbf v\in\mathcal O$ is even, we have $$G_{i,j}(\nu,m,0,0)\cdot B(0,0)=0.$$ \end{lemma} \begin{proof} Recall that two of the coordinates of the vectors in $\mathcal O$ are always even, and two are odd. We write \begin{eqnarray}\label{split} G_{i,j}(\nu,m,0,0)\cdot B(0,0)&=& B(0,0)\left(\sum_{2|\nu} C(\nu, 0, 0)\cdot P_{\nu}(0,0)\sum_{(m,\nu)=1}D(m,0,0)\cdot P_m(0,0)\right.\nonumber \\&&+ \left. \sum_{(\nu,2)=1} C(\nu, 0, 0)\cdot P_{\nu}(0,0)\sum_{(m,\nu)=1}D(m,0,0)\cdot P_m(0,0)\right). 
\end{eqnarray} \vspace{0.2in} \noindent{\it Case 1: only one of the $i$-th and $j$-th coordinates is odd throughout the orbit.} Recall from Lemma~\ref{gthm} that $g(2)=0$ in the case that only one of the coordinates $(i,j)$ is even throughout the orbit, and $g(2)=1$ if both coordinates are even throughout the orbit. So, if only one of the coordinates $(i,j)$ is even throughout the orbit, we have $$B(0,0)\cdot\left(\sum_{2|\nu} C(\nu, 0, 0)\cdot P_{\nu}(0,0)\sum_{(m,\nu)=1}D(m,0,0)\cdot P_m(0,0)\right)=0.$$ Recalling that $\beta_i(p)=\beta_j(p)$ for $p>2$ from Lemma~\ref{betathm}, we have \begin{eqnarray}\label{2m} &&B(0,0)\left(\sum_{(\nu,2)=1}C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{(m,\nu)=1}D(m,0,0)\cdot P_m(0,0)\Bigr)\right) \\ &=& \prod_{p_i,p_j}\left(\frac{(1-\beta_i(p_i))(1-\beta_j(p_j))}{(1-p_i^{-1})(1-p_j^{-1})}\right)\nonumber\\ &&\cdot\Biggl(\sum_{(\nu,2)=1}C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{\stackrel{(m,\nu)=1}{(m,2)=1}}D(m,0,0)\prod_{p|m}(1-\beta_i(p))^{-2}\Bigr)\Biggr)\nonumber\\ &+& B(0,0)\cdot \left(\sum_{(\nu,2)=1}C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{(2m,\nu)=1}\mu(2m)\beta_i(2m)\beta_j(2m)\cdot P_m(0,0)\Bigr)\right)\nonumber \\ &=& 0 + 0 \nonumber \\ &=& 0\nonumber\\ \nonumber \end{eqnarray} since Lemma~\ref{betathm} implies either $1-\beta_i(2)=0$ or $1-\beta_j(2)=0$ in the first term, and either $\beta_i(2m)=0$ or $\beta_j(2m)=0$ in the second term in (\ref{2m}). We now compute the expression in (\ref{split}) in the case that both the $i$th and $j$th coordinate are even throughout the orbit. \vspace{0.2in} \noindent {\it Case 2: the $i$-th and $j$-th coordinates are both even throughout the orbit.} In this case, Lemma~\ref{gthm} implies that $g(2\nu)=g(\nu)$ and that $C(2\nu,0,0)=C(\nu,0,0)$ for odd $\nu$. 
Also, we again have that $\beta_i(p)=\beta_j(p)$ for $p>2$, so the first sum in (\ref{split}) is \begin{eqnarray}\label{first} &&B(0,0)\cdot\left(\sum_{2|\nu} C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{(m,\nu)=1}D(m,0,0)\cdot P_m(0,0)\Bigr)\right)\\ &=&\left(\sum_{(\nu,2)=1} \bigl(C(2\nu, 0, 0)\prod_{p|\nu}(1-\beta_i(p))^{-2}\bigr)\Bigl(\sum_{(m,2\nu)=1}\bigl(D(m,0,0)\prod_{p|m}(1-\beta_i(p))^{-2}\bigr)\Bigr)\right)\nonumber \\ &&\cdot \prod_{p\not=2}\left(\frac{1-\beta_i(p)}{1-p^{-1}}\right)^2\nonumber\\ &=&\left(\sum_{(\nu,2)=1} \bigl(C(\nu, 0, 0)\prod_{p|\nu}(1-\beta_i(p))^{-2}\bigr)\Bigl(\sum_{(m,2\nu)=1}\bigl(D(m,0,0)\prod_{p|m}(1-\beta_i(p))^{-2}\bigr)\Bigr)\right)\cdot L^2(2,\chi_4)\nonumber \end{eqnarray} To compute the second sum in (\ref{split}), we note that $\beta_i(2m)=\beta_j(2m)=\beta_i(m)$ by Lemma~\ref{betathm} and write \begin{eqnarray*} &&B(0,0)\left(\sum_{(\nu,2)=1}C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{(m,\nu)=1}D(m,0,0)\cdot P_m(0,0)\Bigr)\right)\\ &=& M_1(\nu,0,0)+M_2(\nu,0,0), \end{eqnarray*} where $$M_1(\nu,0,0) = \Biggl(\sum_{(\nu,2)=1}C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{(m,2\nu)=1}\mu(m)\beta_i^2(m)\cdot\prod_{p|m}(1-\beta_i(p))^{-2}\Bigr)\Biggr) \cdot B(0,0)$$ $$M_2(\nu,0,0) = \left(\sum_{(\nu,2)=1}C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{(m,2\nu)=1}\mu(2m)\beta_i^2(m)\cdot \prod_{p|m}(1-\beta_i(p))^{-2}\Bigr)\right)\cdot B'(0,0)$$ where $$B'(0,0)=\prod_{p\not=2}\left(\frac{1-\beta_i(p)}{1-p^{-1}}\right)^2$$ Note that, since $1-\beta_i(2)=1-\beta_j(2)=0$, we have $B(0,0)=0$ in this case, so $$M_1(\nu,0,0) =0.$$ On the other hand, \begin{eqnarray}\label{second} &&M_2(\nu,0,0)\\ &=&\left(\sum_{(\nu,2)=1}C(\nu, 0, 0)\cdot P_{\nu}(0,0)\Bigl(\sum_{(m,2\nu)=1}-\mu(m)\beta_i^2(m)\cdot \prod_{p|m}(1-\beta_i(p))^{-2}\Bigr)\right)\cdot B'(0,0)\nonumber\\ &=&-\left(\sum_{(\nu,2)=1} \bigl(C(\nu, 0, 0)\prod_{p|\nu}(1-\beta_i(p))^{-2}\bigr)\Bigl(\sum_{(m,2\nu)=1}\bigl(D(m,0,0)\prod_{p|m}(1-\beta_i(p))^{-2}\bigr)\Bigr)\right)\cdot 
L^2(2,\chi_4)\nonumber \end{eqnarray} Combining (\ref{first}) and (\ref{second}), we have that the expression in (\ref{split}) is $0$, as desired. \end{proof} Therefore, we have that in the contributing terms of (\ref{desired}) both $v_i^{\ast}$ and $v_j$ are odd. This makes up two of the terms in (\ref{desired}), so we have that the sum we wish to compute is equal to \begin{equation}\label{contterms} 2\cdot G_{i,j}(\nu,m,0,0)\cdot 4 \cdot L^2(2,\chi_4)= 8\cdot L^2(2,\chi_4)\cdot G_{i,j}(\nu,m,0,0). \end{equation} It remains to compute $G_{i,j}(\nu,m,0,0)$ in this case. Recall that $\beta_i(2)=\beta_j(2)=0$ in this case, and that $\beta_i(p)=\beta_j(p)$ for $p>2$. We have \begin{eqnarray}\label{B00} &&\sum_{(m,\nu)=1}\mu(m)\beta_i^2(m)\prod_{p|m}(1-\beta_i(p))^{-2}\nonumber \\ &=& \prod_p \left(1-\frac{\beta_i^2(p)}{(1-\beta_i(p))^2}\right)\prod_{p|\nu}\left(1-\frac{\beta_i^2(p)}{(1-\beta_i(p))^2}\right)^{-1} \\ &=& \prod_{p\equiv 1\,(4)}\left(1-\frac{1}{p^2}\right)\prod_{p\equiv 3\,(4)}\left(1-\frac{(p+1)^2}{(p^2-p)^2}\right)\prod_{p|\nu}\left(1-\frac{\beta_i^2(p)}{(1-\beta_i(p))^2}\right)^{-1}\nonumber \\\nonumber \end{eqnarray} We write $$\sigma = \prod_{p\equiv 1\,(4)}\left(1-\frac{1}{p^2}\right)\prod_{p\equiv 3\,(4)}\left(1-\frac{(p+1)^2}{(p^2-p)^2}\right)$$ and get \begin{eqnarray}\label{A00} &&G_{i,j}(\nu,m,0,0)\nonumber\\&=& \sigma\cdot \sum_{\nu}C(\nu,0,0)P_{\nu}(0,0)\cdot\prod_{p|\nu}\left(1-\frac{\beta^2(p)}{(1-\beta(p))^2}\right)^{-1}\nonumber \\ &=& \sigma\cdot\sum_{\nu}\prod_{p|\nu}\frac{g(p)}{(1-\beta^2(p)(1-\beta(p))^{-2})(1-\beta(p))^2}\\&=& \sigma\cdot\prod_p \left(1+\frac{g(p)}{(1-\beta^2(p)(1-\beta(p))^{-2})(1-\beta(p))^2}\right)\nonumber \\ &=& \sigma\cdot\prod_{p\equiv 1\, (4)}\left(1-\frac{1}{p^2}\right)^{-1}\prod_{p\equiv 3 \, (4)} \left(1+\frac{p^2+1}{p^4-2p^3-2p-1}\right)\nonumber\\ &=& \prod_{p\equiv 3\,(4)}\left(1-\frac{(p+1)^2}{(p^2-p)^2}\right)\left(1+\frac{p^2+1}{p^4-2p^3-2p-1}\right)\nonumber \end{eqnarray} Therefore our infinite sum is equal to \begin{equation}\label{answer} 8\cdot L^2(2,\chi_4)\cdot \prod_{p\equiv 
3\,(4)}\left(1-\frac{2}{p(p-1)^2}\right) \end{equation} as desired. \end{proof} Conjecture~\ref{tpconj} follows from our assumption that the Moebius function is random and from Lemma~\ref{infinitesum2calc}. Namely, we predict $$\psi_P^{(2)}(x)\approx 8\cdot\frac{N_P(x)}{4}\cdot L^2(2,\chi_4)\cdot \prod_{p\equiv 3\,(4)}\left(1-\frac{2}{p(p-1)^2}\right) = c\cdot L^2(2,\chi_4)\cdot N_P(x),$$ where $c=1.646\dots$ as in Conjecture~\ref{tpconj}. \begin{figure}[H] \centering \includegraphics[height = 62 mm]{kissprimes} \caption{Prime Number Theorem for Kissing Primes for the packing $P_B$}\label{kpnt} \end{figure} \begin{figure}[H] \centering \includegraphics[height = 62 mm]{kissprimesC} \caption{Prime Number Theorem for Kissing Primes for the packing $P_C$}\label{kpntC} \end{figure} As with our heuristic for the number of prime curvatures less than $x$ in a packing, this count does not depend on the packing $P$. Fig.~\ref{kpnt} and Fig.~\ref{kpntC} above show graphs of $$y=\frac{\psi^{(2)}_{P}(x)}{N^{(2)}_{P}(x)}$$ for $P=P_B$ and $P_C$, and $x\leq 10^8$. Both of these graphs converge to $\alpha = \frac{c\cdot L^2(2,\chi_4)}{3}$ as conjectured, even though they converge more slowly than in the case of counting circles of prime curvature in Section~\ref{theory}. \section{Local to Global Principle for ACP's}\label{locglobal} In this section we present numerical evidence in support of Conjecture~\ref{LG}, which predicts a local to global principle for the curvatures in a given integral ACP. Since the Apollonian group $A$ is small -- it is of infinite index in $O_F(\mathbb Z)$ -- it is perhaps surprising that its orbit should eventually cover all of the integers outside of the local obstruction mod $24$ as specified in Theorem~\ref{padicorbit}. Proving this rigorously, however, appears to be very difficult. 
An analogous problem over $\mathbb Z$ would be to show that all large integers satisfying certain local conditions are represented by a general ternary quadratic form -- this analogy is realized by fixing one of the curvatures in Descartes' form and solving the problem for the resulting ternary form. While this problem has been recently resolved in general in \cite{Co} and \cite{DS}, even there the local to global principle comes in a much more complicated form, relying on congruence obstructions specified in the spin double cover. Our conjecture, which therefore has the flavor of Hilbert's 11th problem for an indefinite form, predicts a local to global principle of a more straightforward nature. Our computations suggest that this conjecture is true and we predict the value $X_P$ in the examples we check. We consider the packings $P_B$ and $P_C$ introduced in Section~\ref{intro}. Recall that $P_B$ corresponds to the orbit of $A$ acting on $(-1,2,2,3)$, and $P_C$ corresponds to the orbit of $A$ acting on $(-11,21,24,28)$. In order to explain the data we obtain in both cases, we use Theorem~\ref{padicorbit} to determine the congruence classes (mod $24$) in the given packing. Recall that the Apollonian group $A$ is generated by the four generators $S_i$ in (\ref{gens}). We can view an orbit of $A$ modulo $24$ as a finite graph $\mathcal G_{24}$ in which each vertex corresponds to a distinct (mod $24$) quadruple of curvatures, and two vertices $\mathbf v$ and $\mathbf{v'}$ are joined by an edge iff $S_i\mathbf v=\mathbf{v'}$ for some $1\leq i\leq 4$. Recall from Theorem~\ref{padicorbit} that for any orbit $\mathcal O$ of the Apollonian group, \begin{equation}\label{2483} \mathcal O_{24}=\mathcal O_8\times\mathcal O_3, \end{equation} so the graph $\mathcal G_{24}$ is completely determined by the structure of $\mathcal O_3$ and $\mathcal O_8$. There are only two possible orbits modulo $3$, pictured in Fig.~\ref{mod31} and Fig.~\ref{mod32}. 
There are many more possible orbits modulo $8$, and we provide the graphs for these orbits in the case of $P_B$ and $P_C$ in Fig.~\ref{mod8}. \begin{figure}[H] \centering \includegraphics[height = 55 mm]{mod8b} \qquad \includegraphics[height = 55 mm]{mod8c} \caption{Orbits of $P_B$ and $P_C$ modulo $8$}\label{mod8} \end{figure} Note that each vertex in $\mathcal G_{8}$ and $\mathcal G_3$ is connected to its neighboring vertices via all of the generators $S_i$. Therefore the curvatures of circles in a packing modulo $24$ are equally distributed among the coordinates of the vertices in $\mathcal G_{24}$. Combined with Theorem~\ref{padicorbit}, this lets us compute the ratio of curvatures in a packing which fall into a specific congruence class modulo $24$. Namely, let $\mathcal{O}_{24}(P)$ be the orbit mod $24$ corresponding to a given packing $P$. For $\mathbf w\in \mathcal{O}_{24}(P)$ let $w_i$ be the $i$th coordinate of $\mathbf w$. We define $\gamma(n,P)$ as the proportion of coordinates in $\mathcal{O}_{24}(P)$ congruent to $n$ modulo $24$. That is, \begin{equation} \gamma(n,P) = \frac{\sum_{i=1}^4 \#\{ \mathbf w \in \mathcal{O}_{24}(P) | \; w_i = n \} }{4 \cdot \#\{ \mathbf w \in \mathcal{O}_{24}(P) \} }. \label{gammadef} \end{equation} With this notation, a packing $P$ contains a circle of curvature congruent to $n$ modulo $24$ iff $\gamma(n,P)>0$. Given (\ref{2483}), we express $\gamma$ as follows: \begin{eqnarray}\label{multgamma} \gamma(n,P)&=& \frac{\sum_{i=1}^4\#\{ \mathbf w \in \mathcal{O}_{24}(P) | \; w_i = n \} }{ 4\cdot \#\{ \mathbf w \in \mathcal{O}_{24}(P) \} } \\\nonumber \\ &=&\frac{\sum_{i=1}^4 \#\{ \mathbf w \in \mathcal{O}_{8} | \; w_i \equiv n \; (8)\}\cdot\#\{ \mathbf w \in \mathcal{O}_{3} |\; w_i \equiv n \;(3)\}}{ 4\cdot \#\{ \mathbf w \in \mathcal{O}_{8} \} \,\cdot\, \#\{ \mathbf w \in \mathcal{O}_{3} \} }.\nonumber \end{eqnarray} The significance of $\gamma$ in the case of any packing (not only the two we consider) is explained in the following lemma. 
\begin{lemma}\label{gammalemma} Let $N_P(x)$ be as before, let $C$ be a circle in an integral Apollonian packing $P$, and let $a(C)$ be the curvature of $C$. Then $$\sum_{\stackrel{C\in P}{\stackrel{a(C)<x}{a(C)\equiv n\,(24)}}} 1 \sim \gamma(n,P)\cdot N_P(x)$$ \end{lemma} This follows from Theorem~\ref{padicorbit}. Note that in general the orbits $\mathcal O_8(P)$ and $\mathcal O_3(P)$ have $4$ and $10$ vertices, respectively, in the corresponding finite graphs\footnote[4]{There are only two possible orbits mod $3$, but many more mod $8$. We examine just two such orbits here.}. Therefore $\mathcal G_{24}$ always has $40$ vertices, and the ratio in (\ref{multgamma}) is easily computed using this graph. With this in mind, we observe the following about the packing $P_B$. \begin{lemma}\label{bugob} Let $P_{B,24}$ denote the possible congruence classes of curvatures mod $24$ in the packing $P_B$, and let $N_{P_B}(x)$ be as in Theorem~\ref{oh}. Then we have \begin{itemize} \item[(i)]$N_{P_B}(x)\sim c_{P_B}\cdot x^{\delta}$, where $c_{P_B}=0.402\dots$ \item[(ii)] $P_{B,24}=\{2,3,6,11,14,15,18,23\}$ \vspace{0.6in} \item[(iii)] \vspace{-0.45in} \begin{eqnarray*} \gamma(2,P_B) = \frac{3}{20} &\qquad& \gamma(14,P_B) = \frac{3}{20} \\ \gamma(3,P_B) = \frac{1}{10} &\qquad& \gamma(15,P_B) = \frac{1}{10} \\ \gamma(6,P_B) = \frac{1}{10} &\qquad& \gamma(18,P_B) = \frac{1}{10} \\ \gamma(11,P_B) = \frac{3}{20} &\qquad& \gamma(23,P_B) = \frac{3}{20}. \end{eqnarray*} \vspace{0.2in} \item[(iv)] For $10^6<x<5\cdot 10^8$, let $x_{24}$ denote $x$ mod $24$. If $x_{24}\in P_{B,24}$ then $x$ is a curvature in the packing $P_B$. \end{itemize} \end{lemma} Part (iv) is an observation based solely on our computations using the algorithm described in Section~\ref{algorithm} -- these are illustrated in the histograms below. The first three parts follow from computations combined with Theorem~\ref{oh} and Lemma~\ref{gammalemma}. 
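Such computations amount to a breadth-first enumeration of $\mathcal O_{24}(P_B)$; the following illustrative Python sketch (generators $S_i$ as in (\ref{gens}), root quadruple $(-1,2,2,3)$ for $P_B$) should reproduce parts (ii) and (iii) of Lemma~\ref{bugob}:

```python
from fractions import Fraction

def generators():
    # the Apollonian generators S_i of (gens): negate v_i, add twice the rest
    S = []
    for i in range(4):
        M = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
        M[i] = [2] * 4
        M[i][i] = -1
        S.append(M)
    return S

def orbit_mod(v0, m):
    # enumerate the (finite) orbit of v0 under S_1,...,S_4 modulo m
    S = generators()
    seen = {tuple(x % m for x in v0)}
    stack = list(seen)
    while stack:
        v = stack.pop()
        for M in S:
            w = tuple(sum(M[r][c] * v[c] for c in range(4)) % m
                      for r in range(4))
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

orbit24 = orbit_mod((-1, 2, 2, 3), 24)              # P_B mod 24
residues = sorted({w for v in orbit24 for w in v})  # the set P_{B,24}

def gamma(n):
    # proportion of coordinate slots congruent to n mod 24, as in (gammadef)
    hits = sum(1 for v in orbit24 for w in v if w == n)
    return Fraction(hits, 4 * len(orbit24))
```

Here `residues` should give the eight admissible classes of part (ii), and `gamma` the proportions of part (iii), with the forty vertices of $\mathcal G_{24}$ recovered as the size of `orbit24`.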
Note that $\gamma(n,P_B) = \gamma(n+12,P_B)$ -- for this particular packing, one can hence express the local obstructions modulo $12$ rather than modulo $24$. Whenever this is the case for an integral ACP, we will find that there are eight congruence classes modulo $24$ in the curvatures of the circles. Graham et al.\ observe this in \cite{Apollo}, where they compute which integers less than $10^6$ are ``exceptions'' for $P_B$ -- they find integers which satisfy these local conditions for $P_B$ modulo $12$ but do not occur as curvatures in the packing\footnote[5]{There is a small error in the computations of Graham et al.\ -- we have found that the integer $13806\equiv 6\;(12)$ does not appear as a curvature in $P_B$. Their results do not reflect this.}. Our data extends their findings (we consider integers up to $5\cdot 10^8$), and shows that all integers between $10^6$ and $5\cdot 10^8$ belonging to one of the congruence classes in part (ii) of Lemma~\ref{bugob} appear as curvatures in the packing $P_B$. The following histograms illustrate the distribution of the frequencies with which each integer in the given range satisfying the specified congruence condition occurs as a curvature in the packing $P_B$. The frequencies seem normally distributed, and the means of the distributions can be computed, as we do in (\ref{meaneq}). The variance, however, is much more difficult to predict at this time -- explaining the behavior of the variance as we consider larger integers would shed more light on our local to global conjecture. Note that there are no exceptions to the local to global principle in this range whenever $0$ is not a frequency represented in the histogram (i.e. each integer occurs at least once). There are several other frequencies not represented in the histograms for both $P_B$ and $P_C$ (these show up as gaps in the graphs), and an explanation of this aspect would be interesting.
\vspace{1in} \includegraphics[height=54mm]{bugeye_class2} \hfill \includegraphics[height=54mm]{bugeye_class3} \includegraphics[height=54mm]{bugeye_class6} \hfill \includegraphics[height=54mm]{bugeye_class11} \includegraphics[height=54mm]{bugeye_class14} \hfill \includegraphics[height=54mm]{bugeye_class15} \includegraphics[height=54mm]{bugeye_class18} \hfill \includegraphics[height=54mm]{bugeye_class23} We do the same analysis for the packing $P_C$. In this case, we must consider much larger integers than in the case of $P_B$ in order to get comparable results. This can be explained partially by the fact that the constant $c_P$ in Kontorovich-Oh's formula \begin{equation}\label{c} N_P(x)\sim c_P\cdot x^\delta \end{equation} is much smaller for the packing $P_C$ than for the packing $P_B$, since the initial four circles in $P_C$ are much larger than the initial four in $P_B$ (see part (i) of Lemmata~\ref{bugob} and \ref{coinob}). Specifically, $c_{P_C}=0.0176\dots$ and $c_{P_B}=0.402\dots$. However, our data suggests that the proposed local to global principle should hold for this packing as well. \begin{lemma}\label{coinob} Let $P_{C,24}$ denote the possible congruence classes of curvatures mod $24$ in the packing $P_C$, and let $N_{P_C}(x)$ be as in Theorem~\ref{oh}. Then we have \begin{itemize} \item[(i)]$N_{P_C}(x)\sim c_{P_C} \cdot x^{\delta}$, where $c_{P_C}=0.0176\dots$ \item[(ii)] $P_{C,24}=\{ 0,4,12,13,16,21\}$ \vspace{0.4in} \item[(iii)] \vspace{-0.3in} \begin{eqnarray*} \gamma(0,P_C) = \frac{1}{10} &\qquad& \gamma(13,P_C) = \frac{3}{10} \\ \gamma(4,P_C) = \frac{3}{20} &\qquad& \gamma(16,P_C) = \frac{3}{20} \\ \gamma(12,P_C) = \frac{1}{10} &\qquad& \gamma(21,P_C) = \frac{1}{5} \end{eqnarray*} \item[(iv)] For $10^8<x<5\cdot 10^8$, let $x_{24}$ denote $x$ mod $24$. If $x_{24}=13$ or $x_{24}=21$, then $x$ is a curvature in the packing $P_C$.
\end{itemize} \end{lemma} Again, note that part (iv) is an observation based solely on our computations, while the first three parts in Lemma~\ref{coinob} rely on Lemma~\ref{gammalemma} and Theorem~\ref{oh}. The histograms below illustrate the distribution of the frequencies with which each integer in the given range satisfying the specified congruence condition occurs as a curvature in the packing $P_C$. Note that, as with $P_B$, the frequencies with which integers are represented in the packing seem to have a normal distribution. However, since the mean of this distribution is much smaller for $P_C$ than for $P_B$, we find that $0$ is often a frequency represented in the histograms, and so there are still some exceptions to the proposed local to global principle in the range we consider. \includegraphics[height=54mm]{coins_class0} \hfill \includegraphics[height=54mm]{coins_class4} \includegraphics[height=54mm]{coins_class12} \hfill \includegraphics[height=54mm]{coins_class13} \includegraphics[height=54mm]{coins_class16} \hfill \includegraphics[height=54mm]{coins_class21} As we mentioned before, the mean in each of these histograms is easily computable: let $C$ denote a circle in an Apollonian packing $P$ and let $a(C)$ denote the curvature of $C$. Let $I = [k, k+K]$ be an interval of length $K$ and let $x\in I$ be an integer. Let $$\nu(x)=\#\{C\in P \mid a(C)=x\}$$ be the number of times $x$ is a curvature of a circle in $P$. For an integer $m\geq 0$, let $$\delta(m,n)=\#\{x\in I \mid x\equiv n \, (24),\ \nu(x) = m\}.$$ Then \begin{equation}\label{equivalentsums} \sum_{\stackrel{x\in I}{x\equiv n \, (24)}}\nu(x) = \sum_{m\geq 0}\delta(m,n)\cdot m. \end{equation} The equivalence of the two sums above is easy to observe -- one counts the same set of curvatures, but partitions them differently. In particular, the expression in (\ref{equivalentsums}) allows us to determine the mean of the distributions in the histograms above.
Namely, let $x$ be an integer in some interval $I = [k, k+K]$ of length $K$. Let $0\leq n\leq 23$, and let $\mu(n,P)$ denote the mean of the number of times $x\equiv n$ mod $24$ is represented as a curvature in the packing $P$. Note that there are precisely $K/24$ integers congruent to $n$ mod $24$ in the interval $I$. Combined with (\ref{equivalentsums}), this gives us \begin{equation}\label{meaneq} \mu(n,P)\approx\frac{24\cdot \gamma(n,P)\cdot(N_P(k+K)-N_P(k))}{K}. \end{equation} This formula predicts the following values for the means in $P_B$ in the range $[10^6, 10^8)$, and in $P_C$ in the range $[4\cdot 10^8, 5\cdot 10^8)$: \begin{eqnarray*} &&\mu(2,P_B)=\mu(11,P_B)=\mu(14,P_B)=\mu(23,P_B) = 406.70\dots\\ &&\mu(3,P_B)=\mu(6,P_B)=\mu(15,P_B)=\mu(18,P_B) = 271.13 \dots\\ &&\mu(0,P_C)=\mu(12,P_C)= 24.35\dots\\ &&\mu(4,P_C)=\mu(16,P_C)= 36.52\dots\\ &&\mu(13,P_C)=73.05\dots\\ &&\mu(21,P_C)=48.70\dots \end{eqnarray*} which coincide with the means observed in the histograms. This clarifies why the mean is small for packings where the constant $c_P$ in the formula $N_P(x)\sim c_P\cdot x^{\delta}$ is small, and why one needs to consider very large integers to see that the local to global principle for such packings should hold. This analysis can be carried out for any ACP, and will likely yield similar results. In the direction of proving Conjecture~\ref{LG}, one might investigate how $X_P$ depends on the given packing -- can it perhaps be expressed in terms of the constant $c_P$ in (\ref{c})? One might also ask how the variance of the distributions above depends on the packing, and how it changes with the size of the integers we consider -- answering this would give further insight into the local to global correspondence for curvatures in integral ACPs. \section{A description of our algorithm and its running time}\label{algorithm} We represent an ACP by a tree of quadruples.
Fig.~\ref{tree} shows the first two generations of the tree corresponding to $P_C$. To generate all curvatures of magnitude less than $x$, we use a LIFO (last-in-first-out) stack to generate and prune this tree. The algorithm is as follows: \begin{itemize} \item[\it{1)}] Push the root quadruple onto the stack. \item[\it{2)}] Until the stack is empty, perform an iterative process: \begin{itemize} \item[\it{a)}] Pop a quadruple off of the stack and generate its children. \item[\it{b)}] For each child, if the new curvature created (i.e. the maximum entry of the quadruple) is less than $x$, then push the child onto the stack. \end{itemize} \end{itemize} \vspace{.5 cm} By pushing a quadruple onto the stack only if its maximum entry is less than $x$, we effectively prune the tree. Since we know each quadruple has a larger maximum entry than its parent, we use step 2\emph{b)} to avoid generating branches whose quadruples are known to have entries greater than $x$. Although we use the concept of a tree to \emph{generate} curvatures, we note that the entire tree structure is not necessary to \emph{store} such curvatures. Instead, we store the curvatures in a one-dimensional array of $x$ elements, all initialized to zero. The $i$th element of the array contains the number of curvatures with magnitude $i$. For instance, the $24$th element of the array for $P_C$ is equal to $1$, while the $25$th element is equal to $0$, since there are no curvatures equal to $25$ in $P_C$. We use these arrays to generate the histograms in Section~\ref{locglobal}. Due to Matlab's memory constraints, we limit our Matlab arrays to $10^8$ entries. So, to check for exceptions in the entire range of $[10^6, 5\cdot 10^8)$, we check each of the intervals $[10^6, 10^8)$, $[10^8, 2\cdot 10^8), \dots, [4\cdot 10^8, 5\cdot 10^8)$ individually. We have chosen to display the interval $[10^6, 10^8)$ in our figures in Section~\ref{locglobal}.
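As a sketch of the traversal just described (an illustration, not the code actually used for our computations), the following Python fragment generates and prunes the tree with an explicit stack. The root quadruple is a parameter; we use the valid Descartes quadruple $(-1,2,2,3)$ purely for illustration, without assuming it equals $P_B$ or $P_C$. Each child replaces one entry $a_i$ by $a_i' = 2(a_j+a_k+a_l)-a_i$, and remembering the last reflected index prevents the traversal from walking back up the tree:

```python
def curvature_counts(root, x):
    """Count curvatures below x in the packing generated from `root`.

    counts[i] = number of circles of curvature i, as in the text's
    one-dimensional array; the explicit stack replaces the tree.
    """
    counts = [0] * x
    for a in root:               # the four initial circles
        if 0 < a < x:
            counts[a] += 1
    # Each stack entry remembers which coordinate was reflected last,
    # so we never immediately undo a reflection.
    stack = [(tuple(root), None)]
    while stack:
        quad, last = stack.pop()
        s = sum(quad)
        for i in range(4):
            if i == last:
                continue
            new = 2 * (s - quad[i]) - quad[i]
            # Prune: push the child only if the new curvature is below x
            # (descendants only get larger); `quad[i] <= new` discards the
            # move back toward the root at the root quadruple itself.
            if quad[i] <= new and 0 < new < x:
                child = quad[:i] + (new,) + quad[i + 1:]
                counts[new] += 1
                stack.append((child, i))
    return counts
```

For example, `curvature_counts((-1, 2, 2, 3), 200)` records two circles of curvature $2$, two of curvature $3$, and four of curvature $6$ in that packing.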
To count primes less than $x$, we simply increment a sum whenever a prime curvature is produced. To count kissing primes less than $x$, we increment a sum whenever a prime curvature is produced \emph{and} some other member of the curvature's quadruple is prime. It takes our algorithm $\textrm{O}(N_P(x))$ steps to compute $N_P(x)$, which is optimal since each node on the tree must be visited; that is, it is not possible to skip any quadruples. Our programs rely on Sedgewick and Wayne's Stack data type and standard draw library \cite{sw}. \begin{figure}[H] \includegraphics[width=80mm]{tree} \caption{The tree of quadruples for $P_C$, pictured up to 2 generations.}\label{tree} \end{figure}
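The prime and kissing-prime counters described above bolt directly onto the same traversal. The sketch below (again with the illustrative root $(-1,2,2,3)$, and a naive trial-division primality test in place of anything optimized) increments \texttt{primes} whenever a newly produced curvature is prime, and \texttt{kissing} when, in addition, another member of its quadruple is prime; the four root curvatures themselves are not counted here:

```python
def is_prime(n):
    # naive trial division; fine for a sketch
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_counts(root, x):
    """Return (primes, kissing): prime curvatures < x produced by the
    traversal, and those tangent to another prime circle in their quadruple."""
    primes = kissing = 0
    stack = [(tuple(root), None)]
    while stack:
        quad, last = stack.pop()
        s = sum(quad)
        for i in range(4):
            if i == last:
                continue
            new = 2 * (s - quad[i]) - quad[i]
            if quad[i] <= new and 0 < new < x:
                child = quad[:i] + (new,) + quad[i + 1:]
                if is_prime(new):
                    primes += 1
                    # "kissing": some other member of the quadruple is prime
                    if any(is_prime(c) for j, c in enumerate(child) if j != i):
                        kissing += 1
                stack.append((child, i))
    return primes, kissing
```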
https://arxiv.org/abs/1203.4066
\title{On the last digit and the last non-zero digit of $n^n$ in base $b$}
\begin{abstract} In this paper we study the sequences defined by the last and the last non-zero digits of $n^n$ in base $b$. For the sequence given by the last digits of $n^n$ in base $b$, we prove its periodicity using different techniques than those used by W. Sierpinski and R. Hampel. In the case of the sequence given by the last non-zero digits of $n^n$ in base $b$ (which had been studied only for $b=10$) we show the non-periodicity of the sequence when $b$ is an odd prime power and when it is even and square-free. We also show that if $b=2^{2^s}$ the sequence is periodic and conjecture that this is the only such case. \end{abstract}
\section{Introduction} The study of the last digit of the elements in a sequence is a recurrent topic in Number Theory. In this sense, one of the most studied sequences is, of course, the Fibonacci sequence, which was already studied by Lagrange, who observed that the last digit of the Fibonacci sequence repeats with period 60 (see \cite{Lagrange}). In any base $b$, the Fibonacci sequence modulo $b$ is also periodic \cite{Wall} and the periods $\pi(b)$ for each base $b$ (see \cite{Rob,Morris} for some of their properties) are called Pisano periods (Sloane's OEIS A001175). These periods have been conjectured to satisfy the relation $\pi(p^e) = p^{e-1}\pi(p)$, which is called Wall's conjecture and has been verified for primes up to $10^{14}$. Primes for which this relation fails (if any exist) are called Wall-Sun-Sun primes. There are many other examples of works of similar orientation. In \cite{Walter}, for instance, the last decimal digits of $\binom{2n}{n}$ and $\sum \binom{n}{i}\binom{2n-2i}{n-i}$ are explicitly computed, and D.B. Shapiro and S.D. Shapiro show in \cite{Daniel}, among other results, that the sequence $k, k^k, k^{k^{k}}, \dots, k\uparrow\uparrow n,\dots$ (mod $b$) is eventually constant. In this paper we focus on the sequence $n^n$. The study of the residues of this sequence was started by W. Sierpinski who, in his 1950 paper \cite{sier}, proved that the last digits of the numbers $n^n$ form a periodic sequence whose shortest period consists of 20 terms. More generally, it was proved that, for every positive integer $b$, the sequence $S_b (n)$ consisting of the residues mod $b$ of the numbers $n^n$ forms an infinite, eventually periodic sequence. In 1955, R. Hampel (see \cite{Hampel}) proved that the period of $S_b(n)$ (Sloane's OEIS A174824) is $\textrm{lcm}(b,\lambda(b))$, where $\lambda$ is the Carmichael function.
Moreover, he proved that if $b=\prod_{i=1}^t p_i^{s_i}$, the sequence is periodic if and only if $s_i\leq p_i$ and that periodicity starts with the maximum of the numbers $\eta_{i}:=1-p_i (1+\lceil -\frac{s_i}{p_i}\rceil)$ for $i=1,\dots,t$. These results were established first in the prime case (by Sierpinski), then in the prime power case, and finally in general. The methods of the proof lie in the theory of linear congruences and frequent use is made of the Euler-Fermat congruence and of the properties of primitive roots. It seems remarkable to us that this work by Hampel was not cited in recent works on this topic, such as \cite{Cro1,Cro2,Euler,Dresden,Dresden2}. In a somewhat different direction we find the works by R. Crocker \cite{Cro1,Cro2} and L. Somer \cite{somer}, where they study the number of residues (mod $p$) of $n^n$ for $n$ between $1$ and $p$. More recently, the interest in the sequence $n^n$ was revived by G. Dresden in \cite{Dresden}, where he established the non-periodicity of the last non-zero digit of the decimal expansion of this sequence, and in \cite{Dresden2}, where he proves that the number formed by these digits is transcendental. Our paper is organized as follows. In the second section we revisit, using different techniques, the work by Sierpinski and Hampel. In the third section we focus on the last non-zero digit of $n^n$ in base $b$. In particular we establish the non-periodicity of this sequence when $b$ is an odd prime power or an even square-free integer. We also show that if $b=2^{2^s}$ the sequence is periodic and conjecture that this is the only such case. \section{The last digit of $n^n$ in base $b$} The results that we present in this section were already proved in \cite{Hampel,sier}. We revisit them using quite different techniques. We will start with some notation.
Given $n,b\in\mathbb{N}$ we consider the following functions: $$\mathcal{H}(n):=\textrm{lcm}(n,\lambda(n)),$$ $$S_b(n):=n^n\ \textrm{(mod $b$)}.$$ Observe that $S_b(n)$ gives the last digit of $n^n$ in base $b$. We are interested in studying the behavior of this sequence. A first step in this direction is given in the following proposition. This proposition not only determines the eventual periodicity of $S_b(n)$, but also the values that break the periodicity. This question was not studied by Hampel in \cite{Hampel}. \begin{prop} For every $b\in\mathbb{N}$ let $S_b(n)$ be the sequence defined above. Let $M\in\mathbb{N}$ and let $M=\displaystyle{\prod_{i=1}^t p_i^{k_i}}$, with $t>0$, be its prime-power decomposition. Then $S_b(M)\neq S_b(M+\mathcal{H}(b))$ if and only if $p_i^{k_iM+1}$ divides $b$ for some $i\in\{1,\dots,t\}$. \end{prop} \begin{proof} Let $b=p_1^{a_1}\cdots p_t^{a_t}q_1^{r_1}\cdots q_s^{r_s}$ be the prime-power decomposition of $b$ (with $q_i\neq p_j$). We have that $M+\mathcal{H}(b)\equiv M$ (mod $b$). Also, since $\lambda(q_i^{r_i}) \mid \mathcal{H}(b)$ and $\gcd(M,q_i)=1$, we have that $M^{\mathcal{H}(b)}\equiv 1$ (mod $q_i^{r_i}$) and it follows that $M^M\equiv (M+\mathcal{H}(b))^{M+\mathcal{H}(b)}$ (mod $q_i^{r_i}$) for every $i\in\{1,\dots,s\}$. As a consequence $M^M\not\equiv (M+\mathcal{H}(b))^{M+\mathcal{H}(b)}$ (mod $b$) if and only if $M^M\not\equiv M^{M+\mathcal{H}(b)}$ (mod $p_i^{a_i}$) for some $i\in\{1,\dots, t\}$. Clearly, this happens if and only if $M^M(M^{\mathcal{H}(b)}-1)\not\equiv 0$ (mod $p_i^{a_i}$). But, since $p_i$ does not divide $M^{\mathcal{H}(b)}-1$, this happens if and only if $M^M\not\equiv 0$ (mod $p_i^{a_i}$). Finally, $M^M=\prod_{i=1}^t p_i^{k_iM}\not\equiv 0$ (mod $p_i^{a_i}$) if and only if $a_i\geq k_iM+1$; i.e., if and only if $p_i^{k_iM+1} \mid b$. \end{proof} This result clearly implies that the sequence $S_b(n)$ is eventually periodic with its period being a divisor of $\mathcal{H}(b)$.
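These definitions are easy to experiment with numerically. The following sketch (an illustration only, not part of the proofs) computes $\lambda(b)$ by brute force as the least exponent annihilating every unit, and then searches a window far from the origin for the least period of $S_b(n)$; for every small base tried the period found is exactly $\mathcal{H}(b)$, e.g. $\mathcal{H}(10)=20$ and $\mathcal{H}(3)=6$:

```python
from math import gcd, lcm

def carmichael(b):
    # lambda(b): least m >= 1 with a^m = 1 (mod b) for every unit a mod b
    units = [a for a in range(1, b) if gcd(a, b) == 1]
    m = 1
    while any(pow(a, m, b) != 1 for a in units):
        m += 1
    return m

def H(b):
    # the function H(b) = lcm(b, lambda(b)) defined in the text
    return lcm(b, carmichael(b))

def S(b, n):
    # S_b(n) = n^n (mod b), the last digit of n^n in base b
    return pow(n, n, b)

def eventual_period(b, start=1000, window=200):
    # least T with S_b(n) = S_b(n+T) on [start, start + window);
    # `start` is chosen well past the pre-periodic part for small b
    for T in range(1, window + 1):
        if all(S(b, n) == S(b, n + T) for n in range(start, start + window)):
            return T
    return None
```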
The next results are devoted to show that the period is exactly $\mathcal{H}(b)$. \begin{prop} If $S_b(n)=S_b(n+T)$ for every $n\geq n_0$, then $b$ divides $T$. \end{prop} \begin{proof} We can choose $n\equiv 0$ (mod $b$) and it follows that $T^{n+T}\equiv 0$ (mod $b$). This implies that $\textrm{rad}(b) \mid T$. Now, $n^n\equiv (n+T)^{n+T}$ (mod rad($b$)) for every $n\geq n_0$, so we have that $n^n\equiv n^{n+T}$ (mod rad($b$)). We can choose $n$ such that $\gcd(n,b)=1$ so that $n^T\equiv 1$ (mod rad($b$)). From this, it follows that $\lambda(\textrm{rad}(b)) \mid T$; i.e., if $b=p_1^{b_1}\cdots p_s^{b_s}$ then $\textrm{lcm}(p_1-1,\dots,p_s-1) \mid T$ and, in particular, $p_i-1 \mid T$ for every $i$. If we choose $n\equiv 1$ (mod $b$), then it follows that $(T+1)^{n+T}\equiv 1$ (mod $b$). Since rad($b$) divides $T$, we have that $\gcd(T+1,b)=1$ and, consequently, that $\gcd(T+1,p_i^{b_i})=1$. Thus $(T+1)^{\gcd(\varphi(p_i^{b_i}),n+T)}\equiv 1$ (mod $p_i^{b_i}$). Assume that $p_i \mid n+T$. Then, since $p_i \mid T$ it follows that $p_i \mid n$. This is a contradiction because $p_i \mid n-1$ and we get that $\gcd(\varphi(p_i^{b_i}),n+T)=\gcd(n+T,p_i-1)$. On the other hand, since $p_i-1$ divides $T$, we have that $\gcd(n+T,p_i-1)=\gcd(n,p_i-1)$. We have thus seen that $(T+1)^{\gcd(n,p_i-1)}\equiv 1$ (mod $p_i^{b_i}$) for every $n\equiv 1$ (mod $b$). We can now choose $n=K(p_1-1)\cdots (p_s-1)b+1$ with $K$ such that $n\geq n_0$. It is clear that $n\equiv 1$ (mod $b$) and, moreover, $\gcd(n,p_i-1)=1$. Thus we obtain that $(T+1)\equiv 1$ (mod $b$) and the result follows. \end{proof} \begin{cor} If $S_b(n)=S_b(n+T)$ for every $n\geq n_0$, then $\lambda(b)$ divides $T$. \end{cor} \begin{proof} In the previous proposition we have seen that $b \mid T$. Thus, $S_b(n)=S_b(n+T)$ implies that $n^n\equiv (n+T)^{n+T}\equiv n^{n+T}$ (mod $b$) and, consequently, that $n^n(n^T-1)\equiv 0$ (mod $b$) for every $n\geq n_0$.
We may choose $n$ such that $\gcd(n,b)=1$, and then $n^T\equiv 1$ (mod $b$) for every $n\geq n_0$ coprime to $b$. This clearly completes the proof. \end{proof} \begin{cor} Given $b\in\mathbb{N}$, the sequence $S_b(n)$ is eventually periodic of period $\mathcal{H}(b)$. \end{cor} \begin{proof} Due to Proposition 1, the sequence $S_b(n)$ is eventually periodic and its period must divide $\mathcal{H}(b)$. Now, let $T$ be the period. Proposition 2 and Corollary 1 imply that $b$ and $\lambda(b)$ both divide $T$ and hence the result. \end{proof} We have seen that $S_b(n)$ is eventually periodic. It is also interesting to study in which cases this sequence is periodic. \begin{prop} Let $b=\displaystyle{\prod_{i=1}^t p_i ^{s_i}}$. The sequence $S_b(n)$ is periodic if and only if $s_i \leq p_i$ for every $i\in\{1, \dots,t\}$. \end{prop} \begin{proof} Assume that $S_b(n)$ is periodic with period $\mathcal{H}(b)$. Then $S_b(p_i)=S_b(p_i+\mathcal{H}(b))$ for every $i$, so Proposition 1 implies that $p_i^{p_i+1}$ does not divide $b$; i.e., $s_i\leq p_i$ for every $i$ as claimed. Conversely, assume that $s_i\leq p_i$ for every $i$. Let $n=p_1^{k_1}\cdots p_t^{k_t}$ be an integer such that the primes in its decomposition are the same as those in the decomposition of $b$. If $i\in\{1,\dots,t\}$ is such that $k_i\neq 0$ we have that $s_i\leq p_i\leq k_ip_i^{k_i}\leq k_in$, so by Proposition 1 again $S_b(n)=S_b(n+\mathcal{H}(b))$. On the other hand, if $\gcd(n,b)=1$ we have that $S_b(n)=S_b(n+\mathcal{H}(b))$ since $\lambda(b)$ divides $\mathcal{H}(b)$. To finish the proof it is enough to observe that every $n\in\mathbb{N}$ can be written in the form $n=n_1n_2$ with $\gcd(n_2,b)=1$ and to reason as in the previous cases. \end{proof} \section{The last non-zero digit of $n^n$ in base $b$} In the previous section we have proved that the sequence $S_b(n)=n^n$ (mod $b$) given by the last digit of $n^n$ is eventually periodic.
For instance, if $b=3$ the first elements of $S_3(n)$ are: $$1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1, 1, 0, 1, 2, 0, 1,\dots$$ and the period is $(1,1,0,1,2,0)$. We can see that there are many zeros in the previous sequence; in fact if $3\mid n$ then clearly $S_3(n)=0$. We wonder what will happen if we consider the sequence given by the last non-zero digit of $n^n$ instead. In this case the 0's will disappear and they will be replaced by $1$ or $2$, and periodicity could possibly be broken. For the case $b=10$ it is well-known (see \cite{Dresden}) that the sequence given by the last non-zero digit of $n^n$ in base 10 is not eventually periodic. In this section we will focus on the behavior of this sequence for some choices of $b$. In particular we will study the case when $b$ is a square-free even integer and when it is a prime power. Before we proceed, we will introduce some notation. In what follows $L_b(n)$ will denote the last non-zero digit of $n$ in base $b$. For every $b\in\mathbb{N}$ we will consider the sequence $\mathfrak{S}_b(n):=L_b(n^n)$; i.e., $\mathfrak{S}_b(n)$ is the last non-zero digit of $n^n$. Observe that if $b\nmid n$, then $L_b(n)\equiv n$ (mod $b$). \subsection{The even square-free case} We will show in this subsection that $\mathfrak{S}_b(n)$ is not eventually periodic when $b$ is a square-free even integer. Our proof will be simpler than the one given in \cite{Dresden} for the case $b=10$. We will start with a series of technical lemmas. \begin{lem}\label{L1} Let $a(n)$ be a sequence such that $a(n)\in\{e_1,\dots,e_r\}$ for every $n\in\mathbb{N}$. If $a(n)$ is eventually periodic, then the set $\Theta(e_i):=\{n:a(n)=e_i\}$ is (possibly with the exception of a finite number of elements) the union of a finite number of arithmetic sequences. \end{lem} \begin{proof} Assume that $a(n)$ is periodic with period $T$ and put $n_{0,i}=\min\Theta(e_i)$. Clearly $n_{0,i}+kT\in\Theta(e_i)$ for every $k$.
Let $\{n_{1,i},\dots,n_{m_i,i}\}=\Theta(e_i)\cap(n_{0,i},n_{0,i}+T)$. We claim that $$\Theta(e_i)=\bigcup_{j=0}^{m_i}\{n_{j,i}+kT:k\in\mathbb{N}\}.$$ Indeed, let $n\in\Theta(e_i)$. Then there must exist $k\in\mathbb{N}$ such that $n_{0,i}+kT\leq n<n_{0,i}+(k+1)T$. But in this case $n_{0,i}\leq n-kT<n_{0,i}+T$ so $n-kT=n_{j,i}$ for some $j\in\{0,\dots,m_i\}$ as claimed. If $a(n)$ is not periodic, but eventually periodic, we can reason in the same way but a finite number of initial terms must be considered separately, and the result follows. \end{proof} \begin{lem} Let $b$ be an even square-free integer and put $b=2m$. Then $\Theta(m):=\{n:\mathfrak{S}_b(n)=m\}=\{n:L_{b}(n)=m\}$. \end{lem} \begin{proof} Let $n=b^rn'$ with $r\geq 0$ and $b$ not dividing $n'$. Then $\mathfrak{S}_b(n)=m$ if and only if $(n')^n\equiv m$ (mod $b$). This implies that $(n')^n\equiv 0$ (mod $m$) and $(n')^n\equiv 1$ (mod $2$) simultaneously. But, $b$ being square-free, it follows that $n'\equiv 0$ (mod $m$) and $n'\equiv 1$ (mod $2$); i.e., $m\equiv (n')^n\equiv n'$ (mod $b$). Thus $L_b(n)=L_b(n')\equiv n'\equiv m$ (mod $b$). Since the steps above are reversible the proof is complete. \end{proof} Let us now define the following family of sets: $$\mathcal{C}_i:=\{mb^{i-1}+kb^i:k\in\mathbb{N}\}.$$ Observe that $\mathcal{C}_i\subset \Theta(m)$ and the previous lemma implies that $$\Theta(m)=\bigcup_{i\geq 1}\mathcal{C}_i.$$ We are now in a position to prove the following result. \begin{prop} For every even and square-free integer $b$, the sequence $\mathfrak{S}_b(n)$ is not eventually periodic. \end{prop} \begin{proof} Assume that $\mathfrak{S}_b(n)$ is eventually periodic. Then, due to Lemma \ref{L1} it follows that (with the exception of a finite number of elements) the set $\Theta(m)$ is a finite union of arithmetic sequences; i.e., $\displaystyle{\Theta(m)=\bigcup_{i=1}^r A_i}$.
To prove the result we can put aside, without loss of generality, the finite number of elements which do not lie in this finite union of arithmetic sequences. Let $a_{0,i}=\min A_i$ so that $A_i=\{a_{0,i}+kd_i:k\in\mathbb{N}\}$ for every $i$. If we write $a_{k,i}=a_{0,i}+kd_i$, the following facts should be clear from the very definition of the sets $\mathcal{C}_i$: \begin{enumerate} \item If $a_{k,i},a_{k+1,i}\in\mathcal{C}_j$ for some $k\in\mathbb{N}$, then $b^j \mid d_i$ and $a_{h,i}\in\mathcal{C}_j$ for every $h\geq k$. \item If $a_{k,i}\in\mathcal{C}_{j_1}$ and $a_{k+1,i}\in\mathcal{C}_{j_2}$, then $j_2>j_1$ because, otherwise, $a_{k+1,i}\not\in\Theta(m)$. \item If $a_{k,i}\in\mathcal{C}_{j_1}$ and $a_{k+1,i}\in\mathcal{C}_{j_2}$ with $j_2>j_1$, then $a_{k+2,i}\in \mathcal{C}_{j_1}$ (indeed, modulo $b^{j_1}$ we have $a_{k+1,i}\equiv 0$ and hence $d_i\equiv -mb^{j_1-1}\equiv mb^{j_1-1}$, because $b=2m$). Consequently, we can apply the previous point to find a contradiction. \end{enumerate} The three points above show that if $\displaystyle{\Theta(m)=\bigcup_{i=1}^r A_i}$, then each $A_i$ is eventually contained in some fixed $\mathcal{C}_{j(i)}$. This clearly contradicts the fact that $\displaystyle{\Theta(m)=\bigcup_{i\geq 1}\mathcal{C}_{i}}$ and the proof is finished. \end{proof} \subsection{The prime power case} In this section we focus on the behavior of the sequence $\mathfrak{S}_{p^t}(n)$ with $p$ a prime and $t\geq 1$. Since this situation is rather different from the situation of the previous section we will have to use different techniques here. In fact we have to study the case $t=1$ separately. \subsubsection{The case $t=1$} To study the behavior of the sequence $\mathfrak{S}_{p}(n)$ for every prime $p$ we will make use of some kind of ``fractality'' of this sequence, which is established in the following lemma. \begin{lem}\label{fractal} If $p$ is a prime, then $\mathfrak{S}_p(n)=\mathfrak{S}_p(pn)$. \end{lem} \begin{proof} If $p\nmid n$, since $p\nmid n^n$, we have that $\mathfrak{S}_p(n)=L_p(n^n)\equiv n^n$ (mod $p$).
On the other hand, $\mathfrak{S}_p(pn)=L_p(p^{pn}n^{pn})=L_p(n^{pn})\equiv n^{pn}\equiv n^n$ (mod $p$). Now, if $n=p^mn'$ with $p\nmid n'$, then: \begin{align*} \mathfrak{S}_{p}(pn)&=L_p(p^{pn}n^{pn})=L_p(n^{pn})=L_p(p^{mpn}(n')^{pn})=\\&=L_p((n')^{pn})\equiv (n')^{pn}=(n')^{p^{m+1}n'}\equiv (n')^{n'}\ \textrm{(mod $p$)}, \end{align*} while: $$\mathfrak{S}_p(n)=L_p(n^n)=L_p(p^{mn}(n')^{n})=L_p((n')^{p^mn'})\equiv (n')^{p^mn'}\equiv (n')^{n'}\ \textrm{(mod $p$)}$$ and hence the result. \end{proof} The previous lemma gives us, in addition, some information about the period of $\mathfrak{S}_p(n)$, if it exists. \begin{lem} If the sequence $\mathfrak{S}_p(n)$ is eventually periodic of period $T$, then $p\nmid T$. \end{lem} \begin{proof} If $p \mid T$ we have: $$\mathfrak{S}_p(n)=\mathfrak{S}_p(pn)=\mathfrak{S}_p(pn+T)=\mathfrak{S}_p(pn+pT')=\mathfrak{S}_{p}(n+T')$$ with $T'<T$, a contradiction. \end{proof} The next proposition proves the non-periodicity of $\mathfrak{S}_p(n)$ if $p$ is odd. \begin{prop} If $p$ is an odd prime, the sequence $\mathfrak{S}_p(n)$ is not eventually periodic. \end{prop} \begin{proof} Assume, on the contrary, that the sequence is eventually periodic; i.e., that $\mathfrak{S}_p(n)=\mathfrak{S}_p(n+T)$ eventually, with minimal $T$. Take $n=p^mn'$ with $m\geq 1$ and $p\nmid n'$. As in Lemma~\ref{fractal} above, $\mathfrak{S}_{p}(n)\equiv (n')^{n'}$ (mod $p$). Now, since $p \mid n$ but $p\nmid T$, it follows that $p\nmid (n+T)^{n+T}$ and thus: $$\mathfrak{S}_p(n+T)\equiv (n+T)^{n+T}\equiv T^{n+T}\equiv T^{n'+T}\ \textrm{(mod $p$)}.$$ We have thus seen that for every $n'$ such that $p\nmid n'$, $T^{n'+T}\equiv (n')^{n'}$ (mod $p$). If we take $n'=1$ it follows that $T^{T+1}\equiv 1$ (mod $p$). If we take $n'=p-1$ (recall that $p\neq 2$) it follows that $T^T\equiv 1$ (mod $p$). These facts together imply that $T\equiv 1$ (mod $p$), but this would imply that $(n')^{n'}\equiv 1$ (mod $p$) for every $n'$ with $p\nmid n'$. This is a contradiction and the proof is complete.
\end{proof} To complete the study in this case it is enough to observe that $\mathfrak{S}_2(n)$ is obviously constant with $\mathfrak{S}_2(n)=1$ for every $n\in\mathbb{N}$ because, in base 2, the last non-zero digit of any number is 1. \subsubsection{The case $t>1$} Now, we turn to the sequence $\mathfrak{S}_{p^t}(n)$ with $p$ an odd prime and $t>1$. In this case we have the following analogue of Lemma \ref{fractal} to describe the ``fractality'' of $\mathfrak{S}_{p^t}(n)$. \begin{lem} Let $p$ be a prime and let $t>1$ be any integer. Then $\mathfrak{S}_{p^t}(p^tn)=\mathfrak{S}_{p^t}(p^{t+\varphi(t)}n)$ for every $n\in\mathbb{N}$. \end{lem} \begin{proof} Put $n=p^mn'$ with $m\geq 0$ and $p\nmid n'$. Then: \begin{align*}\mathfrak{S}_{p^t}(p^tn)&=\mathfrak{S}_{p^t}(p^{m+t}n')=L_{p^t}\left(p^{(m+t)p^tn}(n')^{p^tn}\right)=\\&=L_{p^t}\left(p^{mp^tn}(n')^{p^tn}\right)\equiv p^{\alpha}(n')^{p^tn}\ \textrm{(mod $p^t$)},\end{align*} where $\alpha\in\{0,\dots,t-1\}$ is the class of $mp^tn$ modulo $t$. On the other hand it can be seen in the same way that: $$\mathfrak{S}_{p^t}(p^{t+\varphi(t)}n)\equiv p^{\beta}(n')^{p^{t+\varphi(t)}n}\ \textrm{(mod $p^t$)},$$ where $\beta\in\{0,\dots,t-1\}$ is the class of $mp^{t+\varphi(t)}n$ modulo $t$. Now, to finish the proof it is enough to see that $p^{\alpha}(n')^{p^tn}\equiv p^{\beta}(n')^{p^{t+\varphi(t)}n}$ (mod $p^t$). We have $p^{t+\varphi(t)}\equiv p^t$ (mod $t$) (note that $a^{\varphi(t)+k}\equiv a^{k}$ (mod $t$) for every $a$ whenever $k\geq \log_2 t$, and here $k=t$), thus $\alpha=\beta$, and since $p^{t+\varphi(t)}=p\frac{p^{\varphi(t)}-1}{p-1}\varphi(p^t)+p^t$ it follows (recall that $p\nmid n'$) that $(n')^{p^{t+\varphi(t)}}\equiv (n')^{p^t}$ (mod $p^t$) and we are done. \end{proof} Let us now define the sequence $S(n):=\mathfrak{S}_{p^t}(p^tn)$. The following result summarizes some properties of this sequence. \begin{lem}\label{propi} Let $p$ be a prime and let $S(n)$ be the sequence defined above. Then, the following properties hold: \begin{enumerate} \item[i)] $S(n)=S(p^{\varphi(t)}n)$ for every $n\in\mathbb{N}$.
\item[ii)] If $S(n)$ is (eventually) periodic of period $T$, then $p\nmid T$. \item[iii)] If $\mathfrak{S}_{p^t}(n)$ is (eventually) periodic of period $T$, then $S(n)$ is also (eventually) periodic and its period divides $T$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[i)] This is the previous lemma. \item[ii)] If $p\mid T$ then $T=pT'$ and we have that $S(n)=S(p^{\varphi(t)}n)=S(p^{\varphi(t)}n+p^{\varphi(t)-1}T)=S(p^{\varphi(t)}n+p^{\varphi(t)}T')=S(n+T')$ with $T'<T$, a contradiction. \item[iii)] Let $T$ be the period of $\mathfrak{S}_{p^t}(n)$. Then $S(n)=\mathfrak{S}_{p^t}(p^tn)=\mathfrak{S}_{p^t}(p^tn+p^tT)=S(n+T)$ as claimed. \end{enumerate} \end{proof} As a consequence of the previous lemma, to prove that $\mathfrak{S}_{p^t}(n)$ is not eventually periodic it is enough to see that neither is $S(n)$. \begin{prop} Let $p$ be a prime and $S(n)=\mathfrak{S}_{p^t}(p^tn)$. If $S(n)$ is eventually periodic, then $t=p^s$ for some $s\geq 0$. \end{prop} \begin{proof} We know by hypothesis that $S(n)=S(n+T)$ for every $n\geq n_0$ and put $n=p^mn'$ with $p\nmid n'$ as usual. Note that since we are dealing with eventual periodicity, we can assume without loss of generality that $m\geq t$. Then: \begin{align*} S(n)&=\mathfrak{S}_{p^t}(p^{m+t}n')=L_{p^t}\left(p^{(m+t)p^tn}(n')^{p^tn}\right)=L_{p^t}\left(p^{mnp^t}(n')^{p^tn}\right)\equiv\\ &\equiv p^{\alpha}(n')^{p^tn}\ \textrm{(mod $p^t$)}, \end{align*} where $\alpha\in\{0,\dots,t-1\}$ is the class of $mnp^t$ modulo $t$. On the other hand, \begin{align*}S(n+T)&=\mathfrak{S}_{p^t}(p^tn+p^tT)=L_{p^t}\left((p^tn+p^tT)^{p^tn+p^tT}\right)=\\&=L_{p^t}\left((n+T)^{p^tn+p^tT}\right)\equiv (n+T)^{p^tn+p^tT}\equiv T^{p^t(n+T)}\ \textrm{(mod $p^t$)},\end{align*} since $p^t\mid n$ because we have chosen $m\geq t$.
Thus, we have seen that for every $n_0\leq n=p^mn'$ with $p\nmid n'$, if $\alpha$ is the class of $mnp^t$ modulo $t$, then: $$T^{p^t(n+T)}\equiv p^{\alpha}(n')^{p^tn}\ \textrm{(mod $p^t$)}.$$ Clearly, if $t$ is not a power of $p$, we can choose $m$ and $n$ such that $\alpha\neq 0$, so it follows that $p$ divides $T^{p^t(n+T)}$ and hence that $p\mid T$, contradicting ii) of Lemma \ref{propi}. \end{proof} Due to the previous proposition we only have to worry about the case $\mathfrak{S}_{p^{p^s}}(n)$. We will see that if $p$ is odd, this sequence is not eventually periodic. \begin{prop} Let $p$ be an odd prime, let $t=p^s$, and let $S(n)=\mathfrak{S}_{p^t}(p^tn)$. Then the sequence $S(n)$ is not eventually periodic. \end{prop} \begin{proof} We can reason in a similar way to that in the previous proposition, but in this case $\alpha=0$ necessarily. Hence, using the same notation as in the previous result, we get $T^{p^{p^s}(n+T)}\equiv (n')^{p^{p^s}n}$ (mod $p^{p^s}$). We can choose $n'=1$ so it follows that $T^{p^{p^s}(p^m+T)}\equiv 1$ (mod $p^{p^s}$) and, consequently, that $T^{T+1}\equiv 1$ (mod $p$). If we now choose $n'=p-1$ (recall that $p\neq 2$) we get $T^{p^{p^s}(p^m(p-1)+T)}\equiv 1$ (mod $p^{p^s}$) and, consequently, that $T^{T}\equiv 1$ (mod $p$). Putting these two results together we get that $T\equiv 1$ (mod $p$), so $T^{p^{p^s}}\equiv 1$ (mod $p^{p^s}$) by the lifting-the-exponent lemma. Then, we have that $(n')^{p^{p^s}n}\equiv 1$ (mod $p^{p^s}$) for every $n'$ such that $p\nmid n'$; i.e., $(n')^{n'}\equiv 1$ (mod $p$) for every $n'$ such that $p\nmid n'$. Clearly this is impossible for $p\neq 2$ (take, for instance, $n'$ congruent to a primitive root modulo $p$ and to $1$ modulo $p-1$), and the proof is complete. \end{proof} So, it only remains to study the sequence $\mathfrak{S}_{2^{2^s}}(n)$. \begin{prop} Let $b=2^{2^s}$. Then $\mathfrak{S}_b(n)=\mathfrak{S}_b(n+b)$ for every $n\in\mathbb{N}$. \end{prop} \begin{proof} First of all we consider the case $b\nmid n$. In this case, since $b\nmid n+b$ it follows that $\mathfrak{S}_b(n)\equiv n^n$ (mod $b$) and $\mathfrak{S}_b(n+b)\equiv (n+b)^{n+b}\equiv n^{n+b}$ (mod $b$).
We now consider two possibilities: \begin{itemize} \item[i)] If $n$ is odd, then $n^b\equiv 1$ (mod $b$) because $\varphi(b)=\varphi(2^{2^s}) \mid 2^{2^s}=b$. Thus $n^{n+b}\equiv n^n$ (mod $b$) and we are done. \item[ii)] If $n$ is even, we put $n=2^mn'$ with $n'$ odd and $m<2^s$. Thus, $n^{n+b}=2^{m2^{2^s}}(n')^{2^{2^s}}n^n$. Since $s<2^s$ it follows that $2^{m2^{2^s}}$ is a power of $b$ and consequently $L_b\left(2^{m2^{2^s}}\right)=1$. Moreover, since $(n')^{b}\equiv 1$ (mod $b$) it also follows that $L_b\left((n')^{b}\right)=1$ so: \begin{align*}\mathfrak{S}_b(n+b)&=L_b(n^{n+b})\equiv L_b\left(2^{m2^{2^s}}\right)L_b\left((n')^{b}\right)L_b(n^n)=L_b(n^n)=\\&=\mathfrak{S}_b(n)\ \textrm{(mod $b$)}\end{align*} and the proof is complete in this case. \end{itemize} Now, we have to consider the case when $b \mid n$; i.e., when $n=b^an'=2^{a2^s}n'$ with $b\nmid n'$. In this case $\mathfrak{S}_b(n)=L_b(n^n)=L_b\left((n')^{2^{a2^s}n'}\right)$. Now, if $n'$ is odd we have that $L_b\left((n')^{2^{a2^s}n'}\right)\equiv(n')^{2^{a2^s}n'}\equiv 1$ (mod $b$). On the other hand, if $n'$ is even; i.e., $n'=2^mn''$ with $n''$ odd and $m<2^s$, we have that: $L_b\left((n')^{2^{a2^s}n'}\right)=L_b\left(2^{m2^{a2^s}n'}(n'')^{2^{a2^s}n'}\right)=L_b\left((n'')^{2^{a2^s}n'}\right)\equiv 1$ (mod $b$). Thus, we have seen that if $b \mid n$, then $\mathfrak{S}_b(n)=1$. We will have to compute now $\mathfrak{S}_b(n+b)$ in this case. To do so, observe that \begin{align*}\mathfrak{S}_b(n+b)&=L_b\left((n+b)^{n+b}\right)=L_b\left(\left(2^{2^s}(2^{2^s(a-1)}n'+1)\right)^{n+2^{2^s}}\right)=\\&=L_b\left(\left(2^{2^s(a-1)}n'+1\right)^{n+2^{2^s}}\right),\end{align*} and two cases arise: \begin{itemize} \item[i)] If $a>1$ then $b\nmid 2^{2^s(a-1)}n'+1$ and thus $L_b\left(\left(2^{2^s(a-1)}n'+1\right)^{n+2^{2^s}}\right)\equiv \left(2^{2^s(a-1)}n'+1\right)^{n+2^{2^s}}\equiv 1$ (mod $b$).
\item[ii)] If $a=1$ we must compute $L_b\left((n'+1)^{n+2^{2^s}}\right)$ and we have two subcases: \begin{itemize} \item[ii1)] If $n'+1$ is odd, then $(n'+1)^{n+2^{2^s}}\equiv 1$ (mod $b$). \item[ii2)] If $n'+1$ is even, $n'+1=2^mn''$ with $n''$ odd and clearly $$L_b\left((n'+1)^{n+2^{2^s}}\right)=L_b\left((n'')^{2^{2^s}(n'+1)}\right)=1.$$ \end{itemize} \end{itemize} Thus, we have seen that if $b \mid n$, then $\mathfrak{S}_b(n+b)=1=\mathfrak{S}_b(n)$ and the proof is completely finished. \end{proof} Putting all of the above together, we have proved the following result. \begin{thm} The sequence $\mathfrak{S}_{p^t}(n)$ is eventually periodic if and only if $p=2$ and $t=2^s$ for some $s\in\mathbb{N}$ and, in that case, it is periodic. \end{thm} \section{An ending conjecture} The techniques that we have used in this paper have not allowed us to settle the general case. Nevertheless, based on computational evidence, the authors are convinced that the case $b=2^{2^s}$ provides the only example in which the considered sequence is eventually periodic (and, in fact, periodic); that is, we propose the following conjecture. \begin{conj} The sequence $\mathfrak{S}_b(n)$ is eventually periodic if and only if $b=2^{2^s}$ for some $s\in \mathbb{N}$ and, moreover, in that case the sequence is periodic. \end{conj}
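\noindent As a small hand-computed illustration of the periodic case (this computation is ours and is not needed for the proofs), take $s=1$, so that $b=2^{2^1}=4$. Computing the last non-zero digit of $n^n$ in base $4$ for $n=1,\dots,8$ gives $$\mathfrak{S}_4:\ 1,\,1,\,3,\,1,\,1,\,1,\,3,\,1,\dots$$ For instance, $3^3=27=(123)_4$ gives $\mathfrak{S}_4(3)=3$, while $6^6=4^3\cdot 729$ with $729=(23121)_4$ gives $\mathfrak{S}_4(6)=1$. The sequence is periodic of period $4$, in accordance with the results above.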
https://arxiv.org/abs/1203.4066
On the last digit and the last non-zero digit of $n^n$ in base $b$
In this paper we study the sequences defined by the last and the last non-zero digits of $n^n$ in base $b$. For the sequence given by the last digits of $n^n$ in base $b$, we prove its periodicity using different techniques than those used by W. Sierpinski and R. Hampel. In the case of the sequence given by the last non-zero digits of $n^n$ in base $b$ (which had been studied only for $b=10$) we show the non-periodicity of the sequence when $b$ is an odd prime power and when it is even and square-free. We also show that if $b=2^{2^s}$ the sequence is periodic and conjecture that this is the only such case.
https://arxiv.org/abs/1310.1181
On the expectation of normalized Brownian functionals up to first hitting times
Let B be a Brownian motion and T its first hitting time of the level 1. For U a uniform random variable independent of B, we study in depth the distribution of T^{-1/2}B_{UT}, that is the rescaled Brownian motion sampled at uniform time. In particular, we show that this variable is centered.
\section{Introduction}\label{intro} In this paper, we study the expectations of the random variables $A_a^{(m)}$ and $\tilde A_a^{(m)}$ defined for $a>0$ and $m\geq 0$ by $$A_a^{(m)}=\frac{1}{T_a^{1+m/2}}\int_0^{T_a}|B_s|^m\text{sgn}(B_s) ds,~~\tilde A_a^{(m)}=\frac{1}{T_a^{1+m/2}}\int_0^{T_a}|B_s|^mds,$$ where $B$ is a Brownian motion and $T_a$ denotes the first hitting time of the level $a$ by $B$. First, remark that $T_a/a^2$ is the first hitting time of $a$ by $(B_{a^2s})$. Therefore, the scaling property of the Brownian motion implies that the laws of $A_a^{(m)}$ and $\tilde A_a^{(m)}$ do not depend on $a$.\\ \noindent To fix ideas, let us now focus in this introduction on the variables $A_a^{(m)}$. These variables are clearly asymmetric functionals of the Brownian motion. Nevertheless, we may wonder whether there exist values of $m$ such that $A_a^{(m)}$ is centered (we will show later that these variables have moments of all orders). Indeed, consider for example the case where $m$ is an odd integer: using a symmetry argument, it is clear that $$\mathbb{E}[A_a^{(m)}]=-\mathbb{E}[A_{-a}^{(m)}],$$ where $A_{-a}^{(m)}$ is obviously defined. Since these two quantities do not depend on $a$, we get a given value, say $v_m$, for the expectations when the barrier is positive and $-v_m$ when it is negative. This somewhat suggests that $v_m$ may be equal to zero.\\ \noindent In fact, it turns out that the random variable $A_a^{(m)}$ is centered only for $m=1$. This result has several interesting consequences. In particular, we show that it can be very simply interpreted in terms of the Brownian meander. Moreover, we prove that the expectation of $A_a^{(m)}$ is negative for $m<1$ and positive for $m>1$.\\ \noindent Finally, note that these expectations are closely connected with the random variable $\alpha$ defined by $$\alpha=\frac{B_{UT_1}}{\sqrt{T_1}},$$ where $U$ is a uniform random variable, independent of $B$. 
For example, for $m$ an odd integer, the $m$-th moment of $\alpha$ is the expectation of $A_1^{(m)}$. This led us to give in Theorem \ref{density} the law of $\alpha$.\\ \noindent The paper is organized as follows. The specific case $m=1$ is treated in Section \ref{m1}. Our main theorem, which provides the expectations of $A_{1}^{(m)}$ for any $m\geq 0$, is given in Section \ref{genm} together with its proof and some related results. The proofs of several technical results together with additional remarks are relegated to the four Appendices A, B, C and D. \section{The case $m=1$}\label{m1} In this section, we state the nullity of the expectation of $A_1^{(1)}$, together with some associated results. \subsection{Centering property in the case $m=1$} \begin{theorem}\label{theo1} The random variable $A_1^{(1)}$ admits moments of all orders and is centered. \end{theorem} \noindent Theorem \ref{theo1} states that, as far as the expectation is concerned, between $0$ and $T_1$, the time spent by the Brownian motion in $(-\infty,0)$ is balanced by that spent in $[0,1]$. Again, it is tempting to deduce this result from the scaling and symmetry properties of $A_1^{(1)}$. However, Theorem \ref{theo2} will formalize that such an intuition is wrong. Indeed, we will for example show that the expectation of $A_1^{(3)}$ is non zero, although it satisfies the same scaling and symmetry properties as $A_1^{(1)}$. In fact, we will see that the expectation of $A_1^{(m)}$ is strictly positive for $m>1$ and strictly negative for $m<1$.\\ \noindent Theorem \ref{theo1} can in fact be interpreted as a corollary of the general result given in Theorem \ref{theo2} below. However, using Williams' time reversal theorem and some absolute continuity results for Bessel processes, a specific, elegant proof can be written for Theorem \ref{theo1}. So we give this proof in Appendix \ref{prooftheo1}.
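\noindent For the reader's convenience, let us spell out the scaling argument invoked in the Introduction; this is a routine verification, written here in our own notation. Fix $a>0$ and set $\tilde{B}_u=a^{-1}B_{a^2u}$, which is again a Brownian motion. If $\tilde{T}_1$ denotes the first hitting time of the level $1$ by $\tilde{B}$, then $T_a=a^2\tilde{T}_1$, and the change of variable $s=a^2u$ yields $$\int_0^{T_a}|B_s|^m\text{sgn}(B_s)ds=a^{m+2}\int_0^{\tilde{T}_1}|\tilde{B}_u|^m\text{sgn}(\tilde{B}_u)du,$$ while $T_a^{1+m/2}=a^{m+2}\tilde{T}_1^{1+m/2}$. Hence $A_a^{(m)}$, computed from $B$, coincides with $A_1^{(m)}$ computed from $\tilde{B}$, so that the law of $A_a^{(m)}$ does not depend on $a$; the same argument applies to $\tilde{A}_a^{(m)}$.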
\subsection{More integrability properties for $A_1^{(1)}$ and connection with Knight's identity} Let $(L_t)_{t\geq 0}$ be the local time process at $0$ of the Brownian motion $B$ and set $$\tau_l=\text{inf}\{t\geq 0,~L_t>l\},$$ for $l>0$. Recall that L\'evy's equivalence result gives the following equality: $$(|B_t|,L_t)_{t\geq 0}\underset{\mathcal{L}}{=}(S_t-B_t,S_t)_{t\geq 0},$$ with $S_t=\underset{s\leq t}{\text{sup }}B_s$, where $\underset{\mathcal{L}}{=}$ denotes equality in law, see \cite{revuz1999continuous}. Thus, we obtain that $A_1^{(1)}$ has the same law as the random variable $\zeta$ defined by $$\zeta=\frac{1}{\tau_1^{3/2}}\int_0^{\tau_1}(L_u-|B_u|)du.$$ We obviously have $$|\zeta|\leq \frac{1}{\sqrt{\tau_1}}+\frac{1}{\sqrt{\tau_1}}\underset{u\leq \tau_1}{\text{sup }}|B_u|.$$ On the one hand, it is well known that $1/\sqrt{\tau_1}$ follows the law of the absolute value of a standard Gaussian random variable. On the other hand, Knight's celebrated identity states the following equality: $$\frac{\tau_1}{(\underset{u\leq \tau_1}{\text{sup}}|B_u|)^2}\underset{\mathcal{L}}{=}T_2^3,$$ where $T_2^3=\text{inf}\{t,~R_t=2\},$ with $R$ a three dimensional Bessel process, see \cite{knight1988inverse}. Using the scaling property of the three dimensional Bessel process, we easily get the equality: $$T_2^3\underset{\mathcal{L}}{=}\frac{4}{(\underset{u\leq 1}{\text{sup }}R_u)^2}.$$ Therefore, we deduce that $$\frac{1}{\sqrt{\tau_1}}\underset{u\leq \tau_1}{\text{sup }}|B_u|\underset{\mathcal{L}}{=}\frac{1}{2}\underset{u\leq 1}{\text{sup }} R_u.$$ Hence, we easily deduce the following proposition: \begin{proposition} There exists $\varepsilon>0$ such that $$\mathbb{E}[\emph{exp}\big(\varepsilon (A_1^{(1)})^2\big)]<+\infty.$$ \end{proposition} \noindent We note that the same arguments yield that $A_1^{(m)}$ and $\tilde A_1^{(m)}$ admit moments of all orders.
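\noindent Let us also detail, for completeness, the scaling identity for the Bessel process used above (a one-line verification, written here in our own notation): for $t>0$, since $(R_{tu}/\sqrt{t})_{u\geq 0}$ is again a three dimensional Bessel process started at $0$, $$\mathbb{P}[T_2^3>t]=\mathbb{P}[\underset{u\leq t}{\text{sup }}R_u<2]=\mathbb{P}[\sqrt{t}\,\underset{u\leq 1}{\text{sup }}R_u<2]=\mathbb{P}\Big[\frac{4}{(\underset{u\leq 1}{\text{sup }}R_u)^2}>t\Big],$$ which is precisely the equality in law $T_2^3\underset{\mathcal{L}}{=}4/(\underset{u\leq 1}{\text{sup }}R_u)^2$.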
\subsection{Consequences of Theorem \ref{theo1} for the Bessel process, Brownian meander and Brownian bridge} We give in this subsection some corollaries of Theorem \ref{theo1} involving very classical processes, namely the three dimensional Bessel process, the Brownian meander, and the Brownian bridge. We start with a result about the three dimensional Bessel process, whose proof is given within the proof of Theorem \ref{theo1} in Appendix \ref{prooftheo1}. \begin{corollary}\label{bessel} Let $(R_t)_{t\geq 0}$ denote a three dimensional Bessel process. We have $$\mathbb{E}\big[\frac{1}{R_1^2}\int_0^1R_udu\big]=\sqrt{\frac{2}{\pi}}.$$ \end{corollary} \noindent Thanks to Imhof's relation, see \cite{biane1987processus,imhof1984density}, we immediately deduce the following result from Corollary \ref{bessel}: \begin{corollary}\label{mea} Let $(m_t)_{t\leq 1}$ be the Brownian meander. We have $$\mathbb{E}\big[\frac{1}{m_1}\int_0^1m_udu\big]=1.$$ \end{corollary} \noindent We now give a corollary involving the Brownian bridge. \begin{corollary} Let $(b_t)_{t\leq 1}$ denote the Brownian bridge and $(l_t)_{t\leq 1}$ its local time at zero.
We have $$\mathbb{E}\big[\frac{1}{l_1}\int_0^1|b_u|du\big]=\mathbb{E}\big[\frac{1}{l_1}\int_0^1l_udu\big]=\frac{1}{2}.$$ \end{corollary} \begin{proof} From \cite{biane1988quelques}, we get the following equality: $$(m_t,~t\leq 1)\underset{\mathcal{L}}{=}(|b_t|+l_t,~t\leq 1).$$ Thus, using Corollary \ref{mea}, we get \begin{equation}\label{blone} \mathbb{E}\big[\frac{1}{l_1}\int_0^1(|b_u|+l_u)du\big]=1.\end{equation} Now remark that the process $(\hat{b}_t)=(b_{1-t})$ is also a Brownian bridge whose local time at time $t$, denoted by $\hat{l}_t$, satisfies $$\hat{l}_t=l_1-l_{1-t}.$$ Consequently, $$\mathbb{E}\big[\frac{1}{l_1}\int_0^1l_udu\big]=\mathbb{E}\big[\frac{1}{l_1}\int_0^1 \hat l_u du\big]=\mathbb{E}\big[\frac{1}{l_1}\int_0^1(l_1-l_u)du\big].$$ This implies $$\mathbb{E}\big[\frac{1}{l_1}\int_0^1l_udu\big]=\frac{1}{2}$$ and therefore Equation \eqref{blone} provides $$\mathbb{E}\big[\frac{1}{l_1}\int_0^1|b_u|du\big]=\frac{1}{2}.$$ \end{proof} \noindent Finally, using a pathwise transformation between the meander and the Brownian excursion, see \cite{bertoin1994path}, Corollary \ref{mea} also enables to show the following result: \begin{corollary} Let $(e_t)_{t\leq 1}$ denote the standard Brownian excursion. We have $$\mathbb{E}\big[\int_0^1e_tdt\int_0^1\frac{1}{e_u}du\big]=\frac{3}{2}.$$ \end{corollary} \subsection{The case of two barriers} After the striking result given in Theorem \ref{theo1}, it is natural to wonder whether the expectation remains equal to zero if $T_a$ is replaced by $T_{a,b}$, where $T_{a,b}$ is the first exit time of the interval $(-b,a)$, with $a>0$ and $b>0$. Indeed, remark that the random variable $A_{a,b}^{(1)}$ defined by $$A_{a,b}^{(1)}=\frac{1}{T_{a,b}^{3/2}}\int_0^{T_{a,b}}B_s ds,$$ still enjoys a scaling property in the sense that its law only depends on the ratio $b/a$. 
In fact, the following theorem states that the expectation is no longer zero in this case\footnote{However, note that from a numerical point of view, the obtained values for $\mathbb{E}[A_{a,b}^{(1)}]$ are systematically very small, for any $a$ and $b$.}: \begin{theorem}\label{2bar} Let $\lambda=b/a$. We have $$\mathbb{E}[A_{a,b}^{(1)}]=\frac{1}{\sqrt{2\pi}}(1+\lambda)\int_0^\infty\frac{\delta}{\emph{sh}(\delta (1+\lambda))^2}(\lambda\emph{sh}(\delta)-\emph{sh}(\delta \lambda))d\delta.$$ In particular, $\mathbb{E}[A_{a,b}^{(1)}]\neq 0$ if $\lambda\notin \{0,1\}$. \end{theorem} \noindent The proof of this result is given in Appendix \ref{proof2bar}. In fact, a general formula for $$\mathbb{E}\big[\frac{1}{T_{a,b}^{\theta}}\int_0^{T_{a,b}} B_s ds \big],$$ with $\theta>0$, is given within this proof. Finally, note that Theorem \ref{theo1} can also be recovered from Theorem \ref{2bar} by letting the downward barrier tend to $-\infty$. \section{The general case}\label{genm} \subsection{Computation of the expectations} For $x\in\mathbb{R}$, we set $x^+=\text{max}(x,0)$ and $x^-=\text{max}(-x,0)$. For $m\geq 0$, we define $$A_{+}^{(m)}=\frac{1}{T_1^{1+m/2}}\int_0^{T_1}(B_s^+)^m ds,~~A_{-}^{(m)}=\frac{1}{T_1^{1+m/2}}\int_0^{T_1}(B_s^-)^m ds,$$ with the convention $0^0=0$. We also write $$I_+^{(m)}=\mathbb{E}[A_{+}^{(m)}],~~I_-^{(m)}=\mathbb{E}[A_{-}^{(m)}]$$ and $$I^{(m)}=I_+^{(m)}-I_-^{(m)}.$$ Furthermore, we note that $I_{\pm}^{(m)}$ is the moment of order $m$ of the random variable $\alpha^{\pm}$, where $$\alpha=\frac{B_{UT_1}}{\sqrt{T_1}},$$ with $U$ a uniform random variable independent of the Brownian motion $B$. We study the variable $\alpha$ in more detail in Section \ref{alpha}.\\ \noindent For $m\geq 0$, let $$c_m=\frac{\Gamma(1+m)}{2^{m/2}\Gamma(1+m/2)}=\frac{1}{\sqrt{\pi}}2^{m/2}\Gamma(\frac{1+m}{2})=\mathbb{E}[|N|^m],$$ where $N$ is a standard Gaussian random variable and $\Gamma$ denotes the Gamma function.
We have the following theorem: \begin{theorem}\label{theo2} Let $m\geq 0$ and introduce $$\phi(m)=\int_0^2\frac{y^{m+1}}{1+y}dy.$$ The following formulas hold: $$I_+^{(m)}=\frac{c_m}{2^{m+1}}\phi(m),~~I_-^{(m)}=\frac{c_m}{2^{m+1}}\emph{log}(3).$$ \end{theorem} \noindent In particular, we note that $\phi(0)=2-\text{log}(3)$, $\phi(1)=\text{log}(3)$, $\phi(2)=8/3-\text{log}(3)$ and $\phi(3)=4/3+\text{log}(3)$. We give the proof of Theorem \ref{theo2} in Section \ref{prooftheo2}. \subsection{Comments about Theorem \ref{theo2}} \noindent $\bullet$ The function $\phi$ is well defined for $m\in(-2,+\infty)$ and satisfies $\phi(-1)=\phi(1)=\text{log}(3)$. Thus, we retrieve in Theorem \ref{theo2} the fact that $\mathbb{E}[A_1^{(1)}]=I_+^{(1)}-I_-^{(1)}=0.$\\ \noindent $\bullet$ We easily get that $\phi$ is twice differentiable and, for $m\ge 0$, $$\phi'(m)=\int_0^2\frac{y^{m+1}\text{log}(y)}{1+y}dy,~~\phi''(m)=\int_0^2\frac{y^{m+1}(\text{log}(y))^2}{1+y}dy.$$ Hence $\phi$ is convex and furthermore, we show in Appendix \ref{fonction} that $\phi'(0)>0$. This implies that $\phi$ and $\phi'$ are increasing on $\mathbb{R}^+$. Hence, since $$I^{(m)}=\frac{c_m}{2^{m+1}}\big(\phi(m)-\text{log}(3)\big),$$ we get $I^{(m)}>0$ for $m>1$ and $I^{(m)}<0$ for $m<1$. This can be interpreted as follows: from the point of view of $A_1^{(m)}$, for $m>1$, the time spent by the Brownian motion in $[0,1]$ is dominant whereas for $m<1$, the time spent in $(-\infty,0)$ is more important.\\ \noindent $\bullet$ Let $(L_t^x,~x\in\mathbb{R},~t\geq 0)$ denote the local time of the Brownian motion $B$. 
Within the proof of Theorem \ref{theo2}, we are led to show the following interesting result: \begin{proposition}\label{transfo} Let $\mu>0$, $0<b<1$ and $x\geq 0$. We have $$\mathbb{E}[L_{T_1}^b\emph{exp}(-\mu^2T_1/2)]=\frac{1}{\mu}\Big(\emph{exp}(-\mu)-\emph{exp}\big(-\mu(3-2b)\big)\Big)$$ and $$\mathbb{E}[L_{T_1}^{-x}\emph{exp}(-\mu^2T_1/2)]=\frac{1}{\mu}\Big(\emph{exp}\big(-\mu(1+2x)\big)-\emph{exp}\big(-\mu(3+2x)\big)\Big).$$ \end{proposition} \noindent We also give another proof of Proposition \ref{transfo}, based on the Ray-Knight theorem, in Appendix \ref{RK}. \subsection{Uniform sampling up to hitting time}\label{alpha} We now want to interpret Theorem \ref{theo2} as a result about sampling independently and uniformly the properly rescaled Brownian motion up to its first hitting time $T_1$. More precisely, let us introduce $(l_1^y,~y\in \mathbb{R}),$ the local time at time $1$ of the process $$\big(\frac{B_{sT_1}}{\sqrt{T_1}},~s\leq 1\big).$$ Let $f$ be a Borel non negative function and $U$ a uniform random variable independent of any other random variable defined here. Using the occupation formula, we get $$\mathbb{E}[f\big(\frac{B_{UT_1}}{\sqrt{T_1}}\big)]=\mathbb{E}[\int_0^1f\big(\frac{B_{sT_1}}{\sqrt{T_1}}\big)ds]=\int_{-\infty}^{+\infty}f(y)\mathbb{E}[l_1^y]dy.$$ Hence $h(y)=\mathbb{E}[l_1^y]$ is the density of $\alpha$ at point $y$. The following result is easily deduced from Theorem \ref{theo2}, by injectivity of the Mellin transform.
\begin{theorem}\label{density} The density $h$ satisfies for $y\geq 0$ $$h(y)=\sqrt{\frac{2}{\pi}}\int_0^2\frac{1}{1+w}\emph{exp}(-2y^2/w^2)dw$$ and for $y\leq 0$ $$h(y)=\sqrt{\frac{2}{\pi}}\emph{log}(3)\emph{exp}(-2y^2).$$ \end{theorem} \noindent Hence, conditional on $\alpha>0$, the law of $\alpha^+$ is a mixture of absolute Gaussian laws, whereas conditional on $\alpha<0$, $\alpha^-$ is distributed as the absolute value of a Gaussian random variable.\\ \noindent Remark that for $y\geq 0$, we have the obvious inequality $$h(y)\leq \sqrt{\frac{2}{\pi}}\text{log}(3)\text{exp}(-y^2/2).$$ Therefore, we have the following corollary: \begin{corollary} For $\varepsilon<1/2$, the random variable $\alpha^+$ satisfies $$\mathbb{E}[\emph{exp}\big(\varepsilon(\alpha^+)^2\big)]<+\infty.$$ \end{corollary} \noindent In fact, thanks to Proposition \ref{transfo}, we can even provide the density at point $y$ of $\alpha$ conditional on $T_1=t$. We denote this density by $h(y,t)$. Obvious relations between $(l_1^y)$ and $(L_t^y)$ yield $$h(y,t)=\mathbb{E}_{T_1=t}[l_1^y]=\frac{1}{\sqrt{t}}\mathbb{E}_{T_1=t}[L_t^{y\sqrt{t}}].$$ We easily obtain the following corollary from Proposition \ref{transfo}: \begin{corollary}\label{cond} The conditional density $h(y,t)$ satisfies for $0\leq y\sqrt{t}\leq 1$, $$h(y,t)\emph{exp}\big(-1/(2t)\big)t^{-1/2}=\emph{exp}\big(-1/(2t)\big)-\emph{exp}\big(-(3-2y\sqrt{t})^2/(2t)\big)$$ and for $x\geq 0$ $$h(-x,t)\emph{exp}\big(-1/(2t)\big)t^{-1/2}=\emph{exp}\big(-(1+2x\sqrt{t})^2/(2t)\big)-\emph{exp}\big(-(3+2x\sqrt{t})^2/(2t)\big).$$ \end{corollary} \subsection{Interpretation in terms of the Brownian meander} In the same spirit as in Corollary \ref{mea}, we can give an interpretation of Theorem \ref{theo2} in terms of the Brownian meander.
Using Williams' time reversal theorem in the same way as in the proof of Theorem \ref{theo1}, see Appendix \ref{prooftheo1}, together with Imhof's relation, see \cite{biane1987processus,imhof1984density}, as already done for Corollary \ref{mea}, we get that for any non negative measurable functions $f$ and $g$, $$\mathbb{E}\Big[\int_0^1f\big(\frac{B_{sT_1}}{\sqrt{T_1}}\big)ds\,g\big(\frac{1}{\sqrt{T_1}}\big)\Big]=\sqrt{\frac{2}{\pi}}\mathbb{E}\big[\int_0^1f(m_1-m_u)du\frac{g(m_1)}{m_1}\big],$$ where $m$ denotes the Brownian meander. Let $U$ be a uniform random variable, independent of all other quantities. The last relation is equivalent to $$\mathbb{E}\Big[f\big(\frac{B_{UT_1}}{\sqrt{T_1}}\big)g\big(\frac{1}{\sqrt{T_1}}\big)\Big]=\sqrt{\frac{2}{\pi}}\mathbb{E}\big[f(m_1-m_U)\frac{g(m_1)}{m_1}\big].$$ Thus, from Corollary \ref{cond}, we are able to compute the density of $m_U$ conditional on the value $m_1$. More precisely, we have the following theorem: \begin{theorem} Let $f$ be a Borel non negative function. We have $$\mathbb{E}_{m_1=y}[f(m_U)]=\int_0^yh\big(y-z,\frac{1}{y^2}\big)f(z)dz+\int_y^{+\infty}h\big(-(z-y),\frac{1}{y^2}\big)f(z)dz.$$ \end{theorem} \subsection{Future developments} In this work, we have studied some properties of random sampling through the random variable $$\alpha=\frac{B_{UT_1}}{\sqrt{T_1}}.$$ Another interesting variable is the variable $\beta$ defined by $$\beta=\frac{B_{U\tau_1}}{\sqrt{\tau_1}},$$ with $\tau_l=\text{inf}\{t\geq 0,~L_t>l\}.$ In fact, the associated process $$\big(\frac{B_{s\tau_1}}{\sqrt{\tau_1}},~s\leq 1\big)$$ is called the pseudo Brownian bridge and has been considered more explicitly in the literature than $$\big(\frac{B_{sT_1}}{\sqrt{T_1}},~s\leq 1\big).$$ In particular, it enjoys some absolute continuity property with respect to the standard Brownian bridge, see \cite{biane1987processus}. We intend to present results related to $\beta$ in a forthcoming work, in a way which will help us to recover the interesting law of $\alpha$.
For now, we only mention that $\beta$ is distributed as $N/2$, where $N$ is a standard Gaussian random variable. \subsection{Proof of Theorem \ref{theo2}}\label{prooftheo2} Let $m\ge 0$. We split the proof into several steps. \subsubsection*{Step 1: Introducing a natural measure} First, let us remark that \begin{align*} I_{\pm}^{(m)}&=\frac{1}{\Gamma(1+m/2)}\mathbb{E}\big[\int_0^{+\infty}\lambda^{m/2}\text{exp}(-\lambda T_1)d\lambda\int_0^{T_1} (B_s^{\pm})^m ds\big]\\ &=\frac{1}{2^{m/2}\Gamma(1+m/2)}\mathbb{E}\big[\int_0^{+\infty}\mu^{1+m}\text{exp}(-\mu^2 T_1/2)d\mu\int_0^{T_1} (B_s^{\pm})^mds\big]. \end{align*} Hence, it is natural to introduce for $\mu\ge 0$ the measure $I_{\mu}$, which to a positive function $\psi$ associates $$I_{\mu}(\psi)=\mathbb{E}\big[\int_0^{T_1}\psi(B_s)\text{exp}(-\mu^2 T_1/2)ds\big]=e^{-\mu}\mathbb{E}\big[\int_0^{T_1}\psi(B_s)\text{exp}(\mu-\mu^2 T_1/2)ds\big].$$ \subsubsection*{Step 2: Computation of $I_{\mu}(\psi)$} Let $(S_s)=(\underset{u\leq s}{\text{sup }}B_u)$. 
Using the martingale property of the process $\text{exp}(\mu B_s-\mu^2s/2)$, we get $$I_{\mu}(\psi)=e^{-\mu}\mathbb{E}\big[\int_0^{+\infty}\psi(B_s)\mathrm{1}_{\{S_s<1\}}\text{exp}(\mu B_s-\mu^2 s/2)ds\big].$$ We now use the following well known formula, see for example \cite{pitman1999distribution}: for $s> 0$ and $b\in\mathbb{R}$, $$\mathbb{P}[S_s<1|B_s=b]=1-\text{exp}\big(-\frac{2}{s}(1-b)^+\big).$$ It implies that $I_{\mu}(\psi)$ is equal to $$e^{-\mu}\int_0^{+\infty}\text{exp}(-\mu^2 s/2)ds\int_{-\infty}^1e^{\mu b}\psi(b)\frac{1}{\sqrt{2\pi s}}\text{exp}\big(-b^2/(2s)\big)\Big(1-\text{exp}\big(-\frac{2}{s}(1-b)\big)\Big)db,$$ which can be rewritten $$e^{-\mu}\int_{-\infty}^1e^{\mu b}\psi(b)db\int_0^{+\infty}\text{exp}(-\mu^2 s/2)\frac{1}{\sqrt{2\pi s}}\Big(\text{exp}\big(-b^2/(2s)\big)-\text{exp}\big(-(2-b)^2/(2s)\big)\Big)ds.$$ Then, using the density and the value of the first moment of an inverse Gaussian random variable, we get that for $\mu>0$ and $y\in\mathbb{R}$, $$\int_0^{+\infty}\frac{1}{\sqrt{2\pi s}}\text{exp}\big(-y^2/(2s)-\mu^2s/2\big)ds=\frac{1}{\mu}\text{exp}(-\mu|y|).$$ From this, we deduce that when the support of $\psi$ is included in $[0,1]$, \begin{equation}\label{eqfond1} I_{\mu}(\psi)=\frac{1}{\mu}\int_0^1\psi(b)\Big(\text{exp}(-\mu)-\text{exp}\big(-\mu(3-2b)\big)\Big)db, \end{equation} and when the support of $\psi$ is included in $(-\infty,0)$, \begin{equation}\label{eqfond2} I_{\mu}(\psi)=\frac{1}{\mu}\int_0^{+\infty}\psi(-x)\Big(\text{exp}\big(-\mu(1+2x)\big)-\text{exp}\big(-\mu(3+2x)\big)\Big)dx. \end{equation} Remark here that Proposition \ref{transfo} immediately follows from Equation \eqref{eqfond1} and Equation \eqref{eqfond2}. \subsubsection*{Step 3: End of the proof of Theorem \ref{theo2}} We end the proof of Theorem \ref{theo2} in this final step. 
We start with the following elementary lemma: \begin{lemma}\label{lab} For $a>0$, $b>0$ and $m\geq 0$, we define $$L(a,b,m)=\int_0^{+\infty}y^m\big(\frac{1}{(a+y)^{m+1}}-\frac{1}{(b+y)^{m+1}}\big)dy.$$ The following equality holds: $$L(a,b,m)=\emph{log}(b/a).$$ \end{lemma} \begin{proof} We have \begin{align*} L(a,b,m)&=\underset{n\rightarrow +\infty}{\text{lim}}\int_0^{n}y^m\big(\frac{1}{(a+y)^{m+1}}-\frac{1}{(b+y)^{m+1}}\big)dy\\ &=\underset{n\rightarrow +\infty}{\text{lim}}\big(\int_0^{n/a}\frac{y^m}{(1+y)^{m+1}}dy-\int_0^{n/b}\frac{y^m}{(1+y)^{m+1}}dy\big)\\ &=\underset{n\rightarrow +\infty}{\text{lim}}\int_{1/b}^{1/a}\frac{n^{m+1}y^m}{(1+ny)^{m+1}}dy=\text{log}(b/a). \end{align*} \end{proof} \noindent Now we take $\psi(x)=(x^{\pm})^m$ in Equation \eqref{eqfond1} and Equation \eqref{eqfond2}. Integrating in $\mu$, we easily derive $$I_{+}^{(m)}=\frac{\Gamma(1+m)}{2^{m/2}\Gamma(1+m/2)}\int_0^1b^m\big(1-\frac{1}{(3-2b)^{m+1}}\big)db$$ and $$I_{-}^{(m)}=\frac{\Gamma(1+m)}{2^{m/2}\Gamma(1+m/2)}\int_0^{+\infty}x^m\big(\frac{1}{(1+2x)^{m+1}}-\frac{1}{(3+2x)^{m+1}}\big)dx.$$ Applying Lemma \ref{lab}, we obtain the result for $I_{-}^{(m)}$. For $I_{+}^{(m)}$, we write $$I_{+}^{(m)}=\frac{\Gamma(1+m)}{2^{m/2}\Gamma(1+m/2)}\big(\int_0^1b^mdb-\int_0^1\frac{1}{(3-2b)}\frac{b^m}{(3-2b)^{m}}db\big).$$ Then we use the change of variable $y=b/(3-2b)$ in the second integral in order to retrieve the expression of $I_{+}^{(m)}$ given in Theorem \ref{theo2}. \newpage
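\noindent As a final sanity check on Theorem \ref{density} (this verification is ours), one can integrate $h$: on the negative half-line, $$\int_{-\infty}^0h(y)dy=\sqrt{\frac{2}{\pi}}\text{log}(3)\int_0^{+\infty}\text{exp}(-2y^2)dy=\frac{\text{log}(3)}{2},$$ while on the positive half-line, using $\int_0^{+\infty}\text{exp}(-2y^2/w^2)dy=\frac{w}{2}\sqrt{\frac{\pi}{2}}$ and Fubini's theorem, $$\int_0^{+\infty}h(y)dy=\frac{1}{2}\int_0^2\frac{w}{1+w}dw=1-\frac{\text{log}(3)}{2}.$$ Hence $h$ is indeed a probability density. Moreover, $\mathbb{P}[\alpha\leq 0]=\text{log}(3)/2\simeq 0.549$: although $\alpha$ is centered, it is negative slightly more than half of the time.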
https://arxiv.org/abs/2208.14334
Doubly Sequenceable Groups
Given a sequence ${\bf g}: g_0,\ldots, g_{m}$, in a finite group $G$ with $g_0=1_G$, let ${\bf \bar g}: \bar g_0,\ldots, \bar g_{m}$, be the sequence defined by $\bar g_0=1_G$ and $\bar g_i=g_{i-1}^{-1}g_i$ for $1\leq i \leq m$. We say that $G$ is doubly sequenceable if there exists a sequence ${\bf g}$ in $G$ such that every element of $G$ appears exactly twice in each of ${\bf g}$ and ${\bf \bar g}$. If a group $G$ is abelian, odd, sequenceable, R-sequenceable, or terraceable, then $G$ is doubly sequenceable. In this paper, we show that if $N$ is an odd or sequenceable group and $H$ is an abelian group, then $N \times H$ is doubly sequenceable.
\section{Introduction} Let $G$ be a finite group with $n$ elements and ${\bf g}: g_0,\ldots, g_{kn-1},$ be a sequence of elements of $G$ such that $g_0=1_G$, where $1_G$ is the identity element of $G$. The sequence of consecutive quotients ${\bf \bar g}$ is defined by ${\bar g}_0=1_G$ and ${\bar g}_i=g_{i-1}^{-1}g_{i}$ for $1\leq i \leq kn-1$. We say that ${\bf g}$ is a {\it $k$-graceful ordering} if each element of $G$ appears exactly $k$ times in ${\bf g}$ and exactly $k$ times in ${\bf \bar g}$. When $G$ is the cyclic group $\mathbb{Z}_n$ of order $n$, a 1-graceful ordering is related to the notion of a DDP permutation (where DDP stands for the {\it distinct difference property}). A DDP permutation of $\{1,2,\ldots, n\}$ is a permutation $x_1,\ldots, x_n$, such that $x_{i+1}-x_{i} \neq x_{j+1}-x_j$ for all $1\leq i<j <n$. Batten and Sane showed that there exist at least $2^{n-1}$ DDP permutations of order $n$; \cite{BattenSane,BellWang}. A {\it modular} DDP sequence $x_1,\ldots, x_n$, is a permutation of elements of $\mathbb{Z}_n$ such that $x_{i+1}-x_{i} \neq x_{j+1}-x_j \pmod n$ for all $1\leq i<j <n$. A 1-graceful ordering of $\mathbb{Z}_n$ is a modular DDP sequence with $x_1=0$. Interestingly, the first example of a 1-graceful ordering came from music theory. F. H. Klein in 1925 gave an example of an all-interval twelve-tone row, which is written below in integers modulo 12; \cite{Ferentz,Klein}. \begin{align}\nonumber {\bf g}: &~ 0, 11, 7, 4, 2, 9, 3, 8, 10, 1, 5, 6;\\ \nonumber {\bf \bar g}: & ~0, 11, 8, 9, 10, 7, 6, 5, 2, 3, 4, 1. \end{align} DDP permutations are related to Costas arrays, where the condition that $x_{i+1}-x_{i} \neq x_{j+1}-x_j$ for all $1\leq i<j <n$, is replaced by the stronger condition that vectors $(j-i, x_j-x_i)$, $i\neq j$, are all distinct \cite{Costas1, Costas2}. Costas arrays are rarer and more difficult to construct \cite{Golomb1, Golomb2, Golomb3}. 
Another notion related to graceful orderings of groups is the notion of {\it graceful labeling} of graphs. A graceful labeling of a graph $G$ with $n$ edges is a labeling of the vertices with a subset of $\{0,\ldots, n\}$ such that distinct vertices have distinct labels and the {\it absolute differences} of labels at the endpoints of the edges are all distinct. Graceful labelings of graphs were first studied by Rosa in connection with Ringel’s conjecture \cite{Rosa}. Rosa observed that Ringel’s conjecture follows from the {\it graceful tree conjecture}, which states that every tree has a graceful labeling \cite{Gallian,MPS}. If $G$ is a directed path and we consider the differences modulo $n+1$ instead of absolute differences, we arrive at modular DDP sequences. Graceful orderings are the natural extension of modular DDP sequences to groups. A finite abelian group has a 1-graceful ordering if and only if it has a unique element of order 2, or equivalently, if and only if it is the direct product of an odd abelian group with $\mathbb{Z}_{2^n}$ for some $n\geq 1$; \cite{JK}. Therefore, if $G$ is an odd abelian group, by projecting a 1-graceful ordering of $G \times \mathbb{Z}_2$ onto $G$, one obtains a 2-graceful ordering of $G$. Our main theorem below generalizes this result to all finite abelian groups. \begin{theorem}\label{main} Every finite abelian group has a $2$-graceful ordering. \end{theorem} We conjecture that all nonabelian groups also have 2-graceful orderings. This is true at least for nilpotent odd groups, since if $K$ is a nilpotent odd group, then $K\times \mathbb{Z}_2$ has a 1-graceful ordering \cite{JK}. This paper is organized as follows. In Section \ref{lift}, we show that if $H$ is an abelian group with a 2-graceful ordering, then $H \times \mathbb{Z}_{k}$ has a 2-graceful ordering, where $k$ is odd or a power of 2 greater than 2 (Theorems \ref{extodd} and \ref{extpower2}).
In Section \ref{maint}, we construct explicit examples of 2-graceful orderings of $\mathbb{Z}_k$ and $(\mathbb{Z}_2)^n$, $n\geq 1$. Theorem \ref{main} then follows from these and the classification of finite abelian groups. \section{Lifting 2-Graceful Orderings}\label{lift} Let ${\bf h}: h_0,\ldots, h_{2n-1},$ be a 2-graceful ordering of $H$ and ${\bf g}: g_0,\ldots, g_{2kn-1},$ be a 2-graceful ordering of $G=H \times \mathbb{Z}_k$. We say that ${\bf g}$ is a lift of ${\bf h}$ by $\mathbb{Z}_k$ if $\pi(g_{i+2tn})=h_i$ for all $0\leq i \leq 2n-1$ and $0\leq t \leq k-1$, where $\pi: G \rightarrow H$ is the projection onto the first component. In this section, we show that every 2-graceful ordering of $H$ has a lift by $\mathbb{Z}_k$, when $k$ is odd or a power of 2 greater than 2. \begin{theorem}\label{extodd} Let $H$ be an abelian group that has a $2$-graceful ordering. If $m$ is odd, then $H \times \mathbb{Z}_m$ has a $2$-graceful ordering. \end{theorem} \begin{proof} Let ${\bf h}: h_0,\ldots, h_{2n-1}$, be a $2$-graceful ordering in $H$. We define a 2-graceful ordering ${\bf g}$ in $H \times \mathbb{Z}_m$ as follows. Given $0\leq i \leq 2mn-1$, we write $i=2qn+r$, where $0\leq q \leq m-1$ and $0\leq r \leq 2n-1$, and let $$g_i=\begin{cases} (h_r,-q) & \mbox{if $r$ is even,}\\ (h_r,q+1) & \mbox{if $r$ is odd.} \end{cases} \Rightarrow \bar g_i=\begin{cases} (\bar h_r,-2q) & \mbox{if $r=0$,}\\ (\bar h_r,2q+1) & \mbox{if $r$ is odd,}\\ (\bar h_r,-2q-1) & \mbox{otherwise.} \end{cases} $$ We first show that every $(h,t)\in H \times \mathbb{Z}_m$ appears twice in ${\bf g}$. Let $p,q \in \{0,\ldots, m-1\}$ such that $p=-t \pmod m$ and $q=t-1 \pmod m$. Since every element of $H$ appears twice in ${\bf h}$, there exist distinct values $r,s \in \{0,\ldots, 2n-1\}$ such that $h_{r}=h_{s}=h$. If $r$ is even, we let $i=2pn+r$, and if $r$ is odd, we let $i=2qn+r$. Similarly, if $s$ is even, we let $j=2pn+s$, and if $s$ is odd, we let $j=2qn+s$. 
We note that $i \neq j$, since $r \neq s$. It follows that $g_i=g_j=(h,t)$, and so every element of $H \times \mathbb{Z}_m$ appears at least twice, hence exactly twice, in ${\bf g}$. Next, we show that every $(u, v) \in H \times \mathbb{Z}_m$ appears twice in ${\bf \bar g}$. Since every element of $H$ appears twice in ${\bf \bar h}$, there exist $r,s \in \{0,\ldots, 2n-1\}$ such that $\bar h_{r}=\bar h_{s}=u$. Suppose that $u \neq 0$, and so $r,s>0$. Since $m$ is odd, there exist $p,q \in \{0,\ldots, m-1\}$ such that $2p+1=-v \pmod m$ and $2q+1=v \pmod m$. If $r$ is even, we let $i=2pn+r$, and if $r$ is odd, we let $i=2qn+r$. Similarly, if $s$ is even, we let $j=2pn+s$, and if $s$ is odd, we let $j=2qn+s$. It follows that $i \neq j$ and $\bar g_i=\bar g_j=(u,v)$ in this case. If $u=0$, let $p,q \in \{0,\ldots, m-1\}$ such that $-2p=v \pmod m$ and $2q+1= v \pmod m$. Let $i=2pn$ and $j=2qn+2n-1$. It follows that $\bar g_i=\bar g_j=(u,v)$ in this case as well. Therefore, every element of $H \times \mathbb{Z}_m$ appears at least twice, hence exactly twice, in ${\bf \bar g}$, and so $\bf g$ is a 2-graceful ordering of $H \times \mathbb{Z}_m$. \end{proof} We will use three special permutations of $\mathbb{Z}_{2^n}$ (described in the following lemma) to construct lifts by $\mathbb{Z}_{2^n}$, $n\geq 2$. Since these permutations cannot exist when $n=1$, this method only applies when $n\geq 2$. Given permutations $R$ and $S$ of $\mathbb{Z}_k$, we write $R \perp S$ if for every $v\in \mathbb{Z}_k$ there exist $\alpha, \beta \in \{0,\ldots, k-1\}$ such that one of the following occurs \begin{itemize} \item[i)] $\alpha \neq \beta$ and $R(\alpha)-\alpha=R(\beta)-\beta=v$, \item[ii)] $\alpha \neq \beta$ and $S(\alpha)-\alpha=S(\beta)-\beta=v$, \item[iii)] or $R(\alpha)-\alpha=S(\beta)-\beta=v$. \end{itemize} In other words, $R \perp S$ means that the lists $R(i)-i,S(i)-i, 0\leq i \leq k-1$, altogether include every element of $\mathbb{Z}_k$ exactly twice. 
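The case formula in the proof of Theorem \ref{extodd} is directly executable. The sketch below (our own illustration; the helper names are not from the paper) lifts the 2-graceful ordering $0,2,1,1,2,0$ of $\mathbb{Z}_3$ by $\mathbb{Z}_5$ and checks that the result is 2-graceful.

```python
from collections import Counter

def lift_odd(h, n, m):
    """Theorem [extodd]'s formula: write i = 2qn + r with 0 <= r <= 2n-1;
    then g_i = (h_r, -q mod m) if r is even, and (h_r, q+1 mod m) if r is odd."""
    g = []
    for i in range(2 * m * n):
        q, r = divmod(i, 2 * n)
        t = (-q) % m if r % 2 == 0 else (q + 1) % m
        g.append((h[r], t))
    return g

def counts_twice(seq):
    # every value that occurs, occurs exactly twice
    return all(c == 2 for c in Counter(seq).values())

h = [0, 2, 1, 1, 2, 0]            # a 2-graceful ordering of Z_3 (n = 3)
hbar = [0] + [(h[i] - h[i - 1]) % 3 for i in range(1, 6)]
assert counts_twice(h) and counts_twice(hbar)

g = lift_odd(h, 3, 5)             # lift by Z_5 (m = 5, odd)
gbar = [(0, 0)] + [((a - c) % 3, (b - d) % 5)
                   for (a, b), (c, d) in zip(g[1:], g[:-1])]
assert len(g) == 2 * 15 and g[0] == (0, 0)
assert counts_twice(g) and counts_twice(gbar)  # g is 2-graceful in Z_3 x Z_5
```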
In the lemma below, by a {\it cyclic} permutation, we mean a permutation consisting of one nontrivial cycle (with no fixed points). \begin{lemma}\label{lemtec} If $n\geq 2$, then there exist permutations $X,B,D: \mathbb{Z}_{2^n} \rightarrow \mathbb{Z}_{2^n}$ such that \begin{itemize} \item[i)] $X \perp X^{-1}$; \item[ii)] $B \perp D$; \item[iii)] Both $DB$ and $DX^{-1}BX$ are cyclic permutations. \end{itemize} \end{lemma} \begin{proof} If $n=2$, let $B(i)=1-i$ and $D(i)=-i$ for $0\leq i \leq 3$, and $X: 0 \mapsto 1, 1 \mapsto 3, 2 \mapsto 2, 3 \mapsto 0$. It is straightforward to verify the conditions (i)--(iii) in this case. Thus, suppose that $n>2$. Let $B(i)=2^{n-2}-1-i$ and $D(i)=-i$ for $0\leq i \leq 2^n-1$. It is again straightforward to see that $B$ and $D$ are permutations and $B \perp D$. Next, we define \begin{equation}\label{defxi} X(i)=\begin{cases} 2i+1 & \mbox{if $0\leq i \leq 2^{n-3}-1$}, \\ 2i+2^{n-2}+1 & \mbox{if $2^{n-3} \leq i \leq 3 \times 2^{n-3}-1$}, \\ 2i+ 2^{n-1}+1 & \mbox{if $3 \times 2^{n-3} \leq i \leq 2^{n-1}-1$},\\ -X(2^n-i-1)+2^{n-2}-1& \mbox{if $2^{n-1} \leq i \leq 2^n-1$}. \end{cases} \end{equation} To show that $X$ is a permutation, we need to show that $X$ is onto. First, let $k\in \{0,\ldots, 2^n-1\}$ be an odd number. If $1\leq k \leq 2^{n-2}-1$, then $X((k-1)/2)=k$. If $2^{n-2}+1 \leq k \leq 2^{n-1}-1$, then $X((k-1+2^{n-1})/2)=k$. Finally, if $2^{n-1}+1 \leq k \leq 2^n-1$, then $X((k-1-2^{n-2})/2)=k$. It follows that every odd number modulo $2^n$ belongs to the image of $X$. Next, let $k\in \{0,\ldots, 2^n-1\}$ be an even number. Then $2^{n-2}-1-k$ modulo $2^n$ equals an odd number $k'$ in the set $\{0,\ldots,2^n-1\}$. Therefore, there exists $0\leq i \leq 2^n-1$ such that $X(i)=k'$, and so $X(2^n-i-1)=k$. We have shown that $X$ is onto, hence it is a permutation of $\mathbb{Z}_{2^n}$.
To verify property (i), it is sufficient to show that for every $0\leq k \leq 2^{n-1}$ the equations $X(i)-i=k$ and $j-X(j)=k$ together have at least two solutions. If $0\leq k \leq 2^{n-3}-1$, then $X(i)-i=k$ for $i=k-1$ (if $k=0$, let $i=2^{n-1}-1$), while $j-X(j)=k$ for $j=2^{n-1}-k-1$. The equation $X(i)-i=2^{n-3}$ has two solutions $i=2^{n-3}-1$ and $i=7 \times 2^{n-3}$. If $2^{n-3}+1 \leq k \leq 2^{n-2}-1$, then $X(i)-i=k$ for $i=3 \times 2^{n-2}+k$, while $j-X(j)=k$ for $j=2^n-k$. If $2^{n-2} \leq k \leq 3 \times 2^{n-3}-1$, then $X(i)-i=k$ for $i=k+2^{n-2}$, while $j-X(j)=k$ for $j=2^n-k$. If $k=3 \times 2^{n-3}$, then $j-X(j)=k$ has two solutions $j=3 \times 2^{n-3}-1$ and $j=5 \times 2^{n-3}$. If $3\times 2^{n-3}+1 \leq k \leq 2^{n-1}$, then $X(i)-i=k$ for $i=k-2^{n-2}-1$, while $j-X(j)=k$ for $j=3\times 2^{n-2}-1-k$. To verify property (iii), note that $DB(i)=D(2^{n-2}-1-i)=i+1-2^{n-2}$, which is a cyclic permutation modulo $2^{n}$, since $1-2^{n-2}$ is odd (here we are using the assumption that $n>2$). Moreover, we have $$DX^{-1}BX(i)=DX^{-1}(2^{n-2}-1-X(i))=D(-i-1)=i+1,$$ modulo $2^n$, by the last line in equation \eqref{defxi}, hence $DX^{-1}BX$ is a cyclic permutation. This completes the proof of Lemma \ref{lemtec}. \end{proof} Now, we are ready to describe an algorithm to construct lifts of 2-graceful orderings by $\mathbb{Z}_{2^n}$, $n\geq 2$. \begin{theorem}\label{extpower2} If $H$ has a 2-graceful ordering, then $G=H \times \mathbb{Z}_{2^n}$ has a 2-graceful ordering for every $n\geq 2$. \end{theorem} \begin{proof} Let $|H|=m$ and ${\bf h}: h_0,\ldots, h_{2m-1},$ be a 2-graceful ordering of $H$ and $\bf \bar h$ be the corresponding sequence of consecutive differences. Let $\sigma: \{0,\ldots, 2m-1\} \rightarrow \{0,\ldots, 2m-1\}$ be the one-to-one correspondence such that $\sigma(i) \neq i$ and $\bar h_{i}=\bar h_{\sigma(i)}$ for all $0\leq i \leq 2m-1$. 
Let \begin{align}\nonumber S & = \{i: 1\leq i<\sigma(0)<\sigma(i) \leq 2m-1\};\\ \nonumber T & = \{i: i, \sigma(i) \in [1,\sigma(0)-1]\text{ or }i,\sigma(i) \in [\sigma(0)+1,2m-1]\}. \end{align} Choose any subset $S'\subseteq S$ comprised of $\lceil |S|/2 \rceil$ elements of $S$, where $\lceil x \rceil$ denotes the least integer greater than or equal to $x$. Also, choose any subset $T' \subseteq T$ such that for every $i\in T$ one has either $i \in T'$ or $\sigma(i) \in T'$ but not both. Next, for $0\leq i \leq 2m-1$, we define a permutation $P_i$ by letting $$P_i=\begin{cases} X & \mbox{if $i \in S' \cup T'$ or $\sigma(i) \in S \backslash S'$;} \\ X^{-1} & \mbox{if $i \in S \backslash S'$ or $\sigma(i) \in S' \cup T'$;}\\ D & \mbox{if $i=0$;}\\ B & \mbox{if $i=\sigma(0)$.}\\ \end{cases} $$ One has $P_i \perp P_{\sigma(i)}$, since if $i\neq 0, \sigma(0)$ and $P_i=X$ then $P_{\sigma(i)}=X^{-1}$, while $P_0=D$ and $P_{\sigma(0)}=B$ (recall from Lemma \ref{lemtec} that $X \perp X^{-1}$ and $B \perp D$). Given $i\in \mathbb{Z}$, let $0\leq i' \leq 2m-1$ such that $i=i' \pmod{2m}$, and let $h_i=h_{i'}$ and $P_i=P_{i'}$. One has $\{1,\ldots, \sigma(0)-1\}=S' \cup (S \backslash S') \cup T^{\prime \prime}$, where $T^{\prime \prime}= T \cap \{1,\ldots, \sigma(0)-1\}$. If $i \in T^{\prime \prime}$, then $i, \sigma(i) \in\{1,\ldots, \sigma(0)-1\}$ and $P_iP_{\sigma(i)}=XX^{-1}$ or $X^{-1}X$ and equals the identity permutation in either case. If $i\in S'$, then $P_i=X$, while if $i \in S \backslash S'$, then $P_i=X^{-1}$. It follows that $P_{\sigma(0)-1}\cdots P_1=X^{|S'|}X^{-|S\backslash S'|}$ and similarly $P_{2m-1}\cdots P_{\sigma(0)+1}=X^{-|S'|}X^{|S \backslash S'|}$. Since $|S'|=\lceil |S|/2 \rceil$, we have $|S'|-|S\backslash S'|=0$ or 1. Therefore, $\hat P=P_{2m}P_{2m-1}\cdots P_1=DX^{-1}BX$ (if $|S|$ is odd) or $P_{2m}P_{2m-1}\cdots P_1=DB$ (if $|S|$ is even). It follows from Lemma \ref{lemtec} that $\hat{P}$ is a cyclic permutation.
We now define sequences ${\bf t}$ in $\mathbb{Z}_{2^n}$ and ${\bf g}$ in $G=H \times \mathbb{Z}_{2^n}$ by letting $t_0=0$ and $$t_k=P_k\cdots P_1(0)=\prod_{j=k}^1 P_j(0);~g_j=(h_j,t_j),$$ where $0\leq j \leq 2^{n+1}m-1$. To show that each element of $G$ is listed twice in ${\bf g}$, it is sufficient to show that, given $0\leq i \leq 2m-1$, the terms $t_{i+2km}$, $0\leq k \leq 2^n-1$, are distinct. First, suppose $i>0$ and let $L_i=\prod_{j=i}^{1}P_j$ and $M_i=\prod_{j=2m}^{i+1}P_j$ and note that $L_iM_i=\prod_{j=2m}^{1}P_j={\hat P}$ is a cyclic permutation. It follows that $M_iL_i$ is also a cyclic permutation, since $M_iL_i=M_i(L_iM_i)M_i^{-1}$ has the same cycle structure as $L_iM_i$. Then, one has \begin{align}\nonumber t_{i+2(k+1)m} &=\prod_{j=i+2(k+1)m}^1 P_j (0)=\prod_{j=i+2(k+1)m}^{i+2km+1}P_j(t_{i+2km})\\ \nonumber &=\prod_{j=2m+i}^{i+1}P_j(t_{i+2km})=\prod_{j=2m}^{i+1}P_j \prod_{j=2m+i}^{2m+1}P_j(t_{i+2km})\\ \nonumber &=\prod_{j=2m}^{i+1}P_j \prod_{j=i}^{1}P_j(t_{i+2km})=M_iL_i(t_{i+2km}). \end{align} Since $M_iL_i$ is a cyclic permutation, the terms $t_i,t_{i+2m},\ldots, t_{i+2(2^n-1)m}$, are all distinct. If $i=0$, then a similar argument shows that $$t_{2(k+1)m}=\prod_{i=2m}^1P_i(t_{2km})=\hat P(t_{2km}),$$ and again, since $\hat P$ is a cyclic permutation, the terms $t_{2km}$, $0\leq k \leq 2^n-1$, are all distinct in this case as well. Next, we show that every element $(u,v)\in G$ is listed twice in $\bf \bar g$. There exists $0\leq i \leq 2m-1$ such that $\bar h_i=\bar h_{\sigma(i)}=u$. Also, since $P_i \perp P_{\sigma(i)}$, there exist $\alpha,\beta \in \{0,\ldots, 2^n-1\}$ such that one of the following occurs: \begin{itemize} \item[(i)] $\alpha \neq \beta$ and $P_i(\alpha)-\alpha=P_i(\beta)-\beta=v$, \item[(ii)] $\alpha \neq \beta$ and $P_{\sigma(i)}(\alpha)-\alpha=P_{\sigma(i)}(\beta)-\beta=v$, \item[(iii)] or $P_i(\alpha)-\alpha=P_{\sigma(i)}(\beta)-\beta=v$. \end{itemize} Suppose that (i) occurs. 
Since the terms $t_{i-1+2km}$, $0\leq k \leq 2^n-1$, are all distinct, there exist distinct values $k,l\in \{0,\ldots, 2^n-1\}$ such that $t_{i-1+2km}=\alpha$ and $t_{i-1+2lm}=\beta$. Then \begin{align}\nonumber \bar g_{i+2km} &=(\bar h_{i+2km}, \bar t_{i+2km}) \\ \nonumber & = (\bar h_i, -t_{i+2km-1}+t_{i+2km}) \\ \nonumber & =(u,-t_{i-1+2km}+P_{i+2km}(t_{i-1+2km})) \\ \nonumber &=(u,-\alpha+P_i(\alpha))=(u,v), \end{align} and similarly $\bar g_{i+2lm}=(u,v)$. Therefore, $(u,v)$ is listed in ${\bf \bar g}$ at least twice. It follows similarly in case (ii) that $(u,v)$ appears twice in ${\bf \bar g}$ as well. Thus, suppose (iii) occurs. Choose $k,l \in \{0,\ldots, 2^n-1\}$ such that $t_{i-1+2km}=\alpha$ and $t_{\sigma(i)-1+2lm}=\beta$. It then follows that $\bar g_{i+2km}=(\bar h_i, -\alpha+P_i(\alpha))=(u,v)$ and $\bar g_{\sigma(i)+2lm}=(\bar h_{\sigma(i)},-\beta+P_{\sigma(i)}(\beta))=(u,v)$. We have shown that every $(u,v) \in G$ appears at least twice, hence exactly twice, in ${\bf \bar g}$. This completes the proof of Theorem \ref{extpower2}. \end{proof} Let ${\bf h}$ be a 2-graceful ordering of $H$ and ${\bf g}$ be a lift of ${\bf h}$ by $\mathbb{Z}_k$. If $\pi_1: H \times \mathbb{Z}_k \rightarrow H$ and $\pi_2: H \times \mathbb{Z}_k \rightarrow \mathbb{Z}_k$ are the projections onto the first and second components and $\phi: \mathbb{Z}_k \rightarrow \mathbb{Z}_k$ is an automorphism, then $\hat \phi({\bf g})$ defined by $\hat \phi(g_i)=(\pi_1(g_i), \phi(\pi_2(g_i)))$, $0\leq i \leq 2k|H|-1$, is a 2-graceful ordering of $H \times \mathbb{Z}_k$. It follows from Theorems \ref{extodd} and \ref{extpower2} that every 2-graceful ordering of $H$ has at least $\phi(k)$ lifts by $\mathbb{Z}_k$, where $\phi$ is Euler's totient function and $k$ is odd or a power of 2 greater than 2. \section{Proof of the Main Theorem}\label{maint} In this section, we prove our main theorem which states that every finite abelian group has a 2-graceful ordering.
We first need a lemma to construct a 2-graceful ordering of $(\mathbb{Z}_{2})^n$ using the field structure of $GF(2^n)$, the field with $2^n$ elements. Recall that the additive group of $GF(2^n)$, being a vector space over $\mathbb{Z}_2$, is isomorphic to $(\mathbb{Z}_{2})^n$. \begin{lemma}\label{2factor} There exists a 2-graceful ordering of $(\mathbb{Z}_{2})^n$ for every $n\geq 1$. \end{lemma} \begin{proof} Let $GF(2^n)$ be a field of $m=2^n$ elements and let $\alpha$ be a primitive element in $GF(2^n)$. Let $1\leq k \leq m-1$ be such that $\alpha^{k+1}=1-\frac{1}{\alpha(1-\alpha)}$. To show such $k$ exists, we note that $\alpha \neq (\alpha-1)^2$ (since $\alpha$ is a generator), hence $1-\frac{1}{\alpha(1-\alpha)}\neq 0$, and so such $k$ exists because $\alpha$ is a generator. Let ${\bf g}:g_0,\ldots, g_{2m-1}$, be given by $$g_i=\begin{cases} 0 & i=0; \\ \frac{1-\alpha^{i}}{1-\alpha} & 1\leq i \leq k+1; \\ \frac{1}{\alpha(1-\alpha)^2} & i=k+2;\\ \frac{1-\alpha^{i-1}}{1-\alpha} & k+3 \leq i \leq m-1;\\ \frac{1}{1-\alpha} & i=m; \\ \frac{1}{\alpha(1-\alpha)} & i=m+1;\\ \frac{1-\alpha^{i-m}}{\alpha(1-\alpha)} & m+2 \leq i \leq 2m-1. \end{cases} \Rightarrow \bar g_i=\begin{cases} 0 & i=0; \\ \alpha^{i-1} & 1\leq i \leq k+1; \\ 0 & i=k+2;\\ \alpha^{i-2} & k+3 \leq i \leq m-1;\\ \frac{1}{\alpha(1-\alpha)} & i=m; \\ \frac{1}{\alpha} & i=m+1;\\ \frac{\alpha^{i-m-1}}{\alpha(1-\alpha)} & m+2 \leq i \leq 2m-1. \end{cases} $$ It is straightforward although tedious to check that ${\bf g}$ is a 2-graceful ordering of $(\mathbb{Z}_2)^n$. \end{proof} Next, we construct a 2-graceful ordering for $\mathbb{Z}_m$ for all $m\geq 2$. \begin{lemma}\label{zm} The group $\mathbb{Z}_m$ has a 2-graceful ordering for every $m\geq 2$. \end{lemma} \begin{proof} For $0\leq i \leq 2m-1$, let $g_i=(-1)^i \lceil i/2 \rceil$ and so $\bar g_i=(-1)^i i$. Then ${\bf g}$ is a 2-graceful ordering of $\mathbb{Z}_m$. \end{proof} Now, we are ready to prove Theorem \ref{main}.
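Both lemmas invite a computational sanity check. The sketch below is our own: the first part runs the explicit formula of Lemma \ref{zm} for small $m$; the second part exhibits 2-graceful orderings of $(\mathbb{Z}_2)^n$ for small $n$ by a naive backtracking search, which is a shortcut replacing the field construction of Lemma \ref{2factor}, not an implementation of it.

```python
from collections import Counter

def zm_ordering(m):
    """Lemma [zm]: g_i = (-1)^i * ceil(i/2) (mod m), for 0 <= i <= 2m-1."""
    return [((-1) ** i * ((i + 1) // 2)) % m for i in range(2 * m)]

def is_2_graceful_mod(g, m):
    gbar = [0] + [(g[i] - g[i - 1]) % m for i in range(1, len(g))]
    return (g[0] == 0 and len(g) == 2 * m
            and all(c == 2 for c in Counter(g).values())
            and all(c == 2 for c in Counter(gbar).values()))

for m in range(2, 10):
    assert is_2_graceful_mod(zm_ordering(m), m)

# For (Z_2)^n, elements are n-bit integers and the group operation is XOR.
# A small backtracking search finds a 2-graceful ordering quickly for small n
# (for n = 2, one valid ordering is 0,1,2,2,3,1,3,0).
def search_z2n(n):
    size, length = 2 ** n, 2 ** (n + 1)
    g_count, d_count = Counter({0: 1}), Counter({0: 1})  # g_0 = 0, gbar_0 = 0
    seq = [0]

    def extend():
        if len(seq) == length:
            return True
        prev = seq[-1]
        for x in range(size):
            d = prev ^ x                 # the difference in (Z_2)^n is XOR
            if g_count[x] < 2 and d_count[d] < 2:
                seq.append(x); g_count[x] += 1; d_count[d] += 1
                if extend():
                    return True
                seq.pop(); g_count[x] -= 1; d_count[d] -= 1
        return False

    return seq if extend() else None

for n in (1, 2, 3):
    assert search_z2n(n) is not None
```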
\\ \\ \noindent {\it Proof of Theorem \ref{main}.} Let $G$ be a finite abelian group. By the classification of finite abelian groups, we can write $G=(\mathbb{Z}_2)^n \times \prod_{i=1}^k \mathbb{Z}_{m_i}$, where $n \geq 0$ and each $m_i$, $1\leq i \leq k$, is either an odd number or a power of 2 greater than 2. If $n>0$, then starting with a 2-graceful ordering of $(\mathbb{Z}_2)^n$ (which exists by Lemma \ref{2factor}), a simple induction using Theorems \ref{extodd} and \ref{extpower2} shows that $G$ has a 2-graceful ordering. Thus, suppose that $n=0$. By Lemma \ref{zm}, the group $\mathbb{Z}_{m_1}$ has a 2-graceful ordering. Again, a finite induction starting with $\mathbb{Z}_{m_1}$ and using Theorems \ref{extodd} and \ref{extpower2} shows that $G$ has a 2-graceful ordering. $\hfill \square$
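As an end-to-end illustration, the sketch below (ours; the variable names are not from the paper) lifts the 2-graceful ordering $0,2,1,1,2,0$ of $\mathbb{Z}_3$ by $\mathbb{Z}_4$ using the permutations of Lemma \ref{lemtec} for $n=2$. The sequence of $P_i$'s encodes one admissible choice in the proof of Theorem \ref{extpower2} for this instance (here $\sigma$ pairs $0\leftrightarrow 3$, $1\leftrightarrow 2$, $4\leftrightarrow 5$, so $\sigma(0)=3$, $S=\emptyset$, and $T'=\{1,4\}$); everything else is checked mechanically.

```python
from collections import Counter

# Lemma [lemtec] data for n = 2 (arithmetic mod 4): B(i) = 1 - i, D(i) = -i,
# and X: 0 -> 1, 1 -> 3, 2 -> 2, 3 -> 0, as in the proof.
X = [1, 3, 2, 0]
Xinv = [X.index(j) for j in range(4)]
B = [(1 - i) % 4 for i in range(4)]
D = [(-i) % 4 for i in range(4)]

def diffs(R):
    return [(R[i] - i) % 4 for i in range(4)]

# Sanity checks: X ⟂ X^{-1} and B ⟂ D (every residue occurs twice overall).
assert all(c == 2 for c in Counter(diffs(X) + diffs(Xinv)).values())
assert all(c == 2 for c in Counter(diffs(B) + diffs(D)).values())

# 2-graceful ordering of H = Z_3 and the P_i for this instance,
# extended periodically with period 2|H| = 6.
h = [0, 2, 1, 1, 2, 0]
P = [D, X, Xinv, B, X, Xinv]           # P_0, ..., P_5

t, g = 0, [(h[0], 0)]
for j in range(1, 2 * 4 * 3):          # |G| = 12, sequence length 24
    t = P[j % 6][t]                    # t_j = P_j(t_{j-1})
    g.append((h[j % 6], t))

gbar = [(0, 0)] + [((a - c) % 3, (b - d) % 4)
                   for (a, b), (c, d) in zip(g[1:], g[:-1])]
assert all(c == 2 for c in Counter(g).values())     # each element twice in g
assert all(c == 2 for c in Counter(gbar).values())  # and twice in g-bar
```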
https://arxiv.org/abs/2103.07644
A Jordan Curve Theorem for 2-dimensional Tilings
The classical Jordan curve theorem for digital curves asserts that the Jordan curve theorem remains valid in the Khalimsky plane. Since the Khalimsky plane is a quotient space of $\mathbb R^2$ induced by a tiling of squares, it is natural to ask for which other tilings of the plane it is possible to obtain a similar result. In this paper we prove a Jordan curve theorem which is valid for every locally finite tiling of $\mathbb R^2$. As a corollary of our result, we generalize some classical Jordan curve theorems for grids of points, including Rosenfeld's theorem.
\section{Introduction} The Jordan curve theorem asserts that a simple closed curve divides the plane into two connected components, one of these components is bounded whereas the other one is not. This theorem was proved by Camille Jordan in 1887 in his book \textit{Cours d'analyse} \cite{Jordan}. During the decade of the seventies, Azriel Rosenfeld published a series of articles \cite{Rosenfeld1, Rosenfeld2, Rosenfeld3, Rosenfeld4, Rosenfeld5, Rosenfeld6} wherein he studies connectedness properties of the grid of points with integer coordinates $\mathbb{Z}^{2}$. For any point $(n,m)\in\mathbb{Z}^{2}$ Rosenfeld defines its \textit{4-neighbors} as the four points $(n,m\pm 1)$ and $(n\pm 1, m)$, whilst the four points $(n\pm 1,m\pm 1)$ as well as the 4-neighbors are its \textit{8-neighbors}. For $k=4$ or $k=8$, a \textit{k-path} is a finite sequence of points $(x_{0}, \dots, x_{n})$ in $\mathbb{Z}^{2}$ such that for every $i\in\{1, \dots, n\}$, $x_{i-1}$ is a $k$-neighbor of $x_{i}$. A subset $S$ of $\mathbb{Z}^{2}$ is \textit{k-connected} if there is a $k$-path between any two elements of $S$. A \textit{k-component} of $S$ is a maximal $k$-connected subset. Lastly, a \textit{simple closed k-path} is a $k$-connected set $J$ which contains exactly two $k$-neighbors for each of its points. In this case we can describe $J$ as a $k$-path $(x_{1}, \dots, x_{n})$, where $x_i$ is a $k$-neighbor of $x_{j}$ iff $i\equiv j \pm 1$ (modulo $n$). In \cite{Rosenfeld2}, Rosenfeld proves the following discrete Jordan curve theorem. \begin{theorem}\label{teo:Rosenfeld} Let $J\subset\mathbb{Z}^{2}$ be a simple closed $4$-path with at least five points. Then $\mathbb{Z}^{2}\setminus J$ has exactly two $8$-components. 
\end{theorem} \begin{figure}[htp] \centering \includegraphics[width=12cm]{CuatroCurva.png} \caption{A simple closed $4$-path in $\mathbb{Z}^2$.} \label{fig:4-curva} \end{figure} In the same decade, Efim Khalimsky developed a different approach for studying topological properties of the sets $\mathbb{Z}$ and $\mathbb{Z}^{2}$ \cite{Khalimsky1, Khalimsky2}, endowing them with topologies that capture the proximity of their elements. These sets, together with the topologies proposed by Khalimsky, are known as the digital line (or Khalimsky line) and the digital plane (or Khalimsky plane), respectively. Khalimsky also proved a Jordan curve theorem for the digital plane (see \cite{Khalimsky1, Khalimsky2}). We explain this theorem later, in Section~\ref{sec:preliminares} (Theorem~\ref{teo:curvadejordanplanodigital}). There are other versions of Jordan curve theorems for grids of points that are not the usual squared grid. For example, the case where the points are configured in a hexagonal grid has been studied in \cite[Theorem 26]{Kopperman} and \cite{Kong2}. In \cite{Neumann-Lara}, V. Neumann-Lara and R. Wilson presented an analogue to the Jordan curve theorem in the context of graph theory (see Theorem~\ref{teo:jordangraficas}). This result will be a key tool for the proof of our main theorem. One way to define the topology of the digital plane is by an equivalence relation in $\mathbb{R}^2$ resulting from identifying the points in the same edge or the same face of a tiling of the plane by squares of the same size (see equation~\ref{e: square surjection} in Section~\ref{sec:preliminares}). This approach for describing the topology of the digital plane leads us to ask whether it is possible to generalize Khalimsky's Jordan curve theorem for every quotient space of $\mathbb R^2$ obtained by a similar identification in any nice enough tiling.
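Rosenfeld's theorem (Theorem \ref{teo:Rosenfeld}) can be illustrated computationally. The sketch below is our own: it restricts to a finite window of $\mathbb{Z}^2$, which suffices to separate the inside from the outside, takes the boundary of a $5\times 5$ block as a simple closed $4$-path, and counts the $8$-components of its complement.

```python
from collections import deque

def components(points, neighbors):
    """Count connected components of a finite point set via BFS."""
    points, seen, count = set(points), set(), 0
    for p in points:
        if p in seen:
            continue
        count += 1
        queue = deque([p]); seen.add(p)
        while queue:
            x, y = queue.popleft()
            for dx, dy in neighbors:
                q = (x + dx, y + dy)
                if q in points and q not in seen:
                    seen.add(q); queue.append(q)
    return count

N8 = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
N4 = [(0, 1), (0, -1), (1, 0), (-1, 0)]

# J: the boundary of the square {0,...,4}^2, a simple closed 4-path
# with 16 points, inside a window big enough to see both 8-components.
J = {(x, y) for x in range(5) for y in range(5) if x in (0, 4) or y in (0, 4)}
window = {(x, y) for x in range(-3, 8) for y in range(-3, 8)}

assert components(J, N4) == 1            # J is 4-connected
assert components(window - J, N8) == 2   # inside and outside
```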
Our main result, Theorem \ref{teo:jordanteselaciones}, states that this generalization is possible if the tiling is locally finite. Our paper is organized as follows. In Section~\ref{sec:preliminares} we recall all basic notions concerning Alexandrov spaces and tilings. In Section~\ref{sec_ Digital Topology of a tiling}, we introduce what the digital version of a tiling is and we prove some basic properties of these topological spaces. In Section~\ref{Sec:main theorem}, we introduce the definition of a digital Jordan curve and we prove our version of the Jordan curve theorem for tilings (Theorem~\ref{teo:jordanteselaciones}). To finish the paper, in Section~\ref{sec:open and closed} we explore some basic properties of closed and open digital Jordan curves that will be used in Section~\ref{Sec:Well behaved} to obtain conditions guaranteeing that a digital Jordan curve encloses a face of the tiling. \section{Preliminaries}\label{sec:preliminares} \subsection{Alexandrov Topologies} Let $(X,\tau)$ be a topological space. The topology $\tau$ is called an \textit{Alexandrov topology} if the intersection of an arbitrary family of open sets is open. In this case we say that $(X,\tau)$ is an \textit{Alexandrov discrete space}. It is not difficult to see that $(X,\tau)$ is an Alexandrov discrete space iff each point $x\in X$ has a smallest neighborhood, which we denote by $N(x)$. Clearly, $N(x)$ is the intersection of all open neighborhoods of $x$ and therefore $N(x)$ is open. Also, every subspace of an Alexandrov space is an Alexandrov space. The first important example of an Alexandrov space concerning the contents of this work is the digital line, also known as the Khalimsky line.
\begin{example} For every $n\in\mathbb{Z}$, let \[ N(n) = \begin{cases} \{n\} &\quad\text{if $n$ is odd,}\\ \{n-1, n, n+1\} &\quad\text{if $n$ is even.}\\ \end{cases} \] Then the set of integers $\mathbb{Z}$, together with the topology $\tau_{K}$ given by the base $\mathcal{B}=\{N(n)\mid n\in\mathbb{Z}\}$ is known as the \textit{digital line}. \end{example} Another important example is the digital plane, also known as the Khalimsky plane. \begin{example} The \textit{Khalimsky plane} (or \textit{digital plane}) is the product space of the Khalimsky line $(\mathbb Z,\tau_K)\times(\mathbb Z,\tau_{K})$. This product topology coincides with the quotient topology generated by the surjection $p:\mathbb{R}^{2}\rightarrow\mathbb{Z}^{2}$ defined by: \begin{equation}\label{e: square surjection} p(x,y) = \begin{cases} (2n+1,2m+1) &\quad\text{if there are $n,m\in\mathbb{Z}$ such that} \\ &\quad(x,y)\in(2n,2n+2)\times(2m,2m+2),\\ \\ (2n,2m+1) &\quad\text{if there are $n,m\in\mathbb{Z}$ such that} \\ &\quad x=2n,\hspace{.2cm}y\in(2m,2m+2),\\ \\ (2n+1,2m) &\quad\text{if there are $n,m\in\mathbb{Z}$ such that} \\ &\quad x\in(2n,2n+2),\hspace{.2cm}y=2m,\\ \\ (2n,2m) &\quad\text{if there are $n,m\in\mathbb{Z}$ such that} \\ &\quad x=2n,\hspace{.2cm}y=2m.\\ \end{cases} \end{equation} It is worth highlighting that the quotient map $p$ induces an equivalence relation that identifies the interior, the edges (without vertices) and the vertices of the rectangles $[2n,2n+2]\times[2m,2m+2]$ in the plane. Thus, another way to understand the topology of the digital plane is to think of this space as the set of equivalence classes of this equivalence relation together with the quotient topology induced by $\mathbb{R}^{2}$. This description of the digital plane is depicted in Figure~\ref{fig:planodigitalreales}.
As we already mentioned, the digital plane is an Alexandrov space which can be readily verified by noting that \[ N(n,m) = \begin{cases} \{(n,m)\} &\quad\text{if $2\not\vert n$, $2\not\vert m$}\\ \{n-1, n, n+1\}\times\{m-1, m, m+1\} &\quad\text{if $2\vert n$, $2\vert m$}, \\ \{n\}\times\{m-1, m, m+1\} &\quad\text{if $2\not\vert n$, $2\vert m$}, \\ \{n-1, n, n+1\}\times \{m\}&\quad\text{if $2\vert n$, $2\not\vert m$}. \\ \end{cases} \] \begin{figure}[htp] \centering \includegraphics[width=12cm]{PlanoDigitalReales4.png} \caption{The digital plane as equivalence classes of $\mathbb{R}^{2}$.} \label{fig:planodigitalreales} \end{figure} \end{example} An important tool to understand the connectedness of an Alexandrov space is its connectedness graph. \begin{definition}\label{graficadeconexidad} Let $X$ be a topological space. The connectedness graph of $X$ is the graph $G=(V_{G},E_{G})$ where $V_{G}=X$ and $\{x,y\}\in E_{G}$ if and only if $\{x,y\}$ is connected. \end{definition} We refer the reader to \cite{Bondy} and \cite{Diestel} for any unknown notion concerning graph theory. Let $X$ be a topological space. For every $x\in X$ the \textit{adjacency set} of $x$ is the set $$\mathscr{A}(x)=\big\{y\in X~\vert~x\neq y,\hspace{.1cm}\{x,y\}\hspace{.1cm}\text{is connected}\big\}.$$ Observe that $x\in\mathscr{A}(y)$ if and only if $y\in\mathscr{A}(x)$. In this case we say that $x$ and $y$ are adjacent. Given two points $x,y\in X$, we define a \textit{digital path} from $x$ to $y$ as a finite sequence of elements of $X$, $(x_{0}, x_{1}, \dots, x_{n})$, such that $x=x_{0}$, $y=x_{n}$ and for every $i\in\{0, 1, \dots, n-1\}$, $x_{i}$ and $x_{i+1}$ are adjacent. We say that $X$ is \textit{digitally pathwise connected} if for every $x$, $y\in X$ there is a digital path from $x$ to $y$. Lastly, a digital path $(x_{0}, \dots, x_{n})$ is a \textit{digital arc} if $n=1$ or if there is a homeomorphism from a finite interval $I$ of the digital line into $\{x_{0}, \dots, x_{n}\}$. 
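The smallest-neighborhood formula displayed above can be turned into code. Since a two-point set $\{x,y\}$ in an Alexandrov space is connected exactly when $y\in N(x)$ or $x\in N(y)$, the following sketch (ours) computes adjacency sets in the digital plane and recovers the familiar picture: pure points (both coordinates of the same parity) have eight adjacent points, mixed points have four.

```python
def N(p):
    """Smallest neighborhood of a point of the Khalimsky plane,
    following the case formula displayed above."""
    n, m = p
    xs = [n] if n % 2 else [n - 1, n, n + 1]
    ys = [m] if m % 2 else [m - 1, m, m + 1]
    return {(x, y) for x in xs for y in ys}

def adjacent(p, q):
    # {p, q} is connected iff one point lies in the smallest
    # neighborhood of the other.
    return p != q and (q in N(p) or p in N(q))

def A(p, window):
    """Adjacency set of p, computed inside a finite window."""
    return {q for q in window if adjacent(p, q)}

W = {(x, y) for x in range(-2, 5) for y in range(-2, 5)}
assert len(A((1, 1), W)) == 8   # pure open point: 8 adjacent points
assert len(A((0, 0), W)) == 8   # pure closed point: 8 adjacent points
assert len(A((0, 1), W)) == 4   # mixed point: 4 adjacent points
assert len(A((1, 0), W)) == 4
```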
We say that $X$ is \textit{digitally arcwise connected} if for every $x$, $y\in X$ there is a digital arc from $x$ to $y$. From the definition of the connectedness graph and the definition of a digital path it is clear that $(x_0, \dots, x_n)$ is a digital path in $X$ if and only if $(x_0, \dots, x_n)$ is a path in the connectedness graph $G$ of $X$. The following remark specifies the type of subgraphs in the connectedness graph of $X$ corresponding to digital arcs. For a proof of the non-trivial implication, see e.g. \cite[Theorem 5.6]{Melin}. \begin{remark}\label{obs:arcocamino} Let $X$ be a topological space. Then $(x_0, x_{1}, \dots, x_{n})$ is a digital arc if and only if $(x_0, x_{1}, \dots, x_{n})$ is an induced simple path in the connectedness graph of $X$. \end{remark} The following theorem summarizes important results about connectedness in Alexandrov discrete spaces. Its proof can be found in \cite[Theorem 3.2]{Khalimsky3} and \cite[Lemma 20]{Kopperman}. \begin{theorem}\label{teo:resumenconexidad} Let $X$ be an Alexandrov discrete space. \begin{enumerate}[\rm(1)] \item For every $x\in X$, $\mathscr{A}(x)=\big(N(x)\cup\overline{\{x\}}\big)\setminus\{x\}.$ \item The following are equivalent: \begin{enumerate}[\rm a)] \item X is connected. \item X is digitally pathwise connected. \item X is digitally arcwise connected. \item The connectedness graph of $X$ is connected. \end{enumerate} \item The connected components of $X$ are open and closed. \end{enumerate} \end{theorem} Finally, we enunciate Khalimsky's Jordan curve theorem for the digital plane. First, we say that a subset $J$ of the digital plane is a \textit{Jordan curve in the digital plane} if for every $j\in J$ the complement $J\setminus\{j\}$ is a digital arc, and $\vert J\vert\geq4$. For a detailed exposition of this theorem, see \cite{Khalimsky3}. \begin{theorem}\label{teo:curvadejordanplanodigital} Let $J$ be a Jordan curve in the digital plane. 
Then $\mathbb{Z}^{2}\setminus J$ has exactly two connected components. \end{theorem} \subsection{Tilings}\label{sec:teselacionesdelplano} To finish this section, let us recall some basic notions about tilings. \begin{definition} Let $\mathcal{T}=\{T_{n}\}_{n\in\mathbb{N}}$ be a countable family of closed, connected subsets of $\mathbb{R}^{2}$. We say that $\mathcal T$ is a \textit{tiling} of $\mathbb R^2$ if: \begin{enumerate}[\rm(a)] \item $\bigcup_{n\in\mathbb{N}}T_{n}=\mathbb{R}^{2}$, \item if $i\neq j$, then $\Int{(T_{i})}\cap\Int{(T_{j})}=\emptyset$. \end{enumerate} In this case every element $T_n\in\mathcal T$ is called a \textit{tile}. \end{definition} Given a tiling $\mathcal{T}=\{T_{n}\}_{n\in\mathbb{N}}$ of $\mathbb R^2$, the points in the plane that are elements of three or more tiles are called \textit{vertices}. If $i,j\in\mathbb{N}$, $i\neq j$, satisfy that $$E_{i,j}:=\{u\in\mathbb{R}^{2}\mid u\in T_{i}\cap T_{j},\;u \text{ is not a vertex}\}\neq\emptyset,$$ then each connected component of the set $E_{i,j}$ is called an \textit{edge}. Lastly, the interior of a tile is called a \textit{face}. If a vertex, an edge or a face are subsets of a tile $T$, we say that it is a vertex, edge or face of the tile, respectively. As we mentioned in the introduction, for the purposes of this work, we limit our attention to a particular class of tilings, namely tilings that, besides (a) and (b) of the preceding definition, satisfy the following conditions: \textit{ \begin{enumerate}[\rm(a)] \setcounter{enumi}{2} \item $\mathcal{T}$ is a locally finite collection (namely, each point of the plane has a neighborhood that only intersects a finite number of elements of $\mathcal{T}$), \item for each $n\in\mathbb{N}$, $T_{n}$ is homeomorphic to the closed unit disk, \item each edge of the tiling is homeomorphic to the interval $(0,1)$, \item if two different tiles intersect each other, then their intersection is the disjoint union of a finite set of vertices and edges. 
\end{enumerate} } Consequently, from this point onwards, when we talk about tilings, we will refer to tilings that satisfy these requirements. \begin{example}\label{ej:teselacionplano} The family $\big\{[2n,2n+2]\times[2m,2m+2]\hspace{.2cm}\big\vert\hspace{.2cm}(n,m)\in\mathbb{Z}^{2}\big\}$ is a tiling of the plane. \end{example} In the following proposition we encapsulate basic properties of tilings that can be easily proved from the definition. \begin{proposition}\label{p:basic tilings} Let $\mathcal{T}$ be a tiling. \begin{enumerate}[\rm(1)] \item If $S$ and $T$ are two tiles such that $S\cap T\neq\emptyset$, then $S\cap T=\partial S\cap\partial T$. \item Every tile only intersects a finite number of tiles. \item Every tile has at least two vertices. \item Every tile has the same number of vertices and edges. Moreover, the boundary of a tile is the disjoint union of alternating vertices and edges, and the boundary of every edge is the set of two vertices that surround it. \end{enumerate} \end{proposition} From Proposition~\ref{p:basic tilings}-(4), we infer that each edge is determined by two vertices. From its definition, it is clear that each edge is also determined by exactly two faces. Similarly, each face is determined by its boundary, that is, by the vertices and edges of the tile to whom it serves as interior. Also, a vertex is determined both by the faces of the tiles that intersect in it and by the edges that converge in it. In this way we can talk about the faces and edges of a vertex, the vertices and edges of a face, and the faces and vertices of an edge. We denote the sets of faces, edges and vertices of a tiling by $\mathcal{F}_{\mathcal{T}}$, $\mathcal{E}_{\mathcal{T}}$ and $\mathcal{V}_{\mathcal{T}}$, respectively. Lastly, if $v$ is a vertex, we denote the sets of edges and faces of $v$ by $\mathcal{E}_{v}$ and $\mathcal{F}_{v}$, respectively. 
We can establish analogous notation for the vertices and edges of a face and for the faces and vertices of an edge. \section{Digital Topology of a Tiling}\label{sec_ Digital Topology of a tiling} Let $\mathcal{T}$ be a tiling of $\mathbb R^2$. Consider the set $\mathcal{D}_{\mathcal{T}}:=\mathcal{F}_{\mathcal{T}}\cup\mathcal{E}_{\mathcal{T}}\cup\mathcal{V}_{\mathcal{T}}$ and the surjection $q:\mathbb R^2\to \mathcal D_{\mathcal T}$ given by \begin{equation}\label{eq:quotient map} q(x)=\begin{cases} x,&\text{ if }x\in\mathcal V_{\mathcal T},\\ E,&\text{ if }x\in E\in\mathcal E_{\mathcal T},\\ F,&\text{ if }x\in F\in\mathcal F_{\mathcal T}. \end{cases} \end{equation} Namely, $q$ identifies the points in the same vertex, edge or face. \begin{definition} The set $\mathcal D_{\mathcal T}$ equipped with the quotient topology induced by $q$ is called the \textit{digital version of $\mathcal{T}$}. \end{definition} Let $\mathcal{T}$ be a tiling of $\mathbb R^2$. Clearly we have that the closure (in $\mathbb R^2$) of any vertex of the tiling is the vertex itself. The closure of an edge is the union of the edge and its two vertices. Finally, the closure of a face is the union of the face and all its vertices and edges. This allows us to describe the closure of any element $x$ in $\mathcal{D}_{\mathcal{T}}$ as follows: \begin{equation}\label{eq:cerradura} \overline{\{x\}} = \begin{cases} \{x\} &\quad\text{if $x\in\mathcal{V}_{\mathcal{T}}$,}\\ \{x\}\cup\mathcal{V}_{x} &\quad\text{if $x\in\mathcal{E}_{\mathcal{T}}$,} \\ \{x\}\cup\mathcal{V}_{x}\cup\mathcal{E}_{x} &\quad\text{if $x\in\mathcal{F}_{\mathcal{T}}$.}\\ \end{cases} \end{equation} In the following theorem we prove that $\mathcal{D}_{\mathcal{T}}$ is an Alexandrov space, by describing the smallest neighborhood of every element of $\mathcal{D}_{\mathcal{T}}$. \begin{theorem}\label{teo:teselacionAlexandrov} Let $\mathcal{T}$ be a tiling. Then $\mathcal{D}_{\mathcal{T}}$ is an Alexandrov space.
Moreover, for every element $x\in\mathcal D_{\mathcal T}$ the smallest neighborhood of $x$ is given by \[ N(x) = \begin{cases} \{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}&\quad\text{if $x\in\mathcal{V}_{\mathcal{T}}$,}\\ \{x\}\cup\mathcal{F}_{x} &\quad\text{if $x\in\mathcal{E}_{\mathcal{T}}$,} \\ \{x\} &\quad\text{if $x\in\mathcal{F}_{\mathcal{T}}$.}\\ \end{cases} \] \end{theorem} \begin{proof} Let $\mathcal{T}$ be a tiling, $q:\mathbb{R}^2\rightarrow\mathcal{D}_{\mathcal{T}}$ the quotient map associated to $\mathcal{D}_{\mathcal{T}}$ and $x\in\mathcal{D}_{\mathcal{T}}$ an arbitrary element. If $x\in\mathcal{V}_{\mathcal{T}}$ and $U$ is an open neighborhood of $x$, then $q^{-1}(U)$ is an open, saturated neighborhood of $q^{-1}(x)$. Since $x$ is a vertex, it is contained in the closure of its edges and faces; therefore $q^{-1}(U)$ intersects each element of $\mathcal E_x\cup\mathcal F_x$. Moreover, since $q^{-1}(U)$ is saturated, it contains all of the faces and edges of $x$, that is, $q^{-1}\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)\subset q^{-1}(U)$. Next we see that $q^{-1}\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)$ is an open set. Let $y\in q^{-1}\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)$. If $y$ is an element of a face of $x$, then that face is an open set contained in $q^{-1}\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)$. If $y$ is an element of an edge $E\in \mathcal E_x$, then there are two tiles $S, T\in\mathcal T$ that have $x$ as a common vertex and such that $y\in E\subset S\cap T$. Furthermore, for any other tile $R\in\mathcal{T}\setminus\{S,T\}$, the point $y$ does not belong to $R$.
Since $\mathcal{T}\setminus\{T,S\}$ is a locally finite collection of closed sets, its union is a closed subset of $\mathbb{R}^{2}$ (\cite[Theorem 1.1.11.]{Engelking}), thus $$\mathbb{R}^2\setminus\left(\bigcup_{\substack{R\in\mathcal T\\ R\notin\{S,T\}}}R\right)$$ is an open neighborhood of $y$ contained in $q^{-1}(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x})$. Finally, if $y\in q^{-1}(x)$, we consider the set $\mathcal{S}$ of tiles that contain $y$, and we recall that $\mathcal{S}$ is a finite set. By the previous argument $\mathbb{R}^2\setminus\left(\bigcup_{T\in(\mathcal{T}\setminus\mathcal{S})}T\right)$ is an open neighborhood of $y$ that is contained in $q^{-1}\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)$. This proves that $q^{-1}\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)$ is an open set such that $$q^{-1}\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)\subset q^{-1}(U).$$ Since $U$ was an arbitrary open neighborhood of $x$ in $\mathcal{D}_{\mathcal{T}}$ we conclude that $\left(\{x\}\cup\mathcal{E}_{x}\cup\mathcal{F}_{x}\right)$ is the smallest neighborhood of $x$ in $\mathcal{D}_{\mathcal{T}}$. If $x\in\mathcal{E}_{\mathcal{T}}$ and $U$ is an open neighborhood of $x$, then $q^{-1}(U)$ is an open, saturated neighborhood of $q^{-1}(x)$. Since $x$ is an edge, it is contained in the closure of its faces, so $q^{-1}(U)$ intersects the two elements of $\mathcal F_{x}=\{F_1,F_2\}$. Moreover, since $q^{-1}(U)$ is saturated, it contains $F_1\cup F_2$. Thus, $q^{-1}\left(\{x\}\cup\mathcal{F}_{x}\right)\subset q^{-1}(U)$. In the same vein as the previous case, we can see that $q^{-1}\left(\{x\}\cup\mathcal{F}_{x}\right)$ is open in $\mathbb{R}^{2}$. Since $U$ is an arbitrary open neighborhood of $x$ in $\mathcal{D}_{\mathcal{T}}$, we have that $\left(\{x\}\cup\mathcal{F}_{x}\right)$ is the smallest neighborhood of $x$ in $\mathcal{D}_{\mathcal{T}}$.
In order to complete the proof, we see that if $x\in\mathcal{F}_{\mathcal{T}}$, then $\{x\}$ is open in $\mathcal{D}_{\mathcal{T}}$ since $x$ is the interior of a tile in $\mathbb{R}^{2}$. \end{proof} We infer from theorems~\ref{teo:resumenconexidad} and \ref{teo:teselacionAlexandrov}, in combination with equality~\ref{eq:cerradura}, that the adjacency set of any element $x\in \mathcal{D}_{\mathcal{T}}$ is given by \[ \mathscr{A}(x) = \begin{cases} \mathcal{E}_{x}\cup\mathcal{F}_{x}&\quad\text{if $x\in\mathcal{V}_{\mathcal{T}}$,}\\ \mathcal{V}_{x}\cup\mathcal{F}_{x} &\quad\text{if $x\in\mathcal{E}_{\mathcal{T}}$,} \\ \mathcal{V}_{x}\cup\mathcal{E}_{x} &\quad\text{if $x\in\mathcal{F}_{\mathcal{T}}$.}\\ \end{cases} \] This information allows us to describe the connectedness graph of any digital version of a tiling. In Figure \ref{fig:teselacionhexagonal} we illustrate a portion of the tiling of the plane with regular hexagons of the same size as well as the connectedness graph of its digital version. In this graph we depict with white points the vertices of the graph corresponding to the faces of the tiling, with black points the vertices of the graph corresponding to vertices of the tiling, and with black rectangles the vertices of the graph corresponding to the edges of the tiling. \begin{figure}[htp] \centering \includegraphics[width=12cm]{TeselacionHexagonal.png} \caption{Tiling of the plane with regular hexagons and the connectedness graph of its digital version.} \label{fig:teselacionhexagonal} \end{figure} In Figure \ref{fig:teselaciondosfiguras} we present another example, illustrating a portion of a tiling of the plane with squares and hexagons, and the connectedness graph of its digital version. The vertices of the graph correspond to the elements of the digital version in the same way as in Figure \ref{fig:teselacionhexagonal}. 
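To make the preceding description concrete, the digital version of the square tiling of Example~\ref{ej:teselacionplano} admits a simple coordinate model: identify each element of $\mathcal{D}_{\mathcal{T}}$ with a point of $\mathbb{Z}^{2}$, where points with both coordinates even are vertices, points with both coordinates odd are faces, and the remaining points are edges. The following sketch (our own illustration; the encoding and function names are not part of the paper's formalism) computes closures, smallest neighborhoods and adjacency sets in this model and recovers the cardinalities predicted by equality~\ref{eq:cerradura} and Theorem~\ref{teo:teselacionAlexandrov}.

```python
# Parity model of the digital version of the square tiling (illustrative only):
# on Z^2, both coordinates even -> vertex, both odd -> face, mixed -> edge.

def kind(p):
    a, b = p
    if a % 2 == 0 and b % 2 == 0:
        return "vertex"
    if a % 2 == 1 and b % 2 == 1:
        return "face"
    return "edge"

def closure(p):
    # Closure of {p}: an odd coordinate "spreads" to its two even neighbours,
    # so cl(vertex) = {vertex}, cl(edge) = edge + 2 vertices,
    # cl(face) = face + 4 edges + 4 vertices.
    a, b = p
    da = (-1, 0, 1) if a % 2 else (0,)
    db = (-1, 0, 1) if b % 2 else (0,)
    return {(a + i, b + j) for i in da for j in db}

def smallest_nbhd(p):
    # Smallest open neighbourhood N(p): dual to closure (even coordinates spread).
    a, b = p
    da = (0,) if a % 2 else (-1, 0, 1)
    db = (0,) if b % 2 else (-1, 0, 1)
    return {(a + i, b + j) for i in da for j in db}

def adjacency_set(p):
    # q is adjacent to p iff one of them lies in the closure of the other.
    a, b = p
    box = {(a + i, b + j) for i in (-1, 0, 1) for j in (-1, 0, 1)} - {p}
    return {q for q in box if p in closure(q) or q in closure(p)}
```

In this model a vertex has four edges and four faces in its adjacency set, an edge has two vertices and two faces, and a face has four vertices and four edges, in accordance with the description of $\mathscr{A}(x)$ above.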
\begin{figure}[htp] \centering \includegraphics[width=11cm]{TeselacionDosFiguras.png} \caption{Tiling of the plane with hexagons and squares and the connectedness graph of its digital version.} \label{fig:teselaciondosfiguras} \end{figure} For the subsequent material, we recall the definition of a planar graph: \begin{definition}\label{def:graficaplana} Let $G=(V_G, E_G)$ be a graph. We say that $G$ is a planar graph if there is a function $\psi:V_{G}\cup E_{G}\rightarrow\mathscr{P}(\mathbb{R}^{2})$ such that: \begin{enumerate}[\rm(1)] \item The image of each element of $V_{G}$ is a singleton. \item The image of each element of $E_{G}$ is a subset of $\mathbb{R}^{2}$ homeomorphic to $(0,1)$. \item For any two elements $x,y\in V_{G}\cup E_{G}$, $x\neq y$, $$\psi(x)\cap\psi(y)=\emptyset.$$ \item If $v$ and $w$ are the vertices of the edge $e$, then $$\overline{\psi(e)}=\psi(e)\cup\psi(v)\cup\psi(w).$$ \end{enumerate} In this situation, we say that $\vert G\vert:=\psi(V_{G}\cup E_{G})$ is a planar embedding of $G$. Sometimes the set $|G|$ is also called a \textit{geometric realization} of $G$. \end{definition} \begin{lemma}\label{lema:graficaplana} Let $\mathcal{T}$ be a tiling. The connectedness graph of $\mathcal{V}_{\mathcal{T}}\cup\mathcal{E}_{\mathcal{T}}$, with the subspace topology induced by $\mathcal{D}_{\mathcal{T}}$, is a planar graph. \end{lemma} \begin{proof} Let $H=(V_H, E_H)$ be the connectedness graph of $\mathcal{V}_{\mathcal{T}}\cup\mathcal{E}_{\mathcal{T}}$. The set of vertices of $H$ is precisely the set $V_{H}=\mathcal{V}_{\mathcal{T}}\cup\mathcal{E}_{\mathcal{T}}$. By construction, each edge of $H$ records the adjacency between an edge of the tiling and one of its two vertices. Therefore, every edge in $E_{H}$ is a pair of the form $\{A,z\}$, where $A\in \mathcal E_z\subset \mathcal E_{\mathcal T}$ and $z\in\mathcal V_A\subset \mathcal V_{\mathcal T}$. For every $A\in\mathcal{E}_{\mathcal{T}}$, pick a point $x_{A}\in A\subset\mathbb R^2$.
If $x\in\mathcal{V}_{\mathcal{T}}$ is a vertex of $A\in\mathcal{E}_{\mathcal{T}}$, we define $I_{x_{A},x}$ as the connected component of $A\setminus\{x_{A}\}$ delimited by $x_{A}$ and $x$. It is clear that $\{x_{A},x\}\cap I_{x_{A}, x}=\emptyset$. Since $A$ is an edge of the tiling, then $I_{x_{A},x}$ is homeomorphic to the interval $(0,1)$ and $$\overline{I_{x_{A},x}}=I_{x_{A},x}\cup\{x_{A},x\}.$$ Moreover, if $\{x_{A},x\}\neq\{x_{B},y\}$ then $I_{x_{A},x}\cap I_{x_{B},y}=\emptyset$, since any two different edges of the tiling are disjoint. Therefore the function $\psi:V_{H}\cup E_{H}\rightarrow\mathscr{P}(\mathbb{R}^{2})$ given by \[ \psi(x) = \begin{cases} \{x\} &\quad\text{if}\hspace{.2cm} x\in\mathcal{V}_{\mathcal{T}},\\ \{x_{A}\} &\quad\text{if}\hspace{.2cm} x=A\hspace{.2cm}\text{for some}\hspace{.2cm}A\in\mathcal{E}_{\mathcal{T}}, \\ I_{x_{A},z} &\quad\text{if}\hspace{.2cm} x=\{A,z\}\in E_{H},\hspace{.2cm}\text{where}\hspace{.2cm}A\in\mathcal{E}_{\mathcal{T}}\hspace{.2cm}\text{and}\hspace{.2cm}z\in\mathcal{V}_{\mathcal{T}}, \end{cases} \] proves the planarity of $H$. \end{proof} \begin{proposition}\label{prop:graficaplana} Let $\mathcal{T}$ be a tiling. The connectedness graph of $\mathcal{D}_{\mathcal{T}}$ is a planar graph. \end{proposition} \begin{proof} Let $G=(V_{G},E_{G})$ be the connectedness graph of $\mathcal{D}_{\mathcal{T}}$ and $H=(V_H, E_H)$ the connectedness graph of $\mathcal{V}_{\mathcal{T}}\cup\mathcal{E}_{\mathcal{T}}$. Then $H$ is a subgraph of $G$ and by Lemma \ref{lema:graficaplana} there is a bijection $\psi:V_{H}\cup E_{H}\rightarrow\vert H\vert\subset \mathscr{P}(\mathbb{R}^{2})$ from the set of vertices and edges of $H$ to a planar embedding $\vert H\vert$ of $H$. In order to prove the planarity of $G$, we want to extend the definition of $\psi$ to a function $\phi:V_{G}\cup E_{G}\rightarrow\mathscr{P}(\mathbb{R}^{2})$ satisfying conditions (1)-(4) of Definition~\ref{def:graficaplana}.
For that purpose, it suffices to define $\phi$ for every element of $\mathcal{F}_{\mathcal{T}}$ and for every edge of $G$ connecting a face of the tiling with one of its vertices or edges. Let $C\in\mathcal{F}_{\mathcal{T}}$. We know that there is a unique tile $T$ that has $C$ as a face, and we know that $\mathscr{A}(C)=\mathcal{V}_{C}\cup\mathcal{E}_{C}$. Let $\gamma:\mathbb{B}^{2}\rightarrow T$ be a homeomorphism between $\mathbb{B}^2$, the unit closed disk of $\mathbb R^2$, and $T$. Consider the set $\gamma^{-1}(\mathcal{V}_{C})=\{w_1, w_2, \dots, w_n\}$, which is contained in $\mathbb{S}^{1}$ since $\gamma$ is a homeomorphism and $\mathcal{V}_{C}\subset\partial T$. Without loss of generality we can suppose that $w_{1}, \dots, w_{n}$ are numbered clockwise. Then we can number the set $\gamma^{-1}(\mathcal{E}_{C})=\{B_{1}, \dots, B_{n}\}$ denoting by $B_{1}$ the arc delimited by $w_{1}$ and $w_{2}$ and then proceeding in a clockwise manner. For every $i\in\{1, \dots,n\}$, define $A_{i}:=\gamma(B_{i})$; since $\gamma$ is a homeomorphism, we can assure that $\mathcal{E}_{C}=\{A_{1}, \dots, A_{n}\}$. Being consistent with the notation introduced in Lemma \ref{lema:graficaplana}, we have a point $x_{A_{i}}\in A_{i}$ such that $\psi(A_{i})=\{x_{A_{i}}\}$. Let $b_{i}:=\gamma^{-1}(x_{A_{i}})$, which is an element of $B_i\subset\mathbb{S}^{1}\setminus(\gamma^{-1}(\mathcal{V}_{C}))$. For every $w\in\{w_{1}, \dots,w_{n}, b_{1}, \dots, b_{n}\}$, let $L_{w}:=\left\{t w~\vert~t\in(0,1)\right\}.$ Now we consider the family $$\mathcal{L}_{C}:=\left\{L_{w}~\vert~w\in\{w_{1}, \dots,w_{n}, b_{1}, \dots, b_{n}\}\right\},$$ whose elements are pairwise disjoint and contained in $\Int{\mathbb{B}^{2}}$. Let $x_{C}:=\gamma(0,0)$. Since $(0,0)\in\Int{\mathbb{B}^{2}}$, it follows that $x_{C}\in C$.
Each element of the family $\gamma(\mathcal{L}_{C})=\{\gamma(L)~\vert~ L\in\mathcal{L}_{C}\}$ is an arc delimited by $x_{C}$ and one of the points in the set $\{v_{1}, \dots,v_{n}, x_{A_{1}}, \dots, x_{A_{n}}\}$, where $v_i=\gamma(w_i)\in\mathcal V_C$. Thus we can denote each arc $\gamma(L)\in\gamma(\mathcal{L}_{C})$ as $M_{x_{C}, v}$, where $v$ is the point in $\{v_{1}, \dots,v_{n}, x_{A_{1}}, \dots, x_{A_{n}}\}$ that, alongside $x_{C}$, delimits $\gamma(L)$. From its definition, it is clear that $M_{x_{C}, v}\cap\{x_{C},v\}=\emptyset$. These arcs are pairwise disjoint, since $\gamma$ is a homeomorphism and the elements of $\mathcal{L}_{C}$ are pairwise disjoint. Also, each arc $M_{x_C,v}$ is contained in $C$, since the elements of $\mathcal{L}_{C}$ are contained in $\Int{\mathbb{B}^{2}}$. From here we can infer that if $D$ is another face of the tiling, then the elements of $\gamma(\mathcal{L}_{C})$ and $\gamma(\mathcal{L}_{D})$ are pairwise disjoint. Furthermore, we can also conclude that the elements of $\gamma(\mathcal{L}_{C})$ do not intersect $\partial T$. Lastly, note that $M_{x_{C}, v}\in\gamma(\mathcal{L}_{C})$ implies $$\overline{M_{x_{C}, v}}=M_{x_{C}, v}\cup\{x_{C},v\}.$$ Therefore the function $\phi:V_{G}\cup E_{G}\rightarrow\mathscr{P}(\mathbb{R}^{2})$ given by \[ \phi(x) = \begin{cases} \psi(x) &\quad\text{if}\hspace{.2cm} x\in V_{H}\cup E_{H},\\ \{x_{C}\} &\quad\text{if}\hspace{.2cm} x=C\hspace{.2cm}\text{for some}\hspace{.2cm}C\in\mathcal{F}_{\mathcal{T}}, \\ M_{x_{C},v} &\quad\text{if}\hspace{.2cm} x=\{C,v\}\in E_{G}\setminus E_{H},\hspace{.2cm}\text{where}\hspace{.2cm}C\in\mathcal{F}_{\mathcal{T}}\hspace{.2cm}\text{and}\hspace{.2cm}v\in\mathcal{V}_{C}, \\ M_{x_{C}, x_{A}} &\quad\text{if}\hspace{.2cm} x=\{C,A\}\in E_{G}\setminus E_{H},\hspace{.2cm}\text{where}\hspace{.2cm}C\in\mathcal{F}_{\mathcal{T}}\hspace{.2cm}\text{and}\hspace{.2cm}A\in\mathcal{E}_{C}, \\ \end{cases} \] proves the planarity of $G$. 
\end{proof} \section{A Jordan Curve Theorem for Tilings}\label{Sec:main theorem} In this section we present our generalization of Khalimsky's Jordan curve theorem. To achieve this we rely on the connectedness graph of the digital version of a tiling. First we generalize the definition of a digital Jordan curve that we have previously introduced in Section~\ref{sec:preliminares}. \begin{definition} Let $\mathcal{T}$ be a tiling. A subset $J$ of $\mathcal{D}_{\mathcal{T}}$ is a \textit{digital Jordan curve} if $\vert J\vert\geq4$ and if for every $j\in J$, $J\setminus\{j\}$ is a digital arc. \end{definition} Recall that $J\setminus\{j\}$ is a digital arc in $\mathcal{D}_{\mathcal{T}}$ if and only if $J\setminus\{j\}$ is an induced simple path in the connectedness graph of $\mathcal{D}_{\mathcal{T}}$ (Remark \ref{obs:arcocamino}). This correspondence allows us to define a Jordan curve in the context of graph theory. \begin{definition} Let $G$ be a graph. A graph-theoretical Jordan curve is an induced cycle of length at least four. \end{definition} Let $G$ be a graph and $x$ a vertex of $G$. Keeping in mind the correspondence between a topological space and its connectedness graph, we denote by $\mathscr{A}(x,G)$ the subgraph induced by the vertices of $G$ adjacent to $x$. We say that $G$ is \textit{locally Hamiltonian} if for every vertex $x$ of $G$, $\mathscr{A}(x,G)$ has a Hamiltonian cycle. The following theorem, due to V. Neumann-Lara and R. Wilson (\cite{Neumann-Lara}), is crucial for our generalization. \begin{theorem}\label{teo:jordangraficas} If $G$ is a locally Hamiltonian, connected graph and $J\subset G$ is a graph-theoretical Jordan curve, then \begin{enumerate}[\rm a)] \item the complement of $J$ has at most two connected components in $G$, \item if $G$ is planar, then the complement of $J$ has exactly two connected components in $G$.
\end{enumerate} \end{theorem} It can be seen from Neumann-Lara and Wilson's proof that if $G$ is an infinite, planar, locally Hamiltonian and connected graph and $J\subset G$ is a graph-theoretical Jordan curve, then out of the two connected components that make up the complement of $J$ in $G$, one is bounded whereas the other one is not. We call the set of vertices of the bounded component the \textit{interior} of $V_G\setminus V_J$, and denote it by $I(J)$. In a similar manner, the set of vertices of the unbounded component, $O(J)$, is called the \textit{exterior} of $V_{G}\setminus V_{J}$. \begin{theorem}\label{teo:jordanteselaciones} Let $\mathcal{T}$ be a tiling and $J\subset\mathcal{D}_{\mathcal{T}}$ a digital Jordan curve. Then $\mathcal{D}_{\mathcal{T}}\setminus J$ has exactly two connected components. \end{theorem} \begin{proof} Let $G$ be the connectedness graph of $\mathcal{D}_{\mathcal{T}}$; it suffices to see that $G$ satisfies the hypotheses of Theorem \ref{teo:jordangraficas}. Let $q:\mathbb{R}^{2}\rightarrow \mathcal{D}_{\mathcal{T}}$ be the quotient map associated to the topology of $\mathcal{D}_{\mathcal{T}}$ (see equation~\ref{eq:quotient map}). We know that $q$ is continuous and therefore the connectedness of $\mathcal{D}_{\mathcal{T}}$ is guaranteed by the connectedness of $\mathbb{R}^{2}$. Moreover, we know that $\mathcal{D}_{\mathcal{T}}$ is an Alexandrov discrete space, so by Theorem~\ref{teo:resumenconexidad} we have that $G$ is connected. In addition, $G$ is a planar graph by Proposition \ref{prop:graficaplana}. It remains to prove that $G$ is a locally Hamiltonian graph.
By Theorem \ref{teo:teselacionAlexandrov}, if $x$ is a vertex of $G$ we have that: \[ \mathscr{A}(x,G) = \begin{cases} \mathcal{E}_{x}\cup\mathcal{F}_{x}&\quad\text{if $x\in\mathcal{V}_{\mathcal{T}}$,}\\ \mathcal{V}_{x}\cup\mathcal{F}_{x} &\quad\text{if $x\in\mathcal{E}_{\mathcal{T}}$,} \\ \mathcal{V}_{x}\cup\mathcal{E}_{x} &\quad\text{if $x\in\mathcal{F}_{\mathcal{T}}$.}\\ \end{cases} \] If $x\in\mathcal{V}_{\mathcal{T}}$, we know that at least three tiles intersect in $x$. Each of these tiles has exactly two edges whose vertex is $x$. This fact allows us to infer that if we consider the set $\mathcal E_{x}$ (which is finite) and we number its elements clockwise, then every two consecutive edges determine a face of $x$ (and none of these faces is determined by two different pairs of edges). We also know that every edge of $x$ is determined by exactly two faces of $x$. Lastly, we know that in the connectedness graph there are no two adjacent edges nor two adjacent faces. Thus the subgraph induced by $\mathcal{E}_{x}\cup\mathcal{F}_{x}$ is a cycle wherein the vertices of the graph corresponding to edges of the tiling and the vertices of the graph corresponding to faces of the tiling alternate. If $x\in\mathcal{E}_{\mathcal{T}}$, then $x$ has exactly two vertices and two faces; furthermore, both vertices are vertices of both faces. For that reason the subgraph of $G$ induced by $\mathcal{V}_{x}\cup\mathcal{F}_{x}$ is a cycle with exactly four vertices. Finally, if $x\in\mathcal{F}_{\mathcal{T}}$ we recall the analysis made in Proposition \ref{prop:graficaplana} to conclude that the vertices and edges of $x$ alternate in $\partial x$, allowing us to verify that the subgraph induced by $\mathcal{V}_{x}\cup\mathcal{E}_{x}$ is a cycle. In any case, we have proved that $\mathscr{A}(x,G)$ is a cycle and thus $G$ is locally Hamiltonian. In addition, we know that since $J$ is a digital Jordan curve, then the subgraph induced by $J$ in $G$ is a graph-theoretical Jordan curve.
Hence, by Theorem \ref{teo:jordangraficas} we can conclude that $G\setminus J$ has exactly two connected components in $G$. To finish the proof, recall that $\mathcal{D}_{\mathcal{T}}$ is an Alexandrov space. Since $G\setminus J$ has exactly two connected components in $G$, by Theorem~\ref{teo:resumenconexidad} we have that $\mathcal{D}_{\mathcal{T}}\setminus J$ has exactly two connected components. Moreover, we know that the connected components of $\mathcal{D}_{\mathcal{T}}\setminus J$ are precisely the vertices of the connected components of $G\setminus J$; in other words, the connected components of $\mathcal{D}_{\mathcal{T}}\setminus J$ are its interior $I(J)$, which is bounded, and its exterior $O(J)$, which is not. \end{proof} \section{Open and Closed Digital Jordan Curves}\label{sec:open and closed} In \cite{Khalimsky4}, Khalimsky, Kopperman and Meyer developed a theory on the processing of digital pictures by means of studying boundaries. Their work relies on properties of the digital plane and on Khalimsky's Jordan curve theorem. The digital plane is not a homogeneous space; nonetheless, for any two faces $C_{1}, C_{2}\in\mathcal{F}_{\mathcal{T}}$ (where $\mathcal{T}$ is the tiling of Example \ref{ej:teselacionplano}) there is a homeomorphism from the digital plane into itself that maps $C_1$ to $C_2$. For this reason, it is convenient to think about the pixels on a screen as the faces of the tiling. Motivated by this approach, we are now interested in studying digital Jordan curves $J$ that satisfy two conditions: \begin{itemize} \item[(W1)] half of the elements of $J$ are faces of the tiling (and they alternate with the non-face elements of the curve). \item[(W2)] $J$ encloses a face of the tiling (namely, $I(J)\cap \mathcal F_{\mathcal T}\neq\emptyset$). \end{itemize} \begin{lemma}\label{lema:ultimolema} Let $\mathcal{T}$ be a tiling and $J\subset\mathcal{D}_{\mathcal{T}}$ a digital Jordan curve.
Then, for every $x\in J$, $\mathscr{A}(x)\cap I(J)\neq\emptyset$. \end{lemma} \begin{proof} Let $x\in J$ and let $G$ be the connectedness graph of $\mathcal{D}_{\mathcal{T}}$. We proceed by contradiction. Suppose that $\mathscr{A}(x)\cap I(J)=\emptyset$ and let $x_{0}$ and $x_{1}$ be the two points of $J$ adjacent to $x$. It is clear that $x_{0}$ and $x_{1}$ are not adjacent. We know that the subgraph induced by $\mathscr{A}(x)$ is a cycle with at least four elements. For that reason there are two paths contained in $\mathscr{A}(x,G)$ that intersect only at $x_{0}$ and $x_{1}$. Let $a$ and $b$ be two vertices, different from $x_{0}$ and $x_{1}$, in each of these paths. Since $\mathscr{A}(x)\cap I(J)=\emptyset$, then, with the exception of $x_{0}$ and $x_{1}$, the vertices of the paths we mentioned earlier are contained in $O(J)$. In particular, $a,b\in O(J)$. Since $\mathcal{T}$ is a tiling, $G$ is a planar graph and thus there is a map $\psi:V_{G}\cup E_G\to\mathscr{P}(\mathbb{R}^{2})$ that attests to the planarity of $G$. Let $H$ be the subgraph induced by the vertices $V_{H}:=I(J)\cup J$ in $G$. Being a subgraph of $G$, $H$ is a planar graph too. Since $\mathscr{A}(x)\cap I(J)=\emptyset$, we have that $\mathscr{A}(x)\cap (I(J)\cup J)=\{x_{0}, x_{1}\}$, therefore $x$ is a vertex of exactly two faces of $H$. Let $\mathrm{C}$ be the cycle that determines the bounded face of $H$ having $x$ as a vertex (the other face contains the exterior face of the subgraph induced by the vertices of $J$). Then we know that $\bigcup \psi(\mathrm{C})$ is the boundary of a set $\mathrm{D}$, homeomorphic to a closed disk, and that $\psi(x_0),\psi(x),\psi(x_1)\subset \bigcup \psi(\mathrm{C})=\partial \mathrm{D}$. Now we construct a graph $\Gamma$ with the set of vertices $V_{\Gamma}=V_{G}\cup\{p\}$, where $p\notin V_{G}$, and the set of edges $E_{\Gamma}=E_{G}\cup\left\{\{p,w\}~\vert~w\in V_{\mathrm{C}}\right\}$. We claim that $\Gamma$ is a planar graph.
Indeed, since $\mathrm{D}$ is homeomorphic to a closed disk and $\psi(x_0),\psi(x),\psi(x_1)\subset\partial\mathrm{D}$, we can replicate the construction of the point $x_{C}$ and the sets $M_{x_{C},v}$ in the proof of Proposition \ref{prop:graficaplana} to prove the existence of a point $x_{p}\in\Int\mathrm{D}$ and sets $M_{x_{p},w}\subset\Int\mathrm{D}$ such that the function $\phi:V_{\Gamma}\cup E_{\Gamma}\rightarrow\mathscr{P}(\mathbb{R}^{2})$ defined as \[ \phi(x) = \begin{cases} \psi(x) &\quad\text{if}\hspace{.2cm} x\in V_{G}\cup E_{G},\\ \{x_{p}\} &\quad\text{if}\hspace{.2cm} x=p, \\ M_{x_{p},w} &\quad\text{if}\hspace{.2cm} x\in E_{\Gamma}\setminus E_{G},\hspace{.2cm}\text{where}\hspace{.2cm}p\hspace{.2cm}\text{and}\hspace{.2cm}w\in V_{\mathrm{C}} \\ &\quad\text{are the vertices of}\hspace{.2cm}x, \\ \end{cases} \] proves the planarity of $\Gamma$. However, if we take a look at the sets of vertices $\{p,a,b\}$ and $\{x_0,x,x_1\}$, we see that $\Gamma$ contains a subdivision of a complete bipartite graph $K_{3,3}$, which contradicts Kuratowski's theorem (\cite[Theorem 4.4.6]{Diestel}). Therefore we have proved that $\mathscr{A}(x)\cap I(J)\neq\emptyset$. \end{proof} \begin{proposition}\label{prop:ultimaprop} Let $\mathcal{T}$ be a tiling and $J\subset\mathcal{D}_{\mathcal{T}}$ a digital Jordan curve. The following are equivalent: \begin{enumerate}[\rm a)] \item $J$ is closed. \item $I(J)$ and $O(J)$ are open. \item $J$ is nowhere dense. \item $I(J)\cup O(J)$ is dense. \item $J\cap\mathcal{F}_{\mathcal{T}}=\emptyset$. \item $J=\partial(I(J))$. \end{enumerate} \end{proposition} \begin{proof} a)$\implies$b). $I(J)\cup O(J)$ is the complement of $J$ and thus it is open. In addition, since $\mathcal D_{\mathcal T}$ is an Alexandrov space, Theorem \ref{teo:resumenconexidad} guarantees that $I(J)$ and $O(J)$ are open and closed in $I(J)\cup O(J)$, because they are the connected components of $I(J)\cup O(J)$. 
Since the latter is open in $\mathcal D_{\mathcal T}$, we conclude that $I(J)$ and $O(J)$ are also open in $\mathcal{D}_{\mathcal{T}}.$ b)$\implies$c). We prove the contrapositive. Suppose there is an element $x\in\Int{\left(\overline{J}\right)}$. This implies that $N(x)\subset\overline{J}$. In particular, we have that $x\in\overline{J}$ and hence $\overline{\{x\}}\subset\overline{J}$. Thus $\mathscr{A}(x)\cup\{x\}\subset\overline{J}$. In this situation, $\overline{J}$ cannot be a digital Jordan curve since the connectedness graph of $\mathscr{A}(x)\cup\{x\}$ contains at least one vertex, one edge and one face adjacent to each other; in other words, the connectedness graph of $\mathscr{A}(x)\cup\{x\}$ contains cycles of length $3$. Since $J$ is a digital Jordan curve, this yields $J\neq\overline{J}$; that is, $J$ is not closed, and therefore $I(J)\cup O(J)$ is not open, which in turn implies that \textit{b)} is false. c)$\implies$d). Suppose that $I(J)\cup O(J)$ is not dense. Since $\mathcal{D}_{\mathcal{T}}$ is the disjoint union of $I(J)$, $O(J)$ and $J$, there is a point $x\in J$ such that $N(x)\cap\left( I(J)\cup O(J)\right)=\emptyset$. Therefore $N(x)\subset J$, so $x\in\Int(J)$ and thus $\Int{\left(\overline{J}\right)}\neq\emptyset$. d)$\implies$e). Assume there is a point $x\in J\cap\mathcal{F}_{\mathcal{T}}$. Thus $N(x)=\{x\}\subset J$. Since $J\cap\left(I(J)\cup O(J)\right)=\emptyset$, we have that $N(x)\cap\left(I(J)\cup O(J)\right)=\emptyset$ and therefore $I(J)\cup O(J)$ cannot be dense. e)$\implies$f). First let us prove that $J$ is closed. By e), the elements of $J$ are alternating vertices and edges. The closure of a vertex is the vertex itself, and the closure of an edge is the union of the edge and its two vertices. Since the vertices and edges alternate in $J$, the vertices of each edge in $J$ are contained in $J$. Therefore the closure of each element of $J$ is contained in $J$. Since $\mathcal{D}_{\mathcal{T}}$ is an Alexandrov discrete space this implies that $J$ is closed.
Now by the implication a)$\implies$b), we infer that $I(J)$ and $O(J)$ are open. This proves that $\partial(I(J))=\overline{(I(J))}\setminus I(J)$. Furthermore, since $\mathcal{D}_{\mathcal{T}}$ is the disjoint union of $J$, $I(J)$ and $O(J)$, we conclude that $I(J)\cup J$ is a closed set that contains $I(J)$. Thus $$\partial(I(J))=\overline{(I(J))}\setminus I(J)\subset (I(J)\cup J)\setminus I(J)=J.$$ It remains to prove that $J\subset\partial (I(J))$. Since $I(J)$ and $J$ are disjoint sets, it is enough to prove that $N(x)\cap I(J)\neq\emptyset$ for every $x\in J$. Let $x\in J$; by Lemma \ref{lema:ultimolema} we have that $\mathscr{A}(x)\cap I(J)\neq\emptyset$. If $x$ is an edge, then the elements of $N(x)$ are $x$ and its two faces, and the elements of $\mathscr{A}(x)$ are the two faces and the two vertices of $x$. We know that $J$ and $I(J)$ are disjoint, and that the vertices of $x$ are contained in $J$. Thus $\mathscr{A}(x)\cap I(J)$ contains at least one of the faces of $x$, implying that $N(x)\cap I(J)\neq\emptyset$. Lastly, if $x$ is a vertex, then $\mathscr{A}(x)\subset N(x)$ and therefore $N(x)\cap I(J)\neq\emptyset$. In any case, $J\subset\partial (I(J))$. f)$\implies$a). This follows from the fact that the boundary of any subset of a topological space is always closed. \end{proof} Analogously, we can obtain a dual version of Proposition \ref{prop:ultimaprop}. We leave the proof to the reader, since it is similar to the previous one. \begin{proposition}\label{prop:ultimaultimaprop} Let $\mathcal{T}$ be a tiling and $J\subset\mathcal{D}_{\mathcal{T}}$ a digital Jordan curve. The following are equivalent: \begin{enumerate}[\rm(a)] \item $J$ is open. \item $I(J)$ and $O(J)$ are closed. \item $J\cap\mathcal{V}_{\mathcal{T}}=\emptyset$. \end{enumerate} \end{proposition} From Proposition~\ref{prop:ultimaultimaprop}-(c), we conclude that open digital Jordan curves are made exclusively of faces and edges (hence, they satisfy condition W1).
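This characterization can be illustrated computationally. The sketch below (our own; it assumes the parity model of the square tiling of Example~\ref{ej:teselacionplano}, where points of $\mathbb{Z}^{2}$ with both coordinates even are vertices, both odd are faces, and mixed parity are edges) builds an open digital Jordan curve out of four faces and four edges and verifies that it contains no vertex of the tiling, in accordance with Proposition~\ref{prop:ultimaultimaprop}-(c).

```python
from collections import deque

# Parity model of the square tiling (illustrative only): on Z^2,
# both coordinates even -> vertex, both odd -> face, mixed -> edge.

def closure(p):
    a, b = p
    da = (-1, 0, 1) if a % 2 else (0,)
    db = (-1, 0, 1) if b % 2 else (0,)
    return {(a + i, b + j) for i in da for j in db}

def smallest_nbhd(p):
    a, b = p
    da = (0,) if a % 2 else (-1, 0, 1)
    db = (0,) if b % 2 else (-1, 0, 1)
    return {(a + i, b + j) for i in da for j in db}

def adjacency_set(p):
    a, b = p
    box = {(a + i, b + j) for i in (-1, 0, 1) for j in (-1, 0, 1)} - {p}
    return {q for q in box if p in closure(q) or q in closure(p)}

# An open digital Jordan curve: the 4 faces and 4 edges surrounding vertex (2, 2).
J = {(1, 1), (2, 1), (3, 1), (3, 2), (3, 3), (2, 3), (1, 3), (1, 2)}

# J is an induced 8-cycle: every element has exactly two neighbours inside J.
is_jordan = all(len(adjacency_set(p) & J) == 2 for p in J)

# J is open (N(p) stays inside J) and contains no vertex of the tiling.
is_open = all(smallest_nbhd(p) <= J for p in J)
has_vertex = any(a % 2 == 0 and b % 2 == 0 for (a, b) in J)

# The component of the complement containing (2, 2), by breadth-first search.
def component(start, forbidden, radius=6):
    seen, queue = {start}, deque([start])
    while queue:
        p = queue.popleft()
        for q in adjacency_set(p):
            if q not in forbidden and q not in seen and max(map(abs, q)) <= radius:
                seen.add(q)
                queue.append(q)
    return seen

interior = component((2, 2), J)
```

Here the interior of the curve is the single vertex $(2,2)$, so it contains no face of the tiling.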
Furthermore, if $J$ is an open digital Jordan curve, then $I(J)$ is closed, so for every $x\in I(J)$, $\overline{\{x\}}\subset I(J)$, and whether $x$ is a vertex, an edge or a face, $\overline{\{x\}}$ always contains a vertex of the tiling. This reasoning yields the following remark. \begin{remark}\label{r:Jopen I contains vertex} If $J$ is an open digital Jordan curve, then $I(J)\cap \mathcal V_{\mathcal T}\neq \emptyset$. \end{remark} However, we cannot guarantee that $I(J)$ contains a face of the tiling. This situation is pictured for the tiling of the plane with regular hexagons in Figure \ref{fig:ultimafigura}. In this figure, the dotted line intersects all the elements of a digital Jordan curve consisting only of edges and faces of the tiling, whilst the thick black line intersects all the elements of its interior, none of which is a face of the tiling. \begin{figure}[htp] \centering \includegraphics[width=12cm]{teselacionhexagonalcontraejemplo.png} \caption{A digital Jordan curve and its interior.} \label{fig:ultimafigura} \end{figure} On the other hand, by Proposition~\ref{prop:ultimaprop}-(d), every closed digital Jordan curve satisfies condition W2. However, curves of this kind may not be very interesting, since they do not contain any face (and therefore they do not satisfy condition W1). This is why, if we are looking for a class of digital Jordan curves satisfying W1 and W2, we need to add an extra condition. We formalize this in the following section. \section{Well-behaved Digital Jordan Curves}\label{Sec:Well behaved} Intuitively, the problem with the example in Figure~\ref{fig:ultimafigura} is that the elements of $J$ that are faces of the tiling are very close to each other. In order to prevent this from happening, we need the following. \begin{definition} Let $\mathcal{T}$ be a tiling and $J\subset\mathcal{D}_{\mathcal{T}}$ a digital Jordan curve.
We say that $J$ is \textit{well-behaved} if for every $C\in\mathcal F_{\mathcal T}\cap J$ and every $x\in\mathcal V_{C}$, $\mathcal{F}_{x}\not\subset J$. \end{definition} Clearly the curve of Figure~\ref{fig:ultimafigura} is not well-behaved. \begin{lemma}\label{l:wb caras en interior} \label{obs:ultimaobs} Let $\mathcal{T}$ be a tiling and $J\subset\mathcal{D}_{\mathcal{T}}$ a well-behaved digital Jordan curve. If $\mathcal{V}_{\mathcal{T}}\cap I(J)\neq\emptyset$ (in particular, if $J$ is a well-behaved open digital Jordan curve), then $\mathcal{F}_{\mathcal{T}}\cap I(J)\neq\emptyset$. \end{lemma} \begin{proof} Since $I(J)$ and $O(J)$ are the components of $Y:=I(J)\cup O(J)$, we can find an open set $U\subset \mathcal D_{\mathcal T}$ such that $U\cap Y=I(J)$. Thus, $$I(J)\subset U\subset I(J)\cup J.$$ Assume that $\mathcal F_{\mathcal T}\cap I(J)=\emptyset$, and pick an element $x\in\mathcal{V}_{\mathcal{T}}\cap I(J)$. Notice that $$\mathcal F_x\subset N(x)\subset U\subset I(J)\cup J.$$ Now, we can use the assumption that $\mathcal{F}_{\mathcal{T}}\cap I(J)=\emptyset$ to conclude that $\mathcal{F}_{x}\subset J$, which is impossible because $J$ is well-behaved. Therefore $\mathcal F_{\mathcal T}\cap I(J)\neq\emptyset$, as desired. \end{proof} As an immediate consequence of Lemma~\ref{obs:ultimaobs}, we have the following. \begin{remark}\label{remark.wellbehaved are nice} Let $\mathcal{T}$ be a tiling and $J\subset\mathcal{D}_{\mathcal{T}}$ a well-behaved open digital Jordan curve. Then $J$ satisfies conditions W1 and W2. \end{remark} To finish this paper we will use the theory we have developed in order to prove a generalization of Rosenfeld's theorem that we presented in the introduction (Theorem~\ref{teo:Rosenfeld}). Let $\mathcal{D}_{\mathcal{T}}$ be the digital plane. Recall that Rosenfeld works with a square grid endowed with the $k$-adjacency relations ($k\in\{4,8\}$) previously described.
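As a toy illustration of these adjacency relations (our own sketch, not part of the paper), the following code models Rosenfeld's grid with unit squares named by their integer positions, where $4$-adjacency amounts to sharing an edge and $8$-adjacency to sharing at least a vertex. It checks that the $8$ cells surrounding a central cell form a simple closed $4$-path: every cell of the curve has exactly two $4$-neighbors on the curve.

```python
# Illustration only: Rosenfeld-style adjacencies for the square grid,
# with each unit square (face) named by its integer position.

def four_adjacent(c1, c2):
    """4-adjacency: the two squares share an edge."""
    return abs(c1[0] - c2[0]) + abs(c1[1] - c2[1]) == 1

def eight_adjacent(c1, c2):
    """8-adjacency: the two squares share at least a vertex."""
    return c1 != c2 and max(abs(c1[0] - c2[0]), abs(c1[1] - c2[1])) == 1

# The 8 squares around the center (1, 1) form a simple closed 4-path:
ring = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

for c in ring:
    # each cell of the curve has exactly two 4-neighbors on the curve
    assert sum(four_adjacent(c, d) for d in ring) == 2

# the single interior cell is 8-adjacent to every cell of the curve
print(all(eight_adjacent((1, 1), c) for c in ring))  # True
```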
If we think of the faces of the digital plane as the points in Rosenfeld's grid, two faces $C_{1}, C_{2}\in\mathcal{F}_{\mathcal{T}}$ are \textit{$4$-adjacent} if $\mathcal{E}_{C_{1}}\cap\mathcal{E}_{C_{2}}\neq\emptyset$. Similarly, we say that they are \textit{$8$-adjacent} if $\mathcal{V}_{C_{1}}\cap\mathcal{V}_{C_{2}}\neq\emptyset$. We also give the corresponding definition of a simple closed $4$-path with at least five points: $J\subset \mathcal{D}_{\mathcal{T}}$ is a \textit{$4$-Jordan curve} if $J$ is an open digital Jordan curve such that $\vert\mathcal{F}_{\mathcal{T}}\cap J\vert\geq5$ and for every $C\in\mathcal{F}_{\mathcal{T}}\cap J$, $C$ has exactly two $4$-adjacent faces in $J$. This reinterpretation of Rosenfeld's work motivates the following definition. \begin{definition} Let $\mathcal{T}$ be a tiling and $\mathcal{D}_{\mathcal{T}}$ its digital version. \begin{enumerate}[\rm(1)] \item We say that two faces $C_1,C_2\in\mathcal F_{\mathcal T}$ are edge-adjacent (or that $C_1$ and $C_2$ are edge-neighbors) if $\mathcal E_{C_1}\cap\mathcal E_{C_2}\neq\emptyset$. \item We say that two faces $C_1,C_2\in\mathcal F_{\mathcal T}$ are vertex-adjacent (or that $C_1$ and $C_2$ are vertex-neighbors) if $\mathcal V_{C_1}\cap\mathcal V_{C_2}\neq\emptyset$. \item A finite sequence of faces $(C_0,\dots, C_n)$ is an edge-path of faces (vertex-path of faces) if $C_i$ is edge-adjacent (vertex-adjacent) to $C_{i+1}$ and $C_{i-1}$, for every $i=1,\dots, n-1.$ \item A set $Y\subset\mathcal D_{\mathcal T}\cap\mathcal F_{\mathcal T}$ is edge-connected (vertex-connected) if there is an edge-path (vertex-path) between any two elements $C_1,C_2\in Y$, such that every element of the path belongs to $Y$. \item A digital Jordan curve $J\subset \mathcal D_{\mathcal T}$ is an edge-Jordan curve if $J$ is open and for every face $C\in J\cap\mathcal F_{\mathcal T}$ there are exactly two faces in $J$ that are edge-adjacent with $C$.
\end{enumerate} \end{definition} Let $\mathcal{T}$ be a tiling and define $$\Delta(\mathcal T)=\sup\{|\mathcal F_{x}| : x\in\mathcal V_{\mathcal T}\}.$$ Observe that if $J$ is an edge-Jordan curve and $\mathcal F_{x}\subset J$ for a certain $x\in \mathcal V_{\mathcal T}$, then every element of $\mathcal F_x$ has exactly two edge-neighbors in $J$, and therefore $J$ cannot contain any other face of the tiling. This yields the following remark. \begin{remark}\label{r: grado acotado implica wb} Let $\mathcal T$ be a tiling and $J\subset\mathcal D_{\mathcal T}$ be an edge-Jordan curve. If $|J\cap\mathcal F_{\mathcal T}|\geq \Delta(\mathcal{T})+1$, then $J$ is well-behaved. \end{remark} We can now generalize Rosenfeld's theorem as follows. \begin{theorem}\label{teo:cuadrosrosenfeld} Let $\mathcal T$ be a tiling with $\Delta(\mathcal T)<\infty$. If $J\subset\mathcal{D}_{\mathcal{T}}$ is an edge-Jordan curve with $|J\cap\mathcal F_{\mathcal T}|\geq \Delta(\mathcal{T})+1,$ then $I(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$, $O(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$ and these two sets are vertex-connected. \end{theorem} \begin{proof} Since $I(J)\cup J$ is a finite set and $\mathcal F_{\mathcal T}$ is infinite, we always have that $O(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$. On the other hand, by Remark~\ref{r: grado acotado implica wb} and Lemma~\ref{l:wb caras en interior}, we infer that $I(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$. Let us prove that $I(J)\cap\mathcal{F}_{\mathcal{T}}$ is vertex-connected. We know that $I(J)$ is connected and closed. Thus, for every $C\in \mathcal{F}_{\mathcal{T}}\cap I(J)$ we have that $\mathcal{V}_{C}\cup\mathcal{E}_{C}\subset I(J)$. \textit{Claim:} For every $A\in\mathcal{E}_{\mathcal{T}}\cap I(J)$, $\mathcal{F}_{A}\cap I(J)\neq\emptyset$.
Indeed, if $A\in\mathcal{E}_{\mathcal{T}}\cap I(J)$ is such that $\mathcal{F}_{A}\cap I(J)=\emptyset$, we can use the fact that $O(J)$ is closed to conclude that $\mathcal{F}_{A}\subset J$. Since $J$ is an open digital Jordan curve in $\mathcal D_{\mathcal T}$, Proposition~\ref{prop:ultimaultimaprop} guarantees that $J\cap \mathcal V_{\mathcal T}=\emptyset$. Thus, each face of $A$ has a common edge with three faces of $J$, which contradicts the definition of $J$. This proves the claim. Since $I(J)$ is a connected Alexandrov space, Theorem \ref{teo:resumenconexidad} ensures that $I(J)$ is digitally arc connected. Thus for any $C_{1},C_{2}\in I(J)\cap \mathcal{F}_{\mathcal{T}}$ there is a digital arc of minimum length $\alpha_{0}=(x_0=C_{1}, x_1, \dots, x_n=C_{2})$ in $I(J)$ from $C_{1}$ to $C_{2}$. We can also assume that $\alpha_0$ contains a maximum number of faces. Since $\alpha_0$ has minimum length, there are at most three consecutive elements of $\alpha_0$ contained in the closure of the same tile. Let $$m:=\max\{k\in\mathbb{N}\mid \forall i\leq k: x_{2i}\in\mathcal{F}_{\mathcal{T}}\}.$$ Suppose that $m\neq\frac{n}{2}$. First observe that if $2m= n-1$, then $x_{2m}$ and $x_n$ would be two consecutive faces in $\alpha_0$, which is impossible. Thus, $0\leq 2m<n-1$. In this situation, $x_{2m}$ is a face of the tiling while $x_{2m+2}$ is not. If $x_{2m+1}\in\mathcal{E}_{x_{2m}}$, then $x_{2m+2}\in\mathcal{V}_{x_{2m}}$, contradicting the fact that $\alpha_0$ is a digital arc. Therefore we infer that $x_{2m+1}\in\mathcal{V}_{x_{2m}}$, $x_{2m+2}\in\mathcal{E}_{\mathcal{T}}\setminus\mathcal{E}_{x_{2m}}$ and $x_{2m+1}\in\mathcal{V}_{x_{2m+2}}$. By the claim, we can pick a face $C\in\mathcal{F}_{x_{2m+2}}\cap I(J)$. Once again, since $\alpha_0$ is a digital arc, we conclude that $C\notin\alpha_0$ and $x_{2m+3}\in\mathcal{V}_{C}$ (if $2m+2<n$). Thus, $\{x_{2m+1}, x_{2m+2}, x_{2m+3}\}\subset\overline{\{C\}}$, implying that $x_{2m+4}\notin\overline{\{C\}}$.
For $i\in\{0,\dots ,n\}$, let \[ y_i = \begin{cases} x_i &\quad\text{if}\hspace{.2cm}i\neq 2m+2,\\ C&\quad\text{if}\hspace{.2cm}i= 2m+2.\\ \end{cases} \] and define $\alpha_1:=(y_0, y_1, \dots, y_n)$. Then $\alpha_1$ is a digital arc from $C_{1}$ to $C_{2}$ in $I(J)$ containing more faces than $\alpha_0$. This contradicts the fact that $\alpha_0$ contains a maximum number of faces. Thus, we can conclude that $m=\frac{n}{2}$ and therefore $(x_0,x_2,\dots,x_{2m})$ is a simple vertex-path between $C_1$ and $C_2$. This proves that $I(J)\cap\mathcal{F}_{\mathcal{T}}$ is vertex-connected. Similarly we can prove that $O(J)\cap\mathcal{F}_{\mathcal{T}}$ is vertex-connected. This completes the proof. \end{proof} It is clear that Theorem~\ref{teo:cuadrosrosenfeld} directly implies Rosenfeld's theorem (Theorem~\ref{teo:Rosenfeld}). Moreover, we can also deduce similar results for every grid of points induced by the faces of a regular tiling of the plane, such as the one given by Kopperman in \cite[Theorem 26]{Kopperman} for a hexagonal grid of points. Finally, there is a dual theorem, also due to Rosenfeld (see \cite[Theorem 3.3]{Rosenfeld6}), that is obtained by interchanging the roles of $4$ and $8$ in Theorem \ref{teo:Rosenfeld}. To finish this paper, we present a generalization of this result in the context that we have been working on. In order to do this, consider a tiling $\mathcal T$ and define a \textit{vertex-Jordan curve} as a digital Jordan curve $J\subset \mathcal D_{\mathcal T}$ such that $J\subset \mathcal F_{\mathcal T}\cup\mathcal V_{\mathcal T}$ and with the property that every $C\in\mathcal{F}_{\mathcal{T}}\cap J$ has exactly two vertex-neighbors in $J$. \begin{theorem} Let $\mathcal{T}$ be a tiling with $\Delta (\mathcal T)<\infty$. Suppose that $J\subset\mathcal{D}_{\mathcal{T}}$ is a vertex-Jordan curve such that $|J\cap \mathcal F_{\mathcal T}|\geq \Delta (\mathcal T)+1$. Then \begin{enumerate}[\rm(1)] \item $J$ is well-behaved.
\item $I(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$ and $O(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$. \item $I(J)\cap \mathcal{F}_{\mathcal{T}}$ and $O(J)\cap \mathcal{F}_{\mathcal{T}}$ are edge-connected. \end{enumerate} \end{theorem} \begin{proof} Since $I(J)$ and $O(J)$ are the components of $Y:=I(J)\cup O(J)$, we can find two sets $U, K\subset \mathcal D_{\mathcal T}$, with $U$ open and $K$ closed, such that $$U\cap Y=I(J)=K\cap Y.$$ (1) If $J$ is not well-behaved, there exists a vertex $x\in\mathcal V_{\mathcal T}$ such that $\mathcal F_{x}\subset J$. Now, for every $C_1,C_2\in \mathcal F_{x}$, $C_1$ and $C_2$ are vertex-adjacent. Since $J$ is a vertex-Jordan curve and $|\mathcal F_{x}|\geq 3$, we infer that $J\cap \mathcal F_{\mathcal T}=\mathcal F_x$. Thus $$|\mathcal F_x|=|J\cap \mathcal F_{\mathcal T}|\geq \Delta (\mathcal T)+1\geq |\mathcal F_x|+1,$$ a contradiction. (2) Since we always have that $O(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$, we only need to prove that $I(J)\cap \mathcal{F}_{\mathcal{T}}\neq\emptyset$. By contradiction, assume that $I(J)\cap \mathcal{F}_{\mathcal{T}}=\emptyset$. We claim that $\mathcal V_{\mathcal T}\cap I(J)\neq\emptyset$. Indeed, if $\mathcal V_{\mathcal T}\cap I(J)=\emptyset$, then every element of $I(J)$ is an edge. Thus, for every $E\in I(J)$, we get the following inclusions $$\mathcal F_{E}\subset N(E)\subset U\subset I(J)\cup J\;\text{ and }\;\mathcal V_{E}\subset \overline{\{E\}}\subset K\subset I(J)\cup J.$$ This implies that $\mathscr A (E)=\mathcal V_{E}\cup\mathcal F_{E}\subset J$. Since $J$ is a digital Jordan curve, $J$ cannot contain any other element of $\mathcal D_{\mathcal T}$. This implies that $J$ contains only two faces, which contradicts the hypothesis $|J\cap \mathcal F_{\mathcal T}|\geq \Delta (\mathcal T)+1$. Therefore $\mathcal V_{\mathcal T}\cap I(J)\neq\emptyset$ and since $J$ is well-behaved, we infer from Lemma~\ref{l:wb caras en interior} that $I(J)$ contains a face of the tiling.
This contradicts the original assumption that $I(J)\cap \mathcal{F}_{\mathcal{T}}=\emptyset$. (3) Since $I(J)$ is connected, for any $C_1,C_2\in I(J)\cap \mathcal F_{\mathcal T}$, there exists a digital path $\alpha_0=(C_1=x_0, x_1,\dots, x_n=C_2)$ completely contained in $I(J)$. Furthermore, we can assume that $\alpha_0$ contains a minimum number of vertices (namely, every other path between $C_1$ and $C_2$ contains at least as many vertices as $\alpha_0$). If $\alpha_0$ contains a vertex, let $$m:=\min\{k\in \{1,\dots, n\}\mid x_k\text{ is a vertex}\}.$$ In this case $x_{m-1}$ and $x_{m+1}$ belong to $\mathscr{A}(x_m)=\mathcal E_{x_m}\cup\mathcal F_{x_m}$. Observe that $\mathscr{A}(x_m)$ contains at most one element of $J$. Indeed, if $D_1,D_2\in J\cap \mathscr{A}(x_m)$, then $\{D_1, D_2\}\subset \mathcal F_{x_m}$, because $J$ does not have any edge. This implies that $D_1$ and $D_2$ are vertex-adjacent, but since $x_{m}\notin J$, there must exist two other faces, $D, D'\in J$, different from $D_1$ and $D_2$, such that $D_1$ is vertex-adjacent to $D$ and $D'$. Hence $J$ cannot be a vertex-Jordan curve, a contradiction. Since $\mathscr{A}(x_m)$ induces a cycle in the connectedness graph of $\mathcal D_{\mathcal T}$, we can find a digital path $\beta=(x_{m-1}=y_0,y_1,\dots, y_k=x_{m+1})$, where each $y_i$ belongs to $\mathscr{A}(x_m)\setminus J$. In particular, there are no vertices in $\beta$. Furthermore, since $N(x_m)\subset U\subset I(J)\cup J$ and $\overline{\{x_m\}}\subset K\subset I(J)\cup J$, we get that $$\{y_1,\dots, y_k\}\subset \mathscr{A}(x_m)\setminus J\subset \big(I(J)\cup J\big)\setminus J=I(J).$$ Therefore the path $$\alpha_1:=(C_1=x_0,\dots ,x_{m-1},y_1,\dots, y_{k-1},x_{m+1},\dots ,x_n=C_2)$$ is a digital path between $C_1$ and $C_2$, which is completely contained in $I(J)$ and contains one vertex fewer than $\alpha_0$, a contradiction.
Thus $\alpha_0$ does not contain any vertex, and therefore $x_{k}\in \mathcal F_{\mathcal T}$ if $k$ is even and $x_{k}\in \mathcal E_{\mathcal T}$ if $k$ is odd. Then $$\alpha:=(C_1=x_0,x_2,\dots, x_n=C_2)$$ is an edge-path of faces, which proves that $I(J)\cap\mathcal F_{\mathcal T}$ is edge-connected, as desired. Analogously we can prove that $O(J)\cap\mathcal F_{\mathcal T}$ is edge-connected. \end{proof} \bibliographystyle{amsplain}
https://arxiv.org/abs/2103.07644
A Jordan Curve Theorem for 2-dimensional Tilings
The classical Jordan curve theorem for digital curves asserts that the Jordan curve theorem remains valid in the Khalimsky plane. Since the Khalimsky plane is a quotient space of $\mathbb R^2$ induced by a tiling of squares, it is natural to ask for which other tilings of the plane it is possible to obtain a similar result. In this paper we prove a Jordan curve theorem which is valid for every locally finite tiling of $\mathbb R^2$. As a corollary of our result, we generalize some classical Jordan curve theorems for grids of points, including Rosenfeld's theorem.
https://arxiv.org/abs/1704.05494
The pinnacle set of a permutation
The peak set of a permutation records the indices of its peaks. These sets have been studied in a variety of contexts, including recent work by Billey, Burdzy, and Sagan, which enumerated permutations with prescribed peak sets. In this article, we look at a natural analogue of the peak set of a permutation, instead recording the values of the peaks. We define the "pinnacle set" of a permutation w to be the set {w(i) : i is a peak of w}. Although peak sets and pinnacle sets mark the same phenomenon for a given permutation, the behaviors of these sets differ in notable ways as distributions over the symmetric group. In the work below, we characterize admissible pinnacle sets and study various enumerative questions related to these objects.
\section{Introduction}\label{sec:intro} Let $S_n$ denote the set of permutations of $[n] = \{1,2,\ldots, n\}$, which we will always write as words, $w = w(1)w(2)\cdots w(n)$. An \emph{ascent} of a permutation $w$ is an index $i$ such that $w(i)< w(i+1)$, while a \emph{descent} is an index $i$ such that $w(i) > w(i+1)$. A \emph{peak} is a descent that is preceded by an ascent, whereas a \emph{valley} is an ascent that is preceded by a descent. This terminology refers to the shape of the graph of $w$, that is, the set of points $(i,w(i))$. The fact that we mark descents, ascents, peaks, and valleys by their positions ($x$-coordinates) rather than by their values ($y$-coordinates) is a matter of longstanding convention. \begin{example} The descents of $315264 \in S_6$ are $1$, $3$, and $5$, and the ascents are $2$ and $4$. The peaks are $3$ and $5$, while the valleys are $2$ and $4$. \end{example} The \emph{descent set} of a permutation $w$, denoted $\Des(w)$, is the collection of its descents, \[ \Des(w) = \{ i : w(i) > w(i+1) \}, \] while the \emph{peak set} of a permutation $w$, denoted $\Pk(w)$, is the collection of its peaks, \[ \Pk(w) = \{ i : w(i-1) < w(i) > w(i+1) \}. \] Note in particular that the descent set completely determines the peak set: \[ \Pk(w) = \{ i > 1 : i \in \Des(w) \text{ and } i-1 \notin \Des(w)\}. \] Any subset of $\{1,2,\ldots,n-1\}$ is the descent set of some permutation in $S_n$, but the same cannot be said for peak sets. For example, peaks cannot occur in the first or last positions of a permutation, so $\Pk(w) \subseteq \{2,\ldots,n-1\}$ for any $w \in S_n$. Moreover, peaks cannot occur in consecutive positions, so if $i \in \Pk(w)$ then $i\pm1 \not\in \Pk(w)$. This characterization of peak sets, as subsets of $\{2,\ldots,n-1\}$ with no consecutive elements, turns out to imply that the number of distinct peak sets is given by the Fibonacci numbers.
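Both statistics are easy to compute by brute force, which also lets us verify the Fibonacci claim for small $n$. The following sketch (our own illustration, not part of the paper) checks the example $w = 315264$ and counts the distinct peak sets over $S_n$ for $3 \le n \le 7$.

```python
from itertools import permutations

def descent_set(w):
    """Positions i (1-indexed) with w(i) > w(i+1)."""
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

def peak_set(w):
    """Positions i with w(i-1) < w(i) > w(i+1)."""
    return {i + 1 for i in range(1, len(w) - 1) if w[i - 1] < w[i] > w[i + 1]}

w = (3, 1, 5, 2, 6, 4)
print(sorted(descent_set(w)))  # [1, 3, 5]
print(sorted(peak_set(w)))     # [3, 5]

# the number of distinct peak sets over S_n follows the Fibonacci numbers
for n in range(3, 8):
    count = len({frozenset(peak_set(u)) for u in permutations(range(1, n + 1))})
    print(n, count)  # counts 2, 3, 5, 8, 13
```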
It has long been known that counting permutations according to the number of descents gives rise to the Eulerian numbers, while the number of permutations with a given descent set is also well known; see, e.g., \cite[Prop. 1.4.1]{StanleyEC1}. More recently Billey, Burdzy, and Sagan~\cite{BilleyBurdzySagan} considered the related enumerative question for peaks: how many permutations in $S_n$ have a given peak set? One of their results is that for a fixed set $S$, the number of $w \in S_n$ for which $\Pk(w) = S$ is a power of two times a polynomial in $n$, and they give techniques for explicit computation of this polynomial in special cases. In the present article, we study analogous questions related to peaks, but rather than tracking peaks by their indices ($x$-coordinates in the graph of the permutation), we use their values ($y$-coordinates). \begin{defn} A \emph{pinnacle} of a permutation $w$ is a value $w(i)$ such that $w(i-1)< w(i) > w(i+1)$; equivalently, $j$ is a pinnacle of $w$ if and only if $w^{-1}(j) \in \Pk(w)$. The \emph{pinnacle set} of $w$ is \[ \Pin(w) = \{ w(i) : i \in \Pk(w) \}. \] \end{defn} Certainly $|\Pk(w)|=|\Pin(w)|$, but the sets themselves need not be the same, as we now demonstrate. \begin{example}\label{ex:peaks and pinnacles} If $w = 315264$, then $\Pk(w) = \{3,5\}$ and $\Pin(w) = \{5,6\}$. \end{example} The definition of pinnacle sets leads naturally to questions about the value \begin{equation}\label{eqn:counting permutations with a given pinnacle set} p_S(n) := | \{ w \in S_n : \Pin(w) = S\} |. \end{equation} The questions we address are the following. \begin{enumerate} \item When is $p_S(n) > 0$? That is, which sets $S$ are the pinnacle set of some permutation in $S_n$?\label{q:0} \item Given a pinnacle set $S \subseteq [n]$, how do we compute $p_S(n)$? \label{q:1} \item For a given $n$, what choice of $S \subseteq [n]$ maximizes or minimizes $p_S(n)$? 
\label{q:2} \end{enumerate} In Section~\ref{sec:admissible} we identify conditions under which a set $S$ is the pinnacle set for some permutation, fully answering Question \ref{q:0}. \begin{defn}\label{def:admissible} A set $S$ is an \emph{$n$-admissible pinnacle set} if there exists a permutation $w \in S_n$ such that $\Pin(w) = S$. If $S$ is $n$-admissible for some $n$, then we simply say that $S$ is \emph{admissible}. \end{defn} Some small examples of admissible pinnacle sets are shown in Table \ref{tab:pinn}. The main result about admissible pinnacle sets is the following. \begin{thm}[Admissible pinnacle sets]\label{thm:pinsets} Let $S$ be a set of positive integers with $\max S = m$. Then $S$ is an admissible pinnacle set if and only if both \begin{enumerate} \item $S\setminus\{m\}$ is an admissible pinnacle set, and \item $m > 2|S|$. \end{enumerate} Moreover, there are $\binom{m-2}{\lfloor m/2 \rfloor}$ admissible pinnacle sets with maximum $m$, and \[ 1+\sum_{m=3}^n \binom{m-2}{\lfloor m/2 \rfloor} = \binom{n-1}{\lfloor (n-1)/2 \rfloor} \] admissible pinnacle sets $S \subseteq [n]$. \end{thm} Our characterization of admissible pinnacle sets is in contrast to the characterization of peak sets mentioned earlier. Whereas the number of peak sets is given by the Fibonacci numbers, here we get a central binomial coefficient. In Section \ref{sec:rec} we develop both a quadratic and a linear recurrence for $p_S(n)$, which partially answers Question~\ref{q:1}. Further, we identify the following bounds for $p_S(n)$, answering Question~\ref{q:2}. \begin{thm}[Bounds on $p_S(n)$]\label{thm:bounds} Let $d$ and $n$ be any positive integers such that $2d < n$. Then for any admissible pinnacle set $S\subseteq[n]$ such that $|S|=d$, we have the following sharp bounds: \begin{equation}\label{eq:2bounds} 2^{n-d-1} \leq p_S(n) \leq d!\cdot (d+1)! \cdot 2^{n-2d-1} \cdot S(n-d,d+1), \end{equation} where $S(\cdot,\cdot)$ denotes the Stirling number of the second kind.
\end{thm} It follows that across all admissible pinnacle sets $S\subseteq [n]$, the cardinality $\#\{w \in S_n : \Pin(w) = S\}$ has a uniform lower bound of $2^{\lfloor n/2 \rfloor}$, while the upper bound is achieved for the particular value of $d = |S|$ that maximizes the rightmost expression in Equation~\eqref{eq:2bounds}. While this choice of $d$ appears to be a little less than $n/3$, we have no simple expression for $d$ in terms of $n$. Section \ref{sec:conclude} contains this and other open questions. We close the introduction with two remarks. \begin{rem}[Descent topsets] Just as the pinnacle set records the values that sit at peaks, the \emph{descent topset} records the values that sit at descents: \begin{align*} \Dtop(w) &= \{ w(i) : w(i) > w(i+1) \} \\ &= \{ w(i) : i \in \Des(w) \}. \end{align*} Descent topsets and related ideas have appeared sporadically in the literature on permutation statistics, e.g., see \cite{EhrenborgSteingrimsson, EhrenborgSteingrimssonAlternating, FoataZ, HNTT, KitaevMansourRemmel, NovelliThibonWilliams, SteinWilliams}. Enumeration of permutations with a fixed topset is considered in work of Ehrenborg and Steingr\'imsson \cite{EhrenborgSteingrimsson}, via a correspondence with excedance sets. The question of enumeration by pinnacle sets does not appear to have been addressed in the literature. While the peak set $\Pk(w)$ is completely determined by the descent set $\Des(w)$, the pinnacle set is not determined by the descent topset. For example, suppose $w=3175264$ and $v = 7651324$. Then we have $\Dtop(w) = \{3,5,6,7\} = \Dtop(v)$, yet $\Pin(w) =\{6,7\}$ while $\Pin(v) = \{3\}$. Thus it seems unlikely that enumeration results for pinnacle sets will follow directly from results for descent topsets. \end{rem} \begin{rem}[Descent algebras and peak algebras] Grouping permutations according to descent sets or peak sets leads to interesting and well-studied algebraic structures. 
For example, the group algebra of the symmetric group has a subalgebra known as \emph{Solomon's descent algebra} \cite{Solomon}, with linear basis given by sums of descent classes, i.e., by the elements \[ y_I = \sum_{\substack{w \in S_n \\ \Des(w) = I}} w. \] A subalgebra of Solomon's descent algebra known as the \emph{peak algebra} has a basis whose elements are sums of peak classes, i.e., \[ z_J = \sum_{\substack{w \in S_n \\ \Pk(w) = J}} w. \] There are a number of papers investigating the connections between descent algebras and peak algebras, e.g., \cite{AguiarNymanOrellana, BilleraHsiaoVW, GarsiaReutenauer, Nyman, Schocker}. It is natural to wonder whether some similar algebraic structures can be associated to descent topsets or pinnacle sets. However, taking sums of descent topset classes or sums of pinnacle classes does not yield subalgebras of the group algebra in general. \end{rem} \section{Admissible pinnacle sets}\label{sec:admissible} Not every set is the peak set of a permutation. Likewise, not every set is a pinnacle set. For one thing, each peak must have a non-peak on each side of it, so the number of peaks must be strictly less than half the number of letters in the permutation. \begin{lemma}[Limited number of peaks]\label{lem:halfn} A permutation $w \in S_n$ has at most $\lfloor (n-1)/2\rfloor$ peaks. That is, $n > 2|\Pk(w)| = 2|\Pin(w)|$. \end{lemma} Our goal in this section is to push this result a bit further and to completely characterize pinnacle sets. \subsection{Characterization of admissible pinnacle sets} Recall from Definition~\ref{def:admissible} that a set $S$ is an $n$-admissible pinnacle set if there exists a permutation $w \in S_n$ such that $\Pin(w) = S$. \begin{example}\ \begin{enumerate}\renewcommand{\labelenumi}{(\alph{enumi})} \item The set $S = \{3,7,8\}$ is an $8$-admissible pinnacle set because $\Pin(13247586) = S$. The set $S$ is certainly not $n$-admissible for any $n<8$, because $8 \in S$.
\item For the set $S = \{3, 5, 6\}$ to be an admissible pinnacle set, there would have to be a permutation \[ w = \cdots a \ x \ b_1 \cdots b_2 \ y \ c_1 \cdots c_2 \ z \ d \cdots \] such that $S = \{x,y,z\}$, with $a < x > b_1$, $b_2 < y > c_1$, and $c_2 < z > d$. It is possible that $b_1 = b_2$ or $c_1 = c_2$ or both, but $a$, $b_1$, $c_1$, and $d$ must be distinct. In fact, these four values must all be less than $6$, and none can be an element of $S$. However, there are only three positive integers less than $6$ and not in $S$, so there can be no such permutation $w$. Thus $S$ is not an admissible pinnacle set. \end{enumerate} \end{example} Pinnacle sets are stable in the sense that if $S$ is an $n$-admissible pinnacle set, then $S$ is also $(n+1)$-admissible. Indeed, if $\Pin(w) = S$ for $w \in S_n$, then we can form a permutation in $S_{n+1}$ with pinnacle set $S$ by putting $n+1$ at the far left or far right of the permutation. That is, if $u = (n+1)w(1)\cdots w(n)$ and $v = w(1)\cdots w(n)(n+1)$, then \[ \Pin(u) = \Pin(v) = \Pin(w). \] Moreover, any other way to insert $n+1$ into $w$ will give a different pinnacle set, since $n+1$ would sit at a peak. Thus a kind of converse to this stability observation is the observation that if $\max S = m$, and $S$ is an $n$-admissible pinnacle set for some $n\geq m$, then $S$ is $m$-admissible. Extending this idea leads to the following recursive characterization of admissible pinnacle sets, which establishes the first half of Theorem \ref{thm:pinsets} from the introduction. \begin{prop}[Admissible pinnacle sets]\label{prop:admissible} Suppose that $S$ is a set of positive integers with maximal element $m$. Then $S$ is an admissible pinnacle set if and only if both \begin{enumerate} \item $S \setminus \{m\}$ is an admissible pinnacle set, and \item $m > 2|S|$. \end{enumerate} Moreover, $S$ is $n$-admissible for all $n\geq m$. \end{prop} Some admissible pinnacle sets are shown in Table \ref{tab:pinn}. 
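Proposition~\ref{prop:admissible} gives an effective recursive test for admissibility. As an informal sanity check (our own sketch, not part of the paper), the following code implements it and compares its predictions, for sets with maximum at most $6$, against the pinnacle sets actually realized by permutations of $[6]$.

```python
from itertools import combinations, permutations

def is_admissible(S):
    """Recursive admissibility test: remove the maximum m, require m > 2|S|."""
    if not S:
        return True
    m = max(S)
    return m > 2 * len(S) and is_admissible(S - {m})

assert is_admissible({3, 7, 8})      # realized, e.g., by 13247586
assert not is_admissible({3, 5, 6})  # here m = 6 fails m > 2|S| = 6

def pinnacle_set(w):
    return frozenset(w[i] for i in range(1, len(w) - 1)
                     if w[i - 1] < w[i] > w[i + 1])

realized = {pinnacle_set(w) for w in permutations(range(1, 7))}
predicted = {frozenset(c) for r in range(7)
             for c in combinations(range(1, 7), r) if is_admissible(set(c))}
print(realized == predicted, len(realized))  # True 10
```

The count of $10$ (including the empty pinnacle set) matches $\binom{n-1}{\lfloor (n-1)/2\rfloor} = \binom{5}{2}$ for $n=6$.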
\begin{table}[htbp] \[ \begin{array}{c|| l |l| l| l} m & d = 1 & d = 2 & d = 3 & d = 4 \\ \hline \hline \raisebox{.1in}[.2in][.1in]{}3 & \{3\} & &\\ \hline \raisebox{.1in}[.2in][.1in]{}4 & \{4\} & &\\ \hline \raisebox{.1in}[.2in][.1in]{}5 & \{5\} & \{3,5\}, \{4,5\} &\\ \hline \raisebox{.1in}[.2in][.1in]{}6 & \{6\} & \{3,6\}, \{4,6\}, &\\ \raisebox{.1in}[.1in][.1in]{}& & \{5,6\} & \\ \hline \raisebox{.1in}[.2in][.1in]{}7 & \{7\} & \{3,7\}, \{4,7\}, & \{3,5,7\}, \{3,6,7\}, \{4,5,7\}, \\ \raisebox{.1in}[.1in][.1in]{}& & \{5,7\}, \{6,7\} & \{4,6,7\}, \{5,6,7\} \\ \hline \raisebox{.1in}[.2in][.1in]{}8 & \{8\} & \{3,8\}, \{4,8\}, & \{3,5,8\}, \{3,6,8\}, \{3,7,8\},\\ \raisebox{.1in}[.1in][.1in]{} & & \{5,8\}, \{6,8\}, & \{4,5,8\}, \{4,6,8\}, \{4,7,8\}, \\ \raisebox{.1in}[.1in][.1in]{} & & \{7,8\} & \{5,6,8\}, \{5,7,8\}, \{6,7,8\} \\ \hline \raisebox{.1in}[.2in][.1in]{}9 & \{9\} & \{3,9\}, \{4,9\}, & \{3,5,9\}, \{3,6,9\}, \{3,7,9\},& \{3,5,7,9\}, \{3,5,8,9\}, \{3,6,7,9\}, \\ \raisebox{.1in}[.1in][.1in]{} & & \{5,9\}, \{6,9\},& \{3,8,9\}, \{4,5,9\}, \{4,6,9\}, & \{3,6,8,9\}, \{3,7,8,9\},\{4,5,7,9\}, \\ \raisebox{.1in}[.1in][.1in]{} & & \{7,9\}, \{8,9\} & \{4,7,9\}, \{4,8,9\}, \{5,6,9\}, & \{4,5,8,9\}, \{4,6,7,9\}, \{4,6,8,9\}, \\ \raisebox{.1in}[.1in][.1in]{} & & & \{5,7,9\}, \{5,8,9\}, \{6,7,9\}, & \{4,7,8,9\},\{5,6,7,9\}, \{5,6,8,9\}, \\ \raisebox{.1in}[.1in][.1in]{} & & & \{6,8,9\}, \{7,8,9\} & \{5,7,8,9\}, \{6,7,8,9\} \end{array} \] \caption{Nonempty admissible pinnacle sets $S$ with maximum element $m$ and $|S|=d$.}\label{tab:pinn} \end{table} In order to prove this proposition, it will be helpful to have a canonical way to construct a permutation with a given (admissible) pinnacle set, which we describe now. First we order the elements of $S = \{ s_1 < s_2 < \cdots < s_d\}$. Then we use these as the values of the even positions of a permutation $w$, so that $w(2i) = s_i$ for $i \in [d]$. 
We place the elements not in $S$ into the odd positions of $w$, in increasing order. Let $w_S$ denote the permutation we have thus formed. More precisely, suppose $S= \{ s_1 < s_2 < \cdots < s_d\}$ and $s_d = m$. Define the complementary set $[m]\setminus S = \{ t_1 < t_2 < \cdots < t_{m-d}\}$. Then, for $1 \leq i \leq m$, set \begin{equation}\label{eqn:canonical perm with given pinnacle set} w_S(i) := \begin{cases} s_j & \text{ if } i = 2j \text{ and } i \leq 2d \\ t_j & \text { if } i = 2j-1 \text{ and } i \leq 2d \\ t_{i-d} & \text{ if } i > 2d. \end{cases} \end{equation} Visually, we can imagine labels on a ``mountain range" diagram, an illustration of which is shown in Figure \ref{fig:mountain}. \begin{figure}[htbp] \[ \begin{tikzpicture}[yscale=.5] \draw (1,1) node[circle, draw=black, fill=white, inner sep=2] {$t_1$} -- (2,4) node[circle, draw=black, fill=white, inner sep=2] {$s_1$} -- (3,2) node[circle, draw=black, fill=white, inner sep=2] {$t_2$} -- (4,5) node[circle, draw=black, fill=white, inner sep=2] {$s_2$} -- (5,3) node[circle, draw=black, fill=white, inner sep=2] {$t_3$} -- (5.5,4.75); \draw (5.9,5.5) node[circle, draw=white, fill=white, inner sep=4] {\reflectbox{$^{\ddots}$}}; \draw (6.5,5.75) -- (7,5) node[circle, draw=black, fill=white, inner sep=2] {$t_d$} -- (8,9) node[circle, draw=black, fill=white, inner sep=2] {$s_d$} -- (9,5.5) node[circle, draw=black, fill=white, inner sep=2] {$t_{d+1}$} -- (10.5,6.5) node[circle, draw=black, fill=white, inner sep=2] {$t_{d+2}$} -- (12,7.5) node[circle, draw=white, fill=white, inner sep=2] {\ \ \ \ } -- (13.5,8.5) node[circle, draw=black, fill=white, inner sep=2] {$t_{m-d}$}; \foreach \x in {(11.85,7.4),(12,7.5),(12.15,7.6)} {\fill \x circle (.05);} \end{tikzpicture} \] \caption{Canonical construction of $w_S$, a permutation having pinnacle set $ S=\{s_1 < s_2<\cdots < s_d\}$.}\label{fig:mountain} \end{figure} \begin{example} The set $S=\{5,8,9\}$ is an admissible pinnacle set. 
To produce $w_S$, we first set $w(2)= 5$, $w(4)= 8$, and $w(6)= 9$. Next, we position the values $\{1,2,3,4,6,7\}$ in increasing order, yielding $w = 152839467$. \end{example} Let us now clearly state and prove our assertion about $w_S$. \begin{prop}[Canonical permutation with a given pinnacle set] Let $S$ be an admissible pinnacle set with maximum $m$, and let $w_S \in S_m$ be as defined in Equation~\eqref{eqn:canonical perm with given pinnacle set}. Then $\Pin(w_S) = S$. \end{prop} \begin{proof} Suppose $S$ is an admissible pinnacle set and $w_S$ is the permutation constructed above. Since $S$ is admissible, for each $i \le |S|$, there are at least $i+1$ elements of $[m] \setminus S$ that are less than $s_i$. So, the elements $t_1,\ldots,t_{i+1}$ will always be less than $s_i$. This implies that when $i$ is even and $i \leq 2d$, we have $w_S(i-1)w_S(i)w_S(i+1) = t_js_jt_{j+1}$, where $j = i/2$. Thus, $s_j \in \Pin(w_S)$ for each $j$, and moreover, $t_j, t_{j+1} \notin \Pin(w_S)$ since there cannot be two adjacent peaks. Finally, observe that $w(2d+1) w(2d+2) \cdots w(m) = t_{d+1}t_{d+2}\cdots t_{m-d}$ is an increasing sequence, so none of $t_{d+1},t_{d+2},\ldots,t_{m-d}$ will appear in $\Pin(w_S)$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:admissible}] We proceed by induction on $d=|S|$. First observe that $\emptyset$ is an admissible pinnacle set, since it is the pinnacle set for the identity permutation. Next, suppose that $|S| = 1$, meaning that $S = \{m\}$. If $S$ is an admissible pinnacle set, then $S \setminus \{m\} = \emptyset$ is an admissible pinnacle set and $m \geq 3 > 2|S|$. The converse implication clearly holds as well. Now, assume that for some $d \ge 1$, the result holds for any set of size $d$, and consider a set $S \subseteq\{1,2,3,\ldots\}$ of size $d+1$ with maximal element $m$. Set $S' := S \setminus \{m\}$. Suppose, first, that $S$ is an admissible pinnacle set.
Let $w_S$ be the canonical permutation described by Equation~\eqref{eqn:canonical perm with given pinnacle set}, for which $\Pin(w_S) = S$. Since $w_S \in S_m$, Lemma~\ref{lem:halfn} tells us $m > 2|S|=2(d+1)$. Moreover, if we remove $m=s_{d+1} = w(2(d+1))$ from $w_S$, then we are left with a permutation $w'$ with pinnacle set $S \setminus \{m\} = S'$. Thus $S'$ is an admissible pinnacle set. Now suppose that $S'$ is an admissible pinnacle set and that $m > 2(d+1)$. We must show that $S$ is an admissible pinnacle set. The set $S'$ has size $d$, and maximal element $s_d < m=s_{d+1}$. As $S'$ is admissible, there is a permutation $w' \in S_{m-1}$ that has pinnacle set $S'$. Let $T = [m-1]\setminus S'$, the set of non-pinnacles in $w'$. Since we are assuming $m>2(d+1)$, we have $|T| = m-1-d > d+1$. There are only $d$ peaks in $w'$, hence, by the pigeonhole principle, at least two elements of $T$ appear consecutively in $w'$. Let $w \in S_m$ be the permutation obtained by inserting $m$ between these two consecutive elements of $T$. This yields a permutation $w$ for which $\Pin(w) = S' \cup \{m\} = S$. Hence $S$ is an admissible pinnacle set. \end{proof} \subsection{Enumeration of admissible pinnacle sets} We now use our characterization of admissible pinnacle sets from Proposition~\ref{prop:admissible} to count these sets. \begin{defn} Given nonnegative integers $m$ and $d$, define \[\mathfrak{p}(m;d)\] to be the number of admissible pinnacle sets with maximum element $m$ and cardinality $d$, using the convention $\mathfrak{p}(0;0) = 1$. \end{defn} In Table \ref{tab:admissible} we see the numbers $\mathfrak{p}(m;d)$ for small values of $m$ and $d$.
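The entries of Table~\ref{tab:admissible} can be reproduced by brute force. The Python sketch below (function names are our own) uses the criterion obtained by unfolding Proposition~\ref{prop:admissible}: a set $\{s_1 < s_2 < \cdots < s_d\}$ is admissible exactly when $s_i > 2i$ for every $i$.

```python
from itertools import combinations

def is_admissible(S):
    """Unfolding Proposition (admissible): removing maxima one at a time,
    the condition max > 2|S| becomes s_i > 2i for every i (1-indexed)."""
    s = sorted(S)
    return all(s[i - 1] > 2 * i for i in range(1, len(s) + 1))

def p_direct(m, d):
    """Brute-force p(m;d): admissible sets with maximum m and cardinality d."""
    if d == 0:
        return 1 if m == 0 else 0
    return sum(1 for rest in combinations(range(1, m), d - 1)
               if is_admissible(set(rest) | {m}))

# spot-checks against Table (admissible):
assert p_direct(7, 3) == 5 and p_direct(9, 4) == 14 and p_direct(12, 5) == 90
```

For instance, $S=\{5,8,9\}$ passes the test ($5>2$, $8>4$, $9>6$), in agreement with the example above.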
From our characterization of admissible pinnacle sets in Proposition~\ref{prop:admissible}, we have the following recurrence for the array: \[ \mathfrak{p}(m;d) = \begin{cases} \hspace{.35in} 1 & \text{if } m = d = 0,\\ \sum\limits_{k<m} \mathfrak{p}(k;d-1) & \text{if } m > 2d, \text{ and}\\ \hspace{.35in} 0 & \text{otherwise.} \end{cases} \] \begin{table}[htbp] \[ \begin{array}{c || c c c c c c | l} m & d = 0 & d=1 & d=2 & d=3 & d=4 & d = 5 & \mbox{row sums}\\ \hline \hline \raisebox{.1in}[.2in][.1in]{} - & 1 & & & & & & 1\\ \raisebox{.1in}[.2in][.1in]{} 1 & 0 & & & & & & 0 \\ \raisebox{.1in}[.2in][.1in]{} 2 & 0 & & & & & & 0 \\ \raisebox{.1in}[.2in][.1in]{} 3 & 0 & 1 & & & & & 1 = \binom{1}{0}\\ \raisebox{.1in}[.2in][.1in]{} 4 & 0 & 1 & & & & & 1 = \binom{2}{0}\\ \raisebox{.1in}[.2in][.1in]{} 5 & 0 & 1 & 2 & & & & 3 = \binom{3}{1}\\ \raisebox{.1in}[.2in][.1in]{} 6 & 0 & 1 & 3 & & & & 4 = \binom{4}{1}\\ \raisebox{.1in}[.2in][.1in]{} 7 & 0 & 1 & 4 & 5 & & & 10 = \binom{5}{2}\\ \raisebox{.1in}[.2in][.1in]{} 8 & 0 & 1 & 5 & 9 & & & 15 = \binom{6}{2}\\ \raisebox{.1in}[.2in][.1in]{} 9 & 0 & 1 & 6 & 14 & 14 & & 35 = \binom{7}{3}\\ \raisebox{.1in}[.2in][.1in]{} 10 & 0 & 1 & 7 & 20 & 28 & & 56 = \binom{8}{3}\\ \raisebox{.1in}[.2in][.1in]{} 11 & 0 & 1 & 8 & 27 & 48 & 42 & 126 = \binom{9}{4}\\ \raisebox{.1in}[.2in][.1in]{} 12 & 0 & 1 & 9 & 35 & 75 & 90 & 210 = \binom{10}{4} \end{array} \] \caption{The number $\mathfrak{p}(m;d)$ of admissible pinnacle sets with maximum element $m$ and cardinality $d$.}\label{tab:admissible} \end{table} Notice in the table that the row sums (that is, $\sum_{d\geq 1} \mathfrak{p}(m;d)$) seem to equal \[ \binom{m-2}{\lfloor m/2\rfloor}. \] If this result were to hold, then we could inductively compute the number of admissible pinnacle sets $S \subseteq [n]$ to be $\binom{n-1}{\lfloor (n-1)/2\rfloor}$. 
That is, if there are $\binom{n-1}{\lfloor (n-1)/2\rfloor}$ admissible pinnacle sets $S\subseteq [n]$ for some value of $n\geq 3$, then the number of admissible pinnacle sets $S \subseteq [n+1]$ would be \begin{align*} 1+\sum_{m=3}^{n+1} \binom{m-2}{\lfloor m/2\rfloor} &= \binom{n-1}{\lfloor (n-1)/2 \rfloor} + \binom{n-1}{\lfloor (n+1)/2 \rfloor},\\ &=\binom{n}{\lfloor n/2 \rfloor}. \end{align*} Indeed, this result is the assertion in the second half of Theorem \ref{thm:pinsets}. The simplicity of this formula suggests a nice combinatorial explanation for the number of admissible pinnacle sets. Another nudge toward this combinatorial structure comes when we recognize that the numbers $\mathfrak{p}(m;d)$ satisfy a two-term recurrence for $m-1>2d:$ \begin{align*} \mathfrak{p}(m;d) &= \sum_{k<m} \mathfrak{p}(k;d-1),\\ &=\mathfrak{p}(m-1;d-1)+\sum_{k<m-1} \mathfrak{p}(k;d-1),\\ &=\mathfrak{p}(m-1;d-1) + \mathfrak{p}(m-1;d). \end{align*} Here the final equality applies the recurrence to $\mathfrak{p}(m-1;d)$, which is valid because $m-1>2d$. In Table \ref{tab:admissible}, we see that the boundary cases for this recurrence are Catalan numbers. That is, \[\mathfrak{p}(2d+1;d) = C_d\] for $d \geq 1$, where $C_d = \binom{2d}{d}/(d+1)$. This hints at a connection between admissible pinnacle sets and lattice paths, which we will introduce now and examine more deeply in the next section. \begin{defn} A \emph{diagonal lattice path} is a sequence of steps, composed of \emph{up-steps} $(1,1)$ and \emph{down-steps} $(1,-1)$. \end{defn} For fixed $n$, consider all paths from $(0,0)$ to $(n-1,1)$ if $n$ is even, or to $(n-1,0)$ if $n$ is odd. Any such path takes $n-1$ steps, $\lfloor (n-1)/2 \rfloor$ of which are down-steps. Hence there are $\binom{n-1}{\lfloor (n-1)/2 \rfloor}$ such paths, which (we claim) is precisely the number of admissible pinnacle sets $S \subseteq [n]$. Catalan numbers count Dyck paths, i.e., diagonal lattice paths that never pass below the $x$-axis.
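Both the two-term form and the Catalan boundary values are easy to check mechanically. The following Python sketch (names ours) memoizes the recurrence for $\mathfrak{p}(m;d)$ and verifies the row-sum pattern observed above.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def p_frak(m, d):
    """p(m;d): 1 if m = d = 0; sum of p(k;d-1) over k < m if m > 2d; else 0."""
    if d < 0:
        return 0
    if m == 0 and d == 0:
        return 1
    if m > 2 * d:
        return sum(p_frak(k, d - 1) for k in range(m))
    return 0

# two-term form (valid when m - 1 > 2d): p(m;d) = p(m-1;d-1) + p(m-1;d)
assert p_frak(8, 2) == p_frak(7, 1) + p_frak(7, 2) == 5

# boundary values are Catalan numbers, and row sums match the binomials above
assert all(p_frak(2 * d + 1, d) == comb(2 * d, d) // (d + 1) for d in range(1, 7))
assert all(sum(p_frak(m, d) for d in range(m)) == comb(m - 2, m // 2)
           for m in range(3, 13))
```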
To convert a lattice path with $n-1$ steps into an admissible pinnacle set, we first label the steps of the path, from left to right, by $2,3,\ldots,n$. Then the labels of up-steps that are strictly below the $x$-axis and of down-steps that are weakly above the $x$-axis will form an admissible pinnacle set $S \subseteq [n]$. \begin{example} The path shown in Figure~\ref{fig:diagonal lattice path and negative regions} corresponds to the set $\{4,6,12,13,19,20\}$. \end{example} In the next section we will prove that this correspondence is a bijection, and we will develop facts related to the enumeration of diagonal lattice paths. \subsection{Diagonal Lattice Paths} In this section, our goal is to prove the bijective correspondence between diagonal lattice paths and admissible pinnacle sets. In fact, we will refine the bijection to focus on the paths ending with a down-step, which correspond to pinnacle sets with a fixed maximum. To this end, consider diagonal lattice paths from $(0,0)$ to $(x,\epsilon_x)$ where $\epsilon_x \in \{1,2\}$ is determined by the parity of $x$. For $x \in \mathbb{Z}$, set \[ \epsilon_x = \begin{cases} 1 & \text{ if $x$ is odd, and}\\ 2 & \text{ if $x$ is even.}\end{cases} \] Clearly, appending a down-step to any such path yields a down-step that is weakly above the $x$-axis, and so in our correspondence will give a pinnacle set with maximum $x+2$. 
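This marking rule is straightforward to implement. In the Python sketch below (our own encoding), a path is a list of $\pm 1$ steps labeled from $2$ onward; an up-step is marked when it ends at negative height (so lies strictly below the $x$-axis), and a down-step is marked when it ends at nonnegative height (so lies weakly above it).

```python
def pinnacle_set_from_path(steps):
    """Return labels of marked steps: each step is +1 (up) or -1 (down),
    labeled 2, 3, ...; h tracks the height after each step."""
    S, h = set(), 0
    for label, step in enumerate(steps, start=2):
        h += step
        if (step == -1 and h >= 0) or (step == 1 and h < 0):
            S.add(label)
    return S

# the path of Figure (diagonal lattice path and negative regions):
figure_path = [-1, -1, 1, -1, 1, 1, -1, 1, 1, 1,
               -1, -1, -1, 1, 1, 1, 1, -1, -1]
assert pinnacle_set_from_path(figure_path) == {4, 6, 12, 13, 19, 20}
```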
\begin{figure}[htbp] \begin{tikzpicture}[scale=.5] \draw (-1,0) -- (19,0); \draw (0,0) -- (2,-2) --node[scale=.75,midway,below right]{4} (3,-1) -- (4,-2) --node[scale=.75,midway,below right]{6} (5,-1) --(6,0) -- (7,-1) -- (10,2) --node[scale=.75,midway,above right]{12} (11,1) --node[scale=.75,midway,above right]{13} (12,0)-- (13,-1) -- (17,3) -- node[scale=.75,midway,above right]{19} (18,2)-- node[scale=.75,midway,above right]{20} (19,1); \draw[red, ultra thick] (2,-2)--(3,-1); \draw[red, ultra thick] (4,-2)--(5,-1); \draw[red, ultra thick] (10,2)--(11,1); \draw[red, ultra thick] (11,1)--(12,0); \draw[red, ultra thick] (17,3)--(18,2); \draw[red, ultra thick] (18,2)--(19,1); \foreach \x in {(0,0), (1,-1), (2,-2), (3,-1), (4,-2), (5,-1), (6,0), (7,-1), (8,0), (9,1), (10,2), (11,1), (12,0), (13,-1), (14,0), (15,1), (16,2), (17,3), (18,2), (19,1)} {\fill \x circle (3pt);} \end{tikzpicture} \caption{A diagonal lattice path corresponding to the admissible pinnacle set $\{4,6,12,13,19,20\}$. This path has three negative regions.}\label{fig:diagonal lattice path and negative regions} \end{figure} \begin{lemma}[Lattice path steps]\label{lem:number of up/down} A diagonal lattice path from $(0,0)$ to $(x, \epsilon_x)$ has $\lfloor x/2 \rfloor + 1$ up-steps and $\lceil x/2 \rceil - 1$ down-steps. \end{lemma} \begin{proof} Such a path $\mathcal{P}$ consists of $x$ steps in total, and \[\left|\{\text{up-steps in }\mathcal{P}\}\right| - \left|\{\text{down-steps in }\mathcal{P}\}\right| = \epsilon_x.\] Thus, for odd $x$, there must be $(x+1)/2$ up-steps and $(x-1)/2$ down-steps. Similarly, for even $x$, there must be $x/2+1$ up-steps and $x/2-1$ down-steps. \end{proof} \begin{defn} A \emph{negative region} in a diagonal lattice path begins with a down-step from a point $(x,0)$, terminates with an up-step to a point $(x',0)$, and does not touch the $x$-axis anywhere between those two points. 
The number of negative regions of a path $\mathcal{P}$ will be denoted $\text{neg}(\mathcal{P})$. \end{defn} Figure~\ref{fig:diagonal lattice path and negative regions} depicts a diagonal lattice path $\mathcal{P}$ for which $\text{neg}(\mathcal{P}) = 3$. \begin{lemma}[Sub-axis regions]\label{lem:number of neg regions} For a diagonal lattice path $\mathcal{P}$, \[\text{neg}(\mathcal{P}) = \left| \left\{\text{down-steps in $\mathcal{P}$ starting from the $x$-axis}\right\}\right|.\] \end{lemma} \begin{proof} Negative regions can be identified uniquely by their leftmost step, which is necessarily a down-step from a point on the $x$-axis. \end{proof} Given a diagonal lattice path $\mathcal{P}$, we define the \emph{marking} of $\mathcal{P}$ to be the path obtained by marking (1) down-steps that are weakly above the $x$-axis and (2) up-steps that are strictly below the $x$-axis. Examples of marked paths appear in Figures~\ref{fig:diagonal lattice path and negative regions} and~\ref{fig:paths and marked edges}, with marked edges colored in red. \begin{figure}[htbp] \begin{tikzpicture}[scale=.5] \draw (-1,0) -- (7,0); \draw (0,0) -- (1,-1) -- (4,2) -- (5,1) -- (6,2); \draw[red, ultra thick] (4,2) -- (5,1); \foreach \x in {(0,0), (1,-1), (2,0), (3,1), (4,2), (5,1), (6,2)} {\fill \x circle (3pt);} \draw[white] (0,-2) circle (3pt); \end{tikzpicture} \hspace{1in} \begin{tikzpicture}[scale=.5] \draw (-1,0) -- (8,0); \draw (0,0) -- (2,-2) -- (5,1) -- (6,0) -- (7,1); \draw[red, ultra thick] (2,-2) -- (3,-1); \draw[red, ultra thick] (5,1) -- (6,0); \foreach \x in {(0,0), (1,-1), (2,-2), (3,-1), (4,0), (5,1), (6,0), (7,1)} {\fill \x circle (3pt);} \end{tikzpicture} \caption{Two marked diagonal lattice paths.} \label{fig:paths and marked edges} \end{figure} Now, given a marked path, we will use two sets defined in terms of its marked and unmarked edges. Let $\mathcal{P}$ be a marked diagonal lattice path starting at $(0,0)$ and having $x$ steps. 
Label the steps of the path, from left to right, by $\{2,3,\ldots,x+1\}$. Set \begin{align*} M(\mathcal{P}) &= \{y : \text{the step labeled $y$ is marked}\} \cup \{x+2\}, \text{ and}\\ U(\mathcal{P}) &= \{y : \text{the step labeled $y$ is unmarked}\} \cup \{1\}\\ &= [1,x+2] \setminus M(\mathcal{P}). \end{align*} \begin{example} For the leftmost path in Figure~\ref{fig:paths and marked edges}, $M(\mathcal{P}) = \{6,8\}$ and $U(\mathcal{P}) = \{1,2,3,4,5,7\}$. For the rightmost path in Figure~\ref{fig:paths and marked edges}, $M(\mathcal{P}) = \{4,7,9\}$ and $U(\mathcal{P}) = \{1,2,3,5,6,8\}$. \end{example} It will transpire that the set $M(\mathcal{P})$ is a pinnacle set, and that the map $\mathcal{P} \mapsto M(\mathcal{P})$ is a bijection. Before we can prove this, we elaborate on properties of diagonal lattice paths. \begin{lemma}[Enumeration of marked edges]\label{lem:counting marked edges} For a diagonal lattice path $\mathcal{P}$ from $(0,0)$ to a point $(x,\epsilon_x)$, \begin{align*} |M(\mathcal{P})| &= \lceil x/2\rceil - \text{neg}(\mathcal{P}), \text{ and}\\ |U(\mathcal{P})| &= \lfloor x/2\rfloor + \text{neg}(\mathcal{P}) + 2. \end{align*} \end{lemma} \begin{proof} Two types of steps get marked in $\mathcal{P}$: down-steps that lie weakly above the $x$-axis, and up-steps that lie strictly below the $x$-axis. Each step from $(a,b)$ to $(a+1,b+1)$ in the latter category can be paired, injectively, with the down-step from $(a',b+1)$ to $(a'+1,b)$ where $a'$ is the largest possible value less than $a$; this down-step is necessarily unmarked because it must lie below the $x$-axis. Thus the number $|M(\mathcal{P})| - 1$ of marked steps in $\mathcal{P}$ is equal to the number of down-steps in $\mathcal{P}$ that do not start at the $x$-axis. Lemmas~\ref{lem:number of up/down} and~\ref{lem:number of neg regions} complete the calculation. 
To compute $|U(\mathcal{P})|$, note that the number of steps, $x$, in $\mathcal{P}$ is precisely $(|M(\mathcal{P})| - 1) + (|U(\mathcal{P})| - 1)$. \end{proof} In particular, Lemma~\ref{lem:counting marked edges} shows that $|M(\mathcal{P})| < |U(\mathcal{P})|$ for all diagonal lattice paths $\mathcal{P}$. The elements of the sets $M(\mathcal{P})$ and $U(\mathcal{P})$ bear some relation to each other. \begin{lemma}[Labels of marked edges]\label{lem:marked labels are larger than unmarked} Fix a diagonal lattice path $\mathcal{P}$ from $(0,0)$ to a point $(x,\epsilon_x)$, and index the elements $\{m_i\}$ of $M(\mathcal{P})$ and $\{u_i\}$ of $U(\mathcal{P})$ in increasing order. Then \[m_i > u_{i+1}\] for all $m_i \in M(\mathcal{P})$. \end{lemma} \begin{proof} First recall that $u_1 = 1$ by construction, and the step labels in $\mathcal{P}$ begin with $2$. Each marked step in $\mathcal{P}$ corresponds to a preceding (and hence smaller-labeled) unmarked step; namely, the nearest-to-the-left step of the same height. This correspondence is injective, so if $m_i \neq x+2$, then the marked steps labeled $m_1 < \cdots < m_i$ account for at least $i$ distinct unmarked steps with labels smaller than $m_i$; counting $u_1 = 1$ as well, at least $i+1$ elements of $U(\mathcal{P})$ are smaller than $m_i$. If $m_i = x+2$, then $m_i$ exceeds every element of $U(\mathcal{P})$, and $|U(\mathcal{P})| > |M(\mathcal{P})| \geq i$. Thus $m_i > u_{i+1}$ for all $m_i \in M(\mathcal{P})$. \end{proof} \begin{prop}[Diagonal lattice paths construct admissible pinnacle sets]\label{prop:path yields admissible pinnacle set} Fix a diagonal lattice path $\mathcal{P}$ from $(0,0)$ to a point $(x,\epsilon_x)$. The set $M(\mathcal{P})$ is an admissible pinnacle set. \end{prop} \begin{proof} Index the elements $\{m_i\}$ of $M(\mathcal{P})$ and $\{u_i\}$ of $U(\mathcal{P})$ in increasing order and consider the permutation \[u_1m_1u_2m_2u_3m_3u_4 \cdots.\] By Lemma~\ref{lem:marked labels are larger than unmarked}, $m_i > u_{i+1}$. Moreover, the elements of $U(\mathcal{P})$ are indexed in increasing order, so $u_{i+1} > u_i$, and $m_i > u_i$ by transitivity. Thus the pinnacle set of this permutation is exactly $M(\mathcal{P})$.
\end{proof} \begin{example} For the leftmost path in Figure~\ref{fig:paths and marked edges}, the permutation produced by Proposition~\ref{prop:path yields admissible pinnacle set} is $16283457 \in S_8$. For the rightmost path in Figure~\ref{fig:paths and marked edges}, the permutation is $142739568 \in S_9$. \end{example} We now show that the mapping from diagonal lattice paths to pinnacle sets, described in Proposition~\ref{prop:path yields admissible pinnacle set}, is invertible. Note that the pinnacle set described in Proposition~\ref{prop:path yields admissible pinnacle set} has size $\lceil x/2\rceil - \text{neg}(\mathcal{P})$, by Lemma~\ref{lem:counting marked edges}, and its maximum value is $x+2$. We will show that we can start with an arbitrary admissible pinnacle set having maximum value $x+2$, and produce the corresponding diagonal lattice path from $(0,0)$ to the point $(x,\epsilon_x)$. \begin{defn}\label{defn:constructing the path from the pinnacle set} Let $S$ be an admissible pinnacle set with $\max S = m$. Define the diagonal lattice path $\mathcal{P}(S)$ as follows. \begin{quote} Start at the point $(x_0,y_0) := (m-2,\epsilon_m)$, with $S_0 := S$.\\ Set $S_1 := S_0 \setminus \{m\}$. \\ For $i$ from $1$ to $m-2$: \\ \hspace*{.3in} If $\max S_i = m - i$ then set $S_{i+1} := S_i \setminus \{m-i\}$ and:\\ \hspace*{.6in} If $y_{i-1} \ge 0$, then set $(x_i,y_i) := (x_{i-1} - 1, y_{i-1} +1)$.\\ \hspace*{.6in} Otherwise (that is, if $y_{i-1} < 0$), set $(x_i,y_i) := (x_{i-1} - 1, y_{i-1} -1)$.\\ \hspace*{.3in} Otherwise (that is, if $\max S_i \neq m - i$), then set $S_{i+1} := S_i$ and:\\ \hspace*{.6in} If $y_{i-1} \ge 0$, then set $(x_i,y_i) := (x_{i-1} - 1, y_{i-1} -1)$.\\ \hspace*{.6in} Otherwise (that is, if $y_{i-1} < 0$), set $(x_i,y_i) := (x_{i-1} - 1, y_{i-1} +1)$. \end{quote} \end{defn} Consider the admissible pinnacle set $S = \{4,7,9\}$, for which $m = \max S = 9$.
The procedure described in Definition~\ref{defn:constructing the path from the pinnacle set} produces the following data. \[\begin{array}{c||c|c|c|c|c|c|c|c} i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline \raisebox{.1in}[.2in][.1in]{} S_i & \{4,7,9\} & \{4,7\} & \{4,7\} & \{4\} & \{4\} & \{4\} & \emptyset & \emptyset \\ \hline \raisebox{.1in}[.2in][.1in]{} (x_i, y_i) & (7,1) & (6,0) & (5,1) & (4,0) & (3,-1) & (2,-2) & (1,-1) & (0,0) \end{array}\] The path described by this data is the rightmost path depicted in Figure~\ref{fig:paths and marked edges}. We will show that this map $S \mapsto \mathcal{P}(S)$, from admissible pinnacle sets to paths, is the inverse of the map $\mathcal{P} \mapsto M(\mathcal{P})$. First, however, we must show that the diagonal lattice path $\mathcal{P}(S)$ of Definition~\ref{defn:constructing the path from the pinnacle set} is, in fact, the kind of path we want to work with; namely, that its left endpoint is $(0,0)$. \begin{lemma}[Endpoint of pinnacle-created paths] For any admissible pinnacle set $S$, the left endpoint of the diagonal lattice path $\mathcal{P}(S)$ is $(0,0)$. \end{lemma} \begin{proof} That the leftmost endpoint of $\mathcal{P}(S)$ has $x$-coordinate $0$ is clear by construction. Now consider the $y$-coordinate of this point. Let $\max S = m$. The path $\mathcal{P}(S)$ has $m-2$ steps, constructed in Definition~\ref{defn:constructing the path from the pinnacle set} as $i$ ranges from $1$ to $m-2$. Recall from Proposition~\ref{prop:admissible} that $m \ge 2|S| + 1$. Consider the right-to-left path construction described in Definition~\ref{defn:constructing the path from the pinnacle set}. Each element of $S \setminus \{m\}$ moves the path away from the line $y = -0.5$, whereas each element of $[2,m-1]\setminus S$ moves the path toward (and, if $y_{i-1} \in \{-1,0\}$, across) that line. 
We have an excess of steps moving toward this line because \[ |[2,m-1] \setminus S| = |[2,m] \setminus S| \geq |S| > |S \setminus \{m\}|,\] so the path $\mathcal{P}(S)$ terminates at $(0,y)$ with $y \in \{-1,0\}$ if $\epsilon_m = 1$, or $y \in \{-1,0,1\}$ if $\epsilon_m = 2$. If $\epsilon_m = 2$, then $m$ is even and $\mathcal{P}(S)$ is a path with an even number of steps. Thus the heights of its endpoints have the same parity. On the other hand, if $\epsilon_m = 1$, then $m$ is odd and $\mathcal{P}(S)$ is a path with an odd number of steps, meaning that the heights of its endpoints have opposite parities. In either case, the leftmost height of $\mathcal{P}(S)$ must be even, and the only available option is to land on the $x$-axis itself. \end{proof} We can now prove that the two maps discussed above, between pinnacle sets and diagonal lattice paths, are inverses of each other. \begin{thm}[Bijection between admissible pinnacle sets and diagonal lattice paths]\label{thm:admissible pinnacle set yields path} The map $S \mapsto \mathcal{P}(S)$ from admissible pinnacle sets to diagonal lattice paths is the inverse of the map $\mathcal{P} \mapsto M(\mathcal{P})$, and together these maps give a bijection between admissible pinnacle sets and diagonal lattice paths. \end{thm} \begin{proof} Let $S$ be an admissible pinnacle set. By the construction given in Definition~\ref{defn:constructing the path from the pinnacle set}, elements of $S \setminus \{\max S\}$ correspond to down-steps that are weakly above the $x$-axis and up-steps that are strictly below the $x$-axis in the resulting path (that is, steps that move away from the line $y = -0.5$). These are exactly the steps in a path that are marked by a lattice path marking, and which, together with $\max S$, constitute the set $M(\mathcal{P}(S))$. \end{proof} We are now ready to prove the main result of this section.
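As a computational aside, the round trip $S \mapsto \mathcal{P}(S) \mapsto M(\mathcal{P}(S))$ can be checked directly. The Python sketch below (names ours) builds $\mathcal{P}(S)$ by the right-to-left walk of Definition~\ref{defn:constructing the path from the pinnacle set} and then recovers $M(\mathcal{P}(S))$ by the marking rule.

```python
def path_from_pinnacles(S):
    """Right-to-left walk: start at (m-2, eps_m); move away from y = -1/2
    exactly when the next value m - i is a pinnacle.  Returns the path
    as left-to-right +/-1 steps."""
    m = max(S)
    rem = sorted(set(S) - {m})          # pinnacles not yet encountered
    y = 1 if m % 2 else 2               # eps_m, height of the right endpoint
    rsteps = []                         # steps recorded right-to-left
    for i in range(1, m - 1):
        if rem and rem[-1] == m - i:            # pinnacle: move away from -1/2
            rem.pop()
            y_new = y + 1 if y >= 0 else y - 1
        else:                                   # non-pinnacle: move toward -1/2
            y_new = y - 1 if y >= 0 else y + 1
        rsteps.append(y - y_new)        # the same step, read left-to-right
        y = y_new
    return rsteps[::-1]                 # the walk ends at the origin (0, 0)

def marked_set(steps):
    """M(P): labels (from 2) of marked steps, together with the maximum x + 2."""
    M, h = {len(steps) + 2}, 0
    for label, step in enumerate(steps, start=2):
        h += step
        if (step == -1 and h >= 0) or (step == 1 and h < 0):
            M.add(label)
    return M

# round trip for the running example S = {4, 7, 9}:
assert marked_set(path_from_pinnacles({4, 7, 9})) == {4, 7, 9}
```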
\begin{thm}[Enumerating admissible pinnacle sets in terms of paths]\label{thm:path bijection} For all $m,d \ge 1$, the number $\mathfrak{p}(m;d)$ of admissible pinnacle sets with maximum element $m$ and cardinality $d$ is \[\mathfrak{p}(m;d) = \left|\left\{\begin{matrix} \text{diagonal lattice paths $\mathcal{P}$ from $(0,0)$ to $(m-2,\epsilon_m)$},\\ \text{with } \text{neg}(\mathcal{P}) = \lceil m/2\rceil - 1 - d \end{matrix} \right\}\right|.\] \end{thm} \begin{proof} Proposition~\ref{prop:path yields admissible pinnacle set} and Theorem~\ref{thm:admissible pinnacle set yields path} give a bijection between admissible pinnacle sets with maximum element $m$ and diagonal lattice paths from $(0,0)$ to $(m-2,\epsilon_m)$. Let $S$ and $\mathcal{P}$ be such a corresponding pair. By Lemma~\ref{lem:counting marked edges}, \begin{align*} |S| &= |M(\mathcal{P})|\\ &= \lceil (m-2)/2 \rceil - \text{neg}(\mathcal{P})\\ &= \lceil m/2 \rceil - 1 - \text{neg}(\mathcal{P}), \end{align*} which completes the proof. \end{proof} We now pause to demonstrate the bijection of Theorem~\ref{thm:admissible pinnacle set yields path}. \begin{example} The leftmost lattice path in Figure~\ref{fig:paths and marked edges} corresponds to the admissible pinnacle set $\{6,8\}$, counted by $\mathfrak{p}(8;2)$, and the rightmost path corresponds to the admissible pinnacle set $\{4,7,9\}$, counted by $\mathfrak{p}(9;3)$. \end{example} Because it is easy to count diagonal lattice paths between two fixed points, we can make the following enumerative corollaries. The latter of these establishes the boundary case, discussed above, in the recursive expression for $\mathfrak{p}(m;d)$ when $m \geq 2d + 2$. 
\begin{cor}[Enumerating admissible pinnacle sets] \ \begin{enumerate}\renewcommand{\labelenumi}{(\alph{enumi})} \item The total number of admissible pinnacle sets (regardless of size) with maximum element $m$ is \[\binom{m-2}{\lfloor m/2\rfloor}.\] \item For $m=2d+1$, the number of admissible pinnacle sets with maximum element $m$ and size $d$ is the Catalan number $C_d = \binom{2d}{d}/(d+1)$. \end{enumerate} \end{cor} \begin{proof} \ \begin{enumerate}\renewcommand{\labelenumi}{(\alph{enumi})} \item This is simply the total number of diagonal lattice paths from $(0,0)$ to $(m-2, \epsilon_m)$. Each contains $m-2$ steps, of which $\lfloor m/2 \rfloor$ are up-steps, by Lemma~\ref{lem:number of up/down}. \item By Theorem~\ref{thm:path bijection}, this is the number of diagonal lattice paths from $(0,0)$ to $(m-2, 1)$ that never go below the $x$-axis. Since every Dyck path must end with a down-step, these are in bijective correspondence with Dyck paths from $(0,0)$ to $(m-1,0)$, and Dyck paths are enumerated by the Catalan numbers. \end{enumerate} \end{proof} \section{Recurrences, explicit formulas, and bounds for $p_S(n)$}\label{sec:rec} Now that we have characterized and enumerated admissible pinnacle sets, we turn to the question of counting permutations with a given pinnacle set. Recall that $p_S(n)$ denotes the number of permutations $w \in S_n$ with $\Pin(w) = S$. To begin our study of $p_S(n)$, we make the easy observation that there are $2^{n-1}$ permutations in $S_n$ having no peaks; that is, \[ p_{\emptyset}(n) = 2^{n-1}. \] Indeed, if $\Pin(w)=\emptyset$, then we can write $w = u1v$, a concatenation of strings, where $u$ is a word whose letters are strictly decreasing and $v$ is a word whose letters are strictly increasing. If $w \in S_n$, then each such permutation is determined by the elements of $u$, which can be any subset of the $(n-1)$-element set $\{2,3,\ldots,n\}$. 
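This count is easy to confirm exhaustively for small $n$. The Python sketch below (names ours) computes $\Pin(w)$ directly from the definition of a peak.

```python
from itertools import permutations

def pinnacle_set(w):
    """Pin(w): values w[i] with w[i-1] < w[i] > w[i+1] (interior positions)."""
    return {w[i] for i in range(1, len(w) - 1) if w[i - 1] < w[i] > w[i + 1]}

def p_S(S, n):
    """p_S(n) by brute force over all of S_n."""
    return sum(1 for w in permutations(range(1, n + 1))
               if pinnacle_set(w) == set(S))

assert all(p_S([], n) == 2 ** (n - 1) for n in range(1, 7))
```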
A similar argument shows that when $S$ is nonempty, we can reduce to the case where $w \in S_t$ for any $t \in [\max S,n]$, because none of the letters $\{t+1,\ldots, n\}$ are pinnacles in $w$. \begin{lemma}[Reduction of permutation size]\label{lem:null/reduction} If $S$ is nonempty and $t \in [\max S,n]$, then \[ p_S(n) = 2^{n-t}p_S(t). \] For permutations with no pinnacles (equivalently, no peaks), we have $p_{\emptyset}(n) = 2^{n-1}$. \end{lemma} \begin{proof} We prove only the first statement, as the case for $S = \emptyset$ was discussed above. Suppose that $w \in S_n$ and $\Pin(w)=S$. Further suppose that $t \in [\max S,n]$. Because none of the letters $\{t+1,\ldots,n\}$ are pinnacles in $w$, we can write $w = uw'v$, a concatenation of strings, for some $w' \in S_t$ with $\Pin(w') = S$. Since the elements of $u$ and $v$ are drawn from the set $[n]\setminus [t]$, it must be the case that $u$ is a decreasing word and $v$ is increasing. Hence $w$ depends only on $w'$ and the set of elements in $u$. The set of elements in $u$ can be any subset of $[n]\setminus [t]$, yielding $2^{n-t}$ possibilities. The number of permutations $w' \in S_t$ having pinnacle set $S$ is, by definition, $p_S(t)$, and so $p_S(n) = 2^{n-t}p_S(t)$. \end{proof} In practice, we will most often employ Lemma~\ref{lem:null/reduction} with $t = \max S$ or $t = n-1$. \subsection{A quadratic recurrence}\label{sec:quad} Let us assume that $S$ is an admissible pinnacle set with $\max S = n$. To construct one of the permutations in $S_n$ counted by $p_S(n)$, we could first choose the elements that will appear to the left of $n$ and those that will appear to the right of $n$, and then try to arrange the letters on each side of $n$ in order to achieve our desired pinnacle set. To be more precise, consider the following steps: \begin{enumerate} \item Write $[n-1] = A \sqcup A^c$ as a disjoint union of nonempty sets.
\item Let $I=S\cap A$ (pinnacles to appear to the left of $n$) and $J = S \cap A^c$ (pinnacles to appear to the right of $n$). \item If possible, form permutations $u$ of the set $A$ and $v$ of the set $A^c$, with $\Pin(u) = I$ and $\Pin(v) = J$. \item Let $w = u\,n\,v$, a concatenation of strings. Then $\Pin(w) = I\cup \{n\}\cup J =S$. \end{enumerate} We will now analyze the number of ways to perform this procedure. \begin{defn} The \emph{standardization map} relative to a set $X = \{x_1 < x_2 < \cdots\}$ is \[\std_X(x_i) =i.\] \end{defn} Fix a nonempty set $A=\{ a_1 < a_2 < \cdots < a_{|A|}\} \subsetneq [n-1]$, and let \[ I=\std_A(S) = \{ i : a_i \in S\}. \] In other words, $I$ is the set of relative values of pinnacles within the subset $A$. With this notation, the number of permutations $u$ of set $A$ such that $\Pin(u) = S \cap A$ equals the number of permutations in $S_{|A|}$ with pinnacle set $I$. That is, the number of such $u$ is $p_I(|A|)$. Likewise, letting $J=\std_{A^c}(S)$ denote the set of relative values of the pinnacles within $A^c$, we have $p_J(|A^c|) = p_J(n-1-|A|)$ ways to form the permutation $v$. Running over all cases of the set $A$, we get the following result. \begin{prop}[The quadratic recurrence]\label{prp:quadratic} Suppose that $S$ is an admissible pinnacle set with $\max S = n$. Then \begin{equation}\label{eq:quad} p_S(n) = \sum_{\emptyset \neq A \subsetneq [n-1]} p_{\std_A(S)}(|A|) \cdot p_{\std_{A^c}(S)}(n-1-|A|). \end{equation} \end{prop} This construction is illustrated in Figure \ref{fig:quadratic}, and we give a specific example below. We remark that the recursive structure inherent in the quadratic recurrence suggests that there might be a relationship between pinnacle sets and permutation pattern containment, perhaps for vincular patterns in particular. Indeed, a peak is exactly a $\underbracket[.5pt][1pt]{132}$ or $\underbracket[.5pt][1pt]{231}$ vincular pattern.
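The quadratic recurrence is straightforward to implement. The Python sketch below (a memoized version of our own devising, combining Equation~\eqref{eq:quad} with Lemma~\ref{lem:null/reduction}) agrees with small known values.

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def p_rec(S, n):
    """p_S(n) via Lemma (reduction) and the quadratic recurrence;
    S must be passed as a frozenset."""
    if not S:
        return 2 ** (n - 1) if n >= 1 else 1
    m = max(S)
    if m > n:
        return 0
    if n > m:
        return 2 ** (n - m) * p_rec(S, m)       # Lemma (reduction)
    total = 0
    ground = range(1, n)                        # the set [n - 1]
    for k in range(1, n - 1):                   # A nonempty and proper
        for A in combinations(ground, k):
            Ac = tuple(x for x in ground if x not in A)
            I = frozenset(i + 1 for i, a in enumerate(A) if a in S)   # std_A(S)
            J = frozenset(i + 1 for i, a in enumerate(Ac) if a in S)  # std_{A^c}(S)
            total += p_rec(I, k) * p_rec(J, n - 1 - k)
    return total

# small checks: one- and two-pinnacle counts
assert p_rec(frozenset({4}), 5) == 24
assert p_rec(frozenset({3, 5}), 5) == 4
```

Inadmissible standardized sets contribute $0$ automatically, since the recursion bottoms out with empty sums.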
\begin{figure}[htbp] \[ \begin{tikzpicture}[>=stealth,bend angle=45,auto,xscale=.4,yscale=.5] \tikzstyle{state}=[circle,draw=black,inner sep=2] \draw (0,10) node[state] (1) {$n$}; \draw (-4,3) node[rectangle, draw=black, inner sep=2,scale=.5] (2) { \begin{tikzpicture} \draw (0,4) node[fill=black,circle, inner sep =2] {} -- (1,6) node[fill=black,circle, inner sep =2] {} -- (2,1) node[fill=black,circle, inner sep =2] {} -- (3,7) node[fill=black,circle, inner sep =2] {}; \end{tikzpicture} }; \draw[line width=2] (1)--(-2.11,6); \draw[line width=2] (1)--(1.5,-1); \draw (4,3) node[rectangle, draw=black, inner sep=2,scale=.5] (3) { \begin{tikzpicture} \draw (0,2) node[fill=black,circle, inner sep =2] {} -- (1,10) node[fill=black,circle, inner sep =2] {} -- (2,8) node[fill=black,circle, inner sep =2] {} -- (3,5) node[fill=black,circle, inner sep =2] {}-- (4,4) node[fill=black,circle, inner sep =2] {}; \end{tikzpicture} }; \draw (2) node[xshift=-2.5cm] {\begin{tabular}{c} Choose set $A$ \\and permute in \\$p_I(|A|)$ ways\end{tabular}}; \draw (3) node[xshift=3.5cm] {\begin{tabular}{c} Permute remaining \\elements in \\$p_J(n-1-|A|)$ ways\end{tabular}}; \draw (2) node[yshift=-2cm] {$u$}; \draw (3) node[yshift=-2.5cm] {$v$}; \end{tikzpicture} \] \caption{Construction of the quadratic recurrence.}\label{fig:quadratic} \end{figure} \begin{example}\label{ex:quadratic recurrence} Let $n =9$ and $S = \{4,7,9\}$. Then we would choose any proper nonempty subset of $[8]$, say $A=\{1,2,4\}$, so that $A^c = \{ 3,5,6,7,8\}$. Here, \[ I = \std_{\{1,2,4\}}(\{4,7,9\}) = \{3\}, \] while \[ J=\std_{\{3,5,6,7,8\}}(\{4,7,9\}) = \{4\}, \] so this $A$ contributes a term of $p_{\{3\}}(3)p_{\{4\}}(5) = 2 \cdot 24 = 48$ to the computation of $p_{\{4,7,9\}}(9)$. 
\end{example} While it may seem that the quadratic recurrence must sum over $2^{n-1}-2$ subsets $A$, note that many of these selections contribute zero to the sum, because both $\std_A(S)$ and $\std_{A^c}(S)$ must themselves be admissible pinnacle sets. \begin{example} With the set $S = \{4,7,9\}$ of Example~\ref{ex:quadratic recurrence}, only $44$ of the possible $2^8-2 = 254$ summands in Equation~\eqref{eq:quad} are nonzero. \end{example} By combining Proposition \ref{prp:quadratic} with Lemma~\ref{lem:null/reduction}, we can obtain explicit formulas for pinnacle sets with one or two elements. \begin{prop}\label{prp:12peak} We have the following explicit formulas for admissible pinnacle sets with one or two elements. Let $3\leq l < m$. Then, for any $n\geq l$, \begin{equation}\label{eq:onepeak} p_{\{l\}}(n) = 2^{n-2}(2^{l-2}-1) \end{equation} and for any $n\geq m$, \begin{equation}\label{eq:2peak} p_{\{l,m\}}(n) = 2^{n+m-l-5}\left(3^{l-1}-2^l + 1\right) - 2^{n-3}(2^{l-2}-1). \end{equation} \end{prop} \begin{proof} First consider a pinnacle set with one element, say $S = \{l\}$. Then Equation~\eqref{eq:quad} tells us that each nonempty set $A \subsetneq [l-1]$ contributes \[ p_{\emptyset}(|A|)p_{\emptyset}(l-1-|A|) = 2^{|A|-1}2^{l-2-|A|} = 2^{l-3} \] to the sum. As there are $2^{l-1}-2$ subsets $A$ to consider, we find that $p_{\{l\}}(l) = 2^{l-3}(2^{l-1}-2)=2^{l-2}(2^{l-2}-1)$. By Lemma~\ref{lem:null/reduction}, we see that for any $n\geq l \geq 3$, the number of permutations in $S_n$ with pinnacle set $\{l\}$ is \[ p_{\{l\}}(n) = 2^{n-2}(2^{l-2}-1), \] which proves Equation \eqref{eq:onepeak}. Now to prove Equation \eqref{eq:2peak}, suppose that $w \in S_m$ has pinnacle set $\{l,m\}$, where $l < m$. We now analyze all sets $A$ that contribute to the sum of Equation \eqref{eq:quad}. First of all, notice that we can count all possibilities where $l$ appears to the left of $m$, i.e., where $l \in A$, and multiply by two. Thus, we assume $l\in A$ for the time being.
In order for set $A$ to form a permutation whose only pinnacle is $l$, i.e., for $I=\std_{A}(S)$ to be admissible, $A$ must contain at least two elements smaller than $l$. Let $j\geq 2$ denote the number of elements in $A$ smaller than $l$, so that $I = \{j+1\}$. Further, let $k$ denote the number of elements in $A$ that are bigger than $l$ and smaller than $m$. Since $A^c$ is nonempty, we must have $0<m-1-|A|= m-2-j-k$, or $j+k < m-2$. Given fixed $j \geq 2$ and $k$ as above, the number of ways to permute set $A$ to get a pinnacle set of $\{l\}$ is, by Equation \eqref{eq:onepeak}, \begin{align*} p_{\{j+1\}}(|A|) &= p_{\{j+1\}}(k+j+1)\\ &= 2^k p_{\{j+1\}}(j+1)\\ &= 2^{k+j-1}(2^{j-1}-1). \end{align*} Since we don't want any pinnacles on the other side of $m$, i.e., since $J = \std_{A^c}(S) = \emptyset$, there are \[ p_{\emptyset}(m-1-|A|) = 2^{m-3-j-k} \] ways to permute the elements on the right side of $m$. Therefore the total contribution from set $A$ is \begin{align*} p_{\{j+1\}}(|A|)p_{\emptyset}(m-1-|A|) &=2^{m-3-j-k}2^{k+j-1}(2^{j-1}-1) \\ &= 2^{m-4}(2^{j-1}-1). \end{align*} Notice that all that really matters here is $j$ (and not $k$). It remains to describe how to count sets $A$ with these properties. First, there are $\binom{l-1}{j}$ ways to choose $j$ elements smaller than $l$. There are $\binom{m-1-l}{k}$ ways to choose $k$ elements greater than $l$ and less than $m$. Thus, summing over all $j$ and $k$ (and doubling to consider the possibility that $l\notin A$), we find \[ p_{\{l,m\}}(m) = 2^{m-3}\sum_{\substack{ 2 \leq j \leq l-1 \\ 0\leq k\leq m-l-1\\ j+k < m-2}}\binom{l-1}{j}\binom{m-1-l}{k}(2^{j-1}-1). \] The condition that $j+k < m-2$ excludes only the case that $j=l-1$ and $k=m-1-l$, i.e., the case that $A^c$ is empty. 
This means we can write \begin{align*} p_{\{l,m\}}(m) &= 2^{m-3}\sum_{2 \leq j \leq l-1} \binom{l-1}{j}(2^{j-1}-1)\sum_{0\leq k\leq m-1-l} \binom{m-1-l}{k} - 2^{m-3}(2^{l-2}-1)\\ &=2^{m-3}\sum_{2 \leq j \leq l-1} \binom{l-1}{j}(2^{j-1}-1)\cdot 2^{m-1-l} - 2^{m-3}(2^{l-2}-1)\\ &=2^{2m-l-4}\sum_{2 \leq j \leq l-1} \binom{l-1}{j}(2^{j-1}-1) - 2^{m-3}(2^{l-2}-1). \end{align*} A bit of manipulation shows \begin{align*} 2\cdot\sum_{2 \leq j \leq l-1} \binom{l-1}{j}(2^{j-1}-1) &= 1+\sum_{0\leq j\leq l-1} \binom{l-1}{j}(2^j-2)\\ &= 1 + \sum_{0\leq j\leq l-1}\binom{l-1}{j}2^j - 2\sum_{0\leq j \leq l-1} \binom{l-1}{j}\\ &=1 + 3^{l-1} - 2^l. \end{align*} Thus, \[ p_{\{l,m\}}(m) =2^{2m-l-5}(3^{l-1}-2^l+1) - 2^{m-3}(2^{l-2}-1). \] Applying Lemma \ref{lem:null/reduction} yields \eqref{eq:2peak}, completing the proof. \end{proof} There may also be other special cases of explicit formulas that one can deduce from the quadratic recurrence, by exploring precisely which nonzero terms appear in the sum. For now, though, we turn to another recursive approach. \subsection{A linear recurrence} In this section, we present a different way to build from the case of a one-element pinnacle set to that of a two-element set. As before, suppose that $S=\{l,m\}$ with $l < m$. Consider some $w \in S_m$ for which $\Pin(w) = \{l,m\}$, and let $w' \in S_{m-1}$ be the permutation obtained by deleting the letter $m$ from $w$. Then either $\Pin(w') = \{l\}$, or $\Pin(w') = \{j,l\}$ where $j$ was adjacent to $m$ in $w$. Thus, to evaluate $p_{\{l,m\}}(m)$, we should count such $w' \in S_{m-1}$ and the ways to insert $m$ appropriately. More precisely, we want permutations $u \in S_{m-1}$ with exactly one pinnacle, $\Pin(u) = \{l\}$, and permutations $v \in S_{m-1}$ with exactly two pinnacles, $\Pin(v) = \{j,l\}$. Let $u \in S_{m-1}$ be a permutation with $\Pin(u) = \{l\}$. We want to insert the letter $m$ into $u$ to produce a permutation $w \in S_m$ having pinnacle set $S = \{l,m\}$.
We cannot insert $m$ at either end of $u$ (because then $m$ would not be a pinnacle of $w$), nor on either side of $l$ in $u$ (because then $l$ would not be a pinnacle of $w$). Because $l$ is a pinnacle of $u$, this letter $l$ cannot appear at either end of the word $u$. Thus there are $m-4$ positions at which inserting $m$ into $u \in S_{m-1}$ will yield a permutation in $S_m$ having pinnacle set $\{l,m\}$. (This is depicted in Figure \ref{fig:insertl}.) The permutations constructed in this manner contribute \[ (m-4)p_{\{l\}}(m-1) \] to the count $p_{\{l,m\}}(m)$. \begin{figure}[htbp] \[ \begin{tikzpicture}[>=stealth,bend angle=45,auto,xscale=.4,yscale=.5] \tikzstyle{state}=[circle,draw=black,inner sep=2] \draw (1,10) node[draw=none] (1) {}; \draw (4,8) node[state] (2) {}; \draw (6,6) node[state] (3) {}; \draw (10,2) node[state] (4) {}; \draw (15,7) node[state] (5) {$l$}; \draw (18,4) node[state] (6) {}; \draw (21,1) node[state] (7) {}; \draw (23,3) node[state] (8) {}; \draw (25,5) node[state] (9) {}; \draw (29,9) node[state] (10) {}; \draw (31,10) node[draw=none] (11) {}; \draw (5,10) node[state] (m1) {$m$}; \draw (8,10) node[state] (m2) {$m$}; \draw (19.5,10) node[state] (m3) {$m$}; \draw (22,10) node[state] (m4) {$m$}; \draw (24,10) node[state] (m5) {$m$}; \draw (27,10) node[state] (m6) {$m$}; \draw[line width=2] (1)--(2); \draw (2)--(3); \draw (3)--(4); \draw[line width=2] (5)--(6); \draw (6)--(7); \draw[line width=2] (4)--(5); \draw (7)--(8); \draw (8)--(9); \draw (9)--(10); \draw[line width=2] (10)--(11); \draw[dashed] (2)--(m1)--(3); \draw[dashed] (4)--(m2)--(3); \draw[dashed] (6)--(m3)--(7); \draw[dashed] (7)--(m4)--(8); \draw[dashed] (8)--(m5)--(9); \draw[dashed] (9)--(m6)--(10); \end{tikzpicture} \] \caption{Insert a new highest peak in any of the gaps except those on the far left, far right, and adjacent to an existing peak.}\label{fig:insertl} \end{figure} Now suppose that $v \in S_{m-1}$ is a permutation with pinnacle set $\Pin(v) = \{j,l\}$, where 
$l\neq j < m$. In this situation, if we place $m$ immediately to the left or right of $j$, then $j$ is no longer a pinnacle, but both $l$ and $m$ are pinnacles. Thus for each admissible pinnacle set $\{j,l\}$ with $j < m$, we have a contribution of $2p_{\{j,l\}}(m-1)$ as well. Hence, applying Equation~\eqref{eq:onepeak} we get \begin{equation}\label{eq:twopeak} p_{\{l,m\}}(m) = (m-4)2^{m-3}(2^{l-2}-1) + 2\sum_{l\neq j < m} p_{\{j,l\}}(m-1). \end{equation} This line of reasoning can be generalized to sets $S=\{ s_1 < s_2 < \cdots < s_d\}$, with $s_d=m$. The analysis proceeds along the same steps as in the case $d=2$ that produced Equation~\eqref{eq:twopeak}, followed by an application of Lemma~\ref{lem:null/reduction}. \begin{prop}[A linear recurrence]\label{prp:linear} Suppose that $S$ is an admissible pinnacle set with $|S|=d$ and $\max S =m$. Then for any $n\geq m$, \begin{equation} p_S(n) = 2^{n-m}\left( (m-2d) p_{S \setminus \{m\}}(m-1) + 2\sum_{\substack{T = (S \setminus \{m\}) \cup \{j\} \\ j \in [m] \setminus S}} p_T(m-1) \right). \label{eq:rec2} \end{equation} \end{prop} \begin{proof} When deleting $m$ from a permutation $w$ with $\Pin(w)=S$, either we reduce the number of peaks by one (i.e., we have $u$ such that $\Pin(u) = S\setminus \{m\}$) or the resulting permutation has the same number of peaks (i.e., we have $v$ such that $\Pin(v) = (S\setminus\{m\}) \cup \{j\}$ for some $j < m$). First, suppose that $u \in S_{m-1}$ is any permutation with $\Pin(u) = S \setminus \{m\}$, and insert $m$ into a gap of $u$ to form a permutation with pinnacle set $S$, as in Figure~\ref{fig:insertl}. The forbidden gaps are those at the far left end of $u$, at the far right end of $u$, and adjacent to any of the existing peaks. Since $u$ has $m-1$ letters, there are $m-2$ internal gaps, and since $u$ has $d-1$ peaks, we must avoid $2(d-1)$ of these. This leaves \[ (m-2)-2(d-1) = m - 2d \] gaps in which we can place $m$ to obtain a permutation $w \in S_{m}$ with $\Pin(w) = S$.
In other words, the permutations constructed in this manner contribute \[ (m - 2d)p_{S \setminus \{m\}}(m-1) \] to the count $p_S(m)$. Next, suppose that $T = (S \setminus \{m\}) \cup \{j\}$ for some $j \in [m] \setminus S$. Let $v \in S_{m-1}$ have $\Pin(v) = T$. Then we can form a permutation with pinnacle set $S$ by inserting $m$ to the left or to the right of the letter $j$. This will mean that $j$ no longer sits at a peak, but $m$ does, as shown in Figure \ref{fig:deletej}. Combining the two cases produces \[ p_S(m) = (m-2d) p_{S\setminus \{m\}}(m-1) + 2\sum_T p_T(m-1), \] where the sum is over all $T$ of the form $T=(S\setminus\{m\}) \cup \{j\}$ for some $j \in [m] \setminus S$. Lemma~\ref{lem:null/reduction} completes the proof. \end{proof} \begin{figure}[htbp] \[ \begin{tikzpicture}[>=stealth,bend angle=45,auto,xscale=.4,yscale=.5] \tikzstyle{state}=[circle,draw=black,inner sep=2] \draw (1,10) node (1) {}; \draw (4,8) node[state] (2) {}; \draw (6,6) node[state] (3) {}; \draw (10,2) node[state] (4) {}; \draw (15,7) node[state] (5) {$j$}; \draw (18,4) node[state] (6) {}; \draw (21,1) node[state] (7) {}; \draw (23,3) node[state] (8) {}; \draw (25,5) node[state] (9) {}; \draw (29,9) node[state] (10) {}; \draw (31,10) node (11) {}; \draw (12.5,10) node[state] (m1) {$m$}; \draw (16.5,10) node[state] (m2) {$m$}; \draw[line width=2] (1)--(2); \draw[line width=2] (2)--(3); \draw[line width=2] (3)--(4); \draw (5)--(6); \draw[line width=2] (6)--(7); \draw (4)--(5); \draw[line width=2] (7)--(8); \draw[line width=2] (8)--(9); \draw[line width=2] (9)--(10); \draw[line width=2] (10)--(11); \draw[dashed] (4)--(m1)--(5); \draw[dashed] (5)--(m2)--(6); \end{tikzpicture} \] \caption{Inserting a new highest peak adjacent to an existing peak replaces that element of the pinnacle set. The element $j$ was in the original pinnacle set, but now it is replaced by $m$.}\label{fig:deletej} \end{figure} This linear recurrence tends to be very efficient in practice. 
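To make the recurrence concrete, the following Python sketch (an editorial illustration, not part of the paper; all function names are ours) implements Equation~\eqref{eq:rec2} with memoization and checks it against a brute-force count over $S_n$. The guard uses the inequality $s_k \geq 2k+1$ for the elements of an admissible pinnacle set, which is recalled in the proof of Proposition~\ref{prp:lower}; we assume, as the surrounding text indicates, that sets violating it are exactly the inadmissible ones.

```python
from functools import lru_cache
from itertools import permutations

def pinnacle_set(w):
    """The set of values w[i] with w[i-1] < w[i] > w[i+1]."""
    return frozenset(w[i] for i in range(1, len(w) - 1)
                     if w[i - 1] < w[i] > w[i + 1])

def p_brute(S, n):
    """Count permutations of [n] with pinnacle set S by direct enumeration."""
    S = frozenset(S)
    return sum(1 for w in permutations(range(1, n + 1)) if pinnacle_set(w) == S)

@lru_cache(maxsize=None)
def p(S, n):
    """p_S(n) via the linear recurrence; S must be a sorted tuple."""
    # An admissible pinnacle set {s_1 < ... < s_d} satisfies s_k >= 2k+1.
    if any(s < 2 * k + 3 for k, s in enumerate(S)):  # k is 0-indexed here
        return 0
    if not S:
        return 2 ** (n - 1)            # p_emptyset(n) = 2^(n-1)
    m, d = S[-1], len(S)
    if n < m:
        return 0
    if n > m:
        return 2 ** (n - m) * p(S, m)  # reduction lemma
    rest = S[:-1]
    total = (m - 2 * d) * p(rest, m - 1)
    for j in range(1, m):
        if j not in S:                 # T = (S \ {m}) with j adjoined
            total += 2 * p(tuple(sorted(rest + (j,))), m - 1)
    return total
```

For instance, both methods give $p_{\{4,7\}}(7) = 336$, in agreement with Table~\ref{tab:lmformulas}.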
It can also be used to yield explicit formulas when desired. For example, it was used to compute the formulas for some small sets $S$ in Table \ref{tab:lmformulas}. \begin{table}[htbp] \[ \begin{array}{c| c | c|c} S & p_S(n) & p_S(\max S) =p_S(n)/2^{n-\max S} & p_S(7)\\ \hline \hline \emptyset & 2^{n-1} & - & 64\\ \{ 3\} & 2^{n-2} & 2 & 32\\ \{ 4\} & 3\cdot 2^{n-2}& 3\cdot 2^2 & 96\\ \{ 5\} & 7\cdot 2^{n-2} & 7\cdot 2^3 & 224\\ \{ 6\} & 5\cdot 3\cdot 2^{n-2} & 5\cdot 3\cdot 2^4 & 480 \\ \{ 7\} & 31\cdot 2^{n-2} & 31\cdot 2^5 & 992\\ \{ 3,5\} & 2^{n-3} & 2^2 & 16 \\ \{ 4,5\} & 3\cdot 2^{n-3} & 3\cdot 2^2 & 48\\ \{ 3,6\} & 3\cdot 2^{n-3} & 3\cdot 2^3 & 48\\ \{ 4,6\} & 3^2\cdot 2^{n-3} & 3^2\cdot 2^3 & 144\\ \{ 5,6\} & 3^2\cdot 2^{n-2} & 3^2 \cdot 2^4 & 288\\ \{ 3,7\} & 7\cdot 2^{n-3} & 7 \cdot 2^4 & 112\\ \{ 4,7\} & 7\cdot 3\cdot 2^{n-3} & 7\cdot 3\cdot 2^4 & 336\\ \{ 5,7\} & 43\cdot 2^{n-3} & 43\cdot 2^4 & 688\\ \{ 6,7\} & 5^2\cdot 3\cdot2^{n-3} & 5^2\cdot 3 \cdot 2^4 & 1200\\ \{ 3,5,7\} & 2^{n-4} & 2^3 & 8\\ \{ 3,6,7\} & 3\cdot 2^{n-4} & 3\cdot2^3 & 24\\ \{ 4,5,7\} & 3\cdot 2^{n-4} & 3\cdot 2^3 & 24\\ \{ 4,6,7\} & 3^2\cdot 2^{n-4} & 3^2\cdot 2^3 & 72\\ \{ 5,6,7\} & 3^2 \cdot 2^{n-3} & 3^2 \cdot 2^4 & 144 \end{array} \] \caption{Some formulas for admissible pinnacle sets with $\max S \leq 7$. The formulas are only valid when $n\geq \max S$. The rightmost column has each of these evaluated at $n=7$ for the sake of comparison.}\label{tab:lmformulas} \end{table} \subsection{Some formulas and bounds} The previous discussion leads to a nice result on the bounds of $p_S(n)$. For instance, in Table \ref{tab:lmformulas} it seems that for fixed $d=|S|$, the pinnacle set that maximizes $p_S(n)$ is the one that consists of the largest $d$ elements in $[n]$ (that is, $S = \{n-d+1, n-d+2, \ldots, n\}$). In fact, this is true, and we have an explicit formula for $p_S(n)$ in this case. We begin with the enumeration. 
\begin{prop}[Enumerating permutations with maximal pinnacles]\label{prop:stirling formula} Let $d$ and $n$ be any positive integers such that $2d < n$. Then the number of permutations in $S_n$ with pinnacle set $[n-d+1,n]=\{n-d+1,n-d+2,\ldots,n\}$ is \[p_{[n-d+1,n]}(n) = d!\cdot (d+1)! \cdot 2^{n-2d-1} \cdot S(n-d,d+1)\] where $S(\cdot,\cdot)$ denotes the Stirling number of the second kind. \end{prop} \begin{proof} Let $w$ be a permutation in $S_n$ with pinnacle set $S=\{n-d+1,\ldots,n\}$. Then $w$ has exactly $d$ peaks. Since the $d$ elements in $S$ are each greater than any of the elements of $[n]\setminus S$, these pinnacles are independent of whatever non-pinnacle values abut them. To construct such a $w$, we can start by ordering the elements of $S$ as pinnacles in $w$, in $d!$ ways. There are $n-d$ remaining elements to place in the $d+1$ regions around these $d$ peaks. The Stirling number $S(n-d,d+1)$ counts the number of set partitions of $[n]\setminus S$ into $d+1$ nonempty subsets. Given such a set partition, there are $(d+1)!$ ways to order the subsets, i.e., to choose which subset goes in which region around the peaks. Finally, it must be the case that the elements in the regions between peaks are arranged in such a way that there are no new peaks. We know by Lemma~\ref{lem:null/reduction} that if there are $k$ elements in a given region, then there are $p_{\emptyset}(k) = 2^{k-1}$ permutations of these elements that have no peaks. Let $k_1,\ldots,k_{d+1}$ be the sizes of the subsets in each region between peaks. The product across all regions is \[ \prod_{i=1}^{d+1} 2^{k_i-1} = 2^{\sum k_i -(d +1)} = 2^{n-2d-1}, \] since $\sum k_i = n-d$ is the total number of elements that are not peaks. \end{proof} Next we will show that $p_{[n+1-d,n]}(n)$ is maximal among all admissible pinnacle sets having $d$ elements. We preface that work with a lemma that will aid an inductive argument. 
\begin{lemma}[Lifting property]\label{lem:lift} Suppose that $S$ and $T$ are admissible pinnacle sets with $|S|=|T|$, neither of which contains $n$. Then \[ \mbox{ if } p_S(n-1) \leq p_T(n-1), \mbox{ then } p_{S \cup \{n\}}(n) \leq p_{T\cup \{n\}}(n). \] \end{lemma} \begin{proof} Suppose that $|S|=|T|=d$. By the argument that precedes Proposition \ref{prp:linear}, consider any permutation $u \in S_{n-1}$ having $d$ peaks. We have $n-2d$ gaps into which we can insert $n$ to get a permutation with $d+1$ peaks, such that $n$ is a pinnacle. Thus, because $S$ and $T$ each have $d$ elements, \[ p_{S \cup \{n\}}(n) = (n-2d)p_S(n-1) \text{ \ \ and \ \ } p_{T\cup \{n\}}(n) = (n-2d)p_T(n-1), \] yielding the desired implication. \end{proof} The following result establishes the upper bound in Theorem \ref{thm:bounds}. More precisely, the following result will describe the pinnacle sets that are achieved most frequently by permutations in $S_n$, and Proposition~\ref{prop:stirling formula} gave the corresponding enumeration. \begin{prop}[Upper bounds]\label{prop:upper} Let $d$ and $n$ be any positive integers such that $2d < n$. Then for any admissible pinnacle set $S \subseteq [n]$ with $|S|=d$, we have \[ p_S(n) \leq p_{[n+1-d,n]}(n). \] \end{prop} \begin{proof} We proceed by induction on $n$ and $d$. The enumeration of admissible pinnacle sets with one element, given in Equation~\eqref{eq:onepeak}, shows that this bound holds in the case when $d=1$ and $n> 2$. Now suppose that the inequality holds for admissible pinnacle sets that are subsets of $[n-1]$ and that have cardinality less than $(n-1)/2$. Let $S \subseteq [n]$ be an admissible pinnacle set of cardinality $d$. If $n \in S$, then write $S = S'\cup \{n\}$ for $S' \subseteq [n-1]$. Suppose that $d < n/2$. Then since $|S'|=d-1 < n/2-1 < (n-1)/2$, we can claim, by the inductive hypothesis, that \[ p_{S'}(n-1) \leq p_{[n+1-d,n-1]}(n-1). 
\] Now by the lifting property in Lemma \ref{lem:lift}, we have \[ p_S(n) \leq p_{[n+1-d,n]}(n), \] as desired. If $n\notin S$ and $d < (n-1)/2$, then Lemma~\ref{lem:null/reduction} yields $p_S(n) = 2p_S(n-1)$. Hence, the induction hypothesis shows that \[ p_S(n) = 2p_S(n-1)\leq 2p_{[n-d,n-1]}(n-1). \] Further, using our explicit formula from Proposition \ref{prop:stirling formula}, we have \begin{align*} 2p_{[n-d,n-1]}(n-1) &= 2\left(d!(d+1)!2^{n-2d-2}S(n-1-d,d+1)\right)\\ &=d!(d+1)!2^{n-2d-1}S(n-1-d,d+1)\\ &<d!(d+1)!2^{n-2d-1}S(n-d,d+1) = p_{[n+1-d,n]}(n). \end{align*} If $n$ is even, then we are done. But if $n$ is odd, then we must also consider the case where $d=(n-1)/2$. Suppose that $|S| = d = (n-1)/2$. Further suppose that $u \in S_n$ is a permutation of $n=2d+1$ elements having $d$ peaks. Then \[ u(1) < u(2) > u(3) < \cdots > u(2d-1) < u(2d) > u(2d+1). \] With this structure, the letter $n$ must be a pinnacle of $u$. Hence if $n\notin S$ and $|S|=(n-1)/2$, then $S$ is not an admissible pinnacle set. Thus, $p_S(n) = 0$ and the result follows trivially. \end{proof} Next we will prove that for admissible pinnacle sets with $d$ elements, the one that minimizes $p_S(n)$ (that is, the one achieved least often by permutations in $S_n$) is the admissible pinnacle set whose elements are as small as possible. This is the set $\{3,5,\ldots,2d+1\}$. Let us denote this minimizing set \[M_d := \{ 2k+1 : k=1,\ldots,d\}.\] We have the following enumerative result. \begin{prop}[Enumerating permutations with minimal pinnacles]\label{prp:lowcount} Let $d$ and $n$ be any positive integers such that $2d < n$. Then the number of permutations in $S_n$ with pinnacle set $M_d$ is \[ p_{M_d}(n) = 2^{n-d-1}. \] \end{prop} \begin{proof} The formula is a direct application of the linear recurrence in Equation~\eqref{eq:rec2}, noting that the second summand ranges over an empty set.
Hence the sets $M_d$ yield this recurrence: \[ p_{M_d}(2d+1) = (2d+1-2d)p_{M_{d-1}}(2d) = p_{M_{d-1}}(2d) = 2p_{M_{d-1}}(2d-1), \] with base case $p_{\{3\}}(3) = 2$. Hence $p_{M_d}(2d+1) = 2^d$, and for $n\geq 2d+1$, we use Lemma~\ref{lem:null/reduction} to obtain \[ p_{M_d}(n) = 2^{n-d-1}, \] as desired. \end{proof} Alternatively, one could prove Proposition~\ref{prp:lowcount} by explicitly constructing such a permutation. For, if $w \in S_{2d+1}$ has $\Pin(w) = \{3,5,\ldots,2d+1\}$, then $w$ has a simple structure: either $w = (2d)(2d+1)w'$ or $w=w'(2d+1)(2d)$, where $w'$ has $\Pin(w') = M_{d-1}$. This choice of two options at each of $d$ steps gives rise to $2^d$ such permutations. \begin{example} The permutation $w'=13254$ has $\Pin(w') = \{3,5\}$, and there are only two ways to insert $6$ and $7$ in $w'$ to form a permutation $w \in S_7$ with $\Pin(w) = \{3,5,7\}$: either $w = 6713254$ or $w = 1325476$. \end{example} If $w\in S_n$ has $\Pin(w) = M_d$, with $n > 2d+1$, then any numbers larger than $2d+1$ have the choice of going on the far left or far right of the permutation, as in the discussion prior to Lemma~\ref{lem:null/reduction}. That is, $w = u w' v$, where $w'\in S_{2d+1}$ has $\Pin(w') = M_d$, the elements of $u$ are decreasing, and the elements of $v$ are increasing. We will keep this structure in mind for the proof of the following result, which establishes the lower bound in Theorem \ref{thm:bounds}. \begin{prop}[Lower bounds]\label{prp:lower} Let $d$ and $n$ be any positive integers such that $2d < n$. Then for any admissible pinnacle set $S\subseteq[n]$ with $|S|=d$, we have \[ p_S(n) \geq p_{M_d}(n) = 2^{n-d-1}. \] \end{prop} \begin{proof} Fix $d$ and $n>2d$. Let $S\subseteq [n]$ be an admissible pinnacle set with $|S|=d$. Let $A$ denote the set of permutations in $S_n$ with pinnacle set $M_d$, and let $B$ denote the set of permutations in $S_n$ with pinnacle set $S$. We will construct an injection from $A$ to $B$ as follows. Let $w \in A$. 
Then $w = uw'v$, a concatenation of strings, where $w' \in S_{2d+1}$ has $\Pin(w') = M_d$, $u$ is a list of decreasing elements, and $v$ is a list of increasing elements. Now order the elements of set $S = \{ s_1 < s_2 < \cdots < s_d\}$, and recall that $s_k \geq 2k+1$ for each $k = 1,\ldots, d$. We will define the permutation $\widehat{u}\widehat{w}'\widehat{v}=\widehat{w} \in B$ as follows. Let $\widehat{w}'$ be the permutation with $2d+1$ letters formed by replacing the peaks of $w'$ with the elements of $S$, in the same relative order. That is, if $w'(j) = 2k+1$, then $\widehat{w}'(j) = s_k$. For the remaining elements on the left and the right of $\widehat{w}'$, we form $\widehat{u}$ and $\widehat{v}$ by placing the elements of $[n]\setminus \{\widehat{w}'(i)\} =\{b_1 < \cdots < b_{n-2d-1}\}$ in the same positions as the elements in the same relative order in $[n]\setminus \{w'(i)\} = \{a_1 < \cdots < a_{n-2d-1}\}$. That is, each $a_i$ is replaced by $b_i$. For example, consider $M_2 = \{3,5\}$. A permutation in $S_9$ with pinnacle set $\{3,5\}$ is $w = 813254679$. Here we have $u=8$, $w'= 13254$, and $v=679$. If $S = \{5,8\}$, then we replace $3$ by $5$ and $5$ by $8$ to get $\widehat{w}' = 15284$. The remaining elements are $\{3,6,7,9\} = \{b_1< b_2< b_3<b_4\}$, and they need to replace $\{6,7,8,9\}=\{a_1 < a_2 < a_3 < a_4\}$ in the same relative order. Hence $\widehat{u} = 7$ and $\widehat{v} = 369$. Bringing it all together we have: \[ \begin{array}{cccccccccc} w= & 8 & 1 & 3 & 2 & 5 & 4 & 6 & 7 & 9 \\ & & & \downarrow & & \downarrow & \\ & \cdot & 1 & \mathbf{5} & 2 & \mathbf{8} & 4 & \cdot & \cdot & \cdot \\ & \downarrow & & & & & & \downarrow & \downarrow & \downarrow \\ \widehat{w}= & \mathbf{7} & 1 & 5 & 2 & 8 & 4 & \mathbf{3} & \mathbf{6} & \mathbf{9} \end{array} \] i.e., $\widehat{w} = \widehat{u}\widehat{w}'\widehat{v} = 715284369$. The construction of this permutation $\widehat{w}$ guarantees that $\widehat{w} \in B$. 
The only other pinnacles that $\widehat{w}$ could have would be in $\widehat{u}$, in $\widehat{v}$, or at either end of $\widehat{w}'$. However, the strings $\widehat{u}$ and $\widehat{v}$ are monotonic, so they contain no peaks, while the left end of $\widehat{w}'$ is an ascent and the right end of $\widehat{w}'$ is preceded by a descent, so these cannot be peaks either. Therefore $\widehat{w} \in B$. We claim that the map $w = uw'v \mapsto \widehat{u}\widehat{w}'\widehat{v}=\widehat{w}$ is an injection from $A$ to $B$. Indeed, if $w$ and $x$ are two different permutations in $A$, then either $w' \neq x'$, in which case their images are clearly different, or $w'= x'$. But if $w'=x'$ and $w \neq x$, then the remaining elements are distributed differently between the two monotone ends, and hence again the two permutations have different images in $B$. \end{proof} The results in this section allow us to find admissible pinnacle sets $S$ that maximize and minimize $p_S(n)$, for fixed $n$. For the lower bound, we have \[ \min\{p_S(n): \mbox{admissible } S\subseteq [n] \} = \min\{ 2^{n-d-1} : d < n/2\} = 2^{\lfloor n/2 \rfloor}. \] For the upper bound, we have something a little less satisfying: \[ \max\{p_S(n): \mbox{admissible } S\subseteq [n] \} = \max\{ d!(d+1)!2^{n-2d-1}S(n-d,d+1) : d < n/2\}. \] This introduces an interesting statistic. \begin{defn}\label{defn:maximizing with stirling} For fixed $n$, let $d(n)$ be the value of $d < n/2$ that maximizes the expression $d!(d+1)!2^{n-2d-1}S(n-d,d+1)$. \end{defn} We can compute $d(n)$ for small values of $n$, and some of this data appears in Table \ref{tab:maxdata}.
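The data in Table~\ref{tab:maxdata} can be reproduced directly from this definition. The short Python sketch below (an editorial aid, not part of the paper; the helper names are ours) evaluates the expression with exact integer arithmetic, computing the Stirling numbers by their standard recurrence.

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind, S(n, k)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def top_count(n, d):
    """p_{[n-d+1, n]}(n), per the proposition enumerating maximal pinnacles."""
    return (factorial(d) * factorial(d + 1)
            * 2 ** (n - 2 * d - 1) * stirling2(n - d, d + 1))

def d_of_n(n):
    """The value d < n/2 maximizing top_count(n, d)."""
    return max(range(1, (n - 1) // 2 + 1), key=lambda d: top_count(n, d))
```

For example, `d_of_n(16)` returns $4$, in agreement with the table.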
\begin{table}[htbp] \[ \begin{array}{c|| ccccccccccccccccccccc} \raisebox{.1in}[.2in][.1in]{}n & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22\\ \hline \raisebox{.1in}[.2in][.1in]{}d(n) & 1& 1& 1 & 2& 2& 2& 3 & 3 & 3& 4& 4& 4& 4& 5& 5& 5& 6& 6& 6 \\ \end{array} \] \caption{The value $d(n)$ that maximizes $d!(d+1)!2^{n-2d-1}S(n-d,d+1)$, producing the maximum value for $p_S(n)$ across all admissible pinnacle sets $S \subseteq [n]$.}\label{tab:maxdata} \end{table} Initially, this $d(n)$ appears to be a step function that increases by one as $n$ increases by three. But $d(16) = 4$ shows that this is false. In Figure~\ref{fig:maxdata}, we plot the function $d(n)$ for $n\leq 100$. A first look at this picture suggests that the step function cycles through seven plateaus of width three and an eighth plateau of width four, but this pattern also does not persist. For example, $d(n)=12$ for the four consecutive values from $n=38$ to $n=41$ and $d(n)=20$ for the four consecutive values from $n=63$ to $n=66$. But the next plateau of four is only seven steps away: $d(n) = 27$ from $n=85$ to $n=88$. \begin{figure}[htbp] \includegraphics[scale=.7]{dmax} \caption{The step function $d(n)$ for $n\leq 100$. Some plateaus have width four; all others have width three.}\label{fig:maxdata} \end{figure} In Table \ref{tab:dn} we list the values of $n$ and $d(n)$ for which there are four consecutive values $n$ with the same $d(n)$, i.e., for which $\{d(n), d(n+1), d(n+2), d(n+3)\}$ is a set of size $1$. All other values of $d(n)$ that we have observed so far ($n\leq 200$) come in runs of three. The fact that the plateaus of size four are not quite periodic is puzzling. 
\begin{table}[htbp] \[ \begin{array}{c | ccccccccc} \raisebox{.1in}[.2in][.1in]{} n & 13 & 38 & 63 & 85 & 110 & 135 & 160 & 185 \\ \hline \raisebox{.1in}[.2in][.1in]{} d(n) & 4 & 12 & 20 & 27 & 35 & 43 & 51 & 59 \end{array} \] \caption{The values of $n\leq 200$ and corresponding $d(n)$ that mark the beginnings of four consecutive equal values: $d(n)=d(n+1)=d(n+2)=d(n+3)$.}\label{tab:dn} \end{table} While it seems that $d(n)$ is approximately $n/3$, an exact formula for $d(n)$ (and hence the maximal value for $p_S(n)$) is so far elusive. \section{Further questions}\label{sec:conclude} The results proven in this paper are a small sample of the directions in which the study of pinnacle sets may be taken. The first question we pose here is the same one with which we closed the last section. \begin{question} Is there a simple formula for $d(n)$, as introduced in Definition~\ref{defn:maximizing with stirling}? What is $d(n)$ asymptotically? \end{question} Another question seeks to explore nontrivial ways in which permutations with the same pinnacle set are related. \begin{question} For a given $S$, is there a class of operations (e.g., valley hopping as in \cite{Branden}) that one may apply to any $w \in S_n$ with $\Pin(w) = S$ to obtain any other permutation $w' \in S_n$ with $\Pin(w') = S$, and no other permutations? \end{question} Among the admissible pinnacle sets $S\subseteq [n]$ of a fixed size, we know which sets $S$ minimize $p_S(n)$ and which maximize $p_S(n)$. However, it seems trickier to compare two randomly selected sets. For example, with $n=7$, here are the $2$-element admissible subsets of $[7]$ ordered according to $p_S(7)$: \begin{align*} p_{\{3,5\}}(7) < p_{\{4,5\}}(7) = p_{\{3,6\}}&(7) < p_{\{3,7\}}(7) \\ &< p_{\{4,6\}}(7) < p_{\{5,6\}}(7) < p_{\{4,7\}}(7) < p_{\{5,7\}}(7) < p_{\{6,7\}}(7). \end{align*} The linear ordering here seems difficult to explain, but a partial ordering on sets that is compatible with comparison might be more feasible. 
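This displayed chain is easy to confirm by machine. The following Python sketch (an editorial addition, not part of the paper) tallies the pinnacle set of each of the $7!=5040$ permutations in $S_7$.

```python
from collections import Counter
from itertools import permutations

# Tally the pinnacle set of every permutation in S_7.
counts = Counter()
for w in permutations(range(1, 8)):
    counts[frozenset(w[i] for i in range(1, 6)
                     if w[i - 1] < w[i] > w[i + 1])] += 1

def p7(*s):
    """p_S(7) for S = {s...}."""
    return counts[frozenset(s)]

# The chain of inequalities, listed from smallest to largest.
chain = [p7(3, 5), p7(4, 5), p7(3, 6), p7(3, 7), p7(4, 6),
         p7(5, 6), p7(4, 7), p7(5, 7), p7(6, 7)]
```

The resulting values $16, 48, 48, 112, 144, 288, 336, 688, 1200$ agree with the last column of Table~\ref{tab:lmformulas}.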
For example the coordinate-wise dominance order shown below is compatible with the ordering on $p_S(n)$. \[ \begin{tikzpicture} \draw (0,0) node[inner sep=2] (35) {35}; \draw (1,1) node[inner sep=2] (36) {36}; \draw (-1,1) node[inner sep=2] (45) {45}; \draw (0,2) node[inner sep=2] (46) {46}; \draw (2,2) node[inner sep=2] (37) {37}; \draw (1,3) node[inner sep=2] (47) {47}; \draw (0,4) node[inner sep=2] (57) {57}; \draw (-1,3) node[inner sep=2] (56) {56}; \draw (-1,5) node[inner sep=2] (67) {67}; \draw (35)--(36)--(37)--(47)--(57)--(67); \draw (35)--(45)--(46)--(47); \draw (36)--(46)--(56)--(57); \end{tikzpicture} \] \begin{question} Is there a partial order on admissible pinnacle sets such that if $S\leq T$ in the partial order, then $p_S(n) \leq p_T(n)$? \end{question} In Section~\ref{sec:rec} we established certain recursive formulas for $p_S(n)$, but we only had explicit formulas in a few special cases, such as those used to prove our upper and lower bounds. Perhaps it is possible to do better. \begin{question} For general $n$ and $S$, is there a closed-form, non-recursive formula for $p_S(n)$? \end{question} As a step in this direction, notice that combining the formulas \eqref{eq:onepeak} and \eqref{eq:2peak} from Proposition \ref{prp:12peak} yields the following: \begin{align*} p_{\emptyset}(n) &= 2^{n-1},\\ p_{\emptyset}(n) + 2p_{\{l\}}(n) &= 2^{n+l-3},\\ p_{\emptyset}(n) + 2p_{\{l\}}(n) + 2p_{\{m\}}(n) + 4p_{\{l,m\}}(n) &= 2^{n+m-l-3}(3^{l-1}+1). \end{align*} It is not completely clear what the pattern might be here, but perhaps for an admissible pinnacle set $S$, the quantity $q_S(n)$ defined as follows, \[ q_S(n) = \sum_{I\subseteq S} 2^{|I|}p_I(n), \] might be well-behaved. If so, this would give an inclusion-exclusion formula for $p_S(n)$. \begin{question} For general $n$ and $S$, is there a closed-form, non-recursive formula for $q_S(n)$? \end{question} \bibliographystyle{plain}
https://arxiv.org/abs/2111.10624
Spectral perturbation by rank one matrices
Let $A$ be a matrix of size $n \times n$ over an algebraically closed field $F$ and $q(t)$ a monic polynomial of degree $n$. In this article, we describe the necessary and sufficient conditions of $q(t)$ so that there exists a rank one matrix $B$ such that the characteristic polynomial of $A+B$ is $q(t)$.
\section{Introduction and main results} Let $A$ be an $n \times n$ matrix over an algebraically closed field $F$. The eigenspectrum of matrices of the form $A+B$, where $B$ is a low-rank matrix, has been studied extensively in the literature (for example, see \cite{[Bau]}, \cite{[Kato]}, \cite{[Lidskii]}, \cite{[MMRR]}). The case where $B$ has rank at most one is particularly interesting. In \cite[Theorem 1.1]{[CNg]}, the authors prove the following. \begin{thm} \label{thm: CNg} (see \cite[Theorem 1.1]{[CNg]}) Let $A$ be a matrix with characteristic polynomial $p_{A}(t)=\prod_{i=1}^n (t-\lambda_i)$. Let $q(t)$ be a monic polynomial of degree $n$ and $(a_1, \ldots, a_n) \in F^n$ such that \[ \frac{q(t)}{p_{A}(t)}=1+\sum_{i=1}^n \frac{a_i}{t-\lambda_i} .\] Then there exists a matrix $B$ of rank at most one such that the characteristic polynomial of $A+B$ is $q(t)$. \end{thm} This is quite an interesting theorem, and it leads to the following question. \begin{question} Suppose $A$ is given. Find the necessary and sufficient conditions on $q(t)$ so that there exists a rank one matrix $B$ such that the characteristic polynomial of $A+B$ is $q(t)$. \end{question} In this article, we answer this question completely. To state the main theorem, we introduce some notation. Let $\lambda \in F$. For a polynomial $f(t) \in F[t]$, we define $m_{\lambda}(f)$ to be the multiplicity of $(t-\lambda)$ in $f(t)$. For a matrix $A$, we denote by $p_A(t)$ its characteristic polynomial, and by $\alg_{\lambda}(A)$ the algebraic multiplicity of $\lambda$ as an eigenvalue of $A$, namely \[ \alg_{\lambda}(A)= m_{\lambda}(p_{A}(t)).\] Finally, we let $j_{\lambda}(A)$ be the size of the largest Jordan block in $A$ with $\lambda$ on the main diagonal. Our main theorem is the following. \begin{thm} \label{thm:main} Let $A$ be an $n \times n$ matrix and let $q(t)$ be a monic polynomial of degree $n$.
Then there exists a rank one matrix $B$ such that \[ p_{A+B}(t)=q(t) ,\] if and only if for all $\lambda \in F$, the following condition is satisfied \[ m_{\lambda}(q) \geq \alg_{\lambda}(A)-j_{\lambda}(A) .\] \end{thm} \subsection{Acknowledgment} The last author (Tung T. Nguyen) would like to thank Prof. Christian Mehl and Prof. Fuzhen Zhang for their interesting correspondences. He is also grateful to Prof. Christopher Godsil for teaching him Lemma \ref{lem:WA_formula} and Lemma \ref{lem:factorization}, which play a key role in this article. The topic of this article arose when we studied some problems in spectral graph theory during the 2021 Fields Undergraduate Summer Research Program. We want to thank the participants for several inspiring discussions and the Fields Institute for providing an excellent working environment. \section{Necessary conditions} In this section, we describe the necessary conditions for Theorem \ref{thm:main}. We start with a simple observation. By Theorem \ref{thm: CNg}, if $q(t)$ satisfies \[ m_{\lambda}(q)+1 \geq m_{\lambda}(p_{A}), \quad \forall \lambda \in F, \] then there is a rank one matrix $B$ such that the characteristic polynomial of $B+A$ is $q(t)$. While the above condition is sufficient, it is not necessary. Here is one particular example. \begin{expl} \label{expl:first_expl} Let \[ A=\begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix} .\] Let \[ B=\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} .\] Note that $B$ has rank at most $1$ if and only if $a_{11}a_{22}-a_{12}a_{21}=\det(B)=0$. We then have \[ p_{A+B}(t)=t^2-(a_{11}+a_{22})t-a_{21} .\] It is easy to see that for every monic quadratic polynomial $q(t)=t^2+at+b$, there exists such a $B$ with \[ p_{A+B}(t)=q(t) .\] For example, we can take $a_{11}=-a, a_{21}=-b, a_{12}=a_{22}=0$. We see in particular that $q(t)=t^2+1$ violates the above inequality at $\lambda=0$. However, there is still a rank one matrix $B$ such that $p_{A+B}(t)=q(t)$.
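As a numerical sanity check (an editorial addition, not part of the original example), the following Python snippet verifies this choice of entries for the target $q(t)=t^2+1$, using the fact that a $2\times 2$ matrix $M$ has characteristic polynomial $t^2-\operatorname{tr}(M)\,t+\det(M)$:

```python
# Verify the example for q(t) = t^2 + a t + b with a = 0, b = 1.
a, b = 0.0, 1.0
A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [[-a, 0.0],
     [-b, 0.0]]   # a11 = -a, a21 = -b, a12 = a22 = 0

# det(B) = 0, so B has rank at most one.
assert B[0][0] * B[1][1] - B[0][1] * B[1][0] == 0

M = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
# Characteristic polynomial of M is t^2 - trace*t + det.
assert (-trace, det) == (a, b)   # p_{A+B}(t) = t^2 + 1
```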
\end{expl} This example shows that the sufficient condition above is not necessary, so a more relaxed condition is possible under which Theorem \ref{thm:main} still holds true. We begin the derivation of the necessary and sufficient conditions on $q(t)$ with a lemma. \begin{lem} \label{lem:power} Let $R$ be a ring and let $A,B\in R$. Then \[ (B+A)^k=\left[ \sum_{m=0}^{k-1} A^m B (B+A)^{k-m-1} \right]+A^k .\] \end{lem} \begin{proof} Let us prove this by induction. For $k=1$, the left hand side and the right hand side are both $B+A$. Let us consider $k=2$. The left hand side is \[ (B+A)^2=(B+A)(B+A)=B(B+A)+A(B+A)=B(A+B)+AB +A^2.\] Suppose the formula is true for $k$. Let us show that it is true for $k+1$. Indeed we have \begin{align*} (B+A)^{k+1} &=(B+A)(B+A)^k=B(B+A)^k+A (A+B)^k \\ &=B(B+A)^k+ A \left[ A^k+ \sum_{m=0}^{k-1} A^m B (B+A)^{k-m-1} \right]\\ &=B(B+A)^k+A^{k+1}+\sum_{m=0}^{k-1} A^{m+1}B(B+A)^{k-m-1}. \end{align*} For the last term, substituting $j=m+1$ gives \[ \sum_{m=0}^{k-1} A^{m+1}B(B+A)^{k-m-1}=\sum_{j=1}^{k} A^j B (B+A)^{(k+1)-j-1}.\] Therefore, we can see that \[ (B+A)^{k+1}=\left[ \sum_{m=0}^{k} A^m B (B+A)^{k-m} \right]+A^{k+1} .\] By induction, the above formula is true for all $k$. \end{proof} We provide another, pictorial proof of Lemma \ref{lem:power}. \begin{proof} Terms in the expansion of $(B+A)^k$ correspond to paths of length $k-1$ in the following labelled graph (with $2k$ nodes). \xymatrix{ &&&& A \ar[r] \ar[rd] & A \ar[r] \ar[rd] & A \ldots \ar[r] & A\\ &&&& B \ar[r] \ar[ru] & B \ar[r] \ar[ru] & B \ldots \ar[r] & B \\ } Apart from the term $A^k$, each term has an initial block of the form $A^mB$, $0 \leq m \leq k-1$. Visualizing that block in the graph above (starting from the left), and considering all terms which begin with that block, we see that they correspond to continuing paths through the expansion of $(B+A)^{k-m-1}$. \end{proof} Here is a direct corollary of Lemma \ref{lem:power}.
\begin{cor} \label{cor:rank} For each $k$, \[ \rank((A+B)^k) \leq k \rank(B)+\rank(A^k).\] \end{cor} \begin{proof} This is a direct consequence of Lemma \ref{lem:power} and the facts that for two matrices $M,N$ \[ \rank(MN) \leq \min \{\rank(M), \rank(N) \}, \] and \[ \rank(M+N) \leq \rank(M)+\rank(N) .\] \end{proof} \begin{rem} After writing this article, we learned that Lemma \ref{lem:power} and Corollary \ref{cor:rank} have been discussed in \cite[Theorem 2.2]{[MMRR]}. We have decided to keep those statements here in order to make this article self-contained. \end{rem} We recall the following notation, which was mentioned in the introduction. Let $A$ be a matrix and $\lambda_0 \in F$. We denote by $\alg_{\lambda_0}(A)$ the algebraic multiplicity of $\lambda_0$ with respect to $A$. More precisely, \[ \alg_{\lambda_0}(A)=m_{\lambda_0}(p_{A}(t)) .\] \begin{prop} Let $B$ be a rank 1 matrix. Recall that $j_{\lambda}(A)$ is the size of the largest Jordan block of $A$ of the form \[ J_{\lambda, r} = \begin{pmatrix} \lambda& 1 & \; & \; \\ \; & \lambda & \ddots & \; \\ \; & \; & \ddots & 1 \\ \; & \; & \; & \lambda \end{pmatrix}. \] Then \[ \rank\left((B+A-\lambda)^{j_{\lambda}(A)}\right) \leq j_{\lambda}(A) \rank(B)+n-\alg_{\lambda}(A)=j_{\lambda}(A)+n-\alg_{\lambda}(A) .\] In particular, $\lambda$ is an eigenvalue of $B+A$ with algebraic multiplicity at least $\alg_{\lambda}(A)-j_{\lambda}(A)$. \end{prop} \begin{proof} It is enough to prove the above statement when $\lambda=0$. In this case, all Jordan blocks of $A$ with $0$ on the diagonal become zero when we raise $A$ to the $j_{0}(A)$-th power. We then see that \[ \rank(A^{j_{0}(A)})=n-\alg_{0}(A) .\] Therefore, by Corollary \ref{cor:rank}, we get the inequality \[ \rank\left((B+A)^{j_{0}(A)}\right) \leq j_{0}(A)+n-\alg_{0}(A) .\] Since the algebraic multiplicity of $0$ as an eigenvalue of $B+A$ is $n-\rank\left((B+A)^{n}\right)$ and $\rank\left((B+A)^{n}\right) \leq \rank\left((B+A)^{j_{0}(A)}\right)$, the second statement follows from this inequality. \end{proof} A direct consequence of this proposition is the following. \begin{cor} \label{cor:condition} Suppose that $q(t)=p_{A+B}(t)$ for some rank $1$ matrix $B$.
Then for all $\lambda \in F$, \[ m_{\lambda}(q) \geq \alg_{\lambda}(A)-j_{\lambda}(A).\] \end{cor} The polynomial $q(t)$ in Example \ref{expl:first_expl} has this property. In the next section we will show that in fact this condition is also sufficient. \section{Sufficient conditions} In this section, we show that the conditions given in Corollary \ref{cor:condition} are in fact sufficient. To show this, we first introduce some lemmas in matrix algebra. \begin{lem}(Weinstein–Aronszajn formula, see \cite[Proposition 11]{[Chervov]}) \label{lem:WA_formula} Let $M$ be an $m \times n$ matrix and $N$ an $n \times m$ matrix. Then \[ \det(I_m-MN)=\det(I_n-NM) .\] \end{lem} \begin{lem} \label{lem:factorization} Let $B=vw^{t}$ where $v, w \in F^{n}$. Let $p_{A+B}(t)$ and $p_{A}(t)$ be the characteristic polynomials of $A+B$ and $A$, respectively. Then \[ p_{A+B}(t)=p_{A}(t) (1-w^t(tI-A)^{-1}v) .\] Equivalently, \[ \frac{p_{A+B}(t)}{p_{A}(t)}=1-w^{t} (tI-A)^{-1} v .\] \end{lem} \begin{proof} We have \begin{align*} p_{A+B}(t) &=\det(tI-B-A) \\ &=\det(tI-A) \det(I-(tI-A)^{-1}B)\\ &=\det(tI-A) \det(I - (tI-A)^{-1}vw^t )\\ &=p_{A}(t) (1-w^t(tI-A)^{-1}v). \end{align*} Note that the last equality follows from Lemma \ref{lem:WA_formula} by taking $M=(tI-A)^{-1}v$ and $N=w^t$. \end{proof} By this lemma, we see that the condition $p_{A+B}(t)=q(t)$ is equivalent to \[ 1-w^{t}(tI-A)^{-1} v=\frac{q(t)}{p_{A}(t)}.\] In other words, \[ w^{t} (tI-A)^{-1} v= \frac{p_{A}(t)-q(t)}{p_A(t)}=\frac{h(t)}{p_A(t)} ,\] with $h(t)=p_{A}(t)-q(t)$. Note that because $p_{A}(t), q(t)$ are both monic polynomials of degree $n$, $h(t)$ is a polynomial of degree at most $n-1$.
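Before using Lemma \ref{lem:factorization}, it may be reassuring to check it computationally. The following Python sketch (an illustration only; the matrix $A=J_{0,3}$, the vectors $v,w$, and the sample points are arbitrary choices of ours) verifies the identity $p_{A+B}(t)=p_{A}(t)(1-w^t(tI-A)^{-1}v)$ exactly over the rationals.

```python
from fractions import Fraction as F

# Arbitrary illustrative data: A = J_{0,3}; v, w chosen at random.
A = [[F(0), F(1), F(0)],
     [F(0), F(0), F(1)],
     [F(0), F(0), F(0)]]
v = [F(1), F(2), F(3)]
w = [F(4), F(5), F(6)]

def det3(M):
    # Explicit 3x3 determinant.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def solve3(M, b):
    # Cramer's rule: exact solution x of M x = b.
    d = det3(M)
    x = []
    for j in range(3):
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = b[i]
        x.append(det3(Mj) / d)
    return x

for t in (F(2), F(-1), F(1, 2)):
    tI_A = [[t * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    # Left-hand side: p_{A+B}(t) with B = v w^t.
    lhs = det3([[tI_A[i][j] - v[i] * w[j] for j in range(3)] for i in range(3)])
    # Right-hand side: p_A(t) * (1 - w^t (tI - A)^{-1} v).
    x = solve3(tI_A, v)
    rhs = det3(tI_A) * (1 - sum(w[i] * x[i] for i in range(3)))
    assert lhs == rhs
```

Since both sides are polynomials in $t$ of degree $n=3$, exact agreement at $n+1$ rational points would already prove equality of the two polynomials.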
By choosing a suitable basis of generalized eigenvectors for $A$, we can assume that in the standard basis of $F^n$, $A$ is a direct sum of Jordan blocks $J_{\lambda,r}$, namely \[ A=\bigoplus_{\lambda, r} J_{\lambda,r } .\] Here $J_{\lambda, r}$ is a basic Jordan block of the form \[ J_{\lambda, r} = \begin{pmatrix} \lambda& 1 & \; & \; \\ \; & \lambda & \ddots & \; \\ \; & \; & \ddots & 1 \\ \; & \; & \; & \lambda \end{pmatrix}. \] We will consider different cases. \subsection{$A$ is a single Jordan block} We first consider the easiest case, namely that $A$ is a single Jordan block $A=J_{\lambda, n}$. In this case, the corresponding condition in Corollary \ref{cor:condition} is simply \[ m_{\lambda}(q) \geq n-n=0 .\] We see that all monic polynomials of degree $n$ satisfy this condition. We will show that in fact, there always exists a rank $1$ matrix $B$ such that \[ p_{A+B}(t)=q(t).\] Let us write \[ A= \lambda I_n +N ,\] where $N$ is the nilpotent matrix \[ N = \begin{pmatrix} 0& 1 & \; & \; \\ \; & 0 & \ddots & \; \\ \; & \; & \ddots & 1 \\ \; & \; & \; & 0 \end{pmatrix}. \] We need the following lemma. \begin{lem} \label{lem: inverse} Let $A=J_{\lambda, n}=\lambda I_n +N $ where $N$ is the above nilpotent matrix. Then \[ (tI_n-A)^{-1}= \sum_{i=0}^{n-1} \frac{N^i}{(t-\lambda)^{i+1}} .\] \end{lem} \begin{proof} We recall that if $x,y$ are two commuting matrices of the same size then \[ x^n-y^n=(x-y) \left[\sum_{i=0}^{n-1} x^{n-i-1}y^{i} \right] .\] Applying this formula to $x=(t-\lambda) I_n$, $y=N$ and noting that $N^n=0$, we have \begin{align*} (t-\lambda)^n I_n &=((t-\lambda)I_n)^n - N^n \\ &=((t-\lambda)I_n-N) \left[ \sum_{i=0}^{n-1} (t-\lambda)^{n-1-i} I_n^{n-1-i} N^{i} \right] \\ &=((t-\lambda)I_n-N) \left[ \sum_{i=0}^{n-1} (t-\lambda)^{n-1-i} N^{i} \right]. \end{align*} Consequently \begin{equation} \label{eq:inverse} ((t-\lambda)I_n-N) \left[ \sum_{i=0}^{n-1} \frac{N^{i}}{(t-\lambda)^{i+1}} \right]=I_n. \end{equation} Note that $(t-\lambda)I_n-N=(tI_n-A)$.
Therefore, the lemma follows directly from Equation \ref{eq:inverse}. \end{proof} We are now ready to prove the following. \begin{prop} \label{prop:single_block} Suppose $A=J_{\lambda, n}$ and $q(t)$ is a monic polynomial of degree $n$. Then there exists a rank $1$ matrix $B=vw^{t}$ such that \[ p_{A+B}(t)=q(t) .\] More concretely, let $\{e_1, e_2, \ldots, e_n \}$ be the standard basis for $F^n$. Then we can take $v=e_n$ and \[ w=\sum_{i=1}^n a_{n-i} e_i ,\] where the coefficients $a_i$ are determined explicitly by $q(t)$. \end{prop} \begin{proof} Since $A=J_{\lambda, n}$ we have \[ p_{A}(t)=(t-\lambda)^n .\] By Lemma \ref{lem:factorization}, it is enough to show that there exist $v,w \in F^n$ such that \begin{equation} \label{eq:equality} w^t (tI-A)^{-1} v= \frac{h(t)}{p_{A}(t)}=\frac{h(t)}{(t-\lambda)^n} , \end{equation} with $h(t)=p_{A}(t)-q(t)$. Since $h(t)$ is a polynomial of degree at most $n-1$, by taking the Taylor expansion of $h(t)$ at $\lambda$ we have \[ h(t)=\sum_{i=0}^{n-1} a_i (t-\lambda)^{n-1-i} .\] Therefore Equation \ref{eq:equality} is equivalent to \begin{equation} \label{eq:second_equality} w^{t}(tI-A)^{-1} v= \sum_{i=0}^{n-1} \frac{a_i}{(t-\lambda)^{i+1} }. \end{equation} Let us take \[ w=\sum_{i=0}^{n-1} a_i e_{n-i}, \quad v=e_n .\] We claim that $v, w$ satisfy the condition given in Equation \ref{eq:second_equality}. In fact, by Lemma \ref{lem: inverse} we have \[ (tI-A)^{-1} =\sum_{i=0}^{n-1} \frac{N^i}{(t-\lambda)^{i+1}} .\] Therefore \begin{align*} w^{t} (tI-A)^{-1}v &= w^t \left[\sum_{i=0}^{n-1} \frac{N^i e_n}{(t-\lambda)^{i+1}} \right] \\ &= \sum_{i=0}^{n-1} \frac{w^t e_{n-i}}{(t-\lambda)^{i+1}}\\ &=\sum_{i=0}^{n-1} \frac{a_i}{(t-\lambda)^{i+1}}. \end{align*} Here we use the facts that $N^i e_n=e_{n-i}$ and $w^t e_{n-i}=a_i$.
\end{proof} \subsection{$A$ is a direct sum of Jordan blocks with distinct eigenvalues} Next, we consider the case where $A$ is a direct sum of Jordan blocks with distinct eigenvalues, namely \[ A=\bigoplus_{i=1}^m J_{\lambda_i, n_i}, \] where the $\lambda_i$ are pairwise distinct. We note that in this case, the condition in Corollary \ref{cor:condition} is simply \[ m_{\lambda_i}(q) \geq n_i-n_i=0 .\] In other words, all $q(t)$ have this property. In fact, we have the following proposition. \begin{prop} \label{prop:distinct_block} Suppose \[ A=\bigoplus_{i=1}^m J_{\lambda_i, n_i}, \] where the $\lambda_i$ are pairwise distinct. Let $q(t)$ be a monic polynomial of degree $n$. Then there exists a rank $1$ matrix $B=vw^{t}$ such that \[ p_{A+B}(t)=q(t) .\] \end{prop} \begin{proof} In this case, we have \[ p_{A}(t)=\prod_{i=1}^m (t-\lambda_i)^{n_i} .\] As before, we need to find $v, w$ such that \[ w^t (tI-A)^{-1} v= \frac{h(t)}{p_{A}(t)}=\frac{h(t)}{\prod_{i=1}^m(t-\lambda_i)^{n_i}} ,\] with $h(t)=p_{A}(t)-q(t)$. We need the following lemma. \begin{lem} There exist polynomials $h_i(t)$ such that $\deg(h_i(t)) \leq n_i-1$ and \[ \frac{h(t)}{p_{A}(t)}=\sum_{i=1}^m \frac{h_i(t)}{(t-\lambda_i)^{n_i}} .\] \end{lem} \begin{proof} By the Chinese remainder theorem, there exist polynomials $g_i(t) \in F[t]$ such that \[ \frac{h(t)}{p_{A}(t)}=\sum_{i=1}^m \frac{g_i(t)}{(t-\lambda_i)^{n_i}} .\] By the Euclidean algorithm, we can write \[ \frac{g_i(t)}{(t-\lambda_i)^{n_i}}=\frac{h_i(t)}{(t-\lambda_i)^{n_i}}+r_i(t) ,\] where $h_i(t), r_i(t)$ are polynomials and the degree of $h_i(t)$ is at most $n_i-1$. Hence we have \[ \frac{h(t)}{p_{A}(t)}=r(t)+\sum_{i=1}^m \frac{h_i(t)}{(t-\lambda_i)^{n_i}} ,\] with $r(t)=r_1(t)+\ldots+r_{m}(t)$.
Since $\deg(h)<\deg(p_{A})$, we must have $r(t)=0.$ Consequently, \[ \frac{h(t)}{p_{A}(t)}=\sum_{i=1}^m \frac{h_i(t)}{(t-\lambda_i)^{n_i}} .\] \end{proof} Let us denote by $A_{\lambda}$ the generalized eigenspace associated with $\lambda$, namely \[ A_{\lambda}=\{ v \in F^n \mid (A-\lambda)^m v=0 \text{ for some $m$} \} .\] Since $A$ is in Jordan form, each $A_{\lambda_i}$ is spanned by a subset of the standard basis vectors, and these subsets are disjoint for distinct eigenvalues. We see that if $z_i \in A_{\lambda_i}$, $z_j \in A_{\lambda_j}$, and $\lambda_{i} \neq \lambda_{j}$, then \[ z_i^t z_j=z_j^t z_i=0. \] Let us write \[ v=\sum_{i=1}^m v_i, \quad w=\sum_{i=1}^m w_i ,\] with $v_i, w_i \in A_{\lambda_i}$. Since $(tI-A)^{-1} v_i \in A_{\lambda_i}$ and $w_j \in A_{\lambda_j}$, we see that if $i \neq j$ then \[ w_j^t (tI-A)^{-1} v_i=0 .\] Therefore \begin{equation} \label{eq:composition} w^t (tI-A)^{-1}v= \sum_{i=1}^m \overline{w}_{i}^t (tI_{n_i}-J_{\lambda_i, n_i})^{-1} \overline{v}_{i} . \end{equation} Here for a vector $z_i \in A_{\lambda_i}$, $\overline{z}_i$ is the projection of $z_i$ to the component corresponding to $J_{\lambda_i, n_i}$. More precisely, suppose \[ z_i=(0, \ldots, 0, \underbrace{m_1, m_2, \ldots, m_{n_i}}_{\text{$i$-th component}}, \ldots, 0 ) \in F^n,\] then \[ \overline{z}_i=(m_1, m_2, \ldots, m_{n_i}) \in F^{n_i} .\] Note that we can recover $z_i$ from $\overline{z}_i$ as well. Now, by Proposition \ref{prop:single_block}, we know that we can find $\overline{v}_i, \overline{w}_i$ (and hence $v_i, w_i$) such that \[ \overline{w}_{i}^t (tI_{n_i}-J_{\lambda_i, n_i})^{-1} \overline{v}_{i}=\frac{h_i(t)}{(t-\lambda_i)^{n_i}} .\] From Equation \ref{eq:composition}, we then see that \[ w^t (tI-A)^{-1}v= \sum_{i=1}^m \frac{h_i(t)}{(t-\lambda_i)^{n_i}}=\frac{h(t)}{p_{A}(t)} . \] This completes the proof in this special case. \end{proof} \subsection{The general case} Finally, we consider the general case. \begin{prop} Let $A$ be an $n \times n$ matrix.
Suppose $q(t)$ is a monic polynomial of degree $n$ satisfying the condition in Corollary \ref{cor:condition}, namely \[ m_{\lambda}(q) \geq \alg_{\lambda}(A)-j_{\lambda}(A) .\] Then there exists a rank $1$ matrix $B$ such that \[ p_{A+B}(t)=q(t) .\] \end{prop} \begin{proof} As explained above, it is enough to consider the case where $A$ is a direct sum of Jordan blocks (possibly with repeated $\lambda_i$). Let $\{\lambda_1, \lambda_2, \ldots, \lambda_m \}$ be the set of distinct eigenvalues of $A$. We can then decompose $A$ as \[ A=A_1 \oplus A_2 ,\] where \[ A_1= \bigoplus_{i=1}^m J_{\lambda_i, j_{\lambda_i}(A)} ,\] and $A_2$ consists of the remaining blocks. In other words, for each eigenvalue $\lambda$, $A_1$ contains exactly one copy of $J_{\lambda,r}$ with $r$ being the largest size of all Jordan blocks in $A$ of the form $J_{\lambda, r_k}$. We note that, by the assumption, we can write \[ q(t)=q_1(t) \prod_{i=1}^m (t-\lambda_i)^{\alg_{\lambda_i}(A)-j_{\lambda_i}(A)} ,\] where $q_1(t)$ is a monic polynomial of degree $d=\sum_{i=1}^m j_{\lambda_i}(A)$. By Proposition \ref{prop:distinct_block}, we know that there exists a rank $1$ matrix $B_1$ of size $d \times d$ such that \[ p_{A_1+B_1}(t)=q_1(t) .\] Let $B=B_1 \oplus 0_{n-d}$ be a matrix of size $n \times n$. Note that $B$ has rank $1$ as well. Furthermore, \[ p_{A+B}(t)=p_{A_1+B_1}(t) p_{A_2}(t)=q_1(t) p_{A_2}(t) .\] It is easy to see that \[ p_{A_2}(t) =\prod_{i=1}^m (t-\lambda_i)^{\alg_{\lambda_i}(A)-j_{\lambda_i}(A)} .\] We then conclude that \[ p_{A+B}(t)=q(t) .\] \end{proof}
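The construction in the proof of Proposition \ref{prop:single_block} is completely explicit and can be carried out by a short program. The following Python sketch (our own illustration; the function names and the target polynomial are arbitrary choices) builds $B=vw^t$ for $A=J_{0,n}$ from the Taylor coefficients of $h(t)=p_A(t)-q(t)$, and verifies $p_{A+B}=q$ by exact evaluation at $n+1$ points, which suffices since both sides are monic of degree $n$.

```python
from fractions import Fraction as F

def det(M):
    """Exact determinant via fraction-valued Gaussian elimination."""
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, F(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return F(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return sign * d

def rank_one_perturbation(q_coeffs):
    """For A = J_{0,n} and q(t) = t^n + q_coeffs[0] t^{n-1} + ... + q_coeffs[n-1],
    return (A, B) with B = v w^t as in the proposition: h(t) = t^n - q(t) has
    Taylor coefficients a_i = -q_coeffs[i], and w = sum_i a_i e_{n-i}, v = e_n."""
    n = len(q_coeffs)
    a = [-F(c) for c in q_coeffs]
    w = [F(0)] * n
    for i in range(n):
        w[n - 1 - i] = a[i]
    v = [F(0)] * n
    v[-1] = F(1)
    A = [[F(1) if j == i + 1 else F(0) for j in range(n)] for i in range(n)]
    B = [[v[i] * w[j] for j in range(n)] for i in range(n)]
    return A, B

# Target q(t) = t^3 + t + 1: m_0(q) = 0 >= alg_0(A) - j_0(A) = 3 - 3, so the
# main theorem predicts a rank one B exists, and the recipe produces it.
q_coeffs = [0, 1, 1]
A, B = rank_one_perturbation(q_coeffs)
n = len(q_coeffs)
for t in (F(0), F(1), F(-1), F(2)):
    tI_AB = [[t * (i == j) - A[i][j] - B[i][j] for j in range(n)] for i in range(n)]
    q_t = t**n + sum(F(q_coeffs[i]) * t**(n - 1 - i) for i in range(n))
    assert det(tI_AB) == q_t
```

Here $B$ has its only nonzero row equal to $w^t$, so $A+B$ is a companion-type matrix; the assertions confirm that its characteristic polynomial is the prescribed $q(t)$.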
https://arxiv.org/abs/2111.10331
Maximum arrangements of nonattacking kings on the $2n\times 2n$ chessboard
To count the number of maximum independent arrangements of $n^2$ kings on a $2n\times 2n$ chessboard, we build a $2^n \times (n+1)$ matrix whose entries are independent arrangements of $n$ kings on $2\times 2n$ rectangles. Utilizing upper and lower bound functions dependent on the entries of the matrix, we recursively construct independent solutions, and provide a straightforward formula and algorithm.
\section{Introduction} The problem of finding and counting the number of independent, also called nonattacking, arrangements of pieces on a chessboard is a long-established problem in mathematics and computer science. Many variations have been studied, including modifications of traditional pieces and of board size and shape. In particular, we will be concerned here with maximum independent arrangements on a square chessboard where arrangements of pieces are always fixed, that is, rotations and reflections are considered distinct. For some traditional chess pieces (bishops, rooks, knights, and pawns), both the size of a maximum independent set and the number of distinct maximum independent arrangements are known. For the other pieces, kings and queens, only partial results are known. For those interested in these kinds of questions, books by Dudeney~\cite{Dudeney}, Kraitchik~\cite{Kraitchik}, Madachy~\cite{Madachy}, Watkins~\cite{Watkins}, and Yaglom and Yaglom~\cite{Yaglom_Yaglom} provide good references, and Table~\ref{tab_counts} illustrates the currently known enumerative results.
\begin{table}[ht] \centering \scalebox{0.85}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline & Kings & Queens & Bishops & Knights & Rooks & Pawns\\ \hline Number of pieces&&&&&&\\ in a maximum & $\big\lceil \frac{n}{2} \big\rceil^2$ & $n$ & $2n-2$ & $\big\lceil \frac{n}{2} \big\rceil^2 + \big\lfloor \frac{n}{2} \big\rfloor ^2$ & $n$ &$n \big\lceil \frac{n}{2} \big\rceil$\\ arrangement&&&&&&\\ \hline Number of& 1 if $n$ is odd,& unknown if & & 1 if $n$ is odd, & & 2 if $n$ is odd,\\ maximum&unknown if& $n>27$ & $2^n$ & 2 if $n$ is even & $n!$ & ${n\choose n/2}^2$ if $n$ is even\\ arrangements &$n>26$ is even & & & & & \\ \hline \end{tabular} } \caption{Enumerative results for maximum independent sets placed on an $n\times n$ chessboard where $n>1$} \label{tab_counts} \end{table} For the still-open questions, it is known that the maximum number of nonattacking queens on an $n\times n$ chessboard is $n$, but counting the number of these arrangements is a famously difficult question, as are many variations on this problem. (See a survey paper by Bell and Stevens~\cite{Bell_Stevens} on this topic.) However, here we wish to consider questions of maximum arrangements of nonattacking kings. First, the maximum size is easily found. We observe that every $2\times 2$ square on the chessboard may contain at most one king, so the upper bound on the number of kings is $\lceil \frac{n}{2} \rceil ^2$. But this number can be achieved for both odd- and even-length boards by alternating kings in both rows and columns, as shown in Figure~\ref{fig_kingssolutions}. \begin{figure}[ht] \begin{center} \scalebox{0.9}{\includegraphics{KP_Figure1}} \end{center} \caption{Maximum arrangements of 16 kings on $7\times 7$ and $8\times 8$ chessboards} \label{fig_kingssolutions} \end{figure} Determining the number of distinct maximum independent arrangements in the case that $n$ is odd is also straightforward.
The arrangement placing kings in alternating positions and alternating rows as illustrated in Figure~\ref{fig_kingssolutions} is unique. To see this, observe that every $2\times n$ strip and every $n\times 2$ strip must contain exactly $\lceil \frac{n}{2} \rceil$ kings in order to achieve the maximum. Because the kings must at least be placed in alternating rows or columns, the only way to do this is to place a king in the top row or leftmost column and alternate to the bottom row or rightmost column. Thus kings must appear in all four corners and in alternating rows and columns. Counting maximum arrangements of independent kings on an even-length $2n\times 2n$ chessboard is not so simple, as there is more latitude for king placement in this case. Figure~\ref{fig_evenkings} illustrates some of these arrangements. As we will now only be considering even-length boards, note that for the rest of the discussion $n$ will always refer to the half-length of the even-length chessboard. \begin{figure}[ht] \begin{center} \scalebox{0.9}{\includegraphics{KP_Figure2}} \end{center} \caption{Examples of maximum arrangements of nonattacking kings on chessboards with even-lengths 6, 8, and 10} \label{fig_evenkings} \end{figure} The question of enumerating maximum independent arrangements of kings on the even-length $2n\times 2n$ chessboard has been studied by Knuth~\cite{Knuth} in an unpublished work and later by Wilf~\cite{Wilf}, and then Larsen~\cite{Larsen} who gives an asymptotic approximation. Entry \href{https://oeis.org/A018807}{A018807} in the Online Encyclopedia of Integer Sequences~\cite{OEIS} contributes counts up to $n=26$, and Kot\v{e}\v{s}ovec~\cite{Kotesovec} also provides enumerative results in his extensive book on independent chessboard arrangements. Variations have been studied by Calkin et
al.~\cite{Calkin_etal1, Calkin_etal2} who consider all non-maximum arrangements, while Abramson and Moser~\cite{Abramson_Moser} count arrangements of $n$ independent kings with exactly one king in each row and column. For the original question, Knuth's technique was to partition the chessboard into $2\times 2$ squares in such a way that would generate two vectors of integers. He then sets up a graph where directed paths with certain restrictions can be used to find maximum independent kings solutions. Wilf's strategy is also to partition the chessboard into $2n\times 2$ columns, but he then uses recursive matrix multiplication of the \textit{transfer matrix} to identify solutions. Algorithmically, this requires construction of the transfer matrix and repeated matrix multiplication on the $2^n (n+1) \times 2^n (n+1)$ matrix. Here, we will use a smaller $2^n \times (n+1)$ matrix and apply techniques similar to those used in an earlier work by the author~\cite{TMB_pawns} on arrangements of independent pawns to find a formula to enumerate the number of arrangements of $n^2$ kings. The complexity is still exponential, but smaller with a worst-case estimate of $O(3^n n^3)$. \section{Counting nonattacking arrangements} Let us begin similarly to Wilf~\cite{Wilf} by constructing a matrix which we will call $M_{2n}$ whose entries are all possible arrangements of $n$ independent kings on a $2\times 2n$ rectangle. First, given such a $2\times 2n$ rectangle, we partition the rectangle into $n$ $2\times 2$ squares. From left to right, index each of these squares with the integers $1, 2, \ldots, n$. We note that each $2\times 2$ square must contain exactly one king so that the set of kings placed on the rectangle is both non-attacking and maximum. To determine the placement of the king, we need to know if the king is in the top or bottom row of the square and if the king is in the left or right column.
Thus we associate any $2\times 2n$ arrangement with a subset $A \subseteq [n]$ and an index $1\leq k \leq n+1$ where $A$ equals the set of indices of the $2\times 2$ squares that have their king in the top row of the square and $k$ is the index of the square such that all squares with index less than $k$ have a left king and all squares with an index greater than or equal to $k$ have a right king. We use the $2^n$ subsets to index the rows of the matrix $M_{2n}$ and the values $1\leq k \leq n+1$ to index the columns. The $2\times 2n$ rectangles are arranged from left to right in increasing order by the number of kings which appear in the left column of their $2\times 2$ square in the partition of the rectangle. Specifically, the column index is the index of the leftmost square containing a king in the right column, and also is one plus the number of such kings appearing in a left column. See Figure~\ref{3matrix} for an example of this matrix in the case $n=3$. \begin{figure}[ht] \begin{center} \scalebox{0.9}{\includegraphics{KP_Figure3}} \end{center} \caption{Entries in the matrix $M_6$} \label{3matrix} \end{figure} \begin{lemma} Every independent arrangement of $n$ kings on a $2\times 2n$ rectangle appears exactly once as an entry in the matrix $M_{2n}$. \end{lemma} \begin{proof} By construction, the set of matrix entries is contained in the set of independent $n$-kings arrangements on $2\times 2n$ rectangles. Further, the matrix entries encompass all ways to place kings in the top or bottom row of each rectangle and all the ways to place kings in left or right columns so that all left kings appear to the left of all right kings. Any arrangement that does not appear as a matrix entry would necessarily then have a left king appearing to the right of a right king, and hence some right king with a left king immediately to its right; these two kings occupy adjacent columns and thus attack, contradicting independence.
Thus all possible arrangements of non-attacking kings on a $2\times 2n$ rectangle appear in exactly one entry of the $2^n \times (n+1)$ matrix $M_{2n}$. \end{proof} Our goal now is to vertically concatenate these rectangular arrangements to form allowable independent solutions. The first observation is that for such a concatenation to be independent, the row index of the upper rectangle must contain the row index of the lower rectangle. \begin{lemma}\label{lemma_BsubA} Given an independent arrangement of $n$ kings on the rectangle indexed by $(A,k)$ where $A\subseteq [n]$ and $1\leq k \leq n+1$, any independent arrangement on another $2\times 2n$ rectangle that can be concatenated below to form an independent arrangement of size $4\times 2n$ must be indexed by a set $B$ where $B\subseteq A$. \end{lemma} \begin{proof}To the contrary, if $B \nsubseteq A$, then there exists an index $i$ that is in $B$ and is not in $A$. This indicates a king in the top row of the strip indexed by $B$ and a king in the bottom row of the strip indexed by $A$ in square $i$. No matter the left or right placement of each king, the two kings occupy the same or adjacent columns in adjacent rows, so they attack. \end{proof} This condition is necessary but not sufficient. For example the strip in position $(\{2\}, 2)$ may not be placed independently under the strip indexed by $(\{1,2\}, 4)$ despite $\{2\} \subseteq \{1,2\}$. This is because the king in the top row of strip $(\{2\},2)$ in square $2$ has been pushed to the right, so it conflicts with the king in the bottom row of the strip $(\{1,2\},4)$ in square $3$. We need a condition on the left and right positions of the kings. To find this, we define two functions dependent on $A$, $B$, and $k$. \begin{definition} \begin{enumerate} \item Let $p(A,B,k) = \max \{ \{i\in [n+1] : i<k , i \notin A,\mbox{ and }i-1 \in B\} \cup \{1\}\}$.
\item Let $q(A,B,k)= \min \{ \{ i \in [n+1]: i >k, i \in B, \mbox{ and } i-1 \notin A\} \cup \{n+1\}\}$. \end{enumerate} \end{definition} \begin{proposition}\label{prop_pandq} Given a $2\times 2n$ arrangement of $n$ independent kings indexed by $(A,k)$, the $2\times 2n$ independent arrangements that may be independently concatenated below $(A,k)$ are exactly those which are indexed by $(B,i)$ where $B\subseteq A$ and $p(A,B,k) \leq i \leq q(A,B,k)$. \end{proposition} \begin{proof} Lemma~\ref{lemma_BsubA} forces $B\subseteq A$. Then, note that locally, for all $1\leq j \leq n$, the concatenation of two strips indexed by $A$ and $B$ will never have a king in square $j$ of $(A,k)$ and a king in square $j$ of $(B,i)$ attacking each other, because either both kings are in the top row, both are in the bottom row, or the king in $(A,k)$ is in the top row while the king in $(B,i)$ is in the bottom row. There are however two ways for a concatenation of rectangular strips $(A,k)$ and $(B,i)$ to have attacking kings in squares indexed $j-1$ and $j$ for some $j$; either there is a right king in the top row of square $j-1$ in $(B,i)$ and a left king in the bottom row of square $j$ in $(A,k)$, or there is a right king in the bottom row of square $j-1$ in $(A,k)$ and a left king in the top row of square $j$ in $(B,i)$. (See Figure~\ref{conflict}.)
\begin{figure}[ht] \begin{subfigure}{.5\textwidth} \hspace{0.4in} \begin{tabular}{cccccccc} &&i&&k&&&\\ \hline $\cdots$ &L&L&\hl{L}&R&R&R&$\cdots$\\ \hline $\cdots$ &L&\hl{R}&R&R&R&R&$\cdots$\\ \hline \end{tabular} \caption{Type 1 attacking kings}\label{conflict1} \end{subfigure} \quad \begin{subfigure}{.5\textwidth} \hspace{0.4in} \begin{tabular}{cccccccc} &&&k&&i&&\\ \hline $\cdots$ &L&L&\hl{R}&R&R&R&$\cdots$\\ \hline $\cdots$ &L&L&L&\hl{L}&R&R&$\cdots$\\ \hline \end{tabular} \caption{Type 2 attacking kings}\label{conflict2} \end{subfigure} \caption{Possible attacks by arrangements of independent kings on two $2\times 2n$ rectangles}\label{conflict} \end{figure} Let us first consider how we can avoid the situation of an attack by a left king in the fixed strip $(A,k)$ and a right king in strip $(B,i)$ for some $i$. We note that the largest value for $i$ that could generate a pair of attacking kings is $i=k-2$, as we can see from Figure~\ref{conflict1}, because for any larger values there cannot be a right king in $(B,i)$ that is immediately to the left of a left king in $(A,k)$. So we will have an attacking pair in this case if the king in square $i$ of $(B,i)$ is in the top row and the king in square $i+1$ of $(A,k)$ is in the bottom row, that is, if \begin{enumerate}[i.] \item $i< k-1$, \item $i \in B$, and \item $i+1 \notin A$. \end{enumerate} In particular, we look to the largest index over all values $i$ satisfying the criteria above, that is, over all $i$ where there is a pair of attacking kings of Type 1, because every column index less than or equal to this maximum would still have the attacking pair of kings with one king in the square indexed by that maximum and the other directly to the right. This is because moving to the left in the matrix will only increase the number of right kings in the arrangement indexed by $B$.
We can then choose the index which is one greater than this maximum as the smallest column index with no Type 1 conflict; every strip indexed by $B$ with a column index greater than or equal to this maximum plus one will not have a Type 1 conflict. If no such value exists, then all arrangements in row $B$ will have no Type 1 conflict with the arrangement $(A,k)$, and we set our lower bound to $1$. Using a change of variable where we send all $i+1$ to $i$, we can say the lower bound for non-attacking arrangements is: \begin{equation*} p(A,B,k) = \max \{ \{i\in [n+1] : i<k , i \notin A,\mbox{ and }i-1 \in B\} \cup \{1\}\}. \end{equation*} Similarly, we need to look at the situation of an attack by a right king in $(A,k)$ and a left king in $(B,i)$ for some $i$. In Figure~\ref{conflict2}, we can see that the smallest value for an attacking pair would be $i=k+2$. So to have an attacking pair of the second type, we need \begin{enumerate}[i.] \item $i>k+1$, \item $i-1\in B$, and \item $i-2 \notin A$. \end{enumerate} After determining the minimum of these values $i$ where we have pairs of attacking kings of Type 2, we set the upper bound to this minimum minus one, which gives the last possible non-attacking arrangement as we move from left to right across a row of $M_{2n}$, increasing the number of kings in left columns. If no such value exists, then all arrangements of $B$ have no attacking pairs of the second type, and we can set the upper bound to $n+1$. Again applying a change of variable sending all $i-1$ to $i$, we have \begin{equation*} q(A,B,k)= \min \{ \{ i\in [n+1]:i >k, i \in B, \mbox{ and } i-1 \notin A\} \cup \{n+1\}\}. \end{equation*} \end{proof} We illustrate these functions with the following example. \begin{example}\label{ex_pandq} Let $n=8$. Let $A=\{1,2,5,7,8\}$, $B= \{1,2,5,7\}$ and $k=4$. We compute the upper and lower bounds. \begin{align*} p(A,B,4) &= \max \{ \{i :i \notin A, i<4 \mbox{ and }i-1 \in B\} \cup \{1\}\}\\ &= \max \{ \{3\} \cup \{1\}\} = 3\\ q(A,B,4) &= \min \{ \{ i : i \in B, i >4 \mbox{ and } i-1 \notin A\} \cup \{n+1\}\}\\ &=\min \{\{5,7\} \cup \{9\}\}=5. \end{align*} Thus arrangements indexed by $B$ that may be concatenated below $(A,4)$ are exactly $(B,3)$, $(B,4)$, and $(B,5)$. See Figure~\ref{example}. \begin{figure}[ht] \begin{center} \scalebox{0.9}{\includegraphics{KP_Figure5}} \end{center} \caption{Example of independent arrangements $(\{1,2,5,7\},i)$ which may be concatenated independently below the independent arrangement $(\{1,2,5,7,8\},4)$} \label{example} \end{figure} \end{example} Now we are ready for the main result. \begin{theorem}\label{theorem_kings} Define a sequence of matrices $M^\ell$ with entries $\{m_{A,k}^{\ell}\}$ for $1\leq \ell \leq n$, $A\subseteq [n]$, and $1\leq k \leq n+1$ such that $m^1_{A,k} = 1$ for all pairs $(A,k)$ and for $1<\ell \leq n$ \[ m^{\ell}_{A,k} = \sum_{B\subseteq A} \sum_{i=p(A,B,k)}^{q(A,B,k)} m^{\ell -1}_{B,i}. \] Then $K(n)$, the number of maximum arrangements of nonattacking kings on the $2n\times 2n$ chessboard, is \[K(n) = \sum_{A\subseteq [n]} \sum_{k=1}^ {n+1} m^{n}_{A,k}. \] \end{theorem} \begin{proof} The sum of the entries of the matrix $M^1$ gives the number of nonattacking arrangements of $n$ kings on a $2\times 2n$ strip as each of the entries corresponds bijectively with a specific strip indexed by a subset $A\subseteq [n]$ and a value $1\leq k \leq n+1$.
Because $p(A,B,k)$ and $q(A,B,k)$ provide bounds for all valid arrangements which may be independently concatenated below $(A,k)$, by Proposition~\ref{prop_pandq} the double sum \[m^2_{A,k}=\sum_{B\subseteq A} \sum_{i=p(A,B,k)}^{q(A,B,k)} m^{1}_{B,i}\] counts all valid $4\times 2n$ maximum arrangements of nonattacking kings where the top $2\times 2n$ arrangement is $(A,k)$; that is, the entries $m_{A,k}^2$ correspond to the number of independent arrangements on $2\times 2n$ rectangles which may be concatenated below the arrangement $(A,k)$ while preserving independence. Their sum thus is the total number of nonattacking arrangements of $2n$ kings on a $4\times 2n$ strip. As this process is repeated, we see that a $2\ell \times 2n$ independent arrangement is found by concatenating a single independent arrangement on a $2\times 2n$ rectangle $(A,k)$ above an existing independent arrangement of the $2(\ell-1)\times 2n$ strip so that independence is also preserved along this concatenation. Inductively, we know the number of $2(\ell-1)\times 2n$ arrangements with arrangement $(B,i)$ at the top is $m_{B,i}^{\ell-1}$, and as in the case where $\ell =2$ we utilize the upper and lower bound functions $p$ and $q$ to sum over precisely the arrangements which can be concatenated independently. \end{proof} \begin{example} Let $n=3$. \[ m^1 = \begin{bmatrix} 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ 1&1&1&1\\ \end{bmatrix} \quad m^2 = \begin{bmatrix} 32&32&32&32\\ 12&16&16&16\\ 14&14&14&14\\ 16&16&16&12\\ 7&7&8&8\\ 6&8&8&6\\ 8&8&7&7\\ 4&4&4&4\\ \end{bmatrix} \quad m^3 = \begin{bmatrix} 408&408&408&408\\ 88&134&134&134\\ 110&110&110&110\\ 134&134&134&88\\ 38&38&46&46\\ 30&44&44&30\\ 46&46&38&38\\ 16&16&16&16\\ \end{bmatrix} \] \[K(3) = \sum_{A\subseteq [3]} \sum_{k=1}^{4} m^{3}_{A,k}=3600. \] \end{example} Of course, if we want to enumerate arrangements on a $2m \times 2n$ rectangular chessboard with $m\leq n$, we can simply utilize the matrix $M^m$.
We have the following corollary: \begin{corollary} The number of maximum arrangements of nonattacking kings on a $2m\times 2n$ rectangle for $m\leq n$ is given by the sum \[K(n,m)= \sum_{A\subseteq [n]} \sum_{k=1}^{n+1} m^{m}_{A,k}.\] \end{corollary} Finally, Theorem~\ref{theorem_kings} provides the following algorithm for computing the values of $K(n)$. \begin{algorithm}\label{algorithm} \begin{enumerate} \item For every triple $(A,B,k)$ such that $B\subseteq A \subseteq [n]$ and $1\leq k \leq n+1$, compute $p(A,B,k)$ and $q(A,B,k)$. \item Define the matrix $M^1$ with rows indexed by subsets of $[n]$ and columns indexed by $1\leq k \leq n+1$ where each entry is set to 1. \item For $2\leq \ell \leq n$, determine the entries in the matrix $M^{\ell}$ by summing over entries $m_{B,i}^{\ell -1}$ in the matrix $M^{\ell -1}$ where $B\subseteq A$ and $p(A,B,k) \leq i \leq q(A,B,k)$. \item Sum the entries of $M^n$. \end{enumerate} \end{algorithm} We can estimate the computational complexity of this process. It is computationally cheap to determine the values of $p(A,B,k)$ and $q(A,B,k)$: each function runs in $\mathcal{O}(n^2)$ time, that is, roughly $2n+1$ checks of the requirements for each of the $n$ possible values of $i$, plus $n$ steps to determine the minimum or maximum. However, the difficulty is that the algorithm sums over all pairs of subsets $B\subseteq A \subseteq [n]$ in $\sum_{i=0}^n {n\choose i} 2^i = 3^n$ ways, providing an overall upper bound on the order of $\mathcal{O}(3^n n^3)$. We note that many of the entries in each matrix $M^{\ell}$ are repeated; for example, columns indexed by $i$ and $n+2-i$ are equal, so some further modifications to this bound can be made, but the algorithm is already an improvement on previous methods. A Mathematica~\cite{Mathematica} program that uses Algorithm~\ref{algorithm} to calculate the values of $K(n)$ is available upon request.
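The recursion in Theorem~\ref{theorem_kings} and the algorithm above are short to code. The following Python sketch is ours (the function names are not from the paper); it computes $p$, $q$, and the matrices $M^\ell$ directly from their definitions:

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def p_bound(A, B, k, n):
    # p(A,B,k) = max({ i in [n+1] : i < k, i not in A, i-1 in B } U {1})
    return max([i for i in range(1, n + 2)
                if i < k and i not in A and i - 1 in B], default=1)

def q_bound(A, B, k, n):
    # q(A,B,k) = min({ i in [n+1] : i > k, i in B, i-1 not in A } U {n+1})
    return min([i for i in range(1, n + 2)
                if i > k and i in B and i - 1 not in A], default=n + 1)

def K(n):
    """Number of maximum arrangements of nonattacking kings on a 2n x 2n board."""
    keys = [(A, k) for A in subsets(range(1, n + 1)) for k in range(1, n + 2)]
    m = {key: 1 for key in keys}                      # the matrix M^1
    for _ in range(n - 1):                            # build M^2, ..., M^n
        m = {(A, k): sum(m[(B, i)]
                         for B in subsets(A)
                         for i in range(p_bound(A, B, k, n),
                                        q_bound(A, B, k, n) + 1))
             for (A, k) in keys}
    return sum(m.values())
```

For $n=3$ this reproduces the value $K(3)=3600$ from the example, and the bounds for Example~\ref{ex_pandq} come out as $p=3$ and $q=5$.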
https://arxiv.org/abs/2103.04483
An Amazing Prime Heuristic
Dickson conjectured that a set of polynomials will take on infinitely many simultaneous prime values. Later others, such as Hardy and Littlewood, gave estimates for the number of these primes. In this article we look at this conjecture, develop a simple heuristic and rederive these classic estimates. We then apply them to several special forms of primes and compare the estimates with the actual numbers.
\section{Introduction} \addtocontents{toc}{\vspace{3pt}} The record for the largest known twin prime is constantly changing. For example, in October of 2000, David Underbakke found the record primes: $$83475759\cdot 2^{64955}\pm 1.$$ The very next day Giovanni La Barbera found the new record primes: $$1693965\cdot 2^{66443}\pm 1.$$ The fact that the sizes of these records are close is no coincidence! Before we seek a record like this, we usually try to estimate how long the search might take, and use this information to determine our search parameters. To do this we need to know how common twin primes are. It has been conjectured that the number of twin primes less than or equal to $N$ is asymptotic to $$2C_{2}\int_{2}^{N}\frac{dx}{(\log x)^{2}}\sim \frac{2C_{2}N}{(\log N)^{2}}$$ where $C_2$, called the twin prime constant, is approximately $0.6601618$. Using this we can estimate how many numbers we will need to try before we find a prime. In the case of Underbakke and La Barbera, they were both using the same sieving software (NewPGen\footnote{Available from http://www.utm.edu/research/primes/programs/NewPGen/} by Paul Jobling) and the same primality proving software (Proth.exe\footnote{Available from http://www.utm.edu/research/primes/programs/gallot/} by Yves Gallot) on similar hardware--so of course they chose similar ranges to search. But where does this conjecture come from? In this chapter we will discuss a general method to form conjectures similar to the twin prime conjecture above. We will then apply it to a number of different forms of primes such as Sophie Germain primes, primes in arithmetic progressions, primorial primes and even the Goldbach conjecture. In each case we will compute the relevant constants (e.g., the twin prime constant), then compare the conjectures to the results of computer searches.
A few of these results are new--but our main goal is to illustrate an important technique in heuristic prime number theory and apply it in a consistent way to a wide variety of problems. \subsection{The key heuristic} A heuristic is an educated guess. We often use them to give a rough idea of how long a program might run--to estimate how long we might need to wait in the brush before a large prime comes wandering by. The key to all the results in this chapter is the following: \begin{quote} The prime number theorem states that the number of primes less than $n$ is asymptotic to $n/\log n$. So if we choose a random integer $m$ from the interval $[1,n]$, then the probability that $m$ is prime is asymptotic to $1/\log n$. \end{quote} \noindent Let us begin by applying this to a few simple examples. First, as $n$ increases, $1/\log n$ decreases, so it seemed reasonable to Hardy and Littlewood to conjecture that there are more primes in the set $\{1, 2, 3, \ldots, k\}$ than in $\{n+1, n+2, n+3, \ldots, n+k\}$. In other words, Hardy and Littlewood \cite{HL23} conjectured: \begin{conjecture} For sufficiently large integers $n$, $\pi(k) \ge \pi(n+k) - \pi(n)$. \label{BadConjecture} \end{conjecture} They made this conjecture on the basis of very little numerical evidence, saying ``An examination of the primes less than 200 suggests forcibly that $\rho(x) \le \pi(x) (x \ge 2)$'' (page 54). (Here $\rho(x) = \limsup_{n \rightarrow \infty} \pi(n+x) - \pi(n)$.) By 1961 Schinzel \cite{Schinzel61} had verified this to $k=146$ and by 1974 Selfridge {\it et al.} \cite{HR74} had verified it to 500. As reasonable sounding as this conjecture is, we will give a strong argument against it in just a moment. Second, suppose the Fermat numbers $F_n = 2^{2^n}+1$ behaved as random numbers.\footnote{There are reasons not to assume this, such as the fact that the Fermat numbers are pairwise relatively prime.} Then the probability that $F_n$ is prime should be about $1/\log(F_n) \sim 1/(2^n \log 2)$.
So the expected number of such primes would be on the order of $\sum_{n=0}^{\infty} 1/(2^n \log 2) = 2/\log 2$. This is the argument Hardy and Wright used when they presented the following conjecture \cite[pp. 15, 19]{HW79}: \begin{conjecture} \label{FermtPrimeConjecture} There are finitely many Fermat primes. \end{conjecture} The same argument, when applied to the Mersenne numbers, Woodall numbers, or Cullen numbers, suggests that there are infinitely many primes of each of these forms. But it would also imply there are infinitely many primes of the form $3^n-1$, even though all but one of these are composite. So we must be more careful than just adding up the terms $1/\log n$. We will illustrate how this might be done in the case of polynomials in the next section. As a final example we point out that in 1904, Dickson conjectured the following: \begin{conjecture} \label{DicksonsConjecture} Suppose $a_i$ and $b_i$ are integers with $b_i \geq 1$. If there is no prime which divides the product of $$\{b_1 x + a_1, b_2 x + a_2, \ldots, b_n x + a_n \}$$ for every $x$, then there are infinitely many integer values of $x$ for which these terms are simultaneously prime. \end{conjecture} How do we arrive at this conclusion? By our heuristic, for each $x \leq N$ the number $b_i x + a_i$ should be prime with probability roughly $1/\log N$. If the probabilities that each term is prime are independent, then the whole set should be prime with probability $1/(\log N)^n$. They are not likely to be independent, so we expect something on the order of $C/(\log N)^n$ for some constant $C$ (a function of the coefficients $a_i$ and $b_i$). In the following section we will sharpen Dickson's conjecture into a precise form like that of the twin prime conjecture above. \subsection{A warning about heuristics} What (if any) value do such estimates have? They may have a great deal of value for those searching for records and predicting computer run times, but mathematically they have very little value.
They are just (educated) guesses, not proven statements, so not ``real mathematics.'' Hardy and Littlewood wrote: ``{\it Probability} is not a notion of pure mathematics, but of philosophy or physics'' \cite[pg 37 footnote 4]{HL23}. They even felt it necessary to apologize for their heuristic work: \begin{quotation} Here we are unable (with or without Hypothesis $R$) to offer anything approaching a rigorous proof. What our method yields is a {\it formula}, and one which seems to stand the test of comparison with the facts. In this concluding section we propose to state a number of further formulae of the same kind. Our apology for doing so must be (1) that no similar formulae have been suggested before, and that the process by which they are deduced has at least a certain algebraic interest, and (2) that it seems to us very desirable that (in default of proof) the formula should be checked, and that we hope that some of the many mathematicians interested in the computative side of the theory of numbers may find them worthy of their attention. (\cite[pg 40]{HL23}) \end{quotation} \noindent When commenting on this Bach and Shallit wrote: \begin{quotation} Clearly, no one can mistake these probabilistic arguments for rigorous mathematics and remain in a state of grace.\footnote{Compare this quote to John von Neumann's remark in 1951 ``Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.'' \cite[p. 1]{Knuth75}} Nevertheless, they are useful in making educated guesses as to how number-theoretic functions should ``behave.'' (\cite[p. 248]{BS96}) \end{quotation} Why this negative attitude? Mathematics seeks proof. It requires results without doubt or dependence on unnecessary assumptions. And to be blunt, sometimes heuristics fail! Not only that, they sometimes fail for even the most cautious of users. In fact we have already given an example (perhaps you noticed?) 
Hardy and Littlewood, like Dickson, conjectured that if there is no prime which divides the product of the following terms for every $x$, then they are simultaneously prime infinitely often: $$\{x+a_1, x+a_2, x+a_3, x+a_4, x+a_5, \ldots, x+a_k\}$$ \cite[Conjecture X]{HL23}. This special case of Dickson's Conjecture is sometimes called {\bf the prime $k$-tuple conjecture}. We have also seen that they conjectured $\pi(k) \ge \pi(n+k) - \pi(n)$ (conjecture \ref{BadConjecture}). But in 1972, Douglas Hensley and Ian Richards proved that one of these two conjectures is false \cite{HR72, HR73, Richards74}! Perhaps the easiest way to see the conflict between these conjectures is to consider the following set of polynomials found by Tony Forbes \cite{Forbes95}: \begin{eqnarray*} \{n-p_{24049}, n-p_{24043}, \ldots, n-p_{1223}, n-p_{1217},\\ n+p_{1217}, n+p_{1223}, \ldots, n+p_{24043}, n+p_{24049}\} \end{eqnarray*} \noindent where $p_n$ is the $n$-th prime. By Hardy and Littlewood's first conjecture there are infinitely many integers $n$ so that each of these $4954$ terms is prime. But the width of this interval is just $48098$ and $\pi(48098)<4954$. So this contradicts the second conjecture. If one of these conjectures is wrong, which is it? Most mathematicians feel it is the second conjecture, that $\pi(k) \ge \pi(n+k) - \pi(n)$, which is wrong. The prime $k$-tuple conjecture receives wide support (and use!). Hensley and Richards, however, predicted: \begin{quotation} Now we come to the second problem mentioned at the beginning of this section: namely the smallest number $x_1+y_1$ for which $\pi(x_1+y_1) > \pi(x_1)+\pi(y_1)$. We suspect, even assuming the $k$-tuples hypothesis (B) is eventually proved constructively, that the value of $x_1+y_1$ will never be found; and moreover that no pair $x,y$ satisfying $\pi(x+y) > \pi(x)+\pi(y)$ will ever be computed. (\cite[p. 385]{HR74}) \end{quotation} What can we conclude from this example of clashing conjectures?
First, heuristics should be applied only with care. Next, they should be carefully tested. Even after great care and testing you should not stake too much on their predictions, so read this chapter with the usual bargain hunter's mottoes in mind: ``buyer beware'' and ``your mileage may vary.'' \subsection{Read the masters} The great mathematician Abel once wrote ``It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils.'' Good advice, but this is an area short of masters. Hardy and Littlewood's third paper on their circle method \cite{HL23} is one of the first key papers in this area. In this paper they made the first real step toward proving the Goldbach conjecture, then gave more than a dozen conjectures on the distribution of primes. Their method is far more complicated than what we present here--but it laid the basis for actual proofs of some related results. The approach we take here may have first been laid out by Cherwell and Wright \cite[section 3]{CW60} (building on earlier work by Cherwell \cite{Cherwell46}, St{\"a}ckel \cite{Stackel1916}, and of course Hardy and Littlewood). The same approach was taken by Bateman and Horn \cite{BH62} (see also \cite{BH65}). Many authors give similar arguments, including Brent, Shanks \cite{Shanks62,SK67,Shanks78} and P\'{o}lya \cite{Polya59}. There are also a couple of excellent ``students'' we should mention. Ribenboim included an entire chapter on heuristics in his text ``The New Book of Prime Number Records'' \cite{Ribenboim95}. Riesel also develops much of this material in his fine book \cite{Riesel94}. See also Schroeder \cite{Schroeder83,Schroeder97}. And as enthusiastic students we also add our little mark. Again, most of what we present here was first done by others. Our only claim to fame is a persistent, unrelenting application of one simple idea to a wide variety of problems. Enough talk, let's get started!
\addtocontents{toc}{\vspace{5pt}} \section{The prototypical example: sets of polynomials} \addtocontents{toc}{\vspace{3pt}} \subsection{Sets of polynomials} We regularly look for integers that make a set of (one or more) polynomials simultaneously prime. For example, simultaneous prime values of $\{n,n+2\}$ are twin primes, of $\{n,2n+1\}$ are Sophie Germain primes, and of $\{n,2n+1,4n+3\}$ are Cunningham chains of length three. So this is an interesting test case for our heuristic. What might stop a set of integer valued polynomials from being simultaneously prime? The same things that keep a single polynomial from being prime: it might factor, like $9x^2-1$, or there might be a prime which divides every value of the polynomial, as $3$ divides every value of $x^3-x+9$. So before we go much further we need a few restrictions on our polynomials $f_{1}(x)$, $f_{2}(x)$, \ldots, $f_{k}(x)$. We require that \begin{itemize} \item the polynomials $f_{i}(x)$ are irreducible, integer valued, and have positive leading terms, and \item the degree $d_{i}$ of $f_{i}(x)$ is greater than zero $(i=1,2,...,k)$. \end{itemize} If we could treat the values of these polynomials at $n$ as independent random variables, then by our key heuristic, the probability that they would be simultaneously prime at $n$ would be \begin{equation} \prod\limits_{i=1}^{k}\frac{1}{\log f_{i}(n)}\sim \frac{1}{ d_{1}d_{2}...d_{k}(\log n)^{k}}. \label{p(independent)} \end{equation} \noindent So the number of values of $n$ with $0<n\leq N$ which yield simultaneous primes would be approximately \begin{equation*} \frac{1}{d_{1}d_{2}...d_{k}}\int_{2}^{N}\frac{dx}{(\log x)^{k}}\sim \frac{N}{d_{1}d_{2}...d_{k}(\log N)^{k}}. \end{equation*} However, the polynomials are unlikely to behave both randomly and independently. For example, $\{n,n+2\}$ are either both odd or both even; and the second of $\{n,2n+1\}$ is never divisible by two.
To attempt to adjust for this, for each prime $p$, we will multiply by the ratio of the probability that $p$ does not divide the product of the polynomials at $n$, to the probability that $p$ does not divide any of $k$ random integers. In other words, we will adjust by multiplying by a measure of how far from independently random the values are. To find this {\em adjustment factor}, we start with the following definition: \begin{definition} $w(p)$ is the number of solutions to $f_{1}(x)f_{2}(x)\cdot ...\cdot f_{k}(x)\equiv 0 \pmod{p}$ with $x$ in $\{0,1,2,...,p-1\}$. \end{definition} For each prime then, we need to multiply by \begin{equation} \frac{\frac{p-w(p)}{p}}{(\frac{p-1}{p})^{k}}=\frac{1-w(p)/p}{(1-1/p)^{k}}, \label{factor for prime} \end{equation} and our complete adjustment factor is found by taking the product over all primes $p$: \begin{equation} \prod\limits_{p}\frac{1-w(p)/p}{(1-1/p)^{k}}. \label{adjustment factor} \end{equation} This gives us the following conjecture (see \cite{BH62,Dickson04}). \begin{conjecture}[Dickson's Conjecture] Let the irreducible polynomials $f_1 (x),$ $f_2 (x),$ \ldots, $f_k (x)$ be integer valued, have a positive leading term, and suppose $f_i(x)$ has degree $d_i >0$ $(i=1,2,\ldots,k)$. The number of values of $n$ with $0 < n \leq N$ which yield simultaneous primes is approximately \begin{equation} \frac{1}{d_{1}d_{2}...d_{k}}\prod\limits_{p}\frac{1-w(p)/p}{(1-1/p)^{k}}% \int_{2}^{N}\frac{dx}{(\log x)^{k}}\sim \frac{N}{d_{1}d_{2}...d_{k}(\log N)^{k}}\prod\limits_{p}\frac{1-w(p)/p}{(1-1/p)^{k}}. \label{pi(general)} \end{equation} \label{KeyConjecture} \end{conjecture} The ratio on the right is sufficient if $N$ is very large or we just need a rough estimate, but the integral usually gives a better estimate for small $N$. Sometimes we wish an even tighter estimate for relatively small $N$.
Then we use the right side of equation \ref{p(independent)} and write the integral in the conjecture above as \begin{equation}\label{pi(better)} \prod\limits_{p}\frac{1-w(p)/p}{(1-1/p)^{k}} \int_{2}^{N}\frac{dx}{\log f_1(x) \log f_2(x)\cdot\ldots\cdot\log f_k(x) }. \end{equation} \addtocontents{toc}{\vspace{5pt}} \section{Sequences of linear polynomials} \addtocontents{toc}{\vspace{3pt}} Conjecture \ref{KeyConjecture} gives us an approach to estimating the number of primes of several forms. In this section we will apply it to twin primes, Sophie Germain primes, primes of the form $n^{2}+1$, and several other forms of primes. In each case, we will compare the estimates in the conjecture to the actual numbers of such primes. \subsection{Twin primes} To find twin primes we can use the polynomials $n$ and $n+2$. Note that $w(2)=1$, and $w(p)=2$ for all odd primes $p$. With a little algebra, we see our adjustment factor \ref{adjustment factor} is \begin{equation}\label{eq:C_2} 2\prod\limits_{p>2}\left(1-\frac{1}{(p-1)^{2}}\right) = 2\prod\limits_{p>2}\frac{p(p-2)}{(p-1)^{2}}. \end{equation} \noindent This product over odd primes is called the twin prime constant: $$C_{2}=0.66016\ 18158\ 46869\ 57392\ 78121\ 10014\ 55577\ 84326\ ...$$ \noindent Gerhard Niklasch has computed $C_2$ to over 1000 decimal places using the methods of Moree \cite{Moree2000}. In this case, conjecture \ref{KeyConjecture} becomes: \begin{conjecture}[Twin prime conjecture] \label{TwinPrimeConjecture} The expected number of twin primes $\{p,p+2\}$ with $p\leq N$ is \begin{equation}\label{pi(twin)} 2C_{2}\int_{2}^{N}\frac{dx}{(\log x)^{2}}\sim \frac{2C_{2}N}{(\log N)^{2}}. \end{equation} \end{conjecture} \noindent (This is \cite[Conjecture ??]{HL14}.) For a different heuristic argument for the same result see \cite[section 22.20]{HW79}. In practice this seems to be an exceptionally good estimate (even for small $N$)--see Table \ref{table:Twin}.
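The first rows of Table~\ref{table:Twin} are easy to reproduce. The following Python sketch (ours, not part of the original text) counts the pairs with a sieve and evaluates the integral in (\ref{pi(twin)}) by Simpson's rule:

```python
from math import log

C2 = 0.6601618158468696  # the twin prime constant, truncated

def twin_count(N):
    """Number of twin prime pairs {p, p+2} with p <= N."""
    sieve = bytearray([1]) * (N + 3)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int((N + 2) ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return sum(1 for p in range(2, N + 1) if sieve[p] and sieve[p + 2])

def twin_estimate(N, steps=100000):
    """2 C2 * integral_2^N dx/(log x)^2, by Simpson's rule (steps must be even)."""
    f = lambda x: 1.0 / log(x) ** 2
    h = (N - 2.0) / steps
    s = f(2.0) + f(float(N))
    s += sum((4 if i % 2 else 2) * f(2.0 + i * h) for i in range(1, steps))
    return 2 * C2 * s * h / 3.0
```

For $N=10^4$ this gives a count of 205 against a prediction of about 214, matching the table.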
(The last few values in Table \ref{table:Twin} were calculated by T. Nicely \cite{Nicely95}.\footnote{See also http://www.trnicely.net/counts.html}) \begin{table} \caption{Twin primes less than $N$} \begin{tabular}{crrr} \hline & \textbf{actual} & \multicolumn{2}{c}{\textbf{predicted}} \\ $N$ & \textbf{number} & \textbf{integral} & \textbf{ratio} \\ \hline $10^{3}$ & \textbf{35} & \textbf{46} & 28 \\ $10^{4}$ & \textbf{205} & \textbf{214} & 155 \\ $10^{5}$ & \textbf{1224} & \textbf{1249} & 996 \\ $10^{6}$ & \textbf{8169} & \textbf{8248} & 6917 \\ $10^{7}$ & \textbf{58980} & \textbf{58754} & 50822 \\ $10^{8}$ & \textbf{440312} & \textbf{440368} & 389107 \\ $10^{9}$ & \textbf{3424506} & \textbf{3425308} & 3074426 \\ $10^{10}$ & \textbf{27412679} & \textbf{27411417} & 24902848 \\ $10^{11}$ & \textbf{224376048} & \textbf{224368865} & 205808661 \\ $10^{12}$ & \textbf{1870585220} & \textbf{1870559867} & 1729364449 \\ $10^{13}$ & \textbf{15834664872} & \textbf{15834598305} & 14735413063 \\ $10^{14}$ & \textbf{135780321665} & \textbf{135780264894} & 127055347335 \\ $10^{15}$ & \textbf{1177209242304} & \textbf{1177208491861} & 1106793247903 \\ \hline \end{tabular} \label{table:Twin} \end{table} It has been proven by sieve methods that if you replace the $2$ in our estimate (\ref{pi(twin)}) for the number of twin primes with a $7$, then you have a provable upper bound for $N$ sufficiently large. Brun first took this approach in 1919 when he showed we could replace the $2$ with a $100$ and get an upper bound from some point $N_{0}$ onward \cite{Brun19}. There has been steady progress reducing the constant since Brun's article (and $7$ is not the current best possible value). Unfortunately there is no known way of changing this into a lower bound--as it has not yet been proven there are infinitely many twin primes. \subsection{Prime pairs $\{n,n+2k\}$ and the Goldbach conjecture} What if we replace the polynomials $\{n,n+2\}$ with $\{n,n+2k\}$?
In this case $w(p)=1$ if $p|2k$ and $w(p)=2$ otherwise, so the adjustment factor \ref{eq:C_2} becomes \begin{equation} C_{2,k} = C_{2}\prod\limits_{p\mid k, p>2}\frac{p-1}{p-2}. \end{equation} With this slight change, conjecture \ref{KeyConjecture} now is \begin{conjecture}[Prime pairs conjecture] The expected number of prime pairs $\{p,p+2k\}$ with $p\leq N$ is \begin{equation} 2C_{2,k}\int_{2}^{N}\frac{dx}{(\log x)^{2}}\sim \frac{2C_{2,k}N}{(\log N)^{2}}. \label{eq:PrimePairs} \end{equation} \label{conj:PrimePairs} \end{conjecture} \noindent (This is \cite[Conjecture B]{HL23}.) For example, when searching for primes $\{n,n+210\}$ we expect to find (asymptotically) $\frac{2}{1}\frac{4}{3}\frac{6}{5}=3.2$ times as many primes as we find twins. Table \ref{table:pairs} shows that this is indeed the case. \begin{table}[htbp] \caption{Prime pairs $\{n,n+2k\}$ with $n\le N$} \begin{tabular}{rrr|rr|rr} \hline & \multicolumn{2}{c}{$k=6$} & \multicolumn{2}{c}{$k=30$} & \multicolumn{2}{c}{$k=210$} \\ $N$ & \textbf{actual} & \textbf{predicted} & \textbf{actual} & \textbf{predicted} & \textbf{actual} & \textbf{predicted} \\ \hline $10^3$ & \textbf{74} & \textbf{86} & \textbf{99} & \textbf{109} & \textbf{107} & \textbf{118} \\ $10^4$ & \textbf{411} & \textbf{423} & \textbf{536} & \textbf{558} & \textbf{641} & \textbf{653} \\ $10^5$ & \textbf{2447} & \textbf{2493} & \textbf{3329} & \textbf{3316} & \textbf{3928} & \textbf{3962} \\ $10^6$ & \textbf{16386} & \textbf{16491} & \textbf{21990} & \textbf{21981} & \textbf{26178} & \textbf{26358} \\ $10^7$ & \textbf{117207} & \textbf{117502} & \textbf{156517} & \textbf{156663} & \textbf{187731} & \textbf{187976} \\ $10^8$ & \textbf{879980} & \textbf{880730} & \textbf{1173934} & \textbf{1174300} & \textbf{1409150} & \textbf{1409141} \\ $10^9$ & \textbf{6849047} & \textbf{6850611} & \textbf{9136632} & \textbf{9134141} & \textbf{10958370} & \textbf{10960950} \\ \hline \end{tabular} \label{table:pairs} \end{table} Note that asymptotically 
equation \ref{eq:PrimePairs} must also give the expected number of consecutive primes whose difference is $2k$. This can be shown (and the values estimated more accurately for small $N$) using the inclusion-exclusion principle \cite{Brent74,Mayoh68}. From this it is conjectured that the most common gap between primes $\le N$ is always either $4$ or a primorial number ($2, 6, 30, 210, 2310, \ldots$) \cite{Harley94}. ``But wait--there is more,'' the old infomercial exclaimed, ``it dices, it slices...'' Look at the prime pairs set this way: $\{ n, 2k-n\}$. Now when both terms are prime we have found two primes which add to $2k$. Our adjustment factor is unchanged, so the number of ways of writing $2k$ as a sum of two primes, often denoted $G(2k)$, is approximately \begin{equation} G(2k) \sim 2C_{2,k} \int_{2}^{2k}\frac{dx}{\log x \log(2k-x)}. \end{equation} This is equivalent to the conjecture as given by Hardy and Littlewood \cite[Conjecture A]{HL23}: \begin{conjecture}[Extended Goldbach conjecture] The number of ways of writing $N=2k$ as a sum of two primes is asymptotic to \begin{equation} 2C_{2,k}\int_{2}^{N}\frac{dx}{(\log x)^{2}}\sim \frac{2C_{2,k}N}{(\log N)^{2}}. \end{equation} \end{conjecture} Hardy and Littlewood suggest that for testing this against the actual results for small numbers, we follow Shah and Wilson and use $1/((\log N)^2 - \log N)$ instead of $1/(\log N)^2$. \subsection{Primes in Arithmetic Progression} The same reasoning could be applied to estimate the number of arithmetic progressions of primes with length $k$ by seeking integers $n$ and $d$ such that each term of $$\{n, n+d, n+2d, \ldots, n+(k-1)d\}$$ is prime. In this case $w(p)=1$ if $p$ divides $d$, and $w(p) = \min(p,k)$ otherwise. In particular, if we wish all of the terms to be prime infinitely often we must have $p|d$ for all primes $p \le k$.
When this is the case, for a fixed $d$ we have \begin{equation}\label{eq:Akd} A_{k,d} = \prod\limits_{p | d}\frac{1}{(1-1/p)^{k-1}} \prod\limits_{p \nmid d}\frac{1-k/p}{(1-1/p)^{k}}. \end{equation} \noindent We can rewrite these in terms of the Hardy-Littlewood constants \begin{equation}\label{eq:c_k} c_k = \prod\limits_{p > k}\frac{1-k/p}{(1-1/p)^{k}} \end{equation} \noindent as follows $$A_{k,d} = c_k \prod\limits_{p \leq k}\frac{1}{(1-1/p)^{k-1}} \prod\limits_{p > k, p \mid d}\frac{p-1}{p-k}.$$ \noindent Of course $A_{k,d}=0$ if $k\#$ does not divide $d$. It is possible to estimate $c_k$ and $A_{k,k\#}$ to a half dozen significant digits using the product above over the first several hundred million primes--but at the end of this section we will show a much better method. Table \ref{table:A} contains approximations of the first of these constants. \begin{table}[htbp] \caption{Adjustment factors $A_{k,k\#}$ for arithmetic sequences} \begin{tabular}{rrl|rrl} \hline $k$ & $k\#$ & $A_{k,k\#}$ & $k$ & $k\#$ & $A_{k,k\#}$ \\ \hline 2 & 2 & 1.32032363169374 & 11 & 2310 & 629913.461423349 \\ 3 & 6 & 5.71649719143844 & 12 & 2310 & 1135007.50238685 \\ 4 & 6 & 8.30236172647483 & 13 & 30030 & 45046656.1742087 \\ 5 & 30 & 81.0543595999686 & 14 & 30030 & 132128113.722194 \\ 6 & 30 & 138.388898492679 & 15 & 30030 & 320552424.308155 \\ 7 & 210 & 2590.65351840622 & 16 & 30030 & 527357440.662591 \\ 8 & 210 & 7130.47817586170 & 17 & 510510 & 23636723084.1607 \\ 9 & 210 & 16129.6476839631 & 18 & 510510 & 47093023670.0967 \\ 10 & 210& 24548.2695388318 & 19 & 9699690& 3153485401596.08 \\ \hline \end{tabular} \label{table:A} \end{table} Once again we reformulate conjecture \ref{KeyConjecture} for our specific case and this time find the following. \begin{conjecture} The number of arithmetic progressions of primes with length $k$, common difference $d$, and beginning with a prime $p\leq N$ is \begin{equation} A_{k,d}\int_{2}^{N}\frac{dx}{(\log x)^{k}} \sim \frac{A_{k,d}N}{(\log N)^{k}}.
\end{equation} \end{conjecture} To check this conjecture we counted the number of such arithmetic progressions with common differences $6$, $30$, $210$ and $2310$. The results (Table \ref{table:AP345}) seem to support this estimate well. \begin{table}[htbp] \caption{Primes in arithmetic progression, starting before $10^9$} \begin{tabular}{rrr|rr|rr} \hline common & \multicolumn{2}{c|}{length $k=3$} & \multicolumn{2}{c|}{length $k=4$} & \multicolumn{2}{c}{length $k=5$} \\ difference & actual & predicted & actual & predicted & actual & predicted \\ \hline 6 & \textbf{758163} & \textbf{759591} & \textbf{56643} & \textbf{56772} & \textbf{0} & \textbf{0} \\ 30 & \textbf{1519360} & \textbf{1519170} & \textbf{227620} & \textbf{227074} & \textbf{28917} & \textbf{28687} \\ 210 & \textbf{2276278} & \textbf{2278725} & \textbf{452784} & \textbf{454118} & \textbf{85425} & \textbf{86037} \\ 2310 & \textbf{2847408} & \textbf{2848284} & \textbf{648337} & \textbf{648640} & \textbf{142698} & \textbf{143326} \\ \hline & \multicolumn{2}{c|}{length $k=6$} & \multicolumn{2}{c|}{length $k=7$} & \multicolumn{2}{c}{length $k=8$} \\ \hline 30 & \textbf{2519} & \textbf{2555} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} \\ 210 & \textbf{15146} & \textbf{15315} & \textbf{2482} & \textbf{2515} & \textbf{353} & \textbf{370}\\ 2310 & \textbf{30339} & \textbf{30588} & \textbf{6154} & \textbf{6266} & \textbf{1149} & \textbf{1221} \\ \hline \end{tabular} \label{table:AP345} \end{table} This conjecture also includes some of the previous results as special cases. For example, when $k$ is one, we are just counting primes, and as expected, $A_{1,d}=1$. It is also easy to show that $A_{2,d}=2C_{2,d}$ and $A_{2,2}=2C_2$, so $A_{2,d}$ matches the values from the Prime Pairs Conjecture \ref{conj:PrimePairs}. What if we do not fix the common difference? Instead we might ask how many arithmetic progressions of primes (with any common difference) there are all of whose terms are less than $x$.
Call this number $N_k(x)$. Grosswald \cite{Grosswald82} modified Hardy \& Littlewood's Conjecture X to conjecture:
\begin{conjecture}
The number of arithmetic progressions of primes $N_k(N)$ with length $k$ all of whose terms are less than $N$ is
\begin{equation}
N_k(N) \sim \frac{D_k N^2}{2(k-1)(\log N)^{k}}
\end{equation}
where
\begin{equation}
D_k = \prod_{p \leq k} \frac{1}{p} \left(\frac{p}{p-1}\right) ^{k-1} \prod_{p > k} \left( \frac{p}{p-1} \right) ^{k-1} \frac{p-k+1}{p}.
\end{equation}
\end{conjecture}
\noindent Grosswald was able to prove this result in the case $k=3$ \cite{GH79}. His article also included approximations of these constants $D_k$ to five significant digits. Writing these in terms of the Hardy-Littlewood constants
$$D_k = c_{k-1} \prod_{p < k} \frac{1}{p} \left(\frac{p}{p-1}\right) ^{k-1}$$
we have calculated these to 13 significant digits in Table \ref{table:D}.
\begin{table}[htbp]
\caption{Adjustment factors $D_{k}$ for arithmetic sequences}
\begin{tabular}{rl|rl}
\hline
$k$ & $D_k$ & $k$ & $D_k$ \\ \hline
3 & 1.320323631694 & 12 & 1312.319711299 \\
4 & 2.858248595719 & 13 & 2364.598963306 \\
5 & 4.151180863237 & 14 & 7820.600030245 \\
6 & 10.13179495000 & 15 & 22938.90863233 \\
7 & 17.29861231159 & 16 & 55651.46255350 \\
8 & 53.97194830013 & 17 & 91555.11122614 \\
9 & 148.5516286638 & 18 & 256474.8598544 \\
10 & 336.0343267492 & 19 & 510992.0103092 \\
11 & 511.4222820590 & 20 & 1900972.584874 \\ \hline
\end{tabular}
\label{table:D}
\end{table}
The longest known arithmetic progression of primes (at the time this was written) was found in 1993 \cite{PMT95}: it begins with the prime $11410337850553$ and continues with common difference $4609098694200$. Ribenboim \cite[p. 287]{Ribenboim95} has a table of the first known occurrence of arithmetic sequences of primes of length $k$ for $12 \le k \le 22$.
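The fixed-common-difference conjecture above is easy to probe at small scale. The sketch below is our own illustration, not the computation behind table \ref{table:AP345}: the value $A_{3,6} \approx 5.7165$ is taken from Table \ref{table:A}, and the integral is approximated by the midpoint rule.

```python
import math

def sieve(n):
    """Boolean primality table for 0..n (sieve of Eratosthenes)."""
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, math.isqrt(n) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return is_p

def count_ap(N, d, k):
    """k-term prime arithmetic progressions with difference d and first term <= N."""
    is_p = sieve(N + (k - 1) * d)
    return sum(1 for p in range(2, N + 1)
               if all(is_p[p + j * d] for j in range(k)))

def predicted_ap(N, A, k, steps=200000):
    """A * integral_2^N dx/(log x)^k, midpoint rule."""
    h = (N - 2) / steps
    return A * h * sum(1 / math.log(2 + (i + 0.5) * h) ** k for i in range(steps))

# Progressions p, p+6, p+12 of primes with p <= 10^6,
# compared with the conjectured A_{3,6} * integral.
actual = count_ap(10**6, 6, 3)
pred = predicted_ap(10**6, 5.7165, 3)
```

At $N = 10^6$ the count and the integral should already agree to within a few percent, consistent with the much closer agreement at $10^9$ shown in table \ref{table:AP345}.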
\subsection{Evaluating the adjustment factors}
In 1961 Wrench \cite{Wrench61} evaluated the twin prime constant to forty-two decimal places. He clearly did not do this with the product from equation (\ref{eq:C_2})! Just how do we calculate these adjustment factors (also called Hardy-Littlewood constants and Artin type constants) with any desired accuracy? The key is to rewrite them in terms of the zeta-functions, which are easy to evaluate \cite{Bach97,BBC1999}. Let $P(s)$ be the prime zeta-function:
$$P(s) = \sum_p \frac{1}{p^s}.$$
We can rewrite this using the usual zeta-function $\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$ and the M\"obius function $\mu(k)$ as follows (see \cite[pg. 65]{Riesel94}):
$$P(s) = \sum_{k=1}^\infty \frac{\mu(k)}{k} \log \zeta(ks).$$
To evaluate $c_{k}$ we take the logarithm of equation (\ref{eq:c_k}) and find
$$\log \left( \prod_{p > k} \frac{1-k/p}{(1-1/p)^k} \right) = \sum_{p>k} \left( \log(1-k/p) - k \log(1-1/p) \right).$$
Using the Maclaurin expansion of the logarithm (the $j=1$ terms cancel), this is
\begin{equation*}
- \sum_{p>k} \sum_{j=2}^\infty \frac{k^j-k}{j p^j} = - \sum_{j=2}^\infty \frac{k^j-k}{j} \sum_{p>k} p^{-j} = - \sum_{j=2}^\infty \frac{k^j-k}{j} \left( P(j) - \sum_{p \le k} p^{-j} \right).
\end{equation*}
It is relatively easy to calculate the zeta-function (see \cite{BBC1999,Riesel94}), so we now have a practical way to calculate $c_k$ and $A_{k,d}$. This approach will easily get us the fifteen significant digits shown in table \ref{table:A}. If we need more accuracy, then we could use the techniques found in Moree \cite{Moree2000} which Niklasch\footnote{http://www.gn-50uma.de/alula/essays/Moree/Moree.en.shtml} used to calculate many such constants to $1000$ decimal places of accuracy.
Moree's key result is that the product
$$C_{f,g}(n) = \prod_{p>p_n}\left( 1-\frac{f(p)}{g(p)} \right)$$
(where $f$ and $g$ are monic polynomials with integer coefficients satisfying $\deg(f) + 2 \le \deg(g)$ and $p_n$ is the $n$th prime) can be written as
$$C_{f,g}(n) = \prod_{k=2}^\infty \zeta_n(k)^{-e_k}$$
where the exponents $e_k$ are integers and $\zeta_n(s) = \zeta(s) \prod_{p \le p_n}(1-p^{-s})$ is the partial zeta function.
\subsection{Sophie Germain primes}
Recall that $p$ is a Sophie Germain prime if $2p+1$ is also prime \cite{Yates91}. Therefore, we will use the polynomials $n$ and $2n+1$. Again, $w(2)=1$ and $w(p)=2$ for all odd primes $p$; so again our adjustment factor is the twin prime constant $C_{2}$. This gives us exactly the same estimated number of primes as in (\ref{pi(twin)}). We can improve this estimate by not replacing $\log (2n+1)$ with $\log n$ (as we did in (\ref{p(independent)})). This gives us the following.
\begin{conjecture}
The number of Sophie Germain primes $p$ with $p\leq N$ is approximately
\begin{equation*}
2C_{2}\int_{2}^{N}\frac{dx}{\log x\log 2x}\sim \frac{2C_{2}N}{(\log N)^{2}}.
\end{equation*}
\end{conjecture}
Again, this estimate (at least the integral) is surprisingly accurate for small values of $N$; see Table\footnote{Chip Kerchner provided the last two entries in table \ref{table:SophieGermain}. (Personal e-mail 25 May 1999.)} \ref{table:SophieGermain}.
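The Sophie Germain counts are easy to reproduce. A minimal sketch of our own follows, with the twin prime constant hard-coded to ten digits and the integral again approximated by the midpoint rule.

```python
import math

def sieve(n):
    """Boolean primality table for 0..n (sieve of Eratosthenes)."""
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, math.isqrt(n) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return is_p

def sophie_germain_count(N):
    """Primes p <= N with 2p+1 also prime."""
    is_p = sieve(2 * N + 1)
    return sum(1 for p in range(2, N + 1) if is_p[p] and is_p[2 * p + 1])

def sg_predicted(N, C2=0.6601618158, steps=200000):
    """2*C_2 * integral_2^N dx/(log x * log 2x), midpoint rule."""
    h = (N - 2) / steps
    xs = (2 + (i + 0.5) * h for i in range(steps))
    return 2 * C2 * h * sum(1 / (math.log(x) * math.log(2 * x)) for x in xs)
```

For example, `sophie_germain_count(10000)` recovers the table's actual count of 190, and `sg_predicted(10000)` lands near the table's integral value of 195.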
\begin{table}[htbp]
\caption{Sophie Germain primes less than $N$}
\begin{tabular}{rrrr}
\hline
& \textbf{actual} & \multicolumn{2}{c}{\textbf{predicted}} \\
$N$ & \textbf{number} & \textbf{integral} & \textbf{ratio} \\ \hline
1,000 & \textbf{37} & \textbf{39} & 28 \\
10,000 & \textbf{190} & \textbf{195} & 156 \\
100,000 & \textbf{1171} & \textbf{1166} & 996 \\
1,000,000 & \textbf{7746} & \textbf{7811} & 6917 \\
10,000,000 & \textbf{56032} & \textbf{56128} & 50822 \\
100,000,000 & \textbf{423140} & \textbf{423295} & 389107 \\
1,000,000,000 & \textbf{3308859} & \textbf{3307888} & 3074425 \\
10,000,000,000 & \textbf{26569515} & \textbf{26568824} & 24902848 \\
100,000,000,000 & \textbf{218116524} & \textbf{218116102} & 205808662 \\ \hline
\end{tabular}
\label{table:SophieGermain}
\end{table}
\subsection{Cunningham chains}
Cunningham chains can be thought of as a generalization of Sophie Germain primes. If the terms of the sequence
$$\{ p, 2p+1, 4p+3, \ldots, 2^{k-1}p+2^{k-1}-1 \}$$
are all prime, then this sequence is called a Cunningham chain of length $k$. (Sophie Germain primes yield Cunningham chains of length two.) There is a second type of these chains, called Cunningham chains of the second kind, which are prime sequences of the form
$$\{p, 2p-1, 4p-3, \ldots, 2^{k-1}p-2^{k-1}+1\}.$$
For either of these forms it is easy to show that $w(2)=1$, and that for odd primes $p$, $w(p)=\min (k,ord_{p}(2))$. So the resulting estimate is as follows.
\begin{conjecture}
The number of Cunningham chains of length $k$ beginning with primes $p$ with $p\leq N$ is approximately
\begin{equation*}
B_{k}\int_{2}^{N}\frac{dx}{\log x\log (2x) \ldots \log (2^{k-1}x)} \sim \frac{B_{k}N}{(\log N)^{k}}
\end{equation*}
where $B_{k}$ is the product
\begin{equation*}
B_{k}=2^{k-1}\prod\limits_{p>2}\frac{p^{k}-p^{k-1}\min(k,ord_{p}(2))}{(p-1)^{k}}\text{.}
\end{equation*}
\end{conjecture}
\noindent (This conjecture for $k=2, 3$ and $4$ can be found in \cite{Loh89}.)
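A direct count of chains (of length at least $k$) gives a quick check on this conjecture; the sketch below is our own illustration, not the computation behind the tables.

```python
import math

def sieve(n):
    """Boolean primality table for 0..n (sieve of Eratosthenes)."""
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, math.isqrt(n) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return is_p

def cunningham_count(N, k, kind=1):
    """Chains of length >= k starting at a prime p <= N.
       kind=+1: p, 2p+1, 4p+3, ...; kind=-1: p, 2p-1, 4p-3, ..."""
    is_p = sieve(2 ** (k - 1) * (N + 1))  # largest term is 2^(k-1)*p +- (2^(k-1)-1)
    return sum(1 for p in range(2, N + 1)
               if all(is_p[(p << j) + kind * ((1 << j) - 1)] for j in range(k)))
```

Length-2 chains of the first kind are exactly the Sophie Germain primes, so `cunningham_count(1000, 2)` matches the previous table's count of 37; the five chains of length at least three starting below 100 begin at $p = 2, 5, 11, 41, 89$.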
Note that $\min(k,ord_{p}(2))$ is just $k$ when $p>2^{k}$, so we can again write these adjustment factors in terms of the Hardy-Littlewood constants:
$$B_{k}=2^{k-1} c_k \prod\limits_{k < p < 2^k}\frac{p-\min(k,ord_{p}(2))}{p-k} \prod\limits_{2 < p \leq k}\frac{1-\min(k,ord_{p}(2))/p}{(1-1/p)^k}.$$
We then counted the Cunningham chains starting below $10^{9}$ as an example to test our conjecture. As one would expect, the agreement is better for smaller $k$ because these forms yield many more primes for this small choice of $N$.
\begin{table}[htbp]
\caption{Cunningham chains of length $k$ starting before $10^9$}
\begin{tabular}{cccccc}
\hline
\textbf{length} & adjustment & \multicolumn{2}{c}{\textbf{actual number}} & \multicolumn{2}{c}{\textbf{predicted}} \\
$k$ & factor $B_{k}$ & \textbf{first kind} & \textbf{second kind} & \textbf{integral} & \textbf{ratio} \\ \hline
2 & 1.320323631694 & \multicolumn{1}{r}{\textbf{3308859}} & \multicolumn{1}{r}{\textbf{3306171}} & \multicolumn{1}{r}{\textbf{3307888}} & \multicolumn{1}{r}{3074426} \\
3 & 2.858248595719 & \multicolumn{1}{r}{\textbf{342414}} & \multicolumn{1}{r}{\textbf{341551}} & \multicolumn{1}{r}{\textbf{342313}} & \multicolumn{1}{r}{321163} \\
4 & 5.534907817650 & \multicolumn{1}{r}{\textbf{30735}} & \multicolumn{1}{r}{\textbf{30962}} & \multicolumn{1}{r}{\textbf{30784}} & \multicolumn{1}{r}{30011} \\
5 & 20.26358989999 & \multicolumn{1}{r}{\textbf{5072}} & \multicolumn{1}{r}{\textbf{5105}} & \multicolumn{1}{r}{\textbf{5092}} & \multicolumn{1}{r}{5302} \\
6 & 71.96222721619 & \multicolumn{1}{r}{\textbf{531}} & \multicolumn{1}{r}{\textbf{494}} & \multicolumn{1}{r}{\textbf{797}} & \multicolumn{1}{r}{909} \\
7 & 233.8784426339 & \multicolumn{1}{r}{\textbf{47}} & \multicolumn{1}{r}{\textbf{46}} & \multicolumn{1}{r}{\textbf{112}} & \multicolumn{1}{r}{142} \\ \hline
\end{tabular}
\label{table:CuninghamChains}
\end{table}
\subsection{Primes of the form $n^{2}+1$}
If we use the single polynomial $n^{2}+1$, then $w(2)=1$, and
$w(p)=1+(-1|p)$ for odd primes $p$. Here $(-1|p)$ is the Legendre symbol, so it is $1$ if there is a solution to $n^{2}\equiv -1 \pmod{p}$, and $-1$ otherwise. Now the adjustment factor (after a little algebra) is
\begin{equation*}
\prod\limits_{p>2}\left( 1-\frac{(-1|p)}{p-1}\right) =1.3728134628...
\end{equation*}
Calling this constant $C_{+}$ we conjecture that the expected number of values of $n\leq N$ yielding primes $n^{2}+1$ is
\begin{equation*}
\frac{C_{+}}{2}\int_{2}^{N}\frac{dx}{\log x}\sim \frac{C_{+}}{2}\frac{N}{\log N}.
\end{equation*}
But this is not how we usually word our estimates. Often, we would instead want the number of primes $n^{2}+1$ that are at most $N$ (the resulting prime is at most $N$, rather than the variable $n$ being at most $N$). So we need to replace $N$ by $\sqrt{N}$ in the integral's upper limit, to get:
\begin{conjecture}
The expected number of primes of the form $n^{2}+1$ less than or equal to $N$ is
\begin{equation}
\frac{C_{+}}{2}\int_{2}^{\sqrt{N}}\frac{dx}{\log x}\sim C_{+}\frac{\sqrt{N}}{\log N}. \label{eq:Cplus}
\end{equation}
\end{conjecture}
\noindent (This is \cite[Conjecture E]{HL23}.) Again these estimates are quite close, see Table \ref{table:Primesn^2+1}.
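The first row of that table is easy to confirm directly; the sketch below is our own check, with $C_{+}$ hard-coded to the value given above.

```python
import math

def sieve(n):
    """Boolean primality table for 0..n (sieve of Eratosthenes)."""
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, math.isqrt(n) + 1):
        if is_p[i]:
            is_p[i * i::i] = [False] * len(is_p[i * i::i])
    return is_p

def count_n2p1(N):
    """Primes of the form n^2 + 1 that are <= N."""
    is_p = sieve(N)
    return sum(1 for n in range(1, math.isqrt(N - 1) + 1) if is_p[n * n + 1])

def predicted_n2p1(N, C=1.3728134628, steps=100000):
    """(C_+/2) * integral_2^sqrt(N) dx/log x, midpoint rule."""
    h = (math.sqrt(N) - 2) / steps
    return C / 2 * h * sum(1 / math.log(2 + (i + 0.5) * h) for i in range(steps))
```

At $N = 10^6$ this recovers the table's actual count of 112 against an integral estimate near 121.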
\begin{table}[htbp]
\caption{Primes $n^{2}+1$ less than $N$}
\begin{tabular}{rrrr}
\hline
& \textbf{actual} & \multicolumn{2}{c}{\textbf{predicted}} \\
$N$ & \textbf{number} & \textbf{integral} & \textbf{ratio} \\ \hline
1,000,000 & \textbf{112} & \textbf{121} & 99 \\
100,000,000 & \textbf{841} & \textbf{855} & 745 \\
10,000,000,000 & \textbf{6656} & \textbf{6609} & 5962 \\
1,000,000,000,000 & \textbf{54110} & \textbf{53970} & 49684 \\
100,000,000,000,000 & \textbf{456362} & \textbf{456404} & 425861 \\ \hline
\end{tabular}
\label{table:Primesn^2+1}
\end{table}
In 1978 Iwaniec showed \cite{Iwaniec78} that there are infinitely many $P_{2}$'s (numbers that are the product of at most two primes) among the numbers of the form $n^{2}+1$.\footnote{He proved that if we divide $C_{+}$ by 77 in equation (\ref{eq:Cplus}), then we get a lower bound for the number of $P_2$'s represented.} It has also been shown that there are infinitely many primes of the form $n^{2}+m^{4}$, but both of these results are far from proving there are infinitely many primes of the form $n^{2}+1$.
\addtocontents{toc}{\vspace{5pt}}
\section{Non-polynomial forms}
\addtocontents{toc}{\vspace{3pt}}
In this section we attempt to apply similar reasoning to non-polynomial forms. There are quite a few examples of this in the literature: Mersenne \cite{Schroeder83, Wagstaff83}, Wieferich \cite{CDP97}, generalized Fermat \cite{DG2000}\footnote{The authors treated these as polynomials by fixing the exponent and varying the base.}, primorial and factorial \cite{CG2000}, and primes of the form $k \cdot 2^{n}+1$ \cite{BR98}. We will look at several of these cases below including the Cullen and Woodall primes (perhaps for the first time). In the previous sections we took advantage of the fact that for a polynomial $f(x)$ with integer coefficients, $f(x+p)\equiv f(x)\pmod{p}$. This is rarely the case when $f(x)$ has a more general form, and is definitely not true for the form $2^n-1$.
So rather than use Dickson's Conjecture \ref{KeyConjecture} as we did in all of the previous sections, we will proceed directly from our key heuristic: associating with the random number $n$ the probability $1/\log n$ of being prime--then trying to adjust for `non-randomness' in each case. A second common problem we will have with these examples is that very few primes of each form are known, usually only a couple dozen at best. When we look back at the numerical evidence for the polynomial examples, we cannot help but notice that the spectacular agreement between the heuristic estimate and the actual count only begins to show itself after we have many hundreds, or thousands, of examples. For that reason it will be difficult to draw conclusions below from simply counting. We will also look at the distribution of the known examples and in some cases the gaps between these examples.
\subsection{Mersenne primes and the Generalized Repunits}
A repunit is an integer all of whose digits are one, such as the primes $11$ and $1111111111111111111$. The generalized repunits (repunits in radix $a$) are the numbers $R_k(a)=(a^k-1)/(a-1)$. When $a$ is 2 these are the Mersenne numbers. When $a$ is 10, they are the usual repunits. Before we estimate the number of generalized repunit primes, we must first consider their divisibility properties. For example, if $k$ is composite, then the polynomial $x^k-1$ factors, so for $R_k(a)$ to be prime, $k$ must be a prime $p$. As a first estimate we might guess the probability that $R_k(a)$ is prime is roughly $(1/\log R_k(a)) (1/\log k) \sim 1/((k-1) \log k \log a)$. Next suppose that the prime $q$ divides $R_p(a)$ with $p$ prime. Then the order of $a \pmod{q}$ divides $p$, so is $1$ or $p$. If the order is $1$, then $a \equiv 1 \pmod{q}$, $R_p(a) \equiv p \pmod{q}$, and therefore $p=q$. If the order is $p$, then since the order divides $q-1$, we know $p$ divides $q-1$.
We have shown that every prime divisor $q$ of $R_p(a)$ is either $p$ (and divides $a-1$) or has the form $k p+1$ for some integer $k$. Among other things, this means that for most primes $p$, $R_p(a)$ is not divisible by any prime $q < p$, so we can adjust our estimate that $R_k(a)$ is prime by multiplying by $1/(1-1/q)$ for each of these primes. Here we need to recall an important tool:
\begin{theorem}[Mertens' Theorem]
\begin{equation*}
\prod_{\substack{q \leq x \\ q\text{ prime}}} \left(1-\frac{1}{q}\right) \sim \frac{e^{-\gamma}}{\log x}.
\end{equation*}
\label{thm:Merten}
\end{theorem}
\noindent (For a proof see \cite[p. 351]{HW79}.) So our second estimate of the probability that $R_k(a)$ is prime is $e^\gamma /(k \log a)$.
\begin{figure}[htbp]
\caption{$\log_2 \log_2$ of the $n$th Mersenne prime versus $n$}
\begin{center}
\includegraphics[width=0.55\textwidth]{mersenne}
\end{center}
\label{figure:Mersenne}
\end{figure}
\noindent (A more detailed discussion of this heuristic can be found at http://www.utm.edu/research/primes/mersenne/heuristic.html.)
\begin{figure}[htbp]
\caption{$\log_{10} \log_{10}$ of the $n$th repunit prime versus $n$}
\begin{center}
\includegraphics[width=0.5\textwidth]{repunits}
\end{center}
\label{figure:Repunits}
\end{figure}
\subsection{Cullen and Woodall primes}
The Cullen and Woodall primes are $C(n)=n2^{n}+1$, and $W(n)=n 2^{n}-1$. In this case we have
\begin{equation*}
C(n+p(p-1))\equiv C(n) \pmod{p}
\end{equation*}
and
\begin{equation*}
W(n+p(p-1))\equiv W(n)\pmod{p}\text{.}
\end{equation*}
By the Chinese remainder theorem, the congruences $C(n)\equiv 0$ and $W(n)\equiv 0 \pmod{p}$ each have $ord_{p}(2)$ solutions in
\begin{equation*}
\{0,1,2,\ldots,p\cdot ord_{p}(2)-1\},
\end{equation*}
so we might again assume that the probabilities that $p$ divides $C(n)$ and $W(n)$ are both $1/p$ for odd primes $p$--the same as for an arbitrary random integer. But are these probabilities independent for different primes $p$ and $q$? We must ask this because $p(p-1)$ and $q(q-1)$ are not relatively prime.
We verify this independence as follows:
\begin{theorem}
Let $p$ and $q$ be distinct odd primes and let $a$ and $b$ be any integers. Then the system of congruences
\begin{equation*}
\left\{
\begin{array}{c}
n2^{n}\equiv a \pmod{p} \\
n2^{n}\equiv b \pmod{q}
\end{array}
\right.
\end{equation*}
has $lcm(pq,ord_{p}(2),ord_{q}(2))/pq$ solutions in $\{0,1,2,\ldots, lcm(pq,ord_{p}(2),ord_{q}(2))-1\}$.
\begin{proof}
For each $r$ modulo $d=lcm(ord_{p}(2),ord_{q}(2))$ write $n=r+sd$. Then the system above is
\begin{equation*}
\left\{
\begin{array}{c}
sd\equiv a/2^{r}-r \pmod{p} \\
sd\equiv b/2^{r}-r \pmod{q}
\end{array}
\right.
\end{equation*}
Assume that $q$ is the larger of the two primes; then we know $q\nmid ord_{p}(2)$, so the second of these congruences has a unique solution (modulo $q$). If $p\nmid d$, then the first congruence also has a unique solution, giving a total of $d$ solutions to the original system (one for each $r$). In this case $d$ is $lcm(pq,ord_{p}(2),ord_{q}(2))/pq$. On the other hand, if $p\mid d$, then the only acceptable choices of $r$ are those for which $r2^{r}\equiv a \pmod{p}$. There are $d/p$ of these--which again is $lcm(pq,ord_{p}(2),ord_{q}(2))/pq$.
\end{proof}
\end{theorem}
For each odd prime the analog of the adjustment factor (\ref{factor for prime}) is therefore one, and the complete adjustment factor (\ref{adjustment factor}) is $2$ in both cases (Cullen and Woodall). This gives us the following.
\begin{conjecture}
The expected numbers of Cullen and Woodall primes with $n\leq N$ are each approximately
\begin{equation}
2\int_{2}^{N}\frac{dx}{\log (x2^{x})}\sim 2\,\frac{\log N-\log 2}{\log 2} \label{bad_est}
\end{equation}
\end{conjecture}
Table \ref{table:CullenWoodall} shows us that what little evidence we have does not support (\ref{bad_est}) well for the Cullen numbers, though it does appear reasonable for Woodall numbers.
\begin{table}[htbp]
\caption{Cullen $C(n)$ and Woodall $W(n)$ primes}
\begin{tabular}{rrrcc}
\hline
& \multicolumn{2}{r}{\textbf{actual }(with $n<N$)} & \multicolumn{2}{c}{\textbf{predicted}} \\
$N$ & \textbf{Woodall} & \textbf{Cullen} & \textbf{integral} & \textbf{ratio} \\ \hline
1000 & \textbf{15} & \textbf{2} & \textbf{15} & 18 \\
10,000 & \textbf{18} & \textbf{5} & \textbf{22} & 25 \\
100,000 & \textbf{24} & \textbf{10} & \textbf{29} & 31 \\
500,000 & $\geq $\textbf{26} & $\geq $\textbf{13} & \textbf{33} & 36 \\
1,000,000 & & & \textbf{35} & 38 \\ \hline
\end{tabular}
\label{table:CullenWoodall}
\end{table}
Since we have so few data points, it might offer some insight to graph the expected number of Cullen and Woodall primes below each of the known primes of these forms (graph omitted). If our estimate holds, then this graph would remain ``near'' the diagonal.
\subsection{Primorial primes}
Primes of the form $p\# \pm 1$ are sometimes called the primorial primes (a term introduced by H. Dubner as a play on the words prime and factorial). Since $\theta(p) = \log p\#$ is the Chebyshev theta function, it is well known that $\log p\#$ is asymptotically $p$. In fact Dusart \cite{Dusart99} has shown
\begin{equation*}
\left| \theta (x) - x \right| \leq 0.006788 \frac{x}{\log x} \text{ \ \ \ \ \ \ for } x \geq 2.89 \times 10^{7}.
\end{equation*}
\noindent We begin (as usual) by noting that by the prime number theorem the probability of a ``random'' number the size of $p\# \pm 1$ being prime is asymptotically $\frac{1}{p}$. However, $p\# \pm 1$ does not behave like a random variable because primes $q$ less than $p$ divide $1/q$ of a random set of integers, but cannot divide $p\# \pm 1$. So we adjust our estimate by dividing by $1 - \frac{1}{q}$ for each of these primes $q$. By Mertens' theorem \ref{thm:Merten} our final estimate of the probability that $p\# \pm 1$ is prime is $\frac{e^{\gamma} \log p}{p}$.
By this simple model, the expected number of primes $p\# \pm 1$ with $p \leq N$ would then be
\begin{equation*}
\sum_{p \leq N} \frac{e^{\gamma} \log p}{p} \sim e^{\gamma} \log N.
\end{equation*}
\begin{conjecture}
The expected numbers of primorial primes of the forms $p\# \pm 1$ with $p \leq N$ are both approximately $e^{\gamma} \log N$.
\end{conjecture}
The known, albeit limited, data supports this conjecture. What is known is summarized in Table~\ref{tab:3}.
\begin{table}[htbp]
\caption{The number of primorial primes $p\#\ \pm 1$ with $p \le N$}
\label{tab:3}
\begin{tabular}{rccc}
\hline
& \multicolumn{2}{c}{\textbf{actual}} & \textbf{predicted} \\
$N$ & $p\# + 1$ & $p\# - 1$ & (of each form) \\ \hline
10 & \textbf{4} & \textbf{2} & \textbf{4} \\
100 & \textbf{6} & \textbf{6} & \textbf{8} \\
1000 & \textbf{7} & \textbf{9} & \textbf{12} \\
10000 & \textbf{13} & \textbf{16} & \textbf{16} \\
100000 & \textbf{19} & \textbf{18} & \textbf{20} \\ \hline
\end{tabular}
\end{table}
\begin{remark}
By the above estimate, the $n^{\text{th}}$ primorial prime should be about $e^{n/e^{\gamma}}$.
\end{remark}
\subsection{Factorial primes}
The primes of the forms $n! \pm 1$ are regularly called the factorial primes, and like the ``primorial primes'' $p\# \pm 1$, they may owe their appeal to Euclid's proof and their simple form. Even though they have now been tested up to $n=10000$ (approximately 36000 digits), there are only $39$ such primes known. To develop a heuristic estimate we begin with Stirling's formula:
\begin{equation*}
\log n! = (n+\frac{1}{2}) \log n - n + \frac{1}{2} \log 2 \pi + O\left(\frac{1}{n}\right),
\end{equation*}
\noindent or more simply: $\log n! \sim n(\log n - 1)$. So by the prime number theorem the probability a random number the size of $n! \pm 1$ is prime is asymptotically $\frac{1}{n(\log n - 1)}$. Once again our form $n! \pm 1$ does not behave like a random variable--this time for several reasons.
First, primes $q$ less than $n$ divide $1/q$ of a set of random integers, but cannot divide $n! \pm 1$. So we again divide our estimate by $1 - \frac{1}{q}$ for each of these primes $q$ and by Mertens' theorem we estimate the probability that $n! \pm 1$ is prime to be
\begin{equation} \label{eq:FactEst}
\frac{e^{\gamma} \log n}{n(\log n - 1)}.
\end{equation}
To estimate the number of such primes with $n$ less than $N$, we may integrate this last estimate to get:
\begin{conjecture}
The expected numbers of factorial primes of the forms $n! \pm 1$ with $n \le N$ are both asymptotic to $e^{\gamma} \log N$.
\end{conjecture}
Table~\ref{tab:2} shows a comparison of this estimate to the known results.
\begin{table}[htbp]
\caption{The number of factorial primes $n! \pm 1$ with $n \le N$}
\label{tab:2}
\begin{tabular}{rccc}
\hline
& \multicolumn{2}{c}{\textbf{actual}} & \textbf{predicted} \\
$N$ & $n! + 1$ & $n! - 1$ & (of each form) \\ \hline
10 & \textbf{3} & \textbf{4} & \textbf{4} \\
100 & \textbf{9} & \textbf{11} & \textbf{8} \\
1000 & \textbf{16} & \textbf{17} & \textbf{12} \\
10000 & \textbf{18} & \textbf{21} & \textbf{16} \\ \hline
\end{tabular}
\end{table}
As an alternate check on this heuristic model, notice that it also applies to the forms $k\cdot n!\pm 1$ ($k$ small). For $1\le k\le 500$ and $1\le n\le 100$ the form $k\cdot n!+1$ is a prime 4275 times, and the form $k\cdot n!-1$, 4122 times. This yields an average of 8.55 and 8.24 primes for each $k$, relatively close to the predicted $8.20$. But what of the other obstacles to $n! \pm 1$ behaving randomly? Most importantly, what effect does accounting for Wilson's theorem have? These turn out not to significantly alter our estimate above. To see this, we first summarize these divisibility properties as follows.
\begin{theorem} \label{thm:FactDiv}
Let $n$ be a positive integer.
\begin{enumerate}
\item[i)] $n$ divides $1!-1$ and $0!-1$.
\item[ii)] If $n$ is prime, then $n$ divides both $(n-1)!+1$ and $(n-2)!-1$.
\item[iii)] If $n$ is odd and $2n+1$ is prime, then $2n+1$ divides exactly one of $n!\pm 1$.
\item[iv)] If the prime $p$ divides $n!\pm 1$, then $p-n-1$ divides one of $n!\pm 1$.
\end{enumerate}
\end{theorem}
\begin{proof}
(ii) is Wilson's theorem. For (iii), note that if $2n+1$ is prime, then Wilson's theorem implies
$$-1 \equiv 1\cdot2\cdot\ldots\cdot n\cdot (-n)\cdot\ldots\cdot (-1) \equiv (-1)^n (n!)^2 \pmod{2n+1}.$$
When $n$ is odd this is $(n!)^2 \equiv 1$, so $n! \equiv \pm 1 \pmod{2n+1}$. Finally, to see (iv), suppose $n!\equiv \pm 1 \pmod{p}$. Since $(p-1)!\equiv -1$, this is
$$(p-1)(p-2)\cdot \ldots \cdot (n+1) \equiv (-1)^{p-n-1}(p-n-1)! \equiv \mp 1 \pmod{p}.$$
This shows $p$ divides exactly one of $(p-n-1)!\pm 1$.
\end{proof}
To adjust for the divisibility properties (ii) and (iii), we should multiply our estimate (\ref{eq:FactEst}) by $1 - \frac{1}{\log n}$, since $\frac{1}{\log n}$ is roughly the probability that $n+1$ (or $n+2$) is prime; and then by $1 - \frac{1}{4\log 2n}$, since $\frac{1}{4\log 2n}$ is roughly the probability that $n$ is odd and $2n+1$ is prime. The other two cases of Theorem~\ref{thm:FactDiv} require no adjustment. This gives us the following estimate of the probability that $n!\pm 1$ is prime:
\begin{equation}
\left(1 - \frac{1}{4 \log 2n}\right)\frac{e^\gamma}{n}.
\end{equation}
\noindent Integrating as above suggests there should be $e^{\gamma} (\log N - \frac{1}{4} \log \log 2N )$ primes of the forms $n! \pm 1$ with $n \le N$. Since we are using an integral of probabilities in our argument, we cannot hope to do much better than an error of $o(\log N)$, so this new estimate is essentially the same as our conjecture above.
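The small entries of Table~\ref{tab:2} are easy to reproduce. The sketch below, our own illustration, uses a strong-probable-prime (Miller--Rabin) test with twelve fixed bases; for numbers of this size that is reliable in practice, though it is not a proof of primality.

```python
def is_prime(n):
    """Strong-probable-prime test to twelve fixed bases (Miller-Rabin)."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def factorial_prime_counts(N):
    """Number of n <= N with n!+1 prime, and with n!-1 prime."""
    plus = minus = 0
    f = 1
    for n in range(1, N + 1):
        f *= n  # running factorial
        plus += is_prime(f + 1)
        minus += is_prime(f - 1)
    return plus, minus
```

For instance, at $N = 30$ the heuristic predicts $e^{\gamma}\log 30 \approx 6.1$ primes of each form, and the actual counts (5 for $n!+1$, 7 for $n!-1$) straddle that value.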
https://arxiv.org/abs/1712.02315
Pair Correlations in Uniform Countable Sets
We determine the pair correlations of countable sets $T \subset \mathbb{R}^n$ satisfying natural equidistribution conditions. The pair correlations are computed as the volume of a certain region in $\mathbb{R}^{2n}$, which can be expressed in terms of the incomplete Beta function. For $n=2$ and $n=3$ we give closed form expressions, and we obtain an expression in the $n \rightarrow \infty$ limit. We also verify that sets of lattice points and primitive lattice points satisfy the required distribution criteria.
\section{Introduction}
In many areas of geometry it is important to determine the spatial statistics of various discrete sets of points in $n$ dimensions. We study a property of the statistics of integer lattice points $\mathbb{Z}^{n}$ (and subsets of the lattice points), demonstrating an interesting relationship between probability and number theory. In particular, we calculate the \textit{pair-correlations} between the points of a uniform countable set. Given a countable set of points $T \subset \mathbb{R}^n$ satisfying certain natural uniform distribution conditions (as are satisfied by many standard discrete sets, like the lattice points and the primitive lattice points), we consider the subset of points of $T$ whose magnitudes are $\leq r$. Pick two points in this set $T \cap B_n(r)$ (where $B_n(r)$ is the $n$-ball centered at $\mathbf{0}$ of radius $r$), and let the distance between them be $\lambda r$. We want to find the probability density function of $\lambda$ in the limit as $r \to \infty$. To do this, we first prove a heuristic relating the number of points of $T\times T$ within a region in $\mathbb{R}^{2n}$ to its volume. After this, we make use of our volume heuristic to reduce the computation of the required probability density function to a volume calculation.
\subsection{Conditions for Equidistribution}
For our results to hold, we require that $T$ satisfy the following equidistribution conditions:
\vspace{2mm}
\noindent \textbf{I.
Angular Condition.} We require that $T$ becomes uniformly distributed on the sphere $\mathbb S^{n-1} = \partial B_n (1)$ in the sense that, for any continuous function $f$ on $\mathbb S^{n-1}$ extended to the whole of $\mathbb{R}^n$ by $f(\mathbf{x}) \equiv f \left( \hat{\mathbf{x}} \right)$ (where $\hat{\mathbf{x}}$ is the unit vector pointing in the direction of $\mathbf{x}$), we have
\begin{equation}\label{AngularCriterion}
\lim_{r \rightarrow \infty} \frac{1}{\left| T \cap B_n(r) \right|} \sum_{\mathbf{x} \in T \cap B_n(r)} f(\mathbf{x}) = \int_{\mathbb S^{n-1}} f(\mathbf{x})d\mu_P(\mathbf{x}),
\end{equation}
\noindent where $\mu_P$ denotes the Lebesgue probability measure on $\mathbb S^{n-1}$.
\vspace{2mm}
\noindent \textbf{II. Radial (Growth) Condition.} We moreover require that $T$ satisfies the following (natural) radial counting: letting $N(r) = N(T, r)$ be the number of points $\mathbf{x}$ in $T$ such that $|\mathbf{x}| < r$, we require
\begin{equation} \label{RadialCriterion}
\frac{N(r\lambda)}{N(r)} \sim \lambda^n
\end{equation}
\noindent as $r \to \infty$, for each fixed $\lambda > 0$.
We wish to show that, for some $\Psi \subset \mathbb{R}^{2n}$ with $\Psi(r)$ being $\Psi$ dilated by $r$ about $\mathbf{0}$, $|(T \times T) \cap \Psi(r) |$ grows like $N(r)^2$, in that their quotient tends to $m(\Psi)/m(B_n)^2$ as $r \to \infty$, where $m$ is the (unnormalized) Lebesgue measure in Euclidean space.
\section{The Volume Heuristic for Equidistributed Sets}
To calculate pair correlations of sets $T$ satisfying the equidistribution conditions (\ref{AngularCriterion}) and (\ref{RadialCriterion}), we require a heuristic relating the number of ordered pairs of points of $T$ in a certain region in $\mathbb{R}^{2n}$ to the volume of that region. We approach the proof of this in three steps. First, we provide a much simpler equidistribution criterion equivalent to the original integral one.
Then, we apply this new criterion to prove an intermediate result relating the number of points of $T$ within regions in $\mathbb{R}^n$ to their volumes. Finally, we use an argument involving Cartesian products to extend this to $\mathbb{R}^{2n}$. First, we cite a theorem which states the equivalence of the angular equidistribution criterion to a much simpler statement. This classical result roughly asserts that a set is equidistributed in the angular sense if and only if it looks equally dense in every direction.
\begin{theorem}[Kuipers and Niederreiter 1974]
Define
\begin{equation} \label{WedgeDef}
W(X,r) = \left\{\mathbf{x}\ \big|\ |\mathbf{x}|<r, \frac{\mathbf{x}}{|\mathbf{x}|} \in X\right\} .
\end{equation}
Given a countable set $T \subset \mathbb{R}^n$, the equidistribution condition (\ref{AngularCriterion}) holds if and only if, for every $X \subseteq \mathbb{S}^{n-1}$ whose boundary has $\mu_P$-measure zero, we have
\begin{equation} \label{KuipersSimplification}
\lim_{r \to \infty} \frac{ | T \cap W(X, r) | }{N(r)} = \mu_P(X) = \frac{ m( W(X, 1) )}{m(B_n) },
\end{equation}
\noindent where $\mu_P$ is the probability measure on $\mathbb{S}^{n-1}$.
\end{theorem}
\noindent This theorem is proven in \cite{Kuipers1974}. Making use of this result, we now prove a preliminary theorem from which the main volume heuristic can be derived. Loosely speaking, this theorem states that any sufficiently well-behaved region in $\mathbb{R}^n$ contains, in an appropriate limit, a number of points of a given equidistributed set proportional to its volume.
\begin{theorem}
For a given compact set $\Omega \subset \mathbb{R}^n$ such that $\sigma\mathbf{x} \in \Omega$ for all $\mathbf{x} \in \Omega$ and $0\leq \sigma\leq 1$, and such that $m(\partial \Omega) = 0$, let $\Omega(r)$ be $\Omega$ dilated by $r$ about $\mathbf{0}$. Also, let $T \subset \mathbb{R}^n$ satisfy the conditions given at the beginning of the paper.
We then have \begin{equation} \label{IntermediateEquidistributionResult} \frac{\left| T \cap \Omega(r) \right|}{N(r)} \sim \frac{m(\Omega)}{m(B_{n})}, \end{equation} where $N(r)$ is as defined at the beginning of the paper. \begin{proof} Essentially, our proof involves breaking up $\Omega$ directionally into small cones, applying the assumed angular equidistribution condition (Theorem 2.1), and summing the results. For a given unit $n$-vector $\mathbf{v}$, let $\lambda(\mathbf{v}) = \sup\{\lambda \geq 0 \mid \lambda \mathbf{v} \in \Omega\}$. Note that $\sigma\mathbf{v} \in \Omega\ \forall\ 0\leq \sigma < \lambda(\mathbf{v})$. \vspace{2mm} Define the diameter of a compact Borel set $X \subseteq \partial B_n$ to be \begin{equation} \label{DiameterDef} \Delta(X) = \sup_{\mathbf{x}, \mathbf{y} \in X} \Vert \mathbf{x} - \mathbf{y} \Vert . \end{equation} \noindent In other words, the diameter is the least upper bound on the distance between any two points in $X$. Consider a partition $\mathcal{P}(\delta)$ of $\partial B_n$ into Borel sets $X$ with $\Delta(X)<\delta \ \forall\ X \in \mathcal{P}(\delta)$ for some $\delta > 0$. For each $X \in \mathcal{P}(\delta)$, fix a representative $\mathbf{x} \in X$; we approximate our region $\Omega$ by the union of the wedges $W\left(X,\lambda(\mathbf{x})\right)$. Note that these regions are disjoint (because the sets in $\mathcal{P}(\delta)$ are pairwise disjoint). We see that as $\delta \rightarrow 0$, the union of these regions $W\left(X,\lambda(\mathbf{x})\right)$ tends towards $\Omega$ itself. As such, let $\Omega_{\delta}$ be the approximation of $\Omega$ by a partition $\mathcal{P}(\delta)$; then, $\lim_{\delta \to 0} \Omega_{\delta}(r) = \Omega(r)$.
We have that \begin{equation} \label{SumOverPartition} \left|T \cap \Omega_\delta(r) \right| = \bigg( \sum_{X \in \mathcal{P}(\delta)} \left|T \cap W(X,r\lambda(\mathbf{x}))\right| \bigg) \ ; \end{equation} \noindent dividing by $N(r)$, taking $r \to \infty$, and then $\delta \to 0$, we see that \begin{equation} \label{BeforeRadial} \lim_{r \to \infty} \frac{ \left|T \cap \Omega(r) \right|}{N(r)} = \lim_{\delta \rightarrow 0} \lim_{r \to \infty} \frac{1}{N(r)} \sum_{X \in \mathcal{P}(\delta)} \left|T \cap W(X,r\lambda(\mathbf{x}))\right| \ . \end{equation} \noindent Also, condition (\ref{AngularCriterion}) (equivalently (\ref{KuipersSimplification})) gives $$\lim_{r\to\infty} \frac{\left| T \cap W(X,r\lambda)\right|}{N(r\lambda)} = \frac{m(W(X,1))}{m(B_{n})}$$ \noindent for all $X\in \mathcal{P}(\delta)$ and any positive $\lambda$. Using condition (\ref{RadialCriterion}), we obtain \begin{equation} \label{RadialResult} \lim_{r\to\infty} \frac{\left|T \cap W(X,r\lambda)\right|}{N(r)} = \frac{m(W(X,\lambda))}{m(B_{n})}. \end{equation} \noindent Substituting (\ref{RadialResult}) with $\lambda = \lambda(\mathbf{x})$ into (\ref{BeforeRadial}), and noting that the resulting sum converges to the corresponding volume ratio as $\delta \to 0$, we may write $$\lim_{r\to\infty} \frac{\left| T \cap \Omega(r) \right|}{N(r)} = \lim_{\delta\to 0}\sum_{X \in \mathcal{P}(\delta)} \frac{m(W(X,\lambda(\mathbf{x}) ))}{m(B_{n})} = \frac{m(\Omega)}{m(B_{n})},$$ \noindent which is exactly (\ref{IntermediateEquidistributionResult}). \end{proof} \end{theorem} We now use the above result to prove the main theorem, which establishes the volume heuristic itself. This statement naturally extends the previous result (Theorem 2.2) to $2n$ dimensions, though the details of the extension are somewhat involved.
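Before proceeding, Theorem 2.2 is easy to illustrate numerically for the integer lattice $T = \mathbb{Z}^2$ (which satisfies both equidistribution conditions; see Section 4). Taking $\Omega = [0,1]^2$, which is compact, star-shaped about the origin, and has negligible boundary, the point-count ratio should approach $m(\Omega)/m(B_2) = 1/\pi$. A minimal sketch:

```python
import math

def disk_count(r):
    # N(r): points of Z^2 inside the open disk of radius r
    R = int(r)
    return sum(1 for x in range(-R, R + 1)
                 for y in range(-R, R + 1)
                 if x * x + y * y < r * r)

def square_count(r):
    # |Z^2 cap Omega(r)| for Omega = [0, 1]^2, i.e. integer points of [0, r]^2
    return (int(r) + 1) ** 2

r = 400.0
ratio = square_count(r) / disk_count(r)
print(abs(ratio - 1 / math.pi))  # small, and shrinking as r grows
```

At $r = 400$ the ratio already agrees with $1/\pi \approx 0.3183$ to within about half a percent.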
It turns out that $\mathbb{R}^{2n}$ is a far more convenient space in which to work with pair correlations (each pair consists of two independent points with $n$ degrees of freedom each), which is why the following theorem is of great importance. \begin{theorem}[Volume Heuristic] Let $K$ be the region in $2n$-space (with coordinates $x_1,x_2,...,x_{2n}$) defined by \begin{equation} \label{Bn2Definition} K = \left\{ \mathbf{x} = (x_1, \cdots, x_{2n}) \ \ \bigg| \ \sum_{i=1}^n x_i^2 \leq 1\mathrm{\ and\ }\sum_{i=1}^n x_{n+i}^2 \leq 1 \right\}, \end{equation} \noindent and let $\Psi \subseteq K$ be a compact region. Furthermore, let $K(r)$ be $K$ dilated by $r$, and let $\Psi(r)$ be $\Psi$ dilated by $r$. Also, define $T$ as before, and let $T \times T$ be the set of vectors formed by the concatenation of the two $n$-vectors $(x_1, \dots x_n)$ and $(x_{n+1}, \dots x_{2n})$, each of which is in $T$. Then, we have that \begin{equation} \label{VolumeHeuristicStatement} \lim_{r\to\infty} \frac{\big|(T \times T) \cap \Psi(r) \big|}{\big| (T \times T) \cap K(r) \big|} = \frac{m(\Psi)}{m(B_n)^2}. \end{equation} \begin{proof} Our strategy here is slightly different. Since we are trying to extract a $2n$-dimensional result from criteria in $n$ dimensions, we partition $\mathbb{R}^{2n}$ into cubes, apply Theorem 2.2 to the $n$-dimensional slices of these cubes, and take the Cartesian product. Before this, however, we prove an important lemma which shows that the contribution of points on the boundaries of these cubes is asymptotically negligible.
Let $\delta$ be a positive real number, and $\mathcal{H}_\delta$ be the following partition of $\mathbb{R}^{2n}$ into hypercubes of side length $\delta$: \begin{equation} \label{HDefinition} \mathcal{H}_\delta = \left\{ \left( \delta i_1 - \delta, \delta i_1 \right] \times \cdots \times \left( \delta i_{2n} - \delta , \delta i_{2n} \right] \ \Big|\ (i_1, \dots, i_{2n}) \in \mathbb{Z}^{2n} \right\}, \end{equation} \noindent where $\times $ is the Cartesian product. In other words, a part of $\mathcal{H}_{\delta}$ is a hypercube $\left( \delta i_1 - \delta , \delta i_1 \right] \times \cdots \times \left( \delta i_{2n} - \delta , \delta i_{2n} \right] $ for some integer $2n$-vector $(i_1, \dots i_{2n})$. Let the set of all such hypercubes contained entirely in $\Psi$ be $\mathfrak{C}_{\delta}$. We first prove the following lemma: \begin{lemma}[Boundary Points Lemma] Consider a face of an $n$-box $C$ with sides parallel to the axes; call it $\mathcal{F}$, and let $\mathcal{F}(r)$ be $\mathcal{F}$ dilated by $r$. Then $| T \cap \mathcal{F}(r) | = o(N(r)) $. \begin{proof} Without loss of generality, assume that $x_1$ is held constant for this face and all other coordinates may vary within the hypercube. Consider the cone $\Gamma(r)$ bounded by the segments from the origin to the boundary of $\mathcal{F}(r)$ and by the face $\mathcal{F}(r)$ itself (we let $\Gamma$ be $\Gamma(r)$ dilated by $1/r$ and define $\mathcal{F}$ similarly). Note that $\Gamma$ is compact, star-shaped about the origin, and has boundary of measure zero, so Theorem 2.2 applies to it. With this in mind, dilate $\Gamma(r)$ by $1 - \epsilon$ for some $\epsilon > 0$, and apply Theorem 2.2 to obtain \begin{equation} \label{OneMinusEpsilon} \lim_{r \to \infty} |T \cap \Gamma(r(1 - \epsilon))| = \lim_{r \to \infty} \frac{m(\Gamma) N(r(1 - \epsilon)) }{m(B_n)}.
\end{equation} \noindent Also, dilate $\Gamma(r)$ by $1+ \epsilon$ to get $\Gamma(r(1+\epsilon))$; then, we also obtain \begin{equation} \label{OnePlusEpsilon} \lim_{r \to \infty} |T \cap \Gamma(r(1+\epsilon))| = \lim_{r \to \infty} \frac{m(\Gamma)N(r(1 + \epsilon))}{m(B_n)}. \end{equation} \noindent Now, $|T \cap \mathcal{F}(r)| \leq |T \cap \Gamma(r(1 + \epsilon))| - | T \cap \Gamma(r(1-\epsilon))| $; we see this by noting that the dilation with factor $\frac{1 + \epsilon}{1 - \epsilon}$ mapping $\mathcal{F}(r(1 - \epsilon)) $ to $\mathcal{F}(r(1 + \epsilon))$ sweeps through $\mathcal{F}(r)$, and so $ \mathcal{F}(r) \subset \Gamma(r( 1 + \epsilon)) \backslash \Gamma(r( 1 - \epsilon)) $. It is therefore clear that \begin{equation} \label{PlusMinusBound} \lim_{r\to\infty} \frac{ |T \cap \mathcal{F}(r)| }{ N(r) } \leq \lim_{r\to\infty} \left[\frac{ m(\Gamma) }{ m(B_n) }\right]\left[\frac{N(r(1 + \epsilon)) - N(r(1 - \epsilon))}{N(r)}\right]. \end{equation} \noindent However, by the radial condition (\ref{RadialCriterion}), the right-hand side equals $\frac{m(\Gamma)}{m(B_n)}\left((1 + \epsilon)^n - (1 - \epsilon)^n\right)$, which vanishes as $\epsilon \to 0$; since the left-hand side does not depend on $\epsilon$, we are done. Thus, the contribution of points from each cube's boundary is negligible, and we henceforth neglect them in our considerations. \end{proof} \end{lemma} Note that this easily implies that the contributions of points of $T \times T$ on finite $(2n - 1)$-dimensional hypersurfaces (such as the faces of $2n$-cubes) can be neglected as well. We will make use of both the $n$-dimensional and the $2n$-dimensional implications of the boundary points lemma in what follows. \begin{lemma} Consider an $n$-box $C \subset \mathbb{R}^n$ whose sides are parallel to the axes and whose faces are closed, and let $C(r)$ be $C$ dilated by $r$ about $\mathbf{0}$. Then, we have that \begin{equation} \lim_{r \to \infty} \frac{ \left| T \cap C(r) \right| }{N(r)} = \frac{m(C)}{m(B_n)}.
\label{CubeLemma} \end{equation} \begin{proof} Note that by the previous theorem, for any $\Omega$ with $\mathbf{x} \in \Omega \Longrightarrow \sigma \mathbf{x} \in \Omega \ \forall \ 0 \leq \sigma \leq 1$, \[ \lim_{r \to \infty} \frac{\left| T \cap \Omega(r) \right|}{N(r)} = \frac{m(\Omega)}{m(B_{n})}.\] Assume $C(r)$ does not contain the origin, for otherwise, we would be done by Theorem 2.2. Moreover, assume that no face of $C(r)$ lies in a coordinate hyperplane of $\mathbb{R}^n$ (if one did, simply elongate $C(r)$ toward the origin in that direction until the origin is reached; then $C(r)$ is the set difference of the two boxes which pass through the origin, both of whose edges extend those of $C(r)$, and one of whose faces perpendicular to the axis in question is closer to the origin than the other). \vspace{2mm} With these reductions in place, let us without loss of generality assume that $C(r)$ lies in the first orthant. Now, let $\mathbf{v}_1$ be the vertex of $C(r)$ closest to the origin (for some large $r$), and let $\mathbf{v}_2$ be the vertex opposite $\mathbf{v}_1$ on the long diagonal. Define $U(r)$ to be the closed $n$-box with diagonally opposite vertices $\mathbf{0}$ and $\mathbf{v}_2$ and edges parallel to the coordinate axes, and let $\tilde{C}(r)$ be the closure of $U(r) \backslash C(r)$ (by Lemma 1, the number of points in $T$ on $\partial C(r)$ is negligible, so we may simply say $| T \cap \tilde{C}(r) | \sim | T \cap U(r) \backslash C(r) | $). Consider any point $\mathbf{x}$ on the boundary of $\tilde{C}(r)$. Then, it is easy to see that $\sigma\mathbf{x} \in \tilde{C}(r) \ \forall\ 0 \leq \sigma \leq 1$, since $\tilde{C}(r)$ extends all the way back to the origin in every direction $\mathbf{u}$ such that the components of $\mathbf{u}$ are nonnegative (and obviously every coordinate on the boundary of $\tilde{C}(r)$ is nonnegative). Therefore, we may apply Theorem 2.2 to $\tilde{C}(r)$. Note that we may obviously apply Theorem 2.2 to $U(r)$.
We obtain \begin{equation} \lim_{r \to \infty} \frac{\left| T \cap U(r) \right|}{N(r)} = \lim_{r \to \infty} \frac{m(U(r))}{m(B_{n}(r))} \label{EquidistributionOnU} \end{equation} \begin{equation} \lim_{r \to \infty} \frac{| T \cap \tilde{C}(r) |}{N(r)} = \lim_{r \to \infty} \frac{m(\tilde{C}(r) )}{m(B_{n}(r))} ; \label{EquidistributionOnCTilde} \end{equation} thus, because $| T \cap U(r) | - | T \cap U(r) \backslash C(r) | = |T \cap C(r)|$, \[ \lim_{r \to \infty} \frac{\left| T \cap C(r) \right|}{N(r)} = \lim_{r \to \infty} \frac{m(U(r)) - m(\tilde{C}(r) )}{m(B_{n}(r))} = \lim_{r \to \infty} \frac{m(C(r))}{m(B_n(r))} = \frac{m(C)}{m(B_n)}. \] \end{proof} \end{lemma} Now, dilate $\Psi$ and $\mathcal{H}_\delta$ by $r$ about $\mathbf{0}$; we consider a $2n$-cube $Q(r) \in \mathfrak{C}_{\delta}(r)$, where $\mathfrak{C}_{\delta}(r)$ consists of the cubes in $\mathfrak{C}_{\delta}$ dilated by $r$ about $\mathbf{0}$. Now, $Q(r) = Q_1(r) \times Q_2(r)$, where $Q_1(r)$ and $Q_2(r)$ are $n$-cubes in the space of $(x_1, \dots x_n)$ and $(x_{n+1}, \dots x_{2n})$, respectively. Consider a point in $Q_1(r)$ with $(x_1, \dots x_n) \in T$. Then, there are $\left| Q_2(r) \cap T \right|$ corresponding points in $Q_2(r)$ such that the combined point $(x_1, \dots x_{2n}) \in T \times T$. But there are $|Q_1(r) \cap T|$ such points in $Q_1(r)$ with $(x_1, \dots x_n) \in T$. Thus, we get $|Q_1(r) \cap T| |Q_2(r) \cap T| = | Q(r) \cap (T \times T) | $. Since $m(Q_1(r))m(Q_2(r)) = m(Q(r))$ (because all are Cartesian products of intervals), Lemma 2 gives that \begin{equation} \label{EachBox} \lim_{r \to \infty} \frac{ | (T \times T) \cap Q(r) | }{ N(r)^2} = \frac{m(Q)}{m(B_n)^2} , \end{equation} \noindent where we have used the $2n$-dimensional version of Lemma 1 to ignore the fact that some faces are open whereas others are closed.
Summing over the $Q(r)$ in (\ref{EachBox}), we obtain \begin{equation} \label{SumBox} \lim_{r \to \infty} \frac{1}{N(r)^2} \left| (T \times T) \cap \Bigg(\bigcup_{Q\in\mathfrak{C}_{\delta}} Q(r)\Bigg) \right| = \sum_{Q \in \mathfrak{C}_{\delta}}\frac{m(Q)}{m(B_n)^2}. \end{equation} \noindent Here the number of points omitted from consideration on the boundary of any of the $Q$ is negligible by Lemma 1. Taking the limit as $\delta\to 0$, the union of all of the $Q$-regions in $\mathfrak{C}_{\delta}$ tends towards $\Psi$, so we have that \begin{equation} \label{TcrossTResult} \lim_{r \to \infty}\frac{ | (T \times T) \cap \Psi(r) | }{ N(r)^2} = \frac{m(\Psi)}{m(B_n)^2} . \end{equation} \noindent To finish, we simply note that, for each point $(x_1, \dots x_n)$ in $T\cap B_n(r)$, there are $N(r)$ points $(x_{n+1}, \dots x_{2n}) \in T$ such that $(x_1, \dots x_{2n}) \in K(r)$. There are $N(r)$ such points $(x_1, \dots x_n)$, so $|(T \times T) \cap K(r) | = N(r)^2$, finishing the proof. \end{proof} \end{theorem} \section{Calculating the Pair Correlation Distribution} Given the volume heuristic, we have all the mathematical framework we need to compute the pair correlation function for equidistributed sets. All that remains are a few volume calculations in $2n$ dimensions, by means of which we can express the required distribution in terms of the incomplete Beta function (see below). \subsection{Definition of Special Functions} Before we begin calculating the pair correlation distribution, we first need to define some familiar special functions, which we will use to both calculate and represent the pair correlation distribution: \subsubsection{Gamma Function} The \textit{Gamma function} $\Gamma(z)$ is defined (\cite[Eq.~5.2.1]{DLMF}) as \begin{equation} \label{GammaDefinition} \Gamma(z) \equiv \int_0^{\infty} e^{-t} t^{z-1}\ dt.
\end{equation} \noindent It takes the value $(n-1)!$ at the positive integer $n$, and the value $\sqrt{\pi}$ at $1/2$ (\cite[Eq.~5.4.1]{DLMF},\cite[Eq.~5.4.6]{DLMF}). Furthermore, it obeys the following duplication formula: \begin{equation} \label{GammaDuplication} \Gamma(2z)=\frac{1}{\sqrt{\pi}} 2^{2z-1} \Gamma(z)\Gamma\left(z+\frac{1}{2}\right). \end{equation} \subsubsection{Beta Function} The \textit{Beta function} $\mathrm{B}(a,b)$ is defined (\cite[Eq.~5.12.1]{DLMF}) as \begin{equation} \label{BetaDefinition} \mathrm{B}(a,b) \equiv \int_0^1 t^{a-1} (1-t)^{b-1}\ dt = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}. \end{equation} \noindent It is also equal to (\cite[Eq.~5.12.2]{DLMF}) \begin{equation} \label{BetaTrig} \mathrm{B}(a,b) = 2\int_0^{\frac{\pi}{2}} \sin^{2a-1} \theta \cos^{2b-1}\theta\ d\theta. \end{equation} \subsubsection{Incomplete Beta Functions} The \textit{incomplete Beta function} $\mathrm{B}_x(a,b)$ is defined as (\cite[Eq.~8.17.1]{DLMF}) \begin{equation} \label{IncompleteBetaDefinition} \mathrm{B}_x(a,b) \equiv \int_0^x t^{a-1} (1-t)^{b-1}\ dt. \end{equation} \noindent It is also useful to define the \textit{regularized incomplete Beta function} (\cite[Eq.~8.17.2]{DLMF}) \begin{equation} \label{RegularizedIncompleteBetaDefinition} I_x(a,b) \equiv \frac{\mathrm{B}_x(a,b)}{\mathrm{B}(a,b)}. \end{equation} \noindent This obeys the reflection formula (\cite[Eq.~8.17.4]{DLMF}) \begin{equation} \label{BetaReflection} I_x(a,b)+I_{1-x}(b,a)=1. 
\end{equation} \subsection{Evaluation of the Volume of the Region} From the discussion in the introduction, the problem of pair correlations reduces to the following: We want to find the number of pairs of points $(a_1, a_2, ..., a_n)$ and $(b_1, b_2, ..., b_n)$ in $T$ satisfying \begin{equation} \label{MagnitudeConditions} \sum_{i=1}^n a_i^2 \leq R^2,\ \ \sum_{i=1}^n b_i^2 \leq R^2 , \end{equation} \begin{equation} \label{DistanceConditions} \sum_{i=1}^n \left(a_i-b_i\right)^2 \leq \lambda^2 R^2 \end{equation} \noindent for given $\lambda, R$. These conditions can be visualized as the intersection of three regions in $\mathbb{R}^{2n}$. Now, we are in a position to apply Theorem 2.3: the number of ordered pairs of points in $T$ within the required compact region $\Psi(R, \lambda)$ (defined by the above two equations) can be asymptotically $(R \to \infty)$ estimated by the following: \[ | (T \times T) \cap \Psi(R, \lambda) | \sim N(R)^2 \frac{ m (\Psi(R, \lambda)) }{m(B_n(R))^2} . \] \noindent However, note that $\Psi(R, 2) = B_n(R) \times B_n(R)$. Thus, \[ \frac{ | (T \times T) \cap \Psi(R, \lambda) | }{ | (T \times T) \cap \Psi(R, 2) | } \sim \frac{ m (\Psi(R, \lambda)) }{m(\Psi(R, 2)) } . \] Therefore, the distribution function of $\lambda$ computed from $| (T \times T) \cap \Psi(R, \lambda) |$ agrees asymptotically with the one computed from $m (\Psi(R, \lambda))$. We therefore completely disregard $T$ and focus only on computing the volume of $\Psi(R, \lambda)$. When computing a probability distribution, the scaling radius $R$ becomes irrelevant, so without loss of generality we set $R = 1$. We make the change of variables \begin{equation} \label{ChangeOfVars} a_i = v_i+\frac{u_i}{2},\ \ b_i = v_i-\frac{u_i}{2}. \end{equation} \noindent It is easy to verify that the Jacobian of this map is $1$.
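To spell this out: the change of variables acts independently on each coordinate pair $(u_i, v_i)$, so the full $2n \times 2n$ Jacobian is block diagonal with identical $2\times 2$ blocks, and its determinant is the product of the block determinants. A one-line check of the block determinant:

```python
# Per coordinate, (u_i, v_i) -> (a_i, b_i) = (v_i + u_i/2, v_i - u_i/2),
# so the 2x2 block d(a_i, b_i)/d(u_i, v_i) is:
J = [[0.5, 1.0],   # da/du, da/dv
     [-0.5, 1.0]]  # db/du, db/dv
block_det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print(block_det)  # 1.0, so the full 2n x 2n Jacobian has determinant 1.0**n = 1
```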
Note that $a_i-b_i = u_i$, so condition (\ref{DistanceConditions}) becomes \[\sum_{i=1}^n u_i^2 \leq \lambda^2\ \Leftrightarrow\ |\mathbf{u}|\leq \lambda,\] \noindent and the conditions in (\ref{MagnitudeConditions}) can be rephrased as requiring that the point $\mathbf{v} = (v_1, ..., v_n)$ is inside both of the $n$-balls of radius $1$ centered at each of \begin{equation} \label{ChangeOfVarsNewCenters} -\frac{\mathbf{u}}{2} = \left(-\frac{u_1}{2},...,-\frac{u_n}{2}\right)\ \mathrm{and}\ \frac{\mathbf{u}}{2} = \left(\frac{u_1}{2},...,\frac{u_n}{2}\right). \end{equation} \noindent The volume of this region is just twice the volume of the hyperspherical cap of the unit ball whose base lies at distance $\frac{|\mathbf{u}|}{2}$ from the center. (We define $r = |\mathbf{u}|$.) By the theorem in \cite{Li2011}, this volume is just \begin{equation} \label{CapVolume} \frac{\pi^{\frac{n}{2}}}{2\Gamma\left(\frac{n}{2}+1\right)}I_{1-\frac{r^2}{4}}\left(\frac{n+1}{2}, \frac{1}{2}\right). \end{equation} \noindent Thus, the volume of the region defined by a given vector $\mathbf{u}$ is \begin{equation} \label{CapVolumeIntegral} \frac{\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n+1}{2}\right)\Gamma\left(\frac{1}{2}\right)}\int_0^{1-\frac{r^2}{4}} t^{\frac{n-1}{2}}(1-t)^{-\frac{1}{2}} dt. \end{equation} \noindent To obtain the volume of the region described by (\ref{MagnitudeConditions}) and (\ref{DistanceConditions}), we must integrate this over the $n$-ball with radius $\lambda$. However, since our function depends only on $r$, we can multiply by the surface area of each hyper-spherical shell of radius $r$, and integrate from $r=0$ to $\lambda$.
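As a sanity check on the cap-volume formula (\ref{CapVolume}), take $n = 2$: the area of a cap of the unit disk whose base lies at distance $d$ from the center has the elementary closed form $\arccos(d) - d\sqrt{1-d^2}$. The sketch below evaluates the incomplete Beta integral with Simpson's rule (no special-function library assumed) and compares the two with $d = r/2$:

```python
import math

def reg_inc_beta(a, b, x, steps=2000):
    # regularized incomplete Beta I_x(a, b) via Simpson's rule on [0, x], x < 1
    h = x / steps
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    s = f(0.0) + f(x)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    b_x = s * h / 3
    return b_x * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

r = 1.0  # |u| = 1, so the cap base is at d = 0.5
cap_formula = math.pi / (2 * math.gamma(2)) * reg_inc_beta(1.5, 0.5, 1 - r ** 2 / 4)
cap_exact = math.acos(r / 2) - (r / 2) * math.sqrt(1 - r ** 2 / 4)
print(abs(cap_formula - cap_exact))  # small (well below 1e-3)
```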
This surface area formula is well-known (and found in \cite{Li2011}) to be \begin{equation} \label{SurfaceArea} \frac{2\pi^{\frac{n}{2}}}{\Gamma\left(\frac{n}{2}\right)}r^{n-1}, \end{equation} \noindent and thus the volume is \[\frac{2\pi^n}{\Gamma\left(\frac{n}{2}\right)\Gamma\left(\frac{n+1}{2}\right)\Gamma\left(\frac{1}{2}\right)}\int_0^\lambda r^{n-1}\int_0^{1-\frac{r^2}{4}} t^{\frac{n-1}{2}}(1-t)^{-\frac{1}{2}} dt\ dr.\] \noindent Interchanging the order of integration and simplifying using the incomplete Beta function, the expression of the volume reduces to the following: \begin{equation} \label{LambdaVolume} \frac{2\pi^n}{n\Gamma\left(\frac{n}{2}\right)\Gamma\left(\frac{n+1}{2}\right)\Gamma\left(\frac{1}{2}\right)} \Bigg[ \lambda^n\mathrm{B}_{1-\frac{\lambda^2}{4}}\left( \frac{n+1}{2}, \frac{1}{2}\right) + 2^n\mathrm{B}_{\frac{\lambda^2}{4}}\left( \frac{n+1}{2}, \frac{n+1}{2}\right) \Bigg] . \end{equation} \subsection{Deriving a Probability Density Function} To find the constant by which we need to divide to turn (\ref{LambdaVolume}) into a proper cumulative distribution function, we evaluate (\ref{LambdaVolume}) at $\lambda = 2$, which measures the volume of the whole region (given by (\ref{MagnitudeConditions}) only): \begin{equation} \label{EntireVolume} \frac{2\pi^n}{n\Gamma\left(\frac{n}{2}\right)\Gamma\left(\frac{n+1}{2}\right)\Gamma\left(\frac{1}{2}\right)} 2^n\mathrm{B}\left(\frac{n+1}{2}, \frac{n+1}{2}\right) = \frac{\pi^n2^n\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}+1\right)\sqrt{\pi}\Gamma(n+1)}.
\end{equation} \noindent The duplication formula (\ref{GammaDuplication}) simplifies this to \[\frac{\pi^n}{\Gamma\left(\frac{n}{2}+1\right)^2}, \] \noindent which is the square of the volume of an $n$-ball of radius $1$ (found in \cite{Li2011}), as expected (since each of the normalized $\mathbf{a}$ and $\mathbf{b}$ vectors lies in the $n$-ball of radius $1$). Dividing the expression (\ref{LambdaVolume}) by this, we get \begin{equation} \label{LambdaProbability} \frac{1}{\mathrm{B}\left(\frac{n+1}{2}, \frac{1}{2}\right)} \left(\lambda^n\mathrm{B}_{1-\frac{\lambda^2}{4}}\left(\frac{n+1}{2}, \frac{1}{2}\right) + 2^n\mathrm{B}_{\frac{\lambda^2}{4}}\left(\frac{n+1}{2}, \frac{n+1}{2}\right)\right), \end{equation} \noindent which is the probability that the distance between any two given points of $T$ in the $n$-ball with radius $R$ (sufficiently large) is less than $R\lambda$. To obtain the probability density function, we differentiate with respect to $\lambda$: \begin{equation} \label{FinalProbability} P_n(\lambda) = n\lambda^{n-1}I_{1-\frac{\lambda^2}{4}}\left(\frac{n+1}{2},\frac{1}{2}\right). \end{equation} \subsection{Specific Values of $n$} \noindent For $n=2$, the function simplifies to \begin{equation} \label{n=2} \frac{4\lambda}{\pi}\cos^{-1}\left(\frac{\lambda}{2}\right) - \frac{\lambda^2\sqrt{4-\lambda^2}}{\pi}, \end{equation} \noindent as is well documented in the literature. For $n=3$, the function simplifies to \begin{equation} \label{n=3} 3\lambda^2-\frac{9\lambda^3}{4}+\frac{3\lambda^5}{16}. \end{equation} \noindent Note that, for odd $n$, the probability density function will be a polynomial in $\lambda$ of degree $2n-1$ with rational coefficients. \vspace{2mm} \noindent We now examine what happens to the probability density function as $n$ tends to $\infty$.
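Before doing so, the closed forms above can be checked against the general density (\ref{FinalProbability}) numerically. The sketch below evaluates $I_x$ by Simpson's rule and compares at a few sample values of $\lambda$:

```python
import math

def reg_inc_beta(a, b, x, steps=2000):
    # regularized incomplete Beta I_x(a, b) via Simpson's rule on [0, x], x < 1
    h = x / steps
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    s = f(0.0) + f(x)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(k * h)
    return (s * h / 3) * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def P(n, lam):
    # the general density P_n(lambda)
    return n * lam ** (n - 1) * reg_inc_beta((n + 1) / 2, 0.5, 1 - lam ** 2 / 4)

def P2_closed(lam):
    return 4 * lam / math.pi * math.acos(lam / 2) \
           - lam ** 2 * math.sqrt(4 - lam ** 2) / math.pi

def P3_closed(lam):
    return 3 * lam ** 2 - 9 * lam ** 3 / 4 + 3 * lam ** 5 / 16

for lam in (0.5, 1.0, 1.5):
    assert abs(P(2, lam) - P2_closed(lam)) < 1e-3
    assert abs(P(3, lam) - P3_closed(lam)) < 1e-3
print("closed forms agree with the general formula")
```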
Consider the limit of the density (\ref{FinalProbability}) as $n \to \infty$, rewriting the incomplete Beta integral via the substitution $u=(1-t)^{1/2}$: \begin{equation} \label{LimitIntegralStatement} P_{\infty}(\lambda) \equiv \lim_{n\to\infty} \frac{2n\lambda^{n-1}}{\mathrm{B}\left(\frac{n+1}{2},\frac{1}{2}\right)}\int_{\frac{\lambda}{2}}^1 \left(1-u^2\right)^{\frac{n-1}{2}}\ du. \end{equation} \noindent We use the method of Laplace in \cite{Laplace1820} for asymptotically estimating integrals to obtain \begin{equation} \label{LimitSubst1} P_{\infty} (\lambda) = \lim_{n\to\infty} C(\lambda)\frac{n\lambda^{n-1}}{\mathrm{B}\left(\frac{n+1}{2},\frac{1}{2}\right)}\sqrt{\frac{2\pi}{\frac{n-1}{2}}}\left(1-\frac{\lambda^2}{4}\right)^{\frac{n-1}{2}}, \end{equation} \noindent where $C(\lambda)$ is some positive value independent of $n$. By Stirling's approximation, we have that \begin{equation} \label{PinftyAsymptotics} P_{\infty} (\lambda) = C(\lambda)\sqrt{2}\lim_{n\to\infty} n\lambda^{n-1}\left(1-\frac{\lambda^2}{4}\right)^{\frac{n-1}{2}}. \end{equation} \noindent Letting $\lambda=2\sin\theta$ simplifies this to $C(\lambda)\sqrt{2}\ n\sin^{n-1}(2\theta)$, which tends to $0$ everywhere except $\lambda=\sqrt{2}$, where it tends to $\infty$. Thus, this probability density function can be said to equal $\delta(\lambda-\sqrt{2})$, where $\delta$ is the Dirac delta function. \section{Specific sets $T\subset \mathbb{R}^n$} In \cite{Wills1973}, \cite{Steinhaus1947}, and \cite{Clark2013}, it is proven that for any subset $\Omega\subset \mathbb{R}^n$, the number of lattice points or primitive lattice points inside $\Omega$ dilated by $r$ grows like $m(\Omega)r^n$ times a constant ($1$ for lattice points and $1/\zeta(n)$ for primitive lattice points). Therefore, both equidistribution conditions are satisfied (the angular one by Theorem 2.1), and the probability density function (\ref{FinalProbability}) is also the probability density function for the normalized distance between pairs of lattice and primitive lattice points.
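As a small-scale version of the checks reported below, one can compare the empirical mean of the normalized pairwise distances over $\mathbb{Z}^2 \cap B_2(R)$ with the mean predicted by the closed form (\ref{n=2}); the radius $R = 15$ here is chosen only to keep the computation quick:

```python
import math

def P2(lam):
    # closed-form planar density (the n = 2 expression in the text)
    return 4 * lam / math.pi * math.acos(lam / 2) \
           - lam ** 2 * math.sqrt(4 - lam ** 2) / math.pi

# predicted mean of the normalized distance: Simpson's rule for
# the integral of lam * P2(lam) over [0, 2]
steps = 2000
h = 2.0 / steps
g = lambda lam: lam * P2(lam)
s = g(0.0) + g(2.0)
for k in range(1, steps):
    s += (4 if k % 2 else 2) * g(k * h)
pred = s * h / 3

# empirical mean over the lattice points of Z^2 inside the disk of radius R
R = 15
pts = [(x, y) for x in range(-R, R + 1) for y in range(-R, R + 1)
       if x * x + y * y < R * R]
total = n_pairs = 0
for i, p in enumerate(pts):
    for q in pts[i + 1:]:
        total += math.dist(p, q) / R
        n_pairs += 1
emp = total / n_pairs
print(round(pred, 3), round(emp, 3))  # both close to 0.905
```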
\subsection{Numerical Results} We may verify with simple numerical checks that the probability density (\ref{FinalProbability}) agrees extremely well with the actual distribution of normalized distances between lattice and primitive lattice points enclosed within an $n$-ball of radius $R$. Here, we present results for $n = 2$, $R = 300$ and $n = 3$, $R = 30$: \begin{figure}[H] \centering \includegraphics[scale=1.20]{Distances300_2Overlay} \caption{This graph shows the distribution of distances for $n = 2$ and $R = 300$ over the lattice points (Gaussian integers). The histogram (blue) plots the relative frequency with which a certain normalized distance occurs. The curve (red) represents the (appropriately scaled) probability distribution given by (\ref{FinalProbability}). } \end{figure} \begin{figure}[H] \centering \includegraphics[scale=1.20]{Distances30_3Overlay} \caption{This graph shows the distribution of distances for $n = 3$ and $R = 30$ over the lattice points. } \end{figure} \begin{figure}[H] \centering \includegraphics[scale = 1.20]{Distances300_2PrimitiveOverlay} \caption{This graph shows the distribution of distances for $n = 2$ and $R = 300$ over the primitive lattice points. } \end{figure} \begin{figure}[H] \centering \includegraphics[scale = 1.20]{Distances30_3PrimitiveOverlay} \caption{This graph shows the distribution of distances for $n = 3$ and $R = 30$ over the primitive lattice points. } \end{figure} \section*{Acknowledgements} We would like to thank Dr. Jayadev Athreya at the University of Washington for his continual support and encouragement throughout the course of our research.
https://arxiv.org/abs/1803.07744
Passivity and Evolutionary Game Dynamics
This paper investigates an energy conservation and dissipation -- passivity -- aspect of dynamic models in evolutionary game theory. We define a notion of passivity using the state-space representation of the models, and we devise systematic methods to examine passivity and to identify properties of passive dynamic models. Based on the methods, we describe how passivity is connected to stability in population games and illustrate stability of passive dynamic models using numerical simulations.
\section{Introduction} \label{sec_introduction} Of central interest in evolutionary game theory \cite{weibull1995_mit, hofbauer2003_ams} is the study of strategic interactions among players in large populations. Each player engaged in a game chooses a strategy among a finite set of options and repeatedly revises its strategy choice in response to given payoffs. The distribution of strategy choices by the players defines the population state, and evolutionary dynamics represent the strategy revision process and prompt the time-evolution of the population state. One of the main focuses of the study is analyzing the population state trajectories induced by evolutionary dynamics to identify their asymptotes and to establish \textit{stability} in population games. This paper contributes to the study by investigating a passivity -- an abstraction of energy conservation and dissipation -- aspect of evolutionary dynamic models (EDMs) specifying the dynamics and by demonstrating how passivity yields stability in population games. The most relevant work in the literature investigates various aspects of evolutionary dynamics and associated stability concepts in population games. To mention a few, Brown and von Neumann \cite{brown1950_ams} studied \textit{Brown-von Neumann-Nash (BNN)} dynamics to examine the existence of optimal strategy choices in a zero-sum two-player game. Taylor and Jonker \cite{taylor1978_mb} studied \textit{replicator} dynamics and established a connection between \textit{evolutionarily stable strategies} \cite{maynard_smith1973} and stable equilibrium points of replicator dynamics. The result was later strengthened by Zeeman \cite{zeeman1980_gtds}, who also proposed a stability concept for games under replicator dynamics. Gilboa and Matsui \cite{gilboa1991_econometrica} considered \textit{cyclic stability} in games under best-response dynamics. In succeeding work, stability results are established using broader classes of evolutionary dynamics.
Swinkels \cite{swinkels1993_geb} considered a class of \textit{myopic adjustment} dynamics and studied \textit{strategic stability} of Nash equilibria under these dynamics. Ritzberger and Weibull \cite{ritzberger1995_econometrica} considered a class of \textit{sign-preserving selection} dynamics and studied asymptotic stability of faces of the population state space under these dynamics. In a recent development of evolutionary game theory, Hofbauer and Sandholm \cite{hofbauer2009_jet} proposed \textit{stable games} and established stability of Nash equilibria of these games under a large class of evolutionary dynamics. The class includes \textit{excess payoff/target (EPT)} dynamics, \textit{pairwise comparison} dynamics, and \textit{perturbed best response (PBR)} dynamics. Fox and Shamma \cite{fox2013_games} later revealed that the aforementioned class of evolutionary dynamics exhibits passivity. Based on passivity methods from dynamical system theory, the authors established $\mathbb L_2$-stability of evolutionary dynamics in a generalized class of stable games. Mabrok and Shamma \cite{7799211} discussed passivity for higher-order evolutionary dynamics. In particular, the authors investigated a connection between passivity of linearized dynamics and stability in generalized stable games using robust control methods. Passivity methods are also adopted in games over networks to analyze the stability of Nash equilibria \cite{gadjov2017_arxiv}. Inspired by the passivity analysis presented in \cite{fox2013_games}, we investigate the notion of passivity for EDMs of evolutionary dynamics in more depth. \textbf{Our main goals are} \textbf{(i)} to define passivity of EDMs that admit realizations in a finite-dimensional state space; \textbf{(ii)} to develop methods to examine passivity and to identify various properties of passive EDMs; and \textbf{(iii)} based on the above results, to establish a connection between passivity and stability in population games.
\subsection{Summary of the Main Contributions} \begin{enumerate} \item We define and characterize the notion of $\delta$-passivity for EDMs that admit a state-space representation. Based on the characterization result, we demonstrate how to examine $\delta$-passivity of EDMs for the replicator dynamics, EPT dynamics, pairwise comparison dynamics, and PBR dynamics. \item We investigate certain properties of $\delta$-passive EDMs with respect to payoff monotonicity and total payoff perturbation. \item Based on the above results, we establish a connection between $\delta$-passivity and asymptotic stability of Nash equilibria in population games. We also illustrate the stability using numerical simulations. \end{enumerate} \subsection{Paper Organization} In Section \ref{sec:background}, we present background materials on evolutionary game theory that are needed throughout the paper. In Section \ref{sec:passive_evo_dyn}, we define $\delta$-passivity of EDMs and present an algebraic characterization of $\delta$-passivity in terms of the vector field that describes the state-space representation of EDMs. In Section \ref{sec:properties}, using the characterization result, we investigate properties of $\delta$-passive EDMs. In Section \ref{sec:numerical_ex}, we establish a connection between $\delta$-passivity and asymptotic stability in population games, and present some numerical simulation results to illustrate the stability. We end the paper with conclusions in Section \ref{sec:conclusion}. \subsection{Notation} \begin{itemize} \item $a_i$ -- given a vector $a$ in $\mathbb R^n$, we denote its $i$-th entry as $a_i$. \item $[a]_+$, $[a_i]_+$ -- the non-negative part of a vector ${a = \begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix}^T}$ in $\mathbb R^n$ and its $i$-th entry defined by $[a]_+ \overset{\mathrm{def}}{=} \begin{pmatrix} [a_1]_+ & \cdots & [a_n]_+\end{pmatrix}^T$ and ${[a_i]_+ \overset{\mathrm{def}}{=} \max\{a_i, 0\}}$, respectively.
\item $\mathbf 1, I, e_i$ -- the vector with all entries equal to $1$, the identity matrix, and the $i$-th column of $I$, respectively.\footnote{We omit the dimensions of vectors and matrices whenever they are clear from context.} \item $\nabla_x f, \nabla_x^2 f$ -- the gradient and Hessian of a real-valued function $x \mapsto f(x)$ with respect to $x$, respectively, provided that they exist. \item $D \mathcal F$ -- the differential of a mapping $\mathcal F: \mathbb X \to \mathbb R^n$. \item $\mathrm{int}(\mathbb X ), \mathrm{bd}(\mathbb X)$ -- the (relative) interior and boundary of a set $\mathbb X$ in its affine hull, respectively. \item $\mathbb R_+^n \left( \mathbb R_-^n \right)$ -- the set of $n$-dimensional element-wise non-negative (non-positive) vectors. For $n=1$, we omit the superscript $n$ and adopt $\mathbb R_+ \left( \mathbb R_- \right)$. \item $\| \cdot \|$ -- the Euclidean norm. \end{itemize} \section{Background on Evolutionary Game Theory} \label{sec:background} Consider a population of players engaged in a game where each player selects a (pure) strategy from the set of available strategies denoted by $\{1, \cdots, n\}$.\footnote{Population games, in general, account for multiple populations of players, and the strategy sets are allowed to be distinct across populations (see \cite{hofbauer2009_jet} for details). However, for simplicity and clarity of presentation, we restrict our attention to the single-population case.} Suppose that the population consists of a continuum of players. The population states, which describe the distribution of strategy choices by players, constitute a simplex defined by ${\mathbb X \overset{\mathrm{def}}{=} \{x \in \mathbb R_+^n \,|\, \sum_{i=1}^n x_i = 1 \}}$ where $x_i$ is the fraction of the population choosing strategy $i$. Let us denote the tangent space of $\mathbb X$ by $\mathbb {TX} = \left\{ z \in \mathbb R^n \,|\, \sum_{i=1}^n z_i = 0 \right\}$.
In population games, an element $p$ in $\mathbb R^n$, called the payoff vector, is assigned to the population state $x$ by payoff functions, where the $i$-th entry $p_i$ of $p$ represents the payoff given to the players choosing strategy $i$. In this work, we mainly focus on investigating passivity of evolutionary dynamic models and briefly demonstrate how passivity yields stability in population games. For the latter purpose, we consider the payoff functions given below: \begin{enumerate} \item \textit{Static Payoff \cite{hofbauer2009_jet}:} \begin{align} \label{eq:static_payoff} p(t) = \mathcal F(x(t)) \end{align} \item \textit{Smoothed Payoff \cite{fox2013_games}:} \begin{align} \label{eq:smoothed_payoff} \dot p(t) = \lambda (\mathcal F(x(t)) - p(t)) \end{align} \end{enumerate} In \eqref{eq:smoothed_payoff}, we note that at the equilibrium points, it holds that $p(t) = \mathcal F(x(t))$. Throughout the paper, we adopt the following definition of a Nash equilibrium. \begin{mydef} Given a payoff function $\mathcal F: \mathbb X \to \mathbb R^n$, the population state $x^{\mathrm{NE}} \in \mathbb X$ is a Nash equilibrium of $\mathcal F$ if it holds that \begin{align*} x_i^{\mathrm{NE}}>0 \text{ implies } i \in \argmax_{j \in \{1, \cdots, n\}} \mathcal F_j(x^{\mathrm{NE}}) \end{align*} \end{mydef} \subsection{Evolutionary Dynamic Models} \label{sec:evo_dyn} Evolutionary dynamics describe how the population state evolves over time in response to given payoffs and are specified by evolutionary dynamic models (EDMs). We focus on EDMs that admit a state-space representation of the following form: \begin{align} \label{eq:edm} \dot x(t) = \mathcal V(p(t), x(t)), ~ x(0) \in \mathbb X \end{align} where $p(t)$, $x(t)$, and $\dot x(t)$ take values in $\mathbb R^n$, $\mathbb X$, and $\mathbb {TX}$, respectively.
We assume that the vector field $\mathcal V: \mathbb R^n \times \mathbb X \to \mathbb {TX}$ is \textit{well-defined} in the sense that it continuously depends on $(p(t), x(t))$, and for each initial value $x(0)$ in $\mathbb X$ and each payoff vector trajectory $p(\cdot): \mathbb R_+ \to \mathbb R^n$ in $\mathfrak P$, there exists a unique solution $x(\cdot): \mathbb R_+ \to \mathbb X$ to \eqref{eq:edm} that belongs to $\mathfrak X$, where $\mathfrak P$ and $\mathfrak X$ are defined by {\small \begin{align*} \mathfrak P &\overset{\mathrm{def}}{=} \left\{ p (\cdot) \,\bigg|\, p(t) \in \mathbb R^n \text{ and } \int_0^t \|\dot p(\tau)\|^2 \, \mathrm d \tau < \infty \text{ for $t \in \mathbb R_+$} \right\} \\ \mathfrak X &\overset{\mathrm{def}}{=} \left\{ x (\cdot) \,\bigg|\, x(t) \in \mathbb X \text{ and } \int_0^t \|\dot x(\tau)\|^2 \, \mathrm d \tau < \infty \text{ for $t \in \mathbb R_+$} \right\} \end{align*}} Throughout the paper, we assume that EDMs satisfy the following path-connectedness condition. \begin{assump} [Path-Connectedness] \label{assump:path_connected} We define the set of equilibrium points of \eqref{eq:edm} by ${\mathbb S \overset{\mathrm{def}}{=} \{ (p, x) \in \mathbb R^n \times \mathbb X \,|\, \mathcal V(p,x) = \mathbf 0 \}}$ and, for each $x$ in $\mathbb X$, its slice $\mathbb S_x \overset{\mathrm{def}}{=} \{ p \in \mathbb R^n \,|\, (p, x) \in \mathbb S \}$. We assume that the set $\mathbb S_x$ is path-connected for every $x$ in $\mathbb X$. In other words, for every $p_0, p_1$ in $\mathbb S_x$, there exists a piecewise smooth path from $p_0$ to $p_1$ that is contained in $\mathbb S_x$. \hfill $\square$ \end{assump} We list below a few examples of EDMs found in the literature.
\begin{ex} \label{ex:evo_dynamics} For each $i$ in $\{1, \cdots, n\}$, \begin{enumerate} \item \textit{Replicator Dynamics \cite{taylor1978_mb}: } \begin{align} \dot{x}_i(t) = x_i(t) \left( p_i(t) - p^T(t)x(t)\right) \label{eq:rep} \end{align} \item \textit{BNN Dynamics \cite{brown1950_ams}: } \begin{align} \dot{x}_i(t) = [\hat{p}_i(t)]_{+} - x_i(t) \sum_{j=1}^{n} [\hat{p}_j(t)]_{+} \label{eq:bnn} \end{align} \item \textit{Smith Dynamics \cite{doi:10.1287/trsc.18.3.245}: } \begin{align} \dot{x}_i(t) &= \sum_{j=1}^{n} x_j(t) [p_i(t) - p_j(t)]_{+} \nonumber \\ &\quad- x_i(t) \sum_{j=1}^{n} [p_j(t) - p_i(t)]_{+} \label{eq:smith} \end{align} \item \textit{Logit Dynamics \cite{hofbauer2007_jet}: } \begin{align} \dot x_i(t) = \frac{\exp(\eta^{-1} p_i(t))}{\sum_{j=1}^{n} \exp(\eta^{-1} p_j(t))} - x_i(t) \label{eq:logit} \end{align} \end{enumerate} where $\hat p = \begin{pmatrix} \hat p_1 & \cdots & \hat p_n \end{pmatrix}^T$ in \eqref{eq:bnn} is the \textit{excess payoff vector} defined as $\hat{p} = p - p^T x \mathbf{1}$ and $\eta$ in \eqref{eq:logit} is a positive constant. \hfill $\square$ \end{ex} It can be verified that all the EDMs described in Example \ref{ex:evo_dynamics} satisfy the path-connectedness condition of Assumption \ref{assump:path_connected}. For instance, consider the set $\mathbb S = \{(p,x) \,|\, x \in \operatorname*{arg\,max}_{y \in \mathbb{X}} p^Ty\}$, which is the set of equilibrium points of \eqref{eq:bnn} and \eqref{eq:smith}. It can be verified that if both $(p_0,x)$ and $(p_1,x)$ belong to $\mathbb S$, then so does ${(\lambda p_0 + (1-\lambda) p_1, x)}$ for all $\lambda$ in $[0,1]$. \subsection{Passivity and Stability} As will be presented in Section \ref{sec:passive_evo_dyn}, passivity of EDMs allows us to construct an energy function that can be used to identify asymptotes of population state trajectories induced by EDMs in a class of population games.
This observation naturally explains asymptotic stability of equilibrium points of EDMs. After we present our main results on passivity, we show how passivity is connected to stability in population games and provide numerical simulation results to illustrate the stability. We defer an in-depth stability analysis of passive EDMs in a large class of population games to a future extension of this paper. \section{$\delta$-Passivity of Evolutionary Dynamic Models} \label{sec:passive_evo_dyn} We proceed by defining $\delta$-passivity of EDMs, and then characterize the $\delta$-passivity conditions in terms of the vector field $\mathcal V$ in \eqref{eq:edm}. Using the characterization, we examine $\delta$-passivity of representative EDMs in the literature and explore important properties of $\delta$-passive EDMs. \subsection{Definition of $\delta$-Passivity} Given an EDM \eqref{eq:edm} with a payoff vector trajectory $p(\cdot) \in \mathfrak P$ and a resulting population state trajectory $x(\cdot) \in \mathfrak X$, consider the following inequality: For $t \geq t_0 \geq 0$, \begin{align} \label{eq:passivity_inequality} &\int_{t_0}^t \left[ \dot p^T(\tau) \dot x(\tau) - \eta \dot x^T(\tau) \dot x (\tau) \right] \,\mathrm d \tau \nonumber \\ &\geq \mathcal S(p(t), x(t)) - \mathcal S(p(t_0), x(t_0)) \end{align} where $\mathcal S: \mathbb R^n \times \mathbb X \to \mathbb R_+$ is a $\mathcal C^1$ function and $\eta$ is a non-negative constant. Using \eqref{eq:passivity_inequality}, we state the definition of $\delta$-passivity as follows. \begin{mydef} \label{def:passivity} Given an EDM \eqref{eq:edm}, we consider the following two cases: \begin{enumerate} \item The EDM is \textit{$\delta$-passive} if there is a $\mathcal C^1$ function $\mathcal S$ for which \eqref{eq:passivity_inequality} holds with $\eta = 0$ for every $p(\cdot)$ in $\mathfrak P$.
\item The EDM is \textit{strictly output $\delta$-passive} with index $\eta>0$ if there is a $\mathcal C^1$ function $\mathcal S$ for which \eqref{eq:passivity_inequality} holds for every $p(\cdot)$ in $\mathfrak P$.\footnote{It immediately follows from Definition \ref{def:passivity} that strict output $\delta$-passivity implies $\delta$-passivity.} \end{enumerate} \end{mydef} We refer to $\mathcal S$ as a \textit{storage function}. The function $\mathcal S$ describes the energy stored in the EDM, and the $\delta$-passivity inequality \eqref{eq:passivity_inequality} suggests that the variation in the stored energy $\mathcal S(p(t), x(t)) - \mathcal S(p(t_0), x(t_0))$ is upper bounded by the supplied energy $\int_{t_0}^t [ \dot p^T(\tau) \dot x(\tau) - \eta \dot x^T(\tau) \dot x(\tau) ] \,\mathrm d\tau$. We adopt the following definition of a strict storage function. \begin{mydef} Let $\mathcal S: \mathbb R^n \times \mathbb X \to \mathbb R_+$ be a storage function of a $\delta$-passive EDM \eqref{eq:edm}. We call $\mathcal S$ \textit{strict} if the following condition holds: For all $(p,x)$ in $\mathbb R^n \times \mathbb X$, \begin{align} \nabla_x^T \mathcal S(p, x) \mathcal V(p,x) = 0 \text{ if and only if } \mathcal V(p,x)=\mathbf 0 \end{align} \end{mydef} \subsection{Characterization of $\delta$-Passivity Condition} \label{sec:algebraic_conditions} Let us consider the following relations: For $(p,x)$ in ${\mathbb R^n \times \mathbb X}$, \begin{align} \nabla_p \mathcal S(p, x) &= \mathcal V(p, x) \tag{\textbf{P1}} \label{eq:p_condition_01} \\ \nabla_x^T \mathcal S(p, x) \mathcal V(p, x) & \leq - \eta \mathcal V^T(p, x) \mathcal V(p, x) \tag{\textbf{P2}} \label{eq:p_condition_02} \end{align} where $\mathcal S: \mathbb R^n \times \mathbb X \to \mathbb R_+$ is a $\mathcal C^1$ function, $\mathcal V: \mathbb R^n \times \mathbb X \to \mathbb T\mathbb X$ is the vector field in \eqref{eq:edm}, and $\eta$ is a non-negative constant. 
In the following theorem, using \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02}, we characterize the $\delta$-passivity condition. \begin{thm} \label{thm:passivity_edm} For a given EDM \eqref{eq:edm}, the following two statements are true: \begin{enumerate} \item The EDM is \textit{$\delta$-passive} if there is a $\mathcal C^1$ function $\mathcal S$ satisfying \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02} with $\eta = 0$. \item The EDM is \textit{strictly output $\delta$-passive} with index $\eta>0$ if there is a $\mathcal C^1$ function $\mathcal S$ satisfying \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02}. \end{enumerate} \end{thm} The proof of Theorem \ref{thm:passivity_edm} is given in Appendix \ref{proof:passivity_edm}. The conditions \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02} can be interpreted as follows. The condition \eqref{eq:p_condition_01} ensures that the integral $\int_P \mathcal V(\mathbf p, x) \cdot \mathrm d\mathbf p$ does not depend on the choice of the path $P$. This condition is an important requirement in establishing stability \cite{sandholm2013_dga}. The condition \eqref{eq:p_condition_02} implies that, with the payoff vector fixed, the population state $x(t)$ evolves along a trajectory for which the function $\mathcal S$ decreases. This condition plays an important role in identifying \textit{stable} equilibria of EDMs. \subsection{Assessment of $\delta$-Passivity} Using Theorem \ref{thm:passivity_edm}, we evaluate $\delta$-passivity of the following four EDMs. The definition of a revision protocol given below is used to describe some of the EDMs. \begin{mydef} \label{def:revision_proc} A function $\varrho = ( \varrho_1 ~ \cdots ~ \varrho_n )^T$ is called a \textit{revision protocol} if each entry is a function $\varrho_i: \mathbb R^n \to \mathbb R_+$.
\end{mydef} \subsubsection{Replicator Dynamics \cite{taylor1978_mb}} For each $i$ in $\{1, \cdots, n\}$, \begin{align} \label{eq:rep_dyn} \dot x_i(t) = x_i(t) \left( p_i(t) - p^T(t) x(t) \right) \end{align} \begin{prop} \label{prop:rep_dyn} The EDM \eqref{eq:rep_dyn} is not $\delta$-passive. \end{prop} \subsubsection{Excess Payoff/Target (EPT) Dynamics \cite{sandholm2005_jet}} For each $i$ in $\{1, \cdots, n\}$, \begin{align} \dot x_i(t) = \varrho_i (\hat p(t)) - x_i(t) \mathbf 1^T \varrho(\hat p(t)) \label{eq:ept_dyn} \end{align} where $\hat p$ is the excess payoff vector defined as $\hat p = p - p^T x \mathbf 1$, and the revision protocol $\varrho = ( \varrho_1 ~ \cdots ~ \varrho_n )^T$ satisfies the following two conditions -- Integrability \eqref{eq:integrability} and Acuteness \eqref{eq:acuteness}: \begin{align} & \nabla_{\hat p} \gamma(\hat p) = \varrho(\hat p) \tag{\textbf I} \label{eq:integrability} \\ & \hat p^T \varrho (\hat p) > 0 \text{ if } \hat p \in \mathbb R^n \setminus \mathbb R^n_- \tag{\textbf A} \label{eq:acuteness} \end{align} where $\gamma: \mathbb R^n \to \mathbb R$ is a $\mathcal C^1$ function. Note that \eqref{eq:bnn} is a particular case of \eqref{eq:ept_dyn}. The following proposition establishes $\delta$-passivity of \eqref{eq:ept_dyn}. \begin{prop} \label{prop:ept_passivity} The EDM \eqref{eq:ept_dyn} is $\delta$-passive and has a strict storage function given by $\mathcal S(p,x) = \gamma(\hat p)$ where $\gamma: \mathbb R^n \to \mathbb R_+$ is a non-negative $\mathcal C^1$ function satisfying \eqref{eq:integrability}.\footnote{The condition \eqref{eq:integrability} alone only ensures the existence of a $\mathcal C^1$ function $\gamma$, which may take negative values.
However, in the proof of Proposition \ref{prop:ept_passivity}, based on Assumption \ref{assump:path_connected} we show that there is a choice of $\gamma$ that is non-negative and satisfies \eqref{eq:integrability}.} \end{prop} \subsubsection{Pairwise Comparison Dynamics \cite{sandholm2010_games}} For each $i$ in $\{1, \cdots, n\}$, \begin{align} \dot x_i(t) &= \sum_{j=1}^nx_j(t) \varrho_i (p_i(t) - p_j(t)) \nonumber \\ &\quad - x_i(t) \sum_{j=1}^n \varrho_j (p_j(t) - p_i(t)) \label{eq:pc_dyn} \end{align} The revision protocol $\varrho = (\varrho_1 ~ \cdots ~ \varrho_n )^T$ satisfies the following condition -- Sign Preservation \eqref{eq:sign_preservation}:\footnote{The function $\mathrm{sgn}: \mathbb{R} \to \{-1, 0, 1\}$ is defined as ${\mathrm{sgn}(a) \overset{\mathrm{def}}{=} \begin{cases} 1 & \text{ if } a>0 \\ 0 & \text{ if } a=0 \\ -1 & \text{ if } a<0 \end{cases}}$.} \begin{align} \mathrm{sgn} ( \varrho_{i}(p_i - p_j) ) = \mathrm{sgn} ( [ p_i - p_j ]_{+} ) \tag{\textbf{SP}} \label{eq:sign_preservation} \end{align} Note that \eqref{eq:smith} is a particular case of \eqref{eq:pc_dyn}. 
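As a concrete illustration of the integrability condition \eqref{eq:p_condition_01}, for the Smith dynamics \eqref{eq:smith} (i.e., \eqref{eq:pc_dyn} with $\varrho_j(s) = [s]_+$) the storage function of \cite{fox2013_games} reduces to $\mathcal S(p,x) = \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n x_i [p_j - p_i]_+^2$, and its gradient with respect to $p$ recovers the vector field. A minimal numerical sketch of this check (assuming NumPy; the finite-difference verification is ours, not part of the paper):

```python
import numpy as np

def smith_vector_field(p, x):
    # Smith dynamics: inflow minus outflow for each strategy i
    n = len(p)
    return np.array([
        sum(x[j] * max(p[i] - p[j], 0.0) for j in range(n))
        - x[i] * sum(max(p[j] - p[i], 0.0) for j in range(n))
        for i in range(n)
    ])

def smith_storage(p, x):
    # S(p, x) = sum_{i,j} x_i * int_0^{p_j - p_i} [s]_+ ds
    #         = (1/2) sum_{i,j} x_i [p_j - p_i]_+^2
    n = len(p)
    return 0.5 * sum(x[i] * max(p[j] - p[i], 0.0) ** 2
                     for i in range(n) for j in range(n))

def grad_p(f, p, x, h=1e-6):
    # central finite differences of f with respect to p
    g = np.zeros(len(p))
    for k in range(len(p)):
        e = np.zeros(len(p))
        e[k] = h
        g[k] = (f(p + e, x) - f(p - e, x)) / (2.0 * h)
    return g

rng = np.random.default_rng(0)
p = rng.normal(size=4)
x = rng.random(4)
x /= x.sum()

# (P1): the p-gradient of the storage function recovers the vector field,
# and the vector field lies in the tangent space TX (entries sum to zero)
print(np.allclose(grad_p(smith_storage, p, x), smith_vector_field(p, x), atol=1e-4))
print(abs(smith_vector_field(p, x).sum()) < 1e-9)
```

The same check applies to the BNN dynamics \eqref{eq:bnn} with $\gamma(\hat p) = \frac{1}{2}\sum_{i=1}^n [\hat p_i]_+^2$.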
The following proposition establishes $\delta$-passivity of \eqref{eq:pc_dyn}. \begin{prop} \cite{fox2013_games} \label{prop:pc_passivity} The EDM \eqref{eq:pc_dyn} is $\delta$-passive and has a strict storage function given by $$\mathcal S (p, x) = \sum_{i=1}^n \sum_{j=1}^n x_i \int_{0}^{p_j-p_i} \varrho_j(s) \,\mathrm ds$$ \end{prop} \subsubsection{Perturbed Best Response (PBR) Dynamics \cite{hofbauer2007_jet}} \label{sec:pbr} \begin{align} \dot x(t) = C(p(t)) - x(t) \label{eq:pbr} \end{align} The function $C:\mathbb R^n \to \mathbb X$ is defined as $C(p) = \operatorname*{arg\,max}_{y \in \mathrm{int}(\mathbb X)} \left( p^Ty - v(y) \right)$ where $v:\mathbb X \to \mathbb R$ is a $\mathcal C^2$ function satisfying the following two conditions:\footnote{Note that under these conditions there is a unique $y$ in $\mathrm{int}(\mathbb X)$ for which $p^Ty - v(y)$ is maximized.} \begin{align*} &z^T \nabla_x^2 v(x) z > 0 ~\text{for all $(x,z)$ in $\mathbb X \times \mathbb{TX} \setminus \{\mathbf 0\}$} \\ &\left\| \nabla_x v(x) \right\| \to \infty ~ \text{as $x \to \mathrm{bd}(\mathbb X)$} \end{align*} We refer to such $C$ and $v$ as the \textit{choice function} and \textit{(deterministic) perturbation}, respectively. Note that \eqref{eq:logit} is a particular case of \eqref{eq:pbr} with the perturbation ${v (x) = \eta \sum_{i=1}^n x_i \ln x_i}$. The following proposition establishes $\delta$-passivity of \eqref{eq:pbr}.
\begin{prop} \label{prop:pbr_passivity} The EDM \eqref{eq:pbr} is $\delta$-passive and has a strict storage function given by \begin{align} \label{eq:storage_function_pbr} \mathcal S(p, x) = \max_{y \in \mathrm{int}(\mathbb X)} \left( p^Ty - v(y)\right) - \left( p^Tx - v(x)\right) \end{align} If $v$ satisfies the following strong convexity condition, then the EDM is strictly output $\delta$-passive with index $\eta > 0$: \begin{align*} z^T \nabla_x^2 v(x) z \geq \eta z^T z ~ \text{for all $(x,z)$ in $\mathbb X \times \mathbb{TX}$} \end{align*} \end{prop} The proofs of Propositions \ref{prop:rep_dyn}, \ref{prop:ept_passivity}, and \ref{prop:pbr_passivity} are given in Appendix~\ref{proof:passivity_edm}, whereas the proof of Proposition \ref{prop:pc_passivity} is presented in \cite{fox2013_games}. \section{Properties of $\delta$-Passive Evolutionary Dynamic Models} \label{sec:properties} \subsection{Payoff Monotonicity} \label{sec:payoff_monotonicity} Let us consider the following two conditions \cite{hofbauer2009_jet, hofbauer2011_te} -- \textit{Nash Stationarity} \textbf{(NS)} and \textit{Positive Correlation} \textbf{(PC)}: \begin{align} & \mathcal V \left( p,x \right) = \mathbf 0 \text{ if and only if $x \in \argmax_{y \in \mathbb X} p^Ty$} \tag{\textbf{NS}} \label{eq:ns} \\ & p^T \mathcal V(p, x) \geq 0 \text{ holds for all } (p, x) \in \mathbb{R}^{n} \times \mathbb X \tag{\textbf{PC}} \label{eq:pc} \end{align} We refer to EDMs satisfying both \eqref{eq:ns} and \eqref{eq:pc} as \textit{payoff monotonic}. Examples of payoff monotonic EDMs include the EPT dynamics \eqref{eq:ept_dyn} and the pairwise comparison dynamics \eqref{eq:pc_dyn}. To provide intuition behind payoff monotonicity, consider a payoff monotonic EDM with a constant payoff $p(t) = p_0$ for all $t$ in $\mathbb R_+$.
The population state $x(t)$ induced by the EDM evolves along a direction $\dot x(t)$ that monotonically increases the average payoff, and it becomes stationary when it reaches the state that attains the maximum average payoff. The following two propositions characterize important properties of $\delta$-passive EDMs in connection with payoff monotonicity. \begin{prop} \label{prop:storage_function_global_minimum} For a given $\delta$-passive EDM, let $\mathcal S$ be its storage function and $\mathbb S = \{ (p,x) \,|\, \mathcal V (p,x) = \mathbf 0 \}$ be its set of equilibrium points. It holds that $\{ (p,x) \,|\, \mathcal S (p,x) = 0\} \subseteq \mathbb S$, where the equality holds if the EDM satisfies \eqref{eq:ns}. \end{prop} \begin{prop} \label{prop:sop_ns_pc} Suppose that $n \geq 3$. No EDM can be both strictly output $\delta$-passive and payoff monotonic. \end{prop} The proofs of Propositions \ref{prop:storage_function_global_minimum} and \ref{prop:sop_ns_pc} are given in Appendix \ref{proof:properties_passive_dynamics}. The following corollary is a direct consequence of Proposition \ref{prop:sop_ns_pc}. \begin{cor} \label{cor:sop_ns_pc} The EDMs \eqref{eq:ept_dyn} and \eqref{eq:pc_dyn} are not strictly output $\delta$-passive. \end{cor} The proof of Corollary \ref{cor:sop_ns_pc} directly follows from Proposition \ref{prop:sop_ns_pc} and the fact that \eqref{eq:ept_dyn} and \eqref{eq:pc_dyn} are payoff monotonic. In Section \ref{sec:numerical_ex}, we demonstrate asymptotic stability of equilibrium points of $\delta$-passive EDMs in population games, where we will observe that strictly output $\delta$-passive EDMs achieve the stability in a larger class of population games than do (ordinary) $\delta$-passive EDMs. Besides, payoff monotonicity ensures that the population state $x(t)$ evolves toward Nash equilibria.
Based on these observations, strict output $\delta$-passivity and payoff monotonicity are desirable attributes of EDMs; however, Proposition \ref{prop:sop_ns_pc} states that these two attributes are not compatible. \subsection{Effect of Total Payoff Perturbation} \label{sec:payoff_perturbations} Motivated by our analysis of the PBR dynamics in Section~\ref{sec:pbr}, we investigate the effect of perturbation on $\delta$-passivity of EDMs. For this purpose, consider the \textit{total payoff function} $u: \mathbb R^n \times \mathbb X \to \mathbb R$ defined by \begin{align} \label{eq:total_payoff} u(p, x) = p^Tx - v(x) \end{align} where $v:\mathbb X \to \mathbb R$ is a $\mathcal C^2$ function satisfying the following two conditions: \begin{subequations} \label{eq:perturbation_cond} \begin{align} &z^T \nabla_x^2 v(x) z \geq \eta z^T z ~ \text{for all $(x,z)$ in $\mathbb X \times \mathbb{TX}$} \\ &\left\| \nabla_x v(x) \right\| \to \infty ~ \text{as $x \to \mathrm{bd}(\mathbb X)$} \end{align} \end{subequations} where $\eta$ is a positive constant. We refer to $v$ as a \textit{deterministic perturbation} \cite{hofbauer2002_econometrica} or \textit{control cost} \cite{mattsson2002_geb}.
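As an example, the entropy $v(x) = \eta \sum_{i=1}^n x_i \ln x_i$ satisfies \eqref{eq:perturbation_cond} (since $z^T \nabla_x^2 v(x) z = \eta \sum_{i=1}^n z_i^2 / x_i \geq \eta z^T z$ for $x_i \leq 1$), and the associated perturbed maximizer has a closed-form softmax expression. A short numerical sketch (assuming NumPy; the first-order-condition check is ours, not part of the paper):

```python
import numpy as np

def logit_choice(p, eta):
    # perturbed maximizer argmax_{y in int(X)} (p^T y - v(y)) for the entropy
    # perturbation v(y) = eta * sum_i y_i ln y_i; the closed form is a softmax
    z = p / eta
    z = z - z.max()          # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

eta = 0.5
p = np.array([1.0, 0.2, -0.3])
y = logit_choice(p, eta)

print(abs(y.sum() - 1.0) < 1e-12 and y.min() > 0.0)   # y lies in int(X)
# first-order condition: p_i - eta * (ln y_i + 1) is constant across i
# (the constant is the multiplier of the simplex constraint)
print(np.ptp(p - eta * (np.log(y) + 1.0)) < 1e-8)
```

This is exactly the choice function of the logit dynamics \eqref{eq:logit}.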
Notice that without the perturbation ($v = 0$), the total payoff \eqref{eq:total_payoff} coincides with the average payoff $p^Tx$.\footnote{The idea of imposing perturbations on the average payoff appeared in game theory and economics to investigate the effect of random perturbations or disutility on choice models \cite{hofbauer2002_econometrica, mattsson2002_geb, hofbauer2007_jet}, to model human choice behavior \cite{mcfadden1974_frontiers_econometrics}, and to analyze the effect of social norms in economic problems \cite{lindbeck1999_qje}.} In what follows, we investigate the effect of perturbation using a certain class of EDMs whose state-space representation is given as follows:\footnote{We note that the EDM of the PBR dynamics is not of the form \eqref{eq:pedm}, and our analysis given in Section \ref{sec:payoff_perturbations} complements the $\delta$-passivity analysis of the PBR dynamics.} \begin{align} \label{eq:pedm} \dot x_i(t) &= \sum_{j=1}^n x_j(t) \widetilde{\varrho}_{ji}(\nabla_x^T u(p(t),x(t)) \xi_{ji}(x(t))) \nonumber \\ &\quad - x_i(t) \sum_{j=1}^n \widetilde{\varrho}_{ij}(\nabla_x^T u(p(t),x(t)) \xi_{ij}(x(t))) \end{align} where the function $\xi_{ji}:\mathbb X \to \mathbb{TX}$ assigns a tangent vector to each $x$ in $\mathbb X$, and $\widetilde{\varrho}_{ji}: \mathbb R \to \mathbb R_+$ is a revision protocol that depends on the directional derivative $\nabla_x^T u(p,x) \xi_{ji}(x)$ of the total payoff. We refer to \eqref{eq:pedm} as \textit{perturbed} if the perturbation $v$ satisfies the conditions \eqref{eq:perturbation_cond}, and as \textit{unperturbed} if $v = 0$. The following are examples of \eqref{eq:pedm}.
\begin{align} \label{eq:pbnn} \dot x_i(t) &= \left[ \nabla_x^T u(p(t), x(t)) \left( e_i - x(t) \right) \right]_+ \nonumber \\ & \quad - x_i(t) \sum_{j=1}^n \left[ \nabla_x^T u(p(t), x(t)) \left( e_j - x(t) \right) \right]_+ \end{align} \begin{align} \label{eq:psmith} \dot x_i(t) &= \sum_{j=1}^n x_j(t) \left[ \nabla_x^T u(p(t), x(t)) \left( e_i - e_j \right) \right]_+ \nonumber \\ & \quad - x_i(t) \sum_{j=1}^n \left[ \nabla_x^T u(p(t), x(t)) \left( e_j - e_i \right) \right]_+ \end{align} Note that the unperturbed EDMs \eqref{eq:pbnn} and \eqref{eq:psmith} coincide with \eqref{eq:bnn} and \eqref{eq:smith}, respectively. Thus, according to Propositions \ref{prop:ept_passivity} and \ref{prop:pc_passivity}, with $v=0$, \eqref{eq:pbnn} and \eqref{eq:psmith} are $\delta$-passive EDMs. For the case where $v$ satisfies \eqref{eq:perturbation_cond}, the perturbed EDMs \eqref{eq:pbnn} and \eqref{eq:psmith} are strictly output $\delta$-passive with index $\eta > 0$. In the following proposition, we generalize this observation using \eqref{eq:pedm}. \begin{prop} \label{prop:payoff_perturbation} Given an EDM \eqref{eq:pedm}, suppose that the EDM is $\delta$-passive when $v=0$. If $v$ satisfies \eqref{eq:perturbation_cond}, then the perturbed EDM \eqref{eq:pedm} is strictly output $\delta$-passive with index $\eta > 0$. \end{prop} The proof is given in Appendix \ref{proof:prop_payoff_perturbation}. \section{Stability of $\delta$-passive Evolutionary Dynamic Models} \label{sec:numerical_ex} In this section, we investigate how $\delta$-passivity is connected to asymptotic stability in population games. 
\subsection{Stability in Population Games} \label{sec:passivity_equivalence} Consider the following closed-loop configuration of a static payoff function \eqref{eq:static_payoff} and $\delta$-passive EDM \eqref{eq:edm}: \begin{align} \label{eq:closed_loop_01} \dot x(t) = \mathcal V(\mathcal F(x(t)), x(t)) \end{align} where the $\mathcal C^1$ mapping $\mathcal F: \mathbb X \to \mathbb R^n$ satisfies $z^TD\mathcal F(x) z \leq \epsilon z^Tz$ with $\epsilon \geq 0$ for all $(x,z)$ in $\mathbb X \times \mathbb {TX}$, and the storage function $\mathcal S$ of the EDM is strict. The following proposition establishes asymptotic stability of the equilibrium points $\{x \in \mathbb X \,|\, \mathcal V (\mathcal F(x), x) = \mathbf 0\}$ of \eqref{eq:closed_loop_01}. \begin{prop} \label{prop:stability_static_function} Consider the closed-loop \eqref{eq:closed_loop_01}. The equilibrium points of \eqref{eq:closed_loop_01} are asymptotically stable if either of the following conditions holds: \begin{enumerate} \item The EDM \eqref{eq:edm} is $\delta$-passive and $\epsilon = 0$. \item The EDM \eqref{eq:edm} is strictly output $\delta$-passive with index $\eta$ and it holds that $\eta > \epsilon > 0$. \end{enumerate} \end{prop} The proof of Proposition \ref{prop:stability_static_function} is given in Appendix \ref{proof:passivity_stability_equivalence}. Note that under \eqref{eq:ns}, Proposition \ref{prop:stability_static_function} essentially establishes asymptotic stability of Nash equilibria of $\mathcal F$. Proposition \ref{prop:stability_static_function} shows that $\delta$-passivity is a sufficient condition for stability in population games with static payoff functions \eqref{eq:static_payoff}. In what follows, we investigate whether $\delta$-passivity is also a necessary condition for stability. 
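To illustrate condition $1)$ of Proposition \ref{prop:stability_static_function}, the sketch below integrates the $\delta$-passive Smith dynamics \eqref{eq:smith} in the stable game $\mathcal F(x) = -x$, for which $z^T D\mathcal F(x) z = -z^Tz \leq 0$ on $\mathbb{TX}$, i.e., $\epsilon = 0$. The game, step size, and initial condition are our illustrative choices (assuming NumPy):

```python
import numpy as np

def smith(p, x):
    # Smith dynamics vector field
    n = len(p)
    return np.array([
        sum(x[j] * max(p[i] - p[j], 0.0) for j in range(n))
        - x[i] * sum(max(p[j] - p[i], 0.0) for j in range(n))
        for i in range(n)
    ])

def F(x):
    # an illustrative stable game: DF = -I, so z^T DF(x) z <= 0 (epsilon = 0);
    # its unique Nash equilibrium is the barycenter (1/3, 1/3, 1/3)
    return -x

dt = 0.01
x = np.array([0.8, 0.15, 0.05])
for _ in range(int(100.0 / dt)):      # forward Euler on the closed loop
    x = x + dt * smith(F(x), x)

print(np.round(x, 3))                 # approaches [0.333 0.333 0.333]
```

The trajectory settles at the unique Nash equilibrium, consistent with Proposition \ref{prop:stability_static_function}.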
To this end, let us consider the following closed-loop configuration of a variant of the smoothed payoff \eqref{eq:smoothed_payoff} and an EDM \eqref{eq:edm} satisfying \eqref{eq:ns}: \begin{subequations} \label{eq:closed_loop_02} \begin{align} \dot p(t) &= \widetilde{\mathcal F}(x(t)) - \frac{1}{n} \mathbf 1 \mathbf 1^T p(t) \label{eq:closed_loop_02_a}\\ \dot x(t) &= \mathcal V(p(t), x(t)) \label{eq:closed_loop_02_b} \end{align} \end{subequations} where $\widetilde{\mathcal F}:\mathrm{int}(\mathbb X) \to \mathbb R^n$ is defined by $\widetilde{\mathcal F}(x) = \nabla_x \left( f(x) + \sum_{i=1}^n x_i \ln\frac{a_i}{x_i}\right)$ with a $\mathcal C^1$ concave function ${f:\mathbb X \to \mathbb R}$ and positive constants $\{a_i\}_{i=1}^n$. Note that $\widetilde{\mathcal F}$ can be interpreted as a perturbed payoff function in a concave potential game in which the perturbation is described by $\sum_{i=1}^n x_i \ln\frac{a_i}{x_i}$. At a stationary point $(p,x)$ of \eqref{eq:closed_loop_02_a}, the population state $x$ is a Nash equilibrium of this perturbed game;\footnote{Note that due to the perturbation $\sum_{i=1}^n x_i \ln\frac{a_i}{x_i}$, the Nash equilibrium lies in the interior of $\mathbb X$.} namely, $x \in \mathbb X$ is a stationary point of \eqref{eq:closed_loop_02_a} if and only if \begin{align*} x_i>0 \text{ implies } i \in \argmax_{j \in \{1, \cdots, n\}} \widetilde{\mathcal F}_j(x) \end{align*} In the following proposition, we show that $\delta$-passivity of EDMs is equivalent to Lyapunov stability in population games with the smoothed payoff \eqref{eq:closed_loop_02_a}. \begin{prop} \label{prop:passivity_closed-loop_stability} Consider the closed-loop \eqref{eq:closed_loop_02}.
The EDM \eqref{eq:closed_loop_02_b} is $\delta$-passive and its storage function is strict if and only if for every smoothed payoff \eqref{eq:closed_loop_02_a}, there exists an energy function $\mathcal E: \mathbb R^n \times \mathbb X \to \mathbb R_+$ for which the following holds: \begin{enumerate} \item $\mathcal E(p,x) = \mathcal S(p,x) + \max_{y \in \mathrm{int}(\mathrm{\mathbb X})} \widetilde f (y) - \widetilde f(x)$. \item $\frac{\mathrm d}{\mathrm dt} \mathcal E(p(t), x(t)) \leq 0$ where the equality holds if and only if $x(t) \in \argmax_{y \in \mathbb X} p^T(t)y$. \end{enumerate} where $\mathcal S$ is a fixed $\mathcal C^1$ function and $\widetilde f$ is defined by $\widetilde f(x) = f(x) + \sum_{i=1}^n x_i \ln\frac{a_i}{x_i}$ for each $x$ in $\mathbb X$. \end{prop} The proof of Proposition \ref{prop:passivity_closed-loop_stability} is given in Appendix \ref{proof:passivity_stability_equivalence}. In fact, $1)$ and $2)$ of Proposition \ref{prop:passivity_closed-loop_stability} yield asymptotic stability of the Nash equilibrium of $\widetilde{\mathcal F}$. To keep the argument simple, let us suppose that $x(t)$ stays in a compact subset of $\mathrm{int}(\mathbb X)$ for all $t$ in $\mathbb R_+$, which ensures that the payoff $p(t)$ also stays in a compact subset of $\mathbb R^n$. Note that $\mathcal E$ is a Lyapunov function, which ensures that the closed-loop \eqref{eq:closed_loop_02} satisfies the Lyapunov stability criterion \cite{khalil2001_prentice_hall}. In particular, if $\mathcal E(p(t),x(t)) = 0$, then $x(t)$ is the Nash equilibrium of $\widetilde{\mathcal F}$ and $p(t) = \mathbf 1 a(t)$ for some real number $a(t)$, and, due to the payoff dynamics \eqref{eq:closed_loop_02_a}, $p(t)$ converges to $\widetilde{\mathcal F}(x(t))$.
In addition, due to $2)$ of Proposition \ref{prop:passivity_closed-loop_stability}, the value of $\mathcal E(p(t), x(t))$ decreases unless $p(t) = \mathbf 1 a(t)$ for some real number $a(t)$; on the other hand, unless $x(t)$ is the Nash equilibrium of $\widetilde{\mathcal F}$, the payoff dynamics \eqref{eq:closed_loop_02_a} do not allow $p(t) = \mathbf 1 a(t)$ to persist, so $\mathcal E(p(t), x(t))$ keeps decreasing. The above two observations yield asymptotic stability of $(\widetilde{\mathcal F}(x^{\mathrm{NE}}), x^{\mathrm{NE}})$ under \eqref{eq:closed_loop_02}, where $x^{\mathrm{NE}}$ is the Nash equilibrium of $\widetilde{\mathcal F}$. \subsection{Numerical Simulations} Through numerical simulations with $n=3$, we evaluate stability of EDMs in population games. We consider two scenarios. In the first scenario, we consider the BNN dynamics \eqref{eq:bnn} and the logit dynamics \eqref{eq:logit} in the Hypnodisk game \cite{hofbauer2011_te}, whose static payoff function is given by \begin{align} \label{eq:hypnodisk} p(t) &= \mathcal H(x(t)) \end{align} The mapping $\mathcal H = (\mathcal H_1, \mathcal H_2, \mathcal H_3)$ is defined by \begin{align*} &\begin{pmatrix} \mathcal H_1 (x_1, x_2, x_3) \\ \mathcal H_2 (x_1, x_2, x_3) \\ \mathcal H_3 (x_1, x_2, x_3) \end{pmatrix} \nonumber \\ &= \cos \left( \theta(x_1, x_2, x_3) \right) \begin{pmatrix} x_1 - \frac{1}{3} \\ x_2 - \frac{1}{3} \\ x_3 - \frac{1}{3} \end{pmatrix} \nonumber \\ &\quad + \frac{\sqrt{3}}{3} \sin \left( \theta(x_1, x_2, x_3) \right) \begin{pmatrix} x_2 - x_3 \\ x_3 - x_1 \\ x_1 - x_2 \end{pmatrix} + \frac{1}{3} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \end{align*} where $\theta(x_1, x_2, x_3) = \pi \left[ 1 - b\left( \sum_{i=1}^3 \left| x_i - \frac{1}{3} \right|^2 \right) \right]$ and $b$ is a function satisfying \begin{enumerate} \item $b(r) = 1$ if $r \leq R_I^2$ \item $b(r) = 0$ if $r \geq R_O^2$ \item $b(r)$ is decreasing if $R_I^2 < r < R_O^2$ \end{enumerate} with $R_I = 0.1$ and $R_O = 0.4$.
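The Hypnodisk payoff $\mathcal H$ above can be sketched numerically as follows; since $b$ is constrained only qualitatively, the linear interpolation between $R_I^2$ and $R_O^2$ is our choice (assuming NumPy):

```python
import numpy as np

R_I, R_O = 0.1, 0.4

def b(r):
    # the paper fixes b only qualitatively; we use a linear interpolation
    # (b = 1 for r <= R_I^2, b = 0 for r >= R_O^2, decreasing in between)
    return float(np.clip((R_O**2 - r) / (R_O**2 - R_I**2), 0.0, 1.0))

def hypnodisk(x):
    x = np.asarray(x, dtype=float)
    c = x - 1.0 / 3.0                                  # deviation from center
    theta = np.pi * (1.0 - b(float(np.sum(c**2))))
    rot = np.array([x[1] - x[2], x[2] - x[0], x[0] - x[1]])
    return (np.cos(theta) * c
            + (np.sqrt(3.0) / 3.0) * np.sin(theta) * rot
            + 1.0 / 3.0)

# inside the inner disk (b = 1, theta = 0), H is the identity:
print(hypnodisk([1/3, 1/3, 1/3]))                 # -> [0.333... 0.333... 0.333...]
# outside the outer disk (b = 0, theta = pi), H(x) = 2/3 - x:
print(np.round(hypnodisk([1.0, 0.0, 0.0]), 3))    # -> [-0.333  0.667  0.667]
```

Between the two disks the payoff field rotates, which is what produces the cycling behavior of non-passive dynamics in this game.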
It can be verified that, for \eqref{eq:logit}, there is a constant $\eta > 0$ for which $2)$ in Proposition \ref{prop:stability_static_function} holds; hence the population state $x(t)$ converges to the Nash equilibrium of $$\widetilde{\mathcal H}(x) = \mathcal H(x)+\begin{pmatrix} -\ln x_1 - 1 \\ -\ln x_2 - 1 \\ -\ln x_3 - 1 \end{pmatrix}$$ However, Proposition \ref{prop:stability_static_function} does not apply to \eqref{eq:bnn}. Simulation results given in Fig. \ref{figure:numerial_simulations} (a)-(b) illustrate that the population state trajectories of \eqref{eq:logit} under \eqref{eq:hypnodisk} converge to the Nash equilibrium of $\widetilde{\mathcal H}$ whereas those of \eqref{eq:bnn} do not. In the second scenario, we compare the stability of the replicator dynamics \eqref{eq:rep} and the BNN dynamics \eqref{eq:bnn} using the smoothed payoff \eqref{eq:closed_loop_02_a}, where the mapping $\widetilde{\mathcal F}$ is defined by $$\widetilde{\mathcal F}(x) = \begin{pmatrix} -x_1 \\ -x_2 \\ -x_3 \end{pmatrix} + \begin{pmatrix} -\ln x_1 - 1 \\ -\ln x_2 - 1 \\ -\ln x_3 - 1 \end{pmatrix}$$ Note that, according to Proposition \ref{prop:passivity_closed-loop_stability} and the $\delta$-passivity of \eqref{eq:bnn}, we can construct a Lyapunov function $\mathcal E$ and establish asymptotic stability of the Nash equilibrium of $\widetilde{\mathcal F}$, whereas \eqref{eq:rep} is not $\delta$-passive (see Proposition \ref{prop:rep_dyn}) and Proposition~\ref{prop:passivity_closed-loop_stability} does not apply to \eqref{eq:rep}. Simulation results given in Fig. \ref{figure:numerial_simulations} (c)-(d) illustrate that the population state trajectory of \eqref{eq:bnn} under \eqref{eq:closed_loop_02_a} converges to the Nash equilibrium of $\widetilde{\mathcal F}$ whereas that of \eqref{eq:rep} oscillates around the Nash equilibrium.
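As a small sanity check on the second scenario (a sketch; the entrywise formula is read off from the display above, and the helper names are ours), the barycenter $\frac{1}{3}\mathbf 1$ equalizes all components of $\widetilde{\mathcal F}$, which characterizes the interior Nash equilibrium:

```python
import math

def F_tilde(x):
    # Entrywise smoothed payoff of the second scenario: -x_i - ln(x_i) - 1.
    return tuple(-xi - math.log(xi) - 1 for xi in x)

def is_interior_ne(x, tol=1e-9):
    # At an interior Nash equilibrium, every strategy earns the same payoff.
    p = F_tilde(x)
    return max(p) - min(p) < tol
```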
\begin{figure*} \centering \subfigure[Multiple population state trajectories induced by the BNN dynamics with different initial conditions in the Hypnodisk game; and the storage function evaluated along a population state trajectory.]{\includegraphics[scale=0.4]{./figures/bnn_h.png} \includegraphics[scale=0.3]{./figures/bnn_h_s.png}} \centering \subfigure[Multiple population state trajectories induced by the logit dynamics with different initial conditions in the Hypnodisk game; and the storage function evaluated along a population state trajectory.]{\includegraphics[scale=0.4]{./figures/logit.png} \includegraphics[scale=0.3]{./figures/logit_s.png}} \centering \subfigure[A single population state trajectory induced by the replicator dynamics with smoothed payoff; and the KL divergence $\mathcal E(x(t)) = \sum_{i=1}^3 x_i(t) \ln\frac{x_i^{\mathrm{NE}}}{x_i(t)}$ evaluated along the trajectory, where $x^{\mathrm{NE}} = \frac{1}{3}\mathbf 1$.]{\includegraphics[scale=0.4]{./figures/rd.png} \includegraphics[scale=0.3]{./figures/rd_s.png}} \centering \subfigure[A single population state trajectory induced by the BNN dynamics with smoothed payoff; and the storage function evaluated along the trajectory.]{\includegraphics[scale=0.4]{./figures/bnn.png} \includegraphics[scale=0.3]{./figures/bnn_s.png}} \caption {Simulation results that illustrate asymptotic stability in population games.} \label{figure:numerial_simulations} \end{figure*} \section{Conclusions} \label{sec:conclusion} In this paper, we have investigated $\delta$-passivity of dynamic models in evolutionary game theory. We defined and characterized $\delta$-passivity using the state-space representation of EDMs. Based on the characterization, we studied certain properties of $\delta$-passive EDMs and demonstrated how $\delta$-passivity can be used to establish stability in population games.
As future work, it would be interesting to further investigate stability of EDMs in a generalized class of population games in which diverse higher-order dynamics and/or delay are involved. \begin{appendix} \renewcommand{\themydef}{\thesection.\arabic{mydef}} \subsection{Proofs of Theorem \ref{thm:passivity_edm} and Propositions \ref{prop:rep_dyn}, \ref{prop:ept_passivity}, and \ref{prop:pbr_passivity}} \label{proof:passivity_edm} \subsubsection{Proof of Theorem \ref{thm:passivity_edm}} The definition of $\delta$-passivity in Definition \ref{def:passivity} is closely related to the notion of dissipativity in dynamical system theory \cite{willems1972_arma}. To see this, let us rewrite \eqref{eq:edm} in the following form: \begin{subequations} \label{eq:edm_expanded} \begin{align} \dot x(t) &= \mathcal V(u(t), x(t)) \\ \dot y(t) &= \dot x(t) \end{align} \end{subequations} Note that \eqref{eq:edm_expanded} can be interpreted as a state-space equation for a control-affine nonlinear system with the input $\dot u(t)$, state $(u(t),x(t))$, and output $\dot y(t)$. According to the definition of dissipativity \cite{willems1972_arma}, the system \eqref{eq:edm_expanded} is dissipative with respect to the supply rate $s(\dot u, \dot y) = \dot u^T \dot y - \eta \dot y^T \dot y$ with a constant $\eta$ if there is a $\mathcal C^1$ function $\mathcal S: \mathbb R^n \times \mathbb X \to \mathbb R_+$ for which \begin{align} \label{eq:dissipativity_inequality} &\int_{t_0}^{t} \left[ \dot u^T(\tau) \dot y(\tau) - \eta \dot y^T(\tau) \dot y(\tau) \right] \,\mathrm d\tau \nonumber \\ &\geq \mathcal S(u(t), x(t)) - \mathcal S(u(t_0), x(t_0)) \end{align} holds for all $t \geq t_0 \geq 0$ and $u(\cdot) \in \mathfrak P$.
Then, by the equivalence between \eqref{eq:edm} and \eqref{eq:edm_expanded}, we can verify that the $\delta$-passivity condition is satisfied for \eqref{eq:edm} if there is a $\mathcal C^1$ function $\mathcal S$ for which \eqref{eq:dissipativity_inequality} holds with $\eta \geq 0$ for every $u(\cdot)$ in $\mathfrak P$. Based on the above observation, using the dissipativity characterization theorem (see, for instance, Theorem 1 in \cite{hill1976_ieee_tac}), we can see that there is a $\mathcal C^1$ function $\mathcal S$ satisfying \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02} with $\eta \geq 0$ if and only if, under the same choice of $\mathcal S$, the $\delta$-passivity inequality \eqref{eq:passivity_inequality} holds with the same choice of $\eta$ for all $t \geq t_0 \geq 0$ and all $p(\cdot)$ in $\mathfrak P$. \hfill $\blacksquare$ \subsubsection{Proof of Proposition \ref{prop:rep_dyn}} \label{proof:prop_rep_dyn} We proceed by showing that no $\mathcal C^1$ function $\mathcal S:\mathbb R^n \times \mathbb X \to \mathbb R_+$ satisfying \eqref{eq:p_condition_01} can satisfy the condition \eqref{eq:p_condition_02} under \eqref{eq:rep_dyn}. Using Theorem \ref{thm:passivity_edm}, we then conclude that \eqref{eq:rep_dyn} is not $\delta$-passive. Let us rewrite \eqref{eq:rep_dyn} in the following form: \begin{align} \dot x_i(t) = \sum_{j=1}^n x_i(t) x_j(t) \left(p_i(t) - p_j(t) \right) \end{align} Note that any $\mathcal C^1$ function $\mathcal S$ satisfying \eqref{eq:p_condition_01} must be of the following form: $$\mathcal S(p, x) = \frac{1}{4}\sum_{i=1}^n \sum_{j=1}^n x_i x_j (p_i - p_j)^2 + g(x)$$ where $g: \mathbb X \to \mathbb R_+$ is a $\mathcal C^1$ function. By taking the gradient of $\mathcal S$ with respect to $x$, we obtain \eqref{eq:prop:rep_dyn_01}.
\begin{figure*} \begin{align} \label{eq:prop:rep_dyn_01} \nabla_x^T \mathcal S (p, x) \mathcal V(p, x) &= \left[ \begin{pmatrix} \frac{1}{2} \sum_{j=1}^n x_j (p_1 - p_j)^2 \\ \vdots \\ \frac{1}{2} \sum_{j=1}^n x_j (p_n - p_j)^2 \end{pmatrix} + \nabla_x g(x) \right]^T \begin{pmatrix} \sum_{j=1}^n x_1 x_j (p_1 - p_j) \\ \vdots \\ \sum_{j=1}^n x_n x_j (p_n - p_j) \end{pmatrix} \end{align} \end{figure*} Let us choose $x_j = 0$ for all $j \geq 3$. Then, we obtain \eqref{eq:prop:rep_dyn_02}. \begin{figure*} \begin{align} \label{eq:prop:rep_dyn_02} \nabla_x^T \mathcal S (p, x) \mathcal V(p, x) &= \begin{pmatrix} \frac{1}{2} x_2 (p_1 - p_2)^2 + \frac{\partial g}{\partial x_1} (x) \\ \frac{1}{2}x_1 (p_1 - p_2)^2 + \frac{\partial g}{\partial x_2} (x) \end{pmatrix} ^T \begin{pmatrix} x_1 x_2 (p_1 - p_2) \\ -x_1 x_2 (p_1 - p_2) \end{pmatrix} \nonumber \\ &= -\frac{1}{2} x_1 x_2 (p_1 - p_2) \left[ (x_1 - x_2) (p_1 - p_2)^2 + 2 \left( \frac{\partial g}{\partial x_2} (x) - \frac{\partial g}{\partial x_1} (x)\right) \right] \end{align} \hrulefill \end{figure*} Note that for fixed $x_1, x_2$ (except for the points at which $\nabla_x^T \mathcal S(p, x) \mathcal V(p, x) = 0$ holds for all $p$ in $\mathbb R^n$), there exists $p \in \mathbb R^n$ for which $\nabla_x^T \mathcal S (p, x) \mathcal V(p, x) > 0$ holds. Therefore, the function $\mathcal S$ does not satisfy \eqref{eq:p_condition_02}. Since we made an arbitrary choice of $\mathcal S$, we conclude that there does not exist a $\mathcal C^1$ function $\mathcal S$ that satisfies \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02} simultaneously. 
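The sign argument in \eqref{eq:prop:rep_dyn_02} can be checked numerically. In the sketch below, `dg` stands for the difference $\frac{\partial g}{\partial x_2}(x) - \frac{\partial g}{\partial x_1}(x)$, treated as a free parameter, and the helper name is ours:

```python
def lie_derivative(x1, x2, p1, p2, dg=0.0):
    # Evaluates -(1/2) x1 x2 (p1 - p2) [ (x1 - x2)(p1 - p2)^2 + 2 dg ],
    # i.e. grad_x^T S(p, x) V(p, x) in the two-strategy reduction (x_j = 0, j >= 3).
    d = p1 - p2
    return -0.5 * x1 * x2 * d * ((x1 - x2) * d ** 2 + 2.0 * dg)
```

Flipping the sign of $p_1 - p_2$ flips the sign of the cubic term, and taking $|p_1 - p_2|$ large makes it dominate any fixed `dg`, so a payoff making the expression positive always exists.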
\hfill $\blacksquare$ \subsubsection{Proof of Proposition \ref{prop:ept_passivity}} \label{proof:prop_ept_passivity} We first note that the condition \eqref{eq:acuteness} implies the so-called \textit{Strict Positive Correlation} property \eqref{eq:strict_positive_correlation} \cite{sandholm2005_jet}, described as follows: \begin{align} \mathcal V(p, x) \neq \mathbf 0 \text{ implies } p^T \mathcal V(p, x) > 0 \tag{\textbf{SPC}} \label{eq:strict_positive_correlation} \end{align} Let $\gamma:\mathbb R^n \to \mathbb R$ be a $\mathcal C^1$ function for which \eqref{eq:integrability} holds. It can be verified that $\gamma$ satisfies \begin{subequations} \begin{align} \nabla_p \gamma(\hat p) &= \mathcal V(p, x) \label{eq:prop_ept_passivity_04} \\ \nabla_x^T \gamma(\hat p) \mathcal V(p, x) &= -\left( \mathbf 1^T \varrho(\hat p)\right) \left( p^T \mathcal V(p, x)\right) \label{eq:prop_ept_passivity_01} \end{align} \end{subequations} where $\hat p = p - p^Tx \mathbf 1$. Let us select a candidate storage function as $\mathcal S(p, x) = \gamma(\hat p)$. Due to \eqref{eq:prop_ept_passivity_04}, the function $\mathcal S$ satisfies \eqref{eq:p_condition_01}. Combining the fact that $\varrho(\hat p) = \mathbf 0$ implies $\mathcal V(p, x) = \mathbf 0$ with \eqref{eq:strict_positive_correlation} and \eqref{eq:prop_ept_passivity_01}, we can see that \eqref{eq:p_condition_02} holds with $\eta = 0$ and that the equality in \eqref{eq:p_condition_02} holds only if $\mathcal V(p,x) = \mathbf 0$.
Suppose that $\gamma$ also satisfies the following inequality for every $\hat p$ in $\mathbb R^n$: \begin{align} \label{eq:prop_ept_passivity_02} \gamma(\hat p) \geq \gamma(\mathbf 0) \end{align} Then, setting $\gamma(\mathbf 0) = 0$ without loss of generality, we conclude that $\mathcal S(p,x) = \gamma(\hat p)$ is non-negative and, in conjunction with the aforementioned arguments, that the EDM \eqref{eq:ept_dyn} is $\delta$-passive with the strict storage function $\mathcal S(p,x) = \gamma(\hat p)$. In what follows, we show that \eqref{eq:prop_ept_passivity_02} is valid for every $\hat p$ in $\mathbb R^n$. We first claim that \eqref{eq:prop_ept_passivity_02} holds with equality for all $(p,x)$ in the set $\mathbb S$ of equilibrium points of \eqref{eq:ept_dyn}. To see this, first note that due to \eqref{eq:strict_positive_correlation}, it holds that $\mathcal V(\mathbf 0, x) = \mathbf 0$, i.e., $\mathbf 0 \in \mathbb S_x$, for all $x$ in $\mathbb X$. By \eqref{eq:prop_ept_passivity_04}, for fixed $x$ in $\mathbb X$, the following equality holds for all $p$ in $\mathbb R^n$: \begin{align} \gamma \left( \hat p\right) = \gamma \left( \mathbf 0 \right) + \int_0^1 \dot{\mathbf p}^T(s) \mathcal V(\mathbf p(s), x) \,\mathrm d s \end{align} where $\mathbf p: [0,1] \to \mathbb R^n$ is a parameterization of a piecewise smooth path from $\mathbf{0}$ to $p$ and $\hat p = p- p^Tx \mathbf 1$.
According to the path-connectedness assumption (Assumption \ref{assump:path_connected}), for each $p$ in $\mathbb S_x$, there is a path from $\mathbf 0$ to $p$ that is entirely contained in $\mathbb S_x$, i.e., $\mathcal V(\mathbf p(s), x) = \mathbf 0$ holds for all $s$ in $[0,1]$; hence the following equality holds for every $p$ in $\mathbb S_x$: \begin{align} \label{eq:prop_ept_passivity_03} \gamma \left( \hat p\right) - \gamma \left( \mathbf 0 \right) = \int_0^1 \dot{\mathbf p}^T(s) \mathcal V(\mathbf p(s), x) \,\mathrm ds = 0 \end{align} Since \eqref{eq:prop_ept_passivity_03} holds for every $(p,x)$ in $\mathbb S$, this proves the claim. To see that \eqref{eq:prop_ept_passivity_02} extends to the entire domain $\mathbb R^n \times \mathbb X$, by contradiction, let us assume that there is $(p', x') \notin \mathbb S$ for which $\mathcal S(p', x') = \gamma(\hat p') < \gamma(\mathbf 0)$ holds. Let $x(t)$ be the population state induced by \eqref{eq:ept_dyn} with the initial condition $x(0) = x'$ and the constant payoff $p(t) = p'$ for all $t$ in $\mathbb R_+$. By \eqref{eq:strict_positive_correlation} and \eqref{eq:prop_ept_passivity_01}, the value of $\mathcal S(p', x(t))$ is strictly decreasing unless $\mathcal V(p', x(t)) = \mathbf 0$. By the hypothesis that $\mathcal S (p', x') < \gamma(\mathbf 0)$ and by \eqref{eq:prop_ept_passivity_03}, for every $(p,x)$ in $\mathbb S$, it holds that $\mathcal S(p',x') < \mathcal S(p,x)$, and hence the state $\left( p(t), x(t) \right)$ never converges to $\mathbb{S}$. On the other hand, by LaSalle's Theorem \cite{khalil2001_prentice_hall}, since $p(t)$ is constant and the population state $x(t)$ is contained in a compact set, $x(t)$ converges to an invariant subset of $\left\{ x \in \mathbb X \,\big|\, \nabla_x^T \mathcal S(p', x) \mathcal V(p', x) = 0\right\}$. By \eqref{eq:strict_positive_correlation} and \eqref{eq:prop_ept_passivity_01}, the invariant subset is contained in $\mathbb S$.
This contradicts the fact that the state $(p(t), x(t))$ does not converge to $\mathbb S$; hence $\gamma(\hat p) \geq \gamma(\mathbf 0)$ holds for all $(p,x)$ in $\mathbb R^n \times \mathbb X$. \hfill $\blacksquare$ \subsubsection{Proof of Proposition \ref{prop:pbr_passivity}} \label{proof:prop_pbr_passivity} The analysis used in Theorem 2.1 of \cite{hofbauer2002_econometrica} suggests that the following hold: \begin{subequations} \label{eq:prop_perturbed_dynamics_sop_01} \begin{align} z^T \nabla_p \left[ \max_{y \in \mathrm{int}(\mathbb{X})} (p^Ty - v(y)) \right] = z^T C(p) \end{align} \begin{align} z^T \nabla_x v(x) = z^T p \text{ if and only if } x = C(p) \label{eq:prop_perturbed_dynamics_sop_01_b} \end{align} \end{subequations} for all $(p,x,z)$ in $\mathbb R^n \times \mathbb X \times \mathbb{TX}$. Using \eqref{eq:prop_perturbed_dynamics_sop_01}, we can see that \begin{align} \nabla_p \mathcal S(p,x) = C(p)-x = \mathcal V(p,x) \end{align} and \begin{align} \label{eq:prop_perturbed_dynamics_sop_02} \nabla_x^T \mathcal S(p, x) \mathcal V(p, x) &= - \left( p - \nabla_x v(x) \right)^T \mathcal V(p, x) \nonumber \\ &= -\left( \nabla_y v(y) - \nabla_x v(x)\right)^T \left( y - x \right) \end{align} where $y = C(p)$. By the fact that $v$ is strictly convex, it holds that ${\nabla_x^T \mathcal S(p, x) \mathcal V(p, x) \leq 0}$ where the equality holds only if $\mathcal V(p,x) = \mathbf{0}$. According to Theorem \ref{thm:passivity_edm}, we conclude that the EDM \eqref{eq:pbr} is $\delta$-passive and its storage function is strict and is given by \eqref{eq:storage_function_pbr}. Furthermore, if the perturbation $v$ satisfies $z^T \nabla_x^2 v(x) z \geq \eta z^Tz$ for all $(x,z)$ in $\mathbb X \times \mathbb{TX}$, i.e., $v$ is strongly convex, then it holds that $\left( \nabla_y v(y) - \nabla_x v(x)\right)^T \left( y - x \right) \geq \eta \left\| y-x\right\|^2$ for all $x,y$ in $\mathbb X$. 
Using \eqref{eq:prop_perturbed_dynamics_sop_02}, we can derive $\nabla_x^T \mathcal S(p, x) \mathcal V(p, x) \leq -\eta \mathcal V^T(p, x) \mathcal V(p, x)$. Hence, by Theorem~\ref{thm:passivity_edm}, we conclude that \eqref{eq:pbr} is strictly output $\delta$-passive with index $\eta$. \hfill $\blacksquare$ \subsection{Proofs of Propositions \ref{prop:storage_function_global_minimum} and \ref{prop:sop_ns_pc}} \label{proof:properties_passive_dynamics} \subsubsection{Proof of Proposition \ref{prop:storage_function_global_minimum}} \label{proof:prop_storage_function_global_minimum} The first part of the statement directly follows from the condition \eqref{eq:p_condition_01} and the fact that at a global minimizer $(p^\ast,x^\ast)$ of $\mathcal S$, it holds that $\nabla_p \mathcal S(p^\ast,x^\ast) = \mathbf 0$. Now suppose that the EDM satisfies \eqref{eq:ns}. To prove the second statement, it is sufficient to show that at each equilibrium point $(p,x)$ of \eqref{eq:edm}, it holds that $\mathcal S(p,x) = 0$. To this end, let us consider an \textit{anti-coordination game} whose payoff function is given by $\mathcal F_{x_{o}} (x) = -(x - x_{o})$ for a fixed $x_o$ in $\mathbb{X}$. Notice that $x_o$ is the unique Nash equilibrium of the game. In what follows, we show that $\mathcal S(p_o, x_o) = 0$ holds for any choice of $x_o$ from $\mathbb X$ and $p_o$ from $\mathbb S_{x_o}$. Let $(p^\ast, x^\ast)$ be a global minimizer of $\mathcal S$, i.e., $\mathcal S\left( p^\ast, x^\ast \right) = 0$. By the first part of the statement and \eqref{eq:ns}, we have that $\mathcal V(p^\ast s, x^\ast) = \mathbf 0$ for all $s$ in $\mathbb R_+$; hence it holds that $$\mathcal S (\mathbf 0, x^\ast) = \mathcal S (p^\ast, x^\ast) - \int_0^1 \left( p^\ast \right)^T \mathcal V(p^\ast s, x^\ast) \,\mathrm d s = 0$$ By the continuity of $\mathcal S$, for each $\epsilon > 0$ there exists $\delta > 0$ for which $\mathcal S \left( \delta \mathcal F_{x_o} (x^\ast), x^\ast \right) < \epsilon$ holds.
According to the conditions \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02} with $\eta = 0$, the following relation holds for every positive constant $\delta$: \begin{align} \label{eq:prop_storage_function_global_minimum_02} &\frac{\mathrm d}{\mathrm d t} \mathcal S (\delta \mathcal F_{x_o}(x(t)), x(t)) \nonumber \\ &\leq \delta \mathcal V^T(\delta \mathcal F_{x_o}(x(t)), x(t)) D\mathcal F_{x_o}(x(t)) \mathcal V(\delta \mathcal F_{x_o}(x(t)), x(t)) \nonumber \\ &= -\delta \left\| \mathcal V \left( \delta \mathcal F_{x_o}(x(t)), x(t) \right) \right\|^2 \end{align} Suppose that the population state $x(t)$ induced by \eqref{eq:edm} in the anti-coordination game starts from $x(0) = x^\ast$. By an application of LaSalle's theorem \cite{khalil2001_prentice_hall} and by \eqref{eq:ns}, we can verify that $(\delta \mathcal F_{x_o}(x(t)), x(t))$ converges to $\left( \mathbf{0}, x_{o} \right)$ as $t \to \infty$. In addition, due to \eqref{eq:prop_storage_function_global_minimum_02}, we have that $$\mathcal S \left( \mathbf 0, x_o \right) \leq \mathcal S \left( \delta \mathcal F_{x_o} (x^\ast), x^\ast \right) < \epsilon$$ Since this holds for every $\epsilon > 0$, we conclude that $\mathcal S (\mathbf 0, x_o) = 0$. By the fact that $\mathcal V (p_o s, x_o) = \mathbf 0$ for all $s$ in $\mathbb R_+$ if $p_o$ belongs to $\mathbb S_{x_o}$, we can see that the following equality holds for every $p_o$ in $\mathbb S_{x_o}$: \begin{align} \label{eq:prop_storage_function_global_minimum_03} \mathcal S (p_o, x_o ) = \mathcal S(\mathbf 0, x_o ) + \int_0^1 p_o^T \mathcal V (p_o s, x_o) \,\mathrm d s = 0 \end{align} Since we made an arbitrary choice of $x_o$ from $\mathbb X$ in constructing the anti-coordination game, we conclude that \eqref{eq:prop_storage_function_global_minimum_03} holds for every $(p_o, x_o)$ in $\mathbb S$. This proves the proposition.
\hfill $\blacksquare$ \subsubsection{Proof of Proposition \ref{prop:sop_ns_pc}} \label{proof:prop_sop_ns_pc} We first construct a game $\mathcal F_\nu$ based on the \textit{Hypnodisk game} \cite{hofbauer2011_te}, which is described by the following payoff function: For $a_1, a_2, a_3$ in $\mathbb R_+$, \begin{align*} &\begin{pmatrix} \mathcal H_1 (a_1, a_2, a_3) \\ \mathcal H_2 (a_1, a_2, a_3) \\ \mathcal H_3 (a_1, a_2, a_3) \end{pmatrix} \nonumber \\ &= \cos \left( \theta(a_1, a_2, a_3) \right) \begin{pmatrix} a_1 - \frac{1}{3} \\ a_2 - \frac{1}{3} \\ a_3 - \frac{1}{3} \end{pmatrix} \nonumber \\ &\quad + \frac{\sqrt{3}}{3} \sin \left( \theta(a_1, a_2, a_3) \right) \begin{pmatrix} a_2 - a_3 \\ a_3 - a_1 \\ a_1 - a_2 \end{pmatrix} + \frac{1}{3} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \end{align*} where $\theta(a_1, a_2, a_3) = \pi \left[ 1 - b\left( \sum_{i=1}^3 \left| a_i - \frac{1}{3} \right|^2 \right) \right]$ and $b$ is a bump function that satisfies \begin{enumerate} \item $b(r) = 1$ if $r \leq R_I^2$ \item $b(r) = 0$ if $r \geq R_O^2$ \item $b(r)$ is decreasing if $R_I^2 < r < R_O^2$ \end{enumerate} with $0<R_I<R_O<\frac{1}{\sqrt{6}}$. 
Now consider a payoff function $\mathcal F' = \begin{pmatrix} \mathcal F_1' & \cdots & \mathcal F_n' \end{pmatrix}^T$ defined as follows: For each $i$ in $\{1, \cdots, n\}$, \begin{align} \label{eq:hypnodisk_extended} \mathcal F_i'(x) = \begin{cases} \mathcal H_i (x_1, x_2, x_3 + \cdots + x_n) & \text{ if } i \in \{1, 2\} \\ \mathcal H_3 (x_1, x_2, x_3 + \cdots + x_n) & \text{ otherwise} \end{cases} \end{align} Note that the set of Nash equilibria of $\mathcal F'$ is given as follows: \begin{align} \label{eq:ne_hypnodisk_extended} \left\{ x \in \mathbb X \,\bigg|\, x_1 = x_2 = x_3+\cdots+x_n = \frac{1}{3} \right\} \end{align} Since $\theta$ is a smooth function, $\mathcal F'$ is continuously differentiable and its differential map $D \mathcal F'$ is bounded, i.e., for some $\delta > 0$, it holds that $z^T D \mathcal F'(x) z < \delta z^Tz$ for all $(x,z)$ in $\mathbb X \times \mathbb{TX}$. Finally, for a given constant $\nu>0$, we define a new payoff function by $\mathcal F_\nu = \frac{\nu}{\delta} \mathcal F'$. Using the payoff function $\mathcal F_\nu$, we prove the statement of the proposition. By contradiction, suppose that there is an EDM that is both strictly output $\delta$-passive and payoff monotonic. By definition, the EDM satisfies the $\delta$-passivity condition with index $\eta > 0$. Under the payoff function $\mathcal F_\nu$ for which $\nu < \eta$ holds, the time-derivative of the storage function $\mathcal S( \mathcal F_\nu(x(t)), x(t))$ satisfies the following inequality: \begin{align*} \frac{\mathrm d}{\mathrm d t} \mathcal S (\mathcal F_\nu (x(t)), x(t)) \leq - (\eta - \nu) \left\| \mathcal V (\mathcal F_\nu(x(t)), x(t)) \right\|^2 \end{align*} By an application of LaSalle's theorem \cite{khalil2001_prentice_hall} and by \eqref{eq:ns}, we can verify that the population state trajectory $x(\cdot)$ induced by the EDM under $\mathcal F_\nu$ converges to the set of Nash equilibria \eqref{eq:ne_hypnodisk_extended} as $t \to \infty$.
On the other hand, when $x(t)$ is contained in the set \begin{align*} &\left\{ x \in \mathbb X \,\Bigg|\, \left( x_1 - \frac{1}{3} \right)^2 + \left( x_2 - \frac{1}{3} \right)^2 \right. \nonumber \\ & \qquad\qquad \left. + \left( x_3 + \cdots + x_n - \frac{1}{3} \right)^2 \leq R_I^2 \right\} \end{align*} by \eqref{eq:pc}, it holds that \begin{align} & \mathcal F_\nu^T(x(t)) \mathcal V ( \mathcal F_\nu(x(t)), x(t)) \nonumber \\ & = \frac{\nu}{2\delta} \frac{\mathrm d}{\mathrm d t} \left[ \left( x_1(t) - \frac{1}{3} \right)^2 \right. \nonumber \\ &\quad + \left.\left( x_2(t) - \frac{1}{3} \right)^2 + \left( x_3(t) + \cdots + x_n(t) - \frac{1}{3} \right)^2 \right] \geq 0 \end{align} Hence, the population state $x(t)$ never converges to the set of Nash equilibria. This is a contradiction; hence, EDMs cannot be both strictly output $\delta$-passive and payoff monotonic. \hfill $\blacksquare$ \subsection{Proof of Proposition \ref{prop:payoff_perturbation}} \label{proof:prop_payoff_perturbation} Since the EDM \eqref{eq:pedm} is $\delta$-passive if $v=0$, by Theorem~\ref{thm:passivity_edm}, we can find a storage function $\mathcal S: \mathbb R^n \times \mathbb X \to \mathbb R_+$ for which the conditions \eqref{eq:p_condition_01} and \eqref{eq:p_condition_02} hold for $\eta = 0$. In what follows, we show that when the perturbation $v$ satisfies \eqref{eq:perturbation_cond}, the resulting perturbed EDM is strictly output $\delta$-passive with a storage function $\widetilde{\mathcal S} (p, x) \overset{\mathrm{def}}{=} \mathcal S \left( p - \nabla_x v(x), x \right)$. 
By definition, we have that \begin{align} \nabla_{\breve p_i} \mathcal S \left( \breve p, x \right) &= \sum_{j=1}^n x_j \widetilde{\varrho}_{ji}(\breve p^T \xi_{ji}(x)) - x_i \sum_{j=1}^n \widetilde{\varrho}_{ij}(\breve p^T \xi_{ij}(x)) \nonumber \\ &\overset{\mathrm{def}}{=} \mathcal V_i(\breve p,x) \end{align} Note that \eqref{eq:pedm} can be concisely written as $$\dot x_i(t) = \mathcal V_i(\breve p,x) \,\big|_{\breve p = p - \nabla_x v(x)}$$ Using $\mathcal V(\breve p,x) = (\mathcal V_1(\breve p,x) \, \cdots \, \mathcal V_n(\breve p,x))^T$, we can compute the gradient of $\widetilde{\mathcal S}$ with respect to $p$ and $x$ as follows: \begin{align} \nabla_p \widetilde{\mathcal S} \left( p, x \right) &= \nabla_{\breve p} \mathcal S \left( \breve p, x \right)\Big|_{\breve p = p - \nabla_x v(x)} \nonumber \\ &= \mathcal V\left(p - \nabla_x v(x), x \right) \\ \nabla_x \widetilde{\mathcal S}(p, x) &= \nabla_x^T \left(p - \nabla_x v(x)\right) ~ \nabla_{\breve p} \mathcal S (\breve p, x)\Big|_{\breve p = p-\nabla_x v(x)} \nonumber \\ &\quad + \nabla_x \mathcal S (\breve p, x) \Big|_{\breve p = p-\nabla_x v(x)} \nonumber \\ &= - \left( \nabla_x^2 v(x) \right)^T \mathcal V\left(p - \nabla_x v(x), x \right) \nonumber \\ &\quad + \nabla_x \mathcal S(\breve p, x) \Big|_{\breve p = p-\nabla_x v(x)} \label{eq:S_tilde} \end{align} Using \eqref{eq:S_tilde}, we can derive the following: \begin{align} \label{eq:p_condition_perturbed} &\nabla_x^T \widetilde{\mathcal S} (p, x) \mathcal V\left(p - \nabla_x v(x), x \right) \nonumber \\ &= - \mathcal V^T\left(p - \nabla_x v(x), x \right) \nabla_x^2 v(x) \mathcal V\left(p - \nabla_x v(x), x \right) \nonumber \\ &\quad + \nabla_x^T \mathcal S (\breve p, x) \mathcal V(\breve p, x) \Big|_{\breve p = p-\nabla_x v(x)} \nonumber \\ &\leq - \mathcal V^T\left(p - \nabla_x v(x), x \right) \nabla_x^2 v(x) \mathcal V\left(p - \nabla_x v(x), x \right) \end{align} where the inequality holds due to $\delta$-passivity of the unperturbed EDM. 
Since $v$ satisfies $z^T \nabla_x^2 v(x) z \geq \eta z^Tz$ for all $(x,z)$ in $\mathbb X \times \mathbb{TX}$ with $\eta > 0$, from \eqref{eq:p_condition_perturbed}, we can see that the following inequality holds: \begin{align*} &\nabla_x^T \widetilde{\mathcal S}(p, x) \mathcal V\left(p - \nabla_x v(x), x \right) \\ &\leq -\eta \mathcal V^T\left(p - \nabla_x v(x), x \right) \mathcal V \left(p - \nabla_x v(x), x \right) \end{align*} Hence, using Theorem~\ref{thm:passivity_edm}, we conclude that the perturbed EDM is strictly output $\delta$-passive and its storage function is given by $\widetilde{\mathcal S}$.\hfill $\blacksquare$ \subsection{Proofs of Propositions \ref{prop:stability_static_function} and \ref{prop:passivity_closed-loop_stability}} \label{proof:passivity_stability_equivalence} \subsubsection{Proof of Proposition \ref{prop:stability_static_function}} By taking the time-derivative of the strict storage function $\mathcal S$, we can derive the following relation: \begin{align} &\frac{\mathrm d }{\mathrm d t} \mathcal S(\mathcal F(x(t)), x(t)) \nonumber \\ &\leq \dot x^T(t) D \mathcal F(x(t)) \dot x(t) \nonumber \\ &\quad - \eta \mathcal V^T(\mathcal F(x(t)), x(t)) \mathcal V(\mathcal F(x(t)), x(t)) \end{align} Note that according to LaSalle's Theorem \cite{khalil2001_prentice_hall}, if either $1)$ or $2)$ of the statement holds, then the population state $x(t)$ of \eqref{eq:closed_loop_01} converges to the set of equilibrium points of \eqref{eq:closed_loop_01}. Besides, Lyapunov stability directly follows from the fact that, under $1)$ or $2)$ of the statement, $\frac{\mathrm d }{\mathrm d t} \mathcal S(\mathcal F(x(t)), x(t)) \leq 0$ holds.
\hfill $\blacksquare$ \subsubsection{Proof of Proposition \ref{prop:passivity_closed-loop_stability}} \label{proof:prop_passivity_closed-loop_stability} The sufficiency directly follows using Theorem \ref{thm:passivity_edm}, Proposition \ref{prop:storage_function_global_minimum}, and a strict storage function $\mathcal S$ of the EDM. To prove the necessity, we proceed with computing the derivative of the function $\mathcal E$ as follows: {\small \begin{align} &\frac{\mathrm d}{\mathrm dt} \mathcal E (p(t), x(t)) \nonumber \\ &= \nabla_p^T \mathcal S(p(t), x(t)) \dot p(t) + \nabla_x^T \mathcal S(p(t), x(t)) \dot x(t) - \widetilde{\mathcal F}^T(x(t)) \dot x(t) \nonumber \\ &= \nabla_p^T \mathcal S(p(t), x(t)) \left[ \widetilde{\mathcal F}(x(t)) - \frac{1}{n} \mathbf 1 \mathbf 1^T p(t) \right] \nonumber \\ & \quad + \nabla_x^T \mathcal S(p(t), x(t)) \mathcal V(p(t), x(t)) \nonumber \\ & \quad - \left[ \widetilde{\mathcal F}(x(t)) - \frac{1}{n} \mathbf 1 \mathbf 1^T p(t) \right]^T \mathcal V(p(t), x(t)) \nonumber \\ &= \left[ \nabla_p \mathcal S(p(t), x(t)) - \mathcal V(p(t), x(t)) \right]^T \left[ \widetilde{\mathcal F}(x(t)) - \frac{1}{n} \mathbf 1 \mathbf 1^T p(t) \right] \nonumber \\ & \quad + \nabla_x^T \mathcal S(p(t), x(t)) \mathcal V(p(t), x(t)) \label{eq:passivity_stability_01} \end{align}} where we use the fact that $\mathbf 1^T \mathcal V(p, x) = 0$ for all $(p,x)$ in $\mathbb R^n \times \mathbb X$. Let us select $\widetilde f(x) = \sum_{i=1}^n x_i \ln \frac{a_i}{x_i}$ and define $\overline p(t) = \frac{1}{n} \mathbf 1^T p(t)$. 
Then, from \eqref{eq:passivity_stability_01}, we can derive the following relation: \begin{align} & \frac{\mathrm d}{\mathrm dt} \mathcal E (p(t), x(t)) \nonumber \\ &= \left[ \nabla_p \mathcal S(p(t), x(t)) - \mathcal V(p(t), x(t)) \right]^T \begin{pmatrix} \ln \frac{a_1}{x_1}-1 - \overline p(t) \\ \vdots \\ \ln \frac{a_i}{x_i}-1 - \overline p(t) \\ \vdots \\ \ln \frac{a_n}{x_n}-1 - \overline p(t) \end{pmatrix} \nonumber \\ & \quad + \nabla_x^T \mathcal S(p(t), x(t)) \mathcal V(p(t), x(t)) \nonumber \end{align} Recall that each constant $a_i$ can be any positive real number, so $\ln a_i$ ranges over $(-\infty, \infty)$. Based on this observation, for any $(p(t), x(t))$ in $\mathbb R^n \times \mathrm{int}(\mathbb X)$, we can select the constants $\{a_i\}_{i=1}^n$ so that $\frac{\mathrm d}{\mathrm dt} \mathcal E (p(t), x(t)) > 0$ unless $\nabla_p \mathcal S(p(t), x(t)) = \mathcal V(p(t), x(t))$, which would contradict $2)$ of Proposition \ref{prop:passivity_closed-loop_stability}. Hence, by the continuity of $\nabla_p \mathcal S$ and $\mathcal V$, we have that $\nabla_p \mathcal S(p,x) = \mathcal V(p,x)$ for all $(p,x)$ in $\mathbb R^n \times \mathbb X$. According to \eqref{eq:ns} and $2)$ of Proposition \ref{prop:passivity_closed-loop_stability}, $\nabla_x^T \mathcal S(p, x) \mathcal V(p, x) = 0$ if and only if $\mathcal V(p,x) = \mathbf 0$. Using Theorem \ref{thm:passivity_edm}, we conclude that the EDM is $\delta$-passive with a strict storage function $\mathcal S$. \hfill $\blacksquare$ \end{appendix} \bibliographystyle{IEEEtran}
https://arxiv.org/abs/1803.07744
Passivity and Evolutionary Game Dynamics
This paper investigates an energy conservation and dissipation -- passivity -- aspect of dynamic models in evolutionary game theory. We define a notion of passivity using the state-space representation of the models, and we devise systematic methods to examine passivity and to identify properties of passive dynamic models. Based on the methods, we describe how passivity is connected to stability in population games and illustrate stability of passive dynamic models using numerical simulations.
https://arxiv.org/abs/1807.01450
Ramsey theory for hypergroups
In this paper, Ramsey theory for discrete hypergroups is introduced with emphasis on polynomial hypergroups, discrete orbit hypergroups and hypergroup deformations of semigroups. In this context, new notions of Ramsey principle for hypergroups and $\alpha$-Ramsey hypergroup, $0 \leq \alpha<1,$ are defined and studied.
\section{Introduction} Ramsey theory \cite{Ramsey1}, now a well-developed branch of combinatorics, has a long history dating back to 1892, starting with David Hilbert \cite{Hilbert}. For a discrete semigroup $(S, \cdot),$ the algebraic structure of the Stone-\v{C}ech compactification $\beta S$ of $S$ has been utilized to great advantage in the study of Ramsey theory; a good account of all this can be found in Hindman and Strauss \cite{Dona}. Dunkl \cite{Dunkl}, Jewett \cite{Jewett} and Spector \cite{Spector} independently created locally compact hypergroups under different names with the purpose of doing standard harmonic analysis. Hypergroups (also called convolution spaces, or convos for short) and semiconvos (semi-convolution spaces) \cite{Jewett} are probabilistic generalizations of groups and semigroups respectively. In \cite{KKA}, we presented a necessary and sufficient condition for a discrete semigroup to become a semiconvo or hypergroup after deforming the product at idempotents. This, together with polynomial hypergroups, served as the initial motivation for this work. Another motivation came from the development of configurations and configuration equations in the hypergroup setting \cite{Willson}. A paper of Golan and Tsaban \cite{Gili} gave a few more analogues or variants of the Ramsey principle on groups and semigroups. In this paper, we define the concepts of Ramsey hypergroups, almost-Ramsey hypergroups and almost-strong Ramsey hypergroups and prove that the double coset hypergroup of a commutative discrete hypergroup $K$ by a finite subgroup of the centre $Z(K)$ of $K,$ defined and studied by Ross \cite{Ken}, is a Ramsey hypergroup or almost-Ramsey hypergroup if $K$ is so. We also prove that no polynomial hypergroup is an almost-strong Ramsey hypergroup. Further, we show that the Chebyshev polynomial hypergroup of second kind is not an almost-Ramsey hypergroup. Next, we introduce variants like $\alpha$-Ramsey semiconvos and hypergroups.
We prove that every polynomial hypergroup is a $0$-Ramsey hypergroup. We show that the Chebyshev polynomial hypergroup of second kind can never be an $\alpha$-Ramsey hypergroup for $0<\alpha<1.$ We also prove that orbit space hypergroups arising from group actions on discrete groups are $0$-Ramsey semiconvos or hypergroups. Finally, we generalize this to the semigroup setting for the action of a finite group of automorphisms of a semigroup. For the sake of convenience, we give the relevant basics of hypergroups and Ramsey theory for semigroups in the next section. Section 3 contains our main results on hypergroup versions of the Ramsey principles and includes motivation with examples. The final Section 4 is devoted to the variants like $\alpha$-Ramsey hypergroups, $0 \leq \alpha<1$. Let $\mathbb{Z}_+= \mathbb{N} \cup \{0\}$ and let $ \times$ and $+$ be the usual multiplication and addition respectively. For a set $A,$ the power set of $A$ will be denoted by $\mathcal{P}(A).$ For notational convenience, we take empty sums to be zero and empty products to be one. \section{Relevant Basics of Ramsey theory and hypergroups} We give some basics of Ramsey theory and hypergroups which are relevant to our study. \subsection{Basics of Ramsey theory for semigroups} \label{rs} To begin with, we confine our attention to the following aspects of Ramsey theory, viz., Theorems \ref{fsum} and \ref{finitepro} below, which served as a motivation for this work. We refer to \cite{Hindman} and \cite{Dona} or any other suitable sources like \cite{Gili} for more details.
Given a sequence $\langle x_n\rangle_{n=1}^\infty$ in $\mathbb{N},$\, $$\text{FS}(\langle x_n\rangle_{n=1}^\infty)= \{ \sum_{n \in F}x_n: F\, \text{is a non-empty finite subset of}\,\, \mathbb{N} \}.$$ \begin{thm}{\bf (Hindman)} \cite[Theorem 3.1 (Finite Sums Theorem)]{Hindman} \label{fsum} Let $r \in \mathbb{N}$ and let $\mathbb{N} = \bigcup_{i=1}^r A_i$ be a partition of $\mathbb{N}.$ There exist $i \in \{1,2,...,r\}$ and a sequence $\langle x_n\rangle_{n=1}^\infty$ in $\mathbb{N}$ such that FS$(\langle x_n\rangle_{n=1}^\infty) \subset A_i.$ \end{thm} Any partition of a non-empty set $X = \cup_{i=1}^r X_i$ is usually called a {\it finite colouring} (or, in short, a {\it colouring}, at times) of $X;$ and $X$ is said to be a coloured set. A subset $A$ of a coloured set $X$ is {\it monochromatic} if all members of $A$ have the same colour. $A$ is {\it almost-monochromatic} if all but finitely many members of $A$ have the same colour. The Finite Sums Theorem is often also stated as: for any finite colouring of $\mathbb{N}$ there exists a sequence $\langle x_n\rangle_{n=1}^\infty$ in $\mathbb{N}$ such that all the finite sums FS$(\langle x_n\rangle_{n=1}^\infty)$ are monochromatic.\\ Let $(S, \cdot)$ be a semigroup and $\langle x_n\rangle_{n=1}^\infty$ a sequence in $S.$ If $S$ is commutative, we set \, FP$(\langle x_n\rangle_{n=1}^\infty)= \{ \prod_{n \in F}x_n: F\, \text{is a non-empty finite subset of}\,\, \mathbb{N} \}.$ If $S$ is non-commutative then FP$(\langle x_n\rangle_{n=1}^\infty)= \{ \prod_{j=1}^k x_{n_j}: k \in \mathbb{N},\, 1 \leq n_1< n_2< \cdots < n_k\}.$ Now, we replace $(\mathbb{N},+)$ by $(S, \cdot)$ and finite sums (FS) by finite products (FP) in Theorem \ref{fsum} above. Of course, if $(S, \cdot)$ has an idempotent $s$ then, by taking the sequence $\langle x_n \rangle_{n=1}^\infty$ with $x_n=s$ for all $n \in \mathbb{N}$ and the set $A_i$ from the partition containing $s,$ the desired analogue of Theorem \ref{fsum} becomes immediately available.
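For a concrete feel for the FS operator, here is a toy computation (the function name is ours; the snippet merely unfolds the definition of FS for a finite initial segment, it is not part of any proof):

```python
from itertools import combinations

def finite_sums(xs):
    """FS(<x_n>): sums over all non-empty finite subsets of the (finite) tuple xs."""
    return {sum(c) for r in range(1, len(xs) + 1) for c in combinations(xs, r)}

# For 1, 2, 4, 8 the finite sums are exactly 1, 2, ..., 15,
# by uniqueness of binary expansions.
print(sorted(finite_sums([1, 2, 4, 8])))
```

Note that for the subsequence $2, 4, 8$ every finite sum is even, so FS$(\langle 2,4,8\rangle)$ is monochromatic for the parity colouring of $\mathbb{N}$ -- a finite shadow of the conclusion of Theorem \ref{fsum}.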
\begin{thm} \cite[Corollary 5.9]{Dona} \label{finitepro} Let $S$ be a semigroup, let $r \in \mathbb{N}$ and let $S = \bigcup_{i=1}^r A_i$ be a partition of $S.$ There exist $i \in \{1,2, \ldots,r\}$ and a sequence $\langle x_n\rangle_{n=1}^\infty$ in $S$ such that FP$(\langle x_n\rangle_{n=1}^\infty) \subset A_i.$ \end{thm} It is implicit in the proof in \cite{Hindman} of Theorem \ref{fsum} above that a sequence $\langle x_n \rangle_{n=1}^\infty$ with distinct terms can be chosen. For the sake of convenience, we call a sequence $\langle z_n \rangle_{n=1}^\infty$ in any non-empty set $T$ {\it injective} if the $z_n$'s are all distinct. The question whether an injective sequence can be chosen in Theorem \ref{finitepro} has been considered in the literature. To give an idea, we will give some extracts from \cite{Gili} after presenting some basic concepts and results. We first state a useful property of semigroups. \begin{prop}{\bf (Dichotomy)} \label{Dichotomy} Let $S$ be a semigroup. For $m \in S,$ either the powers $m^j, \,j =1,2, \ldots,$ are all distinct or $m^j$ is an idempotent for some $j \in \mathbb{N}.$ \end{prop} We will say $m$ is of {\it infinite order} in the first case and of {\it finite order} in the second case; this is in accordance with the corresponding concepts in group theory. Let $(S,\cdot)$ be a semigroup. For $m,n \in S$ we usually write $m\cdot n = mn.$ Let $E(S)$ denote the set of idempotent elements in $S$, i.e., the set of elements $n \in S$ such that $n^2=n.$ We write $E_0(S)=E(S)\backslash \{e\}$ in case $S$ has an identity $e.$ Let $\widetilde{S}= S \backslash E(S).$ For any element $x \in S$ and any subset $A \subset S,$ we set $x^{-1}A:= \{s \in S: xs \in A\}.$ It follows from Proposition \ref{Dichotomy} that every finite semigroup has an idempotent.
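The Dichotomy can be watched in a small finite semigroup; a sketch (function names ours), using the multiplicative semigroup $(\mathbb{Z}_{12}, \times)$:

```python
def idempotent_power(m, op):
    """In a finite semigroup the powers of m eventually cycle, and the cycle
    contains an idempotent; return one such power of m."""
    powers, x = [], m
    while x not in powers:        # collect m, m^2, m^3, ... until they repeat
        powers.append(x)
        x = op(x, m)
    for p in powers:              # some listed power is idempotent
        if op(p, p) == p:
            return p

mul12 = lambda a, b: (a * b) % 12   # the semigroup (Z_12, x)
print(idempotent_power(2, mul12))   # 4, since 4 * 4 = 16 = 4 (mod 12)
```

Here $2$ has finite order: its powers are $2, 4, 8, 4, 8, \ldots$ and $2^2 = 4$ is idempotent modulo $12$.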
\begin{rem} \label{Remark1} \begin{itemize} \item[(i)] Clearly, for an element $s \in S$ of infinite order, the set $\{s^n:n \in \mathbb{N}\}$ is a subsemigroup of $S$ and thus $S$ contains a copy of $(\mathbb{N}, +).$ This does happen if \begin{itemize} \item[(a)] $S$ has no idempotent, or \item[(b)] $\widetilde{S}$ is a subsemigroup of $S$ (then it happens for $\widetilde{S}$ and therefore for $S$ too). Further, we may note that for $S= (\mathbb{Z}_+,+),$ $\widetilde{S}=\mathbb{N}.$ \end{itemize} \item[(ii)] On the other hand, if $s$ is an element of $S$ of finite order, then $s$ generates a finite subsemigroup of $S$ containing an idempotent. \end{itemize} \end{rem} \begin{prop} \cite[Theorem 4.28]{Dona} Let $S$ be an infinite semigroup. Then $S^*:= \beta S \backslash S$ is a subsemigroup of $\beta S$ if and only if for any finite subset $F$ of $S$ and for any infinite subset $A$ of $S$ there exists a finite subset $C$ of $A$ such that $\cap_{x \in C} x^{-1}F$ is finite. \end{prop} This is equivalent to saying that $S$ is a moving semigroup as defined by Golan and Tsaban \cite[pg. 112]{Gili} in the following way. A semigroup $S$ is called {\it moving} if it is infinite and, for each infinite $A \subset S$ and each finite $F \subset S,$ there exist $x_1,x_2, \ldots, x_k \in A$ such that $\{x_1s,x_2s,\ldots, x_ks\} \nsubseteq F$ for all but finitely many $s \in S.$ It is clear that every right cancellative infinite semigroup is moving. In particular, the semigroup $(\mathbb{Z}_+,+)$ and every infinite group are moving. \begin{thm} \label{GGH} {\bf (Galvin-Glazer-Hindman)} \cite[Theorem 1.2]{Gili}. Let $S$ be a moving semigroup. For each finite colouring of $S,$ there is an injective sequence $\langle x_n \rangle_{n=1}^\infty$ such that FP$(\langle x_n\rangle_{n=1}^\infty)$ is monochromatic. \end{thm} \begin{defn} \begin{itemize} \item[(i)] An infinite semigroup $S$ is called a {\it Ramsey semigroup} if the conclusion of the Galvin-Glazer-Hindman Theorem \ref{GGH} holds for $S$.
\item[(ii)] A group which is a Ramsey semigroup will be called a {\it Ramsey group}. \end{itemize} \end{defn} \begin{rem} \label{Smee} \begin{itemize} \item[(i)] For a finite subset $F$ of $S,$ we can choose a sequence $\langle x_n \rangle_{n=1}^\infty$ as in Theorem \ref{GGH} to be in $S \backslash F$ by replacing the sequence by a subsequence if needed. \item[(ii)]Let ${\bf x}=\langle x_n\rangle_{n=1}^\infty$ be a sequence in $S$ with infinite range $B.$ Clearly, $S$ is infinite. Then there exists an injective sequence ${\bf y }= \langle y_j\rangle_{j=1}^\infty = \langle x_{n_j}\rangle_{j=1}^\infty,$ a subsequence of ${\bf x},$ so that FP($\langle y_j\rangle_{j=1}^\infty) \subset $ FP$(\langle x_n\rangle_{n=1}^\infty).$ So the requirement that for any partition $\{C_i\}_{i=1}^r$ of $S$ there exist an $i \in \{1,2, \ldots,r\}$ and a sequence $\langle x_n\rangle_{n=1}^\infty$ with infinite range such that FP$(\langle x_n\rangle_{n=1}^\infty)\subset C_i,$ is equivalent to the existence of an $i \in \{1,2, \ldots,r\}$ and an injective sequence $\langle y_n\rangle_{n=1}^\infty$ such that FP$(\langle y_n\rangle_{n=1}^\infty)\subset C_i.$ \end{itemize} \end{rem} \begin{exmp} \label{copyN} \begin{itemize} \item[(i)] Let $(S,<, \cdot)$ be an infinite ``max" semigroup with $m \cdot n= \text{max}\{m,n\}.$ Then $(S,<,\cdot)$ is a Ramsey semigroup. \item[(ii)] Galvin-Glazer-Hindman Theorem \ref{GGH} can be restated as: every moving semigroup is a Ramsey semigroup. In particular, every infinite group is a Ramsey group. \item[(iii)] If an infinite semigroup $S$ contains a copy of $(\mathbb{N}, +)$ then $S$ is a Ramsey semigroup. This follows immediately from the observation that a partition of $S$ induces a partition of $\mathbb{N}$. 
Further, we can choose a required injective sequence from that copy of $(\mathbb{N},+).$ \begin{itemize} \item[(a)] It follows from Remark \ref{Remark1} (i)(b) that if $(\widetilde{S}, \cdot)$ is a subsemigroup of $(S, \cdot)$ then $\widetilde{S}$ contains a copy of $(\mathbb{N}, +)$ and therefore, both $(S,\cdot)$ and $(\widetilde{S}, \cdot)$ are Ramsey semigroups. \item[(b)] Thus, roughly speaking, it is enough to consider periodic semigroups $S,$ in the sense that every element of $S$ has finite order. \end{itemize} \item[(iv)] It is known that the Galvin-Glazer-Hindman Theorem \ref{GGH} cannot be extended to arbitrary semigroups, as the following example \cite[Example 2.1]{Gili} of Golan and Tsaban shows: Let $k \in \mathbb{N}$ and let $S_k:=\{0, 1,2,\ldots, k-1\} \cup k\mathbb{N}+1$ be the commutative semigroup with the operation of addition modulo $k.$ It can easily be seen that $S_k$ is not a Ramsey semigroup by assigning each $s \in S_k$ the colour $s \,\text{mod}\,k.$ \end{itemize} \end{exmp} \begin{thm} \label{alram} \cite[Theorem 2.3]{Gili}. Let $S$ be an infinite semigroup. For each finite colouring of $S,$ there exist a sequence $\langle x_n \rangle_{n=1}^\infty$ with distinct terms and a finite subset $F$ of FP$(\langle x_n\rangle_{n=1}^\infty)$ such that FP$(\langle x_n\rangle_{n=1}^\infty) \backslash F$ is monochromatic. \end{thm} \begin{lem} \cite[Lemma 3.2]{Gili}. For each finite colouring of $\bigoplus_n \mathbb{Z}_2,$ there is an infinite subgroup $H$ of $\bigoplus_n \mathbb{Z}_2$ such that $H \backslash \{0\}$ is monochromatic. \end{lem} Consequently, for each finite colouring of the group $\bigoplus_n \mathbb{Z}_2$ there is an infinite almost-monochromatic subgroup $H$ of $\bigoplus_n \mathbb{Z}_2$. Golan and Tsaban \cite{Gili} studied this sort of condition for semigroups and groups, which we give below.
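Before that, the colouring $s \bmod k$ of $S_k$ from Example \ref{copyN} (iv) can be probed computationally; a small sketch (helper name ours), using the fact that the colouring is compatible with addition modulo $k$:

```python
from itertools import combinations

def fp_colours(xs, k):
    """Colours (residues mod k) of all finite products -- here sums modulo k --
    formed from the terms of xs inside S_k."""
    singles = {x % k for x in xs}                      # colours of the terms themselves
    products = {sum(c) % k                             # sums mod k of >= 2 terms
                for r in range(2, len(xs) + 1) for c in combinations(xs, r)}
    return singles | products

# Any injective sequence in S_3 has all but finitely many terms in 3N + 1
# (colour 1), but a sum of two such terms has colour 2 -- never monochromatic:
print(fp_colours([4, 7, 10], 3))
```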
\begin{defn} \begin{itemize} \item[(i)] An infinite semigroup $S$ is called an {\it almost-strong Ramsey semigroup} if given any finite colouring of $S$ there exists an infinite almost-monochromatic subsemigroup $T$ of $S.$ \item[(ii)] An {\it almost-strong Ramsey group} can be defined by replacing semigroup and subsemigroup by group and subgroup respectively in (i) above. \end{itemize} \end{defn} \begin{exmp} \label{gt} \begin{itemize} \item[(i)] Consider an infinite semigroup (group) $S$ containing an infinite subsemigroup (subgroup) $T$ of $S.$ \begin{itemize} \item[(a)] If $T$ is an almost-strong Ramsey semigroup (group) then $S$ is an almost-strong Ramsey semigroup (group). In particular, if $\bigoplus_n \mathbb{Z}_2$ is contained in a group $G$ as a subgroup of $G,$ then $G$ is an almost-strong Ramsey group. \item[(b)] If $S \backslash T$ is finite then the converse of the first part of (a) is true. \end{itemize} \item[(ii)] Let $k \in \mathbb{N}.$ Consider the commutative semigroup $S_k$ as in Example \ref{copyN} (iv) above. Then $S_k$ is an almost-strong Ramsey semigroup. To see this, take any colouring $\{A_i\}_{i=1}^r$ of $S_k.$ Consider the sets $B_i:=A_i \cap (k\mathbb{N}+1), 1 \leq i \leq r.$ Set $\Lambda= \{i \in \{ 1,2, \ldots,r\}: B_i \neq \emptyset\};$ then $\{B_i\}_{i \in \Lambda}$ gives a colouring of $k\mathbb{N}+1.$ Now, note that at least one $B_i$ has to be an infinite subset $C$ of $k\mathbb{N}+1.$ Consider $H= \{0, 1,2,\ldots,k-1\} \cup C.$ Then $H$ is an infinite subsemigroup of $S_k$ and $H \backslash \{0, 1, 2,\ldots,k-1\}$ is monochromatic. Therefore, $H$ is almost-monochromatic. \item[(iii)] It is well known that $(\mathbb{N}, +)$ is not an almost-strong Ramsey semigroup \cite[Lemma 3.8 (folklore)]{Gili}.
It can be seen by considering the 2-colouring of $\mathbb{N}$ given by $${\color{red}1} ~{\color{blue}2~ 3}~ {\color{red} 4~ 5~ 6 }~ {\color{blue} 7 ~8~ 9~ 10 }~{\color{red} 11~ 12~ 13~ 14~ 15}~ {\color{blue} 16 ~17~ 18~ 19~ 20~ 21}~ {\color{red} 22 \ldots}$$ where the lengths of the intervals of elements of identical colours are $1, 2, 3, \ldots$. \item[(iv)] An infinite group $G$ is a {\it Tarski Monster} if, for some large prime $p,$ all proper subgroups of $G$ have cardinality $p.$ Olshanskii \cite{Ola} proved that Tarski Monsters exist for all large enough primes $p$ (see also \cite{Adi}). Clearly, Tarski Monsters are not almost-strong Ramsey groups. \end{itemize} \end{exmp} \subsection{Basics of hypergroups} Here we come to the relevant basics of hypergroups. We refer to \cite{Dunkl} and \cite{Jewett} or any other suitable sources. In this paper we are mostly concerned with commutative discrete semiconvos or hypergroups. It is convenient to write the definition in terms of a minimal number of axioms; for instance, see \cite[Chapter 1]{Lasser} and \cite{Alagh}. Let $K$ be a discrete space. Let $M(K)$ be the space of complex-valued regular Borel measures on $K.$ Let $M_F(K)$ and $M_p(K)$ denote the subsets of $M(K)$ consisting of measures with finite support and probability measures respectively. Let $M_{F,p}(K)= M_F(K) \cap M_p(K).$ At times, we do not distinguish between $m$ and $\delta_m$ for any $m \in K$ because $m \mapsto \delta_m$ is an embedding of $K$ into $M_p(K).$ Here $\delta_m$ is the unit point mass at $m,$ i.e., the Dirac delta measure at $m.$ We begin with a map $* : K \times K \rightarrow M_{F,p}(K)$. Simple computations enable us to extend `$*$' to a bilinear map, called {\it convolution} and denoted by `$*$' again, from $M(K) \times M(K)$ to $M(K).$ At times, for certain $n \in K$ we will write $q_n$ for $ \delta_n* \delta_n$ and $Q_n$ for its support.
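The finitely supported probability measures in $M_{F,p}(K)$ and the bilinear extension of `$*$' can be modelled concretely as dictionaries point $\mapsto$ mass; a sketch under our own naming, taking as point convolution the Chebyshev-of-first-kind rule of Example \ref{2.4} below:

```python
def convolve(mu, nu, star):
    """Bilinear extension of a point convolution star(m, n) -> {point: mass}
    to finitely supported measures mu, nu (dicts point -> mass)."""
    out = {}
    for m, a in mu.items():
        for n, b in nu.items():
            for k, w in star(m, n).items():
                out[k] = out.get(k, 0.0) + a * b * w
    return out

def cp1(m, n):
    """delta_m * delta_n = (1/2) delta_{|m-n|} + (1/2) delta_{m+n}."""
    out = {}
    for k in (abs(m - n), m + n):   # the two keys coincide when m = n = 0
        out[k] = out.get(k, 0.0) + 0.5
    return out

d1 = {1: 1.0}                       # the point mass delta_1
print(convolve(d1, d1, cp1))        # {0: 0.5, 2: 0.5}
```

Convolving point masses again distributes mass bilinearly, and the total mass stays $1$, as it must for members of $M_{F,p}(\mathbb{Z}_+)$.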
A bijective map $\vee: m \mapsto \check{m}$ from $K$ to $K$ is called an {\it involution} if $\check{\check{m}}=m.$ We can extend it to $ M(K)$ in a natural way. \begin{defn} \label{semiconvo} A pair $(K,*)$ is called a {\it discrete semiconvo} if the following conditions hold. \begin{itemize} \item The map $* : K \times K \rightarrow M_{F,p}(K)$ satisfies the associativity condition $$ (\delta_m*\delta_n)*\delta_k = \delta_m* (\delta_n*\delta_k)\,\,\, \text{for all}\, m, n, k \in K.$$ \item There exists a (necessarily unique) element $e \in K$ such that $$ \delta_m*\delta_e = \delta_e*\delta_m = \delta_m \,\,\,\,\text{for all}\,\, m \in K.$$ \end{itemize}\end{defn} A discrete semiconvo $(K,*)$ is called {\it commutative} if $\delta_m*\delta_n= \delta_n*\delta_m$ for all $m,n \in K.$ \begin{defn} A triplet $(K,*, \vee)$ is called a {\it discrete hypergroup} if \begin{itemize} \item $(K,*)$ is a discrete semiconvo, \item $\vee$ is an involution on $K$ that satisfies \begin{itemize} \item[(i)] $(\delta_m*\delta_n \check{)}= \delta_{\check{n}}*\delta_{\check{m}}$ for all $m,n \in K$ and \item[(ii)] $e \in \textnormal{spt}(\delta_m*\delta_{\check{n}})$ if and only if $m=n.$ \end{itemize} \end{itemize} \end{defn} A discrete hypergroup $(K,*, \vee)$ is called {\it hermitian} if the involution on $K$ is the identity map, i.e., $\check{m}=m$ for all $m \in K.$ Note that a hermitian discrete hypergroup is commutative. We write $(K,*)$ or $(K,*,\vee)$ simply as $K$ when no confusion can arise. Let $K$ be a commutative discrete hypergroup.
For a complex-valued function $\chi$ defined on $K,$ we write $\check{\chi}(m):= \overline{\chi(\check{m})}$ and $\chi(m*n) = \int_K \chi\, d(\delta_m*\delta_n)$ for $m,n \in K.$ Now, define two dual objects of $K:$ $$ \mathcal{X}_b(K)= \left\lbrace\chi \in \ell^\infty(K): \chi \neq 0, \chi(m*n)= \chi(m) \chi(n) \, \text{for all}\,\, m,n \in K \right\rbrace,$$ $$\widehat{K}= \left\lbrace \chi \in \mathcal{X}_b(K) : \check{\chi}= \chi,\,\mbox{i.e.,}\, \chi(\check{m})= \overline{\chi(m)}\,\, \text{for all} \,m \in K \right\rbrace.$$ Each $\chi \in \mathcal{X}_b(K)$ is called a {\it character} and each $\chi \in \widehat{K}$ is called a {\it symmetric character}. With the topology of pointwise convergence, $\mathcal{X}_b(K)$ and $\widehat{K}$ become compact Hausdorff spaces. In contrast to the group case, these two dual objects need not be the same and also need not have a hypergroup structure. \begin{defn}{\bf Center of hypergroup.} Let $K$ be a discrete hypergroup. Dunkl \cite[1.6]{Dunkl} defined the center $Z(K)$ of $K$ as the set of all $x$ in $K$ such that $\textnormal{spt}(\delta_x*\delta_y)$ is a singleton for each $y \in K.$ Jewett defined the maximum subgroup of $K$ \cite[10.4]{Jewett} as the set of all $x$ in $K$ such that $\textnormal{spt}(\delta_x*\delta_{\check{x}})=\{e\}$ and showed that it is exactly the same as $Z(K)$ \cite[10.4B]{Jewett}. \end{defn} Now we give some examples of hypergroups. \begin{exmp} {\bf Polynomial hypergroups:} \label{2.4} This is a wide and important class of hermitian discrete hypergroups in which hypergroup structures are defined on $\mathbb{Z}_+.$ This class contains Chebyshev polynomial hypergroups of first kind, Chebyshev polynomial hypergroups of second kind, Gegenbauer hypergroups, Hermite hypergroups, Jacobi hypergroups, Laguerre hypergroups, etc. For more details see \cite{Dunkl, Lasser1, Bloom,Lasser}. \begin{thm} \cite[Theorem 5.3 and Lemma 5.1]{Lasser}.
\label{gpositive} Let ${\bf P}=(P_n)_{n \in \mathbb{Z}_+}$ be an orthogonal polynomial system such that the linearization coefficients $g(n,m;k)$ occurring in $$P_n(x) P_m(x)= \sum_{k=|n-m|}^{n+m} g(n,m;k) P_k(x)$$ satisfy $$g(n,m;k) \geq 0,\,\,\,\,\,n,m \in \mathbb{Z}_+, \,\,\,\,\, |n-m| \leq k \leq n+m.$$ Let $*: \mathbb{Z}_+ \times \mathbb{Z}_+ \rightarrow M_{F,p}(\mathbb{Z}_+ )$ be given by $$\delta_n*\delta_m= \sum_{k=|n-m|}^{n+m} g(n,m;k) \delta_k.$$ Then \begin{itemize} \item[(i)] $K_{\mathbf{P}}=(\mathbb{Z}_+,*)$ is a hermitian discrete hypergroup, called the polynomial hypergroup (related to $ \mathbf{P}=(P_n)_{n \in \mathbb{Z}_+})$. \item[(ii)] $g(n,m;|n-m|)>0$ and $g(n,m;n+m)>0.$ \end{itemize} \end{thm} This immediately gives the following corollary. \begin{cor} \label{subsem} For a subhypergroup $(L, *)$ of $(K_{\bf P},*),$ $(L,+)$ is also a subsemigroup of $(\mathbb{Z}_+,+).$ \end{cor} For illustration, we describe CP1, the {\it Chebyshev polynomial hypergroup of first kind}, which arises from the Chebyshev polynomials of first kind. These polynomials give rise to the following convolution `$*$' on $\mathbb{Z}_+:$ $$\delta_m*\delta_n= \frac{1}{2} \delta_{|n-m|}+\frac{1}{2} \delta_{n+m} \,\,\,\text{for}\,\, m,n \in \mathbb{Z}_+.$$ For any $k \in \mathbb{N},$ $K= k \mathbb{Z}_+$ with $*|_{K \times K}$ is a discrete hypergroup in its own right. \end{exmp} The {\it Chebyshev polynomial hypergroup of second kind} $(\mathbb{Z}_+, *),$ say CP2, arises from the Chebyshev polynomials of second kind and the convolution `$*$' on $\mathbb{Z}_+$ is given by $$ \delta_m*\delta_n= \sum_{k=0}^{\text{min}\{m ,n\}} \frac{|m-n|+2k+1}{(m+1)(n+1)} \delta_{|m-n|+2k}.$$ Further, $K= 2 \mathbb{Z}_+$ with $*|_{K \times K}$ is a discrete hypergroup in its own right. \begin{exmp} \label{Dk} {\bf Dunkl-Ramirez discrete hypergroups:} Let $0 < a \leq \frac{1}{2}.$ Dunkl and Ramirez \cite{Ramirez} defined a convolution structure `$*$' on $\mathbb{Z}_+$ to make it a hermitian hypergroup $K$.
The convolution `$*$' is defined by \begin{equation*} \delta_m*\delta_n= \begin{cases} \delta_{\text{max}\{m,n\}} & m \neq n, \\ \frac{a^n}{1-a} \delta_0+ \sum_{k=1}^{n-1} a^{n-k} \delta_k+\frac{1-2a}{1-a} \delta_n & m=n \geq 2 \end{cases} \end{equation*} with $\delta_0*\delta_0=\delta_0$ and $\delta_1*\delta_1=\frac{a}{1-a} \delta_0+\frac{1-2a}{1-a} \delta_1.$ The dual $H_a (=\widehat{K})$ of $K$ is a (hermitian) countable compact hypergroup and it can be identified with the one-point compactification $\mathbb{Z}_+^*=\{0,1,2, \ldots, \infty\}$ of $\mathbb{Z}_+.$ For a prime $p,$ let $\Delta_p$ be the ring of $p$-adic integers and let $\mathcal{W}$ be its group of units, that is, $\{x=x_0+x_1p+ \cdots+ x_np^n+ \cdots \in \Delta_p : x_j = 0,1, \ldots,p-1 \, \text{for}\, j \geq 0 \, \text{and} \, x_0 \neq 0 \}$. For $a=\frac{1}{p},$ $H_a$ derives its structure from the $\mathcal{W}$-orbits of the action of $\mathcal{W}$ on $\Delta_p$ by multiplication in $\Delta_p.$ Further, in this case we also have $\widehat{H_a}=K.$ \end{exmp} \begin{exmp} {\bf Hypergroup deformations of semigroups at idempotents}. \label{KHD} Motivated by the discrete Dunkl-Ramirez hypergroups as in Example \ref{Dk} above, the authors attempted to make a ``max" semigroup $(S, <, \cdot)$ with the discrete topology into a hermitian discrete hypergroup by deforming the product on the diagonal. We showed that this can be done if and only if either $S$ is finite or $S$ is isomorphic to $(\mathbb{Z}_+,<,\text{max}).$ Next, we presented a necessary and sufficient condition for a discrete semigroup to become a semiconvo or hypergroup after deforming the product at idempotents. The details are given in \cite{KKA}; here, we give an idea without proofs.
\begin{itemize} \item[(i)] In \cite[Section 3]{KKA}, we deformed the semigroup product of a discrete ``$\text{max}$" semigroup $(S,<, \cdot)$ along the diagonal by replacing the semigroup product `$\cdot$' by convolution product `$*$' as follows: \begin{eqnarray*} \delta_m*\delta_n = \delta_n*\delta_m &=& \delta_{m\cdot n} (= \delta_{\text{max}\{m,n\}}) \,\,\,\,\mbox{for}\, m,n \in S\, \text{with}\, m \neq n, \, \text{or},\, m=n=e,\\ \delta_n*\delta_n &=& q_n \,\,\,\,\,\,\,\,\,\,\, \text{for} \, n \in S\backslash \{e\}. \end{eqnarray*} Here, $q_n$ is a probability measure on $S$ with finite support $Q_n$ containing $e$ and has the form $\sum_{j \in Q_n} q_n(j) \delta_j$ with $q_n(j)>0$ for $j \in Q_n$ and $\sum_{j \in Q_n} q_n(j)=1.$ We gave a set of necessary and sufficient conditions for $(S,*)$ to become a hermitian discrete hypergroup. \begin{thm} \label{Ordered} \cite[Theorem 3.2]{KKA} Let $(S,<, \cdot)$ be a discrete (commutative) ``max" semigroup with identity $e$ and ` $*$' and other related symbols as above. Then $(S,*)$ is a hermitian discrete hypergroup if and only if the following conditions hold. 
\begin{itemize} \item[(i)] Either $S$ is finite or $(S,<,\cdot)$ is isomorphic to $(\mathbb{Z}_+,<, \text{max}).$ \item[(ii)] For $n \in S \backslash \{e\},$ we have $\mathcal{L}_n \subset Q_n \subset \mathcal{L}_n \cup \{n\},$ where $\mathcal{L}_n:= \{j \in S: j<n\}.$ \item[(iii)] If $\#S > 2,$ then for $ e \neq m <n $ in $S,$ we have \begin{itemize} \item[(a)] $q_n(e)=q_n(m)q_m(e)$ and \item[(b)] $q_n(e) \left( 1+ \sum_{e \neq k \in \mathcal{L}_n} \frac{1}{q_k(e)} \right) \leq 1;$ \end{itemize} or, equivalently, with $v_n= \frac{1}{q_n(e)}$ for $n \in S,$ \item[(iii)'] If $\#S > 2,$ then for $ e \neq m <n $ in $S,$ we have \begin{itemize} \item[(a)]$q_n(m)= \frac{v_m}{v_n}$ and \item[(b)] $\sum_{k \in \mathcal{L}_n}v_k \leq v_n.$ \end{itemize} \end{itemize} \end{thm} \item[(ii)] In \cite[Section 4]{KKA}, we try to make $(S,\cdot)$ into a commutative discrete semiconvo or discrete hypergroup $(S,*)$ by deforming the product on $\mathcal{D}_{E_0(S)}:=\{(m,m): m \in E_0(S)\},$ the diagonal of $E_0(S),$ or the idempotent diagonal of $S,$ say. Let $(S,\cdot)$ be a semigroup with identity $e$. A non-empty subset $T$ of $S$ is called an {\it ideal} in $S$ if $TS \subset T$ and $ST \subset T,$ where $TS := \{ts \,: t \in T, s\in S\}$ and similarly for $ST.$ Let $G(S)$ denote the set $\{g \in S: \exists\, h \in S \,\,\text{with}\,\, gh=hg=e\}.$ Then $G(S)$ is a group contained in $S$ called the {\it maximal group}. 
Set $G_1(S)= \{g \in G(S): \, gm=m \, \text{for all} \, m \in E_0(S)\}.$ Clearly, $G_1(S)$ is a subgroup of $G(S).$ Note that members of $G_1(S)$ act on $E_0(S)$ as the identity via left multiplication of $(S, \cdot).$ $(S, \cdot)$ is called {\it action-free} if $G_1(S)= \{e\}.$ For $n \in E(S),$ let $q_n$ be a probability measure on $S$ with finite support $Q_n$ containing $e.$ We express $q_n= \sum_{j \in Q_n} q_n(j) \delta_j$ with $q_n(j)>0$ for $j \in Q_n$ and $\sum_{j \in Q_n} q_n(j)=1.$ We look for necessary and sufficient conditions on $S$ and $ \{ q_n: n \in E_0(S)\}$ such that $(S, *)$ with `$*$' defined below, is a commutative discrete semiconvo: \begin{eqnarray*} \delta_m*\delta_n = \delta_n*\delta_m & = &\delta_{mn} \,\,\,\,\mbox{for}\,\, (m,n) \in S\times S \backslash \mathcal{D}_{E_0(S)}, \\ \delta_n*\delta_n &= &q_n \,\,\,\,\mbox{for}\,\, n \in E_0(S). \end{eqnarray*} \begin{thm} \label{main} \cite[Theorem 3.8]{KKA} Let $(S,\cdot)$ be a commutative discrete semigroup with identity $e$ such that $S$ is action-free. Let `$*$' and other related notation and concepts be as above. Then $(S, *)$ is a commutative discrete semiconvo if and only if the following conditions hold. 
\begin{itemize} \item[(i)]$E(S)$ is finite or $E(S)$ is isomorphic to $(\mathbb{Z}_+,<,\text{max}),$ where the order on $E(S)$ is defined by $m<n$ if $mn=n \neq m.$ \item[(ii)] $(\widetilde{S}, \cdot)$ is an ideal of $(S, \cdot).$ \item[(iii)] $Q_n \subset E(S)$ for $n \in E_0(S).$ \item[(iv)] If $n \in E_0(S)$ and $m \in \widetilde{S}$ then $Q_n\cdot m =\{nm\}.$ \item[(v)] For $n \in E_0(S),$ we have $\mathcal{L}_n \subset Q_n \subset \mathcal{L}_n \cup \{n\},$ where for $n \in E(S),$ $\mathcal{L}_n := \{j \in E(S): j<n\}.$ \item[(vi)]If $\#E(S) > 2,$ then for $ e \neq m <n $ in $E(S),$ we have the following: \begin{itemize} \item[($\alpha$)] $q_n(e)=q_n(m) q_m(e)$ and \item[($\beta$)] $q_n(e) \left( 1+ \sum_{e \neq k \in \mathcal{L}_n} \frac{1}{q_k(e)} \right) \leq 1.$ \end{itemize} \end{itemize} Further, under these conditions, $E(S)$ is a hermitian discrete hypergroup. Moreover, $S$ is a hermitian discrete hypergroup if and only if $S = E(S).$ \end{thm} \end{itemize} \end{exmp} \begin{exmp} \label{ocd} Here, we rewrite some of the results of \cite[Section 8]{Jewett} for the case of discrete groups. \begin{defn} \cite[Section 8.1]{Jewett} Let $G$ be a discrete group. A mapping $\varphi: G \rightarrow G $ is called { \it affine} if there exist $y \in G$ and an automorphism $\psi$ of $G$ such that $\varphi(x)=\psi(x)\,y.$ An {\it affine action} of a discrete group $H$ on a discrete group $G$ is an action $(x,s) \mapsto x^s$ for which each mapping $x \mapsto x^s$ from $G$ to $G$ is affine. \end{defn} \begin{itemize} \item[(i)] {\bf Discrete orbit semiconvos}. \cite[Theorem 8.1 B]{Jewett}. Let $G$ be a discrete group and let $H$ be a finite group with $\#H=c.$ Suppose that $(x,s) \mapsto x^s$ is an affine action of $H$ on $G$. 
Then the space $G^H $ of orbits $x^H$ given by $x^H= \{x^s: s \in H \}$ equipped with the discrete topology is a semiconvo with respect to the convolution `$*$' defined by $$ \delta_{x^H}*\delta_{y^H}= \frac{1}{c^2} \sum_{s,t \in H} \delta_{(x^sy^t)^H}.$$ \item[(ii)] {\bf Discrete coset semiconvos.} \cite[Theorem 8.2 A]{Jewett}. Let $G$ be a discrete group and let $H$ be a finite subgroup of $G$ with $\#H=c.$ If the action of $H$ on $G$ is given by $(x,s) \mapsto xs$ then it is an affine action and the orbit space $G^H$ is the right coset space $G/H:=\{xH: x \in G\}.$ The space $G/H,$ with the operation `$*$' given by $$ \delta_{xH}*\delta_{yH}= \frac{1}{c} \sum_{s \in H} \delta_{xsyH}$$ is a semiconvo. \item[(iii)] {\bf Discrete double coset hypergroups.} \cite[Theorem 8.2 B]{Jewett}. Let $G$ be a discrete group and let $H$ be a finite subgroup of $G$ with $\#H=c.$ If the action of $H \times H$ on $G$ is given by $(x,(s,t)) \mapsto s^{-1}xt$ then it is an affine action and the orbit space $G^{H \times H}$ is the double coset space $G//H:=\{HxH: x \in G\}.$ The space $G//H,$ with the operation `$*$' given by $$ \delta_{HxH}*\delta_{HyH}= \frac{1}{c} \sum_{t \in H} \delta_{HxtyH}$$ is a discrete commutative hypergroup with the identity $HeH=H$ and the involution $(HxH \check{)} = Hx^{-1}H.$ \item[(iv)] {\bf Discrete automorphism orbit hypergroups.} \cite[Theorem 8.3 A]{Jewett}. Let $G$ be a discrete group and let $H$ be a finite subgroup of the group of automorphisms of $G$ with $\#H=c.$ Suppose that $(x,s) \mapsto x^s$ is the corresponding action of $H$ on $G$.
Then the space $G^H $ of orbits $x^H$ given by $x^H= \{x^s: s \in H \}$ equipped with the discrete topology is a discrete hypergroup with respect to the convolution `$*$' defined by $$ \delta_{x^H}*\delta_{y^H}= \frac{1}{c} \sum_{s \in H} \delta_{(x^sy)^H}$$ with the identity $e^H=\{e\}$ and the involution $(x^H \check{)}= (x^{-1})^H.$ \end{itemize} \end{exmp} \begin{exmp} \label{Kenthm} Jewett \cite[14.2]{Jewett} proved that if we take $H$ to be a compact subhypergroup of a hypergroup $K$ then the space of double cosets $K//H$ can be made into a hypergroup. But the second author \cite{Ken} considered a commutative hypergroup $K$ and a closed subgroup $H$ of $Z(K)$ and defined a different convolution product on $K//H$ to make it a hypergroup. In fact, he studied many properties of $K//H$ by assuming $K//H$ to be compact. An element of $K//H$ is denoted by $[x],$ the double coset of $x \in K.$ Let $\pi$ denote the natural surjection $x \mapsto [x]$ from $K$ onto $K//H.$ Here, we rewrite some of the results of \cite[Section 4]{Ken} for the case of a commutative discrete hypergroup $K$ and a subgroup $H$ of $Z(K)$. We refine \cite[Theorem 4.1]{Ken}, whose proof in \cite{Ken} does not require $K//H$ to be compact (finite in our case). {\it Let $K$ be a commutative discrete hypergroup and let $H$ be a subgroup of $Z(K).$ Then $K//H$ is a hypergroup with the convolution `$*$' defined by, for $x,y \in K$ and $f$ a complex-valued function on $K//H$ vanishing at infinity,} \begin{equation} \label{kencon} \int_{K//H} f\, d(\delta_{[x]}*\delta_{[y]})= \int_K f\circ \pi(u)\, d(\delta_x*\delta_y)(u). \end{equation} Following \cite{Ken}, we use the following notational conventions: if $x \in K$ and $z \in Z(K),$ the unique element in $\textnormal{spt}(\delta_x*\delta_z)$ is denoted simply by $xz.$
Also, as $K$ is commutative, the double cosets in $K//H$ all have the form $xH=\{xz:z \in H\}.$ \end{exmp} \section{Ramsey theory for hypergroups} \label{r3} We begin with motivation for the development of this paper. \subsection{Motivation for Ramsey theory on hypergroups} We start with polynomial hypergroups as in Example \ref{2.4}. \subsubsection{Motivation through polynomial hypergroups} \label{mp} We freely use Example \ref{2.4}. It seems natural that the finite sums in $\mathbb{Z}_+$ will be replaced by the supports of the convolution of finitely many unit point mass measures on $K_{\mathbf{P}}$. Let $\mathbf{x}= \langle x_n\rangle_{n=1}^\infty$ be an injective sequence in $\mathbb{Z}_+$ with range $B.$ For a non-empty finite subset $F$ of $B$, i.e., $F= \{x_{n_j} \,: 1 \leq j \leq m\},$ we set $\delta_F= \delta_{x_{n_1}}*\delta_{x_{n_2}}*\cdots*\delta_{x_{n_m}}.$ Let $\{A_i\}_{i=1}^r$ be a partition of $\mathbb{Z}_+.$ We would like there to be an injective sequence $\langle x_n\rangle_{n=1}^\infty$ with range $B$ and an $i \in \{1,2, \ldots,r\}$ such that $\textnormal{spt}(\delta_F) \subset A_i$ for all non-empty finite subsets $F$ of $B.$ Now, consider the Chebyshev polynomial hypergroup of second kind (CP2) as in Example \ref{2.4}. Take the finite partition $\{A_{i}\}_{i=1}^3$ where $A_{i}:=\{n \in \mathbb{Z}_+: n \equiv i-1\,\text{mod} \,3\}.$ Take any injective sequence $\langle x_n\rangle_{n=1}^\infty$ in $\mathbb{N}.$ Since the $x_n$'s are distinct, we may assume that the sequence is strictly increasing.
Then, for $k \in \mathbb{N},$ $1 \leq x_{k}<x_{k}+1 \leq x_{k+1}.$ By choosing $F:=\{x_k, x_{k+1}\},$ we get $\textnormal{spt}(\delta_F) \nsubseteq A_i $ for any $i$ as the support $\textnormal{spt}(\delta_{x_k}*\delta_{x_{k+1}})= \{x_{k+1}-x_k, x_{k+1}-x_k+2, \ldots, x_{k+1}+x_k\}$ contains two or more elements in arithmetic progression with common difference $2,$ while any two elements of each $A_i$ differ by a multiple of $3.$ Therefore, the situation is different in the setting of hypergroups. It becomes essential and interesting to explore Ramsey theory for hypergroups in detail. \subsubsection{Motivation through hypergroup deformations of semigroups} \label{3.1.2} It is clear from the definition of the convolution product `$*$' as in Example \ref{KHD}(i) of hypergroup deformations $(S,*)$ of the semigroup $(S, \cdot)= (\mathbb{Z}_+,<,\text{max})$ that we can have an analogue of Theorem \ref{finitepro} and Theorem \ref{GGH} for $(S,*)$ with some appropriate changes in notation and terminology as indicated in Subsection \ref{mp} above. We elaborate as follows. Let $\mathbf{x}= \langle x_n\rangle_{n=1}^\infty$ be an injective sequence in $\mathbb{N}$ with range $B.$ For a non-empty finite subset $F$ of $B$, i.e., $F= \{x_{n_j} \,: 1 \leq j \leq m\}$, set $\delta_F= \delta_{x_{n_1}}*\delta_{x_{n_2}}*\cdots*\delta_{x_{n_m}}.$ In fact, take a partition $(A_i)_{i=1}^r$ of $\mathbb{Z}_+.$ Then at least one of the $A_i$'s is infinite; fix such an $i.$ In case $A_i$ contains the identity $e=0$, we replace $A_i$ by $\widetilde{A_i}= A_i\backslash \{e\},$ otherwise we redesignate $A_i$ by $\widetilde{A_i}$. Then $\widetilde{A_i}$ contains an injective sequence $\langle x_n\rangle_{n=1}^\infty$ with range $B$ such that all finite products from $\langle x_n\rangle_{n=1}^\infty$ in $(\mathbb{Z}_+,<,\text{max})$ are in $A_i$ by Example \ref{copyN} (i).
Now, for this set and sequence, for any non-empty finite subset $F= \{x_{n_j}: 1 \leq j \leq m\}$ of $B,$ $\delta_F$ becomes $\delta_{y}$ where $y=\underset{1 \leq j \leq m}{\prod} x_{n_j}= \underset{1\leq j \leq m}{\text{max} }x_{n_j}$ and thus $\textnormal{spt}(\delta_F) \subseteq A_i.$\\ This motivates us to study Ramsey theory in the context of general discrete hypergroups or semiconvos. \subsection{Ramsey principle for hypergroups} \label{4.2} Benjamin Willson \cite{Willson} generalized the concept of colouring in the setting of hypergroups in terms of configurations and configuration equations. This suggested the formulation of the following concepts. Let $(K, *)$ be an infinite discrete semiconvo. Let $\mathbf{x}= \langle x_n\rangle_{n=1}^\infty$ be an injective sequence in $K \backslash \{e\}.$ We denote its range by $B.$ For a non-empty finite subset $F$ of $B,$ we first write it in its increasing indices form, i.e., $F= \{x_{n_j} \,: 1 \leq j \leq m\}$ with $1 \leq n_1<n_2< \cdots< n_m.$ Next, we set $\delta_F= \delta_{x_{n_1}}*\delta_{x_{n_2}}*\cdots*\delta_{x_{n_m}}.$ Let $\mathbf{x}= \langle x_n\rangle_{n=1}^\infty$ be an injective sequence in $K \backslash \{e\}$ with range $B.$ Set \begin{eqnarray*} \text{FC}(\langle x_n\rangle_{n=1}^\infty) &:=& \{\delta_{x_{n_1}}*\delta_{x_{n_2}}*\cdots*\delta_{x_{n_m}}: n_1<n_2< \cdots< n_m,\, m \geq 1 \} \\&=& \{\delta_F: F \,\text{is a non-empty finite subset of}\, B \} \end{eqnarray*} and \begin{eqnarray*} \text{SFC}(\langle x_n\rangle_{n=1}^\infty) &:=& \{\textnormal{spt}(\delta_{x_{n_1}}*\delta_{x_{n_2}}*\cdots*\delta_{x_{n_m}}): n_1<n_2< \cdots< n_m,\, m \geq 1 \} \\&=& \{\textnormal{spt}(\delta_F): F \,\text{is a non-empty finite subset of}\, B \}. \end{eqnarray*} \begin{defn} \label{def3.1} (i) Let $(K, *)$ be an infinite discrete semiconvo.
$(K,*)$ will be called a {\it Ramsey semiconvo} if for every partition $K= \bigcup_{i=1}^r C_i,$ there exist $i$ and an injective sequence $\mathbf{x}= \langle x_n \rangle_{n=1}^\infty$ in $K \backslash \{e\}$ such that $\textnormal{spt}(\delta_F) \subset C_i,$ i.e., $\delta_F(C_i)=1$ for every non-empty finite subset $F$ of the range $B$ of $\mathbf{x}.$ In other words, $ \text{SFC}(\langle x_n\rangle_{n=1}^\infty) \subset \mathcal{P}(C_i).$ (ii) If $(K,*,\vee)$ is an infinite discrete hypergroup such that $(K,*)$ is a Ramsey semiconvo, then $(K,*,\vee)$ will be called a {\it Ramsey hypergroup}. \end{defn} \begin{rem} \label{4.3iii} If an infinite discrete subsemiconvo $L$ of a semiconvo $K$ is Ramsey then $K$ is Ramsey. To see this, take a partition $\{C_i\}_{i=1}^r$ of $K$ and set $\Lambda := \{i \in \{1,2, \ldots,r\}: \widetilde{C_i}=C_i \cap L \neq \emptyset\}.$ Then $L= \bigcup_{j \in \Lambda} \widetilde{C_j}$ is a partition of $L.$ Since $L$ is Ramsey, there exist an injective sequence $ \mathbf{x}=\langle x_n\rangle_{n=1}^\infty$ in $L \backslash \{e\}$ and $i \in \Lambda$ such that for every non-empty finite subset $F$ of $B$, $ \textnormal{spt}(\delta_F) \subset \widetilde{C_i};$ as $\widetilde{C_i} \subset C_i,$ we obtain that $\textnormal{spt}(\delta_F) \subset C_i.$ Therefore, $K$ is a Ramsey semiconvo. \end{rem} \begin{exmp} \label{hysup} (i). Discussion in \ref{mp} can be summarized as: The Chebyshev polynomial hypergroup of second kind (CP2) is not a Ramsey hypergroup. (ii). Discussion in \ref{3.1.2} can be summarized as: The hypergroup deformations $(S,*)$ of $(S,\cdot):=(\mathbb{Z}_+,<,\text{max})$ as in Example \ref{KHD} above are all Ramsey hypergroups. In view of Remark \ref{4.3iii}, if a semiconvo or hypergroup $K$ contains a copy of any of these $(S,*)$ then $K$ is a Ramsey semiconvo or hypergroup respectively.
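For a concrete instance of (i), recall the partition $\{A_i\}_{i=1}^3$ of \ref{mp} given by $A_{i}=\{n \in \mathbb{Z}_+: n \equiv i-1\,\text{mod} \,3\}.$ In CP2 we have $$\delta_1*\delta_2= \frac{1}{3}\, \delta_1+ \frac{2}{3}\, \delta_3,$$ so that $\textnormal{spt}(\delta_1*\delta_2)=\{1,3\}$ meets both $A_2$ and $A_1$ and hence is contained in no single cell $A_i$ of this partition.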
\end{exmp} \begin{thm} Let $(S,\cdot)$ be an infinite commutative discrete action-free semigroup with the identity $e$ satisfying the conditions (i)-(vi) of Theorem \ref{main}. Then the semiconvo $(S,*)$ is a Ramsey semiconvo. \end{thm} \begin{proof} To prove that $(S,*)$ is a Ramsey semiconvo, we consider two cases. First, consider the case that $E(S)$ is infinite. Then $(E(S),*)$ is a copy of a hypergroup deformation of $(\mathbb{Z}_+,<,\text{max}).$ By Example \ref{hysup} (ii) it follows that $(S,*)$ is a Ramsey semiconvo. Next, we consider the case when $E(S)$ is finite. Then $\widetilde{S} \,(= S \backslash E(S))$ is infinite. Because $S$ satisfies condition (ii) of Theorem \ref{main}, we have that $(\widetilde{S},\cdot)$ is a subsemigroup of $(S,\cdot)$ and further, $(\widetilde{S},*)$ coincides with $(\widetilde{S}, \cdot).$ As a consequence, $(\widetilde{S} \cup \{e\},*)$ coincides with $(\widetilde{S} \cup\{e\}, \cdot).$ By Example \ref{copyN} (iii), $(\widetilde{S} \cup \{e\}, \cdot)$ is a Ramsey semigroup with the required injective sequence in $\widetilde{S}.$ Therefore, $(\widetilde{S}\cup\{e\}, *)$ is a Ramsey semiconvo. Now, Remark \ref{4.3iii} implies that $(S,*)$ is a Ramsey semiconvo. \end{proof} For the rest of this subsection we work in the context of Example \ref{Kenthm}. The following lemma is useful for proving the next few results. \begin{lem} \label{3} Let $K$ be a commutative discrete hypergroup and let $H$ be a subgroup of $Z(K).$ Then, for any non-empty subset $E $ of the hypergroup $ K//H$ and $x_1,x_2, \ldots, x_m \in K,$ we have \begin{equation*} (\delta_{[x_1]}*\delta_{[x_2]}* \cdots*\delta_{[x_m]})(E)= (\delta_{x_1}*\delta_{x_2}*\cdots*\delta_{x_m})(\pi^{-1}(E)).
\end{equation*} \end{lem} \begin{proof} For any non-empty set $E \subset K//H,$ by the definition of `$*$' (see \eqref{kencon} in Example \ref{Kenthm}), we have, for $x,y \in K,$ $$\int_{K//H} \chi_E \, d(\delta_{[x]}*\delta_{[y]})= \int_K \chi_{\pi^{-1}(E)} (u) \, d(\delta_x*\delta_y)(u)$$ which is equivalent to \begin{equation} \label{fs} (\delta_{[x]}*\delta_{[y]})(E)= (\delta_x*\delta_y)(\pi^{-1}(E)),\,\, \text{i.e.}, \,\, \chi_E([x]*[y])= \chi_{\pi^{-1}(E)}(x*y). \end{equation} Now, for $x,y,z \in K,$ we get \begin{eqnarray*} (\delta_{[x]}*\delta_{[y]}*\delta_{[z]})(E) &=& \int_{K//H} \chi_E\, d(\delta_{[x]}*\delta_{[y]}*\delta_{[z]}) \\ &=& \int_{K//H} \int_{K//H} \chi_E([t]*[s])\, d(\delta_{[x]}*\delta_{[y]})([t])\, d(\delta_{[z]})([s]) \\ &=& \int_{K//H} \chi_E([t]*[z])\, d(\delta_{[x]}*\delta_{[y]})([t]) \\ &=& \int_{K//H} \chi_E^{[z]}([t])\, d(\delta_{[x]}*\delta_{[y]})([t]) \\ &=& \int_{K} \chi_E^{[z]}([u])\, d(\delta_x*\delta_y)(u)\,\,\,\,\, \text{[Using \eqref{kencon}]} \\&=& \int_K \chi_E([u]*[z])\, d(\delta_x*\delta_y)(u)\\ &=& \int_K \chi_{\pi^{-1}(E)}(u*z)\, d(\delta_x*\delta_y)(u) \,\,\,\,\,\,\, [\text{From \eqref{fs}}] \\&=& \int_K \int_K \chi_{\pi^{-1}(E)}(u*v)\, d(\delta_x*\delta_y)(u)\, d(\delta_z)(v) \\ &=& \int_K \chi_{\pi^{-1}(E)}\, d(\delta_x*\delta_y*\delta_z)= (\delta_x*\delta_y*\delta_z)(\pi^{-1}(E)). \end{eqnarray*} Similarly, by induction, if $x_1, x_2, \ldots, x_m \in K,$ then for any $\emptyset \neq E \subset K//H$ we have \begin{equation*} (\delta_{[x_1]}*\delta_{[x_2]}* \cdots*\delta_{[x_m]})(E)= (\delta_{x_1}*\delta_{x_2}*\cdots*\delta_{x_m})(\pi^{-1}(E)). \end{equation*} \end{proof} \begin{thm}\label{douken} Let $K$ be a commutative discrete hypergroup and let $H$ be a finite subgroup of $Z(K).$ If $K$ is a Ramsey hypergroup then the hypergroup $K//H$ is a Ramsey hypergroup.
\end{thm} \begin{proof} Take a partition $\{C_i\}_{i=1}^r$ of $K//H$ so that $K//H = \cup_{i=1}^r C_i.$ Set $\tilde{C_i}= \pi^{-1}(C_i),\, 1 \leq i \leq r.$ Then $\{\tilde{C_i}\}_{i=1}^r$ is a partition of $K.$ Since $K$ is a Ramsey hypergroup, there exist $i \in \{1,2, \ldots, r\}$ and an injective sequence $\langle x_n \rangle_{n=1}^\infty$ in $K\backslash \{e\}$ such that $\delta_{F'}(\tilde{C_i})=1$ for any non-empty finite subset $F'$ of the range of the sequence $\langle x_n \rangle_{n=1}^\infty.$ For $n \in \mathbb{N},$ set $y_n=\pi(x_n).$ Since $H$ is finite and the $x_n$'s are distinct, we get an injective sequence $\langle y_{n_j}\rangle_{j=1}^\infty,$ a subsequence of $\langle y_n \rangle_{n=1}^\infty$ with the $n_j$'s strictly increasing. Put $z_j:=y_{n_j}$ for $j \in \mathbb{N}.$ Consider any non-empty finite subset $F$ of the range of $\langle z_j \rangle_{j=1}^\infty,$ say, $F:=\{z_{j_k}: 1 \leq k \leq m\}$ with $1 \leq {j_1}<j_2< \cdots < {j_m}.$ Now, set $F':=\{x_{n_{j_k}}: 1 \leq k \leq m\}.$ Using Lemma \ref{3}, we have \begin{equation} \label{next} \delta_F(C_i )= \delta_{F'}(\tilde{C_i}). \end{equation} To see this, \begin{eqnarray*} \delta_F(C_i ) =(\delta_{z_{j_1}}*\delta_{z_{j_2}}*\cdots*\delta_{z_{j_m}})(C_i) &=& (\delta_{y_{n_{j_1}}}*\delta_{y_{n_{j_2}}}* \cdots* \delta_{y_{n_{j_m}}})(C_i)\\ &=& (\delta_{\pi(x_{n_{j_1}})}*\delta_{\pi(x_{n_{j_2}})}*\cdots*\delta_{\pi(x_{n_{j_m}})}) (C_i) \\ &=& \left( \delta_{ \left[ x_{n_{j_1 }}\right]}*\delta_{ \left[ x_{n_{j_2 }}\right]}*\cdots*\delta_{ \left[ x_{n_{j_m }}\right]} \right) (C_i) \\ &=& (\delta_{x_{n_{j_1}}}*\delta_{x_{n_{j_2}}}*\cdots*\delta_{x_{n_{j_m}}}) (\tilde{C_i}) \\ &=& \delta_{F'}(\tilde{C_i}). \end{eqnarray*} But, $ \delta_{F'}(\tilde{C_i})=1$ and, therefore, $ \delta_F(C_i )=1,$ i.e., $\textnormal{spt}(\delta_F) \subset C_i.$ Hence, $K//H$ is a Ramsey hypergroup.
\end{proof} \begin{exmp} We may take $K=(S,*)$ for any hypergroup deformation of $(\mathbb{Z}_+,<, \text{max})$ with $q_1(1)=0.$ Then $Z(K)=\{0,1\}.$ We take $H=Z(K).$ We note that $$K//H= \{\{0,1\} , \{m\} : m \geq 2\}$$ and \begin{eqnarray*} \delta_{\{m\}}* \delta_{\{n\}} = \begin{cases} \delta_{\{\text{max}\{m,n\}\}} & \text{for}\, m \neq n\,\, \text{with}\, m,n \geq 2, \\ \left(q_m(0)+q_m(1) \right) \delta_{\{0,1\}}+ \sum_{k \geq 2} q_m(k) \delta_{\{k\}} & \text{for}\,\, m=n \geq 2. \end{cases} \end{eqnarray*} Then, by Theorem \ref{douken}, $K//H$ is a Ramsey hypergroup. \end{exmp} \begin{defn} \begin{itemize} \item[(i)] Let $(K, *)$ be an infinite discrete semiconvo. $(K,*)$ will be called an {\it almost-Ramsey semiconvo} if for every partition $K= \bigcup_{i=1}^r C_i,$ there exist $i,$ an injective sequence $\mathbf{x}= \langle x_n \rangle_{n=1}^\infty$ in $K \backslash \{e\}$ and a finite subset $\mathcal{F}$ of $\text{SFC}(\langle x_n\rangle_{n=1}^\infty)$ such that $\text{SFC}(\langle x_n\rangle_{n=1}^\infty) \backslash \mathcal{F} \subset \mathcal{P}(C_i).$ \item[(ii)] Let $(K, *)$ be an infinite discrete semiconvo. $(K,*)$ will be called an {\it almost-strong Ramsey semiconvo} if for every partition $K= \bigcup_{i=1}^r C_i,$ there exist an $i \in \{1,2,\ldots, r\},$ an infinite subsemiconvo $L$ of $K$ and a finite subset $D$ of $L$ such that $L \backslash D \subset C_i.$ \item[(iii)] If $(K,*,\vee)$ is an infinite discrete hypergroup such that $(K,*)$ is an almost-Ramsey semiconvo then $(K,*,\vee)$ will be called an {\it almost-Ramsey hypergroup}. An {\it almost-strong Ramsey hypergroup} can be defined by replacing semiconvo and subsemiconvo by hypergroup and subhypergroup respectively in (ii) above.
\end{itemize} \end{defn} \begin{rem} If an infinite discrete subsemiconvo $L$ of a semiconvo $K$ is an almost-Ramsey semiconvo or an almost-strong Ramsey semiconvo, then $K$ is also an almost-Ramsey semiconvo or an almost-strong Ramsey semiconvo respectively. We have only to modify the proof of Remark \ref{4.3iii} above to prove these. Clearly, the statement remains true if semiconvo and subsemiconvo are replaced by hypergroup and subhypergroup respectively. \end{rem} \begin{exmp} \begin{itemize} \item[(i)] The Chebyshev polynomial hypergroup of second kind (CP2) as in Example \ref{2.4} is not an almost-Ramsey hypergroup. To see this, we have only to refine the argument in \ref{mp}. To elaborate, take the finite partition $\{C_{i}\}_{i=1}^3$ of $\mathbb{Z}_+,$ where $C_{i}:=\{n \in \mathbb{Z}_+: n \equiv i-1\,\text{mod} \,3\}.$ Suppose, if possible, that there exist a (strictly increasing) injective sequence $\langle x_n\rangle_{n=1}^\infty$ in $\mathbb{N},$ $i \in \{1,2,3\}$ and a finite subset $\mathcal{F}$ of $\text{SFC}(\langle x_n\rangle_{n=1}^\infty)$ such that $\text{SFC}(\langle x_n\rangle_{n=1}^\infty) \backslash \mathcal{F} \subset \mathcal{P}(C_i).$ Set $D=\cup \{V:V \in \mathcal{F}\}.$ Then $D$ is a finite subset of $\mathbb{Z}_+$.
Now, put $m= \text{max} (D)+1.$ Let $F=\{x_m,x_{2m}\}.$ Then $m \leq x_m <x_m+m \leq x_{2m}$ and $\textnormal{spt}(\delta_F)= \{x_{2m}-x_m+2j: 0 \leq j \leq x_m\}.$ Note that $\textnormal{spt}(\delta_F) \cap D = \emptyset.$ So, $\textnormal{spt}(\delta_F) \in \text{SFC}(\langle x_n\rangle_{n=1}^\infty) \backslash \mathcal{F} \subset \mathcal{P}(C_i).$ Therefore, $\textnormal{spt}(\delta_F) \subset C_i,$ which is not true as the support $\textnormal{spt}(\delta_F)$ contains two or more elements in arithmetic progression with common difference $2,$ while any two elements of each $C_i$ differ by a multiple of $3.$ \item[(ii)] The hypergroup deformations $(S,*)$ of $(S,\cdot):=(\mathbb{Z}_+,<,\text{max})$ as in Example \ref{KHD} above are not almost-strong Ramsey hypergroups as $(S,*)$ does not have any proper infinite subhypergroup. \end{itemize} \end{exmp} \begin{thm} Let $K$ be a commutative discrete hypergroup and let $H$ be a finite subgroup of $Z(K).$ If $K$ is an almost-Ramsey hypergroup then the hypergroup $K//H$ is an almost-Ramsey hypergroup. \end{thm} \begin{proof} The proof of this theorem follows by using Equation \eqref{next}. \end{proof} \begin{thm} No polynomial hypergroup $K_{\bf P}$ is an almost-strong Ramsey hypergroup. \end{thm} \begin{proof} Suppose, if possible, that $K_{\bf P}$ is an almost-strong Ramsey hypergroup. The 2-colouring of $\mathbb{N}$ used in Example \ref{gt} (iii) provides a clue. Take the 2-colouring of $\mathbb{Z}_+$ given by $${\color{blue}0}~{\color{red}1} ~{\color{blue}2~ 3}~ {\color{red} 4~ 5~ 6 }~ {\color{blue} 7 ~8~ 9~ 10 }~{\color{red} 11~ 12~ 13~ 14~ 15}~ {\color{blue} 16 ~17~ 18~ 19~ 20~ 21}~ {\color{red} 22 \ldots}$$ where the lengths of the blocks of identically coloured elements are $1,1, 2, 3,4, \ldots$.
Then there exist an $i \in \{ \color{blue}\text{blue}, {\color{red}\text{red}} \},$ an infinite subsemiconvo $L$ of $K$ and a finite subset $D$ of $L$ such that $L \backslash D \subset C_i.$ By Corollary \ref{subsem}, the underlying set $L$ is also an infinite subsemigroup of $(\mathbb{Z}_+,+).$ But, in view of Example \ref{gt}(iii), this is not possible. Hence, $K_{\bf P}$ is not an almost-strong Ramsey hypergroup. \end{proof} \section{Variants of Ramsey principle for hypergroups} We freely use notation and terminology from Section \ref{r3} above. \begin{defn} Let $0\leq \alpha<1.$ \begin{itemize} \item[(i)] Let $(K, *)$ be an infinite discrete semiconvo. $(K,*)$ will be called an {\it $\alpha$-Ramsey semiconvo} if for every partition $K= \bigcup_{i=1}^r C_i,$ there exist $i$ and an injective sequence $\mathbf{x}= \langle x_n \rangle_{n=1}^\infty$ in $K \backslash \{e\}$ such that $\delta_F (C_i) >\alpha$ for every non-empty finite subset $F$ of the range of $\langle x_n \rangle_{n=1}^\infty.$ \item[(ii)] If $(K,*,\vee)$ is an infinite discrete hypergroup such that $(K,*)$ is an $\alpha$-Ramsey semiconvo, then $(K,*,\vee)$ will be called an {\it $\alpha$-Ramsey hypergroup}. \end{itemize} \end{defn} \begin{rem} (i) Clearly, a Ramsey semiconvo is $\alpha$-Ramsey for $0\leq \alpha<1.$ (ii) Let $0 \leq \alpha< \beta <1.$ Then a $\beta$-Ramsey semiconvo is $\alpha$-Ramsey.
(iii) A $0$-Ramsey semiconvo $K$ will be called a {\it recurrent semiconvo}; the name is apt because $K$ is $0$-Ramsey if and only if for every partition $K= \bigcup_{i=1}^r C_i,$ there exist $i$ and an injective sequence $\mathbf{x}= \langle x_n \rangle_{n=1}^\infty$ in $K \backslash \{e\}$ such that $\delta_F (C_i) >0,$ i.e., $\textnormal{spt}(\delta_F) \cap C_i \neq \emptyset$ for every non-empty finite subset $F$ of the range of $\langle x_n \rangle_{n=1}^\infty.$ \end{rem} \begin{rem} \label{rem4} (i) Let $0 \leq \alpha<1.$ If an infinite discrete subsemiconvo $L$ of a discrete semiconvo $K$ is $\alpha$-Ramsey then $K$ is $\alpha$-Ramsey. We have only to modify the proof of Remark \ref{4.3iii} after Definition \ref{def3.1} to prove this. Clearly, the statement remains true if the word semiconvo is replaced by hypergroup. (ii) Note that if a semiconvo $(K,*)$ is a semigroup with identity then Ramsey semiconvo and $\alpha$-Ramsey semiconvo are the same object. \end{rem} \begin{thm} \label{ephr} Every polynomial hypergroup $K_{\mathbf{P}}$ is a $0$-Ramsey hypergroup, that is, a recurrent hypergroup. \end{thm} \begin{proof} Take a partition $\{C_i\}_{i=1}^r$ of $\mathbb{Z}_+.$ Then, by Theorem \ref{GGH}, there exist a set $C_i$ and an injective sequence $\langle x_n\rangle_{n=1}^\infty$ in $\mathbb{N}$ such that all the finite sums of $\langle x_n\rangle_{n=1}^\infty$ are in $C_i.$ Since, by Theorem \ref{gpositive} (ii), $g(n,m,n+m)>0$, it follows that for a non-empty finite subset $F$ of the range of $\langle x_n\rangle_{n=1}^\infty,$ $\textnormal{spt}(\delta_F)$ contains the element $s_F=\sum_{x_j \in F} x_j$ so that $\textnormal{spt}(\delta_F) \cap C_i \neq \emptyset,$ i.e., $\delta_F(C_i)>0.$ \end{proof} In particular, the Chebyshev polynomial hypergroup of second kind (CP2) is a recurrent hypergroup. We have already noted in Example \ref{hysup} (i) that CP2 is not a Ramsey hypergroup.
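For instance, the CP2 product formula $\delta_n*\delta_m= \sum_{j=0}^m \frac{n-m+2j+1}{(n+1)(m+1)}\, \delta_{n-m+2j}$ (for $n \geq m$) gives $$\delta_3*\delta_2= \frac{1}{6}\, \delta_1+ \frac{1}{3}\, \delta_3+ \frac{1}{2}\, \delta_5,$$ so that $\textnormal{spt}(\delta_3*\delta_2)=\{1,3,5\}$ indeed contains the sum $s_F=3+2=5$ for $F=\{2,3\},$ in accordance with the proof of Theorem \ref{ephr}.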
The following theorem shows that CP2 is not even $\alpha$-Ramsey, $0<\alpha<1.$ \begin{thm} \label{CP2notalpha} Let $0< \alpha <1.$ Then the Chebyshev polynomial hypergroup of second kind (CP2) is not an $\alpha$-Ramsey hypergroup. \end{thm} \begin{proof} Suppose, if possible, that the hypergroup CP2 $=(\mathbb{Z}_+,*)$ is an $\alpha$-Ramsey hypergroup for some $0<\alpha<1.$ Then for every partition $\mathbb{Z}_+= \bigcup_{i=1}^r C_i,$ there exist $i$ and an injective sequence $\mathbf{x}= \langle x_n \rangle_{n=1}^\infty$ in $\mathbb{Z}_+ \backslash \{e\}$ such that $\delta_F (C_i) >\alpha$ for any non-empty finite subset $F$ of the range $B$ of $\langle x_n \rangle_{n=1}^\infty;$ taking $F=\{x_n\},$ we get $\delta_{x_n}(C_i)>\alpha>0,$ so that $B \subset C_i.$ Consider the partition $\{C_i\}_{i=1}^{4^k}$ of $\mathbb{Z}_+$ with $C_i=\{n \in \mathbb{Z}_+: n \equiv i-1\,(\text{mod}\, 4^k) \},$ where $k (\geq 2) \in \mathbb{N}$ is such that $\alpha> \frac{1}{2^{k-1}}.$ Since $\langle x_n \rangle_{n=1}^\infty$ is an injective sequence, there exist $m,n \in B$ such that $16 \leq 4^k<m<2m<n-m.$ Now, we have \begin{equation*} \delta_n*\delta_m= \sum_{j=0}^m \frac{n-m+2j+1}{(n+1)(m+1)} \delta_{n-m+2j}. \end{equation*} So, \begin{equation} \label{4} \frac{1}{2^{k-1}} < \alpha< (\delta_n*\delta_m)(C_i)= \sum_{\substack{0\leq j \leq m \\ n-m+2j \in C_i}} \frac{n-m+2j+1}{(n+1)(m+1)}. \end{equation} Now, since $m,n \in B \subset C_i,$ we have $m=l_0 4^k+(i-1)$ and $n=l_1 4^k+(i-1)$ for some $l_0,l_1 \in \mathbb{N}.$ Therefore, $n-m= (l_1-l_0)4^k>2m= 2(l_0 4^k+ (i-1)) \geq 2l_0 4^k,$ which, in turn, gives $l_1>3l_0 \geq 3.$ Let $ 0 \leq j \leq m= l_0 4^k+(i-1).$ Then $n-m+2j \in C_i$ if and only if $(l_1-l_0) 4^k+2j \equiv i-1\,(\text{mod}\, 4^k).$ In view of \eqref{4} such a $j$ does exist. So $i-1$ must be even, say, $2u.$ As $0 \leq i-1=2u \leq 4^k-1,$ we have $0 \leq 2u \leq 4^k-2,$ i.e., $0 \leq u \leq \frac{1}{2}4^k-1.$ Further, \begin{equation}\label{new4} (\delta_n*\delta_m)(C_i)= \sum_{\substack{0 \leq j \leq l_0 4^k+2u \\ (l_1-l_0) 4^k+2j\, \equiv\, 2u \,(\text{mod}\, 4^k)}} \frac{n-m+2j+1}{(n+1)(m+1)}.
\end{equation} Now, for $0 \leq j \leq l_0 4^k+2u,$ either \begin{equation} \label{aA} j= 4^k l+t,\,\,\,\,\,\,\, 0 \leq l <l_0, \, 0 \leq t \leq 4^k-1; \end{equation} or \begin{equation} \label{Bb} j= 4^k l_0 +s,\,\,\,\,\,0 \leq s \leq 2u. \end{equation} In case of \eqref{aA}, we have $n-m+2j= (l_1-l_0) 4^k+2 l 4^k+ 2t,$ which is congruent to $2u\,(\text{mod}\, 4^k)$ if and only if $2t \equiv 2u\, (\text{mod}\,4^k).$ This holds if and only if either $t=u$ or $t= \frac{1}{2} 4^k+u.$ In case of \eqref{Bb}, we have $n-m+2j= (l_1-l_0)4^k+2 l_0 4^k+2s,$ which is congruent to $2u\,(\text{mod}\, 4^k)$ if and only if either $s=u$ or $s= \frac{1}{2} 4^k+u.$ The latter is not possible simply because $\frac{1}{2}4^k+u$ is greater than $2u.$ So, the number of terms summed in \eqref{new4} is $2l_0+1$. Also, each such term is $\leq \frac{m+n+1}{(m+1)(n+1)} < \frac{m+n+1}{mn}= \frac{(l_04^k+(i-1))+(l_14^k+(i-1))+1}{(l_04^k+(i-1))(l_14^k+(i-1))} < \frac{(l_0+l_1+2)}{l_0l_1 4^k}.$ Thus $$ (\delta_n*\delta_m)(C_i)<\frac{(2l_0+1)(l_0+l_1+2)}{l_0l_1 4^k}.$$ Therefore, by \eqref{4}, we have $\frac{1}{2^{k-1}}<\frac{(2l_0+1)(l_0+l_1+2)}{l_0l_1 4^k},$ which gives $ 2^{k+1} l_0l_1<(2l_0+1)(l_0+l_1+2).$ But $k \geq 2.$ So, by using the inequalities $2+l_0 \leq 3l_0<l_1,$ we get $8l_0l_1<(2l_0+1)(l_0+l_1+2)<(2l_0+1) 2l_1=4l_0l_1+2l_1,$ which, in turn, gives $4l_0l_1<2l_1,$ i.e., $2l_0<1,$ which is not possible. Hence, CP2 is not an $\alpha$-Ramsey hypergroup. \end{proof} \begin{rem} Using the notation, terminology and observations made in the proof of Theorem \ref{CP2notalpha} and doing direct computations, we can obtain the exact value of $(\delta_n*\delta_m)(C_i)$ as follows. $$(\delta_n*\delta_m)(C_i)= \frac{2l_0+1}{m+1}= \frac{2l_0+1}{l_04^k+2u+1}=\frac{2(m-i+1)+4^k}{4^k (m+1)}.$$ \end{rem} \begin{thm} Let $0 \leq \alpha <1$ and let $K$ be a commutative discrete hypergroup and let $H$ be a finite subgroup of $Z(K).$ If $K$ is an $\alpha$-Ramsey hypergroup then the hypergroup $K//H$ is an $\alpha$-Ramsey hypergroup.
\end{thm} \begin{proof} The proof of this theorem follows by using \eqref{next} in the proof of Theorem \ref{douken}. \end{proof} Next, we come to semiconvos and hypergroups in Example \ref{ocd}. \begin{thm} \label{3.6} Let $G$ be an infinite discrete group and let $H$ be a finite group with $\#H=c.$ Suppose that $(x,s) \mapsto x^s$ is an affine action of $H$ on $G$. Then the discrete orbit semiconvo $G^H $ is a $0$-Ramsey semiconvo, that is, a recurrent semiconvo. In particular, we have the following facts. \begin{itemize} \item[(i)] If $H$ is a finite subgroup of $G$ with $\#H=c$ then the discrete coset semiconvo $G/H$ is a recurrent semiconvo. \item[(ii)] If $H$ is a finite subgroup of $G$ with $\#H=c$ then the discrete double coset hypergroup $G//H$ is a recurrent hypergroup. \item[(iii)] If $H$ is a finite subgroup of the group of automorphisms of $G$ with $\#H=c$ then the discrete automorphism orbit hypergroup $G^H$ is a recurrent hypergroup. \end{itemize} \end{thm} \begin{proof} Take any partition $\{C_i\}_{i=1}^r$ of $G^H.$ Set, for $1 \leq i \leq r,\, \widetilde{C_i}= \bigcup_{U \in C_i} U.$ Note that $G = \bigcup_{i=1}^r\widetilde{C_i}$ is a partition of $G.$ Also, for $1 \leq i \leq r$ and $x \in \widetilde{C_i},$ we have $x^H \in C_i.$ Applying Theorem \ref{GGH} to $G$, we get an injective sequence $\langle x_n \rangle_{n=1}^\infty$ in $G \backslash \{e\}$ and $i \in \{1,2,\ldots,r\}$ such that FP$(\langle x_n \rangle_{n=1}^\infty) \subset \widetilde{C_i}.$ For $n \in \mathbb{N},$ set $y_n= x_n^H.$ Because the $x_n$'s are all distinct and $H$ is finite, there exists an injective sequence $\langle y_{n_j}\rangle_{j=1}^\infty,$ a subsequence of $\langle y_{n}\rangle_{n=1}^\infty$. 
Set $\sigma_j = x_{n_j}$ and $\tau_j= y_{n_j}$ for $j \in \mathbb{N}.$ Consider any non-empty finite subset $F$ of $\mathbb{N},$ say, $F= \{j_k: 1 \leq k \leq m\}$ with $1 \leq j_1<j_2< \cdots< j_m.$ Let $ \sigma_F= \prod_{k=1}^m \sigma_{j_k}$ and $\tau_F=(\sigma_F)^H.$ Then $\sigma_F \in \widetilde{C_i}$ and, therefore, $\tau_F \in C_i.$ Further, for $h= (h_{j_k})_{k=1}^m \in H^m,$ set $\sigma_h= \prod_{k=1}^m \sigma_{j_k}^{h_{j_k}}$ and $\tau_h=(\sigma_h)^H.$ Now, set $F'=\{\tau_{j_k}: 1 \leq k \leq m\}.$ Then $$\delta_{F'}= \delta_{\tau_{j_1}}*\delta_{\tau_{j_2}}* \cdots *\delta_{\tau_{j_m}}= \frac{1}{c^m} \sum_{h \in H^m} \delta_{\tau_h}.$$ Now, for the identity $e'$ of $H^m,$ we have $\tau_{e'}= \tau_F \in C_i.$ Therefore, $$\delta_{F'}(C_i) \geq \delta_{F'}(\{\tau_{F}\}) \geq \frac{1}{c^m} >0.$$ Hence, $(G^H,*)$ is a $0$-Ramsey semiconvo. \end{proof} \begin{exmp} \label{ocp} Another way to look at CP1 is to think of it as the orbit space of the group $G=\mathbb{Z}$ with the usual addition via the action of the finite group $H=(\{1,-1\},\times )$ of automorphisms of $\mathbb{Z}$ given by multiplication $\times$ as explained in Example \ref{ocd}(iv) and then proceed as in Theorem \ref{ephr} or \ref{3.6} above. For $n \in \mathbb{N}$, let $D_n= \{n, -n\}$ and let $D_0=\{0\}$. Consider any non-empty finite subset $F=\{n_j: 1 \leq j \leq m\}$ of $\mathbb{Z}_+$. Let $D_F$ be the sum in $\mathbb{Z}$ of the orbits $D_{n_j},\, 1 \leq j \leq m $. Then $D_F$ turns out to be the union of the orbits $D_k$, $k \in \textnormal{spt}(\delta_F).$ Also, it contains the orbit of $s_F= \sum_{j=1}^m n_j.$ \end{exmp} \begin{exmp} Let $G$ be the group $\mathbb{Z} \times \mathbb{Z}$ with the usual addition.
Let $H := \{\text{id}_G, \alpha, \beta , \gamma\}$ be the group consisting of automorphisms $\text{id}_G$ and $\alpha, \beta, \gamma : G \rightarrow G $ given by $\alpha(x,y)=(-x,y), \,\beta (x,y)=(x,-y),\, \gamma(x,y)= (-x,-y).$ By Theorem \ref{3.6}, the hypergroup $G^H$ is a recurrent hypergroup. Another way to see this is to start with the subgroup $M=\mathbb{Z} \times \{0\}$ of $G.$ On $M,$ $H$ reduces to $H_1= \{\text{id}_M, \alpha|_{M}\}.$ So by Example \ref{ocp}, $M^{H_1}$ is a recurrent hypergroup. Next, by Remark \ref{rem4} we see that $G^H$ is a recurrent hypergroup. \end{exmp} We conclude the string of results by giving a semigroup version of Theorem \ref{3.6} (iii). \begin{thm} \label{Orbit} Let $(S, \cdot)$ be an infinite discrete Ramsey semigroup with identity $e$. Let $H$ be a finite group of automorphisms of $(S, \cdot)$ with $\#H =c.$ Then the space $S^H$ of orbits $s^H$ given by $s^H =\{\alpha(s): \alpha \in H\},$ equipped with the discrete topology, can be made into a recurrent semiconvo by defining `$*$' as follows: \begin{eqnarray*} \delta_{s^H}* \delta_{t^H}= \frac{1}{c} \sum_{\alpha \in H} \delta_{(\alpha(s)\cdot t)^H}. \end{eqnarray*} \end{thm} \begin{proof} Simple computations give that `$*$' is associative and $e^H$ works as the identity of $S^H.$ Hence, $(S^H,*)$ is an infinite discrete semiconvo. We prove that $(S^H,*)$ is a recurrent semiconvo. A major part of the proof is along the lines of that of Theorem \ref{3.6}, but we prefer to give it in full here too.
For this take any partition $\{C_i\}_{i=1}^r$ of $S^H.$ Set, for $1 \leq i \leq r,\, \widetilde{C_i}= \bigcup_{U \in C_i} U.$ Note that $S = \bigcup_{i=1}^r\widetilde{C_i}$ is a partition of $S.$ Also, for $1 \leq i \leq r$ and $s \in \widetilde{C_i},$ we have $s^H \in C_i.$ Applying Remark \ref{Smee}(i) to $(S, \cdot)$, we get an injective sequence $\langle s_n \rangle_{n=1}^\infty$ in $S \backslash \{e\}$ and $i \in \{1,2, \ldots,r\}$ such that FP$(\langle s_n \rangle_{n=1}^\infty) \subset \widetilde{C_i}.$ For $n \in \mathbb{N},$ set $t_n= s_n^H.$ Because the $s_n$'s are all distinct and $H$ is finite, there exists an injective sequence $\langle t_{n_j}\rangle_{j=1}^\infty,$ a subsequence of $\langle t_{n}\rangle_{n=1}^\infty$. Set $\sigma_j = s_{n_j}$ and $\tau_j= t_{n_j}$ for $j \in \mathbb{N}.$ Consider any non-empty finite subset $F$ of $\mathbb{N},$ say, $F= \{j_k: 1 \leq k \leq m\}$ with $1 \leq j_1<j_2< \cdots< j_m.$ Let $ \sigma_F= \prod_{k=1}^m \sigma_{j_k}$ and $\tau_F=(\sigma_F)^H.$ Then $\sigma_F \in \widetilde{C_i}$ and, therefore, $\tau_F \in C_i.$ Further, for $\alpha= (\alpha_{j_k})_{k=1}^m \in H^m,$ set $\sigma_\alpha= \prod_{k=1}^m \alpha_{j_k}(\sigma_{j_k})= \alpha_{j_m} \left( \prod_{k=1}^{m} \alpha_{j_m}^{-1} \alpha_{j_k} (\sigma_{j_k}) \right)$ and $\tau_\alpha=(\sigma_\alpha)^H.$ Now, set $F'=\{\tau_{j_k}: 1\leq k \leq m\}$ and $\beta'= (\beta, \text{id}_S) \in H^m $ with $\beta=(\beta_1, \beta_2, \ldots, \beta_{m-1}) \in H^{m-1}.$ Then $$\delta_{F'}= \delta_{\tau_{j_1}}*\delta_{\tau_{j_2}}* \cdots *\delta_{\tau_{j_m}}= \frac{1}{c^m} \sum_{\alpha \in H^m} \delta_{\tau_\alpha} = \frac{1}{c^{m-1}} \sum_{\beta \in H^{m-1}} \delta_{\tau_{\beta'}} .$$ Now, for the identity $\beta_0$ of $H^{m-1},$ we have $\tau_{\beta_0'}= \tau_F \in C_i.$ Therefore, $$\delta_{F'}(C_i) \geq \delta_{F'}(\{\tau_{F}\}) \geq \frac{1}{c^{m-1}} >0.$$ Hence, $(S^H,*)$ is a recurrent semiconvo.
\end{proof} \begin{exmp} Let $S$ be the group $\mathbb{Z} \times \mathbb{Z}$ or the semigroup $\mathbb{N} \times \mathbb{N}$ with the usual addition. Then $H=\{\text{id}_S,\alpha\}$ is a group of automorphisms of $S$ if $\alpha(x,y)=(y,x)$ for $(x,y) \in S.$ Then, by Theorem \ref{Orbit}, $S^H$ is a recurrent hypergroup or semiconvo respectively. In fact, it is a Ramsey hypergroup or semiconvo as it contains a copy (coming from the orbits of points on the diagonal) of $(\mathbb{Z},+)$ or $(\mathbb{N}, +)$ respectively. \end{exmp} \begin{exmp} Let $S = (\mathbb{Z} \times \mathbb{N}) \cup \{(0,0)\}$ and $H_2 =\{\text{id}_S, \alpha|_{S}\},$ where the map $\alpha:S \rightarrow S$ is given by $\alpha(x,y)=(-x,y)$. The orbit space $S^{H_2}$ has elements of the following two forms: \begin{itemize} \item[(A)]$\{(x,y),(-x,y)\},\,\, 0 \neq x \in \mathbb{Z}$ and $y \in \mathbb{N},$ \item[(B)] $\{(0,y)\},\,\, y \in \mathbb{Z}_+.$ \end{itemize} \begin{itemize} \item[(a)] We consider $(S, \cdot)$ with `$\cdot$' derived from the usual addition in $\mathbb{Z} \times \mathbb{Z}.$ Then $(S,\cdot)$ is a Ramsey semigroup. Note that $H_2$ is a finite group of automorphisms of $(S, \cdot).$ So we can apply Theorem \ref{Orbit} above to conclude that $(S^{H_2},*)$ is a recurrent semiconvo. It is clear from (B) that $(S^{H_2},*)$ contains a copy of the semigroup $(\mathbb{Z}_+,+).$ So the conclusion can also be derived by using Remark \ref{rem4}. In fact, we may use Remark \ref{4.3iii} and conclude that $(S^{H_2}, *)$ is Ramsey. \item[(b)] We consider $(S, \cdot)$ with `$\cdot$' defined by $(x,y) \cdot (x',y')= (x+x', \text{max}\{y,y'\}).$ Then $S$ is an infinite commutative discrete semigroup with identity $e=(0,0).$ Also each element of the form $(x,y)$ with $x \neq 0$ has infinite order. So, by Remark \ref{Remark1} and Example \ref{copyN}(iii), $(S,\cdot)$ is a Ramsey semigroup.
Further, $H_2$ is a finite group of automorphisms of $(S, \cdot).$ By Theorem \ref{Orbit} above, $(S^{H_2},*)$ is a recurrent discrete semiconvo. In view of (B) above, it contains a copy of the semigroup $(\mathbb{Z}_+, \text{max})$ as a subhypergroup. Hence, using Remark \ref{Smee}(i) and then Remark \ref{4.3iii}, we conclude that $(S^{H_2},*)$ is a Ramsey semiconvo. \end{itemize} \end{exmp} \begin{rem} (i) Further variants of these concepts can be given along the lines of the concepts in Subsection \ref{4.2}, but we do not pursue that here. (ii) With a little extra care, the concepts above can be defined for general locally compact semiconvos or hypergroups, but again we do not pursue that here. \end{rem} \section*{Acknowledgment} Vishvesh Kumar thanks the Council of Scientific and Industrial Research, India, for its senior research fellowship. He thanks his supervisors Ritumoni Sarma and N. Shravan Kumar for their support. A preliminary version of a part of this paper was included in the invited talk by Ajit Iqbal Singh at the conference ``The Stone-$\check{\mbox{C}}$ech compactification : Theory and Applications, at Centre for Mathematical Sciences, University of Cambridge, July 6-8 2016'' in honour of Neil Hindman and Dona Strauss. She is grateful to the organizers H.G. Dales and Imre Leader for the kind invitation, hospitality and travel support. She thanks them, Dona Strauss and Neil Hindman and other participants for useful discussion. She expresses her thanks to the Indian National Science Academy for the position of INSA Emeritus Scientist and travel support. Almost half of the contributory talk given by Vishvesh Kumar at the conference ``Abstract Harmonic analysis (AHA)-2018, at National Sun Yat-sen University, Kaohsiung, Taiwan, June 25-29 2018'' was based on this paper. He thanks the organizers of the conference and Indian Institute of Technology Delhi for financial support. The authors thank George Willis for his useful comment and suggestion.
https://arxiv.org/abs/1512.02086
Hypercube Unfoldings that Tile R^3 and R^2
We show that the hypercube has a face-unfolding that tiles space, and that unfolding has an edge-unfolding that tiles the plane. So the hypercube is a "dimension-descending tiler." We also show that the hypercube cross unfolding made famous by Dali tiles space, but we leave open the question of whether or not it has an edge-unfolding that tiles the plane.
\section{Introduction} \seclab{Introduction} The cube in ${\mathbb{R}}^3$ has $11$ distinct (incongruent) edge-unfoldings\footnote{ An \emph{edge-unfolding} cuts along edges. } to $6$-square planar polyominoes, each of which tiles the plane~\cite{Etudes}. A single tile (a \emph{prototile}) whose congruent copies tile the plane (via translations and rotations, but not reflections) is called a \emph{monohedral} tile. The cube itself obviously tiles ${\mathbb{R}}^3$. So the cube has the pleasing property that it tiles ${\mathbb{R}}^3$ and all of its edge-unfoldings tile ${\mathbb{R}}^2$. The latter property makes the cube a \emph{semi-tile-maker} in Akiyama's notation~\cite{JinAkiyama}, a property shared by the regular octahedron. In this note we begin to address a higher-dimensional analog of these questions. The 4D hypercube (or \emph{tesseract}) tiles ${\mathbb{R}}^4$. Do all of its face-unfoldings monohedrally tile ${\mathbb{R}}^3$? The hypercube has $261$ distinct face-unfoldings (cutting along $2$-dimensional square faces) to $8$-cube polycubes, first enumerated by Turney~\cite{Turney} and recently constructed and confirmed by McClure~\cite{McClure_MO,JOR_MO1}. The second author posed the question of determining which of the $261$ unfoldings tile space monohedrally~\cite{JOR_MO2}. Whether it is even decidable to determine if a given tile can tile the plane monohedrally is an open problem~\cite{JOR_TCS}, and equally open for ${\mathbb{R}}^3$. The only general tool is Conway's sufficiency criteria~\cite{Schattschneider} for planar prototiles, which seem too specialized to help much here. In the absence of an algorithm, this seems a daunting task. Here we focus on two narrower questions, essentially replacing Akiyama's ``all'' with ``at least one'': \begin{question} Is there an unfolding of the hypercube that tiles ${\mathbb{R}}^3$, and which itself has an edge-unfolding that tiles ${\mathbb{R}}^2$?
\end{question} Call a polytope that monohedrally tiles ${\mathbb{R}}^d$ a \emph{dimension-descending tiler} (DDT) if it has a facet-unfolding that tiles ${\mathbb{R}}^{d-1}$, and that ${\mathbb{R}}^{d-1}$ polytope has a facet-unfolding that tiles ${\mathbb{R}}^{d-2}$, and so on down to an edge-unfolding that tiles ${\mathbb{R}}^2$. (Every polygon has a vertex-unfolding of its perimeter that trivially tiles ${\mathbb{R}}^1$.) Thus the cube is a DDT. We answer Question~1 positively by showing that the hypercube is a DDT, by finding one face-unfolding to an $8$-cube polyform in ${\mathbb{R}}^3$, which itself has an edge-unfolding to a $34$-square polyomino that tiles ${\mathbb{R}}^2$. It is natural to wonder about the other $260$ face-unfoldings of the hypercube, and in particular, the most ``famous'' one, what we call the \emph{Dali cross}, made famous in Salvador Dali's painting shown in Figure~\figref{Dali_Crucifixion_hypercube}. \begin{figure}[htbp] \centering \includegraphics[width=0.5\linewidth]{Figures/Dali_Crucifixion_hypercube.jpg} \caption{The 1954 Dali painting \emph{Corpus Hypercubus}. (Image from Wikipedia).} \figlab{Dali_Crucifixion_hypercube} \end{figure} \begin{question} Does the Dali cross tile ${\mathbb{R}}^3$, and if so, does it have an edge-unfolding that tiles ${\mathbb{R}}^2$? \end{question} Here we are only partially successful: We show that the Dali cross does indeed tile space (Theorem~\thmref{DaliCross}), but we have not succeeded in finding an unfolding of this cross that tiles the plane. \section{Hypercube Unfoldings that Tile ${\mathbb{R}}^3$} \seclab{TileSpace} So far as we are aware, there are now $4$ hypercube unfoldings that are known to tile space. The first two were found by Steven Stadnicki~\cite{Stadnicki_MO} in response to the question raised in~\cite{JOR_MO2}. We call the first of Stadnicki's unfoldings the $L$-unfolding. We describe it in detail, for it is the unfolding we use to answer Question~1.
\subsection{The Hypercube $L$-unfolding tiles ${\mathbb{R}}^3$} \seclab{Unf-L} The $L$-unfolding is shown in Figure~\figref{L1_3D}. (The labels will not be used until Section~\secref{EdgeUnf}.) \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/L1_3D.jpg} \caption{The $L$-unfolding of the hypercube. Some face labels are shown.} \figlab{L1_3D} \end{figure} Stadnicki showed this leads to a particularly simple tiling of space, because nestling one $L$ inside another as shown in Figure~\figref{L5_3D} \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/L5_3D.jpg} \caption{Five nestled $L$'s.} \figlab{L5_3D} \end{figure} leads to a $2$-cube thick infinite slab, as illustrated in Figure~\figref{L10_3D}. \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/L10_3D.jpg} \caption{Ten nestled $L$'s. Note the evolving structure is two-cubes thick in depth.} \figlab{L10_3D} \end{figure} Then of course all of ${\mathbb{R}}^3$ can be tiled by stacking the $2$-cube thick slabs. We will return to edge-unfolding the $L$ in Section~\secref{EdgeUnf}. Stadnicki showed that a second unfolding (Figure~\figref{Stadnicki}) also tiles space~\cite{Stadnicki_MO}, via a slightly more complicated but still simple structure. We will not describe that tiling. \begin{figure}[htbp] \centering \includegraphics[width=0.25\linewidth]{Figures/Stadnicki.jpg} \caption{Another hypercube unfolding that tiles ${\mathbb{R}}^3$ (Stadnicki).} \figlab{Stadnicki} \end{figure} \newpage \subsection{The Dali Cross Unfolding tiles ${\mathbb{R}}^3$} \seclab{DaliCross} Recall the Dali cross consists of four cubes in a tower, with the third tower-cube surrounded by four more; see Figure~\figref{Cross1_3D}. (Again the labels will not be used until Section~\secref{EdgeUnf}.) \begin{figure}[htbp] \centering \includegraphics[width=0.50\linewidth]{Figures/Cross1_3D.jpg} \caption{The Dali cross.
Some face labels are shown.} \figlab{Cross1_3D} \end{figure} Our proof that this shape tiles ${\mathbb{R}}^3$ is in six steps: \begin{enumerate} \setlength{\itemsep}{0pt} \item $2$-cross unit. \item Cross-strip. \item Cross-layer. \item Two cross-layers. \item Three cross-layers. \item Four cross-layers. \end{enumerate} \subsection{$2$-Cross Unit} We first build a $2$-cross unit with prone, opposing crosses, as illustrated in Figure~\figref{Cross2_3D}. We will call planes of possible cube locations $z$-layers $1, 2, 3, \ldots$, corresponding to $z$-height. The $2$-cross unit has two cubes in $z$-layers $1$ and $3$, in the same $xy$-locations, and the remaining cubes in $z$-layer $2$. It will be convenient to use \emph{bump} to indicate a cube protruding above a particular layer of interest, and use \emph{hole} to indicate a cube cell as-yet unoccupied by a cube. \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/Cross2_3D.jpg} \caption{A $2$-cross unit.} \figlab{Cross2_3D} \end{figure} \subsection{Cross-strip} Now we form a vertical strip of $2$-cross units as shown in Figure~\figref{HC_1}. Here we introduce a convention of displaying the construction by using colors and $z$-layer numbers. So the cubes in a cross-strip occupy $z$-layers $1,2,3$, but only $z$-layers $2$ and $3$ are visible from above in an overhead view. \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/HC_1.jpg} \caption{Cross-strip.} \figlab{HC_1} \end{figure} \subsection{Cross-layer} Now we place cross-strips adjacent to one another horizontally, as shown in Figure~\figref{HC_2}. \begin{figure}[htbp] \centering \includegraphics[width=0.90\linewidth]{Figures/HC_2.jpg} \caption{Cross-layer.} \figlab{HC_2} \end{figure} The remaining steps stack cross-layers one on top of the other. So the pattern of holes and bumps in each cross-layer will be important. 
\subsection{Two Cross-Layers} Henceforth we color all cubes in one cross-layer the same primary color, with the bumps slightly darker, as in Figure~\figref{HC_3}(a). Remember the bumps in one cross-layer align vertically. Now we place a second cross-layer on top of the first, with the bumps in the second cross-layer fitting into the holes of the first. Figure~\figref{HC_3}(b) shows the top view, which will be our focus. Note that now we see cubes at $z$-layers $2,3,4$. Note also that there are no holes all the way through; rather, $z$-layer-$2$ cells are dents and $z$-layer-$4$ cells are bumps. \begin{figure}[htbp] \centering \includegraphics[width=1.00\linewidth]{Figures/HC_3.jpg} \caption{Two cross-layers. (a)~One cross-layer. (b)~Two cross-layers. } \figlab{HC_3} \end{figure} We ask the reader to concentrate on the pattern depicted in Figure~\figref{HC_4}: in two adjacent columns, we see $(4,3,3,3,4)$ and $(2,3,3,3,2)$, with the latter pattern shifted diagonally upward one unit. It should be clear that the entire overhead $z$-layer-view is composed of copies of this \emph{fundamental layer-pattern}. \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/HC_4.jpg} \caption{Fundamental layer-pattern after stacking two cross-layers.} \figlab{HC_4} \end{figure} \subsection{Three Cross-Layers} When we stack a third cross-layer on the construction, again inserting bumps into dents, we do not quite regain the fundamental layer-pattern. Instead we see that pattern shifted diagonally downward rather than upward; see Figure~\figref{HC_5}. Although we could argue that now we see a reflection (over a horizontal) of the full pattern of visible $z$-layer numbers, it seems easier and more convincing to us to add one more cross-layer.
\begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/HC_5.jpg} \caption{Three cross-layers and a reflected pattern.} \figlab{HC_5} \end{figure} \subsection{Four Cross-Layers} With the addition of the fourth cross-layer (Figure~\figref{HC_6}), we regain the exact same pattern of $z$-layer numbers. Note the fundamental layer-pattern is now $(6,5,5,5,6)$ and $(4,5,5,5,4)$, exactly $+2$ of the pattern in two cross-layers, as emphasized in Figure~\figref{HC_7}. \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/HC_6.jpg} \caption{Four cross-layers.} \figlab{HC_6} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1.00\linewidth]{Figures/HC_7.jpg} \caption{Two cross-layers~(a) compared to four cross-layers~(b), with the same fundamental pattern indicated.} \figlab{HC_7} \end{figure} Because at four cross-layers we have regained the exact same ``$z$-layer landscape'' as at two cross-layers, the stacking can be continued indefinitely. \begin{theorem} The Dali cross unfolding of the hypercube tiles ${\mathbb{R}}^3$ monohedrally. \thmlab{DaliCross} \end{theorem} We have found another hypercube unfolding, shown in Figure~\figref{TTunfolding}, that tiles ${\mathbb{R}}^3$ in a similar manner, not described here. \begin{figure}[htbp] \centering \includegraphics[width=0.5\linewidth]{Figures/TTunfolding.pdf} \caption{Another hypercube unfolding that tiles ${\mathbb{R}}^3$.} \figlab{TTunfolding} \end{figure} \newpage \section{Edge-unfoldings to tile ${\mathbb{R}}^2$} \seclab{EdgeUnf} Now we turn to unfolding the $L$ to tile the plane. We label the cubes from $1$ to $8$, and the faces as $\{F, L, K, R, B, T\}$ for \{Front, Left, bacK, Right, Bottom, Top\} respectively. Refer again to Figure~\figref{L1_3D}. There are $34$ exposed faces of the $8$ cubes. Through a mixture of heuristic computer searches and hand tinkering, we found the unfolding shown in Figure~\figref{L_unfolding}.
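The count $34$ is forced by the combinatorics: each hypercube unfolding glues $8$ unit cubes along $7$ square faces (one gluing per edge of a spanning tree of cube adjacencies), leaving $6\cdot 8-2\cdot 7=34$ squares exposed, provided no other pair of cubes ends up face-adjacent in the unfolded polycube. A minimal Python sketch of this check, using cell coordinates for the Dali cross (the coordinate choice is ours: the tower along the $z$-axis, with the four extra cubes around the third tower cube at $z=2$):

```python
# Unit-cube cells of the Dali cross (assumed coordinates): a 4-cube tower on
# the z-axis, plus four cubes surrounding the third tower cube (z = 2).
cells = {(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3),
         (1, 0, 2), (-1, 0, 2), (0, 1, 2), (0, -1, 2)}

def exposed_faces(cells):
    """Count unit-square faces of the polycube not shared by two cells."""
    dirs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
            (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return sum((x + dx, y + dy, z + dz) not in cells
               for (x, y, z) in cells for (dx, dy, dz) in dirs)

# Each glued (shared) face removes two of the 6*8 cube faces.
shared = (6 * len(cells) - exposed_faces(cells)) // 2
print(exposed_faces(cells), shared)  # 34 exposed faces, 7 glued pairs
```

The same routine applies to any polycube, in particular to the $L$-unfolding once its coordinates are fixed.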
\begin{figure}[htbp] \centering \includegraphics[width=0.5\linewidth]{Figures/L_unfolding.jpg} \caption{Unfolding of the $L$ (Figure~\protect\figref{L1_3D}), with face labels and dual-tree (uncut) connections.} \figlab{L_unfolding} \end{figure} That this tiles the plane (by translation only) is demonstrated in Figure~\figref{tiledL_all}. \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/tiledL_all.pdf} \caption{Tiling of the plane with the unfolding shown in Figure~\protect\figref{L_unfolding}.} \figlab{tiledL_all} \end{figure} This then establishes our answer to Question~1: \begin{theorem} The $L$-unfolding of the hypercube has an edge-unfolding that tiles the plane, establishing that the hypercube is a dimension-descending tiler. \thmlab{LTilesPlane} \end{theorem} \subsection{Edge-unfoldings of the Dali cross} \seclab{EdgeUnfDaliCross} There are a huge number of edge-unfoldings of each hypercube unfolding. Each edge-unfolding corresponds to a spanning tree of the dual graph, where each square face is a node, and arcs represent uncut edges. There are at most approximately $5^n$ spanning trees~\cite{GRote} of planar graphs with $n$ nodes, and asymptotically that many for some graphs. It seems conservative to estimate that the dual graph of the Dali cross has at least $2^n=2^{34} \approx 10^{10}$ spanning trees, and more likely $3^{34} \approx 10^{16}$. (The square grid has approximately $3.2^n$ spanning trees, and each hypercube unfolding dual graph is also regular of degree $4$.) Each of these trees leads to an unfolding, but many self-overlap in their planar layout, and even among those that avoid overlap, many delimit a region with holes, and so could not form tilers. With brute-force search infeasible, and no algorithm available, we are left only with heuristics, with which we have not been successful. Figure~\figref{hypercrossUnfolding} shows the unfolding of the Dali cross that came closest to tiling among those we found.
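Exact spanning-tree counts for any particular dual graph can be obtained from Kirchhoff's matrix-tree theorem (a cofactor of the graph Laplacian). A self-contained illustration, applied not to the Dali-cross dual (whose adjacency list we do not reproduce here) but to the octahedron, the face-dual of the ordinary cube, which like the unfolding duals is $4$-regular:

```python
from fractions import Fraction

def spanning_trees(n, edges):
    """Matrix-tree theorem: spanning trees = any cofactor of the Laplacian."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1; L[v][v] += 1
        L[u][v] -= 1; L[v][u] -= 1
    # Determinant of the Laplacian with row/column 0 deleted, exact arithmetic.
    M = [row[1:] for row in L[1:]]
    det, m = Fraction(1), n - 1
    for i in range(m):
        p = next((r for r in range(i, m) if M[r][i] != 0), None)
        if p is None:
            return 0
        if p != i:
            M[i], M[p] = M[p], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, m):
            f = M[r][i] / M[i][i]
            for c in range(i, m):
                M[r][c] -= f * M[i][c]
    return int(det)

# Octahedron: K_6 minus the perfect matching {0-5, 1-4, 2-3}.
octa = [(u, v) for u in range(6) for v in range(u + 1, 6)
        if (u, v) not in {(0, 5), (1, 4), (2, 3)}]
print(spanning_trees(6, octa))  # 384
```

Feeding in the $34$-node dual graph of any particular hypercube unfolding would replace the crude $2^{34}$--$3^{34}$ estimates above with an exact count.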
\begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{Figures/hypercrossUnfolding.jpg} \caption{An edge-unfolding of the Dali cross that nearly tiles the plane. (See Figure~\protect\figref{Cross1_3D} for labels.)} \figlab{hypercrossUnfolding} \end{figure} \section{Open Problems} \seclab{Open} \begin{enumerate} \setlength{\itemsep}{0pt} \item Is the $5$-dimensional cube in ${\mathbb{R}}^5$ a dimension-descending tiler? \item What are good heuristics to test if the remaining $257$\footnote{ $261-4$, because $4$ are known to tile: Figures~\protect\figref{L1_3D}, \protect\figref{Stadnicki}, \protect\figref{Cross1_3D}, \protect\figref{TTunfolding}.} hypercube unfoldings tile ${\mathbb{R}}^3$? \item Can any of the hypercube unfoldings be proved \underline{not} to tile ${\mathbb{R}}^3$? \item Does the Dali cross have an unfolding that tiles ${\mathbb{R}}^2$? \end{enumerate} \paragraph{Addendum.} We learned after posting this note that polyhedra that have an edge-unfolding that tiles the plane are called \emph{tessellation polyhedra} in~\cite{Stefan}. \bibliographystyle{alpha}
https://arxiv.org/abs/1810.05317
Independence Equivalence Classes of Paths and Cycles
The independence polynomial of a graph is the generating polynomial for the number of independent sets of each size. Two graphs are said to be \textit{independence equivalent} if they have equivalent independence polynomials. We extend previous work by showing that the independence equivalence class of every odd path has size 1, while the class can contain arbitrarily many graphs for even paths. We also prove that the independence equivalence class of every even cycle $C_{2n}$ with $n\ge 2$ consists of two graphs, except that of $C_6$, which consists of three graphs. The odd case remains open, although, using irreducibility results from algebra, we were able to show that for a prime $p \geq 5$ and $n\ge 1$ the independence equivalence class of $C_{p^n}$ consists of only two graphs.
\section*{References}} \begin{document} \tikzset{bignode/.style={minimum size=3em,}} \maketitle \begin{abstract} The independence polynomial of a graph is the generating polynomial for the number of independent sets of each size. Two graphs are said to be \textit{independence equivalent} if they have equivalent independence polynomials. We extend previous work by showing that the independence equivalence class of every odd path has size 1, while the class can contain arbitrarily many graphs for even paths. We also prove that the independence equivalence class of every even cycle $C_{2n}$ with $n\ge 2$ consists of two graphs, except that of $C_6$, which consists of three graphs. The odd case remains open, although, using irreducibility results from algebra, we were able to show that for a prime $p \geq 5$ and $n\ge 1$ the independence equivalence class of $C_{p^n}$ consists of only two graphs. \end{abstract} \setstretch{1.4} \section{Introduction}\label{sec:intro} A subset of vertices of a (finite, undirected and simple) graph $G$ is called {\em independent} if the subset induces a subgraph with no edges (the \emph{independence number} of $G$ is the size of the largest independent set in $G$ and is denoted by $\alpha(G)$, or just $\alpha$ if the graph is clear from context). The {\em independence polynomial} of $G$, denoted by $i(G,x)$, is defined by \[ i(G,x)=\sum_{k=0}^{\alpha}i_kx^k,\] where $i_k$ is the number of independent sets of size $k$ in $G$. Research on the independence polynomial has been very active since it was first defined in 1983 \cite{INDFIRST,INDPOLY,Chudnovsky2007,Alavi,BDN2000,INDROOTS}. We say that two unlabelled graphs $G$ and $H$ are \textit{independence equivalent}, denoted $G \sim H$, if they have the same independence polynomial.
Independence equivalence is clearly an equivalence relation, so we define the \textit{independence equivalence class} of a graph $G$, denoted $[G]$, to be the set of all graphs that are independence equivalent to $G$. If a graph is the only graph in its independence equivalence class, we call this graph \textit{independence unique}. As an example, $P_4$ and $K_3\cup K_1$, both of which have independence polynomial $1+4x+3x^2$, are independence equivalent. On the other hand, each complete graph $K_n$ is independence unique as it is the only graph with independence polynomial $1+nx$. As a graph is completely determined by its independent sets of cardinality $2$ (that is, the non-edges of the graph), it is interesting to see what information is encoded when we do not have access to the independent sets themselves, but only to the information recorded by the independence polynomial (that is, only to how many of each size there are). As we have seen, this counting information is not enough to completely distinguish a graph (it cannot even determine whether a graph is connected). In fact, Makowsky and Zhang \cite{Makowksy2017} showed that the proportion of independence unique graphs among all graphs tends to zero as the order (that is, the number of vertices) tends to infinity. Independence uniqueness and independence equivalence are also of interest in analogy to the corresponding notion for the chromatic polynomial, the \emph{chromaticity} of a graph (see chapters 4, 5, and 6 of \cite{DongKohTeo2005}). In \cite{Makowsky2014} the authors consider equivalence and uniqueness of a general polynomial arising from a graph, and they also raise the point that one reason to study graph polynomials is to help distinguish non-isomorphic graphs.
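The opening example is easy to verify mechanically. A brute-force sketch (exponential in the number of vertices, but instant at this size) that computes the coefficient list of $i(G,x)$:

```python
from itertools import combinations

def indep_poly(n, edges):
    """Coefficients [i_0, i_1, ...] of i(G,x) for a graph on vertices 0..n-1."""
    E = {frozenset(e) for e in edges}
    coeffs = []
    for k in range(n + 1):
        # Count k-subsets containing no edge of G.
        c = sum(all(frozenset(p) not in E for p in combinations(S, 2))
                for S in combinations(range(n), k))
        if c == 0:            # past the independence number alpha
            break
        coeffs.append(c)
    return coeffs

P4 = [(0, 1), (1, 2), (2, 3)]       # path on 4 vertices
K3_K1 = [(0, 1), (1, 2), (0, 2)]    # triangle plus the isolated vertex 3
print(indep_poly(4, P4), indep_poly(4, K3_K1))  # both [1, 4, 3]
```

Both graphs return the coefficient list $[1,4,3]$, i.e. $1+4x+3x^2$, confirming $P_4 \sim K_3 \cup K_1$.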
\begin{figure}[htp] \def0.7{0.7} \def1{2} \centering \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,draw,fill}] \node (1) at (0*1,0*1) {}; \node (2) at (1*1,0*1) {}; \node (3) at (2*1,0*1) {}; \node (4) at (2*1,1*1) {}; \node (5) at (0*1,-1*1) {}; \node (6) at (0*1,1*1) {}; \node(7) at (1*1,-1*1) {}; \node (8) at (2*1,-1*1) {}; \end{scope} \begin{scope} \path [-] (1) edge node {} (2); \path [-] (2) edge node {} (3); \path [-] (4) edge node {} (3); \path [-] (1) edge node {} (5); \path [-] (1) edge node {} (6); \path [-] (2) edge node {} (7); \path [-] (3) edge node {} (8); \end{scope} \begin{scope}[every node/.style={circle,thick,draw,fill}] \node (9) at (-5*1,0*1) {}; \node (10) at (-4*1,0*1) {}; \node (11) at (-3*1,-1*1) {}; \node (12) at (-2*1,0*1) {}; \node (13) at (-2*1,-1*1) {}; \node (14) at (-5*1,-1*1) {}; \node (15) at (-4*1,-1*1) {}; \node (16) at (-3*1,1*1) {}; \end{scope} \begin{scope} \path [-] (9) edge node {} (16); \path [-] (10) edge node {} (16); \path [-] (12) edge node {} (16); \path [-] (16) edge node {} (13); \path [-] (10) edge node {} (15); \path [-] (15) edge node {} (11); \path [-] (9) edge node {} (14); \end{scope} \end{tikzpicture}} \caption{Independence equivalent trees on $8$ vertices.}% \label{fig:equivnonwctrees}% \end{figure} Returning to independence, in \cite{Stevanovic1997}, Stevanovic showed threshold graphs are independence unique among threshold graphs, doing so from the clique polynomial point of view. There is work done by Brown and Hoshino \cite{BrownHoshino2012} that provides a full characterization of independence unique circulant graphs and in the process determines some constructions to obtain graphs that are independence equivalent to circulant graphs. In \cite{Levit2008}, Levit and Mandrescu showed well-covered spiders are independence unique among well-covered graphs. 
However, even for the path $P_{n}$ and cycle $C_{n}$ of order $n$, determining the independence equivalence classes is tricky and subtle (much more so than for other graph polynomials). Chism \cite{Chism2009} showed that $[P_{2n}]$ contains a few families of graphs (which we will expand upon in Section~\ref{sec:paths}) (Zhang \cite{Zhang2012} proved the same results via different techniques). In \cite{Li2016}, the authors showed the only tree in $[P_n]$ is $P_n$ itself. Most recently, Oboudi \cite{Oboudi2018} completely determined all connected graphs in the independence equivalence classes of cycles. In this work, we extend the results of Oboudi \cite{Oboudi2018} and Li \cite{Li2016} by considering which disconnected graphs can be in $[P_n]$ and $[C_n]$ respectively. This paper is structured as follows: Section~\ref{sec:paths} is devoted to exploring $[P_n]$. For odd $n$ we show that $P_n$ is independence unique, whereas for even $n$ there can be arbitrarily many nonisomorphic graphs in $[P_n]$. In Section~\ref{sec:cycles}, we consider $[C_n]$, using very different methods depending on the parity of $n$. We find that when $n$ is even (and $n\neq 6$), or a prime power where the base is at least $5$, then $[C_n]=\{C_n,D_n\}$, where $D_n$ is the graph obtained by gluing a leaf of $P_{n-3}$ to one vertex of a triangle (see Figure~\ref{fig:Ln}). Our results for paths and even cycles involve combinatorial analysis of the coefficients. Our results for prime cycles and prime power cycles, however, are proved using algebraic results by examining the reducibility of the polynomials. \section{Independence Equivalence Classes of Paths}\label{sec:paths} The independence equivalence class of a path has been considered before in \cite{Zhang2012,Chism2009}, where it was shown that there are at least $2$ disconnected graphs in $[P_{2n}]$, and in \cite{Li2016}, where it was shown that the only connected graph in $[P_n]$ is $P_n$ itself.
The highly structured nature of paths allows for an explicit formula for their independence polynomials: \begin{theorem}[Arocha, \cite{Arocha1984}] The independence polynomial of a path of order $n$ is given by \label{thm:PathPoly} $$i(P_n,x) = \sum_{j=0}^{\lfloor \frac{n+1}{2} \rfloor} \binom{n+ 1 -j}{j}x^j.$$ \QED \end{theorem} Recently, Li, Liu, and Wu \cite{Li2016} completely classified all {\it connected} graphs in $[P_n]$ for all $n$: \begin{theorem}[\cite{Li2016}]\label{cor:connectedpathunique} For any connected graph $G$ and $n \in \mathbb{N}$, if $i(G,x)=i(P_n,x)$ then $G \cong P_n$. \QED \end{theorem} However, independence equivalence does not necessarily put a restriction on connectivity. In this section we will consider what disconnected graphs can belong to $[P_n]$. We start by showing that even paths are very different in the disconnected case. We will show that we can have arbitrarily many graphs in the independence equivalence classes of even paths. To do this, we build on the basic results in \cite{Chism2009,Zhang2012} that provide an example of a disconnected graph in $[P_n]$ for even $n$. \begin{prop}[\cite{Chism2009,Zhang2012}]\label{prop:chismpath} $P_{2n}\sim P_{n-1}\cup C_{n+1}$ for $n\ge 2$.
\QED \end{prop} \begin{figure}[htp] \def0.7{0.7} \def1{2} \centering \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node[label=above:$u_2$,draw] (2) at (1.13397*1,0.5*1) {}; \node[label=above:$u_3$,draw] (3) at (2*1,0*1) {}; \node[label=above:$u_{n-1}$,draw] (4) at (4*1,0*1) {}; \node[label=above:$u_n$,draw] (5) at (5*1,0*1) {}; \node[label=below:$u_1$,draw] (7) at (1.13397*1,-0.5*1) {}; \node[label=above:$u_4$,draw] (8) at (3*1,0*1) {}; \end{scope} \begin{scope} \path [-] (2) edge node {} (3); \path [-] (4) edge node {} (5); \path [-] (3) edge node {} (7); \path [-] (2) edge node {} (7); \path [-] (3) edge node {} (8); \end{scope} \path (8) -- node[auto=false]{\ldots} (4); \path [-] (8) edge node {} (3*1+.7,0*1) ; \path [-] (4*1-.7,0*1) edge node {} (4); \end{tikzpicture}} \caption{The graph $D_n$}% \label{fig:Ln}% \end{figure} \begin{prop}[\cite{Chism2009,Zhang2012}]\label{prop:chismcycle} $C_n\sim D_n$ for $n\ge 3$ (where $D_n$ is formed from a triangle by adding a pendant path -- see Figure~\ref{fig:Ln}). \QED \end{prop} \begin{prop}\label{prop:largeevenpaths} For any $K \geq 0$, there is an even path whose independence equivalence class has cardinality at least $K$. \end{prop} \begin{proof} Let $N \geq K$ be a positive integer, and set $n=2^{\lceil N/2\rceil+2}-2$. We claim that $P_{n}$ has at least $\frac{n}{2} \geq N$ non-isomorphic graphs in its independence equivalence class. From Proposition~\ref{prop:chismpath}, $P_{2^{\lceil N/2\rceil+2-k}-2}$ is equivalent to $P_{2^{\lceil N/2\rceil+1-k}-2}\cup C_{2^{\lceil N/2\rceil+1-k}}$ for all $k=0,1,\ldots,\lceil N/2\rceil-1$. Therefore, iterating this equivalence yields \begin{align} P_{n}\sim P_{2^{\lceil N/2\rceil+1-k}-2}\cup\bigcup_{\ell=0}^{k} C_{2^{\lceil N/2\rceil+1-\ell}}\label{eq:patheqclass}. \end{align} By Proposition~\ref{prop:chismcycle}, for $0\le \ell\le k$, $C_{2^{\lceil N/2\rceil+1-\ell}}\sim D_{2^{\lceil N/2\rceil+1-\ell}}$.
Therefore, for each value of $k$, the cycles in (\ref{eq:patheqclass}) can be replaced by equivalent graphs in $2^{k+1}$ ways. This, together with the graph $P_n$, gives $1+2+2^2+\cdots+2^{\lceil N/2\rceil}=2^{\lceil N/2\rceil+1}-1=\frac{n}{2}$ many distinct graphs in $[P_n]$. \end{proof} The surprising difference between the disconnected and connected graphs that are independence equivalent to even paths raises the question of what happens with odd paths. In the odd case, we completely characterize $[P_{2n+1}]$ for all $n$ by showing, in stark contrast to Proposition~\ref{prop:largeevenpaths}, that $P_{2n+1}$ is independence unique for all $n\ge 0$. \begin{theorem}\label{thm:oddpathsunique} $P_{2n+1}$ is independence unique for all $n\ge 0$. \end{theorem} \begin{proof} Suppose that there exists a graph $G$ such that $G\sim P_{2n+1}$. Note that $i(P_{2n+1},x)$ is monic for every $n\ge 0$, since there is exactly one independent set of maximum size $n+1$, obtained by taking a leaf and then every other vertex along the path. So $i(G,x)$ must be monic. Therefore, $G$ must have exactly one independent set of size $n+1$; call this set $S$. If there is a vertex in $V(G)-S$ that is not adjacent to at least two vertices in $S$, then we can take this vertex and $n$ vertices in $S$ that are not adjacent to it to make a second independent set of size $n+1$, a contradiction. Therefore every vertex in $V(G)-S$ is adjacent to at least $2$ vertices in $S$, requiring at least $2n$ edges between $V(G)-S$ and $S$. From the second coefficient of $i(P_{2n+1},x)$, we know that $G$ has exactly $2n$ edges, and therefore $G$ is a bipartite graph with bipartition $(V(G)-S,S)$. Therefore, $G$ is triangle-free.
\begin{figure}[!h] \centering \scalebox{1}{ \begin{tikzpicture} \draw (0,0.9) ellipse (2 and 0.75); \node[text width=1cm] at (0,1) {\Large $S$}; \node[shape=circle,draw=black,fill=black] (1) at (-1.4,-0.75) {}; \node[shape=circle,draw=black,fill=black] (2) at (-0.7,-0.75) {}; \node[shape=circle,draw=black,fill=black] (3) at (0,-0.75) {}; \node[shape=circle,draw=black,fill=black] (4) at (1.4,-0.75) {}; \begin{scope} \path [-] (1) edge node {} (-1.5,0.5); \path [-] (1) edge node {} (-0.9,0.5); \path [-] (2) edge node {} (-0.8,0.5); \path [-] (2) edge node {} (-0.2,0.5); \path [-] (3) edge node {} (-0.1,0.5); \path [-] (3) edge node {} (0.5,0.5); \path [-] (4) edge node {} (0.9,0.5); \path [-] (4) edge node {} (1.5,0.5); \path (3) -- node[auto=false]{\ldots} (4); \end{scope} \draw[black, decoration={brace, raise=5pt, mirror, amplitude=3mm}, decorate] (-1.5,-1) -- (1.5,-1); \node[text width=1cm] at (0.4,-1.75) {$n$}; \end{tikzpicture}} \caption{$G$}% \label{fig:G}% \end{figure} If $G\not\cong P_{2n+1}$, then from Corollary~\ref{cor:connectedpathunique} we know that $G$ must be disconnected. Let $G_1,G_2,\ldots,G_k$ be the connected components of $G$ for some $k\ge 2$. Let $S_i=S\cap V(G_i)$ and $D_i=V(G_i)-S_i$ for $i=1,2,\ldots,k$. Each $G_i$ is bipartite with bipartition $(S_i,D_i)$. Suppose that for some $i$, $|S_i|\le |D_i|$. Now, $\bigcup_{j\neq i}S_j\cup D_i$ is an independent set with at least $n+1$ vertices in it, which contradicts $i(G,x)$ being monic and of degree $n+1$. Therefore, $|S_i|\ge |D_i|+1$ for $i=1,2,\ldots,k$. Therefore, $$2n+1=|V(G)|=\sum_{i=1}^k|V(G_i)|=\sum_{i=1}^k\left(|S_i|+|D_i|\right)\ge \sum_{i=1}^k\left( 2|D_i|+1 \right) =2n+k\ge 2n+2,$$ a contradiction. Therefore, $G$ must be connected, and by Corollary~\ref{cor:connectedpathunique}, $G\cong P_{2n+1}$. Therefore $P_{2n+1}$ is independence unique. 
\end{proof} It is interesting to note the contrast between the independence equivalence classes of even and odd paths given by Proposition~\ref{prop:largeevenpaths} and Theorem~\ref{thm:oddpathsunique}, respectively. It seems that the key distinction between the independence equivalence classes of odd and even paths is the number of independent sets of maximum size. An even path on $n$ vertices has $\frac{n}{2}+1$ maximum independent sets, while an odd path has only one. As seen in the proof of Theorem \ref{thm:oddpathsunique}, a graph having few maximum independent sets forces some structure. We will use a similar approach in the next section for even cycles. \section{Independence Equivalence Class of Cycles $C_{n}$} \label{sec:cycles} An early result in chromaticity is that cycles are chromatically unique \cite{ChaoWhitehead1978}. Clearly this is not the case for independence polynomials, as Proposition \ref{prop:chismcycle} shows $C_n \sim D_n$ for $n \geq 3$. In this section, we will show that $[C_n]=\{C_n,D_n\}$ for all even $n\neq 6$ and for all powers of a prime $p\ge 5$. Along with these results, we have used the computational tools of nauty \cite{McKay2014} and Maple to show that $[C_n]=\{C_n,D_n\}$ for $1\le n\le 32$ with the exceptions of $C_6,C_9,$ and $C_{15}$. We will present the independence equivalence classes of each of these three exceptional cycles as we proceed. As with paths, all connected graphs that are independence equivalent to cycles have been determined. \begin{theorem}[\cite{Oboudi2018}]\label{thm:cycleconnectedequivclass} For $n\ge 3$, if $G$ is a connected graph such that $i(G,x)=i(C_n,x)$, then $G\cong C_n$ or $G\cong D_n$. \QED \end{theorem} Given Theorem \ref{thm:cycleconnectedequivclass}, we need only consider disconnected graphs to determine $[C_n]$. We will use an argument on the degree sequence to show that there are no disconnected graphs in $[C_{2n}]$ for $n \geq 2$, $n\neq 3$, and exactly one disconnected graph in $[C_6]$.
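The exhaustive search mentioned above was carried out with nauty and Maple; on a small scale, the same kind of check can be reproduced by brute force. The following Python sketch (our illustration, not the authors' code) enumerates independent sets directly and confirms that $C_6$, $D_6$ and $(K_4-e)\cup K_2$ all have independence polynomial $1+6x+9x^2+2x^3$; here we take $D_n$ to be a triangle with a path of $n-3$ further vertices attached, and the vertex labellings are our own.

```python
from itertools import combinations

def indep_poly(n, edges):
    """Coefficients [i_0, i_1, ...] of i(G, x), by enumerating all vertex subsets."""
    adj = set(edges) | {(v, u) for (u, v) in edges}
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for s in combinations(range(n), k):
            if all(pair not in adj for pair in combinations(s, 2)):
                coeffs[k] += 1
    while coeffs[-1] == 0:  # drop trailing zeros above the independence number
        coeffs.pop()
    return coeffs

def cycle(n):
    return n, [(i, (i + 1) % n) for i in range(n)]

def d_graph(n):
    # D_n, read here as a triangle 0-1-2 with the path 2-3-...-(n-1) attached
    # (n vertices and n edges); this labelling is our assumption.
    return n, [(0, 1), (1, 2), (0, 2)] + [(i, i + 1) for i in range(2, n - 1)]

# (K_4 - e) with K_2: vertices 0..3 form K_4 minus the edge {0,1}; 4,5 form the K_2.
k4e_k2 = (6, [(0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (4, 5)])

print(indep_poly(*cycle(6)))    # i(C_6, x) = 1 + 6x + 9x^2 + 2x^3
print(indep_poly(*d_graph(6)))  # the same polynomial
print(indep_poly(*k4e_k2))      # the same polynomial
```

The subset enumeration is exponential in the number of vertices, so this is only practical for the small orders considered here.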
As is shown in the next theorem, using the principle of inclusion-exclusion, some information about the degree sequence of a graph is encoded in the coefficient of $x^3$ in its independence polynomial. \begin{theorem} \label{thm_i3} For any graph $G=(V,E)$ with $n$ vertices and $m$ edges $$i_3(G) = \binom{n}{3} - m(n-2) + \sum_{v \in V} \binom{\deg (v)}{2}-n(C_3),$$ \noindent where $i_3(G)$ is the number of independent sets in $G$ with cardinality three and $n(C_3)$ is the number of $3$-cycles in $G$. \end{theorem} \begin{proof} It is sufficient to show the number of 3-subsets which are not independent is $m(n-2) - \sum_{v \in V} {\deg (v) \choose 2}+n(C_3)$. Any 3-subset of $V$ induces one of the following subgraphs: \begin{figure}[!h] \def0.7{0.5} \centering \subfigure[]{ \scalebox{0.7}{ \begin{tikzpicture} \node[shape=circle,draw=black,fill=black] (1) at (-1,0) {}; \node[shape=circle,draw=black,fill=black] (2) at (0,2) {}; \node[shape=circle,draw=black,fill=black] (3) at (1,0) {}; \end{tikzpicture}}} \qquad \subfigure[]{ \scalebox{0.7}{ \begin{tikzpicture} \node[shape=circle,draw=black,fill=black] (1) at (-1,0) {}; \node[shape=circle,draw=black,fill=black] (2) at (0,2) {}; \node[shape=circle,draw=black,fill=black] (3) at (1,0) {}; \path[] (1) edge node {} (2); \end{tikzpicture}}} \qquad \subfigure[]{ \scalebox{0.7}{ \begin{tikzpicture} \node[shape=circle,draw=black,fill=black] (1) at (-1,0) {}; \node[shape=circle,draw=black,fill=black] (2) at (0,2) {}; \node[shape=circle,draw=black,fill=black] (3) at (1,0) {}; \path[] (1) edge node {} (2); \path[] (2) edge node {} (3); \end{tikzpicture}}} \qquad \subfigure[]{ \scalebox{0.7}{ \begin{tikzpicture} \node[shape=circle,draw=black,fill=black] (1) at (-1,0) {}; \node[shape=circle,draw=black,fill=black] (2) at (0,2) {}; \node[shape=circle,draw=black,fill=black] (3) at (1,0) {}; \path[] (1) edge node {} (2); \path[] (2) edge node {} (3); \path[] (1) edge node {} (3); \end{tikzpicture}}} \end{figure} We can construct each 
non-independent 3-subset by taking an edge $uv$ and a vertex $w$ not incident to the edge. As $G$ has $m$ edges, we construct $m(n-2)$ such subsets. If $w$ is adjacent to neither $u$ nor $v$, then we induce the subgraph $(b)$ and construct it exactly once. If $w$ is adjacent to $u$ (or $v$) then we induce the subgraph $(c)$. However this 3-subset will have been constructed in two ways: the edge $uv$ and vertex $w$, and the edge $uw$ (or $vw$) and vertex $v$. Therefore we have counted each 3-subset which induces a subgraph of type $(c)$ twice and $(d)$ three times. We can construct each 3-subset which induces a subgraph of type $(c)$ by taking a vertex and choosing any two of its neighbours. Hence there are $\sum_{v \in V} {\deg (v) \choose 2}$ such subsets. Note this counts the number of 3-subsets which induce subgraph $(d)$ three times as well. Clearly the number of 3-subsets which induce subgraph $(d)$ is $n(C_3)$. Thus the number of non-independent 3-subsets is $m(n-2) - \sum_{v \in V} {\deg (v) \choose 2}+n(C_3)$. \end{proof} \begin{lemma} \label{lem:eq3} Let $n \geq 4$ and $G$ be a graph with $n(C_3)$ many $3$-cycles and $g_i$ many vertices of degree $i$. If $G \sim C_n$ then \begin{enumerate}[(i)] \item $\sum\limits_{i=0}^{n-1} g_i = n$, \item $\sum\limits_{i=1}^{n-1} i \cdot g_i = 2n$, \item $\sum\limits_{i=2}^{n-1} {i \choose 2}g_i =n + n(C_3)$, and \item $n(C_3) \geq g_0 + \sum\limits_{i=3}^{n-1} g_i$, that is, there are at most $n(C_3)$ vertices not of degree one or two. \end{enumerate} \end{lemma} \begin{proof} Suppose $G$ is a graph such that $G \sim C_n$. Then $G$ has $n$ vertices and $n$ edges, making (i) and (ii) trivial. To prove (iii), we note that by Theorem \ref{thm_i3}, $$i_3(G) = {n \choose 3} - n(n-2) + \sum_{i=2}^{n-1} {i \choose 2}g_i-n(C_3).$$ Furthermore $i_3(C_{n})$ can easily be computed to be ${n \choose 3} - n(n-2) + n$. As $i_3(G)=i_3(C_{n})$ it follows that (iii) holds.
Finally by adding (i) and (iii) and subtracting (ii) we obtain: $$n(C_3)=\sum_{i=0}^{n-1} g_i+ \sum_{i=2}^{n-1} {i \choose 2}g_i - \sum_{i=1}^{n-1} i \cdot g_i = g_0 + \sum_{i=3}^{n-1} \left({i \choose 2} -i+1 \right) g_i \geq g_0+\sum_{i=3}^{n-1} g_i.$$ Hence (iv) holds as well. \end{proof} We will require basic results of Gutman and Harary for computing the independence polynomial. \begin{prop}[\cite{INDFIRST}]\label{prop:deletion} Let $G$ and $H$ be graphs and $v\in V(G)$. Then: \begin{enumerate}[i)] \item $i(G,x)=i(G-v,x)+xi(G-N[v],x)$. \item $i(G\cup H,x)=i(G,x)\cdot i(H,x)$. \QED \end{enumerate} \end{prop} \subsection{Even Cycles} \begin{theorem} \label{thm:evencycle} Let $K_4-e$ denote the graph which consists of a $K_4$ with one edge removed. Then \begin{itemize} \item $[C_{6}] = \{C_{6},D_{6},(K_4-e) \cup K_2\},$ and \item $[C_{2n}] = \{C_{2n},D_{2n}\}$ for $n \geq 2,~n \neq 3$. \end{itemize} \end{theorem} \begin{proof} Suppose $G \sim C_{2n}$ and $G \not\cong C_{2n}$. Then $G$ has $2n$ vertices and $2n$ edges. For $n=2$ there is only one graph, $D_4$, with $4$ edges and $4$ vertices that is not isomorphic to $C_4$. As $C_4 \sim D_4$ by Proposition \ref{prop:chismcycle}, it follows that $[C_{4}] = \{C_{4},D_{4}\}$. We now consider when $n \geq 3$. By Theorem \ref{thm:PathPoly} and Proposition \ref{prop:deletion} it can be shown that $i(G,x)$ has degree $n$ with leading coefficient equal to $2$. That is, there are exactly two maximum independent sets in $G$ of size $n$. We begin by showing $G$ contains a triangle. Suppose not, and let $g_i$ be the number of vertices of degree $i$ in $G$. By Lemma \ref{lem:eq3} (iii) and (iv), as $G$ is triangle-free (i.e. $n(C_3)=0$) and $G \sim C_{2n}$, $$ \sum_{i=2}^{2n-1} {i \choose 2}g_i =2n \text{ } \text{ and } \text{ } 0 \geq g_0 + \sum_{i=3}^{2n-1} g_i.$$ \noindent Hence $g_i = 0$ for $i \geq 3$, and thus $\sum_{i=2}^{2n-1} {i \choose 2}g_i =2n$ implies $G$ is 2-regular.
However as $G \not\cong C_{2n}$ then $G$ is a disjoint union of cycles. It is easy to see each cycle has at least two maximum independent sets, meaning $G$ must have at least $4$ maximum independent sets which is a contradiction. Thus $G$ contains a triangle. As $G$ contains a triangle, it is not bipartite, and hence the two maximum independent sets (of cardinality $n$) in $G$ are not disjoint. Thus we can partition the vertices into non-empty sets $A, A', B, B'$ such that $A \cup A'$ and $A \cup B$ are the two independent sets of size $n$. Note $|A \cup A'|=|A \cup B|=|B \cup B'|=n$ and $|A'|=|B|$. It follows that $|A| = |B'|$ and so $|A'|+|B'|=n$. Each vertex in $B'$ is adjacent to at least two vertices in $A \cup A'$. Otherwise we can form another independent set of size at least $n$ which is not $A \cup A'$ nor $A \cup B$. Thus our partially constructed $G$ looks like: \begin{center} \scalebox{1}{ \begin{tikzpicture} \draw (0,0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (0,1) {\large $A'$}; \draw (4,0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,1) {\large $A$}; \draw (0,-0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (0,-1) {\large $B$}; \draw (4,-0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,-1) {\large $B'$}; \begin{scope} \path [-] (-1.4+4,-0.75) edge node {} (-1.5+2,0.6); \path [-] (-1.4+4,-0.75) edge node {} (-0.9+2,0.6); \path [-] (-0.7+4,-0.75) edge node {} (-0.8+2.2,0.7); \path [-] (-0.7+4,-0.75) edge node {} (-0.2+4,0.6); \path [-] (0+4,-0.75) edge node {} (-0.1+4,0.6); \path [-] (0+4,-0.75) edge node {} (0.5+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (0.9+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (1.5+4,0.7); \path (0+4,-0.75) -- node[auto=false]{\ldots} (1.4+4,-0.75); \end{scope} \end{tikzpicture}} \end{center} We now consider two cases: $|B| \geq 2$ and $|B|=1$. If $|B| \geq 2$, then by the same argument used for $B'$ and $A \cup A'$, each vertex in $A'$ is adjacent to at least two vertices in $B$. 
Thus $G$ has at least $2(|A'|+|B'|)=2n$ edges between $A \cup A'$ and $B \cup B'$. As $G$ has exactly $2n$ edges, there is no edge between two vertices of $B \cup B'$; since $A \cup A'$ is also independent, $G$ would be bipartite, a contradiction. Now suppose $|B|=1$. As $|B|=|A'|$, we now have that $|A'|=1$. We will label the vertex in $A'$ and the vertex in $B$ to be $a'$ and $b$, respectively. Note $a'$ and $b$ are adjacent, as otherwise $A \cup A' \cup B$ forms an independent set of size $n+1$. Thus our partially constructed $G$ (omitting one edge in $B\cup B'$) looks like this: \begin{center} \scalebox{1}{ \begin{tikzpicture} \draw (4,0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,1) {\large $A$}; \draw (4,-0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,-1) {\large $B'$}; \node[shape=circle,draw=black,fill=black,label=left:$a'$] (a) at (1,0.9) {}; \node[shape=circle,draw=black,fill=black,label=left:$b$] (b) at (1,-0.9) {}; \begin{scope} \path [-] (-1.4+4,-0.75) edge node {} (a); \path [-] (-1.4+4,-0.75) edge node {} (-0.7+4,0.6); \path [-] (-0.7+4,-0.75) edge node {} (a); \path [-] (-0.7+4,-0.75) edge node {} (-0.2+4,0.6); \path [-] (0+4,-0.75) edge node {} (-0.1+4,0.6); \path [-] (0+4,-0.75) edge node {} (0.5+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (0.9+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (1.5+4,0.7); \path [-] (a) edge node {} (b); \path (0+4,-0.75) -- node[auto=false]{\ldots} (1.4+4,-0.75); \end{scope} \end{tikzpicture}} \end{center} We will consider the placement of the edge in $B \cup B'$ in two cases. \vspace{3mm} \noindent \textbf{Case 1: The edge is from $b$ to some vertex $v \in B'$.} \vspace{3mm} As $G$ contains a triangle, $v$ must be adjacent to $a'$ (note this is the only triangle in $G$). All vertices in $B \cup B'$ are now degree two with the exception of $v$, which has degree three.
Thus $G$ now looks like: \begin{center} \scalebox{1}{ \begin{tikzpicture} \draw (4,0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,1) {\large $A$}; \draw (4,-0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,-1) {\large $B'$}; \node[shape=circle,draw=black,fill=black,label=left:$a'$] (a) at (1,0.9) {}; \node[shape=circle,draw=black,fill=black,label=left:$b$] (b) at (1,-0.9) {}; \node[shape=circle,draw=black,fill=black,label=right:$v$] (v) at (-1.4+4,-0.9) {}; \begin{scope} \path [-] (v) edge node {} (a); \path [-] (v) edge node {} (b); \path [-] (v) edge node {} (-0.7+4,0.6); \path [-] (-0.7+4,-0.75) edge node {} (a); \path [-] (-0.7+4,-0.75) edge node {} (-0.2+4,0.6); \path [-] (0+4,-0.75) edge node {} (-0.1+4,0.6); \path [-] (0+4,-0.75) edge node {} (0.5+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (0.9+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (1.5+4,0.7); \path [-] (a) edge node {} (b); \path (0+4,-0.75) -- node[auto=false]{\ldots} (1.4+4,-0.75); \end{scope} \end{tikzpicture}} \end{center} We now know that $G$ has exactly one triangle (i.e. $n(C_3)=1$) and $G \sim C_{2n}$ so, by Lemma \ref{lem:eq3} (iv), $G$ has at most one vertex which is not of degree one or two. As $v$ has degree three, every other vertex must have degree one or two. Again let $g_i$ be the number of vertices of degree $i$ in $G$. Note $g_i=0$ for $i \neq 1,2,3$, $g_3=1$, and $g_1+g_2+g_3=2n$. Furthermore by Lemma \ref{lem:eq3} (iii), $$2n+1=\sum_{i=2}^{2n-1} {i \choose 2}g_i=g_2+3.$$ \noindent Thus $g_2=2n-2$, $g_3=1$ and $g_1=1$. Note $a'$ must have degree two. We now construct $G$. Begin with the one triangle in $G$, which is formed by the vertices $a'$, $b$, and $v$. As $v$ is a degree three vertex it must have a neighbour in $A$. Now label the only vertex of degree one as $\ell$. As the other vertices in $G$ are all degree two, there must be an induced path of vertices connecting $v$ and $\ell$. This forms a $D_r$ component in $G$ for some $r \leq 2n$. If $r=2n$ then $G \cong D_{2n}$; otherwise $G$ is the disjoint union of cycles and a $D_r$ for $r < 2n$. However, as $D_r$ has two maximum independent sets, if $G$ had any cycle components it would have at least four maximum independent sets, which is a contradiction. \vspace{3mm} \noindent \textbf{Case 2: The edge is between two vertices $u,v \in B'$.} \vspace{3mm} As $G$ contains a triangle, $u$ and $v$ must have at least one common neighbour in $A \cup A'$. Note that we now know the number of vertices of each degree in $B \cup B'$: $b$ is degree one, $u$ and $v$ are degree three and every other vertex in $B \cup B'$ is degree two. Thus we consider two subcases: $u$ and $v$ have one or two common neighbours. \vspace{3mm} \noindent \textbf{Case 2a: $u$ and $v$ have exactly one common neighbour.} \vspace{3mm} Then $G$ has exactly one triangle and now looks like: \begin{center} \scalebox{1}{ \begin{tikzpicture} \draw (4,0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,1) {\large $A$}; \draw (4,-0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.75,-1) {\large $B'$}; \node[shape=circle,draw=black,fill=black,label=left:$a'$] (a) at (1,0.9) {}; \node[shape=circle,draw=black,fill=black,label=left:$b$] (b) at (1,-0.9) {}; \node[shape=circle,draw=black,fill=black,label=below:$u$] (u) at (3.2,-0.7) {}; \node[shape=circle,draw=black,fill=black,label=below:$v$] (v) at (4,-0.7) {}; \begin{scope} \path [-] (-1.5+4,-0.75) edge node {} (a); \path [-] (-1.5+4,-0.75) edge node {} (-1.3+4,0.7); \path [-] (-1.2+4,-0.75) edge node {} (a); \path [-] (-1.2+4,-0.75) edge node {} (-0.9+4,0.6); \path [-] (0.5+4,-0.75) edge node {} (0+4,0.6); \path [-] (0.5+4,-0.75) edge node {} (0.7+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (0.9+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (1.5+4,0.7); \path [-] (a) edge node {} (b); \path [-] (u) edge node {} (v); \path [-] (u) edge node {} (3.6,.6); \path [-] (u) edge node {} (-1.3+4,0.7); \path [-] (v) edge node {}
(3.6,.6); \path [-] (v) edge node {} (0.7+4,0.6); \end{scope} \end{tikzpicture}} \end{center} As $G$ has exactly one triangle (i.e. $n(C_3)=1$) and $G \sim C_{2n}$, Lemma \ref{lem:eq3} (iv) gives that $g_3\le 1$. However $u$ and $v$ both have degree three which is a contradiction. \vspace{3mm} \noindent \textbf{Case 2b: $u$ and $v$ have exactly two common neighbours.} \vspace{3mm} Then $G$ has exactly two triangles and looks like: \begin{center} \scalebox{1}{ \begin{tikzpicture} \draw (4,0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.25,1) {\large $A$}; \draw (4,-0.9) ellipse (1.9 and 0.5); \node[text width=1cm] at (4.75,-1) {\large $B'$}; \node[shape=circle,draw=black,fill=black,label=left:$a'$] (a) at (1,0.9) {}; \node[shape=circle,draw=black,fill=black,label=left:$b$] (b) at (1,-0.9) {}; \node[shape=circle,draw=black,fill=black,label=below:$u$] (u) at (3.25,-0.7) {}; \node[shape=circle,draw=black,fill=black,label=below:$v$] (v) at (4,-0.7) {}; \begin{scope} \path [-] (-1.5+4,-0.75) edge node {} (a); \path [-] (-1.5+4,-0.75) edge node {} (-1.3+4,0.7); \path [-] (-1.2+4,-0.75) edge node {} (a); \path [-] (-1.2+4,-0.75) edge node {} (-0.9+4,0.6); \path [-] (0.5+4,-0.75) edge node {} (0+4,0.6); \path [-] (0.5+4,-0.75) edge node {} (0.5+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (0.9+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (1.5+4,0.7); \path [-] (a) edge node {} (b); \path [-] (u) edge node {} (v); \path [-] (u) edge node {} (3.85,.6); \path [-] (u) edge node {} (3.35,.6); \path [-] (v) edge node {} (3.85,.6); \path [-] (v) edge node {} (3.35,0.6); \end{scope} \end{tikzpicture}} \end{center} Since $G$ has exactly two triangles (i.e. $n(C_3)=2$) and $G \sim C_{2n}$, Lemma \ref{lem:eq3} (iv) implies that $\displaystyle{\sum_{i\neq 1,2}}g_i\le 2$. Both $u$ and $v$ have degree three so every other vertex must either have degree one or two. Note $g_i=0$ for $i \neq 1,2,3$, $g_3=2$, and $g_1+g_2+g_3=2n$. 
Furthermore by Lemma \ref{lem:eq3} (iii), $$2n+2=\sum_{i=2}^{2n-1} {i \choose 2}g_i=g_2+6.$$ \noindent Thus $g_2=2n-4$, $g_3=2$ and $g_1=2$. Note that every vertex other than $u$ and $v$ has degree at most two; thus the two common neighbours of $u$ and $v$ have degree exactly two, and $u$, $v$ and these common neighbours form a $K_4$ less an edge component of $G$. Furthermore $G$ has two vertices of degree one. As $b$ is degree one and every vertex in $B'$ is degree two or three, the second vertex of degree one is $a'$ or some vertex in $A$. First suppose some vertex $\ell \in A$ is degree one. At this point our graph looks like: \begin{center} \scalebox{1}{ \begin{tikzpicture} \draw (4,0.9) ellipse (1.9 and 0.5); \node[text width=5cm] at (4.75,1.8) {\large $A-N(u)-N(v)$}; \draw (4,-0.9) ellipse (1.9 and 0.5); \node[text width=5cm] at (5.25,-1.8) {\large $B'-\{u,v\}$}; \node[shape=circle,draw=black,fill=black,label=left:$a'$] (a) at (1,0.9) {}; \node[shape=circle,draw=black,fill=black,label=left:$b$] (b) at (1,-0.9) {}; \node[shape=circle,draw=black,fill=black,label=below:$u$] (u) at (0,-0.9) {}; \node[shape=circle,draw=black,fill=black,label=below:$v$] (v) at (-1,-0.9) {}; \node[shape=circle,draw=black,fill=black] (1) at (0,0.9) {}; \node[shape=circle,draw=black,fill=black] (2) at (-1,0.9) {}; \node[shape=circle,draw=black] (1) at (0,0.9) {}; \node[shape=circle,draw=black] (2) at (-1,0.9) {}; \begin{scope} \path [-] (-1.4+4,-0.75) edge node {} (a); \path [-] (-1.4+4,-0.75) edge node {} (-0.7+4,0.6); \path [-] (-0.7+4,-0.75) edge node {} (-0.7+3,0.75); \path [-] (-0.7+4,-0.75) edge node {} (-0.2+4,0.6); \path [-] (0+4,-0.75) edge node {} (-0.1+4,0.6); \path [-] (0+4,-0.75) edge node {} (0.5+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (0.9+4,0.6); \path [-] (1.4+4,-0.75) edge node {} (1.5+4,0.7); \path [-] (a) edge node {} (b); \path [-] (u) edge node {} (v); \path [-] (u) edge node {} (1); \path [-] (u) edge node {} (2); \path [-] (v) edge node {} (1); \path [-] (v) edge node {} (2); \path (0+4,-0.75) -- node[auto=false]{\ldots}
(1.4+4,-0.75); \end{scope} \end{tikzpicture}} \end{center} Note that every vertex in $B'-\{u,v\}$ and $A-N(u)-N(v)$ is degree two other than $\ell$. Therefore one component in $G$ is an even order path from $b$ to $\ell$. However, every even path with more than two vertices has at least three maximum independent sets, which is a contradiction as $G$ only has two maximum independent sets. Now suppose $a'$ is degree one. Then $a'$ and $b$ form a $K_2$ component in $G$ and the remaining vertices in $(B'-\{u,v\})\cup(A-N(u))$ must induce a disjoint union of cycles. In the case where $n=3$, that is $G \sim C_6$, $G$ has no cycle components and $G \cong (K_4-e) \cup K_2$. For $n \geq 4$, $(B'-\{u,v\})\cup(A-N(u))$ contains at least one cycle. However, as $K_2$ and cycles each have at least two maximum independent sets, $G$ has at least four maximum independent sets, which is again a contradiction. The only two cases which did not result in a contradiction yielded $G \cong D_{2n}$ and $G \cong (K_4-e) \cup K_2$. As $D_{2n} \sim C_{2n}$ for all $n \geq 3$ and $(K_4-e) \cup K_2 \sim C_6$, we have shown that $[C_{6}] = \{C_{6},D_{6}, (K_4-e) \cup K_2 \}$ and $[C_{2n}] = \{C_{2n},D_{2n}\}$ for $n \geq 4$. \end{proof} \subsection{Prime Power Cycles} In Theorem \ref{thm:evencycle}, we used an involved construction to show that the only disconnected graph independence equivalent to an even cycle is $(K_4-e) \cup K_2 \sim C_6$. This construction relies on the fact that the leading coefficient of $i(C_{2n},x)$ is 2. This argument will not hold for odd cycles, as the leading coefficient of $i(C_{2n+1},x)$ is $2n+1$. However, there are other ways to show connectivity, and we shall do so via irreducibility of polynomials over the rationals. We will use Eisenstein's famous criterion for irreducibility, which we state here. \begin{theorem}[cf. \cite{Fraleigh} p.~215] Let $p\in \mathbb{Z}$ be a prime and $f(x)=a_0+a_1x+\ldots+a_nx^n$ be a polynomial of degree $n$ with integer coefficients.
If $p$ divides each of $a_0,a_1,\ldots,a_{n-1}$ but $p$ does not divide $a_n$, and $p^2$ does not divide $a_0$, then $f$ is irreducible over the rationals. \QED \end{theorem} \begin{prop}\label{prop:primecycles} If $p$ is an odd prime, then $[C_p]=\{C_p,D_p\}$ (note $C_p \cong D_p$ when $p=3$). \end{prop} \begin{proof} We show that $i(C_p,x)$ is irreducible over the rationals and therefore $C_p$ has no disconnected graphs in its equivalence class. The result will then follow by Theorem~\ref{thm:cycleconnectedequivclass}. Let $p$ be an odd prime. By Theorem~\ref{thm:PathPoly} and Proposition~\ref{prop:deletion} we know that \begin{align*} i(C_p,x)&=i(P_{p-1},x)+xi(P_{p-3},x)\\ &= \sum_{j=0}^{\lfloor \frac{p}{2} \rfloor} { {p -j} \choose {j}}x^j+\sum_{j=0}^{\lfloor \frac{p-2}{2} \rfloor} { {p-2-j} \choose {j}}x^{j+1}\\ &=\sum_{j=0}^{\lfloor \frac{p}{2} \rfloor} { {p -j} \choose {j}}x^j+\sum_{j=1}^{\lfloor \frac{p}{2} \rfloor} { {p-j-1} \choose {j-1}}x^{j}\\ &=1+\sum_{j=1}^{\lfloor \frac{p}{2} \rfloor}\left( { {p -j} \choose {j}}+{ {p-j-1} \choose {j-1}}\right)x^j\\ &=1+\sum_{j=1}^{\lfloor \frac{p}{2} \rfloor}{ {p -j} \choose {j}}\left(\frac{p}{p-j}\right)x^j. \end{align*} The coefficients above must be integers, and since $p$ is prime, $p-j$ is coprime to $p$ for each $j=1,2,\ldots, \lfloor \frac{p}{2} \rfloor$, so $p-j$ must divide the integer ${ {p -j} \choose {j}}$. Therefore, ${ {p -j} \choose {j}}\left(\frac{p}{p-j}\right)$ is a multiple of $p$ for $j=1,2,\ldots, \lfloor \frac{p}{2} \rfloor$.
We now consider the coefficient of $x^{\lfloor \frac{p}{2} \rfloor}$, \begin{align*} { {p -\lfloor \frac{p}{2} \rfloor} \choose {\lfloor \frac{p}{2} \rfloor}}\left(\frac{p}{p-\lfloor \frac{p}{2} \rfloor}\right)&=\left(\frac{(p -\lfloor \frac{p}{2}\rfloor-1)!}{\lfloor \frac{p}{2}\rfloor!(p -2\lfloor \frac{p}{2}\rfloor)!} \right)p\\ &=\left(\frac{(p -\lceil \frac{p}{2}\rceil)!}{\lfloor \frac{p}{2}\rfloor!(\lceil \frac{p}{2}\rceil -\lfloor \frac{p}{2}\rfloor)!} \right)p\\ &=\left(\frac{\lfloor \frac{p}{2}\rfloor!}{\lfloor \frac{p}{2}\rfloor!} \right)p\\ &=p. \end{align*} Therefore, applying Eisenstein's criterion to the polynomial $x^{\alpha(C_p)}i(C_p,\tfrac{1}{x})$ with the prime $p$, it follows that $i(C_p,x)$ is irreducible over the rationals. Since $i(C_p,x)$ is irreducible, $C_p$ cannot be independence equivalent to any disconnected graph. It follows that $[C_p]=\{C_p,D_p\}$ by Theorem~\ref{thm:cycleconnectedequivclass}. \end{proof} The irreducibility of $i(C_p,x)$ for primes $p$ given in Proposition~\ref{prop:primecycles} can be partially extended to cycles of length $p^n$ for all $n$ and all primes $p\ge 5$. These polynomials are reducible, but considering each irreducible factor will lead us to the same conclusion as in the case $n=1$. \begin{defn}\label{def:cyclicpoly} We say that a polynomial $p(x)=\sum_{i=0}^np_ix^i$ with integer coefficients is \textbf{unicyclic} if $p_0=1$, $p_1=k$ and $p_2=\binom{k}{2}-k$ for some integer $k$. \end{defn} Note that a unicyclic polynomial is one that shares the same first three coefficients with the independence polynomial of some unicyclic graph. If a connected graph has a unicyclic independence polynomial, then that graph must be unicyclic, as it has equally many vertices and edges and is connected. \begin{lemma}\label{lem:productunicyclic} If $h(x)=g(x)f(x)$ and $h(x)$, $g(x)$ are unicyclic, then $f(x)$ is unicyclic.
\end{lemma} \begin{proof} Assuming the hypothesis, let the first three terms of $g(x)$ be $1,nx,\left(\binom{n}{2}-n\right)x^2$, the first three terms of $f(x)$ be $1,kx,\left(\binom{k}{2}-k+\ell\right)x^2$ where $\ell$ is some integer, so that the first three terms of $h(x)$ are $1,(n+k)x,\left(\binom{n+k}{2}-(n+k)\right)x^2$. Since $h(x)=f(x)g(x)$, they must be equal coefficient-wise, so we must have \begin{align*} \binom{n+k}{2}-(n+k)&=\binom{n}{2}-n+\binom{k}{2}-k+\ell+nk\\ &=\frac{n(n-1)+k(k-1)+2nk}{2}-(n+k)+\ell\\ &=\frac{(n+k)((n+k)-1)}{2}-(n+k)+\ell\\ &=\frac{(n+k)^2-(n+k)}{2}-(n+k)+\ell\\ &=\binom{n+k}{2}-(n+k)+\ell. \end{align*} Therefore, $\ell=0$, and $f(x)$ is unicyclic. \end{proof} The roots of $i(C_n,x)$ have been completely determined by Alikhani and Peng \cite{AlikhaniPeng2011} and we will make use of a corollary that can be derived from their results. \begin{theorem}[\cite{AlikhaniPeng2011}]\label{thm:cycleroots} For $n\ge 3$, the roots of $i(C_n,x)$ are given by $$r_i=-\frac{1}{2\left(1+\cos\left(\frac{(2i-1)\pi}{n}\right) \right)}$$ for $i=1,2,\ldots,\lfloor\frac{n}{2}\rfloor$, and these roots are all distinct. \QED \end{theorem} \begin{cor}\label{cor:cycledivision} For odd $n$ and $k\neq 1$, $k|n$ if and only if $i(C_k,x)|i(C_n,x)$. \end{cor} \begin{proof} Let $n$ be odd. First suppose $k|n$. Then let $n=qk$ for some positive integer $q$. By Theorem~\ref{thm:cycleroots}, we only have to show that for all $j=1,2,\ldots,\lfloor\frac{k}{2}\rfloor$ there exists an $i$ with $1\le i\le \lfloor\frac{n}{2}\rfloor$ such that $\frac{(2i-1)\pi}{n}=\frac{(2j-1)\pi}{k}$. This happens if and only if $$i=\frac{(2j-1)q+1}{2}.$$ Since $n$ is odd, $q$ is also odd, and therefore $i$ is indeed an integer; moreover, since $j\le \lfloor\frac{k}{2}\rfloor$, we have $1\le i\le \lfloor\frac{n}{2}\rfloor$. Thus every root of $i(C_k,x)$ is also a root of $i(C_n,x)$.
Let $i(C_k,x)=c(x-r_1)(x-r_2)\cdots(x-r_{\lfloor\frac{k}{2}\rfloor})$ where $c$ is the leading coefficient and the $r_i$'s are the roots of $i(C_k,x)$. Since all roots of $i(C_k,x)$ are also roots of $i(C_n,x)$, it follows that $i(C_n,x)=c(x-r_1)(x-r_2)\cdots(x-r_{\lfloor\frac{k}{2}\rfloor})g(x)$ for some polynomial $g(x)$ with rational coefficients, and therefore $i(C_k,x)|i(C_n,x)$. Conversely suppose $i(C_k,x)|i(C_n,x)$. Then the leading coefficient of $i(C_k,x)$ must divide the leading coefficient of $i(C_n,x)$. From Theorem \ref{thm:PathPoly} and Proposition \ref{prop:deletion}, as $n$ is odd, the leading coefficient of $i(C_n,x)$ is $n$. Furthermore the leading coefficient of $i(C_k,x)$ is either $2$ if $k$ is even or $k$ if $k$ is odd. As $n$ is odd, $2 \nmid n$, and hence $k|n$. \end{proof} \begin{lemma} \label{lem:cycleuni} Let $p$ be an odd prime and $n\ge 1$. Then every irreducible factor of $i(C_{p^n},x)$ is unicyclic. \end{lemma} \begin{proof} The proof is by induction on $n$. The case $n=1$ was handled in Proposition~\ref{prop:primecycles}: $i(C_p,x)$ is irreducible and, being the independence polynomial of a unicyclic graph, it is unicyclic. Suppose the result holds for $n\le k$ for some $k\ge 1$. Now from Corollary~\ref{cor:cycledivision}, we know that $i(C_{p^{k}},x)|i(C_{p^{k+1}},x)$. Let $i(C_{p^{k+1}},x)=i(C_{p^k},x)r(x)$. We claim that $r(x)$ is irreducible and unicyclic. The fact that $r(x)$ is unicyclic follows from Lemma~\ref{lem:productunicyclic}, since $i(C_{p^{k+1}},x)$ and $i(C_{p^k},x)$ are both unicyclic; it remains to show that $r(x)$ is irreducible. Similarly to the proof of Proposition~\ref{prop:primecycles}, we derive an expression for the coefficients of $i(C_{p^k},x)$: $$i(C_{p^k},x)=1+\sum_{j=1}^{\lfloor \frac{p^k}{2} \rfloor}{ {p^k -j} \choose {j}}\left(\frac{p^k}{p^k-j}\right)x^j. $$ Note that $p$ divides each coefficient above except the constant term: each coefficient is an integer, and since $p^k-j$ has at most $k-1$ factors of $p$ for all $1\le j\le p^k-1$, the factor $\frac{p^k}{p^k-j}$ contributes at least one factor of $p$. Let $r(x)=r_0+r_1x+r_2x^2+\cdots+r_{m}x^m$.
Since $i(C_{p^{k+1}},x)=i(C_{p^k},x)r(x)$, we must have \begin{align} { {p^{k+1} -j} \choose {j}}\left(\frac{p^{k+1}}{p^{k+1}-j}\right)&=\sum_{i=0}^{j}\left(r_i{ {p^k -(j-i)} \choose {j-i}}\left(\frac{p^k}{p^k-(j-i)}\right)\right)\label{eq:primepwrsum} \end{align} for $j=0,1,\ldots,\lfloor \frac{p^{k+1}}{2} \rfloor$. As noted earlier, $p$ divides the left hand side of (\ref{eq:primepwrsum}) for $1\le j\le \lfloor \frac{p^{k+1}}{2} \rfloor$, so $p$ must divide the sum on the right hand side of (\ref{eq:primepwrsum}). Since we know $p$ divides each coefficient of $i(C_{p^k},x)$ except the constant term, it follows by induction on $j$ (the $i=j$ term of the sum is $r_j$) that $p|r_j$ for all $j=1,2,\ldots,m$. Also, since $p^kr_m=p^{k+1}$, it follows that $r_m=p$. So by Eisenstein's Criterion applied to $x^mr(\tfrac{1}{x})$, it follows that $r(x)$ is irreducible. The irreducible factors of $i(C_{p^{k+1}},x)$ are therefore $r(x)$ together with the irreducible factors of $i(C_{p^k},x)$, all of which are unicyclic by the inductive hypothesis. \end{proof} \begin{theorem} \label{thm:powerprimecycles} For $k,p \in \mathbb{N}$ where $p \geq 5$ is prime, $[C_{p^k}] = \{C_{p^k},D_{p^k}\}$. \end{theorem} \begin{proof} Suppose $G \sim C_{n}$ and $G \not\cong C_{n}$ where $n=p^k$. Then $G$ has $n$ vertices and $n$ edges. Then by Lemma \ref{lem:eq3} we obtain the following three equations: $$\sum_{i=0}^{n-1} g_i = n, \text{ }\sum_{i=1}^{n-1} i \cdot g_i = 2n, \text{ } \sum_{i=2}^{n-1} {i \choose 2}g_i =n+n(C_3).$$ \noindent Thus, \[n(C_3)=\sum_{i=0}^{n-1} g_i+ \sum_{i=2}^{n-1} {i \choose 2}g_i - \sum_{i=1}^{n-1} i \cdot g_i = g_0 + \sum_{i=3}^{n-1} \left({i \choose 2} -i+1 \right) g_i. \tag{1} \] \noindent Furthermore, $G$ has no $C_3$ components, otherwise $i(C_3,x) | i(C_n,x)$ and hence by Corollary \ref{cor:cycledivision}, $3|n$, which is a contradiction as $n=p^k$ for prime $p \geq 5$. Hence every induced $C_3$ has a vertex with degree 3 or greater. By Lemma \ref{lem:cycleuni}, every irreducible factor of $i(C_n,x)$ is unicyclic and hence every connected component of $G$ has the same number of vertices as edges and is therefore unicyclic. Therefore every vertex is part of at most one induced $C_3$.
As every induced $C_3$ has a vertex of degree 3 or greater, \[n(C_3) \leq \sum_{i=3}^{n-1} g_i.\] \noindent Therefore by subtracting this inequality from equation $(1)$ we obtain $$0 \geq g_0 + \sum_{i=3}^{n-1} \left({i \choose 2} -i \right) g_i.$$ \noindent As ${i \choose 2} -i \geq 2$ for $i \geq 4$, we have $g_i=0$ for $i \neq 1,2 \text{ or }3$. Therefore, by equation (1) we have $g_3=n(C_3)$. We can also now simplify the sums given in Lemma~\ref{lem:eq3} to get $g_1+g_2+g_3=n$ and $g_1+2g_2+3g_3=2n$, and subtracting $2$ times the former from the latter we obtain $g_1=g_3$. Consider the structure of $G$. Note that no two induced $C_3$ graphs intersect, as each vertex is in at most one. As $G$ has no $C_3$ components, each of the induced $C_3$ must contain at least one degree three vertex. As $g_3=n(C_3)$, each induced $C_3$ contains exactly one degree three vertex and there are no other degree three vertices in the graph. Now all that remains are degree one and two vertices. Hence the other neighbour of each degree three vertex is either a leaf or a degree two vertex. It is easy to see that if it is a degree two vertex, this must be the beginning of a path of degree two vertices ending in a leaf, otherwise we would contradict either the component being unicyclic or the number of degree three or greater vertices. This shows that each component is either a cycle or a $D$-graph. As $D_{l_i} \sim C_{l_i}$, $G$ must be independence equivalent to a disjoint union of cycles. Now let $G \sim C_{n_1} \cup C_{n_2} \cup \cdots \cup C_{n_r}$ for some $r \in \mathbb{N}$. Note each $n_j \geq 3$ as each component must have an equal number of vertices and edges. As the independence polynomial is multiplicative across components we have $i(G,x)= i(C_{n_1},x) \cdot i(C_{n_2},x) \cdots i(C_{n_r},x)$. It is easy to see from Theorem \ref{thm:PathPoly} and Proposition \ref{prop:deletion} that the coefficient of $x$ in $i(C_{n_j},x)$ is $n_j$, while the leading coefficient is $n_j$ if $n_j$ is odd and $2$ if $n_j$ is even. As the leading coefficient of $i(G,x)=i(C_n,x)$ is the odd number $n$, no $n_j$ is even, so the leading coefficient of each $i(C_{n_j},x)$ is $n_j$.
Thus the leading coefficient of $i(G,x)$ is $n_1 \cdot n_2 \cdots n_r$ and the coefficient of $x$ is $n_1 + n_2 + \cdots +n_r$. However, as $G \sim C_n$, the leading coefficient and the coefficient of $x$ of $i(G,x)$ are both $n$. Thus $n_1 n_2 \cdots n_r=n_1 + n_2 + \cdots +n_r$. However, a simple induction shows that $n_1 \cdot n_2 \cdots n_r>n_1 + n_2 + \cdots +n_r$ for $r \geq 2$ and $n_j \geq 3$. Hence $r=1$ and $G$ is connected. By Theorem \ref{thm:cycleconnectedequivclass}, we conclude that $[C_n] = \{C_n,D_n\}$. \end{proof} One notable exception to these results is $[C_{3^n}]$ when $n>1$. These cases are more difficult to deal with, as a graph in $[C_{3^n}]$ can have $C_3$ components, which removes any certainty about where the degree $3$ vertices are located among the components. We suspect that if $[C_n]$ grows large for certain $n$, then $n$ will be an odd multiple of $3$. For example, the only cycles that we know of with graphs other than $D_n$ and $C_n$ in their independence equivalence classes are $C_6$, $C_9$ and $C_{15}$. Oboudi showed in \cite{Oboudi2018} that $$[C_{9}]=\{C_{9},D_{9},G_1\cup C_3,G_2\cup C_3, G_3\cup C_3\}$$ where $G_1$, $G_2$, and $G_3$ are shown in Figure~\ref{fig:C9equivclass}.
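As a quick, informal sanity check on the facts used above (this sketch is ours, not part of the paper), the following Python code compares the closed-form coefficient $\binom{n-j}{j}\frac{n}{n-j}$ of $i(C_n,x)$ against the standard deletion recurrences $i(P_n)=i(P_{n-1})+x\,i(P_{n-2})$ and $i(C_n)=i(P_{n-1})+x\,i(P_{n-3})$, and verifies the $p$-divisibility of the non-constant coefficients of $i(C_{p^k},x)$ that drives the Eisenstein argument.

```python
from math import comb

def poly_add(p, q):
    """Coefficient-wise sum of two polynomials given as coefficient lists."""
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(max(len(p), len(q)))]

def path_poly(n):
    """i(P_n, x) via the deletion recurrence i(P_n) = i(P_{n-1}) + x*i(P_{n-2})."""
    a, b = [1], [1, 1]              # i(P_0) = 1, i(P_1) = 1 + x
    for _ in range(n - 1):
        a, b = b, poly_add(b, [0] + a)
    return a if n == 0 else b

def cycle_poly(n):
    """i(C_n, x) = i(P_{n-1}, x) + x*i(P_{n-3}, x) for n >= 3."""
    return poly_add(path_poly(n - 1), [0] + path_poly(n - 3))

def cycle_coeff(n, j):
    """Closed form for the coefficient of x^j in i(C_n, x)."""
    return comb(n - j, j) * n // (n - j)
```

For example, $i(C_{25},x)$ has constant term $1$ while every other coefficient is divisible by $5$, in line with the divisibility claims of the proof.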
\setcounter{subfigure}{0} \begin{figure}[!h] \def0.7{0.6} \def1{1} \centering \subfigure[$G_1$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (2) at (0.13397*1,0.5*1) {}; \node (3) at (1*1,0*1) {}; \node (4) at (3*1,0*1) {}; \node (7) at (0.13397*1,-0.5*1) {}; \node (8) at (2*1,0*1) {}; \node (9) at (-0.7*1,-0.5*1) {}; \end{scope} \begin{scope} \path [-] (2) edge node {} (3); \path [-] (3) edge node {} (7); \path [-] (4) edge node {} (8); \path [-] (2) edge node {} (7); \path [-] (3) edge node {} (8); \path [-] (9) edge node {} (7); \end{scope} \end{tikzpicture}}} \qquad \subfigure[$G_2$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (10) at (0.13397*1,0.5*1) {}; \node (11) at (1*1,0*1) {}; \node (12) at (3*1,0*1) {}; \node (15) at (0.13397*1,-0.5*1) {}; \node (16) at (2*1,0*1) {}; \node (17) at (-0.7*1,0*1) {}; \end{scope} \begin{scope} \path [-] (10) edge node {} (11); \path [-] (11) edge node {} (15); \path [-] (12) edge node {} (16); \path [-] (10) edge node {} (17); \path [-] (11) edge node {} (16); \path [-] (17) edge node {} (15); \end{scope} \end{tikzpicture}}} \qquad \subfigure[$G_3$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (18) at (9.13397*1,0.5*1) {}; \node (19) at (10*1,0.5*1) {}; \node (20) at (10.8*1,0*1) {}; \node (21) at (11.8*1,0*1) {}; \node (23) at (9.13397*1,-0.5*1) {}; \node (24) at (10*1,-0.5*1) {}; \end{scope} \begin{scope} \path [-] (18) edge node {} (19); \path [-] (20) edge node {} (21); \path [-] (19) edge node {} (20); \path [-] (20) edge node {} (24); \path [-] (23) edge node {} (24); \path [-] (18) edge node {} (23); \end{scope} \end{tikzpicture}}} \caption{Components of the disconnected graphs in $[C_9]$}% \label{fig:C9equivclass}% \end{figure} Computationally, we were able to show that $$[C_{15}]=\{C_{15},D_{15},G_1'\cup C_3\cup C_5,G_2'\cup C_3\cup C_5,G_3'\cup
C_3\cup C_5\}$$ where $G_1'$, $G_2'$, and $G_3'$ are shown in Figure~\ref{fig:C15equivclass}. \setcounter{subfigure}{0} \begin{figure}[!h] \def0.7{0.6} \def1{1} \centering \subfigure[$G_1'$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (2) at (0.13397*1,0.5*1) {}; \node (3) at (1*1,0*1) {}; \node (4) at (3*1,0*1) {}; \node (5) at (4*1,0*1) {}; \node (7) at (0.13397*1,-0.5*1) {}; \node (8) at (2*1,0*1) {}; \node (9) at (-0.7*1,-0.5*1) {}; \end{scope} \begin{scope} \path [-] (2) edge node {} (3); \path [-] (4) edge node {} (5); \path [-] (3) edge node {} (7); \path [-] (4) edge node {} (8); \path [-] (2) edge node {} (7); \path [-] (3) edge node {} (8); \path [-] (9) edge node {} (7); \end{scope} \end{tikzpicture}}} \qquad \subfigure[$G_2'$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (10) at (0.13397*1,0.5*1) {}; \node (11) at (1*1,0*1) {}; \node (12) at (3*1,0*1) {}; \node (13) at (4*1,0*1) {}; \node (15) at (0.13397*1,-0.5*1) {}; \node (16) at (2*1,0*1) {}; \node (17) at (-0.7*1,0*1) {}; \end{scope} \begin{scope} \path [-] (10) edge node {} (11); \path [-] (12) edge node {} (13); \path [-] (11) edge node {} (15); \path [-] (12) edge node {} (16); \path [-] (10) edge node {} (17); \path [-] (11) edge node {} (16); \path [-] (17) edge node {} (15); \end{scope} \end{tikzpicture}}} \qquad \subfigure[$G_3'$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (18) at (9.13397*1,0.5*1) {}; \node (19) at (10*1,0.5*1) {}; \node (20) at (10.8*1,0*1) {}; \node (21) at (11.8*1,0*1) {}; \node (23) at (9.13397*1,-0.5*1) {}; \node (24) at (10*1,-0.5*1) {}; \node (25) at (8.3*1,0*1) {}; \end{scope} \begin{scope} \path [-] (18) edge node {} (19); \path [-] (20) edge node {} (21); \path [-] (19) edge node {} (20); \path [-] (20) edge node {} (24); \path [-] (18) edge node {} (25); \path [-] (23) edge node {} (24); \path 
[-] (25) edge node {} (23); \end{scope} \end{tikzpicture}}} \caption{Components of the disconnected graphs in $[C_{15}]$}% \label{fig:C15equivclass}% \end{figure} Despite the similarities between $[C_9]$ and $[C_{15}]$, we were able to computationally verify that $[C_{21}]=\{C_{21},D_{21}\}$ and $[C_{27}]=\{C_{27},D_{27}\}$. \section{Concluding Remarks}\label{sec:conclusion} Building on previous work in the literature, we explored the independence equivalence classes of paths and cycles. These were completely determined for connected graphs in \cite{Li2016} and \cite{Oboudi2018} respectively, and our work extended this by considering disconnected graphs that belong to the independence equivalence classes. We showed that paths of odd length are independence unique, while paths of even length can have arbitrarily many graphs in their independence equivalence classes. For cycles, we showed that $[C_{n}]=\{C_{n},D_{n}\}$ when $n$ is even and not equal to $6$, or when $n$ is a power of a prime $p \geq 5$. We are left with some open problems. \begin{problem} What graphs can be in $[P_{2n}]$? \end{problem} We showed that $|[P_{2n}]|$ is unbounded, but this involved showing that $[P_{2n}]$ consisted of disjoint unions of cycles, graphs independence equivalent to cycles, and a path. However, using the program Nauty \cite{McKay2014}, we were able to computationally determine that $|[P_{10}]|=10$. In addition to the $7$ graphs that we expected from the methods in the proof of Proposition~\ref{prop:largeevenpaths}, we found the $3$ surprising graphs in Figure~\ref{fig:P10equivclass}. What other graphs can belong to $[P_{2n}]$?
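Brute-force checks of this kind are easy to reproduce for small graphs. The sketch below is ours, not part of the paper: the function names and explicit edge lists are our own, and we take $D_n$ to be a triangle with a pendant path, following the description recovered in the proof of Theorem~\ref{thm:powerprimecycles}. It computes $i(G,x)$ by enumerating all vertex subsets, which suffices to confirm independence equivalences such as $D_n \sim C_n$ on up to roughly $20$ vertices.

```python
def indep_poly(n, edges):
    """Coefficient list of i(G, x), found by testing every vertex subset.

    Exponential in n, so only suitable for small graphs (n <= ~20)."""
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    coeffs = [0] * (n + 1)
    for s in range(1 << n):
        # s encodes a vertex subset; it is independent iff no chosen
        # vertex has a chosen neighbour
        if all(adj[v] & s == 0 for v in range(n) if s >> v & 1):
            coeffs[bin(s).count("1")] += 1
    while coeffs[-1] == 0:          # trim zeros above the top degree
        coeffs.pop()
    return coeffs

def cycle_edges(n):
    return [(i, (i + 1) % n) for i in range(n)]

def d_graph_edges(n):
    # triangle 0-1-2 with the pendant path 2-3-...-(n-1); our reading of
    # the paper's D_n, which is defined in an earlier section
    return [(0, 1), (1, 2), (0, 2)] + [(i, i + 1) for i in range(2, n - 1)]
```

For each $4\le n\le 9$ this confirms $i(D_n,x)=i(C_n,x)$, i.e.\ $D_n\sim C_n$.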
\setcounter{subfigure}{0} \begin{figure}[!h] \def0.7{0.7} \def1{1} \centering \subfigure[$H_1$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (1) at (0*1,0*1) {}; \node (2) at (1*1,0*1) {}; \node (3) at (0.5*1,1*1) {}; \node (4) at (2*1,0*1) {}; \node (5) at (2*1,1*1) {}; \node (6) at (3*1,1*1) {}; \node (7) at (4*1,1*1) {}; \node (8) at (3*1,0*1) {}; \node (9) at (4*1,0*1) {}; \node (10) at (5*1,0*1) {}; \end{scope} \begin{scope} \path [-] (2) edge node {} (3); \path [-] (1) edge node {} (2); \path [-] (1) edge node {} (3); \path [-] (4) edge node {} (5); \path [-] (5) edge node {} (6); \path [-] (6) edge node {} (7); \path [-] (6) edge node {} (8); \path [-] (8) edge node {} (9); \path [-] (9) edge node {} (10); \end{scope} \end{tikzpicture}}} \qquad \subfigure[$H_2$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (1) at (0*1,0*1) {}; \node (2) at (1*1,0*1) {}; \node (3) at (0.5*1,1*1) {}; \node (4) at (2*1,0*1) {}; \node (5) at (2*1,1*1) {}; \node (6) at (3*1,1*1) {}; \node (7) at (4*1,1*1) {}; \node (8) at (3*1,0*1) {}; \node (9) at (4*1,0*1) {}; \node (10) at (5*1,0*1) {}; \end{scope} \begin{scope} \path [-] (2) edge node {} (3); \path [-] (1) edge node {} (2); \path [-] (1) edge node {} (3); \path [-] (4) edge node {} (5); \path [-] (7) edge node {} (9); \path [-] (6) edge node {} (7); \path [-] (6) edge node {} (8); \path [-] (8) edge node {} (9); \path [-] (9) edge node {} (10); \end{scope} \end{tikzpicture}}} \qquad \subfigure[$H_3$]{ \scalebox{0.7}{ \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,fill,draw}] \node (1) at (0*1,0*1) {}; \node (2) at (1*1,0*1) {}; \node (3) at (0.5*1,1*1) {}; \node (4) at (2*1,0*1) {}; \node (5) at (2*1,1*1) {}; \node (6) at (3*1,1*1) {}; \node (7) at (4*1,1*1) {}; \node (8) at (3*1,0*1) {}; \node (9) at (4*1,0*1) {}; \node (10) at (5*1,0.5*1) {}; \end{scope} \begin{scope} \path [-] (2) edge node {} 
(3); \path [-] (1) edge node {} (2); \path [-] (1) edge node {} (3); \path [-] (4) edge node {} (5); \path [-] (7) edge node {} (9); \path [-] (6) edge node {} (7); \path [-] (7) edge node {} (10); \path [-] (8) edge node {} (9); \path [-] (9) edge node {} (10); \end{scope} \end{tikzpicture}}} \caption{Surprising graphs in $[P_{10}]$.} \label{fig:P10equivclass}% \end{figure} \begin{problem} What graphs can be in $[C_{3n}]$? \end{problem} Multiples of $3$ make things more difficult when trying to characterize the equivalence classes of cycles, since graphs in these classes can have triangle components. In fact, the only cycles we know of where $[C_n]\neq \{C_n,D_n\}$ are cycles with $n=3k$ for $k$ odd. Not every multiple of three has this property, however, as $C_{21}$ is only equivalent to itself and $D_{21}$. Does $[C_{3n}]$ eventually stabilize to the two graphs we expect, or can it grow like the independence equivalence classes of even paths? \begin{problem} Are there families of graphs such that the independence equivalence class is unbounded \textbf{and} each independence polynomial is irreducible? \end{problem} We saw that $i(C_p,x)$ is irreducible and $|[C_{p}]|=2$ for all primes $p\ge 5$. An irreducible independence polynomial implies that all graphs in the independence equivalence class are connected. The restriction to connected graphs that irreducibility imposes would seem to make large independence equivalence classes less likely, but the question remains open. We also think that studying the irreducibility of independence polynomials can be useful when studying independence equivalence classes of other graphs. Finally, we leave the reader with a conjecture that all of our results and computational work have led us to believe is true. \begin{conj} If $3\nmid n$ and $n\ge 4$, then $[C_{n}]=\{C_n,D_n\}$. \end{conj} \bibliographystyle{plain}
https://arxiv.org/abs/1810.05317
Independence Equivalence Classes of Paths and Cycles
The independence polynomial of a graph is the generating polynomial for the number of independent sets of each size. Two graphs are said to be \textit{independence equivalent} if they have equivalent independence polynomials. We extend previous work by showing that the independence equivalence class of every odd path has size 1, while the class can contain arbitrarily many graphs for even paths. We also prove that the independence equivalence class of every even cycle $C_{2n}$ consists of two graphs when $n\ge 2$, except the independence equivalence class of $C_6$, which consists of three graphs. The odd case remains open, although, using irreducibility results from algebra, we were able to show that for a prime $p \geq 5$ and $n\ge 1$ the independence equivalence class of $C_{p^n}$ consists of only two graphs.
https://arxiv.org/abs/1507.01613
A note on the clique number of complete $k$-partite graphs
In this note, we show that a complete $k$-partite graph is the only graph with clique number $k$ among all degree-equivalent simple graphs. This result gives a lower bound on the clique number, which is sharper than existing bounds on a large family of graphs.
\section{Preliminaries} We first recall some graph-theoretic notions used in the sequel; see \cite{bondy} for further details. All graphs considered in this note are simple graphs. Let $G=(V,E)$ be a graph. The numbers of vertices and edges in $G$ are denoted by $n$ and $m$, respectively. The \emph{neighborhood} in $G$ of a vertex $v$, denoted $N(v;G)$, is the set of vertices in $G$ adjacent to $v$; the \emph{degree} of $v$ in $G$, denoted $d(v;G)$, is equal to $|N(v;G)|$. The \emph{degree sequence} of $G$, denoted $D(G)$, is the multiset of degrees of the vertices of $G$, i.e., $D(G)=\{d(v_1;G), \ldots , d(v_n;G)\}$. Two graphs $G$ and $H$ are \emph{degree equivalent}, denoted $G\simeq H$, if they have the same degree sequence. We exclude graphs with loops and multiple edges from being degree-equivalent to a given graph. Given $S\subset V$, the \emph{induced subgraph} $G[S]$ is the subgraph of $G$ whose vertex set is $S$ and whose edge set consists of all edges of $G$ which have both ends in $S$. The \emph{complement} of $G$, denoted $\overline{G}$, is the graph on the same vertex set in which two vertices are adjacent if and only if they are not adjacent in $G$. The \emph{clique number} of $G$, denoted $\omega(G)$, is the cardinality of the largest clique in $G$. An \emph{independent set} in $G$ is a set of vertices no two of which are adjacent; the \emph{independence number} of $G$, denoted $\alpha(G)$, is the cardinality of the largest independent set in $G$. A \emph{complete $k$-partite graph} $K_{a_1,\ldots, a_k}$ is a graph whose vertices can be partitioned into $k$ independent sets (called \emph{parts}) with sizes $a_1,\ldots,a_k$ so that any two vertices in different parts are adjacent. \begin{remark} An independent set in $G$ is a clique in $\overline G$, and the complement of a complete $k$-partite graph $K_{a_1,\ldots, a_k}$ is a disjoint union of complete graphs $K_{a_1}\cup\ldots\cup K_{a_k}$.
Moreover, if $G\simeq K_{a_1,\ldots, a_k}$, then $\overline{G}\simeq K_{a_1}\cup\ldots\cup K_{a_k}$. Thus, results about cliques in $k$-partite graphs can typically be restated as results about independent sets in disjoint unions of complete graphs; this duality will be employed in the next section. \end{remark} \section{Main results} Complete $k$-partite graphs and their complements play a fundamental role in extremal graph theory. A notable $k$-partite graph is the \emph{Tur\'an graph} $T(n,k)$, whose parts have sizes $\lfloor n/k\rfloor$ and $\lceil n/k\rceil$; the number of edges of $T(n,k)$ is denoted $t(n,k)$. The following well-known theorem gives an upper bound on the number of edges of a $K_{k+1}$-free graph. \newline \noindent \textbf{Tur\'an's Theorem \cite{aigner,turan2}.} \emph{The graph $T(n,k)=K_{\lfloor\frac{n}{k}\rfloor,\ldots,\lceil\frac{n}{k}\rceil}$ is the unique $K_{k+1}$-free graph with the maximal number $t(n,k)$ of edges.} \newline \noindent From Tur\'an's Theorem, it follows that among all graphs on $ka$ vertices with $t(ka,k)$ edges, where $a$ is some positive integer, the only $K_{k+1}$-free graph is $T(ka,k)=K_{a,\ldots,a}$. This yields the following corollary. \begin{corollary} Let $G\simeq K_{a,\ldots,a}$. Then, $\omega(G)=k$ if and only if $G=K_{a,\ldots,a}$. \end{corollary} \noindent Our main result is the following generalization of Corollary 1. \begin{theorem} Let $G\simeq K_{a_1,\ldots, a_k}$. Then $\omega(G)=k$ if and only if $G=K_{a_1,\ldots, a_k}$. \end{theorem} \noindent The proof of Theorem 1 is laid out in the next section. We will now state some related results; first, by Remark 1, Theorem 1 can be restated in terms of the independence number of disjoint cliques, as follows. \begin{corollary} Let $G\simeq K_{a_1}\cup\ldots\cup K_{a_k}$. Then $\alpha(G)=k$ if and only if $G=K_{a_1}\cup\ldots\cup K_{a_k}$.
\end{corollary} \noindent The conditions of Theorem 1 and Corollary 2 are computationally easy to check, as shown in the following proposition. \begin{proposition} Let $G=(V,E)$ be a graph with $|V|=n$ and $|E|=m$. The following conditions can be checked in $O(m + n\log n)$ time. \begin{enumerate} \item$G=K_{a_1,\ldots, a_k}$ \item$G=K_{a_1}\cup\ldots\cup K_{a_k}$ \item$G\simeq K_{a_1,\ldots, a_k}$ \item$G\simeq K_{a_1}\cup\ldots\cup K_{a_k}$ \end{enumerate} \end{proposition} \begin{proof} Conditions 1 and 2 are easily verified, as it is well-known that complete $k$-partite graphs and their complements can be recognized in $O(m)$ time. The degree sequence of $G$ can be obtained in $O(m)$ time and the multiplicity of each degree in the sequence can be found in $O(n\log n)$ time. Then, Condition~3 is satisfied if and only if the multiplicity of each degree $d$ in $D(G)$ is an integer multiple of $n-d$, and Condition 4 is satisfied if and only if the multiplicity of each degree $d$ in $D(G)$ is an integer multiple of $d+1$. Each of these can be checked in linear time, so the overall time complexity of verifying Conditions 3 and 4 is $O(m+n\log n)$. \qed \end{proof} \noindent On the other hand, the conditions of Theorem 1 and Corollary 2 are not very restrictive, in the sense that the graphs satisfying them form large families and may be quite structurally complex. For example, it is easy to see that these families of graphs have the following properties: \begin{enumerate} \item Arbitrary (asymptotic) density or sparsity \item No forbidden subgraph characterization \item No special structure like being co-graphs or perfect graphs; see Fig. 1. \end{enumerate} \begin{figure} \begin{center} \includegraphics[scale=0.45]{fig1a} \end{center} \caption{A graph degree-equivalent to $K_3 \cup K_3 \cup K_4$ with independence number 4.
This graph contains as induced subgraphs the path $P_4$ and the cycle $C_5$, which are forbidden induced subgraphs for co-graphs and perfect graphs.} \end{figure} \noindent In fact, as shown below, finding the independence and clique numbers of graphs in these families is NP-complete; thus, it is useful to have the characterizations of $(k+1)$-clique-free and $(k+1)$-independent-set-free graphs given by Theorem~1 and Corollary~2. \begin{proposition} If $G\simeq K_{a_1}\cup\ldots\cup K_{a_k}$ or $G\simeq K_{a_1,\ldots,a_k}$, then finding $\alpha(G)$ and $\omega(G)$ is NP-complete. \end{proposition} \begin{proof} Let $\mathcal{P}$ be the problem of finding the independence number of a cubic graph; it is well-known that $\mathcal{P}$ is NP-complete \cite{GJ,GJ2}. Let $\mathcal{R}$ be the problem of finding the independence number of a graph which is degree equivalent to a disjoint union of cliques. We will demonstrate a polynomial reduction of $\mathcal{P}$ to $\mathcal{R}$. Let $G=(V,E)$ be an arbitrary cubic graph with $|V|=n$. Let $G'=(V',E')$ be the disjoint union of four copies of $G$; thus $G' \simeq \bigcup_{i=1}^{n} K_4$. Obviously, the time and space needed to construct $G'$ is polynomial in $n$. Moreover, $\alpha(G)=\alpha(G')/4$, since pairwise non-adjacent vertices may be chosen independently in each copy of $G$ in $G'$. Thus, $\mathcal{R}$ is NP-complete, as well. Furthermore, the time and space needed to construct the complement of an $n$-vertex graph is polynomial in $n$, and the clique number of a graph is equal to the independence number of its complement. Thus, the problem of finding the clique number of a graph which is degree equivalent to a complete multipartite graph is NP-complete, as well. \qed \end{proof} \noindent Caro and Wei \cite{caro_wei} have shown that $\alpha(G) \geq \sum_{i=1}^n \frac{1}{d_i+1}$, where $D(G)=\{d_1,\ldots,d_n\}$.
If $G\simeq K_{a_1}\cup\ldots\cup K_{a_k}$, then the degree $a_i-1$ appears $a_i$ times in $D(G)$, $1\leq i \leq k$; thus, the Caro-Wei bound yields $\alpha(G)\geq k$. Using this fact, Corollary 2 (and thus Theorem 1) is equivalent to the following statement. \begin{corollary} Let $G\simeq K_{a_1}\cup\ldots\cup K_{a_k}$. If $G\neq K_{a_1}\cup\ldots\cup K_{a_k}$, then $\alpha(G)\geq k+1$. \end{corollary} \noindent By Remark 1, Corollary 3 can also be restated as a bound on the clique number, as follows. \begin{corollary} Let $G\simeq K_{a_1,\ldots, a_k}$. If $G\neq K_{a_1,\ldots, a_k}$, then $\omega(G)\geq k+1$. \end{corollary} \noindent The bounds of Corollaries 3 and 4 are sharp, as shown by the graph in Fig. 1 and its complement. In contrast, it is easy to check that existing bounds like the ones below are not sharp for the families of graphs in Corollaries 3 and 4. \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{25pt} \begin{tabular}{ l l } $\alpha(G) \geq \sum_{i=1}^n \frac{1}{d_i+1}$ & Caro and Wei \cite{caro_wei} \\ $\alpha(G) \geq \frac{n^2}{n+2m}$ & Tur\'an \cite{ajtai_erdos,griggs,turan} \\ $\alpha(G) \geq \left\lceil \frac{2n-2m/\lfloor 2m/n \rfloor}{\lfloor 2m/n \rfloor+1}\right\rceil$ & Hansen and Zheng \cite{hansen} \\ $\omega(G)\geq\frac{n^2}{n^2-2m}$& Myers and Liu \cite{myers_liu}\\ $\omega(G)\geq n/(n-(\frac{1}{n}\sum_{i=1}^n d_i^2)^{1/2})$ & Edwards and Elphick \cite{edwards_elphick}\\ \end{tabular} \newline \noindent Thus, we have shown that a complete $k$-partite graph is the only graph which does not contain a $(k+1)$-clique among all degree-equivalent graphs. Equivalently, a disjoint union of $k$ cliques is the only graph which does not have a $(k+1)$-independent set among all degree-equivalent graphs. These results can be formulated as bounds on the independence and clique numbers, which are sharper than existing bounds on large families of graphs.
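The degree-sequence tests of Proposition 1 and the Caro-Wei computation above are straightforward to code. The following Python sketch is ours (the helper names are not from the paper): it restates Conditions 3 and 4 as divisibility checks on degree multiplicities and evaluates the Caro-Wei bound, which equals $k$ exactly on degree sequences of $K_{a_1}\cup\ldots\cup K_{a_k}$.

```python
from collections import Counter
from fractions import Fraction

def caro_wei(degrees):
    """Caro-Wei lower bound on the independence number: sum of 1/(d_i + 1)."""
    return sum(Fraction(1, d + 1) for d in degrees)

def deg_equiv_clique_union(degrees):
    """Condition 4: each degree d occurs an integer multiple of d + 1 times."""
    return all(mult % (d + 1) == 0 for d, mult in Counter(degrees).items())

def deg_equiv_multipartite(degrees):
    """Condition 3: each degree d occurs an integer multiple of n - d times."""
    n = len(degrees)
    return all(mult % (n - d) == 0 for d, mult in Counter(degrees).items())
```

For instance, any graph degree-equivalent to $K_3\cup K_3\cup K_4$ (as in Fig.~1) has degree sequence $2^6 3^4$, which passes Condition 4, and its Caro-Wei bound is exactly $3$.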
\section{Proof of Theorem 1} For technical simplicity, we will prove Corollary 3, which is equivalent to Theorem 1. \begin{proof} Let $G\simeq K_{a_1}\cup \ldots \cup K_{a_k}$ and $G\neq K_{a_1}\cup \ldots \cup K_{a_k}$. We want to show that $\alpha(G)\geq k+1$. If $G$ has a connected component $Q$ which is a clique, $G-Q$ also satisfies the conditions of Corollary 3, and $\alpha(G-Q)\geq k$ if and only if $\alpha(G)\geq k+1$. Thus, without loss of generality, suppose that $G$ has no clique components. If $a_1=\ldots = a_k$, by Corollary 1, $\alpha(G)\geq k+1$ and we are done. Thus, suppose $a_1=\ldots=a_c<a_{c+1}\leq a_{c+2}\leq \ldots \leq a_k$, where $c\geq 1$. Let $S_1,\ldots,S_k$ be a partition of the vertices of $G$, where $S_i$ has $a_i$ vertices of degree $a_i-1$. For $0\leq i \leq k-c$, let $G^{c+i}=G[S_1\cup\ldots\cup S_{c+i}]$. We will first show that $\alpha(G^c)\geq c+1$ and then by induction that $\alpha(G^{c+i})\geq c+1+i$. Note that $G^c$ cannot have a clique component of size $a_1$, because such a component would also be a clique component in $G$, and we assumed $G$ has no clique components (there could possibly be smaller clique components in $G^c$). Also note that for any $S\subset V$ and $v\in S$, $d(v;G[S])\leq d(v;G)$; thus, $\forall v\in V(G^c)$, $d(v;G^c)\leq a_1-1$. Now, suppose for contradiction that $\alpha(G^c)\leq c$ and let $\mathcal{J}=\{x_1, \ldots, x_j\}$ be a maximum independent set in $G^c$, $j\leq c$. Thus, we have \begin{equation} \label{eq2} \left|\bigcup_{i=1}^j N(x_i;G^c)\right|\leq \sum_{i=1}^j |N(x_i;G^c)| \leq j(a_1-1)\leq ca_1-j=|V(G^c)-\mathcal{J}|, \end{equation} \noindent where the first inequality is a basic fact in set theory, the second inequality follows because $d(x_i;G^c)\leq a_1-1$, and the third inequality follows because $j\leq c$. 
If $|\bigcup_{i=1}^j N(x_i;G^c)|<|V(G^c)-\mathcal{J}|$, then there must be a vertex $y$ which is not adjacent to any of $x_1, \ldots, x_j$, so $\{y, x_1, \ldots, x_j\}$ is an independent set, contradicting that $\{x_1, \ldots, x_j\}$ is a maximum independent set. If $|\bigcup_{i=1}^j N(x_i;G^c)|=|V(G^c)-\mathcal{J}|$, then all inequalities in (\ref{eq2}) must be equalities, so $j=c$ and $|\bigcup_{i=1}^j N(x_i;G^c)|= \sum_{i=1}^j |N(x_i;G^c)|$, which implies that $N(x_1;G^c), \ldots, N(x_j;G^c)$ are pairwise disjoint. Now, if $G[N(x_{\ell};G^c)]$ is not a clique for some $\ell \in \{1,\ldots, j\}$, then there are two vertices $y$ and $z$ in $N(x_{\ell};G^c)$ which are not adjacent. Then, $\{x_1, \ldots, x_{\ell-1}, y,z, x_{\ell+1}, \ldots, x_j\}$ is an independent set of size $j+1$, contradicting that $\{x_1, \ldots, x_j\}$ is a maximum independent set. Thus, $G[N(x_i;G^c)]$ must be a clique for each $1\leq i\leq j$ and hence also $G[N(x_i;G^c)\cup \{x_i\}]$ must be a clique for each $1\leq i\leq j$. But this means there are $j=c\geq 1$ clique components of size $a_1$ in $G^c$ --- a contradiction. Thus, $\alpha(G^c)\geq c+1$, so there is an independent set $\mathcal{I}=\{x_1,\ldots,x_{c+1}\}$ in $G^c$. Recall that $a_1=\ldots=a_c$, so we can say that $d(x_1;G)\leq a_1-1$ and $d(x_{i+1};G)\leq a_i-1$ for $1\leq i \leq c$ (in fact, each of these holds with equality). Now for the inductive step, suppose that $\mathcal{I}=\{x_1,\ldots,x_{c+j+1}\}$ is an independent set in $G^{c+j}$ for some $j\in\{0,\ldots,k-c-1\}$, and that $d(x_1;G)\leq a_1-1$ and $d(x_{i+1};G)\leq a_i-1$ for $1\leq i\leq c+j$.
The vertices in $\mathcal{I}$ cannot collectively be adjacent to every vertex of $V(G^{c+j+1})-\mathcal{I}$, since $$\left|\bigcup_{i=1}^{c+j+1} N(x_i;G^{c+j+1})\right|\leq \sum_{i=1}^{c+j+1} |N(x_i;G^{c+j+1})|= \sum_{i=1}^{c+j+1} d(x_i;G^{c+j+1})\leq$$ $$\sum_{i=1}^{c+j+1}d(x_i;G)\leq (a_1-1)+\sum_{i=1}^{c+j}(a_i-1)<\sum_{i=1}^{c+j+1} (a_i-1)=\left|V(G^{c+j+1})-\mathcal{I}\right|.$$ \noindent The strict inequality follows from the assumption that $a_c<a_{c+1}\leq \ldots \leq a_k$. Thus, there must be a vertex $x_{c+j+2}$ in $V(G^{c+j+1})-\mathcal{I}$ which is not adjacent to any vertex in $\mathcal{I}$. This vertex can be added to $\mathcal{I}$, so $\alpha(G^{c+j+1})\geq c+j+2$. Moreover, since $x_{c+j+2}$ is in one of $S_1,\ldots,S_{c+j+1}$, $d(x_{c+j+2};G)\leq a_{c+j+1}-1$ as required for the inductive step. In particular, for $j=k-c-1$, this means that there is an independent set $\{x_1,\ldots,x_{k+1}\}$ in $G^k=G$, and so $\alpha(G)\geq k+1$. \qed \end{proof} \section*{Acknowledgements} This research was supported by the National Science Foundation under Grant No. 1450681.
https://arxiv.org/abs/1106.4753
New approaches to plactic monoid via Gröbner-Shirshov bases
We present the plactic algebra on an arbitrary alphabet set $A$ by row generators and column generators respectively. We give Gröbner-Shirshov bases for such presentations. In the case of column generators, a finite Gröbner-Shirshov basis is given if $A$ is finite. From the Composition-Diamond lemma for associative algebras, it follows that the set of Young tableaux is a linear basis of the plactic algebra. As a result, this gives a new proof that Young tableaux are normal forms of elements of the plactic monoid. This result was proved by D.E. Knuth \cite{Knuth} in 1970, see also Chapter 5 in \cite{M.L}.
\section{Introduction}\label{Intro} Let $A=\{1,2,\dots,n\}$ with $1<2<\dots<n$. Then we call $$ Pl(A):=sgp\langle A|\Omega\rangle=A^*/\equiv $$ the plactic monoid on the alphabet set $A$, see \cite{M.L}, where $A^*$ is the free monoid generated by $A$, $\equiv$ is the congruence of $A^*$ generated by the Knuth relations $\Omega$ and $\Omega$ consists of \begin{eqnarray*} &&\sigma_i\sigma_k\sigma_j= \sigma_k\sigma_i\sigma_j \ (i\leq j<k),\\ &&\sigma_j\sigma_i\sigma_k= \sigma_j\sigma_k\sigma_i \ (i< j\leq k). \end{eqnarray*} Let $F$ be a field. Then $F\langle A|\Omega\rangle$ is called the plactic monoid algebra over $F$ of $Pl(A)$. The basic theory of the plactic monoid was systematically developed by Lascoux and Sch\"{u}tzenberger \cite{M.L.M.-P} in 1981. D.E. Knuth \cite{Knuth} showed in 1970, see also Chapter 5 in \cite{M.L}, that Young tableaux are normal forms of elements of the plactic monoid. In a recent paper \cite{Okninskii}, a finite Gr\"{o}bner-Shirshov basis for the plactic algebra with respect to the deg-lex ordering on $A^*$ is given when $n=3$. The authors prove that for $n>3$, a corresponding Gr\"{o}bner-Shirshov basis must be infinite. In fact, it is an open problem to give a corresponding Gr\"{o}bner-Shirshov basis by using such a presentation and the deg-lex ordering on $A^*$. In this paper, we present the plactic algebra on an arbitrary alphabet set $A$ by row generators. We give a Gr\"{o}bner-Shirshov basis for this presentation. From the Composition-Diamond lemma for associative algebras, it follows that the set of Young tableaux is a linear basis of the plactic algebra. As a result, this gives a new proof that Young tableaux are normal forms of elements of the plactic monoid. \section{Preliminaries}\label{pre1} We first cite some concepts and results from the literature \cite{Bo72, Bo76, Sh62,Shir3} which are related to Gr\"{o}bner-Shirshov bases for associative algebras. Let $X$ be a set and $F$ a field.
Throughout this paper, we denote by $F\langle X\rangle$ the free associative algebra over $F$ generated by $X$, by $X^*$ the free monoid generated by $X$ and by $N$ the set of non-negative integers. A well ordering $<$ on $X^*$ is called monomial if for $u, v\in X^*$, we have $$ u < v \Rightarrow w|_u < w|_v\ \ for \ all \ w\in X^*, $$ where $w|_u=w|_{x_i\mapsto u}$ and $w|_v=w|_{x_i\mapsto v}$ are the words obtained from $w$ by replacing one fixed occurrence of a letter $x_{i}\in X$ in $w$ by $u$ and by $v$, respectively. A standard example of a monomial ordering on $X^*$ is the deg-lex ordering, which first compares two words by degree and then lexicographically, where $X$ is a well-ordered set. Let $<$ be a monomial ordering on $X^*$. Then, for any non-zero polynomial $f\in F\langle X\rangle$, $f$ has the leading word $\overline{f}$. We call $f$ monic if the coefficient of $\overline{f}$ is 1. By $|\overline{f}|$ we denote the degree (length) of $\overline{f}$. Let $f,\ g\in F\langle X\rangle$ be two monic polynomials and $w\in X^*$. If $w=\overline{f}b=a\overline{g}$ for some $a,b\in X^*$ such that $|\overline{f}|+|\overline{g}|>|w|$, then $(f,g)_w=fb-ag$ is called the intersection composition of $f,g$ relative to $w$. If $w=\overline{f}=a\overline{g}b$ for some $a, b\in X^*$, then $(f,g)_w=f-agb$ is called the inclusion composition of $f,g$ relative to $w$. Let $S\subset F\langle X\rangle$ be a monic set. A composition $(f,g)_w$ is called trivial modulo $(S,w)$, denoted by $$ (f,g)_w\equiv0 \ \ \ mod(S,w) $$ if $(f,g)_w=\sum\alpha_ia_is_ib_i,$ where every $\alpha_i\in F, \ s_i\in S,\ a_i,b_i\in X^*$, and $a_i\overline{s_i}b_i<w$. Recall that $S$ is a Gr\"{o}bner-Shirshov basis in $F\langle X\rangle$ if any composition of polynomials from $S$ is trivial modulo $S$ and the corresponding $w$. The following lemma was first proved by Shirshov \cite{Sh62,Shir3} for free Lie algebras (with deg-lex ordering) (see also Bokut \cite{Bo72}).
Bokut \cite{Bo76} specialized the approach of Shirshov to associative algebras (see also Bergman \cite{Be}). For commutative polynomials, this lemma is known as Buchberger's Theorem (see \cite{Bu65, Bu70}). \begin{lemma}\label{l1} (Composition-Diamond lemma for associative algebras) \ Let $F$ be a field, $<$ a monomial ordering on $X^*$ and $Id(S)$ the ideal of $F \langle X\rangle$ generated by $S$. Then the following statements are equivalent: \begin{enumerate} \item[1)] $S$ is a Gr\"{o}bner-Shirshov basis in $F\langle X\rangle$. \item[2)] $f\in Id(S)\Rightarrow \bar{f}=a\bar{s}b$ for some $s\in S$ and $a,b\in X^*$. \item[3)] $Irr(S) = \{ u \in X^* | u \neq a\bar{s}b,s\in S,a ,b \in X^*\}$ is an $F$-basis of the algebra $A=F\langle X | S \rangle=F\langle X\rangle/Id(S)$. \end{enumerate} \end{lemma} If a subset $S$ of $F\langle X \rangle$ is not a Gr\"{o}bner-Shirshov basis, then one can add all nontrivial compositions of polynomials of $S$ to $S$. Continuing this process repeatedly, we finally obtain a Gr\"{o}bner-Shirshov basis $S^{comp}$ that contains $S$. Such a process is called the Shirshov algorithm. Let $A=sgp\langle X|S\rangle$ be a semigroup presentation. Then $S$ is also a subset of $F\langle X \rangle$ and we can find a Gr\"{o}bner-Shirshov basis $S^{comp}$. We also call $S^{comp}$ a Gr\"{o}bner-Shirshov basis of $A$. $Irr(S^{comp})=\{u\in X^*|u\neq a\overline{f}b,\ a ,b \in X^*,\ f\in S^{comp}\}$ is an $F$-basis of $F\langle X|S\rangle$ which also gives a normal form for the semigroup $A$. \section{Main theorem} Let $A=\{\sigma_i|i\in I\}$ be a well-ordered alphabet set.
Then we call $$ Pl(A):=sgp\langle A|\Omega\rangle=A^*/\equiv $$ the plactic monoid on the alphabet $A$, see \cite{M.L}, where $\equiv$ is the congruence on $A^*$ generated by the Knuth relations $\Omega$, and $\Omega$ consists of \begin{eqnarray*} &&\sigma_i\sigma_k\sigma_j= \sigma_k\sigma_i\sigma_j \ (\sigma_i\leq \sigma_j<\sigma_k),\\ &&\sigma_j\sigma_i\sigma_k= \sigma_j\sigma_k\sigma_i \ (\sigma_i< \sigma_j\leq \sigma_k). \end{eqnarray*} \ \ Now, suppose that $A=\{1,2,\dots,n\}$ with $1<2<\dots<n$. A word $R\in A^*$ is called a row if it is nondecreasing; for example, $R=111225$ is a row. For convenience, we write $R=(r_1,r_2,\dots, r_n)$, where $r_i$ is the number of occurrences of the letter $i\ (i=1,2,\dots,n)$; for example, $R=111225=(3,2,0,0,1,0,\dots,0)$. Let $U=\{R\in A^*\ |\ R \mbox{ is a row} \}$. We order the set $U^*$ as follows. Let $R=(r_1,r_2,\dots, r_n)\in U$. Then $|R|=r_1+\dots+r_n$ is the length of $R$ in $A^*$. We first order $U$: for any $R, S\in U$, $ R<S $ if and only if either $|R|<|S|$, or $|R|=|S|$ and there exists a $t \ (0\leq t< n)$ such that $r_i=s_i$ for $i=1,\dots, t$ and $r_{t+1}>s_{t+1}.$ Clearly, this is a well ordering on $U$. We then order $U^*$ by the deg-lex ordering induced by this ordering on $U$; this ordering on $U^*$ is used throughout the paper. Let $R,S\in U$. Then $R$ dominates $ S$ if $|R|\leq|S|$ and each letter of $R$ is strictly larger than the letter of $S$ in the same position. A (semistandard) Young tableau on $A$ (see \cite{M.L}) is a word $w=R_1R_2 \cdots R_t\in U^*$ such that $R_{i}$ dominates $R_{i+1}$ for $i=1,\dots,t-1$. For example, $4556\cdot223357\cdot1112444$ is a Young tableau. The following algorithm was introduced by Schensted \cite{Schensted} in 1961. \begin{definition}\label{def3} (Schensted's algorithm \cite{Schensted})\\ Let $R\in U$ and $x\in A$. $$ R\cdot x= \left\{ \begin{array}{cc} Rx,& \text{if } Rx \text{ is a row}, \\ y\cdot R',& \text{otherwise,} \end{array} \right. 
$$ where $y$ is the leftmost letter of $R$ that is strictly larger than $x$, and $R'=R\mid _{y\mapsto x}$, i.e., $R'$ is obtained from $R$ by replacing this occurrence of $y$ by $x$. \end{definition} Then, for any $R,S\in U$, by induction, it is clear that there exist unique $R',S'\in U$ such that $R\cdot S=R'\cdot S'$ and $R'\cdot S'$ is a Young tableau, where $R'$ is empty if $R\cdot S=S'$ is a row. It is clear that $$ sgp\langle U\mid \Lambda\rangle=sgp\langle A\mid \Omega\rangle $$ and so $F\langle U\mid \Lambda\rangle=F\langle A\mid \Omega\rangle$, where $\Lambda=\{R\cdot S=R'\cdot S',\ R,S\in U\}$. The following is the main theorem of the paper. \begin{theorem}\label{mainTH} Let the notation be as before. Then with the deg-lex ordering on $U^*$, $\Lambda$ is a Gr\"{o}bner-Shirshov basis for the plactic algebra $F\langle U\mid \Lambda\rangle$. \end{theorem} By using the Composition-Diamond lemma for associative algebras (Lemma \ref{l1}) and Theorem \ref{mainTH}, we obtain the following corollary. \begin{corollary}\label{co1}(\cite{M.L}, Chapter 5) The set of Young tableaux on $A$ is a set of normal forms for the plactic monoid $sgp\langle A\mid \Omega\rangle$. \end{corollary} We will prove Theorem \ref{mainTH} under the assumption that $A=\{1,2,\dots,n\}$ with $1<2<\dots<n$. \ \ \noindent{\bf Remark}: For an arbitrary well-ordered set $A$, we define a row in $A^*$, Schensted's algorithm, Young tableaux on $A$, and the set $\Lambda$ in the same way. Then, for an arbitrary well-ordered set $A$, by an argument similar to the one given below for finite $A$, Theorem \ref{mainTH} and Corollary \ref{co1} still hold. \section{A proof of Theorem \ref{mainTH}} \subsection{Technical formulas} Let $(\phi_1,\dots,\phi_n)\in U$. Denote $\Phi_p=\sum_{i=1}^{p}\phi_i\ (1 \leq p \leq n)$, where $\phi$ stands for any lowercase symbol below and $\Phi$ for the corresponding uppercase symbol (so, e.g., $W_p=\sum_{i=1}^{p}w_i$). \begin{definition}\label{def2} Let $W=(w_1,w_2,\dots,w_n),Z=(z_1,z_2,\dots,z_n)\in U$. 
Define an algorithm \begin{eqnarray*} W\cdot Z&=&(w_1,w_2,\dots,w_n)(z_1,z_2,\dots,z_n)\\ &=&(x_1,x_2,\dots,x_n)(y_1,y_2,\dots,y_n)\\ &=&X\cdot Y, \end{eqnarray*} where $X=(x_1,x_2,\dots,x_n),Y=(y_1,y_2,\dots,y_n)$, $x_1=0$ and \begin{eqnarray*} x_p&=&min(Z_{p-1}-X_{p-1},w_p),\\ X_p&=&min(Z_{p-1},X_{p-1}+w_p),\\ y_q&=&w_q+z_q-x_q, \\ Y_q&=&W_q+Z_q-X_q, \end{eqnarray*} where $n\geq p\ge 2,\ n\geq q\ge 1$. Clearly, either $X,Y\in U$, or $Y\in U$ and $X= (0,0,\dots,0)$. \end{definition} The formulas in Definition \ref{def2} are very useful in the proof of Theorem \ref{mainTH}. \begin{lemma} The algorithms in Definition \ref{def2} and Definition \ref{def3} are equivalent. \end{lemma} \begin{proof}\ Definition \ref{def2} $\Rightarrow$ Definition \ref{def3}. Suppose that $W=(w_1,w_2,\dots,w_n)\in U$ and $Z$ is a letter. Then we can express $Z=(0,\dots,0,z_p,0,\dots,0)$, where $z_p=1 \ (1\leq p\leq n).$ Clearly $Z_1=Z_2=\dots=Z_{p-1}=0$ and $Z_p=Z_{p+1}=\dots=Z_n=1$. Since $x_1=X_1=0$, Definition \ref{def2} gives $x_2=min(Z_1-X_1, w_2)=0$. Similarly, $x_3=x_4=\dots=x_p=0$. Let $j\ (1\leq j\leq n)$ be the index such that $w_j\neq0$ and $w_{j+1}=\dots=w_n=0$. There are two cases to consider. Case 1. $p\geq j$. By Definition \ref{def2}, $x_{p+1}=min(Z_{p}-X_{p}, w_{p+1})=min(1-0, 0)=0$. Similarly, $x_{p+2}=\dots=x_n=0$, i.e., $X=(0,\dots,0)$. Then $Y=(w_1,\dots,w_{p-1}, w_p+1,w_{p+1},\dots,w_n)$. This means $WZ=Y$ is a row, which corresponds to the first part of Definition \ref{def3}. Case 2. $p< j$. There must exist an index $t$, $p<t\leq j$, such that $w_t\neq0$ and $w_k=0$ for $p<k<t$. If $p+1=t$, then $x_t=min(Z_{t-1}-X_{t-1}, w_t)=min(1-0, w_t)=1$, and $x_{i}=min(Z_{i-1}-X_{i-1}, w_{i})=min(1-1, w_{i})=0$ for $t+1\leq i \leq n$. If $p+1<t$, then by Definition \ref{def2}, $x_{p+1}=min(Z_p-X_p, w_{p+1})=min(1-0, 0)=0$. Similarly, $x_{p+2}=\dots=x_{t-1}=0$. Moreover $x_t=min(Z_{t-1}-X_{t-1}, w_t)=min(1-0, w_t)=1$, $x_{t+1}=min(Z_t-X_t, w_{t+1})=0$. Similarly, $x_{t+2}=\dots=x_n=0$. 
This shows that $X=(0,\dots,0,x_t,0,\dots,0)$, where $x_t=1$. Now, $y_p=w_p+z_p-x_p=w_p+1$, $y_t=w_t+z_t-x_t=w_t-1$ and $y_i=w_i+z_i-x_i=w_i$ for $1\leq i\leq n$, $i\neq p$, $i\neq t$, i.e., $Y=(w_1,\dots,w_p+1,\dots,w_t-1,\dots,w_n)$, which corresponds to the second part of Definition \ref{def3}. Definition \ref{def3} $\Rightarrow$ Definition \ref{def2}. Clearly, $x_1=0$ and $x_2=min(z_1,w_2)=min(Z_1-X_1,w_2)$. Assume that for all $t\le m-1\ (t\geq1,\ m\ge 3)$, $x_t=min(Z_{t-1}-X_{t-1},w_t)$. Then \begin{eqnarray*} x_m&=&min(Z_{m-2}-X_{m-2}-x_{m-1}+z_{m-1},w_m)\\ &=&min(Z_{m-1}-X_{m-1},w_m), \end{eqnarray*} which shows that $x_p=min(Z_{p-1}-X_{p-1},w_p)$ for any $p$. It is obvious that $ X_p=x_p+X_{p-1}=min(Z_{p-1},X_{p-1}+w_p),\ y_q=w_q+z_q-x_q$ and $ Y_q=W_q+Z_q-X_q$. \end{proof} \begin{corollary} In Definition \ref{def2}, $X\cdot Y$ is a Young tableau. \end{corollary} \subsection{Expressions of reductions}\label{mulexp} In order to prove Theorem \ref{mainTH}, we have to check that all possible compositions of elements of $\Lambda$, which are intersection compositions only, are trivial. Let $R=(r_1,r_2,\dots,r_n),\ S=(s_1,s_2,\dots,s_n),\ T=(t_1,t_2,\dots,t_n)\in U$. For reductions, we will use the following notation. 
\begin{eqnarray*} RST&=&\left( \begin{array}{ccccc} r_1 & r_2 & r_3 & \dots & r_n \\ s_1 & s_2 & s_3 & \dots & s_n \\ t_1 & t_2 & t_3 & \dots & t_n \end{array} \right) \overset{\text{$S\cdot T=A\cdot B$}}{\Longrightarrow}\left( \begin{array}{ccccc} r_1 & r_2 & r_3 & \dots & r_n \\ a_1 & a_2 & a_3 & \dots & a_n \\ b_1 & b_2 & b_3 & \dots & b_n \end{array} \right)\\ &\ \overset{\text{$R\cdot A=C\cdot D$}}{\Longrightarrow}&\left( \begin{array}{ccccc} c_1 & c_2 & c_3 & \dots & c_n \\ d_1 & d_2 & d_3 & \dots & d_n \\ b_1 & b_2 & b_3 & \dots & b_n \end{array} \right) \overset{\text{$D\cdot B=E\cdot F$}}{\Longrightarrow}\left( \begin{array}{ccccc} c_1 & c_2 & c_3 & \dots & c_n \\ e_1 & e_2 & e_3 & \dots & e_n \\ f_1 & f_2 & f_3 & \dots & f_n \end{array} \right), \end{eqnarray*} \begin{eqnarray*} RST&=&\left( \begin{array}{ccccc} r_1 & r_2 & r_3 & \dots & r_n \\ s_1 & s_2 & s_3 & \dots & s_n \\ t_1 & t_2 & t_3 & \dots & t_n \end{array} \right) \overset{\text{$R\cdot S=G\cdot H$}}{\Longrightarrow}\left( \begin{array}{ccccc} g_1 & g_2 & g_3 & \dots & g_n \\ h_1 & h_2 & h_3 & \dots & h_n \\ t_1 & t_2 & t_3 & \dots & t_n \end{array} \right)\\ &\overset{\text{$H\cdot T=I\cdot J$}}{\Longrightarrow}&\left( \begin{array}{ccccc} g_1 & g_2 & g_3 & \dots & g_n \\ i_1 & i_2 & i_3 & \dots & i_n \\ j_1 & j_2 & j_3 & \dots & j_n \end{array} \right) \overset{\text{$G\cdot I=K\cdot L$}}{\Longrightarrow}\left( \begin{array}{ccccc} k_1 & k_2 & k_3 & \dots & k_n \\ l_1 & l_2 & l_3 & \dots & l_n \\ j_1 & j_2 & j_3 & \dots & j_n \end{array} \right), \end{eqnarray*}\\ where \begin{eqnarray*} &&A=(a_1,a_2,\dots,a_n),\ \ B=(b_1,b_2,\dots,b_n),\ \ C=(c_1,c_2,\dots,c_n),\\ &&D=(d_1,d_2,\dots,d_n),\ \ E=(e_1,e_2,\dots,e_n),\ \ F=(f_1,f_2,\dots,f_n),\\ &&G=(g_1,g_2,\dots,g_n),\ \ H=(h_1,h_2,\dots,h_n),\ \ I=(i_1,i_2,\dots,i_n),\\ &&J=(j_1,j_2,\dots,j_n),\ \ K=(k_1,k_2,\dots,k_n),\ \ L=(l_1,l_2,\dots,l_n), \end{eqnarray*} and $S\cdot T=A\cdot B,R\cdot A=C\cdot D,D\cdot B=E\cdot F, R\cdot S=G\cdot 
H,H\cdot T=I\cdot J,G\cdot I=K\cdot L\in \Lambda$. We will prove that \begin{equation*}\label{e1} \left( \begin{array}{ccccc} c_1 & c_2 & c_3 & \dots & c_n \\ e_1 & e_2 & e_3 & \dots & e_n \\ f_1 & f_2 & f_3 & \dots & f_n \end{array} \right) =\left( \begin{array}{ccccc} k_1 & k_2 & k_3 & \dots & k_n \\ l_1 & l_2 & l_3 & \dots & l_n \\ j_1 & j_2 & j_3 & \dots & j_n \end{array} \right), \end{equation*} which implies that the intersection composition of the polynomials $f=RS-GH$ and $g=ST-AB$ relative to $RST$ is trivial: $(f,g)_{RST}\equiv 0\ \bmod(\Lambda,RST)$. Therefore, $\Lambda$ is a Gr\"{o}bner-Shirshov basis of the algebra $F\langle U\mid \Lambda\rangle$. \ \ By the definition of the algorithm in Definition \ref{def2}, $a_1=c_1=c_2=e_1=g_1=i_1=k_1=k_2=l_1=0$. Therefore, one only needs to show that $c_p=k_p$ and $e_q=l_q$ for all $3\le p\le n,2\le q\le n$. \subsection{$c_p=k_p\ (p\ge 3)$} \label{cpkp} We need the following lemmas to prove $c_p=k_p\ (p\ge 3)$. \begin{lemma}\label{AleI} For all $p\ (1\leq p\le n)$, $A_p\le I_p$. \end{lemma} \begin{proof} By the algorithm in Definition \ref{def2}, $A_1=a_1=i_1=I_1=0$. Assume that for all $p\le m-1\ (m\ge 2)$, $A_p\le I_p$. Since \begin{eqnarray*} A_m&=&min(T_{m-1},A_{m-1}+s_m),\\ I_m&=&min(T_{m-1},I_{m-1}+h_m),\\ h_m&=&r_m+s_m-g_m\\ &=&r_m+s_m-min(S_{m-1}-G_{m-1},r_m)\ge s_m, \end{eqnarray*} we have $A_m\le I_m$. Now by induction, the result follows. \end{proof} \begin{lemma}\label{KeqI} Assume that $p\geq3, \ K_{p-1}=C_{p-1},\ K_p=C_p=A_{p-1}<C_{p-1}+r_p$ and $S_{p-1}-G_{p-1}\ge r_p$. Then $K_p=I_{p-1}$. \end{lemma} \begin{proof} Note that $ C_p=min(A_{p-1},C_{p-1}+r_p)$ and $ K_p=min(I_{p-1},K_{p-1}+g_p)$. If $S_{p-1}-G_{p-1}\ge r_p,$ then $g_p=min(S_{p-1}-G_{p-1},r_p)=r_p.$ Therefore $K_p=C_p=A_{p-1}<C_{p-1}+r_p=K_{p-1}+g_p$; since $K_p=min(I_{p-1},K_{p-1}+g_p)$, this forces $K_p=I_{p-1}$. \end{proof} \begin{lemma}\label{CCr2SGr} Assume that $p\geq2,\ C_p=K_p,\ C_{p+1}=K_{p+1}$ and $C_{p+1}=C_p+r_{p+1}$. Then $S_p-G_p\ge r_{p+1}$. 
\end{lemma} \begin{proof} Note that $C_{p+1}=min(A_p,C_p+r_{p+1})$ and $K_{p+1}=min(I_p,K_p+g_{p+1}) =min(I_p,K_p+S_p-G_p,K_p+r_{p+1})$. If $C_p=K_p, C_{p+1}=K_{p+1}$ and $C_{p+1}=C_p+r_{p+1}$, then $ K_{p+1}=C_{p+1} =C_p+r_{p+1} =K_p+r_{p+1}. $ Therefore, $I_p\ge K_p+r_{p+1}$ and $K_p+S_p-G_p\ge K_p+r_{p+1}$, which yields $S_p-G_p\ge r_{p+1}$. \end{proof} \subsubsection{$C_3=K_3$} \label{c3k3} \begin{proof} Since $c_1=c_2=k_1=k_2=0$, we have $C_3=c_3$ and $K_3=k_3$. According to the algorithm in Definition \ref{def2}, \begin{eqnarray*} c_3&=&min(a_2,r_3)\\ &=&min[min(t_1,s_2),r_3]\\ &=&min(t_1,s_2,r_3),\\ k_3&=&min(i_2,g_3)\\ &=&min[min(t_1,h_2),min(s_1+s_2-g_2,r_3)]\\ &=&min[t_1,r_2+s_2-min(s_1,r_2),s_1+s_2-min(s_1,r_2),r_3]\\ &=&min[t_1,s_2+min(r_2,s_1)-min(s_1,r_2),r_3]\\ &=&min(t_1,s_2,r_3). \end{eqnarray*} Therefore, $C_3=K_3=min(t_1,s_2,r_3)$. \end{proof} \subsubsection{$C_p=K_p$ $(p\ge 4)$} \begin{proof} Assume that for all $p\le m-1\ (m\ge 4)$, $C_p=K_p$. Note that \begin{eqnarray*} &&C_m=min(T_{m-2},A_{m-2}+s_{m-1},C_{m-1}+r_m), \\ &&K_m=min(T_{m-2},I_{m-2}+h_{m-1},K_{m-1}+M,K_{m-1}+r_m), \end{eqnarray*} where $M=s_{m-1}+S_{m-2}-G_{m-2}-g_{m-1}$. There are three cases to consider. Case 1. $C_{m-1}=A_{m-2}$. If $A_{m-2}<C_{m-2}+r_{m-1}$ and $S_{m-2}-G_{m-2}<r_{m-1}$, then \begin{eqnarray*} &&C_{m-1}=A_{m-2},\\ &&g_{m-1}=S_{m-2}-G_{m-2}, \\ &&h_{m-1}=r_{m-1}+s_{m-1}-S_{m-2}+G_{m-2}>s_{m-1},\\ &&M=s_{m-1}, \\ &&I_{m-2}+h_{m-1}\ge K_{m-1}+h_{m-1}>K_{m-1}+s_{m-1}. \end{eqnarray*} Therefore, $K_m=C_m=min(T_{m-2},C_{m-1}+s_{m-1},C_{m-1}+r_m)$. If $A_{m-2}<C_{m-2}+r_{m-1}$ and $S_{m-2}-G_{m-2}\ge r_{m-1}$, then $ C_{m-1}=A_{m-2},\ g_{m-1}=r_{m-1},\ h_{m-1}=s_{m-1}$ and $M\ge s_{m-1}$. By Lemma \ref{KeqI}, we have $ K_{m-1}=I_{m-2}$ and $ I_{m-2}+h_{m-1}=K_{m-1}+s_{m-1}$. Therefore, $K_m=C_m=min(T_{m-2},C_{m-1}+s_{m-1},C_{m-1}+r_m)$. Case 2. $C_{m-1}=A_{m-q-2}+\sum_{i=1}^{q+1}r_{m-i}$, where $q\le m-4$. 
The condition $C_{m-1}=A_{m-q-2}+\sum_{i=1}^{q+1}r_{m-i}$ implies that for all $k\le q \ (q\le m-4)$, $C_{m-k-1}+r_{m-k}\le A_{m-k-1}$. Then $ C_{m-k}=C_{m-k-1}+r_{m-k} $ and by Lemma \ref{CCr2SGr}, we have \begin{eqnarray*} &&S_{m-k-1}-G_{m-k-1}\ge r_{m-k}, \\ &&g_{m-k}=r_{m-k}, \\ &&h_{m-k}=s_{m-k}, \end{eqnarray*} \begin{eqnarray*} &&C_m=min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_{m-q-2}+\sum_{i=1}^{q}s_{m-i}, A_{m-q-2}+\sum_{i=1}^{q+1}s_{m-i},C_{m-1}+r_m),\\ &&K_m=min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_{m-q-2}+\sum_{i=1}^{q}s_{m-i},\\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ I_{m-q-2}+h_{m-q-1}+\sum_{i=1}^{q}s_{m-i}, K_{m-q-1}+\sum_{i=1}^{q}r_{m-i}+M,K_{m-1}+r_m), \end{eqnarray*} where $M=\sum_{i=1}^{q+1}s_{m-i}+S_{m-q-2}-G_{m-q-2}-g_{m-q-1}-\sum_{i=1}^{q}r_{m-i}$. If $A_{m-q-2}<C_{m-q-2}+r_{m-q-1}$ and $S_{m-q-2}-G_{m-q-2}<r_{m-q-1}$, then \begin{eqnarray*} &&C_{m-q-1}=A_{m-q-2},\\ &&g_{m-q-1}=S_{m-q-2}-G_{m-q-2},\\ &&h_{m-q-1}=r_{m-q-1}+s_{m-q-1}-S_{m-q-2}+G_{m-q-2}>s_{m-q-1},\\ &&M=\sum_{i=1}^{q+1}s_{m-i}-\sum_{i=1}^{q}r_{m-i},\\ &&K_{m-q-1}+\sum_{i=1}^{q}r_{m-i}+M=K_{m-q-1}+\sum_{i=1}^{q+1}s_{m-i},\\ &&I_{m-q-2}+h_{m-q-1}+\sum_{i=1}^{q}s_{m-i}>K_{m-q-1}+\sum_{i=1}^{q+1}s_{m-i}. \end{eqnarray*} Therefore, \begin{eqnarray*} K_m&=&min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_{m-q-2}+\sum_{i=1}^{q}s_{m-i}, C_{m-q-1}+\sum_{i=1}^{q+1}s_{m-i},C_{m-1}+r_m)\\ &=&C_m. \end{eqnarray*} If $A_{m-q-2}<C_{m-q-2}+r_{m-q-1}$ and $S_{m-q-2}-G_{m-q-2}\ge r_{m-q-1}$, then $C_{m-q-1}=A_{m-q-2},$ $g_{m-q-1}=r_{m-q-1}$ and $h_{m-q-1}=s_{m-q-1}$. By Lemma \ref{KeqI}, \begin{eqnarray*} &&K_{m-q-1}=I_{m-q-2},\\ &&M\ge \sum_{i=1}^{q+1}s_{m-i}-\sum_{i=1}^{q}r_{m-i},\\ &&K_{m-q-1}+\sum_{i=1}^{q}r_{m-i}+M\ge K_{m-q-1}+\sum_{i=1}^{q+1}s_{m-i},\\ &&I_{m-q-2}+h_{m-q-1}+\sum_{i=1}^{q}s_{m-i}=K_{m-q-1}+\sum_{i=1}^{q+1}s_{m-i}. \end{eqnarray*} Therefore, \begin{eqnarray*} K_m=min(T_{m-2},T_{m-3}+s_{m-1},\dots, T_{m-q-2}+\sum_{i=1}^{q}s_{m-i},C_{m-q-1}+\sum_{i=1}^{q+1}s_{m-i},C_{m-1}+r_m)=C_m. \end{eqnarray*} Case 3. 
$C_{m-1}=C_2+\sum_{i=1}^{m-3}r_{m-i}$. The condition $C_{m-1}=C_2+\sum_{i=1}^{m-3}r_{m-i}$ implies that for all $k\le q=m-3$, $C_{m-k-1}+r_{m-k}\le A_{m-k-1}$ and so $C_{m-k}=C_{m-k-1}+r_{m-k}$. Now, by Lemma \ref{CCr2SGr}, \begin{eqnarray*} &&S_{m-k-1}-G_{m-k-1}\ge r_{m-k},\\ &&g_{m-k}=r_{m-k},\\ &&h_{m-k}=s_{m-k},\\ M&=&\sum_{i=1}^{m-2}s_{m-i}+S_1-G_1-g_2-\sum_{i=1}^{m-3}r_{m-i}\\ &=&\sum_{i=1}^{m-2}s_{m-i}+s_1-g_2-\sum_{i=1}^{m-3}r_{m-i}. \end{eqnarray*} Thus, \begin{eqnarray*} C_m&=&min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_1+\sum_{i=1}^{m-3}s_{m-i},A_1+\sum_{i=1}^{m-2}s_{m-i},C_{m-1}+r_m)\\ &=&min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_1+\sum_{i=1}^{m-3}s_{m-i},\sum_{i=1}^{m-2}s_{m-i},C_{m-1}+r_m), \end{eqnarray*} and \begin{eqnarray*} K_m&=&min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_1+\sum_{i=1}^{m-3}s_{m-i},I_1+h_2+\sum_{i=1}^{m-3}s_{m-i},\\ & &\ \ \ \ \ \ \ K_2+\sum_{i=1}^{m-3}r_{m-i}+M,K_{m-1}+r_m)\\ &=&min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_1+\sum_{i=1}^{m-3}s_{m-i},\\ & & \ \ \ \ \ \ \ \sum_{i=1}^{m-2}s_{m-i}+min(h_2-s_2,s_1-g_2),K_{m-1}+r_m). \end{eqnarray*} Since \begin{eqnarray*} min(h_2-s_2,s_1-g_2)&=&min[r_2-min(s_1,r_2),s_1-min(s_1,r_2)] \\ &=&min(r_2,s_1)-min(s_1,r_2)\\ &=&0, \end{eqnarray*} one gets \begin{eqnarray*} K_m=min(T_{m-2},T_{m-3}+s_{m-1},\dots,T_1+\sum_{i=1}^{m-3}s_{m-i},\sum_{i=1}^{m-2}s_{m-i},K_{m-1}+r_m)=C_m. \end{eqnarray*} Therefore, the proof of $K_m=C_m$ is complete. \end{proof} \ \ Therefore, we have shown that $C_p=K_p\ (1\leq p\leq n)$ which, by using the formulas in Definition \ref{def2}, is clearly equivalent to $c_p=k_p\ (1\leq p\leq n)$. \subsection{$e_p=l_p$ $(p\ge 2)$} We need the following lemmas to prove $e_p=l_p\ (p\ge 2)$. \begin{lemma}\label{SCGAgezero} For any $p\ (p\le n)$, $S_p+C_p-G_p-A_p\ge 0$. \end{lemma} \begin{proof} For $p=1$, we have $S_{1}+C_{1}-G_{1}-A_{1}=s_1\ge 0$. Assume that for all $p\le m-1\ (m\ge 2)$, $S_p+C_p-G_p-A_p\ge 0$. 
If $A_{m-1}<C_{m-1}+r_m$, then $C_m=A_{m-1}$ and $$ S_m+C_m-G_m-A_m \ge S_m+A_{m-1}-S_{m-1}-A_{m-1}-s_m=0. $$ If $C_{m-1}+r_m\le A_{m-1}$, then $C_m=C_{m-1}+r_m$. By Subsection~\ref{cpkp}, we have $K_m=C_m$ and $K_{m-1}=C_{m-1}$. Then by Lemma \ref{CCr2SGr}, $$S_{m-1}-G_{m-1}\ge r_m,$$ $$G_m=G_{m-1}+r_m.$$ So, \begin{eqnarray*} S_m+C_m-G_m-A_m&=&S_m+C_{m-1}+r_m-G_{m-1}-r_m-A_m\\ &=&S_m+C_{m-1}-G_{m-1}-A_m\\ &=&S_{m-1}+C_{m-1}-G_{m-1}-A_{m-1}+s_m-a_m\\ &\ge & s_m-a_m\\ &=&s_m-min(T_{m-1}-A_{m-1},s_m)\\ &\ge & 0. \end{eqnarray*} The proof is complete. \end{proof} \begin{lemma}\label{AeqI} If $S_p+C_p-G_p-A_p>0$, then $A_p=I_p$. \end{lemma} \begin{proof} For $p=1$, we have $A_1=I_1=0$. Assume that for all $p\le m-1\ (m\ge 2)$, $A_p=I_p$ if $S_p+C_p-G_p-A_p>0$. By using Lemma \ref{SCGAgezero}, $S_m+C_m-G_m-A_m\ge 0$. Suppose that $S_m+C_m-G_m-A_m>0$. Then by the proof of Lemma \ref{SCGAgezero}, one of the following four conditions must hold. (\ref{AeqI}.1) $A_{m-1}<C_{m-1}+r_m$, $A_m=T_{m-1}<A_{m-1}+s_m$; (\ref{AeqI}.2) $A_{m-1}<C_{m-1}+r_m$, $G_m=G_{m-1}+r_m<S_{m-1}$; (\ref{AeqI}.3) $C_{m-1}+r_m\le A_{m-1}$, $S_{m-1}+C_{m-1}-G_{m-1}-A_{m-1}>0$; (\ref{AeqI}.4) $C_{m-1}+r_m\le A_{m-1}$, $S_{m-1}+C_{m-1}-G_{m-1}-A_{m-1}=0$, $s_m>a_m$. Assume (\ref{AeqI}.1). Then $ C_m=A_{m-1}$, and by Lemma \ref{AleI}, $A_m=T_{m-1}<A_{m-1}+s_m\le I_{m-1}+h_m$. So, $ A_m=I_m=T_{m-1}$. Assume (\ref{AeqI}.2). Then $C_m=A_{m-1},\ g_m=r_m,\ h_m=s_m$, and by Lemma \ref{KeqI}, $K_m=I_{m-1}$. Thus, $A_m=I_m=min(T_{m-1},C_m+s_m)$. Assume (\ref{AeqI}.3). Then $C_m=C_{m-1}+r_m$. Since $K_m=C_m$ and $K_{m-1}=C_{m-1}$, by Lemma \ref{CCr2SGr}, we have $S_{m-1}-G_{m-1}\ge r_m$, $g_m=r_m$ and $h_m=s_m$; and by the induction hypothesis, $A_{m-1}=I_{m-1}$. So, $A_m=I_m=min(T_{m-1},A_{m-1}+s_m)$. Assume (\ref{AeqI}.4). Then $C_m=C_{m-1}+r_m$ and $a_m=T_{m-1}-A_{m-1}$. By Lemma \ref{AleI}, $A_m=T_{m-1}<A_{m-1}+s_m\le I_{m-1}+h_m$. So, $A_m=I_m=T_{m-1}$. 
\end{proof} \subsubsection{$e_2=l_2$} \label{e2l2} \begin{proof} \begin{eqnarray*} e_2&=&min(b_1,d_2)\\ &=&min(s_1+t_1,r_2+a_2)\\ &=&min[s_1+t_1,r_2+min(t_1,s_2)]\\ &=&min(s_1+t_1,t_1+r_2,r_2+s_2),\\ l_2&=&g_2+i_2\\ &=&min(s_1,r_2)+min(t_1,h_2)\\ &=&min(s_1,r_2)+min[t_1,r_2+s_2-min(s_1,r_2)]\\ &=&min(s_1+t_1,t_1+r_2,r_2+s_2)\\ &=&e_2. \end{eqnarray*} \end{proof} \subsubsection{$e_p=l_p$ $(p\ge 3)$} \label{enln} \begin{proof} Assume that for any $p\le m-1\ (m\ge 3)$, $e_p=l_p$. Since $E_{m-1}=L_{m-1}=G_{m-1}+I_{m-1}-K_{m-1}$ and $K_{m-1}=C_{m-1}=C_m-c_m$, we have \begin{eqnarray*} e_m&=&min(B_{m-1}-E_{m-1},d_m)\\ &=&min(S_{m-1}+T_{m-1}-A_{m-1}-E_{m-1},r_m+a_m-c_m)\\ &=&min[S_{m-1}+T_{m-1}-A_{m-1}-E_{m-1},r_m+min(T_{m-1}-A_{m-1},s_m)-c_m]\\ &=&min(S_{m-1}+T_{m-1}+C_m-A_{m-1}-G_{m-1}-I_{m-1},r_m+T_{m-1}-A_{m-1},\\ & &r_m+s_m)-c_m,\\ l_m&=&g_m+i_m-k_m\\ &=&min(S_{m-1}-G_{m-1},r_m)+min(T_{m-1}-I_{m-1},h_m)-k_m\\ &=&min(S_{m-1}+T_{m-1}-G_{m-1}-I_{m-1},S_{m-1}-G_{m-1}+h_m,\\ & &r_m+T_{m-1}-I_{m-1},r_m+h_m)-k_m. \end{eqnarray*} There are two cases to consider. Case 1. $A_{m-1}<C_{m-1}+r_m$. Then $C_m=A_{m-1}$. If $S_{m-1}-G_{m-1}<r_m$, then \begin{eqnarray*} &&g_m=S_{m-1}-G_{m-1},\\ &&h_m=r_m+s_m-S_{m-1}+G_{m-1}>s_m,\\ &&S_{m-1}-G_{m-1}+h_m=r_m+s_m<r_m+h_m \end{eqnarray*} and by Lemma \ref{AleI}, \begin{eqnarray*} S_{m-1}+T_{m-1}-G_{m-1}-I_{m-1}&<&r_m+T_{m-1}-I_{m-1}\\ &\le& r_m+T_{m-1}-A_{m-1}. \end{eqnarray*} Therefore, $l_m=e_m=min(S_{m-1}+T_{m-1}-G_{m-1}-I_{m-1},r_m+s_m)-c_m$. If $S_{m-1}-G_{m-1}\ge r_m$, then $g_m=r_m,\ h_m=s_m$, and by Lemma \ref{KeqI}, \begin{eqnarray*} &&K_m=I_{m-1},\\ &&S_{m-1}+T_{m-1}-G_{m-1}-I_{m-1}\ge r_m+T_{m-1}-I_{m-1}=r_m+T_{m-1}-K_m,\\ &&S_{m-1}-G_{m-1}+h_m=S_{m-1}-G_{m-1}+s_m\ge r_m+s_m. \end{eqnarray*} Therefore, $l_m=e_m=min(T_{m-1}-C_m,s_m)+r_m-c_m$. Case 2. $C_{m-1}+r_m\le A_{m-1}$. 
Then $C_m=C_{m-1}+r_m$, and by Lemma \ref{CCr2SGr}, \begin{eqnarray*} &&S_{m-1}-G_{m-1}\ge r_m,\\ && g_m=r_m,\\ && h_m=s_m,\\ &&S_{m-1}+T_{m-1}-G_{m-1}-I_{m-1}\ge r_m+T_{m-1}-I_{m-1},\\ &&S_{m-1}-G_{m-1}+h_m\ge r_m+h_m=r_m+s_m,\\ &&e_m=min(S_{m-1}+T_{m-1}+C_{m-1}-A_{m-1}-G_{m-1}-I_{m-1},T_{m-1}-A_{m-1},s_m)+r_m-c_m,\\ &&l_m=min(T_{m-1}-I_{m-1},s_m)+r_m-k_m. \end{eqnarray*} By Lemma \ref{SCGAgezero}, $S_{m-1}+C_{m-1}-G_{m-1}-A_{m-1}\ge 0$. If $S_{m-1}+C_{m-1}-G_{m-1}-A_{m-1}=0$, by Lemma \ref{AleI}, \begin{eqnarray*} e_m&=&min(T_{m-1}-I_{m-1},T_{m-1}-A_{m-1},s_m)+r_m-c_m\\ &=&min(T_{m-1}-I_{m-1}, s_m)+r_m-k_m\\ &=&l_m. \end{eqnarray*} If $S_{m-1}+C_{m-1}-G_{m-1}-A_{m-1}>0$, by Lemma \ref{AeqI}, \begin{eqnarray*} &&A_{m-1}=I_{m-1}, \\ &&S_{m-1}+T_{m-1}+C_{m-1}-A_{m-1}-G_{m-1}-I_{m-1}>T_{m-1}-A_{m-1}. \end{eqnarray*} Therefore, \begin{eqnarray*} e_m&=&min(T_{m-1}-A_{m-1},s_m)+r_m-c_m\\ &=&min(T_{m-1}-I_{m-1},s_m)+r_m-k_m\\ &=&l_m. \end{eqnarray*} \end{proof} \ \ \noindent{\bf Acknowledgement}: The authors would like to thank Professor L.A. Bokut for his guidance, useful discussions and enthusiastic encouragement in writing up this paper.
{ "timestamp": "2011-06-24T02:03:37", "yymm": "1106", "arxiv_id": "1106.4753", "language": "en", "url": "https://arxiv.org/abs/1106.4753", "abstract": "We present the plactic algebra on an arbitrary alphabet set $A$ by row generators and column generators respectively. We give Gröbner-Shirshov bases for such presentations. In the case of column generators, a finite Gröbner-Shirshov basis is given if $A$ is finite. From the Composition-Diamond lemma for associative algebras, it follows that the set of Young tableaux is a linear basis of plactic algebra. As the result, it gives a new proof that Young tableaux are normal forms of elements of plactic monoid. This result was proved by D.E. Knuth \\cite{Knuth} in 1970, see also Chapter 5 in \\cite{M.L}.", "subjects": "Rings and Algebras (math.RA)", "title": "New approaches to plactic monoid via Gröbner-Shirshov bases" }
https://arxiv.org/abs/2202.00762
Extending FEniCS to Work in Higher Dimensions Using Tensor Product Finite Elements
We present a method to extend the finite element library FEniCS to solve problems with domains in dimensions above three by constructing tensor product finite elements. This methodology only requires that the high dimensional domain is structured as a Cartesian product of two lower dimensional subdomains. In this study we consider Dirichlet problems for scalar linear partial differential equations, though the methodology can be extended to non-linear problems. The utilization of tensor product finite elements allows us to construct a global system of linear algebraic equations that only relies on the finite element infrastructure of the lower dimensional subdomains contained in FEniCS. We demonstrate the effectiveness of our methodology in four distinct test cases. The first test case is a Poisson equation posed in a four dimensional domain which is a Cartesian product of two unit squares, solved using the classical Galerkin finite element method. The second test case is the wave equation in space-time, where the computational domain is a Cartesian product of a two dimensional space grid and a one dimensional time interval. In this second case we also employ the Galerkin method. The third test case is an advection dominated advection-diffusion equation where the global domain is a Cartesian product of two one dimensional intervals, in which the streamline upwind Petrov-Galerkin method is applied to ensure discrete stability. The final test case uses the Galerkin approach to solve a Poisson problem on a Cartesian product of two intervals with a spatially varying, non-separable diffusivity term. In all cases, a p=1 basis is used, and optimal L^2 convergence rates of order h^{p+1} of the errors are achieved with respect to h refinement.
\section{Introduction} In the last decade, open source libraries with high level APIs that automate the process of solving partial differential equations (PDEs) using the finite element method (FEM) have become valuable tools in computational research. Some prominent libraries of this kind include Firedrake~\cite{rathgeber2016firedrake}, deal.II~\cite{bangerth2007deal}, MFEM~\cite{anderson2021mfem}, and FEniCS~\cite{alnaes2015fenics}. FEniCS is one of the most widely used of these libraries. FEniCS, along with most other FEM libraries, only includes the capability of handling problems in up to three dimensions (GetFEM~\cite{renard2020getfem} is a notable exception). FEniCS specifically supports unstructured meshes with corresponding basis functions in one, two, and three dimensions. Extending FEniCS to solve problems in higher dimensions is of interest for applications such as spectral wind wave models, spatial population genetics, quantum mechanics, and even 3D space-time problems. FEniCS does not contain support for fully unstructured meshes in dimensions higher than three. However, at the expense of losing a fully unstructured mesh, a higher dimensional space can be discretized with the available tools in FEniCS if it is the Cartesian product of two lower dimensional (three or lower) spaces that can themselves be unstructured. It is well known that in a Cartesian product space, a finite element basis, called the product basis, can be constructed that spans the function space of the full domain~\cite{brenner2008mathematical,quarteroni2010numerical,ern2013theory}. This research seeks to use the infrastructure of an FEM library, such as FEniCS, on the lower dimensional spaces to construct a basis for the high dimensional space. Using the FEM to discretize a structured domain via a product basis has been done before without the use of FEM libraries such as FEniCS. 
In 1978, Bank~\cite{Bank1978} used tensor product finite elements to solve a 2D Poisson problem on a structured grid in order to obtain a faster solver. In that study, the domain was a Cartesian product of two unit intervals, resulting in a uniform square mesh. Bank used the tensor product of one dimensional quadratic and cubic basis functions to construct the polynomial basis on the full 2D domain. In 1980, Baker performed numerical tests as well as a stability analysis on a tensor product finite element method with application to convection-dominated fluid flow problems~\cite{BAKER1981215}. The stability analysis showed that the basic algorithm is spatially fourth-order accurate in its most elementary embodiment, and the numerical experiments on the convection-dominated model test problems confirmed the basic viability of the developed algorithm and its tensor product formulation. Recently, Du \emph{et al.}~\cite{DU2013181} used tensor product finite elements to construct a fast solver for an electromagnetic scattering problem of a cavity. In Firedrake~\cite{rathgeber2016firedrake}, there exists a capability to create tensor product finite elements, but only up to three dimensions~\cite{mcrae2016automated}. A package built on deal.II called Hyperdeal~\cite{munch2020hyperdeal} has the capability of creating tensor product finite elements in up to six dimensions. deal.II differs from a library such as FEniCS in that deal.II uses quadrilateral elements in two dimensions, whereas FEniCS uses triangles. Here, we present a general software framework built on the components of the FEniCS library as well as other open source Python libraries to construct high dimensional meshes and corresponding FE discretizations. 
Then, three model problems will be described in detail, and the derivation of the system of equations using tensor product finite elements will be shown for each case. The first model problem, discussed in Section~\ref{sec:model_prob}, is an $N$ dimensional Poisson problem where the domain is a Cartesian product of two subdomains of dimension three or less. The second model problem, discussed in Section~\ref{sec:prob2}, is a wave equation where the domain is decomposed as a Cartesian product of space and time. The third model problem, discussed in Section~\ref{sec:prob3}, is an advection dominated advection-diffusion equation where the advection aligns with one of the subdomains; a streamline upwind Petrov-Galerkin (SUPG) method is formulated for this case. After each system of equations is derived, numerical tests are run using FEniCS for each model problem with specific boundary conditions. For each case, errors and convergence rates are tabulated in Section~\ref{sec:numerical_results}. Lastly, conclusions and recommendations for future work are given in Section~\ref{sec:Conclusions}. \section{Methods} \label{sec:methods} To present the proposed methodology and algorithms, we consider the following class of problems, i.e., PDEs of the form \begin{equation*} \begin{split} \mathscr{L} u = f \quad \textrm{in} \quad \Omega = \Omega_1 \times \Omega_2, \end{split} \end{equation*} where $\mathscr{L}$ is a linear differential operator, $f$ is a forcing function, and the domain $\Omega$ is defined as a Cartesian product of two lower dimensional Lipschitz domains. For example, if the global domain $\Omega \subset \mathbb{R}^2$, then $\Omega$ can be defined as the Cartesian product of two intervals $\Omega_1\subset \mathbb{R}$ and $\Omega_2 \subset \mathbb{R}$. Hence, we consider general domains of the form $\Omega = \Omega_1 \times \Omega_2 $; see Figure~\ref{fig:cartesian_illustration} for an illustration. 
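As a minimal illustration of this product structure (plain Python, independent of FEniCS; the two vertex lists are toy data of our own choosing), the vertex set of $\Omega$ is simply the Cartesian product of the vertex sets of the subdomain meshes:

```python
from itertools import product

# Toy 1D meshes for the two subdomains (vertex coordinates).
omega1 = [0.0, 0.5, 1.0]        # mesh of Omega_1
omega2 = [0.0, 0.25, 0.5, 1.0]  # mesh of Omega_2

# Vertices of the product domain Omega = Omega_1 x Omega_2.
omega = list(product(omega1, omega2))
assert len(omega) == len(omega1) * len(omega2)  # 3 * 4 = 12 vertices
```

The same idea applies when the factors are themselves 2D or 3D unstructured meshes: the product mesh is structured only in the product direction.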
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{CartesianProductIllust.png} \caption{Example of a global domain $\Omega$ that is a Cartesian product of two lower dimensional subdomains $\Omega_1, \Omega_2$.} \label{fig:cartesian_illustration} \end{figure} FEniCS can discretize objects of up to three dimensions; hence, in practice this framework can be used to define domains that are Cartesian products of up to six dimensions. Furthermore, this process can be applied iteratively to yield even higher dimensional domains, as $\Omega_1$ itself could be a product of two other spaces, and so on. We note that in this presentation, we consider only symmetric functional settings which admit a well posed Galerkin FE discretization. It can be shown that if a function $u$ is defined on a Cartesian product domain $\Omega = \Omega_1 \times \Omega_2$, then we can construct a basis of a polynomial space to be used for the FEM by exploiting this underlying geometric structure, see, e.g.,~\cite{quarteroni2010numerical,brenner2008mathematical,ern2013theory}. Suppose we have a basis for a polynomial space on the first lower dimensional subdomain $\Omega_1$: \begin{equation} \{\phi_i\}_{i=1}^N, \end{equation} and a basis for the second subdomain $\Omega_2$: \begin{equation} \{\psi_j\}_{j=1}^M. \end{equation} The basis for the entire domain can subsequently be constructed, and any function $u$ defined on $\Omega = \Omega_1 \times \Omega_2$ can be approximated via the product basis: \begin{equation} u \approx \sum_{i=1}^N\sum_{j=1}^M u_{i,j} \phi_i\, \psi_j. \end{equation} Note that in the following, we use boldface letters and symbols to denote vector quantities, e.g., $\dX = \dx \dy$. 
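The product basis can be made concrete with piecewise linear ($p=1$) hat functions on each factor. The following sketch is our own illustration (the node lists and the `hat` helper are assumptions, not FEniCS API):

```python
def hat(nodes, i, x):
    """P1 hat function centered at nodes[i], evaluated at x; these play
    the role of the one dimensional basis functions phi_i and psi_j."""
    xi = nodes[i]
    if i > 0 and nodes[i - 1] <= x <= xi:
        return (x - nodes[i - 1]) / (xi - nodes[i - 1])
    if i < len(nodes) - 1 and xi <= x <= nodes[i + 1]:
        return (nodes[i + 1] - x) / (nodes[i + 1] - xi)
    return 0.0

nodes1 = [0.0, 0.5, 1.0]  # nodes on Omega_1
nodes2 = [0.0, 0.5, 1.0]  # nodes on Omega_2

def u_h(coeffs, x, y):
    """Evaluate u(x, y) ~= sum_{i,j} u_ij * phi_i(x) * psi_j(y)."""
    return sum(coeffs[i][j] * hat(nodes1, i, x) * hat(nodes2, j, y)
               for i in range(len(nodes1)) for j in range(len(nodes2)))

# The product space reproduces u(x, y) = x + y exactly, with the
# coefficients u_ij given by the nodal values.
coeffs = [[xi + yj for yj in nodes2] for xi in nodes1]
assert abs(u_h(coeffs, 0.25, 0.75) - 1.0) < 1e-12
```

Note that the coefficient array $u_{i,j}$ naturally carries a two-index (tensor) structure, matching the double sum in the expansion above.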
To avoid raising the implementation complexity, we also assume in the numerical test cases (Section~\ref{sec:numerical_results}) that all physical parameters in the following model problems are constant and that the Dirichlet boundary conditions are homogeneous. However, it is possible to remove these assumptions by projecting the parameters and boundary conditions onto the appropriate function spaces. \subsection{Model Problem 1: N dimensional Poisson Equation} \label{sec:model_prob} As a first model problem to illustrate the methodology, we consider the Poisson equation: \begin{equation}\label{eqn:model_prob1} \begin{split} - \Delta \, u &= f \quad \textrm{in} \quad \Omega, \\ u &= u_D \quad \textrm{on} \quad \partial \Omega, \end{split} \end{equation} where the source $f$ is in $L^2(\Omega)$ and the boundary data $u_D$ is assumed to be sufficiently regular. To derive the weak formulation of~\eqref{eqn:model_prob1}, multiply both sides by a test function $v$ and integrate over $\Omega$: \begin{equation}\label{eqn:weak_form1} \int_{\Omega} - \Delta \, u\, v \dX = \int_{\Omega} f \, v \dX \quad \forall v \in L^2(\Omega). \end{equation} Integrating by parts on the left-hand side gives the following weak formulation (the boundary term $\int_{\partial \Omega} (\Nabla u \cdot \mathbf{n})\, v \, {\rm d s}$ vanishes because, with Dirichlet conditions imposed on the entire boundary, the test functions have zero trace): find $u \in \mathcal{U}(\Omega)$ such that \begin{equation} \label{eq:weak_form} \int_{\Omega} \Nabla u \cdot \Nabla v \dX = \int_{\Omega} f\,v \dX \quad \forall \quad v \in \mathcal{V}(\Omega), \end{equation} where the function space $\mathcal{V}(\Omega)$ is the Hilbert space with zero trace on the boundary, $H^1_0(\Omega)$, and $\mathcal{U}(\Omega)$ is $\mathcal{V}(\Omega)$ with a finite energy lift on the boundary so that the Dirichlet condition $u=u_D$ is satisfied. The weak formulation~\eqref{eq:weak_form} and its corresponding discretization are known to be well posed, see, e.g.,~\cite{becker1981finite}.
With a well posed weak formulation at hand, we can discretize it using a finite element basis. In this case, the functional setting dictates the use of a $C^0$ continuous polynomial basis for the classical FEM. Hence, we approximate the trial functions with the product basis: \begin{equation} u \approx \sum_{i=1}^N\sum_{j=1}^M u_{i,j} \, \phi_i \, \psi_j, \end{equation} and the test functions with their product basis: \begin{equation} v \approx \sum_{k=1}^N\sum_{l=1}^M v_{k,l} \, \gamma_k \, \beta_l. \end{equation} The weak form~\eqref{eq:weak_form} can then be discretized by substituting the product bases: \begin{equation} \displaystyle \int_{\Omega} (\Nabla \sum_{i=1}^N\sum_{j=1}^M u_{i,j} \, \phi_i \, \psi_j) \cdot ( \Nabla \sum_{k=1}^N\sum_{l=1}^M v_{k,l}\,\gamma_k\, \beta_l) \dX = \int_{\Omega} f\, (\sum_{k=1}^N\sum_{l=1}^M v_{k,l}\,\gamma_k\, \beta_l ) \dX. \end{equation} Since the test function is arbitrary and the weak form is bilinear, this implies: \begin{equation} \sum_{i=1}^N\sum_{j=1}^M \int_{\Omega} (\Nabla \, \phi_i \, \psi_j) \cdot ( \Nabla \,\gamma_k\, \beta_l) \dX \, u_{i,j} = \int_{\Omega} f\, (\,\gamma_k\, \beta_l ) \dX. \end{equation} The left-hand side is the contraction of a four-dimensional tensor with the two-dimensional tensor $u_{i,j}$; the four-dimensional tensor $A_{ijkl}$ has entries: \begin{equation} \int_{\Omega} \Nabla (\phi_i \,\psi_j) \cdot \Nabla (\gamma_k \,\beta_l) \, \dX. \end{equation} By the product rule, we have: \begin{equation} \label{eq:chain_ruled} \int_{\Omega} \Nabla (\phi_i \, \psi_j) \cdot \Nabla (\gamma_k \,\beta_l) \dX = \int_{\Omega}\, (\phi_i \, \Nabla \psi_j + \psi_j\, \Nabla \phi_i) \cdot (\gamma_k\, \Nabla \beta_l + \beta_l\, \Nabla \gamma_k) \dX.
\end{equation} By construction, the $\phi$'s and $\gamma$'s vary only in the first subdomain $\Omega_1$, whereas the $\psi$'s and $\beta$'s vary only in the second subdomain $\Omega_2$. Thus, the integral form~\eqref{eq:chain_ruled} can be simplified. To this end, we use the following notational convention: if the gradient operator on the entire domain $\Omega$ is $\Nabla$, we denote the gradients on the subdomains $\Omega_1$ and $\Omega_2$ by $\Nabla_1$ and $\Nabla_2$, respectively. Hence, by construction, $\Nabla = (\Nabla_1, \Nabla_2)$. Rewriting the gradients in~\eqref{eq:chain_ruled} then gives: \begin{equation} \label{eq:expansion} \begin{split} & \int_{\Omega} (\phi_i \, (\Nabla_1, \Nabla_2)\,\psi_j + \psi_j \,(\Nabla_1, \Nabla_2)\, \phi_i) \cdot (\gamma_k\, (\Nabla_1, \Nabla_2)\, \beta_l + \beta_l (\Nabla_1, \Nabla_2)\gamma_k)\dX, \\ \qquad &= \int_{\Omega} ( (\mathbf{0}, \phi_i\,\Nabla_2 \psi_j) + (\psi_j\,\Nabla_1\phi_i, \mathbf{0}) ) \cdot ( (\mathbf{0}, \gamma_k\, \Nabla_2\beta_l) + (\beta_l\,\Nabla_1\gamma_k, \mathbf{0}))\dX, \\ &= \int_{\Omega} (\psi_j\,\Nabla_1\phi_i, \phi_i\,\Nabla_2 \psi_j) \cdot (\beta_l\,\Nabla_1\gamma_k, \gamma_k\, \Nabla_2\beta_l) \dX, \\ \qquad &= \int_{\Omega} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k\,\psi_j\,\beta_l + \phi_i \,\gamma_k \,\Nabla_2 \psi_j \cdot \Nabla_2\,\beta_l \dX. \end{split} \end{equation} Since the domain $\Omega$ is a Cartesian product of the subdomains $\Omega_1, \Omega_2$, we can rewrite the last integral in~\eqref{eq:expansion} as: \begin{equation} \int_{\Omega_1} \int_{\Omega_2} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k\,\psi_j\,\beta_l + \phi_i\, \gamma_k\, \Nabla_2 \psi_j \cdot \Nabla_2\beta_l \dy \dx.
\end{equation} An application of Fubini's theorem gives: \begin{equation} \int_{\Omega_1} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k \dx \int_{\Omega_2} \psi_j\,\beta_l \dy + \int_{\Omega_1} \phi_i\, \gamma_k \dx \int_{\Omega_2} \Nabla_2 \psi_j \cdot \Nabla_2\beta_l \dy. \end{equation} Each of these factors is a matrix that can be assembled with FEniCS on the corresponding subdomain, and the sum can be written in terms of Kronecker products. To continue the discussion, we introduce the following notation: \begin{equation} \begin{split} K_{11} = \int_{\Omega_1} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k \dx, \quad K_{22} = \int_{\Omega_2} \psi_j\;\beta_l \dy,\\ K_{12} = \int_{\Omega_1} \phi_i\; \gamma_k \dx, \quad K_{21} = \int_{\Omega_2} \Nabla_2 \psi_j \cdot \Nabla_2\beta_l \dy, \end{split} \end{equation} where the first index indicates the subdomain (1 or 2 in this case) and the second indicates the term of the weak form (1 for the gradient term and 2 for the zeroth-order term). Hence, each local matrix can be constructed independently, and the global stiffness matrix can be defined as the sum of two Kronecker products. The Kronecker product allows us to represent the four-dimensional tensor $A_{ijkl}$ as a matrix built from smaller two-dimensional matrices: \begin{equation} A = K_{11} \otimes K_{22} + K_{12} \otimes K_{21}. \end{equation} A similar reasoning for the right hand side leads to: \begin{equation} \int_{\Omega} f_{dof}\;\gamma_k\; \beta_l \dX = f_{dof}\, F_1 \otimes F_2, \end{equation} where $f_{dof}$ denotes the pointwise values of $f$ at the $(k,l)$ coordinates of the global space. The forcing vectors are defined as: \begin{equation} F_{1} = \int_{\Omega_1} \gamma_k \dx, \quad F_{2} = \int_{\Omega_2} \beta_l \dy.
\end{equation} Finally, the entire system of equations becomes: \begin{equation} \label{eq:final_poisson_form} (K_{11} \otimes K_{22} + K_{12} \otimes K_{21})(U) = f_{dof}\, F_1 \otimes F_2, \end{equation} where $U$ is the vector of values at all degrees of freedom (d.o.f.) in the Cartesian product space for $u$. A subsequent application of boundary conditions to~\eqref{eq:final_poisson_form} leads to the final system of linear algebraic equations. \subsection{Model Problem 2: Arbitrary dimensional Space-Time Wave Equation}\label{sec:prob2} The second model problem we consider is the linear wave equation. We consider this problem to highlight the application of the methodology to transient problems where space-time finite elements are employed. Thus, we have the following model problem: \begin{equation}\label{eqn:model_prob2} \begin{split} \frac{\partial^2 u}{\partial t^2}-c^2\Delta u & = 0 \quad \textrm{in} \quad \Omega_T, \\ u &=u_D \quad \textrm{on} \quad \partial \Omega_T, \\ u &=u_{initial} \quad \textrm{on} \quad \Omega \times \{0\}, \end{split} \end{equation} where $c$ denotes the wave speed. In this problem the space-time domain $\Omega_T$ is the Cartesian product of a one-, two-, or three-dimensional spatial domain $\Omega$ and a one-dimensional temporal domain, i.e., $\Omega_T = \Omega \times (0,T)$. The weak form of~\eqref{eqn:model_prob2} is obtained by multiplying by a test function and integrating over the entire space-time domain: \begin{equation} \int_{\Omega_T} \frac{\partial^2 u}{\partial t^2} v - c^2\Delta u\, v \dX \, {\rm d t} = 0 \quad \forall \quad v \in L^2(\Omega_T), \end{equation} and subsequent integration by parts in space and time gives the following weak form.
Find $u \in \mathcal{U}(\Omega_T)$: \begin{equation}\label{eqn:weak_form2} \int_{\Omega_T} -\frac{\partial u}{\partial t} \frac{\partial v}{\partial t} + c^2\Nabla u \cdot \Nabla v \dX \, {\rm d t} + \int_{\partial \Omega_T|_{t=T}}\frac{\partial u}{\partial t}v \dX = 0 \quad \forall \quad v \in \mathcal{V}(\Omega_T), \end{equation} where we have applied Dirichlet conditions on the space-time boundary $\partial \Omega_T$, except at the final time boundary. $\mathcal{V}(\Omega_T)$ is the space of $H^1(\Omega_T)$ functions whose trace vanishes on $\partial \Omega_T$ except on $\partial \Omega_T|_{t=T}$, where the trace is left free, and $\mathcal{U}(\Omega_T)$ is the space of $H^1(\Omega_T)$ functions satisfying $u=u_D$ on $\partial \Omega_T$ except on the aforementioned part of the boundary $\partial \Omega_T|_{t=T}$. The corresponding discretization of~\eqref{eqn:weak_form2} using a product basis gives a system of equations very similar to that of the Poisson problem considered in Section~\ref{sec:model_prob}, with a slight variation. The term $K_{21}$ must include an additional boundary integral and becomes: \begin{equation} K_{21} = \int_0^T \frac{\partial \psi_j}{\partial t} \frac{\partial \beta_l}{\partial t} dt - \int_{\partial \Omega_2} \Nabla \psi_j \cdot \mathbf{n}\, \beta_l \, {\rm d s}, \end{equation} where the boundary contribution is nonzero only at $t=T$, since the test functions vanish at $t=0$. Consequently, the global system of equations becomes: \begin{equation} \label{eq:global_wave_sys} (c^2 K_{11} \otimes K_{22} - K_{12} \otimes K_{21})U = \mathbf{0}. \end{equation} Finally, application of boundary and initial conditions to~\eqref{eq:global_wave_sys} results in the final system of linear algebraic equations. \subsection{Model Problem 3: SUPG Stabilized Advection Dominated Advection Diffusion Equation}\label{sec:prob3} To highlight the versatility of our approach for non-standard FE techniques, we consider a PDE which is known to lead to stability issues in the Galerkin FE setting. Hence, we consider an advection-diffusion PDE in which advection dominates: \begin{equation}\label{eqn:model_problem3} \begin{split} -\kappa \, \Delta u + \mathbf{b} \cdot \Nabla u &= f \quad \textrm{in} \quad \Omega, \\ u &= u_D \quad \textrm{on} \quad \partial \Omega, \end{split} \end{equation} where $\Omega$ is defined as a Cartesian product of two lower dimensional Lipschitz domains: $\Omega = \Omega_1 \times \Omega_2$. Following the standard procedure of deriving integral formulations, we get the corresponding weak form: \begin{equation}\label{eqn:weak_form3} \int_{\Omega} \kappa \Nabla u \cdot \Nabla v + (\mathbf{b} \cdot \Nabla u)\, v \dX = \int_{\Omega}f\,v \dX \quad \forall v \in \mathcal{V}(\Omega). \end{equation} In problems where the advection term dominates the diffusion, that is, when the Peclet number $Pe = \frac{L \| \mathbf{b} \| }{\kappa} \gg 1$, where $L$ is a characteristic length, the standard Galerkin method applied to~\eqref{eqn:weak_form3} may result in a discretization that is unstable. \eirik{This issue of stability can be overcome by careful design of the FE mesh or through stabilization techniques that ensure satisfaction of the discrete \emph{inf-sup} condition.
Here, we consider the SUPG method introduced by Brooks and Hughes~\cite{brooks1982streamline}, since it is widely used and has well developed criteria for discrete stability.} The SUPG method leads to stable FE discretizations by augmenting the discretized weak form~\eqref{eqn:weak_form3} with a penalized residual, i.e., find $u_h \in \mathcal{U}_h(\Omega)$: \begin{equation}\label{eqn:weak_form3b} \kappa(\Nabla u_h , \Nabla v_h)_{\Omega} + (\mathbf{b} \cdot \Nabla u_h, v_h)_{\Omega} + (\underbrace{-\kappa\,\Delta u_h + \mathbf{b} \cdot \Nabla u_h -f}_{\textrm{Residual}}, \tau(\mathbf{b} \cdot \Nabla v_h) )_{\Omega} = (f,v_h)_{\Omega} \quad \forall v_h \in \mathcal{V}_h(\Omega), \end{equation} where the trial and test spaces consist of standard piecewise polynomials. Note that when the residual is zero, the stabilization term vanishes, i.e., the method is consistent with the weak form~\eqref{eqn:weak_form3}. As for the preceding model problems, we wish to construct the four-dimensional tensor $A_{ijkl}$ using the product bases of the test and trial spaces. Substitution of the product bases into~\eqref{eqn:weak_form3b} gives: \begin{equation} \kappa(\Nabla (\phi_i\,\psi_j) , \Nabla (\gamma_k\,\beta_l))_{\Omega} + (\mathbf{b} \cdot \Nabla (\phi_i\, \psi_j), \gamma_k \beta_l)_{\Omega} + (-\kappa\Delta (\phi_i\,\psi_j) + \mathbf{b} \cdot \Nabla (\phi_i\,\psi_j) -f, \tau[\mathbf{b} \cdot \Nabla(\gamma_k\,\beta_l)] )_{\Omega} = (f,\,\gamma_k\, \beta_l)_{\Omega}.
\end{equation} Application of the product rule gives: \begin{equation} \begin{split} \kappa(\psi_j \Nabla \phi_i + \phi_i \Nabla \psi_j , \beta_l \Nabla \gamma_k + \gamma_k \Nabla \beta_l)_{\Omega} + (\mathbf{b} \cdot (\psi_j \Nabla \phi_i +\phi_i \Nabla \psi_j), \gamma_k \beta_l)_{\Omega} +\\ (-\kappa(\psi_j \Delta (\phi_i) + \phi_i \Delta (\psi_j)) + \mathbf{b} \cdot (\psi_j \Nabla (\phi_i) + \phi_i \Nabla(\psi_j)) -f, \tau[\mathbf{b} \cdot (\beta_l \Nabla (\gamma_k) + \gamma_k \Nabla (\beta_l))] )_{\Omega} = (f, \gamma_k \beta_l)_{\Omega}, \end{split} \end{equation} which we expand using Fubini's theorem: \begin{equation} \label{eq:fubinisplit} \begin{split} \kappa \left[ (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1}(\psi_j,\beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\right] + (\mathbf{b} \cdot \Nabla \phi_i, \gamma_k)_{\Omega_1} (\psi_j,\beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2} - \\ \kappa \tau [(\Delta \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1} (\psi_j, \beta_l)_{\Omega_2} + (\Delta \phi_i , \gamma_k)_{\Omega_1}(\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} + (\phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\Delta \psi_j, \beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}] + \\ \tau [ (\mathbf{b} \cdot \Nabla \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\psi_j,\beta_l)_{\Omega_2} + (\mathbf{b} \cdot \Nabla \phi_i, \gamma_k)_{\Omega_1}(\psi_j , \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} + \\ (\phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j, \beta_l )_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}] \\ = f_{dof}(\gamma_k)_{\Omega_1}(\beta_l)_{\Omega_2} - \tau f_{dof}(\mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\beta_l)_{\Omega_2} - \tau f_{dof}
(\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}, \end{split} \end{equation} where $f_{dof}$ represents the value of $f$ at each corresponding degree of freedom. The discrete weak form in~\eqref{eq:fubinisplit} can be represented as a global stiffness matrix composed of the following smaller submatrices: \begin{equation} \begin{split} K_{11} = (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1} \quad & K_{21} = ( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\\ K_{12} = (\phi_i,\gamma_k)_{\Omega_1} \quad & K_{22} = (\psi_j, \beta_l)_{\Omega_2}\\ K_{13} = (\mathbf{b} \cdot \Nabla \phi_i,\gamma_k)_{\Omega_1}\quad & K_{23} = (\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2}\\ K_{14} = (\Delta \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}\quad & K_{24} = (\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ K_{15} = (\Delta \phi_i , \gamma_k)_{\Omega_1} \quad & K_{25} = (\Delta \psi_j , \beta_l)_{\Omega_2} \\ K_{16} = (\phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}\quad & K_{26} = (\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ K_{17} = (\mathbf{b} \cdot \Nabla \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}\quad & K_{27} = (\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ F_{1} = (\gamma_k)_{\Omega_1} \quad & F_{2} = (\beta_l)_{\Omega_2} \\ F_{3} = (\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} \quad & F_{4} = (\mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}. \end{split} \end{equation} Then the global system of equations to solve is: \begin{equation} \begin{split} [ \kappa (K_{11} \otimes K_{22} + K_{12}\otimes K_{21}) + K_{13} \otimes K_{22} + K_{12} \otimes K_{23} - \kappa \tau(K_{14} \otimes K_{22} + K_{15} \otimes K_{26} + K_{16} \otimes K_{25} + K_{12} \otimes K_{24} ) + \\ \tau ( K_{17} \otimes K_{22} + K_{13} \otimes K_{26} + K_{16} \otimes K_{23} + K_{12} \otimes K_{27} )]\mathbf{U} = f_{dof} F_1 \otimes F_2 - \tau f_{dof}(F_4 \otimes F_2 + F_1 \otimes F_3).
\end{split} \end{equation} One advantage of this method is that it is possible to align one subdomain with the velocity vector. In the special case when $\mathbf{b}$ is non-zero only along one subdomain (e.g., $\Omega_2$), the weak form~\eqref{eq:fubinisplit} reduces to: \begin{equation}\label{eqn:weak_form3c} \begin{split} \kappa \left[ (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1}(\psi_j,\beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\right]+ (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2} - \\ \kappa \tau\left[ (\Delta \phi_i , \gamma_k)_{\Omega_1}(\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\right] \\ + \tau \left[ (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\right] = f_{dof}(\gamma_k)_{\Omega_1}(\beta_l)_{\Omega_2} - \tau f_{dof} (\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}.
\end{split} \end{equation} Hence, the local matrices are defined as: \begin{equation} \begin{split} K_{11} = (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1} \quad & K_{21} = ( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\\ K_{12} = (\phi_i,\gamma_k)_{\Omega_1}\quad & K_{22} = (\psi_j, \beta_l)_{\Omega_2}\\ K_{13} = (\Delta \phi_i , \gamma_k)_{\Omega_1} \quad & K_{23} = (\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2}\\ K_{24} = (\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\quad & K_{25} = (\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ K_{26} = (\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\quad & F_{1} = (\gamma_k)_{\Omega_1} \\ F_{2} = (\beta_l)_{\Omega_2} \quad & F_{3} = (\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}, \end{split} \end{equation} and substitution into~\eqref{eqn:weak_form3c} yields the global system: \begin{equation} \label{eqn:discrete_AD_1direction} (\kappa K_{11} \otimes K_{22} + \kappa K_{12} \otimes K_{21} + K_{12} \otimes K_{23} -\kappa \tau K_{13} \otimes K_{24} - \kappa \tau K_{12} \otimes K_{25} + \tau K_{12} \otimes K_{26})\mathbf{U} = f_{dof} F_1 \otimes F_2 - \tau f_{dof} F_1 \otimes F_3. \end{equation} \section{Numerical Verifications}\label{sec:numerical_results} For each of the three model problems introduced in Section~\ref{sec:methods}, we consider and implement a specific test case in FEniCS. To verify the developed framework, we investigate the $h$-convergence properties of the implemented methods by examining the rate of convergence of the FE solutions. The test and trial spaces in all cases consist of continuous Lagrange polynomials of degree 1. \subsection{Case 1: 4-D Poisson Equation} We first consider the Poisson PDE in a four-dimensional space. We define the computational domain as the Cartesian product of two unit squares, i.e., $\Omega = ((0,1)\times(0,1)) \times ((0,1) \times (0,1))$.
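Before presenting the verification results, the Kronecker assembly of~\eqref{eq:final_poisson_form} can be illustrated on a lower-dimensional analogue, a product of two unit intervals giving a 2D Poisson problem. The sketch below is our own SciPy illustration with hand-coded uniform-grid P1 matrices (it is not the FEniCS implementation, and it uses a consistent mass-matrix load in place of the lumped $f_{dof}\, F_1 \otimes F_2$ form):

```python
import numpy as np
from scipy.sparse import diags, kron
from scipy.sparse.linalg import spsolve

def p1_matrices(n):
    """1D P1 stiffness (K) and mass (M) matrices on (0,1) with n interior
    nodes and homogeneous Dirichlet boundary conditions (uniform grid)."""
    h = 1.0 / (n + 1)
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h
    M = diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n)) * (h / 6.0)
    return K, M

n = 19
K, M = p1_matrices(n)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
X, Y = np.meshgrid(x, x, indexing="ij")

# Global matrix: A = K11 (x) K22 + K12 (x) K21 with K11 = K21 = stiffness
# and K12 = K22 = mass, exactly the structure of the Poisson system.
A = (kron(K, M) + kron(M, K)).tocsc()

# Right-hand side for f = 2 pi^2 sin(pi x) sin(pi y), consistent load.
f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
b = kron(M, M) @ f.ravel()

u = spsolve(A, b).reshape(n, n)
err = np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)).max()
```

The same pattern extends to the 4D case: only the two factor matrices change, while the Kronecker structure of the global matrix is untouched.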
We select the forcing function $f = 4\pi^2 u_{exact}$ and the exact solution $u_{exact} = \prod^4_{i=1} \sin(\pi x_i)$, where $x_i$ is a coordinate in the domain $\Omega$. In Table~\ref{tab:results_poisson}, the convergence data for the four-dimensional Poisson problem are listed. Note that the convergence of the FE solution is optimal, as the rates of convergence of the root mean square error (RMSE) and the $l_{\infty}$ norm approach $\mathcal{O}(h^{p+1})$. \begin{table}[h!] \centering \caption{\label{tab:results_poisson} Error estimation results for the 4D Poisson problem.} \begin{tabular}{@{}llllll@{}} \toprule {dofs \hspace{6mm}} & { h \hspace{6mm}} & {$l_{\infty}$ \hspace{6mm}} & { RMSE \hspace{6mm}} & {RMSE rate} & {$l_{\infty}$ rate} \\ \midrule \midrule 256 & 0.333 & 1.40E-01 & 3.16E-02 & - & - \\ 1296 & 0.200 & 7.88E-02 & 1.61E-02 & 1.44 & 1.13 \\ 2401 & 0.167 & 6.55E-02 & 1.21E-02 & 1.57 & 1.01 \\ 4096 & 0.143 & 4.50E-02 & 9.39E-03 & 1.65 & 2.44 \\ 6561 & 0.125 & 3.76E-02 & 7.48E-03 & 1.70 & 1.34\\ 14641 & 0.100 & 2.42E-02 & 5.06E-03 & 1.75 & 1.96 \\ 20736 & 0.091 & 1.95E-02 & 4.26E-03 & 1.79 & 2.31 \\ 28561 & 0.083 & 1.69E-02 & 3.64E-03 & 1.81 & 1.61 \\ 38416 & 0.077 & 1.41E-02 & 3.15E-03 & 1.83 & 2.26 \\ 50625 & 0.071 & 1.25E-02 & 2.74E-03 & 1.84 & 1.67 \\ \bottomrule \end{tabular} \end{table} \subsection{Case 2: 2D Space-Time Wave Equation} As a verification of the space-time wave model problem~\eqref{eqn:model_prob2}, we select the space-time domain as the Cartesian product of a unit square spatial domain $\Omega$ and a time interval, i.e., $\Omega_T = ((0,1) \times (0,1)) \times (0,T)$. The wave propagation speed is $c=1$, and we consider the manufactured solution $u_{exact} = \sin(x-ct) + \sin(y-ct)$.
This solution is used to ascertain the boundary and initial conditions needed to solve the resulting system of equations. In Table~\ref{tab:wave_spacetime}, the convergence results are presented along with the time element size, denoted by dt, and the space-time CFL number. The RMSE is observed to converge at the expected optimal rate, whereas the $l_{\infty}$ error exhibits a reduced rate on the finer meshes. \begin{table}[h!] \centering \caption{\label{tab:wave_spacetime} Results for the Wave Space-Time problem.} \begin{tabular}{@{}llllllll@{}} \toprule {dofs \hspace{6mm}} & { h \hspace{6mm}} & { dt \hspace{6mm}} & { CFL \hspace{6mm}} & {$l_{\infty}$ \hspace{6mm}} & { RMSE \hspace{15mm}} & {RMSE rate} & {$l_{\infty}$ rate} \\ \midrule \midrule 200 & 0.250 & 0.14 & 0.57 & 2.25E-04 & 5.84E-05 & - & - \\ 1215 & 0.125 & 0.07 & 0.57 & 5.06E-05 & 1.51E-05 & 1.95 & 2.22 \\ 3718 & 0.083 & 0.05 & 0.57 & 2.53E-05 & 7.01E-06 & 1.89 & 1.71 \\ 8381 & 0.063 & 0.04 & 0.57 & 1.31E-05 & 3.95E-06 & 1.99 & 1.31 \\ \bottomrule \end{tabular} \end{table} \subsection{Case 3: SUPG Stabilized Advection Dominated Advection Diffusion Equation} As a final numerical verification, we consider a special case of the advection-dominated advection-diffusion equation. In particular, we consider the case in which the advection acts in a single direction aligned with one subdomain, see~\eqref{eqn:discrete_AD_1direction}. We consider a case where the domain $\Omega$ is the Cartesian product of two unit intervals, $\Omega = (0,1) \times (0,1)$, the diffusivity constant is $\kappa =\frac{1}{100}$, and the advection vector is $\mathbf{b} = (0,1)$.
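The stabilization parameter $\tau$ in~\eqref{eqn:weak_form3b} has been left unspecified so far. A common elementwise choice from the streamline-upwind literature, sketched here for reference (this particular formula is an assumption on our part, not a claim about the implementation), is $\tau = \frac{h}{2\|\mathbf{b}\|}\big(\coth(Pe_h) - \frac{1}{Pe_h}\big)$ with local Peclet number $Pe_h = \frac{\|\mathbf{b}\| h}{2\kappa}$:

```python
import numpy as np

def supg_tau(h, b_norm, kappa):
    """Classical SUPG stabilization parameter (Brooks-Hughes style):
    tau = h/(2|b|) * (coth(Pe_h) - 1/Pe_h),  Pe_h = |b| h / (2 kappa)."""
    pe = b_norm * h / (2.0 * kappa)
    return h / (2.0 * b_norm) * (1.0 / np.tanh(pe) - 1.0 / pe)

# Diffusion-dominated limit: tau -> approx h^2/(12 kappa), stabilization off.
tau_diffusive = supg_tau(h=0.1, b_norm=1.0, kappa=10.0)

# Advection-dominated limit: tau -> approx h/(2|b|) = 0.05.
tau_advective = supg_tau(h=0.1, b_norm=1.0, kappa=1e-6)
```

This choice smoothly switches the stabilization off as the local Peclet number drops, so the scheme reverts to standard Galerkin in diffusion-dominated regions.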
The analytic solution in this case is inspired by the work of Egger and Sch{\"o}berl~\cite{egger2010hybrid}: \begin{equation} \label{eq:ADR_exact} u_{exact} = (-4(x-0.5)^2+1)\left[ y + \frac{e^{\frac{1}{\kappa} \cdot b_y \cdot y} - 1 }{1 - e^{\frac{1}{\kappa} \cdot b_y}}\right], \end{equation} which implies that the forcing term must be $f = -\kappa\,\Delta u_{exact} + \mathbf{b} \cdot \Nabla u_{exact}$, and we enforce the corresponding homogeneous Dirichlet conditions on the boundary $\partial \Omega$. In Figures~\ref{fig:cartesian_ADR} and~\ref{fig:analytic_ADR}, we show the approximate FE solution and the analytic solution, respectively. As expected, the SUPG stabilization results in a stable solution on this relatively coarse FE mesh. In Table~\ref{tab:results_ADR}, the convergence results for this final case are presented. Both the RMSE and the $l_{\infty}$ error converge at the expected optimal rates. \begin{table}[h] \centering \caption{\label{tab:results_ADR} Error estimation results for advection dominated advection diffusion problem.} \begin{tabular}{@{}llllll@{}} \toprule {dofs \hspace{6mm}} & { h \hspace{6mm}} & {$l_{\infty}$ \hspace{6mm}} & { RMSE \hspace{15mm}} & {RMSE rate} & {$l_{\infty}$ rate} \\ \midrule \midrule 9 & 0.50 & 2.50E-01 & 8.33E-02 & - & - \\ 16 & 0.33 & 1.19E-01 & 4.65E-02 & 1.44 & 1.84 \\ 25 & 0.25 & 7.38E-02 & 3.01E-02 & 1.51 & 1.65 \\ 36 & 0.20 & 4.79E-02 & 2.08E-02 & 1.65 & 1.93 \\ 49 & 0.17 & 3.33E-02 & 1.52E-02 & 1.73 & 1.99 \\ 256 & 0.07 & 6.41E-03 & 2.85E-03 & 1.83 & 1.80 \\ 676 & 0.04 & 2.52E-03 & 1.07E-03 & 1.92 & 1.83 \\ 2601 & 0.02 & 6.50E-04 & 2.74E-04 & 1.96 & 1.96 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{Analytic_256_dof.png} \caption{Analytic solution projected onto the FE mesh with 256 dofs.} \label{fig:analytic_ADR} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.5\textwidth]{Cartesian_Product_256_dof.png} \caption{Cartesian product FE solution at 256 dofs.} \label{fig:cartesian_ADR} \end{figure} \section{Conclusions}\label{sec:Conclusions} In this paper, we have introduced and implemented tensor product FE routines for high dimensional problems in FEniCS. This methodology extends the FEniCS library to domains with more than three dimensions, so long as they are Cartesian products of subdomains of dimension three or lower. To verify the developed methodology, we considered three test cases utilizing classical and stabilized FE methods. For each test case, we observed the expected convergence to the analytic solutions with respect to grid refinement in both the RMSE and $l_{\infty}$ norms. We considered only linear PDEs here, since this allows for the explicit construction of a single linear system of algebraic equations; future studies should investigate extensions to nonlinear PDEs. Additionally, the global system of equations was solved naively by explicitly constructing and inverting the global stiffness matrix. However, the global matrix is sparse with highly structured blocks, which should allow for faster solvers that would greatly reduce run time. We refer to existing works~\cite{BIALECKI1993369,Gao_thesis,Bank1978}, where related problems were considered, and leave the consideration of such solvers for future studies. Further extensions to mixed FE methods, such as those discussed in the book by Brezzi \emph{et al.}~\cite{brezzi2012mixed}, could also be pursued. Finally, FE methods utilizing discontinuous test/trial spaces could be considered due to their extensive use in engineering applications. Furthermore, full integration of this method into the FEniCS API would be valuable both for simplicity of future implementations and for performance.
The code used in the numerical studies is publicly available and can be found on GitHub at \\ \url{https://github.com/Markloveland/Cartesian_Product_FEM} \section{Acknowledgements} Author Loveland has been supported by the CSEM Fellowship from the Oden Institute at the University of Texas at Austin. Authors Loveland, Valseth, and Dawson have been supported by the United States National Science Foundation - NSF PREEVENTS Track 2 Program, under NSF Grant Number 1855047, and the Department of Homeland Security Coastal Resilience Center research project ``Accurate and Fast Wave Modeling and Coupling with ADCIRC''. Author Lukac has been supported by the University of Oregon. \bibliographystyle{elsarticle-num} \section{Introduction} In the last decade, open source libraries with high level APIs that automate the process of solving partial differential equations (PDEs) using the finite element method (FEM) have become valuable tools in computational research. Some prominent libraries of this kind include Firedrake~\cite{rathgeber2016firedrake}, deal.II~\cite{bangerth2007deal}, MFEM~\cite{anderson2021mfem}, and FEniCS~\cite{alnaes2015fenics}; FEniCS is one of the most widely used. FEniCS, along with most other FEM libraries, only handles problems of up to three dimensions (GetFEM~\cite{renard2020getfem} is a notable exception). FEniCS specifically supports unstructured meshes with corresponding basis functions in one, two, and three dimensions. Extending FEniCS to solve problems in higher dimensions is of interest for applications such as spectral wind wave models, spatial population genetics, quantum mechanics, and even 3D space-time problems. FEniCS does not contain support for fully unstructured meshes in dimensions higher than three.
However, at the expense of losing a fully unstructured mesh, a higher dimensional space can be discretized with the available tools in FEniCS if it is the Cartesian product of two lower dimensional (three or lower) spaces that can themselves be unstructured. It is well known that in a Cartesian product space, a finite element basis, called the product basis, can be constructed that spans the function space of the full domain~\cite{brenner2008mathematical,quarteroni2010numerical,ern2013theory}. This research seeks to use the infrastructure of an FEM library, such as FEniCS, in the lower dimensional spaces to construct a basis for this high dimensional space. Using the FEM to discretize a structured domain via a product basis has been done before without the use of FEM libraries such as FEniCS. In 1978, Bank~\cite{Bank1978} used tensor product finite elements to solve a 2D Poisson problem on a structured grid in order to construct a faster solver. In that study, the domain was the Cartesian product of two unit intervals, resulting in a uniform square mesh, and the tensor product of one dimensional quadratic and cubic basis functions was used to construct the polynomial basis of the full 2D domain. Baker later performed numerical tests as well as a stability analysis of a tensor product finite element method with application to convection-dominated fluid flow problems~\cite{BAKER1981215}. The stability analysis showed that the basic algorithm is spatially fourth-order accurate in its most elementary embodiment, and the numerical experiments on convection-dominated model test problems confirmed the viability of the developed algorithm and its tensor product formulation. Recently, Du \emph{et al.}~\cite{DU2013181} used tensor product finite elements to construct a fast solver for an electromagnetics scattering problem of a cavity.
In Firedrake~\cite{rathgeber2016firedrake}, there exists a capability to create tensor product finite elements, but only up to three dimensions~\cite{mcrae2016automated}. A package built on deal.II called hyper.deal~\cite{munch2020hyperdeal} has the capability of creating tensor product finite elements in up to six dimensions. deal.II differs from a library such as FEniCS in that deal.II uses quadrilateral elements in two dimensions whereas FEniCS uses triangles. Here, we present a general software framework built on the components of the FEniCS library, as well as other open source Python libraries, to construct high dimensional meshes and corresponding FE discretizations. Following this introduction, the general set of problems on which this paper focuses is defined and the notation for the product basis is introduced in Section~\ref{sec:methods}.
Lastly, conclusions and recommendations for future work are given in Section~\ref{sec:Conclusions}. \section{Methods} \label{sec:methods} To present the proposed methodology and algorithms, we consider the following class of problems, i.e., PDEs: \begin{equation*} \begin{split} \mathscr{L} u = f \quad \textrm{in} \quad \Omega = \Omega_1 \times \Omega_2, \end{split} \end{equation*} where $\mathscr{L}$ is a linear differential operator, $f$ is a forcing function, and the domain $\Omega$ is defined as a Cartesian product between two lower dimensional Lipschitz domains. For example, if the global domain $\Omega \subset \mathbb{R}^2$, then $\Omega$ can be defined by the Cartesian product of two intervals $\Omega_1\subset \mathbb{R}$ and $\Omega_2 \subset \mathbb{R}$. Hence, we consider general domains of the form $\Omega = \Omega_1 \times \Omega_2 $, see Figure~\ref{fig:cartesian_illustration} for an illustration. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{CartesianProductIllust.png} \caption{Example of a global domain $\Omega$ that is a Cartesian product of two lower dimensional subdomains $\Omega_1, \Omega_2$.} \label{fig:cartesian_illustration} \end{figure} FEniCS can discretize up to three dimensional objects, hence, in practice this framework can be used to define domains that are Cartesian products of up to six dimensions. Furthermore, this process can be done iteratively to yield even higher dimensional domains as $\Omega_1$ itself could be a product of two other spaces and so on. We note that in this presentation, we consider only symmetric functional settings which admit well posed Galerkin FE discretizations. It can be shown that if a function $u$ is defined on a Cartesian product domain $\Omega = \Omega_1 \times \Omega_2$ then we can construct a basis of a polynomial space to be used for the FEM \eirik{ by exploiting this underlying geometric structure, see, e.g.,}~\cite{quarteroni2010numerical,brenner2008mathematical,ern2013theory}.
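As a concrete illustration of this geometric structure, the vertex coordinates of the product domain can be generated directly from the two subdomain meshes. The sketch below is our own minimal NumPy helper, not part of the presented framework; the input coordinate arrays stand in for what a mesh library such as FEniCS would provide (e.g., via a mesh's coordinate array):

```python
import numpy as np

def product_nodes(nodes1, nodes2):
    """Cartesian product of two node sets: every node of the first
    subdomain is paired with every node of the second. The row-major
    ordering matches the Kronecker (vec) convention for the d.o.f.s."""
    n1, d1 = nodes1.shape
    n2, d2 = nodes2.shape
    left = np.repeat(nodes1, n2, axis=0)   # shape (n1*n2, d1)
    right = np.tile(nodes2, (n1, 1))       # shape (n1*n2, d2)
    return np.hstack([left, right])        # shape (n1*n2, d1+d2)

# Example: a 2D triangle vertex set times a 1D interval mesh -> 3D nodes
tri_nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
interval_nodes = np.array([[0.0], [0.5], [1.0]])
nodes3d = product_nodes(tri_nodes, interval_nodes)
```

The same pairing, applied once more to the result, yields the iterated products of more than two subdomains mentioned above.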
\eirik{Consider} a basis for a polynomial space for the first lower dimensional subdomain $\Omega_1$: \begin{equation} \{\phi_i\}_{i=1}^N, \end{equation} and a basis for the second subdomain $\Omega_2$: \begin{equation} \{\psi_j\}_{j=1}^M. \end{equation} The basis for the entire domain can subsequently be constructed, and any function $u$ defined on $\Omega = \Omega_1 \times \Omega_2$ can be approximated via the product basis: \begin{equation} u \approx \sum_{i=1}^N\sum_{j=1}^M u_{i,j} \phi_i\, \psi_j. \end{equation} \eirik{Note that in the following,} we use boldface letters and symbols to denote vector quantities, e.g., $\dX = \dx \dy$. \eirik{To avoid raising complexity in implementation, we also assume in the numerical test cases (Section~\ref{sec:numerical_results}) that all physical parameters in the following model problems are constant and that the Dirichlet boundary conditions are homogeneous.} \matt{However, it is possible to remove these assumptions by projecting the parameters and boundary conditions to the appropriate function space.} \subsection{Model Problem 1: N dimensional Poisson Equation} \label{sec:model_prob} As a first model problem to illustrate the methodology, we consider the Poisson equation: \begin{equation}\label{eqn:model_prob1} \begin{split} - \Delta \, u &= f \quad \textrm{in} \quad \Omega, \\ u &= u_D \quad \textrm{on} \quad \partial \Omega, \end{split} \end{equation} where the source $f$ is in $L^2(\Omega)$ and the boundary data $u_D$ is assumed to be sufficiently regular.
To define the weak formulation for~\eqref{eqn:model_prob1}, multiply both sides by a test function $v$ in $L^2(\Omega)$ and integrate over $\Omega$: \begin{equation}\label{eqn:weak_form1} \int_{\Omega} - \Delta \, u\, v \dX = \int_{\Omega} f \, v \dX \quad \forall v \in L^2(\Omega). \end{equation} Integrating by parts on the left side (assuming Dirichlet boundary conditions on the entire boundary) gives the following weak formulation\matt{: find $u \in \mathcal{U}(\Omega)$ such that} \begin{equation} \label{eq:weak_form} \int_{\Omega} \Nabla u \cdot \Nabla v \dX = \int_{\Omega} f\,v \dX \quad \forall \quad v \in \mathcal{V}(\Omega), \end{equation} where the function space $\mathcal{V}(\Omega)$ is the Sobolev space $H^1_0(\Omega)$ of functions with zero trace on the boundary and $\mathcal{U}(\Omega)$ is $\mathcal{V}(\Omega)$ with a finite energy lift on the boundary so that the Dirichlet condition $u=u_D$ is satisfied. The weak formulation in~\eqref{eq:weak_form}, and its corresponding discretization, is known to be well posed, see, e.g.,~\cite{becker1981finite}. With a well posed weak formulation at hand, we can discretize this weak form using a finite element basis. In this case, the functional setting dictates the use of a $C^0$ continuous \eirik{polynomial} basis for the classical FEM. Hence, we approximate the trial functions with the product basis: \begin{equation} u \approx \sum_{i=1}^N\sum_{j=1}^M u_{i,j} \, \phi_i \, \psi_j, \end{equation} and the test functions by their product basis: \begin{equation} v \approx \sum_{k=1}^N\sum_{l=1}^M v_{k,l} \, \gamma_k \, \beta_l. \end{equation} The weak form from~\eqref{eq:weak_form} can then be discretized by substitution of the product bases: \begin{equation} \displaystyle \int_{\Omega} (\Nabla \sum_{i=1}^N\sum_{j=1}^M u_{i,j} \, \phi_i \, \psi_j) \cdot ( \Nabla \sum_{k=1}^N\sum_{l=1}^M v_{k,l}\,\gamma_k\, \beta_l) \dX = \int_{\Omega} f\, (\sum_{k=1}^N\sum_{l=1}^M v_{k,l}\,\gamma_k\, \beta_l ) \dX.
\end{equation} Due to the arbitrariness of the test function and the bilinearity of the weak form, this implies the following form: \begin{equation} \sum_{i=1}^N\sum_{j=1}^M \int_{\Omega} (\Nabla \, \phi_i \, \psi_j) \cdot ( \Nabla \,\gamma_k\, \beta_l) \dX \, u_{i,j} = \int_{\Omega} f\, (\,\gamma_k\, \beta_l ) \dX. \end{equation} The left hand side \eirik{results} in a \eirik{product between a} 4 dimensional tensor \eirik{and} a 2 dimensional tensor $u_{i,j}$. \eirik{Consequently,} the 4 dimensional tensor $A_{ijkl}$ will be of the form: \begin{equation} \int_{\Omega} \Nabla \phi_i \,\psi_j \cdot \Nabla \gamma_k \,\beta_l \, \dX. \end{equation} By the product rule we have: \begin{equation} \label{eq:chain_ruled} \int_{\Omega} \Nabla \phi_i \, \psi_j \cdot \Nabla \gamma_k \,\beta_l \dX = \int_{\Omega}\, (\phi_i \, \Nabla \psi_j + \psi_j\, \Nabla \phi_i) \cdot (\gamma_k\, \Nabla \beta_l + \beta_l\, \Nabla \gamma_k) \dX. \end{equation} By construction, \eirik{the} $\phi$\eirik{'s} and $\gamma$\eirik{'s} only vary in the first subdomain $\Omega_1$ whereas \eirik{the} $\psi$\eirik{'s} and $\beta$\eirik{'s} only vary in the second subdomain $\Omega_2$. \eirik{Thus, the integral form~\eqref{eq:chain_ruled} can be simplified. To this end, we use the following notation convention:} if the gradient operator on the entire domain \eirik{$\Omega$} is $\Nabla$, define the gradients on the subdomains \eirik{$\Omega_1$ and $\Omega_2$} \eirik{as} $\Nabla_1$ \eirik{and} $\Nabla_2$\eirik{, respectively}. \eirik{Hence, by} construction $\Nabla = (\Nabla_1, \Nabla_2)$.
Rewriting \eirik{ the gradients in~\eqref{eq:chain_ruled} then} gives: \begin{equation} \label{eq:expansion} \begin{split} & \int_{\Omega} (\phi_i \, (\Nabla_1, \Nabla_2)\,\psi_j + \psi_j \,(\Nabla_1, \Nabla_2)\, \phi_i) \cdot (\gamma_k\, (\Nabla_1, \Nabla_2)\, \beta_l + \beta_l (\Nabla_1, \Nabla_2)\gamma_k)\dX, \\ \qquad &= \int_{\Omega} ( (\mathbf{0}, \phi_i\,\Nabla_2 \psi_j) + (\psi_j\,\Nabla_1\phi_i, \mathbf{0}) ) \cdot ( (\mathbf{0}, \gamma_k\, \Nabla_2\beta_l) + (\beta_l\,\Nabla_1\gamma_k, \mathbf{0}))\dX, \\ &= \int_{\Omega} (\psi_j\,\Nabla_1\phi_i, \phi_i\,\Nabla_2 \psi_j) \cdot (\beta_l\,\Nabla_1\gamma_k, \gamma_k\, \Nabla_2\beta_l) \dX, \\ \qquad &= \int_{\Omega} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k\,\psi_j\,\beta_l + \phi_i \,\gamma_k \,\Nabla_2 \psi_j \cdot \Nabla_2\,\beta_l \dX. \end{split} \end{equation} Since \eirik{the} domain $\Omega$ is a Cartesian product of \eirik{the} subdomains $\Omega_1, \Omega_2$ we can rewrite the \eirik{last} integral \eirik{in~\eqref{eq:expansion}, as}: \begin{equation} \int_{\Omega_1} \int_{\Omega_2} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k\,\psi_j\,\beta_l + \phi_i\, \gamma_k\, \Nabla_2 \psi_j \cdot \Nabla_2\beta_l \dy \dx. \end{equation} \eirik{An application of} Fubini's theorem \eirik{gives}: \begin{equation} \int_{\Omega_1} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k \dx \int_{\Omega_2} \psi_j\,\beta_l \dy + \int_{\Omega_1} \phi_i\, \gamma_k \dx \int_{\Omega_2} \Nabla_2 \psi_j \cdot \Nabla_2\beta_l \dy, \end{equation} These can be written as Kronecker products of matrices, \eirik{e.g., stiffness matrices computed with} FEniCS. 
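To make this separated structure concrete, the following self-contained sketch (our own toy example, independent of FEniCS) assembles the 2D Poisson system as a sum of Kronecker products of hand-coded 1D P1 stiffness and mass matrices on the unit square, with homogeneous Dirichlet data; in the actual framework, the 1D factors would instead be assembled by FEniCS on each subdomain:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def p1_matrices(n):
    """1D P1 stiffness (K) and mass (M) matrices on (0,1) with
    homogeneous Dirichlet conditions; n interior nodes, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    ones = np.ones(n - 1)
    K = (1.0 / h) * sp.diags([-ones, 2.0 * np.ones(n), -ones], [-1, 0, 1])
    M = (h / 6.0) * sp.diags([ones, 4.0 * np.ones(n), ones], [-1, 0, 1])
    return K.tocsr(), M.tocsr()

n = 20
K, M = p1_matrices(n)
# Global operator for -Laplace(u) = f on (0,1) x (0,1):
# the x-stiffness pairs with the y-mass and vice versa.
A = sp.kron(K, M) + sp.kron(M, K)

x = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior nodes
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f_vals = 2.0 * np.pi**2 * u_exact        # f = -Laplace(u_exact)

# Load vector: interpolate f at the nodes, apply the Kronecker mass matrix
F = sp.kron(M, M) @ f_vals.ravel()
U = spla.spsolve(A.tocsc(), F)
max_err = np.abs(U - u_exact.ravel()).max()
```

With $p=1$ elements the nodal error is expected to shrink at second order under mesh refinement, consistent with the rates reported later in the paper.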
\eirik{To continue the discussion, we introduce} the following notation: \begin{equation} \begin{split} K_{11} = \int_{\Omega_1} \Nabla_1\phi_i \cdot \Nabla_1\gamma_k \dx, \quad K_{22} = \int_{\Omega_2} \psi_j\;\beta_l \dy,\\ K_{12} = \int_{\Omega_1} \phi_i\; \gamma_k \dx, \quad K_{21} = \int_{\Omega_2} \Nabla_2 \psi_j \cdot \Nabla_2\beta_l \dy, \end{split} \end{equation} \eirik{where} the first index indicates subdomain (\eirik{i.e.,} 1 or 2 \eirik{in this case}) and the second indicates the term from the weak form (1 is the first order operator and 2 the \eirik{$0^{th}$} order). \eirik{Hence, each local matrix $K_{ij}$ can be constructed independently, and the global stiffness matrix $A_{ijkl}$ can be defined by the sum of 2 Kronecker products.} The Kronecker product allows us to represent \eirik{the matrix of a} 4D tensor \eirik{by smaller 2D matrices}: \begin{equation} A = K_{11} \otimes K_{22} + K_{12} \otimes K_{21}. \end{equation} \eirik{A similar reasoning for the right hand side leads to}: \begin{equation} \int_{\Omega} f_{dof}\;\gamma_k\; \beta_l \dX = f_{dof} F_1 \otimes F_2, \end{equation} where $f_{dof}$ is the pointwise value of $f$ at each $k,l$ coordinate in the global space. The forcing vectors are defined as: \begin{equation} F_{1} = \int_{\Omega_1} \gamma_k \dx \quad F_{2} = \int_{\Omega_2} \beta_l \dy. \end{equation} \eirik{Finally,} the entire system \eirik{of equations} becomes: \begin{equation} \label{eq:final_poisson_form} (K_{11} \otimes K_{22} + K_{12} \otimes K_{21})(U) = f_{dof} F_1 \otimes F_2, \end{equation} where $U$ contains the values at all degrees of freedom in the Cartesian product space for $u$. \eirik{A subsequent application of boundary conditions to~\eqref{eq:final_poisson_form} leads to the final system of linear algebraic equations.} \subsection{Model Problem 2: Arbitrary dimensional Space-Time Wave Equation}\label{sec:prob2} \eirik{The second model problem we consider is the linear wave equation.
We consider this transient problem to highlight the application of this methodology to transient problems where space-time finite elements are employed. Thus, we have the following model problem:} \begin{equation}\label{eqn:model_prob2} \begin{split} \frac{\partial^2 u}{\partial t^2}-c^2\Delta u & = 0 \quad \textrm{in} \quad \Omega_T, \\ \eirik{u} &\eirik{ =u_D \quad \textrm{on} \quad \partial \Omega \times (0,T),} \\ \eirik{u} &\eirik{ =u_{initial} \quad \textrm{on} \quad \Omega \times \{0\},} \end{split} \end{equation} \eirik{where $c$ denotes the wave speed}. In this problem the space-time domain $\Omega_T$ is a Cartesian product domain of a one, two, or three dimensional \eirik{spatial domain and a one dimensional temporal domain} ($\Omega_T = \Omega \times (0,T)$). The weak form of~\eqref{eqn:model_prob2} is obtained by multiplying by a test function and integrating over the entire space-time domain: \begin{equation} \int_{\Omega_T} \frac{\partial^2 u}{\partial t^2} v - c^2\Delta u \, v \dX \, {\rm d t} = 0 \quad \forall \quad v \in L^2(\Omega_T), \end{equation} \eirik{and subsequent integration } by parts in \eirik{space and time} gives the following weak form: find $u \in \mathcal{U}(\Omega_T)$ such that \begin{equation}\label{eqn:weak_form2} \int_{\Omega_T} -\frac{\partial u}{\partial t} \frac{\partial v}{\partial t} + c^2\Nabla u \cdot \Nabla v \dX \, {\rm d t} + \int_{\partial \Omega_T|_{t=T}}\frac{\partial u}{\partial t}v \dX = 0 \quad \forall \quad v \in \mathcal{V}(\Omega_T), \end{equation} where \eirik{we have applied Dirichlet conditions to the space-time boundary $\partial \Omega_T$, except at the final time boundary.} $\mathcal{V}(\Omega_T)$ is the space of all $H^1_0(\Omega_T)$ functions except on $\partial \Omega_T|_{t=T}$, where the trace is an unknown, and $\mathcal{U}(\Omega_T)$ is the space of all $H^1(\Omega_T)$ functions whose trace satisfies $u=u_D$ on $\partial \Omega_T$ except on the aforementioned part of the boundary $\partial \Omega_T|_{t=T}$.
\eirik{The corresponding discretization of}~\eqref{eqn:weak_form2} using a product basis gives a very similar system of equations to the Poisson problem \eirik{considered in Section~\ref{sec:model_prob}}, with a slight variation. The term $K_{21}$ must include \eirik{additional integrals} and becomes: \begin{equation} K_{21} = \int_0^T \frac{\partial \psi_j}{\partial t} \frac{\partial \beta_l}{\partial t} dt - \int_{\partial \Omega_2} \Nabla \psi_j \cdot \mathbf{n} \beta_l \, {\rm d s} . \end{equation} \eirik{Consequently}, the global system of equations becomes: \begin{equation} \label{eq:global_wave_sys} (c^2 K_{11} \otimes K_{22} - K_{12} \otimes K_{21})U = \mathbf{0}. \end{equation} \eirik{Finally, application of boundary and initial conditions to~\eqref{eq:global_wave_sys} results in the final system of linear algebraic equations.} \subsection{Model Problem 3: SUPG Stabilized Advection Dominated Advection Diffusion Equation}\label{sec:prob3} \eirik{To highlight the versatility of our approach to consider non-standard FE techniques, we consider a PDE which is known to lead to stability issues in the Galerkin FE setting. Hence, we consider an advection-diffusion PDE in which advection is dominant:} \begin{equation}\label{eqn:model_problem3} \begin{split} -\kappa \, \Delta u + \mathbf{b} \cdot \Nabla u &= f \quad \textrm{in} \quad \Omega, \\ u &= u_D \quad \textrm{on} \quad \partial \Omega, \end{split} \end{equation} where $\Omega$ is defined as a Cartesian product of two lower dimensional Lipschitz domains: $\Omega = \Omega_1 \times \Omega_2$.
\eirik{By following the standard procedure of deriving integral formulations,} we get the corresponding weak form: \begin{equation}\label{eqn:weak_form3} \int_{\Omega} \kappa \Nabla u \cdot \Nabla v + \mathbf{b} \cdot \Nabla u \, v \dX = \int_{\Omega}f\,v \dX \quad \forall v \in \mathcal{V}(\Omega). \end{equation} In problems where the advection term dominates the diffusion, that is, when the P\'eclet number $Pe = \frac{L \| \mathbf{b} \| }{\kappa} \gg 1$, where $L$ is the characteristic length, the standard Galerkin \eirik{method applied to~\eqref{eqn:weak_form3} may result in a} discretization that is \eirik{unstable. } \eirik{This issue of stability can be overcome by careful design of the FE mesh or through stabilization techniques that ensure satisfaction of the discrete \emph{inf-sup} condition. Here, we consider the SUPG method introduced by Brooks and Hughes~\cite{brooks1982streamline} since it is widely used and has well developed criteria for discrete stability.} \eirik{The SUPG method leads to stable FE discretizations by adjusting the discretized weak form~\eqref{eqn:weak_form3} with a penalized residual, i.e., find $u_h \in \mathcal{U}_h(\Omega)$: } \begin{equation}\label{eqn:weak_form3b} \kappa(\Nabla u_h , \Nabla v_h)_{\Omega} + (\mathbf{b} \cdot \Nabla u_h, v_h)_{\Omega} + (\underbrace{-\kappa\,\Delta u_h + \mathbf{b} \cdot \Nabla u_h -f}_{\eirik{Residual}}, \tau(\mathbf{b} \cdot \Nabla v_h) )_{\Omega} = (f,v_h)_{\Omega} \quad \forall v_h \in \mathcal{V}_h(\Omega), \end{equation} \eirik{where the trial and test spaces consist of standard piecewise polynomials. Note that when the residual is zero, the stabilization term vanishes, i.e., it is consistent with the weak form~\eqref{eqn:weak_form3}.} \eirik{As for the preceding model problems,} we wish to construct the 4 dimensional tensor $A_{ijkl}$ using the product bases of the test and trial spaces.
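Before deriving the tensor product form, the effect of the stabilization can be seen in a minimal one dimensional sketch. This is our own toy illustration, not the implementation used in this paper: for P1 elements the second derivative of $u_h$ vanishes elementwise, so on a uniform mesh the SUPG term reduces to an extra $\tau b^2$-weighted stiffness contribution, and we use the classical parameter $\tau = \frac{h}{2b}\left(\coth Pe_h - \frac{1}{Pe_h}\right)$ with element P\'eclet number $Pe_h = \frac{bh}{2\kappa}$:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy problem: -kappa u'' + b u' = 1 on (0,1), u(0) = u(1) = 0,
# strongly advection dominated.
kappa, b, n = 1.0e-3, 1.0, 10
h = 1.0 / (n + 1)
Pe = b * h / (2.0 * kappa)                       # element Peclet number
tau = h / (2.0 * b) * (1.0 / np.tanh(Pe) - 1.0 / Pe)

ones = np.ones(n - 1)
K = (1.0 / h) * sp.diags([-ones, 2.0 * np.ones(n), -ones], [-1, 0, 1])
G = 0.5 * sp.diags([-ones, np.zeros(n), ones], [-1, 0, 1])  # (u', v)

# For P1 elements, Laplacian(u_h) = 0 on each element interior, so the
# SUPG term only adds tau * b^2 * (u', v'); with f = 1 the extra
# right-hand-side term (f, tau b v') integrates to zero.
A = (kappa + tau * b**2) * K + b * G
F = np.full(n, h)                                # (f, v) with f = 1

U = spla.spsolve(A.tocsc(), F)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
# Away from the thin outflow layer at x = 1 the exact solution is ~ x/b;
# the stabilized solution tracks it without the Galerkin oscillations.
```

Setting `tau = 0` recovers the plain Galerkin system, which at this element P\'eclet number produces the well known node-to-node oscillations.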
\eirik{Substitution of} the product basis into (\ref{eqn:weak_form3b}) gives: \begin{equation} \kappa(\Nabla (\phi_i\,\psi_j) , \Nabla (\gamma_k\,\beta_l))_{\Omega} + (\mathbf{b} \cdot \Nabla (\phi_i\, \psi_j), \gamma_k \beta_l)_{\Omega} + (-\kappa\Delta (\phi_i\,\psi_j) + \mathbf{b} \cdot \Nabla (\phi_i\,\psi_j) -f, \tau[\mathbf{b} \cdot \Nabla(\gamma_k\,\beta_l)] )_{\Omega} = (f,\,\gamma_k\, \beta_l)_{\Omega}. \end{equation} \eirik{Application of} the product rule gives: \begin{equation} \begin{split} \kappa(\psi_j \Nabla \phi_i + \phi_i \Nabla \psi_j , \beta_l \Nabla \gamma_k + \gamma_k \Nabla \beta_l)_{\Omega} + (\mathbf{b} \cdot (\psi_j \Nabla \phi_i +\phi_i \Nabla \psi_j), \gamma_k \beta_l)_{\Omega} +\\ (-\kappa(\psi_j \Delta (\phi_i) + \phi_i \Delta (\psi_j)) + \mathbf{b} \cdot (\psi_j \Nabla (\phi_i) + \phi_i \Nabla(\psi_j)) -f, \tau[\mathbf{b} \cdot (\beta_l \Nabla (\gamma_k) + \gamma_k \Nabla (\beta_l))] )_{\Omega} = (f, \gamma_k \beta_l)_{\Omega}, \end{split} \end{equation} \eirik{which we expand} using Fubini's theorem: \begin{equation} \label{eq:fubinisplit} \begin{split} \kappa \left[ (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1}(\psi_j,\beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\right] + (\mathbf{b} \cdot \Nabla \phi_i, \gamma_k)_{\Omega_1} (\psi_j,\beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2} - \\ \kappa \tau [(\Delta \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1} (\psi_j, \beta_l)_{\Omega_2} + (\Delta \phi_i , \gamma_k)_{\Omega_1}(\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} + (\phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\Delta \psi_j, \beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}] + \\ \tau [ (\mathbf{b} \cdot \Nabla \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\psi_j,\beta_l)_{\Omega_2} + (\mathbf{b} \cdot \Nabla \phi_i, \gamma_k)_{\Omega_1}(\psi_j ,
\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} + \\ (\phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j, \beta_l )_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}] \\ = f_{dof}(\gamma_k)_{\Omega_1}(\beta_l)_{\Omega_2} - \tau f_{dof}(\mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}(\beta_l)_{\Omega_2} - \tau f_{dof} (\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}, \end{split} \end{equation} \eirik{where $f_{dof}$ represents the function value $f$ at each corresponding degree of freedom.} The discrete weak form in~\eqref{eq:fubinisplit} can be represented as a global stiffness matrix composed of the following smaller submatrices: \begin{equation} \begin{split} K_{11} = (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1} \quad & K_{21} = ( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\\ K_{12} = (\phi_i,\gamma_k)_{\Omega_1} \quad & K_{22} = (\psi_j, \beta_l)_{\Omega_2}\\ K_{13} = (\mathbf{b} \cdot \Nabla \phi_i,\gamma_k)_{\Omega_1}\quad & K_{23} = (\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2}\\ K_{14} = (\Delta \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}\quad & K_{24} = (\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ K_{15} = (\Delta \phi_i , \gamma_k)_{\Omega_1} \quad & K_{25} = (\Delta \psi_j , \beta_l)_{\Omega_2} \\ K_{16} = (\phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}\quad & K_{26} = (\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ K_{17} = (\mathbf{b} \cdot \Nabla \phi_i, \mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}\quad & K_{27} = (\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ F_{1} = (\gamma_k)_{\Omega_1} \quad & F_{2} = (\beta_l)_{\Omega_2} \\ F_{3} = (\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} \quad & F_{4} = (\mathbf{b} \cdot \Nabla \gamma_k)_{\Omega_1}.
\end{split} \end{equation} Then the global system of equations to solve would be: \begin{equation} \begin{split} [ \kappa (K_{11} \otimes K_{22} + K_{12}\otimes K_{21}) + K_{13} \otimes K_{22} + K_{12} \otimes K_{23} - \kappa \tau(K_{14} \otimes K_{22} + K_{15} \otimes K_{26} + K_{16} \otimes K_{25} + K_{12} \otimes K_{24} ) + \\ \tau ( K_{17} \otimes K_{22} + K_{13} \otimes K_{26} + K_{16} \otimes K_{23} + K_{12} \otimes K_{27} )]\mathbf{U} = f_{dof} F_1 \otimes F_2 - \tau f_{dof}(F_4 \otimes F_2 + F_1 \otimes F_3). \end{split} \end{equation} One advantage of this method is that it is possible to align one subdomain with the velocity vector. \eirik{In this special case} when $\mathbf{b}$ is only non-zero along one subdomain (e.g., $\Omega_2$) the above weak form~\eqref{eq:fubinisplit} reduces to: \begin{equation}\label{eqn:weak_form3c} \begin{split} \kappa \left[ (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1}(\psi_j,\beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\right]+ (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2} - \\ \kappa \tau\left[ (\Delta \phi_i , \gamma_k)_{\Omega_1}(\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2} + (\phi_i,\gamma_k)_{\Omega_1}(\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\right] \\ + \tau \left[ (\phi_i,\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\right] = f_{dof}(\gamma_k)_{\Omega_1}(\beta_l)_{\Omega_2} - \tau f_{dof} (\gamma_k)_{\Omega_1}(\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}. 
\end{split} \end{equation} \eirik{Hence, the local matrices are defined:} \begin{equation} \begin{split} K_{11} = (\Nabla \phi_i, \Nabla \gamma_k)_{\Omega_1} \quad & K_{21} = ( \Nabla \psi_j , \Nabla \beta_l)_{\Omega_2}\\ K_{12} = (\phi_i,\gamma_k)_{\Omega_1}\quad & K_{22} = (\psi_j, \beta_l)_{\Omega_2}\\ K_{13} = (\Delta \phi_i , \gamma_k)_{\Omega_1} \quad & K_{23} = (\mathbf{b} \cdot \Nabla \psi_j,\beta_l)_{\Omega_2}\\ K_{24} = (\psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\quad & K_{25} = (\Delta \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\\ K_{26} = (\mathbf{b} \cdot \Nabla \psi_j, \mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}\quad & F_{1} = (\gamma_k)_{\Omega_1} \\ F_{2} = (\beta_l)_{\Omega_2} \quad & F_{3} = (\mathbf{b} \cdot \Nabla \beta_l)_{\Omega_2}, \end{split} \end{equation} \eirik{and substitution} into~\eqref{eqn:weak_form3c} yields the global system: \begin{equation} \label{eqn:discrete_AD_1direction} (\kappa K_{11} \otimes K_{22} + \kappa K_{12} \otimes K_{21} + K_{12} \otimes K_{23} -\kappa \tau K_{13} \otimes K_{24} - \kappa \tau K_{12} \otimes K_{25} + \tau K_{12} \otimes K_{26})\mathbf{U} = f_{dof} F_1 \otimes F_2 - \tau f_{dof} F_1 \otimes F_3. \end{equation} \section{Numerical Verifications}\label{sec:numerical_results} For each of the three model problems \eirik{introduced in Section~\ref{sec:methods}, we consider and implement a specific test case} in FEniCS. \eirik{To verify the developed framework, we investigate the $h-$convergence properties of the implemented methods by consideration of the rate of convergence of the FE solutions. The test and trial spaces in all cases consist of continuous Lagrange polynomials of degree 1.} \subsection{Case 1: 4-D Poisson Equation } \eirik{ We first consider the Poisson PDE in a high dimensional space of order four. We define the computational domain as } a Cartesian product between two unit squares, \eirik{i.e.,} $\Omega = ((0,1)\times(0,1)) \times ((0,1) \times (0,1))$.
\eirik{We select} the forcing function defined as $f = 4\pi^2 u_{exact}$ and the exact solution as $u_{exact} = \prod^4_{i=1} \sin(\pi x_i) $, where $x_i$ is a coordinate in the domain $\Omega$. \eirik{In Table~\ref{tab:results_poisson}, the convergence data for the four dimensional Poisson problem is listed. Note that the convergence of the FE solution is optimal, as the rate of convergence of the root mean square error (RMSE) and $l_{\infty}$ norm approaches $\mathcal{O}(h^{p+1})$}. \begin{table}[h!] \centering \caption{\label{tab:results_poisson} Error estimation results for the 4D Poisson problem.} \begin{tabular}{@{}llllll@{}} \toprule {dofs \hspace{6mm}} & { h \hspace{6mm}} & {$l_{\infty}$ \hspace{6mm}} & { RMSE \hspace{6mm}} & {RMSE rate} & {$l_{\infty}$ rate} \\ \midrule \midrule 256 & 0.333 & 1.40E-01 & 3.16E-02 & - & - \\ 1296 & 0.200 & 7.88E-02 & 1.61E-02 & 1.44 & 1.13 \\ 2401 & 0.167 & 6.55E-02 & 1.21E-02 & 1.57 & 1.01 \\ 4096 & 0.143 & 4.50E-02 & 9.39E-03 & 1.65 & 2.44 \\ 6561 & 0.125 & 3.76E-02 & 7.48E-03 & 1.70 & 1.34\\ 14641 & 0.100 & 2.42E-02 & 5.06E-03 & 1.75 & 1.96 \\ 20736 & 0.091 & 1.95E-02 & 4.26E-03 & 1.79 & 2.31 \\ 28561 & 0.083 & 1.69E-02 & 3.64E-03 & 1.81 & 1.61 \\ 38416 & 0.077 & 1.41E-02 & 3.15E-03 & 1.83 & 2.26 \\ 50625 & 0.071 & 1.25E-02 & 2.74E-03 & 1.84 & 1.67 \\ \bottomrule \end{tabular} \end{table} \subsection{Case 2: 2D Space-Time Wave Equation} \eirik{As a verification of the space-time wave model problem}~\eqref{eqn:model_prob2}, \eirik{we select} the space-time domain as the Cartesian product of a unit square \eirik{spatial} domain $\Omega$ and an interval time domain, i.e., $\Omega_T = ((0,1) \times (0,1)) \times (0,T)$. The wave propagation speed is $c=1$ and \eirik{we consider a} manufactured solution $u_{exact} = \sin(x-ct) + \sin(y-ct)$.
\eirik{This solution is used to ascertain boundary and initial conditions needed to solve the resulting system of equations.} \eirik{In Table~\ref{tab:wave_spacetime}, the convergence results are presented along with the time interval element size, denoted by dt, and the space-time CFL number. The RMSE is observed to converge at the expected optimal rate, whereas the } $l_{\infty}$ error \eirik{exhibits a reduced rate for the finer meshes}. \begin{table}[h!] \centering \caption{\label{tab:wave_spacetime} Results for the Wave Space-Time problem.} \begin{tabular}{@{}llllllll@{}} \toprule {dofs \hspace{6mm}} & { h \hspace{6mm}} & { dt \hspace{6mm}} & { CFL \hspace{6mm}} & {$l_{\infty}$ \hspace{6mm}} & { RMSE \hspace{15mm}} & {RMSE rate} & {$l_{\infty}$ rate} \\ \midrule \midrule 200 & 0.250 & 0.14 & 0.57 & 2.25E-04 & 5.84E-05 & - & - \\ 1215 & 0.125 & 0.07 & 0.57 & 5.06E-05 & 1.51E-05 & 1.95 & 2.22 \\ 3718 & 0.083 & 0.05 & 0.57 & 2.53E-05 & 7.01E-06 & 1.89 & 1.71 \\ 8381 & 0.063 & 0.04 & 0.57 & 1.31E-05 & 3.95E-06 & 1.99 & 1.31 \\ \bottomrule \end{tabular} \end{table} \subsection{Case 3: SUPG Stabilized Advection Dominated Advection Diffusion Equation} \eirik{As a final numerical verification, we consider a special case of the advection dominated advection diffusion equation. In particular, we consider the case in which the advection acts in a single direction aligned with a coordinate axis, see~\eqref{eqn:discrete_AD_1direction}. } \eirik{We consider a case where the} domain $\Omega$ is a Cartesian product of two unit intervals: $\Omega = (0,1) \times (0,1)$, diffusivity constant $\kappa =\frac{1}{100}$, and \eirik{advection vector is} $\mathbf{b} = (0,1)$.
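For reference, the observed convergence rates between successive refinements, such as those reported in the error tables above, can be computed as $r = \log(e_{k-1}/e_k)/\log(h_{k-1}/h_k)$. A small helper of this form (our own utility, demonstrated on a synthetic second-order error sequence rather than the tabulated data) is:

```python
import numpy as np

def observed_rates(h, err):
    """Observed convergence rate between successive (h, error) pairs."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Synthetic second-order data: err = C * h^2 should give rates of 2.
h = np.array([0.25, 0.125, 0.0625, 0.03125])
err = 3.0 * h**2
rates = observed_rates(h, err)
```

As the mesh is refined, such observed rates are expected to approach the asymptotic order $p+1 = 2$ for the degree-1 Lagrange bases used here.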
The analytic solution in this case \eirik{is inspired by the work of Egger and Sch{\"o}berl~\cite{egger2010hybrid}}: \begin{equation} \label{eq:ADR_exact} u_{exact} = (-4(x-0.5)^2+1)\left[ y + \frac{e^{\frac{1}{\kappa} \cdot b_y \cdot y} - 1 }{1 - e^{\frac{1}{\kappa} \cdot b_y}}\right], \end{equation} which implies that the forcing term must be $f = -\kappa\,\Delta u_{exact} + \mathbf{b} \cdot \Nabla u_{exact}$ \eirik{and we enforce the corresponding} homogeneous Dirichlet conditions on the boundary $\partial \Omega$. \eirik{In Figures~\ref{fig:cartesian_ADR} and~\ref{fig:analytic_ADR}, we show the approximate FE solution and the analytic solution, respectively. As expected, the SUPG stabilization results in a stable solution at this relatively coarse FE mesh.} \eirik{In Table~\ref{tab:results_ADR}, the convergence results for this final case are presented. Both the RMSE and $l_{\infty}$ error converge at the expected optimal rates.} \begin{table}[h] \centering \caption{\label{tab:results_ADR} Error estimation results for advection dominated advection diffusion problem.} \begin{tabular}{@{}llllll@{}} \toprule {dofs \hspace{6mm}} & { h \hspace{6mm}} & {$l_{\infty}$ \hspace{6mm}} & { RMSE \hspace{15mm}} & {RMSE rate} & {$l_{\infty}$ rate} \\ \midrule \midrule 9 & 0.50 & 2.50E-01 & 8.33E-02 & - & - \\ 16 & 0.33 & 1.19E-01 & 4.65E-02 & 1.44 & 1.84 \\ 25 & 0.25 & 7.38E-02 & 3.01E-02 & 1.51 & 1.65 \\ 36 & 0.20 & 4.79E-02 & 2.08E-02 & 1.65 & 1.93 \\ 49 & 0.17 & 3.33E-02 & 1.52E-02 & 1.73 & 1.99 \\ 256 & 0.07 & 6.41E-03 & 2.85E-03 & 1.83 & 1.80 \\ 676 & 0.04 & 2.52E-03 & 1.07E-03 & 1.92 & 1.83 \\ 2601 & 0.02 & 6.50E-04 & 2.74E-04 & 1.96 & 1.96 \\ \bottomrule \end{tabular} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{Analytic_256_dof.png} \caption{Analytic solution projected onto FE mesh with 256 dofs.} \label{fig:analytic_ADR} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.5\textwidth]{Cartesian_Product_256_dof.png} \caption{Cartesian product FE solution at 256 dofs.} \label{fig:cartesian_ADR} \end{figure} \section{Conclusions}\label{sec:Conclusions} \eirik{In this paper, we have introduced and implemented tensor product FE routines for high dimensional problems in FEniCS. This methodology allows us to extend the} FEniCS library to domains with more than three dimensions so long as they are a Cartesian product of subdomains of dimension three or lower. \eirik{To verify the developed methodology, we consider three test cases utilizing classical and stabilized FE methods. } For each test case, \eirik{we observe the expected} convergence to the analytic solutions with respect to grid refinement in both the RMSE and $l_{\infty}$ norms. \eirik{We consider only linear PDEs here } since this allows for the explicit construction of a single linear system of algebraic equations. \eirik{Hence, future studies should investigate potential extensions to nonlinear PDEs.} Additionally, the global system of equations was solved naively by explicitly constructing the global stiffness matrix and inverting it. \eirik{However,} the global matrix is sparse with highly structured blocks, which should allow for faster solvers that would greatly reduce run time. \eirik{We refer to existing works~\cite{BIALECKI1993369,Gao_thesis,Bank1978}, where related} problems were considered, \eirik{and leave the consideration of such solvers for future studies.} \eirik{Further extensions could consider mixed FE methods, } such as those discussed in the book by Brezzi \emph{et al.}~\cite{brezzi2012mixed}. \eirik{Finally, } FE methods utilizing discontinuous test/trial spaces \eirik{ could be considered due to their extensive use in engineering applications}. Furthermore, full integration of this method into the FEniCS API would be valuable both for simplicity of future implementations as well as for performance.
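On the point about exploiting the structured blocks: the global operator never needs to be formed explicitly, since each Kronecker term can be applied with the well known identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(A X B^T)$ (row-major vectorization), using only the small subdomain matrices. A minimal NumPy sketch of this matrix-free matrix-vector product (our own illustration, not part of the released code):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Apply (A kron B) to x without forming the Kronecker product,
    via (A kron B) vec(X) = vec(A @ X @ B.T) with row-major vec."""
    X = x.reshape(A.shape[1], B.shape[1])
    return (A @ X @ B.T).ravel()

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((5, 5))
x = rng.standard_normal(20)

fast = kron_matvec(A, B, x)
slow = np.kron(A, B) @ x        # explicit product, for comparison only
```

Combined with a Krylov solver, such matvecs avoid ever storing the global matrix, which is where the run-time savings alluded to above would come from.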
The code used in the numerical studies is publicly available and can be found on GitHub at \\ \url{https://github.com/Markloveland/Cartesian_Product_FEM} \section{Acknowledgements} Author Loveland has been supported by the CSEM Fellowship from the Oden Institute at the University of Texas at Austin. Authors Loveland, Valseth, and Dawson have been supported by the United States National Science Foundation - NSF PREEVENTS Track 2 Program, under NSF Grant Number 1855047 and the Department of Homeland Security Coastal Resilience Center research project ``Accurate and Fast Wave Modeling and Coupling with ADCIRC''. Author Lukac has been supported by the University of Oregon. \bibliographystyle{elsarticle-num}
https://arxiv.org/abs/2211.09989
A simple construction of infinite finitely generated torsion groups
The goal of this note is to provide yet another proof of the following theorem of Golod: there exists an infinite finitely generated group $G$ such that every element of $G$ has finite order. Our proof is based on the Nielsen-Schreier index formula and is simple enough to be included in a standard group theory course.
\section{Introduction} Recall that a group $G$ is said to be \emph{torsion} (or \emph{periodic}) if every element of $G$ has finite order. Obviously, every finite group is torsion. Infinite torsion groups can be constructed as direct products of finite groups; note, however, that these groups are not finitely generated. The following famous problem was posed by William Burnside in 1902. \begin{prob} Is every finitely generated torsion group finite? \end{prob} This question and its variations served as a catalyst for research in group theory throughout the 20th century. In 1964, Golod answered it in the negative \cite{Gol}. \begin{thm}[Golod] \label{main} There exists a finitely generated infinite torsion group. \end{thm} Golod's proof was based on the Golod-Shafarevich inequality giving a sufficient condition for certain graded algebras to be infinite-dimensional. Since then, many alternative constructions of infinite finitely generated torsion groups have been found. Notable examples include free Burnside groups \cite{A}, groups acting on rooted trees \cite{Gri}, and inductive limits of hyperbolic groups \cite{Gro,Ols93}. Although most of these constructions are quite involved, Grigorchuk's example \cite{Gri} and Olshanskii's simplification of Golod's original argument \cite{Ols95} are simple enough to be discussed in a standard group theory course. The goal of this paper is to provide yet another elementary proof based on the Nielsen-Schreier formula. The idea of our proof is by no means original. In one form or another, it appeared in \cite{LO,OO} and some other papers. However, the proofs in these papers were ``spoiled'' by technicalities caused by the desire to ensure certain additional properties. Below we provide a simplified proof along these lines. \section{Proof of Golod's theorem} Given two elements $x,y$ of a group $G$, we write $x^y$ for $y^{-1}xy$.
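The conjugation notation $x^y=y^{-1}xy$ can be made concrete in a small permutation group; the snippet below is our own illustration (not from the paper), checking the standard identity $(x^y)^z=x^{yz}$ in $S_3$.

```python
# Conjugation x^y = y^{-1} x y in S_3, with permutations stored as tuples of
# images (p[i] is the image of i). All names here are ours, for illustration.

def compose(p, q):
    """The product pq: first apply p, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def conj(x, y):
    """x^y = y^{-1} x y."""
    return compose(compose(inverse(y), x), y)

x = (1, 0, 2)   # the transposition (0 1)
y = (0, 2, 1)   # the transposition (1 2)
z = (1, 2, 0)   # a 3-cycle

# Conjugating (0 1) by (1 2) relabels the moved points, giving (0 2).
assert conj(x, y) == (2, 1, 0)
# The identity (x^y)^z = x^{yz}, used implicitly in the computations below.
assert conj(conj(x, y), z) == conj(x, compose(y, z))
```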
We denote by $\ll S\rr ^G$ the normal closure of a subset $S$ in $G$, i.e., the smallest normal subgroup of $G$ containing $S$. If $G$ is finitely presented, $\d (G)$ denotes the deficiency of $G$. That is, $\d(G)$ is the maximum of the difference between the number of generators and the number of relations over all finite presentations of $G$. \begin{lem}\label{sm} For every finite index subgroup $H$ of a finitely presented group $G$, we have \begin{equation}\label{eq:sm} \d (H)-1\ge (\d (G)-1)|G:H|. \end{equation} \end{lem} \begin{proof} Let $G=F/R$ be a finitely presented group, where $F=\langle x_1, \ldots , x_d\rangle $ is free of rank $d$, $R=\ll R_1, \ldots , R_r\rr ^F$, and $d-r=\d(G)$. Let $H$ be a finite index subgroup of $G$, $K$ the full preimage of $H$ in $F$. By the Nielsen-Schreier formula, $K$ is a free group of rank $(d-1)j+1$, where $j=|F:K|=|G:H|$. It is straightforward to check that $R=\ll \{ R_i^t \mid i=1, \ldots, r,\, t\in T\} \rr ^K$, where $T$ is a left transversal for $K$ (i.e., a set of representatives of the left cosets of $K$ in $F$). Thus, $H=K/R$ has a presentation with $(d-1)j+1$ generators and $r|T|=rj$ relations, which implies (\ref{eq:sm}) since $((d-1)j+1)-rj-1=(d-r-1)j=(\d (G)-1)|G:H|$. \end{proof} Let $\mathcal D$ denote the class of all finitely presented groups that contain a finite index subgroup of deficiency at least $2$. Let $G\in \mathcal D$ and let $H\le G$ be a finite index subgroup of deficiency at least $2$. Passing to the intersection of all conjugates of $H$, we obtain a finite index normal subgroup $N\lhd G$ such that $N\le H$. Lemma \ref{sm} implies that $\d (N)\ge 2$. Thus, every $G\in \mathcal D$ contains a finite index \emph{normal} subgroup of deficiency at least $2$. For a group $G$, we denote by $\widehat G$ the quotient of $G$ by the intersection of all finite index subgroups of $G$. Basic linear algebra implies that every group of deficiency at least $2$ surjects onto $\mathbb Z$. Therefore, $|\widehat G|=\infty$ for every $G\in \mathcal D$.
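The counting in the proof of Lemma \ref{sm} can be sanity-checked numerically: the constructed presentation of $H$ has $(d-1)j+1$ generators and $rj$ relations, and for these counts the deficiency bound is in fact an equality. A minimal sketch (the helper names are ours, not from the paper):

```python
# Numerical check of the presentation counts in the proof of Lemma sm:
# G = F/R with d generators and r relations, H a subgroup of index j = |G : H|.
# (Helper names are ours, for illustration only.)

def deficiency_of_constructed_presentation(d, r, j):
    generators = (d - 1) * j + 1   # Nielsen-Schreier rank of K
    relations = r * j              # one conjugate R_i^t per coset representative t
    return generators - relations

# d(H) - 1 >= (d(G) - 1) * |G : H| with d(G) = d - r; for these counts the
# inequality holds with equality.
for d in range(1, 7):
    for r in range(0, d):
        for j in range(1, 12):
            lhs = deficiency_of_constructed_presentation(d, r, j) - 1
            rhs = ((d - r) - 1) * j
            assert lhs == rhs
```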
\begin{prop}\label{quot} Let $G\in \mathcal D$. For every $g\in G$, there exists $m\in \mathbb Z$ such that $Q=G/\ll g^{m}\rr ^G\in \mathcal D$ and the image of $g$ in $\widehat Q$ has finite order. \end{prop} \begin{proof} The idea of the proof is borrowed from \cite{BP}. If the image of $g$ in $\widehat G$ has finite order, we can take $m=0$. Henceforth, we assume that the image of $g$ in $\widehat G$ has infinite order. Let $M$ be a finite index normal subgroup of $G$ such that $\d (M)\ge 2$. By our assumption, there exist finite index subgroups $N\lhd G$ such that $|\langle g\rangle N / N |$ is arbitrarily large; in particular, we can find a finite index subgroup $N\lhd G$ such that $N\le M$ and \begin{equation}\label{o(g)} |\langle g\rangle N / N |> |G:M|. \end{equation} Let $m= |\langle g\rangle N / N |$, $f=g^m$. Obviously, $f\in N$. Let $T$ be a right transversal of $\langle g\rangle N$ in $G$. For every $s\in G$, we have $s=g^knt$ for some $k\in \mathbb Z$, $n\in N$, $t\in T$, and $f^s=f^{nt}=(f^{t})^{n^t}$. Since $n^{t}\in N$, we obtain $\ll f\rr ^G=\ll \{ f^{t} \, |\, t\in T\}\rr^N$. Therefore, $$ \d\Big(N/\ll f\rr^G\Big) \ge \d(N) - |T| =\d(N) - |G:\langle g\rangle N| = \d(N)- \frac{|G/N|}m. $$ Combining this inequality with Lemma \ref{sm} and (\ref{o(g)}), we obtain $$ \d\Big(N/\ll f\rr^G\Big) -1 \ge (\d(M)-1)|M/N| - \frac{|G/N|}m = |M/N|\left(\d(M)-1 - \frac{|G/M|}m\right) > 0. $$ Therefore, $G/\ll f\rr^G\in \mathcal D$. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] Let $M_0=G_0=F_2$ be the free group of rank $2$. Clearly, $G_0\in \mathcal D$. We enumerate all elements of $G_0=\{ 1=g_0, g_1, g_2, \ldots \} $ and construct an infinite torsion quotient of $G_0$ by the following inductive procedure.
Suppose that for some $k\ge 0$, we have already constructed a group $G_k$ and a subgroup $M_k\lhd G_k$ such that the following conditions hold: \begin{enumerate} \item[(a)] $G_k\in \mathcal D$; \item[(b)] the natural image of $g_k$ in $\widehat G_k$ has finite order; \item[(c)] $|G_k/M_k|\ge k$. \end{enumerate} Since $G_k\in \mathcal D$, $G_k$ contains subgroups of arbitrarily large finite index. In particular, we can find a subgroup $L_k\lhd G_k$ such that $L_k\le M_k$ and $\infty > |G_k/L_k|\ge k+1$. If the image of $g_{k+1}$ in $\widehat G_k$ has finite order, we let $G_{k+1}=G_k$ and $M_{k+1}=L_k$. Otherwise, let $g$ be a non-trivial element of $\langle g_{k+1}\rangle \cap L_k$. By Proposition \ref{quot}, there exists $m\in \mathbb Z$ such that $G_{k+1}= G_{k}/\ll g^m\rr ^{G_{k}}\in \mathcal D$ and the image of $g_{k+1}$ in $\widehat G_{k+1}$ has finite order. Let $M_{k+1}$ be the image of $L_k$ in $G_{k+1}$. Note that we have $G_{k+1}/ M_{k+1} \cong G_{k}/L_k$ since $g\in L_k$; therefore, $G_{k+1}/ M_{k+1}$ naturally surjects onto $G_k/M_k$. Thus we obtain the following commutative diagram \begin{equation}\label{seq} \begin{array}{ccccccc} G_0 & \longrightarrow & G_1 & \longrightarrow & G_2 & \longrightarrow &\ldots\\ \downarrow && \downarrow && \downarrow &&\\ G_0/M_0& \longleftarrow & G_1/M_1 & \longleftarrow & G_2/M_2 & \longleftarrow & \ldots,\\ \end{array} \end{equation} where all arrows are surjective. Let $G$ be the direct limit of the first row. That is, $G=G_0/\bigcup_{k\in \mathbb N}N_k$, where $N_k$ is the kernel of the homomorphism $G_0\to G_k$ obtained by composing the first $k$ maps in the first row of (\ref{seq}). By (b), the image of $g_k$ has finite order in $\widehat G_k$. Since $\widehat G_k$ surjects onto $\widehat G$ for all $k$, $\widehat G$ is a torsion group. On the other hand, $G$ surjects onto $G_k/M_k$ for all $k$. Combining this with (c), we obtain that $\widehat G$ is infinite. \end{proof}
https://arxiv.org/abs/2005.03250
Cohomological dimension of ideals defining Veronese subrings
Given a standard graded polynomial ring over a commutative Noetherian ring $A$, we prove that the cohomological dimension and the height of the ideals defining any of its Veronese subrings are equal. This result is due to Ogus when $A$ is a field of characteristic zero, and follows from a result of Peskine and Szpiro when $A$ is a field of positive characteristic; our result applies, for example, when $A$ is the ring of integers.
\section{Introduction} Throughout this paper, all rings are assumed to be commutative, Noetherian, and with an identity element. Let $T = \mathbb{Z} [x_1,x_2,\ldots,x_k]$ be the standard graded polynomial ring in $k$ indeterminates over the integers. Consider a minimal presentation of its $n$-th Veronese subring $T^{(n)} = \oplus _{i \geq 0} T_{in}$ as $T^{(n)} \cong \mathbb{Z} [t_1, \ldots, t_d]/I$. We say that $I$ is the ideal defining the $n$-th Veronese subring of $T$. For $A$ a ring, we set $T_A = T\otimes _{\mathbb{Z}} A$. Ogus in \cite[Example 4.6]{ogus} proved that when $A$ is a field of characteristic zero, the cohomological dimension of $I$ is the same as its height. The same result also follows when $A$ is a field of positive characteristic by a result of Peskine and Szpiro \cite[Proposition \RN{3}.4.1]{PS}. We prove that this continues to hold for any commutative Noetherian ring $A$. The critical step is the calculation of local cohomology of the polynomial ring $\mathbb{Z}[t_1, \ldots, t_d]$ supported at the ideal $I$. More precisely, we prove: \begin{theorem} \label{main} Let $T = \mathbb{Z}[x_1, \ldots, x_k]$ be a polynomial ring with the $\mathbb{N}$-grading $[T]_0 = \mathbb{Z}$ and deg $x_i = 1$ for each $i$. Consider a minimal presentation of $T^{(n)}$ as $R/I$, where $R = \mathbb{Z}[t_1, \ldots, t_d]$. Then \[H^i_I(R) = 0 \quad \text{for } i \neq \mathrm{height}\ I. \] \end{theorem} Towards the above result, we establish a condition for the injectivity of multiplication by a prime integer on local cohomology modules over the ring $\mathbb{Z}[t_1, \ldots, t_d]$ in Lemma~\ref{lc-inj}. This strengthens \cite[Corollary 2.18]{LSW} and is a result of independent interest. It is worth mentioning that in the above context, the arithmetic rank may vary with the characteristic of the ring $A$: \begin{example} Let $k[x_1, \ldots, x_n]$ be a standard graded polynomial ring over a field $k$.
Let $R$ be a polynomial ring over $k$ in indeterminates that map entrywise to the distinct elements of the matrix \begin{center} \ensuremath{\begin{pmatrix} x_1^2 & x_1x_2 & \cdots & x_1x_n \\ x_1x_2 & x_2^2 & \cdots & x_2x_n \\ \vdots & \vdots & \ddots & \vdots \\ x_1x_n & x_2x_n & \cdots & x_n^2 \end{pmatrix}} . \end{center} Thus, $R$ is a polynomial ring in ${n+1 \choose 2}$ indeterminates. The relations between the generators of $R$ under the above map are precisely those corresponding to the size two minors of this matrix. These relations define an ideal $I$ of $R$, with $R/I$ being a minimal presentation. Barile proved that the arithmetic rank of $I$, i.e., the minimum number of equations defining the affine variety $V(I)$ set-theoretically, is \begin{center} \ensuremath{ \displaystyle{\mathrm{ara} \ I} = \begin{cases} {n \choose 2} & \text{if char } k = 2, \\ {n+1 \choose 2} - 2 & \text{otherwise.} \end{cases}} \end{center} More generally, Barile computed the arithmetic rank of the class of ideals generated by the size $t$ minors of a symmetric $n \times n$ matrix of indeterminates over a field in \cite[Theorems 3.1, 5.1]{Ba} and remarked: \textit{This seems to be the first class of ideals defined over $\mathbb{Z}$ for which, after specialization to a field $k$, the arithmetical rank depends on $k$.} This dependence of the arithmetic rank of $I$ on the characteristic of the field makes it interesting to investigate the local cohomology of polynomial rings over the integers such as those examined here. \end{example} \section{Injectivity of multiplication by a prime integer\\ on local cohomology modules} The following lemma gives a criterion for integer torsion in local cohomology modules of a standard graded polynomial ring over the integers: \begin{lemma} \cite[Corollary 2.18]{LSW} Let $R = \mathbb{Z}[x_1, \ldots, x_n]$ be a polynomial ring with the $\mathbb{N}$-grading $[R]_0 = \mathbb{Z}$ and deg $x_i = 1$ for each $i$.
Let $I$ be a homogeneous ideal, $p$ a prime integer, and $h$ a nonnegative integer. Suppose that the Frobenius action on \[[H^{n-h} _{(x_1, \ldots, x_n)} (R/(I+pR))]_0\] is nilpotent, and that the multiplication by $p$ map \[H^{h+1}_I (R)_{x_i} \overset{.p} \rightarrow H^{h+1}_I (R)_{x_i}\] is injective for each $i$. Then the multiplication by $p$ map on $H^{h+1}_I (R)$ is injective. \end{lemma} The proof of this lemma largely relies on the following theorem. For an overview of $\mathcal{D}$-modules and $\mathcal{F}$-modules, we refer the reader to \cite{LSW}. \begin{theorem} \cite[Theorem 2.16]{LSW} Let $R$ be a standard graded polynomial ring, where $[R]_0$ is a field of prime characteristic. Let $\mathbf{m}$ be the homogeneous maximal ideal of $R$, and $I$ an arbitrary homogeneous ideal. For each nonnegative integer $k$, the following are equivalent: \begin{enumerate} \item Among the composition factors of the Eulerian $\mathcal{D}$-module $\xi (H^k _I (R))$, there is at least one composition factor with support $\{\mathbf{m}\}$. \item Among the composition factors of the graded $\mathcal{F}$-finite module $H^k_I(R)$, there is at least one composition factor with support $\{\mathbf{m}\}$. \item $H^k_I(R)$ has a graded $\mathcal{F}$-module homomorphic image with support $\{\mathbf{m}\}$. \item The natural Frobenius action on $[H^{\dim R - k}_{\mathbf{m}} (R/I)]_0$ is not nilpotent. \end{enumerate} \end{theorem} We strengthen Lemma $2.1$ as follows: \begin{lemma} \label{lc-inj} Let $R = \mathbb{Z}[x_1, \ldots, x_n]$ be a polynomial ring with the $\mathbb{N}$-grading $[R]_0 = \mathbb{Z}$ and deg $x_i = 1$ for each $i$. Let $I$ be a homogeneous ideal, $p$ a prime integer, and $h$ a nonnegative integer.
Let $t_1, \ldots, t_k$ be homogeneous elements in $R$ such that \[\sqrt{(t_1, \ldots, t_k)(R/I)} = (x_1, \ldots, x_n)(R/I).\] Further, suppose that the Frobenius action on \[[H^{n-h} _{(t_1, \ldots, t_k)} (R/(I+pR))]_0\] is nilpotent and that the multiplication by $p$ map \[H^{h+1}_I (R)_{t_i} \overset{.p} \rightarrow H^{h+1}_I (R)_{t_i}\] is injective for each $i$. Then the multiplication by $p$ map on $H^{h+1}_I (R)$ is injective. \end{lemma} \begin{proof} Since local cohomology modules depend only on the radical of the ideal defining the support, \[H^{n-h} _{(t_1, \ldots, t_k)} (R/(I+pR)) = H^{n-h} _{(x_1, \ldots, x_n)} (R/(I+pR)).\] Therefore, the natural Frobenius action on $[H^{n-h} _{(x_1, \ldots, x_n)} (R/(I+pR))]_0$ is nilpotent. The short exact sequence \[0 \rightarrow R \overset{.p} \rightarrow R \rightarrow R/pR \rightarrow 0\] induces the following long exact sequence of local cohomology modules: \[\cdots \rightarrow H^i_I(R) \rightarrow H^i_I(R/pR) \overset{\delta} \rightarrow H^{i+1} _I (R) \overset{.p} \rightarrow H^{i+1} _I (R) \rightarrow \cdots .\] Let $K$ denote the kernel of the multiplication by $p$ map in the above display, and $\mathbf{m}$ denote the homogeneous maximal ideal of $R/pR$. By hypothesis, the localization $K_{t_i}$ is zero for each $i$. Thus, any prime ideal in the support of $K$ must contain each $t_i$. We may assume that $I$ is a proper ideal of $R$. Thus, prime ideals $\mathbf{p}$ in the support of $K$ are such that \[(t_1, \ldots, t_k)R \subseteq \mathbf{p} \text{ and } I \subseteq \mathbf{p}.\] Therefore, $\sqrt{(t_1, \ldots, t_k)R + I} = \mathbf{m}$ is contained in $\mathbf{p}$. Thus, $\mathrm{Supp}(K)$ is contained in $\{\mathbf{m}\}$. The kernel $K$ is a $\mathcal{D}_{\mathbb{Z}}(R)$-module; since it is annihilated by $p$, it is also a module over \[\mathcal{D}_{\mathbb{Z}}(R)/p\mathcal{D}_{\mathbb{Z}}(R) \cong \mathcal{D}_{\mathbb{F}_p}(R/pR).\] This isomorphism follows, for example, from \cite[Lemma 2.1]{bblsz}.
If $K$ is nonzero, then it is a homomorphic image of $H^i_I (R/pR)$ in the category of Eulerian graded $\mathcal{D}_{\mathbb{F}_p} (R/pR)$-modules, supported precisely at the homogeneous maximal ideal $\mathbf{m}$ of $R/pR$. But this is not possible, since the $\mathcal{D}_{\mathbb{F}_p} (R/pR)$-module $H^i_I (R/pR)$ has no composition factor with support $\{\mathbf{m}\}$ by Theorem $2.2$. \end{proof} We illustrate Lemma~\ref{lc-inj} with the following example, but first a definition: \begin{definition} Let $I$ be an ideal of a ring $R$. For each $R$-module $M$, set \[\mathrm{cd}_R (I,M) = \sup \{n \in \mathbb{N} : H^n_I (M) \neq 0\}.\] The \textit{cohomological dimension} of $I$ is \[\mathrm{cd}(I) = \sup \{\mathrm{cd}_R(I,M) : \text{$M$ is an $R$-module}\}.\] By the right exactness of the functor $H^{\mathrm{cd}(I)} _I(-)$, we get $\mathrm{cd}(I) = \mathrm{cd}_R (I,R)$. \end{definition} \begin{example} Consider the ring $T = \mathbb{Z}[x^4, x^3y,xy^3,y^4]$, which has a minimal presentation: \[T \cong \mathbb{Z}[t_1, t_2, t_3, t_4]/ (t_1t_4 - t_2t_3 \ , t_2t_4^2-t_3^3\ , t_1t_3^2-t_2^2t_4 \ , t_1^2t_3-t_2^3) = R/I.\] We calculate the cohomological dimension of the ideal $I$. For any field $k$, we denote $T\otimes _{\mathbb{Z}} k$ by $T_k$. Hartshorne in \cite[Theorem]{hartshorne} showed that, for a field $k$ of positive characteristic, the arithmetic rank of $IR_k$ is two. Since the ideal $I$ has height two, it follows that the cohomological dimension of $IR_k$ is also two. We denote by $T'_k$ the ring $k[x^4, x^3y,x^2y^2,xy^3,y^4]$, which is the normalization of $T_k$. The short exact sequence of $T_k$-modules \[0 \rightarrow T_k \rightarrow T'_k \rightarrow T'_k/T_k \rightarrow 0\] induces an isomorphism of local cohomology modules \[H^2 _{(x^4,x^3y,xy^3,y^4)} (T_k) \cong H^2 _{(x^4,x^3y,xy^3,y^4)} (T'_k),\] since $T'_k/T_k$ is a zero-dimensional $T_k$-module.
As $T'_k$ is a direct summand of the polynomial ring $k[x,y]$, it follows that $[H^{2} _{(t_1, t_2,t_3,t_4)} (R/(I+pR))]_0 = 0$. Note that $\sqrt{(t_1, t_4)(R/I)} = (x^4,x^3y,xy^3,y^4)(R/I)$. Further, \[IR_{t_1} = (t_3 - t_2^3/t_1^2 \ , t_4 - t_2^4/t_1^3 ) \text{ and } IR_{t_4} = (t_1 - t_3^4/t_4^3 \ , t_2 - t_3^3/t_4^2) \] are both two-generated ideals. Thus, by Lemma~\ref{lc-inj}, the map $H^3_I(R) \overset{.p} \rightarrow H^3_I(R)$ is injective for each nonzero prime integer $p$. The exact sequence of local cohomology modules induced by \[0 \rightarrow R \overset{.p} \rightarrow R \rightarrow R/pR \rightarrow 0\] shows that $H^3_I(R) \overset{.p} \rightarrow H^3_I(R)$ is surjective since $H^3_I (R/pR) = 0$. Therefore, $H^3_I(R)$ is a $\mathbb{Q}$-vector space. But the cohomological dimension of $IR_{\mathbb{Q}}$ is known to be two. We conclude that the cohomological dimension of $I$ is two. It is worth noting that $T/pT \cong R/(I+pR)$ is not $F$-pure, since \[(x^3y)^2 \notin (x^4)T/pT \text{ but } (x^3y)^{2p} \in (x^{4p})T/pT.\] \end{example} \section{Calculation of cohomological dimension} \begin{definition} Let $R = \oplus _{i \geq 0} R_i$ be a graded ring, and $n$ be a positive integer. We denote by $R^{(n)}$ the \emph{Veronese subring} of $R$ spanned by elements which have degree a multiple of $n$, i.e., $R^{(n)} = \oplus _{i \geq 0} R_{in}$. \end{definition} We now present the key result which helps us calculate the cohomological dimension of ideals defining Veronese subrings. \begin{proposition} \label{ci} Let $A$ be a domain. Let $T = A[x_1, \ldots, x_k]$ be a polynomial ring with the $\mathbb{N}$-grading $[T]_0 = A$ and deg $x_i = 1$ for each $i$. Consider the lexicographic ordering of monomials in $T$ induced by $x_1 > x_2 > \cdots >x_k$. Write a minimal presentation of $T^{(n)}$ as $R/I$ where $R = A[t_1, \ldots, t_d]$ with $t_i$ mapping to the $i$-th degree $n$ monomial under the above monomial ordering.
Then, for each $i$ such that $t_i \longmapsto x_j^n$ for some $j$, the ideal $IR_{t_i}$ is generated by a regular sequence of length $\mathrm{height}\ I$. \end{proposition} \begin{proof} By symmetry, it is enough to consider $t_1 \longmapsto x_1^n$. We claim that the ideal $IR_{t_1}$ is generated by the regular sequence \[t_{k+1} - t_2 ^2/t_1,\ t_{k+2} - t_2t_3/t_1,\ t_{k+3} - t_2t_4/t_1,\ \ldots, t_{k+1 \choose 2} - t_k ^2/t_1,\ t_{{k+1 \choose 2}+1}-t_2 ^3/t_1^2, \ldots \] \[\ldots,\ t_{k+2 \choose 3} - t_k^3/t_1^2\ , \ldots ,\ t_{d-1} - t_{k-1}t_2^{n-1}/t_1^{n-1},\ t_d - t_k^n/t_1^{n-1}.\] Note that the length of this regular sequence is equal to $\mathrm{height}\ I$. Let $J$ be the ideal \[(t_{k+1} -\alpha _{k+1},\ t_{k+2} -\alpha_{k+2},\ \ldots,\ t_d - \alpha_d)R_{t_1},\] where $\alpha_{k+1},\alpha_{k+2}, \ldots, \alpha_d$ are as above, i.e., $\alpha_{k+1} = t_2^2/t_1,\ \alpha_{k+2} = t_2t_3/t_1,\ \ldots,\text{ and } \alpha_d = t_k^n/t_1^{n-1}$. We claim that $J = IR_{t_1}$. It is clear that the ideal $J$ is contained in $IR_{t_1}$. Since $(R/I)_{t_1}$ is a subring of the fraction field of $R/I$, it follows that the ideal $IR_{t_1}$ is prime of height $d-k$. Define a ring homomorphism $\phi \colon R_{t_1} \rightarrow A[t_1, \ldots, t_k][\frac{1}{t_1}]$ such that $t_i \longmapsto t_i$ for $1 \leq i \leq k$ and $t_j \longmapsto \alpha _j$ for $k+1 \leq j \leq d$. Then the map $\phi$ is a surjective ring homomorphism with kernel $J$. Hence, $J$ is a prime ideal of $R_{t_1}$ of height $d-k$. Thus, $J \subseteq IR_{t_1}$ are prime ideals of the same height in the ring $R_{t_1}$. We conclude that the ideals $J$ and $IR_{t_1}$ are equal. \end{proof} \begin{remark} In the notation of Proposition~\ref{ci}, assume that the ring $A$ is regular. Then for each $t_i$ with $t_i \longmapsto x_j^n$, the ring \[(R/I)_{t_i} \cong T_{x_j ^n} ^{(n)} = A[x_j ^n\ , 1/x_j ^n\ , x_1/x_j\ , \ldots, x_{j-1}/x_j\ , x_{j+1}/x_j\ , \ldots, x_k/x_j]\] is regular.
\end{remark} One of the most well-known vanishing results for local cohomology modules in positive characteristic was given by Peskine and Szpiro: \begin{theorem} \cite[Proposition \RN{3}.4.1]{PS} \label{peskine} Let $R$ be a regular domain of positive characteristic $p$ and $I$ be an ideal of $R$ such that $R/I$ is a Cohen-Macaulay ring. Then \[H^i_I(R) = 0 \quad \text{for } i \neq \mathrm{height}\ I. \] \end{theorem} The proof uses the flatness of the Frobenius action on $R$ which characterizes regular rings in positive characteristic. Before we proceed to our main result, we would like to remark that the cohomological dimension of ideals may depend on the coefficient ring: \begin{remark} Let $k$ be a field. Let $R = \mathbb{Z}[u,v,w,x,y,z]$ and $R_k = R \otimes _{\mathbb{Z}}k$. Let $I$ be the ideal $(\Delta_1, \Delta_2, \Delta_3)R$ where $\Delta_1 = vz-wy$, $\Delta_2 = wx-uz$, and $\Delta_3 = uy-vx$. It is easily checked that $\mathrm{height}\ I = 2$. Then $\mathrm{cd}_{R/pR}(I, R/pR) = 2$ by Theorem~\ref{peskine}. However, Hochster observed that $H^3_I(R_{\mathbb{Q}})$ is nonzero, i.e., $\mathrm{cd}_{R_{\mathbb{Q}}}(I, R_{\mathbb{Q}}) = 3$. Since local cohomology commutes with localization, we also have that $H^3_I(R)$ is nonzero, i.e., $\mathrm{cd}_{R}(I, R) = 3$. We point the reader to \cite[Example 21.31]{twentyfour} for further details. \end{remark} In Theorem~\ref{main}, we obtain a vanishing result for local cohomology modules over the integers similar to Theorem~\ref{peskine}. \begin{proof} [Proof of Theorem \ref{main}] Let $h$ denote the height of the ideal $I$. Since $R$ is regular, the grade of $I$ equals $h$ so that $H^i_I(R) = 0$ for $i <h$. Further, by Grothendieck's nonvanishing theorem, $H^h_I (R) \neq 0$. Let $p$ be a nonzero prime integer.
The short exact sequence \[0 \rightarrow R \overset{.p} \rightarrow R \rightarrow R/pR \rightarrow 0\] induces \[\cdots \rightarrow H^i_I(R) \rightarrow H^i_I(R/pR) \overset{\delta} \rightarrow H^{i+1} _I (R) \overset{.p} \rightarrow H^{i+1} _I (R) \rightarrow \cdots .\] Note that the height of the ideal $IR/pR$ is also $h$. Hence, by Theorem~\ref{peskine} \[H^i _I(R/pR) = H^i _{IR/pR}(R/pR) = 0 \text{\; for \;} i \neq h. \] It follows that the map $H^i_I(R)\overset{.p} \rightarrow H^i_I(R)$ is an isomorphism for each $i > h+1$ and that the map $H^{h+1}_I(R) \overset{.p} \rightarrow H^{h+1}_I(R)$ is surjective. The crucial part that remains to show is that the map $H^{h+1}_I(R) \overset{.p} \rightarrow H^{h+1}_I(R)$ is also injective. For this, we appeal to Lemma~\ref{lc-inj}. After reordering indices, let $t_1, \ldots, t_k$ denote the preimages of $x_1^n, \ldots, x_k^n$ respectively. The ring $R/(I+pR)$ is a direct summand of the polynomial ring $T/pT$. Therefore, $[H^{d-h} _{(t_1, \ldots, t_k)} (R/(I+pR))]_0$ is zero. By symmetry, it is enough to show that the multiplication by $p$ map \[H^{h+1}_I (R)_{t_1} \overset{.p} \rightarrow H^{h+1}_I (R)_{t_1}\] is injective. Note that the $R$-module $H^{h+1}_I (R)_{t_1}$ is isomorphic to $H^{h+1}_{IR_{t_1}}(R_{t_1})$. Applying Proposition~\ref{ci} with $A = \mathbb{Z}$, we get that the ideal $IR_{t_1}$ is generated by a regular sequence of length $h$. Therefore, $H^{h+1}_{IR_{t_1}}(R_{t_1}) = 0$ and thus the map $H^{h+1}_I(R) \overset{.p} \rightarrow H^{h+1}_I(R)$ is injective. For $i >h$, by \cite[Example 4.6]{ogus}, the module $H^i_I(R) \otimes_{\mathbb{Z}} \mathbb{Q}$ vanishes so that $H^i_I(R)$ is equal to its $\mathbb{Z}$-torsion submodule. But the $\mathbb{Z}$-torsion submodule of $H^i_I(R)$ is zero since multiplication by each nonzero prime integer is injective. We therefore conclude that the local cohomology modules $H^i_I(R)$ vanish for $i> h$.
\end{proof} \begin{remark} Following the notation of Theorem~\ref{main}, all but finitely many prime integers are known to be nonzerodivisors on $H^i_I(R)$ for any $i$ by \cite[Theorem 3.1 (2)]{bblsz}. Note that in Theorem~\ref{main}, we proved that \textit{each} nonzero prime integer is a nonzerodivisor on $H^i_I(R)$ for every $i$. Consequently, any associated prime of the $R$-module $H^h_I(R)$ contracts to the zero ideal in the integers. In \cite[Section 4]{singh}, Singh constructs an example of a local cohomology module over a six-dimensional hypersurface, which has $p$-torsion elements for \emph{each} prime integer $p$, and consequently has infinitely many associated prime ideals. \end{remark} In \cite[Theorem 4.1]{raicu}, Raicu recovers the result due to Ogus in \cite[Example 4.6]{ogus} which we used in proving Theorem~\ref{main}, and also determines the $\mathcal{D}$-module structure of the only nonvanishing local cohomology module. Finally, we extend Theorem~\ref{main} to standard graded polynomial rings with coefficients from any commutative Noetherian ring. For this, we use the following proposition which is proved in \cite{BV} more generally when $R = \mathbb{Z}[t_1, \ldots, t_d]/J$ is a faithfully flat $\mathbb{Z}$-algebra. \begin{proposition} \cite[Proposition 3.14]{BV} \label{bv} Let $I$ be an ideal of the polynomial ring $R = \mathbb{Z}[t_1, \ldots, t_d]$ and $A$ be a ring. If there exists an integer $h$ such that $\mathrm{grade}\ I(R\otimes _{\mathbb{Z}}k) = h$ for every field $k$, then $\mathrm{grade}\ I(R\otimes _{\mathbb{Z}}A) = h$. Analogous statements hold for height. \end{proposition} \begin{theorem} Let $A$ be a commutative Noetherian ring and $T = A[x_1, \ldots, x_k]$ be a polynomial ring with the $\mathbb{N}$-grading $[T]_0 = A$ and deg $x_i = 1$ for each $i$. Consider a minimal presentation of $T^{(n)}$ as $R/I$. Then \[H^i_I(R) = 0 \quad \text{for } i \neq \mathrm{height}\ I.
\] \end{theorem} \begin{proof} Theorem~\ref{main} and Proposition~\ref{bv} together ensure that $\mathrm{height}$ $I$ and $\mathrm{grade}$ $I$ are equal. Therefore, $H^i _I(R) = 0 \text{\; for \;} i < \text{height }I$. Further, the map $\mathbb{Z} \longrightarrow A$ induces the map $\mathbb{Z}[t_1, \ldots, t_d] \longrightarrow R$ which makes $R$ into a $\mathbb{Z}[t_1, \ldots, t_d]$-module. By the right exactness of the top local cohomology, the cohomological dimension of $I$ in $R$ is at most the cohomological dimension of $I$ in $\mathbb{Z}[t_1, \ldots, t_d]$, which, by Theorem~\ref{main}, equals height $I$. \end{proof} \section*{Acknowledgement} The author would like to thank Anurag Singh for many valuable discussions and for his constant encouragement and support. \bibliographystyle{alpha}
https://arxiv.org/abs/2001.05767
Universal arrays
A word on $q$ symbols is a sequence of letters from a fixed alphabet of size $q$. For an integer $k\ge 1$, we say that a word $w$ is $k$-universal if, given an arbitrary word of length $k$, one can obtain it by removing entries from $w$. It is easily seen that the minimum length of a $k$-universal word on $q$ symbols is exactly $qk$. We prove that almost every word of length $(1+o(1))c_qk$ is $k$-universal with high probability, where $c_q$ is an explicit constant whose value is roughly $q\log q$. Moreover, we show that the $k$-universality property for uniformly chosen words exhibits a sharp threshold. Finally, by extending techniques of Alon [Geometric and Functional Analysis 27 (2017), no. 1, 1--32], we give asymptotically tight bounds for every higher dimensional analogue of this problem.
\section{Introduction} A \emph{universal} mathematical structure is one which contains all possible substructures of a particular form. Famous examples of universal structures are De Bruijn sequences~\cite{DeBruijn}, which are periodic words that contain, exactly once, every possible word of a fixed size as a substring. Universal structures were perhaps first considered in a general sense by Rado~\cite{Rado1964}, who studied the existence of universal graphs, hypergraphs and functions for various notions of containment. The study of universal (finite) graphs has received particular attention, and for these the containment relation of choice has been that of induced subgraphs. Thus a graph $G$ is said to be \emph{$k$-universal} if $G$ contains every graph on $k$ vertices as an induced subgraph. Two problems have been at the centre of the study of $k$-universal graphs. The first one is to determine $n$, the minimum value such that there exists a $k$-universal graph on $n$ vertices. In 1965, Moon~\cite{moon_1965} gave, through a simple counting argument, a lower bound of $2^{(k-1)/2}$ for that value of $n$. Recently, Alon~\cite{alon2017} showed that this lower bound is asymptotically tight, essentially settling this 50-year-old problem. Moreover, in a later paper, Alon and Sherman~\cite{alon2019} gave an asymptotically tight bound for the hypergraph generalisation of this problem. The second central problem in the study of $k$-universal graphs is the ``random'' analogue of the previous question, that is, finding the minimum $n$ such that ``almost every'' $n$-vertex graph is $k$-universal. After works of Bollob\'as and Thomason~\cite{bollobas1981}, and Brightwell and Kohayakawa~\cite{brightwell1993}, Alon~\cite{alon2017} has essentially settled this problem as well. Finding a $k$-universal graph is equivalent to finding an adjacency matrix which ``contains'' the adjacency matrices of all $k$-vertex graphs.
We say that an adjacency matrix $M$ contains another matrix $M'$ if we can obtain $M'$ from $M$ by iteratively applying the following operation: choose a value $i$ and delete the $i$-th row and the $i$-th column. It is thus natural to consider square matrices together with the notion of containment given by the operation of choosing values $i,j$ and deleting row~$i$ and column~$j$, and its associated notion of universality. More generally, we shall consider the analogue of this notion of containment for ``$d$-dimensional arrays'' for all $d\geqslant 1$. In what follows, we use the common notation $[k]=\{1, \dotsc ,k\}$ for any integer $k\geqslant 1$. Given an alphabet $\Sigma$, a positive integer $d$, and a $d$-tuple $\mathbf{n} = (n_1, \dotsc, n_d) \in \mathbb{N}^d$, a \emph{$d$-dimensional array of size $\mathbf{n}$ over $\Sigma$} is a collection of symbols $a_{\mathbf{i}} \in \Sigma$ indexed by the vectors $\mathbf{i} \in [n_1] \times [n_2] \times \dotsb \times [n_d]$. With regard to the alphabet $\Sigma$ we are only interested in its cardinality, and will assume $\Sigma=[q]$ whenever $|\Sigma|=q$. Thus $\Sigma$ will usually be clear from the context and, for short, we will just talk about \emph{$d$-arrays} of a certain size. A $d$-array of \emph{order $n$} is a $d$-array of size $(n, n, \dotsc, n)$. Note that $1$-arrays of size $n$ are commonly referred to as \emph{words} of \emph{length~$n$}. For general $d$-arrays and a corresponding notion of universality, we study the analogues of the two questions settled by Alon in the graph case (the ``deterministic'' and ``random'' questions). Whenever $d \geqslant 2$, we obtain asymptotically tight bounds for both questions (see Theorem~\ref{thm:2} and Corollary~\ref{coro}, below) by extending a method used by Alon in the graph case. However, this technique does not seem to work (at least not directly) when $d=1$, that is, for the case of words.
For this case we develop different tools which allow us to show tight bounds for both problems (see Theorems~\ref{theorem:universalwords-deterministic} and~\ref{theorem:universalwords-random}). Let us first define the notion of containment we will consider for general $d$-arrays, which is a generalisation of the containment notion for matrices discussed above. For fixed $d$, let $A = (a_{\mathbf{i}})$ be a $d$-array of size $(n_1, \dotsc, n_d)$. We define the \emph{coordinate restriction operation} on $A$ as follows. Choose some $j \in [d]$ and $\ell \in [n_j]$. Delete all the symbols whose $j$-th coordinate is $\ell$, to obtain a $d$-array of size $(n_1, n_2, \dotsc, n_{j-1}, n_j - 1, n_{j+1}, \dotsc, n_d)$. We say a $d$-array $A$ \emph{contains} a $d$-array $A'$ if we can obtain $A'$ by iteratively applying coordinate restriction operations, and consider universal $d$-arrays under this containment notion. For fixed $d,k\geqslant 1$, and a fixed alphabet $\Sigma$, we say a $d$-array over~$\Sigma$ is \emph{$k$-universal} if it contains every $d$-array over $\Sigma$ of size $(n_1, n_2, \dotsc , n_d)$, where $n_j \leqslant k$ for all $j\in [d]$. Note that if we want to show that a given $d$-array is $k$-universal, it is enough to show that it contains every $d$-array of order $k$. We let $f_{d}(q,k)$ be the minimum $n$ such that there exists a $k$-universal $d$-array of order $n$ over the $q$-symbol alphabet. It is important to distinguish the notion of containment we consider for the case of words from that considered in De Bruijn sequences. We obtain a smaller word from a larger one by deleting entries (this notion is commonly called \emph{subword} or \emph{subsequence}), while De Bruijn (cyclic) sequences contain each smaller word in a contiguous manner (a notion usually called \emph{substring} or \emph{factor}). In particular, a substring is always a subword but not vice versa.
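To make the subword/substring distinction concrete, the following short Python sketch (our illustration, not part of the paper; the function names are ours) implements both containment checks for words represented as strings.

```python
def is_subword(u, w):
    """True if u can be obtained from w by deleting entries (subsequence)."""
    it = iter(w)
    # greedy left-to-right scan: each symbol of u must be matched
    # strictly after the previous match
    return all(symbol in it for symbol in u)

def is_substring(u, w):
    """True if u occurs contiguously in w (factor)."""
    return u in w

# every substring is a subword, but not vice versa:
assert is_substring("121", "31215") and is_subword("121", "31215")
assert is_subword("111", "313131") and not is_substring("111", "313131")
```

The greedy left-to-right scan in `is_subword` is the same idea that underlies the universal-decomposition argument used for words later in the paper.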
From a combinatorial perspective, the notion of subword is just as natural, and has been considered in various extremal problems. Examples include the ``twins problem''~\cite{APP,bukh2016twins,bukh2014longest}, which asks for two disjoint identical subwords of a given word that leave as few unused symbols as possible. Another example is the ``longest common subsequence problem''~\cite{chvatal, kiwi2005expected}, which seeks the expected value of the longest common subsequence of two randomly chosen words. The notion of subword also has additional desirable properties. For instance, the subword density of a fixed-length word is continuous with respect to the cut-distance in the limit theory for word sequences~\cite{HKPS}. Our results in the case of words are the following. \begin{theorem} \label{theorem:universalwords-deterministic} Let $k \geqslant 1$ and $q \geqslant 2$ be integers. Then $f_1(q,k) = qk$. \end{theorem} This result establishes the gap between the notions of subword and substring. While a minimal $k$-universal word has length $qk$, a De Bruijn sequence has length $\Theta(q^k)$. We also obtain the following ``threshold'' behaviour for randomly chosen words to be $k$-universal. For any $q \geqslant 1$, let $H_q = \sum_{i=1}^q 1/i$ be the $q$th harmonic number. \begin{theorem} \label{theorem:universalwords-random} Let $q \geqslant 2$ be a fixed integer and $c_q = q H_q$. Consider a uniformly chosen word $w$ of length $n = n(k)$ over the $q$-symbol alphabet. For every $\varepsilon > 0$ we have \[ \probability[\,\text{$w$ is $k$-universal}\:] \rightarrow \begin{cases} 0 & \text{if } n \leqslant (1 - \varepsilon) c_q k, \text{ and} \\ 1 & \text{if } n \geqslant (1 + \varepsilon) c_q k, \end{cases} \] where the limit is taken as $k \rightarrow \infty$. \end{theorem} In particular, for the $2$-symbol alphabet, we have $f_1(2,k) = 2k$, while roughly $3k$ symbols are necessary and sufficient for a typical binary word of that length to be $k$-universal.
This last statement answers a question of Biers-Ariel, Godbole and Kelley~\cite{BGK18}. The following theorem and its corollary are our results for general $d$-arrays with $d\geqslant 2$. \begin{theorem}\label{thm:2} Let $d, q \geqslant 2$ be fixed integers. For every $\varepsilon>0$, a uniformly chosen $d$-array of order $n=(1+\varepsilon)\frac{k}{e} q^{ \frac{k^{d-1}}{d}}$ over the $q$-symbol alphabet is $k$-universal with high probability as $k\to\infty$. \end{theorem} Furthermore, a simple counting argument gives $f_d(q, k) \geqslant \frac{k}{e} q^{ \frac{k^{d-1}}{d}}$ (see Section~\ref{secarrays}). Thus we obtain the following. \begin{corollary}\label{coro} Let $d, q \geqslant 2$ be fixed integers. We have $f_d(q, k)= (1 + o_k(1)) \frac{k}{e} q^{ \frac{k^{d-1}}{d}}.$ \end{corollary} We point out that the cases $d=1$ and $d\geqslant 2$ behave in completely different manners. In the case $d=1$, the case of words, the value of $n$ in the random version is considerably larger than $f_1(q,k)$ (a similar scenario holds for the graph case~\cite{alon2017}). In contrast, for $d$-arrays with $d\geqslant 2$ the order which is necessary for random $d$-arrays to be $k$-universal is asymptotically equal to $f_d(q,k)$. The paper is organised as follows. The proofs of Theorems~\ref{theorem:universalwords-deterministic} and~\ref{theorem:universalwords-random} are found in Section~\ref{secwords}. In Section~\ref{secarrays}, we prove Theorem~\ref{thm:2} and give the counting argument which implies Corollary~\ref{coro}. The paper ends with concluding remarks in Section~\ref{seclast}. \section{Universal words}\label{secwords} In this section we prove Theorems~\ref{theorem:universalwords-deterministic} and~\ref{theorem:universalwords-random}. We will use $\Sigma = [q]$ as the fixed $q$-symbol alphabet. We recall the standard notation used to work with words. Given a word $w$ and an integer $k$, $w^k$ is the $k$-fold concatenation of $w$ with itself.
We write $\Sigma^k$ for the set of all words of length $k$ over $\Sigma$ and $\Sigma^\ast$ for the set of all words over $\Sigma$. Although Theorem~\ref{theorem:universalwords-deterministic} can be proved directly, we will derive it using stronger tools, which we will need to prove Theorem~\ref{theorem:universalwords-random}. To introduce these tools, we begin with a few definitions. Given any word $w$ in $\Sigma^\ast$, define $U_{\Sigma}(w)$ as the minimal prefix of $w$ which contains all symbols of $\Sigma$ if it exists, or $U_{\Sigma}(w) = w$ otherwise. Define $T_{\Sigma}(w)$ as $w$ with the prefix $U_{\Sigma}(w)$ removed. Given a word $w$, we can greedily decompose it in a unique way as $w = u_1 u_2 \dotsb u_{\ell} u'$ such that, for all $i \in [\ell]$, $u_i = U_{\Sigma}( u_i u_{i+1} \dotsb u_\ell u' )$ and $T_{\Sigma}(u_i u_{i+1} \dotsb u_\ell u') = u_{i+1} \dotsb u_{\ell} u'$. In particular, each $u_i$ contains all the symbols of $\Sigma$, while $u'$ (possibly empty) does not contain all the symbols of $\Sigma$. We say $u_1 u_2 \dotsb u_{\ell} u'$ is the \emph{$\Sigma$-universal decomposition of $w$} and we let $\nu_{\Sigma}(w) = \ell$. We can use $\nu_{\Sigma}(w)$ to characterise $k$-universal words, as follows. \begin{proposition} \label{proposition:universalword-characterisation} A word $w \in \Sigma^\ast$ is $k$-universal if and only if $\nu_{\Sigma}(w) \geqslant k$. \end{proposition} \begin{proof} Suppose $w$ satisfies $\nu_{\Sigma}(w) \geqslant k$. Then $w$ contains as a prefix the word $u_1 u_2 \dotsb u_k$, where each of the words $u_i$ contains all of the symbols from $\Sigma$. Then any word $x \in \Sigma^k$ can be found greedily as a subword in $w$ by finding the $i$-th symbol of $x$ inside the word~$u_i$. In the other direction, suppose $\nu_{\Sigma}(w) = k' < k$ and let $w = u_1 \dotsb u_{k'} u'$ be the $\Sigma$-universal decomposition of $w$.
Since each $u_i$ is the minimal prefix of $u_i \dotsb u_{k'} u'$ that contains all the symbols of $\Sigma$, it must have the form $u_i = v_i \sigma_i$, where $\sigma_i$ is a symbol in $\Sigma$ and $v_i$ does not use the symbol $\sigma_i$. Further, let $\sigma_{k' + 1}$ be any symbol in $\Sigma$ which does not appear in $u'$ (which exists by definition). We claim that $w$ does not contain the word $w' = \sigma_1 \sigma_2 \dotsb \sigma_{k'} \sigma_{k' + 1}$. Since $k'+1 \leqslant k$, this readily implies that $w$ is not $k$-universal. To find a contradiction, suppose that $w'$ is contained in $w$. The first symbol of $w'$ is $\sigma_1$, and the first time $\sigma_1$ appears in $w$ is at the end of $u_1$, and thus the remaining symbols must appear after the end of $u_1$. That means the word $\sigma_2 \dotsb \sigma_{k'} \sigma_{k' + 1}$ is contained in $u_2 \dotsb u_{k'} u'$. Using the same reasoning, we see that for all $j \leqslant k'$, the $j$-th symbol of $w'$ appears in $w$ only after the last symbol of $u_j$. Therefore, the last symbol of $w'$, which is $\sigma_{k' + 1}$, appears as a symbol in $u'$, a contradiction. \end{proof} Note that Theorem~\ref{theorem:universalwords-deterministic} follows trivially from Proposition~\ref{proposition:universalword-characterisation}. This is because any word containing all symbols from $\Sigma$ must have length at least $|\Sigma| = q$, thus a word $w$ with $\nu_\Sigma(w) \geqslant k$ must have at least $qk$ symbols. Equality is attained by $(12 \dotsb q)^k$. Now, we will need to estimate $\nu_{\Sigma}(w)$ for a uniformly chosen random word $w$. We will appeal to the well-known ``coupon-collector problem''. Given a $q$-sized set $Q$ and a sequence $X_1, X_2, \dotsc$ of independent and uniformly chosen random variables $X_i \in Q$ for all $i \geqslant 1$, define the random variable $T$ as the minimum integer such that $\{ X_1, \dotsc, X_T \} = Q$.
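These two ingredients, the greedy universal decomposition and the coupon-collector experiment, are easy to simulate. The following Python sketch (our illustration, not part of the paper; it encodes $\Sigma=[q]$ as the integers $1,\dotsc,q$) computes $\nu_{\Sigma}(w)$ greedily and estimates $\expectation[T]$ empirically.

```python
import random

def nu(w, q):
    """Greedily count the complete blocks of the Sigma-universal
    decomposition of w, i.e. nu_Sigma(w), for Sigma = {1, ..., q}."""
    seen, blocks = set(), 0
    for symbol in w:
        seen.add(symbol)
        if len(seen) == q:   # current block now contains every symbol
            blocks += 1
            seen = set()
    return blocks

# The word (1 2 ... q)^k attains the minimum length qk and has nu = k:
q, k = 3, 5
w = list(range(1, q + 1)) * k
assert len(w) == q * k and nu(w, q) == k

def coupon_T(q):
    """One coupon-collector experiment: draw until all q symbols appear."""
    seen, t = set(), 0
    while len(seen) < q:
        seen.add(random.randint(1, q))
        t += 1
    return t

# E[T] = q * H_q; for q = 3 this is 3 * (1 + 1/2 + 1/3) = 5.5
random.seed(0)
mean_T = sum(coupon_T(3) for _ in range(20000)) / 20000
assert abs(mean_T - 5.5) < 0.1
```

For $q=3$ the simulated mean is close to $qH_q = 5.5$, in line with $\expectation[T]=qH_q$.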
It is known that $T$ can be written as the sum of $q$ independent geometric random variables $T = G_1 + \dotsb + G_q$, where $G_j$ has parameter $j/q$ for each $j \in [q]$, and from this it is deduced that $\expectation[T] = c_q := q H_q$. Now we are ready for the proof of Theorem~\ref{theorem:universalwords-random}. \begin{proof}[Proof of Theorem~\ref{theorem:universalwords-random}] Let $\Sigma$ be an alphabet of size $q$. To estimate $\nu_\Sigma(w)$ of a random word $w$, we will couple $w$ with a word created from ``coupon-collector'' experiments, as follows. Define a random string $U \in \Sigma^\ast$ using the following process. Initially, let $U=\sigma_0$ be a word of length 1, where $\sigma_0$ is chosen uniformly from $\Sigma$. If $U$ already has all the symbols of $\Sigma$, stop. Otherwise, choose uniformly and independently a symbol $\sigma \in \Sigma$, update~$U$ by appending $\sigma$ at the end, and repeat. Clearly, the length $|U|$ of $U$ is distributed as the random variable $T$ defined before the start of the proof, and thus $\expectation[|U|] = c_q$. Given $k > 0$, let $U_1, \dotsc, U_{k}$ be independent random strings, each of them distributed as $U$, and let $U^{(k)} = U_1 U_2 \dotsb U_{k}$ be their concatenation. Crucially, we have $\nu_\Sigma(U^{(k)}) = k$, and each strict prefix $u$ of $U^{(k)}$ satisfies $\nu_\Sigma(u) < k$. Given $k, n > 0$, we construct a (random) word $w$ in $\Sigma^n$ as follows: if $|U^{(k)}| \geqslant n$ then let $w$ be the first $n$ symbols of $U^{(k)}$; otherwise, construct $w$ from $U^{(k)}$ by appending $n - |U^{(k)}|$ fresh random symbols at the end of $U^{(k)}$. Note that each symbol of $w$ is chosen independently and uniformly over the symbols of $\Sigma$, so $w$ corresponds exactly to a word on $\Sigma^n$ chosen uniformly at random.
By construction it is clear that, for all $k, n > 0$, \begin{align} \probability[\,\text{$w$ is $k$-universal}\:] = \probability[ \nu_\Sigma(w) \geqslant k ] = \probability[ |U^{(k)}| \leqslant n ], \label{equation:coupling} \end{align} where the first equality is due to Proposition~\ref{proposition:universalword-characterisation}. To estimate the last probability, note that $|U^{(k)}| = \sum_{i=1}^{k} |U_i|$ and recall that each of the $|U_i|$ has expectation equal to $c_q$. Thus, by the (Weak) Law of Large Numbers, we have that, for all $\varepsilon > 0$, \begin{align} \probability[ (1 - \varepsilon) c_q k \leqslant |U^{(k)}| \leqslant (1 + \varepsilon) c_q k ] \rightarrow 1, \label{equation:couponconcentration} \end{align} as $k$ goes to infinity. In particular, if $n \leqslant (1 - \varepsilon)c_q k$ then $\probability[ |U^{(k)}| \leqslant n ] \rightarrow 0$; and if $n \geqslant (1 + \varepsilon)c_q k$ then $\probability[ |U^{(k)}| \leqslant n ] \rightarrow 1$. By~\eqref{equation:coupling}, the result follows. \end{proof} \begin{remark} Theorem~\ref{theorem:universalwords-random} admits an improvement over the size of the error term $\varepsilon$, which can be replaced by any function in $\omega(k^{-1/2})$. We sketch the proof. We do the same as before, but instead of~\eqref{equation:couponconcentration} one should use that $\Pr[ | |U^{(k)}| - c_q k| > \omega(1) \sqrt{k} ] \rightarrow 0$, where $\omega(1)$ is any function that goes to infinity together with $k$. To prove this last statement, write each $|U_i|$ as the sum of $q$ geometric random variables $|U_i| = \sum_{j=1}^q G^{(i)}_j$, where $G^{(i)}_j$ has expectation~$q/j$. We want to bound the probability that $|U^{(k)}| - c_q k > \omega(1) \sqrt{k}$ holds (the event given by the ``reverse'' inequality can be treated analogously). If the inequality holds, then there exists $j \in [q]$ such that $\sum_{i=1}^k G^{(i)}_j - qk/j > \omega(1) \sqrt{k} / q$.
A sum of independent geometric random variables follows a negative binomial distribution, which admits a Chernoff--Hoeffding-type deviation bound (see, e.g., \cite[Problem~2.4]{DubhashiPanconesi2009}), and using it gives the result. \end{remark} \section{Universal $d$-arrays}\label{secarrays} As before, let $\Sigma = [q]$ be the $q$-symbol alphabet. For integers $d,k\geqslant 1$, we write $\mathcal A_d(\Sigma,k)$ for the set of all $d$-arrays of order $k$ over $\Sigma$. In this section, we prove Theorem~\ref{thm:2} and establish the lower bound for $f_d(q,k)$ which implies Corollary~\ref{coro}. To do so, we first need the following well-known estimates for binomial coefficients, most of which follow from Stirling's approximation. For all $n, k \geqslant 1$, \begin{align} k!\geqslant \left(\frac ke\right)^k\hspace{.5cm}\text{and}\hspace{.5cm} \binom{n}{k} \leqslant \left( \frac{en}{k} \right)^k. \label{equation:binomialsimpleupperbound} \end{align} Further, if $k \rightarrow \infty$ as $n\rightarrow \infty$, while $k = o(\sqrt{n})$, \begin{align} \binom{n}{k} = (1 + o(1)) \frac{1}{\sqrt{2 \pi k}} \left( \frac{en}{k} \right)^k, \label{equation:stirling} \end{align} and if $k=\Omega(n)$ and $k \leqslant n/2$ then \begin{align} \log_2\binom{n}{k} = (1 + o(1))H\left( \frac{k}{n} \right)n, \label{equation:entropy} \end{align} where $H(x)=-x\log_2x-(1-x)\log_2(1-x)$ is the binary entropy. The lower bound for $f_d(q,k)$ when $d\geqslant 2$ is given by the following counting argument. Notice that there are $q^{k^d}$ $q$-symbol $d$-arrays of order $k$. Therefore, a $q$-symbol $d$-array of order $n$ must satisfy \[\binom{n}{k}^d\geqslant q^{k^d}\] in order to contain all arrays of order $k$. By~\eqref{equation:binomialsimpleupperbound} and the definition of $f_d(q,k)$ we obtain \[\left(\frac{ef_d(q,k)}{k}\right)^{kd}\geqslant\binom{f_d(q,k)}{k}^d\geqslant q^{k^d},\] and thus we have \begin{align}\label{equation:lower} f_d(q,k)\geqslant \frac{k}{e} q^{k^{d-1}/d}.
\end{align} In light of Theorem~\ref{theorem:universalwords-deterministic}, we know that for $d=1$ the lower bound obtained here is considerably far from being tight. But we will show that it is asymptotically tight for all $d\geqslant 2$. In fact, it is asymptotically tight for the random version of the problem. In order to prove Theorem~\ref{thm:2} we follow an approach taken by Alon~\cite{alon2017} in the study of universal graphs. Before diving into the proof let us first give a rough outline. Given $k\in\mathbb N$ sufficiently large and $n=(1+o(1))\frac{k}{e} q^{k^{d-1}/d}$, let $A\in \mathcal A_d(\Sigma,n)$ be a uniformly chosen $d$-array of order $n$ over $\Sigma$. For a fixed array $M\in \mathcal A_d(\Sigma,k)$, we consider the random variable $X$ that counts the number of copies of $M$ in~$A$. Since there are $q^{k^d}$ $d$-arrays of order $k$ over $\Sigma$, it is enough to prove that $\probability[X=0]=o(q^{-k^d})$ and then use a union bound in order to conclude. However, it is not easy to prove this directly. Instead, we consider the random variable $Y$ which is the size of a maximum family of pairwise disjoint copies of~$M$ in~$A$. It is clear that $Y=0$ if and only if $X=0$. Therefore, it is enough to estimate $\probability[Y=0]$. The random variable $Y$ has the advantage that it is 1-Lipschitz, meaning that changing the value of one entry of the random array may change the value of $Y$ by at most~$1$. Therefore, we may use (a known consequence of) Talagrand's inequality in order to upper bound $\probability[Y=0]$. However, to be able to use this tool, we need estimates on the expected number of pairs of copies of $M$ in $A$ which overlap in some entries, which amounts to studying the variance of $X$. Grasping the asymptotic behaviour of this variance turns out to be the most technical part of our proof.
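To get a feel for the scale of these quantities, here is a small Python check (ours, purely illustrative and not part of the proof) that computes, for tiny parameters, the smallest $n$ satisfying the counting condition $\binom{n}{k}^d\geqslant q^{k^d}$ and compares it with the closed form $\frac{k}{e}q^{k^{d-1}/d}$.

```python
from math import comb, e

def counting_min_order(d, q, k):
    """Smallest n with C(n,k)^d >= q^(k^d): a necessary condition for a
    k-universal d-array of order n over q symbols (counting argument)."""
    n = k
    while comb(n, k) ** d < q ** (k ** d):
        n += 1
    return n

def closed_form(d, q, k):
    """The closed-form lower bound (k/e) * q^(k^(d-1)/d)."""
    return (k / e) * q ** (k ** (d - 1) / d)

# d = 2, q = 2, k = 3: C(n,3)^2 >= 2^9 = 512 first holds at n = 7,
# since C(6,3)^2 = 400 < 512 <= C(7,3)^2 = 1225.
assert counting_min_order(2, 2, 3) == 7
assert closed_form(2, 2, 3) < 7   # the closed form is the weaker bound
```

As expected, the closed form is slightly weaker than the exact binomial condition, since it discards the polynomial factors in Stirling's approximation.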
\begin{theorem}[Talagrand's inequality~{\cite[Theorem 7.7.1]{AlonSpencer2016}}] Let $\Omega=\prod_{i\in [r]}\Omega_i$ be a product probability space, with the product probability measure, and let $h:\Omega\to \mathbb R$ be a 1-Lipschitz function, that is, $|h(x)-h(y)|\leqslant 1$ when $x$ and $y$ differ in at most one coordinate. For $f\colon \mathbb N \rightarrow \mathbb N$, suppose that $h$ is $f$-certifiable, that is, if $x\in\Omega$ is such that $h(x)\geqslant s$ then there exists a set $I\subseteq [r]$ of size at most $f(s)$ such that if a vector $y\in \Omega$ coincides with $x$ on $I$, then $h(y)\geqslant s$. Then for $Y(x)=h(x)$ and all $b,t$, we have \[\probability[Y\leqslant b-t\sqrt{f(b)}]\cdot \probability[Y\geqslant b]\leqslant e^{-t^2/4}.\] \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:2}] Let $d, q \geqslant 2$ be fixed integers, let $k \geqslant 2$, and let $\varepsilon > 0$. Let $n=(1+\varepsilon)\frac{k}{e} q^{ \frac{k^{d-1}}{d}}$. Recall that we are interested in the asymptotic behaviour when $d, q$ are fixed and $k$ tends to infinity. In particular, we can assume whenever necessary that $k$ is sufficiently large with respect to $d, q, \varepsilon$. All asymptotic notation is with respect to $k$ tending to infinity. Let $M\in \mathcal A_d(\Sigma,k)$ be a fixed $d$-array of order $k$ over the $q$-symbol alphabet $\Sigma$, and let $A$ be a uniformly chosen array from $\mathcal A_d(\Sigma,n)$. Our aim is to find a good upper bound on the probability that $A$ does not contain $M$, i.e., one allowing us to use a union bound to prove the result. Let $\mathcal T$ denote the collection of subsets of $[n]^d$ of the form $T=T_1\times\dotsb\times T_d$, where $|T_i|=k$ for each $1\leqslant i\leqslant d$. Given $T\in\mathcal T$, let $T(A)$ be the subarray of $A$ with entries $a_{\mathbf{i}}$ for $\mathbf{i} \in T$. Let $X_T$ be the indicator function of the event that $T(A)$ induces a copy of $M$, and let $X=\sum_{T\in \mathcal T}X_T$ be the number of copies of $M$ in $A$.
Since for every $T\in\mathcal T$ we have $\expectation[X_T]=q^{-k^d}$, by linearity of expectation we have \begin{equation}\label{equation:mu} \mu:=\expectation[X]=\binom{n}{k}^d q^{-k^d}=(1+o(1))(2\pi k)^{-d/2} (1+\varepsilon)^{dk}\geqslant (16 \log q) k^{2d} , \end{equation} where the second equality follows from the choice of $n$ and~\eqref{equation:stirling}, while the inequality follows from the assumption that $k$ is large. It will be crucial to show that we have \begin{equation}\label{equation:variance2} \variance(X)\leqslant (1+o(1))\mu. \end{equation} To this end, we investigate (the expectation of) the random variable \[ Z = \sum_{T,T'} X_T X_{T'}, \] where the sum ranges over the pairs of distinct $T, T' \in \mathcal{T}$ which intersect in at least one cell. For a vector $\mathbf{i} = (i_1,\dots,i_d) \in [k]^d$, we write \[\Delta_{\mathbf{i}}=\sum_{T,T'\in\mathcal T_{\mathbf{i}}}\expectation[X_TX_{T'}],\] where $\mathcal T_{\mathbf{i}}$ denotes the collection of pairs $T,T'\in\mathcal T$ such that $|T_j \cap T'_j| = i_{j}$ for all $j \in [d]$. Equivalently, $T$ and $T'$ intersect on exactly $i_j$ indices on the $j$-th coordinate. Therefore, if $\Delta = \expectation[Z]$ and $\mathbf{k} = (k, k, \dotsc, k)$, then we have \begin{align} \Delta= \sum_{T,T'} \expectation[ X_T X_{T'} ] = \sum_{\mathbf{i} \in [k]^d \setminus \{ \mathbf{k} \}}\Delta_{\mathbf{i}}.\label{equation:Delta} \end{align} Given $i\in[k]$, we define \[\Lambda_i=\binom{n}{k}\binom{k}{i}\binom{n-k}{k-i} \hspace{.5cm}\text{and}\hspace{.5cm} L_d(i)=q^{\frac{i^{d}}{d}\left(1-({k}/{i})^{d-1}\right)}\frac1{(k-i)!}\binom{k}i\left(\frac{(1+\varepsilon)k}e\right)^{k-i}. \] In order to prove \eqref{equation:variance2} we will use the following two claims.
\begin{claim}\label{claim:Delta} For all $\mathbf{i} \in [k]^d$ we have \[\frac{\Delta_{\mathbf{i}}}{\mu}\leqslant\prod_{j\in[d]}L_{d}(i_j).\] \end{claim} \noindent \emph{Proof of Claim~\ref{claim:Delta}.} Let $\mathbf{i} = (i_1, \dotsc, i_d)$. First, note that the total number of pairs $T, T'$ which intersect on $i_j$ entries in the $j$-th coordinate is exactly $\prod_{j\in[d]}\Lambda_{i_j}$. Moreover, the union of two such subarrays $T$ and $T'$ spans exactly $2 k^d - i_1 \dotsb i_d$ cells. Then $X_T X_{T'} = 1$ holds if and only if each of those cells attains the correct symbol, which implies \[ \Delta_{\mathbf{i}} \leqslant q^{-(2k^d-i_1 \dotsb i_d)} \prod_{j\in[d]}\Lambda_{i_j}. \] By the AM-GM inequality applied to $i_1^d, \dotsc, i_d^d$, we have $i_1 \dotsb i_d \leqslant (\sum_{j=1}^d i_j^d) / d$. Thus we have \[\dfrac{\Delta_{\mathbf{i}}}{\mu} \leqslant \dfrac{q^{-(2k^d-i_1 \dotsb i_d)}\prod_{j\in[d]}\Lambda_{i_j}}{\binom{n}{k}^dq^{-k^d}}\leqslant q^{-k^d+\frac{1}{d} \sum_{j\in[d]}i_j^d}\prod_{j\in[d]}\binom{k}{i_j}\binom{n}{k-i_j}.\] Using that $\tbinom{n}{k-i}\leqslant n^{k-i}/(k-i)!$ and replacing $n=(1+\varepsilon)\frac{k}{e}q^{k^{d-1}/d}$ we have \[\begin{array}{ccl}\dfrac{\Delta_{\mathbf{i}}}{\mu}&\leqslant&\displaystyle q^{-k^d+\frac1d\sum_{j\in[d]}i_j^d}\prod_{j\in[d]}\binom{k}{i_j}\dfrac{n^{k-i_j}}{(k-i_j)!}\\ &\leqslant & \displaystyle \prod_{j\in[d]}q^{\frac1d(i_j^d-k^d)}\frac1{(k-i_j)!}\binom{k}{i_j}\left(\frac{(1+\varepsilon)k}e\right)^{k-i_j}q^{\frac1dk^{d-1}(k-i_j)}\\ &=&\displaystyle \prod_{j\in[d]}L_{d}(i_j), \end{array} \] as desired.\hfill $\lrcorner$ \begin{claim}\label{claim:correlations} If $1\leqslant i\leqslant k-1$, then $L_d(i)=o(k^{-d})$. \end{claim} \noindent \emph{Proof of Claim~\ref{claim:correlations}.} We will show that there exists $c > 0$ such that, for all $i$, $L_d(i) \leqslant e^{-ck}$ holds, which clearly yields the claim.
Without loss of generality we may assume that $\varepsilon\leqslant (\log q)/8$, as otherwise we can restrict to a smaller array. Setting $i=k-j$ and using the Bernoulli inequality $(1 + j/(k-j))^{d-1} \geqslant 1 + j(d-1)/(k-j)$, we have \begin{align*} L_d(k-j) & = \displaystyle q^{\frac{(k-j)^d}d(1-(1+j/(k-j))^{d-1})}\frac{1}{j!}\binom{k}{j}\left(\frac{(1+\varepsilon)k}e \right)^j \\ & \leqslant \displaystyle q^{-\frac{d-1}dj(k-j)^{d-1}}\frac{1}{j!}\binom{k}{j}\left(\frac{(1+\varepsilon)k}e \right)^j \leqslant \displaystyle q^{-\frac{d-1}dj(k-j)^{d-1}}\binom{k}{j}\left(\frac{(1+\varepsilon)k}j \right)^j, \end{align*} where in the last step we used \eqref{equation:binomialsimpleupperbound} to bound $j!$. Taking logarithms, using $d \geqslant 2$, and later using $\log(1 + \varepsilon) \leqslant \varepsilon$, we obtain \begin{align} \log L_d(k-j) & \leqslant - \frac{1}{2} j (k-j) \log q + j \log \left( \frac{(1+\varepsilon)k}{j} \right) + \log \binom{k}{j} \nonumber \\ & \leqslant - \frac{1}{2} j (k-j) \log q + j ( \varepsilon + \log k - \log j ) + \log \binom{k}{j}. \label{equation:logLd} \end{align} Now, let $\beta \in (0,1/2)$ be small enough so that \[ \log\left(\frac{1}{1 - \beta}\right) + 2 \log(2) H(\beta) \leqslant \frac{1}{16} \log q \] holds. We now split the proof into two cases. Assume first that we have $j \leqslant (1 - \beta) k$. In this case, we use \eqref{equation:binomialsimpleupperbound} to bound the binomial coefficient and $k - j \geqslant \beta k$ in \eqref{equation:logLd} to get \begin{align*} \log L_d(k-j) & \leqslant - \frac{1}{2} j \beta k \log q + j ( \varepsilon + \log k ) + j ( \log k + 1 - \log j ) \\ & \leqslant j \left[ - \frac{1}{2} \beta k \log q + \varepsilon + 2 \log k + 1 \right]. \end{align*} For $k$ large enough, we have $\varepsilon + 2 \log k + 1 \leqslant \frac{1}{4} \beta k \log q$, so the term in the brackets is negative.
Using $j \geqslant 1$ we finally get $\log L_d(k-j) \leqslant - \frac{1}{4} \beta k \log q$, or equivalently $L_d(k-j) \leqslant \exp\left( - \frac{1}{4} \beta k \log q \right)$, which finishes the proof in this case. Now, consider the remaining case $j > (1 - \beta) k$. Since $\beta < 1/2$, we have $\binom{k}{j} \leqslant \binom{k}{(1 - \beta)k} = \binom{k}{ \beta k}$. Using \eqref{equation:entropy}, we get $\log \binom{k}{j} \leqslant (1 + o(1)) \log(2) H(\beta) k \leqslant 2 \log(2) H(\beta) k$. We plug this into \eqref{equation:logLd} to get \begin{align*} \log L_d(k-j) & \leqslant - \frac{1}{2} (k-1) \log q + j ( \varepsilon + \log k -\log j ) + 2 \log(2) H(\beta) k \\ & \leqslant - \frac{1}{2} (k-1) \log q + j ( \varepsilon - \log (1 - \beta ) ) + 2 \log(2) H(\beta) k \\ & \leqslant k \left[ - \frac{1}{4} \log q + \varepsilon - \log(1 - \beta) + 2 \log(2) H(\beta) \right], \end{align*} where in the last step we used $j \leqslant k$ and $(k-1)/2 \geqslant k/4$, which holds since $k$ is large. Now we use the assumption $\varepsilon \leqslant (\log q) / 8$ and the choice of $\beta$ to get \begin{align*} \log L_d(k-j) & \leqslant k \left[ - \frac{1}{8} \log q - \log(1 - \beta) + 2 \log(2) H(\beta) \right] \leqslant - k \frac{\log q}{16}. \end{align*} Thus in this case we have $L_d(k-j) \leqslant \exp( - k \frac{\log q}{16} )$, which finishes the proof of the claim. \hfill $\lrcorner$ \vskip .1 in Since the sum in~\eqref{equation:Delta} is over all $k^d - 1$ tuples $\mathbf{i}$ in $[k]^d$ distinct from $(k,\dots,k)$, then Claim~\ref{claim:Delta} and Claim~\ref{claim:correlations} together imply that \begin{align} \Delta =o(\mu).\label{equation:covariance}\end{align} Now, since $X$ is a sum of zero-one random variables, we have \[\variance(X) \leqslant \expectation[X]+\sum_{T,T'\in\mathcal T} \covariance(X_T,X_{T'}).
\] In the sum we only need to consider the pairs $T, T' \in \mathcal{T}$ with non-trivial intersection (otherwise the variables $X_T, X_{T'}$ are independent and thus their covariance is zero). Further, we have $\covariance(X_T,X_{T'})\leqslant \expectation[X_TX_{T'}]$. Therefore, by~\eqref{equation:covariance} we have \begin{align*} \variance (X)\leqslant \mu+\Delta= (1+o(1))\mu, \end{align*} and so we have finally proved \eqref{equation:variance2}. By Chebyshev's inequality, and equations~\eqref{equation:mu} and~\eqref{equation:variance2}, we have \[\probability[|X-\mu|\geqslant \tfrac 14\mu]\leqslant \frac{16\variance(X)}{\mu^2}\leqslant \frac{32}{\mu}\to 0.\] Therefore, $X\geqslant 3\mu/4$ with probability at least $3/4$. Likewise, by Markov's inequality and~\eqref{equation:covariance} we have \[\probability[Z\geqslant \mu / 5] \leqslant \frac{ 5 \expectation[Z]}{\mu} = \frac{5 \Delta}{\mu}\to 0,\] and therefore $Z\leqslant \mu/4$ with probability at least $3/4$. In particular, both events hold at the same time with probability at least $1/2$. Let $Y$ denote the random variable which is the size of a maximum family of pairwise disjoint copies of $M$ in $A$. Removing one copy from each intersecting pair leaves a family of pairwise disjoint copies, so $Y\geqslant X-Z$. Since $X\geqslant 3\mu/4$ and $Z\leqslant \mu/4$ hold simultaneously with probability at least $1/2$, we deduce that \begin{equation}\label{equation:muhalf} \probability[Y\geqslant \mu/2]\geqslant 1/2. \end{equation} Notice also that $X = 0$ if and only if $Y = 0$. We are now ready to use Talagrand's inequality to finish the proof. Note that $h(A):=Y$ is $1$-Lipschitz, since by switching the value of one entry one can add or remove at most 1 copy of $M$ (the one using that entry). Moreover, $h(A)$ is $f$-certifiable for $f(s)=sk^d$.
Using $b=\mu/2$ and $t=k^{-d/2}\sqrt{\mu/2}$, Talagrand's inequality and \eqref{equation:muhalf} give us \[\probability[X=0] = \probability[Y=0]\leqslant 2e^{-\mu k^{-d}/8}.\] Finally, we use a union bound over all the possible choices of $M \in \mathcal A_d(\Sigma,k)$, to deduce that the probability that $A$ is not $k$-universal is at most \[2q^{k^d}e^{-\mu k^{-d}/8}\leqslant 2q^{k^d}e^{- 2k^{d}\log q}=2q^{-k^{d}}\to 0,\] where the inequality comes from~\eqref{equation:mu}. The result follows. \end{proof} \begin{remark} The constant error term $\varepsilon$ in Theorem~\ref{thm:2} can be improved to $\Omega(\log k / k)$. This can be seen by checking that setting $\varepsilon = C \log k / k$ (with $C$ a large constant) is enough for \eqref{equation:mu} to hold; this does not change the rest of the calculations. \end{remark} \section{Concluding remarks}\label{seclast} \subsection{Uniqueness of subwords} One could wonder whether a $k$-universal word can contain every word of length $k$ exactly once as a subword, just as De Bruijn sequences contain every substring exactly once. However, this is not the case for any $k, q\geqslant 2$. \begin{proposition} Let $k\geqslant 2$ and $q\geqslant 2$ be integers, and let $w$ be a $k$-universal word over the $q$-symbol alphabet. Then there exists a word $u$ of length $k$ such that $w$ contains at least two copies of $u$. \end{proposition} \begin{proof} Since $w$ is $k$-universal, Theorem~\ref{theorem:universalwords-deterministic} implies that $w$ has at least $qk$ symbols. Thus $w$ contains at least $\binom{qk}{k}$ subwords of length $k$ (counted with multiplicity). For all $1 \leqslant i < k$ we have $(qk-k+i)/i > q$ and thus \[ \binom{qk}{k} = \frac{qk (qk-1) \dotsb (qk - k + 1)}{k\cdot(k-1)!} = q \prod_{i=1}^{k-1} \frac{qk-k+i}{i} > q^k, \] so by the pigeonhole principle there is a word of length $k$ that appears at least twice as a subword in $w$.
\end{proof} The same proof together with~\eqref{equation:stirling} yields that, for $q$ fixed and $k$ large, for every universal word $w$ there is a word of length $k$ contained at least $\binom{qk}{k} q^{-k} = 2^{\Omega(k)}$ times in $w$. Notice that the lower bound~\eqref{equation:lower} for $f_d(q, k)$ when $d \geqslant 2$ follows from computing the size of a $k$-universal $d$-array containing every array of order $k$ exactly once. So perhaps studying the uniqueness of subarrays in universal $d$-arrays could help in obtaining even tighter bounds for these functions. \subsection{Explicit universal $d$-arrays} In Theorem~\ref{theorem:universalwords-deterministic} we establish that $f_1(q,k)=qk$ by giving an explicit construction of a minimal universal word. In contrast, in Theorem~\ref{thm:2} we establish the upper bound on $f_d(q,k)$ for $d\geqslant 2$ by showing that random $d$-arrays of that order are likely to be $k$-universal. It would be interesting to find ``explicit'' constructions of almost optimal universal arrays, for all relevant parameters. The simplest open question in this regard would be the following. \begin{question} Is it possible to give an explicit description of a $k$-universal $2$-array on $2$ symbols of optimal (or close to optimal) order? \end{question} In the setting of $d$-uniform hypergraphs, Alon and Sherman~\cite{alon2019} gave an explicit construction of $d$-graphs on $\Theta\left(2^{\binom{k}{d}/d}\right)$ vertices which contain every $k$-vertex $d$-graph as an induced subhypergraph, which is optimal up to the implied constant. \subsection{Universal random higher-dimensional permutations} For $\ell\geqslant 1$ an integer, let $S_\ell$ be the set of all permutations of $[\ell]$.
For $n\geqslant k\geqslant 1$, we say that a permutation $\sigma\in S_n$ \emph{contains} a permutation $\tau\in S_k$ if there are indices $1\leqslant i_1<\dots<i_k\leqslant n$ such that for all $j,\ell \in [k]$, $\sigma(i_j)<\sigma(i_\ell)$ if and only if $\tau(j)<\tau(\ell)$. A \emph{$k$-universal} permutation is one that contains all permutations of $S_k$. The question of the minimal $n$ such that there exists a $k$-universal permutation in $S_n$ was asked by Arratia~\cite{arratia}. Through a simple counting argument he observed that such an $n$ must satisfy $n\geqslant (1+o(1))k^2/e^2$ and conjectured that this was the optimal value of $n$ given $k$. The trivial bound which motivated this conjecture (now known to be false) was recently improved by Chroman, Kwan and Singhal~\cite{CKS2021}, although only by a small constant factor. The random version of this problem was posed by Alon (see~\cite{arratia}) who conjectured that a random permutation of order $(1+o(1))k^2/4$ is $k$-universal with high probability. If true, this bound would be tight, as can be deduced from the known results on the length of the longest increasing subsequence of random permutations. The best known upper bound for this problem is due to He and Kwan~\cite{xekwan}, who recently proved that a random permutation on $O(k^2\log\log k)$ elements is $k$-universal with high probability. The study of higher dimensional permutations is ripe for further research. A \emph{line} of a $d$-array $A = (a_{i_1, \dotsc, i_d})$ of order $n$ is a sequence of elements obtained by choosing some $j \in [n]$ and looking at the entries $a_{i_1, \dotsc, i_{j-1}, \ell, i_{j+1}, \dotsc, i_d}$, for some fixed $i_1, \dotsc, i_{j-1}, i_{j+1}, \dotsc, i_d \in [n]$ and $\ell$ ranging from $1$ to $n$. 
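To make the definition of a line concrete, the following minimal Python sketch (the function name \texttt{lines} and the list-of-lists representation are ours, purely for illustration) enumerates all $d\cdot n^{d-1}$ lines of a $d$-array of order $n$:

```python
from itertools import product

def lines(A, n, d):
    """Yield every line of a d-array A of order n (0-based indices).

    A line fixes all coordinates except one position j, and lets
    that coordinate range over 0..n-1.
    """
    for j in range(d):                      # the varying coordinate
        for fixed in product(range(n), repeat=d - 1):
            line = []
            for l in range(n):
                idx = fixed[:j] + (l,) + fixed[j:]
                entry = A
                for i in idx:               # walk down the nested lists
                    entry = entry[i]
                line.append(entry)
            yield line

# Example: a 2-array (matrix) of order 2; its lines are rows and columns.
A = [[1, 2],
     [3, 4]]
all_lines = list(lines(A, 2, 2))            # d * n^(d-1) = 4 lines
```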
Just as a usual permutation can be identified with a permutation matrix, it is possible to define a \emph{$d$-dimensional permutation} (henceforth, \emph{$d$-permutation}) of order $n$ as a $(d+1)$-array of order $n$ over $\{0,1\}$, where each line contains a unique $1$ entry (see~\cite{LinialLuria2014, LinialSimkin2018} for equivalent definitions and discussion). Looking for connections with the case of permutations, we propose the following notion of ``universality'' for $d$-permutations. A \emph{$d$-pattern} of order $k$ is a sequence $(\sigma_1, \dotsc, \sigma_d)$ where $\sigma_\ell \in S_k$ for all $\ell\in [d]$. We say a $d$-permutation $M$ of order $n$ \emph{contains} a $d$-pattern of order $k$ if there exists a sequence $x^{(1)}, \dotsc, x^{(k)} \in [n]^{d+1}$ of index vectors such that $M_{x^{(i)}_1 x^{(i)}_2 \dotsb x^{(i)}_{d+1}} = 1$ for all $i \in [k]$, $x^{(1)}_1 < x^{(2)}_1 < \dotsb < x^{(k)}_1$ (the first coordinates of the vectors are increasing), and further, for each $\ell \in [d]$ and all $i,j \in [k]$, it holds that $x^{(i)}_{\ell + 1} < x^{(j)}_{\ell + 1}$ if and only if $\sigma_\ell(i) < \sigma_\ell(j)$. Note that for $d = 1$ this is equivalent to the containment of one permutation in another. We say a $d$-permutation $M$ is \emph{$k$-pattern-universal} if it contains all $d$-patterns of order $k$. Linial and Simkin~\cite{LinialSimkin2018} considered ``monotone subsequences of length~$k$'' in $d$-permutations, which expressed in our language correspond to $d$-patterns of order~$k$ of the form $(\sigma, \dotsc, \sigma)$, where $\sigma$ is the identity function. They showed that the longest monotone subsequence in a random $d$-permutation of order $n$ has length $\Theta(n^{d/(d+1)})$ with high probability. This implies that a random $d$-permutation needs to have order at least $\Omega(k^{(d+1)/d})$ to be $k$-pattern-universal with high probability. In analogy with the case of permutations, we believe this to be tight.
\begin{conjecture}For $d\geqslant 2$, there exists a constant $C>0$ such that a random $d$-permutation of order $Ck^{(d+1)/d}$ is $k$-pattern-universal with high probability as $k\to\infty$. \end{conjecture} \section*{Acknowledgements} MPS was partially supported by ANID Doctoral scholarship ANID-PFCHA/ Doctorado Nacional/2017-21171132. MPS and DAQ thankfully acknowledge support from Programa Regional MATH-AMSUD, MATH190013; from Concurso para Proyectos de Investigaci\'on Conjunta CONICYT Chile -- FAPESP Brasil, 2019/13364-7; and from ANID + PIA/Apoyo a Centros Cient\'ificos y Tecnol\'ogicos de Excelencia con Financiamiento Basal AFB170001. DAQ thankfully acknowledges support from FONDECYT/ANID Iniciaci\'on en Investigaci\'on Grant 11201251. NSM was supported by the Czech Science Foundation, grant number GA19-08740S with institutional support RVO: 67985807. We also would like to thank the anonymous referees for their careful reading and helpful comments. \bibliographystyle{amsplain}
% arXiv:2001.05767, "Universal arrays" (math.CO).
https://arxiv.org/abs/2205.02798
Effective poset inequalities
We explore inequalities on linear extensions of posets and make them effective in different ways. First, we study the Björner--Wachs inequality and generalize it to inequalities on order polynomials and their $q$-analogues via direct injections and FKG inequalities. Second, we give an injective proof of the Sidorenko inequality with computational complexity significance, namely that the difference is in $\#P$. Third, we generalize the Sidorenko inequality to posets with small chain intersections and give complexity theoretic applications.
\section{Introduction} \medskip \subsection{Foreword}\label{ss:intro-foreword} There are two schools of thought on what to do when an \defng{interesting combinatorial inequality} is established. The first approach would be to treat it as a tool to prove a desired result. The inequality can still be sharpened or generalized as needed, but this effort is aimed at the applications, not at the inequality per se. The second approach is to treat the inequality as a result of importance in its own right. The emphasis then shifts to finding the ``right proof'' in an attempt to understand, refine or generalize it. This is where the nature of the inequality intervenes --- when both sides count combinatorial objects, the desire to relate these objects is overpowering. The inequality can be made \defna{effective} \hskip.03cm in several different ways. A \defng{direct injection} can give it a combinatorial interpretation for the difference or prove the equality conditions. Such an injection can also be a work of art, inspiring and thought-provoking in the best case. Alternatively, a \defng{technical proof} (say, probabilistic or algebraic) can establish tools for generalizations out of reach by direct combinatorial arguments. Both types of proof are most impactful when presented in combination. Making comparisons between different approaches can lead to further results and new open problems, and is a source of wonder at the beauty and diversity of mathematics. \medskip As the reader must have guessed, we aim to make effective several celebrated combinatorial inequalities for the numbers of linear extensions of finite posets: \smallskip \qquad $\circ$ \ the \defn{Bj\"orner--Wachs inequality}, \smallskip \qquad $\circ$ \ the \defn{Sidorenko inequality}, and \smallskip \qquad $\circ$ \ the \defn{generalized Stanley inequality}.
\smallskip \noindent Although there is a certain commonality of tools and approaches, our investigations of these inequalities are largely independent, united by the goal of being effective. In addition to injections, we also use \defng{probabilistic} and \defng{algebraic tools}, with some curious combinatorial twists. \smallskip \subsection{Bj\"orner--Wachs inequality}\label{ss:intro-HP} Let \. $P=(X,\prec)$ \. be a poset with \. $|X|=n$ \. elements. For each element \hskip.03cm $x \in X$, let \. $B(x) := \big\{y \in X \. : \. y \succcurlyeq x \big\}$ \. be the \defn{upper order ideal} generated by~$x$, and let \. $b(x):=|B(x)|$. A \defn{linear extension} of $P$ is a bijection \. $f: X \to [n]=\{1,\ldots,n\}$, such that \. $f(x) < f(y)$ \. for all \. $x \prec y$. Denote by \hskip.03cm $\Ec(P)$ \hskip.03cm the set of linear extensions of $P$, and let \. $e(P):=|\Ec(P)|$. \smallskip \begin{thm}[{\rm Bj\"orner and Wachs~\cite[Thm~6.3]{BW89}}{}]\label{t:HP} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset with \hskip.03cm $|X|=n$ \hskip.03cm elements. In the notation above, we have: \begin{equation}\label{eq:HP} e(P) \ \geq \ \. n! \.\cdot \. \prod_{x \in X} \, \frac{1}{b(x)}\,. \end{equation} \end{thm} \smallskip This inequality was popularized by Stanley who stated it without proof or a reference in \cite[Exc.~3.57]{Sta-EC}.\footnote{Richard Stanley informed us that he indeed took it from~\cite{BW89} (personal communication, March 27, 2022).} When the poset is a tree rooted at the minimal element, the inequality in the theorem is an equality known in the literature as the \defn{hook-length formula for trees}. A variation on the classical \defn{hook-length formulas} for straight and shifted Young diagrams, this case is usually attributed to Don Knuth (1973), see e.g.~\cite{Bea,SY}.
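The inequality \eqref{eq:HP} is easy to verify by brute force on small posets; the following minimal Python sketch (all names ours, not from the paper) checks it on the $3$-element ``V'' poset with one minimal element below two maximal ones, a tree rooted at the minimal element, so equality holds:

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def linear_extensions(n, relations):
    """All bijections f: X -> [n] with f(x) < f(y) for each pair x ≺ y.
    Elements are 0..n-1; relations lists the pairs (x, y) with x ≺ y.
    Brute force, fine for tiny posets."""
    return [p for p in permutations(range(n))      # p[x] = f(x) - 1
            if all(p[x] < p[y] for x, y in relations)]

def upper_ideal_sizes(n, relations):
    """b(x) = #{y : y ⪰ x}, via a naive transitive closure."""
    succ = {x: {x} for x in range(n)}
    changed = True
    while changed:
        changed = False
        for x, y in relations:
            new = succ[y] - succ[x]
            if new:
                succ[x] |= new
                changed = True
    return [len(succ[x]) for x in range(n)]

# The "V" poset: 0 ≺ 1 and 0 ≺ 2.
n, rel = 3, [(0, 1), (0, 2)]
e = len(linear_extensions(n, rel))                 # e(P)
bound = Fraction(factorial(n), 1)
for b in upper_ideal_sizes(n, rel):                # b = [3, 1, 1]
    bound /= b
# Björner–Wachs: e(P) >= n! / prod b(x); here both sides equal 2.
```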
Although for other families of posets the lower bound on~$e(P)$ given by~\eqref{eq:HP} is relatively weak, nothing better is known in full generality, see e.g.~\cite{BP,MPP-asy,Pak-case}. \smallskip We start by recalling the original direct injective proof by Bj\"orner and Wachs of the inequality~\eqref{eq:HP}. This allows us to prove that the inequality is in~${\textsc{\#P}}$ (Theorem~\ref{t:HP-SP}). We then obtain the following extension of Theorem~\ref{t:HP}. \smallskip Let \hskip.03cm $[k]:=\{1,\ldots,k\}$. For an integer \hskip.03cm $t\ge 1$, denote by \. $\Omega(P,t)$ \. the number of order preserving maps \. $g: X\to [t]$, i.e.\ maps which satisfy \. $g(x)\le g(y)$ \. for all \. $x \prec y$. This is the \defn{order polynomial} corresponding to poset~$P$, see e.g.~\cite[$\S$3.12]{Sta-EC}. \smallskip \begin{thm}\label{thm:HP order poly} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset with \hskip.03cm $|X|=n$ \hskip.03cm elements. Then, for every \hskip.03cm $t\in \mathbb N$, we have: \begin{equation}\label{eq:OP-HP} \Omega(P,t) \ \geq \ t^r \hskip.03cm (t+1)^{n-r} \, \prod_{x \in X} \. \frac{1}{b(x)}\,, \end{equation} where $r$ is the number of maximal elements of~$P$. \end{thm} \smallskip Let us note that \begin{equation}\label{eq:OP-asy} \Omega(P,t) \ \sim \ \frac{e(P)\, t^n}{n!} \quad \text{as} \ \ \ t \to \infty\hskip.03cm. \end{equation} Thus, Theorem~\ref{thm:HP order poly} implies Theorem~\ref{t:HP}. Note also that \. $\Omega(P,t) \hskip.03cm = \hskip.03cm \Omega(P^\ast,t)$, where \hskip.03cm $P^\ast=(X, \prec^\ast)$ \hskip.03cm is the poset where the relations are reversed: \hskip.03cm $x\prec y \ \Leftrightarrow \ y \prec^\ast x$. Thus, the theorem holds when maximal elements are replaced with minimal elements. The tools we use to establish Theorem~\ref{thm:HP order poly} are based on \defng{Shepp's lattice}, and are extremely far-reaching. They allow us to establish a variety of inequalities on the order polynomial as well as their $q$-analogues.
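The bound \eqref{eq:OP-HP} can likewise be checked by brute force on tiny posets; a minimal Python sketch (names ours, for illustration), again on the ``V'' poset $0\prec 1$, $0\prec 2$, where $b=(3,1,1)$ and $r=2$:

```python
from itertools import product
from fractions import Fraction

def order_polynomial(n, relations, t):
    """Omega(P, t): number of maps g: X -> [t] with g(x) <= g(y)
    whenever x ≺ y.  Brute force over all t^n maps."""
    return sum(
        1
        for g in product(range(1, t + 1), repeat=n)
        if all(g[x] <= g[y] for x, y in relations)
    )

# The "V" poset: 0 ≺ 1 and 0 ≺ 2, so b = [3, 1, 1] and r = 2.
n, rel, t = 3, [(0, 1), (0, 2)], 3
omega = order_polynomial(n, rel, t)
# Theorem bound: t^r (t+1)^(n-r) / prod b(x) = 9 * 4 / 3 = 12 <= 14
bound = Fraction(t**2 * (t + 1), 3 * 1 * 1)
```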
Notably, we establish the asymptotic version of \defng{Graham's conjecture} (Conjecture~\ref{conj:Graham}) via the following \defng{strict log-concavity} \hskip.03cm of the order polynomial: \smallskip \begin{thm}[{\rm See Theorem~\ref{thm:log-concave} and Theorem~\ref{thm:log-concave-strict}}{}] \label{thm:log-concave-intro} Let $P=(X,\prec)$ be a finite poset. Then, for every integer \hskip.03cm $t \geq 2$, we have: \[ \Omega(P,t)^2 \ > \ \Omega(P,t+1) \. \cdot\. \Omega(P,t-1).\] \end{thm} \smallskip Our next result is a general lower bound on the order polynomial strengthening the asymptotic formula~\eqref{eq:OP-asy}. \smallskip \begin{thm}\label{thm:order_le} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset with \hskip.03cm $|X|=n$ \hskip.03cm elements. Then, for every \hskip.03cm $t\in \mathbb N$, we have: \begin{equation}\label{eq:OP-gen} \Omega(P,t) \ \geq \ \frac{e(P)\, t^n}{n!}\.. \end{equation} \end{thm} \smallskip Our proof of Theorem~\ref{thm:order_le} uses a direct injection. Among other applications of this approach, see the equality conditions in Corollary~\ref{c:OP2-equality}. \smallskip Since the Bj\"orner--Wachs inequality~\eqref{eq:HP} can be rather weak in various special cases, neither of Theorems~\ref{thm:HP order poly} and~\ref{thm:order_le} implies the other (see Example~\ref{e:OP-Stanley}). In a different direction, inequality~\eqref{eq:OP-gen} strengthens the trivial inequality \begin{equation}\label{eq:OP-gen-easy} \Omega(P,t) \ \geq \ e(P) \cdot \binom{t}{n} \ = \ \frac{e(P)\,\. t\hskip.03cm (t-1)\cdots (t-n+1)}{n!}\.. \end{equation} Here the RHS counts the number of injections \. $f: X\to [t]$, which are naturally mapped onto~$\Ec(P)$. Note that~\eqref{eq:OP-gen} agrees with~\eqref{eq:OP-gen-easy} in the leading term given also by~\eqref{eq:OP-asy}, but is sharper in the second term of the asymptotics.
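Both the strict log-concavity of Theorem~\ref{thm:log-concave-intro} and the lower bound \eqref{eq:OP-gen} are easy to test numerically; a minimal Python sketch (names and the choice of test poset are ours), using the $4$-element ``N''-shaped poset:

```python
from itertools import product, permutations
from fractions import Fraction
from math import factorial

def omega(n, relations, t):
    # order polynomial Omega(P, t), brute force over all maps X -> [t]
    return sum(
        1
        for g in product(range(1, t + 1), repeat=n)
        if all(g[x] <= g[y] for x, y in relations)
    )

def num_linear_extensions(n, relations):
    # e(P), brute force over all bijections X -> [n]
    return sum(
        1
        for f in permutations(range(n))
        if all(f[x] < f[y] for x, y in relations)
    )

# The "N"-shaped poset on 4 elements: 0 ≺ 2, 1 ≺ 2, 1 ≺ 3.
n, rel = 4, [(0, 2), (1, 2), (1, 3)]
vals = {t: omega(n, rel, t) for t in range(1, 6)}
eP = num_linear_extensions(n, rel)
```

For this poset one can then verify $\Omega(P,t)^2 > \Omega(P,t+1)\,\Omega(P,t-1)$ for $t=2,3,4$, and $\Omega(P,t)\ge e(P)\,t^n/n!$ for each computed $t$.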
Finally, there is a remarkably simple proof of the Bj\"orner--Wachs inequality by Vic Reiner, via extension of the inequality to its $q$-analogue (Theorem~\ref{t:HP-q}). We also use our tools in~$\S$\ref{ss:q-analogue-our} to obtain new inequalities for the \defng{$q$-order polynomial}. \smallskip \subsection{Sidorenko inequality} \label{ss:intro-sid} Below we give an equivalent but somewhat nonstandard reformulation of the \defng{Sidorenko inequality} that makes it amenable for generalization. A more traditional version is given in Section~\ref{s:sid}. \smallskip A \emph{chain} in a poset \. $P=(X,\prec)$ \. is a subset \. $\{x_1,\ldots,x_\ell\} \subseteq X$, such that \. $x_1 \prec x_2 \prec \ldots \prec x_\ell\hskip.03cm.$ Denote by \hskip.03cm $\mathcal C(P)$ \hskip.03cm the set of chains in~$P$. \smallskip \begin{thm}[{\rm Sidorenko~\cite{Sid}}]\label{t:sid} Let \. $P=(X,\prec)$ \. and \. $Q=(X,\prec')$ \. be two posets on the same set with \. $|X|=n$ \. elements. Suppose \begin{equation}\label{eq:sid-chain-condition} \bigl|C \cap C'\bigr| \. \le \. 1 \quad \text{for all} \ \ C\in \mathcal C(P), \ C' \in \mathcal C(Q). \end{equation} Then: \begin{equation}\label{eq:sid} e(P) \. e(Q)\, \ge \, n! \end{equation} \end{thm} \smallskip Natural examples of posets \hskip.03cm $(P,Q)$ \hskip.03cm as in the theorem are the \defn{permutation posets} \hskip.03cm $\bigl(P_\sigma, P_{\overline \sigma}\bigr)$, where \hskip.03cm $P_\sigma=([n],\prec)$ \hskip.03cm is defined as $$ i\prec j \quad \Longleftrightarrow\quad i<j \ \ \text{and} \ \ \sigma(i)<\sigma(j)\,, \quad \text{for all} \ \ i,j\in [n], $$ and \. $\overline{\sigma}:=\bigl(\sigma(n),\ldots,\sigma(1)\bigr)$. In this case \hskip.03cm $P_{\sigma}$ \hskip.03cm is a \defng{$2$-dimensional poset}, and \hskip.03cm $P_{\overline \sigma}$ \hskip.03cm is its \defng{plane dual}. Sidorenko's original proof used combinatorial optimization, and also proved the equality conditions for~\eqref{eq:sid}, see~$\S$\ref{ss:sid-proof}.
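For the permutation posets above, the inequality \eqref{eq:sid} can be verified exhaustively for small $n$; a minimal Python sketch (names ours, for illustration) checks $e(P_\sigma)\hskip.03cm e(P_{\overline\sigma})\ge n!$ for all $\sigma\in S_4$, with equality attained at the identity permutation (chain times antichain):

```python
from itertools import permutations
from math import factorial

def num_linear_extensions(n, relations):
    # e(P), brute force over all bijections X -> [n]
    return sum(
        1
        for f in permutations(range(n))
        if all(f[x] < f[y] for x, y in relations)
    )

def perm_poset(sigma):
    # P_sigma (0-based): i ≺ j  iff  i < j and sigma[i] < sigma[j]
    n = len(sigma)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sigma[i] < sigma[j]]

n = 4
products = []
for sigma in permutations(range(n)):
    rev = sigma[::-1]                    # sigma-bar: the reversed sequence
    products.append(
        num_linear_extensions(n, perm_poset(sigma))
        * num_linear_extensions(n, perm_poset(rev))
    )
```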
In~\cite{BBS}, the authors gave an easy reduction to a special case of the (still open) \defng{Mahler conjecture} for \defng{convex corners}. That special case was resolved earlier by Saint-Raymond~\cite{StR}, and was reproved and further extended in a series of papers, see \cite{AASS,BBS} for the context and the references. \smallskip In this paper we give a direct injective proof of the Sidorenko inequality~\eqref{eq:sid}, which allows us to prove that the inequality is in~${\textsc{\#P}}$ (Theorem~\ref{t:sid-SP}). This completely resolves the open problem, posed by Morales and the last two authors in \cite{MPP}, of finding a combinatorial proof of~\eqref{eq:sid}. Although presented differently, our injection likely coincides with an injection of Gaetz and Gao~\cite{GG1}, see~$\S$\ref{ss:finrem-sid}; the latter was discovered independently and generalized to other Coxeter groups. \smallskip Our proof can also be extended to give the following generalization of Theorem~\ref{t:sid}. \smallskip \begin{thm}\label{t:sid-gen} Let \. $P=(X,\prec)$ \. and \. $Q=(X,\prec')$ \. be two posets on the same set with \. $|X|=n$ \. elements. Suppose $$ \bigl|C \cap C'\bigr| \. \le \. k \quad \text{for all} \ \ C\in \mathcal C(P), \ C' \in \mathcal C(Q). $$ Then: \begin{equation}\label{eq:sid-gen} e(P) \. e(Q)\, \ge \, \frac{n!}{k^{n-k}\. k!} \.. \end{equation} \end{thm} \smallskip The proof, examples and applications of this result are given in Section~\ref{s:sid}. \smallskip \subsection{Stanley inequality}\label{ss:intro-Sta} We start with the following inspiring \defn{Stanley inequality}: \smallskip \begin{thm}[{\rm Stanley~\cite{Sta-AF}}{}]\label{t:Sta} Let \. $P=(X,\prec)$ \. be a poset on \. $|X|=n$ \. elements. For an element \. $x\in X$ \. and integer \. $1\le a \le n$, let \. $\Ec(P,x,a)$ \. be the set of linear extensions \. $f\in \Ec(P)$ \. such that \. $f(x)=a$. Denote by \. $\textrm{\em N}(P, x,a):=\bigl|\Ec(P, x,a)\bigr|$ \. the number of such linear extensions.
Then: \begin{equation}\label{eq:Sta} \textrm{\em N}(P, x,a)^2 \, \ge \, \textrm{\em N}(P, x,a+1) \.\cdot \. \textrm{\em N}(P, x,a-1). \end{equation} \end{thm} \smallskip Stanley's original proof of this result is via reduction to the classical and very deep \emph{Alexandrov--Fenchel} (AF-) \emph{inequality} in convex geometry. While the latter has several proofs, see references in~\cite[$\S$7.1]{CP2}, none are elementary and direct even in the case of convex polytopes. With the aim to prove \eqref{eq:Sta} by an elementary argument, the (somewhat technical) proof in~\cite{CP} uses nothing but linear algebra. Finding a direct injective proof is a major open problem (see~$\S$\ref{ss:finrem-GP-injection}). In the absence of an injective proof, the \defng{equality conditions} can become \emph{more difficult} than the original inequality, cf.~$\S$\ref{ss:finrem-vanishing}. This is famously the case for the AF-inequality and many of its consequences. For the Stanley inequality, the equality conditions were discovered recently by Shenfeld and van Handel~\cite{SvH}, by a deep geometric argument. Fortunately, part of the equality conditions called the \defng{vanishing conditions}, are completely combinatorial. Denote by \hskip.03cm $\ell(x) := \bigl|\{y \in X\,{}:\,{}y\preccurlyeq x\}\bigr|$ \hskip.03cm and \hskip.03cm $b(x) := \bigl|\{y \in X\,{}:\,{}y\succcurlyeq x\}\bigr|$ \hskip.03cm the sizes of lower and upper ideals of \hskip.03cm $x\in X$, respectively. \smallskip \begin{thm}[{\rm Shenfeld and van Handel~\cite[Lemma~15.2]{SvH}}] \label{t:SvH-vanish} Let \. $P=(X,\prec)$ \. be a poset on \. $|X|=n$ \. elements, let \hskip.03cm $x\in X$ \hskip.03cm and \hskip.03cm $1\le a \le n$. Then \. $\textrm{\em N}(P, x,a)>0$ \. \underline{if and only if} \. $\ell(x)\le a$ \. and \. $b(x)\le n-a+1$. \end{thm} \smallskip Note that by the Stanley inequality, if \. $\textrm{N}(P, x,a)=0$, then \. 
$\textrm{N}(P, x,a+1)=0$ \. or \. $\textrm{N}(P, x,a-1)=0$, so whenever the conditions in the theorem are not satisfied, the inequality~\eqref{eq:Sta} holds with equality. We can now define the \hskip.03cm \defn{generalized Stanley inequality}. \smallskip \begin{thm}[{\rm Stanley~\cite{Sta-AF}}{}]\label{t:Sta-gen} Let \. $P=(X,\prec)$ \. be a poset on \. $|X|=n$ \. elements. Fix elements \. $x,z_1,\ldots,z_k\in X$ \. and integers \. $a,c_1,\ldots,c_k\in [n]$; we write \. ${\textbf{z}} =(z_1,\ldots,z_k)$ \. and \. $\textbf{\textbf{c}} =(c_1,\ldots,c_k)$. Let \. $\Ec_{{\textbf{z}}\hskip.03cm\emph{\textbf{\textbf{c}}}}(P,x,a)$ \. be the set of linear extensions \. $f\in \Ec(P)$ \. such that \. $f(x)=a$ \. and \. $f(z_i)=c_i$, for all \. $1\le i \le k$. Denote by \. $\textrm{\em N}_{{\textbf{z}}\hskip.03cm\textbf{\textbf{c}}}(P, x,a):=\bigl|\Ec_{{\textbf{z}}\hskip.03cm\textbf{\textbf{c}}}(P,x,a)\bigr|$ \. the number of such linear extensions. Then: \begin{equation}\label{eq:Sta-gen} \textrm{\em N}_{{\textbf{z}}\hskip.03cm\textbf{\textbf{c}}}(P, x,a)^2 \, \ge \, \textrm{\em N}_{{\textbf{z}}\hskip.03cm\textbf{\textbf{c}}}(P, x,a+1) \.\cdot \. \textrm{\em N}_{{\textbf{z}}\hskip.03cm\textbf{\textbf{c}}}(P, x,a-1). \end{equation} \end{thm} \medskip We can now state the \defng{vanishing conditions} for the generalized Stanley inequality. Without loss of generality, we can assume that the numbers~$\textbf{\textbf{c}}$ are in increasing order, in which case we can assume that the elements~${\textbf{z}}$ form a chain. Let \hskip.03cm $x,y\in X$ \hskip.03cm be two poset elements such that \. $x\prec y$. Define \. $h(x,y) \hskip.03cm := \hskip.03cm \#\{ z\in P, \text{ s.t. } x\prec z\prec y\}$. \smallskip \begin{thm}\label{t:vanish} Let \. $P=(X,\prec)$ \. be a poset on \. $|X|=n$ \. elements. Fix elements \. $u_1 \prec\ldots \prec u_k\in X$ \. and integers \. $1\le a_1 < \ldots < a_k \le n$; we write \. $\textbf{\textit{u}} =(u_1,\ldots,u_k)$ \. and \. $\textbf{\textbf{a}} =(a_1,\ldots,a_k)$.
Let \. $\Ec(P,\textbf{\textit{u}}, \textbf{{\textit{a}}})$ \. be the set of linear extensions \. $f\in \Ec(P)$ \. such that \. $f(u_i)=a_i$, for all \. $1\le i \le k$. Then \. $\bigl|\Ec(P, \textbf{\textit{u}}, \textbf{{\textit{a}}})\bigr| > 0$ \, \underline{if and only if} \begin{equation}\label{eq:Sta-gen-vanish} \aligned & \ell(u_i)\. \le \. a_i \., \quad b(u_i)\le n-a_i+1\,, \quad \ \text{for all} \quad 1\.\le \. i \. \le \. k\hskip.03cm, \ \ \, \text{and} \\ & a_j \. - \. a_i \, > \, h(u_i,u_j) \quad \ \text{for all} \quad 1\.\le \. i \. < \. j \. \le \. k\hskip.03cm. \endaligned \end{equation} \end{thm} \smallskip In Theorem~\ref{t:gen-Stanley-unique}, we also prove the \defng{uniqueness conditions} for the problem, i.e.\ necessary and sufficient conditions for \. $\bigl|\Ec(P, \textbf{\textit{u}}, \textbf{{\textit{a}}})\bigr| = 1$. We postpone the statement until~$\S$\ref{ss:restricted-unique}. \smallskip \subsection{Complexity implications}\label{ss:intro-CS} We assume the reader is familiar with basic \defna{Computational Complexity}, and refer to standard textbooks \cite{AB,MM,Pap} for definitions and notation. \smallskip Recall the \defng{counting complexity class} \. ${\textsc{\#P}}$ \. of functions which count the number of objects whose membership is decided in polynomial time. Let \. ${\textsc{GapP}}={\textsc{\#P}}-{\textsc{\#P}}$ \. be the closure of~${\textsc{\#P}}$ under subtraction, see e.g.~\cite{For}. Finally, let \hskip.03cm ${\textsc{GapP}}_{\ge 0} := {\textsc{GapP}} \cap \{u\ge 0\}$. Clearly, \. ${\textsc{\#P}}\subseteq {\textsc{GapP}}_{\ge 0}$\., but it remains open whether this inclusion is proper. For example, the \emph{Kronecker coefficients} \hskip.03cm satisfy \. $g(\lambda,\mu,\nu) \in {\textsc{GapP}}_{\ge 0}$. It is not known whether $g(\cdot)\in {\textsc{\#P}}$, and this remains a major open problem in Algebraic Combinatorics, see e.g.~\cite{PP}. \smallskip As before, let \. $P = (X,\prec)$ \.
be a poset on \hskip.03cm $n$ \hskip.03cm elements. Clearly, the function \hskip.03cm $e: P\to e(P)$ \hskip.03cm is in ${\textsc{\#P}}$, and is famously \hskip.03cm ${\textsc{\#P}}$-complete~\cite{BW}. In fact, the function \hskip.03cm $e(\cdot)$ \hskip.03cm is \hskip.03cm ${\textsc{\#P}}$-complete even when restricted to permutation posets \hskip.03cm $P_\sigma$, $\sigma \in S_n$, and posets of height two, see~\cite{DP}. Define \begin{equation}\label{eq:HP-function} \xi(P) \, := \, e(P) \.\cdot\. \prod_{x \in X} b(x) \, - \, \. n! \end{equation} Observe that \. $\xi\in {\textsc{GapP}}_{\ge 0}$ \. by the definition and the Bj\"orner--Wachs inequality~\eqref{eq:HP}. In fact, the injective proof of~\eqref{eq:HP} can be used to obtain the following effective version of the inequality: \begin{thm} \label{t:HP-SP} The function \. $\xi: P \to \mathbb N$ \. defined by~\eqref{eq:HP-function} \hskip.03cm is in~$\hskip.03cm{\textsc{\#P}}$. \end{thm} \smallskip Similarly, define \. ${\zeta}: \hskip.03cm P\times \mathbb N \hskip.03cm \to \mathbb N$ \. by \begin{equation}\label{eq:OP-function} {\zeta}(P,t) \, := \, \Omega(P,t) \. n! \, - \, e(P) \. t^n\.. \end{equation} We can now give an effective version of~\eqref{eq:OP-gen}: \smallskip \begin{thm} \label{t:OP-SP} The function \. ${\zeta}: P\times \mathbb N \to \mathbb N$ \. defined by~\eqref{eq:OP-function} \hskip.03cm is in~$\hskip.03cm{\textsc{\#P}}$. \end{thm} For every $\sigma\in S_n$, let \hskip.03cm $\eta: S_n \to \mathbb Z$ \hskip.03cm be defined as follows: \begin{equation}\label{eq:Sid-function} \eta(\sigma)\, := \, e(P_\sigma) \. e\bigl(P_{\overline{\sigma}}\bigr) \, - \, \. n! \end{equation} Observe that \. $\eta \in {\textsc{GapP}}_{\ge 0}$ \. by the definition and the Sidorenko inequality~\eqref{eq:sid}. In fact, our injective proof of~\eqref{eq:sid} can be used to obtain the following result: \smallskip \begin{thm} \label{t:sid-SP} The function \. $\eta: S_n \to \mathbb N$ \.
defined by~\eqref{eq:Sid-function} \hskip.03cm is in~$\hskip.03cm{\textsc{\#P}}$. \end{thm} \smallskip For the vanishing conditions of the generalized Stanley inequality, the implications are completely straightforward: \smallskip \begin{cor} \label{cor:Sta-gen-vanish-poly} In the conditions of Theorem~\ref{t:vanish}, deciding whether \. $\bigl|\Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})\bigr| > 0$ \.\hskip.03cm is in $\hskip.03cm{\textsc{P}}$. Moreover, when \. $\bigl|\Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})\bigr| > 0$, a linear extension \. $f\in \Ec(P,\textbf{\textit{u}}, \textbf{{\textit{a}}})$ \. can be found in polynomial time. \end{cor} \smallskip We conclude with a corollary of Theorem~\ref{t:gen-Stanley-unique}. \smallskip \begin{cor} \label{cor:Sta-gen-uniqueness-poly} In the conditions of Theorem~\ref{t:vanish}, deciding whether \. $\bigl|\Ec(P,\textbf{\textit{u}}, \textbf{{\textit{a}}})\bigr| = 1$ \.\hskip.03cm is in $\hskip.03cm{\textsc{P}}$. \end{cor} \smallskip \subsection{Structure of the paper} The paper is written in a straightforward manner, as we devote different sections to proofs of different results. These proofs are completely independent and largely self-contained. We are hoping they will appeal to a diverse readership. \smallskip We start with a short Section~\ref{s:notation}, where we give some basic notation used throughout the paper. In Section~\ref{s:injective}, we recall the original injective proof of the Bj\"orner--Wachs inequality~\eqref{eq:HP} and then prove Theorem~\ref{t:HP-SP}. Here we introduce promotions of linear extensions, a tool which appears repeatedly throughout the paper. In a lengthy Section~\ref{s:OP-induction}, we use Shepp's lattice to prove Theorem~\ref{thm:log-concave-intro} and other inequalities for the order polynomial.
The second half of this section is motivated by connection and applications to the \defng{Kahn--Saks Conjecture} (Conjecture~\ref{conj:KS-mon}) and the \defng{Graham Conjecture} (Conjecture~\ref{conj:Graham}), as we prove special cases of both of them. In the next Section~\ref{s:q-analogue}, we present an elegant proof by Reiner of the Bj\"orner--Wachs inequality. We then prove a $q$-analogue of \defng{Shepp's inequality} for the $q$-analogue of the order polynomial, by using the remarkable \defng{$q$-FKG inequality} by Bj\"orner. We continue with the general lower bound on the order polynomial (Section~\ref{s:OP-inj}), and prove Theorem~\ref{thm:order_le} by a direct injection. In Section~\ref{s:restricted}, we prove the vanishing conditions (Theorem~\ref{t:vanish}) and uniqueness conditions (Theorem~\ref{t:gen-Stanley-unique}), from which Corollaries~\ref{cor:Sta-gen-vanish-poly} and~\ref{cor:Sta-gen-uniqueness-poly} easily follow. Our proof is based on an algebraic approach of Coxeter group action on linear extensions, see~$\S$\ref{ss:finrem-algebraic} for some history of the subject. In Section~\ref{s:sid}, we give an injective proof of the Sidorenko inequality, and prove its extension Theorem~\ref{t:sid-gen}. We then derive Theorem~\ref{t:sid-SP} which is surprisingly nontrivial given the many other proofs of the inequality (see~$\S$\ref{ss:finrem-sid}). We conclude with Section~\ref{s:finrem} containing lengthy historical remarks and open problems. \medskip \section{Basic definitions and notation} \label{s:notation} In a poset \hskip.03cm $P=(X,\prec)$, elements \hskip.03cm $x,y\in X$ \hskip.03cm are called \defn{parallel} or \defn{incomparable} if \hskip.03cm $x\not\prec y$ \hskip.03cm and \hskip.03cm $y \not \prec x$. We write \. $x\parallel y$ \. in this case. Element \hskip.03cm $x\in X$ \hskip.03cm is said to \defn{cover} \hskip.03cm $y\in X$, if \hskip.03cm $y\prec x$ \hskip.03cm and there are no elements \hskip.03cm $z\in X$ \hskip.03cm such that \. 
$y\prec z \prec x$. A \defn{chain} is a subset \hskip.03cm $C\subset X$ \hskip.03cm of pairwise comparable elements. The \defn{height} of poset \hskip.03cm $P=(X,\prec)$ \hskip.03cm is the size of the maximal chain. An \defn{antichain} is a subset \hskip.03cm $A\subset X$ \hskip.03cm of pairwise incomparable elements. The \defn{width} of poset \hskip.03cm $P=(X,\prec)$ \hskip.03cm is the size of the maximal antichain. A \defn{dual poset} \hskip.03cm is a poset \hskip.03cm $P^\ast=(X,\prec^\ast)$, where \hskip.03cm $x\prec^\ast y$ \hskip.03cm if and only if \hskip.03cm $y \prec x$. A \defn{disjoint sum} \hskip.03cm $P+Q$ \hskip.03cm of posets \hskip.03cm $P=(X,\prec)$ \hskip.03cm and \hskip.03cm $Q=(Y,\prec')$ \. is a poset on \hskip.03cm $(X\cup Y,\prec^{\small{\ts\diamond\ts}})$, where the relation $\prec^{\small{\ts\diamond\ts}}$ coincides with $\prec$ and $\prec'$ on $X$~and~$Y$, and \. $x\.\|\. y$ \. for all \hskip.03cm $x\in X$, $y\in Y$. A \defn{linear sum} \hskip.03cm $P\oplus Q$ \hskip.03cm of posets \hskip.03cm $P=(X,\prec)$ \hskip.03cm and \hskip.03cm $Q=(Y,\prec')$ \. is a poset on \hskip.03cm $(X\cup Y,\prec^{\small{\ts\diamond\ts}})$, where the relation $\prec^{\small{\ts\diamond\ts}}$ coincides with $\prec$ and $\prec'$ on $X$~and~$Y$, and \. $x\prec^{\small{\ts\diamond\ts}} y$ \. for all \hskip.03cm $x\in X$, $y\in Y$. A \defn{product} \hskip.03cm $P\times Q$ \hskip.03cm of posets \hskip.03cm $P=(X,\prec)$ \hskip.03cm and \hskip.03cm $Q=(Y,\prec')$ \. is a poset on \hskip.03cm $(X\times Y,\prec^{\small{\ts\diamond\ts}})$, where \. $(x,y) \preccurlyeq^{\small{\ts\diamond\ts}} (x',y')$ \. if and only if \. $x \preccurlyeq x'$ \. and \. $y \preccurlyeq' y'$, for all \hskip.03cm $x,x'\in X$ \hskip.03cm and \hskip.03cm $y,y'\in Y$. Posets constructed from one-element posets by recursively taking disjoint and linear sums are called \defn{series-parallel}.
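For series-parallel posets, $e(P)$ is computable directly from the standard recursions \. $e(P\oplus Q)=e(P)\, e(Q)$ \. and \. $e(P+Q)=\binom{|P|+|Q|}{|P|}\, e(P)\, e(Q)$ \. (shuffling the two linear extensions); a minimal Python sketch (the nested-tuple encoding and all names are ours, for illustration):

```python
from math import comb

# A series-parallel poset encoded as a nested tuple:
#   ("x",)        a single element
#   ("+", L, R)   disjoint sum  L + R
#   ("o", L, R)   linear sum    L ⊕ R  (all of L below all of R)

def size(p):
    return 1 if p[0] == "x" else size(p[1]) + size(p[2])

def e(p):
    """e(P) for a series-parallel poset P, via the recursions
    e(L ⊕ R) = e(L) e(R)  and  e(L + R) = C(|L|+|R|, |L|) e(L) e(R)."""
    if p[0] == "x":
        return 1
    l, r = p[1], p[2]
    if p[0] == "o":
        return e(l) * e(r)
    return comb(size(l) + size(r), size(l)) * e(l) * e(r)

x = ("x",)
chain3 = ("o", x, ("o", x, x))       # the 3-chain C_3
antichain3 = ("+", x, ("+", x, x))   # the 3-antichain A_3
vee = ("o", x, ("+", x, x))          # one minimum below a 2-antichain
```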
Both \defn{$n$-chain} \hskip.03cm $C_n$ \hskip.03cm and \defn{$n$-antichain} \hskip.03cm $A_n$ \hskip.03cm are examples of series-parallel posets. For a subset \hskip.03cm $Y\subset X$, a \hskip.03cm \defn{restriction} \hskip.03cm of the poset \hskip.03cm $P=(X,\prec)$ \hskip.03cm to \hskip.03cm $X\smallsetminus Y$ \hskip.03cm is a subposet \hskip.03cm $(X\smallsetminus Y,\prec)$ \hskip.03cm of~$P$, which we denote by \hskip.03cm $P\smallsetminus Y$ \hskip.03cm or \hskip.03cm $P|_{X\smallsetminus Y}$. For a poset \hskip.03cm $P=(X,\prec)$, a function \hskip.03cm $f: X\to \mathbb R$ \hskip.03cm is called \defn{$\prec$-increasing} \hskip.03cm if \hskip.03cm $f(x)\le f(y)$ \hskip.03cm for all \hskip.03cm $x\preccurlyeq y\hskip.03cm;$ such functions are also called \defn{weakly order-preserving} in a different context. The \hskip.03cm \defn{$\prec$-decreasing} \hskip.03cm functions are defined analogously. Throughout the paper we use \, $\mathbb N=\{0,1,2,\ldots\}$, \. $\mathbb P=\mathbb N_{\ge 1} =\{1,2,\ldots\}$ \. and \. $[n]=\{1,\ldots,n\}$. \medskip \section{Injective proof of the Bj\"orner--Wachs inequality}\label{s:injective} In this short section we recap the original proof by Bj\"orner and Wachs. We do this both as a warmup and as a way to introduce some definitions and ideas that will prove useful throughout the paper. As a quick application, we obtain the proof of Theorem~\ref{t:HP-SP}. The reader well familiar with~\cite{BW89} can skip this section. \smallskip Denote by \hskip.03cm $\mathcal S=\mathcal S(P)$ \hskip.03cm the set of all bijections \. $\sigma: X\to [n]$, so that \. $\Ec(P) \subseteq \mathcal S(P)$. Denote by \hskip.03cm $\mathcal B=\mathcal B(P)$ \hskip.03cm the set of maps \. $g: X\to X$ \. such that \. $g(x)\succcurlyeq x$ \. for all \. $x\in X$. The inequality~\eqref{eq:HP} can then be written as: $$(\ast) \qquad \bigl|\mathcal S(P)\bigr| \, \le \, \bigl|\Ec(P)\bigr| \. \cdot \. \bigl|\mathcal B(P)\bigr| \..
$$ We prove \hskip.03cm $(\ast)$ \hskip.03cm by a direct injection \. $\Phi: \mathcal S \to \Ec \times \mathcal B$ \. defined as follows. We say that a bijection \hskip.03cm $f: X\to [n]$ \hskip.03cm is \defn{sorted} on a subset \hskip.03cm $Y \subseteq X$, if \. $f(x)< f(y)$ \. for all \. $x, \hskip.03cm y\in Y$ \. such that \. $x\prec y$. Fix a linear extension \hskip.03cm $\alpha\in \Ec(P)$ and label the elements of $X$ naturally according to $\alpha$, so that $x_i = \alpha^{-1}(i)$. For every \. $\sigma\in \mathcal S$, proceed with the following \defng{sorting algorithm} for the elements \. $x_n,\ldots,x_1$ \. in this order. At the $k$-th step, take the element \. $x=x_{n-k+1}$ \. and let \. $f(x):=\sigma(x)$. If \hskip.03cm $\sigma(x)$ is the smallest of \hskip.03cm $\{f(y) : y\in B(x)\}$, do nothing. Otherwise, start the \defn{demotion} of \hskip.03cm $x$ \hskip.03cm by swapping its value $f(x)$ with the smallest $f(x')$, where \. $x'\in B(x)$. Repeat this with $x'$, etc., until all elements in \hskip.03cm $B(x)$ \hskip.03cm are sorted. Let \hskip.03cm $g(x):=y$, where $y$ is the element of $B(x)$ to which $x$ is demoted (i.e.\ the largest element in $B(x)$ affected by the demotion). At the end of the sorting algorithm, we obtain a bijection \hskip.03cm $f: X\to [n]$ \hskip.03cm that is sorted on the whole~$X$, i.e. $f \in \Ec(P)$. We also obtain a map \hskip.03cm $g\in \mathcal B(P)$. Define \. $\Phi(\sigma):=(f,g)$. \smallskip \begin{prop}[{\cite{BW89}}] \label{p:HP-injection} The map \. $\Phi: \hskip.03cm \mathcal S(P) \. \to \. \Ec(P)\hskip.03cm\times \hskip.03cm\mathcal B(P)$ \. defined above is an injection. \end{prop} \begin{proof} Define the \emph{inverse construction} as follows. Proceed through the reverse order of the elements \. $x_1,\ldots,x_n$. At the \hskip.03cm $(n-k)$-th step, take the element \. $x=x_{n-k}$ \. and let \. $y=g(x)$. At this step, the bijection \. $f: X\to [n]$ \. is sorted on \. $B(x)$.
Start the \defn{promotion} of $y$ by swapping $f(y)$ with the maximal $f(y')$, over all \. $x\preccurlyeq y' \prec y$, until eventually $f(y)$ is promoted to the element~$x$. Denote by \. $\sigma\in \mathcal S(P)$ \. the result of this iterated promotion and define a map \. $\Psi(f,g):=\sigma$. Now observe that for all \. $\sigma\in \mathcal S(P)$ \. we have \. $\Psi(\Phi(\sigma))=\sigma$, since the map \. $\Psi$ \. retraces each step of $\Phi$ by the properties of promotion and demotion. This implies that $\Phi$ is an injection and completes the proof of the claim. \end{proof} \smallskip \begin{proof}[Proof of Theorem~\ref{t:HP-SP}] Let \. $\mathcal H(P)\subseteq \Ec(P) \times \mathcal B(P)$ \. be the set of pairs \. $(f,g)\in \Ec(P) \times \mathcal B(P)$, such that \. $\Phi(\Psi(f,g))\ne (f,g)$. By definition, \. $\bigl|\mathcal H(P)\bigr| = \xi(P)$. Since both $\Phi$ and $\Psi$ are computable in polynomial time, so is membership in~$\mathcal H(P)$. This proves the result. \end{proof} \smallskip A series-parallel poset \hskip.03cm $P=(X,\prec)$ \hskip.03cm is called an \defn{ordered forest} if it is a disjoint union of rooted trees, where each tree is rooted at its unique minimal element. \smallskip \begin{prop}[{\cite{BW89}}]\label{p:HP-forest} The Bj\"orner--Wachs inequality~\eqref{eq:HP} is an equality \hskip.03cm \underline{if and only if} \hskip.03cm $P$ \hskip.03cm is an ordered forest. \end{prop} \begin{proof} As mentioned in the introduction, for the ``if'' direction, the equality can be easily proved by induction, see e.g.~\cite{Bea,SY}. For the ``only if'' direction, let \. $x,y,z\in X$ \. be poset elements such that \. $x\prec z$, \. $y\prec z$, \. $x \.\|\. y$, and such that both elements \hskip.03cm $x,y$ \hskip.03cm are covered by~$z$. We claim that \eqref{eq:HP} is a strict inequality in this case. In the notation above, choose \. $g\in \mathcal B(P)$ \. such that \. $g(x)=g(y)=z$ \. and \. $g(s)=s$ \.
for all $s \in X \smallsetminus \{x,y\}$. It is easy to see that there exists \. $f\in \Ec(P)$ \. such that \. $f(x)= k-1$, \. $f(y)=k$ \. and \. $f(z)=k+1$, for some \. $1< k< n$. Assume that $x$ precedes $y$ in the natural labeling. Applying $\Psi$, we see that after promoting $f(x)$ to $z$, the result is no longer a linear extension on $B(y)$, and thus \. $\Psi$ \. is not defined there. Thus \.$\Phi$\. is not a surjection, which proves the claim. Finally, observe that $P$ is an ordered forest if and only if every poset element covers at most one element. This proves the result. \end{proof} \medskip \section{Bounding order polynomial by the FKG inequality} \label{s:OP-induction} In this section we prove the bound on the order polynomial from Theorem~\ref{thm:HP order poly}, using an inductive approach and an application of the FKG inequality on Shepp's lattice. \smallskip \subsection{Shepp's lattice and the FKG inequality}\label{ss:Shepp lattice} Recall that a \defnb{lattice} \. ${\mathcal{L}}:=(L,\prec^{\small{\ts\diamond\ts}})$ \. is a partially ordered set on~$L$, such that every \hskip.03cm $a,b\in L$ \hskip.03cm has a unique least upper bound called \defnb{join} \hskip.03cm $a \vee b$, and a unique greatest lower bound called \defnb{meet} \hskip.03cm $a \wedge b$. A lattice is called \defnb{distributive} \hskip.03cm if \[ a \wedge (b \vee c) \ = \ (a \wedge b) \vee (a \wedge c) \quad \text{ for all } \ a, \hskip.03cm b, \hskip.03cm c\hskip.03cm \in L\hskip.03cm. \] A function \. $\mu: L\to \Rb_{\geq 0}$ \. is called \defnb{log-supermodular} \hskip.03cm if \[ \mu(a) \. \mu(b) \ \leq \ \mu(a \wedge b) \. \mu(a \vee b) \quad \text{ for all } \ a,\hskip.03cm b \in L\hskip.03cm. \] Fix an integer \hskip.03cm $t>0$. Let \. $P=(X,\prec)$ \. be a poset on \hskip.03cm $|X|=n$ \hskip.03cm elements, and let \. $X=Y \sqcup Z$ \. be a partition of $X$ into two disjoint subsets. \defnb{Shepp's lattice} \.
${\mathcal{L}}= {\mathcal{L}}_{Y,Z,t} \ := \ (L,\prec^{\small{\ts\diamond\ts}})$ \. is defined as \[ L \ := \ \big\{ \textbf{\textit{v}} = (v_x)_{x \in X} \, : \, 1 \. \leq \. v_x \. \leq \. t \big\}, \] and let \begin{equation*} \textbf{\textit{v}} \. \preccurlyeq^{\small{\ts\diamond\ts}} \. \textbf{w} \quad \Longleftrightarrow \quad \left\{\aligned & \, v_y \. \leq \. w_y & \text{for all} \ \ y \in Y \\ & \, v_z \. \geq \. w_z & \text{for all} \ \ z \in Z \endaligned \right. \end{equation*} Let \. $\mu=\mu_{Y,Z} \hskip.03cm : \hskip.03cm L \to \{0,1\}$ \. be a function defined as \begin{equation*} \mu(\textbf{\textit{v}})=1 \quad \Longleftrightarrow \quad v_{x} \ \leq \ v_{x'} \quad \text{for all} \ \ x \preccurlyeq x' \ \ \ \text{such that} \ \ \ \left[ \. \aligned & x,x' \in Y\hskip.03cm, \ \, \text{or} \\ & x,x'\in Z\hskip.03cm. \endaligned\right. \end{equation*} \smallskip \begin{thm}[\cite{She}] Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a finite poset, and let \hskip.03cm $X=Y \sqcup Z$ \hskip.03cm be a partition of the ground set $X$ into two disjoint subsets. Then \. ${\mathcal{L}}_{Y,Z,t}$ \. is a distributive lattice, and \. $\mu_{Y,Z}$ \. is a log-supermodular function. \end{thm} \smallskip This beautiful result is relatively little known; we include a short proof for completeness. \smallskip \begin{proof} It follows from the definition of \. $\prec^{\small{\ts\diamond\ts}}$\hskip.03cm, that for all \. $\textbf{\textit{v}}, \textbf{w} \in L$, we have: \begin{equation}\label{eq:She1} \begin{split} & (\textbf{\textit{v}} \wedge \textbf{w})_{y} \ = \ \min\{v_y, w_y\}, \qquad (\textbf{\textit{v}} \wedge \textbf{w})_{z} \ = \ \max\{v_z, w_z\},\\ & (\textbf{\textit{v}} \vee \textbf{w})_{y} \ = \ \max\{v_y, w_y\}, \qquad (\textbf{\textit{v}} \vee \textbf{w})_{z} \ = \ \min\{v_z, w_z\}, \end{split} \end{equation} where \hskip.03cm $y \in Y$ \hskip.03cm and \hskip.03cm $z \in Z$. Now note that, for all real numbers \. 
$\alpha,\beta, \gamma \in \Rb$\., \begin{equation}\label{eq:She2} \min\{\alpha, \max\{\beta,\gamma\} \} \ = \ \max \{ \min \{\alpha,\beta\}, \. \min \{\alpha,\gamma\} \}. \end{equation} It then follows from \eqref{eq:She1} and \eqref{eq:She2} that \hskip.03cm ${\mathcal{L}}$ \hskip.03cm is a distributive lattice. To show that $\mu$ is a log-supermodular function, it suffices to verify the cases when \. $\mu(\textbf{\textit{v}})= \mu(\textbf{w})=1$\hskip.03cm. Let \. $y,y' \in Y$ \. be such that $y \prec y'$. Note that \. $v_y \leq v_{y'}$ \. and \. $w_y \leq w_{y'}$. Then we have: \[ (\textbf{\textit{v}} \wedge \textbf{w})_y \ = \ \min\{v_y, w_y\} \ \leq \ \min\{v_{y'}, w_{y'}\} \ = \ (\textbf{\textit{v}} \wedge \textbf{w})_{y'}\.. \] Similarly, we have \. $(\textbf{\textit{v}} \wedge \textbf{w})_{z} \. \leq \. (\textbf{\textit{v}} \wedge \textbf{w})_{z'}$ \. for all \. $z \prec z'$, \. $z, z' \in Z$. Therefore, \. $\mu(\textbf{\textit{v}} \wedge \textbf{w})=1$. Analogously, we also have \. $\mu(\textbf{\textit{v}} \vee \textbf{w})=1$\., and the proof is complete. \end{proof} \smallskip \begin{rem} Shepp's lattice \hskip.03cm ${\mathcal{L}}$ \hskip.03cm used in this section should not be confused with another lattice defined in~\cite{She-XYZ} by Shepp. Both lattices share the same ground set but have different partial orders, and the partial order of the lattice in \cite{She-XYZ} was specifically chosen to prove the \defng{$XYZ$ \hskip.03cm inequality}. \end{rem} \smallskip Now recall the classical \defn{FKG inequality}, see e.g.~\cite[$\S$6.2]{ASE}. \smallskip \begin{thm}[{\rm {\em{\defn{FKG inequality}}}, \cite{FKG}}]\label{thm:FKG} Let \. ${\mathcal{L}}=(L,\prec^{\small{\ts\diamond\ts}})$ \. be a finite distributive lattice, and let \. $\mu: L \to \Rb_{\geq 0}$ \. be a log-supermodular function. Then, for all \hskip.03cm $\prec^{\small{\ts\diamond\ts}}$-decreasing functions \.
$g,h: \hskip.03cm L \to \Rb_{\geq 0}$\hskip.03cm, we have: \begin{equation}\label{eq:FKG-ineq} E(\textbf{\em 1}) \. E(gh) \ \geq \ E(g) \. E(h), \end{equation} where \. \[ E(g) \, = \, E_\mu(g) \ := \ \sum_{a \in L} \, g(a) \. \mu(a)\., \] and function \. $\textbf{\em 1}:L \to \Rb$ \. is given by \. $\textbf{\em 1}(a)=1$ \. for all \hskip.03cm $a \in L$. The inequality~\eqref{eq:FKG-ineq} holds also when \hskip.03cm $g,h$ \hskip.03cm are $\prec^{\small{\ts\diamond\ts}}$-increasing. On the other hand, when \hskip.03cm $g$ \hskip.03cm is $\prec^{\small{\ts\diamond\ts}}$-decreasing and $h$ is $\prec^{\small{\ts\diamond\ts}}$-increasing, the inequality~\eqref{eq:FKG-ineq} is reversed. \end{thm} \smallskip Below we apply the FKG inequality to Shepp's lattice to prove several inequalities for the order polynomial. \medskip \subsection{Correlation inequalities} \label{ss:OP-induction-prelim} Let \. $P=(X,\prec)$ \. be a poset on $n$ elements. As above, denote by \hskip.03cm $\mathcal S=\mathcal S(P)$ \hskip.03cm the set of bijections \hskip.03cm $f: X\to [n]$. By an abuse of notation, for all (not necessarily distinct) elements \. $u, v\in X$, we write \[ \{\hskip.03cm u \precc v \hskip.03cm \} \qquad \text{as a shorthand for the collection} \qquad \{\hskip.03cm f \in \mathcal S \. : \. f(u) \le f(v) \hskip.03cm \}\.. \] One can write the set of linear extensions \hskip.03cm $\Ec(P)$ \hskip.03cm as the intersection of collections \. $\{u\precc v\}$, for all pairs \hskip.03cm $u\prec v$ \hskip.03cm in~$P$. Conversely, every such intersection is a set of linear extensions of the corresponding poset. The language of \defn{collections} is technically useful for our purposes. Let \. $X= Y \. \sqcup \. Z$ \. be a \defn{partition} of~$X$ into two disjoint subsets. A collection \hskip.03cm $C$ \hskip.03cm is called \hskip.03cm \defn{$Y$-minimizing} \hskip.03cm w.r.t.\ the partition \hskip.03cm $Y\sqcup Z$ \hskip.03cm if $C$ is an intersection of collections of the form \.
$\{y\precc z\}$, for \. $y\in Y$ \. and \. $z\in Z$. Similarly, a collection \hskip.03cm $C$ \hskip.03cm is called \defn{$Y$-maximizing} w.r.t.\ the partition \hskip.03cm $Y\sqcup Z$ \hskip.03cm if $C$ is an intersection of collections of the form \. $\{z\precc y\}$, for \. $y\in Y$ \. and \. $z\in Z$. By a slight abuse of notation, we write \hskip.03cm $\Omega(C,t)$ \hskip.03cm to denote the order polynomial of the poset given by the collection~$C$. For the rest of this section, let \hskip.03cm $A$ \hskip.03cm be the collection given by \begin{equation*}\label{eq:colA} A \ := \ \bigcap_{y\prec y', \ y, y'\in Y} \. \{ y \precc y' \}\hskip.03cm \ \cap \ \bigcap_{z\prec z', \ z, z'\in Z} \. \{ z \precc z' \}\hskip.03cm, \end{equation*} the collection of events involving only elements of $Y$ or only elements of $Z$. \smallskip \begin{lemma}[{\cite[Eq.~(2.12)]{She}}]\label{t:FKG order polynomial} In the notation above, let \hskip.03cm $C,C'$ \hskip.03cm be $Y$-minimizing collections w.r.t.\ the partition \. $X=Y\sqcup Z$. Then, for every integer \hskip.03cm $t>0$, we have: \[ \Omega \big(C \cap C' \cap A,\hskip.03cm t\big) \. \cdot \. \Omega\big( A,\hskip.03cm t\big) \quad \geq \quad \Omega \big(C \cap A,\hskip.03cm t\big) \. \cdot \. \Omega\big( C' \cap A,\hskip.03cm t\big)\.. \] If $C$ is $Y$-minimizing and $C'$ is $Y$-maximizing, then the above inequality is reversed. \end{lemma} \smallskip In probabilistic language, this says that the order polynomial satisfies \defna{positive correlation} for intersections of $Y$-minimizing collections (viewed as events). We should note that in~\cite{She} this result was not singled out and appears as an equation in the middle of the proof of the main result. We again include the proof for completeness. \smallskip \begin{proof}[Proof of Lemma~\ref{t:FKG order polynomial}] Let \hskip.03cm ${\mathcal{L}}=(L,\prec^{\small{\ts\diamond\ts}})$ \hskip.03cm be Shepp's lattice defined in~$\S$\ref{ss:Shepp lattice}. Let \.
$g,h : L \to \{0,1\}$ \. be given by \[ g(\textbf{\textit{v}}) \ := \ \begin{cases} \. 1 & \text{if} \ \ v_y \hskip.03cm \leq \hskip.03cm v_z \ \ \, \forall \. \{ y \precc z\} \in C\\ \. 0 & \text{otherwise} \end{cases} \quad \text{and} \quad h(\textbf{\textit{v}}) \ := \ \begin{cases} \. 1 & \text{if} \ \ v_y \hskip.03cm \leq \hskip.03cm v_z \ \ \, \forall \. \{ y \precc z\} \in C'\\ \. 0 & \text{otherwise.} \end{cases} \] Let us prove that \hskip.03cm $g$ \hskip.03cm is $\prec^{\small{\ts\diamond\ts}}$-decreasing. It suffices to show that \begin{equation*} g(\textbf{w})\hskip.03cm =\hskip.03cm 1 \quad \text{and} \quad \textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w} \quad \Longrightarrow \quad g(\textbf{\textit{v}})\hskip.03cm = \hskip.03cm 1. \end{equation*} Note that for every \hskip.03cm $y \in Y$ \hskip.03cm and \hskip.03cm $z \in Z$ \hskip.03cm such that \hskip.03cm $\{\hskip.03cm y \precc z \hskip.03cm\} \in C$, we have: \[ v_y \ \leq \ w_y \ \leq \ w_z \ \leq \ v_z, \] where the first and third inequalities hold because \hskip.03cm $\textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w}$\hskip.03cm, and the second inequality holds because $g(\textbf{w})=1$. This implies that $g(\textbf{\textit{v}})=1$, and thus $g$ is a $\prec^{\small{\ts\diamond\ts}}$-decreasing function. By the same reasoning, we also have that function \hskip.03cm $h$ \hskip.03cm is $\prec^{\small{\ts\diamond\ts}}$-decreasing. Finally, note that \begin{align*} \Omega(C \cap C' \cap A, t) \. = \. E(gh), \ \ \Omega( A, t) \. = \. E(\textbf{1}), \ \ \Omega(C \cap A, t) \. = \. E(g), \ \ \Omega(C' \cap A, t) \. = \. E(h). \end{align*} The lemma now follows from the FKG inequality (Theorem~\ref{thm:FKG}). \end{proof} \smallskip We can now apply this result in the more traditional notation of order polynomials of posets. \smallskip \begin{lemma}\label{l:main order polynomial} Let \. $P=(X,\prec)$ \. be a poset, and let \. $x,y\in X$ \. be minimal elements.
Then, for every integer $t>0$, we have: \[ {\Omega\big(P,\hskip.03cm t\big)} \. \cdot \. {\Omega\big(P \smallsetminus \{x,y\},\hskip.03cm t\big)} \ \geq \ {\Omega\big(P \smallsetminus x, \hskip.03cm t\big)} \. \cdot \. {\Omega\big(P\smallsetminus y,\hskip.03cm t\big)}, \] where by \hskip.03cm $P\smallsetminus x$, \hskip.03cm $P\smallsetminus y$ \hskip.03cm and \hskip.03cm $P\smallsetminus \{x,y\}$ \hskip.03cm we denote the subposets of~$\hskip.03cm P$ \hskip.03cm restricted to \hskip.03cm $X-x$, \hskip.03cm $X-y$ \hskip.03cm and \hskip.03cm $X-x-y$, respectively. \end{lemma} \smallskip \begin{proof} Note that \. $x$ \hskip.03cm and \hskip.03cm $y$ \. are incomparable elements. Let \. $Y:= \{x,y \}$ \. and \. $Z:= X\smallsetminus Y$ \. be the partition of~$X$. Consider the $Y$-minimizing collections \. $C$ \hskip.03cm and \hskip.03cm $C'$ \. given by \begin{equation}\label{eq:colC} C \ := \ \bigcap_{z\in B(x)-x} \. \{ x \precc z \} \ \, \quad \text{and} \quad \ C' \ := \ \bigcap_{z'\in B(y)-y} \. \{ y \precc z' \}, \end{equation} where \hskip.03cm $B(x)$ \hskip.03cm and \hskip.03cm $B(y)$ \hskip.03cm are the upper order ideals of elements~$x$ and~$y$, respectively. Observe that \begin{alignat*}{2} & \Omega(P,t) \ = \ \Omega(C \cap C' \cap A,t)\., \qquad && \Omega(P\smallsetminus x,t) \ = \ \frac{1}{t} \,\. \Omega(C' \cap A,t)\.,\\ & \Omega(P \smallsetminus y,t) \ = \ \frac{1}{t} \, \Omega(C \cap A,t)\., \qquad && \Omega(P \smallsetminus \{x,y\},t) \ = \ \frac{1}{t^2} \, \Omega(A,t)\.. \end{alignat*} The lemma now follows from Lemma~\ref{t:FKG order polynomial} and the equations above. \end{proof} \smallskip \subsection{Lower bounds}\label{ss:OP-induction-lower} We are now ready to prove Theorem~\ref{thm:HP order poly} by induction. The following lemma establishes the induction step from which the theorem follows. \smallskip \begin{lemma}\label{l:OP-HP-ind} Let \. $P=(X,\prec)$ \. be a poset with \. $|X|=n$ \.
elements, and let \hskip.03cm $x\in X$ \hskip.03cm be a minimal element such that \hskip.03cm $b(x)>1$. Then we have: \begin{equation}\label{eq:HP-ind-min} \frac{\Omega(P,t)}{t+1} \ \geq \ \frac{\Omega(P\smallsetminus x,t)}{b(x)}\.. \end{equation} \end{lemma} \begin{proof} Let \hskip.03cm $Q=(Y,\prec)$ \hskip.03cm be a finite poset. By labeling the elements in~$Y$ with $m$ distinct integers from~$\hskip.03cm [t]$, we can write \begin{align}\label{eq:sum_ideals} \Omega(Q,t) \ = \ \sum_{m=1}^{|Y|} \, \bigl|\mathcal I_m(Q)\bigr| \, \binom{t}{m}\., \end{align} where \. $|\mathcal I_m(Q)|$ \. is the number of ascending chains $$\varnothing \. = \. I_0 \. \subset \. I_1 \. \subset \. I_2 \. \subset \. \ldots \. \subset \. I_m \. = \. Y $$ \. of upper order ideals in~$Q$, s.t.\ \. $I_i \smallsetminus I_{i-1} \neq \varnothing$ \. for all \hskip.03cm $1\le i \le m$. Denote by \. $P' = P \smallsetminus x$ \. the induced poset on \hskip.03cm $X \smallsetminus x$. Suppose first that $x$ is the unique minimal element of~$P$. Note that \. $b(x)=n$ \. in this case. Summing over all possible values assigned to~$x$, we obtain~\eqref{eq:HP-ind-min}: \begin{align*} \Omega(P,t) \ &= \ \sum_{k=1}^t \. \Omega(P',k) \ =_{\eqref{eq:sum_ideals}} \ \sum_{m=1}^{n-1} \, \bigl|\mathcal I_m(P')\bigr| \, \sum_{k=m}^t \. \binom{k}{m} \ = \ \sum_{m=1}^{n-1} \, \bigl|\mathcal I_m(P')\bigr| \, \binom{t+1}{m+1} \\ & \geq \ \frac{t+1}{n} \, \sum_{m=1}^{n-1} \, \bigl|\mathcal I_m(P')\bigr| \, \binom{t}{m} \ =_{\eqref{eq:sum_ideals}} \ \frac{t+1}{n} \ \Omega(P',t)\.. \end{align*} Here the inequality follows from $$\binom{t+1}{m+1} \ = \ \frac{t+1}{m+1} \. \binom{t}{m} \ \geq \ \frac{t+1}{n} \. \binom{t}{m} \quad \ \.\text{for all \ $m\leq n-1$.} $$ Suppose now that \hskip.03cm $x\in X$ \hskip.03cm is not the unique minimal element. Let \hskip.03cm $y\in X$, \hskip.03cm $y\ne x$ \hskip.03cm be another minimal element in~$P$.
By Lemma~\ref{l:main order polynomial}, we have: \begin{equation}\label{eq:FKG2} \frac{\Omega(P,t)}{\Omega(P',t)} \ \geq \ \frac{\Omega(P \smallsetminus y, t)}{\Omega(P' \smallsetminus y, t)} \,. \end{equation} Now proceed by induction to remove all minimal elements in~$P$ incomparable to~$x$, until element~$x$ becomes the unique minimal element. Applying the inequality~\eqref{eq:FKG2} repeatedly, we obtain: \begin{equation*} \frac{\Omega(P,t)}{\Omega(P',t)} \ \geq \ \ldots \ \geq \ \frac{t+1}{b(x)}\,. \end{equation*} This proves~\eqref{eq:HP-ind-min} in full generality. \end{proof} \smallskip \begin{proof}[Proof of Theorem~\ref{thm:HP order poly}] We prove the inequality~\eqref{eq:OP-HP} by induction. First, suppose that \hskip.03cm $b(x)=1$, so \hskip.03cm $x$ \hskip.03cm is a maximal element in~$P$. Then \. $\Omega(P,t) \hskip.03cm = \hskip.03cm t \. \Omega(P',t)$, since we can choose the value \hskip.03cm $f(x)\in [t]$ \hskip.03cm independently of other values. The inequality~\eqref{eq:OP-HP} then follows. For \. $b(x)>1$, Lemma~\ref{l:OP-HP-ind} gives the induction step and completes the proof. \end{proof} \medskip \subsection{Log-concavity}\label{ss:OP-log-concavity} The main result of this subsection is the log-concavity of the evaluation of the order polynomial. \smallskip \begin{thm}\label{thm:log-concave} Let $P=(X,\prec)$ be a finite poset. Then, for every integer \hskip.03cm $t \geq 2$, we have: \[ \Omega(P,t)^2 \ \geq \ \Omega(P,t+1) \. \cdot\. \Omega(P,t-1).\] \end{thm} \smallskip As for other poset inequalities, one can ask about the equality conditions in Theorem~\ref{thm:log-concave}. It turns out that the log-concavity in the theorem is always strict, see Theorem~\ref{thm:log-concave-strict} below. The proofs of both results use the same approach, but the strict log-concavity is built on top of the non-strict version and is a bit more involved. Thus, we start with the easier result for clarity.
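Before turning to the proof, here is a brute-force numerical sanity check of the log-concavity (ours, not part of the paper): we compute $\Omega(P,t)$ directly from its definition, as the number of maps $f:X\to[t]$ with $f(x)\le f(y)$ for all $x\prec y$, and test the inequality on a small poset chosen for illustration.

```python
from itertools import product

def order_poly(n, rels, t):
    """Omega(P,t): maps f : {0,...,n-1} -> {1,...,t} with f(a) <= f(b)
    for every relation (a, b) of the poset P."""
    return sum(all(f[a] <= f[b] for (a, b) in rels)
               for f in product(range(1, t + 1), repeat=n))

# Test poset on four elements with relations 0 < 2, 1 < 2, 1 < 3 (an "N"-shaped poset).
n, rels = 4, {(0, 2), (1, 2), (1, 3)}
for t in range(2, 6):
    lhs = order_poly(n, rels, t) ** 2
    rhs = order_poly(n, rels, t + 1) * order_poly(n, rels, t - 1)
    assert lhs > rhs
```

On this example the inequality comes out strict for every tested $t$, consistent with the strict version proved in the next subsection; the exponential brute force is of course only feasible for very small posets.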
\smallskip \begin{proof}[Proof of Theorem~\ref{thm:log-concave}] Let \hskip.03cm $Y=X$ \hskip.03cm and \hskip.03cm $Z=\varnothing$. Let \hskip.03cm ${\mathcal{L}}=(L,\prec^{\small{\ts\diamond\ts}})$ \hskip.03cm be Shepp's lattice defined in~$\S$\ref{ss:Shepp lattice}, and let \hskip.03cm $t\geq 3$. Let \. $g,h : L \to \{0,1\}$ \. be two functions given by \begin{align*} g(\textbf{\textit{v}}) \ &:= \ \begin{cases} \. 1 & \text{if} \ \ v_x \geq 2 \ \ \text{ for all \ $x \in X$}\\ \. 0 & \text{otherwise}, \end{cases} \quad h(\textbf{\textit{v}}) \ := \ \begin{cases} \. 1 & \text{if} \ \ v_x \leq t-1 \ \ \text{ for all \ $x \in X$}\\ \. 0 & \text{otherwise}. \end{cases} \end{align*} To prove that $g$ is $\prec^{\small{\ts\diamond\ts}}$-increasing, it suffices to show that \begin{equation*} g(\textbf{\textit{v}})\hskip.03cm =\hskip.03cm 1 \quad \text{and} \quad \textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w} \quad \Longrightarrow \quad g(\textbf{w})\hskip.03cm = \hskip.03cm 1. \end{equation*} Note that, for every \hskip.03cm $x \in Y=X$, we have: \[ w_x \, \geq \, v_x \, \geq \, 2, \] where the first inequality is because \hskip.03cm $\textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w}$\hskip.03cm, and the second inequality is because $g(\textbf{\textit{v}})=1$. This implies that $g(\textbf{w})=1$, and thus $g$ is a $\prec^{\small{\ts\diamond\ts}}$-increasing function. By analogous reasoning, we also have that $h$ is $\prec^{\small{\ts\diamond\ts}}$-decreasing. Now note that \begin{equation}\label{eq:expectation-as-OP} \begin{split} E(gh) \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, \mu(\textbf{\textit{v}})=1, \ \. 2 \. \leq \. v_x \. \leq \. t-1 \, \ \text{ for all } \,\ x \in X\big\} \big| \ = \ \Omega(P,t-2),\\ E(g) \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \. : \, \mu(\textbf{\textit{v}})=1, \ \. 2 \. \leq \. v_x \, \ \text{ for all } \, \ x \in X\big\} \big| \ = \ \Omega(P,t-1),\\ E(h) \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, \mu(\textbf{\textit{v}})=1, \ \. v_x \. \leq \. t-1 \,\ \text{ for all } \, \ x \in X\big\} \big| \ = \ \Omega(P,t-1),\\ E(\textbf{1}) \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, \mu(\textbf{\textit{v}})=1 \big\} \big| \ = \ \Omega(P,t). \end{split} \end{equation} It then follows from the FKG inequality~(Theorem~\ref{thm:FKG}), that \[ \Omega(P,t-2) \.\cdot\. \Omega(P,t) \ \leq \ \Omega(P,t-1) \.\cdot\. \Omega(P,t-1), \] and the theorem now follows by substituting \. $t\to t+1$. \end{proof} \medskip \subsection{Strict log-concavity} We can now prove that the inequality in Theorem~\ref{thm:log-concave} is always strict, by applying the FKG inequality in a more careful manner. This theorem will also prove useful in~$\S$\ref{ss:OP-Graham} to establish the asymptotic version of Graham's Conjecture~\ref{conj:Graham}. \smallskip \begin{thm}\label{thm:log-concave-strict} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset with \hskip.03cm $|X|=n$ \hskip.03cm elements. Then, for every integer \hskip.03cm $t \geq 2$, we have: \begin{equation}\label{eq:log-concave-strict} \Omega(P,t)^2 \ \geq \ \bigg(1 \. + \. \frac{1}{(t+1)^{n+1}} \bigg) \ \Omega(P,t+1) \ \Omega(P,t-1). \end{equation} \end{thm} \smallskip Let \hskip.03cm $Y=X$ \hskip.03cm and \hskip.03cm $Z=\varnothing$. Let \hskip.03cm ${\mathcal{L}}=(L,\prec^{\small{\ts\diamond\ts}})$ \hskip.03cm be Shepp's lattice, let \hskip.03cm $\mu = \mu_{Y,Z} \hskip.03cm : \hskip.03cm L \to \{0,1\}$ \hskip.03cm be the log-supermodular function defined in~$\S$\ref{ss:Shepp lattice}, and let \hskip.03cm $t\geq 3$. Without loss of generality, assume that \hskip.03cm $X=[n]$ \hskip.03cm and that this is a \defn{natural labeling} of~$X$, i.e.\ \. $i<j$ \. for all \. $i \prec j$. For all \hskip.03cm $1\le i\le n$, let \. $g_i,h_i : L \to \{0,1\}$ \. be two functions given by \begin{align*} g_i(\textbf{\textit{v}}) \ &:= \ \begin{cases} \. 1 & \text{if} \ \ v_i \geq 2 \\ \. 0 & \text{otherwise} \end{cases} \qquad \text{and} \qquad h_i(\textbf{\textit{v}}) \ := \ \begin{cases} \. 1 & \text{if} \ \ v_i \leq t-1 \\ \. 0 & \text{otherwise}.
\end{cases} \end{align*} In the notation of the proof of Theorem~\ref{thm:log-concave}, we have \. $g=g_1\cdots g_n$ \. and \. $h=h_1\cdots h_n$. It follows from the same argument as above that the \hskip.03cm $g_i$ \hskip.03cm are $\prec^{\small{\ts\diamond\ts}}$-increasing, while the \hskip.03cm $h_i$ \hskip.03cm are $\prec^{\small{\ts\diamond\ts}}$-decreasing, for all \hskip.03cm $1\le i\le n$. We now show that $g_i$ and $h_i$ are log-modular functions. Indeed, note that \[ v_i \geq 2 \ \text{ and } \ w_i \geq 2 \qquad \Longleftrightarrow \qquad \max\{v_i, w_i \} \geq 2 \ \text{ and } \ \min\{v_i, w_i \} \geq 2. \] This implies that \. $g_i(\textbf{\textit{v}}) \. g_i(\textbf{w}) \ = \ g_i(\textbf{\textit{v}} \wedge \textbf{w}) \. g_i(\textbf{\textit{v}} \vee \textbf{w})$, as desired. The same argument implies that $h_i$ is also a log-modular function. \smallskip \begin{lemma} \begin{equation}\label{eq:remove-g} \frac{E_\mu(g_1\hskip.03cm \cdots \hskip.03cm g_n \cdot h)}{E_\mu(g_1\hskip.03cm\cdots \hskip.03cm g_n)} \ \leq \ \frac{E_\mu(g_n \hskip.03cm h )}{E_\mu(g_n)}. \end{equation} \end{lemma} \begin{proof} Let \hskip.03cm $\mu_i: L \to \Rb$ \hskip.03cm be given by \. $\mu_i := (g_i \hskip.03cm \cdots \hskip.03cm g_n) \hskip.03cm \mu$, for all \. $1\le i \le n$, and let \hskip.03cm $\mu_{n+1}:=\mu$. Note that the function \hskip.03cm $\mu_i$ \hskip.03cm is log-supermodular, since it is a product of log-modular and log-supermodular functions.
Therefore, for all \hskip.03cm $2\le i \le n$, we have: \begin{equation}\label{eq:remove-g-induction} \frac{E_{\mu_{i}}(g_{i-1} \hskip.03cm h)}{E_{\mu_{i}}(g_{i-1} \hskip.03cm )} \ \leq \ \frac{E_{\mu_{i}}( h)}{E_{\mu_{i}}(\textbf{1})} \ = \ \frac{E_{\mu_{i+1}}(g_i \hskip.03cm h)}{E_{\mu_{i+1}}(g_i)}\,, \end{equation} where the inequality is by the FKG inequality (Theorem~\ref{thm:FKG}) applied to the $\prec^{\small{\ts\diamond\ts}}$-increasing function~$g_{i-1}\hskip.03cm$, to the $\prec^{\small{\ts\diamond\ts}}$-decreasing function~$\hskip.03cm h$, and to the log-supermodular function~$\hskip.03cm\mu_i\hskip.03cm$. We conclude: \[ \frac{E_\mu(g_1\hskip.03cm\cdots\hskip.03cm g_n \cdot h)}{E_\mu(g_1\hskip.03cm\cdots\hskip.03cm g_n)} \ = \ \frac{E_{\mu_2}(g_1 \hskip.03cm h)}{E_{\mu_2}(g_1)} \ \leq \ \frac{E_{\mu_{n+1}}(g_n \hskip.03cm h )}{E_{\mu_{n+1}}(g_n)} \ = \ \frac{E_{\mu}(g_n \hskip.03cm h )}{E_{\mu}(g_n)}\,, \] where the inequality is by consecutive applications of \eqref{eq:remove-g-induction}. \end{proof} \smallskip Now, let \. $\eta_i:= (h_i\hskip.03cm \cdots \hskip.03cm h_n) \hskip.03cm \mu$ \. for all \. $1\le i\le n$, and let \. $\eta_{n+1}:=\mu$. Again, note that \hskip.03cm $\eta_i$ \hskip.03cm is a log-supermodular function. Observe that \. $E_{\eta_i}(g_n) \. = \. E_{\eta_{i+1}}(g_n \hskip.03cm h_i)$ \. for all \. $2\le i \le n$. This implies that \begin{equation}\label{eq:remove-h} \frac{E_{\mu}(g_n \hskip.03cm h )}{E_{\mu}(g_n)} \ = \ \prod_{i=2}^{n+1} \. \frac{E_{\eta_i}(g_n \hskip.03cm h_{i-1})}{E_{\eta_i}(g_n)}\.. \end{equation} We apply two different inequalities to the RHS of~\eqref{eq:remove-h}.
First, for all \hskip.03cm $2\le i\le n$, we have: \begin{equation}\label{eq:easy-FKG} \frac{E_{\eta_i}(g_n \hskip.03cm h_{i-1})}{E_{\eta_i}(g_n)} \ \leq \ \frac{E_{\eta_i}(h_{i-1})}{E_{\eta_i}(\textbf{1})} \ = \ \frac{E_{\mu}(h_{i-1} h_i\ldots h_n)}{E_{\mu}(h_{i} \ldots h_n)}\,, \end{equation} where the inequality is due to the FKG inequality~(Theorem~\ref{thm:FKG}) applied to the $\prec^{\small{\ts\diamond\ts}}$-increasing function~$\hskip.03cm g_n$, to the $\prec^{\small{\ts\diamond\ts}}$-decreasing function~$\hskip.03cm h_{i-1}\hskip.03cm$, and to the log-supermodular function~$\hskip.03cm\eta_i$. Although \eqref{eq:easy-FKG} holds for \hskip.03cm $i=n+1$ \hskip.03cm by the same argument, we will use the following stronger inequality instead. \smallskip \begin{lemma}\label{lem:strict-FKG} \begin{equation}\label{eq:strict-FKG} \frac{E_{\eta_{n+1}}(g_n \hskip.03cm h_{n})}{E_{\eta_{n+1}}(g_n)} \ \leq \ \bigg(1-\frac{1}{t^{n+1}} \bigg) \frac{E_\mu(h_n)}{E_{\mu}(\textbf{\em 1})}. \end{equation} \end{lemma} \begin{proof} By a direct calculation, the claim is equivalent to showing that \[ \frac{E_{\mu}(g_n) \. E_{\mu}(h_n) \, - \, E_\mu(g_n h_n) \. E_{\mu}(\textbf{1})}{E_{\mu}(g_n) \. E_{\mu}(h_n)} \ \geq \ \frac{1}{t^{n+1}}\.. \] Let \. $g_n',h_n':L \to \{0,1\}$ \. be given by \. $g_n'(\textbf{\textit{v}}) := 1- g_n(\textbf{\textit{v}})$ \. and \. $h_n'(\textbf{\textit{v}}) := 1- h_n(\textbf{\textit{v}})$. Then we have: \begin{align*} g_n'(\textbf{\textit{v}}) \, = \, \begin{cases} 1 & \text{ if } \ v_n \. = \. 1\\ 0 & \text{ otherwise} \end{cases} \qquad \text{and} \qquad h_n'(\textbf{\textit{v}}) \, = \, \begin{cases} 1 & \text{ if } \ v_n \. = \. t\\ 0 & \text{ otherwise.} \end{cases} \end{align*} By the linearity of expectations, the claim is then equivalent to showing that \begin{equation}\label{eq:strict-correlation} \frac{E_{\mu}(g_n') \. E_{\mu}(h_n') \ - \ E_\mu(g_n' h_n') \. E_{\mu}(\textbf{1})}{E_{\mu}(g_n) \. E_{\mu}(h_n)} \ \geq \ \frac{1}{t^{n+1}}.
\end{equation} Now note that, since $n$ is a maximal element of $P$, we have: \begin{align*} E_\mu(g_n') \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, v_n \. = \. 1\big\} \big| \ \geq \, 1,\\ E_\mu(h_n') \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, v_n \. = \. t\big\} \big| \ = \, \Omega(P\smallsetminus\{n\},t),\\ E_{\mu}(g_n'h_n') \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, v_n \. = \. 1 \. = \. t\big\} \big| \ = \, 0,\\ E_{\mu}(g_n) \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, v_n \. \geq \. 2 \big\} \big| \ \leq \, t \. \Omega(P\smallsetminus\{n\},t),\\ E_{\mu}(h_n) \ &= \ \big|\big\{ \textbf{\textit{v}} \in L \, : \, v_n \. \leq \. t-1 \big\} \big| \ \leq \, \Omega(P,t) \, \leq \, t^n. \end{align*} The inequalities above directly imply \eqref{eq:strict-correlation}. \end{proof} \smallskip \begin{proof}[Proof of Theorem~\ref{thm:log-concave-strict}] Combining \eqref{eq:remove-g}, \eqref{eq:remove-h}, \eqref{eq:easy-FKG}, and \eqref{eq:strict-FKG}, we get: \[ \frac{E_{\mu}(gh)}{E_{\mu}(g)} \ \leq \ \bigg(1\.- \.\frac{1}{t^{n+1}} \bigg) \. \frac{E_\mu(h)}{E_\mu(\textbf{1})}\..\] Using the values from \eqref{eq:expectation-as-OP}, we obtain: \[ \frac{\Omega(P,t-2)}{\Omega(P,t-1)} \ \leq \ \bigg(1\.- \. \frac{1}{t^{n+1}} \bigg) \. \frac{\Omega(P,t-1)}{\Omega(P,t)}\..\] The theorem now follows by substituting \. $t\gets t+1$. \end{proof} \smallskip \begin{rem} The term \. $\big(1-1/t^{n+1}\big)$ \. in~\eqref{eq:strict-FKG} is far from optimal and can be improved in many cases. In particular, note that in the proof of Lemma~\ref{lem:strict-FKG} we used a separate calculation for the element~$n$ in \hskip.03cm $X=[n]$. Making this calculation for a general element \hskip.03cm $x \in [n]$ \hskip.03cm gives a bound with the term \. $\big(1-C/t^{\ell(x)+b(x)}\big)$, for some \hskip.03cm $C>0$. Thus \hskip.03cm $x=n$ \hskip.03cm is the \emph{least optimal choice} for the bound, and is made for clarity.
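For small posets, the log-concavity of the order polynomial can also be checked directly by exhaustive enumeration. The following is a minimal Python sketch, not part of the argument: it computes \hskip.03cm $\Omega(P,t)$ \hskip.03cm by brute force for an illustrative poset of our choosing (the diamond, which does not appear in the text) and verifies the inequality of Theorem~\ref{thm:log-concave}.

```python
from itertools import product

def order_poly(relations, n, t):
    """Brute-force order polynomial Omega(P,t): the number of maps
    f : {0,...,n-1} -> {1,...,t} with f(x) <= f(y) for every relation x < y."""
    return sum(
        all(f[x] <= f[y] for x, y in relations)
        for f in product(range(1, t + 1), repeat=n)
    )

# illustrative choice (not a poset from the text): the diamond 0 < 1, 2 < 3
diamond = [(0, 1), (0, 2), (1, 3), (2, 3)]
vals = [order_poly(diamond, 4, t) for t in range(1, 7)]
# vals == [1, 6, 20, 50, 105, 196]

# log-concavity: Omega(P,t)^2 >= Omega(P,t+1) * Omega(P,t-1)
assert all(vals[t] ** 2 >= vals[t + 1] * vals[t - 1] for t in range(1, 5))
```

The enumeration is exponential in the number of elements, so this is only a sanity check for very small posets.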
\end{rem} \medskip \subsection{Kahn--Saks conjecture}\label{ss:OP-KS-mono} The following interesting conjecture can be found in the solution to Exc.~3.163(b) in~\cite{Sta-EC}. \smallskip \begin{conj}[{\rm \defn{\em Kahn--Saks monotonicity conjecture}}{}]\label{conj:KS-mon} For a poset \hskip.03cm $P=(X,\prec)$ \hskip.03cm with \hskip.03cm $|X|=n$ \hskip.03cm elements, the \hskip.03cm \defn{scaled order polynomial} \. $\Omega(P,t)/t^n$ \. is weakly decreasing on~$\hskip.03cm\mathbb N_{\ge 1}$. \end{conj} \smallskip As Stanley points out in~\cite[Exc.~3.163(a)]{Sta-EC}, the conjecture holds for \hskip.03cm $t$ \hskip.03cm large enough, since the coefficient \hskip.03cm $[t^{n-1}] \hskip.03cm\Omega(P,t)>0$. Curiously, the proof is based on an elegant direct injection. Now, to fully appreciate the power of this conjecture, let us derive from it the following unusual extension of Theorem~\ref{thm:HP order poly}. \smallskip \begin{thm} \label{t:OP-via-KS-mon} Let \. $P=(X,\prec)$ \. be a finite poset, let \. $\max(P)\subseteq X$ \. be the subset of maximal elements, and let \. $r:=|\max(P)|$ \. be the number of maximal elements. If Conjecture~\ref{conj:KS-mon} holds, then we have: \begin{equation}\label{eq:OP-analytic} \Omega(P,t) \ \geq \ t^r \prod_{x \hskip.03cm\in\hskip.03cm X\smallsetminus \max(P)} \. \bigg(\frac{t}{b(x)}\,+\,\frac{1}{2}\bigg). \end{equation} \end{thm} \smallskip Compared to~\eqref{eq:OP-HP}, the inequality \eqref{eq:OP-analytic} adds \hskip.03cm $\tfrac12$ \hskip.03cm to every term in the product. It would be interesting to prove this result unconditionally.\footnote{It would be even more interesting to \emph{disprove} it, perhaps.} \smallskip \begin{proof} Denote $$ F_m(t) \ := \ \frac{1}{t^m} \, \sum_{k=1}^t \. k^m\.. $$ Let us now prove that, if the Kahn--Saks monotonicity conjecture holds, then we have: \begin{equation}\label{eq:OP-F-ineq} \Omega(P,t) \ \geq \ \prod_{x \in X} \. F_{b(x)-1}(t)\hskip.03cm.
\end{equation} To see this, first suppose that \hskip.03cm $r=1$, so the poset~$P$ has a unique maximal element $x$. Thus, $b(x)=n$ \hskip.03cm in this case. The number of order preserving maps for which $x$ has value~$k$ is equal to \. $\Omega(P\smallsetminus x, k)$. We have: $$ \Omega(P,t) \ = \ \sum_{k=1}^t \. \Omega(P \smallsetminus x, k) \ \geq \ \sum_{k=1}^t \. \Omega(P \smallsetminus x, t) \, \frac{k^{n-1}}{t^{n-1}} \ = \ \Omega(P \smallsetminus x, t) \. F_{b(x)-1}(t), $$ where the inequality follows from the conjectured monotonicity. When \hskip.03cm $r\ge 2$, the rest of the proof of~\eqref{eq:OP-F-ineq} follows the proof of Theorem~\ref{thm:HP order poly} verbatim. Now note the following bounded version of \defng{Faulhaber's formula}: $$ F_m(t) \, \geq \, \frac{t}{m+1} \. + \. \frac{1}{2} \quad \text{for all} \ \ m \geq 1. $$ This inequality is well-known and can be easily proved by induction. Substituting it into~\eqref{eq:OP-F-ineq} gives the result. \end{proof} \smallskip In support of this conjecture we prove the following partial result. \smallskip \begin{prop} \label{prop:scaled} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset with \hskip.03cm $|X|=n$ \hskip.03cm elements, and let \hskip.03cm $k, \hskip.03cm t\in \mathbb N_{\ge 1}$\hskip.03cm. Then: $$ \frac{1}{t^n} \, \Omega(P,t) \, \geq \, \frac{1}{(kt)^n} \, \Omega(P,kt)\hskip.03cm. $$ Moreover, there is an injection which shows that the function \. $\Omega(P,t)\hskip.03cm k^n \hskip.03cm - \hskip.03cm \Omega(P,kt) \in {\textsc{\#P}}$. \end{prop} \begin{proof} Let \. $f \in {\mathbf{\Omega}}(P,kt)$. Consider the order preserving map \hskip.03cm $g \in {\mathbf{\Omega}}(P,t)$ \hskip.03cm given by $$g(x) \, := \, \left \lfloor \frac{f(x)-1}{k}\right\rfloor \. + \. 1, $$ and let \hskip.03cm $\beta: X \to \{0,1,\ldots,k-1\}$ \hskip.03cm be given by the residue of \hskip.03cm $f(x)-1$ \hskip.03cm modulo~$k$. It is clear that the pair \hskip.03cm $(g,\beta)$ \hskip.03cm uniquely determines~$f$. Then \.
$\Omega(P,t)\hskip.03cm k^n \hskip.03cm - \hskip.03cm\Omega(P,kt)$ \. is the number of pairs \hskip.03cm $(g,\beta)$, such that the map \hskip.03cm $h:X \to [kt]$ \hskip.03cm given by \hskip.03cm $h(x) := k(g(x)-1) + \beta(x) +1$ \hskip.03cm is not order preserving, i.e.\ \hskip.03cm $h(x) > h(y)$ \hskip.03cm for some \hskip.03cm $x\prec y$. The last condition can be verified in polynomial time, proving that the difference is in~${\textsc{\#P}}$. \end{proof} \medskip \subsection{Reverse monotonicity}\label{ss:OP-reverse-mono} The following result at first appears counterintuitive, until one realizes that it is trivial asymptotically, as \hskip.03cm $t\to \infty$. Just like for the Kahn--Saks monotonicity conjecture, the difficulty occurs at small values of~$t$. \smallskip \begin{thm}\label{t:reverse-width} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a finite poset of width~$w$. Then the function \. $\Omega(P,t)/t^w$ \. is weakly increasing on \. $\mathbb N_{\ge 1}$\hskip.03cm. \end{thm} \smallskip The proof is based on yet another application of the FKG inequality in the following lemma of independent interest. \smallskip \begin{lemma}\label{lem:KS-mon} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a finite poset and let \hskip.03cm $t \geq k \ge 1$ \hskip.03cm be positive integers. Then, for every minimal element $x$ of~$P$, we have: \begin{equation}\label{eq:KS-mon} \frac{\Omega(P,k)}{\Omega(P,t)} \ \leq \ \. \frac{\Omega(P\setminus x,k)}{\Omega(P\setminus x,t)}\.. \end{equation} \end{lemma} \smallskip \begin{proof} Let \hskip.03cm $Y=\{x\}$ \hskip.03cm and \hskip.03cm $Z=X \setminus \{x\}$. When $x$ is incomparable to every element of $Z$, we have \begin{equation}\label{eq:OP-k-t} \frac{\Omega(P,k)}{\Omega(P,t)} \ = \ \frac{k\. \Omega(P\setminus x,k)}{t\. \Omega(P\setminus x,t)}\., \end{equation} and the result follows. Thus, without loss of generality we can assume that \hskip.03cm $x \prec z$ \hskip.03cm for some \hskip.03cm $z \in Z$.
Let \. $g,h: L \to \{0,1\}$ \. be given by $$ g(\textbf{\textit{v}}) \, := \, \left\{ \aligned\. 1 & \quad \text{if} \ \ v_x \. \leq \. v_z \ \ \. \text{for all} \ \ z\in B(x), \, z\ne x \\ \. 0 & \quad\text{otherwise}, \endaligned \right. $$ $$ h(\textbf{\textit{v}}) \, := \, \left\{ \aligned \. 1 & \quad \text{if} \ \ v_z \leq k \ \ \. \text{for all} \ \ z \in Z\\ \. 0 & \quad \text{otherwise}. \endaligned\right. $$ To show that \hskip.03cm $g$ \hskip.03cm is \hskip.03cm $\prec^{\small{\ts\diamond\ts}}$-decreasing, it suffices to check that \begin{equation*} g(\textbf{w})\hskip.03cm =\hskip.03cm 1 \quad \text{and} \quad \textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w} \quad \Longrightarrow \quad g(\textbf{\textit{v}})\hskip.03cm = \hskip.03cm 1. \end{equation*} Note that for every \hskip.03cm $z \in Z$ \hskip.03cm such that \hskip.03cm $x \prec z$, we have: \[ v_x \ \leq \ w_x \ \leq \ w_z \ \leq \ v_z, \] where the first and the third inequalities are because \hskip.03cm $\textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w}$\hskip.03cm, and the second inequality is because \hskip.03cm $g(\textbf{w})=1$. This implies that \hskip.03cm $g(\textbf{\textit{v}})=1$, and thus \hskip.03cm $g$ \hskip.03cm is a $\prec^{\small{\ts\diamond\ts}}$-decreasing function. Similarly, to show that \hskip.03cm $h$ \hskip.03cm is $\prec^{\small{\ts\diamond\ts}}$-increasing, it suffices to check that \begin{equation*} h(\textbf{\textit{v}})\hskip.03cm =\hskip.03cm 1 \quad \text{and} \quad \textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w} \quad \Longrightarrow \quad h(\textbf{w})\hskip.03cm = \hskip.03cm 1. \end{equation*} Note that for every \hskip.03cm $z \in Z$, we have: \[ w_z \ \leq \ v_z \ \leq \ k, \] where the first inequality is because \hskip.03cm $\textbf{\textit{v}} \prec^{\small{\ts\diamond\ts}} \textbf{w}$\hskip.03cm, and the second inequality is because \hskip.03cm $h(\textbf{\textit{v}})=1$.
This implies that \hskip.03cm $h(\textbf{w})=1$, and thus \hskip.03cm $h$ \hskip.03cm is a $\prec^{\small{\ts\diamond\ts}}$-increasing function. Now observe that the sum \hskip.03cm $E(gh)$ \hskip.03cm counts \hskip.03cm $\textbf{\textit{v}} \in L$ \hskip.03cm for which \hskip.03cm $v_x \leq k$. Indeed, by the assumption there exists \hskip.03cm $z \in Z$, such that \hskip.03cm $x \prec z$. This implies \[ v_x \ \leq \ v_z \ \leq \ k, \] where the first inequality is because \. $g(\textbf{\textit{v}})=1$\., and the second inequality is because \. $h(\textbf{\textit{v}})=1$. Thus, \hskip.03cm $E(gh)$ counts the number of order preserving maps \. $f: X \to [k]$, i.e.\ $E(gh)=\Omega(P,k)$. It is also straightforward to verify that \begin{align*} E(g) \, = \, \Omega(P,t), \quad E(h) \, = \, t \. \Omega(P \setminus x,k) \quad \text{and} \quad E(\textbf{1}) \, = \, t \. \Omega(P\setminus x,t). \end{align*} The lemma now follows from the FKG inequality~(Theorem~\ref{thm:FKG}). \end{proof} \smallskip \begin{proof}[Proof of Theorem~\ref{t:reverse-width}] Let \hskip.03cm $H$ \hskip.03cm be an antichain in \hskip.03cm $P$ \hskip.03cm of maximum size~$w$. Note that Lemma~\ref{lem:KS-mon} can also be applied to maximal elements~$x$ of~$P$, by considering the dual poset \hskip.03cm $P^\ast$. Now, consecutively removing the elements of \hskip.03cm $X \setminus H$ \hskip.03cm and applying Lemma~\ref{lem:KS-mon} at each step, we get \begin{equation*} \frac{\Omega(P,k)}{\Omega(P,t)} \ \leq \ \. \frac{\Omega(P',k)}{\Omega(P',t)} \ = \ \frac{k^{w}}{t^{w}}\,, \end{equation*} where \hskip.03cm $P'=P|_H$ \hskip.03cm is the subposet of $P$ restricted to~$H$. This implies the result. \end{proof} \smallskip We conclude with another conjecture motivated by~\eqref{eq:OP-k-t} in the proof of Lemma~\ref{lem:KS-mon}. \smallskip \begin{conj}\label{conj:KS-FKG} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a finite poset, and let \. $t \geq k\ge 1$ \. be positive integers.
Then there exists \hskip.03cm $x \in X$, such that \begin{equation}\label{eq:KS-FKG} \frac{\Omega(P,k)}{\Omega(P,t)} \ \geq \ \frac{k\. \Omega(P\setminus x,k)}{t\. \Omega(P\setminus x,t)}. \end{equation} \end{conj} \smallskip \begin{prop} Conjecture~\ref{conj:KS-FKG} \hskip.03cm implies \hskip.03cm Conjecture~\ref{conj:KS-mon}. \end{prop} \smallskip The proof of this proposition follows the proof of the theorem above. \medskip \subsection{Graham conjecture}\label{ss:OP-Graham} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset on \hskip.03cm $|X|=n$ \hskip.03cm elements. Fix an element \hskip.03cm $x\in X$ \hskip.03cm and an integer \hskip.03cm $t\ge 1$. Denote by \hskip.03cm $\Omega(P,t; \hskip.03cm x,a)$ \hskip.03cm the number of order preserving maps \hskip.03cm $g: X\to [t]$, such that \hskip.03cm $g(x)=a$. In \cite[p.~129]{Gra}, by analogy with Stanley's inequality~\eqref{eq:Sta}, Graham made in passing the following conjecture: \smallskip \begin{conj}[{\em \defn{\rm Graham's conjecture}}{}]\label{conj:Graham} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a finite poset, let \hskip.03cm $x\in X$ \hskip.03cm and \hskip.03cm $a, \hskip.03cm t\in \mathbb N_{\ge 1}$, and suppose \hskip.03cm $1< a < t$. Then: $$ \Omega(P,t; \hskip.03cm x,a)^2 \, \ge \, \Omega(P,t; \hskip.03cm x,a+1) \. \cdot \. \Omega(P,t; \hskip.03cm x,a-1). $$ \end{conj} \smallskip The following result shows that the conjecture holds asymptotically. \smallskip \begin{thm}\label{t:Graham-asy} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a finite poset, and let \hskip.03cm $x\in X$. Then, for every \hskip.03cm $a\in \mathbb N_{\ge 1}$\hskip.03cm, there exists an integer \. $T(P,x,a)>0$ \. such that, for all \. $t > T(P,x,a)$, we have: $$ \Omega(P,t; \hskip.03cm x,a)^2 \, \ge \, \Omega(P,t; \hskip.03cm x,a+1) \. \cdot \. \Omega(P,t; \hskip.03cm x,a-1). $$ \end{thm} \smallskip \begin{proof} Fix an integer \hskip.03cm $a\ge 1$. Denote by \. $D:=\{y\in X \.:\.y \preccurlyeq x\}$ \. the lower order ideal of~$x$, and let \hskip.03cm $d:=|D|$.
Observe that \. $\Omega(P,t; \hskip.03cm x,a)$ \. is a polynomial in~$\hskip.03cm t$ \hskip.03cm with the leading term $$ \Omega(D\smallsetminus x,a) \, \frac{e(P\smallsetminus D)}{(n-d)!} \ t^{n-d}\.. $$ Indeed, for every order preserving map \hskip.03cm $g: X\to [t]$ \hskip.03cm with \hskip.03cm $g(x)=a$, we have \hskip.03cm $g(y)\le a$ \hskip.03cm for all \hskip.03cm $y\in D$, which explains the term \hskip.03cm $\Omega(D\smallsetminus x,a)$. For the remaining elements \hskip.03cm $z\in X\smallsetminus D$, we have no such restrictions as \hskip.03cm $t\to \infty$, and the number of such maps is asymptotically \hskip.03cm $\sim e(P\smallsetminus D) \. t^{n-d}/(n-d)!$\,. Therefore, the leading coefficient of the polynomial $$\Omega(P,t; \hskip.03cm x,a)^2 \, - \, \Omega(P,t; \hskip.03cm x,a+1) \. \cdot \. \Omega(P,t; \hskip.03cm x,a-1) $$ is equal to \[ \Big[\Omega(D\smallsetminus x,a)^2 \. - \. \Omega(D\smallsetminus x,a+1) \. \Omega(D\smallsetminus x,a-1) \Big] \, \bigg(\frac{e(P\smallsetminus D)}{(n-d)!}\bigg)^{2}\.. \] This is strictly positive by Theorem~\ref{thm:log-concave-strict}, which implies the result. \end{proof} \smallskip As further evidence in favor of the conjecture, we present the following variation. \smallskip \begin{thm} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a finite poset. Then, for every \hskip.03cm $x \in X$ \hskip.03cm and all integers \hskip.03cm $t> a >1$, we have: $$ \Omega(P,t; \hskip.03cm x,a+1) \. \cdot \.\Omega(P,t; \hskip.03cm x,a) \, \ge \, \Omega(P,t+1; \hskip.03cm x,a+1) \. \cdot \. \Omega(P,t-1; \hskip.03cm x,a). $$ \end{thm} \begin{proof} The proof follows from the same argument as in Theorem~\ref{thm:log-concave}. In this case, replace Shepp's lattice \hskip.03cm ${\mathcal{L}}$ \hskip.03cm with the sublattice \. ${\mathcal{L}}_{x,a} \hskip.03cm := \hskip.03cm \{ \textbf{\textit{v}} \in L \. : \. v_x=a\}$. Define the functions \hskip.03cm $g,\hskip.03cm h, \hskip.03cm \mu$ \hskip.03cm as in the proof of Theorem~\ref{thm:log-concave}, and proceed verbatim. We omit the details.
\end{proof} \medskip \section{Bounds on the $q$-analogue} \label{s:q-analogue} In this section we study the $q$-order polynomial generalization. First, we present Reiner's short proof of the Bj\"orner--Wachs inequality. Then, we give a $q$-analogue of Shepp's inequality and study its consequences. \subsection{Reiner's inequality}\label{ss:q-analogue-Reiner} Recently, Vic Reiner shared with us the following elegant approach to the Bj\"orner--Wachs inequality, which we reproduce with his permission.\footnote{Vic Reiner, personal communication, March~17, 2022. } \smallskip \begin{thm}[{\rm Reiner, 2022}] \label{t:HP-q} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset with \hskip.03cm $|X|=n$ \hskip.03cm elements. Denote by \. $\mathcal R(P)$ \. the set of all weakly order-preserving maps \. $g: X\to \mathbb N$, i.e.\ \. $g(x)\le g(y)$ \. for all \. $x\prec y$. Let \. $|g| \hskip.03cm := \hskip.03cm \sum_{x\in X} \hskip.03cm g(x)$. Then: \begin{equation}\label{eq:HP-q} \sum_{g\in \mathcal R(P)} \. q^{|g|} \ \geqslant_q \ \. \prod_{x \hskip.03cm \in X} \, \frac{1}{1-q^{b(x)}}\,, \end{equation} where the inequality between the two power series is coefficient-wise. \end{thm} \smallskip Theorem~\ref{t:HP} follows from Theorem~\ref{t:HP-q} via Stanley's \defng{$P$-partition theory}, see~\cite[$\S$3.15]{Sta-EC}. Indeed, recall that $$ \sum_{g\in \mathcal R(P)} \. q^{|g|} \, = \, \frac{F_P(q)}{(1-q)(1-q^2)\cdots (1-q^n)}\,, $$ where \hskip.03cm $F_P(1)=e(P)$. Here \hskip.03cm $F_P(q)$ \hskip.03cm denotes the sum of \hskip.03cm $q^{\mathrm{maj}(f)}$ \hskip.03cm over all \hskip.03cm $f\in \Ec(P)$, see~\cite{Sta-EC} for the details.\footnote{The notation in~\cite{Sta-EC} is different but equivalent; we change it for simplicity, since the \defng{major index} plays only a tangential role in this paper. There is also a minor subtlety here, namely that \defng{Stanley's $P$-partition theory} requires a \emph{natural labeling} of~$X$, cf.~$\S$\ref{s:OP-inj}.
} Taking \hskip.03cm $0< q <1$, multiplying both sides of~\eqref{eq:HP-q} by \. $(1-q)(1-q^2)\cdots (1-q^n)$, and taking the limit \. $q\to 1-$, gives the Bj\"orner--Wachs inequality~\eqref{eq:HP}. \smallskip \begin{proof}[Proof of Theorem~\ref{t:HP-q}] Interpret the RHS of~\eqref{eq:HP-q} as the GF for maps \. $g\in \mathcal R(P)$ \. which are obtained as a nonnegative integer linear combination of the characteristic functions of upper order ideals: $$g \.= \, \sum_{x\in X} \, m(x) \.\chi_{B(x)}\., \ \ \. \text{where} \ \ m(x) \in \mathbb N \ \ \text{for all} \ \ x\in X. $$ Note that the characteristic functions \hskip.03cm $\chi_{B(x)}$ \hskip.03cm are linearly independent, because in the standard basis \hskip.03cm $\chi_{y}$, \hskip.03cm $y\in X$, the transition matrix is unitriangular. Since \. $|g| = \sum_{x\in X} \. m(x) \. b(x)$ \. for every such map, the result follows immediately. \end{proof} \smallskip \begin{ex} Consider the poset \hskip.03cm $P=(X,\prec)$ \hskip.03cm with \hskip.03cm $X=\{a,b,c,d\}$ \hskip.03cm and \hskip.03cm $a\prec b,c \prec d$, so \hskip.03cm $P\simeq C_2\times C_2$. The RHS of~\eqref{eq:HP-q} as in the proof above is the GF for \hskip.03cm $g\in \mathcal R(P)$, such that \hskip.03cm $g(a)+g(d) \ge g(b) + g(c)$. Not all \hskip.03cm $g\in \mathcal R(P)$ \hskip.03cm satisfy this property, e.g.\ $g(a)=0$, \hskip.03cm $g(b)=g(c)=g(d)=1$ \hskip.03cm does not. \end{ex} \smallskip \begin{rem}\label{r:G-M} In principle, there is a way to convert the natural injection as in the proof above into an injection as in Proposition~\ref{p:HP-injection}. The idea is to make the multiplication by \. $(1-q)\cdots (1-q^n)$ \. effective by using the \defng{involution principle} of Garsia and Milne~\cite{GM}. See also~\cite{Gre}, which comes closest in this special case. Note that the resulting maps tend to be hard to compute, sometimes provably so, see e.g.~\cite{KP}.
\end{rem} \smallskip \subsection{$q$-order polynomial} \label{ss:q-analogue-our} For an integer \hskip.03cm $t\ge 1$, define $$\Omega_q(P,t) \, := \, \sum_{g} \ q^{|g|-n} $$ where the summation is over all order preserving maps \. $g: X\to [t]=\{1,\ldots,t\}$, i.e.\ maps which satisfy \. $g(x)\le g(y)$ \. for all \. $x \prec y$. This is the \defn{$q$-order polynomial} corresponding to the poset~$P$, see e.g.~\cite{Cha}. Let us emphasize that here~$q$ is a formal variable, while \hskip.03cm $t\ge 1$ \hskip.03cm is an integer. \smallskip \begin{thm}[{\rm {\em{\defn{$q$-analogue of Shepp's inequality}}}}]\label{t:FKG-OP-q} Let \hskip.03cm $A$ \hskip.03cm be the collection defined in \S\ref{ss:OP-induction-prelim}, and let \hskip.03cm $C,C'$ \hskip.03cm be $Y$-minimizing collections w.r.t.\ the partition \ $X=Y\sqcup Z$. Then, \[ \Omega_q \big(C \cap C' \cap A,\hskip.03cm t\big) \. \cdot \. \Omega_q\big(A,\hskip.03cm t\big) \quad \geqslant_q \quad \Omega_q \big(C \cap A,\hskip.03cm t\big) \. \cdot \. \Omega_q\big( C' \cap A,\hskip.03cm t\big)\., \] where the inequality holds coefficient-wise as a polynomial in~$q$, for every integer \hskip.03cm $t\ge 1$. \end{thm} \smallskip The proof follows the original proof in~\cite{She}, with the FKG inequality replaced by the following \defng{$q$-FKG inequality} of Bj\"orner~\cite{Bjo}. Let \. ${\mathcal{L}}:=(L,\prec^{\small{\ts\diamond\ts}})$ \. be a distributive lattice. A function \. $r:L \to \Rb_{\geq 0}$ \. is called \hskip.03cm \defn{modular} \hskip.03cm if \[ r(a) \. + \. r(b) \ = \ r(a \wedge b) \. + \. r(a \vee b) \quad \text{ for every } \. a,b \in L. \] \smallskip \begin{thm}[{\rm {\em{\defn{$q$-FKG inequality}}}, \cite[Thm~2.1]{Bjo}}]\label{thm:q-FKG} Let \hskip.03cm ${\mathcal{L}} =(L,\prec^{\small{\ts\diamond\ts}})$ \hskip.03cm be a finite distributive lattice, let \. $\mu: L \to \Rb_{\geq 0}$ \. be a log-supermodular function, and let \. $r:L \to \Rb_{\geq 0}$ \. be a modular function. Then, for every pair of $\prec^{\small{\ts\diamond\ts}}$-decreasing functions \.
$g,h: L \to \Rb_{\geq 0}$, we have: \[ E_q(\textbf{\em 1}) \. E_q(gh) \ \geqslant_{q} \ E_q(g) \. E_q(h), \] where the inequality holds coefficient-wise as a polynomial in~$q$, and where \[ E_{q}(g) \, = \, E_{q}(g;\mu,r) \ := \ \sum_{a \in L} g(a) \. \mu(a) \. q^{r(a)}, \] and \. $\textbf{\em 1}:L \to \Rb$ \. is given by \. $\textbf{\em 1}(a)=1$ \. for all \hskip.03cm $a \in L$. \end{thm} \smallskip \begin{rem}\label{r:Bjorner} Note that Theorem 2.1 in \cite{Bjo} assumes that \. $r:L \to \Rb_{\geq 0}$ \. is the \emph{rank function} of the lattice~$L$. It is, however, straightforward to show that the same proof still works when applied to any modular function~$\hskip.03cm r$. \end{rem} \smallskip \begin{proof}[Proof of Theorem~\ref{t:FKG-OP-q}] The proof follows the same argument as in the proof of Lemma~\ref{t:FKG order polynomial}, with the FKG inequality being replaced with Theorem~\ref{thm:q-FKG} applied to the modular function \. $r:L \to \Rb_{\geq 0}$ \. given by \. $r(\textbf{\textit{v}}) \. := \. \sum_{x \in X} v_x$\.. \end{proof} \smallskip \begin{cor}\label{cor:main OP-q} Let \. $P=(X,\prec)$ \. be a poset, and let \. $x,y\in X$ \. be minimal elements. Then, for all \hskip.03cm $t\in \mathbb N_{\ge 1}$ \hskip.03cm and \hskip.03cm $q\in \mathbb R_+$, we have: \[ {\Omega_q\big(P,\hskip.03cm t\big)} \. \cdot \. {\Omega_q\big(P \smallsetminus \{x,y\},\hskip.03cm t\big)} \ \geq \ {\Omega_q\big(P \smallsetminus x, \hskip.03cm t\big)} \. \cdot \. {\Omega_q\big(P\smallsetminus y,\hskip.03cm t\big)}. \] \end{cor} \begin{proof} Denote \. $(n)_q \hskip.03cm := \hskip.03cm 1+q+\ldots + q^{n-1}$. Let $C$ and $C'$ be as in \eqref{eq:colC}, and $A$ be as in \eqref{eq:colA}. Observe that \begin{alignat*}{2} & \Omega_q(C \cap C' \cap A,t) \ = \ \Omega_q(P,t), \qquad && \Omega_q(C' \cap A,t) \ = \ q \hskip.03cm (t)_q \. \Omega_q(P\smallsetminus x,t),\\ & \Omega_q(C \cap A,t) \ = \ q \hskip.03cm (t)_q \.
\Omega_q(P \smallsetminus y,t), \qquad && \Omega_q(A,t) \ = \ q^2 \hskip.03cm (t)_q^2 \.\hskip.03cm \Omega_q(P \smallsetminus \{x,y\},t). \end{alignat*} The conclusion of the corollary now follows from Theorem~\ref{t:FKG-OP-q} and the equations above. \end{proof} \smallskip \begin{rem}\label{r:q-analogue} Note that our proof does not show that the inequality in Corollary~\ref{cor:main OP-q} holds coefficient-wise as a polynomial in~$q$, since the derivation involves canceling the term \hskip.03cm $q\hskip.03cm (t)_q$. It remains to be seen if a $q$-analogue of Theorem~\ref{thm:HP order poly} exists, which hinges on finding an appropriate $q$-analogue of Lemma~\ref{l:main order polynomial}. \end{rem} \smallskip We also have the following \defng{$q$-log-concavity} for order polynomials. \smallskip \begin{cor}\label{c:q-log-concavity} Let $P=(X,\prec)$ be a finite poset. Then, for every integer $t \geq 2$, \[ \Omega_q(P,t)^2 \ \geqslant_q \ \Omega_q(P,t+1) \. \cdot \. \Omega_q(P,t-1),\] where the inequality holds coefficient-wise as a polynomial in~$q$. \end{cor} \begin{proof} The proof follows the same argument as in the proof of Theorem~\ref{thm:log-concave}, with the FKG inequality being replaced with Theorem~\ref{thm:q-FKG} applied to the modular function \. $r:L \to \Rb_{\geq 0}$ \. given by \. $r(\textbf{\textit{v}}) \. := \. \sum_{x \in X} v_x$\.. \end{proof} \bigskip \section{Bounding the order polynomial by injection} \label{s:OP-inj} \smallskip Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset with \hskip.03cm $|X|=n$ \hskip.03cm elements. Denote by \hskip.03cm ${\mathbf{\Omega}}(P,t)$ \hskip.03cm the set of order preserving maps \hskip.03cm $P \to [t]$, so that \. $\Omega(P,t)=|{\mathbf{\Omega}}(P,t)|$. Fix a \defn{natural labeling} of $X$, i.e.\ write \. $X=\{x_1,\ldots,x_n\}$, where \. $i<j$ \. whenever \. $x_i \prec x_j$. For a sequence \. $(a_1,\ldots,a_k)$ \. of distinct integers, the \defn{standardization} is the permutation \.
$\sigma=(\sigma_1,\ldots,\sigma_k)\in S_k$ \. with the entries in the same relative order: $$ a_i < a_j \ \Leftrightarrow \ \sigma_i < \sigma_j \quad \text{for all} \quad 1\le i < j \le k\hskip.03cm. $$ For example, the standardization of \. $(4,7,6,3)$ \. is \. $(2,4,3,1)\in S_4$. \smallskip \begin{proof}[Proof of Theorem~\ref{thm:order_le}] We construct an injection $$\Psi: \, \Ec(P) \hskip.03cm \times \hskip.03cm [t]^n \. \to \ {\mathbf{\Omega}}(P,t) \hskip.03cm \times \hskip.03cm S_n\hskip.03cm. $$ One can identify an element of \. $[t]^n$ \. with an \emph{ordered set partition} $$ [n] \ = \ B_1 \. \sqcup \. \ldots \. \sqcup B_t\., $$ where \. $B_i\subseteq [n]$ \. can be empty. We use \. $\beta=(B_1,\ldots,B_t)$ \. to denote this ordered set partition. Let \. $f \in \Ec(P)$ \. be a linear extension, and \. $\beta=(B_1,\ldots,B_t)$ \. be an ordered set partition as above. Denote \. $b_i := |B_i|$, where \. $1 \le i \le t$. Let \. $\alpha = (a_1,\ldots,a_n)\in [t]^n$ \. be the weakly increasing sequence $$ \big(1,\ldots, 1, \ 2,\ldots, 2, \ \ \ldots \ \ , \ t, \ldots, t\big) \ \ \, \text{with \. $b_i$ \. copies of \. $i$, \ for all \. $1\le i \le t$}\hskip.03cm. $$ By abuse of notation, we also use \hskip.03cm $\alpha$ \hskip.03cm to denote the function \. $\alpha: [n]\to [t]$ \. given by \. $\alpha(i):=a_i$. Define a function \. $g: \hskip.03cm X \to [t]$ \. as \. $g(x_i) := \alpha\bigl(f(x_i)\bigr)$, so that the elements \. $f^{-1}(1)$, \ldots, $f^{-1}(b_1)$ are assigned value~$1$, the elements \. $f^{-1}(b_1+1)$, \ldots, $f^{-1}(b_1+b_2)$ are assigned value~$2$, etc. Observe that \. $g \in {\mathbf{\Omega}}(P,t)$ \. since~$\hskip.03cm f$ \hskip.03cm is increasing with respect to the poset order, and~$\hskip.03cm \alpha$ is a weakly increasing function. Next, define a permutation \. $\sigma\in S_n$ \. as follows. For each~$i$, let \. $g^{-1}(i) = \big\{x_{i_1},\ldots,x_{i_k}\big\}$, where \. $i_1<\ldots<i_k$ \. and \hskip.03cm $k=b_i$ \hskip.03cm by construction.
Let \hskip.03cm $s^{(i)} \in S_k$ \hskip.03cm be the standardization of the sequence \. $\bigl(f(x_{i_1}),\ldots, f(x_{i_k})\bigr)$. Now, rearrange the elements in \hskip.03cm $B_i$ \hskip.03cm according to~$s^{(i)}$, obtaining a sequence \. $\gamma_{i}$ \. whose standardization is~$s^{(i)}$. The permutation $\sigma$ is then obtained by concatenating the resulting sequences, i.e.\ $\sigma:=\gamma_{1} \gamma_{2}\ldots \gamma_{t} \in S_n$. Finally, define \. $\Psi(f,\beta) := (g, \sigma)$. \smallskip To prove that \hskip.03cm $\Psi$ \hskip.03cm is an injection, we construct an inverse map \hskip.03cm $\Psi^{-1}$. Let \. $g \in {\mathbf{\Omega}}(P,t)$ \. and \. $\sigma \in S_n$. Denote \. $c_i :=|g^{-1}(i)|$, for all \. $1\le i \le t$. Let \. $\tau\in [t]^n$ \. be the sorted sequence of values that the function~$g$ takes, i.e.\ $$\tau \ := \ \big(1,\ldots, 1, \ 2,\ldots, 2, \ \ \ldots \ \ , \ t, \ldots, t\big) \ \ \, \text{with \. $c_i$ \. copies of \. $i$, \ for all \. $1\le i \le t$}\hskip.03cm. $$ Note that \hskip.03cm $\tau$ \hskip.03cm is weakly increasing. For each $i$, let $$ C_i \ := \ \big\{\sigma_{c_1\hskip.03cm+\hskip.03cm\ldots \hskip.03cm+\hskip.03cm c_{i-1}\hskip.03cm+\ts1} \., \. \ldots \., \. \sigma_{c_1\hskip.03cm+\hskip.03cm\ldots \hskip.03cm+\hskip.03cm c_i}\big\} $$ be the set consisting of the block of \hskip.03cm $c_i$ \hskip.03cm consecutive entries of~$\sigma$. Denote by \. $\pi=(C_1,\ldots,C_t)$ \. the resulting ordered set partition. Finally, define a function \. $h: X\to [n]$ \. by rearranging the values on the $c_i$ elements in~$g^{-1}(i)$ according to the ordering in \. $$ \big(\sigma_{c_1\hskip.03cm+\hskip.03cm\ldots \hskip.03cm+\hskip.03cm c_{i-1}\hskip.03cm+\ts1} \., \. \ldots \., \. \sigma_{c_1\hskip.03cm+\hskip.03cm\ldots \hskip.03cm+\hskip.03cm c_i}\big), $$ i.e., so that their standardizations are the same permutations. Let us emphasize that \hskip.03cm $h$ \hskip.03cm is not necessarily a linear extension for general \hskip.03cm $(g,\sigma)$ \hskip.03cm as above.
Now set \. $\Psi^{-1}(g,\sigma) := (h,\pi)$, and observe that $$ \Psi^{-1}\bigl(\Psi(f,\beta)\bigr) \, = \, (f,\beta) $$ by construction. This completes the proof. \end{proof} \smallskip \begin{ex} Let us illustrate the construction of \. $\Psi(f,\beta)=(g,\sigma)$ \. in the proof above. Let \. $P=(X,\prec)$ \. be a poset on \hskip.03cm $n=7$ \. elements as in Figure~\ref{f:injection}, where \. $X=\{x_1,\ldots,x_7\}$ \. with the partial order~$\prec$ increasing downwards. Note that we chose a natural labeling, see above. Suppose \hskip.03cm $t=3$. Let \hskip.03cm $f\in \Ec(P)$ \hskip.03cm be a linear extension as in the figure, and let \. $\beta=(B_1,B_2,B_3)$, where \. $B_1=\{2,3,7\}$, $B_2=\{4,6\}$ \. and \. $B_3 = \{1,5\}$. Then we have \. $\alpha = (1,1,1,2,2,3,3)$ \. and the order preserving function \hskip.03cm $g$ \hskip.03cm is given as in the figure. Then, standardize the values \. $$\aligned & \hskip2.cm\bigl(f(x_1),f(x_3),f(x_5)\bigr) = (2,1,3) \ \longrightarrow \ (2,1,3)\.,\\ & \bigl(f(x_2),f(x_7)\bigr) = (4,5) \ \longrightarrow \ (1,2)\., \qquad \bigl(f(x_4),f(x_6)\bigr) = (6,7) \ \longrightarrow \ (1,2). \endaligned $$ Permute the elements within \. $B_1,B_2,B_3$ \. accordingly to get \. $\gamma_1 = (3,2,7)$, \. $\gamma_2=(4,6)$ \. and \. $\gamma_3 = (1,5)$. Concatenating these, we obtain \. $\sigma = (3,2,7,4,6,1,5)\in S_7$. \smallskip In the opposite direction, let \. $g'\in {\mathbf{\Omega}}(P,3)$ \. be as in Figure~\ref{f:injection}, and let \. $\sigma=(7,1,3,2,5,4,6)$. Then \. $\tau=(1,1,2,2,3,3,3)$, so \. $c_1=c_2=2$ \. and \. $c_3=3$. This gives \. $C_1=\{1,7\}$, \. $C_2=\{2,3\}$ \. and \. $C_3=\{4,5,6\}$. The corresponding standardizations are then \. $(2,1)$, \. $(2,1)$ \. and \. $(2,1,3)$, respectively, giving a map \. $h: X\to [n]$. Finally, note that \. $h\notin \Ec(P)$ \. in this case.
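The standardization step is the only nontrivial computation in both directions of this example. A minimal Python sketch (the function name \texttt{standardize} is ours), checked against the sequences above:

```python
def standardize(seq):
    """Permutation (1-based) recording the relative order of the entries
    of seq; the entries are assumed distinct."""
    ranks = sorted(seq)
    return tuple(ranks.index(a) + 1 for a in seq)

# the defining example from the text:
assert standardize((4, 7, 6, 3)) == (2, 4, 3, 1)
# sequences standardized in the two directions of this example:
assert standardize((2, 1, 3)) == (2, 1, 3)
assert standardize((4, 5)) == (1, 2)
assert standardize((7, 1)) == (2, 1)
assert standardize((5, 4, 6)) == (2, 1, 3)
```

The quadratic `ranks.index` lookup is fine for short sequences; a dictionary of positions makes it linearithmic.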
\end{ex} \begin{figure}[hbt] \begin{center} \includegraphics[width=16.5cm]{Injection1} \end{center} \vskip-.3cm \caption{An example of the injection $\Psi$ and the inverse map $\Psi^{-1}$. }\label{f:injection} \vskip.3cm \end{figure} \smallskip \begin{proof}[{Proof of Theorem~\ref{t:OP-SP}}] In the notation of the proof above, let \. $(g,\sigma) \in {\mathbf{\Omega}}(P,t) \times S_n$ \. and let \. $(h,\pi) = \Psi^{-1}(g,\sigma)$. By construction, we have \. $(h,\pi) \in \Ec(P) \times [t]^n$ \. if and only if \. $h \in \Ec(P)$. Thus, the function \. ${\zeta}(P,t)$ \. is equal to the number of \. $(g,\sigma) \in {\mathbf{\Omega}}(P,t) \times S_n$ \. such that \hskip.03cm $h \notin \Ec(P)$. Since \hskip.03cm $\Psi^{-1}$ \hskip.03cm can be computed in polynomial time, this implies the result. \end{proof} \smallskip \begin{ex}\label{e:HP-power} Let \. $P= A_{n}$ \. be an antichain on \hskip.03cm $n$ \hskip.03cm elements. Then we have \. $e(A_n)=n!$ \. and \. $\Omega(A_n,t)=t^n$. In this case both~\eqref{eq:OP-HP} and~\eqref{eq:OP-gen} are equalities. Similarly, let \hskip.03cm $P=C_n$ \hskip.03cm be a chain of $n$ elements. Then we have \. $e(C_n)=1$ \. and \. $\Omega(P,t) = \binom{t+n-1}{n}$. In this case, the lower bound~\eqref{eq:OP-HP} is slightly better than~\eqref{eq:OP-gen}. \end{ex} \smallskip In a different direction, here are equality conditions for~\eqref{eq:OP-gen} in Theorem~\ref{thm:order_le}. \smallskip \begin{cor}\label{c:OP2-equality} Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset on \hskip.03cm $|X|=n$ \hskip.03cm elements. Then $$ \Omega(P,t) \, = \, e(P) \. \frac{t^n}{n!} $$ for some \. $t \in\mathbb N_{\ge 1}$ \. \underline{if and only if} \. $P=A_n$ \hskip.03cm is the $n$-element antichain. \end{cor} \begin{proof} The ``if'' part is clear. \hskip.03cm For the ``only if'' part, suppose that \hskip.03cm $P \neq A_n$.
Then there are \hskip.03cm $x_i, x_{i+1}\in X$ \hskip.03cm such that \hskip.03cm $x_i\prec x_{i+1}$, and \hskip.03cm $x_{i+1}$ covers~$x_i$. Without loss of generality, we can assume that \hskip.03cm $x_i$ \hskip.03cm is a minimal element. It follows from the proof of Theorem~\ref{thm:order_le} that equality in~\eqref{eq:OP-gen} holds \hskip.03cm if and only if \hskip.03cm $\Psi$ is a bijection. In particular, for all \.$(g,\sigma) \in {\mathbf{\Omega}}(P,t) \times S_n$ \. the map \hskip.03cm $h$ \hskip.03cm in \. $\Psi^{-1}(g, \sigma) = (h,\pi)$ \. must be a linear extension. Now take an order preserving map~$g$, such that \. $g(x_1)=\ldots=g(x_{i+1})=1$, and \hskip.03cm $\sigma = (i,i+1)$. Then we have \hskip.03cm $h(x_i)=i+1$ \hskip.03cm and \hskip.03cm $h(x_{i+1})=i$, so that \hskip.03cm $h$ \hskip.03cm is not a linear extension, i.e.\ \hskip.03cm $h\notin \Ec(P)$. Thus, the map~$\Psi$ is not a bijection in this case. This completes the proof. \end{proof} \smallskip \begin{ex}\label{e:OP-Stanley} Let \. $P_n = C_1 \oplus A_{n-1}$ \. be an ordered tree poset consisting of one minimal element and $(n-1)$ maximal elements. Then \hskip.03cm $e(P_n)=(n-1)!$ \hskip.03cm and the bound~\eqref{eq:HP} is an equality. Observe that $$ \Omega(P_n,t) \ = \ 1^{n-1}\. + \. 2^{n-1}\. + \. \ldots \. + \. t^{n-1} \ = \ \frac{t^n}{n} + \frac{t^{n-1}}{2} + O(t^{n-2}). $$ The general inequality~\eqref{eq:OP-gen} gives \. $\Omega(P_n,t) \hskip.03cm \ge \ \frac{t^n}{n}$\hskip.03cm, while~\eqref{eq:OP-HP} gives a stronger bound: $$ \Omega(P_n,t) \ \ge \ \frac{1}{n} \. \bigl(t^n + \.t^{n-1}\bigr). $$ Asymptotically, neither lower bound is tight in the second order term. Compare this to~\eqref{eq:OP-F-ineq} which conjecturally gives a sharp bound. \end{ex} \smallskip \begin{rem}\label{r:OP-injection} Neither of Theorems~\ref{thm:HP order poly} and~\ref{thm:order_le} implies the other.
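The two lower bounds can be compared numerically for the poset $P_n$ from Example~\ref{e:OP-Stanley}; a brute-force sketch (Python, illustrative only; it assumes the displayed formula $\Omega(P_n,t)=1^{n-1}+\ldots+t^{n-1}$ and takes $r=1$ in the candidate improved bound):

```python
from math import factorial

# Order polynomial of P_n = C_1 + A_{n-1} with a global minimum:
# Omega(P_n, t) = 1^{n-1} + 2^{n-1} + ... + t^{n-1}
def omega_P(n, t):
    return sum(i ** (n - 1) for i in range(1, t + 1))

for n in range(3, 8):
    e_P = factorial(n - 1)                          # e(P_n) = (n-1)!
    for t in range(1, 6):
        gen_bound = t ** n * e_P / factorial(n)     # t^n e(P)/n!
        hp_bound = (t ** n + t ** (n - 1)) / n      # the stronger bound
        assert omega_P(n, t) >= hp_bound >= gen_bound
    # the candidate bound t (t+1)^{n-1} e(P)/n!  (r = 1) already fails at t = 2:
    assert omega_P(n, 2) < 2 * 3 ** (n - 1) / n
```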
Note that the leading coefficient of \hskip.03cm $\Omega(P,t)$ \hskip.03cm is \hskip.03cm $e(P)/n!$, and the Bj\"orner--Wachs inequality~\eqref{eq:HP} is an equality only for ordered forests (Proposition~\ref{p:HP-forest}). Thus, for large values of~$t$, the lower bound in Theorem~\ref{thm:order_le} is asymptotically better. On the other hand, the lower bound in Theorem~\ref{thm:order_le} cannot be improved to \. $t^r\hskip.03cm (t+1)^{n-r}\hskip.03cm e(P)/n!$ \. as Theorem~\ref{thm:HP order poly} might suggest. Indeed, for the poset $P_n$ as in the example above, we have: $$\Omega(P_n,2) \ = \ 1^{n-1}+2^{n-1} \ < \ \frac{2\cdot 3^{n-1}}{n} \quad \ \text{for \ $n\geq 3$.} $$ Finally, let us mention that the order polynomial \hskip.03cm $\Omega(P,t)$ \hskip.03cm can have negative coefficients, implying that~\eqref{eq:OP-gen} does not follow directly from the leading term \. $t^n \hskip.03cm e(P)/n!$\hskip.03cm. For example, recall that $$ \Omega(P_5,t) \ = \ 1^{4}\. + \. 2^{4} \. + \. \ldots \. + \. t^{4} \ = \ \frac{1}{30}\. \bigl(6 \hskip.03cm t^5 \hskip.03cm + \hskip.03cm 15 \hskip.03cm t^4 \hskip.03cm + \hskip.03cm 10 \hskip.03cm t^3 \hskip.03cm - \hskip.03cm t\bigr), $$ and note the negative coefficient in~$t$. \end{rem} \medskip \section{Restricted linear extensions} \label{s:restricted} In this section we use an algebraic approach to obtain vanishing and uniqueness conditions for the generalized Stanley inequality. We also present a direct combinatorial argument for the uniqueness conditions. \subsection{Background}\label{ss:restricted-background} Before we proceed to generalizations, let us recall some definitions and results about the group action on the set~$\Ec(P)$ of linear extensions. In our presentation we follow Stanley's survey~\cite{Sta-promo}. Let \hskip.03cm $P=(X,\prec)$ \hskip.03cm be a poset on \hskip.03cm $|X|=n$ \hskip.03cm elements. A \defn{promotion} \. $\partial: \Ec(P) \to \Ec(P)$ \. is a bijection on linear extensions defined as follows.
For \hskip.03cm $f\in \Ec(P)$, let \. $t_1 \prec \ldots \prec t_r$ \. be a maximal chain in~$P$, such that $f(t_1),f(t_2),\ldots,f(t_r)$ is lexicographically smallest. Define \. $f\partial\in \Ec(P)$ \. as $$f\partial(x) = \begin{cases} \ f(t_{i+1})-1 & \ \ \text{ if \ }x=t_i\text{ \, for some } \, i<r\hskip.03cm, \\ \ n & \ \ \text{ if \ $x=t_r$}\hskip.03cm,\\ \ f(x)-1\hskip.03cm & \ \ \ \text{otherwise} . \end{cases} $$ We think of~$\partial$ as an operator applied on the right, and write \. $\partial: \hskip.03cm f \hskip.03cm\to\hskip.03cm f\hskip.03cm \partial$. An \defn{evacuation} \. $\varepsilon: \Ec(P) \to \Ec(P)$ \. is another operator on linear extensions defined as follows. Denote by \hskip.03cm $\partial_i$ \hskip.03cm the promotion on the poset obtained by restriction to the elements with values \. $1,\ldots,i$, so that \hskip.03cm $\partial_n=\partial$ \hskip.03cm and \hskip.03cm $\partial_1=1$. Then \hskip.03cm $\varepsilon$ \hskip.03cm is defined as the composition \. $\varepsilon := \hskip.03cm\partial_n \hskip.03cm \circ \hskip.03cm\dots\hskip.03cm\circ\hskip.03cm\partial_1$\hskip.03cm, and we write \. $f\varepsilon = f\hskip.03cm\partial_n \hskip.03cm \cdots\hskip.03cm\partial_1$\hskip.03cm. \smallskip The promotion and evacuation maps can be interpreted using group actions on linear extensions as follows. \smallskip Let \hskip.03cm $\textrm{G}_n=\<\tau_1,\ldots,\tau_{n-1}\>$ \hskip.03cm be an infinite Coxeter group with the relations \begin{equation}\label{eq:Coxeter} \tau_1^2\. = \. \ldots \. = \tau_{n-1}^2 \. = \. 1 \qquad \text{and} \qquad \tau_i\hskip.03cm\tau_j\. = \. \tau_j\hskip.03cm \tau_i \ \ \text{ for all } \ \, |i-j|\. > \.1\hskip.03cm. \end{equation} Note that the symmetric group \hskip.03cm $S_n$ \hskip.03cm is a quotient of~$\hskip.03cm\textrm{G}_n$. We also define elements \. $\delta_2,\ldots,\delta_{n}=\delta\in \textrm{G}_n$ \. as follows: $$\delta_k \. := \. \tau_1 \hskip.03cm \tau_2 \.\cdots \.\tau_{k-1} \ \ \.
\text{for} \ \ 1< k \le n\., \quad \text{and} \quad \gamma \. := \. \delta_n \hskip.03cm \delta_{n-1} \hskip.03cm \cdots \hskip.03cm \delta_2 \hskip.03cm. $$ Note that \. $\textrm{G}_n=\<\delta_2,\ldots,\delta_n\>$ and that $\gamma$ is an involution, $\gamma^2=1$. With every linear extension \hskip.03cm $f\in \Ec(P)$ \hskip.03cm we associate a word \. $\textbf{\textit{x}}_f \hskip.03cm=\hskip.03cm x_1\ldots x_n\hskip.03cm \in X^\ast$, such that \. $f(x_i) =i$ \. for all \. $1\le i \le n$. In the notation of the previous section, this says that \. $X=\{x_1,\ldots,x_n\}$ \. is a natural labeling corresponding to~$f$. We can now define the action of \hskip.03cm $\textrm{G}_n$ \hskip.03cm on \hskip.03cm $\Ec(P)$ \hskip.03cm as the right action on the words \hskip.03cm $\textbf{\textit{x}}_f$, \hskip.03cm $f\in \Ec(P)$. For \. $\textbf{\textit{x}}_f \hskip.03cm =\hskip.03cm x_1\ldots \hskip.03cm x_n$ \. as above, let \begin{equation}\label{eq:tau-def} (x_1\ldots \hskip.03cm x_n) \. \tau_i \ := \ \begin{cases} \ x_1 \ldots \hskip.03cm x_n, & \ \text{if \, $x_i \prec x_{i+1}$}\hskip.03cm,\\ \ x_1\dots x_{i+1} \hskip.03cm x_i \dots x_n\hskip.03cm, & \ \text{if \, $x_i \parallel x_{i+1}$}\hskip.03cm. \end{cases} \end{equation} Observe that if \. $1= i_1 < i_2 <\dots <i_r\le n$ \. are the indices of the lexicographically smallest maximal chain in the linear extension~$f$, then $$ (\textbf{\textit{x}}_f) \. \delta \, = \, (x_1\ldots \hskip.03cm x_n) \. \delta \, = \, x_2 \ldots \hskip.03cm x_{i_2-1} \hskip.03cm x_{i_1} \ldots \hskip.03cm x_{i_r-1} \hskip.03cm x_{i_{r-1}} \ldots \hskip.03cm x_{i_r} \, = \, \textbf{\textit{x}}_{f\partial}\., $$ where \hskip.03cm $\delta=\delta_n$ \hskip.03cm is as above, so the action of~$\delta$ realizes the promotion operator. \smallskip \begin{prop}[{\rm see e.g.~\cite[Prop.~4.1]{AKS}}{}] \label{p:LE-transitive} Let \. $P=(X,\prec)$ \. be a poset with \. $|X|=n$ \. elements. Then the group \.
{\rm $\textrm{G}_n$} \hskip.03cm acts transitively on \hskip.03cm $\Ec(P)$. \end{prop} \smallskip \begin{rem} The proposition is a folklore result repeatedly rediscovered in different contexts. For the early proofs and connections to Markov chains, see~\cite{KK,Mat}. For a brief overview of generalizations and further references, we refer to the discussion which follows Prop.~1.2 in~\cite{DK1}. \end{rem} \smallskip \subsection{Generalization to restricted posets}\label{ss:restricted-gen} Let \hskip.03cm $P=(X,\prec)$. Fix a sequence of $k$ elements \. $\textbf{\textit{u}}=(u_1,\ldots,u_k) \in X$ \. and a sequence of $k$ distinct integers \. $\textbf{{\textit{a}}} = (a_1,\ldots,a_k)$, such that \. $1\leq a_1 < \ldots < a_k \leq n$. A \defn{restricted linear extension} with respect to \hskip.03cm $(\textbf{\textit{u}},\textbf{{\textit{a}}})$ \hskip.03cm is a linear extension \hskip.03cm $f\in \Ec(P)$ \hskip.03cm such that \. $f(u_i)=a_i$ \. for all \. $1\le i\le k$. As in the introduction, we denote this set by \. $\Ec(P,\hskip.03cm\textbf{\textit{u}},\hskip.03cm\textbf{{\textit{a}}})$. Our first goal is to modify and generalize Proposition~\ref{p:LE-transitive} for restricted linear extensions. For simplicity assume that \. $a_i +1 < a_{i+1}$ \. for all \. $1\le i< k$. Otherwise, we can identify elements \hskip.03cm $u_i\sim u_{i+1}$ \hskip.03cm and consider the equivalent problem for the resulting smaller poset. We also assume $a_1>1$ and $a_k<n$, since otherwise the corresponding element $u_1$ or $u_k$ must be minimal or maximal, respectively, and can be removed from the poset, again reducing the problem. We also assume that $u_i \prec u_{i+1}$ in the poset. \smallskip Let \. $A:=\{a_1,\ldots, a_k\}$ \. and \. $A'=\{a_1-1,\ldots, a_k-1\}$, so \. $A\cap A'=\varnothing$ \. by the assumption. Let $$ \textrm{H}_n(\textbf{{\textit{a}}}) \, = \, \<\hskip.03cm\tau_i, \. \sigma_r \ : \ 1\le i < n, \. i\notin A \cup A', \.
1\le r\le k\hskip.03cm\> $$ be an infinite group with relations as in~\eqref{eq:Coxeter} and \. $\sigma_r^2=1$ \. for all \. $1\le r \le k$. This is a free product of several infinite Coxeter groups which acts on \hskip.03cm $\Ec(P, \hskip.03cm \textbf{\textit{u}},\hskip.03cm \textbf{{\textit{a}}})$ \hskip.03cm as follows. First, for all \. $i\notin A \cup A'$, \. $1\le i < n$, the action of \hskip.03cm $\tau_i$ \hskip.03cm defined in~\eqref{eq:tau-def} can be restricted to act on \hskip.03cm $\Ec(P, \hskip.03cm \textbf{\textit{u}},\hskip.03cm\textbf{{\textit{a}}})$. Next, for all \. $1\le i \le k$ \. and \. $j=a_i$ \. define the action of~$\hskip.03cm \sigma_i$ \hskip.03cm on \hskip.03cm $\Ec(P,\hskip.03cm \textbf{\textit{u}},\textbf{{\textit{a}}})$: $$ (x_1\ldots \hskip.03cm x_n) \. \sigma_i \ := \ \begin{cases} \ x_1 \ldots \hskip.03cm x_{j+1} \hskip.03cm x_j \hskip.03cm x_{j-1} \ldots \hskip.03cm x_n & \ \text{if } \ \ x_{j-1} \parallel x_j\., \ \. x_{j-1} \parallel x_{j+1} \ \ \text{and} \ \, x_{j} \parallel x_{j+1}\,,\\ \ x_1\ldots \hskip.03cm x_{j-1} \hskip.03cm x_j \hskip.03cm x_{j+1} \ldots \hskip.03cm x_n & \text{otherwise}\hskip.03cm. \end{cases} $$ Here we continue using our convention of associating \hskip.03cm $\textbf{\textit{x}}_f$ \hskip.03cm with \. $f \in \Ec(P,\hskip.03cm \textbf{\textit{u}},\textbf{{\textit{a}}})$. Note that when \. $x_{j-1} \parallel x_j$ \. and \. $x_{j} \parallel x_{j+1}$ \. we have $$ \textbf{\textit{x}} \. \sigma_i \, = \textbf{\textit{x}} \. \tau_j \hskip.03cm \tau_{j-1} \hskip.03cm \tau_j \, = \, \textbf{\textit{x}} \. \tau_{j-1}\hskip.03cm \tau_j\hskip.03cm \tau_{j-1}\hskip.03cm. $$ However, when \. $x_{j-1} \prec x_{j}$ \. and \. $x_j \parallel x_{j+1}$ \. we have \. $\textbf{\textit{x}} \. \sigma_i \. = \. \textbf{\textit{x}}$, but $$\textbf{\textit{x}} \. \tau_{j-1} \hskip.03cm\tau_j\hskip.03cm \tau_{j-1} \, \neq \, \textbf{\textit{x}}, $$ since \. $x_j$ \hskip.03cm has been moved to the $(j+1)$-st position.
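The computations above can be checked mechanically on a three-letter word; a small sketch (Python, illustrative only; the toy relation `rel` encoding the single assumed cover relation $a\prec b$ is ours):

```python
# tau at 0-indexed position p swaps the two adjacent letters when they
# are incomparable; otherwise it fixes the word (illustrative sketch).
def incomparable(rel, x, y):
    return (x, y) not in rel and (y, x) not in rel

def tau(word, p, rel):
    w = list(word)
    if incomparable(rel, w[p], w[p + 1]):
        w[p], w[p + 1] = w[p + 1], w[p]
    return tuple(w)

def apply_taus(word, positions, rel):
    for p in positions:
        word = tau(word, p, rel)
    return word

w = ('a', 'b', 'c')

# All three letters pairwise incomparable: sigma reverses the word, and
# equals tau_j tau_{j-1} tau_j = tau_{j-1} tau_j tau_{j-1} (center j = 2).
antichain = set()
assert apply_taus(w, [1, 0, 1], antichain) == ('c', 'b', 'a')
assert apply_taus(w, [0, 1, 0], antichain) == ('c', 'b', 'a')

# Now a < b while b, c and a, c are incomparable: sigma fixes the word,
# but tau_{j-1} tau_j tau_{j-1} moves b to the last position.
rel = {('a', 'b')}
assert apply_taus(w, [0, 1, 0], rel) == ('c', 'a', 'b')
```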
The same property holds when \. $x_{j-1} \parallel x_{j}$ \. and \. $x_j \prec x_{j+1}$\hskip.03cm. \smallskip \begin{ex}\label{e:group-H-not} Let us note that the action of \hskip.03cm $\textrm{H}_n(\textbf{{\textit{a}}})$ \hskip.03cm on \hskip.03cm $\Ec(P, \hskip.03cm\textbf{\textit{u}},\hskip.03cm\textbf{{\textit{a}}})$ \hskip.03cm is not necessarily transitive. For example, let \hskip.03cm $P=(X,\prec)$, where \. $X=\{x,y,u_1,z,u_2\}$, be a poset isomorphic to \. $C_3 + C_1 + C_1$ \. with \. $u_1 \prec z \prec u_2$ \. and $x,y$ incomparable to \. $\{u_1,z,u_2\}$. Now, when \. $a_1=2$ \. and \. $a_2=4$, the action of the group $\textrm{H}_5(\textbf{{\textit{a}}})$ \. has two orbits: \. $\{xu_1zu_2y\}$ \. and \. $\{yu_1zu_2x\}$. This shows that to generalize Proposition~\ref{p:LE-transitive} we need to enlarge the group~$\textrm{H}_5(\textbf{{\textit{a}}})$. \end{ex} \smallskip Let \. $\wh{\textrm{G}}_n=\bigl\<\tau_{i\hskip.03cm j} \.:\. 1\leq i<j\leq n\bigr\>$ \. be an infinite group with relations \begin{equation}\label{eq:tau-G-def} \aligned \tau_{i\hskip.03cm j}^2 \. = \. 1 & \qquad \text{for all } \ \ 1\le i<j\le n\hskip.03cm, \\ \tau_{i\hskip.03cm j} \. \tau_{k\hskip.03cm \ell} \. = \. \tau_{k\hskip.03cm \ell} \.\tau_{i\hskip.03cm j} & \qquad \text{for all } \ \ 1\le i<k<\ell<j\le n\hskip.03cm. \endaligned \end{equation} Define the action of \. $\wh{\textrm{G}}_n$ \hskip.03cm on \hskip.03cm $\Ec(P)$ \hskip.03cm as $$(x_1\ldots \hskip.03cm x_n) \. \tau_{i\hskip.03cm j} \ := \ \begin{cases} \ x_1\ldots \hskip.03cm x_j \ldots \hskip.03cm x_i \ldots x_n & \ \text{if \. $x_i \parallel y$ \. and \. $x_j \parallel y$ \. for all \. $y \in\{x_{i+1},\ldots,x_{j-1}\}$}, \\ \ x_1 \ldots \hskip.03cm x_n & \ \text{otherwise}. \end{cases} $$ In the notation above, we have \. $\tau_{i \. i+1} = \tau_i$\., so \. $\textrm{G}_n \subset \wh{\textrm{G}}_n$ \. is a subgroup. For brevity, we write \hskip.03cm $\tau_i$ \hskip.03cm for \hskip.03cm $\tau_{i \.
i+1}$ \hskip.03cm from this point on. Finally, let \. $\wh{\textrm{G}}_n(\textbf{{\textit{a}}})$ \. be a subgroup of \hskip.03cm $\wh{\textrm{G}}_n$ \hskip.03cm defined as follows: $$\wh{\textrm{G}}_n(\textbf{{\textit{a}}}) \, := \, \bigl\<\hskip.03cm \tau_{i\hskip.03cm j} \, : \, i,j\notin A, \, 1\le i<j\le n \hskip.03cm\bigr\>\hskip.03cm. $$ \smallskip \begin{thm}\label{t:LE-gen-transitive} Let \. $P=(X,\prec)$ \. be a poset with \. $|X|=n$ \. elements. Fix a chain of $k$ elements \. $\textbf{\textit{u}}=(u_1,\ldots,u_k) \in X$ \. and an increasing sequence of $k$ distinct integers \. $\textbf{{\textit{a}}} = (a_1,\ldots,a_k)$. Then the group \. {\rm $\wh{\textrm{G}}_n(\textbf{{\textit{a}}})$} \hskip.03cm defined above acts transitively on \hskip.03cm $\Ec(P, \hskip.03cm \textbf{\textit{u}},\hskip.03cm\textbf{{\textit{a}}})$. \end{thm} \begin{proof} Suppose \. $\Ec(P,\hskip.03cm \textbf{\textit{u}},\hskip.03cm \textbf{{\textit{a}}}) \neq \varnothing$. Fix \. $f \in \Ec(P,\hskip.03cm \textbf{\textit{u}},\hskip.03cm\textbf{{\textit{a}}})$ \. and write \. $\textbf{\textit{x}}_0=x_1\ldots x_n$ \. corresponding to the natural labeling \. $f(x_i)=i$. For every \. $\textbf{\textit{y}} = y_1\ldots y_n$ \. corresponding to a linear extension \. $g \in \Ec(P,\hskip.03cm \textbf{\textit{u}},\hskip.03cm \textbf{{\textit{a}}})$, let \hskip.03cm $\operatorname{{\rm inv}}(\textbf{\textit{y}})$ \hskip.03cm be the number of inversions in the permutation~$f(\textbf{\textit{y}}):=\bigl(f(y_1),\ldots,f(y_n)\bigr)$. We claim that unless \. $\textbf{\textit{y}}=\textbf{\textit{x}}_0$, we can use operators in \. $\wh{\textrm{G}}_n(\textbf{{\textit{a}}})$ \. to decrease \hskip.03cm $\operatorname{{\rm inv}}(\textbf{\textit{y}})$. Using this recursively, we can then reach \hskip.03cm $\textbf{\textit{x}}_0$ \hskip.03cm as the unique element for which~$\hskip.03cm \operatorname{{\rm inv}}(\textbf{\textit{x}}_0) =0$. Consider the permutation \. $w:=\bigl(f(y_1),\ldots,f(y_n)\bigr)$. If \.
$w\ne \mathbf{1}$, there exist adjacent entries \. $f(y_i), f(y_{i+1})$ \. of $w$ such that \. $f(y_i)>f(y_{i+1})$. Then we have \. $y_i \parallel y_{i+1}$. We call such \. $(y_i, y_{i+1})$ \. a \defn{descending pair}. Suppose there is a descending pair with \. $y_i, y_{i+1} \not \in \textbf{\textit{u}}$. Then \. $\tau_i \in \wh{\textrm{G}}_n(\textbf{{\textit{a}}})$, and \. $\textbf{\textit{y}}\. \tau_i \hskip.03cm = \hskip.03cm \ldots y_{i+1}\hskip.03cm y_i\ldots$ \hskip.03cm Therefore, we have \. $\operatorname{{\rm inv}}(\textbf{\textit{y}}\hskip.03cm\tau_i)\hskip.03cm =\hskip.03cm \operatorname{{\rm inv}}(\textbf{\textit{y}}) -1$, which proves the claim in this case. In the remaining cases, every descending pair involves at least one element from~$\textbf{\textit{u}}$; these elements are fixed points of the labeling~$f$. Suppose there are two adjacent descending pairs, i.e.\ \. $f(y_{i-1})>f(y_i)>f(y_{i+1})$ \. and \. $y_i \in \textbf{\textit{u}}$. Then we have \. $\operatorname{{\rm inv}}(\textbf{\textit{y}}\tau_{i-1 \. i+1})=\operatorname{{\rm inv}}(\textbf{\textit{y}})-3$, which proves the claim in this case. Finally, suppose every descending pair involves at least one element from~$\textbf{\textit{u}}$, and none are adjacent. Let \hskip.03cm $i_1-1$ \hskip.03cm be the last descent of $w$. Then \hskip.03cm $i_1-1\in \textbf{{\textit{a}}}$, i.e.\ $i_1-1=a_t$ \. for some $a_t\in \textbf{{\textit{a}}}$, and we have \. $a_t=w_{a_t} >w_{i_1}$. To see this, suppose on the contrary that $i_1=a_t$, so that the last descent in $w$ is at $i_1 -1 =a_t-1$, i.e.\ $w_{i_1-1} > w_{a_t}=a_t$, while all entries of $w$ after position $a_t$ are increasing. Since $a_t$ is a fixed point and the $n-a_t$ positions after $a_t$ are filled with numbers larger than $a_t$, i.e.\ from the interval $\{a_t+1,\ldots,n\}$, we must have $w_i=i$ for $i=a_t,\ldots,n$, so all values $>a_t$ appear after position $a_t$. Thus $w_{a_t-1}<a_t =w_{a_t}$, reaching a contradiction, and so the last descent is at position $a_t$. Let \. $m <i_1-1$ \. be the largest value for which \.
$w_{m}>i_1-1=a_t$. Such a value exists since by the reasoning above at least one of \. $\{i_1,\ldots,n\}$ \. appears before~$i_1-1$. Now, form a sequence \. $i_1\hskip.03cm>\hskip.03cm i_2\hskip.03cm>\hskip.03cm\ldots\hskip.03cm>\hskip.03cm i_r\hskip.03cm >\hskip.03cm i_0= m$\hskip.03cm, such that \hskip.03cm $i_j$ \hskip.03cm is the largest index smaller than \hskip.03cm $i_{j-1}$ \hskip.03cm such that \. $w_{i_j}<w_{i_{j-1}}$. Note that by similar interval arguments we must have that \. $f(y_{i_j})<i_j$\., and that these indices do not hit any of the elements in~$\textbf{{\textit{a}}}$. Note that these indices give a maximal increasing subsequence in~$w_{m+1}\ldots w_n$ which ends at~$w_{i_1}$. Now apply \. $\tau_{m\. i_r}\. \tau_{i_r \. i_{r-1}} \. \cdots \. \tau_{i_2\. i_1}$, which is nontrivial: it transposes the element $y_m$, incomparable to all elements in positions in $[m+1,i_1]$, with the elements $y_{i_r},\ldots$, which are also incomparable with the elements in the corresponding intervals. Note that this is the cycle permutation \. $(m,i_1,i_2,\ldots)$ \. so that $y_{m}$ moves to position $i_1$, and the other elements slide down. This gives a linear extension where the elements sliding to the left bypass only elements of larger value of~$f$, and hence respect the partial order. Since \hskip.03cm $f(y_{m})$ \. is larger than the elements it jumps over, the resulting permutation has fewer inversions. This proves the claim in that case and completes the proof of the theorem. \end{proof} \smallskip \subsection{Vanishing conditions}\label{ss:restricted-vanish} For \. $\textbf{{\textit{a}}}=(a_1,\ldots,a_k)$, let \. $\textbf{{\textit{a}}}^{\<i\>} := (a_1,\ldots,a_i+1,\ldots,a_k)$.
By definition, the operator $$ \tau_{a_i}\, : \ \Ec(P,\textbf{\textit{u}}, \textbf{{\textit{a}}}) \cup \Ec\bigl(P,\textbf{\textit{u}},\textbf{{\textit{a}}}^{\<i\>}\bigr) \ \to \ \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}}) \cup \Ec\bigl(P,\textbf{\textit{u}},\textbf{{\textit{a}}}^{\<i\>}\bigr) $$ is an involution. For \. $i< j$, let $$\delta_{i \hskip.03cm j} \, := \, \tau_i \. \tau_{i+1} \. \cdots \. \tau_{j-1} \quad \text{and} \quad \delta_{j\hskip.03cm i} \, := \, \tau_{j-1} \. \cdots \. \tau_{i+1} \. \tau_i $$ be the \defn{promotion operator} starting at position~$i$ and ending in position~$j$, and the \defn{demotion operator} starting at position~$j$ and ending in position~$i$. \smallskip \begin{proof}[Proof of Theorem~\ref{t:vanish}] Without loss of generality, we can assume that the poset \. $P=(X,\prec)$ \. has a unique minimal element~$\widehat{0}$ and a unique maximal element~$\widehat{1}$. Since \. $f\big(\widehat{0}\big) = 1$ \. and \. $f\big(\widehat{1}\big) = n$ \. for every linear extension \. $f\in \Ec(P)$, we can also add \. $u_0 = \widehat{0}$ \. and \. $u_{k+1} = \widehat{1}$ \. to the chain \. $u_1\prec \ldots \prec u_k$\hskip.03cm, and set \. $a_0=1$, \. $a_{k+1}=n$. Equation~\eqref{eq:Sta-gen-vanish} then simplifies to \begin{align}\label{eq:a_conditions} a_j\. - \. a_i \ > \ h(u_i,u_j) \ \quad \text{ for all } \quad 0\leq i<j \leq k+1. \end{align} First, let us show that the inequalities~\eqref{eq:a_conditions} hold whenever \. $\Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}}) \neq \varnothing$. Indeed, in every word \hskip.03cm $\textbf{\textit{x}}_f$ \hskip.03cm corresponding to a linear extension \. $f\in \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})$ \. with \. $f(u_i)=a_i$\hskip.03cm, the elements from \. $\bigl(u_i,u_j\bigr)_P$ \. must lie between~$u_i$ and~$u_j$, and hence \. $a_j-a_i>h(u_i,u_j)$. \smallskip In the opposite direction, assume that the inequalities~\eqref{eq:a_conditions} hold for all \. $0\leq i<j \leq k+1$. To prove that \.
$\Ec(P,\textbf{\textit{u}},\hskip.03cm\textbf{{\textit{a}}})\ne \varnothing$, proceed by induction on~$k$. For \hskip.03cm $k=1$, let \hskip.03cm $\alpha$ \hskip.03cm be a word obtained by totally ordering the poset interval \. $\big(\widehat{0},u_1\big)$, and \hskip.03cm $\beta$ \hskip.03cm be a word obtained by totally ordering the poset interval \. $\big(u_1,\widehat{1}\big)$. Order the remaining elements of \hskip.03cm $P-u_1$ \hskip.03cm into a word~$\gamma$, and then insert \hskip.03cm $u_1$ \hskip.03cm at position \hskip.03cm $a_1$ \hskip.03cm in the concatenation \. $\alpha\hskip.03cm\gamma\hskip.03cm\beta$. Since \hskip.03cm $u_1 \parallel \gamma$, \. $a_1>|\alpha|$, and \. $n-a_1>|\beta|$, this is a linear extension in \hskip.03cm $\Ec(P,u_1,a_1)$. Suppose now that the result holds for all sequences of length \. $k\ge 1$, and let \. $\textbf{\textit{y}} \in \Ec(P,\hskip.03cm \textbf{\textit{u}},\hskip.03cm \textbf{{\textit{a}}})$. Now let \hskip.03cm $u_{k+1}$ \hskip.03cm be another element and \hskip.03cm $a_{k+1}$ \hskip.03cm satisfy the conditions in the statement. Suppose that \hskip.03cm $u_{k+1}$ \hskip.03cm is at position \. $a'\neq a_{k+1}$. Let us show that if \hskip.03cm $a'<a_{k+1}$, then we can move \hskip.03cm $u_{k+1}$ \hskip.03cm to position \hskip.03cm $a'+1$ without moving the other~$u$'s, and if \hskip.03cm $a'>a_{k+1}$ we can move \hskip.03cm $u_{k+1}$ \hskip.03cm one position down. Repeating this we will eventually get \hskip.03cm $u_{k+1}$ \hskip.03cm to position \hskip.03cm $a_{k+1}$, to obtain the desired linear extension. \smallskip From now on we act with the group $\textrm{G}_n$, using the promotion and demotion operators $\delta_{ij}$, to transform $\textbf{\textit{y}}$. Let \. $a'<a_{k+1}$. Since \.
$n-a'>n-a_{k+1}\geq h(u_{k+1},\widehat{1})$, there must be at least one element in~$\textbf{\textit{y}}$ appearing after \hskip.03cm $u_{k+1}$ \hskip.03cm which is incomparable to \hskip.03cm $u_{k+1}$; denote by \hskip.03cm $z$ \hskip.03cm the first such element, at position $t$. Then the elements between \hskip.03cm $u_{k+1}$ \hskip.03cm and \hskip.03cm $z$ \hskip.03cm are incomparable with $z$, since they must be $\succ u_{k+1}$. Inserting~{\hskip.03cm $z$} \hskip.03cm immediately before \hskip.03cm $u_{k+1}$ \hskip.03cm, i.e.\ forming the word \hskip.03cm $\textbf{\textit{y}}\delta_{t a'}$, then respects the partial order and shifts \hskip.03cm $u_{k+1}$ \hskip.03cm to position~$\hskip.03cm a'+1$. Note that this transformation does not move $u_1,\ldots, u_k$ since we assume that $u_k \prec u_{k+1}$, which implies that $a_k <a'$. Suppose now that \. $a'>a_{k+1}$. Then \. $a'> h(\widehat{0},u_{k+1}) +1$, so there is an element before \hskip.03cm $u_{k+1}$ \hskip.03cm which is incomparable to \hskip.03cm $u_{k+1}$. Let \hskip.03cm $z_0$ \hskip.03cm be the last such element, and suppose that it is at position \. $i_0\in (a_{r-1},a_{r})$. Note that the elements between \hskip.03cm $z_0$ \hskip.03cm and \hskip.03cm $u_{k+1}$ in~$\textbf{\textit{y}}$ must be incomparable to~{\hskip.03cm $z_0$}, since by minimality they must all be~{\hskip.03cm$\prec u_{k+1}$}. If $z_0$ appears after~$\hskip.03cm u_k$, then we obtain $\textbf{\textit{y}}\delta_{i_0a'}$, where $z_0$ is inserted after \hskip.03cm $u_{k+1}$ \hskip.03cm and \hskip.03cm $u_{k+1}$ shifts to position \hskip.03cm $a'-1$. Otherwise, since \. $a'-a_k > h(u_k,u_{k+1})+1$, there is an element \hskip.03cm $z_k$ \hskip.03cm between $u_k$ and $u_{k+1}$ in $\textbf{\textit{y}}$ such that $z_k$ is incomparable to $u_k$ or to $u_{k+1}$. Since \. $z_k \neq z_0$, we must have \. $z_k \parallel u_k$; let this \hskip.03cm $z_k$ \hskip.03cm be the first such element after \hskip.03cm $u_k$ \hskip.03cm at position~$i_k$.
In general, for every \.$t \in [r,k]$\. we define $z_t$ at position \.$i_t<a'$\. to be the first element after \.$u_t$\., such that \hskip.03cm $z_t \parallel u_t$. Note that such an element exists, which is seen as follows. Since \.$h(u_t,u_{k+1})+1<a'-a_t$\. there is an element \.$z$ \. between \.$u_t$ \. and \. $u_{k+1}$ \. incomparable to at least one of them. However, every such \.$z$\. satisfies \.$z\prec u_{k+1}$\., so we must have \.$z \parallel u_t$. Next, observe that \.$z_t$ is incomparable to all elements in $\textbf{\textit{y}}$ appearing between $u_t$ and $z_t$. Now transform $\textbf{\textit{y}}$ as follows. First, let \hskip.03cm $\textbf{\textit{y}}^0:= \textbf{\textit{y}} \delta_{i_0 \. a'}$, and note that here $z_0$ is sent to position $a'$ and all elements in between have been shifted down one position. Let \hskip.03cm $a'_t:=a_t-1$ \hskip.03cm and \hskip.03cm $i'_t:=i_t-1$ \hskip.03cm be the positions of $u_t$ and $z_t$ in~$\textbf{\textit{y}}^0$. Next, let $\textbf{\textit{y}}^1 = \textbf{\textit{y}}^0 \. \delta_{i'_r \. a_r'}$, which moves $z_r$ before $u_r$, so the position of $u_r$ is restored to $a_r$, as are the positions of all elements between them. Suppose that $i_r \in (a_{p-1}, a_{p})$. Then let $\textbf{\textit{y}}^2:=\textbf{\textit{y}}^1\. \delta_{i'_p \. a'_p}$, so the element $z_p$ is demoted to the position before $u_p$, and thus all other elements at positions in $[a_p,\.i_p]$ have now been restored to their original positions from~$\textbf{\textit{y}}$. Continuing this way, if \. $i_p \in (a_{q-1},\.a_q)$\. we obtain $\textbf{\textit{y}}^3:= \textbf{\textit{y}}^2\. \delta_{i'_q\. a'_q}$, and so on until we have shifted all elements \hskip.03cm $u_r,\ldots,u_k$ \hskip.03cm back to their positions \hskip.03cm $a_r,\ldots,a_k$. Also, \hskip.03cm $u_{k+1}$ \hskip.03cm is at position~$a'-1$, which is what we needed to show. This completes the proof.
\end{proof} \smallskip \begin{proof}[Proof of Corollary~\ref{cor:Sta-gen-vanish-poly}] The first part follows trivially from~\eqref{eq:Sta-gen-vanish} or, equivalently, its simplified version~\eqref{eq:a_conditions}. For the second part, note that the proof above is completely constructive and builds \. $f\in \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})$ \. in polynomial time starting with a linear extension \. $g\in \Ec(P)$. The details are straightforward. \end{proof} \smallskip \subsection{Uniqueness conditions}\label{ss:restricted-unique} In the next theorem, we show that, given a linear extension \hskip.03cm $f \in \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})$, we can check if such~$f$ is unique in polynomial time. In the notation of Theorem~\ref{t:vanish}, let \. $v_i:=f^{-1}(a_i-1)$ \. and \. $w_i:=f^{-1}(a_i+1)$ \. for \. $1\le i \le k$. We adopt the convention that \hskip.03cm $v_1=\widehat{0}$ \hskip.03cm if \hskip.03cm $a_1=1$, and \hskip.03cm $w_k=\widehat{1}$ \hskip.03cm if \hskip.03cm $a_k=n$. For \. $1 \leq i \leq j \leq n$, let \[ f^{-1}[i,j] \ := \ \bigl\{\hskip.03cm f^{-1}(i)\., \.\ldots \., f^{-1}(j)\hskip.03cm \bigr\}. \] \smallskip \begin{thm}\label{t:gen-Stanley-unique} In the notation of Theorem~\ref{t:vanish}, let \. $f \in \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})$ \. be a linear extension as in the theorem. Then \hskip.03cm $|\Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})|=1$ \, \underline{if and only if} \, the following conditions hold: \begin{equation*} \aligned & (1) \quad \text{$f^{-1}[a_i+1,a_{i+1}-1]$ \. forms a chain in $P$ for every \. $1\le i \le k$, \hskip.03cm and } \\ & (2) \quad \text{There are no \. $1 \leq i \leq j \leq k$\hskip.03cm, \. such that \. $\{v_i,w_j\} \hskip.03cm \parallel \hskip.03cm f^{-1}[a_i,a_j]$.} \endaligned \end{equation*} \end{thm} \begin{proof} For the \hskip.03cm $\Rightarrow$ \hskip.03cm direction, note that~$(1)$ follows directly. Indeed, recall that \. $e(P)>1$ \.
unless \hskip.03cm $P$ \hskip.03cm is a chain. Therefore, if the restriction of \hskip.03cm $P$ to \. $\big\{f^{-1}(a_i+1),\ldots,f^{-1}(a_{i+1}-1)\big\}$ \. is not a chain, then there is more than one linear extension over these elements, which extends to the desired linear extension \. $g\in \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})$, \. $g\ne f$. For~$(2)$, suppose to the contrary that \. $v_i,w_j \hskip.03cm \parallel \hskip.03cm f^{-1}[a_i,a_j]$. Then swapping the values of \hskip.03cm $f(v_i)$ \hskip.03cm and $f(w_j)$ \hskip.03cm via $\textbf{\textit{x}}_f \tau_{a_i-1 \hskip.03cm a_j+1}$ we get a new linear extension, a contradiction. \smallskip For the \hskip.03cm $\Leftarrow$ \hskip.03cm direction, suppose that~(1) and~(2) hold and there exists \hskip.03cm $g\in \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})$ \hskip.03cm with \hskip.03cm $g\ne f$. By Theorem~\ref{t:LE-gen-transitive}, there is an element \hskip.03cm $\pi \in \wh{\textrm{G}}_n(\textbf{{\textit{a}}})$ \hskip.03cm such that $\textbf{\textit{x}}_f \pi = \textbf{\textit{x}}_g$. Write $\pi$ as a minimal (reduced) product of generators $\pi = \tau_{r_1 \hskip.03cm s_1} \cdots$ which act nontrivially. So $\textbf{\textit{x}}_f \tau_{r_1 \hskip.03cm s_1} \neq \textbf{\textit{x}}_f$, and thus $\{f^{-1}(r_1), f^{-1}(s_1)\} \hskip.03cm \parallel \hskip.03cm f^{-1}[r_1+1,s_1-1]$. Since the elements in \hskip.03cm $f^{-1}[a_i+1,a_{i+1}-1]$ \hskip.03cm form a chain, we must have that $r_1 =a_i-1$ for some $i$ and that $s_1 = a_j+1$ for some $j$. Note that by definition \hskip.03cm $r_1<s_1$ \hskip.03cm and \hskip.03cm $r_1,s_1 \not\in \textbf{{\textit{a}}}$. Thus \. $\{ f^{-1}(a_i-1), f^{-1}(a_j+1)\} \parallel f^{-1}[a_i,a_j]$, and so condition~(2) does not hold, a contradiction. \end{proof} \smallskip \begin{proof}[Proof of Corollary~\ref{cor:Sta-gen-uniqueness-poly}] By the first part of Corollary~\ref{cor:Sta-gen-vanish-poly}, we can decide if \.
$|\Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})|>0$ \. in polynomial time. By the second part of the same corollary, we can find a linear extension \. $f\in \Ec(P,\textbf{\textit{u}},\textbf{{\textit{a}}})$ \. in polynomial time. By Theorem~\ref{t:gen-Stanley-unique}, we can decide if such~$f$ is unique in polynomial time. \end{proof} \medskip \section{Injective proof of the Sidorenko inequality} \label{s:sid} \subsection{Preliminaries} \label{ss:sid-pre} Let $P=(X,\prec)$ be a poset with $|X|=n$ elements. Denote by \hskip.03cm $P|_J$ \hskip.03cm the restriction of \hskip.03cm $P$ \hskip.03cm to a subset \hskip.03cm $J\subset X$. We write \hskip.03cm $P-y$ \hskip.03cm to denote the restriction \hskip.03cm $P|_{X-y}$. Denote by \. $P^\ast=(X,\prec^\ast)$ \. the \defn{reverse poset}~: $$x\prec^\ast y \quad \Longleftrightarrow \quad y \prec x\,, \quad \text{for all} \ \ x, \hskip.03cm y \in X. $$ Clearly, \hskip.03cm $e(P^\ast)=e(P)$. Denote by \hskip.03cm $\mathcal C(P)$ \hskip.03cm the set of chains, and by $\mathcal A(P)$ the set of antichains in~$P$. The \defn{comparability graph} \hskip.03cm $ {\text {\rm Com} } (P)=(X,E)$ \hskip.03cm is defined by \. $E=\bigl\{(x,y)~:~x\prec y, \. \text{ where } \. x,y\in X\bigr\}$. Note that the chains in~$P$ are \emph{cliques} (complete subgraphs) in $ {\text {\rm Com} } (P)$. Similarly, the antichains in~$P$ are \emph{stable} (independent) \emph{sets} in $ {\text {\rm Com} } (P)$. Throughout this section, we think of the \defng{promotion} in a different way, as a map from linear extensions to chains in the poset. Formally, for $f\in \mathcal E(P)$, let \. $x_1=f^{-1}(1)$. For \hskip.03cm $i>1$, let $x_{i}\in X$ be an element with the smallest value of~$f$ on \. $\{y~:~x_{i-1}\prec y\}$. This gives a \defn{promotion chain} \. $C= \bigl[x_1\to x_2\to\ldots \to x_\ell\bigr]\in \mathcal C(P)$, which can also be viewed as the \emph{DFS path} in the \emph{Hasse diagram} of~$P$. Denote by \. 
$\Phi: \mathcal E(P)\to \mathcal C(P)$ \. the map \. $\Phi(f) = C$. \bigskip \begin{lemma} \label{lem:DFS} For all $P=(X,\prec)$ and $y\in X$ we have: $$e(P-y) \. = \. \bigl|\bigl\{g\in \mathcal E(P)\,:\, y \in \Phi(g)\bigr\}\bigr|. $$ \end{lemma} \medskip \begin{proof} Consider a bijection $$\varphi: \, \mathcal E(P-y) \. \longrightarrow \. \bigl\{g\in \mathcal E(P)\,:\,y \in \Phi(g)\bigr\} $$ defined as follows. Let \. $f\in \mathcal E(P-y)$, and let \. $[y\to x_1 \to \ldots \to x_k]$ \. be the promotion path in the upper order ideal \. $B(y) = \{x\in X\.:\. x \succ y\}$. Define \. $g=\varphi(f) \in \mathcal E(P)$ \. as follows. Let \. $g(y):=f(x_1)$, \. $g(x_i):=f(x_{i+1})$ \. for \. $1\le i < k$, \. $g(x_k):=n$, and \. $g:=f$ \. on all remaining elements. Observe that in \hskip.03cm $P^\ast$ \hskip.03cm the promotion chain of (the reversal of) \hskip.03cm $g$ \hskip.03cm is \. $\bigl[x_k\to\ldots \to x_1 \to y \to \ldots \bigr]$. Reversing the role of \hskip.03cm $P$ \hskip.03cm and \hskip.03cm $P^\ast$ \hskip.03cm implies the result. \end{proof} \bigskip \begin{cor}[{\rm {see~\cite{EHS}}}{}] \label{cor:antichain} For every antichain $A\in \mathcal A(P)$ we have \begin{equation}\label{eq:EHS-ineq} \sum_{y \in A} \. e(P-y) \, \le \, e(P). \end{equation} Furthermore, when \hskip.03cm $A\subseteq X$ \hskip.03cm is the set of minimal elements, the inequality~\eqref{eq:EHS-ineq} is an equality. \end{cor} \begin{proof} Note that for every $C\in \mathcal C(P)$ and $A\in \mathcal A(P)$ we have $|A\cap C|\le 1$. Thus, we have: $$\sum_{y \in A} \. e(P-y) \, = \, \bigl|\bigl\{f\in \mathcal E(P)\,:\,|\Phi(f)\cap A|=1\bigr\}\bigr| \, \le \, |\mathcal E(P)| \, = \, e(P), $$ which proves~\eqref{eq:EHS-ineq}. For the second part, note that for every \. $f\in \Ec(P)$, the promotion path \hskip.03cm $\Phi(f)$ \hskip.03cm starts with a minimal element of~$P$, which belongs to~$A$. This implies that~\eqref{eq:EHS-ineq} is an equality, as desired.
\end{proof} \smallskip \begin{rem}\label{r:comp} Lemma~\ref{lem:DFS} is implicit in~\cite{EHS}, which only discusses equality cases (cf.\ Corollary~\ref{cor:antichain}). By~\cite[Thm~4]{Sid}, map~$\Phi$ gives the \emph{linear extension flow} through $ {\text {\rm Com} } (P)$ viewed as a directed network. Although Sidorenko gives a combinatorial construction of this flow in~\cite[Rem~1.2]{Sid}, this construction is also inexplicit. Let us mention that the second part of Corollary~\ref{cor:antichain} implies by induction that \hskip.03cm $e(P)$ \hskip.03cm depends only on the comparability graph $ {\text {\rm Com} } (P)$, see~\cite{EHS,Sta-promo}. The same holds for the order polynomial \hskip.03cm $\Omega(P,t)$, and can be proved using Ehrhart polynomials~\cite{Sta-two}, cf.~$\S$\ref{ss:finrem-Ehrhart}. Alternatively, this result can be shown via certain ``turning upside-down'' flips discussed in \cite[Exc.~3.163]{Sta-EC}. \end{rem} \smallskip \subsection{Sidorenko's inequality} \label{ss:sid-proof} As in the introduction, let \. $P=(X,\prec)$ \. and \. $Q=(X,\prec')$ \. be two posets on the same ground set, such that \. $|C\cap C'|\le 1$ \. for all \. $C\in \mathcal C(P)$ \. and \. $C'\in \mathcal C(Q)$. Then \. $\mathcal C(P) = \mathcal A(Q)$ \. and $\mathcal A(P) = \mathcal C(Q)$, by definition. \bigskip \begin{lemma}[{\rm cf.~\cite[Lemma~10]{Sid}}{}] \label{lem:main} For all $P$ and $Q$ as above, we have: $$ \sum_{y\in X} \, e(P-y) \. e(Q-y) \, \le \, e(P) \. e(Q)\hskip.03cm. $$ \end{lemma} \smallskip \begin{proof} We have: $$ \aligned \sum_{y\in X} \. e(P-y) \. e(Q-y) \, & =_{\text{Lem~\ref{lem:DFS}}} \, \sum_{y\in X} \. e(Q-y) \, \sum_{C\in \mathcal C(P) \ : \ C\ni y} \. |\{f\in \mathcal E(P)\,:\,\Phi(f)=C\}| \\& = \hskip.95cm \sum_{C\in \mathcal C(P)} \, \sum_{y\in C} \.
e(Q-y) \, \cdot \, |\{f\in \mathcal E(P)\,:\,\Phi(f)=C\}| \\ & \le_{\text{Cor.~\ref{cor:antichain}}} \, \sum_{C\in \mathcal C(P)} \, e(Q) \, \cdot \, |\{f\in \mathcal E(P)\,:\,\Phi(f)=C\}| \\ & = \ e(Q) \, \sum_{C\in \mathcal C(P)} \,|\{f\in \mathcal E(P)\,:\,\Phi(f)=C\}| \quad = \ e(P) \. e(Q). \endaligned $$ Here in the third line, Cor.~\ref{cor:antichain} applies to poset~$Q$, since every chain in~$P$ is an antichain in~$Q$. \end{proof} \smallskip \begin{proof}[Proof of Theorem~\ref{t:sid}] The theorem follows from Lemma~\ref{lem:main}, by induction on $n=|X|$. \end{proof} \smallskip \begin{cor}[{\cite[Thm~11]{Sid}}{}]\label{c:sid-equality} In the notation of Theorem~\ref{t:sid}, the inequality~\eqref{eq:sid} is an equality \. \underline{if and only if} \. $P$ \hskip.03cm is a \emph{series-parallel poset}. \end{cor} \smallskip The result is well-known and follows easily by tracing back the inequalities in the proof of Lemma~\ref{lem:main}. We omit the details. \smallskip \begin{proof}[Proof of Theorem~\ref{t:sid-gen}] We can rewrite the proof of Lemma~\ref{lem:main} as follows: $$ \aligned \sum_{y\in X} \. e(P-y) \. e(Q-y) \ = \ \sum_{f\in \mathcal E(P)} \. \sum_{g\in \mathcal E(Q)} \. \sum_{y\in \Phi(f)\cap \Phi(g)} \. 1 \ \le \ k \. e(P) \. e(Q). \endaligned $$ The result now follows by induction on~$n\ge k$, with the base \hskip.03cm $n=k$ \hskip.03cm trivial. \end{proof} \smallskip \begin{proof}[Proof of Theorem~\ref{t:sid-SP}] Let \. $\beta: S_n \to \mathcal E(P_\sigma)\times \mathcal E\bigl(P_{\overline{\sigma}}\bigr)$ \. be the injection defined implicitly by the proof of Theorem~\ref{t:sid} above. First, observe that \hskip.03cm $\beta$ \hskip.03cm is computable in polynomial time.
Indeed, by induction, it is a composition of maps \hskip.03cm $\beta_i$, \hskip.03cm each consisting of applying the map \hskip.03cm $\Phi$ \hskip.03cm to posets corresponding to the partial permutations $\sigma_i:=(\sigma(1),\ldots,\sigma(i))$ and their duals \hskip.03cm $\overline{\sigma_i}$, see the proof of Lemma~\ref{lem:DFS}. Second, whenever defined, the inverse map \hskip.03cm $\beta^{-1}$ \hskip.03cm can be computed by the proof of Lemma~\ref{lem:DFS}, since the inverse of \hskip.03cm $\Phi$ \hskip.03cm on~$P$ is a map \hskip.03cm $\Phi$ \hskip.03cm on~$P^\ast$. On the other hand, at each stage, the decision whether the inverse of \hskip.03cm $\beta_i$ \hskip.03cm exists reduces to the problem of whether a given antichain in \. $Q_i:=P_{\overline{\sigma_i}}$ \. is a \defn{cut}, i.e.\ whether it intersects every chain in~$Q_i$. This is a special case of the directed graph connectivity problem, and is thus in~${\textsc{P}}$. Putting this together implies that we can decide in polynomial time if \. $(f,g) \in \beta(S_n)$, for all \. $f\in \mathcal E(P_\sigma)$ \. and \. $g\in \mathcal E\bigl(P_{\overline{\sigma}}\bigr)$. In summary, the function \hskip.03cm $\eta(\sigma)$ \hskip.03cm counts the number of pairs of linear extensions \hskip.03cm $(f,g)$ \hskip.03cm as above, such that \hskip.03cm $(f,g) \notin \beta(S_n)$. Since the problem of whether \hskip.03cm $(f,g) \in \beta(S_n)$ \hskip.03cm can be decided in polynomial time, this completes the proof. \end{proof} \medskip \section{Final remarks and open problems}\label{s:finrem} \smallskip \subsection{Bj\"orner--Wachs inequality}\label{ss:finrem-HP} In total, we include three proofs of the Bj\"orner--Wachs inequality: the original injective proof in~$\S$\ref{s:injective}, the probabilistic proof via Shepp's inequality in~$\S$\ref{s:OP-induction}, and Reiner's proof via $q$-analogue in~$\S$\ref{s:q-analogue}.
Another proof was given by Hammett and Pittel in~\cite[Cor.~2]{HP}, who seemed unaware of the origin of the problem despite having~\cite{BW89} among the references. Although somewhat lengthy and technical, their proof is completely self-contained and is based on a geometric probability argument. It is similar in spirit to Reiner's proof, but without the benefit of its brevity. \smallskip \subsection{Order polynomial} \label{ss:finrem-OP} There is surprisingly little literature on order polynomials, given that they emerge naturally in both P-partition theory and discrete geometry. We refer to~\cite{Joc} for order polynomials in the case of symmetric posets, which are of independent interest, and to~\cite{LT} for some computations. It seems there are more conjectures and open problems than results in the subject. In addition to the Kahn--Saks Conjecture~\ref{conj:KS-mon} and the Graham Conjecture~\ref{conj:Graham}, we have our own Conjecture~\ref{conj:KS-FKG}. We should warn the reader that there seems to have been insufficient effort towards testing these conjectures, so it would be interesting to obtain more computational evidence. \smallskip \subsection{Ehrhart polynomial}\label{ss:finrem-Ehrhart} It is a classical observation by Stanley \cite{Sta-two} that the order polynomial \hskip.03cm $\Omega(P,t+1)$ \hskip.03cm is the \defng{Ehrhart polynomial} of the corresponding order polytope \hskip.03cm $\mathcal{O}_P$\hskip.03cm: $$ \mathrm{Ehr}(\mathcal{O}_P,t) \, = \, \Omega(P,t+1)\hskip.03cm. $$ This allows one to translate results from combinatorial to geometric language and vice versa. Notably, our Example~\ref{e:OP-Stanley} is motivated by Stanley's \hskip.03cm {\tt MathOverflow} \hskip.03cm observation\footnote{Richard P.~Stanley, \href{https://mathoverflow.net/q/200574}{mathoverflow.net/q/200574} (March~20, 2015).} that the order polynomial \hskip.03cm $\Omega(C_1 \oplus A_m,t+1)$ \hskip.03cm can have negative coefficients for \hskip.03cm $m\ge 20$.
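Stanley's observation can be checked by exact computation. For \hskip.03cm $P=C_1 \oplus A_m$, an order-preserving map into an $n$-chain assigns some value $a$ to the bottom element and values in $[a,n]$ to the $m$ top elements, so \hskip.03cm $\Omega(C_1\oplus A_m, n) = \sum_{j=1}^{n} j^m$. The Python sketch below (our own helper names, standard library only) interpolates \hskip.03cm $\mathrm{Ehr}(\mathcal{O}_P,t)=\Omega(P,t+1)$ \hskip.03cm with exact rational arithmetic and tests for a negative coefficient.

```python
from fractions import Fraction

def power_sum(n, m):
    # Omega(C_1 + A_m, n): pick a value a for the bottom element;
    # each of the m antichain elements then takes a value in [a, n].
    return sum(j ** m for j in range(1, n + 1))

def interpolate(points):
    """Coefficients (lowest degree first) of the polynomial through
    `points`, computed by exact Lagrange interpolation."""
    deg = len(points) - 1
    coeffs = [Fraction(0)] * (deg + 1)
    for i, (xi, yi) in enumerate(points):
        # basis polynomial  prod_{j != i} (x - xj) / (xi - xj)
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            denom *= xi - xj
            new = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):   # multiply basis by (x - xj)
                new[k] -= c * xj
                new[k + 1] += c
            basis = new
        for k, c in enumerate(basis):
            coeffs[k] += Fraction(yi) * c / denom
    return coeffs

def ehrhart_coeffs(m):
    # Ehr(O_P, t) = Omega(P, t+1) has degree m+1, so m+2 points suffice.
    pts = [(Fraction(t), Fraction(power_sum(t + 1, m))) for t in range(m + 2)]
    return interpolate(pts)

coeffs = ehrhart_coeffs(20)
print(any(c < 0 for c in coeffs))   # a negative coefficient appears at m = 20
```

For $m=3$ one gets \hskip.03cm $\bigl((t+1)(t+2)/2\bigr)^2$, with all coefficients positive; at $m=20$ a negative coefficient appears, matching the observation above.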
We refer to~\cite{LT} for more on this example and to~\cite{Liu} for the background on non-negative Ehrhart polynomials and further references. We refer to \cite{Cha} for $q$-Ehrhart polynomials, and to~\cite{KS} for further results. \smallskip \subsection{Geometric form of the Kahn--Saks conjecture}\label{ss:finrem-KS} One can ask if a version of the Kahn--Saks Conjecture~\ref{conj:KS-mon} holds for general integral polytopes: \begin{equation}\label{eq:KS-Ehr} \text{Is} \quad \mathrm{Ehr}(Q,t-1)/t^d \quad \text{weakly decreasing for all} \ \ Q\subset \mathbb R^d \ \ \text{and} \ \ t \in \mathbb N_{\ge 1} \. ? \end{equation} First, recall the example of \defng{Reeve's tetrahedron} with vertices at $$ (0,0,0) \,, \quad (1,0,0) \,, \quad (0,1,0) \quad \text{and} \quad (1,1,h), $$ see e.g.\ \cite[Ex.~3.23]{BR} and \cite[$\S$4.1]{GW}. In this case, the Ehrhart polynomial has a negative coefficient, and the scaled Ehrhart polynomial is non-monotone for large values of~$h$. This shows that the \defng{geometric Kahn--Saks conjecture}~\eqref{eq:KS-Ehr} does not hold for general lattice polytopes. On the other hand, it is rather plausible that~\eqref{eq:KS-Ehr} holds for \defng{antiblocking} (\defng{corner}) \defng{polytopes} \hskip.03cm (see e.g.~\cite[$\S$5.9]{Sch}) with integer vertices. If true, this would imply the Kahn--Saks Conjecture~\ref{conj:KS-mon}. Indeed, although the order polytope $\hskip.03cm \mathcal{O}_P$ \hskip.03cm is not antiblocking, the \defng{stable set} (\defng{chain}) \defng{polytope} \hskip.03cm $\mathcal{C}_P$ \hskip.03cm is antiblocking and has the same Ehrhart polynomial by Stanley's theorem: \hskip.03cm $\mathrm{Ehr}(\mathcal{C}_P,t) = \mathrm{Ehr}(\mathcal{O}_P,t)$, see~\cite{Sta-two}. Finally, let us mention that the proof of Proposition~\ref{prop:scaled} can be modified to show that $$\frac{1}{t^d} \.
\mathrm{Ehr}(Q,t-1) \, \ge \, \frac{1}{(k \hskip.03cm t)^d} \.\mathrm{Ehr}(Q,kt-1)\hskip.03cm, $$ for all antiblocking polytopes \hskip.03cm $Q\subset \mathbb R^d$ \hskip.03cm with integer vertices. This gives some credence to our speculation~\eqref{eq:KS-Ehr} in this case. \smallskip \subsection{Log-concavity and $q$-log-concavity}\label{ss:finrem-log} The log-concavity for order polynomials proved in Theorem~\ref{thm:log-concave} is somewhat different from other log-concave inequalities, see e.g.\ \cite{Bra,Huh,CP,Sta2}. The $q$-log-concavity in Corollary~\ref{c:q-log-concavity} is also classical albeit less studied, see e.g.\ \cite{Kra,Ler,Sag}. \smallskip \subsection{Sidorenko inequality}\label{ss:finrem-sid} Note that another combinatorial proof of Sidorenko's inequality (Theorem~\ref{t:sid}) was independently found in \cite[$\S$4.1]{GG}, where the authors gave an elegant explicit construction of a surjection proving~\eqref{eq:sid}. More precisely, the authors give an explicit surjection \. $\alpha: \mathcal E(P_\sigma)\times \mathcal E\bigl(P_{\overline{\sigma}}\bigr) \to S_n$\hskip.03cm. Unfortunately, the proof of correctness of that surjection is technical and indirect, and the surjection cannot be easily inverted, so obtaining the desired explicit injection from it requires further effort. As we mentioned in the introduction, our injection~$\beta$ defined implicitly in the proof of Theorem~\ref{t:sid} likely coincides with an explicit injection in~\cite{GG1}, since both essentially reverse engineer and make effective the original proof by Sidorenko~\cite{Sid}. The connection with the argument in~\cite{StR} and the surjection in~\cite{MPP} in the case of \emph{Fibonacci posets} remains unclear. We also conjecture that the function \. $u: S_n \to \mathbb N$ \. defined by~\eqref{eq:Sid-function} \hskip.03cm is \. ${\textsc{\#P}}$-complete.
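Returning to inequality~\eqref{eq:sid} itself: in the two-dimensional setting it is easy to confirm by brute force for small~$n$. The sketch below (illustrative, with our own function names) builds $P_\sigma$ and $P_{\overline{\sigma}}$ from each permutation $\sigma\in S_5$, counts linear extensions naively, and checks \hskip.03cm $e(P_\sigma)\. e\bigl(P_{\overline{\sigma}}\bigr) \ge n!$.

```python
from itertools import permutations
from math import factorial

def linear_extensions(n, less):
    # Count orderings of [n] respecting `less`, a set of pairs (i, j)
    # meaning i precedes j in the poset.
    count = 0
    for perm in permutations(range(n)):
        pos = {x: k for k, x in enumerate(perm)}
        if all(pos[i] < pos[j] for (i, j) in less):
            count += 1
    return count

def check_sidorenko(n):
    # P_sigma: i < j and sigma(i) < sigma(j); its complement reverses
    # the second condition, so chains of one are antichains of the other.
    for sigma in permutations(range(n)):
        P = {(i, j) for i in range(n) for j in range(i + 1, n) if sigma[i] < sigma[j]}
        Q = {(i, j) for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j]}
        assert linear_extensions(n, P) * linear_extensions(n, Q) >= factorial(n)

check_sidorenko(5)
print("e(P_sigma) * e(P_sigma-bar) >= n!  holds for all sigma in S_5")
```

For the identity permutation, $P_\sigma$ is a chain and $P_{\overline{\sigma}}$ an antichain, so the product is exactly $n!$, in line with the series-parallel equality case of Corollary~\ref{c:sid-equality}.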
The conjecture would follow if ${\textsc{\#P}}$-completeness were proved for self-dual $2$-dimensional posets $P\simeq \overline{P}$. Unfortunately, the construction in~\cite{DP} is too specialized and technical to obtain this result. Finally, there is a $q$-analogue of Sidorenko's inequality in~\cite[Cor.~3]{GG} generalizing the $q$-equality for series-parallel posets given in~\cite{Wei}. See also~\cite{KS} for the definition of $e_q(P)$ for general~$P$ based on the $P$-partition theory, and~\cite{BW} for many other results on~$e_q(P)$. \smallskip \subsection{Mixed Sidorenko inequality}\label{ss:finrem-mixed-sid} In~\cite{BBS}, Bollob\'as, Brightwell and Sidorenko showed how to obtain Sidorenko's Theorem~\ref{t:sid} via a known special case of \defng{Mahler's Conjecture}. Most recently, Artstein-Avidan, Sadovsky and Sanyal extended this approach in~\cite{AASS} to obtain the following remarkable generalization of the Sidorenko inequality. For two posets \. $P=(X,\prec)$ \. and \. $Q=(X,\prec')$ \. on the same ground set, \defn{mixed linear extensions} are triples \hskip.03cm $(f,g,J)$, where \. $J\subset [n]$, \. $f\in \mathcal E(P|_J)$, and \. $g\in \mathcal E\bigl(Q|_{\overline{J}}\bigr)$. Denote by \hskip.03cm $e_k(P,Q)$ \hskip.03cm the number of such triples with $|J|=k$, i.e.\ $$ e_k(P,Q) \ := \ \sum_{J\in \binom{[n]}{k}} \. e(P|_J) \. e\bigl(Q|_{\overline{J}}\bigr). $$ \begin{thm}[{\rm \cite[Thm~6.2]{AASS}}{}] \label{t:sid-mxed} Let \. $P, Q, S, T$ \. be four posets on the same ground set, such that \. $|C\cap C'|\le 1$ \. and \. $|D\cap D'|\le 1$, for all \. $C\in \mathcal C(P)$, \. $C'\in \mathcal C(Q)$, \. $D\in \mathcal C(S)$ \. and \. $D'\in \mathcal C(T)$. Then we have: \begin{equation}\label{eq:sid-mixed} e_k(P,Q) \, e_k(S,T) \ \ge \ n!\.\hskip.03cm \binom{n}{k}\hskip.03cm. \end{equation} \end{thm} It would be interesting to find a combinatorial proof of this result.
It would be even more interesting to find a direct injective proof, and conclude that the function giving the difference of the two sides of~\eqref{eq:sid-mixed} is in~$\hskip.03cm {\textsc{\#P}}$. The results in~\cite{IP} suggest that this might not be possible. Finally, does the \defn{mixed Sidorenko inequality}~\eqref{eq:sid-mixed} have an upper bound similar to that in~\cite{BBS}? \smallskip \subsection{Complexity of correlation inequalities}\label{ss:finrem-more} By taking the limit \. $t\to \infty$ \. in Lemma~\ref{l:main order polynomial}, we obtain: \begin{equation}\label{eq:min-min} \tag{$\ast$} (n-1) \. \cdot \. e(P) \. \cdot \. e\bigl(P \smallsetminus \{x,y\}\bigr) \ \geq \ n \. \cdot \. e\big(P \smallsetminus x\big) \. \cdot \. e\big(P\smallsetminus y\big). \end{equation} It would be interesting to see if this inequality can be proved injectively. Is the function giving the difference of the two sides of this inequality in~$\hskip.03cm {\textsc{\#P}}$? Note that, by applying the negative-correlation version of the FKG inequality to the proof of Lemma~\ref{l:main order polynomial}, we obtain the following result: \begin{lemma}\label{l:OP min-max} Let \. $P=(X,\prec)$ \. be a poset, let \. $x\in X$ \. be a minimal element, and let \. $y \in X$ \. be a maximal element such that $y$ does not cover~$x$. Then, for every integer $t>0$, we have: \[ {\Omega\big(P,\hskip.03cm t\big)} \. \cdot \. {\Omega\big(P \smallsetminus \{x,y\},\hskip.03cm t\big)} \ \leq \ {\Omega\big(P \smallsetminus x, \hskip.03cm t\big)} \. \cdot \. {\Omega\big(P\smallsetminus y,\hskip.03cm t\big)}. \] \end{lemma} By taking the limit \. $t\to \infty$ \. in the lemma, we get the inequality opposite to~\eqref{eq:min-min}: \begin{equation}\label{eq:min-max} \tag{$\ast\ast$} (n-1) \. \cdot \. e(P) \. \cdot \. e\bigl(P \smallsetminus \{x,y\}\bigr) \ \leq \ n \. \cdot \. e\big(P \smallsetminus x\big) \. \cdot \. e\big(P\smallsetminus y\big). 
\end{equation} Of course, the element $y$ was minimal in~\eqref{eq:min-min} and is maximal in~\eqref{eq:min-max}, but these inequalities are strikingly similar in appearance. Again, it would be interesting to see if this inequality can be proved injectively. \smallskip \subsection{Vanishing and uniqueness conditions}\label{ss:finrem-vanishing} Note that the vanishing conditions for the Stanley inequality are a special case of the \defna{equality conditions}, which are fully described in~\cite{SvH} and reproved in~\cite{CP}. For example, Corollary~\ref{cor:antichain} and Corollary~\ref{c:sid-equality} give further examples of equality conditions, with a simple proof in both cases via direct injection. When there is no injective proof, the equality condition can become a major challenge. For example, for the \defng{Kahn--Saks inequality} generalizing Stanley's inequality, the equality conditions remain open in full generality. See, however, \cite[$\S$8]{CPP2} for the vanishing conditions of the Kahn--Saks inequality, proved also via the promotion technology. See also \cite{CPP1,CPP2} for the equality conditions of the Kahn--Saks and cross-product inequalities for posets of width two. Finally, let us mention Lemma~14.6 in~\cite{CP}, which is yet another variation on Theorem~\ref{t:LE-gen-transitive}, and which is proved by a direct combinatorial argument using promotions. In a different direction, sometimes the equality conditions are trivial, as the natural inequalities are always strict except for some degenerate cases. This is the case with the \defng{XYZ inequality}~\cite{Fish}, and the \defng{log-concavity} (Theorem~\ref{thm:log-concave-strict}) discussed above. The uniqueness conditions are studied less frequently than vanishing and equality conditions, since they tend to be harder. For example, there is no description of uniquely colorable graphs, and this remains a major open problem~\cite{CZ}.
Notable positive results include uniqueness conditions for the \defng{Kostka numbers}~\cite{BZ} and for the \defng{Littlewood--Richardson coefficients} \cite[Prop.~3.13]{BI}. \smallskip \subsection{Poset dynamics}\label{ss:finrem-algebraic} Promotions, demotions and evacuations were defined by Sch\"utzenberger in~\cite{Sch}, and this approach has been immensely influential, leading to the \defng{RSK Algorithm} \hskip.03cm and the \defng{Edelman--Greene bijection}, among other things. The group-theoretic approach was developed in the context of combinatorics of words by Lascoux and Sch\"utzenberger, and was introduced in the generality of posets by Haiman~\cite{Hai} and Malvenuto--Reutenauer~\cite{MR}. See also~\cite{KB} for a related approach in the context of semistandard Young tableaux, and~\cite{Sta-promo} for an extensive survey. It would be interesting to find a generalization of the evacuation~$\varepsilon$ which would preserve its involution property. Such ``restricted evacuation'' might give rise to ``restricted domino linear extensions'' which would be of independent interest, see e.g.~\cite[$\S$3]{Sta-promo}. The extension of promotion to general bijections \. $X\to [n]$ \. was obtained in~\cite{DK}. Can our group action on restricted linear extensions be generalized in this direction? Note that we have only a limited understanding of whether the group action can be applied to study the order polynomial. See, however, \cite{Hop} for some elegant product formulas in special cases. The promotion operators were used in~\cite{AKS} to define a Markov chain on the set \hskip.03cm $\Ec(P)$ \hskip.03cm of linear extensions of a given poset~$P$. See also~\cite{RS}, where a related Markov chain was shown to be mixing in time \hskip.03cm $O(n \log n)$. It would be interesting to see if these results can be generalized to show that the restricted linear extensions in \. $\Ec(P,\textbf{\textit{x}}, \textbf{{\textit{a}}})$ \. can be sampled in polynomial time.
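In the same experimental spirit, the promotion-chain map \hskip.03cm $\Phi$ \hskip.03cm of $\S$\ref{ss:sid-pre} is straightforward to implement: starting from $f^{-1}(1)$, repeatedly move to the element of smallest $f$-value above the current one. The sketch below (our own helper names) does this and brute-forces the identity of Lemma~\ref{lem:DFS} on a small test poset.

```python
from itertools import permutations

def transitive_closure(rel):
    # rel: set of pairs (a, b) meaning a < b in the poset
    rel = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

def linear_extensions(elems, rel):
    # all linear extensions, each given as a tuple listing the elements
    # in increasing order of their label
    return [p for p in permutations(elems)
            if all(p.index(a) < p.index(b) for (a, b) in rel
                   if a in p and b in p)]

def promotion_chain(f, rel):
    # f is a tuple: f[k] is the element with label k + 1
    chain = [f[0]]                      # start at f^{-1}(1)
    while True:
        above = [y for y in f if (chain[-1], y) in rel]
        if not above:
            return chain
        chain.append(min(above, key=f.index))  # smallest f-value above

# the "N" poset on {0,1,2,3}:  0 < 2,  0 < 3,  1 < 3
elems = [0, 1, 2, 3]
rel = transitive_closure({(0, 2), (0, 3), (1, 3)})
exts = linear_extensions(elems, rel)
for y in elems:
    sub = [e for e in elems if e != y]
    lhs = len(linear_extensions(sub, rel))            # e(P - y)
    rhs = sum(y in promotion_chain(f, rel) for f in exts)
    assert lhs == rhs                                 # the DFS identity
print("Lemma verified on the N poset")
```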
Finally, let us mention a curious loop-free listing algorithm in~\cite{CW}. Is there a similar algorithm for the restricted linear extensions? \smallskip \subsection{Injections and matchings for the Stanley inequality}\label{ss:finrem-GP-injection} As we mentioned in the introduction, it remains a major open problem whether Stanley's inequality~\eqref{eq:Sta} can be proved by a direct injection, see e.g.~\cite[$\S$17.17]{CP}. Formally, in the notation of Theorem~\ref{t:Sta}, let $$ \rho(P,x,a) \ := \ \textrm{N}(P, x,a)^2 \, - \, \textrm{N}(P, x,a+1) \.\cdot \. \textrm{N}(P, x,a-1)\hskip.03cm. $$ \begin{op} \label{op:Sta-SP} Is \. $\rho \hskip.03cm \in \. {\textsc{\#P}}$? \end{op} At this point, it is even hard to guess which way the answer would go. While some of us believe the answer should be negative, others disagree. The only thing certain is that none of the positive proofs in~\cite{CP,Sta-AF} imply a positive answer, while the negative results in~\cite{IP} are not even close to resolving the problem. Since part of the motivation behind our algebraic approach was to resolve this problem, let us propose the following approach. \smallskip We would like to give an injection proving Stanley's inequality~\eqref{eq:Sta}. Consider the following family of elements of the group $G$ from Section~\ref{s:restricted}, whose actions would be good candidates for such an injection. Let \. $\mathcal{G}=(V\sqcup W,E)$ \. be a bipartite graph, where \. $V=\Ec(P,x,a) \times \Ec(P,x,a)$ \. and \. $W= \Ec(P,x,a-1) \times \Ec(P,x, a+1)$. We define the set~$E$ of edges as follows. Let \. $\pi \in \Ec(P,x,a-1)$ \. and \. $\sigma \in \Ec(P,x,a+1)$, so that \. $(\pi,\sigma) \in W$. For every element \hskip.03cm $y$ \hskip.03cm which appears after \hskip.03cm $x$ \hskip.03cm in \hskip.03cm $\pi$ and before \hskip.03cm $x$ \hskip.03cm in \hskip.03cm $\sigma$, that is \. $i:=\pi^{-1}(y) > a-1$ \. and \.
$j:=\sigma^{-1}(y) <a+1$, we apply the promotion/demotion operators on the chain starting/ending at~$y$ in $\sigma$ and $\pi$, respectively. Let \. $\bar{\delta}_j := \tau_{n-1}\cdots \tau_j$. Then in the word \hskip.03cm $\delta_i \pi$, the chain starting at~$y$ is pushed up, so that~$x$ is moved to position~$a$. Similarly, in the word \. $\bar{\delta}_j \sigma$, the chain ending at~$y$ is pushed down, so that~$x$ is moved to position $a$. Thus \. $(\delta_i \pi, \bar{\delta}_j\sigma) \in \Ec(P,x,a) \times \Ec(P,x,a) = V$, and we connect it to \hskip.03cm $(\pi,\sigma)$ \hskip.03cm by an edge. Note that by the pigeonhole principle, there are at least two possibilities for the element~$y$, and thus there will be at least one edge; however, it is not necessarily true that the degree of every \hskip.03cm $(\pi,\sigma)$ \hskip.03cm is at least~$2$. \begin{conj} Let \. $\mathcal{G}=(V\sqcup W, E)$ \. be the graph defined above. Then there exists a matching which covers all vertices in~$W$. \end{conj} This matching will be the desired injection and imply the Stanley inequality. By itself, the conjecture would not imply that \. $\rho\hskip.03cm \in\. {\textsc{\#P}}$\hskip.03cm. For that, the injection would need to be computable in polynomial time. \vskip.5cm \subsection*{Acknowledgements} We are grateful to Nikita Gladkov, F\"edor Petrov, Yair Shenfeld and Ramon van Handel for helpful discussions and remarks on the subject. Matt Beck, Fu Liu and Sinai Robins kindly helped us with the Ehrhart polynomial questions. We thank Christian Gaetz and Yibo Gao for help with the references. Over the years, we held numerous conversations with Christian Ikenmeyer on complexity, and the knowledge we acquired has been indispensable. We thank Vic Reiner for sharing his proof of Theorem~\ref{t:HP-q} with us and generously allowing us to publish it. We also thank Sam Hopkins for telling us about Exc.~3.143 in \cite{Sta-EC} (see Remark~\ref{r:comp}).
Special thanks to Richard Stanley for resolving the mystery of where Exc.~3.57 comes from, and for telling us about Exc.~3.163 in \cite{Sta-EC}. We continuously regret not memorizing all the exercises in~\cite{Sta-EC}. Finally, we owe a debt of gratitude to Yufei Zhao, who convinced us not to initialize first names in the references, a practice we followed for years. It is the right thing to do and we urge others to follow suit.\footnote{For more on this, see Yufei~Zhao, How I manage my {\tt BibTeX} references, and why I prefer not initializing first names, \hskip.03cm \href{https://yufeizhao.com/blog/2021/07/04/bibtex/}{personal blog post} \hskip.03cm (July 4, 2021).} This research was partially done while the third author was enjoying MSRI's hospitality in Fall of~2021. The first author was partially supported by the Simons Foundation. The second and third authors were partially supported by the~NSF. \newpage
https://arxiv.org/abs/2207.02189
Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time
Hamiltonian Monte Carlo (HMC) is a popular method in sampling. While there are quite a few works of studying this method on various aspects, an interesting question is how to choose its integration time to achieve acceleration. In this work, we consider accelerating the process of sampling from a distribution $\pi(x) \propto \exp(-f(x))$ via HMC via time-varying integration time. When the potential $f$ is $L$-smooth and $m$-strongly convex, i.e.\ for sampling from a log-smooth and strongly log-concave target distribution $\pi$, it is known that under a constant integration time, the number of iterations that ideal HMC takes to get an $\epsilon$ Wasserstein-2 distance to the target $\pi$ is $O( \kappa \log \frac{1}{\epsilon} )$, where $\kappa := \frac{L}{m}$ is the condition number. We propose a scheme of time-varying integration time based on the roots of Chebyshev polynomials. We show that in the case of quadratic potential $f$, i.e., when the target $\pi$ is a Gaussian distribution, ideal HMC with this choice of integration time only takes $O( \sqrt{\kappa} \log \frac{1}{\epsilon} )$ number of iterations to reach Wasserstein-2 distance less than $\epsilon$; this improvement on the dependence on condition number is akin to acceleration in optimization. The design and analysis of HMC with the proposed integration time is built on the tools of Chebyshev polynomials. Experiments find the advantage of adopting our scheme of time-varying integration time even for sampling from distributions with smooth strongly convex potentials that are not quadratic.
\section{Introduction} Markov chain Monte Carlo (MCMC) algorithms are fundamental techniques for sampling from probability distributions, which is a task that naturally arises in statistics \citep{Duane87,Duane1987216}, optimization \citep{FKM05,DBW12,JGNKJ17}, machine learning and others \citep{Wetal20,SM08,KF09,WT11}. Among all the MCMC algorithms, the most popular ones perhaps are Langevin methods \citep{LST22,D17,DMM19,VW19,LST21colt,CGLMRS20} and Hamiltonian Monte Carlo (HMC) \citep{N12,B19,HG14,LHS18}. For the former, recently there have been a sequence of works leveraging some techniques in optimization to design Langevin methods, which include borrowing the idea of momentum methods like Nesterov acceleration \citep{N13} to design fast methods, e.g., \citep{MCCFBJ21,DR20}. Specifically, \citet{MCCFBJ21} show that for sampling from distributions satisfying the log-Sobolev inequality, under-damped Langevin improves the iteration complexity of over-damped Langevin from $O(\frac{d}{\epsilon})$ to $O(\sqrt{\frac{d}{\epsilon}})$, where $d$ is the dimension and $\epsilon$ is the error in KL divergence, though whether their result has an optimal dependency on the condition number is not clear. On the other hand, compared to Langevin methods, the connection between HMCs and techniques in optimization seems rather loose. Moreover, to our knowledge, little is known about how to accelerate HMCs with a provable acceleration guarantee for converging to a target distribution. Specifically, \citet{CV19} show that for sampling from strongly log-concave distributions, the iteration complexity of ideal HMC is $O( \kappa \log \frac{1}{\epsilon} )$, and \citet{V21} shows the same rate of ideal HMC when the potential is strongly convex quadratic in a nice tutorial. In contrast, there are a few methods that exhibit acceleration when minimizing strongly convex quadratic functions in optimization. 
For example, while Heavy Ball \citep{P64} does not have an accelerated linear rate globally for minimizing general smooth strongly convex functions, it does show acceleration when minimizing strongly convex quadratic functions \citep{WLA21,WLWH22}. This observation makes us wonder whether one can get an accelerated linear rate of ideal HMC for sampling, i.e., $O( \sqrt{\kappa} \log \frac{1}{\epsilon} )$, akin to acceleration in optimization. We answer this question affirmatively, at least in the Gaussian case. We propose a time-varying integration time for HMC, and we show that ideal HMC with this time-varying integration time exhibits acceleration when the potential is a strongly convex quadratic (i.e.\ the target $\pi$ is a Gaussian), compared to what is established in \cite{CV19} and \cite{V21} for using a constant integration time. Our proposed time-varying integration time at each iteration of HMC depends on the total number of iterations $K$, the current iteration index $k$, the strong convexity constant $m$, and the smoothness constant $L$ of the potential; therefore, the integration time at each iteration is simple to compute and is set before executing HMC. Our proposed integration time is based on the roots of Chebyshev polynomials, which we will describe in detail in the next section. In optimization, Chebyshev polynomials have been used to help design accelerated algorithms for minimizing strongly convex quadratic functions, i.e., Chebyshev iteration (see e.g., Section 2.3~in \cite{DST21}). Our result of accelerating HMC via the proposed Chebyshev integration time can be viewed as the sampling counterpart of acceleration from optimization. Interestingly, for minimizing strongly convex quadratic functions, acceleration of vanilla gradient descent can be achieved via a scheme of step sizes that is based on a Chebyshev polynomial, see e.g., \cite{AGZ21}, and our work is inspired by a nice blog article by \cite{pedregosa2021nomomentum}.
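To illustrate the optimization-side phenomenon just mentioned, the sketch below (an illustrative test problem of our own) runs vanilla gradient descent on a diagonal quadratic with the time-varying step sizes $1/\lambda_k$, where $\lambda_1,\ldots,\lambda_K$ are the roots of the degree-$K$ Chebyshev polynomial rescaled from $[-1,1]$ to $[m,L]$, and compares the result with the constant step size $2/(L+m)$.

```python
import math

def chebyshev_nodes(m, L, K):
    # roots of the degree-K Chebyshev polynomial of the first kind,
    # affinely mapped from [-1, 1] to [m, L]
    return [(L + m) / 2 + (L - m) / 2 * math.cos((2 * k - 1) * math.pi / (2 * K))
            for k in range(1, K + 1)]

def run_gd(eigs, steps):
    # gradient descent on f(x) = (1/2) sum_i eigs[i] * x_i^2, started at x = 1
    x = [1.0] * len(eigs)
    for gamma in steps:
        x = [xi - gamma * lam * xi for xi, lam in zip(x, eigs)]
    return max(abs(xi) for xi in x)

m, L, K = 1.0, 100.0, 20
eigs = [1.0, 3.0, 10.0, 30.0, 100.0]           # spectrum inside [m, L]
err_cheb = run_gd(eigs, [1.0 / lam for lam in chebyshev_nodes(m, L, K)])
err_const = run_gd(eigs, [2.0 / (L + m)] * K)  # best constant step for [m, L]
print(err_cheb, err_const)                     # Chebyshev steps finish far smaller
```

After $K$ steps, the per-eigendirection contraction equals the rescaled Chebyshev polynomial evaluated at that eigenvalue, whose maximum over $[m,L]$ decays at the accelerated $\bigl(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\bigr)^K$ rate, whereas the constant step only achieves $\bigl(\frac{\kappa-1}{\kappa+1}\bigr)^K$.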
Hence, our acceleration result for HMC can also be viewed as a counterpart in this sense. In addition to the theoretical findings, we conduct experiments on sampling from a Gaussian as well as from distributions whose potentials are not quadratic, including a mixture of two Gaussians, Bayesian logistic regression, and a \emph{hard} distribution proposed in \cite{LRT21} for establishing lower-bound results for certain Metropolized sampling methods. Experimental results show that the proposed time-varying integration time also leads to better performance than the constant integration time of \cite{CV19} and \cite{V21} when sampling from distributions whose potential functions are not quadratic. We conjecture that the proposed time-varying integration time also helps accelerate HMC for sampling from log-smooth and strongly log-concave distributions, and we leave the analysis of such cases for future work. \section{Preliminaries} \begin{algorithm}[t] \begin{algorithmic}[1] \small \caption{\textsc{Ideal HMC}} \label{alg:ideal} \STATE Require: an initial point $x_{0} \in \mathbb{R}^{d}$, number of iterations $K$, and a scheme of integration time $\{\eta_k^{(K)}\}$. \FOR{$k=1$ to $K$} \STATE Sample velocity $\xi \sim N(0, I_d)$. \STATE Set $(x_k,v_k) = \mathrm{HMC}_{\eta_k^{(K)}}(x_{k-1}, \xi)$. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Hamiltonian Monte Carlo (HMC)} Suppose we want to sample from a target probability distribution $\pi(x) \propto \exp(-f(x))$ on $\mathbb{R}^d$, where $f \colon \mathbb{R}^d \to \mathbb{R}$ is a continuous function which we refer to as the potential. Denote by $x \in \mathbb{R}^{d}$ the position and by $v \in \mathbb{R}^{d}$ the velocity of a particle.
In this paper, we consider the standard {\em Hamiltonian} of the particle \citep{CV19,N12}, defined as \begin{equation} \label{hxv} \textstyle H(x,v) := f(x) + \frac{1}{2} \| v \|^2, \end{equation} and we refer the readers to \cite{Duane1987216,HTD21,BL21} and the references therein for other notions of the Hamiltonian. The {\em Hamiltonian flow} generated by $H$ is the flow of the particle which evolves according to the differential equations $$\frac{dx}{dt} = \frac{\partial H}{\partial v} ~~\text{ and }~~ \frac{dv}{dt} = - \frac{\partial H}{\partial x}.$$ For the standard Hamiltonian defined in~\eqref{hxv}, the Hamiltonian flow becomes \begin{equation}\label{Eq:HamFlow} \frac{dx}{dt} = v ~~~\text{ and }~~~ \frac{dv}{dt} = - \nabla f(x). \end{equation} We write $(x_t,v_t) = \mathrm{HMC}_{t}(x_0, v_0)$ for the position $x$ and the velocity $v$ of the Hamiltonian flow after integration time $t$ starting from $(x_0,v_0)$. The Hamiltonian flow has many important properties: the Hamiltonian is conserved along the flow, the vector field associated with the flow is divergence free, and the Hamiltonian dynamics are time reversible; see e.g., Section~3 in \cite{V21}. The {\bf ideal HMC} algorithm (Algorithm~\ref{alg:ideal}) proceeds as follows: in each iteration $k$, sample an initial velocity from the standard normal distribution, and then follow the Hamiltonian flow for a pre-specified integration time $\eta_{k}^{(K)}$. It is well known that ideal HMC preserves the target density $\pi(x) \propto \exp(-f(x))$; see e.g., Theorem~5.1 in \cite{V21}. Furthermore, in each iteration, HMC brings the density $\rho_k$ of the iterate $x_k$ closer to the target $\pi$. However, the Hamiltonian flow $\mathrm{HMC}_{t}(x_0, v_0)$ is in general difficult to simulate exactly, except for some special potentials.
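For readers who wish to experiment, the flow \eqref{Eq:HamFlow} can be approximated numerically; the following is a minimal self-contained Python sketch (function and variable names are ours, not from any referenced implementation) of the leapfrog scheme used later in Algorithm~\ref{alg:HMC}, which composes the usual half/full/half updates:

```python
import numpy as np

def leapfrog(grad_f, x, v, step, n_steps):
    """Approximate the Hamiltonian flow dx/dt = v, dv/dt = -grad f(x)
    with n_steps leapfrog (velocity Verlet) steps of size `step`.
    Equivalent to composing the per-step half/full/half updates."""
    v = v - 0.5 * step * grad_f(x)      # initial half step in velocity
    for _ in range(n_steps - 1):
        x = x + step * v                # full step in position
        v = v - step * grad_f(x)        # two merged half steps in velocity
    x = x + step * v
    v = v - 0.5 * step * grad_f(x)      # final half step in velocity
    return x, v

# Harmonic potential f(x) = x^2 / 2: the exact flow is a rotation,
# x(t) = cos(t) x0 + sin(t) v0, so after t = 1 we expect (cos 1, -sin 1).
x1, v1 = leapfrog(lambda x: x, np.array([1.0]), np.array([0.0]), 1e-3, 1000)
```

With step size $10^{-3}$ and $1000$ steps (total time $t=1$), the output agrees with the exact rotation to within the integrator's $O(\text{step}^2)$ error.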
In practice, the Verlet integrator is commonly used to approximate the flow, and a Metropolis-Hastings filter is applied to correct the bias arising from the use of the integrator \citep{TRGT17,BL21,HRS21,LRT21,CDWY20}. In recent years, there has been progress on rigorous theoretical guarantees of HMC for converging to a target distribution, e.g., \cite{CDWY20,DMS17,RE21,MS19,MS20,MV18}. Other variants of HMC have also been proposed in the literature, e.g., \cite{RV22,RS17,ZG21,SG21,HG14,TRGT17,CFG14}, to name just a few. Recall that the 2-Wasserstein distance between probability distributions $\nu_{1}$ and $\nu_{2}$ is $$\mathrm{W}_{2}(\nu_1,\nu_2) := \inf_{\gamma \in \Gamma(\nu_1,\nu_2)} \mathbb{E}_{(x,y) \sim \gamma}\left[ \| x - y \|^2 \right]^{{1/2}},$$ where $\Gamma(\nu_1,\nu_2)$ denotes the set of all couplings of $\nu_{1}$ and $\nu_{2}$. \subsection{Analysis of HMC in the quadratic case with constant integration time} In the following, we replicate the analysis of ideal HMC with a constant integration time for quadratic potentials \citep{V21}, which provides the necessary ingredients for introducing our method in the next section. Specifically, we consider the quadratic potential \begin{equation} \label{eqf} \textstyle f(x):= \sum_{j=1}^d \lambda_j x_j^2 , \text{ where } 0< m \leq \lambda_j \leq L, \end{equation} which means the target density is the Gaussian distribution $\pi = \mathcal{N}(0, \Lambda^{-1})$, where $\Lambda$ is the diagonal matrix whose $j^{{\mathrm{th}}}$ diagonal entry is $\lambda_j$. We note that for a general Gaussian target $\mathcal{N}(\mu,\Sigma)$ with $\mu \in \mathbb{R}^d$ and $\Sigma \succ 0$, we can shift and rotate the coordinates to make $\mu = 0$ and $\Sigma$ diagonal, and the analysis below applies. Hence, without loss of generality, we may assume the quadratic potential is separable, as in~\eqref{eqf}.
In this quadratic case, the Hamiltonian flow~\eqref{Eq:HamFlow} becomes a linear system of differential equations with an exact solution given by sinusoidal functions: \begin{equation} \label{ideal} \begin{split} x_t[j] & = \cos\left( \sqrt{2 \lambda_j} t \right) x_0[j] + \frac{1}{ \sqrt{ 2 \lambda_j} } \sin\left( \sqrt{2 \lambda_j} t \right) v_0[j], \\ v_t[j] & = - \sqrt{2 \lambda_j} \sin\left( \sqrt{2 \lambda_j} t \right) x_0[j] + \cos\left( \sqrt{2 \lambda_j} t \right) v_0[j]. \end{split} \end{equation} In particular, we recall the following result on the deviation between two co-evolving particles with the same initial velocity. \begin{lemma} \citep{V21} \label{lem:coup} Let $x_0, y_0 \in \mathbb{R}^d$. Consider the following coupling: $(x_{t},v_{t})= \mathrm{HMC}_{t}(x_0, \xi )$ and $(y_{t},u_{t})= \mathrm{HMC}_{t}(y_0, \xi )$ for some $\xi \in \mathbb{R}^{d}$. Then for all $t \ge 0$ and for all $j \in [d]$, it holds that $$x_{t}[j] - y_{t}[j] = \mathrm{cos}\left( \sqrt{2 \lambda_j} {t} \right) \times ( x_0[j] - y_0[j] ).$$ \end{lemma} \begin{proof} Given $(x_t,v_t):= \mathrm{HMC}_{t}(x_0, \xi )$ and $(y_t,u_t):= \mathrm{HMC}_{t}(y_0, \xi )$, we have $\frac{dv_t}{dt} - \frac{du_t}{dt} = - \nabla f(x_t) + \nabla f(y_t) = 2 \Lambda (y_t - x_t).$ Therefore, $\frac{d^2 (x_t[j] - y_t[j] ) }{dt^2} = - 2\lambda_j( x_t[j] - y_t[j] )$ for all $j \in [d]$. Since both flows share the initial velocity $\xi$, the difference $x_t[j] - y_t[j]$ has zero derivative at $t=0$, and the differential equation then implies that $x_{t}[j] - y_{t}[j] = \text{cos}\left( \sqrt{2 \lambda_j} {t} \right) \times ( x_0[j] - y_0[j] ).$ Note that the result also follows directly from the explicit solution~\eqref{ideal}. \end{proof} Using Lemma~\ref{lem:coup}, we can derive the convergence rate of ideal HMC for the quadratic potential (Lemma~\ref{lem:V21} below).
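As a quick numerical sanity check of the closed-form flow \eqref{ideal} and the contraction factor in Lemma~\ref{lem:coup}, the following Python sketch (all names are ours) evolves two particles with different positions but a shared initial velocity and verifies the coordinate-wise identity:

```python
import numpy as np

def hmc_flow(lam, x0, v0, t):
    """Exact Hamiltonian flow for the separable quadratic potential
    f(x) = sum_j lam[j] * x[j]**2, following the sinusoidal solution."""
    w = np.sqrt(2.0 * lam)          # per-coordinate frequency sqrt(2*lambda_j)
    x_t = np.cos(w * t) * x0 + np.sin(w * t) / w * v0
    v_t = -w * np.sin(w * t) * x0 + np.cos(w * t) * v0
    return x_t, v_t

# Two particles with the same initial velocity contract coordinate-wise
# by cos(sqrt(2*lambda_j)*t), as the lemma states.
rng = np.random.default_rng(0)
lam = np.array([1.0, 4.0, 9.0])
x0, y0, xi = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
t = 0.7
x_t, _ = hmc_flow(lam, x0, xi, t)
y_t, _ = hmc_flow(lam, y0, xi, t)
assert np.allclose(x_t - y_t, np.cos(np.sqrt(2.0 * lam) * t) * (x0 - y0))
```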
\begin{lemma} \citep{V21} \label{lem:V21} Let $\pi \propto \exp( - f) = \mathcal{N}(0, \Lambda^{-1})$ be the target distribution, where $f(x)$ is defined in \eqref{eqf}. Let $\rho_{K}$ be the distribution of $x_{K}$ generated by Algorithm~\ref{alg:ideal} at the final iteration $K$. Then for any $\rho_0$ and any $K \ge 1$, we have \[\textstyle W_{2}(\rho_K, \pi) \leq \max_{j \in [d]} \left| \Pi_{k=1}^K \mathrm{cos}\left( \sqrt{2 \lambda_j} \eta_k^{(K)} \right) \right| W_{2}( \rho_0,\pi). \] \end{lemma} \begin{proof} Starting from $x_0 \sim \rho_0$, draw an initial point $y_0 \sim \pi$ such that $(x_0,y_0)$ is an optimal $W_2$-coupling of $\rho_0$ and $\pi$. Consider the following coupling at each iteration $k$: $(x_{k},v_{k})= \mathrm{HMC}_{\eta_k^{(K)}}(x_{k-1}, \xi_k )$ and $(y_{k},u_{k})= \mathrm{HMC}_{\eta_k^{(K)}}(y_{k-1}, \xi_k )$, where $\xi_k \sim \mathcal{N}(0,I)$ is an independent Gaussian. We collect $\{ x_k \}_{{k=1}}^{K}$ and $\{ y_k \}_{{k=1}}^{K}$ from Algorithm~\ref{alg:ideal}. Each $y_{k} \sim \pi$, since $\pi$ is a stationary distribution of the HMC Markov chain. Then by Lemma~\ref{lem:coup} we have \begin{equation} \label{e1} \begin{split} \textstyle W_2^2( \rho_K, \pi) &\leq \mathbb{E}[ \| x_K - y_K \|^2 ] \\ &\textstyle = \mathbb{E}[\sum_{j \in [d]} (x_K[j] - y_K[j])^2] \\ & \textstyle = \mathbb{E}[ \sum_{j \in [d]} \left( \Pi_{k=1}^K \mathrm{cos}\left( \sqrt{2 \lambda_j} {\eta_k^{(K)}} \right) \times ( x_0[j] - y_0[j] ) \right)^2 ] \\ & \textstyle \leq \left( \max_{j \in [d]} \left( \Pi_{k=1}^K \mathrm{cos}\left( \sqrt{2 \lambda_j} \eta_k^{(K)} \right) \right)^2 \right) \mathbb{E}[\sum_{j \in [d]} (x_0[j] - y_0[j])^2] \\ &= \textstyle \left( \max_{j \in [d]} \left( \Pi_{k=1}^K \mathrm{cos}\left( \sqrt{2 \lambda_j} \eta_k^{(K)} \right) \right)^2 \right) W_{2}^2( \rho_0,\pi). \end{split} \end{equation} Taking the square root on both sides leads to the result.
\end{proof} \citet{V21} shows that by choosing \begin{equation} \label{const} (\textbf{Constant integration time}) \qquad \eta_k^{(K)} = \frac{\pi}{2}\frac{1}{\sqrt{2L}}, \end{equation} one has $\left|\mathrm{cos}\left( \sqrt{2 \lambda_j} \eta_k^{(K)} \right)\right| \leq 1 - \Theta\left( \frac{m}{L} \right)$ for all iterations $k \in [K]$ and dimensions $j \in [d]$. Hence, by Lemma~\ref{lem:V21}, the distance satisfies $$W_2( \rho_K, \pi) = O\left( \left( 1 - \Theta\left( \frac{m}{L} \right) \right)^K \right) W_2( \rho_0, \pi)$$ after $K$ iterations of ideal HMC with the constant integration time. For general smooth strongly convex potentials $f(\cdot)$, \citet{CV19} show the same convergence rate $1 - \Theta\left( \frac{m}{L} \right)$ for HMC using a constant integration time $\eta_k^{(K)} = \frac{c}{\sqrt{L}}$, where $c>0$ is a universal constant. Therefore, under a constant integration time, HMC needs $O(\kappa \log \frac{1}{\epsilon})$ iterations to reach error $W_2(\rho_K, \pi) \le \epsilon$, where $\kappa = \frac{L}{m}$ is the condition number. Furthermore, \citet{CV19} also show that the relaxation time of ideal HMC with a constant integration time is $\Omega(\kappa)$ in the Gaussian case. \subsection{Chebyshev polynomials} We denote by $\Phi_{K}(\cdot)$ the degree-$K$ {\em Chebyshev polynomial of the first kind}, defined by \begin{equation} \label{che} \Phi_{K}(x) = \begin{cases} \mathrm{cos}( K \text{ } \mathrm{arccos}(x) ) & \text{ if } x \in [-1,1], \\ \mathrm{cosh}( K \text{ } \mathrm{arccosh}(x) ) & \text{ if } x > 1, \\ (-1)^K \mathrm{cosh}( K \text{ } \mathrm{arccosh}(-x) ) & \text{ if } x < -1 . \end{cases} \end{equation} Our proposed integration time is built on a scaled-and-shifted Chebyshev polynomial, defined as \begin{equation} \label{scaled} \bar{\Phi}_K(\lambda) := \frac{ \Phi_K( h(\lambda)) }{ \Phi_K( h(0 ) ) }, \end{equation} where $h(\cdot)$ is the mapping $h(\lambda) := \frac{L+m - 2 \lambda}{L-m}$.
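The piecewise definition \eqref{che} and the normalization \eqref{scaled} are straightforward to evaluate numerically. The sketch below (helper names are ours) checks that $\bar{\Phi}_K$ vanishes at the rescaled Chebyshev roots introduced next, and that its maximum over $[m,L]$ obeys the uniform bound $2\theta^{-K}$ with $\theta = \frac{\sqrt{L}+\sqrt{m}}{\sqrt{L}-\sqrt{m}}$ established in Lemma~\ref{lem:rate} below:

```python
import numpy as np

def cheb(K, x):
    """Degree-K Chebyshev polynomial of the first kind, via the
    piecewise cos/cosh formula."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    inside = np.abs(x) <= 1
    out[inside] = np.cos(K * np.arccos(x[inside]))
    out[x > 1] = np.cosh(K * np.arccosh(x[x > 1]))
    out[x < -1] = (-1) ** K * np.cosh(K * np.arccosh(-x[x < -1]))
    return out

def cheb_scaled(K, lam, m, L):
    """Scaled-and-shifted polynomial bar{Phi}_K(lambda)."""
    h = lambda l: (L + m - 2.0 * l) / (L - m)
    return cheb(K, h(lam)) / cheb(K, h(0.0))

m, L, K = 1.0, 100.0, 40
# bar{Phi}_K vanishes at the Chebyshev roots rescaled to [m, L].
k = np.arange(1, K + 1)
roots = (L + m) / 2 - (L - m) / 2 * np.cos((k - 0.5) * np.pi / K)
assert np.max(np.abs(cheb_scaled(K, roots, m, L))) < 1e-9
# Uniform bound on [m, L]: max |bar{Phi}_K| <= 2 / theta**K.
theta = (np.sqrt(L) + np.sqrt(m)) / (np.sqrt(L) - np.sqrt(m))
lam = np.linspace(m, L, 2001)
assert np.max(np.abs(cheb_scaled(K, lam, m, L))) <= 2.0 / theta**K
```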
Observe that the mapping $h(\cdot)$ maps all $\lambda \in [m,L]$ into the interval $[-1,1]$. The roots of the degree-$K$ scaled-and-shifted Chebyshev polynomial $\bar{\Phi}_K(\lambda)$ are \begin{mdframed} \begin{equation} \textbf{ (Chebyshev roots) } \qquad r_{k}^{(K)} := \frac{L+m}{2} - \frac{L-m}{2} \text{cos}\left( \frac{(k-\frac{1}{2}) \pi}{K} \right), \end{equation} \end{mdframed} where $k = 1,2,\dots, K$, i.e., $\bar{\Phi}_K( r_{k}^{(K)}) = 0$. We now recall the following key result regarding the scaled-and-shifted Chebyshev polynomial $\bar{\Phi}_K$. \begin{lemma} \label{lem:rate} (e.g., Section 2.3 in \cite{DST21}) For any positive integer $K$, we have \begin{equation} \textstyle \max_{\lambda \in [m,L]} \left| \bar{\Phi}_K(\lambda) \right| \leq 2 \left( 1 - 2 \frac{ \sqrt{m} }{ \sqrt{L} + \sqrt{m} } \right)^{K} = O\left( \left( 1 - \Theta\left( \sqrt{ \frac{m}{L} } \right)\right)^K \right). \end{equation} \end{lemma} \begin{proof} The numerator of $\bar{\Phi}_K(\lambda) = \frac{ \Phi_K( h(\lambda)) }{ \Phi_K( h(0 ) ) }$ satisfies $|\Phi_K( h(\lambda))| \leq 1$, since $h(\lambda) \in [-1,1]$ for $\lambda \in [m,L]$ and, by definition, the Chebyshev polynomial satisfies $|\Phi_{K}(\cdot)| \leq 1$ on $[-1,1]$. It remains to bound the denominator, which is $\Phi_K( h(0 ) ) = \mathrm{ cosh }\left( K \text{ } \mathrm{arccosh}\left( \frac{L+m}{L-m} \right) \right)$. Since \begin{equation*} \textstyle \mathrm{ arccosh }\left( \frac{L+m}{L-m} \right) = \log \left( \frac{L+m}{L-m} + \sqrt{ \left( \frac{L+m}{L-m} \right)^2 -1 } \right) = \log ( \theta ), \text{ where } \theta:= \frac{ \sqrt{L} + \sqrt{ m} }{ \sqrt{L} - \sqrt{ m} }, \end{equation*} we have \begin{equation*} \textstyle \Phi_K( h(0 ) )= \mathrm{ cosh }\left( K \text{ } \mathrm{arccosh}\left( \frac{L+m}{L-m} \right) \right) = \frac{ \exp(K \log (\theta) ) + \exp(- K \log (\theta) ) }{2} = \frac{ \theta^K + \theta^{-K}}{2} \geq \frac{ \theta^K }{2}.
\end{equation*} Combining the above inequalities, we obtain the desired result: \begin{align*} \max_{\lambda \in [m,L]} \left| \bar{\Phi}_K(\lambda) \right| = \max_{\lambda \in [m,L]} \left| \frac{ \Phi_K( h(\lambda)) }{ \Phi_K( h(0 ) ) } \right| \leq \frac{2}{\theta^K} &= 2 \left( 1 - 2 \frac{ \sqrt{m} }{ \sqrt{L} + \sqrt{m} } \right)^{K} \\ &= O\left( \left( 1 - \Theta\left( \sqrt{ \frac{m}{L} } \right)\right)^K \right). \end{align*} \end{proof} \section{Chebyshev integration time} We are now ready to introduce our scheme of time-varying integration time. Let $K$ be the pre-specified total number of iterations of HMC. Our method first permutes the array $[1,2,\dots, K]$ before executing HMC for $K$ iterations. Denote by $\sigma(k)$ the $k^{\mathrm{th}}$ element of the array $[1,2,\dots, K]$ \emph{after} an arbitrary permutation $\sigma$. Then, we propose to set the integration time of HMC at iteration $k$, i.e., $\eta_k^{(K)}$, as follows: \begin{mdframed} \begin{equation} \label{int:che} (\textbf{Chebyshev integration time}) \qquad \eta_k^{(K)} = \frac{\pi}{2 } \frac{1}{ \sqrt{ 2 r_{\sigma(k)}^{(K)} } }. \end{equation} \end{mdframed} We note that the permutation $\sigma$ is not needed in our analysis below; however, it seems to help improve performance in practice. Specifically, although the guarantees for HMC at the final iteration $K$ provided in Theorem~\ref{thm:acc} and Lemma~\ref{lem:key} below are the same regardless of the permutation, the progress of HMC varies under different permutations of the integration time, which is why we recommend an arbitrary permutation of the integration time in practice. Our main result is the following improved convergence rate of HMC under the Chebyshev integration time, for quadratic potentials.
\begin{theorem} \label{thm:acc} Let the target distribution be $\pi \propto \exp( - f(x) ) = \mathcal{N}(0,\Lambda^{-1})$, where $f(x)$ is defined in \eqref{eqf}, and denote by $\kappa:= \frac{L}{m}$ the condition number. Let $\rho_{K}$ be the distribution of $x_{K}$ generated by Algorithm~\ref{alg:ideal} at the final iteration $K$. Then, we have \[ W_{2}(\rho_K, \pi) \leq 2 \left( 1 - 2 \frac{ \sqrt{m} }{ \sqrt{L} + \sqrt{m} } \right)^{K} W_{2}( \rho_0,\pi) = O\left( \left( 1 - \Theta\left( \frac{1}{ \sqrt{\kappa} } \right) \right)^K \right) W_{2}( \rho_0,\pi). \] Consequently, the number of iterations $K$ needed for the Wasserstein-2 distance to satisfy $W_{2}(\rho_K, \pi) \leq \epsilon$ is $O \left( \sqrt{ \kappa } \log \frac{1}{\epsilon} \right)$. \end{theorem} Theorem~\ref{thm:acc} shows an accelerated linear rate $1 - \Theta\left( \frac{1}{ \sqrt{\kappa} } \right)$ under the Chebyshev integration time, and hence improves the previous rate of $1 - \Theta\left( \frac{1}{ \kappa} \right)$ discussed above. The proof of Theorem~\ref{thm:acc} relies on the following lemma, which upper-bounds the cosine products appearing in the $W_{2}$ bound of Lemma~\ref{lem:V21} by the scaled-and-shifted Chebyshev polynomial $\bar{\Phi}_K(\lambda)$ in \eqref{scaled}. \begin{lemma} \label{lem:key} Denote $| P_{K}^{\mathrm{Cos}}(\lambda) | :=\left| \Pi_{k=1}^K \mathrm{cos}\left( \frac{\pi}{2} \sqrt{ \frac{\lambda}{ r_{\sigma(k)}^{(K)} } } \right) \right| $. Suppose $\lambda \in [m, L]$. Then, for any positive integer $K$, \begin{equation} \label{sta} | P_{K}^{\mathrm{Cos}}(\lambda) | \leq \left| \bar{ \Phi }_K(\lambda) \right|.
\end{equation} \end{lemma} \begin{proof} We use the fact that the degree-$K$ scaled-and-shifted Chebyshev polynomial can be written as \begin{equation} \textstyle \bar{\Phi}_K(\lambda) = \Pi_{k=1}^K \left(1 - \frac{\lambda}{ r_{\sigma(k)}^{(K)} }\right) \end{equation} for any permutation $\sigma(\cdot)$, since $\{ r_{\sigma(k)}^{(K)} \}$ are its roots and $\bar{\Phi}_K(0)=1$. So inequality \eqref{sta} is equivalent to \begin{equation} \label{conj2} \textstyle \left| \Pi_{k=1}^K \mathrm{cos}\left( \frac{\pi}{2} \sqrt{ \frac{\lambda}{ r_{\sigma(k)}^{(K)} } } \right) \right| \leq \left| \Pi_{k=1}^K \left(1 - \frac{\lambda}{ r_{\sigma(k)}^{(K)} }\right) \right|. \end{equation} To show \eqref{conj2}, consider the mapping $\psi(x) := \frac{ \cos \left( \frac{\pi}{2} \sqrt{x} \right) }{1 - x}$ for $x \ge 0$, $x \neq 1$, extended with $\psi(1) = \frac{\pi}{4}$ by continuity; it suffices to show that $\max_{x \geq 0} | \psi(x) | \leq 1$, after which \eqref{conj2} follows by applying the bound to each factor with $x = \lambda / r_{\sigma(k)}^{(K)}$. We have $\textstyle \psi'(x) = - \frac{\pi}{4 \sqrt{x}} \frac{1}{1-x} \sin( \frac{\pi}{2} \sqrt{x} ) + \cos( \frac{\pi}{2} \sqrt{x} ) \frac{1}{ (1-x)^2 }.$ Hence, $\psi'(x)= 0$ when \begin{equation} \label{using} \textstyle \tan( \frac{\pi}{2} \sqrt{x} ) = \frac{ 4 \sqrt{x} }{ \pi (1-x) }. \end{equation} Let $\hat{x}$ be any extreme point of $\psi$, so that $\hat{x}$ satisfies \eqref{using}. Then, using \eqref{using} together with $\cos( \frac{\pi}{2} \sqrt{\hat{x}} ) = \pm \frac{\pi(1-\hat{x}) }{ \sqrt{16 \hat{x} + \pi^2(1- \hat{x} )^2 } }$, we have $\textstyle |\psi(\hat{x})| = \left| \frac{ \cos \left( \frac{\pi}{2} \sqrt{\hat{x}} \right) }{1 - \hat{x}} \right| = \frac{\pi}{ \sqrt{16 \hat{x} + \pi^2(1- \hat{x} )^2 } }$. One can check that \eqref{using} has no solution in $(0,1)$, while every solution $\hat{x} \geq 1$ satisfies $16 \hat{x} + \pi^2(1-\hat{x})^2 \geq 16 > \pi^2$, so $|\psi(\hat{x})| < 1$ at every extreme point with $\hat{x} > 0$. Since $\psi(0) = 1$ and $\psi(x) \to 0$ as $x \to \infty$, it follows that $\max_{x \geq 0} |\psi(x)| = 1$, attained at $x = 0$.
The proof is now completed. \end{proof} \begin{figure}[t] \centering \footnotesize \subfloat{% \includegraphics[width=0.48\textwidth]{./compare.jpg} } \subfloat{% \includegraphics[width=0.48\textwidth]{./ratio.jpg} } \caption{\footnotesize \textbf{Left}: Set $K=400$, $m=1$ and $L=100$. The green solid line (Chebyshev integration time \eqref{int:che}) represents $\max_{\lambda \in \{ m, m+0.1, \dots, L \}} \left| \Pi_{s=1}^k \mathrm{cos}\left( \sqrt{ 2 \lambda} \eta_s^{(K)} \right) \right| =\left| \Pi_{s=1}^k \mathrm{cos}\left( \frac{\pi}{2} \sqrt{ \frac{\lambda}{ r_{ \sigma(s) }^{(K)} } } \right) \right|$ vs.~$k$, while the blue dashed line (constant integration time \eqref{const}) represents $\max_{\lambda \in \{ m, m+0.1, \dots, L \}} \left| \Pi_{s=1}^k \mathrm{cos}\left( \sqrt{ 2 \lambda} \eta_s^{(K)} \right) \right|=\left| \Pi_{s=1}^k \mathrm{cos}\left( \frac{\pi}{2} \sqrt{ \frac{\lambda}{ L } } \right) \right| $ vs.~$k$. Since the cosine product controls the convergence rate of the $W_2$ distance by Lemma~\ref{lem:V21}, this confirms the acceleration of the proposed Chebyshev integration time over the constant integration time \citep{CV19,V21}. \textbf{Right:} $\psi(x) = \frac{ \cos \left( \frac{\pi}{2} \sqrt{x} \right) }{1 - x}$ vs.~$x$. \vspace{-5pt} } \label{fig:} \end{figure} Figure~\ref{fig:} compares the cosine product $\max_{\lambda \in [m,L]} \left| \Pi_{s=1}^k \mathrm{cos}\left( \sqrt{ 2 \lambda} \eta_s^{(K)} \right) \right|$ in Lemma~\ref{lem:V21} under the proposed integration time and under the constant integration time, which illustrates the acceleration achieved by the proposed Chebyshev integration time. We now provide the proof of Theorem~\ref{thm:acc}. \begin{proof}(of Theorem~\ref{thm:acc}) From Lemma~\ref{lem:V21}, we have \begin{equation} \label{a1} \textstyle W_{2}(\rho_K, \pi) \leq \max_{j \in [d]} \left| \Pi_{k=1}^K \mathrm{cos}\left( \sqrt{2 \lambda_j} \eta_k^{(K)} \right) \right| \cdot W_{2}( \rho_0,\pi).
\end{equation} We can upper-bound the cosine product for any $j \in [d]$ as \begin{equation} \label{a2} \textstyle \left|\Pi_{k=1}^K \mathrm{cos}\left( \sqrt{2 \lambda_j} \eta_k^{(K)} \right) \right| \overset{(a)}{=} \left|\Pi_{k=1}^K \mathrm{cos}\left( \frac{\pi}{2} \sqrt{ \frac{\lambda_j}{r_{\sigma(k)}^{(K)} } } \right) \right| \overset{(b)}{\leq} \left| \bar{ \Phi }_K(\lambda_j) \right| \overset{(c)}{\leq} 2 \left( 1 - 2 \frac{ \sqrt{m} }{ \sqrt{L} + \sqrt{m} } \right)^{K}, \end{equation} where (a) is due to the Chebyshev integration time \eqref{int:che}, (b) is by Lemma~\ref{lem:key}, and (c) is by Lemma~\ref{lem:rate}. Combining \eqref{a1} and \eqref{a2} leads to the result. \end{proof} \paragraph{HMC with Chebyshev Integration Time for General Distributions} To sample from general strongly log-concave distributions, we propose Algorithm~\ref{alg:HMC}, which adopts the Verlet integrator (a.k.a.~the leapfrog integrator) to simulate the Hamiltonian flow $\mathrm{HMC}_{\eta}(\cdot, \xi )$ and uses a Metropolis filter to correct the bias. The number of leapfrog steps $S_{k}$ in iteration $k$ equals the integration time $\eta_{k}^{(K)}$ divided by the step size $\theta$ used in the leapfrog steps; more precisely, $S_k=\lfloor \frac{\eta_k^{(K)}}{ \theta } \rfloor$. \begin{algorithm}[t] \begin{algorithmic}[1] \small \caption{\textsc{HMC with Chebyshev integration time}} \label{alg:HMC} \STATE Given: a potential $f(\cdot)$, where $\pi(x) \propto \exp(-f(x) )$ and $f(\cdot)$ is $L$-smooth and $m$-strongly convex. \STATE Require: number of iterations $K$ and the step size of the leapfrog steps $\theta$. \STATE Define $r_k^{(K)} := \frac{L+m}{2} - \frac{L-m}{2} \mathrm{cos}\left( \frac{(k-\frac{1}{2}) \pi}{K} \right), \text{for } k = 1, \dots, K.$ \STATE Arbitrarily permute the array $[1,2,\dots, K]$. Denote by $\sigma(k)$ the $k^{\mathrm{th}}$ element of the array after permutation.
\FOR{$k=1,2,\dots, K$} \STATE Sample velocity $\xi_k \sim N(0, I_d)$. \STATE Set integration time $\eta_{k}^{(K)} \gets \frac{\pi}{2 } \frac{1}{ \sqrt{ 2 r_{\sigma(k)}^{(K)} } }$. \STATE Set the number of \emph{leapfrog} steps $S_k \leftarrow \lfloor \frac{\eta_k^{(K)}}{ \theta} \rfloor$. \STATE $(\bar{x}_0,\bar{v}_0) \gets (x_{k-1}, \xi_k)$ \\ \% Leapfrog steps \FOR{$s=0,1,\dots, S_k-1$} \STATE $\bar{v}_{ s+\frac{1}{2} } = \bar{v}_{s} - \frac{\theta}{2} \nabla f( \bar{x}_s ); \qquad \bar{x}_{ s+1} = \bar{x}_s + \theta \bar{v}_{s+ \frac{1}{2} }; \qquad \bar{v}_{s+1} = \bar{ v}_{s+\frac{1}{2} } - \frac{\theta}{2} \nabla f( \bar{x}_{s+1} );$ \ENDFOR \\ \% Metropolis filter \STATE Compute the acceptance ratio $\alpha_{k}= \min \left( 1 , \frac{ \exp(-H(\bar{x}_{S_k}, \bar{v}_{S_k}) ) }{ \exp(- H(\bar{x}_0,\bar{v}_0) ) } \right)$. \STATE Draw $\zeta \sim \mathrm{Uniform}[0,1]$. \STATE \textbf{If} $\zeta < \alpha_k$ \textbf{ then } \STATE \quad $x_{k} \gets \bar{x}_{S_{k}}$ \STATE \textbf{Else} \STATE \quad $x_{k} \gets x_{k-1}$. \ENDFOR \end{algorithmic} \end{algorithm} \section{Experiments} We now evaluate HMC with the proposed Chebyshev integration time (Algorithm~\ref{alg:HMC}) and HMC with the constant integration time (Algorithm~\ref{alg:HMC} with line 7 replaced by the constant integration time \eqref{const}) on several tasks. For all tasks, the total number of iterations of HMC is set to $K=10,000$, and hence we collect $K=10,000$ samples along the trajectory. For the step size $h$ in the leapfrog steps, we let $h \in \{ 0.001, 0.005, 0.01, 0.05, 0.1 \}$. To evaluate the methods, we compute the effective sample size (ESS), a common performance metric for HMC \citep{Duane1987216,BL21,HTD21,RV22,HRS21,HG14,SG21}, using the toolkit ArviZ \citep{KCHM19}.
The ESS of a sequence of $N$ dependent samples is computed from the autocorrelations within the sequence at different lags: $\text{ESS} := N / (1+ 2 \sum_k \gamma(k) )$, where $\gamma(k)$ is an estimate of the autocorrelation at lag $k$. We consider 4 metrics: (1) \textbf{Mean ESS:} the average of the ESS over all variables, i.e., the ESS is computed for each variable/dimension and then averaged; (2) \textbf{Min ESS:} the lowest ESS among all variables; (3) \textbf{Mean ESS/Sec.:} Mean ESS normalized by the CPU time in seconds; (4) \textbf{Min ESS/Sec.:} Min ESS normalized by the CPU time in seconds. In the following tables, we denote by ``Cheby.''~our proposed method and by ``Const.''~HMC with the constant integration time \citep{V21,CV19}. Each configuration is repeated 10 times, and we report the average and the standard deviation of the results. We also report the acceptance rate of the Metropolis filter (Acc.~Prob) in the tables. Our implementation of the experiments is based on modifying publicly available HMC code by \citet{BL21}. Code for our experiments can be found at \url{https://github.com/jimwang123/Accelerating_HMC_via_Chebyshev_Time}. \subsection{Ideal HMC flow for sampling from a Gaussian with a diagonal covariance} Before evaluating the empirical performance of Algorithm~\ref{alg:HMC} in the following subsections, we first compare the use of an arbitrary permutation of the Chebyshev integration time with no permutation (as well as with a constant integration time). We simulate ideal HMC for sampling from a Gaussian $\mathcal{N}(\mu, \Sigma)$, where $\mu = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ and $\Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 100 \end{bmatrix}$. Note that the ideal HMC flow has a closed-form solution in this case, as \eqref{ideal} shows. The results are reported in Table~\ref{expHMC}.
From the table, the Chebyshev integration time yields a larger ESS than the constant integration time, and an arbitrary permutation helps obtain a better result. An explanation is that the ESS is computed along the trajectory of a chain, and therefore a permutation of the integration times can make a difference. We remark that this observation (an arbitrary permutation of the integration times yields a larger ESS) does not contradict Theorem~\ref{thm:acc}, since Theorem~\ref{thm:acc} concerns the guarantee in $W_2$ distance at the last iteration $K$. \begin{table}[t] \begin{center} \scriptsize \begin{tabular}{ l | r r} \hline Method & Mean ESS & Min ESS \\ \hline \hline Cheby.~(W/) & \bf $10399.00811 \pm 347.25021$ & $7172.50338 \pm 257.21244$ \\ Cheby.~(W/O)& \bf $10197.09964 \pm 276.94894$ & $7043.55293 \pm 284.78037$ \\ Const. & $7692.00382 \pm 207.19628$ & $5533.26519 \pm 213.31943$ \\ \hline \end{tabular} \end{center} \caption{ \footnotesize Ideal HMC with $K=10,000$ iterations for sampling from a Gaussian $\mathcal{N}(\mu, \Sigma)$, where $\mu = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ and $\Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 100 \end{bmatrix}$. Here, Cheby.~(W/) is ideal HMC with an arbitrary permutation of the Chebyshev integration time, while Cheby.~(W/O) is ideal HMC without a permutation; Const.~refers to using the constant integration time \eqref{const}. } \label{expHMC} \vspace{-2.0pt} \end{table} \subsection{Sampling from a Gaussian} We sample from $\mathcal{N}(\mu, \Sigma)$, where $\mu = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ and $\Sigma = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 100 \end{bmatrix}$. Therefore, the strong convexity constant $m$ is approximately $0.01$ and the smoothness constant $L$ is approximately $1$. Table~\ref{exp:gauss} shows the results.
HMC with the Chebyshev integration time consistently outperforms the constant integration time on all the metrics: Mean ESS, Min ESS, Mean ESS/Sec., and Min ESS/Sec. We also plot two quantities over the iterations of HMC in Figure~\ref{fig:G}. Specifically, sub-figure (a) of Figure~\ref{fig:G} plots the size of the difference between the target covariance $\Sigma$ and an estimated covariance $\hat{\Sigma}_{k}$ at each iteration $k$ of HMC, where $\hat{\Sigma}_{k}$ is the sample covariance of $10,000$ samples collected from $10,000$ HMC chains at their $k^{{\mathrm{th}}}$ iteration. Sub-figure (b) plots a discrete TV distance computed as follows. We use a built-in function of NumPy to draw $10,000$ samples from the target distribution, and we also have $10,000$ samples collected from $10,000$ HMC chains at each iteration $k$. Using these two sets of samples, we construct two histograms with $30$ bins for each dimension, denoted $\hat{\pi}$ and $\hat{\rho}_{k}$. The discrete $\mathrm{TV}(\hat{\pi},\hat{\rho}_k)$ at iteration $k$ is then half the sum, over all pairs of corresponding bins, of the absolute differences of the bin counts, divided by $10,000$; it serves as a surrogate for the Wasserstein-2 distance between the true target $\pi$ and $\rho_{k}$ from HMC, since computing or estimating the true Wasserstein distance is challenging. \begin{table}[t] \begin{center} \scriptsize \begin{tabular}{ r l | r r r r r } \hline Step Size & Method & Mean ESS & Min ESS & Mean ESS/Sec. & Min. ESS/Sec. & Acc. Prob \\ \hline \hline $0.001$ & Cheby. & \bf $5187.28 \pm 261.13$ & $307.09 \pm 21.92$ & $20.28 \pm 1.74$ & $1.20\pm 0.11$ & $1.00\pm 0.00$ \\ $0.001$ & Const. & $1912.76 \pm 72.10$ & $39.87 \pm 13.77$ & $15.87 \pm 0.89$ & $0.33 \pm 0.11$ & $1.00\pm 0.00$ \\ \hline $0.005$ & Cheby.
& $5146.71 \pm 257.65$ & $304.126 \pm 19.09$ & $97.84 \pm 9.23$ & $5.79 \pm 0.68$ & $1.00\pm 0.00$ \\ $0.005$ & Const. & $1926.71 \pm 136.53$ & $32.83 \pm 9.57$ & $80.31 \pm 4.39$ & $1.37 \pm 0.39$ & $1.00\pm 0.00$ \\ \hline $0.01$ & Cheby. & $5127.90 \pm 211.46$ & $279.59 \pm 38.09$ & $184.26 \pm 20.99$ & $10.01 \pm 1.52$ & $1.00\pm 0.00$ \\ $0.01$ & Const. & $1832.87 \pm 77.47$ & $35.71 \pm 11.74$ & $147.53 \pm 12.59$ & $2.85 \pm 0.95$ & $1.00\pm 0.00$ \\ \hline $0.05$ & Cheby. & $5133.67 \pm 195.07$ & $316.87 \pm 36.27$ & $ 871.72 \pm 88.73$ & $53.54 \pm 6.22$ & $0.99\pm 0.00$ \\ $0.05$ & Const. & $1849.15 \pm 92.75$ & $34.98 \pm 14.70$ & $615.73 \pm 30.16$ & $11.70 \pm 5.07$ & $0.99\pm 0.00$ \\ \hline $0.1$ & Cheby. & $4948.46 \pm 144.03$ & $281.66 \pm 44.79$ & $1492.96 \pm 166.21$ & $84.39 \pm 13.04 $ & $0.99\pm 0.00$ \\ $0.1$ & Const. & $1852.79 \pm 132.95$ & $38.17 \pm 16.35$ & $1035.54 \pm 82.34$ & $21.44 \pm 9.51$ & $0.99\pm 0.00$ \\ \hline \hline \end{tabular} \end{center} \caption{ \footnotesize Sampling from a Gaussian distribution. We report 4 metrics regarding ESS (the higher the better), please see the main text for their definitions. } \label{exp:gauss} \vspace{-2.0pt} \end{table} \begin{figure}[t] \centering \footnotesize \subfloat[ $\| \Sigma - \hat{\Sigma}_k \|_F$ v.s. iteration $k$]{% \includegraphics[width=0.48\textwidth]{./TV_cov.jpg} } \subfloat[ discrete $\mathrm{TV}(\hat{\pi},\hat{\rho}_k)$ v.s. iteration $k$ \label{subfig-1:dummy}]{% \includegraphics[width=0.48\textwidth]{./TV_iters.jpg} } \caption{\footnotesize Sampling from a Gaussian distribution. Both lines correspond to HMCs with the same step size $h=0.05$ used in the leapfrog steps (but with different schemes of the integration time). Please see the main text for the precise definitions of the quantities and the details of how we compute them. 
} \label{fig:G} \end{figure} \subsection{Sampling from a mixture of two Gaussians} For a vector $a \in \mathbb{R}^{d}$ and a positive definite matrix $\Sigma \in \mathbb{R}^{{d \times d}}$, we consider sampling from a mixture of two Gaussians $\mathcal{N}(a,\Sigma)$ and $\mathcal{N}(-a,\Sigma)$ with equal weights. Denote $b := \Sigma^{{-1}} a$ and $\Lambda := \Sigma^{{-1}}$. The potential and gradient are \begin{align*} \textstyle f(x) &= \frac{1}{2} \| x - a \|^2_{\Lambda} - \log ( 1 + \exp( -2 x^\top b) ), \\ \nabla f(x) \textstyle &= \Lambda x - b + 2 b ( 1+\exp(2 x^\top b) )^{-1}. \end{align*} For each dimension $i \in [d]$, we set $a[i] = \frac{\sqrt{i} }{ 2d} $ and set the covariance $\Sigma = \text{diag}_{{1 \leq i \leq d}} ( \frac{i}{d} )$. The potential is strongly convex if $ a^\top \Sigma^{-1} a < 1$; see, e.g., \cite{RV22}. We set $d=10$ in the experiment, and simply use the smallest and the largest eigenvalue of $\Lambda$ to approximate the strong convexity constant $m$ and the smoothness constant $L$ of the potential, which are $\hat{m}=1$ and $\hat{L}=10$ in this case. Table~\ref{exp:mix} shows that the proposed method generates a larger effective sample size than the baseline. \begin{table} \begin{center} \scriptsize \begin{tabular}{ r l | r r r r r } \hline Step Size & Method & Mean ESS & Min ESS & Mean ESS/Sec. & Min. ESS/Sec. & Acc. Prob \\ \hline \hline $0.001$ & Cheby. & $2439.86 \pm 71.83$ & $815.20 \pm 83.82$ & $22.68 \pm 0.93$ & $7.57 \pm 0.81$ & $0.89 \pm 0.00$ \\ $0.001$ & Const. & $845.44 \pm 31.42$ & $261.14 \pm 34.34$ & $12.90 \pm 0.52$ & $3.98 \pm 0.53$ & $0.91 \pm 0.00$ \\ \hline $0.005$ & Cheby. & $2399.50 \pm 100.12$ & $784.06 \pm 82.07$ & $105.97 \pm 8.78$ & $34.58 \pm 4.12$ & $0.89 \pm 0.00$ \\ $0.005$ & Const. & $876.61 \pm 25.62$ & $277.72 \pm 30.62$ & $63.80 \pm 4.67$ & $20.22 \pm 2.62$ & $0.91 \pm 0.00$ \\ \hline $0.01$ & Cheby.
& $2341.35 \pm 89.99$ & $794.27 \pm 48.75$ & $194.81 \pm 23.51$ & $66.30 \pm 9.89$ & $0.88 \pm 0.00$ \\ $0.01$ & Const. & $860.61 \pm 20.39$ & $235.33 \pm 33.73$ & $110.62 \pm 14.09$ & $30.40 \pm 6.34$ & $0.91 \pm 0.00$ \\ \hline $0.05$ & Cheby. & $2214.19 \pm 87.27$ & $748.66 \pm 46.18$ & $761.59 \pm 68.88$ & $256.51 \pm 13.76$ & $0.89 \pm 0.00$ \\ $0.05$ & Const. & $853.40 \pm 41.05$ & $265.70 \pm 37.41$ & $376.54 \pm 67.83$ & $116.45 \pm 22.23$ & $0.91 \pm 0.00$ \\\hline $0.1$ & Cheby. & $2064.42 \pm 67.44$ & $657.45 \pm 60.44$ & $1162.67 \pm 84.19$ & $370.07 \pm 41.11$& $0.90 \pm 0.00$ \\ $0.1$ & Const. & $632.70 \pm 22.78$ & $182.88 \pm 37.10$ & $450.53 \pm 93.60$ & $132.58 \pm 43.91$ & $0.92 \pm 0.00$ \\ \hline \hline \end{tabular} \end{center} \caption{ \footnotesize Sampling from a mixture of two Gaussians} \label{exp:mix} \end{table} \subsection{Bayesian logistic regression} \begin{table} \begin{center} \scriptsize \smallskip \begin{tabular}{ r l | r r r r r } \hline \hline \hline \hline \multicolumn{7}{l}{ \shortstack{ \footnotesize \textsc{Heart} dataset {\scriptsize ($\hat{m}= 2.59$, $\hat{L} = 92.43$) } } } \\ \hline Step Size & Method & Mean ESS & Min ESS & Mean ESS/Sec. & Min. ESS/Sec. & Acc. Prob \\ \hline \hline $0.001$ & Cheby. & $1693.71 \pm 63.53$ & $520.43 \pm 62.24$ & $18.54 \pm 2.88$ & $5.69 \pm 1.12$ & $1.00\pm 0.00$ \\ $0.001$ & Const. & $312.18 \pm 12.65$ & $80.97 \pm 15.97$ & $6.57 \pm 0.42$ & $1.69 \pm 0.28 $ & $1.00 \pm 0.00$ \\ \hline $0.005$ & Cheby. & $1664.87 \pm 43.72$ & $481.76 \pm 49.00$ & $82.90 \pm 16.51$ & $24.08 \pm 5.72 $ & $0.99\pm 0.00$\\ $0.005$ & Const. & $329.48 \pm 13.15$ & $75.78 \pm 17.30$ & $31.87 \pm 2.73$ & $7.40 \pm 2.06 $ & $0.99\pm 0.00$ \\ \hline $0.01$ & Cheby. & $1648.25 \pm 47.50$ & $508.69 \pm 49.81$ & $157.09\pm 26.70$ & $48.45 \pm 9.64 $ & $0.99\pm 0.00$ \\ $0.01$ & Const. & $307.52 \pm 8.77$ & $82.85 \pm 13.88$ & $53.89 \pm 6.37$ & $14.62 \pm 3.28$ & $0.99\pm 0.00$ \\ \hline $0.05$ & Cheby. 
& $1424.21 \pm 54.03$ & $439.88 \pm 56.25$ & $458.56\pm 51.33$ & $140.51 \pm 16.58$ & $0.98\pm 0.00$ \\ $0.05$ & Const. & $242.44 \pm 14.61$ & $56.42 \pm 17.68$ & $103.36 \pm 12.64$ & $23.90 \pm 7.40 $ & $0.98\pm 0.00$ \\ \hline \hline \hline \hline \end{tabular} \begin{tabular}{ r l | r r r r r } \multicolumn{7}{l}{ \shortstack{ \footnotesize \textsc{Breast Cancer} dataset } {\scriptsize ($\hat{m} = 1.81$, $\hat{L} = 69.28$) } } \\ \hline Step Size & Method & Mean ESS & Min ESS & Mean ESS/Sec. & Min. ESS/Sec. & Acc. Prob \\ \hline \hline $0.001$ & Cheby. & $1037.98 \pm 34.46$ & $575.72 \pm 41.14$ & $9.40\pm 0.31$ & $5.21 \pm 0.31$ & $1.00 \pm 0.00$ \\ $0.001$ & Const. & $174.73 \pm 13.91$ & $78.24 \pm 23.28$ & $2.59 \pm 0.29$ & $2.59 \pm 0.29$ & $1.00 \pm 0.00$ \\ \hline $0.005$ & Cheby. & $1010.49 \pm 24.15$ & $571.03 \pm 36.64$ & $43.09 \pm 1.14$ & $24.35 \pm 1.70$ & $0.99 \pm 0.00$ \\ $0.005$ & Const. & $173.17 \pm 11.40$ & $79.76 \pm 13.49$ & $11.88 \pm 1.39$ & $11.88 \pm 1.39$ & $0.99 \pm 0.00$ \\ \hline $0.01$ & Cheby. & $1038.10 \pm 31.48$ & $565.54 \pm 50.51$ & $82.82 \pm 3.51$ & $45.14 \pm 4.44$ & $0.99 \pm 0.00$ \\ $0.01$ & Const. & $162.64 \pm 9.43$ & $58.79 \pm 16.02$ & $18.92 \pm 2.59$ & $18.92 \pm 2.59$ & $0.99 \pm 0.00$ \\ \hline $0.05$ & Cheby. & $886.24 \pm 38.92$ & $499.54 \pm 43.99$ & $240.08 \pm 12.55$ & $135.28 \pm 12.04$ & $0.98 \pm 0.00$ \\ $0.05$ & Const. & $99.48 \pm 10.10$ & $44.70 \pm 13.23$ & $33.25 \pm 6.50$ & $33.25 \pm 6.50$ & $0.98 \pm 0.00$ \\ \hline \hline \hline \hline \end{tabular} \begin{tabular}{ r l | r r r r r } \multicolumn{7}{l}{ \shortstack{ \footnotesize \textsc{Diabetes} dataset { \scriptsize ($\hat{m}= 4.96$, $\hat{L} = 270.20$) } } } \\ \hline Step Size & Method & Mean ESS & Min ESS & Mean ESS/Sec. & Min. ESS/Sec. & Acc. Prob \\ \hline \hline $0.001$ & Cheby. & $726.08 \pm 33.92$ & $424.59 \pm 58.77$ & $11.64 \pm 0.85$ & $6.83 \pm 1.16$ & $0.99 \pm 0.00$ \\ $0.001$ & Const. 
& $100.50 \pm 9.32$ & $41.84 \pm 19.33$ & $3.6 \pm 0.31$ & $1.50 \pm 0.68$ & $0.99 \pm 0.00$ \\ \hline $0.005$ & Cheby. & $731.46 \pm 33.04$ & $395.82 \pm 47.98$ & $54.92 \pm 5.26$ & $29.61\pm 3.75$ & $0.99 \pm 0.00$ \\ $0.005$ & Const. & $100.16 \pm 11.83$ & $44.62 \pm 20.81$ & $ 14.71 \pm 2.52$ & $6.67 \pm 3.37$ & $0.99 \pm 0.00$ \\ \hline $0.01$ & Cheby. & $687.74 \pm 29.31$ & $399.44 \pm 45.01$ & $93.10 \pm 6.78$ & $53.90\pm 5.38$ & $0.98 \pm 0.00$ \\ $0.01$ & Const. & $83.04 \pm 9.36$ & $36.39 \pm 12.43$ & $20.87 \pm 3.31$ & $9.09 \pm 3.25$ & $0.98 \pm 0.00$ \\ \hline $0.05$ & Cheby. & $546.80 \pm 37.40$ & $330.09 \pm 34.31$ & $206.07 \pm 17.76$ & $125.07\pm18.87$ & $0.96 \pm 0.00$ \\ $0.05$ & Const. & $57.11 \pm 9.52$ & $23.44 \pm 9.57$ & $27.23 \pm 5.18$ & $11.02 \pm 4.34$ & $0.96 \pm 0.00$ \\ \hline \hline \end{tabular} \end{center} \caption{ \footnotesize Bayesian logistic regression} \label{exp:log} \end{table} We also consider Bayesian logistic regression to evaluate the methods. Given an observation $(z_i,y_i)$, where $z_{i} \in \mathbb{R}^d$ and $y_{i} \in \{ -1, 1\}$, the likelihood function is modeled as $p(y_i | z_i, w) = \frac{1}{1 + \exp( - y_i z_i^\top w )} $. Moreover, the prior on the model parameter $w$ is assumed to follow a Gaussian distribution, $p(w) = \mathcal{N}(0, \alpha^{-1} I_d)$, where $\alpha > 0$ is a parameter. The goal is to sample $w \in \mathbb{R}^{d}$ from the posterior, $p(w | \{ z_i, y_i \}_{i=1}^n ) \propto p(w) \prod_{i=1}^n p(y_i | z_i,w)$, where $n$ is the number of data points in the dataset. The potential function $f(w)$ can be written as \begin{equation} \textstyle f(w) = \sum_{{i=1}}^{n} f_i(w), \text{ where } f_{i}(w) = \log \left( 1 + \exp( - y_i w^\top z_i ) \right) + \alpha \frac{ \|w\|^2 }{2 n}. \end{equation} We set $\alpha=1$ in the experiments. We consider three datasets: the Heart, Breast Cancer, and Diabetes binary classification datasets, which are all publicly available online.
\footnote{Datasets are available at \url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}.} To approximate the strong convexity constant $m$ and the smoothness constant $L$ of the potential $f(w)$, we compute the smallest and the largest eigenvalues of the Hessian $\nabla^{2} f(w)$ at the maximizer of the posterior, and we use them as estimates of $m$ and $L$, respectively. We apply Newton's method to approximately find the maximizer of the posterior. The experimental results are reported in Table~\ref{exp:log}, which shows that our method consistently outperforms the baseline. \subsection{Sampling from a \emph{hard} distribution} \begin{table} \scriptsize \begin{center} \begin{tabular}{ r l | r r r r r } \hline Step Size & Method & Mean ESS & Min ESS & Mean ESS/Sec. & Min. ESS/Sec. & Acc. Prob \\ \hline \hline \multicolumn{7}{l}{ \shortstack{ sampling from $\pi(x) \propto \exp(-f_{0.001}(x))$ }} \\ \hline $0.001$ & Cheby. & $6222.21 \pm 88.90$ & $453.03 \pm 30.35$ & $114.74 \pm 7.59$ & $8.36 \pm 0.83$ & $1.00 \pm 0.00$ \\ $0.001$ & Const. & $2098.18 \pm 46.56$ & $63.53 \pm 15.00$ & $82.31 \pm 5.29$ & $2.50 \pm 0.63$ & $1.00 \pm 0.00$ \\ \hline \hline \multicolumn{7}{l}{ \shortstack{ sampling from $\pi(x) \propto \exp(-f_{0.005}(x))$ }} \\ \hline $0.005$ & Cheby. & $6271.43 \pm 117.71$ & $429.42 \pm 34.52$ & $545.76 \pm 26.10$ & $37.28 \pm 2.29$ & $0.99 \pm 0.00$ \\ $0.005$ & Const. & $2125.36 \pm 21.87$ & $67.42 \pm 16.51$ & $361.14 \pm 5.65$ & $11.44 \pm 2.76$ & $0.99 \pm 0.00$ \\ \hline \hline \multicolumn{7}{l}{ \shortstack{ sampling from $\pi(x) \propto \exp(-f_{0.01}(x))$ }} \\ \hline $0.01$ & Cheby. & $6523.21 \pm 95.65$ & $459.48 \pm 38.83$ & $1070.77 \pm 68.78$ & $75.61 \pm 9.79$ & $0.99 \pm 0.00$ \\ \hline $0.01$ & Const.
& $2125.04 \pm 31.83$ & $69.66 \pm 20.75$ & $528.35 \pm 80.17$ & $17.19 \pm 6.34$ & $0.99 \pm 0.00$ \\ \hline \hline \multicolumn{7}{l}{ \shortstack{ sampling from $\pi(x) \propto \exp(-f_{0.05}(x))$ }} \\ \hline $0.05$ & Cheby. & $6457.21 \pm 110.05$ & $375.97 \pm 30.64$ & $3319.51 \pm 134.92$ & $193.06 \pm 14.49$ & $0.97 \pm 0.00$ \\ \hline $0.05$ & Const. & $2796.41 \pm 56.89$ & $62.33 \pm 13.26$ & $1893.99 \pm 57.23$ & $42.22 \pm 9.05$ & $0.97 \pm 0.00$ \\ \hline \hline \end{tabular} \end{center} \caption{\footnotesize Sampling from a distribution $\pi(x) \propto \exp(-f_h(x))$ whose potential $f_h(\cdot)$ is defined in \eqref{eqh}.} \label{exp:h} \vspace{-2pt} \end{table} We also consider sampling from a step-size-dependent distribution $\pi(x) \propto \exp(-f_h(x))$, where the potential $f_h(\cdot)$ is $\kappa$-smooth and $1$-strongly convex. The distribution was considered in \cite{LRT21} for showing a lower bound for certain Metropolized sampling methods that use a constant integration time and a constant step size $h$ in the leapfrog integrator. More concretely, the potential is \begin{equation} \label{eqh} \textstyle f_{h}(x):=\sum_{i=1}^d f_i^{(h)}(x_i), \text{ where } f_i^{(h)}(x_i) = \begin{cases} \frac{1}{2} x_i^2, & i = 1, \\ \frac{\kappa}{3} x_i^2 - \frac{\kappa h}{3} \cos \left( \frac{x_i}{ \sqrt{h }} \right), & 2 \leq i \leq d. \end{cases} \end{equation} In the experiment, we set $\kappa=50$ and $d=10$. The results are reported in Table~\ref{exp:h}. The Chebyshev integration-time scheme again outperforms the constant integration time on this task. \section{Discussion and outlook} The Chebyshev integration time shows promising empirical results for sampling from a variety of strongly log-concave distributions. On the other hand, the theoretical guarantee of acceleration that we provide in this work is only for strongly convex quadratic potentials.
Therefore, a direction left open by our work is to establish provable acceleration guarantees for general strongly log-concave distributions. However, unlike the case of quadratic potentials, the output (position, velocity) of an HMC flow does not have a closed-form expression in general, which makes the analysis much more challenging. A starting point might be to improve the analysis of \citet{CV19}, where a contraction bound between two HMC chains is shown under a small integration time $\eta = O(\frac{1}{ \sqrt{L} })$. Since the scheme of the Chebyshev integration time requires a large integration time $\eta = \Theta\left(\frac{1}{ \sqrt{m} }\right)$ at some iterations of HMC, a natural question is whether a variant of the result of \citet{CV19} can be extended to a large integration time $\eta = \Theta\left(\frac{1}{ \sqrt{m} }\right)$. If one can obtain a non-trivial coupling bound that applies to HMC with a large integration time for sampling from general strongly log-concave distributions, then it might be possible to design a scheme of time-varying integration time with provable acceleration guarantees by using the tools of Chebyshev polynomials as we do in this work. We close with an open question: can ideal HMC with a scheme of time-varying integration time achieve an accelerated rate $O( \sqrt{\kappa} \log \frac{1}{\epsilon} )$ for general smooth strongly log-concave distributions?
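As a concrete illustration of the kind of schedule discussed above, the sketch below shows how integration times derived from the Chebyshev nodes of the spectral interval $[m, L]$ interpolate between $\Theta(1/\sqrt{L})$ and the large value $\Theta(1/\sqrt{m})$. The function name and the normalization $\eta_j = 1/\sqrt{\lambda_j}$ are our own illustration here, not necessarily the paper's exact scheme.

```python
import numpy as np

def chebyshev_integration_times(m, L, K):
    """Integration times from the K Chebyshev nodes of [m, L], via
    eta_j = 1/sqrt(lambda_j).  This normalization is an illustrative
    assumption; the exact mapping used in the paper may differ."""
    j = np.arange(1, K + 1)
    # Chebyshev nodes of the interval [m, L]
    nodes = (L + m) / 2 + (L - m) / 2 * np.cos((2 * j - 1) * np.pi / (2 * K))
    return 1.0 / np.sqrt(nodes)

etas = chebyshev_integration_times(m=1.0, L=10.0, K=16)
# The schedule spans Theta(1/sqrt(L)) up to Theta(1/sqrt(m)): the largest
# time approaches 1/sqrt(m), matching the discussion above.
assert etas.min() > 1 / np.sqrt(10.0) and etas.max() < 1 / np.sqrt(1.0)
```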
https://arxiv.org/abs/0707.2885
Sylvester's Minorant Criterion, Lagrange-Beltrami Identity, and Nonnegative Definiteness
We consider the characterizations of positive definite as well as nonnegative definite quadratic forms in terms of the principal minors of the associated symmetric matrix. We briefly review some of the known proofs, including a classical approach via the Lagrange-Beltrami identity. For quadratic forms in up to 3 variables, we give an elementary and self-contained proof of Sylvester's Criterion for positive definiteness as well as for nonnegative definiteness. In the process, we obtain an explicit version of the Lagrange-Beltrami identity for ternary quadratic forms.
\section{Introduction} \label{sec:intro} Let $A=(a_{ij})$ be an $n\times n$ real symmetric matrix and $$ Q(\mathbf{x})=Q(x_1, \dots, x_n):= \mathbf{x}A\mathbf{x}^T = \sum_{i=1}^n \sum_{j=1}^n a_{ij} \, x_i x_j \, $$ be the corresponding (real) quadratic form in $n$ variables. Recall that the matrix $A$ or the form $Q$ is said to be \emph{positive definite} (resp: \emph{nonnegative definite}\footnote{Sometimes the term \emph{positive semi-definite} is used in place of \emph{nonnegative definite}. On the other hand, some books (e.g., \cite{CJ,Ho,Wi}) define a \emph{positive semi-definite} quadratic form as one which is nonnegative definite but not positive definite.}) if $Q(\mathbf{x})>0$ (resp: $Q(\mathbf{x}) \ge 0$) for all $\mathbf{x}\in {\mathbb R}^n$, $\mathbf{x}\ne \mathbf{0}$. For any matrix, a \emph{minor} is the determinant of a square submatrix. A minor is called a \emph{principal minor\,} if the rows and columns chosen to form the submatrix have the same indices; further, if these indices are consecutive and start from $1$, then it is called a \emph{leading principal minor}. Sylvester's minorant criterion is the well-known result that \begin{equation} A \mbox{ is positive definite} \Longleftrightarrow \mbox{the leading principal minors of $A$ are positive.} \label{posdef} \end{equation} An analogous characterization for nonnegative definite matrices seems relatively less well-known and conspicuous by its absence in most texts on Linear Algebra. It may be tempting to guess that $A$ is nonnegative definite if and only if all the leading principal minors of $A$ are nonnegative. In fact, some books (e.g., \cite[Ch. 2 \S 5]{BB} or \cite[p. 133]{Wi}) appear to state incorrectly that this is true. 
To see that nonnegativity of leading principal minors does not imply nonnegative definiteness, it suffices to consider the matrix $$ A = \left(\begin{array}{rr} 0 & \ 0 \\ 0 & \ -1 \end{array} \right) \quad \mbox{ or the corresponding quadratic form } \quad Q(x,y) = -y^2. $$ In \cite[p.293]{Bo}, this example is given and the author also states that for positive semi-definiteness, there is no straightforward generalization of Sylvester's minorant criterion! Nonetheless there is a simple and natural generalization as follows. \begin{equation} A \mbox{ is nonnegative definite} \Longleftrightarrow \mbox{the principal minors of $A$ are nonnegative.} \label{nonnegdef} \end{equation} The aim of this article is to effectuate a greater awareness of \eqref{nonnegdef} and a related algebraic fact known as the Lagrange-Beltrami identity. The existing proofs in the literature of \eqref{nonnegdef} as well as \eqref{posdef} seem rather involved. (See Remark \ref{rem:onpfs} and the paragraph before Proposition \ref{pro:bqf}.) With this in view, we shall outline a completely self-contained and elementary proof of \eqref{posdef} and \eqref{nonnegdef} when $n \le 3$. There is a good reason why such a proof may be useful and interesting. As is well-known, characterizations of positive definite matrices, when applied to the Hessian matrix, play a crucial role in the local analysis of real-valued functions of several real variables. For example, they give rise to the so called Discriminant Test, which is a useful criterion to determine a local extremum or a saddle point. Characterizations of nonnegative definiteness are also useful here, and more importantly, in the study of convexity and concavity of functions of several variables. (See, for example, \cite[\S 42]{varberg}.) Usually these topics are studied in Calculus courses before the students have an exposure to Linear Algebra and learn notions such as eigenvalues and results such as the Spectral Theorem. 
Also, it is common to restrict to functions of two or three variables. Thus it seems desirable to have a proof for $n\le 3$ that assumes only the definition of the determinant of a $2\times 2$ or $3\times 3$ matrix. As indicated earlier, the pursuit of an elementary proof leads one to the Lagrange-Beltrami identity, which is yet another gem from classical linear algebra that deserves to be better known and better understood. In Section \ref{sec2} below, we explain this identity and illustrate its use in proving \eqref{posdef} and \eqref{nonnegdef} in the simplest case $n=2$. We also comment on some of the existing proofs of \eqref{posdef} and \eqref{nonnegdef} in the general case. Section \ref{sec3} deals with the case $n=3$, and ends with a number of remarks and a related question. \section{Lagrange-Beltrami Identity and Binary Quadratic forms} \label{sec2} Let $A=(a_{ij})$ and $Q(\mathbf{x})$ be as in the Introduction. For $1\le k\le n$, let $$ \Delta_k := \left| \begin{array}{ccc} a_{11} & \dots & a_{1k} \\ \vdots & & \vdots \\ a_{k1} & \dots & a_{kk} \end{array} \right| $$ be the $k$th leading principal minor of $A$. Set $\Delta_0 := 1$. Evidently, a natural way to prove the implication `$\Leftarrow$' in \eqref{posdef} is to show that if $\Delta_k > 0$ for $1\le k\le n$, then $Q(x)$ is a sum of squares that vanishes only when $\mathbf{x} = \mathbf{0}$. The Lagrange-Beltrami identity does just this. It states that if the product $\Delta_1 \cdots \Delta_{n-1}$ is nonzero, then \begin{equation} Q(\mathbf{x})= \sum_{i=1}^n \frac{\Delta_i}{\Delta_{i-1}} \, y_i^2, \quad \mbox{ where } \quad y_i= x_i + \sum_{i<j\le n} b_{ij}x_j \quad \mbox{for } i=1, \dots , n, \label{LB} \end{equation} and each $b_{ij}$ is a rational function in the entries of $A$. 
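For $n=2$, writing the form as $ax^2+2bxy+cy^2$ so that the only coefficient is $b_{12}=b/a$, the identity \eqref{LB} is easy to verify with a computer algebra system; a minimal sketch:

```python
import sympy as sp

a, b, c, x, y = sp.symbols('a b c x y')

# Q(x, y) = a x^2 + 2 b x y + c y^2, with leading principal minors
# Delta_0 = 1, Delta_1 = a, Delta_2 = a c - b^2.
Q = a*x**2 + 2*b*x*y + c*y**2
D0, D1, D2 = sp.Integer(1), a, a*c - b**2

# Lagrange-Beltrami decomposition (valid when a != 0):
# y1 = x + (b/a) y and y2 = y.
y1, y2 = x + (b/a)*y, y
assert sp.simplify(Q - ((D1/D0)*y1**2 + (D2/D1)*y2**2)) == 0
```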
Notice that the equations for $y_1, \dots, y_n$ in terms of $x_1, \dots, x_n$ are in a triangular form; hence if $\Delta_k > 0$ for $1\le k\le n$, then $$ Q(\mathbf{x})= 0 \Longleftrightarrow y_i =0 \mbox{ for } i=1, \dots , n \Longleftrightarrow x_i =0 \mbox{ for } i=1, \dots , n. $$ To prove the other implication `$\Rightarrow$' in \eqref{posdef}, it is customary to use induction on $n$ together with the following fact. \begin{equation} A \mbox{ is positive definite } \Longrightarrow \det A > 0. \label{posdet} \end{equation} This fact follows readily from the following eigenvalue characterization: \begin{equation} A \mbox{ is positive definite } \Longleftrightarrow \mbox{ the eigenvalues of $A$ are positive.} \label{eigenpos} \end{equation} In turn, \eqref{eigenpos} follows from the Spectral Theorem for real symmetric matrices. In the case of nonnegative definiteness, we have a straightforward analogue of \eqref{eigenpos}, namely, \begin{equation} A \mbox{ is nonnegative definite } \Longleftrightarrow \mbox{ the eigenvalues of $A$ are nonnegative.} \label{eigennonneg} \end{equation} This, too, follows from the Spectral Theorem. Also, as a consequence, we have an obvious analogue of \eqref{posdet} that together with induction on $n$ will prove the implication `$\Rightarrow$' in \eqref{nonnegdef}. However, if one is seeking an elementary proof, one should try to avoid the use of the Spectral Theorem and its consequences such as \eqref{eigenpos} and \eqref{eigennonneg}. Also, if some $\Delta_i=0$, then to prove \eqref{nonnegdef}, the Lagrange-Beltrami identity \eqref{LB} seems useless even if we clear the denominators. We will now see that at least for small values of $n$, the Lagrange-Beltrami identity is still useful if we know it explicitly and also if we know some of its {\it avatars}. Moreover, the use of the Spectral Theorem can be avoided by suitable `substitution tricks'.
Let us illustrate by considering the simplest case of binary quadratic forms, that is, the case of $n=2$. \begin{proposition} \label{pro:bqf} Let $Q(x,y):=ax^2 + 2bxy+cy^2$ be a binary quadratic form in the variables $x$ and $y$ with coefficients $a,b,c$ in ${\mathbb R}$. Then $$ Q(x,y) \mbox{ is nonnegative definite } \Longleftrightarrow a\ge 0, \ c\ge 0 \mbox{ and } ac - b^2 \ge 0. $$ \end{proposition} \begin{proof} Suppose $Q(x,y)$ is nonnegative definite. Then $a = Q(1,0) \ge 0$ and $c=Q(0,1)\ge 0$. In case $a\ne 0$, consider $$ Q(b, -a) = ab^2 - 2ab^2 + ca^2 = ca^2 - ab^2 = a (ac-b^2). $$ Since $Q(b, -a)\ge 0$ and $a > 0$, we must have $ac-b^2 \ge 0$. Next, in case $a=0$ and $c\ne 0$, consider $$ Q(c,-b) = ac^2 - 2cb^2 + cb^2 = ac^2 - cb^2 = c(ac - b^2). $$ Since $Q(c,-b)\ge 0$ and $c>0$, we must have $ac-b^2 \ge 0$. Finally, in case $a=0$ and $c=0$, we have $2b = Q(1,1) \ge 0$ and $-2b = Q(1,-1)\ge 0$, which implies that $b=0$; hence, in this case $ac-b^2 = 0$. Conversely, suppose $a\ge 0$, $c\ge 0$ and $ac-b^2 \ge 0$. Let $\varDelta:=ac - b^2$. In case $a > 0$, the identity $$ aQ(x,y) = a^2 x^2 + 2ab xy + acy^2=(ax+by)^2+\varDelta y^2 $$ implies that $Q(x,y)\ge 0$ for all $(x,y)\in {\mathbb R}^2$. In case $c>0$, the identity $$ cQ(x,y) = ac x^2 + 2bc xy + c^2y^2=(bx+cy)^2+\varDelta x^2 $$ implies that $Q(x,y)\ge 0$ for all $(x,y)\in {\mathbb R}^2$. In case $a=c=0$, the condition $ac-b^2 \ge 0$ implies that $b=0$, and hence $Q(x,y) = 0$ for all $(x,y)\in {\mathbb R}^2$. Thus, in any case, $Q(x,y)$ is nonnegative definite. \end{proof} It may be noted that for $n=2$, the above proposition not only yields \eqref{nonnegdef} but the arguments in the proof readily yield \eqref{posdef} as well. In fact, proving \eqref{posdef} is simpler because one has to consider fewer cases. \begin{remark} \label{rem:onpfs} A proof of \eqref{posdef} using the Lagrange-Beltrami identity appears, for example, in \cite{BB, Be, Ho}.
Other proofs, as can be found, for example, in \cite{Fr, Gi, HJ}, use an inductive argument based on \eqref{eigenpos} and something like the Interlacing Theorem \cite[Thm. 7.3.9]{HJ} or a version of the Courant-Fischer ``min-max theorem'' \cite[Thm. 4.2.11]{HJ}. As we have noted already, a proof of \eqref{eigenpos} uses the Spectral Theorem. The Lagrange-Beltrami identity can be proved by an inductive argument using essentially the ideas of Gaussian elimination (cf. \cite[Ch. 5, \S 2]{Be} or \cite[\S 9.17]{Ho}). As for the characterization \eqref{nonnegdef} of nonnegative definiteness, one of the implications in \eqref{nonnegdef} appears as an exercise in \cite[p. 405]{HJ}. A complete statement together with some related characterizations and an outline of a proof can be found in \cite[\S 9.3]{St}, while \cite[Thm. 9.4.9]{RB} gives a more detailed proof. These proofs are not difficult except that they use all those `standard theorems' that one usually discusses toward the very end of a serious course in Linear Algebra. \end{remark} \section{Ternary Quadratic Forms} \label{sec3} The substitution tricks and an explicit version of the Lagrange-Beltrami identity together with its {\it avatars} can be used in proving \eqref{nonnegdef} for ternary quadratic forms, that is, for $n=3$, as follows. In the statement below, we have avoided the use of subscripts for the entries of $A$ or the coefficients of $Q$. The only thing used in the proof is the corresponding result for binary quadratic forms (Proposition \ref{pro:bqf}). \begin{proposition} \label{Prop:5.3} Let $Q(x,y, z):= ax^2 +2bxy + 2pxz + cy^2 + 2qyz + rz^2$ be a ternary quadratic form in the variables $x$, $y$ and $z$ with coefficients $a,b,c,p,q,r$ in ${\mathbb R}$. Let $$ A:=\left(\begin{array}{lll} a & \ b & \ p \\ b & \ c & \ q \\ p & \ q & \ r \end{array} \right) \quad {\rm and} \quad \Delta := \det A = p(bq-cp) + q(bp - aq) + r (ac-b^2).
$$ Then $Q(x,y,z)$ is nonnegative definite if and only if all the principal minors of $A$ are nonnegative, i.e., $$ a\ge 0, \ c\ge 0, \ r\ge 0, \ ac-b^2\ge 0, \ cr-q^2\ge 0, \ ar-p^2\ge 0 \mbox{ and } \Delta \ge 0. $$ \end{proposition} \begin{proof} Suppose $Q(x,y,z)$ is nonnegative definite. Then the binary quadratic forms $Q(x,y,0)$, $Q(x,0,z)$ and $Q(0,y,z)$ are nonnegative definite. Hence, by Proposition \ref{pro:bqf}, each of $a,c,r, \, ac-b^2,\, cr-q^2$ and $ar-p^2$ is nonnegative. Further, we observe that $ Q(bq - cp, \ bp - aq, \ ac - b^2) = (ac - b^2)\Delta. $ Hence if $ac - b^2 \ne 0$, then $\Delta \ge 0$. By permuting the variables $x$, $y$ and $z$ cyclically, we obtain $Q(cr - q^2, \ pq - br, \ bq - cp) = (cr - q^2)\Delta$ and $Q(pq - br, \ ar - p^2, \ bp - aq) = (ar - p^2)\Delta$. Hence if $cr-q^2 \ne 0$ or if $ar-p^2 \ne 0$, then $\Delta \ge 0$. Finally, suppose $ac-b^2 = cr - q^2 = ar - p^2 = 0$. Now, if $a=0$, then we must have $b=p=0$. Similarly, if $c=0$, then $b=q=0$, while if $r=0$, then $p=q=0$. It follows that if $acr=0$, then $\Delta =0$. Next, suppose $acr\ne 0$. Then $bpq\ne 0$ since $b^2 =ac$, $p^2 = ar$ and $q^2 = cr$. Consider $Q(b, -a, z) = 2z (bp-aq) + rz^2$. Since $Q(x,y,z)$ is nonnegative definite, we see that $2(bp-aq) + rz \ge 0$ if $z > 0$ and $2(bp-aq) + rz \le 0$ if $z < 0$. Taking limits as $z \to 0^+$ and as $z \to 0^-$, we see that $(bp-aq)\ge 0$ and also $(bp-aq)\le 0$. Consequently, $bp-aq=0$, i.e., $aq=bp$. Hence $abq=b^2p = acp$, and therefore, $bq = cp$. Thus $bq-cp=0$, $bp-aq=0$ and $ac-b^2=0$. It follows that $\Delta =0$. Conversely, suppose each of $a,c,r, \, ac-b^2,\, ar-p^2, \, cr-q^2$ and $\Delta$ is nonnegative. In case $a=0$, the inequalities $ac-b^2 \ge 0$ and $ar-p^2\ge 0$ imply that $b=0$ and $p=0$. Thus, in this case, $Q(x,y,z) = cy^2+ 2qyz + rz^2$, and this is nonnegative definite by Proposition \ref{pro:bqf}.
Similarly, if $c=0$, then $b=q=0$, while if $r=0$, then $p=q=0$, and in either of these cases, $Q(x,y,z)$ is nonnegative definite by Proposition \ref{pro:bqf}. Suppose $a>0$, $c>0$ and $r>0$. If $ac-b^2 >0$, then the identity $$ a (ac-b^2) Q(x,y,z) = (ac-b^2)(ax+by+pz)^2 + [(ac-b^2)y+(aq-bp)z]^2 + a\Delta z^2 $$ implies that $Q(x,y,z)$ is nonnegative definite. Similarly, if $cr-q^2 >0$, then $$ c(cr-q^2)Q(x,y,z) = (cr-q^2)(bx+cy+qz)^2 + [(cr-q^2)z + (cp-bq)x]^2 + c\Delta x^2 $$ implies that $Q(x,y,z)$ is nonnegative definite, whereas if $ar-p^2 >0$, then $$ r(ar-p^2)Q(x,y,z) = (ar-p^2)(px+qy+r z)^2 + [(ar-p^2)x+(br-pq)y]^2 + r\Delta y^2 $$ implies that $Q(x,y,z)$ is nonnegative definite. Finally, suppose $ac-b^2=ar-p^2=cr-q^2=0$. Then $bpq\ne 0$ because $a$, $c$ and $r$ are positive. Moreover, $b^2p^2 = (ac)(ar) = a^2(cr) = a^2q^2$. Hence $bp = \pm aq$. On the other hand, $\Delta = 2q(bp-aq)$, and so if $bp = -aq$, then $\Delta = -4a q^2 < 0$, which is a contradiction. It follows that $bp = aq$ and as a consequence, $aQ(x,y,z) = (ax+by+pz)^2$, which implies that $Q(x,y,z)$ is nonnegative definite. \end{proof} \begin{remarks} \label{Rem:5.4} (i) The above proof of Proposition \ref{Prop:5.3} can be easily adapted to prove \eqref{posdef} for $n=3$. In fact, proving \eqref{posdef} would be much simpler since one does not have to bother with degenerate cases. (ii) The identity $ a (ac-b^2) Q(x,y,z) = (ac-b^2)(ax+by+pz)^2 + [(ac-b^2)y+(aq-bp)z]^2 + a\Delta z^2 \ $ appearing in the proof of Proposition \ref{Prop:5.3} may be viewed as an explicit version of the Lagrange-Beltrami identity for $n=3$. Indeed, when $a\ne 0$ and $ac-b^2 \ne 0$, it can be written as $$ Q(x,y,z) = \frac{a}{1} \left( x + \frac{b}{a} y + \frac{p}{a}z\right)^2 + \frac{ac-b^2}{a}\left(y + \frac{aq-bp}{ac-b^2} z\right)^2 + \frac{\Delta}{ac-b^2} z^2.
$$ The other two similar looking identities appearing in the proof of Proposition \ref{Prop:5.3} may be viewed as distinct {\it avatars} of the Lagrange-Beltrami identity for $n=3$. In all, there are $6$ such identities expressing $M_1M_2Q(x,y,z)$ as a linear combination of squares, where $(M_1, M_2)$ is any nested sequence of $1\times 1$ and $2\times 2$ principal minors of $A$. These readily imply a generalization \cite[Thm. 7.2.5]{HJ} of \eqref{posdef} in the case $n=3$. Namely, the positivity of any nested sequence of leading principal minors implies positive definiteness. (iii) Changing $A$ to $-A$ in \eqref{posdef} and \eqref{nonnegdef}, we readily obtain characterizations of negative definite matrices as well as of nonpositive definite matrices. From the characterizations of nonnegative definite quadratic forms and nonpositive definite quadratic forms, we can deduce a characterization of indefinite quadratic forms, that is, of those (real) quadratic forms which take positive as well as negative values. This can be quite useful in the study of saddle points. (iv) For the sake of simplicity, and with a view toward applications to Calculus, we have restricted ourselves to real symmetric matrices. However, the results and proofs discussed in this article extend easily to complex hermitian matrices. (v) It will be interesting to obtain an explicit version of the Lagrange-Beltrami identity for any $n>3$, and a self-contained `high-school algebra' proof to show the equivalence of nonnegative definiteness of any quadratic form and the nonnegativity of all the principal minors of the associated matrix. \end{remarks}
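The explicit identity in Remark (ii) and the counterexample from the Introduction can both be machine-checked. The sketch below (the helper `principal_minors` is our own, not from the text) verifies the identity symbolically and confirms that the $2\times 2$ counterexample has a negative principal minor despite its leading principal minors being nonnegative:

```python
import itertools
import sympy as sp

a, b, c, p, q, r, x, y, z = sp.symbols('a b c p q r x y z')

A = sp.Matrix([[a, b, p], [b, c, q], [p, q, r]])
Q = (sp.Matrix([[x, y, z]]) * A * sp.Matrix([x, y, z]))[0]
Delta = A.det()

# Delta agrees with the expansion given in the proposition's statement.
assert sp.expand(Delta - (p*(b*q - c*p) + q*(b*p - a*q) + r*(a*c - b**2))) == 0

# The explicit Lagrange-Beltrami identity from Remark (ii).
lhs = a * (a*c - b**2) * Q
rhs = ((a*c - b**2) * (a*x + b*y + p*z)**2
       + ((a*c - b**2)*y + (a*q - b*p)*z)**2
       + a * Delta * z**2)
assert sp.expand(lhs - rhs) == 0

def principal_minors(M):
    """All principal minors of a square sympy Matrix."""
    n = M.shape[0]
    return [M[list(idx), list(idx)].det()
            for k in range(1, n + 1)
            for idx in itertools.combinations(range(n), k)]

# Counterexample from Section 1: the leading principal minors of
# diag(0, -1) are both 0, yet one principal minor is -1, so the matrix
# is not nonnegative definite.
B = sp.Matrix([[0, 0], [0, -1]])
assert min(principal_minors(B)) == -1
```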
https://arxiv.org/abs/1809.09036
Combinatorial interpretations of Lucas analogues of binomial coefficients and Catalan numbers
The Lucas sequence is a sequence of polynomials in s and t defined recursively by {0}=0, {1}=1, and {n}=s{n-1}+t{n-2} for n >= 2. On specialization of s and t one can recover the Fibonacci numbers, the nonnegative integers, and the q-integers [n]_q. Given a quantity which is expressed in terms of products and quotients of nonnegative integers, one obtains a Lucas analogue by replacing each factor of n in the expression with {n}. It is then natural to ask if the resulting rational function is actually a polynomial in s and t with nonnegative integer coefficients and, if so, what it counts. The first simple combinatorial interpretation for this polynomial analogue of the binomial coefficients was given by Sagan and Savage, although their model resisted being used to prove identities for these Lucasnomials or extending their ideas to other combinatorial sequences. The purpose of this paper is to give a new, even more natural model for these Lucasnomials using lattice paths which can be used to prove various equalities as well as extending to Catalan numbers and their relatives, such as those for finite Coxeter groups.
\section{Introduction} Let $s$ and $t$ be two indeterminates. The corresponding {\em Lucas sequence} is defined inductively by letting $\{0\}=0$, $\{1\}=1$, and $$ \{n\}=s\{n-1\}+t\{n-2\} $$ for $n\ge2$. For example, $$ \{2\}=s, \{3\}=s^2+t, \{4\}=s^3+2st, $$ and so forth. Clearly when $s=t=1$ one recovers the Fibonacci sequence. When $s=2$ and $t=-1$ we have $\{n\}=n$. Furthermore if $s=1+q$ and $t=-q$ then $\{n\}=[n]_q$ where $[n]_q=1+q+\dots+q^{n-1}$ is the usual $q$-analogue of $n$. So when proving theorems about the Lucas sequence, one gets results about the Fibonacci numbers, the nonnegative integers, and $q$-analogues for free. It is easy to give a combinatorial interpretation to $\{n\}$ in terms of tilings. Given a row of $n$ squares, let ${\cal T}(n)$ denote the set of tilings $T$ of this strip by monominoes which cover one square and dominoes which cover two adjacent squares. Figure~\ref{T(3)} shows the tilings in ${\cal T}(3)$. Given any configuration of tiles $T$ we define its {\em weight} to be $$ \wt T = s^{\text{number of monominoes in $T$}}\ t^{\text{number of dominoes in $T$}}. $$ Similarly, given any set of tilings ${\cal T}$ we define its {\em weight} to be $$ \wt{\cal T}=\sum_{T\in{\cal T}} \wt T. $$ To illustrate, $\wt({\cal T}(3))=s^3+2st=\{4\}$. This presages the next result which follows quickly by induction and is well known so we omit the proof. \begin{prop} \label{T(n-1)} For all $n\ge1$ we have \vs{3pt} \eqqed{ \{n\}=\wt({\cal T}(n-1)).
} \end{prop} \begin{figure} \begin{center} \begin{tikzpicture} \draw (0,0) grid (3,1); \fill (.5,.5) circle (.1); \fill (1.5,.5) circle (.1); \fill (2.5,.5) circle (.1); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) grid (3,1); \draw (.5,.5)--(1.5,.5); \fill (.5,.5) circle (.1); \fill (1.5,.5) circle (.1); \fill (2.5,.5) circle (.1); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) grid (3,1); \draw (1.5,.5)--(2.5,.5); \fill (.5,.5) circle (.1); \fill (1.5,.5) circle (.1); \fill (2.5,.5) circle (.1); \end{tikzpicture} \caption{The tilings in ${\cal T}(3)$} \label{T(3)} \end{center} \end{figure} Given any quantity which is defined using products and quotients of integers, we can replace each occurrence of $n$ in the expression by $\{n\}$ to obtain its {\em Lucas analogue}. One can then ask if the resulting rational function is actually a polynomial in $s,t$ with nonnegative integer coefficients and, if it is, whether it is the generating function for some set of combinatorial objects. Let ${\mathbb N}$ denote the nonnegative integers so that we are interested in showing that various polynomials are in ${\mathbb N}[s,t]$. We begin by discussing the case of binomial coefficients. For $n\ge0$, the Lucas analogue of a factorial is the {\em Lucastorial} $$ \{n\}! = \{1\}\{2\}\dots\{n\}. $$ Now given $0\le k\le n$ we define the corresponding {\em Lucasnomial} to be \begin{equation} \label{Lnomial} { n \brace k} = \frac{\{n\}!}{\{k\}!\{n-k\}!}. \end{equation} It is not hard to see that this function satisfies an analogue of the binomial recursion (Proposition~\ref{LnomRr} below) and so one can inductively prove that it is in ${\mathbb N}[s,t]$. The first simple combinatorial interpretation of the Lucasnomials was given by Sagan and Savage~\cite{ss:cib} using tilings of Young diagrams inside a rectangle. Earlier but more complicated models were given by Gessel and Viennot~\cite{gv:bdp} and by Benjamin and Plott~\cite{bp:caf}.
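The recursion for $\{n\}$, the identity $\{n\}=\wt({\cal T}(n-1))$, and the three specializations above are easy to check symbolically. The following is a small sketch (the helper names are ours, not from the paper) using sympy:

```python
import sympy as sp

s, t, q = sp.symbols('s t q')

def lucas(n):
    """The Lucas polynomial {n}: {0}=0, {1}=1, {n}=s{n-1}+t{n-2}."""
    a, b = sp.Integer(0), sp.Integer(1)
    for _ in range(n):
        a, b = b, sp.expand(s*b + t*a)
    return a

def strip_weight(n):
    """wt(T(n)): sum of s^(#monominoes) * t^(#dominoes) over tilings of n squares."""
    if n == 0:
        return sp.Integer(1)
    if n == 1:
        return s
    # a tiling starts with a monomino or a domino
    return sp.expand(s*strip_weight(n - 1) + t*strip_weight(n - 2))

# {n} = wt(T(n-1)) for small n, as in the proposition above
assert all(sp.expand(lucas(n) - strip_weight(n - 1)) == 0 for n in range(1, 10))
# specializations: Fibonacci numbers, nonnegative integers, q-integers
assert lucas(6).subs({s: 1, t: 1}) == 8                                  # F_6
assert lucas(6).subs({s: 2, t: -1}) == 6                                 # {n} = n
assert sp.expand(lucas(4).subs({s: 1 + q, t: -q}) - (1 + q + q**2 + q**3)) == 0
```

The recursion for `strip_weight` simply conditions on whether a tiling of the strip begins with a monomino or a domino.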
Despite its simplicity, there were three difficulties with the Sagan-Savage approach. The model was not flexible enough to permit combinatorial demonstrations of straightforward identities involving the Lucasnomials. Their ideas did not seem to extend to any other related combinatorial sequences such as the Catalan numbers. And their model contained certain dominoes in the tilings which appeared in an unintuitive manner. The goal of this paper is to present a new construction which addresses these problems. We should also mention related work on a $q$-version of these ideas. As noted above, letting $s=t=1$ reduces $\{n\}$ to $F_n$, the $n$th Fibonacci number. One can then make a $q$-Fibonacci analogue of a quotient of products by replacing each factor of $n$ by $[F_n]_q$. Working in this framework, some results parallel to ours were found independently during the working sessions of the Algebraic Combinatorics Seminar at the Fields Institute with the active participation of Farid Aliniaeifard, Nantel Bergeron, Cesar Ceballos, Tom Denton, and Shu Xiao Li~\cite{abcdl}.
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,6)--(0,0); \draw (1,5)--(1,0); \draw (2,4)--(2,0); \draw (3,3)--(3,0); \draw (4,2)--(4,0); \draw (5,1)--(5,0); \draw (5,0)--(0,0); \draw (5,1)--(0,1); \draw (4,2)--(0,2); \draw (3,3)--(0,3); \draw (2,4)--(0,4); \draw (1,5)--(0,5); \draw[->] (0,-4)--(0,8); \draw[->] (-4,0)--(8,0); \draw[line width=1mm] (0,6)--(0,0)--(6,0) (5,0)--(5,1)--(4,1)--(4,2)--(3,2)--(3,3)--(2,3)--(2,4)--(1,4)--(1,5)--(0,5); \node at (1,-0.6) {$1$}; \node at (2,-0.6) {$2$}; \node at (3,-0.6) {$3$}; \node at (4,-0.6) {$4$}; \node at (5,-0.6) {$5$}; \node at (6,-0.6) {$6$}; \node at (-0.6,1) {$1$}; \node at (-0.6,2) {$2$}; \node at (-0.6,3) {$3$}; \node at (-0.6,4) {$4$}; \node at (-0.6,5) {$5$}; \node at (-0.6,6) {$6$}; \end{tikzpicture} \hs{20pt} \begin{tikzpicture}[scale=0.6] \draw (0,6)--(0,0); \draw (1,5)--(1,0); \draw (2,4)--(2,0); \draw (3,3)--(3,0); \draw (4,2)--(4,0); \draw (5,1)--(5,0); \draw (5,0)--(0,0); \draw (5,1)--(0,1); \draw (4,2)--(0,2); \draw (3,3)--(0,3); \draw (2,4)--(0,4); \draw (1,5)--(0,5); \draw[->] (0,-4)--(0,8); \draw[->] (-4,0)--(8,0); \draw[line width=1mm] (0,6)--(0,0)--(6,0) (5,0)--(5,1)--(4,1)--(4,2)--(3,2)--(3,3)--(2,3)--(2,4)--(1,4)--(1,5)--(0,5); \foreach \x in {.5,1.5,...,4.5 } \filldraw (\x,.5) circle(3pt); \foreach \x in {.5,1.5,...,3.5 } \filldraw (\x,1.5) circle(3pt); \foreach \x in {.5,1.5,...,2.5 } \filldraw (\x,2.5) circle(3pt); \foreach \x in {.5,1.5} \filldraw (\x,3.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,4.5) circle(3pt); \draw (2.5,.5)--(3.5,.5) (.5,1.5)--(1.5,1.5) (1.5,2.5)--(2.5,2.5); \node at (1,-0.6) {$1$}; \node at (2,-0.6) {$2$}; \node at (3,-0.6) {$3$}; \node at (4,-0.6) {$4$}; \node at (5,-0.6) {$5$}; \node at (6,-0.6) {$6$}; \node at (-0.6,1) {$1$}; \node at (-0.6,2) {$2$}; \node at (-0.6,3) {$3$}; \node at (-0.6,4) {$4$}; \node at (-0.6,5) {$5$}; \node at (-0.6,6) {$6$}; \end{tikzpicture} \end{center} \caption{$\delta_6$ embedded in $\mathbb{R}^2$ on the left 
and a tiling on the right.} \label{R2} \end{figure} We will need to consider lattice paths inside tilings of Young diagrams. Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_l)$ be an integer partition, that is, a weakly decreasing sequence of positive integers. The $\lambda_i$ are called {\em parts} and the {\em length} of $\lambda$ is the number of parts and denoted $l(\lambda)$. The {\em Young diagram} of $\lambda$ is an array of left-justified rows of boxes which we will write in French notation so that $\lambda_i$ is the number of boxes in the $i$th row from the bottom of the diagram. We will also use the notation $\lambda$ for the diagram of $\lambda$. Furthermore, we will embed this diagram in the first quadrant of a Cartesian coordinate system with the boxes being unit squares and the southwest-most corner of $\lambda$ being the origin. Finally, it will be convenient in what follows to consider the unit line segments from $(\lambda_1,0)$ to $(\lambda_1+1,0)$ and from $(0,l(\lambda))$ to $(0,l(\lambda)+1)$ to be part of $\lambda$'s diagram. On the left in Figure~\ref{R2} the diagram of $\lambda=\delta_6$ is outlined with thick lines where $$ \delta_n=(n-1,n-2,\dots,1). $$ A {\em tiling} of $\lambda$ is a tiling $T$ of the rows of the diagram with monominoes and dominoes. We let ${\cal T}(\lambda)$ denote the set of all such tilings. An element of ${\cal T}(\delta_6)$ is shown on the right in Figure~\ref{R2}. We write $\wt\lambda$ for the more cumbersome $\wt({\cal T}(\lambda))$. The fact that $\wt\delta_n=\{n\}!$ follows directly from the definitions. So to prove that $\{n\}!/p(s,t)$ is a polynomial for some polynomial $p(s,t)$, it suffices to partition ${\cal T}(\delta_n)$ into subsets, which we will call {\em blocks}, such that $\wt\beta$ is evenly divisible by $p(s,t)$ for all blocks $\beta$. We will use lattice paths inside $\delta_n$ to create the partitions where the choice of path will vary depending on which Lucas analogue we are considering. 
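Since the rows of a Young diagram are tiled independently, $\wt\lambda$ factors as the product of the row weights; in particular $\wt\delta_n=\{2\}\{3\}\cdots\{n\}=\{n\}!$. This fact can be confirmed symbolically; a sketch with our own helper names:

```python
import sympy as sp

s, t = sp.symbols('s t')

def lucas(n):
    # {0}=0, {1}=1, {n}=s{n-1}+t{n-2}
    a, b = sp.Integer(0), sp.Integer(1)
    for _ in range(n):
        a, b = b, sp.expand(s*b + t*a)
    return a

def lucastorial(n):
    # {n}! = {1}{2}...{n}
    out = sp.Integer(1)
    for i in range(1, n + 1):
        out = sp.expand(out * lucas(i))
    return out

def staircase_weight(n):
    # rows of delta_n have lengths n-1, ..., 1, and a row of m boxes
    # has tiling weight wt(T(m)) = {m+1}
    out = sp.Integer(1)
    for m in range(1, n):
        out = sp.expand(out * lucas(m + 1))
    return out

assert all(sp.expand(staircase_weight(n) - lucastorial(n)) == 0 for n in range(1, 8))
```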
The rest of this paper is organized as follows. In the next section, we give our new combinatorial interpretation for the Lucasnomials. In Section~\ref{ifl} we prove two identities using this model. The demonstration for one of them is straightforward, but the other requires a surprisingly intricate algorithm. Section~\ref{cfc} is devoted to showing how our model can be modified to give combinatorial interpretations to Lucas analogues of the Catalan and Fuss-Catalan numbers. In the following section we prove that the Coxeter-Catalan numbers for any Coxeter group have polynomial Lucas analogues and that the same is true for the infinite families of Coxeter groups in the Fuss-Catalan case. In fact we generalize these results by considering $d$-divisible diagrams, $d$ being a positive integer, where each row has length one less than a multiple of $d$. We end with a section containing comments and directions for future research. \section{Lucasnomials} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,6)--(0,0); \draw (1,5)--(1,0); \draw (2,4)--(2,0); \draw (3,3)--(3,0); \draw (4,2)--(4,0); \draw (5,1)--(5,0); \draw (5,0)--(0,0); \draw (5,1)--(0,1); \draw (4,2)--(0,2); \draw (3,3)--(0,3); \draw (2,4)--(0,4); \draw (1,5)--(0,5); \draw[->] (0,-4)--(0,8); \draw[->] (-4,0)--(8,0); \draw[line width=1mm] (3,0)--(2,0)--(2,2)--(1,2)--(1,5)--(0,5)--(0,6); \foreach \x in {.5,1.5,...,4.5 } \filldraw (\x,.5) circle(3pt); \foreach \x in {.5,1.5,...,3.5 } \filldraw (\x,1.5) circle(3pt); \foreach \x in {.5,1.5,...,2.5 } \filldraw (\x,2.5) circle(3pt); \foreach \x in {.5,1.5} \filldraw (\x,3.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,4.5) circle(3pt); \draw (2.5,.5)--(3.5,.5) (.5,1.5)--(1.5,1.5) (1.5,2.5)--(2.5,2.5); \node at (3,-0.6) {$(3,0)$}; \node at (-1,6) {$(0,6)$}; \end{tikzpicture} \hs{20pt} \begin{tikzpicture}[scale=0.6] \draw (0,6)--(0,0); \draw (1,5)--(1,0); \draw (2,4)--(2,0); \draw (3,3)--(3,0); \draw (4,2)--(4,0); \draw (5,1)--(5,0); \draw
(5,0)--(0,0); \draw (5,1)--(0,1); \draw (4,2)--(0,2); \draw (3,3)--(0,3); \draw (2,4)--(0,4); \draw (1,5)--(0,5); \draw[->] (0,-4)--(0,8); \draw[->] (-4,0)--(8,0); \draw[line width=1mm] (3,0)--(2,0)--(2,2)--(1,2)--(1,5)--(0,5)--(0,6); \foreach \x in {2.5,3.5,4.5 } \filldraw (\x,.5) circle(3pt); \foreach \x in {.5,1.5} \filldraw (\x,1.5) circle(3pt); \foreach \x in {1.5,2.5 } \filldraw (\x,2.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,3.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,4.5) circle(3pt); \draw (2.5,.5)--(3.5,.5) (.5,1.5)--(1.5,1.5) (1.5,2.5)--(2.5,2.5); \node at (3,-0.6) {$(3,0)$}; \node at (-1,6) {$(0,6)$}; \end{tikzpicture} \end{center} \caption{The path for the tiling in Figure~\ref{R2} and the corresponding partial tiling} \label{path} \end{figure} In this section we will use the method outlined in the introduction to show that the Lucasnomials defined by~\ree{Lnomial} are polynomials in $s$ and $t$. In particular, we will prove the following result. \begin{thm} \label{Lnom:ptn} Given $0\le k\le n$ there is a partition of ${\cal T}(\delta_n)$ such that $\{k\}!\{n-k\}!$ divides $\wt\beta$ for every block $\beta$. \eth \begin{proof} Given $T\in{\cal T}(\delta_n)$ we will describe the block $\beta$ containing it by using a lattice path $p$. The path will start at $(k,0)$ and end at $(0,n)$ taking unit steps north ($N$) and west ($W$). If $p$ is at a lattice point $(x,y)$ then it moves north to $(x,y+1)$ as long as doing so will not cross a domino and not take $p$ out of the diagram of $\delta_n$. Otherwise, $p$ moves west to $(x-1,y)$. For example, if $T$ is the tiling in Figure~\ref{R2} then the resulting path is shown on the left in Figure~\ref{path}. Indeed, initially $p$ is forced west by a domino above, then moves north twice until forced west again by a domino, then moves north three times until doing so again would take it out of $\delta_6$, and finishes by following the boundary of the diagram. In this case we write $p=WNNWNNNWN$. 
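The greedy rule defining $p$ (go north unless that would cross a domino or leave the diagram, otherwise go west) is easy to implement. In the sketch below, with our own data representation, a tiling of $\delta_n$ is encoded bottom-up as a list of rows, each row a list of tile sizes:

```python
def path_for_tiling(n, k, rows):
    """Lattice path from (k,0) to (0,n) in delta_n: take a north step unless
    it would cross a domino or leave the diagram, else take a west step.
    rows[i] lists the tile sizes (1 = monomino, 2 = domino) of row i+1 from
    the bottom, which has n-1-i boxes."""
    def crosses_domino(row, x):
        # a north step at abscissa x crosses a domino covering cells x and x+1
        c = 1
        for size in row:
            if size == 2 and c == x:
                return True
            c += size
        return False

    x, y, steps = k, 0, []
    while (x, y) != (0, n):
        # x <= n-1-y keeps the step inside delta_n; x == 0 is the left edge
        if x == 0 or (x <= n - 1 - y and not crosses_domino(rows[y], x)):
            y, steps = y + 1, steps + ['N']
        else:
            x, steps = x - 1, steps + ['W']
    return ''.join(steps)

# the tiling of delta_6 from the running example, bottom row first
tiling = [[1, 1, 2, 1], [2, 1, 1], [1, 2], [1, 1], [1]]
assert path_for_tiling(6, 3, tiling) == 'WNNWNNNWN'
```

The assertion reproduces the path $p=WNNWNNNWN$ computed in the text.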
The north steps of $p$ are of two kinds: those which are immediately preceded by a west step and those which are not. Call the former $NL$ steps (since the $W$ and $N$ step together look like a letter ell) and the latter $NI$ steps. In our example the first, third, and sixth north steps are $NL$ while the others are $NI$. The block containing the original tiling $T$ consists of all tilings in ${\cal T}(\delta_n)$ which agree with $T$ to the right of each $NL$ step and to the left of each $NI$ step. Returning to Figure~\ref{path}, the diagram on the right shows the tiles which are common to all tilings in $T$'s block whereas the squares which are blank can be tiled arbitrarily. Now there are $k-i$ boxes to the left of the $i$th $NL$ step and thus the weight of tiling these boxes is $\{k-i+1\}$. Since there are $k$ such steps, the total contribution to $\wt\beta$ is $\{k\}!$ for the boxes to the left of these steps. Similarly, $\{n-k\}!$ is the contribution to $\wt\beta$ of the boxes to the right of the $NI$ steps. This completes the proof. \end{proof}\medskip The tiling showing the fixed tiles for a given block $\beta$ in the partition of the previous theorem will be called a {\em binomial partial tiling} $B$. As just proved, $\wt\beta = \{k\}!\{n-k\}!\wt B$. Thus we have the following result. \begin{cor} \label{bpartial} Given $0\le k\le n$ we have $$ {n \brace k}=\sum_B \wt B $$ where the sum is over all binomial partial tilings associated with lattice paths from $(k,0)$ to $(0,n)$ in $\delta_n$. Thus ${n \brace k}\in{\mathbb N}[s,t]$.
\hfill \qed \end{cor} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,3)--(0,0); \draw (1,3)--(1,0); \draw (2,3)--(2,0); \draw (3,3)--(3,0); \draw (3,0)--(0,0); \draw (3,1)--(0,1); \draw (3,2)--(0,2); \draw (3,3)--(0,3); \foreach \x in {.5,1.5,2.5 } \foreach \y in {.5,1.5,2.5 } \filldraw (\x,\y) circle(3pt); \draw (.5,2.5)--(1.5,2.5) (1.5,.5)--(1.5,1.5) (2.5,.5)--(2.5,1.5); \draw[line width=1mm] (0,0)--(1,0)--(1,2)--(2,2)--(2,3)--(3,3); \end{tikzpicture} \end{center} \caption{The tiling of a rectangle corresponding to the partial tiling in Figure~\ref{path}} \label{R} \end{figure} We end this section by describing the relationship between the tilings we have been considering and those in the model of Sagan and Savage. In their interpretation, one considered all lattice paths $p$ in a $k\times(n-k)$ rectangle $R$ starting at the southwest corner, ending at the northeast corner, and taking unit steps north and east. The path divides $R$ into two partitions: $\lambda$ whose parts are the rows of boxes in $R$ northwest of $p$ and $\lambda^*$ whose parts are the columns of $R$ southeast of $p$. One then considers all tilings of $R$ which are tilings of $\lambda$ (so any dominoes are horizontal) and of $\lambda^*$ (so any dominoes are vertical) such that each tiling of a column of $\lambda^*$ begins with a domino. They then proved that ${n\brace k}$ is the generating function for all such tilings. But there is a bijection between these tilings and our binomial partial tilings where one uses the fixed tilings to the left of the $NI$ steps for the rows of $\lambda$ and those to the right of the $NL$ steps for $\lambda^*$. Figure~\ref{R} shows the tiling of $R$ corresponding to the partial tiling in Figure~\ref{path}. Note that dominoes in the tiling of $\lambda^*$ occur naturally in the context of binomial partial tilings rather than just being an imposed condition.
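The conclusion ${n\brace k}\in{\mathbb N}[s,t]$ of Corollary~\ref{bpartial} is easy to confirm symbolically for small cases. A sketch, with helper names of our own choosing:

```python
import sympy as sp

s, t = sp.symbols('s t')

def lucas(n):
    # {0}=0, {1}=1, {n}=s{n-1}+t{n-2}
    a, b = sp.Integer(0), sp.Integer(1)
    for _ in range(n):
        a, b = b, sp.expand(s*b + t*a)
    return a

def lucastorial(n):
    out = sp.Integer(1)
    for i in range(1, n + 1):
        out *= lucas(i)
    return out

def lucasnomial(n, k):
    # {n}!/({k}!{n-k}!), cancelled down to a polynomial in s and t
    return sp.expand(sp.cancel(lucastorial(n) / (lucastorial(k)*lucastorial(n - k))))

# e.g. {4 brace 2} = (s^2+t)(s^2+2t) = s^4 + 3s^2 t + 2t^2
assert sp.expand(lucasnomial(4, 2) - (s**4 + 3*s**2*t + 2*t**2)) == 0
# nonnegative integer coefficients for all 0 <= k <= n <= 7
assert all(c.is_integer and c >= 0
           for n in range(8) for k in range(n + 1)
           for c in sp.Poly(lucasnomial(n, k), s, t).coeffs())
```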
And, as we will see, the viewpoint of partial tilings is much more flexible than that of tilings of a rectangle. \section{Identities for Lucasnomials} \label{ifl} We will now use Corollary~\ref{bpartial} to prove various identities for Lucasnomials. We start with the analogue of the binomial recursion mentioned in the introduction. In the Sagan and Savage paper, this formula was first proved by other means and then used to obtain their combinatorial interpretation. Here, the recursion follows easily from our model. \begin{prop} \label{LnomRr} For $0<k<n$ we have $$ {n\brace k} = \{k+1\} {n-1\brace k} + t \{n-k-1\} {n-1\brace k-1}. $$ \end{prop} \begin{proof} By Corollary~\ref{bpartial} it suffices to partition the set of partial tilings for ${n\brace k}$ into two subsets whose generating functions give the two terms of the recursion. First consider the binomial partial tilings $B$ whose path $p$ starts from $(k,0)$ with an $N$ step. Since this is an $NI$ step, the portion of the first row to the left of the step is tiled, and by Proposition~\ref{T(n-1)} the generating function for such tilings is $\{k+1\}$. Now $p$ continues from $(k,1)$ through the remaining rows which form the partition $\delta_{n-1}$. It follows that the weights of this portion of the corresponding partial tilings sum to ${n-1\brace k}$. Thus these $p$ give the first term of the recursion. Suppose now that $p$ starts with a $W$ step. It follows that the second step of $p$ must be $N$ and so an $NL$ step. In this case the portion of the first row to the right of the $NL$ step is tiled and that tiling begins with a domino. Using the same reasoning as in the previous paragraph, one sees that the weight generating function for the tiling of the first row is $t \{n-k-1\}$ while ${n-1\brace k-1}$ accounts for the rest of the rows of the tiling. \end{proof}\medskip We will next give a bijective proof of the symmetry of the Lucasnomials. 
In particular, we will construct an involution to demonstrate the following result. \begin{prop} \label{sym} For $0\le t\le k\le n$ we have \begin{equation} \label{symeq} \{k\}\{k-1\}\dots\{k-t+1\}{n\brace k} = \{n-k+t\}\dots\{n-k+1\}{n \brace n-k+t}. \end{equation} In particular, when $t=0$, $$ { n \brace k} = {n\brace n-k}. $$ \end{prop} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,7)--(0,0); \draw (1,6)--(1,0); \draw (2,5)--(2,0); \draw (3,4)--(3,0); \draw (4,3)--(4,0); \draw (5,2)--(5,0); \draw (6,1)--(6,0); \draw (7,0)--(7,0); \draw (7,0)--(0,0); \draw (6,1)--(0,1); \draw (5,2)--(0,2); \draw (4,3)--(0,3); \draw (3,4)--(0,4); \draw (2,5)--(0,5); \draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] (0,-1)--(0,8); \draw[->] (-1,0)--(8,0); \node at (1,-0.6) {$1$}; \node at (2,-0.6) {$2$}; \node at (3,-0.6) {$3$}; \node at (4,-0.6) {$4$}; \node at (5,-0.6) {$5$}; \node at (6,-0.6) {$6$}; \node at (7,-0.6) {$7$}; \node at (-0.6,1) {$1$}; \node at (-0.6,2) {$2$}; \node at (-0.6,3) {$3$}; \node at (-0.6,4) {$4$}; \node at (-0.6,5) {$5$}; \node at (-0.6,6) {$6$}; \node at (-0.6,7) {$7$}; \foreach \x in {4.5,5.5 } \filldraw (\x,.5) circle(3pt); \foreach \x in {3.5,4.5 } \filldraw (\x,1.5) circle(3pt); \foreach \x in {.5,1.5,2.5} \filldraw (\x,2.5) circle(3pt); \foreach \x in {.5,1.5,2.5} \filldraw (\x,3.5) circle(3pt); \foreach \x in {5.5,6.5,...,8.5} \filldraw (\x,6.5) circle(3pt); \foreach \x in {5.5,6.5,7.5} \filldraw (\x,5.5) circle(3pt); \draw (4.5,.5)--(5.5,.5) (3.5,1.5)--(4.5,1.5) (.5,2.5)--(1.5,2.5) (1.5,3.5)--(2.5,3.5); \draw (6.5,6.5)--(7.5,6.5) (5.5,5.5)--(6.5,5.5); \draw[line width=1mm] (5,0)--(4,0)--(4,1)--(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(2,5)--(1,5)--(1,6)--(0,6)--(0,7); \draw (5,5) grid (8,7); \draw (8,6) grid (9,7); \node at (-1.6,3.5){$B$}; \node at (4.4,6.5){$S_1$}; \node at (4.4,5.5){$S_2$}; \end{tikzpicture} \caption{An extended binomial partial tiling of type $(7,5,2)$ \label{752}} \end{center} \end{figure} Although this 
proposition is easy to prove algebraically, the algorithm giving the involution is surprisingly intricate. To define the bijection we will need the following concepts. A {\em strip of length $k$} will be a row of $k$ squares, that is, the Young diagram of $\lambda=(k)$. For $0\le t\le k\le n$ an {\em extended binomial partial tiling of type $(n,k,t)$} is a $(t+1)$-tuple ${\cal B}=(B;S_1,\dots,S_t)$ where \begin{enumerate} \item $B$ is a partial binomial tiling of $\delta_n$ whose lattice path starts at $(k,0)$, and \item $S_i$ is a tiled strip of length $k-i$ for $1\le i\le t$. \end{enumerate} For brevity we will sometimes write ${\cal B}=(B;{\cal S})$ where ${\cal S}=(S_1,\dots,S_t)$. In figures, we will display the strips to the northeast of $B$. See Figure~\ref{752} for an example with $(n,k,t)=(7,5,2)$. Clearly the sum of the weights of all ${\cal B}$ of type $(n,k,t)$ is the left-hand side of equation~\ree{symeq}. Our involution will be a map $\iota:{\cal B}\mapsto{\cal C}$ where ${\cal B}$ and ${\cal C}$ are of types $(n,k,t)$ and $(n,n-k+t,t)$, respectively. This will provide a combinatorial proof of Proposition~\ref{sym}. To describe the algorithm producing $\iota$ we need certain operations on partial binomial tilings and strips. Given two strips $R,S$ we denote their concatenation by $RS$ or $R\cdot S$. So, using the strips in Figure~\ref{752}, \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,0) grid (7,1); \foreach \x in {.5,1.5,...,6.5} \filldraw (\x,.5) circle(3pt); \draw (1.5,.5)--(2.5,.5) (4.5,.5)--(5.5,.5); \node at (-3,.5) {$S_1 S_2=S_1\cdot S_2=$}; \node at (7.5,.2) {.}; \end{tikzpicture} \end{center} Given a partially tiled strip $R$ of length $n$ and a partial binomial tiling $B$ of $\delta_n$ we define their {\em concatenation}, $RB=R\cdot B$, to be the tiling of $\delta_{n+1}$ whose first (bottom) row is $R$ and with the remaining rows tiled as in $B$. 
So if $B'$ is the partial tiling of $\delta_6$ given by the top six rows of the partial tiling $B$ in Figure~\ref{752} then $B=RB'$ where \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,0) grid (6,1); \foreach \x in {4.5,5.5} \filldraw (\x,.5) circle(3pt); \draw (4.5,.5)--(5.5,.5); \node at (-2,.5) {$R=$}; \node at (6.5,.2) {.}; \end{tikzpicture} \end{center} Note that only for certain $R$ will the concatenation $RB$ remain a partial binomial tiling for some path. In particular $R$ will have to be either a {\em left strip} where only the left-most boxes are tiled, or a {\em right strip} with tiles only on the right-most boxes. The example $R$ is a right strip tiled by a domino. Given a strip $S$ of length $k$ and $0\le s\le k$ we define $S\fl{s}$ and $S\ce{s}$ to be the strips consisting of, respectively, the first $s$ and the last $s$ boxes of $S$. Continuing our example, \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,0) grid (3,1); \foreach \x in {.5,1.5,2.5} \filldraw (\x,.5) circle(3pt); \draw (1.5,.5)--(2.5,.5); \node at (-2,.5) {$S_1\fl{3}=$}; \node at (5.5,.5){and $S_1\ce{3}=$}; \draw (8,0) grid (11,1); \foreach \x in {8.5,9.5,10.5} \filldraw (\x,.5) circle(3pt); \draw (8.5,.5)--(9.5,.5); \node at (11.5,.2) {.}; \end{tikzpicture} \end{center} Note that these notations are undefined if taking the desired boxes would involve breaking a domino. Also, to simplify notation, we will use $RS\fl{s}$ to be the first $s$ boxes of the concatenation of $RS$, while $R\cdot S\fl{s}$ will be the concatenation of $R$ with the first $s$ boxes of $S$. A similar convention applies to the last $s$ boxes. We will also use the notation $S^r$ for the {\em reverse} of a strip obtained by reflecting it in a vertical axis.
In our example, \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,0) grid (3,1); \foreach \x in {.5,1.5,2.5} \filldraw (\x,.5) circle(3pt); \draw (1.5,.5)--(2.5,.5); \node at (-1,.5) {$S_2^r=$}; \node at (3.5,.2) {.}; \end{tikzpicture} \end{center} Our algorithm will break into four cases depending on the following concept. Call a point $(r,0)$ an {\em $NI$ point} of a partial binomial tiling $B$ if taking an $N$ step from this vertex stays in $B$ and does not cross a domino. Otherwise call $(r,0)$ an {\em $NL$ point} which also includes the case where this vertex is not in $B$ to begin with. We also use the previous two definitions for strips by considering them as being embedded in the first quadrant as a one-row partition. Because our algorithm is recursive, we will have to be careful about its notation. A priori, given a partial binomial tiling $B$ the notation $\iota(B)$ is not well defined since $\iota$ needs as input a pair ${\cal B}=(B;{\cal S})$. However, it will be convenient to write $$ \iota(B;S_1,\dots,S_t) = (\iota(B);\iota(S_1),\dots,\iota(S_t)) $$ where it is understood on the right-hand side that $\iota$ is always taken with respect to the input pair $(B;{\cal S})$ to the algorithm. Because the algorithm is recursive, we will also have to apply $\iota$ to ${\cal B}'=(B';S_1',\dots,S_r')$ where $B'$ is $B$ with its bottom row removed and $S_1',\dots,S_r'$ are certain strips. So we define $\iota'(B')$ and $\iota'(S_i')$ for $1\le i\le r$ by $$ \iota(B';S'_1,\dots,S'_r) = (\iota'(B');\iota'(S'_1),\dots,\iota'(S'_r)). $$ In particular, if $S_i'=S_j$ for some $i,j$ then $\iota'(S_j)=\iota'(S'_i)$ so that $S_j$ is being treated as an element of ${\cal B}'$ rather than of ${\cal B}$. We now have all the necessary concepts to present the recursive algorithm which is given in Figure~\ref{alg}.
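The strip operations $S\fl{s}$, $S\ce{s}$, and $S^r$ that the algorithm manipulates are straightforward to model. In the sketch below (the representation is our own) a tiled strip is a list of tile sizes, and an operation returns None when it is undefined, i.e. when it would break a domino:

```python
def first_boxes(strip, s):
    """S floor s: the first s boxes of a tiled strip, or None if undefined."""
    taken, out = 0, []
    for tile in strip:
        if taken == s:
            break
        if taken + tile > s:
            return None                   # would break a domino
        out.append(tile)
        taken += tile
    return out if taken == s else None    # also undefined if s exceeds the strip

def last_boxes(strip, s):
    """S ceil s: the last s boxes of a tiled strip, or None if undefined."""
    r = first_boxes(strip[::-1], s)
    return None if r is None else r[::-1]

def reverse(strip):
    """S^r: reflect the strip in a vertical axis."""
    return strip[::-1]

# S_1 from the running example: monomino, domino, monomino
S1 = [1, 2, 1]
assert first_boxes(S1, 3) == [1, 2]       # matches the picture of S_1 floor 3
assert last_boxes(S1, 3) == [2, 1]        # matches the picture of S_1 ceil 3
assert first_boxes(S1, 2) is None         # taking 2 boxes would break the domino
```

Concatenation of strips is then just list concatenation, matching the notation $RS=R\cdot S$.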
\begin{figure} \begin{center} \begin{enumerate} \item[]{\bf Algorithm $\iota$} \item[] Input: An extended binomial tiling ${\cal B}=(B;S_1,\dots,S_t)$ having type $(n,k,t)$. \item[] Output: An extended binomial tiling $\iota({\cal B})=(C;T_1,\dots,T_t)=(C;{\cal T})$ having type $(n,n-k+t,t)$. \item If $n=0$ then $\iota$ is the identity and ${\cal C}={\cal B}$. \item If $n>0$ then let $R$ be the strip of tiled squares in the bottom row of $B$, and let $B'$ be $B$ with the bottom row removed. \item Construct ${\cal B}'=(B',{\cal S}')$, calculate $\iota'({\cal B}')$ recursively, and then define $\iota({\cal B})$ using the following four cases. \begin{enumerate} \item If $(k,0)$ is an $NI$ point of $B$ and $(k-t-1,0)$ is an $NI$ point of $B$ then let $$ \begin{array}{rcl} S_{t+1} &=& R\fl{k-t-1},\\ {\cal S}' &=&(S_1,\dots,S_t,S_{t+1}),\\ C&= & R^L\cdot\iota'(B') \text{ where $R^L$ is a left strip tiled by $R'=\iota'(S_{t+1}) \cdot R\ce{t+1}^r$},\\ {\cal T}&=& (\iota'(S_1),\dots,\iota'(S_t)). \end{array} $$ \item If $(k,0)$ is an $NI$ point of $B$ and $(k-t-1,0)$ is an $NL$ point of $B$ then let $$ \begin{array}{rcl} {\cal S}' &=&(S_1,\dots,S_t),\\ C&= & R^R\cdot\iota'(B') \text{ where $R^R$ is a right strip tiled by $R'=R\fl{k-t}^r$},\\ {\cal T}&=& (\iota'(S_t)\cdot R\ce{t}^r,\ \iota'(S_1),\ \dots,\ \iota'(S_{t-1})). \end{array} $$ \item If $(k,0)$ is an $NL$ point of $B$ and $(k-t-1,0)$ is an $NI$ point of $S_1$ (by convention, this is considered to be true if $t=0$ so that $S_1$ does not exist) then let $$ \begin{array}{rcl} S_{t+1} &=& S_1\fl{k-t-1},\\ {\cal S}' &=&(S_2,\dots,S_t,S_{t+1}),\\ C&= & R^L\cdot\iota'(B') \text{ where $R^L$ is a left strip tiled by $R'=R^r S_1^r\fl{n-k+t}$},\\ {\cal T}&=& (\iota'(S_2), \dots,\iota'(S_t), \iota'(S_{t+1})).
\end{array} $$ \item If $(k,0)$ is an $NL$ point of $B$ and $(k-t-1,0)$ is an $NL$ point of $S_1$ then let $$ \begin{array}{rcl} {\cal S}' &=&(S_2,\dots,S_t),\\ C&= & R^R\cdot\iota'(B') \text{ where $R^R$ is a right strip tiled by $R'=R^r S_1^r\ce{k-t}$},\\ {\cal T}&=& (R^r S_1^r\fl{n-k+t-1},\ \iota'(S_2),\ \dots,\ \iota'(S_t)). \end{array} $$ \end{enumerate} \end{enumerate} \caption{The algorithm for computing the involution $\iota$} \label{alg} \end{center} \end{figure} A step-by-step example of applying $\iota$ to the extended binomial tiling in Figure~\ref{752} is begun in Figure~\ref{752I} and finished in Figure~\ref{752II}. In it, $(B^{(i)};{\cal S}^{(i)})$ represents the pair on which $\iota$ is called in the $i$th iteration. So in the notation of the algorithm $(B^{(1)};{\cal S}^{(1)})=(B';{\cal S}')$ and so forth. These superscripts will make it clear that when we write, for example, $\iota(B^{(i)})$ we are referring to $\iota$ acting on the pair $(B^{(i)};{\cal S}^{(i)})$. These pairs are listed down the left sides of the figures. On the right sides are as much of the output $(C,{\cal T})$ as has been constructed after each iteration. Strips contributing to ${\cal T}$ may be left partially blank if the recursion has not gone deep enough yet to completely fill them. The circles used for the tiles have been replaced by numbers or letters to make it easier to follow the movement of the tiles. Tiles from $B$ are labeled with the number of their row while tiles from the strips are labeled alphabetically. Finally, the ``maps to" symbols indicate which of the four cases (a)--(d) of the algorithm is being used at each step. We will now consider the first four steps in detail since they will illustrate each of the four cases. The reader should find it easy to fill in the particulars for the rest of the steps. 
\begin{figure} \begin{center} \vspace*{-30pt} \begin{tikzpicture}[scale=0.6] \draw (0,7)--(0,0); \draw (1,6)--(1,0); \draw (2,5)--(2,0); \draw (3,4)--(3,0); \draw (4,3)--(4,0); \draw (5,2)--(5,0); \draw (6,1)--(6,0); \draw (7,0)--(7,0); \draw (7,0)--(0,0); \draw (6,1)--(0,1); \draw (5,2)--(0,2); \draw (4,3)--(0,3); \draw (3,4)--(0,4); \draw (2,5)--(0,5); \draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] (0,-1)--(0,8); \draw[->] (-1,0)--(8,0); \node at (1,-0.6) {$1$}; \node at (2,-0.6) {$2$}; \node at (3,-0.6) {$3$}; \node at (4,-0.6) {$4$}; \node at (5,-0.6) {$5$}; \node at (6,-0.6) {$6$}; \node at (7,-0.6) {$7$}; \node at (-0.6,1) {$1$}; \node at (-0.6,2) {$2$}; \node at (-0.6,3) {$3$}; \node at (-0.6,4) {$4$}; \node at (-0.6,5) {$5$}; \node at (-0.6,6) {$6$}; \node at (-0.6,7) {$7$}; \foreach \x in {4.5,5.5 } \filldraw (\x,.5) node{$1$}; \foreach \x in {3.5,4.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {.5,1.5,2.5} \filldraw (\x,2.5) node{$3$}; \foreach \x in {.5,1.5,2.5} \filldraw (\x,3.5) node{$4$}; \foreach \x in {5.5,6.5,...,8.5} \filldraw (\x,6.5) node{$a$}; \foreach \x in {5.5,6.5,7.5} \filldraw (\x,5.5) node{$b$}; \draw (4.6,.5)--(5.4,.5) (3.6,1.5)--(4.4,1.5) (.6,2.5)--(1.4,2.5) (1.6,3.5)--(2.4,3.5); \draw (6.6,6.5)--(7.4,6.5) (5.6,5.5)--(6.4,5.5); \draw[line width=1mm] (5,0)--(4,0)--(4,1)--(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(2,5)--(1,5)--(1,6)--(0,6)--(0,7); \draw (5,5) grid (8,7); \draw (8,6) grid (9,7); \node at (-1.6,3.5){$B$}; \node at (4.4,6.5){$S_1$}; \node at (4.4,5.5){$S_2$}; \node at (15,3.5){$(C,{\cal T})=\emptyset$}; \node at (20,3.5){$\begin{array}{c}\rm (d)\\ \mapsto\end{array}$}; \end{tikzpicture} \vspace*{10pt} \begin{tikzpicture}[scale=0.6] \draw (0,7)--(0,1); \draw (1,6)--(1,1); \draw (2,5)--(2,1); \draw (3,4)--(3,1); \draw (4,3)--(4,1); \draw (5,2)--(5,1); \draw (6,1)--(6,1); \draw (6,1)--(0,1); \draw (5,2)--(0,2); \draw (4,3)--(0,3); \draw (3,4)--(0,4); \draw (2,5)--(0,5); \draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] 
(0,0)--(0,8); \draw[->] (-1,1)--(7,1); \node at (1,.6) {$1$}; \node at (2,.6) {$2$}; \node at (3,.6) {$3$}; \node at (4,.6) {$4$}; \node at (5,.6) {$5$}; \node at (6,.6) {$6$}; \node at (-0.6,2) {$1$}; \node at (-0.6,3) {$2$}; \node at (-0.6,4) {$3$}; \node at (-0.6,5) {$4$}; \node at (-0.6,6) {$5$}; \node at (-0.6,7) {$6$}; \foreach \x in {3.5,4.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {.5,1.5,2.5} \filldraw (\x,2.5) node{$3$}; \foreach \x in {.5,1.5,2.5} \filldraw (\x,3.5) node{$4$}; \foreach \x in {5.5,6.5,7.5} \filldraw (\x,6.5) node{$b$}; \draw (5,6) grid (8,7); \draw (3.6,1.5)--(4.4,1.5) (.6,2.5)--(1.4,2.5) (1.6,3.5)--(2.4,3.5); \draw (5.6,6.5)--(6.4,6.5); \node at (-1.6,4){$B^{(1)}$}; \node at (4.2,6.5){$S^{(1)}_1$}; \draw[line width=1mm] (4,1)--(3,1)--(3,2)--(3,3)--(3,4)--(2,4)--(2,5)--(1,5)--(1,6)--(0,6)--(0,7); \node at (8.6,3.5) {$C$}; \draw[->] (9.4,0)--(18,0); \draw[->] (10,-.6)--(10,7.5); \draw (10,1)--(16,1) (11,1)--(11,0) (12,1)--(12,0) (13,1)--(13,0) (14,1)--(14,0) (15,1)--(15,0) (16,1)--(16,0); \node at (9.4,1){$1$}; \node at (11,-0.6) {$1$}; \node at (12,-0.6) {$2$}; \node at (13,-0.6) {$3$}; \node at (14,-0.6) {$4$}; \node at (15,-0.6) {$5$}; \node at (16,-0.6) {$6$}; \node at (17,-0.6) {$7$}; \foreach \x in {13.5,...,15.5 } \filldraw (\x,.5) node{$a$}; \draw (13.6,.5)--(14.4,.5); \draw (15,5) grid (17,7); \draw (17,6) grid (18,7); \foreach \x in {15.5,16.5} \filldraw (\x,6.5) node{$1$}; \foreach \x in {17.5 } \filldraw (\x,6.5) node{$a$}; \draw[line width=1mm] (14,0)--(13,0)--(13,1); \draw (15.6,6.5)--(16.4,6.5); \node at (14.2,6.5){$T_1$}; \node at (13.7,5.5){$\iota(S_1^{(1)})$}; \node at (20,3.5){$\begin{array}{c}\rm (c)\\ \mapsto\end{array}$}; \end{tikzpicture} \vspace*{10pt} \begin{tikzpicture}[scale=0.6] \node at (-1.6,4.5){$B^{(2)}$}; \draw (0,7)--(0,2); \draw (1,6)--(1,2); \draw (2,5)--(2,2); \draw (3,4)--(3,2); \draw (4,3)--(4,2); \draw (5,2)--(5,2); \draw (5,2)--(0,2); \draw (4,3)--(0,3); \draw (3,4)--(0,4); \draw (2,5)--(0,5); 
\draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] (0,1)--(0,8); \draw[->] (-1,2)--(6,2); \node at (1,1.6) {$1$}; \node at (2,1.6) {$2$}; \node at (3,1.6) {$3$}; \node at (4,1.6) {$4$}; \node at (5,1.6) {$5$}; \node at (-0.6,3) {$1$}; \node at (-0.6,4) {$2$}; \node at (-0.6,5) {$3$}; \node at (-0.6,6) {$4$}; \node at (-0.6,7) {$5$}; \foreach \x in {.5,1.5,2.5} \filldraw (\x,2.5) node{$3$}; \foreach \x in {.5,1.5,2.5} \filldraw (\x,3.5) node{$4$}; \foreach \x in {5.5,6.5} \filldraw (\x,6.5) node{$b$}; \draw (5,6) grid (7,7); \draw (5.6,6.5)--(6.4,6.5); \node at (4.2,6.5){$S_1^{(2)}$}; \draw (.6,2.5)--(1.4,2.5) (1.6,3.5)--(2.4,3.5); \draw[line width=1mm] (3,2)--(3,3)--(3,4)--(2,4)--(2,5)--(1,5)--(1,6)--(0,6)--(0,7); \node at (8.6,3.5) {$C$}; \draw[->] (9.4,0)--(18,0); \draw[->] (10,-.6)--(10,7.5); \draw (10,1)--(16,1) (10,2)--(15,2) (11,2)--(11,0) (12,2)--(12,0) (13,2)--(13,0) (14,2)--(14,0) (15,2)--(15,0) (16,1)--(16,0); \node at (9.4,1){$1$}; \node at (9.4,2){$2$}; \node at (11,-0.6) {$1$}; \node at (12,-0.6) {$2$}; \node at (13,-0.6) {$3$}; \node at (14,-0.6) {$4$}; \node at (15,-0.6) {$5$}; \node at (16,-0.6) {$6$}; \node at (17,-0.6) {$7$}; \foreach \x in {13.5,...,15.5 } \filldraw (\x,.5) node{$a$}; \foreach \x in {10.5,11.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {12.5} \filldraw (\x,1.5) node{$b$}; \draw (13.6,.5)--(14.4,.5) (10.6,1.5)--(11.4,1.5); \draw (15,5) grid (17,7); \draw (17,6) grid (18,7); \foreach \x in {15.5,16.5} \filldraw (\x,6.5) node{$1$}; \foreach \x in {17.5 } \filldraw (\x,6.5) node{$a$}; \draw (15.6,6.5)--(16.4,6.5); \node at (14.2,6.5){$T_1$}; \node at (13.7,5.5){$\iota(S_1^{(2)})$}; \draw[line width=1mm] (14,0)--(13,0)--(13,1)--(13,2); \node at (20,3.5){$\begin{array}{c}\rm (b)\\ \mapsto\end{array}$}; \end{tikzpicture} \vspace*{10pt} \begin{tikzpicture}[scale=0.6] \node at (-1.6,5){$B^{(3)}$}; \draw (0,7)--(0,3); \draw (1,6)--(1,3); \draw (2,5)--(2,3); \draw (3,4)--(3,3); \draw (4,3)--(4,3); \draw (4,3)--(0,3); \draw (3,4)--(0,4); 
\draw (2,5)--(0,5); \draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] (0,2)--(0,8); \draw[->] (-1,3)--(5,3); \node at (1,2.6) {$1$}; \node at (2,2.6) {$2$}; \node at (3,2.6) {$3$}; \node at (4,2.6) {$4$}; \node at (-0.6,4) {$1$}; \node at (-0.6,5) {$2$}; \node at (-0.6,6) {$3$}; \node at (-0.6,7) {$4$}; \foreach \x in {.5,1.5,2.5} \filldraw (\x,3.5) node{$4$}; \foreach \x in {5.5,6.5} \filldraw (\x,6.5) node{$b$}; \draw (5,6) grid (7,7); \draw (5.6,6.5)--(6.4,6.5); \node at (4.2,6.5){$S_1^{(3)}$}; \draw (1.6,3.5)--(2.4,3.5); \draw[line width=1mm] (3,3)--(3,4)--(2,4)--(2,5)--(1,5)--(1,6)--(0,6)--(0,7); \node at (8.6,4){$C$}; \draw[->] (9.4,0)--(18,0); \draw[->] (10,-.6)--(10,7.5); \draw (10,1)--(16,1) (10,2)--(15,2) (10,3)--(14,3) (11,3)--(11,0) (12,3)--(12,0) (13,3)--(13,0) (14,3)--(14,0) (15,2)--(15,0) (16,1)--(16,0); \node at (9.4,1){$1$}; \node at (9.4,2){$2$}; \node at (9.4,3){$3$}; \node at (11,-0.6) {$1$}; \node at (12,-0.6) {$2$}; \node at (13,-0.6) {$3$}; \node at (14,-0.6) {$4$}; \node at (15,-0.6) {$5$}; \node at (16,-0.6) {$6$}; \node at (17,-0.6) {$7$}; \foreach \x in {13.5,...,15.5 } \filldraw (\x,.5) node{$a$}; \foreach \x in {10.5,11.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {12.5} \filldraw (\x,1.5) node{$b$}; \foreach \x in {12.5,13.5} \filldraw (\x,2.5) node{$3$}; \draw (13.6,.5)--(14.4,.5) (10.6,1.5)--(11.4,1.5) (12.6,2.5)--(13.4,2.5); \draw (15,5) grid (17,7); \draw (17,6) grid (18,7); \foreach \x in {15.5,16.5} \filldraw (\x,6.5) node{$1$}; \foreach \x in {17.5 } \filldraw (\x,6.5) node{$a$}; \filldraw (16.5,5.5) node{$3$}; \draw (15.6,6.5)--(16.4,6.5); \node at (14.2,6.5){$T_1$}; \node at (13.7,5.5){$\iota(S_1^{(2)})$}; \draw[->] (15.5,4.3)--(15.5,4.9); \node at (15.5,3.9){$\iota(S_1^{(3)})$}; \draw[line width=1mm] (14,0)--(13,0)--(13,1)--(13,2)--(13,2)--(12,2)--(12,3); \node at (20,3.5){$\begin{array}{c}\rm (a)\\ \mapsto\end{array}$}; \end{tikzpicture} \caption{The $\iota$ recursion applied to the extended binomial partial tiling in 
Figure~\ref{752}, part 1} \label{752I} \end{center} \end{figure} \begin{figure} \begin{center} \vspace*{-30pt} \begin{tikzpicture}[scale=0.6] \node at (-1.6,5.5){$B^{(4)}$}; \draw (0,7)--(0,4); \draw (1,6)--(1,4); \draw (2,5)--(2,4); \draw (3,4)--(3,4); \draw (3,4)--(0,4); \draw (2,5)--(0,5); \draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] (0,3)--(0,8); \draw[->] (-1,4)--(5,4); \node at (1,3.6) {$1$}; \node at (2,3.6) {$2$}; \node at (3,3.6) {$3$}; \node at (-0.6,5) {$1$}; \node at (-0.6,6) {$2$}; \node at (-0.6,7) {$3$}; \foreach \x in {5.5,6.5} \filldraw (\x,6.5) node{$b$}; \filldraw (5.5,5.5) node{$4$}; \draw (5,6) grid (7,7); \draw (5,5) grid (6,6); \draw (5.6,6.5)--(6.4,6.5); \node at (4.2,6.5){$S_1^{(4)}$}; \node at (4.2,5.5){$S_2^{(4)}$}; \draw[line width=1mm] (3,4)--(2,4)--(2,5)--(1,5)--(1,6)--(0,6)--(0,7); \node at (8.6,4){$C$}; \draw[->] (9.4,0)--(18,0); \draw[->] (10,-.6)--(10,7.5); \draw (10,1)--(16,1) (10,2)--(15,2) (10,3)--(14,3) (10,4)--(13,4) (11,4)--(11,0) (12,4)--(12,0) (13,4)--(13,0) (14,3)--(14,0) (15,2)--(15,0) (16,1)--(16,0); \node at (9.4,1){$1$}; \node at (9.4,2){$2$}; \node at (9.4,3){$3$}; \node at (9.4,4){$4$}; \node at (11,-0.6) {$1$}; \node at (12,-0.6) {$2$}; \node at (13,-0.6) {$3$}; \node at (14,-0.6) {$4$}; \node at (15,-0.6) {$5$}; \node at (16,-0.6) {$6$}; \node at (17,-0.6) {$7$}; \foreach \x in {13.5,...,15.5 } \filldraw (\x,.5) node{$a$}; \foreach \x in {10.5,11.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {12.5} \filldraw (\x,1.5) node{$b$}; \foreach \x in {12.5,13.5} \filldraw (\x,2.5) node{$3$}; \foreach \x in {10.5,11.5}\filldraw (\x,3.5) node{$4$}; \draw (13.6,.5)--(14.4,.5) (10.6,1.5)--(11.4,1.5) (12.6,2.5)--(13.4,2.5) (10.6,3.5)--(11.4,3.5); \draw (15,5) grid (17,7); \draw (17,6) grid (18,7); \foreach \x in {15.5,16.5} \filldraw (\x,6.5) node{$1$}; \foreach \x in {17.5 } \filldraw (\x,6.5) node{$a$}; \filldraw (16.5,5.5) node{$3$}; \draw (15.6,6.5)--(16.4,6.5); \node at (14.2,6.5){$T_1$}; \node at 
(13.7,5.5){$\iota(S_1^{(2)})$}; \draw[->] (15.5,4.3)--(15.5,4.9); \node at (15.5,3.9){$\iota(S_1^{(4)})$}; \draw[line width=1mm] (14,0)--(13,0)--(13,1)--(13,2)--(13,2)--(12,2)--(12,3)--(12,4); \node at (20,3.5){$\begin{array}{c}\rm (c)\\ \mapsto\end{array}$}; \end{tikzpicture} \vspace*{10pt} \begin{tikzpicture}[scale=0.6] \node at (-1.6,6){$B^{(5)}$}; \draw (0,7)--(0,5); \draw (1,6)--(1,5); \draw (2,5)--(2,5); \draw (2,5)--(0,5); \draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] (0,5)--(0,8); \draw[->] (-1,5)--(3,5); \node at (1,4.6) {$1$}; \node at (2,4.6) {$2$}; \node at (-0.6,6) {$1$}; \node at (-0.6,7) {$2$}; \filldraw (5.5,6.5) node{$4$}; \draw (5,6) grid (6,7); \draw (5,5)--(5,6); \node at (4.2,6.5){$S_1^{(5)}$}; \node at (4.2,5.5){$S_2^{(5)}$}; \draw[line width=1mm] (2,5)--(1,5)--(1,6)--(0,6)--(0,7); \node at (8.6,4){$C$}; \draw[->] (9.4,0)--(18,0); \draw[->] (10,-.6)--(10,7.5); \draw (10,1)--(16,1) (10,2)--(15,2) (10,3)--(14,3) (10,4)--(13,4) (10,5)--(12,5) (11,5)--(11,0) (12,5)--(12,0) (13,4)--(13,0) (14,3)--(14,0) (15,2)--(15,0) (16,1)--(16,0); \node at (9.4,1){$1$}; \node at (9.4,2){$2$}; \node at (9.4,3){$3$}; \node at (9.4,4){$4$}; \node at (9.4,5){$5$}; \node at (11,-0.6) {$1$}; \node at (12,-0.6) {$2$}; \node at (13,-0.6) {$3$}; \node at (14,-0.6) {$4$}; \node at (15,-0.6) {$5$}; \node at (16,-0.6) {$6$}; \node at (17,-0.6) {$7$}; \foreach \x in {13.5,...,15.5 } \filldraw (\x,.5) node{$a$}; \foreach \x in {10.5,11.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {12.5} \filldraw (\x,1.5) node{$b$}; \foreach \x in {12.5,13.5} \filldraw (\x,2.5) node{$3$}; \foreach \x in {10.5,11.5}\filldraw (\x,3.5) node{$4$}; \foreach \x in {10.5,11.5}\filldraw (\x,4.5) node{$b$}; \draw (13.6,.5)--(14.4,.5) (10.6,1.5)--(11.4,1.5) (12.6,2.5)--(13.4,2.5) (10.6,3.5)--(11.4,3.5) (10.6,4.5)--(11.4,4.5); \draw (15,5) grid (17,7); \draw (17,6) grid (18,7); \foreach \x in {15.5,16.5} \filldraw (\x,6.5) node{$1$}; \foreach \x in {17.5 } \filldraw (\x,6.5) node{$a$}; \filldraw 
(16.5,5.5) node{$3$}; \draw (15.6,6.5)--(16.4,6.5); \node at (14.2,6.5){$T_1$}; \node at (13.7,5.5){$\iota(S_1^{(2)})$}; \draw[->] (15.5,4.3)--(15.5,4.9); \node at (15.5,3.9){$\iota(S_1^{(5)})$}; \draw[line width=1mm] (14,0)--(13,0)--(13,1)--(13,2)--(13,2)--(12,2)--(12,3)--(12,4)--(12,5); \node at (20,3.5){$\begin{array}{c}\rm (d)\\ \mapsto\end{array}$}; \end{tikzpicture} \vspace*{10pt} \begin{tikzpicture}[scale=0.6] \node at (-1.6,6.6){$B^{(6)}$}; \draw (0,7)--(0,6); \draw (1,6)--(1,6); \draw (1,6)--(0,6); \draw (0,7)--(0,7); \draw[->] (0,5)--(0,8); \draw[->] (-1,6)--(2,6); \node at (1,5.6) {$1$}; \node at (-0.6,7) {$1$}; \draw[line width=1mm] (1,6)--(0,6)--(0,7); \draw (5,6)--(5,7); \node at (4.2,6.5){$S_1^{(6)}$}; \node at (8.6,4){$C$}; \draw[->] (9.4,0)--(18,0); \draw[->] (10,-.6)--(10,7.5); \draw (10,1)--(16,1) (10,2)--(15,2) (10,3)--(14,3) (10,4)--(13,4) (10,5)--(12,5) (10,6)--(11,6) (11,6)--(11,0) (12,5)--(12,0) (13,4)--(13,0) (14,3)--(14,0) (15,2)--(15,0) (16,1)--(16,0); \node at (9.4,1){$1$}; \node at (9.4,2){$2$}; \node at (9.4,3){$3$}; \node at (9.4,4){$4$}; \node at (9.4,5){$5$}; \node at (9.4,6){$6$}; \node at (9.4,7){$7$}; \node at (11,-0.6) {$1$}; \node at (12,-0.6) {$2$}; \node at (13,-0.6) {$3$}; \node at (14,-0.6) {$4$}; \node at (15,-0.6) {$5$}; \node at (16,-0.6) {$6$}; \node at (17,-0.6) {$7$}; \foreach \x in {13.5,...,15.5 } \filldraw (\x,.5) node{$a$}; \foreach \x in {10.5,11.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {12.5} \filldraw (\x,1.5) node{$b$}; \foreach \x in {12.5,13.5} \filldraw (\x,2.5) node{$3$}; \foreach \x in {10.5,11.5}\filldraw (\x,3.5) node{$4$}; \foreach \x in {10.5,11.5}\filldraw (\x,4.5) node{$b$}; \draw (13.6,.5)--(14.4,.5) (10.6,1.5)--(11.4,1.5) (12.6,2.5)--(13.4,2.5) (10.6,3.5)--(11.4,3.5) (10.6,4.5)--(11.4,4.5); \draw (15,5) grid (17,7); \draw (17,6) grid (18,7); \foreach \x in {15.5,16.5} \filldraw (\x,6.5) node{$1$}; \foreach \x in {17.5 } \filldraw (\x,6.5) node{$a$}; \draw (16.5,5.5) node{$3$}; \draw 
(15.5,5.5) node{$4$}; \draw (15.6,6.5)--(16.4,6.5); \node at (14.2,6.5){$T_1$}; \node at (13.7,5.5){$\iota(S_1^{(2)})$}; \draw[line width=1mm] (14,0)--(13,0)--(13,1)--(13,2)--(13,2)--(12,2)--(12,3)--(12,4)--(12,5)--(11,5)--(11,6); \node at (20,3.5){$\begin{array}{c}\rm (d)\\ \mapsto\end{array}$}; \end{tikzpicture} \vspace*{10pt} \begin{tikzpicture}[scale=0.6] \node at (-1.6,7){$B^{(7)}$}; \draw[->] (0,6)--(0,8); \draw[->] (-1,7)--(1,7); \filldraw (0,7) circle(3pt); \node at (8.6,4){$C$}; \draw[->] (9.4,0)--(18,0); \draw[->] (10,-.6)--(10,7.5); \draw (10,1)--(16,1) (10,2)--(15,2) (10,3)--(14,3) (10,4)--(13,4) (10,5)--(12,5) (10,6)--(11,6) (11,6)--(11,0) (12,5)--(12,0) (13,4)--(13,0) (14,3)--(14,0) (15,2)--(15,0) (16,1)--(16,0); \node at (9.4,1){$1$}; \node at (9.4,2){$2$}; \node at (9.4,3){$3$}; \node at (9.4,4){$4$}; \node at (9.4,5){$5$}; \node at (9.4,6){$6$}; \node at (9.4,7){$7$}; \node at (11,-0.6) {$1$}; \node at (12,-0.6) {$2$}; \node at (13,-0.6) {$3$}; \node at (14,-0.6) {$4$}; \node at (15,-0.6) {$5$}; \node at (16,-0.6) {$6$}; \node at (17,-0.6) {$7$}; \foreach \x in {13.5,...,15.5 } \filldraw (\x,.5) node{$a$}; \foreach \x in {10.5,11.5 } \filldraw (\x,1.5) node{$2$}; \foreach \x in {12.5} \filldraw (\x,1.5) node{$b$}; \foreach \x in {12.5,13.5} \filldraw (\x,2.5) node{$3$}; \foreach \x in {10.5,11.5}\filldraw (\x,3.5) node{$4$}; \foreach \x in {10.5,11.5}\filldraw (\x,4.5) node{$b$}; \draw (13.6,.5)--(14.4,.5) (10.6,1.5)--(11.4,1.5) (12.6,2.5)--(13.4,2.5) (10.6,3.5)--(11.4,3.5) (10.6,4.5)--(11.4,4.5); \draw (15,5) grid (17,7); \draw (17,6) grid (18,7); \foreach \x in {15.5,16.5} \filldraw (\x,6.5) node{$1$}; \foreach \x in {17.5 } \filldraw (\x,6.5) node{$a$}; \draw (16.5,5.5) node{$3$}; \draw (15.5,5.5) node{$4$}; \draw (15.6,6.5)--(16.4,6.5); \node at (14.2,6.5){$T_1$}; \node at (14.2,5.5){$T_2$}; \draw[line width=1mm] (14,0)--(13,0)--(13,1)--(13,2)--(13,2)--(12,2)--(12,3)--(12,4)--(12,5)--(11,5)--(11,6)--(10,6)--(10,7); \end{tikzpicture} 
\hspace*{50pt} \caption{The $\iota$ recursion applied to the extended binomial partial tiling in Figure~\ref{752}, part 2} \label{752II} \end{center} \end{figure} Initially $(n,k,t)=(7,5,2)$. We see that $(5,0)$ is an $NL$ point of $B$ and $(5-2-1,0)=(2,0)$ is also an $NL$ point of $S_1$. So we are in case (d). Accordingly, ${\cal S}^{(1)}=(S_2)$ so that $S_1^{(1)}=S_2$ which is the strip filled with $b$'s. Also \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,0) grid (6,1); \foreach\x in {.5,1.5} \draw (\x,.5) node{$1$}; \foreach \x in {2.5,3.5,4.5,5.5} \draw (\x,.5) node{$a$}; \draw (.6,.5)--(1.4,.5) (3.6,.5)--(4.4,.5); \node at (-2,.5) {$R^r S_1^r=$}; \node at (6.5,.2) {.}; \end{tikzpicture} \end{center} Taking the last $5-2=3$ squares of $R^r S_1^r$ gives a right strip for $C$. Recalling that $S_2=S_1^{(1)}$ we have that ${\cal T}=(T_1,\iota(S_1^{(1)}))$ where $T_1$ is the tiling of the remaining $7-5+2-1=3$ squares of $R^r S_1^r$. Since we will have to recurse further to compute $\iota(S_1^{(1)})$, the squares for that strip are left blank in the figure. In ${\cal B}^{(1)}$ we have a $(6,4,1)$ extended tiling. Furthermore $(4,0)$ is an $NL$ point of $B^{(1)}$ while $(4-1-1,0)=(2,0)$ is an $NI$ point of $S_1^{(1)}$. It follows that we are in case (c). Thus ${\cal S}^{(2)}$ consists of a single strip, namely the first two tiles of $S_1^{(1)}$. The second row of $C$ will be the left strip gotten by taking the tiles in the first $6-4+1=3$ boxes of the concatenation \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,0) grid (5,1); \foreach\x in {.5,1.5} \draw (\x,.5) node{$2$}; \foreach \x in {2.5,3.5,4.5} \draw (\x,.5) node{$b$}; \draw (.6,.5)--(1.4,.5) (3.6,.5)--(4.4,.5); \node at (5.5,.2) {.}; \end{tikzpicture} \end{center} And ${\cal T}^{(2)}$ consists of the single strip $\iota(S_1^{(2)})$ which still remains to be computed. At the next stage, the extended tiling is of type $(5,3,1)$.
The two points $(3,0)$ and $(3-1-1,0)=(1,0)$ are, respectively, $NI$ and $NL$ points of $B^{(2)}$. This is case (b) so ${\cal S}^{(3)}={\cal S}^{(2)}$. The new row of $C$ consists of the right strip tiled by the first $5-3=2$ tiles of the lowest row of $B^{(2)}$ in reverse order. (Reversal does nothing since the tiling is just a single domino.) And the single strip in ${\cal T}^{(3)}$ is obtained by concatenating $\iota(S_1^{(3)})$ with the last tile of the lowest row of $B^{(2)}$ in reverse order (which again does nothing since the tiling is just a single monomino). We have that ${\cal B}^{(3)}$ is of type $(4,3,1)$. The point $(3,0)$ is an $NI$ point of $B^{(3)}$ as is $(3-1-1,0)=(1,0)$. So we are in case (a), and ${\cal S}^{(4)}$ will have a new strip consisting of the first tile of \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,0) grid (3,1); \foreach\x in {.5,1.5,2.5} \draw (\x,.5) node{$4$}; \draw (1.6,.5)--(2.4,.5); \node at (3.5,.2) {.}; \end{tikzpicture} \end{center} Now $C$ adds a row consisting of the reversal of the tiles on the remaining two squares of the above strip, while ${\cal T}$ does not change from the previous step. \begin{thm} \label{iothm} The map $\iota$ is a well defined involution on extended binomial tilings. \eth \begin{proof} We induct on $n$ where the case $n=0$ is trivial. So assume $n>0$ and that the theorem holds for extended binomial tilings with first parameter $n-1$. We will now go through each of the cases of the algorithm in turn. Consider case (a). To check that $\iota$ is well defined, we must first show that restricting to the first $k-t-1$ (or the last $t+1$) boxes of $R$ does not break a domino. But this is true since $R$ has length $k$ and $(k-t-1,0)$ is an $NI$ point of $B$. Note also that $|S_{t+1}|=k-t-1$ is the correct length to be the final strip in ${\cal S}'$ since one takes an $NI$ step to go from $B$ to $B'$ and so the path still has $x$-coordinate $k$.
Similarly, the other strips of ${\cal S}'$ have the appropriate lengths. Next we must be sure that the left strip used for the bottom of $C$ will permit the beginning of a path starting at $(n-k+t,0)$ with an $NI$ step. First note that the forms of $B'$ and ${\cal S}'$ show that ${\cal B}'$ has parameters $(n-1,k,t+1)$, so by induction $\iota'({\cal B}')$ has type $(n-1,n-k+t,t+1)$. It follows that $\iota'(S_{t+1})$ has length $(n-k+t)-(t+1)=n-k-1$, and the number of boxes tiled in the first row of $C$ is $$ |\iota'(S_{t+1}) \cdot R\ce{t+1}^r| = (n-k-1)+(t+1)=n-k+t $$ as desired. We must also make sure that once the $NI$ step is taken in $C$, its end point will be the same as the initial point of the path when we compute $\iota'({\cal B}')$. But since the first step in $C$ will be $NI$, its $x$-coordinate will still be $n-k+t$ which agrees with the middle parameter computed for $\iota'({\cal B}')$ above. Finally, we must check that the entries of ${\cal T}$ have the correct lengths. But this follows from the fact that the middle parameters for ${\cal B}$ and ${\cal B}'$ are both $k$. We now check that $\iota^2({\cal B})={\cal B}$ in case (a). To avoid confusion we will always use two different alphabets to distinguish between ${\cal B}$ and ${\cal C}=\iota({\cal B})$. So, for example $S_i$ will always be the $i$th strip of ${\cal B}$, not the $i$th strip of ${\cal C}$ which will be denoted $T_i$. In the previous paragraph we saw that $C$ starts with an $NI$ step. Furthermore, the definition of $R'$ as a concatenation shows that $(k-t-1,0)$ is an $NI$ point of $C$. So ${\cal C}$ is again in case (a). By induction $\iota'(B')$ will be $B$ with its lowest row removed. And the bottom row will be a left strip tiled by $$ (\iota')^2(S_{t+1})\cdot R'\ce{t+1}^r = S_{t+1}\cdot R\ce{t+1}=R\fl{k-t-1}\cdot R\ce{t+1}=R. $$ Thus $\iota^2(B)=B$. Also, using induction, $$ \iota^2({\cal S}) = \iota'({\cal T}) =((\iota')^2(S_1),\dots,(\iota')^2(S_t)) = {\cal S}.
$$ Hence $\iota^2({\cal B})={\cal B}$ as we wished to prove. For the remaining three cases, much of the demonstration of being well defined is similar to what was done in case (a). So we will just mention any important points of difference. In case (b), the fact that $(k-t-1,0)$ is an $NL$ point for $B$ implies that there is a domino between squares $k-t-1$ and $k-t$ in the bottom row of $B$. In particular, this means that $(k-t,0)$ is an $NI$ point for $B$ and so it is possible to take the first $k-t$ squares of $R$ when forming $R'$. Note also that by definition of the right strip $R^R$, the domino just mentioned will cover squares $n-k+t$ and $n-k+t+1$ in the bottom row of $C$. Thus a path starting at $(n-k+t,0)$ will be forced west and so this will be an $NL$ point of $C$. Furthermore, the first component of ${\cal T}$ is $\iota'(S_t)\cdot R\ce{t}^r$, where by induction $\iota'(S_t)$ has length $[(n-1)-k+t]-t=n-k-1$. So this gives an $NI$ point of $T_1$ with coordinates $(n-k-1,0)=((n-k+t)-t-1,0)$ and thus ${\cal C}$ is in case (c). To see that we have an involution in case (b), we have just noted that if ${\cal B}$ is in this case then ${\cal C}=\iota({\cal B})$ is in case (c). As usual, $\iota'(B')$ returns the top rows of $B$ to what they were. As for the bottom row we have, by definition of case (c) and the fact that ${\cal C}$ is of type $(n,n-k+t,t)$, that it is a left strip tiled by $$ (R')^r T_1^r\fl{n-(n-k+t)-t} = (R\fl{k-t})(R\ce{t}\cdot\iota'(S_t)^r) \fl{k} = R. $$ Finally, we have $$ \iota^2({\cal S})=(\iota'(T_2),\dots,\iota'(T_{t+1}))=((\iota')^2(S_1),\dots,(\iota')^2(S_{t-1}),\iota'(T_1\fl{(n-k+t)-t-1})) $$ where $$ T_1\fl{(n-k+t)-t-1}=(\iota'(S_t) \cdot R\ce{t}^r)\fl{n-k-1}=\iota'(S_t). $$ So by induction $\iota^2({\cal S})={\cal S}$ in this case as well. The proof when ${\cal B}$ is in case (c) is similar to the one for case (b), which is its inverse, so this part of the demonstration will be omitted. Finally we turn to case (d).
To prove that this case is well defined, one again checks that the dominoes which force $(k,0)$ to be an $NL$ point of $B$ and $(k-t-1,0)$ to be an $NL$ point of $S_1$ appear in $T_1$ and $C$, respectively, so that $((n-k+t)-t-1,0)=(n-k-1,0)$ is an $NL$ point of $T_1$ and $(n-k+t,0)$ is an $NL$ point of $C$. One then uses this fact to show that applying $\iota$ twice is the identity. But no new ideas appear so we will leave these details to the reader. \end{proof}\medskip \section{Catalan and Fuss-Catalan numbers} \label{cfc} The well-known {\em Catalan numbers} are given by $$ C_n =\frac{1}{n+1}\binom{2n}{n} $$ for $n\ge0$. So the Lucas analogue is $$ C_{\{n\}} = \frac{1}{\{n+1\}}{2n\brace n}. $$ In 2010, Lou Shapiro suggested this definition. Further, he asked whether this was a polynomial in $s$ and $t$ and, if so, whether it had a combinatorial interpretation. There is a simple relation between $C_{\{n\}}$ and the Lucasnomials which shows that the answer to the first question is yes. This was first pointed out by Shalosh Ekhad~\cite{ekh:ssl}. We will prove this equation combinatorially below. We can now show that the second question also has an affirmative answer. \begin{thm} \label{Lcat:ptn} Given $n\ge0$ there is a partition of ${\cal T}(\delta_{2n})$ such that $\{n\}!\{n+1\}!$ divides $\wt\beta$ for every block $\beta$. \eth \begin{proof} Given $T\in{\cal T}(\delta_{2n})$ we find the block containing it as follows. First construct a lattice path $p$ starting at $(n-1,0)$ and ending at $(0,2n)$ in exactly the same way as in the proof of Theorem~\ref{Lnom:ptn}. Now put a tiling in the same block as $T$ if it agrees with $T$ on the left side of $NI$ steps and on the right side of $NL$ steps in all rows above the first row. In the first row, the tiling on both sides of $p$ is arbitrary except for the required domino if $p$ begins with a $W$ step.
Since $p$ goes from $(n-1,0)$ to $(0,2n)$, the parts of the tiling which vary as in the Lucasnomial case contribute $\{n-1\}!\{n+1\}!$ to $\wt\beta$. So we just need to show that the extra varying portion in the first row will give a factor of $\{n\}$. If $p$ begins with an $N$ step, then the extra factor comes from the $n-1$ boxes to the left of this step which yields $\{n\}$. If $p$ begins with $WN$, then this factor comes from the $n-1$ boxes to the right of the domino causing this $NL$ step, which again gives the desired $\{n\}$. \end{proof}\medskip Again, we can associate with each block of the partition in the previous theorem a {\em Catalan partial tiling} which is like a binomial partial tiling except that the first row is blank apart from a domino if $p$ begins with a $W$ step. We will sometimes omit the modifiers like ``binomial" and ``Catalan" if it is clear from context which type of partial tiling is intended. Figure~\ref{Cat:part} illustrates a Catalan partial tiling. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw (0,6)--(0,0); \draw (1,5)--(1,0); \draw (2,4)--(2,0); \draw (3,3)--(3,0); \draw (4,2)--(4,0); \draw (5,1)--(5,0); \draw (5,0)--(0,0); \draw (5,1)--(0,1); \draw (4,2)--(0,2); \draw (3,3)--(0,3); \draw (2,4)--(0,4); \draw (1,5)--(0,5); \draw[->] (0,-4)--(0,8); \draw[->] (-4,0)--(8,0); \draw[line width=1mm] (2,0)--(1,0)--(1,2)--(0,2)--(0,6); \foreach \x in {1.5,2.5 } \filldraw (\x,.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,1.5) circle(3pt); \foreach \x in {.5,1.5,2.5 } \filldraw (\x,2.5) circle(3pt); \draw (1.5,.5)--(2.5,.5) (.5,2.5)--(1.5,2.5); \node at (2,-0.6) {$(2,0)$}; \node at (-1,6) {$(0,6)$}; \end{tikzpicture} \end{center} \caption{A Catalan partial tiling} \label{Cat:part} \end{figure} \begin{cor} \label{Cpartial} Given $n\ge0$ we have $$ C_{\{n\}}=\sum_C \wt C $$ where the sum is over all Catalan partial tilings $C$ associated with lattice paths from $(n-1,0)$ to $(0,2n)$ in $\delta_{2n}$.
Thus $C_{\{n\}}\in{\mathbb N}[s,t]$. \hfill \qed \end{cor} We can now give a combinatorial proof of the identity relating the Lucas-Catalan polynomials $C_{\{n\}}$ and the Lucasnomials which we mentioned earlier. \begin{prop} \label{Csum} For $n\ge2$ we have $$ C_{\{n\}}={2n-1\brace n-1} + t{2n-1 \brace n-2}. $$ \end{prop} \begin{proof} By Corollary~\ref{Cpartial}, it suffices to partition the Catalan partial tilings $P$ into two subsets whose weight generating functions are the two terms in the sum. First consider the partial tilings associated with lattice paths $p$ whose first step is $N$. Then the bottom row of $P$ is blank. And the portion of $p$ in the remaining rows goes from $(n-1,1)$ to $(0,2n)$ inside $\delta_{2n-1}$. Thus the contribution of these partial tilings is ${2n-1\brace n-1}$. If instead $p$ begins with $WN$, then there is a single domino in the first row which contributes $t$. The rest of the path goes from $(n-2,1)$ to $(0,2n)$ inside $\delta_{2n-1}$ and so contributes ${2n-1 \brace n-2}$ as desired. \end{proof}\medskip Note that this proposition is a Lucas analogue of the well-known identity $$ C_n = \binom{2n-1}{n-1}-\binom{2n-1}{n-2} $$ obtained when $s=2$ and $t=-1$. We now wish to study the Lucas analogue of the {\em Fuss-Catalan numbers} which are $$ C_{n,k} =\frac{1}{kn+1}\binom{(k+1)n}{n} $$ for $n\ge0$ and $k\ge1$. Clearly $C_{n,1}=C_n$. Consider the Lucas analogue $$ C_{\{n,k\}}=\frac{1}{\{kn+1\}}{(k+1)n\brace n}. $$ To prove the next result, it will be convenient to give coordinates to the squares of a Young diagram $\lambda$. We will use brackets for these coordinates to distinguish them from the Cartesian coordinates we have been using for lattice paths. Let $[i,j]$ denote the square in row $i$ from the bottom and column $j$ from the left. Alternatively, if a square has northeast corner with Cartesian coordinates $(j,i)$ then the square's coordinates are $[i,j]$. 
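Before proceeding, the definitions above lend themselves to a quick machine check. The following Python sketch (an illustration of ours, not part of the combinatorial argument; all function names are our own) evaluates $\{n\}$, the Lucasnomials, and $C_{\{n\}}$ at integer values of $s$ and $t$ with exact rational arithmetic, verifies Proposition~\ref{Csum} at a sample specialization, and confirms that the specialization $s=2$, $t=-1$, under which $\{n\}=n$, recovers the ordinary Catalan numbers.

```python
from fractions import Fraction

def lucas_seq(m, s, t):
    # Lucas polynomials {0}, {1}, ..., {m} evaluated at integers s, t:
    # {0} = 0, {1} = 1, and {n} = s{n-1} + t{n-2}.
    L = [0, 1]
    while len(L) <= m:
        L.append(s * L[-1] + t * L[-2])
    return L

def lucasnomial(n, k, s, t):
    # {n brace k} = {n}!/({k}!{n-k}!), computed as prod_{i=1}^k {n-k+i}/{i}.
    L = lucas_seq(n, s, t)
    val = Fraction(1)
    for i in range(1, k + 1):
        val *= Fraction(L[n - k + i], L[i])
    return val

def lucas_catalan(n, s, t):
    # C_{n} = (1/{n+1}) {2n brace n}.
    L = lucas_seq(2 * n, s, t)
    return lucasnomial(2 * n, n, s, t) / L[n + 1]

# At s = 2, t = -1 we have {n} = n, so C_{n} is the classical Catalan number.
print([int(lucas_catalan(n, 2, -1)) for n in range(1, 7)])  # → [1, 2, 5, 14, 42, 132]

# Proposition (Csum): C_{n} = {2n-1 brace n-1} + t {2n-1 brace n-2} for n >= 2,
# checked here at the sample specialization s = 3, t = 2.
s, t = 3, 2
for n in range(2, 8):
    rhs = lucasnomial(2 * n - 1, n - 1, s, t) + t * lucasnomial(2 * n - 1, n - 2, s, t)
    assert lucas_catalan(n, s, t) == rhs
```

Such evaluations of course only test specializations; the positivity claim $C_{\{n\}}\in{\mathbb N}[s,t]$ is what the partial-tiling argument above actually proves.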
\begin{thm} Given $n\ge0, k\ge1$ there is a partition of ${\cal T}(\delta_{(k+1)n})$ such that $\{n\}!\{kn+1\}!$ divides $\wt\beta$ for every block $\beta$. \eth \begin{proof} To find the block containing a tiling $T$ of $\delta_{(k+1)n}$ we proceed as follows. Consider the usual lattice path $p$ in $T$ starting at $(n-1,0)$ and ending at $(0,(k+1)n)$. If $p$ starts with an $N$ step, then we construct $\beta$ exactly as in the proof of Theorem~\ref{Lcat:ptn}. In this case, the parts of the tiling which vary as in the Lucasnomial case contribute $\{n-1\}!\{kn+1\}!$ and the squares in the first row to the left of the $NI$ step give a factor of $\{n\}$ so we are done for such paths. Now suppose $p$ begins $WN$. It follows that there is a domino of $T$ between squares $[1,n-1]$ and $[1,n]$. Also, there is no domino between squares $[1,(k+1)n-1]$ and $[1,(k+1)n]$ because the latter square is not part of $\delta_{(k+1)n}$. So there is a smallest index $m$ such that there is a domino between $[1,mn-1]$ and $[1,mn]$ but no domino between $[1,(m+1)n-1]$ and $[1,(m+1)n]$. The block $\beta$ will consist of all tilings agreeing with $T$ as for Lucasnomials in rows above the first. And in the first row they agree with $T$ to the right of the $NL$ step except in the squares from $[1,mn+1]$ through $[1,(m+1)n-1]$ where the tiling is allowed to vary. As in the previous paragraph, the variable parts of $\beta$ which are the same as for Lucasnomials contribute $\{n-1\}!\{kn+1\}!$ while the variable portion to the right of the first $NL$ gives a factor of $\{n\}$. This finishes the demonstration.
\end{proof}\medskip \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw (1,8)--(1,0) (2,7)--(2,0) (3,6)--(3,0) (4,5)--(4,0) (5,4)--(5,0) (6,3)--(6,0) (7,2)--(7,0) (8,1)--(8,0); \draw (8,1)--(0,1) (7,2)--(0,2) (6,3)--(0,3) (5,4)--(0,4) (4,5)--(0,5) (3,6)--(0,6) (2,7)--(0,7) (1,8)--(0,8); \draw[->] (0,-2)--(0,11); \draw[->] (-2,0)--(11,0); \draw[line width=1mm] (2,0)--(1,0)--(1,4)--(0,4)--(0,9); \foreach \x in {1.5,2.5,...,5.5 } \filldraw (\x,.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,1.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,2.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,3.5) circle(3pt); \foreach \x in {.5,1.5,...,3.5} \filldraw (\x,4.5) circle(3pt); \draw (1.5,.5)--(2.5,.5) (4.5,.5)--(5.5,.5) (.5,4.5)--(1.5,4.5); \node at (2,-0.6) {$(2,0)$}; \node at (-1,9) {$(0,9)$}; \end{tikzpicture} \end{center} \caption{A Fuss-Catalan partial tiling} \label{FCat:part} \end{figure} As usual, we can represent a block $\beta$ of this partition by a {\em Fuss-Catalan partial tiling}. Figure~\ref{FCat:part} displays such a tiling when $n=3$, $k=2$, and $m=2$ (in the notation of the previous proof). \begin{cor} \label{FCpartial} Given $n\ge0, k\ge1$ we have $$ C_{\{n,k\}}=\sum_P \wt P $$ where the sum is over all Fuss-Catalan partial tilings associated with lattice paths going from $(n-1,0)$ to $(0,(k+1)n)$ in $\delta_{(k+1)n}$. Thus $C_{\{n,k\}}\in{\mathbb N}[s,t]$. \hfill \qed \end{cor} The next result is proved in much the same way as Proposition~\ref{Csum} and so the demonstration is left to the reader. \begin{prop} For $n\ge2, k\ge1$ we have \vs{5pt} \eqqed{ C_{\{n,k\}} = {(k+1)n-1\brace n-1} + \sum_{m=1}^k t^m \{n\}^{m-1} \{k-m+1\} {(k+1)n-1 \brace n-2}. } \end{prop} \section{Coxeter groups and $d$-divisible diagrams} \label{cgd} There is a way to associate a Catalan number and Fuss-Catalan numbers with any finite irreducible Coxeter group $W$. This has led to the area of research called Catalan combinatorics.
For more details, see the memoir of Armstrong~\cite{arm:gnp}. The purpose of this section is to prove that for any $W$, the Coxeter-Catalan number is in ${\mathbb N}[s,t]$. In fact, we will prove more general results using $d$-divisible Young diagrams. This will also permit us to prove that for the infinite families of Coxeter groups, the Lucas-Fuss-Catalan analogue is in ${\mathbb N}[s,t]$. \begin{figure} $$ \begin{array}{|c|l|l|} \hline W & d_1, \dots, d_n & h \\ \hline A_n & 2, 3, 4,\dots, n+1 & n+1\\ B_n & 2, 4, 6, \dots, 2n & 2n \\ D_n & 2, 4, 6, \dots, 2(n-1), n & 2(n-1) \hs{5pt} (\text{for $n\ge3$})\\ I_2(m) & 2,m & m \hs{5pt} (\text{for $m\ge2$})\\ H_3 & 2, 6,10 & 10\\ H_4 & 2, 12, 20, 30 & 30\\ F_4 & 2, 6, 8, 12 & 12\\ E_6 & 2, 5, 6, 8, 9, 12 & 12\\ E_7 & 2, 6, 8, 10, 12, 14, 18 & 18\\ E_8 & 2, 8, 12, 14, 18, 20, 24, 30 & 30\\ \hline \end{array} $$ \caption{Finite irreducible Coxeter group degrees} \label{deg} \end{figure} The finite Coxeter groups $W$ are those which can be generated by reflections. Those which are irreducible have a well-known classification with four infinite families ($A_n$, $B_n$, $D_n$, and $I_2(m)$) as well as six exceptional groups ($H_3$, $H_4$, $F_4$, $E_6$, $E_7$, and $E_8$) where the subscript denotes the dimension $n$ of the space on which the group acts. Associated with each finite irreducible group is a set of {\em degrees} which are the degrees $d_1,\dots,d_n$ of certain polynomial invariants of the group. The degrees of the various groups are listed in Figure~\ref{deg}. The {\em Coxeter number} of $W$ is the largest degree and is denoted $h$. One can now define the {\em Coxeter-Catalan number} of $W$ to be $$ \Cat W = \prod_{i=1}^n \frac{h+d_i}{d_i} $$ with corresponding {\em Lucas-Coxeter analogue} $$ \Cat \{W\} = \prod_{i=1}^n \frac{\{h+d_i\}}{\{d_i\}}. $$ If $W$ is of type $J_n$ for some $J$ then we will also use the notation $J_{\{n\}}$ for $\{W\}$. Directly from the definitions, $\Cat A_{n-1}=C_n$.
Also, after cancelling powers of $2$, we have $\Cat B_n=\binom{2n}{n}$. But $\{2n\}\neq\{2\}\{n\}$ so we will have to find another way to deal with $\Cat B_{\{n\}}$. In fact, we will be able to give a combinatorial interpretation when the numerator and denominator are both constructed using ``Lucastorials'' containing the integers divisible by some fixed integer $d\ge1$. Define the {\em $d$-divisible Lucastorial} as $$ \{n:d\}! = \{d\} \{2d\}\dots \{nd\} $$ with corresponding {\em $d$-divisible Lucasnomial} $$ {n:d\brace k:d} = \frac{\{n:d\}!}{\{k:d\}! \{n-k:d\}!} $$ for $0\le k\le n$. So we have $$ \Cat B_{\{n\}} = {2n:2\brace n:2}. $$ Also define the {\em $d$-divisible staircase partition} $$ \delta_{n:d} = (nd-1,(n-1)d-1,\dots,2d-1,d-1). $$ The fact that $\wt\delta_{n:d}=\{n:d\}!$ follows immediately from the definitions. \begin{thm} \label{dLnom:ptn} Given $d\ge1$ and $0\le k\le n$ there is a partition of ${\cal T}(\delta_{n:d})$ such that $\{k:d\}!\{n-k:d\}!$ divides $\wt\beta$ for every block $\beta$. \eth \begin{proof} We determine the block $\beta$ containing a tiling $T$ by constructing a path $p$ from $(kd,0)$ to $(0,n)$ as follows. The path takes an $N$ step if and only if three conditions are satisfied: the two for Lucasnomial paths (the step does not cross a domino and stays within the Young diagram) together with the requirement that the $x$-coordinate of the $N$ step must be congruent to $0$ or $-1$ modulo $d$, with at most one $N$ step on each line of the latter type. So $p$ starts either by going north along $x=kd$ or, if there is a blocking domino, by taking a $W$ step and going north along $x=kd-1$. In the first case it can take another $N$ step if not blocked, or go $W$ and then $N$ if it is. In the second case, $p$ proceeds using $W$ steps to $((k-1)d,1)$ and either goes north from that lattice point or, if blocked, takes one more $W$ step to go north from $((k-1)d-1,1)$, etc. See Figure~\ref{ddiv:part} for an example.
Call an $N$ step an $NI$ step if it has $x$-coordinate divisible by $d$ and an $NL$ step otherwise. We now construct $\beta$ as for Lucasnomials: agreeing with $T$ to the left of $NI$ steps and to the right of $NL$ steps. It is an easy matter to check that $\{k:d\}!\{n-k:d\}!$ is a factor of $\wt\beta$. \end{proof}\medskip \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.6] \draw (1,4)--(1,0) (2,3)--(2,0) (3,3)--(3,0) (4,2)--(4,0) (5,2)--(5,0) (6,1)--(6,0) (7,1)--(7,0); \draw (7,1)--(0,1) (5,2)--(0,2) (3,3)--(0,3) (1,4)--(0,4); \draw[->] (0,-2)--(0,6); \draw[->] (-2,0)--(9,0); \draw[line width=1mm] (4,0)--(3,0)--(3,1)--(1,1)--(1,2)--(0,2)--(0,4); \foreach \x in {.5,1.5,...,6.5} \filldraw (\x,.5) circle(3pt); \foreach \x in {.5,1.5,...,4.5} \filldraw (\x,1.5) circle(3pt); \foreach \x in {.5,1.5,...,2.5} \filldraw (\x,2.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,3.5) circle(3pt); \draw (0.5,.5)--(1.5,.5) (3.5,.5)--(4.5,.5) (1.5,1.5)--(2.5,1.5) (3.5,1.5)--(4.5,1.5); \node at (4,-0.6) {$(4,0)$}; \node at (-1,4) {$(0,4)$}; \end{tikzpicture} \hs{20pt} \begin{tikzpicture}[scale=0.6] \draw (1,4)--(1,0) (2,3)--(2,0) (3,3)--(3,0) (4,2)--(4,0) (5,2)--(5,0) (6,1)--(6,0) (7,1)--(7,0); \draw (7,1)--(0,1) (5,2)--(0,2) (3,3)--(0,3) (1,4)--(0,4); \draw[->] (0,-2)--(0,6); \draw[->] (-2,0)--(9,0); \draw[line width=1mm] (4,0)--(3,0)--(3,1)--(1,1)--(1,2)--(0,2)--(0,4); \foreach \x in {3.5,4.5,5.5,6.5} \filldraw (\x,.5) circle(3pt); \foreach \x in {1.5,2.5,...,4.5} \filldraw (\x,1.5) circle(3pt); \draw (3.5,.5)--(4.5,.5) (1.5,1.5)--(2.5,1.5) (3.5,1.5)--(4.5,1.5); \node at (4,-0.6) {$(4,0)$}; \node at (-1,4) {$(0,4)$}; \end{tikzpicture} \end{center} \caption{A $2$-divisible path on the left and corresponding partial tiling on the right} \label{ddiv:part} \end{figure} The definition of {\em $d$-divisible partial tiling} (illustrated in Figure~\ref{ddiv:part}) and the next result are as expected. 
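The divisibility behind Theorem~\ref{dLnom:ptn} is easy to sanity-check numerically: membership in ${\mathbb N}[s,t]$ implies integer divisibility after substituting positive integers for $s$ and $t$ (a necessary condition only, not a proof of polynomiality). The following sketch is ours and not from the paper:

```python
def lucas_seq(n, s, t):
    """Values {0}, {1}, ..., {n} of the Lucas sequence {k} = s{k-1} + t{k-2}."""
    vals = [0, 1]
    for _ in range(n - 1):
        vals.append(s * vals[-1] + t * vals[-2])
    return vals

def d_lucastorial(n, d, s, t):
    """The d-divisible Lucastorial {n:d}! = {d}{2d}...{nd}, evaluated at (s, t)."""
    vals = lucas_seq(n * d, s, t)
    prod = 1
    for i in range(1, n + 1):
        prod *= vals[i * d]
    return prod

# Check that {k:d}!{n-k:d}! divides {n:d}! at several integer points (s, t),
# as predicted by the theorem on d-divisible Lucasnomials.
for s, t in [(1, 1), (2, 1), (3, 2)]:
    for d in (1, 2, 3):
        for n in range(1, 7):
            for k in range(n + 1):
                num = d_lucastorial(n, d, s, t)
                den = d_lucastorial(k, d, s, t) * d_lucastorial(n - k, d, s, t)
                assert num % den == 0
print("all divisibility checks passed")
```

At $(s,t)=(1,1)$ the values $\{n\}$ specialize to the Fibonacci numbers, so the case $d=1$ reproduces the integrality of the fibonomial coefficients.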
\begin{cor} \label{ddpartial} Given $d\ge1$ and $0\le k\le n$ we have $$ {n:d\brace k:d}=\sum_P \wt P $$ where the sum is over all $d$-divisible partial tilings associated with lattice paths going from $(kd,0)$ to $(0,n)$ in $\delta_{n:d}$. Thus ${n:d\brace k:d}\in{\mathbb N}[s,t]$. \hfill \qed \end{cor} The Lucas-Coxeter analogue for $D_n$ is \begin{equation} \label{CatD} \Cat D_{\{n\}} = \frac{\{3n-2\}}{\{n\}}{2(n-1) : 2 \brace n-1 : 2}. \end{equation} Again, we will be able to prove a $d$-divisible generalization of this result. But first we need a result of Hoggatt and Long~\cite{hl:dpg} about the divisibility of polynomials in the Lucas sequence. (In their paper they only prove the divisibility statement, but the fact that the quotient is in ${\mathbb N}[s,t]$ follows easily from their demonstration.) \begin{figure}[t] \begin{tikzpicture}[scale=0.6] \draw (1,9)--(1,1) (2,8)--(2,1) (3,8)--(3,1) (4,7)--(4,1) (5,7)--(5,0) (6,6)--(6,0) (7,6)--(7,0) (8,5)--(8,0) (9,5)--(9,0) (10,4)--(10,0) (11,4)--(11,0) (12,3)--(12,0) (13,3)--(13,0) (14,2)--(14,0) (15,2)--(15,0) (16,1)--(16,0) (17,1)--(17,0); \draw (17,1)--(0,1) (15,2)--(0,2) (13,3)--(0,3) (11,4)--(0,4) (9,5)--(0,5) (7,6)--(0,6) (5,7)--(0,7) (3,8)--(0,8) (1,9)--(0,9); \draw[->] (0,-1)--(0,10); \draw[->] (-1,0)--(18,0); \draw[line width=1mm] (10,0)--(10,1)--(10,2)--(9,2)--(9,3)--(8,3)--(7,3)--(7,4)--(6,4)--(6,5)--(6,6)--(5,6)--(5,7)--(3,7)--(3,8)--(1,8)--(1,9)--(0,9); \foreach \x in {5.5,6.5,...,16.5} \filldraw (\x,.5) circle(3pt); \foreach \x in {.5,1.5,...,14.5} \filldraw (\x,1.5) circle(3pt); \foreach \x in {.5,1.5,...,12.5} \filldraw (\x,2.5) circle(3pt); \foreach \x in {.5,1.5,...,10.5} \filldraw (\x,3.5) circle(3pt); \foreach \x in {.5,1.5,...,8.5} \filldraw (\x,4.5) circle(3pt); \foreach \x in {.5,1.5,...,6.5} \filldraw (\x,5.5) circle(3pt); \foreach \x in {.5,1.5,...,4.5} \filldraw (\x,6.5) circle(3pt); \foreach \x in {.5,1.5,...,2.5} \filldraw (\x,7.5) circle(3pt); \foreach \x in {.5} \filldraw (\x,8.5)
circle(3pt); \draw (7.5,2.5)--(8.5,2.5) (9.5,2.5)--(10.5,2.5) (2.5,2.5)--(3.5,2.5) (3.5,3.5)--(4.5,3.5) (7.5,3.5)--(8.5,3.5) (9.5,.5)--(8.5,.5) (14.5,.5)--(15.5,.5) (8.5,1.5)--(9.5,1.5) (3.5,1.5)--(4.5,1.5) (.5,1.5)--(1.5,1.5) (11.5,1.5)--(12.5,1.5) (.5,4.5)--(1.5,4.5) (4.5,4.5)--(5.5,4.5) (1.5, 5.5)--(2.5,5.5) (6.5,4.5)--(7.5,4.5) (1.5,5.5)--(2.5,5.5) (.5,7.5)--(1.5,7.5) (1.5,6.5)--(2.5,6.5); \node at (10,-0.6) {$(10,0)$}; \node at (-1,9) {$(0,9)$}; \end{tikzpicture} \hs{10pt} \begin{tikzpicture}[scale=0.6] \draw (1,9)--(1,1) (2,8)--(2,1) (3,8)--(3,1) (4,7)--(4,1) (5,7)--(5,0) (6,6)--(6,0) (7,6)--(7,0) (8,5)--(8,0) (9,5)--(9,0) (10,4)--(10,0) (11,4)--(11,0) (12,3)--(12,0) (13,3)--(13,0) (14,2)--(14,0) (15,2)--(15,0) (16,1)--(16,0) (17,1)--(17,0); \draw (17,1)--(0,1) (15,2)--(0,2) (13,3)--(0,3) (11,4)--(0,4) (9,5)--(0,5) (7,6)--(0,6) (5,7)--(0,7) (3,8)--(0,8) (1,9)--(0,9); \draw[->] (0,-1)--(0,10); \draw[->] (-1,0)--(18,0); \draw[line width=1mm] (10,0)--(10,1)--(10,2)--(9,2)--(9,3)--(8,3)--(7,3)--(7,4)--(6,4)--(6,5)--(6,6)--(5,6)--(5,7)--(3,7)--(3,8)--(1,8)--(1,9)--(0,9); \foreach \x in {5.5,6.5,...,9.5} \filldraw (\x,.5) circle(3pt); \foreach \x in {.5,1.5,...,9.5} \filldraw (\x,1.5) circle(3pt); \foreach \x in {9.5,10.5,...,12.5} \filldraw (\x,2.5) circle(3pt); \foreach \x in {7.5,8.5,...,10.5} \filldraw (\x,3.5) circle(3pt); \foreach \x in {.5,1.5,...,5.5} \filldraw (\x,4.5) circle(3pt); \foreach \x in {.5,1.5,...,5.5} \filldraw (\x,5.5) circle(3pt); \draw (9.5,2.5)--(10.5,2.5) (7.5,3.5)--(8.5,3.5) (9.5,.5)--(8.5,.5) (8.5,1.5)--(9.5,1.5) (3.5,1.5)--(4.5,1.5) (.5,1.5)--(1.5,1.5) (.5,4.5)--(1.5,4.5) (4.5,4.5)--(5.5,4.5) (1.5, 5.5)--(2.5,5.5) (1.5,5.5)--(2.5,5.5) ; \node at (10,-0.6) {$(10,0)$}; \node at (-1,9) {$(0,9)$}; \end{tikzpicture} \caption{A lattice path and partial tiling for $D_{\{5\}}$ \label{D5}} \end{figure} \begin{thm}[\cite{hl:dpg}] \label{hl} For positive integers $m,n$ we have $m$ divides $n$ if and only if $\{m\}$ divides $\{n\}$. 
In this case $\{n\}/\{m\}\in{\mathbb N}[s,t]$. \hfill \qed \eth If $\lambda,\mu$ are Young diagrams with $\mu\subseteq\lambda$, then the corresponding {\em skew diagram}, $\lambda/\mu$, consists of all the boxes in $\lambda$ but not in $\mu$. The skew diagram used in Figure~\ref{D5} is $\delta_{9:2}/(5)$. We will use skew diagrams to prove the following result which yields~\ree{CatD} as a special case. \begin{thm} Given $d\ge1$ we have \begin{equation} \label{dCatD} \frac{\{(d+1)n-d\}}{\{n\}}{2(n-1) : d \brace n-1: d} \end{equation} is in ${\mathbb N}[s,t]$. \eth \begin{proof} Tile the rows of the skew shape $\delta_{2n-1:d}/(d-1)n$. It is easy to see that the corresponding generating function is the numerator of equation~\ree{dCatD}, where the bottom row contributes $\{(2n-1)d-(d-1)n\}=\{(d+1)n-d\}$ which is the numerator of the fractional factor. As usual, we group the tilings into blocks $\beta$ and show that the denominator of~\ree{dCatD} divides the weight of each block. Given a tiling $T$ we find its block by starting a lattice path $p$ at $(nd,0)$ and using exactly the same rules as in the proof of Theorem~\ref{dLnom:ptn}. See the upper diagram in Figure~\ref{D5} for an example when $d=2$ and $n=5$. We now let the strips to the side of each north step either be fixed or vary, again as dictated in Theorem~\ref{dLnom:ptn}'s demonstration. The bottom diagram in Figure~\ref{D5} shows the partial tiling corresponding to the upper diagram. There are two cases. If $p$ starts with an $N$ step then its right side contributes $\{(2n-1)d-nd\}=\{(n-1)d\}$. Now $p$ enters the top $2n-2$ rows which form a $\delta_{2n-2:d}$ at $x$-coordinate $nd$. It follows that the contribution of this portion of $p$ to $\wt\beta$ is $\{n:d\}!\{n-2:d\}!$. So the total contribution to the weight of the variable parts of each row is $$ \{(n-1)d\}\cdot \{n:d\}!\{n-2:d\}! = \{nd\}\cdot \{n-1:d\}!\{n-1:d\}!.
$$ Thanks to Theorem~\ref{hl}, this is divisible by $\{n\}\cdot \{n-1:d\}!\{n-1:d\}!$ which is the denominator of~\ree{dCatD} so we are done with this case. If $p$ starts with $WN$, then the left side of $p$ gives a contribution of $\{nd-n(d-1)\}=\{n\}$. Because of the rule that $p$ can take at most one $N$ step on a vertical line of the form $x=kd-1$, the path must now continue to take west steps until it reaches $((n-1)d,1)$. Now it can enter the upper rows of the diagram from this point contributing a factor of $ \{n-1:d\}!\{n-1:d\}!$ to $\wt\beta$. Multiplying the two contributions gives exactly the denominator of~\ree{dCatD} which finishes this case and the proof. \end{proof}\medskip We note that replacing $n$ by $n+1$ in equation~\ree{dCatD} we obtain $$ \frac{\{(d+1)n+1\}}{\{n+1\}}{2n : d \brace n: d} $$ which, for $d=1$, is just a multiple of the $n$th Lucas-Catalan polynomial. We also note that one can generalize even further. Given positive integers satisfying $l<kd<md$, we consider starting lattice paths from $(kdn,0)$ in the skew diagram $\delta_{m(n-1)+1:d}/(ld)$ using the same rules as in the previous two proofs. This yields the following result whose demonstration is similar enough to those just given that we omit it. But note that we get the previous theorem as the special case $l=d-1$, $k=1$, and $m=2$. \begin{thm} \label{genCatD} Given positive integers satisfying $l<kd<md$ we have $$ \frac{\{(dm-l)n-(m-1)d\}}{\{gn\}} {m(n-1) : d\brace kn-1 :d} $$ is in ${\mathbb N}[s,t]$ where $g=\gcd(kd,kd-l)$.\hfill \qed \eth We finally come to our main theorem for this section. \begin{thm} \label{CoxCat} If $W$ is a finite irreducible Coxeter group then $\Cat \{W\}\in{\mathbb N}[s,t]$. \eth \begin{proof} We have already proved the result in types $A$, $B$, and $D$. And for the exceptional Coxeter groups, we have verified the claim by computer. So we just need to show that $$ \Cat I_{\{2\}}(m) =\frac{\{m+2\}\{2m\}}{\{2\}\{m\}} $$ is in ${\mathbb N}[s,t]$.
But this follows easily from Theorem~\ref{hl}. Indeed, if $m$ is odd then the relatively prime polynomials $\{2\}=s$ and $\{m\}$ both divide $\{2m\}$. It follows that the same is true of their product which completes this case. If $m$ is even then $\{2\}$ divides $\{m+2\}$ and $\{m\}$ divides $\{2m\}$. So, again, we have a polynomial quotient and all quotients have nonnegative integer coefficients. \end{proof}\medskip We now turn to the Fuss-Catalan case. For any finite Coxeter group $W$ and positive integer $k$ there is a {\em Coxeter-Fuss-Catalan number} defined by $$ \Cat^{(k)} W = \prod_{i=1}^n \frac{kh + d_i}{d_i}. $$ So, there is a corresponding {\em Lucas-Coxeter-Fuss-Catalan analogue} given by $$ \Cat^{(k)} \{W\} = \prod_{i=1}^n \frac{\{kh + d_i\}}{\{d_i\}}. $$ In particular, $\Cat^{(k)} A_{\{n-1\}} = C_{\{n,k\}}$. \begin{thm} If $W=A_n, B_n, D_n, I_2(m)$ then $\Cat^{(k)} \{W\}\in{\mathbb N}[s,t]$. \eth \begin{proof} We have already shown this for $A_n$ in Corollary~\ref{FCpartial}. For type $B$ we find that $$ \Cat^{(k)} B_{\{n\}} = \prod_{i=1}^n \frac{\{2kn + 2i\}}{\{2i\}} = {(k+1)n :2 \brace n:2} $$ which is a polynomial in $s$ and $t$ with nonnegative coefficients by Corollary~\ref{ddpartial}. In the case of type $D$ we see \begin{align*} \Cat^{(k)} D_{\{n\}} &= \frac{\{2k(n-1) + n\}}{\{n\}} \prod_{i=1}^{n-1} \frac{\{2k(n-1) + 2i\}}{\{2i\}}\\[10pt] &= \frac{\{(2k+1)n - 2k\}}{\{n\}} {(k+1)(n-1) : 2 \brace n-1 : 2} \end{align*} which is in ${\mathbb N}[s,t]$ by Theorem~\ref{genCatD}. For $\Cat^{(k)} I_{\{2\}}(m)$ we can use an argument similar to that of the proof of Theorem~\ref{CoxCat}. We have that $$ \Cat^{(k)} I_{\{2\}}(m) = \frac{ \{km + 2\} \{(k+1)m \}}{\{2\} \{m\}} $$ and can consider the parity of $m$ and $k$. If $m$ or $k$ is even then $\{2\}$ divides $\{km+2\}$ and $\{m\}$ divides $\{(k+1)m\}$. If $m$ and $k$ are both odd then $\{2\}$ and $\{m\}$ are relatively prime and both divide $\{(k+1)m\}$. 
And in all cases the quotients have coefficients in ${\mathbb N}$. \end{proof}\medskip \section{Comments and future work} Here we will collect various observations and open problems in the hopes that the reader will be tempted to continue our work. \subsection{Coefficients} Note that we can write $$ \{n\} = \sum_k a_k s^{n-2k-1} t^k $$ where the $a_k$ are positive integers and $0\le k\le (n-1)/2$. We call $a_0,a_1,\dots$ the {\em coefficient sequence} of $\{n\}$ and note that any of our Lucas analogues considered previously will also correspond to such a sequence. There are several properties of sequences of real numbers which are common in combinatorics, algebra, and geometry. One is that the sequence is {\em unimodal} which means that there is an index $m$ such that $$ a_0\le a_1\le \dots \le a_m\ge a_{m+1}\ge \dots. $$ Another is that the sequence is {\em log concave} which is defined by the inequality $$ a_k^2 \ge a_{k-1} a_{k+1} $$ for all $k$, where we assume $a_k=0$ if the subscript is outside of the range of the sequence. Finally, we can consider the generating function $$ f(y) = \sum_{k\ge0} a_k y^k $$ and ask for properties of its roots. For more information about such matters, see the survey articles of Stanley~\cite{sta:lus} and Brenti~\cite{bre:lus}. In particular, the following result is well known and straightforward to prove. \begin{prop} If $a_0,a_1,\dots$ is a sequence of positive reals then its generating function having real roots implies that it is log concave. And if the sequence is log concave then it is unimodal.\hfill \qed \end{prop} To see which of these properties are enjoyed by our Lucas analogues, it will be convenient to make a connection with Chebyshev polynomials. The {\em Chebyshev polynomials of the second kind}, $U_n(x)$, are defined recursively by $U_0(x)=1$, $U_1(x)=2x$, and for $n\ge 2$ $$ U_n(x) = 2x U_{n-1}(x) - U_{n-2}(x). 
$$ It follows immediately that \begin{equation} \label{Un} \{n\} = U_{n-1}(x) \end{equation} if we set $s=2x$ and $t=-1$. \begin{thm} If the Lucas analogue of a quotient of products is a polynomial then it has a coefficient generating function which is real rooted. So if the coefficient sequence consists of positive integers then the sequence is log concave and unimodal. \eth \begin{proof} From the previous proposition, it suffices to prove the first statement. It is well known and easy to prove by using angle addition formulas that $$ U_n(\cos\th) = \frac{\sin(n+1)\th}{\sin\th}. $$ It follows that the roots of $U_n(x)$ are $$ x=\cos\frac{k\pi}{n+1} $$ for $0<k<n+1$ and so real. By equation~\ree{Un} we see that $\{n\}$ and $U_{n-1}(x/2)$ have the same coefficient sequence except that in the former all coefficients are positive and in the latter signs alternate. Now if we take a quotient of products of the $\{n\}$ which is a polynomial $p(s,t)$, then the corresponding quotient of products where $\{n\}$ is replaced by $U_{n-1}(x/2)$ will be a polynomial $q(x)$. Further, from the paragraph above, $q(x)$ will have real roots. It follows that the coefficient sequence of $p(s,t)$ (which is obtained from $q(x)$ by removing zeros and making all coefficients positive) has a generating function with only real roots. \end{proof}\medskip \subsection{Rational Catalan numbers} Rational Catalan numbers generalize the ordinary Catalans and have many interesting connections with Dyck paths, noncrossing partitions, associahedra, cores of integer partitions, parking functions, and Lie theory. Let $a,b$ be positive integers whose greatest common divisor is $(a,b)=1$. Then the associated {\em rational Catalan number} is $$ \Cat(a,b) = \frac{1}{a+b}\binom{a+b}{a}. $$ In particular, it is easy to see that $\Cat(n,n+1)=C_n$. We note that the relatively prime condition is needed to ensure that $\Cat(a,b)$ is an integer. As usual, consider $$ \Cat\{a,b\}=\frac{1}{\{a+b\}}{a+b \brace a}.
$$ We will now present a proof that $\Cat\{a,b\}$ is a polynomial in $s,t$ which was obtained by the Fields Institute Algebraic Combinatorics Group~\cite{abcdl} for the $q$-Fibonacci analogue and works equally well in our context. First we need the following lemma. \begin{lem}[\cite{hl:dpg}] We have $(\{m\},\{n\})=\{(m,n)\}$.\hfill \qed \end{lem} \begin{thm}[\cite{abcdl}] If $(a,b)=1$ then $\Cat\{a,b\}$ is a polynomial in $s,t$. \eth \begin{proof} Consider the quantity $$ p = \frac{\{a+b\}!}{\{a-1\}!\{b\}!} = \{a+b\} {a+b-1 \brace a-1}. $$ From the second expression for $p$, it is clearly a polynomial in $s$ and $t$. Note that $\{a\}$ divides evenly into $p$ because $p/\{a\}= {a+b \brace a}$. Similarly, $\{a+b\}$ divides $p$ since $p/\{a+b\}={a+b-1 \brace a-1}$. But $(a,a+b)=1$ and so, by the lemma, $\{a\}\{a+b\}$ divides into $p$. It follows that $p/(\{a\}\{a+b\}) = \Cat\{a,b\}$ is a polynomial in $s,t$. \end{proof}\medskip Despite this result, we have been unable to find a combinatorial interpretation for $\Cat\{a,b\}$ or prove that its coefficients are nonnegative integers, although this has been checked by computer for $a,b\le 50$. \subsection{Narayana numbers} For $1\le k\le n$ we define the {\em Narayana number} $$ N_{n,k} = \frac{1}{n} \binom{n}{k} \binom{n}{k-1}. $$ It is natural to consider the Narayana numbers in this context because $C_n=\sum_k N_{n,k}$. Further discussion of Narayana numbers can be found in the paper of Br\"and\'en~\cite{bra:qnn}. In our usual manner, let $$ N_{\{n,k\}} =\frac{1}{\{n\}} {n \brace k} {n\brace k-1}. $$ \begin{conj} For all $1\le k\le n$ we have $N_{\{n,k\}}\in{\mathbb N}[s,t]$. \end{conj} This conjecture has been checked by computer for $n\le 100$. One could also consider Narayana numbers for other Coxeter groups. \subsection{Coxeter groups again} As is often true in the literature on Coxeter groups, our proof of Theorem~\ref{CoxCat} is case by case. It would be even better if a case-free demonstration could be found.
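The computer checks mentioned above for the Narayana conjecture can be sketched in the same specialization style. The code below (ours; the helper names are hypothetical) verifies that the Lucas-Narayana analogue $N_{\{n,k\}}$ takes positive integer values at a few integer specializations of $(s,t)$, a necessary consequence of the conjecture:

```python
from fractions import Fraction

def lucas_seq(n, s, t):
    """Values {0}, {1}, ..., {n} of the Lucas sequence {k} = s{k-1} + t{k-2}."""
    vals = [0, 1]
    for _ in range(n - 1):
        vals.append(s * vals[-1] + t * vals[-2])
    return vals

def narayana_value(n, k, s, t):
    """Value of N_{n,k} = (1/{n}) {n brace k} {n brace k-1} at integers (s, t)."""
    vals = lucas_seq(n + 1, s, t)
    def lucastorial(m):
        prod = 1
        for i in range(1, m + 1):
            prod *= vals[i]
        return prod
    def lucasnomial(a, b):
        return Fraction(lucastorial(a), lucastorial(b) * lucastorial(a - b))
    return Fraction(1, vals[n]) * lucasnomial(n, k) * lucasnomial(n, k - 1)

for s, t in [(1, 1), (2, 1), (1, 2)]:
    for n in range(1, 12):
        for k in range(1, n + 1):
            v = narayana_value(n, k, s, t)
            assert v.denominator == 1 and v > 0
print("Narayana analogues are positive integers at all sampled points")
```

Integrality at sampled points does not prove the conjecture, but a failure would disprove it.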
One could also hope for a closer connection between the geometric or algebraic properties of Coxeter groups and our combinatorial constructions. Alternatively, it would be quite interesting to give a proof of Theorem~\ref{CoxCat} by weighting one of the standard objects counted by $\Cat W$ such as $W$-noncrossing partitions. \vs{10pt} {\bf Acknowledgment.} We had helpful discussions with Nantel Bergeron, Cesar Ceballos, and Volker Strehl about this work. \nocite{*} \bibliographystyle{alpha} \newcommand{\etalchar}[1]{$^{#1}$}
https://arxiv.org/abs/1809.09036
Combinatorial interpretations of Lucas analogues of binomial coefficients and Catalan numbers
The Lucas sequence is a sequence of polynomials in s and t defined recursively by {0}=0, {1}=1, and {n}=s{n-1}+t{n-2} for n >= 2. On specialization of s and t one can recover the Fibonacci numbers, the nonnegative integers, and the q-integers [n]_q. Given a quantity which is expressed in terms of products and quotients of nonnegative integers, one obtains a Lucas analogue by replacing each factor of n in the expression with {n}. It is then natural to ask if the resulting rational function is actually a polynomial in s and t with nonnegative integer coefficients and, if so, what it counts. The first simple combinatorial interpretation for this polynomial analogue of the binomial coefficients was given by Sagan and Savage, although their model resisted being used to prove identities for these Lucasnomials or extending their ideas to other combinatorial sequences. The purpose of this paper is to give a new, even more natural model for these Lucasnomials using lattice paths which can be used to prove various equalities as well as extending to Catalan numbers and their relatives, such as those for finite Coxeter groups.
https://arxiv.org/abs/1601.00609
Small drift limit theorems for random walks
We show analogs of the classical arcsine theorem for the occupation time of a random walk in $(-\infty,0)$ in the case of a small positive drift. To study the asymptotic behavior of the total time spent in $(-\infty,0)$ we consider parametrized classes of random walks, where the convergence of the parameter to zero implies the convergence of the drift to zero. We begin with shift families, generated by a centered random walk by adding to each step a shift constant $a>0$ and then letting $a$ tend to zero. Then we study families of associated distributions. In all cases we arrive at the same limiting distribution, which is the distribution of the time spent below zero of a standard Brownian motion with drift 1. For shift families this is explained by a functional limit theorem. Using fluctuation-theoretic formulas we derive the generating function of the occupation time in closed form, which provides an alternative approach. In the course we also give a new form of the first arcsine law for Brownian motion with drift.
\section{Introduction} For the classical symmetric random walk with $\pm 1$ steps it is well known that the three random variables ``time spent on the positive axis'', ``position of the first maximum'' and ``last exit from zero'' are identically distributed and (suitably normalized) asymptotically arcsine-distributed. Here the norming factor is the length of the time interval the random walk has been observed, so that the limiting statements refer to ``relative'' times. Consider now a classical random walk with drift ${\delta}\neq 0$. Clearly the same ``relative'' variables can be studied. The asymptotic distribution of the random variable ``(fraction of) time spent in $(-\infty,{\alpha}]$'' has been determined by Tak\'acs \cite{Tak}, by applying a functional limit theorem. But if ${\delta}\neq 0$ there is also another, ``absolute'' perspective. If for example ${\delta} > 0$ for a general random walk, it is clear that $Z({\delta})=$ ``number of visits in $(-\infty,0)$'' is almost surely finite, and that $Z({\delta})\longrightarrow \infty$ in probability as ${\delta}\searrow 0$. One may ask if $Z({\delta})$, after multiplication by some deterministic function $a({\delta})$, has a non-degenerate limit distribution. This paper aims to answer these and related questions for random walks in the heavy-traffic regime, i.e., when the drift converges to zero. In all cases the limiting distribution for the occupation time in $(-\infty, 0)$, properly rescaled, turns out to have the density \begin{align} \label{density} p(t)=4\,{\varphi(\sqrt{2t})\over \sqrt{2t}}-4\,\Phi(-\sqrt{2t}),\ \ t>0 \end{align} where $\varphi$ and $\Phi$ are the density and the distribution function of $N(0,1)$, respectively. The suitably rescaled occupation time in $(-\infty,0)$ of Brownian motion with positive drift also has density (\ref{density}), and in Section 2 we begin with related results for Brownian motion.
We show for example that the distribution of the time of the last exit from $0$ of Brownian motion with drift during a finite time interval is composed of the arcsine and a truncated exponential distribution. In Section 3 we derive the limiting occupation time distribution for shift families generated from a centered random walk by adding to each step a shift constant $a>0$ and then letting $a$ tend to zero. The proof that (\ref{density}) gives the asymptotic distribution is based on Donsker's invariance principle. In Section 4 we give the key fluctuation-theoretic formulas for the distribution of the occupation time in $(-\infty, 0)$ for general random walks. The arcsine law and its ramifications are a classical topic, but new contributions keep appearing, for example some new explicit distributions \cite{L}, new proofs \cite{Hof}, or asymptotic considerations \cite{Mar}. Interesting results on the number of visits to one point by skip-free random walks and related questions can be found in \cite{Ross}. The problem considered in this paper is also connected to the heavy-traffic approximation problem in queueing theory, in which the growth of the all-time maximum of $S_n-na$ (where $S_n$ is the $n$th partial sum of iid random variables with mean zero) is studied as $a\searrow 0$. In the queueing context this is equivalent to the growth of the steady-state waiting time in a $GI/G/1$ system when the traffic load tends to 1. This question was first posed by Kingman (see \cite{Kin}) and was investigated by many authors (e.g. \cite{BoxCoh, Kos, Pro, ResSam, Shnwa}). \section{Occupation times and last exit from $0$ for Brownian motion with drift} We start by presenting two results on occupation times for Brownian motion with positive drift ${\delta}>0$ and variance ${\sigma^2}$, one known and one new. Let $B_t$ be a standard Brownian motion and $X_t=\sigma B_t+{\delta} t$.
\begin{lem}\label{lem} (1) Let $z>0$ and $T_z=\inf\{ t\geq 0\;:\;X_t\geq z\}$ be the first time when $X_t$ reaches level $z$. Then $T_z$ has Laplace transform $$\ell_{T_z}(s)=\mathbb{E} e^{-sT_z}=\exp\left(-{ z \over {\sigma^2}}(\sqrt{{\delta}^2+2{\sigma^2} s}-{\delta})\right).$$ (2) Let $V_0=V_0({\delta})=\int_{0}^\infty 1_{(-\infty,0)}(X_t)\,dt$ be the total time that $X_t$ spends below zero. Then $V_0$ has Laplace transform $$\ell_{V_0}(s)=\mathbb{E} e^{-sV_0}={ 2{\delta} \over {\delta} +\sqrt{{\delta}^2+2{\sigma^2} s}}.$$ \end{lem} Proofs of (1) and (2) (for ${\sigma^2}=1$) can be found in \cite{KarT} and \cite{Imh}, respectively. Note that $({\delta}^2/2{\sigma^2}) V_0$ has the Laplace transform $ 2/(1 +\sqrt{1+ s})$. We call $A$ a generic random variable with this Laplace transform. The density of $A$ is given by (\ref{density}). To see this, note that $1 / \sqrt{1+s}$ is the Laplace transform of the gamma distribution $\Gamma_{1,{1\over 2}}$, which has density $$\gamma_{1,{1\over 2}}(t)=1_{(0,\infty)}(t)\,{e^{-t} \over \sqrt{\pi t}}.$$ Therefore $[1-(1 / \sqrt{1+s})]/s$ is the Laplace transform of $1-\Gamma_{1,{1\over 2}}(t)=\int_t^\infty \gamma_{1,{1\over 2}}(x)\,dx$. Multiplying the equality $${1 \over 1+\sqrt{1+s}}={1 \over \sqrt{1+s} }- {1 \over s}\Big(1-{1\over \sqrt{1+s}}\Big)$$ by $2$ now yields density (\ref{density}). For $z\geq 0$ let $V_z = \int_{0}^\infty 1_{(-\infty,z)}(X_t)\,dt$ be the total time the process spends below $z$. Then the obvious decomposition (obtained by conditioning on $T_z$) $V_z=T_z+V_0^\prime$ (where $V_0^\prime $ is independent of $T_z$ and distributed as $V_0$) yields \begin{lem}\label{lem2} $V_z$ has Laplace transform $$\ell_{V_z}(s)=\mathbb{E}(e^{-sV_z})=\ell_{T_z}(s)\,\ell_{V_0}(s).$$ \end{lem} The density and distribution function are given in \cite{Imh}.\\ We focus in the sequel on the time spent on the negative axis, but it is also of interest to look at the other classical arcsine variable, i.e., the time of the last exit from $0$.
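The Laplace-transform decomposition just used can be confirmed numerically. The following stdlib-only sketch (ours, not from the paper) computes the Laplace transform of $\gamma_{1,\frac12}(t)-\bigl(1-\Gamma_{1,\frac12}(t)\bigr)$ after the substitution $u=\sqrt{2t}$, under which $\gamma_{1,\frac12}(t)\,dt=2\varphi(u)\,du$ and the gamma tail equals $2\Phi(-u)$, and compares it with $1/(1+\sqrt{1+s})$:

```python
import math

def phi(u):
    """Standard normal density."""
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def Phi_neg(u):
    """Phi(-u), computed via the complementary error function."""
    return 0.5 * math.erfc(u / math.sqrt(2))

def lt_numeric(s, upper=10.0, steps=20000):
    """Simpson's rule for the Laplace transform at s of the signed density
    gamma_{1,1/2}(t) - tail(t), rewritten with u = sqrt(2t) as the smooth
    integrand (2*phi(u) - 2*u*Phi(-u)) * exp(-s*u^2/2)."""
    h = upper / steps  # steps must be even for Simpson's rule
    def f(u):
        return (2 * phi(u) - 2 * u * Phi_neg(u)) * math.exp(-s * u * u / 2)
    total = f(0.0) + f(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3

for s in [0.0, 0.5, 1.0, 2.0, 4.0]:
    exact = 1 / (1 + math.sqrt(1 + s))
    assert abs(lt_numeric(s) - exact) < 1e-8
print("Laplace transform identity verified numerically")
```

Doubling both sides then gives the Laplace transform $2/(1+\sqrt{1+s})$ of $A$.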
Here we determine its distribution. Let ${\delta} \in\mathbb{R}\setminus\{0\}, {\sigma^2} =1$, so that $X_t=B_t+{\delta} t$, and consider $W=\sup\{t\in [0,1]\,:\,X_t=0\}$, the last time $X_t$ visits $0$ in $[0,1]$. Recall that for ${\delta}=0$, i.e., for the standard Brownian motion, the standard arcsine distribution (which has density $1_{(0,1)}(t)\,(1 / (\pi\sqrt{t(1-t)}))$ and distribution function $(2 / \pi)\arcsin(\sqrt{t})$ on $[0,1]$) is the distribution of the last exit time from zero in the interval $[0,1]$. The distribution of $W$ turns out to have a nice representation in terms of the standard arcsine distribution and a truncated exponential distribution. As this result seems new, we provide a proof. \begin{thm}\label{thmle} $W\stackrel{d}{=}C\cdot \min \{1,D_{\delta}\}$ where $C$ and $D_{\delta}$ are independent, $C$ is arcsine-distributed, and $D_{\delta}$ is $\exp({\delta}^2/2)$-distributed. The moments of $W$ are given by $$\mathbb{E}W^k= {2k \choose k}{1 \over 2^{2k}}\,\int_0^1 ky^{k-1} e^{-{\delta}^2y/2}\,dy, \ \ k\ge 1.$$ \end{thm} \begin{proof} We use a random walk approximation in the style of Tak\'acs \cite{Tak}. Let $Y_1,Y_2,\ldots$ be iid with $$\mathbb{P}(Y_i=1)=p={1 \over 2}+{{\delta} \over 2\sqrt{n}}, \ \ \mathbb{P}(Y_i=-1)=q=1-p$$ ($p$ and $q$ depend on $n$, but this is suppressed in the notation) and partial sums $S_0=0,\,S_k=\sum_{i=1}^k Y_i$. It is easy to see that the processes $X^{(n)}$ defined by $$X^{(n)}(t)={1\over \sqrt{n}} S_{\lfloor nt \rfloor}, \ \ 0\le t \le 1 $$ converge in distribution to $X=(X_t)_{t\in [0,1]}$ in $D[0,1]$. Furthermore, the last-exit time from $0$ is continuous in the Skorohod topology on $D[0,1]$ on a set of $P_X$-measure 1, and $$T_n=\sup\{t\in[0,1]\;:\;X^{(n)}(t)=0\} ={1\over n} \max\{0\leq k\leq n\;:\; S_k=0\} =:{M_n / n}.$$ Then it suffices to show that ${M_N/ N}\longrightarrow C\cdot \min\{1,D_{\delta}\}$ as $N\longrightarrow \infty$.
Since ${1/ \sqrt{1-4pqz^2}}$ and $(\sqrt{1-4pqz^2})/(1-z)$ are the generating functions for the sequences of probabilities $\mathbb{P}(S_n=0)$ and $\mathbb{P}(S_1\not = 0,\ldots,S_n\not = 0)$, respectively, the generating function of $M_N$ is \begin{align*} \mathbb{E}t^{M_N} &\ =\sum_{k=0}^N t^{k}\, \mathbb{P}(S_k=0, S_{k+1} \not=0,\ldots,S_{N}\not=0)\\ & =\sum_{k=0}^N t^{k}\, \mathbb{P}(S_k=0)\,\mathbb{P}(S_{1} \neq 0,\ldots,S_{N-k}\neq 0)\\ &=[z^{N}] {1\over \sqrt{1-4pqt^2z^2}}{\sqrt{1-4pqz^2} \over 1-z}\\ &=[z^{N}] {1\over \sqrt{1-4pqt^2z^2}}{\sqrt{1-4pqz^2} \over 1-z^2}(1+z). \end{align*} (Here and in the following $[z^{N}] f(z)$ denotes the coefficient of $z^{N}$ in the Taylor expansion of the function $f(z)$ around zero.) Thus the generating functions for $N=2n+1$ and $N=2n$ are identical and it is enough to consider even $N$. Let $N=2n$ be even (and $n>{\delta}^2$) and $U_n={M_N / 2}$. Then the generating function of $U_n$ is \begin{align*} \mathbb{E}t^{U_n}&=[z^{2n}] {1\over \sqrt{1-4pqtz^2}}{\sqrt{1-4pqz^2} \over 1-z^2}\\ &=[z^{n}] {1\over \sqrt{1-4pqtz}}{\sqrt{1-4pqz} \over 1-z} \end{align*} so that the $k$-th factorial moment $u_{k,n}=\mathbb{E}\left(U_n(U_n-1)\cdots(U_n-k+1)\right)$ of $U_n$ is given by \begin{align*} u_{k,n}&= k! (-1)^k {-\frac{1}{2} \choose k}(4pq)^k\,[z^{n-k}]{1\over ({1-4pqz})^k\,(1-z)}\\ &= k (-1)^k {-\frac{1}{2} \choose k}(4pq)^k\,[z^{n-k}]{1\over (1-z)}\int _{0}^{\infty} x^{k-1} e^{-(1-4pqz)x}\,dx\\ &= (-1)^k {-\frac{1}{2} \choose k}(4pq)^k\,\int _{0}^{\infty} kx^{k-1} e^{-x}\left(\sum_{j=0}^{n-k}\frac{(4pqx)^j}{j!}\right)\,dx. \end{align*} Now denote by $Poiss (\lambda)$ a random variable having the Poisson distribution with parameter $\lambda$.
As $4pq=1-({{\delta}^2}/{2n})$, we obtain \begin{align*} &\int _{0}^{\infty} k x^{k-1} e^{-x}\left(\sum_{j=0}^{n-k}\frac{(4pqx)^j}{j!}\right)\,dx\\ &=\int _{0}^{\infty} k x^{k-1} e^{-x(1-4pq)}\,\mathbb{P}\left(\mathrm{Poiss}(4pqx)\leq n-k \right)\,dx\\ &=n^k\,\int _{0}^{\infty} k y^{k-1} e^{-{\delta}^2 y/2}\,\mathbb{P}\left(\mathrm{Poiss}((n-\frac{{\delta}^2}{2})y)\leq n-k\right)\,dy. \end{align*} By the central limit theorem, $$\mathbb{P}\left(\mathrm{Poiss}((n-\frac{{\delta}^2}{2})y)\leq n-k\right)\longrightarrow\left\{ \begin{array}{cll} &1 &\mbox{ for } 0\leq y<1\\ &\frac{1}{2} &\mbox{ for } y=1 \\ &0 &\mbox{ for } y>1 \end{array}\right.$$ so that for every $k$ we have $$\frac{u_{k,n}}{n^k}\longrightarrow (-1)^k {-\frac{1}{2} \choose k} \int_{0}^1 ky^{k-1}e^{-y{\delta}^2/2}\,dy.$$ Hence $\mathbb{E}T_N^k=\mathbb{E}(M_N/N)^k$ tends to the same limit. This shows the second assertion. Finally, $$\mathbb{E} C^k = (-1)^k {-\frac{1}{2} \choose k}={2k \choose k} {1 \over 2^{2k}}$$ and integration by parts shows that $\int_{0}^1 ky^{k-1}e^{-y{\delta}^2/2}\,dy=\mathbb{E} \min\{1,D_{\delta}^k \}$. Thus all moments of $T_N=M_N/N$ converge to the corresponding moments of $C\cdot\min\{1,D_{\delta} \}$. Since the distribution of $C\cdot\min\{1,D_{\delta} \}$ is clearly determined by its moments, the first assertion follows. \end{proof} \begin{rem} As an immediate consequence of the scaling properties of Brownian motion we see that the distribution of $$W_T=\sup\{ t\leq T\,:\, \sigma B_t+{\delta} t=0\}$$ is the same as that of $C\cdot\min\{T,D_{{\delta}/\sigma} \}$. The time of the last zero of $\;\sigma B_t+{\delta} t\;$ in the interval $[0,\infty)$ is thus distributed as $C\cdot D_{{\delta}/\sigma} $, which is the gamma distribution with parameters ${{\delta}^2 / 2\sigma^2}$ and ${1}/{2}$.
\hfill\vrule height5pt width5pt depth0pt \end{rem} \begin{rem} Clearly $V_0$ (the occupation time on the negative axis) is stochastically smaller than $W_\infty$ (the last exit time from zero), and the results above quantify this precisely. We find e.g. that $$\mathbb{E}(V_0)=\dfrac {{\sigma^2}}{2{\delta}^2}=\dfrac{1}{2}\,\mathbb{E}(W_\infty).$$ \hfill\vrule height5pt width5pt depth0pt \end{rem} \begin{rem} Last-exit times of Brownian motion from moving boundaries have been studied intensively, and more complicated expressions for the density of the last-exit time from a linear boundary were derived in \cite{Sal} and \cite{Imh2}. The representation in Theorem~\ref{thmle} appears to be new, as it is not mentioned in the encyclopedic monograph \cite{BorS}. For the density of the sojourn time, found by Tak\'acs via a random walk limit, two ``purely Brownian'' explanations have been given in \cite{DonY}. It is natural to ask for such an explanation for the representation in Theorem~\ref{thmle}. \hfill\vrule height5pt width5pt depth0pt \end{rem} \section{Limit of occupation times for shifted random walks} In this section we consider a shifted random walk. Specifically, let $(X_{{\delta},1},X_{{\delta},2},\ldots)$ be a parametrized sequence of iid random variables with $\mathbb{E}(X_{{\delta},i})=0$, ${\mathrm{Var}}(X_{{\delta},i})={\sigma^2}({\delta}) \in (0, \infty)$. Let ${\delta}>0$ and $$ Y_i^{\delta} =X_{{\delta},i}+{\delta}, \ \ S_{{\delta},n}=\sum_{i=1}^n X_{{\delta},i}, \ \ S_n^{\delta}=\sum_{i=1}^n Y_i^{\delta}.$$ We are interested in the occupation time $$Z_0^{\delta}=\sum_{n=1}^\infty 1_{(-\infty,0)}(S_n^{\delta}).$$ Throughout this section we assume that ${\sigma^2}({\delta})\longrightarrow {\sigma^2}>0$ as ${\delta}\longrightarrow 0$ and that the following Lindeberg-type condition holds: for every ${\varepsilon}>0$, \begin{equation}\label{lind} \lim_{{\delta}\longrightarrow 0} \int_{|{\delta} X_{{\delta},1}|>{\varepsilon}} X_{{\delta},1}^2\,d\mathbb{P}=0.
\end{equation} These conditions are chosen such that for the triangular array with the variables $$Z_{{\delta},k}=\dfrac{{\delta}}{\sigma({\delta})}X_{{\delta},k}, \ \ k=1,\ldots,\lfloor\dfrac{1}{{\delta}^2}\rfloor $$ the central limit theorem holds: indeed, ${\mathrm{Var}}(Z_{{\delta},1})={\delta}^2$ and the Lindeberg condition for this triangular array reads as $$\lim_{{\delta}\longrightarrow 0} \frac{1}{{\delta}^2}\int_{|Z_{{\delta},1}|>{\varepsilon}{\delta}^2\lfloor\frac{1}{{\delta}^2}\rfloor} Z_{{\delta},1}^2\,d\mathbb{P}= \lim_{{\delta}\longrightarrow 0} \frac{1}{{\sigma^2}({\delta})}\int_{|{\delta} X_{{\delta},1}|>{\varepsilon} \sigma({\delta}) {\delta}^2\lfloor\frac{1}{{\delta}^2}\rfloor} X_{{\delta},1}^2\,d\mathbb{P}= 0 \ \ \text{for every} \ {\varepsilon}>0, $$ which is clearly true under the conditions above. We use ideas similar to those of Prohorov \cite{Pro}, who proved the following: \begin{thm} (Prohorov) In the situation above let $M^{\delta} =\min\{ S_n^{\delta}\;:\;n\geq 0\}$. Then $$\mathbb{P}({\delta} M^{\delta}<-x)\longrightarrow e^{-2x/{\sigma^2}} \ \mbox{ for all } x>0.$$ \end{thm} In \cite{Pro} the maximum in the case of negative drift was considered instead of $M^{\delta}$. The result had been proved earlier by Kingman under the assumption of the existence of an exponential moment. The following lemma will be needed to obtain tightness bounds. \begin{lem} In the situation above let $z\geq 0$ and let ${\delta}_k>0$ be a sequence of positive numbers satisfying $\sup_{k\geq 1} {\sigma^2}({\delta}_k)< \infty$. Then for every ${\varepsilon}>0$ we can find a $T$ such that for all $k$ $$\mathbb{P}(\sup_{n\geq T/{\delta}_k^2} (|S_{{\delta}_k,n}|-n{\delta}_k)\geq -\frac{z}{{\delta}_k})<{\varepsilon}.$$ \end{lem} \begin{proof} First consider a sequence $S_n$ of partial sums of an arbitrary iid sequence $(X_i)$ with $\mathbb{E}(X_1)=0$ and ${\mathrm{Var}}(X_1)={\sigma^2}$. Let $a,b>0$ with $Na>b$, and consider the event $E_N =\{\sup_{n\geq N} (|S_n|-na)\geq -b\}$.
Clearly \begin{align*}E_N&=\bigcup_{j=0}^\infty\left\{\max_{2^jN\leq n< 2^{j+1}N} (|S_n|-na)\geq -b\right\}\\ &\subseteq \bigcup_{j=0}^\infty\left\{\max_{2^jN\leq n< 2^{j+1}N}|S_n|\geq 2^jNa- b \right\}\\ &\subseteq \bigcup_{j=0}^\infty\left\{\max_{n\leq 2^{j+1}N}|S_n|\geq 2^jNa- b \right\}. \end{align*} By Kolmogorov's inequality, $$\mathbb{P}(\max_{n\leq 2^{j+1} N}|S_n|\geq 2^jNa-b)\leq \frac{2^{j+1} N{\sigma^2}}{(2^jNa-b)^2}. $$ Now set $N=\frac{T}{{\delta}^2}, a={\delta}, b=\frac{z}{{\delta}}, X_i=X_{{\delta},i}$. It follows that $$\mathbb{P}(\sup_{n\geq T/{\delta}^2} (|S_{{\delta},n}|-n{\delta})\geq -\frac{z}{{\delta}})\leq\sum_{j=0}^{\infty}\frac{2^{j+1} T {\sigma^2}({\delta})}{(2^jT-z)^2}.$$ The bound on the right side depends on ${\delta}$ only via ${\sigma^2}({\delta})$, and under the assumptions above it can be made arbitrarily small by choosing $T$ large. \end{proof} \begin{cor}\label{cor1} In the situation above let $z\geq 0$ and let ${\delta}_k>0$ be a sequence of positive numbers satisfying $\sup_{k\geq 1} {\sigma^2}({\delta}_k)<\infty$. Then for every ${\varepsilon}>0$ one can find a $T$ such that for all $k$ $$\mathbb{P}(\min_{n\geq T/{\delta}_k^2} {\delta}_k(S_{{\delta}_k,n}+n{\delta}_k)\leq z)<{\varepsilon}. $$ \end{cor} \begin{thm}\label{thmconv} $${{\delta}^2 \over 2{\sigma^2}({\delta})}Z_0^{\delta} \longrightarrow A \ \mbox{ in distribution} \mbox{ as }{\delta}\searrow 0. $$ \end{thm} \begin{proof} By the remark following Lemma~\ref{lem} it suffices to show that ${\delta}^2 Z_0^{\delta}\longrightarrow V_0$ in distribution, where $V_0$ is the time the process $W_t=\sigma B_t+t$ spends below zero. Let $T>0$ and consider the sequence of processes $$U^{\delta}(t)={\delta}\,\sum_{i=1}^{\lfloor t/{\delta}^2\rfloor}Y_i^{\delta}, \;\;\;0\leq t \leq T.$$ By Donsker's limit theorem (in the version for triangular arrays, see e.g.
\cite{Bil}, p.~147), the sequence $U^{\delta}$ converges in distribution in $D[0,T]$ to $\sigma B + \mathrm{id}$, the Brownian motion with variance ${\sigma^2}$ and drift $1$, i.e., the process with coordinate variables $\sigma B_t +t$. For any bounded Borel function $v$ on $[0,T]$ the functional $x\mapsto \int_{0}^T v(x_t) \,dt$ on $D[0,T]$ is Skorohod-measurable and continuous except on a set of $B$-measure $0$ (see e.g. \cite{Bil}, p.~247). Thus, \begin{align*} {\delta}^2 \mbox{card}(\{n\; :\; S_n^{\delta} < 0, 1\leq n \leq T/{\delta}^2\})&=\int_0^{{\delta}^2\lfloor T/{\delta}^2\rfloor} 1_{(-\infty,0)}(U^{\delta}(t))\,dt\\ & \longrightarrow \int_0^T 1_{(-\infty,0)}(W_t)\,dt \mbox{ as }{\delta}\searrow 0 \end{align*} in distribution, and we will be done if we can justify the interchange of the limits $T\longrightarrow \infty$ and $ {\delta} \searrow 0$. Let ${\delta}_k > 0$ be a sequence decreasing to zero and let ${\varepsilon}>0$. By Corollary \ref{cor1} we can find an $N$ such that $\mathbb{P}(\min_{n\geq N/{\delta}_k^2} S_n^{{\delta}_k}\leq 0)<{\varepsilon}$ for all $k$. Thus, \begin{equation}\label{lim}\lim_{T\longrightarrow \infty} \sup_{k\ge 1}\mathbb{P}(\min_{n\geq T/{\delta}_k^2} S_n^{{\delta}_k}\leq 0)=0 \end{equation} and the assertion follows since, by the monotone convergence theorem, $$ \lim_{T\longrightarrow \infty}\int_0^T 1_{(-\infty,0)}(W_t)\,dt = \int_0^\infty 1_{(-\infty,0)}(W_t)\,dt.$$ \end{proof} \begin{rem} A related discussion can be found in \cite{Shnwa}. In that paper, Shneer and Wachtel derived an extension of Kolmogorov's inequality and treated the maximum of random walks with negative drift and step size distributions attracted to a stable law of index ${\alpha}\in(1,2]$. In the case of finite variance (${\alpha}=2$) they already remarked that their results (including in particular the crucial relation \eqref{lim}) remain valid if the conditions assumed above hold.
\hfill\vrule height5pt width5pt depth0pt \end{rem} \begin{rem} Assume that the $X_i$ are independent with $\mathbb{E}(X_i)=0$ and variances ${\mathrm{Var}}(X_i)=\sigma_i^2$ and satisfy Lindeberg's condition. Let $s_i^2=\sum_{k=1}^i \sigma_k^2$. Then the step processes $X_n(t)$ which jump to the value ${S_i / s_n}$ at time ${s_i^2 / s_n^2}$ converge weakly to a standard Brownian motion in $D[0,1]$ (by Prohorov's extension of Donsker's theorem). One may thus expect that they exhibit a similar limiting behavior. \hfill\vrule height5pt width5pt depth0pt \end{rem} Finally, replacing $0$ by $z/{\delta}$ and repeating the steps in the proof of Theorem~\ref{thmconv} yields \begin{thm}\label{thmz} In the situation above let $z>0$ and $Z_z^{\delta}=\sum_{n=1}^\infty 1_{(-\infty,z)}(S^{\delta}_n)$. Then ${\delta}^2 Z^{\delta}_{z/{\delta}} \longrightarrow V_z$ in distribution, where the Laplace transform of $V_z$ is given in Lemma \ref{lem2} with ${\delta}=1$. \end{thm} If here $z$ depends on ${\delta}$ in such a way that ${\delta} z({\delta})\longrightarrow 0$ as ${\delta}\longrightarrow 0$, we find \begin{prp} In the situation above let $(z({\delta}))$ be a sequence of positive numbers with $z({\delta})=\mathrm{o}(1/{\delta})$ and $\sup_{\delta} z({\delta})<\infty$. Then $${\delta}^2 Z^{\delta}_{z({\delta})}\longrightarrow V_0=2{\sigma^2} A \mbox{ as } {\delta} \longrightarrow 0.$$ \end{prp} \begin{proof} Clearly $V_0$ is stochastically smaller than any distributional limit of ${\delta}^2 Z^{\delta}_{z({\delta})}$ (because $Z_0^{\delta}$ is stochastically smaller than $Z_y^{\delta}$ for $y\geq 0$); furthermore, $V_y=T_y+V_0$ is stochastically smaller than $V_z$ for $y\leq z$. Let ${\varepsilon}>0$ and $C=\sup_{\delta} z({\delta})$; then $C<\infty$ and ${\delta}^2 Z^{\delta}_{{\varepsilon} C/{\delta}}\longrightarrow V_{{\varepsilon} C}$ in distribution as ${\delta} \longrightarrow 0$ (by Theorem \ref{thmz}).
Since $z({\delta})\leq C\leq {\varepsilon} C/{\delta}$ for ${\delta}\leq {\varepsilon}$, the variable $Z^{\delta}_{z({\delta})}$ is stochastically smaller than $Z^{\delta}_{{\varepsilon} C/{\delta}}$ for such ${\delta}$, so any distributional limit of ${\delta}^2 Z^{\delta}_{z({\delta})}$ is stochastically smaller than $V_{{\varepsilon} C}$. Letting ${\varepsilon}\searrow 0$, it follows that the distributional limit exists and equals $V_0$. \end{proof} We close this section with an application of Theorem \ref{thmconv} in a frequently encountered situation. \begin{exl}\label{exl1} (Expectation shift in exponential families.)\\ Let $U$ be a non-constant real random variable such that the moment generating function $$m(s)=\mathbb{E} e^{sU}$$ is finite in an open interval $I$ around $0$, and $\mathbb{E}(U)=m^\prime(0)=0$, $\mathrm{Var}(U)={\sigma^2}$. For $p\in I$ let $U_p$ have the ``associated'' distribution with moment generating function $m_p(s)={ m(p+s) \over m(p)}$; clearly $U_p$ has expectation $\mathbb{E}(U_p)={ m^\prime(p) \over m(p)}$ and variance ${\sigma^2}(p)={ m^{\prime\prime}(p)m(p)- (m^\prime(p))^2 \over m(p)^2}$.\\ Let $Z_0(p)$ denote the random variable ``time spent in $(-\infty,0)$'' by the random walk generated by iid variables distributed as $U_p$. Then $${(\mathbb{E}(U_p))^2 \over 2 {\sigma^2}(p)}\,Z_0(p)\longrightarrow A \mbox{ in distribution for } p\searrow 0.$$ \begin{proof} It is well known that $s\mapsto \log m(s)$ is strictly convex on $I$; thus $p\mapsto {m^\prime(p) \over m(p)}=\mathbb{E}(U_p)$ is strictly increasing, and we may parameterize the distributions by ${\delta}(p)= \mathbb{E}(U_p)$. We have ${\delta}(p)\searrow 0$ for $p \searrow 0$ and ${\sigma^2}(p)\longrightarrow {\sigma^2}$ as $p\searrow 0$. Let $X_{{\delta}(p)}=U_p-\mathbb{E}(U_p)$ and $Y^{{\delta}(p)}=X_{{\delta}(p)}+{\delta}(p)=U_p$.
Then the Lindeberg condition \eqref{lind} is satisfied: by Markov's inequality (applied to $X_{{\delta}(p)}^4$) $$\int_{|{\delta}(p) X_{{\delta}(p)}|>{\varepsilon}} X_{{\delta}(p)}^2\,d\mathbb{P}\leq {{\delta}^2(p)\over {\varepsilon}^2}\,\mathbb{E}\big(X_{{\delta}(p)}^4\big),$$ and the fourth moments remain bounded as $p\searrow 0$ since $m$ is finite in a neighborhood of $0$. The claim thus follows from Theorem \ref{thmconv}. \end{proof} \end{exl} \section{The fluctuation theoretic approach} The topics investigated here belong to the fluctuation theory of random walks. We recall some basic facts, which will be used in the sequel and can e.g. be found in Section XII.7 of \cite{Fel2}. We consider a random walk $(S_n)_{n\ge 1}$, i.e., a sequence of partial sums of iid random variables, and let $R=\inf\{ n\geq 1\;:\; S_n<0\}$ and $W=\inf\{n \geq 1\;:\; S_n\geq 0\}$ be the lengths of the first strictly descending and weakly ascending ladder epochs of the random walk, respectively. We denote by $r(z)$ and $a(z)$ the corresponding probability generating functions and set $\mu = \mathbb{E} W $. The occupation time of interest is $Z_0=\sum_{n=1}^\infty 1_{(-\infty, 0)}(S_n)$. \begin{thm}\label{thmSpA}(Sparre Andersen) For $|z|<1$ \begin{align*} {1 \over 1-r(z)}&=\exp \left\{\sum_{n=1}^\infty {z^n \over n} \mathbb{P}(S_n<0)\right\}\\ {1 \over 1-a(z)}&=\exp \left\{\sum_{n=1}^\infty {z^n \over n} \mathbb{P}(S_n \ge 0)\right\} \end{align*} \end{thm} An immediate consequence is the factorization theorem. \begin{thm}(Duality) For $|z|<1$ \begin{align*} (1-r(z))(1-a(z))=1-z. \end{align*} \end{thm} It follows from the factorization theorem that $W$ ($R$) has a finite expected value if and only if $R$ ($W$) is defective, and that the relations $\mathbb{E}(R)\mathbb{P}(W=\infty)=1$ and $\mathbb{E}(W)\mathbb{P}(R=\infty)=1$ hold.
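For a concrete instance, Sparre Andersen's identities and the duality factorization can be checked numerically for the simple $\pm 1$ walk with $p=0.6$, using exact binomial probabilities and a truncated exponential series; we also compare with the classical closed form of the descending ladder-epoch generating function. The truncation level and helper names below are ours.

```python
import math

p, q, z, N = 0.6, 0.4, 0.3, 400

def prob_neg(n):
    # exact P(S_n < 0) for the +/-1 walk: S_n = 2*(#up-steps) - n
    return sum(math.comb(n, j) * p**j * q**(n - j)
               for j in range(n + 1) if 2 * j - n < 0)

s_neg = sum(z**n / n * prob_neg(n) for n in range(1, N))
s_nonneg = sum(z**n / n * (1 - prob_neg(n)) for n in range(1, N))
one_minus_r = math.exp(-s_neg)     # 1 - r(z) by Sparre Andersen's theorem
one_minus_a = math.exp(-s_nonneg)  # 1 - a(z)

# duality: (1 - r(z)) (1 - a(z)) = 1 - z
print(one_minus_r * one_minus_a, 1 - z)  # both ≈ 0.7

# classical closed form for the descending ladder epoch of the simple walk
r_closed = (1 - math.sqrt(1 - 4 * p * q * z * z)) / (2 * p * z)
print(one_minus_r, 1 - r_closed)         # agree
```

Since $\mathbb{P}(S_n<0)+\mathbb{P}(S_n\ge 0)=1$, the product of the two exponentials is exactly $\exp(\log(1-z))=1-z$ up to the (exponentially small) truncation error.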
At the combinatorial heart of fluctuation theory is the ``Sparre Andersen transformation'' (made explicit by Feller and refined by Bizley and Joseph) given in Lemma 3 of XII.8 of \cite{Fel2}: \begin{lem} Let $x_1,\ldots,x_n$ be real numbers with exactly $k\geq 0$ negative partial sums $s_{i_1},\ldots,s_{i_k}$, where $i_1>\ldots> i_k$. Write down $x_{i_1},\ldots,x_{i_k}$ followed by the remaining $x_i$ in their original order. (If $k=0$, the sequence remains unchanged.) The transformation thus defined is invertible, and the first (absolute) minimum of the partial sums of the new arrangement occurs at the $k$-th place. \end{lem} Clearly this extends to {\it infinite sequences} with exactly $k$ negative partial sums: just apply the bijection above to an initial section large enough to contain all the negative partial sums, and leave the rest unchanged. The following formulas express the generating function of $Z_0$ in terms of $r(z)$ or of $a(z)$, respectively. \begin{thm} \begin{align}\label{asc} \mathbb{E} z^{Z_0}={1-r(1) \over 1-r(z)} = {1 \over \mu}{ 1-a(z) \over 1-z} =\exp\left\{-\sum_{k=1}^\infty (1-z^k)\frac{\mathbb{P}(S_k<0)}{k} \right\}. \end{align} \end{thm} \begin{proof} According to the preceding lemma, to each sequence $x_1,x_2,\ldots$ with exactly $k$ negative partial sums there corresponds (by a finite reordering) a unique sequence whose first (absolute) minimum occurs at the $k$-th place. The partial sums $s_0=0,s_1,s_2,\ldots$ of the rearranged sequence consist of a first part $s_0,s_1,\ldots,s_k$ and a second part $s_{k+1},s_{k+2},\ldots $ such that $s_i>s_k$ for $i< k$ and $s_i-s_k\geq 0$ for $i>k$. For a random walk the joint distribution of the $X_i$ is invariant under finite permutations, and the two parts are independent.
The first part has probability $$\mathbb{P}(0>S_k,S_1>S_k,\ldots,S_{k-1}>S_k)=\mathbb{P}(S_1<0,\ldots,S_k<0)$$ (by reversing the order of the variables), the second part has probability $$\mathbb{P}(S_{k+1}-S_k \geq 0,S_{k+2}-S_k\geq 0,\ldots)=\mathbb{P}(S_1\geq 0,S_2\geq 0,\ldots)=1-r(1).$$ This yields the first equation of (\ref{asc}). The second one follows immediately from the factorization identity $(1-a(z))(1-r(z))=1-z$ (recall Theorem 4.2) and the third one from Sparre Andersen's theorem. \end{proof} In some cases $r(z)$ can be computed in closed form, and the asymptotics of $Z_0$ can be obtained from an explicit formula. An example is the normal random walk. Let the iid steps $X_i$ be $N({\delta}, {\sigma^2})$-distributed. Here we only assume that ${\delta} \neq 0$, i.e., we consider the cases of positive and negative ${\delta}$ simultaneously, and let $d:=|{\delta}|$, $q:=\frac{{\delta}^2}{2{\sigma^2}}$. \begin{exl} For the normal random walk we have\\ (a) $ r(z)=1-(1-z)^{{1 \over 2}}\,\exp \left(\operatorname{sign}({\delta}) {d^2 \over \pi{\sigma^2}} {\displaystyle \int_{0}^{1}\int_0^{\infty} \,{ e^{-d^2(y^2+x^2) / {2{\sigma^2}}} \over 1-ze^{-d^2(y^2+x^2) / {2{\sigma^2}}}}}\,dy\,dx \right). $ (b) $qZ_0\longrightarrow A$ in distribution as $q\searrow 0$, ${\delta} \searrow 0$. (c) $r(e^{-qs})^{1/\sqrt{q}}\longrightarrow e^{-(\sqrt{1+s} -1)}$ as $q \searrow 0$, ${\delta} \nearrow 0$. \end{exl} Note that here ${\sigma^2}$ may vary with ${\delta}$; it is only essential that ${\delta}/\sigma \longrightarrow 0$.
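A quick numerical illustration (ours, not from the paper) of the small-$q$ behavior entering (b) and (c): writing the normal tail integral through the error function, $G(z)=\sum_{n\ge 1} z^n\,\mathrm{erf}(\sqrt{qn})/(2n)$, the series at $z=e^{-qs}$ can be summed directly and compared with the limit $\log\big((1+\sqrt{1+s})/\sqrt{s}\big)$ derived in the proof below. The truncation level and the (rather generous) tolerance are our choices.

```python
import math

def G(q, s, terms=400000):
    # G(e^{-qs}) = sum_{n>=1} e^{-qsn} * erf(sqrt(q*n)) / (2*n),
    # since Phi(d*sqrt(n)/sigma) - 1/2 = erf(sqrt(q*n))/2 with q = d^2/(2 sigma^2)
    return sum(math.exp(-q * s * n) * math.erf(math.sqrt(q * n)) / (2 * n)
               for n in range(1, terms + 1))

s = 1.0
limit = math.log((1 + math.sqrt(1 + s)) / math.sqrt(s))  # = log(1 + sqrt 2) ≈ 0.8814
approx = G(1e-4, s)
print(approx, limit)
```

The discrepancy is of order $\sqrt{q}$ (the sum is a Riemann sum of an integrand with a $u^{-1/2}$ singularity at the origin), so for $q=10^{-4}$ the two values agree to about two decimal places.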
\begin{proof} Directly from Sparre Andersen's theorem we find that \begin{align*} \log\left({1 \over 1-r(z)}\right)&=\sum_{n=1}^\infty {z^n \over n} \mathbb{P}(S_n<0) =\sum_{n=1}^\infty {z^n \over n} \int_{-\infty}^{-n{\delta}} {1 \over \sqrt{2n\pi{\sigma^2}}}\, e^{-x^2 / {2n{\sigma^2}}}\,dx \\ & =\sum_{n=1}^\infty {z^n \over n}\left({1 \over 2} - \operatorname{sign}({\delta})\, \int_{0}^{n d} {1 \over \sqrt{2n\pi{\sigma^2}}}\, e^{-x^2 / {2n{\sigma^2}}}\,dx\right). \end{align*} Hence, \begin{align}\label{normal} 1-r(z)=(1-z)^{{1 \over 2}}\,\exp(\operatorname{sign}({\delta}) G(z)), \end{align} where $$G(z)=\sum_{n=1}^\infty {z^n \over n}\, \int_{0}^{nd} {1 \over \sqrt{2n\pi{\sigma^2}}}\, e^{{-x^2 / 2n{\sigma^2}}}\,dx.$$ We have \begin{align*} \int_{0}^{nd} {1 \over \sqrt{2n\pi{\sigma^2}}}\, e^{-x^2 / {2n{\sigma^2}}}\,dx&= \int_{0}^{d} \sqrt{ {n \over 2\pi{\sigma^2}}}\, e^{-ny^2 / {2{\sigma^2}}}\,dy = {n \over \pi{\sigma^2}} \int_{0}^{d}\int_0^{\infty} \, e^{-n(y^2+x^2) /{2{\sigma^2}}}\,dy\,dx \\ &= {nd^2 \over \pi{\sigma^2}} \int_{0}^{1}\int_0^{\infty} \, e^{-nd^2(y^2+x^2) / {2{\sigma^2}}}\,dy\,dx \end{align*} and therefore $$G(z)={d^2 \over \pi{\sigma^2}} \int_{0}^{1}\int_0^{\infty} \,{ e^{-d^2(y^2+x^2) / {2{\sigma^2}}} \over 1-ze^{-d^2(y^2+x^2) / {2{\sigma^2}}}}\,dy\,dx, $$ proving (a). Note that $G(z)$ depends only on the ratio $q={d^2/ 2{\sigma^2}}$. Fix $s>0$. Setting $z=e^{-qs}$ we obtain for $q\searrow 0$ (by dominated convergence): \begin{align*} G(e^{-qs})&={2\over \pi} \int_{0}^{1}\int_0^{\infty} \,{ q e^{-q(y^2+x^2) } \over 1-e^{-q(s+y^2+x^2) }}\,dy\,dx \\ & \longrightarrow {2\over \pi} \int_{0}^{1}\int_0^{\infty} \,{ 1 \over s+y^2+x^2}\,dy\,dx =\log \left({1+\sqrt{1+s}\over \sqrt{s}}\right). \end{align*} From this (b) and (c) follow easily. \end{proof} It is of methodological interest to have also a purely fluctuation-theoretic proof of Theorem 3.4, i.e., a proof which does not rely on the ``functional limit theorem'' approach used above.
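The closed-form evaluation of the limiting double integral in the proof above is easy to confirm numerically: the inner $y$-integral equals $(\pi/2)/\sqrt{s+x^2}$, so the double integral reduces to a one-dimensional integral handled by a midpoint rule. This check (with our choice of $s$ and grid) is illustrative only.

```python
import math

s = 0.7
# (2/pi) * ∫_0^1 ∫_0^∞ dy dx / (s + y^2 + x^2):
# the inner y-integral is (pi/2)/sqrt(s + x^2), so the whole expression
# equals ∫_0^1 dx / sqrt(s + x^2) = log((1 + sqrt(1+s)) / sqrt(s)).
m = 200000
h = 1.0 / m
numeric = sum(1.0 / math.sqrt(s + ((j + 0.5) * h) ** 2) for j in range(m)) * h
closed = math.log((1 + math.sqrt(1 + s)) / math.sqrt(s))
print(numeric, closed)  # agree
```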
The referee suggested the following alternative derivation of Theorem~\ref{thmconv} based on Theorem 4.4. Assume the conditions introduced in Section 3. \begin{thm}[= Theorem 3.4] $${{\delta}^2 \over 2{\sigma^2}({\delta})}Z_0^{\delta} \longrightarrow A \ \mbox{ in distribution} \mbox{ as }{\delta}\searrow 0. $$ \end{thm} \begin{proof} In principle, we follow the line of argument used for a similar proof in \cite{Shnwa}. Let ${\varepsilon}>0$ and split the series in the exponent of the right-hand side of \eqref{asc} into three parts: $$\sum_{k=1}^\infty =\sum_{k=1}^{{\varepsilon}/{\delta}^2}\; + \; \sum_{{\varepsilon}/{\delta}^2}^{T/{\delta}^2}\; + \;\sum_{T/{\delta}^2}^\infty=\sum\nolimits_1 +\sum\nolimits_2 +\sum\nolimits_3\;.$$ Let $s>0$ and set $$z=e^{-s{\delta}^2/2{\sigma^2}({\delta})}.$$ We consider the different sums separately, starting with $\sum\nolimits_1$: since $1-z^k\leq k(1-z)\leq ks{\delta}^2/2{\sigma^2}({\delta})$, $$\sum_{k=1}^{{\varepsilon}/{\delta}^2} (1-z^k) \frac{\mathbb{P}(S^{\delta}_k<0)}{k}\leq \frac{s{\delta}^2}{2{\sigma^2}({\delta})} \sum_{k=1}^{{\varepsilon}/{\delta}^2} \mathbb{P}(S^{\delta}_k<0)\leq \frac{s{\varepsilon}}{2{\sigma^2}({\delta})}.$$ Furthermore, $\mathbb{P}(S_k^{\delta}<0)= \mathbb{P}(\sum_{j=1}^k X_{{\delta},j}<-k{\delta})\leq {\sigma^2}({\delta})/(k{\delta}^2)$ by Chebyshev's inequality. Therefore we obtain for ${\varepsilon}>{\delta}^2$ $$\sum_{k\geq {\varepsilon}/{\delta}^2}\frac{(1-z^k)}{k}\mathbb{P}(S^{\delta}_k<0) \leq \frac{{\sigma^2}({\delta})}{{\delta}^2}\sum_{k\geq {\varepsilon}/{\delta}^2}\frac{1}{k^2} \leq\frac{{\sigma^2}({\delta})}{{\delta}^2} \int_{{\varepsilon}/{\delta}^2}^\infty \frac{1}{(x-1)^2}\,dx=\frac{{\sigma^2}({\delta})}{{\varepsilon}-{\delta}^2}.$$ Since ${\sigma^2}({\delta})\longrightarrow {\sigma^2}\in(0,\infty)$ as ${\delta}\longrightarrow 0$, there is a ${\delta}_0$ such that $2{\delta}_0^2<{\varepsilon}$ and ${\sigma^2}({\delta})$ is bounded for ${\delta}\leq {\delta}_0$.
Without loss of generality assume in the sequel ${\delta}\leq{\delta}_0$. Then, by the same computation with $T$ in place of ${\varepsilon}$, $\sum\nolimits_3$ can be made arbitrarily small by a suitable choice of $T$, and $\sum\nolimits_2\leq 2C/{\varepsilon}$ for a suitable constant $C$. For $\sum\nolimits_2$ we use the asymptotic normality of ${\delta} S^{\delta}_{t/{\delta}^2}$ (which is implied by the Lindeberg condition, see the beginning of Section 3): $$\mathbb{P}({\delta} S_k^{\delta}<0) \longrightarrow \mathbb{P}(N(t,{\sigma^2} t)<0)=\Phi(-\sqrt{\frac{t}{{\sigma^2}}})\; \mbox{ as }\; {\delta} \longrightarrow 0,\,k{\delta}^2\longrightarrow t$$ (uniformly for $t\in [{\varepsilon},T]$), and by dominated convergence we conclude that $$\sum\nolimits_2 \longrightarrow \int_{\varepsilon}^T \frac{1-e^{-st/2{\sigma^2}}}{t}\, \Phi(-\sqrt{\frac{t}{{\sigma^2}}})\,dt.$$ Letting ${\varepsilon}\longrightarrow 0, T\longrightarrow \infty$ we finally arrive at \begin{align} \label{domi} \mathbb{E} e^{-s\frac{{\delta}^2}{2{\sigma^2}({\delta})}Z_0^{\delta}}\longrightarrow \exp\{-\int_0^\infty \frac{1-e^{-su}}{u}\,\Phi(-\sqrt{2u})\,du\}. \end{align} Evaluating the integral finishes the proof. Avoiding the calculation, it suffices to notice that the right side of (\ref{domi}) is independent of the underlying distribution of the random walk, so that one can look at the example of the normal random walk computed above, which leads to the conclusion that the right side of (\ref{domi}) is equal to $2/(1+\sqrt{1+s})$. \end{proof} The advantage of this proof is that it generalizes to the ${\alpha}$-stable case ($1<{\alpha}<2$) essentially unchanged: the main difficulties (the corresponding estimates for these cases) can be overcome using inequality $(6)$ in \cite{Shnwa}. We close this section with a few remarks on the simple random walk taking step $+1$ with probability $p>1/2$ and step $-1$ with probability $q=1-p$.
It is well-known that in this example $$r(z)={1 -\sqrt{1-4pqz^2} \over 2pz},$$ so that a quick calculation (rationalizing with the conjugate factor) shows that $$ \mathbb{E} z^{Z_0}={1-r(1) \over 1-r(z)} = \dfrac{(p-q)\left(1-2pz+\sqrt{ 1-4pqz^2}\right)}{2p(1-z)}$$ and $2(p-\frac{1}{2})^2Z_0 \longrightarrow A$ in distribution as $p\searrow 1/2$. \begin{rem} Let $T_0(p)=\sup\{n\ge 0\,:\,S_n^{(0)}=0\}$ be the time of the last return to the origin. In the symmetric case $p=1/2$ the walk is persistent and $T_0(1/2)=\infty$ almost surely. In the transient case $p>1/2$, $T_0(p)$ has generating function $$h(z)={ p-q \over \sqrt{1-4pqz^2}}.$$ A short computation yields that ${1 \over 2}(p-q)^2T_0(p)$ converges in distribution as $p\searrow 1/2$, the limiting distribution having Laplace transform $\dfrac{1}{\sqrt{ 1+s}}$, i.e., being the $\Gamma_{1,{1 \over 2}}$ distribution with density $\gamma_{1,{1 \over 2}}(t)$ as above. \hfill\vrule height5pt width5pt depth0pt \end{rem} \begin{rem} Let $N_0(p)$ denote the number of zeros of the random walk and set ${\delta}=p-q$. Then $$\mathbb{P}(N_0(p)=r,\,T_0(p)=2n)={r \over 2n-r}{2n- r \choose n}\, 2^r\,(pq)^n\,(p-q)$$ and $(\delta N_0(p),{1 \over 2}\delta^2 T_0(p))$ converges weakly to the distribution with density $$f(y,t)=1_{(0,\infty)}(y)\,1_{(0,\infty)}(t)\,{y \over 2t}{1 \over \sqrt{\pi t}}\, e^{-({y^2 / 4t})\,-t}\;\;.$$ In particular, $\delta N_0(p)$ is asymptotically $\exp(1)$. For the symmetric random walk let $N_{0, 2n}$ denote the number of zeros up to time $2n$. A classical theorem of Chung--Hunt \cite{ChH} states that $\sqrt{{2 / n}} N_{0, 2n} $ is asymptotically distributed as $ |N(0,1)|$. All these results show that deviations from the symmetric random walk become clearly visible after $n\approx \delta^{-2}$ steps. While characteristics like the positive sojourn time and the last exit time from zero are in both cases of approximately the same size, their distributions differ. For the last exit time from zero a precise description is given in Theorem~\ref{thmle}.
\hfill\vrule height5pt width5pt depth0pt \end{rem} The distribution of $A$ thus occurs naturally as a limit of occupation times for random walks with drift. It is well-known (see e.g. Section XIV.3 in \cite{Fel2}) that the deeper reason for the frequent occurrence of the (generalized) arcsine distributions lies in their intimate connection to distribution functions with regularly varying tails. The same explanation applies here. In the case of drift zero the distribution functions of the ladder epochs are attracted to the standard positive stable distribution of index 1/2 and the positive (negative) sojourn times are asymptotically arcsine-distributed. In the cases with small drift (and finite variance) the ladder epochs are attracted to an associated distribution of this stable distribution, and therefore the positive (negative) sojourn times have asymptotically the distribution of $A$. {\bf Acknowledgement}. We would like to thank the referee for valuable remarks and in particular for suggesting the alternative proof of Theorem \ref{thmconv} given in Section 4.
https://arxiv.org/abs/1601.00609
Small drift limit theorems for random walks
We show analogs of the classical arcsine theorem for the occupation time of a random walk in $(-\infty,0)$ in the case of a small positive drift. To study the asymptotic behavior of the total time spent in $(-\infty,0)$ we consider parametrized classes of random walks, where the convergence of the parameter to zero implies the convergence of the drift to zero. We begin with shift families, generated by a centered random walk by adding to each step a shift constant $a>0$ and then letting $a$ tend to zero. Then we study families of associated distributions. In all cases we arrive at the same limiting distribution, which is the distribution of the time spent below zero of a standard Brownian motion with drift 1. For shift families this is explained by a functional limit theorem. Using fluctuation-theoretic formulas we derive the generating function of the occupation time in closed form, which provides an alternative approach. In the course we also give a new form of the first arcsine law for the Brownian motion with drift.
https://arxiv.org/abs/1908.04065
On nilpotent generators of the symplectic Lie algebra
Let $\mathfrak{sp}_{2n}(\mathbb {K})$ be the symplectic Lie algebra over an algebraically closed field of characteristic zero. We prove that for any nonzero nilpotent element $X \in \mathfrak{sp}_{2n}(\mathbb {K})$ there exists a nilpotent element $Y \in \mathfrak{sp}_{2n}(\mathbb {K})$ such that $X$ and $Y$ generate $\mathfrak{sp}_{2n}(\mathbb {K})$.
\section{Introduction} It is an important problem to find a minimal generating set of a given algebra. This problem was studied actively for semisimple Lie algebras. In 1951, Kuranishi \cite{MK} observed that any semisimple Lie algebra over a field of characteristic zero can be generated by two elements. Twenty-five years later, Ionescu \cite{TI} proved that for any nonzero element $X$ of a complex or real simple Lie algebra $\g$ there exists an element $Y$ such that the elements $X$ and $Y$ generate the Lie algebra $\g$. In 2009, Bois~\cite{JB} proved that any simple Lie algebra in characteristic different from~$2$ and $3$ can be generated by two elements. He also extended Ionescu’s result to classical simple Lie algebras over a field of characteristic different from~$2$ and $3$. In~\cite{AC} we obtained an analogue of Ionescu's result for nilpotent generators of the special linear Lie algebra $\sl_n$. This paper is devoted to the case of the symplectic Lie algebra over an algebraically closed field $\K$ of characteristic zero. \begin{theorem} \label{theorem} For any nonzero nilpotent element $X \in \sp$ there exists a nilpotent element $Y \in \sp$ such that $X$ and $Y$ generate $\sp$. \end{theorem} The idea of the proof is to reduce the problem to the case when $X$ is the lowest weight vector in the adjoint representation \cite[Theorem~4.3.3]{CM}. In this case we take as $Y$ an element from the principal nilpotent orbit. In the last section we discuss a conjecture generalizing Theorem~\ref{theorem} to an arbitrary simple Lie algebra. We also hope for possible applications of our results to transitivity properties of the symplectomorphism group of $\A^{2n}$. More precisely, Conjecture 2 claims that for all~$n \geqslant 2$ one can find three one-parameter subgroups $U_1$, $U_2$, $U_3$ of symplectomorphisms of~$\A^{2n}$ such that the group $H = \langle U_1, U_2, U_3 \rangle $ acts infinitely transitively on~$\A^{2n}$. 
\smallskip The author is grateful to her supervisor Ivan Arzhantsev for posing the problem and permanent support and to Ernest Vinberg for useful discussions and suggestions. \section{Main results} Let $\g$ be a semisimple Lie algebra over an algebraically closed field $\K$ of characteristic zero and $\Phi$ be the root system of the algebra $\g$. Fix a Cartan subalgebra $\h$ and consider the root space decomposition with respect to $\h$: $$ \g = \h \oplus \bigoplus\limits_{\alpha \in \Phi} \g_{\alpha}. $$ An element $T \in \h$ is called \textit{consistent} if $\alpha(T) \neq 0$ holds for any $\alpha \in \Phi$, and for all~$\alpha$,~$\beta \in \Phi$ the condition $\alpha \neq \beta$ implies $\alpha(T) \neq \beta(T)$. Denote by $v_{\alpha}$ an arbitrary nonzero element of the line $\g_{\alpha}$. \begin{lemma} \label{Vandermonde} A consistent element $T$ and the element $N = \sum\limits_{\alpha \in \Phi}v_{\alpha}$ generate $\g$. \end{lemma} \begin{proof} Consider the following $m = \dim \g - \dim \h$ elements: $$ A_1 = [T, N],\quad A_i = [T, A_{i-1}],\quad i = 2,\ldots,m. $$ Note that all elements $A_i$ belong to $\bigoplus\limits_{\alpha \in \Phi} \g_{\alpha} $ and $A_1,\ldots, A_m$ are linearly independent. Indeed, if we consider their coordinates in the basis $\{v_{\alpha}\}_{\alpha \in \Phi}$, we arrive at a Vandermonde-type matrix built from the numbers $\alpha(T)$, where $\alpha\in \Phi$. Since $T$ satisfies the consistency conditions, this matrix has a nonzero determinant. Hence the elements $A_i$ form a basis of the space~$\bigoplus\limits_{\alpha \in \Phi} \g_{\alpha}$. This means that the subalgebra generated by $T$ and $N$ contains all $v_{\alpha}$ and coincides with $\g$. \end{proof} Let $V$ be a $2n$-dimensional vector space over $\K$. The symplectic Lie algebra $\sp$ is the algebra of all linear transformations of $V$ which annihilate a non-degenerate skew-symmetric bilinear form~$\Omega$. Below in this section we consider $\g = \sp$.
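To make Lemma~\ref{Vandermonde} concrete, here is a small computational illustration (ours, not from the paper) for $\g=\mathfrak{sp}_4$ over $\mathbb{Q}$: with a consistent diagonal element $T$ and $N$ the sum of the eight root vectors, the iterated brackets $A_1,\ldots,A_8$ have full rank, hence span all root spaces. The particular basis and the value of $T$ are our choices.

```python
from fractions import Fraction as F

n = 4  # matrix size for sp_4

def mat(entries):
    M = [[F(0)] * n for _ in range(n)]
    for (i, j), c in entries.items():
        M[i][j] = F(c)
    return M

def bracket(X, Y):  # [X, Y] = XY - YX
    return [[sum(X[i][k] * Y[k][j] - Y[i][k] * X[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Cartan element T = diag(1, 1/5, -1, -1/5): the root values
# {±4/5, ±6/5, ±2, ±2/5} are nonzero and pairwise distinct, so T is consistent.
T = mat({(0, 0): 1, (1, 1): F(1, 5), (2, 2): -1, (3, 3): F(-1, 5)})

# the eight root vectors of sp_4 (w.r.t. Omega = [[0, E], [-E, 0]], 0-based indices)
roots = [
    mat({(0, 1): 1, (3, 2): -1}), mat({(1, 0): 1, (2, 3): -1}),  # ±(e1 - e2)
    mat({(0, 2): 1}), mat({(2, 0): 1}),                          # ±2 e1
    mat({(1, 3): 1}), mat({(3, 1): 1}),                          # ±2 e2
    mat({(0, 3): 1, (1, 2): 1}), mat({(3, 0): 1, (2, 1): 1}),    # ±(e1 + e2)
]
N = [[sum(R[i][j] for R in roots) for j in range(n)] for i in range(n)]

vectors, A = [], N
for _ in range(8):                      # A_1, ..., A_8, flattened
    A = bracket(T, A)
    vectors.append([A[i][j] for i in range(n) for j in range(n)])

def rank(rows):                         # Gaussian elimination over Q
    rows, rk = [r[:] for r in rows], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[rk])]
        rk += 1
    return rk

print(rank(vectors))  # 8: the brackets span all eight root spaces
```

Since $[T,v_\alpha]=\alpha(T)\,v_\alpha$, the coordinate matrix of $A_1,\ldots,A_8$ is exactly the (scaled) Vandermonde matrix of the proof, and the full rank confirms the linear independence.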
\begin{lemma} \label{lowestnilp} Theorem~\ref{theorem} holds if $X$ is the lowest weight vector. \end{lemma} \begin{proof} Let us fix a basis $e_1,\dots,e_{2n}$ in $V$ such that $\Omega = \begin{pmatrix} 0 & E \\ -E & 0 \end{pmatrix}$. Then the subspace of all diagonal matrices in $\g$ is a Cartan subalgebra $\h$. Consider the set of simple roots~$\Delta$ corresponding to the matrices $E_{i, i+1} - E_{n+i+1, n+i}$, $i=1,\ldots,n-1$ and $E_{n, 2n}$. Choose these matrices as $v_\alpha$, $\alpha \in \Delta$. Denote by~$\psi$ the highest root with respect to $\Delta$. It corresponds to the matrix unit $v_{\psi} = E_{1, n+1}$, and the lowest root $-\psi$ corresponds to the matrix $v_{-\psi} = E_{n+1, 1}$. Define $$ T = (-1)^{n-1}v_{-\psi} + \sum_{\a\in \Delta} v_{\a} = \begin{pmatrix} 0 & 1 & \ldots & 0 & 0 & \ldots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \dots & 1 & 0 & \ldots & 0 & 0\\ 0 & 0 & \dots & 0 & 0 & \ldots & 0 & 1 \\ (-1)^{n-1} & 0 & \ldots & 0 & 0 & \ldots & 0 & 0\\ 0 & 0 & \dots & 0 & -1 & \ldots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \dots & 0 & 0 & \ldots&-1 & 0 \end{pmatrix}. $$ Our aim is to apply Lemma~\ref{Vandermonde} to the elements $T$ and $X=v_{-\psi}$. Note that Lemma~\ref{Vandermonde} holds if we replace $N$ by $N + H$, where $H \in \h$. Since $T^{2n} = E$, the element $T$ is semisimple. Moreover, if $\lambda$ is a root of unity of order $2n$ then $$ v = e_1 + \lambda e_2 + \lambda^2e_3+\dots+ \lambda^{n-1}e_n + \frac{(-1)^{n-1}}{\lambda}e_{n+1}+ \frac{(-1)^n}{\lambda^2}e_{n+2} +\dots+ \frac{1}{\lambda^n}e_{2n} $$ is an eigenvector of $T$ with the eigenvalue~$\lambda$. It follows that the eigenvalues of the operator~$T$ are exactly the roots of unity of order $2n$ and thus $T$ is consistent. Since $T$ is semisimple, one can include it into a Cartan subalgebra $\widetilde{\h}$. 
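The eigenvalue claim can be checked numerically in the smallest case $n = 2$ (a sketch of ours, not part of the paper; indices are 1-based as in the text):

```python
import numpy as np

# For n = 2, T = (-1)^{n-1} E_{n+1,1} + sum of the chosen simple root vectors,
# i.e. T = E_{12} + E_{24} - E_{31} - E_{43}.
def E(i, j):
    M = np.zeros((4, 4)); M[i - 1, j - 1] = 1.0; return M

T = E(1, 2) + E(2, 4) - E(3, 1) - E(4, 3)

# T^{2n} is the identity, so T is semisimple with eigenvalues among 4th roots of 1.
assert np.allclose(np.linalg.matrix_power(T, 4), np.eye(4))

lam = 1j  # a primitive root of unity of order 2n = 4
# The displayed eigenvector for n = 2: e1 + lam*e2 - (1/lam)*e3 + (1/lam^2)*e4.
v = np.array([1, lam, -1 / lam, 1 / lam ** 2])
print(np.allclose(T @ v, lam * v))  # True
```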
Consider the root space decomposition of $\g$ with respect to $\widetilde{\h}$: $ \g = \widetilde{\h} \oplus \bigoplus\limits_{\beta \in \Phi} \widetilde{\g}_{\beta}. $ Let us now show that $X = \widetilde{H} + \sum\limits_{\beta\in \Phi} \widetilde{v}_{\beta}$, where $\widetilde{H} \in \widetilde{\h}$ and $\widetilde{v}_{\beta}$ is an arbitrary nonzero element of the line $\widetilde{\g}_{\beta}$. The subalgebras $\h$ and~$\widetilde{\h}$ are conjugate under an element of the group $\operatorname {Sp}_{2n}(\K)$. In other words, there exists a matrix $C \in \operatorname {Sp}_{2n}(\K)$ such that $C^{-1}TC$ is a diagonal matrix. It remains to show that all entries of the matrix $C^{-1}E_{n+1, 1}C$ outside the main diagonal are non-zero. It suffices to check that all entries of the $(n+1)$th column of the matrix $C^{-1}$ are non-zero, which is equivalent to the fact that $e_{n+1}$ and any $2n-1$ eigenvectors of the operator~$T$ are linearly independent. Let $\xi$ be a primitive root of unity of order $2n$. Since the columns of the matrix~$C$ are eigenvectors of the operator~$T$, it coincides with the matrix $$ \begin{pmatrix} 1 & 1 & 1 & \ldots & 1\\ 1 & \xi & \xi^2 & \dots & \xi^{2n-1} \\ 1 & \xi^2 & \xi^4 & \dots & (\xi^2)^{2n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \xi^{n-1} & (\xi^{n-1})^2 & \dots & (\xi^{n-1})^{2n-1}\\ 1 & \xi^{-1} & (\xi^{-1})^2 & \dots & (\xi^{-1})^{2n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \xi^{-n} & (\xi^{-n})^2 & \dots & (\xi^{-n})^{2n-1}\\ \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & \ldots & 1\\ 1 & \xi & \xi^2 & \dots & \xi^{2n-1} \\ 1 & \xi^2 & \xi^4 & \dots & (\xi^2)^{2n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \xi^{n-1} & (\xi^{n-1})^2 & \dots & (\xi^{n-1})^{2n-1}\\ 1 & \xi^{2n-1} & (\xi^{2n-1})^2 & \dots & (\xi^{2n-1})^{2n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \xi^n & (\xi^n)^2 & \dots & (\xi^{n})^{2n-1}\\ \end{pmatrix} $$ up to permutation and scalar multiplication of rows or columns. 
If we replace any column of this matrix by the coordinates of the vector $e_{n+1}$, we get a Vandermonde matrix for $2n-1$ different numbers from the set $\{1,\xi,\dots,\xi^{2n-1}\}$. Using Lemma~\ref{Vandermonde} we obtain that $T$ and $X = v_{-\psi}$ generate $\sp$. So the elements $X$ and $Y = T + (-1)^{n}\cdot X=\sum\limits_{\a\in \Delta} v_{\a}$ generate $\sp$ and the element $Y$ is nilpotent. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem}] Let $X$ be a non-zero nilpotent element of $\sp$. The closure of the orbit of $X$ under the adjoint action of $\operatorname {Sp}_{2n}(\K)$ contains the lowest weight vector $X_0$ \cite[Theorem~4.3.3]{CM}. Lemma~\ref{lowestnilp} implies that there exists a nilpotent $Y_0 \in \sp$ such that $X_0$ and $Y_0$ generate $\sp$. Note that the set of elements $Z$ such that $Z$ and $Y_0$ generate $\sp$ is open in the Zariski topology and contains $X_0$. Since $X_0$ lies in the closure of the orbit of $X$, this open set contains an element from the orbit of $X$. It follows that for the element $X$ there exists an element $Y$ in the orbit of $Y_0$ such that $X$ and $Y$ generate~$\sp$. \end{proof} \section{Examples} Let us give two examples with specific pairs of nilpotent matrices generating $\sp$. \begin{example} It follows from Lemma~\ref{lowestnilp} that the matrices $N = \sum\limits_{i=1}^{n-1}{(E_{i,i+1} - E_{n+i+1,n+i})} + E_{n, 2n}$ and $E_{n+1, 1}$ generate $\sp$. \end{example} Let $\g$ be a semisimple Lie algebra over an algebraically closed field $\K$ of characteristic zero. Let $\Phi$ be the root system of the algebra~$\g$ and $\Delta$ be a set of simple roots. Denote by $\Phi^{+}$ and $\Phi^{-}$ the sets of positive and negative roots of $\Phi$ with respect to $\Delta$. 
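The pair from Example 1 can be tested numerically for $n = 2$. The sketch below (ours, not from the paper) closes the span of the two matrices under the Lie bracket and reports its dimension, which should equal $\dim \sp_4 = n(2n+1) = 10$:

```python
import numpy as np
from itertools import product

# Example 1 for n = 2: N = E_{12} - E_{43} + E_{24} and X = E_{31}.
def E(i, j):
    M = np.zeros((4, 4)); M[i - 1, j - 1] = 1.0; return M

N = E(1, 2) - E(4, 3) + E(2, 4)
X = E(3, 1)

basis = []  # flattened matrices, kept linearly independent

def try_add(M):
    cand = basis + [M.flatten()]
    if np.linalg.matrix_rank(np.array(cand)) > len(basis):
        basis.append(M.flatten())
        return True
    return False

for M in (N, X):
    try_add(M)

grew = True
while grew:  # close the span of {N, X} under the Lie bracket
    grew = False
    cur = [b.reshape(4, 4) for b in basis]
    for A, B in product(cur, cur):
        if try_add(A @ B - B @ A):
            grew = True

print(len(basis))  # dimension of the Lie algebra generated by N and X
```

Running it yields dimension $10$, consistent with Example 1.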
Consider the elements $N = \sum\limits_{\alpha \in \Delta}v_{\alpha}$ and $M = \sum\limits_{\alpha \in \Delta}{\lambda_{\alpha}}v_{-\alpha}$, where $\{\lambda_{\alpha}\}_{\alpha \in\Delta}$ are chosen in such a way that the element $T = [N, M]$ satisfies the following conditions: $\alpha(T) \neq 0$ holds for any $\alpha \in \Delta$, and for all~$\alpha$, $\beta \in \Delta$ the condition $\alpha \neq \beta$ implies $\alpha(T) \neq \beta(T)$. \begin{proposition}\label{proposition} The nilpotent elements $N$ and $M$ generate $\g$. \end{proposition} \begin{proof} Similarly to the proof of Lemma~\ref{Vandermonde}, we obtain that the subspace $\sum\limits_{\alpha \in \Delta}{\g_{\alpha}}$ is contained in the subalgebra $\mathfrak{a}$ generated by the elements $T$ and $N$. It follows that the subalgebra~$\sum\limits_{\alpha \in \Phi^{+}}{\g_{\alpha}}$ is contained in $\mathfrak{a}$. Similarly, the subalgebra $\sum\limits_{\alpha \in \Phi^{-}}{\g_{\alpha}}$ is contained in $\mathfrak{a}$ as well. Hence $N$ and $M$ generate~$\g$. \end{proof} The following example illustrates Proposition~\ref{proposition} for $\g = \sp$. \begin{example} Let us consider the matrix $N$ from Example 1 and the matrix $$ M = \begin{pmatrix} 0 & \ldots & 0 & 0 & 0 & 0 & \ldots & 0 \\ \a_1 & \dots & 0 & 0 & 0 & 0 & \ldots & 0 \\ \vdots & \ddots & \vdots & \vdots &\vdots & \vdots & \ddots & \vdots \\ 0 & \dots & \a_{n-1} & 0 & 0 & 0 & \ldots & 0\\ 0 & 0 & \dots & 0 & 0 & -\a_1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & 0 & 0 & 0 & \ldots&-\a_{n-1} \\ 0 & 0 & \ldots & \a_n & 0 & 0 & \ldots & 0 \\ \end{pmatrix}, \text{ where $\a_i = 3^i$. } $$ Define $T = [N, M] = \diag(\a_1,\a_2-\a_1, \dots, \a_n-\a_{n-1}, -\a_1, \a_1 - \a_2, \dots, \a_{n-1} - \a_n)$. Simple roots from $\Delta$ take pairwise different nonzero values on $T$, so $M$ and $N$ satisfy the conditions of Proposition~\ref{proposition}. 
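For $n = 2$ the bracket in this example can be verified directly (a numerical sketch of ours; the matrices follow the displayed patterns with $\a_1 = 3$, $\a_2 = 9$):

```python
import numpy as np

# N = E_{12} - E_{43} + E_{24} (Example 1) and, from the displayed pattern,
# M = a1*(E_{21} - E_{34}) + a2*E_{42}; then T = [N, M] should be
# diag(a1, a2 - a1, -a1, a1 - a2).
def E(i, j):
    A = np.zeros((4, 4)); A[i - 1, j - 1] = 1.0; return A

a1, a2 = 3.0, 9.0
N = E(1, 2) - E(4, 3) + E(2, 4)
M = a1 * (E(2, 1) - E(3, 4)) + a2 * E(4, 2)
T = N @ M - M @ N

print(np.allclose(T, np.diag([a1, a2 - a1, -a1, a1 - a2])))  # True
```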
\end{example} \section{Conjectures and remarks} Let us make some observations on the case of an arbitrary simple Lie algebra $\g$ over an algebraically closed field $\K$ of characteristic zero. \begin{conjecture}\label{1} For any nonzero nilpotent element $X \in \g$ there exists a nilpotent element $Y \in \g$ such that $X$ and $Y$ generate $\g$. \end{conjecture} The final part of the proof of Theorem~\ref{theorem} holds for any simple Lie algebra $\g$. Hence, in the case of an arbitrary simple Lie algebra the proof of Conjecture~\ref{1} reduces to Lemma~\ref{lowestnilp}. The first idea is to take the same nilpotent $Y$ as in the proof of Lemma~\ref{lowestnilp}, but it does not always work. For example, for $\g=\sl_{2n}(\K)$ the elements $X$ and $Y$ from Lemma~\ref{lowestnilp} generate not~$\g$ but its proper subalgebra $\sp$. It can be shown that the elements $X$ and $Y$ always generate a reductive Lie algebra. In \cite[Section 6]{BK} Kostant showed that the sum of these elements is a regular semisimple element. This paper had already been prepared when the author found the work \cite{DG}. One can notice that \cite[Theorem~2.8]{DG} implies Conjecture~\ref{1} for Lie algebras whose system of simple roots has no automorphism. The proof is based on computations in the computer algebra system GAP. It is used to verify that the lowest weight vector and the element $Y$ from the proof of Lemma~\ref{lowestnilp} generate the Lie algebra $\g$. \smallskip Let us now come to a possible application of Theorem~\ref{theorem}. An action of a group $H$ on a set $X$ is called \textit{infinitely transitive} if for every positive integer $m$ and any two tuples of pairwise distinct points $x_1,\dots,x_m$ and $y_1,\dots,y_m$ in $X$ there exists an element $h \in H$ such that $h\cdot x_i = y_i$ for all $1 \leq i \leq m$. Let $\A^n$ be the affine space of dimension $n$ and $\G_a$ be the additive group of the ground field~$\K$. 
Denote by $\SAut(\A^n)$ the subgroup of $\Aut(\A^n)$ generated by all $\G_a$-subgroups in~$\Aut(\A^n)$. It is easy to see that for $n \geq 2$ the group $\SAut(\A^n)$ acts infinitely transitively on $\A^n$. It is an interesting question whether there exists a finite number of $\G_a$-subgroups in $\SAut(\A^n)$ such that the group generated by them acts infinitely transitively on $\A^n$. Our result in~\cite{AC} enabled us to prove the following theorem. \begin{theorem}\cite[Theorem~5.17]{AKZ} For all $n \geqslant 2$ one can find three $\G_a$-subgroups $U_1$, $U_2$, $U_3$ in $\SAut(\A^n)$ such that the group $H = \langle U_1, U_2, U_3 \rangle $ acts infinitely transitively on $\A^n$. \end{theorem} The exponential map establishes a correspondence between nilpotent elements in a Lie algebra $\g$ and $\G_a$-subgroups in the corresponding algebraic group $G$. Hence Theorem~\ref{theorem} implies that for each $\G_a$-subgroup $U_1$ of $\operatorname {Sp}_{2n}(\K)$ there exists a $\G_a$-subgroup $U_2$ of $\operatorname {Sp}_{2n}(\K)$ such that $\langle U_1, U_2 \rangle = \operatorname {Sp}_{2n}(\K)$. In particular, the group $\langle U_1, U_2 \rangle$ acts transitively on $\A^{2n} \setminus \{0\}$. An algebraic automorphism of $\A^{2n}$ is called a symplectomorphism if its differential preserves a non-degenerate skew-symmetric bilinear form at each point on $\A^{2n}$. Involving one-parameter subgroups of non-linear symplectomorphisms, we can hope for a symplectic analogue of \cite[Theorem~5.17]{AKZ}. \begin{conjecture} For all $n \geqslant 2$ one can find three $\G_a$-subgroups $U_1$, $U_2$, $U_3$ of symplectomorphisms of $\A^{2n}$ such that the group $H = \langle U_1, U_2, U_3 \rangle $ acts infinitely transitively on~$\A^{2n}$. \end{conjecture}
{ "timestamp": "2019-08-13T02:18:47", "yymm": "1908", "arxiv_id": "1908.04065", "language": "en", "url": "https://arxiv.org/abs/1908.04065", "abstract": "Let $\\mathfrak{sp}_{2n}(\\mathbb {K})$ be the symplectic Lie algebra over an algebraically closed field of characteristic zero. We prove that for any nonzero nilpotent element $X \\in \\mathfrak{sp}_{2n}(\\mathbb {K})$ there exists a nilpotent element $Y \\in \\mathfrak{sp}_{2n}(\\mathbb {K})$ such that $X$ and $Y$ generate $\\mathfrak{sp}_{2n}(\\mathbb {K})$.", "subjects": "Rings and Algebras (math.RA)", "title": "On nilpotent generators of the symplectic Lie algebra" }
https://arxiv.org/abs/0704.2716
Constructing a quadrilateral inside another one
Connect each vertex of a convex quadrilateral Q to the midpoint of the next (proceeding counterclockwise) side. The four connecting lines create an interior quadrilateral I. We study the ratio area(I)/area(Q). We also determine what happens to area(I)/area(Q) when the four midpoints are replaced by points which divide the sides in the ratio of rho to (1-rho) proceeding clockwise. Here rho is any fixed number satisfying 0 < rho < 1.
\section{\label{s:1}The quadrilateral ratio problem} The description of Project 54 in 101 Project Ideas for the Geometer's Sketchpad \cite{Key} reads (in part): \qquad\parbox[1in]{4.5in}{On the Units panel of Preferences, set Scalar Precision to hundredths. Construct a generic quadrilateral and the midpoints of the sides. Connect each vertex to the midpoint of an opposite side in consecutive order to form an inner quadrilateral. Measure the areas of the inner and original quadrilateral and calculate the ratio of these areas. What conjecture are you tempted to make? Change Scalar Precision to thousandths and drag until you find counterexamples.} A figure similar to the following accompanies the project. \begin{center} \includegraphics[ height=2.911in, width=4.0318in ]% {Figure1.eps}% \end{center} Let $r$ be the ratio of the quadrilateral areas,% \begin{equation} r=\frac{\text{area}\left( EFGH\right) }{\text{area}\left( ABCD\right) }\text{.} \label{ratio}% \end{equation} The tempting conjecture is that $r=1/5$. In Theorem 1 we show that this is true in the case where the original quadrilateral is a parallelogram. However, the conjecture is false in general. Instead, the ratio can be any real number in the interval $(1/6,1/5]$. This is our Corollary \ref{c:2}. (Since $1/6<0.17<1/5$, it is possible to find counterexamples even with the Sketchpad Scalar Precision set to hundredths; however, it takes very industrious dragging to find them.) Suppose now that instead of a quadrilateral we had a triangle. Of course, joining each vertex to the opposite midpoint would not yield an inner triangle, since the three lines are medians, which are concurrent in a point. To find an analogous result for a triangle, we can use points which are not midpoints, but rather divide each side at a fraction $\rho$ of the distance from one point to the next, $0<\rho<1$. 
For definiteness, we assume that \textquotedblleft next point\textquotedblright\ in this definition is based on movement in a counterclockwise direction. We call these points $\rho $\textit{-points}. It turns out that for a given $\rho$, the ratio of the area of the inner triangle to the area of the outer triangle is constant independent of the initial triangle and is given by $\frac{\left( 2\rho-1\right) ^{2}}{\rho^{2}-\rho+1}$. Note that when $\rho=1/2$, this reduces to $0$, which provides a convoluted proof that the medians of a triangle are concurrent. When $\rho=1/3$, the area ratio is $1/7$, which the Nobel Prize-winning physicist Richard Feynman once proved, though he was probably not the first to do so. The result for general $\rho$ is known, and a proof is given in \cite{DeV}, along with the Feynman story. Inspired by this result, we will study the quadrilateral question for $\rho $-points. Let $ABCD$ be a convex quadrilateral and let $N_{1},N_{2},N_{3},$ and $N_{4}$ be chosen so that $N_{1}$ is the $\rho$-point of $BC$, $N_{2}$ is the $\rho$-point of $CD$, $N_{3}$ is the $\rho$-point of $DA$, and $N_{4}$ is the $\rho$-point of $AB$. For fixed $\rho$ $\left( 0<\rho<1\right) $ connect each vertex of $ABCD$ to the $\rho$-point of the next side. ($A$ to $N_{1}$, $B$ to $N_{2}$, $C$ to $N_{3}$, and $D$ to $N_{4}$.) The intersections of the four line segments form the vertices of a convex quadrilateral $EFGH$.% \begin{center} \includegraphics[ height=2.469in, width=4.8231in ]% {Figure2.eps}% \end{center} Define the area ratio \[ r(\rho,ABCD)=\frac{Area\left( EFGH\right) }{Area\left( ABCD\right) }. 
\] Theorem \ref{t:2} below states that as $ABCD$ varies, the values of $r\left( \rho,ABCD\right) $ fill the interval% \begin{equation} \left( m,M\right] :=\left( \frac{\left( 1-\rho\right) ^{3}}{\rho^{2}% -\rho+1},\frac{\left( 1-\rho\right) ^{2}}{\rho^{2}+1}\right] \label{int}% \end{equation} and that it is possible to give an explicit characterization of the set of convex quadrilaterals with maximal ratio $M$. The fact that $M-m$ has a maximum value of about $.034$ and is usually much smaller explains the near constancy of $r\left( \rho,ABCD\right) $ as $ABCD$ varies. Here are the graphs of $M$ and $m$.% \begin{center} \includegraphics[ height=3.1566in, width=4.0517in ]% {rhograph.eps}% \end{center} A more delicate look at the graph of $M-m=\allowbreak\frac{\rho^{3}\left( \rho-1\right) ^{2}}{\left( \rho^{2}+1\right) \left( \rho^{2}% -\rho+1\right) }$ shows that as \textquotedblleft constant\textquotedblright% \ as $r$ is in the original $\rho=1/2$ case, it is even \textquotedblleft more constant\textquotedblright\ when $\rho$ is close to the endpoints $0$ and $1$. (Actually the maximum value of $M-m$ of about $.034$ is achieved at the unique real zero of $\rho^{5}-\rho^{4}+6\rho^{3}-6\rho^{2}+7\rho-3$ which is about $.55$.) The characterization proved in Theorem \ref{t:2} below shows that not only do parallelograms have maximal ratio $M\left( \rho\right) $ for \textit{every} $\rho$, but also they are the only quadrilaterals that have maximal ratio $M\left( \rho\right) $ for more than one $\rho$. \section{The midpoint case for parallelograms} \begin{theorem} \label{t:1}If each vertex of a parallelogram is joined to the midpoint of an opposite side in clockwise order to form an inner quadrilateral, then the area of the inner quadrilateral is one fifth the area of the original parallelogram. 
\end{theorem} \begin{proof} In this picture,% \begin{center} \includegraphics[ height=2.124in, width=5.4457in ]% {Figure3.eps}% \end{center} $ABCD$ is a parallelogram, and each $M_{i}$ is a midpoint of the line segment it lies on. Cut apart the figure along all lines. Then by rotating clockwise $180^{\circ}$ about point $M_{3}$, the reader can verify that we get $AHGG^{\prime}$ (where $G^{\prime}$ is the image of $G$ under the rotation) congruent to $EFGH$. Similarly, each of the triangles $ABE$, $BCF$, and $CDG$ may be dissected and rearranged to form a parallelogram, each congruent to $EFGH$. Thus, the pieces of $ABCD$ can be rearranged into five congruent parallelograms, one of which is $EFGH$, which therefore has area $1/5$ the area of $ABCD$. \end{proof} This result is a special case of Corollary \ref{c:3} below, but is included because of the elegant and elementary nature of its proof. \section{\label{s:3}The filling of $\left( m,M\right] $ and the characterization} \begin{theorem} \label{t:2}Let $A,B,C,D$ be (counterclockwise) successive vertices of a convex quadrilateral. Define $EFGH$ as the inner quadrilateral formed by joining vertices to $\rho$-points as described in Section \ref{s:1}. Construct point $P$ so that $ABCP$ is a parallelogram. Locate (as in the following figure) $C^{\prime}$ and $C^{\prime\prime}$ on $\overrightarrow{BC}$ so that $C^{\prime}$ is a distance $\rho BC$ from $C$ and $C^{\prime\prime}$ is a distance $\left( 1/\rho\right) BC$ from $C$ and let $S=\overrightarrow {C^{\prime}P}\cup\overrightarrow{C^{\prime\prime}P}$. Then the ratio $r$ defined by equation (\ref{ratio}) is maximal exactly when $D$ is on $S^{\ast }=S\cap\operatorname*{int}\left( \angle ABC\right) \cap\operatorname*{ext}\left( \triangle ABC\right) $. Furthermore the set of possible ratios is% \[ \left( \frac{\left( 1-\rho\right) ^{3}}{\rho^{2}-\rho+1},\frac{\left( 1-\rho\right) ^{2}}{\rho^{2}+1}\right] . 
\] \end{theorem} In the figure below, $S^{\ast}$ is indicated by the thickened portions of the rays composing $S$.% \begin{center} \includegraphics[ trim=-1.533271in 0.000000in -1.533271in 0.000000in, height=2.1223in, width=5.1923in ]% {Figure4.eps}% \end{center} \begin{proof} Fix $\rho$ and apply an affine transformation that maps $A,B,C,D$ successively to $\left( 0,1\right) ,\left( 0,0\right) ,\left( 1,0\right) ,\left( x,y\right) $. Since an affine transformation preserves both linear length ratios and area ratios, it is enough to prove the theorem after the transformation has been applied. Observe that $P$ has become $\left( 1,1\right) $, and the image of $S$ has become a pair of perpendicular rays through $\left( 1,1\right) $ with slopes $\rho$ and $-1/\rho$. Here is the situation.% \begin{center} \includegraphics[ height=3.6668in, width=4.8983in ]% {Figure5.eps}% \end{center} The line from $\left( 0,0\right) $ to $\left( x,y\right) $ divides the outer quadrilateral into two triangles, one of area $x/2$ and the other of area $y/2$, so that its area is $\left( x+y\right) /2$. To find the area of the inner quadrilateral, we first determine the coordinates $(r_{1},s_{1}),\dots,(r_{4},s_{4})$ of its vertices in terms of $x$, $y$ and $\rho$ by equating slopes. For example, the equations \[% \begin{array} [t]{l}% \dfrac{s_{1}-0}{r_{1}-0}=\dfrac{\rho y-0}{1+\rho\left( x-1\right) -0}\\ \dfrac{s_{1}-0}{r_{1}-\rho}=\dfrac{1-0}{0-\rho}% \end{array} \] can easily be solved for $r_{1}$ and $s_{1}$. The area of the interior quadrilateral is \[ \frac{\left( r_{1}s_{2}-r_{2}s_{1}\right) +\left( r_{2}s_{3}-r_{3}s_{2}\right) +\left( r_{3}s_{4}-r_{4}s_{3}\right) +\left( r_{4}s_{1}-r_{1}s_{4}\right) }{2}. \] This is the $n=4$ case of a well-known formula for the area of an $n$-gon\cite{Bra} which can be proved by first proving the formula for triangles and then using induction, or by using Green's Theorem. 
Some computer algebra produces this formidable and seemingly intractable formula for $r\left( x,y\right) $.% \[ \frac{\left( \rho-1\right) ^{2}\left( \begin{array} [c]{c}% \rho^{4}y^{4}-\rho^{3}y^{4}-3\rho^{5}xy^{3}+2\rho^{4}xy^{3}+\rho^{3}% xy^{3}-2\rho^{2}xy^{3}\\ +2\rho^{5}y^{3}-6\rho^{4}y^{3}+\rho^{3}y^{3}+2\rho^{2}y^{3}-\rho y^{3}% +\rho^{6}x^{2}y^{2}\\ -\rho^{5}x^{2}y^{2}-6\rho^{4}x^{2}y^{2}+4\rho^{3}x^{2}y^{2}-\rho^{2}x^{2}% y^{2}-\rho x^{2}y^{2}\\ -3\rho^{6}xy^{2}+10\rho^{5}xy^{2}+3\rho^{4}xy^{2}-13\rho^{3}xy^{2}+5\\ \ast\rho^{2}xy^{2}+\rho xy^{2}-xy^{2}+\rho^{6}y^{2}-7\rho^{5}y^{2}+6\rho ^{4}y^{2}+7\\ \ast\rho^{3}y^{2}-7\rho^{2}y^{2}+2\rho y^{2}+2\rho^{5}x^{3}y-2\rho^{4}% x^{3}y-3\rho^{3}\\ \ast x^{3}y+2\rho^{2}x^{3}y-\rho x^{3}y-3\rho^{6}x^{2}y-5\rho^{5}x^{2}% y+15\rho^{4}\\ \ast x^{2}y-\rho^{3}x^{2}y-7\rho^{2}x^{2}y+4\rho x^{2}y-x^{2}y+5\rho^{6}xy\\ -3\rho^{5}xy-21\rho^{4}xy+18\rho^{3}xy-\rho^{2}xy-3\rho xy+x\\ \ast y-2\rho^{6}y+5\rho^{5}y+4\rho^{4}y-12\rho^{3}y+6\rho^{2}y-\rho y\\ +\rho^{4}x^{4}-\rho^{3}x^{4}-3\rho^{5}x^{3}-2\rho^{4}x^{3}+5\rho^{3}% x^{3}-2\rho^{2}x^{3}\\ +\rho^{6}x^{2}+8\rho^{5}x^{2}-6\rho^{4}x^{2}-5\rho^{3}x^{2}+5\rho^{2}% x^{2}-\rho x^{2}\\ -2\rho^{6}x-5\rho^{5}x+12\rho^{4}x-4\rho^{3}x-2\rho^{2}x+\rho x+\rho^{6}\\ -4\rho^{4}+4\rho^{3}-\rho^{2}% \end{array} \right) }{\left( \begin{array} [c]{c}% \left( y+\rho^{2}x-\rho x+x+\rho-1\right) \left( \rho y+x+\rho^{2}% -\rho\right) \\ \ast\left( \rho^{2}y+\rho x-\rho+1\right) \\ \ast\left( \rho^{2}y-\rho y+y+\rho^{2}x-\rho^{2}+\rho\right) \end{array} \right) }% \] \ Convexity means that $\left( x,y\right) $ is constrained to the open \textquotedblleft northeast corner\textquotedblright\ of the first quadrant bounded by $Y\cup T\cup X,$ $Y=\left\{ \left( 0,y\right) :y\geq1\right\} $, $T=\left\{ \left( x,1-x\right) :0\leq x\leq1\right\} $, $X=\left\{ \left( x,0\right) :x\geq1\right\} $. Restricting $r$ to $Y,$ we get a formula for $r(y)=r\left( 0,y\right) $. 
Taking the derivative unexpectedly gives this simple, completely factored formula:% \[ r^{\prime}\left( y\right) =\frac{\left( \rho-1\right) ^{2}\rho^{5}\left( y-\frac{\rho+1}{\rho}\right) \left( y-\left( \rho-1\right) \right) }{\left( \rho^{2}y-\left( \rho-1\right) \right) ^{2}\left( \left( \rho^{2}+1-\rho\right) y-\rho\left( \rho-1\right) \right) ^{2}}. \] The only solution to $r^{\prime}\left( y\right) =0$ with $y\in Y$ has $y=\frac{\rho+1}{\rho}$. It quickly follows that on $Y$, $r$ attains a maximum value of $M$ at $\left( 0,\frac{\rho+1}{\rho}\right) $ and is minimized by $m$ at the endpoints $\left( 0,1\right) $ and $\left( 0,\infty\right) $. (By this we mean that $\lim_{y\rightarrow\infty}r\left( 0,y\right) =m$.) Similarly on $T,$ the derivative of $r\left( x\right) =r\left( x,1-x\right) $ has the following fully factored form% \[ r^{\prime}\left( x\right) =\frac{\partial}{\partial x}r\left( x,1-x\right) =\frac{\left( \rho+1\right) \left( \rho-1\right) ^{2}\rho^{3}\left( \left( \rho-1\right) x-\rho\right) \left( x-\frac{\rho}{\rho+1}\right) }{\left( \left( \rho-1\right) x-\rho^{2}\right) ^{2}\left( \rho\left( \rho-1\right) x-\left( \rho^{2}+\left( 1-\rho\right) \right) \right) ^{2}}, \] so that $r$ has minimum value $m$ at the endpoints $\left( 0,1\right) $ and $\left( 1,0\right) $ and maximum value $M$ at $\left( \frac{\rho}{\rho +1},\frac{1}{\rho+1}\right) $; while on $X$, \[ r^{\prime}\left( x\right) =\frac{\partial}{\partial x}r\left( x,0\right) =\frac{\left( \rho-1\right) ^{2}\rho^{3}\left( \rho^{2}+1\right) \left( x-\left( \rho+1\right) \right) \left( x-\left( 1-\rho\right) \right) }{\left( x+\rho\left( \rho-1\right) \right) ^{2}\left( \left( \rho ^{2}+1-\rho\right) x+\rho-1\right) ^{2}}, \] so that $r$ has minimum value $m$ at the endpoints $\left( 1,0\right) $ and $\left( \infty,0\right) $ and maximum value $M$ at $\left( \rho+1,0\right) $. 
Motivated by these results we now sweep the region of permissible values of $\left( x,y\right) $ by line segments with $y$-intercept $\eta$ and slope $-1/\rho$. From \begin{align*} & r^{\prime}\left( x\right) =\frac{\partial}{\partial x}r\left( x,-\frac{1}{\rho}x+\eta\right) =\\ & \frac{\left( \begin{array} [c]{c}% \left( \rho-1\right) ^{2}\rho^{6}\left( \rho^{2}+1\right) ^{2}\left( \eta-\frac{\rho+1}{\rho}\right) ^{2}\left( x-\dfrac{\rho^{2}+\eta\rho-\rho }{\rho^{2}+1}\right) \\ \left( \begin{array} [c]{c}% \left( \eta\rho^{3}+\rho^{3}-3\rho^{2}-\eta\rho+3\rho-1\right) x\\ -\left( \rho^{3}+2\eta\rho^{2}+\rho-\eta\rho^{3}-\eta^{2}\rho^{2}-2\rho ^{2}-\eta\rho\right) \end{array} \right) \end{array} \right) }{\left( \begin{array} [c]{c}% \left( \rho+\eta-1\right) \left( \eta\rho^{2}-\rho+1\right) \left( \left( \rho^{3}-\rho^{2}+\rho-1\right) x+\rho^{2}+\eta\rho-\rho\right) ^{2}\\ \left( \left( \rho^{3}-\rho^{2}+\rho-1\right) x+\eta\rho^{3}-\rho^{3}% -\eta\rho^{2}+\rho^{2}+\eta\rho\right) ^{2}% \end{array} \right) }% \end{align*} it is clear that $\eta=\frac{\rho+1}{\rho}$ produces one arm of the image of $S$. Finally for all other $\eta$, $r$ has mound-shaped behavior with minimum values on the coordinate axes and reaches a maximum of $M$ where $y=-\frac {1}{\rho}x+\eta$ intersects the other arm. \end{proof} Recall that we have defined $\rho$-points in terms of counterclockwise orientation. Although Theorem \ref{t:2} is true for clockwise orientation, we stress that the value of $r$ depends, in general, on the orientation. In fact, clockwise and counterclockwise orientations always give different values of $r$ unless $D$ lies on the diagonal $\overrightarrow{BP}$. Setting $\rho=1/2$ in Theorem \ref{t:2} yields this corollary. \begin{corollary} \label{c:2}Let $A,B,C,D$ be (counterclockwise) successive vertices of a convex quadrilateral. Define $EFGH$ as the inner quadrilateral formed by joining vertices to midpoints as described in Section \ref{s:1}. 
Construct point $P$ so that $ABCP$ is a parallelogram. Locate $C^{\prime}$ and $C^{\prime\prime}$ on $\overrightarrow{BC}$ so that $C^{\prime}$ is a distance $\left( 1/2\right) BC$ from $C$ and $C^{\prime\prime}$ is a distance $2BC$ from $C$ and let $S=\overrightarrow{C^{\prime}P}\cup\overrightarrow{C^{\prime\prime}P}% $. Then the ratio $r$ defined by equation (\ref{ratio}) is maximal exactly when $D$ is on $S^{\ast}=S\cap\operatorname*{int}\left( \angle ABC\right) \cap\operatorname*{ext}\left( \triangle ABC\right) $. Furthermore the set of possible ratios is% \[ \left( \frac{1}{6},\frac{1}{5}\right] . \] \end{corollary} Another corollary of Theorem \ref{t:2} is the following generalization of Theorem \ref{t:1} from midpoints to $\rho$-points. \begin{corollary} \label{c:3}If each vertex of a parallelogram is joined to the $\rho$-point of an opposite side in counterclockwise order to form an inner quadrilateral, then the area of the inner quadrilateral is $\frac{\left( 1-\rho\right) ^{2}}{\rho^{2}+1}$ times the area of the original parallelogram. \end{corollary} A nice geometry exercise is to prove this corollary avoiding the calculus part of the proof of Theorem \ref{t:2}. Hint: after performing an affine transformation we may assume that the original quadrilateral is the unit square. Use slope considerations to see that the interior quadrilateral is actually a rectangle. Use length considerations to see that it is a square of side length $\sqrt{\frac{\left( 1-\rho\right) ^{2}}{\rho^{2}+1}}$.
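The area-ratio results above can be checked numerically. The following Python sketch (ours, not from the paper) builds the cevians for a counterclockwise convex $ABCD$, intersects consecutive ones, and computes the ratio with the shoelace formula; the $\rho$-point of a side $PQ$ is taken as $P + \rho(Q-P)$:

```python
import numpy as np

def inner_ratio(A, B, C, D, rho):
    V = [np.asarray(p, dtype=float) for p in (A, B, C, D)]
    # rho-point of the side following vertex i, and the cevian to it
    rp = [V[(i + 1) % 4] + rho * (V[(i + 2) % 4] - V[(i + 1) % 4]) for i in range(4)]
    cev = [(V[i], rp[i]) for i in range(4)]

    def meet(s1, s2):
        # intersection of the lines through segments s1 and s2 (solve a 2x2 system)
        p, q = s1[0], s1[1] - s1[0]
        r, s = s2[0], s2[1] - s2[0]
        t = np.linalg.solve(np.column_stack([q, -s]), r - p)[0]
        return p + t * q

    P = [meet(cev[i], cev[(i + 1) % 4]) for i in range(4)]

    def shoelace(pts):
        x = np.array([p[0] for p in pts]); y = np.array([p[1] for p in pts])
        return abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1))) / 2

    return shoelace(P) / shoelace(V)

# midpoints on a square (a parallelogram): the ratio is exactly 1/5
r_sq = inner_ratio((0, 0), (1, 0), (1, 1), (0, 1), 0.5)

# rho = 1/3 on a square: Corollary 3 gives (1 - rho)^2/(rho^2 + 1) = 2/5
r_third = inner_ratio((0, 0), (1, 0), (1, 1), (0, 1), 1 / 3)

# a generic convex quadrilateral with midpoints: the ratio lies in (1/6, 1/5]
r_gen = inner_ratio((0, 0), (1, 0), (1.3, 1.1), (0.1, 1.5), 0.5)

print(r_sq, r_third, r_gen)
```

The first two values agree with Theorem 1 and Corollary 3 to machine precision, and the generic ratio falls strictly inside $(1/6, 1/5]$.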
{ "timestamp": "2007-09-17T17:24:20", "yymm": "0704", "arxiv_id": "0704.2716", "language": "en", "url": "https://arxiv.org/abs/0704.2716", "abstract": "Connect each vertex of a convex quadrilateral Q to the midpoint of the next (proceeding counterclockwise) side. The four connecting lines create an interior quadrilateral I. We study the ratio area(I)/area(Q). We also determine what happens to area(I)/area(Q) when the four midpoints are replaced by points which divide the sides in the ratio of rho to (1-rho) proceeding clockwise. Here rho is any fixed number satisfying 0 < rho < 1.", "subjects": "General Mathematics (math.GM)", "title": "Constructing a quadrilateral inside another one" }
https://arxiv.org/abs/1004.2445
The Cauchy-Schlomilch transformation
The Cauchy-Schlömilch transformation states that for a function $f$ and $a, \, b > 0$, the integrals of $f(x^{2})$ and $af((ax-bx^{-1})^{2})$ over the interval $[0, \infty)$ are the same. This elementary result is used to evaluate many non-elementary definite integrals, most of which cannot be obtained by symbolic packages. Applications to probability distributions are also given.
\section{Introduction} \label{intro} \setcounter{equation}{0} The problem of analytic evaluations of definite integrals has been of interest to scientists for a long time. The central question can be stated as follows: \\ \begin{center} {\em given a class of functions} $\mathfrak{F}$ {\em and an interval} $[a,b] \subset \mathbb{R}$, {\em express the integral of} $f \in \mathfrak{F}$ \begin{equation} I = \int_{a}^{b} f(x) \, dx, \nonumber \end{equation} \noindent {\em in terms of special values of functions in an enlarged class} $\mathfrak{G}$. \end{center} \medskip Many methods for the evaluation of definite integrals have been developed since the early stages of Integral Calculus, resulting in a variety of ad-hoc techniques for producing closed-form expressions. Although a general procedure applicable to all integrals is undoubtedly unattainable, it is within reason to expect a systematic cataloguing procedure for large groups of definite integrals. To this effect, one of the authors has instituted a project to verify all the entries in the popular table by I. S. Gradshteyn and I. M. Ryzhik \cite{gr}. The website \begin{center} {\tt{http://www.math.tulane.edu/$\sim$vhm/$\text{web}_{-}\text{html}$/pap-index.html}} \end{center} \noindent contains a series of papers treating the project alluded to above. Naturally, any document containing a large number of entries, such as the table \cite{gr} or the encyclopedic treatise \cite{prudnikov1}, is likely to contain errors, many of which arise from transcription from other tables. The earliest extensive table of integrals still accessible is \cite{bierens1}, compiled by Bierens de Haan, who also presented in \cite{bierens2} a survey of the methods employed in the verification of its entries. These tables form the main source for \cite{gr}. The revision of integral tables is nothing new. C. F. Lindman \cite{lindman1} compiled a long list of errors from the table by Bierens de Haan \cite{bierens3}. 
The editors of \cite{gr} maintain the webpage \begin{center} \texttt{http://www.mathtable.com/gr/} \end{center} \noindent where the corrections to the table are stored. Many techniques have been developed for evaluating definite integrals, and the goal of this paper is to present one such method popularized by Schl\"omilch in \cite{schlomilch1}. The identity (\ref{transf1}) below appeared in \cite{irrbook}, where it was called the {\em Schl\"omilch transform}, although it was used by J. Liouville \cite{liouville1} to evaluate the integral \begin{equation} \int_{0}^{1} \frac{t^{\mu + 1/2} (1-t)^{\mu - 1/2} \, dt} {(a+bt-ct^{2})^{\mu+1}}, \label{int-1} \end{equation} \noindent and in \cite{liouville2} Liouville quotes a letter from Schl\"omilch in which he describes his approach to (\ref{int-1}) via the formula \begin{equation} \int_{0}^{\infty} F \left( \frac{\alpha}{x} + \gamma x \right) \frac{dx}{\sqrt{x}} = \frac{1}{\sqrt{\gamma}} \int_{0}^{\infty} F( 2 \sqrt{\alpha \gamma} + y ) \frac{dy}{\sqrt{y}} {} \end{equation} \noindent to derive the reduction \begin{equation} \int_{0}^{\infty} \frac{x^{m+1/2} \, dx} {( \alpha + \beta x + \gamma x^{2})^{m+1}} = \frac{1}{\sqrt{\gamma}} \int_{0}^{\infty} \frac{y^{1/2-m} \, dy } {(\beta + 2 \sqrt{\alpha \gamma} + y)^{m+1}}. {} \end{equation} \noindent The integral (\ref{int-1}) is then evaluated in terms of the {\em beta function}. Schl\"omilch also states that his method can be found in a note by A. Cauchy \cite{cauchy-1823} published some $35$ years earlier\footnote{This note is available at \texttt{http://gallica.bnf.fr/ark:/12148/bpt6k90193x.zoom.f526}}. In view of this historical precedence, the name {\em Cauchy-Schl\"omilch} adopted here seems to be more justified. Some illustrative examples appear in the text \cite{melzak1}. \\ We present here a variety of definite integrals evaluated by use of the Schl\"omilch transform and its extensions. Several of the examples are not computable by the current symbolic languages. 
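As a quick numerical sanity check of Schl\"omilch's formula quoted above (our illustration, not part of the original text; it assumes Python with \texttt{scipy}), take $F(u) = e^{-u}$ and $\alpha = \gamma = 1$, in which case both sides reduce to $e^{-2}\sqrt{\pi}$:

```python
# Numerical sanity check of Schloemilch's formula
#   int_0^inf F(alpha/x + gamma*x) dx/sqrt(x)
#     = (1/sqrt(gamma)) int_0^inf F(2*sqrt(alpha*gamma) + y) dy/sqrt(y)
# for the sample choice F(u) = exp(-u), alpha = gamma = 1.
import math
from scipy.integrate import quad

F = lambda u: math.exp(-u)
alpha, gamma = 1.0, 1.0

lhs, _ = quad(lambda x: F(alpha / x + gamma * x) / math.sqrt(x), 0, math.inf)
rhs, _ = quad(lambda y: F(2 * math.sqrt(alpha * gamma) + y) / math.sqrt(y), 0, math.inf)
rhs /= math.sqrt(gamma)

# Here the right side reduces to exp(-2) * Gamma(1/2) = exp(-2)*sqrt(pi).
assert abs(lhs - rhs) < 1e-7
assert abs(lhs - math.exp(-2) * math.sqrt(math.pi)) < 1e-7
```
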
For each evaluation presented in the upcoming sections, we considered its computation using \texttt{Mathematica}. Naturally, the statement `the integral cannot be computed symbolically' has to be complemented with the phrase `at the present date'. \section{The Cauchy-Schl\"omilch transformation} \label{sec-schlo} \setcounter{equation}{0} In this section we present the basic result accompanied by initial examples. Further applications are discussed in the remaining sections. \begin{Thm}[Cauchy-Schl\"omilch] \label{main-thm} Let $a, \, b >0$ and assume that $f$ is a continuous function for which the integrals in (\ref{transf1}) are convergent. Then \begin{eqnarray} \int_{0}^{\infty} f \left( \left(ax - bx^{-1} \right)^{2} \right) \, dx & = & \frac{1}{a} \int_{0}^{\infty} f(y^{2}) \, dy. \label{transf1} \end{eqnarray} \end{Thm} \begin{proof} The change of variables $t = b/(ax)$ yields \begin{eqnarray} I & = & \int_{0}^{\infty} f \left( \left(ax - b/x \right)^{2} \right) \, dx \nonumber \\ & = & \frac{b}{a} \int_{0}^{\infty} f \left( \left(at - b/t \right)^{2} \right) \, t^{-2} \, dt. \nonumber \end{eqnarray} \noindent The average of these two representations, followed by the change of variables $u = ax - b/x$, completes the proof. \end{proof} The next result is a direct consequence of Theorem \ref{main-thm}. \begin{Cor} Preserve the assumptions of Theorem \ref{main-thm}. Let $h_{n}(x) = \sum_{k=0}^{n} c_{k}x^{2k+1}$ be an odd polynomial. Define \begin{equation} g_{n}(x) = x \left( \sum_{k=0}^{n} d_{k}x^{k} \right)^{2}, \end{equation} \noindent where \begin{equation} d_{k} = \sum_{j=k}^{n} \binom{k+j}{2k} \frac{2j+1}{2k+1} (ab)^{j-k} c_{j}. \label{form-d} \end{equation} \noindent Then \begin{equation} \int_{0}^{\infty} f \left( \left[ h_{n}(ax) - h_{n}(bx^{-1}) \right]^{2} \right) \, dx = \frac{1}{a} \int_{0}^{\infty} f(g_{n}(y^{2})) \, dy. 
\label{form-d1} {} \end{equation} \end{Cor} \begin{proof} Denote \begin{equation} H_{n}(x) = h_{n}(ax) - h_{n}(bx^{-1}) = \sum_{k=0}^{n} c_{k} \psi_{k}(x) \end{equation} \noindent where $\psi_{k}(x) = (ax)^{2k+1}-(bx^{-1})^{2k+1}$ and let $\phi_{k}(x) = (ax-bx^{-1})^{2k+1}$. Then the polynomials $\psi_{k}$ and $\phi_{k}$ obey the transformation rule \begin{equation} \psi_{k}(x) = \sum_{j=0}^{k} \binom{k+j}{2j} \frac{2k+1}{2j+1} (ab)^{k-j} \phi_{j}(x). \end{equation} \noindent The expression for $H_{n}$ in terms of $\phi_{k}$ follows directly from this. Moreover, $H_{n}^{2}(x) = g_{n}\left( (ax-bx^{-1})^{2} \right)$ and the result is obtained by applying the Cauchy-Schl\"omilch formula to $f(g_{n})$. \end{proof} \begin{Example} \label{beauty-1} For $n \in \mathbb{N}$, \begin{equation} \int_{0}^{\infty} \left[ (x^2+x^{-6})(x^4-x^2+1)-1 \right] e^{-(x^{7}-x^{-7})^{2n}} \, dx = \frac{1}{14n} \Gamma \left( \frac{1}{2n} \right). {} \nonumber \end{equation} \noindent In order to verify this value, use (\ref{form-d}) to write the exponent in terms of $y = x - x^{-1}$, \begin{equation} x^{7}-x^{-7} = 7y + 14y^{3} + 7y^{5} +y^{7}, \nonumber {} \end{equation} \noindent and write the integrand as a function of $y$, to obtain \begin{equation} 7(x^{2}+x^{-6})(x^4-x^2+1)-7 = 7 + 42y^{2} + 35y^{4} + 7y^{6}. {} \nonumber \end{equation} Apply (\ref{form-d1}) to \begin{equation} f(y^{2}) = (7 + 42y^{2} + 35y^{4} + 7y^{6}) e^{-(y^{2}(7+14y^{2} + 7y^{4} + y^{6})^{2})^{n}}, \end{equation} \noindent to yield \begin{equation} \int_{0}^{\infty} 7 \left[ (x^2+x^{-6})(x^4-x^2+1)-1 \right] e^{-(x^{7}-x^{-7})^{2n}} \, dx = \int_{0}^{\infty} f(y^{2}) dy. \nonumber \end{equation} \noindent The last step is an outcome of the substitution $z = 7y + 14y^{3} + 7y^{5} + y^{7}$. 
Hence \begin{eqnarray} \int_{0}^{\infty} f(y^{2}) dy & = & \int_{0}^{\infty} (7 + 42y^{2} + 35y^{4} + 7y^{6}) e^{-(7y+14y^{3}+7y^{5}+y^{7})^{2n}} \, dy \nonumber \\ & = & \int_{0}^{\infty} e^{-z^{2n}} \, dz = \frac{1}{2n} \Gamma \left( \frac{1}{2n} \right). \nonumber \end{eqnarray} \end{Example} \begin{Example} Proceeding as in the previous example, we obtain \begin{equation} \int_{0}^{\infty} \left[ 7(x^{2}+x^{-6})(x^{4}-x^{2}+1) - 6 \right] e^{-(x+x^{7} -x^{-1}-x^{-7})^{2n}} \, dx = \frac{1}{2n} \Gamma \left( \frac{1}{2n} \right). \nonumber {} \end{equation} \noindent The details are left to the reader. \end{Example} \begin{Example} For $0 < \nu < 1$, the choice $y = x-x^{-1}, \, x^{3}-x^{-3} = 3y+y^{3}$ followed by $z= 3y+y^{3}$ produces \begin{equation} \int_{0}^{\infty} \frac{x^{4}-x^{2}+1 }{x^{2} } \prod_{j=0}^{\infty} \left[ 1 + (x^{3}-x^{-3})^{2} \, \nu^{2j} \right]^{-1} \, dx = \nonumber \end{equation} \begin{eqnarray} \quad \quad & = & \int_{0}^{\infty} (1 + y^{2}) \prod_{j=0}^{\infty} \left[ 1 + (3y+y^{3})^{2} \, \nu^{2j} \right]^{-1} \, dy \nonumber \\ \quad \quad & = & \frac{1}{3} \int_{0}^{\infty} \prod_{j=0}^{\infty} (1+ z^{2} \nu^{2j})^{-1} \, dz \nonumber \\ \quad \quad & = & \frac{\pi}{6} (1 + \nu + \nu^{3} + \nu^{6} + \nu^{10} + \cdots )^{-1}. \nonumber \end{eqnarray} \end{Example} \section{An integral due to Laplace} \label{sec-laplace} \setcounter{equation}{0} The example described in this section is the original problem to which the Cauchy-Schl\"omilch transformation was applied. \begin{Example} \label{example2.1} The {\em normal integral} is \begin{equation} \int_{0}^{\infty} e^{-y^{2}} \, dy = \frac{\sqrt{\pi}}{2}. \label{normal-1} {} \end{equation} \noindent The reader will find in \cite{irrbook} a variety of proofs of this fundamental identity. Take $f(x) = e^{-x}$ in Theorem \ref{main-thm} to obtain \begin{equation} \int_{0}^{\infty} e^{-(ax-b/x)^{2}} \; dx = \frac{\sqrt{\pi}}{2a}. 
{} \label{int-gr1} \end{equation} \noindent Expanding the integrand and replacing the parameters $a$ and $b$ by their square roots produces entry $3.325$ in \cite{gr}: \begin{equation} \int_{0}^{\infty} \text{exp} \left( -ax^{2}- b/x^{2} \right) \, dx = \frac{1}{2} \sqrt{\frac{\pi}{a}} e^{-2 \sqrt{ab}}. \label{rel-1} {} \end{equation} The change of variable $x = \sqrt{b}t/\sqrt{a}$ shows that the result (\ref{rel-1}) can be written in terms of a single parameter $c = ab$ as \begin{equation} \int_{0}^{\infty} e^{-c(t -1/t)^{2}} \, dt = \frac{1}{2} \sqrt{\frac{\pi}{c}}. {} \label{single-para} \end{equation} \end{Example} \medskip \begin{Example} A host of other entries in \cite{gr} are amenable to the Cauchy-Schl\"omilch transformation. For example, $3.324.2$ states that \begin{equation} \int_{-\infty}^{\infty} \text{exp} \left[ -(x - b/x)^{2n} \right] \, dx = \frac{1}{n} \Gamma \left( \frac{1}{2n} \right). \label{rel-2} {} \end{equation} \noindent This is now evaluated by choosing $f(x) = e^{-x^{n}}$ in Theorem \ref{main-thm} so that \begin{equation} \int_{-\infty}^{\infty} \text{exp} \left[ -(x - b/x)^{2n} \right] \, dx = 2 \int_{0}^{\infty} e^{-y^{2n}} \, dy. \nonumber {} \end{equation} \noindent The change of variables $t = y^{2n}$ and the integral representation for the {\em gamma function} \begin{equation} \Gamma(a) = \int_{0}^{\infty} e^{-t}t^{a-1} \, dt {} \end{equation} \noindent imply (\ref{rel-2}). \end{Example} \medskip \begin{Example} The expression $ t - 1/t$ in (\ref{single-para}) suggests a natural change of variables $t = e^{u}$. This yields \begin{equation} \int_{-\infty}^{\infty} e^{u - c \sinh^{2}u } \, du = \sqrt{\frac{\pi}{c}}. {} \label{sinh-1} \end{equation} \noindent The latest version of \texttt{Mathematica} is unable to produce this result when $c$ is an arbitrary parameter. It does evaluate (\ref{sinh-1}) if $c$ is assigned a specific real value. 
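For a specific value of $c$, (\ref{sinh-1}) is also easy to confirm numerically (our illustration, assuming Python with \texttt{scipy}; truncating the domain to $[-30, 30]$ is harmless because the integrand underflows to zero well before the endpoints):

```python
# Numerical check of int_{-inf}^{inf} exp(u - c*sinh(u)^2) du = sqrt(pi/c)
# for the sample value c = 1.  The integrand is essentially zero outside a
# small neighborhood of the origin, so [-30, 30] captures the full mass.
import math
from scipy.integrate import quad

c = 1.0
val, _ = quad(lambda u: math.exp(u - c * math.sinh(u) ** 2), -30, 30)
assert abs(val - math.sqrt(math.pi / c)) < 1e-6
```
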
\end{Example} \section{An integral with three parameters} \label{sec-threeparam} \setcounter{equation}{0} The introduction of parameters in a definite integral provides a greater flexibility in its evaluation. Many classical integrals are presented in \cite{bomo1} as special cases of the next theorem, which now appears as $3.242.2$ in \cite{gr}. The proof given below is in the spirit of the original observation of A. Cauchy. \begin{Thm} \label{boros-3param} Let \begin{eqnarray} I_{1} & = & \int_{0}^{\infty} \left( \frac{x^{2}}{x^{4}+2ax^{2}+1} \right)^{c} \cdot \frac{x^{2}+1}{x^{b}+1} \, \frac{dx}{x^{2}} \nonumber \\ I_{2} & = & \int_{0}^{\infty} \left( \frac{x^{2}}{x^{4}+2ax^{2}+1} \right)^{c} \, \frac{dx}{x^{2}} \nonumber \\ I_{3} & = & \int_{0}^{\infty} \left( \frac{x^{2}}{x^{4}+2ax^{2}+1} \right)^{c} \, dx \nonumber \\ I_{4} & = & \frac{1}{2} \int_{0}^{\infty} \left( \frac{x^{2}}{x^{4}+2ax^{2}+1} \right)^{c} \, \frac{x^{2}+1}{x^{2}} \, dx. \nonumber \end{eqnarray} \noindent Then $I_{1}=I_{2}=I_{3}=I_{4}$ and this common value is \begin{equation} I(a,b;c) = 2^{-1/2-c}(1+a)^{1/2-c} B \left( c - \tfrac{1}{2}, \tfrac{1}{2} \right). \label{master} \end{equation} \end{Thm} \begin{proof} Observe that if $g$ satisfies $g(1/x) = x^{2}g(x)$, differentiation with respect to the parameter $b$ shows that the integral of $g(x)/(x^{b}+1)$ over $[0, \infty)$ is independent of $b$. This proves the equivalence of the four stated integrals. Theorem \ref{main-thm} is now used to evaluate $I_{3}$. For any function $f$, the Cauchy-Schl\"omilch transformation gives \begin{eqnarray} \int_{0}^{\infty} f \left( \frac{x^{2}}{x^{4}+2ax^{2}+1} \right) \, dx & = & \int_{0}^{\infty} f \left( \frac{1}{(x - x^{-1})^{2}+2a+2} \right) \, dx \nonumber \\ & = & \int_{0}^{\infty} f \left( \frac{1}{x^{2}+2(a+1)} \right) \, dx. 
\nonumber \end{eqnarray} \noindent Apply this to $f(x) = x^{c}$ and use the change of variables $u = \frac{2(a+1)}{x^{2}+2(a+1)}$ to produce \begin{equation} \int_{0}^{\infty} \left( \frac{x^{2}}{x^{4}+2ax^{2}+1} \right)^{c} \, dx = \frac{1}{2} \left[ 2(a+1) \right]^{\tfrac{1}{2}-c} \int_{0}^{1} u^{c-3/2} (1-u)^{-1/2} \, du. \nonumber \end{equation} \noindent This last integral is the special value $B(c - \tfrac{1}{2}, \tfrac{1}{2})$ of Euler's beta function. \\ \end{proof} The next theorem presents an alternative form of the integral in Theorem \ref{main-thm}. \begin{Thm} \label{thm-alter} For any function $f$, \begin{equation} \int_{0}^{\infty} f \left( \frac{bx^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \frac{\sqrt{b}}{2 \sqrt{a^{*}}} \int_{0}^{1} \frac{f(a^{*}t)}{\sqrt{t(1-t)}} \, \frac{dt}{t}, \end{equation} \noindent where $a^{*} = \frac{b}{2(1+a)}$. \end{Thm} \begin{proof} This follows from the identity in Theorem \ref{main-thm} and the change of variable $t = 2(a+1)/[ x^{2}+2(a+1)]$. \end{proof} The {\em master formula} (\ref{master}) yields many other evaluations of definite integrals; see \cite{roopa} for some of them. The next theorem provides a new class of integrals that are derived from (\ref{master}). \begin{Thm} \label{thm-series} Let \begin{equation} f(x) = \sum_{n=1}^{\infty} c_{n}x^{n} \nonumber \end{equation} \noindent be an analytic function with $f(0) = 0$. Then \begin{equation} \int_{0}^{\infty} f \left( \frac{x^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \frac{\pi}{2^{3/2} \, \sqrt{1+a}} \sum_{n=0}^{\infty} c_{n+1} \binom{2n}{n} u^{n}, \label{series-for} \end{equation} \noindent where $u = \frac{1}{8(1+a)}$. \end{Thm} \begin{proof} Integrate term-by-term and use the value \begin{equation} B \left( m + \tfrac{1}{2}, \tfrac{1}{2} \right) = \frac{\pi}{2^{2m}} \binom{2m}{m} \nonumber {} \end{equation} \noindent to simplify the result. 
\end{proof} \medskip \section{Exponentials and Bessel functions} \label{sec-exponential} \setcounter{equation}{0} This section describes the application of Theorem \ref{thm-series} to a number of definite integrals. The Taylor expansion of $f(x) = 1-e^{-bx}$ with $b > 0$ leads to an integral that can be evaluated in terms of the {\em modified Bessel} functions $I_{\nu}(x)$ defined by the series \begin{equation} I_{\nu}(x) = \sum_{j=0}^{\infty} \frac{x^{\nu + 2j} }{j! \Gamma(\nu + j+1) \, 2^{\nu+ 2j}}. {} \end{equation} \noindent In particular \begin{equation} I_{0}(x) = \sum_{j=0}^{\infty} \frac{x^{2j}}{2^{2j} j!^{2}} \text{ and } I_{1}(x) = \sum_{j=0}^{\infty} \frac{x^{2j+1}}{2^{2j+1} j!(j+1)!}. \label{bessel-exp} {} \end{equation} Although, as will be seen below, our results have more direct derivations, the following procedure is more informative. \begin{Example} \label{bessel-1} For $a>-1$ and $b>0$, let $c = \frac{b}{8(1+a)}$. Then \begin{equation} \int_{0}^{\infty} \left( 1 - e^{-bx^{2}/(x^{4}+2ax^{2}+1)} \right) \, dx = \frac{\pi b e^{-2c} }{2^{3/2} \sqrt{1+a}} \left[ I_{0}(2c) + I_{1}(2c) \right]. \nonumber {} \end{equation} \medskip \noindent The function $f(x) = 1 - e^{-bx}$ has coefficients $c_{n} = (-1)^{n+1}b^{n}/n!$ and (\ref{series-for}) yields \begin{equation} \int_{0}^{\infty} \left( 1 - e^{-bx^{2}/(x^{4}+2ax^{2}+1)} \right) \, dx = \frac{\pi b}{2^{3/2} \, \sqrt{1+a}} h(- b u ) \nonumber \end{equation} \noindent where $u = 1/(8(1+a))$ and \begin{equation} h(x) = \sum_{n=0}^{\infty} \frac{\binom{2n}{n} }{(n+1)!} x^{n}. \label{h-def} \end{equation} \smallskip \noindent The result now follows from the relation $c = bu$, the identification of the series $h$ in terms of Bessel functions given in the next proposition, and the parity relations $I_{0}(-x) = I_{0}(x)$, $I_{1}(-x) = -I_{1}(x)$. \end{Example} \begin{Prop} \label{bessel-lemma} The following identity holds: \begin{equation} \sum_{n=0}^{\infty} \frac{\binom{2n}{n}}{(n+1)!} x^{n} = e^{2x} \left[ I_{0}(2x) - I_{1}(2x) \right]. \nonumber {} \end{equation} \end{Prop} We present two different proofs. 
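Before turning to the proofs, the identity of the proposition can be spot-checked numerically (our illustration, not part of the original argument; it assumes Python with \texttt{scipy}):

```python
# Check sum_{n>=0} binom(2n,n)/(n+1)! * x^n = e^{2x} (I0(2x) - I1(2x))
# at the sample point x = 0.3; sixty terms are far beyond double precision.
import math
from scipy.special import iv  # modified Bessel functions I_nu

x = 0.3
series = sum(math.comb(2 * n, n) / math.factorial(n + 1) * x**n
             for n in range(60))
closed = math.exp(2 * x) * (iv(0, 2 * x) - iv(1, 2 * x))
assert abs(series - closed) < 1e-10
```
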
The first one is elementary and is based on the WZ-method described in \cite{aequalsb}. \texttt{Mathematica} actually provides a third proof by direct evaluation of the series. \begin{proof} The expansion (\ref{bessel-exp}) yields \begin{equation} I_{0}(2x) - I_{1}(2x) = \sum_{r=0}^{\infty} \frac{x^{r}}{b_{r}}, \label{identity-0} \end{equation} \noindent where \begin{equation} b_{r} = \begin{cases} j!^{2} & \text{ if } r = 2j \\ -j!(j+1)! & \text{ if } r = 2j+1. \end{cases} \nonumber \end{equation} \noindent Multiplying (\ref{identity-0}) by the series for $e^{2x}$ leads to an equivalent formulation of the claim as the identity \begin{equation} \sum_{j=0}^{k} \frac{(-1)^{j}}{2^{j}} \binom{k}{j} \binom{j}{\lfloor j/2 \rfloor} = \frac{1}{2^{k} (k+1)} \binom{2k}{k}. {} \label{wz-1} \end{equation} The upper index of the sum is extended to infinity and (\ref{wz-1}) is written as \begin{equation} \sum_{j \geq 0} \frac{(-1)^{j}}{2^{j}} \binom{k}{j} \binom{j}{\lfloor j/2 \rfloor} = \frac{1}{2^{k} (k+1)} \binom{2k}{k}. \nonumber {} \end{equation} \noindent The even and odd indices are considered separately. Define \begin{equation} S_{e} := \sum_{j \geq 0} \frac{1}{2^{2j}} \binom{k}{2j} \binom{2j}{j} \text{ and } S_{o} := \sum_{j \geq 0} \frac{1}{2^{2j+1}} \binom{k}{2j+1} \binom{2j+1}{j}. {} \nonumber \end{equation} \noindent The result is obtained from the values \begin{equation} S_{e} = \frac{1}{2^{k}} \binom{2k}{k} \text{ and } S_{o} = \frac{k}{(k+1) \, 2^{k}} \binom{2k}{k}. {} \label{values} \end{equation} \noindent To establish (\ref{values}), the WZ-method is applied to the functions \begin{equation} S_{e}^{*}(k) = S_{e} \, \binom{2k}{k}^{-1} 2^{k} \text{ and } S_{o}^{*}(k) = S_{o} \, \binom{2k}{k}^{-1} \frac{(k+1) \, 2^{k}}{k}. 
{} \nonumber \end{equation} \noindent The output is that both $S_{e}^{*}$ and $S_{o}^{*}$ satisfy the recurrence $a_{k+1} - a_{k} = 0$ with {\em certificates} \begin{equation} \frac{-4j^{2}}{(2k+1)(k+1+2j)} \text{ and } \frac{-4j(j+1)}{(2k+1)(k-2j)}, \nonumber \end{equation} respectively. The initial conditions $S_{e}^{*}(0) = S_{o}^{*}(0) = 1$ give $S_{e}^{*}(k) \equiv S_{o}^{*}(k) \equiv 1$. \end{proof} \medskip As promised above, we present an alternative proof of Proposition \ref{bessel-lemma} based on Theorem \ref{thm-alter}. \begin{proof} Theorem \ref{thm-alter} gives \begin{equation} I := \int_{0}^{\infty} \left( 1 - e^{-bx^{2}/(x^{4}+2ax^{2}+1)} \right) \, dx = \frac{\sqrt{b}}{2 \sqrt{a^{*}}} \int_{0}^{1} \frac{1-e^{-a^{*}t}}{t} \frac{dt}{\sqrt{t(1-t)}}. \nonumber {} \end{equation} The factor $(1-e^{-a^{*}t})/t$ is handled as in the {\em Frullani integral}: writing $(1-e^{-a^{*}t})/t = a^{*} \int_{0}^{1} e^{-a^{*}ty} \, dy$ gives \begin{equation} I = \frac{\sqrt{b a^{*}}}{2} \int_{0}^{1} \int_{0}^{1} \frac{e^{-a^{*}ty}}{\sqrt{t(1-t)}} dy \, dt. \end{equation} \noindent Exchanging the order of integration, the inner integral is a well-known Laplace transform \cite{erderly3} (p.~366, $19.5.11$ with $n=0$) \begin{equation} \int_{0}^{1} \frac{e^{-\omega t} \, dt}{\sqrt{t(1-t)}} = \pi e^{-\omega /2} I_{0} \left( \tfrac{\omega}{2} \right), {} \end{equation} whence we find \begin{equation} I = \frac{\pi \sqrt{b}}{\sqrt{a^{*}}} \int_{0}^{a^{*}/2} e^{-t} I_{0}(t) \, dt. \nonumber \end{equation} \noindent The relation \begin{equation} \frac{d}{dt} \left( te^{-t} ( I_{0}(t) + I_{1}(t) ) \right) = e^{-t} I_{0}(t) {} \end{equation} \noindent now completes the proof. \end{proof} \medskip \begin{Example} Choosing $a=0$ and $b=4$ gives \begin{equation} \int_{0}^{\infty} \left( 1 - e^{-4x^{2}/(x^{4}+1)} \right) \, dx = \frac{\pi \sqrt{2}}{e} \left( I_{0}(1) + I_{1}(1) \right). 
\nonumber {} \end{equation} \end{Example} \begin{Example} The values $a=1$ and $b=8$ yield \begin{equation} \int_{0}^{\infty} \left( 1 - e^{-8x^{2}/(x^{2}+1)^{2}} \right) \, dx = \frac{2 \pi}{e} \left( I_{0}(1) + I_{1}(1) \right). \nonumber {} \end{equation} \end{Example} \medskip \noindent \texttt{Mathematica} is unable to evaluate these two examples. \\ \section{Trigonometric and Bessel functions} \label{sec-trigo} \setcounter{equation}{0} The next example employs the familiar Taylor expansion of $\sin b x$. The result is expressed in terms of the {\em Bessel function of the first kind} \begin{equation} J_{\nu}(x) = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{k! \, (k+\nu)!} \left( \frac{x}{2} \right)^{2k + \nu}. {} \label{bes-def} \end{equation} \begin{Example} \label{cool} Let $c = b/(8(1+a))$. Then \begin{equation} \int_{0}^{\infty} \sin \left( \frac{bx^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \frac{\pi b }{\sqrt{8(1+a)}} \left[ J_{0}(2c) \cos 2c + J_{1}(2c) \sin 2c \right]. {} \nonumber \end{equation} \noindent \smallskip To verify this, apply Theorem \ref{thm-series} to $\sin bx$ to obtain \begin{equation} \int_{0}^{\infty} \sin \left( \frac{bx^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \frac{\pi b }{\sqrt{8(1+a)}} \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!} \binom{4k}{2k} c^{2k}. \nonumber {} \end{equation} \begin{Lem} The following identity holds: \begin{equation} \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!} \binom{4k}{2k} c^{2k} = J_{0}(2c) \cos 2c + J_{1}(2c) \sin 2c. {} \nonumber \end{equation} \noindent \end{Lem} \begin{proof} Using the Cauchy product and the series expression (\ref{bes-def}), the right-hand side is written as \begin{equation} J_{0}(2c) \cos 2c + J_{1}(2c) \sin 2c = \nonumber \end{equation} \begin{eqnarray} & = & \sum_{k=0}^{\infty} \sum_{j=0}^{k} \frac{(-1)^{k} c^{2k} 4^{j}} {(2j)! (k-j)!^{2}} + 2c^{2} \sum_{k=0}^{\infty} \sum_{j=0}^{k} \frac{(-1)^{k} c^{2k} 4^{j}} {(2j+1)! (k-j)! 
(k-j+1)!} \nonumber \\ & = & \sum_{k=0}^{\infty} \frac{(-1)^{k} c^{2k} (4k)!}{(2k)!^{3}} + 2c^{2} \sum_{k=0}^{\infty} \frac{(-1)^{k} c^{2k} (4k+3)!}{(2k+3)! \, (2k+1)!^{2}} \nonumber \\ & = & \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!} \binom{4k}{2k} c^{2k}. \nonumber \end{eqnarray} \noindent The passage from the first to the second equality is justified by the identities \begin{equation} \sum_{j=0}^{k} \frac{4^{j}}{(2j)! (k-j)!^{2}} = \frac{1}{(2k)!} \binom{4k}{2k} \text{ and } \sum_{j=0}^{k} \frac{4^{j}}{(2j+1)! (k-j)! \, (k-j+1)!} = \frac{4k+3}{(2k+3)!} \binom{4k+2}{2k+1}. \nonumber \end{equation} \noindent Both of these formulas are in turn verifiable via the WZ-method \cite{aequalsb} with their respective rational certificates \begin{equation} \frac{(6n^{2}+10n+4-4nk-34k)(2k-1)k}{(n+1-k)^{2} (4n+1)(4n+3)} \text{ and } \frac{(20n+17+6n^{2}-4nk-7k)(2k-1)}{(n+1-k)(n+2-k) (4n+5)(4n+7)}. \nonumber \end{equation} \end{proof} \medskip \begin{Example} The choice $a=0$ and $b=1$ in Example \ref{cool} produces \begin{equation} \int_{0}^{\infty} \sin \left( \frac{x^{2}}{x^{4}+1} \right) \, dx = \frac{\pi}{2 \sqrt{2}} \left[ J_{0} \left( \tfrac{1}{4} \right) \cos \left( \tfrac{1}{4} \right) + J_{1} \left( \tfrac{1}{4} \right) \sin \left( \tfrac{1}{4} \right) \right]. \label{nice-trick} {} \end{equation} \end{Example} \begin{Example} By choosing $a=b=1$ in Example \ref{cool}, we get \begin{equation} \int_{0}^{\infty} \sin \left( \left[\frac{x}{x^{2}+1} \right]^{2} \right) \, dx = \frac{\pi}{4} \left[ J_{0} \left( \tfrac{1}{8} \right) \cos \left( \tfrac{1}{8} \right) + J_{1} \left( \tfrac{1}{8} \right) \sin \left( \tfrac{1}{8} \right) \right]. {} \end{equation} \end{Example} \noindent At the time of this writing, \texttt{Mathematica} is unable to evaluate the integrals in the two previous examples. \\ \noindent {\bf Note}. 
The function \begin{equation} g(u) = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2k+1)!} \binom{4k}{2k} u^{2k} \end{equation} \noindent also has a hypergeometric form as \begin{equation} g(u) = {}_{2}F_{3}\bigg[{\tfrac{1}{4} \,\,\, \tfrac{3}{4} \atop \tfrac{1}{2} \,\,\, 1 \,\,\, \tfrac{3}{2}}; -4u^{2} \bigg]. \label{standard} {} \end{equation} \noindent This follows directly from the identity \begin{equation} \frac{\binom{4k}{2k}}{(2k+1)!} = \frac{2^{2k-3/2} \Gamma(k + 1/4) \Gamma(k + 3/4)} {\Gamma(k+1/2) \, \Gamma^{2}(k+1) \, \Gamma(k + 3/2)}, {} \nonumber \end{equation} \noindent which is established via the duplication formula of the gamma function \begin{equation} \Gamma(2x) = \frac{1}{\sqrt{\pi}} 2^{2x-1} \Gamma(x) \Gamma(x + \tfrac{1}{2}) \nonumber {} \end{equation} \noindent and its iteration \begin{equation} \Gamma(4x) = \frac{1}{\pi \sqrt{\pi}} 2^{8x-5/2} \Gamma(x) \Gamma(x + \tfrac{1}{4}) \Gamma(x + \tfrac{1}{2} ) \Gamma(x + \tfrac{3}{4}). \nonumber {} \end{equation} \noindent Then $\Gamma(a+k) = (a)_{k} \Gamma(a)$ produces (\ref{standard}). Here $(a)_{k} = a(a+1) \cdots (a+k-1)$ is the Pochhammer symbol. This gives an alternative form of the result described in Example \ref{cool}, i.e., \begin{equation} \int_{0}^{\infty} \sin \left( \frac{bx^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \frac{\pi \, b}{2 \sqrt{2(1+a)}} \, \, {}_{2}F_{3}\bigg[{\tfrac{1}{4} \,\,\, \tfrac{3}{4} \atop \tfrac{1}{2} \,\,\, 1 \,\,\, \tfrac{3}{2}}; -b^{2}/(16(1+a)^{2}) \bigg]. {} \nonumber \end{equation} \medskip \noindent {\bf Note}. Proceeding as in Example \ref{beauty-1} we can obtain \begin{multline} \int_{0}^{\infty} (x^{2} + x^{-2}-1) \sin \left( \frac{x^{6}+x^{-6}-2}{x^{12} - 4x^{6} - 4x^{-6} + x^{-12} +7} \right) \, dx = \\ \frac{\pi}{6 \sqrt{2}} \left[ J_{0} \left( \tfrac{1}{4} \right) \cos \tfrac{1}{4}+ J_{1} \left( \tfrac{1}{4} \right) \sin \tfrac{1}{4} \right]. \nonumber {} \end{multline} \noindent \texttt{Mathematica} is unable to compute the preceding integral. 
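The evaluation (\ref{nice-trick}) is also a convenient target for a direct numerical check (our illustration, not part of the original text; it assumes Python with \texttt{scipy}):

```python
# Numerical check of int_0^inf sin(x^2/(x^4+1)) dx
#   = pi/(2*sqrt(2)) * (J0(1/4)*cos(1/4) + J1(1/4)*sin(1/4)).
import math
from scipy.integrate import quad
from scipy.special import j0, j1

lhs, _ = quad(lambda x: math.sin(x**2 / (x**4 + 1)), 0, math.inf)
rhs = math.pi / (2 * math.sqrt(2)) * (
    j0(0.25) * math.cos(0.25) + j1(0.25) * math.sin(0.25))
assert abs(lhs - rhs) < 1e-6
```
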
\\ \noindent {\bf Note}. The result in Example \ref{cool} can also be established by the Frullani method described in Section \ref{sec-exponential}. Start with \begin{equation} I := \int_{0}^{\infty} \sin \left( \frac{bx^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \frac{\sqrt{b}}{2 \sqrt{a^{*}}} \int_{0}^{1} \frac{\sin(a^{*}t)}{t} \frac{dt}{\sqrt{t(1-t)}}. \nonumber \end{equation} \noindent As before, write \begin{equation} I = \frac{\sqrt{a^{*} b}}{2} \int_{0}^{1} \int_{0}^{1} \frac{\cos(a^{*}ty)} {\sqrt{t(1-t)}} dt \, dy. \nonumber \end{equation} \noindent The inner integral is a well-known cosine transform (\cite{erderly2}, p.~12, $1.3.14$)\footnote{Note that formula $1.3.14$ in \cite{erderly2} is incorrect.} \begin{equation} \int_{0}^{1} \frac{\cos( \omega t) \, dt}{\sqrt{t(1-t)}} = \pi J_{0} \left( \frac{\omega}{2} \right) \cos \frac{\omega}{2}, {} \end{equation} \noindent and it follows that \begin{equation} I = \frac{\pi \sqrt{b}}{\sqrt{a^{*}}} \int_{0}^{a^{*}/2} \cos t \, J_{0}(t) \, dt. \nonumber \end{equation} \noindent The result is now immediate from the identity \begin{equation} \frac{d}{dt} \left[ t \left( \cos t \, J_{0}(t) + \sin t \, J_{1}(t) \right) \right] = \cos t \, J_{0}(t). {} \nonumber \end{equation} \end{Example} \section{The sine integral and Bessel functions} \label{sec-sine} \setcounter{equation}{0} The {\em sine integral} is defined by \begin{equation} \text{Si}(x) := \int_{0}^{x} \frac{\sin t}{t} \, dt. \end{equation} \noindent The Cauchy-Schl\"omilch transformation can be employed to prove \begin{equation} \int_{0}^{\infty} \text{Si} \left( \frac{bx^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \pi \sqrt{2(1+a)} \, \left[ (4c \cos 2c - \sin 2c ) J_{0}(2c) + 4c \sin 2c \, J_{1}(2c) \right], {} \nonumber \end{equation} \noindent where $c = b/(8(1+a))$. 
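Before deriving this identity, it can be spot-checked numerically; the snippet below (our illustration, not part of the original text; it assumes Python with \texttt{scipy}) tests the case $a = b = 1$:

```python
# Spot-check of
#   int_0^inf Si(b x^2/(x^4 + 2a x^2 + 1)) dx
#     = pi*sqrt(2(1+a)) * [(4c cos 2c - sin 2c) J0(2c) + 4c sin 2c J1(2c)]
# with c = b/(8(1+a)), tested for a = b = 1.
import math
from scipy.integrate import quad
from scipy.special import sici, j0, j1  # sici(x) returns (Si(x), Ci(x))

a = b = 1.0
c = b / (8 * (1 + a))
lhs, _ = quad(lambda x: sici(b * x**2 / (x**4 + 2*a*x**2 + 1))[0],
              0, math.inf)
rhs = math.pi * math.sqrt(2 * (1 + a)) * (
    (4*c*math.cos(2*c) - math.sin(2*c)) * j0(2*c)
    + 4*c*math.sin(2*c) * j1(2*c))
assert abs(lhs - rhs) < 1e-5
```
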
To establish this identity, start with the evaluation in Example \ref{cool} \begin{equation} \int_{0}^{\infty} \sin \left( \frac{bx^{2}}{x^{4}+2ax^{2}+1} \right) \, dx = \frac{\pi b }{\sqrt{8(1+a)}} \left[ J_{0}(2c) \cos 2c + J_{1}(2c) \sin 2c \right], \nonumber \end{equation} \noindent divide by $b$, and integrate both sides with respect to $b$. Then the identity \begin{equation} \frac{d}{dx} \left[ (2x \cos x - \sin x ) J_{0}(x) + 2x \sin x J_{1}(x) \right] = J_{0}(x) \cos x + J_{1}(x) \sin x {} \end{equation} \noindent gives the result. \begin{Example} The special case $a=0$ and $b=1$ yields \begin{equation} \int_{0}^{\infty} \text{Si} \left( \frac{x^{2}}{x^{4}+1} \right) \, dx = \frac{\pi}{\sqrt{2}} \left[ J_{0}( \tfrac{1}{4}) \left[ \cos \tfrac{1}{4} - 2 \sin \tfrac{1}{4} \right] + J_{1}( \tfrac{1}{4}) \sin \tfrac{1}{4} \right]. \nonumber {} \end{equation} \end{Example} \begin{Example} The special case $a=b=1$ yields \begin{equation} \int_{0}^{\infty} \text{Si} \left( \left[ \frac{x}{x^{2}+1} \right]^{2} \right) \, dx = \frac{\pi}{2} \left[ J_{0}( \tfrac{1}{8}) \left[ \cos \tfrac{1}{8} - 4 \sin \tfrac{1}{8} \right] + J_{1}( \tfrac{1}{8}) \sin \tfrac{1}{8} \right]. \nonumber {} \end{equation} \medskip \noindent \texttt{Mathematica} is unable to evaluate these integrals. \end{Example} \medskip \section{The Riemann zeta function} \label{sec-zeta} \setcounter{equation}{0} Interesting examples of definite integrals come from integral representations of special functions. For the Riemann zeta function \begin{equation} \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \end{equation} \noindent one such expression is given by \begin{equation} \zeta(s) = \frac{1}{(1 - 2^{1-s}) \, \Gamma(s) } \int_{0}^{\infty} \frac{t^{s-1} \, dt}{1+e^{t}}. \label{zeta-int} {} \end{equation} \noindent Analytic properties of $\zeta(s)$ are often established via such integral formulas. 
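The representation (\ref{zeta-int}) is easily tested numerically; at $s = 2$ the integral reduces to $(1-2^{-1}) \Gamma(2) \zeta(2) = \pi^{2}/12$ (our illustration, assuming Python with \texttt{scipy}; the finite upper limit $50$ is harmless since the tail is below double precision):

```python
# Check int_0^inf t^{s-1}/(1+e^t) dt = (1 - 2^{1-s}) Gamma(s) zeta(s)
# at s = 2, where the right side is (1/2)*1*zeta(2) = pi^2/12.
# The tail beyond t = 50 is of order 51*e^{-50}, i.e. negligible.
import math
from scipy.integrate import quad

s = 2
val, _ = quad(lambda t: t**(s - 1) / (1 + math.exp(t)), 0, 50)
assert abs(val - math.pi**2 / 12) < 1e-9
```
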
The change of variables $t = y^{2}$ produces \begin{equation} \int_{0}^{\infty} \frac{y^{2s-1} \, dy}{1+e^{y^{2}}} = \frac{1}{2} (1-2^{1-s})\Gamma(s) \zeta(s). \label{zeta-def1} {} \end{equation} \noindent \begin{Example} We now employ Theorem \ref{main-thm} to establish \begin{equation} \int_{0}^{\infty} \frac{x^{2s+1} \, dx}{\cosh^{2}(x^{2})} = 2^{-s} (1-2^{1-s}) \Gamma(s+1) \zeta(s). {} \label{ex-zeta-2} \end{equation} The notation \begin{equation} \Lambda(s) := \frac{1-2^{1-s}}{2} \Gamma(s) \zeta(s) \end{equation} \noindent is employed in the proof. First introduce the change of variable $x = t^{r}$ in the Cauchy-Schl\"omilch formula and take $a = b$ for simplicity. Then \begin{equation} \int_{0}^{\infty} t^{r -1} f \left[ a^{2}( t^{r} - t^{-r} )^{2} \right] \, dt = \frac{1}{ar} \int_{0}^{\infty} f(y^{2}) \, dy. \label{sch-90} \end{equation} \noindent Now let $f(x) = x^{s-1/2}/(1+e^{x})$. Using the notation $S_{r} = \sinh(r \ln t)$ (since $f$ is evaluated at the square $(2aS_{r})^{2}$, odd powers of $S_{r}$ below are to be read as powers of $|S_{r}|$), (\ref{sch-90}) yields \begin{equation} \frac{\Lambda(s)}{ar} = \int_{0}^{\infty} \frac{t^{r-1} (at^{r}-at^{-r})^{2s-1} \, dt } { 1 + \text{exp}[ (at^{r}-at^{-r})^{2} ]} = \int_{0}^{\infty} \frac{t^{r-1} (2aS_{r})^{2s-1} \, dt} {1+ \text{exp}[ (2a S_{r})^{2} ]}. \label{form-2} \end{equation} \noindent Differentiate (\ref{form-2}) with respect to $a$ and use \begin{equation} \frac{e^{\theta}}{( 1+ e^{\theta})^{2}} = \frac{1}{4 \cosh^{2}(\theta/2)} \label{cosh-11} {} \end{equation} to obtain \begin{equation} \frac{2s-1}{a} \frac{\Lambda(s)}{ar} - (2a)^{2s} \int_{0}^{\infty} \frac{t^{r-1} S_{r}^{2s+1} \, dt}{\cosh^{2}[ 2(aS_{r})^{2}]} = - \frac{\Lambda(s)}{a^{2}r}, \end{equation} \noindent which produces \begin{equation} \int_{0}^{\infty} \frac{t^{r-1} S_{r}^{2s+1} \, dt}{\cosh^{2}[ 2(aS_{r})^{2}] } = \frac{2s \Lambda(s)}{(2a)^{2s} a^{2} r}. \label{int-two} \end{equation} \noindent Now change $t$ by $t^{-1}$ in (\ref{int-two}) and average the resulting integral with itself. 
The outcome is written as \begin{equation} \int_{0}^{\infty} \frac{(t^{r-1}+t^{-r-1}) \, S_{r}^{2s+1} \, dt}{2 \cosh^{2}[ 2 (a S_{r})^{2} ] } = \frac{2s \Lambda(s)}{(2a)^{2s} a^{2} r}. \end{equation} \noindent The final change of variables $x = \sqrt{2} a S_{r}$ produces the stated result. The special case $s= \tfrac{1}{2}$ yields \begin{equation} \int_{0}^{\infty} \frac{x^{2} \, dx}{\cosh^{2}(x^{2})} = -\frac{1}{4} ( 2 - \sqrt{2}) \zeta(1/2) \sqrt{\pi}. \label{ex-zeta-1} {} \end{equation} \noindent \texttt{Mathematica} is unable to produce (\ref{ex-zeta-1}). \end{Example} \section{The error function} \label{sec-error} \setcounter{equation}{0} Several entries in the table \cite{gr} involve the {\em error function} \begin{equation} \text{erf}(x) := \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}} \, dt. {} \end{equation} \noindent For instance, entry $3.466.1$ states that \begin{equation} \int_{0}^{\infty} \frac{e^{-\mu^{2}x^{2}} \, dx}{x^{2}+ \beta^{2}} = \frac{\pi}{2 \beta} \left( 1 - \text{erf}(\mu \beta) \right) e^{\mu^{2} \beta^{2}}. {} \label{error-1} \end{equation} \noindent \texttt{Mathematica} is able to compute this example, which can be checked by writing the exponential in the integrand as $e^{\mu^{2} \beta^{2}} \times e^{-\mu^{2}(x^{2}+\beta^2)}$ and differentiating with respect to $\mu^{2}$. \begin{Example} The Cauchy-Schl\"omilch transformation is now applied to the function \begin{equation} f(x) = e^{-\mu^{2}x}/(x + 2(a+1)) \end{equation} \noindent to produce \begin{equation} \int_{0}^{\infty} \frac{e^{-\mu^{2}(x^{2}+x^{-2})} \, dx } {x^{2}+2a + x^{-2}} = \frac{\pi e^{2a \mu^{2}}}{2 \sqrt{2(a+1)}} \left[ 1 - \text{erf} \left( \mu \sqrt{2(a+1)} \, \, \right) \right]. 
{} \end{equation} \noindent The choice $a=\mu=1$ yields \begin{equation} \int_{0}^{\infty} \frac{e^{-(x^2+x^{-2})} \, dx }{(x+x^{-1})^{2}} = \frac{\pi e^{2}}{4} \left[ 1 - \text{erf} (2)\right], {} \end{equation} \noindent and $a=0, \, \mu=1$ gives \begin{equation} \int_{0}^{\infty} \frac{e^{-(x^{2}+x^{-2})}\, dx }{x^{2}+x^{-2}} = \frac{\pi}{2 \sqrt{2}} \left[ 1 - \text{erf}(\sqrt{2}) \right]. {} \end{equation} \noindent Neither of these special cases is computable by the current version of \texttt{Mathematica}. \end{Example} \section{Elliptic integrals} \label{sec-elliptic} \setcounter{equation}{0} The classical {\em elliptic integral of the first kind} is defined by \begin{equation} \mathbf{K}(k) := \int_{0}^{1} \frac{dx}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}} = \int_{0}^{\pi/2} \frac{d \varphi}{\sqrt{1 - k^{2} \sin^{2} \varphi}}. \end{equation} \noindent The table \cite{gr} contains a variety of definite integrals that can be evaluated in terms of $\mathbf{K}(k)$. For instance, entry $3.843.4$ states that \begin{equation} \int_{0}^{\infty} \frac{\tan x}{\sqrt{1- k^{2} \sin^{2}(2x)}} \, \frac{dx}{x} = \mathbf{K}(k). \end{equation} \noindent The reader will find in \cite{lawden1} a large variety of examples. In the context of the Cauchy-Schl\"omilch transformation, we present two illustrative examples. \begin{Example} The first result, valid for $a \geq b$, is \begin{equation} \int_{0}^{\infty} \frac{x^{2} \, dx} {\sqrt{(x^{4}+2ax^{2}+1)(x^{4}+2bx^{2}+1)}} = \frac{1}{\sqrt{2(a+1)}} \mathbf{K} \left( \sqrt{ \frac{a-b}{a+1}} \right). \label{elliptic-1} {} \end{equation} \smallskip To verify this result, apply Theorem \ref{main-thm} to the function \begin{equation} f(x) = \frac{1}{\sqrt{(x+2a+2)(x+2b+2)}} \nonumber \end{equation} \noindent and observe that \begin{equation} f \left( ( x - 1/x)^{2} \right) = \frac{x^{2}}{\sqrt{(x^{4}+2ax^{2}+1)(x^{4}+2bx^{2}+1)}}. 
\nonumber \end{equation} \noindent The Cauchy-Schl\"omilch transformation gives \begin{equation} \int_{0}^{\infty} \frac{x^{2} \, dx} {\sqrt{(x^{4}+2ax^{2}+1)(x^{4}+2bx^{2}+1)}} = \int_{0}^{\infty} \frac{dx}{\sqrt{(x^{2}+A^{2})(x^{2}+B^{2})}} {} \nonumber \end{equation} \noindent with $A^{2} = 2(a+1)$ and $B^{2}=2(b+1)$. This last integral is computed by the change of variable $x = A \tan \varphi$, and the trigonometric form of the elliptic integral yields (\ref{elliptic-1}). \end{Example} \begin{Example} A similar argument produces the second elliptic integral evaluation. This time it involves \begin{equation} F( \varphi,k) := \int_{0}^{\varphi} \frac{dt}{\sqrt{1-k^{2} \sin^{2}t}} = \int_{0}^{\sin \varphi} \frac{dx}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}}, \end{equation} \noindent the (incomplete) elliptic integral of the first kind. \smallskip Assume $a \leq b \leq c$. Then \begin{multline} \int_{0}^{\infty} \frac{x^{3} \, dx} {\sqrt{(x^{4}+2ax^{2}+1)(x^{4}+2bx^{2}+1)(x^{4}+2cx^{2}+1)}} = \\ \frac{1}{2 \sqrt{(b+1)(c-a)}} F \left[ \, \sin^{-1} \sqrt{ \frac{c-a}{c+1}}, \, \sqrt{ \frac{ (b-a)(c+1)}{(b+1)(c-a)}} \, \right]. \end{multline} \end{Example} \medskip Following \cite{glasser4}, a generalization of the Cauchy-Schl\"omilch identity is now used to evaluate some hyper-elliptic integrals. \begin{Thm} \label{larry-gen} Assume $\phi(z)$ is a meromorphic function with only real simple poles $a_{j}$ with $\text{Res}(\phi; a_{j}) < 0$. Moreover, assume $\phi$ is asymptotically linear. Then, for any even real-valued function $f$, \begin{equation} \int_{0}^{\infty} f \left[ \phi(x) \right] \, dx = \int_{0}^{\infty} f(x) \, dx. \end{equation} \end{Thm} \medskip \begin{Example} As a simple illustration we take \begin{equation} \phi_{N}(z) = z \prod_{j=1}^{N} \frac{z^{2}- b_{j}^{2}}{z^{2} - a_{j}^{2}}, \end{equation} \noindent where $a_{1} < a_{2} < \cdots < a_{N}$. Take $N=1$ and write $b_{1}=b$.
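Theorem \ref{larry-gen} is easy to test numerically before it is applied. The following sketch (ours, not part of the original text) checks the $N=1$ case with the arbitrary choices $f(x) = e^{-x^2}$, $a_1 = 1$, and $b_1 = 2$, so that the residue condition $\text{Res}(\phi; a_1) = (a_1^2 - b_1^2)/2 < 0$ holds:

```python
import math

def phi(x, a=1.0, b=2.0):
    """phi_1(x) = x (x^2 - b^2) / (x^2 - a^2): simple real poles at x = +-a,
    with negative residues since b > a (the hypothesis of the theorem)."""
    d = x * x - a * a
    if abs(d) < 1e-12:            # at the pole, f(phi(x)) -> f(infinity) = 0
        return float("inf")
    return x * (x * x - b * b) / d

def f(y):
    """An even test function, f(y) = exp(-y^2)."""
    if math.isinf(y):
        return 0.0
    return math.exp(-min(y * y, 700.0))   # clamp to avoid exp() overflow checks

def simpson(g, lo, hi, n=40000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * h)
    return s * h / 3.0

lhs = simpson(lambda x: f(phi(x)), 0.0, 12.0)   # integral of f(phi(x))
rhs = simpson(f, 0.0, 12.0)                     # = sqrt(pi)/2 up to quadrature error
```

Both quadratures should agree with $\sqrt{\pi}/2 \approx 0.8862$, as the theorem predicts; the tail beyond $x = 12$ is negligible for this $f$.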
Theorem \ref{larry-gen} and \begin{equation} \int_{0}^{\infty} \frac{dx}{\sqrt{(x^{2} + \alpha^{2})(x^{2}+\beta^{2})}} = \frac{1}{\alpha} \mathbf{K} \left( \frac{\sqrt{\alpha^{2}- \beta^{2}}} {\alpha} \right), \end{equation} \noindent give \begin{equation} \int_{0}^{\infty} \frac{(t-b^{2})^{2} \, dt} {\sqrt{t P(t) Q(t)}} = \frac{2}{\alpha} \mathbf{K} \left( \frac{\sqrt{\alpha^{2}- \beta^{2}}} { \alpha} \right), \end{equation} \noindent with \begin{eqnarray} P(t) & = & t^{3} + ( \alpha^{2} - 2a^{2})t^{2} + ( a^{4} - 2 \alpha^{2}b^{2})t + \alpha^{2} b^{4}, \nonumber \\ Q(t) & = & t^{3} + ( \beta^{2} - 2a^{2})t^{2} + ( a^{4} - 2 \beta^{2}b^{2})t + \beta^{2} b^{4}. \nonumber \end{eqnarray} As an interesting special case, we have \begin{equation} \int_{0}^{\infty} \frac{ (t - \tfrac{1}{2})^{2} \, dt} {\sqrt{t \left[ t^{3} - 2 (kk')^{2} t + k^{2} \right] \left[ t^{3} - 4(kk')^{2} t^{2} + k^{4} \right]} } = \frac{1}{k} \mathbf{K'}(k), \end{equation} \noindent where $k'$ is the complementary modulus and $\mathbf{K'}(k) := \mathbf{K}(k')$. \texttt{Mathematica} is unable to deal with this. \end{Example} \section{An extension of the Cauchy-Schl\"omilch method} \label{sec-extension} \setcounter{equation}{0} An extension of Theorem \ref{main-thm} by Jones \cite{jones1} is discussed here. The next section presents statistical applications of this result. \begin{Thm} Let $s$ be a continuous decreasing function from ${\mathbb{R}}^{+}$ onto ${\mathbb{R}}^{+}$. Assume $s$ is self-inverse, that is, $s^{-1}(x) = s(x)$ for all $x \in {\mathbb{R}}^{+}$. Then \begin{equation} \int_{0}^{\infty} f\left( [x - s(x) ]^{2} \right) \, dx = \int_{0}^{\infty} f\left( y^{2} \right) \, dy, \label{gen-sch} \end{equation} \noindent provided the integrals are convergent. \end{Thm} \begin{proof} The change of variables $t = s(x)$ yields \begin{equation} I = \int_{0}^{\infty} f( [x- s(x) ]^{2}) \, dx = - \int_{0}^{\infty} f( [s(t) - t ]^{2}) \, s'(t) \, dt.
\end{equation} \noindent The average of these two representations, followed by the change of variables $u = x - s(x)$ gives the result. \end{proof} \noindent {\bf Note}. The above result is given without a scaling constant $a>0$. This could be introduced by the change of variable $x_{1} = ax$ in (\ref{gen-sch}) to obtain, after relabeling $x_{1}$ as $x$, \begin{equation} \int_{0}^{\infty} f( [ax- s(ax) ]^{2}) \, dx = \frac{1}{a} \int_{0}^{\infty} f\left( y^{2} \right) \, dy. \label{parameter-1} \end{equation} Jones \cite{jones1} lists several specific forms of self-inverse $s(x)$ along with two methods for generating such functions based on work of Kucerovsky, Marchand and Small \cite{kucerovsky1}. \begin{Example} An attractive self-inverse function is \begin{equation} s(x) = x - \frac{1}{\alpha} \log \left( e^{\alpha x} -1 \right). \label{formula-fors} \end{equation} \noindent Then (\ref{gen-sch}) becomes \begin{equation} \int_{0}^{\infty} f \left( \frac{1}{\alpha^{2}} \log^{2}(e^{\alpha x} -1 ) \right) \, dx = \int_{0}^{\infty} f(y^{2}) \, dy. \end{equation} \noindent The choice $f(x) = e^{-x}$ gives, using (\ref{normal-1}), \begin{equation} \int_{0}^{\infty} \text{exp}\left( - \frac{1}{\alpha^{2}} \log^{2} (e^{\alpha x} -1) \right) \, dx = \frac{\sqrt{\pi}}{2}. \label{jones-11} \end{equation} \end{Example} \begin{Example} Several other examples of self-inverse functions are provided in Jones \cite{jones1}. Each one produces a Cauchy-Schl\"omilch type integral. 
Some examples are \begin{eqnarray} \int_{1}^{\infty} f \left[ (x - \text{exp}(\alpha/\log x))^{2} \right] \, dx & = & \int_{0}^{\infty} f(y^{2}) \, dy, \nonumber \\ & & \nonumber \\ \int_{0}^{\infty} f \left[ \frac{1}{\alpha^{2}} \, \log^{2} \left( \frac{e^{\alpha x} \sinh(\alpha x)}{1 + \cosh( \alpha x)} \right) \right] \, dx & = & \int_{0}^{\infty} f(y^{2}) \, dy, \nonumber \\ & & \nonumber \\ \int_{0}^{\infty} f \left[ (x - \sinh(\alpha/\sinh^{-1}x) )^{2} \right] \, \, dx & = & \int_{0}^{\infty} f(y^{2}) \, dy. \nonumber \end{eqnarray} \end{Example} \section{Application to generating flexible probability distributions} \label{sec-applications} \setcounter{equation}{0} There has recently been renewed interest in the statistical literature in generating flexible families of probability distributions for univariate continuous random variables. Baker \cite{baker-rose} describes the use of the Cauchy-Schl\"omilch transformation to generate new probability density functions from old ones. Jones \cite{jones1} does the same with the extended transformation of Section \ref{sec-extension}. The identity (\ref{parameter-1}) states that the total mass of $f(y^{2})$ is the same as that of $a f( [ ax - s(ax) ]^{2})$ for any self-inverse function $s$ and any scaling constant $a >0$. There are many techniques for introducing one or more parameters into a simple `parent distribution' with probability density function $g$ to produce more sophisticated distributions. One such method, not generally familiar, proceeds by `transformation of scale', defining $f_b(x) \propto g(t_b(x))$, where $t_b(x)$ depends on the new parameter $b$. The difficulty with this procedure lies in verifying that $f_{b}$ is integrable and then in providing its normalizing constant explicitly.
The Cauchy-Schl\"omilch result in Theorem \ref{main-thm} guarantees that the choice $t_b(x) = |x-bx^{-1}|$ produces from the density $g$ of a positive random variable a new density $f_{b}$, also for a positive random variable, via \begin{equation} f_b(x) = g(|x-bx^{-1}|). \label{12.1} \end{equation} \noindent This was observed by Baker \cite{baker-rose}. The parameter $a$ in (\ref{transf1}) is redundant for distribution theory work since its action as a scale parameter is well understood; $a$ must, however, be reintroduced for practical fitting of such distributions to data. A number of general properties of distributions with density of the form $f_b$ follow, some of which are: \smallskip \noindent (i) $f_b$ is R-symmetric \cite{mudholkar1} about R-center $\sqrt{b}$, i.e.\ $f_b(\sqrt{b}x) = f_b(\sqrt{b}x^{-1})$; \smallskip \noindent (ii) $f_b(0)=0$ and $$ f_b(x) \approx g(b/x) ~{\rm as}~x \rightarrow 0;~~~~~ f_b(x) \approx g(x) ~{\rm as}~x \rightarrow \infty ;$$ \smallskip \noindent (iii) moment relationships follow from $$E_{f_b} \{ |X-bX^{-1}|^r\} = E_g(Y^r)$$ and, by R-symmetry, $$E_{f_{b}}(X^r) = b^{r+1} E_{f_b}(X^{-(r+2)});$$ \smallskip \noindent (iv) if $g$ is decreasing, then $f_b$ is unimodal with mode at $\sqrt{b}$; \smallskip \noindent (v) if $f_b$ is unimodal, its mean and its median are both greater than its mode. \\ \noindent These properties can be found in Baker \cite{baker-rose}, but only special cases of (iii) are provided. Amongst the most interesting distributions presented by Baker is the root-reciprocal inverse Gaussian distribution (RRIG). This example, also discussed in Mudholkar and Wang \cite{mudholkar1} in the case of dispersion parameter $\lambda=1$, arises from (\ref{12.1}) when $g$ is the half-Gaussian density. The RRIG density is \begin{equation} f_b(x) = \sqrt{\frac{2}{\pi}} e^b \exp\left\{ -\frac12 \left( x^2+{b^2/ x^2 } \right) \right\}, \label{int-11} \end{equation} \noindent $b >0$.
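As a quick numerical illustration (our sketch, not taken from Baker \cite{baker-rose}), the normalization of (\ref{int-11}) guaranteed by Theorem \ref{main-thm}, together with the R-symmetry property (i), can be checked by simple quadrature; the value $b = 1$ below is an arbitrary choice:

```python
import math

def rrig_pdf(x, b=1.0):
    """RRIG density f_b(x) = sqrt(2/pi) e^b exp(-(x^2 + b^2/x^2)/2) for x > 0."""
    if x <= 0.0:
        return 0.0
    t = 0.5 * (x * x + (b * b) / (x * x))
    return math.sqrt(2.0 / math.pi) * math.exp(b - t)

def simpson(g, lo, hi, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * h)
    return s * h / 3.0

total = simpson(rrig_pdf, 0.0, 12.0)              # total mass; should be 1
symmetric = abs(rrig_pdf(2.0) - rrig_pdf(0.5))    # R-symmetry about sqrt(b) = 1
```

The mass beyond $x = 12$ is negligible here, and the symmetry check compares $f_b(\sqrt{b}\,x)$ with $f_b(\sqrt{b}\,x^{-1})$ at $x = 2$.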
This corresponds in integral terms to (\ref{rel-1}) above. Similarly, a second example presented by Baker \cite{baker-rose} (Section 3.4) is the distribution based on the half-Subbotin distribution. This is directly linked to (\ref{rel-2}). A third example, based on the half-$t$ distribution, has density \begin{equation} f_{\nu,b}(x) = \frac{2 \Gamma((\nu+1)/2)}{\sqrt{\nu\pi}\, \Gamma(\nu/2)} (1+ (x-b/x)^2/\nu)^{-(\nu+1)/2}, \label{fnub} \end{equation} \noindent $\nu,b>0$. The verification that $f_{\nu,b}(x)$ integrates to $1$ can be done by using the integral $I_3$ in Theorem \ref{boros-3param}. Of course, other distributions in Baker \cite{baker-rose} correspond to other integral formulae, while integral formulae in this article which have nonnegative integrands correspond to other distributions. Transformation of scale densities are particularly amenable to having their skewness assessed by the asymmetry function $\gamma(p),$ $0<p<1$, of Critchley and Jones \cite{critchley-jones} and provide relatively rare tractable examples thereof. Jones \cite{jones1} shows that the asymmetry function associated with unimodal $f_b$ is \begin{equation} \gamma_b(p) = \left( \sqrt{c_g^2(p)+4b} -\sqrt{4b}\right) / c_g(p) \label{12.4} \end{equation} \noindent where $c_g(p) = g^{-1}(pg(0))$. This shows that the Cauchy-Schl\"omilch transformation of scale always results in positively skewed distributions, with asymmetry functions decreasing in $p$, which become more skew as $b$ decreases. For example, for the RRIG density (\ref{int-11}), \begin{equation} \gamma_b(p) = \left( \sqrt{2b-\log p} -\sqrt{2b}\right)/\sqrt{-\log p}. \label{RRIG-for} \end{equation} \noindent The extended Cauchy-Schl\"omilch transformation described in Section 11 also affords new transformation of scale distributions, as explored by Jones \cite{jones1}. 
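The reduction of the general formula (\ref{12.4}) to (\ref{RRIG-for}) is easy to confirm numerically: for the half-Gaussian parent $g(y) = \sqrt{2/\pi}\, e^{-y^2/2}$ one has $c_g(p) = \sqrt{-2 \log p}$. The following sketch (ours) compares the two expressions on a small grid of $(b, p)$ values:

```python
import math

def c_g(p):
    """c_g(p) = g^{-1}(p g(0)) for the half-Gaussian g(y) = sqrt(2/pi) e^{-y^2/2}."""
    return math.sqrt(-2.0 * math.log(p))

def gamma_general(p, b):
    """General asymmetry function (12.4): (sqrt(c_g^2 + 4b) - sqrt(4b)) / c_g."""
    c = c_g(p)
    return (math.sqrt(c * c + 4.0 * b) - math.sqrt(4.0 * b)) / c

def gamma_rrig(p, b):
    """Closed form (RRIG-for): (sqrt(2b - log p) - sqrt(2b)) / sqrt(-log p)."""
    return (math.sqrt(2.0 * b - math.log(p)) - math.sqrt(2.0 * b)) / math.sqrt(-math.log(p))

# largest discrepancy between (12.4) and (RRIG-for) over a grid of b and p
max_gap = max(abs(gamma_general(p, b) - gamma_rrig(p, b))
              for b in (0.5, 1.0, 3.0) for p in (0.1, 0.5, 0.9))
```

The two expressions agree to rounding error, and the closed form is visibly positive and decreasing in $p$, as claimed.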
Probability densities of the form \begin{equation} f_s(x) = g(|x-s(x)|) \label{12.5} \end{equation} \noindent for decreasing, onto, self-inverse $s$ are described there. In this situation, properties (i)-(v) discussed above become: \smallskip \noindent (i$^{\prime}$) $f_s$ can be defined to be S-symmetric about S-center ${x_0}$ since $f_s(x) = f_s(s(x))$. Here $x_0$ is defined by $s(x_0) = x_0$; \smallskip \noindent (ii$^\prime$) $f_s(0)=0$ and $$ f_s(x) \approx g(s(x)) ~{\rm as}~x \rightarrow 0;~~~~~ f_s(x) \approx g(x) ~{\rm as}~x \rightarrow \infty ;$$ \noindent (iii$^\prime$) moment relationships follow from $$E_{f_s} \{ |X-s(X)|^r\} = E_g(Y^r)$$ and, by S-symmetry, $$E_{f_s}(X^r) = - E_{f_s}(s'(X)s^r(X) ).$$ A special case of the latter is that $E_{f_s}(s'(X))=-1$; \smallskip \noindent (iv$^\prime$) if $g$ is decreasing, then $f_s$ is unimodal with mode at $x_0$; \smallskip \noindent (v$^\prime$) if $f_s$ is unimodal and $g$ is convex, its mean and its median are both greater than its mode. \\ By way of example, Jones \cite{jones1} briefly explored the half-Gaussian-based analogue of (\ref{int-11}) when $s(x)$ is given by (\ref{formula-fors}). This has probability density \begin{equation} f_s(x) = \sqrt{\frac{2}{\pi}}\exp\left\{ -\frac1{2\alpha^2} \log^2 \left( e^{\alpha x}- 1\right) \right\}. \label{12.6} \end{equation} \noindent The fact that (\ref{12.6}) integrates to $1$ is closely related to (\ref{jones-11}). Its asymmetry function has the form \begin{equation} \gamma_s(p) = \frac1{\alpha \sqrt{-(\log p)/2}} \log \left\{ \cosh \left( {\alpha \sqrt{-(\log p)/2}} \right)\right\}. \label{12.7} \end{equation} \noindent Like (\ref{RRIG-for}), this asymmetry function is always positive and decreases in $p$; (\ref{12.7}) increases in $\alpha$. The Cauchy-Schl\"omilch transformation has thus motivated a new and promising area of work in distribution theory.
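That (\ref{12.6}) integrates to $1$, equivalently (\ref{jones-11}), can also be confirmed by quadrature. A minimal sketch (ours), with the arbitrary choice $\alpha = 1$:

```python
import math

def f_s(x, alpha=1.0):
    """Density (12.6): sqrt(2/pi) exp(-log^2(e^{alpha x} - 1) / (2 alpha^2)), x > 0."""
    if x <= 0.0:
        return 0.0
    u = math.log(math.expm1(alpha * x))    # expm1 keeps accuracy for small x
    return math.sqrt(2.0 / math.pi) * math.exp(-u * u / (2.0 * alpha * alpha))

def simpson(g, lo, hi, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(lo + k * h)
    return s * h / 3.0

total = simpson(f_s, 0.0, 12.0)    # should equal 1, in line with (jones-11)
```

Near $x = 0$ the integrand vanishes faster than any power of $x$, and beyond $x = 12$ it decays like $e^{-x^2/2}$, so truncating the range loses nothing at this tolerance.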
\section{Conclusions} \label{sec-conclusions} \setcounter{equation}{0} The Cauchy-Schl\"omilch transformation establishes the equality of two definite integrals with integrands related in a simple manner. Applying this transformation to a variety of well-known definite integrals yields examples that are beyond the current capabilities of symbolic languages. Our purpose in this paper is not only to present the many integrals considered here, but also to give an exposition of the salient points of the Cauchy-Schl\"omilch transformation so as to serve as motivating examples to explore further symbolic integration algorithms. \\ \noindent {\bf Acknowledgements}. The fourth author acknowledges the partial support of $\text{NSF-DMS } 0713836$.
https://arxiv.org/abs/1004.2445
The Cauchy-Schlomilch transformation
The Cauchy-Schlömilch transformation states that for a function $f$ and $a, b > 0$, the integrals of $f(x^{2})$ and $af((ax-bx^{-1})^{2})$ over the interval $[0, \infty)$ are the same. This elementary result is used to evaluate many non-elementary definite integrals, most of which cannot be obtained by symbolic packages. Applications to probability distributions are also given.
https://arxiv.org/abs/1506.08962
Product of positive semi-definite matrices
It is known that every complex square matrix with nonnegative determinant is the product of positive semi-definite matrices. There are characterizations of matrices that require two or five positive semi-definite matrices in the product. However, the characterizations of matrices that require three or four positive semi-definite matrices in the product are lacking. In this paper, we give a complete characterization of these two types of matrices. With these results, we give an algorithm to determine whether a square matrix can be expressed as the product of $k$ positive semi-definite matrices but not fewer, for $k = 1,2,3,4,5$.
\section{Introduction} Let $M_n$ be the set of $n\times n$ complex matrices. In \cite{Wu}, the author showed that a matrix in $M_n$ with nonnegative determinant can always be written as the product of five or fewer positive semi-definite matrices. This is an extension of the result in \cite{B} asserting that every matrix in $M_n$ with positive determinant is the product of five or fewer positive definite matrices. Analogous to the analysis in \cite{B}, the author of \cite{Wu} studied those matrices which can be expressed as the product of two, three, or four positive semi-definite matrices. In particular, a characterization was obtained for the matrices that can be expressed as the product of two positive semi-definite matrices; also, it was shown that any matrix not of the form $zI$, where $z$ is not a nonnegative number, can be written as the product of four positive semi-definite matrices. Moreover, it was proved in \cite[Theorem 3.3]{Wu} that if one applies a unitary similarity to change the square matrix to the form $T = \left[\begin{matrix}T_1 & * \cr 0 & T_2 \cr\end{matrix}\right] \in M_n$ such that $T_1$ is invertible and $T_2$ is nilpotent, and if $T_1$ is the product of three positive definite matrices, then $T$ can be expressed as the product of three positive semi-definite matrices as well. It was suspected that the converse of this statement is also true. However, the following example shows that this is not the case. \begin{example} Let $T = \left[\begin{matrix} -9 & -9 \cr ~0 &~0 \cr\end{matrix}\right]$. Then $T = \left[\begin{matrix} 9 & 3 \cr 3 & 2 \cr\end{matrix}\right] \left[\begin{matrix} 13 & -15 \cr -15 & 18 \cr\end{matrix}\right] \left[\begin{matrix} 1 & 1 \cr 1 & 1 \cr\end{matrix}\right]$ is the product of three positive semi-definite matrices. However, $T_1 = [-9]$ is not the product of three positive definite matrices because $\det(T_1) < 0$.
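The factorization in this example is easily verified by direct computation. A small self-contained check (our sketch; recall that a real symmetric $2\times 2$ matrix is positive semi-definite exactly when its trace and determinant are nonnegative):

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def is_psd_2x2(M):
    """A real symmetric 2x2 matrix is PSD iff its trace and determinant are >= 0."""
    return (M[0][1] == M[1][0]
            and M[0][0] + M[1][1] >= 0
            and M[0][0] * M[1][1] - M[0][1] * M[1][0] >= 0)

P1 = [[9, 3], [3, 2]]        # det 9,  trace 11: positive definite
P2 = [[13, -15], [-15, 18]]  # det 9,  trace 31: positive definite
P3 = [[1, 1], [1, 1]]        # det 0,  trace 2:  positive semi-definite

T = matmul(matmul(P1, P2), P3)
print(T)                                          # [[-9, -9], [0, 0]]
print(all(is_psd_2x2(P) for P in (P1, P2, P3)))   # True
```

The integer arithmetic is exact, so the check leaves no numerical ambiguity.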
\end{example} Of course, one may impose the obvious necessary condition that $\det(T_1) > 0$, and ask whether the conjecture is valid with this additional assumption. Nevertheless, it is easy to modify Example 1 by considering $$T \otimes I_2 = \left[\begin{matrix} -9I_2 & -9I_2 \cr ~0_2 &~0_2 \cr\end{matrix}\right] = (A_1\otimes I_2)(A_2 \otimes I_2)(A_3\otimes I_2)$$ using the factorization $T = A_1 A_2 A_3$. In the modified example, we have $T_1 = -9I_2$ and $T_2 = 0_2$. By \cite[Theorem 4]{B}, $T_1 = -9I_2$ is the product of no fewer than five positive definite matrices. In the next section, we will give a complete characterization for those matrices that can be written as the product of three positive semi-definite matrices and no fewer. Also, we add an easy-to-check necessary and sufficient condition for invertible matrices that can be written as the product of three positive definite matrices. With these results, one can use the Jordan form of a given matrix and its numerical range to decide whether it can be expressed as the product of two, three, four or five positive semi-definite matrices. \section{Product of three positive semi-definite matrices} We will prove the following. \begin{theorem} \label{main} Suppose \begin{equation}\label{T-form} T = \left[\begin{matrix}T_1 & R \cr 0 & T_2 \cr\end{matrix}\right] \quad \hbox{ such that } T_1 \hbox{ is invertible and } T_2 \hbox{ is nilpotent}. \end{equation} Then $T$ is a product of three positive semi-definite matrices if and only if one of the following holds. \begin{itemize} \item[{\rm (a)}] $R \ne 0$ or $T_2 \ne 0$. \item[{\rm (b)}] $R = 0$, $T_2 = 0$, and $T_1$ is the product of three positive definite matrices. \end{itemize} \end{theorem} We establish some lemmas to prove Theorem \ref{main}. The first one is covered by \cite[Theorem 1]{B}. \begin{lemma} \label{SAS} Let $A, S \in M_n$ such that $S$ is invertible.
Then $A$ is a product of an odd number of positive semi-definite matrices if and only if $S^*AS$ is. \end{lemma} \iffalse \it Proof. \rm Note that $A = P_1 \cdots P_{2k+1}$ if and only if $S^*AS = \tilde P_1 \cdots \tilde P_{2k+1}$, where $\tilde P_{r} = S^*P_{r}S$ if $r$ is odd, and $\tilde P_{r} = S^{-1}P_{r}(S^*)^{-1}$ if $r$ is even. \hfill $\Box$\medskip \fi In the next lemma, we need the concept of the numerical range of a matrix $A \in M_n$ defined by $$W(A) = \{ x^*Ax: x\in \IC^n, x^*x = 1\}.$$ The numerical range is a useful tool in studying matrices. One may see \cite[Chapter 1]{HJ} for the basic properties of the numerical range. \begin{lemma} \label{RS} Suppose $T = \left[\begin{matrix}T_1 & R \cr 0 & 0_p \cr\end{matrix}\right]$ such that $T_1 \in M_m$ is invertible and $R$ is nonzero. Then $T$ is a product of three positive semi-definite matrices. \end{lemma} \it Proof. \rm First, we show that there is a $p\times m$ matrix $S$ such that $T_1 + RS$ is the product of three positive definite matrices. If $m = 1$, there is $S \in \IC^p$ such that $T_1 + RS > 0$. Suppose $m > 1$ and $R$ has a singular value decomposition $R = s_1 x_1y_1^* + \cdots + s_k x_k y_k^*$, where $s_1, \dots, s_k$ are the nonzero singular values of $R$, and $x_1,\dots,x_k \in \IC^m$ and $y_1,\dots,y_k\in \IC^p$ are the corresponding left and right unit singular vectors, respectively. Let $\{e_1,\dots,e_m\}$ be the standard basis for $\IC^m$. Take a unitary $U$ such that $U x_1 = e_1$. Since $T_1$ is invertible, so is $\hat T_1 = UT_1U^*$. Suppose $\hat t_1^*$ is the first row of $\hat T_1$. Let $v = \hat t_1 + \epsilon e_2$ with $\epsilon > 0$ and let $\hat T_1(\epsilon)$ be the $m \times m$ matrix obtained from $\hat T_1$ by replacing its first row with $v^*$. Take a sufficiently small $\epsilon >0$ such that $v$ is not a multiple of $e_1$ and the matrix $\hat T_1(\epsilon)$ is still invertible. Set $S = s_1^{-1} r e^{i\theta} y_1 v^*U$ with $r > 0$ and $\theta\in [0,2\pi)$.
Then $$U(T_1 + RS)U^* = UT_1U^* + URSU^* = \hat T_1 + r e^{i\theta} e_1 v^*.$$ Since $v$ is not a multiple of $e_1$, the rank one matrix $e_1v^*$ is not normal and so $W(e_1v^*)$ is an elliptical disk with foci $0$ and $v_1$ with length of minor axis $\sqrt{\|v\|^2 - |v_1|^2} > 0$, where $v_1$ is the first entry of $v$; for example, see \cite[Theorem 1.3.6]{HJ}. By the fact that the map $X \mapsto W(X)$ is continuous, for a sufficiently large $r >0$, $W\left( e_1 v^* + {e^{-i\theta} } \hat T_1 / r \right)$ still contains $0$ as an interior point for any $\theta \in [0,2\pi)$. Then so does $$W\left(\hat T_1 + r e^{i\theta} e_1 v^*\right) = re^{i\theta}\, W\left( e_1 v^* + \frac{e^{-i\theta} }{r} \hat T_1 \right).$$ In addition, the value $r$ can be chosen so that $r > |\det(\hat T_1)/\det(\hat T_1(\epsilon))|$. Now by the linearity of determinant with respect to the first row, $$\det(T_1 + RS) = \det(U(T_1+RS)U^*) = \det(\hat T_1 + r e^{i\theta} e_1 v^*) = \det(\hat T_1) + r e^{i\theta} \det (\hat T_1(\epsilon)).$$ Since $|\det(\hat T_1)| < | r e^{i\theta} \det (\hat T_1(\epsilon))|$, there is $\theta\in [0,2\pi)$ such that $\det(T_1 + RS) > 0$. By \cite[Theorem 3]{B} (see also Proposition \ref{three}), $T_1+RS$ is a product of three positive definite matrices. Finally, note that $$\tilde T = \begin{bmatrix} I_m & S^* \cr 0 & I_p \end{bmatrix} \begin{bmatrix} T_1 & R \cr 0 & 0_p \end{bmatrix} \begin{bmatrix} I_m & 0 \cr S & I_p \end{bmatrix} = \begin{bmatrix} T_1 + RS & R \cr 0 & 0_p \end{bmatrix}. $$ By \cite[Theorem 3.3]{Wu}, $\tilde T$ is a product of three positive semi-definite matrices, and so is $T$ by Lemma \ref{SAS}. \hfill $\Box$\medskip \iffalse \it Proof. \rm Without loss of generality, we may assume $T_1 = [t_{ij}]$ is upper triangular. 
\hfill $\Box$\medskip \fi \medskip \it Proof of Theorem \ref{main}. \rm Suppose $T$ is the product of three positive semi-definite matrices. If $R \ne 0$ or $T_2 \ne 0$, then we are done. Else, $T = T_1 \oplus 0_p$ is the product of three positive semi-definite matrices. By \cite[Proposition 3.5]{Wu}, $T_1$ is the product of three positive definite matrices. \medskip To prove the converse, we consider the following three cases. \noindent {\bf Case 1.} Suppose $R \ne 0$. We use induction on $p$, the size of $T_2$. If $p = 1$, the result follows from Lemma \ref{RS} as $T_2 = [0]$. Assume the result holds for $T_2$ with size at most $p-1$. Since $T_2$ is nilpotent, without loss of generality, we may assume that $T_2$ is upper triangular with $T_2 = \begin{bmatrix} T_{21} & T_{22} \cr 0 & 0 \end{bmatrix}$, where $T_{21} \in M_{p-1}$. Write $R = \begin{bmatrix} R_1 & R_2 \end{bmatrix}$ where $R_1$ is $m \times (p-1)$. If $R_1 \ne 0$, then by induction, the matrix $\begin{bmatrix} T_1 & R_1 \cr 0 & T_{21} \end{bmatrix}$ is a product of three positive semi-definite matrices, say, $P_1P_2P_3$. Further, by \cite[Theorem 2.2]{Wu} (see also Proposition \ref{two}), we may assume that both $P_1$ and $P_2$ are invertible. Let $X = \begin{bmatrix} R_2 \cr T_{22} \end{bmatrix}$ with size $(m+p-1) \times 1$. Then for any $\epsilon > 0$, $$\begin{bmatrix} P_1 & 0 \cr 0 & 0 \end{bmatrix} \begin{bmatrix} P_2 & \epsilon P_1^{-1} X \cr \epsilon (P_1^{-1} X)^* & 1 \end{bmatrix} \begin{bmatrix} P_3 & 0 \cr 0 & \epsilon^{-1} \end{bmatrix} = \begin{bmatrix} P_1 P_2 P_3 & X \cr 0 & 0 \end{bmatrix} = \begin{bmatrix} T_1 & R_1 & R_2 \cr 0 & T_{21} & T_{22} \cr 0 & 0 & 0 \end{bmatrix} = T. $$ Clearly, $Q_1 = P_1 \oplus [0]$ and $Q_3 = P_3 \oplus [\epsilon^{-1}]$ are positive semi-definite matrices. Now one can choose a sufficiently small $\epsilon > 0$ so that $Q_2 = \begin{bmatrix} P_2 & \epsilon P_1^{-1}X \cr \epsilon (P_1^{-1} X)^* & 1 \end{bmatrix}$ is also positive semi-definite. 
Thus, $T$ is a product of three positive semi-definite matrices $Q_1Q_2Q_3$. \medskip Now suppose $R_1 = 0$. Then the $(m+1)$th column of $T$ is a zero column. By interchanging the $(m+1)$th and the last indices, one can see that $T$ is permutationally similar to $$\begin{bmatrix} T_1 & \tilde R & 0 \cr 0 & \tilde T_{21} & 0 \cr 0 & \tilde T_{22} & 0 \end{bmatrix} \quad\hbox{where $\tilde R$ is $m \times (p-1)$, $\tilde T_{21}$ is $(p-1) \times (p-1)$, and $\tilde T_{22}$ is $1 \times (p-1)$.}$$ Notice also that $\tilde R$ is nonzero and $\tilde T_{21}$ is nilpotent. By induction, $\begin{bmatrix} T_1 & \tilde R \cr 0 & \tilde T_{21} \end{bmatrix}$ is a product of three positive semi-definite matrices $P_1P_2P_3$. By \cite[Theorem 2.2]{Wu}, we can further assume that both $P_2$ and $P_3$ are invertible. Let $Y = \begin{bmatrix} 0 & \tilde T_{22} \end{bmatrix}$ with size $1 \times (m+p-1)$. Then for any $\epsilon > 0$, $$\begin{bmatrix} P_1 & 0 \cr 0 & \epsilon^{-1} \end{bmatrix} \begin{bmatrix} P_2 & \epsilon (Y P_3^{-1})^* \cr \epsilon (Y P_3^{-1}) & 1 \end{bmatrix} \begin{bmatrix} P_3 & 0 \cr 0 & 0 \end{bmatrix} = \begin{bmatrix} P_1 P_2 P_3 & 0 \cr Y & 0 \end{bmatrix} = \begin{bmatrix} T_1 & \tilde R & 0 \cr 0 & \tilde T_{21} & 0 \cr 0 & \tilde T_{22} & 0 \end{bmatrix}.$$ Again, one can choose a sufficiently small $\epsilon > 0$ such that all three matrices on the left side of the above equation are positive semi-definite. Thus, $T$ is permutationally similar to a product of three positive semi-definite matrices, and hence $T$ can also be written as a product of three positive semi-definite matrices. \medskip \noindent {\bf Case 2.} Suppose $R = 0$ and $T_2$ is nonzero. Without loss of generality, we may assume that $T_2$ is upper triangular with zero diagonal entries while the first row of $T_2$ is nonzero. Let $Z$ be the $p \times m$ matrix with 1 at the $(1,1)$-entry and zero elsewhere. Then $Z^*T_2 \ne 0$ and $T_2 Z = 0$.
Let $S = \begin{bmatrix} I_m & 0 \cr Z & I_p \end{bmatrix}$. Then $$ S^*TS = \begin{bmatrix} I_m & Z^* \cr 0 & I_p \end{bmatrix} \begin{bmatrix} T_1 & 0 \cr 0 & T_2 \end{bmatrix} \begin{bmatrix} I_m & 0 \cr Z & I_p \end{bmatrix} = \begin{bmatrix} T_1 + Z^*T_2Z & Z^*T_2 \cr T_2Z& T_2 \end{bmatrix} = \begin{bmatrix} T_1 & Z^*T_2 \cr 0 & T_2 \end{bmatrix}.$$ Since $Z^*T_2 \ne 0$, the result follows from Case 1 and Lemma \ref{SAS}. \medskip \noindent {\bf Case 3.} Suppose $R = 0$ and $T_2 = 0$. If $T_1$ is the product of three positive definite matrices, then $T = T_1 \oplus 0_p$ is the product of three positive semi-definite matrices. \hfill $\Box$\medskip Note that Theorem \ref{main} depends on checking whether an invertible matrix is the product of three positive definite matrices. Such conditions are given in \cite[Theorem 3]{B}. We restate the result in terms of the numerical range in the following proposition, which is based on the discussion in \cite[Theorem 3 and Fact 3.2]{B}. \begin{proposition} \label{three} Let $T \in M_n$ be such that $\det(T) > 0$. Then $T$ is the product of three positive definite matrices if and only if one of the following holds. \begin{itemize} \item [{\rm (a)}] $W(T)$ contains 0 as an interior point. \item [{\rm (b)}] $W(T)$ contains a positive number, and the arguments of the eigenvalues of $T$ can be arranged as: $-\pi < \theta_1 \le \cdots \le \theta_n < \pi$ such that $\sum_{j=1}^n \theta_j = 0$.
\end{itemize} \end{proposition} Note that in \cite[p.88]{B}, the author required in condition (3.6b), corresponding to our condition (b), that all real eigenvalues of $T$ are positive, which is ensured by our assumption that $\theta_j \in (-\pi, \pi)$ for all $j$. \section{Determining the number of factors} In this section, we describe an algorithm to determine the smallest number of positive semi-definite matrices whose product equals a given $A \in M_n$ with nonnegative determinant. \medskip We first present the following proposition providing some easy tests for a matrix $A$ to be the product of two positive semi-definite matrices. The equivalence of conditions (a), (b), (c) was given in \cite[Theorem 2.2]{Wu}. We include a short proof, which is different from that of Wu. \begin{proposition} \label{two} Let $A$ be a square matrix. The following are equivalent. \begin{itemize} \item[{\rm (a)}] $A$ is the product of two positive semi-definite matrices. \item[{\rm (b)}] $A = BC$, where $B, C$ are positive semi-definite matrices such that $B$ or $C$ can be assumed to be invertible. \item[{\rm (c)}] $A$ is similar to a nonnegative diagonal matrix. \item[{\rm (d)}] $A$ is unitarily similar to an upper block triangular matrix such that the diagonal blocks are scalar matrices corresponding to distinct scalars. \item[{\rm (e)}] The minimal polynomial of $A$ only has simple nonnegative zeros. \end{itemize} \end{proposition} \it Proof. \rm The equivalence of (c), (d), (e) is clear. (c) $\Rightarrow$ (b): $A = S^{-1}DS = S^{-1}(S^{-1})^*(S^*DS) = S^{-1}D (S^{-1})^*(S^*S)$. \medskip (b) $\Rightarrow$ (a): Trivial. \medskip (a) $\Rightarrow$ (c): Suppose $A = BC$, where $B$ and $C$ are $n\times n$ positive semi-definite. Let $U$ be unitary such that $U^*BU = B_0 \oplus 0_k$, where $B_0 \in M_{n-k}$ is positive definite. Let $U^*CU = \left[\begin{matrix}C_{11}& C_{12} \cr C_{21} & C_{22} \cr\end{matrix}\right]$ be such that $C_{22} \in M_k$.
Assume $V$ is unitary such that $V^*C_{11}V = C_0 \oplus 0_\ell$ for a positive definite matrix $C_0\in M_{n-k-\ell}$. We may replace $U$ by $U(V\oplus I)$ and assume that $C_{11} = C_0 \oplus 0_\ell$. Because $C$ is positive semi-definite, $U^*CU = \left[\begin{matrix}C_0 & 0 & C_1 \cr 0 & 0_\ell & 0 \cr C_1^* & 0 & C_{22} \cr \end{matrix}\right]$. Then $$U^*AU = \left[\begin{matrix}B_0& 0 \cr 0 & 0_k \cr\end{matrix}\right] \left[\begin{matrix} C_{0}& 0 & C_1 \cr 0 & 0_\ell & 0 \cr C_1^* & 0 & C_{22} \cr\end{matrix}\right] = \left[\begin{matrix}B_0& 0 \cr 0 & 0_k \cr\end{matrix}\right] \left[\begin{matrix} C_{0}& 0 & C_1 \cr 0 & 0_\ell & 0 \cr 0 & 0 & 0_k \cr\end{matrix}\right].$$ Now \begin{eqnarray*} && \begin{bmatrix} B_0^{-\frac{1}{2}} & 0 \cr 0 & I_k \end{bmatrix} \begin{bmatrix} I & 0 & C_0^{-1} C_1 \cr 0 & I & 0 \cr 0 & 0 & I_k \end{bmatrix} \left[\begin{matrix}B_0& 0 \cr 0 & 0_k \cr\end{matrix}\right] \left[\begin{matrix} C_{0}& 0 & C_1 \cr 0 & 0_\ell & 0 \cr 0 & 0 & 0_k \cr\end{matrix}\right] \begin{bmatrix} I & 0 & -C_0^{-1} C_1 \cr 0 & I & 0 \cr 0 & 0 & I_k \end{bmatrix} \begin{bmatrix} B_0^{\frac{1}{2}} & 0 \cr 0 & I_k \end{bmatrix} \\[1mm] &=& \begin{bmatrix} B_0^{-\frac{1}{2}} & 0 \cr 0 & I_k \end{bmatrix} \left[\begin{matrix}B_0& 0 \cr 0 & 0_k \cr\end{matrix}\right] \left[\begin{matrix} C_{0}& 0 & 0 \cr 0 & 0_\ell & 0 \cr 0 & 0 & 0_k \cr\end{matrix}\right] \begin{bmatrix} B_0^{\frac{1}{2}} & 0 \cr 0 & I_k \end{bmatrix} = \left( B_0^\frac{1}{2} \begin{bmatrix} C_0 & 0 \cr 0 & 0_\ell \end{bmatrix} B_0^{\frac{1}{2}} \right) \oplus 0_k. \end{eqnarray*} Therefore, $A$ is similar to $\left( B_0^\frac{1}{2} \begin{bmatrix} C_0 & 0 \cr 0 & 0_\ell \end{bmatrix} B_0^{\frac{1}{2}} \right) \oplus 0_k$, which is positive semi-definite and is (unitarily) similar to a nonnegative diagonal matrix. 
\hfill $\Box$\medskip Now, we are ready to present an algorithm to check whether a matrix $A \in M_n$ with $\det(A) \ge 0$ can be written as a product of $k$ positive semi-definite matrices, but not fewer, for $k = 1, 2, 3, 4, 5$. \begin{algorithm} \rm Let $A \in M_n$ with $\det(A) \ge 0$. If $A = \alpha I_n$ such that $\alpha \notin [0, \infty)$, then $A$ can be expressed as a product of five positive semi-definite matrices, but not fewer. Otherwise, apply a unitary similarity to $A$ to get an upper triangular matrix $$T = \left[\begin{matrix}T_1 & R \cr 0 & T_2 \cr\end{matrix}\right] \quad \hbox{ such that } T_1 \hbox{ is invertible and } T_2 \hbox{ is nilpotent}.$$ \begin{itemize} \item[{\bf (1)}] If $T$ is a nonnegative diagonal matrix, then $A$ is itself a positive semi-definite matrix. \item[{\bf (2)}] Suppose {\bf (1)} does not hold, and $A$ satisfies any one of the equivalent conditions in Proposition \ref{two}. Then $A$ can be expressed as a product of two positive semi-definite matrices, but not fewer. \item[{\bf (3)}] Suppose {\bf (1)} and {\bf (2)} do not hold. Then $A$ can be expressed as a product of three positive semi-definite matrices, but not fewer, if any of the following holds. \begin{itemize} \item[\rm (3.a)] $R$ or $T_2$ is nonzero. \item[\rm (3.b)] Both $R = 0$ and $T_2 = 0$, and $T_1$ is the product of three positive definite matrices. \end{itemize} \medskip\noindent In (3.b), the invertible matrix $T_1$ is a product of three positive definite matrices if one of the following holds. \begin{itemize} \item[\rm (i)] $\det(T_1) > 0$ and $W(T_1)$ contains 0 as an interior point, \item[\rm (ii)] $0$ is not in the interior of $W(T_1)$, $W(T_1)$ contains a positive number, and $\sum_j \theta_j = 0$, where $-\pi < \theta_1 \le \dots \le \theta_k < \pi$ are the arguments of the eigenvalues of $T_1$.
\end{itemize} \item[{\bf (4)}] Suppose conditions {\bf (1), (2), (3)} do not hold, i.e., $T = T_1\oplus\, 0_p$ such that neither (i) nor (ii) holds for the upper triangular matrix $T_1$. Then $A$ can be expressed as a product of four positive semi-definite matrices, but not fewer. \end{itemize} \end{algorithm} \medskip\noindent {\bf Acknowledgment} The authors would like to thank the referee for his/her careful reading of the manuscript. The research of Cui was supported by National Natural Science Foundation of China 11271217. Li is an affiliated member of the Institute for Quantum Computing, University of Waterloo. He is also an honorary professor of the Shanghai University, and the University of Hong Kong; his research was supported by the USA NSF grant DMS 1331021, the Simons Foundation Grant 351047, and the NNSF of China Grant 11571220. The research of Sze was supported by a HK RGC grant PolyU 502512 and a PolyU central research grant G-UC25.
https://arxiv.org/abs/2006.16797
Confirming the Labels of Coins in One Weighing
There are $n$ bags with coins that look the same. Each bag has an infinite number of coins and all coins in the same bag weigh the same amount. Coins in different bags weigh 1, 2, 3, and so on to $n$ grams exactly. There is a unique label from the set 1 through $n$ attached to each bag that is supposed to correspond to the weight of the coins in that bag. The task is to confirm all the labels by using a balance scale once. We study weighings that we call downhill: they use the numbers of coins from the bags that are in a decreasing order. We show the importance of such weighings. We find the smallest possible total weight of coins in a downhill weighing that confirms the labels on the bags. We also find bounds on the smallest number of coins needed for such a weighing.
\section{Introduction}\label{sec:intro} Coin puzzles have fascinated mathematicians for a long time. Guy and Nowakowski summarized the most famous coin problems in their paper \cite{GN}. We are interested in the particular famous coin-weighing puzzle below. This puzzle appeared on the 2000 Streamline Olympiad for 8th grade for the case of $n=6$. \begin{quote} \textbf{Puzzle.} You have $n$ bags with coins that look the same. Each bag has an infinite number of coins and all coins in the same bag weigh the same amount. Coins in different bags weigh 1, 2, 3, 4, $\ldots$, $n-1$, and $n$ grams exactly. There is a label, which is an integer from 1 to $n$, attached to each bag that is supposed to correspond to the weight of the coins in that bag. You have only a balance scale. What is the least number of times that you need to weigh coins in order to confirm that the labels are correct? \end{quote} The answer is 1 for all $n$, and we present an example of such a weighing. \textbf{Example.} On the right pan, place one coin from the bag labeled 2, two coins from the bag labeled 3, and so on such that there are $i-1$ coins from the bag labeled $i$. On the left pan, put coins from the bag labeled 1 to match the right pan in total weight. For example, if $n = 4$, we put 20 coins labeled 1 on the left pan and coins with the labels 2, 3, 3, 4, 4, and 4 on the right pan. Any rearrangement of the bags makes the left pan heavier or the right pan lighter. Thus, all the labels are confirmed if the scale balances. Our first goal in this paper is to find a weighing that minimizes the total weight on both pans. Our second goal is to minimize the number of coins used. In Section~\ref{sec:prelim}, we provide definitions, examples, and basic results. We study downhill weighings, which are weighings that have decreasing multiplicities, that is, weighings that use fewer coins of weight $i$ than of weight $j$ whenever $i > j$.
For consistency, we assume that the multiplicities of the coins on the right pan are negative. In Section~\ref{sec:seppoint}, we explain the idea of a separation point and how it helps to calculate the minimum weight for a downhill weighing. In Sections~\ref{sec:3k+1}, \ref{sec:3k}, and \ref{sec:3k+2}, we calculate the minimum weight explicitly depending on the remainder of $n$ modulo 3. Section~\ref{sec:3k+1} is devoted to the simplest case of $n = 3k+1$. In this case, the minimum weight is $\frac{8n^3+12n^2-12n-8}{81}$. In Section~\ref{sec:3k} we study the case of $n = 3k$ and find the minimum weight to be $\frac{8n^3+ 27n^2 +9n-81}{81}$. In Section~\ref{sec:3k+2} we study the case of $n = 3k+2$. This case is more complicated than the other cases. We show that the minimum weight is $\frac{8n^3+24n^2+6n-10}{81}$ with eight exceptions. In all the cases, the minimum weight grows as $\frac{8n^3}{81}$. We present data for the sequences of our lower bound for the minimum weight and the actual minimum weight in Section~\ref{sec:data}. Section~\ref{sec:mincoins} is devoted to finding the minimum number of coins. We show that for $n = 3k+1$ the answer is $\frac{5n^2 -n -4}{18}$. For other $n$ we find lower and upper bounds on the number of coins. Both bounds are approximately $\frac{5n^2}{18}$. Section~\ref{sec:nottight} discusses downhill imbalances with a weight difference of more than 1. Such imbalances use more coins and more weight. The last two sections discuss amusing examples of weighings. Section~\ref{sec:solo} is devoted to weighings where one side of the pan has coins of one type. Section~\ref{sec:ap} looks at weighings forming an arithmetic progression. \section{Preliminaries}\label{sec:prelim} \subsection{Some Definitions} We call a weighing \textit{verifying} if it proves that all the labels on the bags are correct. We call a verifying weighing \textit{coin-optimal} if it uses the minimum number of coins.
We call a verifying weighing \textit{weight-optimal} if the total weight is minimal. Many properties we discuss later are true for both coin-optimal and weight-optimal weighings. So we call a weighing \textit{optimal} if it is either coin- or weight-optimal. There should not be coins of the same type on both sides in an optimal weighing. Otherwise, we can remove both coins and have a solution with fewer total coins and less total weight. We call a weighing that does not have the same type of coins on both pans \textit{an economical weighing}. From now on we assume that all weighings are economical. Let $a_i$ be the number of coins of type $i$ placed on the left pan minus the number of coins of type $i$ on the right pan. If coins of type $i$ are not on any pan during a weighing, then $a_i=0$. We call the numbers $a_i$ \textit{multiplicities}. It follows that if coins of type $i$ are placed only on the left pan, then $a_i$ is positive, and if they are placed only on the right pan, then $a_i$ is negative. For example, if we compare one coin of weight 2 and two coins of weight 1 on the left pan against one coin of weight 4 on the right pan, then we write the set of multiplicities as $\{2,1,0,-1\}$. This weighing is verifying. Sometimes we can also write this weighing as $1 + 1 + 2 = 4$, or as $112=4$, where the equal sign denotes that this is a balanced weighing. We use $>$ and $<$ to denote imbalances. The total number of coins used in the weighing is $\sum_{i=1}^n |a_i|$. The total weight is $\sum_{i=1}^n |i \cdot a_i|$. The set of multiplicities in a verifying weighing cannot contain duplicates. If it did, then we could swap the coins corresponding to these multiplicities among themselves without changing the weighing result. Thus, we cannot distinguish the corresponding bags, so the weighing would no longer be verifying.
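These definitions are easy to check by brute force for a small number of bags. The sketch below is our own illustration (the function names are not from the paper): it computes the total number of coins and the total weight from a set of multiplicities, and tests whether a weighing is verifying by trying every relabeling of the bags.

```python
from itertools import permutations

def totals(a):
    """Total number of coins and total weight for multiplicities a_1, ..., a_n."""
    coins = sum(abs(x) for x in a)
    weight = sum(abs((i + 1) * x) for i, x in enumerate(a))
    return coins, weight

def is_verifying(a):
    """A weighing is verifying if no relabeling of the bags reproduces the
    observed outcome (a balance, or the left pan being lighter)."""
    n = len(a)
    s = sum((i + 1) * x for i, x in enumerate(a))  # left weight minus right weight
    if s > 0:  # by convention, make the left pan the lighter one
        a, s = [-x for x in a], -s
    for w in permutations(range(1, n + 1)):
        if w == tuple(range(1, n + 1)):
            continue
        t = sum(wi * x for wi, x in zip(w, a))
        if (s == 0 and t == 0) or (s < 0 and t < 0):
            return False  # another labeling gives the same outcome
    return True

print(totals([2, 1, 0, -1]))        # the weighing 112 = 4: 4 coins, total weight 8
print(is_verifying([2, 1, 0, -1]))  # True
print(is_verifying([1, 0, -1]))     # False: 1 < 3 also holds after swapping bags 1 and 2
```

The last call shows why the comparison $1 < 3$ with the bag labeled 2 off the scale does not confirm the labels, while the weighing $11 < 3$ from the next subsection does.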
\begin{lemma} If a weighing with a set of multiplicities $A$ is verifying, then a weighing with a set of multiplicities $kA$ is also verifying. \end{lemma} \begin{proof} Suppose a weighing with the set of multiplicities $kA$ is not verifying. Hence, there exists some rearrangement of coins such that the pans do not change their relative weights. Take this rearrangement of coins and divide the weights of both pans by $k$: the pans stay in the same relative position as the pans filled according to multiplicities $A$. Hence, $A$ is not verifying, a contradiction. \end{proof} That means we only need to study weighings with a set of multiplicities $A$ such that $\gcd(A) = 1$. We call such sets \textit{primitive}. We say that the coins from the same bag are of the same \textit{type}. We need at least $n-1$ types of coins present in a verifying weighing: if two types are not present, they cannot be distinguished from each other. Without loss of generality, for unbalanced weighings, we assume that the left pan is lighter. \subsection{Downhill weighings} We call weighings with decreasing multiplicities \textit{downhill} weighings. These weighings play an important role in this paper. \begin{lemma}\label{lemma:downhill} If a verifying weighing is an imbalance (left side lighter), then the multiplicities are decreasing: $a_i > a_j$ for $i < j$. That is, it is a downhill weighing. \end{lemma} \begin{proof} Suppose $a_i \leq a_j$ for $i < j$. Then $(j-i)(a_j-a_i) \geq 0$, which means $ia_i + ja_j \geq ia_j + ja_i$. That means that swapping bags $i$ and $j$ keeps the imbalance in place. Therefore, we cannot distinguish between coins of types $i$ and $j$. Since this is a contradiction, an imbalanced verifying weighing must be downhill. \end{proof} \begin{corollary} If a verifying weighing is an imbalance with one type, $x$, of coins missing from the scale, then the lighter pan contains coins that are lighter than $x$, and the heavier pan contains coins that are heavier than $x$.
\end{corollary} \begin{proof} If coins of type $x$ are not on the scale, then $a_x=0$. Therefore, for $i<x$, multiplicities $a_i$ are positive, which means that the corresponding coins are on the left side. A similar statement is true for the right side. \end{proof} We showed that a verifying imbalance must be downhill. Imbalances with a weight difference equal to 1 play an important role in this paper. We call such imbalances \textit{tight}. \begin{lemma}\label{lemma:tightdownhillimbalance} A tight downhill imbalance is verifying. \end{lemma} \begin{proof} Any permutation of bags increases the left side, decreases the right side, or both. This changes the outcome of the weighing from the left pan being lighter to a balance or to the left pan being heavier. Therefore, this is a verifying weighing. \end{proof} \subsection{Balances} Balances are more complicated than imbalances. A verifying balance does not have to be downhill. For example, with 3 bags, the weighing $1113 = 222$ with multiplicities $\{3,-3,1\}$ is verifying but not downhill. Another such example for 4 bags is $11114=332$ with multiplicities $\{4,-1,-2,1\}$. However, every downhill balance is verifying, for the same reason a tight imbalance is verifying: any permutation of bags increases the left side or decreases the right side. \begin{lemma} A downhill balance is verifying. \end{lemma} \begin{proof} This follows from the same logic used in Lemma~\ref{lemma:tightdownhillimbalance}. \end{proof} In our computational experiments, we found that in general, balances that are not downhill use more total coins and have greater total weight than downhill weighings. One exception is for 3 bags; the weighing $1+1=2$ with multiplicities $\{2,-1,0\}$ is weight-optimal. For the rest of the paper, we assume that all weighings are \textbf{downhill}.
\subsection{Small number of bags} We can directly find the downhill weighings that are coin- and weight-optimal for small values of the number of bags. The results are in Table~\ref{table:smallnumberofbags}. \begin{table}[htb] \centering \begin{tabular}{| c | c | c|c |c|} \hline Number of bags & Example & Multiplicities & \#coins & Total weight \\ \hline 2 & $1 < 2$ & $\{1,-1\}$ & 2 & 3\\ 3 & $11 < 3$ & $\{2,0,-1\}$ & 3 & 5\\ 4 & $112 = 4$ & $\{2,1,0,-1\}$ & 4 & 8\\ 5 & $111223 = 55$ & $\{3,2,1,0,-2\}$ & 8 & 20 \\ 6 & $111122233 < 566$ & $\{4,3,2,0,-1,-2\}$ & 12 & 33 \\ 7 & $1111222334 = 677$ & $\{4,3,2,1,0,-1,-2\}$ & 13 & 40 \\ \hline \end{tabular} \caption{Optimal downhill weighings for $n < 8$.}\label{table:smallnumberofbags} \end{table} \subsection{An imbalance that is not tight} Consider an imbalance that is not tight. \begin{lemma} If, in a verifying imbalance, the difference of the weights on the two pans is $d$, then the difference between two consecutive multiplicities is greater than or equal to $d$. \end{lemma} \begin{proof} Suppose multiplicities are $a_1, a_2, a_3,\ldots, a_n$, where $a_1 > a_2 > a_3 > \cdots > a_n$ by Lemma~\ref{lemma:downhill}. The scale tilts depending on the sign of \[a_1 + 2a_2 + 3a_3 + \cdots + na_n.\] Suppose the difference $a_k -a_{k+1} < d$. After we switch the coins labeled $k$ and $k+1$, the new weighing tilts according to \[a_1 + 2a_2 + 3a_3 + \cdots + ka_{k+1} + (k+1)a_k + \cdots + na_n.\] The new expression differs from the previous one by \[ka_{k+1} + (k+1)a_k - ka_{k} - (k+1)a_{k+1} = a_k - a_{k+1},\] which is positive and less than $d$. That means the scale tilts the same way as before. Hence, such a weighing is not verifying. It follows that the consecutive multiplicities need to differ by at least $d$. \end{proof} We see that imbalances that are not tight need more coins and have greater weight.
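The weight column of the table can be reproduced by an exhaustive search over primitive downhill weighings that are balances or tight imbalances (both kinds are verifying). This is our own illustrative sketch; the bound on the multiplicities is an ad hoc search limit.

```python
from itertools import product
from functools import reduce
from math import gcd

def min_downhill_weight(n, bound=5):
    """Smallest total weight over primitive downhill weighings with n bags
    that are balances or tight imbalances (left pan lighter by 1)."""
    best = None
    for a in product(range(bound, -bound - 1, -1), repeat=n):
        if any(a[i] <= a[i + 1] for i in range(n - 1)):
            continue                       # multiplicities must strictly decrease
        if reduce(gcd, (abs(x) for x in a)) != 1:
            continue                       # keep only primitive sets
        s = sum((i + 1) * x for i, x in enumerate(a))
        if s not in (0, -1):
            continue                       # balance, or tight with left pan lighter
        w = sum(abs((i + 1) * x) for i, x in enumerate(a))
        if best is None or w < best:
            best = w
    return best

for n in range(2, 6):
    print(n, min_downhill_weight(n))      # prints 2 3, 3 5, 4 8, 5 20
```

The printed minimum weights 3, 5, 8, 20 match the total-weight column of the table for $n = 2, \ldots, 5$.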
In Section~\ref{sec:nottight}, we prove that imbalances which minimize the number of coins and the total weight must be tight. To prove this, we first find bounds for coin- and weight-optimal balances and tight imbalances. Then we show that the weight and the number of coins in non-tight imbalances exceed these bounds. \subsection{Useful formulae} The following well-known formulae are used repeatedly throughout this paper. In the first formula, we have $k$ products of two numbers such that one of the numbers is decreasing by one and the other is increasing by one: \begin{multline} ab + (a+1)(b-1) + \cdots + (a+k-1)(b-k+1)=\\ kab + \frac{k(k - 1)}{2}(b - a) - \frac{k(k - 1)(2k - 1)}{6}. \end{multline} One important case is $a=1$ and $b=k$: \begin{multline} 1 \cdot k + 2(k-1) + \cdots + k\cdot 1=k^2 + \frac{k(k - 1)}{2}(k - 1) -\frac{k(k - 1)(2k - 1)}{6} = \\ \frac{k(k+1)(k+2)}{6} =\binom{k+2}{3}. \end{multline} This sequence as a function of $k$ is the sequence of tetrahedral numbers. It is sequence A000292 in the OEIS \cite{OEIS}. In the next important formula, we have $k$ products of two numbers such that both numbers are increasing by 1: \begin{multline} ab + (a+1)(b+1) + \cdots + (a+k-1)(b+k-1)=\\ kab + \frac{k(k - 1)}{2}(b + a) + \frac{k(k - 1)(2k - 1)}{6}. \end{multline} An important case to consider is $a=1$: \begin{multline} b + 2(b+1) + \cdots + k(b+k-1)=kb + \frac{k(k - 1)}{2}(b + 1) + \frac{k(k - 1)(2k - 1)}{6}=\\ \frac{k(k + 1)}{2}b + \frac{k(k - 1)(2k +2)}{6}. \end{multline} \subsection{First Upper bound}\label{sec:introUB} Our first naive example in Section~\ref{sec:intro} provides an upper bound for the smallest total weight and the smallest number of coins in a verifying weighing. Here is the calculation. The right pan contains $\frac{n(n-1)}{2}$ coins for a total weight of: \[1\cdot 2 + 2\cdot 3 + \cdots + (n-1)\cdot n = \frac{(n-1)n(n+1)}{3}.\] The left pan contains coins of weight 1 to match the weight of the right pan.
That means the total number of coins used is \[\frac{(n-1)n(n+1)}{3}+\frac{n(n-1)}{2},\] and the total weight is \[\frac{2(n-1)n(n+1)}{3}.\] Approximately, we use $\frac{1}{3}n^3$ coins for a total weight of $\frac{2}{3}n^3$. \section{Separation point and bounding weights}\label{sec:seppoint} Let us define a \textit{separation point} $s$ to be the smallest label on the right pan in a downhill weighing. Given a separation point, the set of distinct multiplicities with the smallest absolute values is $s - 2, s - 3, s - 4, \ldots, 1, 0, -1, -2, \ldots, -(n - s + 1)$, where $n$ is the total number of bags. We denote the following quantity by $W_L(s,n)$: \[W_L(s,n) = 1\cdot(s-2) + 2\cdot(s-3) + \cdots + (s-3)\cdot 2 + (s-2)\cdot 1 = \binom{s}{3}.\] We call $W_L(s,n)$ the \textit{bounding-left weight}, since the weight of the coins on the left pan in any downhill weighing with $n$ bags and separation point $s$ is at least $W_L(s,n)$. Similarly, we call $W_R(s,n)$ the \textit{bounding-right weight}, where \[W_R(s,n) = 1\cdot s + 2\cdot(s+1) + \cdots + (n-s+1)\cdot n = \frac{(s - n - 2)(s - n - 1)(s + 2n)}{6}.\] Suppose $W_L(s,n) \geq W_R(s,n)$. In a downhill weighing the right pan has to weigh at least as much as the left. Hence, the total weight in an optimal downhill verifying weighing with the separation point $s$ has to be at least $2W_L(s,n)$. Now, suppose $W_L(s,n) < W_R(s,n)$. If the downhill weighing is tight, then the total weight in a verifying weighing has to be at least $2W_R(s,n) - 1$. We discuss the case of the imbalance that is not tight in Section~\ref{sec:nottight}. Let us denote $W_B(s,n)$ as the value $2W_R(s,n) - 1$ if $W_L(s,n) < W_R(s,n)$ and $2W_L(s,n)$ otherwise. We call the integer $W_B(s,n)$ the \textit{bounding weight}. As demonstrated above, a downhill verifying weighing (balanced or tight) with a given separation point has to have a total weight of at least the bounding weight. We denote the minimum bounding weight as $W_B(n)$, i.e.\ $W_B(n) = \min_s\{W_B(s,n)\}$.
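The bounding weights are easy to tabulate from these definitions. The following small sketch is our own; we let $s$ range over $2, \ldots, n$, since the right pan is nonempty in a downhill weighing.

```python
from math import comb

def W_L(s, n):
    """Bounding-left weight: binom(s, 3)."""
    return comb(s, 3)

def W_R(s, n):
    """Bounding-right weight: 1*s + 2*(s+1) + ... + (n-s+1)*n."""
    return (n + 2 - s) * (n + 1 - s) * (s + 2 * n) // 6

def W_B_point(s, n):
    """Bounding weight for a given separation point s."""
    if W_L(s, n) < W_R(s, n):
        return 2 * W_R(s, n) - 1
    return 2 * W_L(s, n)

def W_B(n):
    """Minimum bounding weight over all separation points."""
    return min(W_B_point(s, n) for s in range(2, n + 1))

print([W_B(n) for n in range(2, 8)])  # [3, 5, 8, 20, 33, 40]
```

The printed values coincide with the total weights of the optimal downhill weighings for $n < 8$ listed in the table above, so for these small $n$ the minimum bounding weight is achieved.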
We denote the minimum possible weight in a downhill weighing as $W_M(n)$. Bounding weights are functions of the number of bags and a separation point. We call $m$ the \textit{minimal separation point} if the corresponding bounding weight is minimal, i.e.\ $W_B(n) = W_B(m,n)$. We call the corresponding bounding weight $W_B(m,n)$ the minimal bounding weight. The lemma below follows. \begin{lemma} The minimum weight is at least the minimal bounding weight: $W_M(n) \geq W_B(n)$. If the minimal bounding weight can be achieved in a weighing, then it is the minimum weight. \end{lemma} The polynomial $W_L(s,n) = \binom{s}{3}$ increases when $s$ ranges from 1 to $n$, while $W_R(s,n) = \frac{(s - n - 2)(s - n - 1)(s + 2n)}{6}$ decreases. They are equal when $s = \frac{2n + 4}{3}$. Unfortunately, this value is not always an integer. Thus, we divide our further investigations into three cases depending on the remainder of $n$ modulo 3. In the next three sections, we find the minimal weight for $n = 3k + 1$, $n = 3k$, and $n = 3k + 2$, where $k$ is a non-negative integer. \section{Case $n = 3k + 1$. Minimum weight.}\label{sec:3k+1} \begin{lemma} If $n = 3k + 1$, the minimum weight in a downhill weighing that is either a balance or a tight imbalance is $\frac{8n^3+12n^2-12n-8}{81}$. Furthermore, it is achievable as a balance. \end{lemma} \begin{proof} If $n = 3k + 1$, then our minimal separation point, $s=\frac{2n + 4}{3} = 2k+2$, is an integer. For this separation point \[W_L(2k+2,3k+1)= \binom{2k+2}{3} = \frac{2k(2k+1)(2k+2)}{6},\] and \begin{equation*} \begin{split} W_R(2k+2,3k+1) & = \frac{(2k+2 - 3k-1-2)(2k+2 - 3k-1-1)(2k+2+6k+2)}{6}\\ & = \frac{(-k-1)(-k)(8k+4)}{6}. \end{split} \end{equation*} As $W_L(s,n)=W_R(s,n)$ for $n = 3k + 1$, the minimum bounding weight is $\frac{2k(2k+1)(2k+2)}{3}$.
It is achievable, and therefore, the minimum weight is: \[W_M(3k+1) = W_B(3k+1) = 2W_L(2k+2,3k+1)= \frac{2k(2k+1)(2k+2)}{3}=\frac{8n^3+12n^2-12n-8}{81}.\] \end{proof} For $3k+1$ bags, the minimum weight as a function of $k >0$ starts as: 8, 40, 112, 240, 440, and so on. These numbers are always even as they are equal to $2W_L(m,n)$. This sequence is not in the OEIS \cite{OEIS}, but the sequence $W_L(2k+2,3k+1)$ is there. It is sequence A002492, which is the sum of the first $n$ even squares: 4, 20, 56, 120, 220, and so on. \section{Case $n = 3k$. Minimum weight.}\label{sec:3k} \begin{lemma} If $n = 3k$, the minimum weight in a downhill weighing that is either a balance or a tight imbalance is $\frac{8n^3+ 27n^2 +9n-81}{81}$. It is achievable as a tight imbalance. \end{lemma} \begin{proof} For $n \neq 3k+1$, the expression $\frac{2n + 4}{3}$ is not an integer. Thus, the minimal separation point is either $\floor{\frac{2n + 4}{3}}$ or $\ceil{\frac{2n + 4}{3}}$. In our case of $n =3k$, the minimal separation point is either $2k+1$ or $2k+2$. If the separation point is $2k+1$, then $W_L(2k+1,3k) = \frac{8k^3-2k}{6}$ and $W_R(2k+1,3k)=\frac{8k^3+ 9k^2 +k}{6}$. In this case $W_R(2k+1,3k) > W_L(2k+1,3k)$, and the bounding weight is \[W_B(2k+1,3k) = 2W_R(2k+1,3k)-1 = \frac{8k^3+ 9k^2 +k}{3} -1,\] which is achievable. Indeed, we can add any weight to the left side and still have a downhill weighing by using coins of type 1. If the separation point is $2k+2$, then $W_R(2k+2,3k) < W_L(2k+2,3k)= \frac{8k^3+12k^2+4k}{6}$ and the bounding weight is \[W_B(2k+2,3k) = 2W_L(2k+2,3k) = \frac{8k^3+12k^2+4k}{3}.\] As $W_B(2k+1,3k) < W_B(2k+2,3k)$, the minimal separation point is $2k+1$. The minimal bounding weight is achievable. Thus, the minimum weight is: \[W_M(3k) =W_B(3k) = 2W_R(2k+1,3k)-1= \frac{8k^3+ 9k^2 +k}{3}-1 = \frac{8n^3+ 27n^2 +9n-81}{81}.\] \end{proof} For $3k$ bags, the minimum weight as a function of $k >0$ starts as: 5, 33, 99, 219, and so on. These numbers are always odd, as they are equal to $2W_R(m,n)-1$.
This sequence is not in the OEIS \cite{OEIS}, but the sequence corresponding to $W_R(m,n)$ is there. It is sequence A132124$(n) =\frac{n(n + 1)(8n + 1)}{6}$, which starts as 0, 3, 17, 50, 110, 205, 343, 532, 780, 1095, 1485, 1958, and so on. \section{Case $n = 3k+2$. Minimum weight.}\label{sec:3k+2} In the case of $n=3k+2$, the minimal separation point is either $2k+2$ or $2k+3$. If the separation point is $2k+2$, then $W_R(2k+2,3k+2) > W_L(2k+2,3k+2)$, and the bounding weight is \[W_B(2k+2,3k+2) = 2W_R(2k+2,3k+2)-1 =\frac{8k^3+30k^2+34k+12}{3}-1.\] If the separation point is $2k+3$, then $W_R(2k+3,3k+2) < W_L(2k+3,3k+2)$, and the bounding weight is \[W_B(2k+3,3k+2) = 2W_L(2k+3,3k+2) =\frac{8k^3+24k^2+22k+6}{3}.\] The point $2k+3$ is the minimal separation point with the bounding weight \[W_B(3k+2)= W_B(2k+3,3k+2)=\frac{8k^3+24k^2+22k+6}{3}.\] Now we want to see if this weight is achievable, so we calculate the right weight: \[ \begin{split} W_R(2k+3,3k+2) = \frac{(2k+3 - 3k-2 - 2)(2k+3 - 3k-2 - 1)(2k+3 + 6k+4)}{6}\\ = \frac{8k^3+ 15k^2 +7k}{6}. \end{split} \] Consider the difference $W_L(2k+3)- W_R(2k+3) =\frac{3k^2 +5k+2}{2}= \frac{(3k+2)(k+1)}{2}$. Now we have to add some coins to the right pan so that the amount added is at least $\frac{(3k+2)(k+1)}{2}$, and the weighing is downhill. It is not always possible to add some more coins to the right pan so that their weight sums up exactly to $\frac{(3k+2)(k+1)}{2}$. We consider the cases of odd and even $k$ separately. We later show in this section that the minimum bounding weight is achievable with eight exceptions. For $3k+2$ bags, the minimum \textbf{bounding} weight as a function of $k >0$ starts as: 20, 70, 168, 330, 572. This is sequence A259110 in the OEIS \cite{OEIS}. The numbers are always even as they are equal to $2W_L$. The sequence $W_L$ is also in the OEIS. It is sequence A000447: sum of odd squares.
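The case analysis can be checked numerically against the definition of the bounding weight. The sketch below is our own; it verifies the $k$-parametrized closed forms for small $k$ (for $n = 3k+2$ the value checked is the bounding weight, whose achievability is treated in the subcases that follow).

```python
from math import comb

def W_B(n):
    """Minimum bounding weight, computed directly from the definitions."""
    def wl(s): return comb(s, 3)
    def wr(s): return (n + 2 - s) * (n + 1 - s) * (s + 2 * n) // 6
    return min(2 * wr(s) - 1 if wl(s) < wr(s) else 2 * wl(s)
               for s in range(2, n + 1))

for k in range(1, 15):
    # n = 3k + 1: minimum weight 2k(2k+1)(2k+2)/3
    assert 3 * W_B(3 * k + 1) == 2 * k * (2 * k + 1) * (2 * k + 2)
    # n = 3k: minimum weight (8k^3 + 9k^2 + k)/3 - 1
    assert 3 * (W_B(3 * k) + 1) == 8 * k**3 + 9 * k**2 + k
    # n = 3k + 2: bounding weight (8k^3 + 24k^2 + 22k + 6)/3
    assert 3 * W_B(3 * k + 2) == 8 * k**3 + 24 * k**2 + 22 * k + 6
print("all closed forms agree")
```

All three identities hold for every tested $k$, confirming the minimal separation points $2k+2$, $2k+1$, and $2k+3$ in the three residue classes.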
\subsection{Subcase $n = 3k+2$, $k$ odd.} If $k=2j+1$ for some integer $j$, we can add the heaviest coin, which weighs $n$ grams, $\frac{k+1}{2}$ times to the right pan. In this case, the bounding weight is achievable and equal to \[W_M(6j+5) = W_B(6j+5) =\frac{8k^3+24k^2+22k+6}{3}=\frac{8n^3+24n^2+6n-10}{81}.\] For $3k+2$, where $k=2j+1$, the minimum weight as a function of $j \geq 0$ starts as: 20, 168, 572, 1360, and so on. This sequence is not in the OEIS \cite{OEIS}, but the sequence corresponding to $W_L(m,n)$ is there. It is sequence A267031$(n) = (32n^3 - 2n)/3$, which starts as 10, 84, 286, 680, 1330, 2300. Thus, we have proved the following lemma. \begin{lemma} If $n = 3k+2$, where $k$ is odd, the minimum weight in a downhill weighing that is either a balance or a tight imbalance is $\frac{8n^3+24n^2+6n-10}{81}$. Furthermore, it is achievable as a balance. \end{lemma} \subsection{Subcase $n = 3k+2$, $k$ even.} Before discussing this case, we must first introduce the notation for triangular numbers. Suppose $T_m$ is the $m$-th triangular number, that is \[T_m = 1 + 2 + \cdots + m = \frac{m(m+1)}{2}.\] \begin{lemma} For $n = 3k+2$, where $k$ is even, and $n > 50$, the minimum weight in a downhill weighing that is either a balance or a tight imbalance is $\frac{8n^3+24n^2+6n-10}{81}$. Furthermore, it is achievable as a balance. \end{lemma} \begin{proof} Our separation point is $s = 2k + 3$, and the difference between the LHS (left-hand side) and the RHS (right-hand side) is $\frac{3k^2 + 5k +2}{2} = n\cdot \frac{n+4}{6} - \frac{n}{2}$. Consider the coins added to the RHS in order to make it greater than or equal to the LHS and result in a downhill weighing. Let the multiplicity of the additional coins of weight $i$ be $a_i$, where $s \leq i \leq n$. The total added weight is: $$a_nn + a_{n-1}(n-1) + a_{n-2}(n-2) + \cdots + a_{s}s,$$ where $a_n \geq a_{n-1} \geq \ldots \geq a_s$ to keep the weighing downhill.
We define non-negative integers $b_1, b_2, b_3, \ldots, b_{n-s+1}$ such that $b_{n-s+1} = a_{s}$ and $b_{i} = a_{n-i+1} - a_{n-i}$ for $1 \leq i < n-s+1$. With this notation we can represent the total weight $a_nn + a_{n-1}(n-1) + \cdots + a_{s} \cdot s$ as $b_1{n} + b_2(2n-1) + b_3(3n-3) + \cdots + b_{n-s+1}(n(n-s+1) - T_{n-s})$. Note that the largest triangular number in the above expression is $T_{n-s} = T_{n-2k-3} = T_{k-1}=T_{\frac{n-2}{3}-1}$. This sum can be expressed as a multiple of $n$ minus a sum of triangular numbers:
\[(b_1 + 2b_2 + 3b_3 + \cdots + (n-s+1)b_{n-s+1})n - b_2T_1 - b_3T_2 - \cdots - b_{n-s+1}T_{n-s}.\]
Additionally, we have
\[b_1 + 2b_2 + 3b_3 + \cdots + (n-s+1)b_{n-s+1} = \frac{n+4}{6}\]
and
\begin{equation}\label{eq:t}
b_2T_1 + b_3T_2 + \cdots + b_{n-s+1}T_{n-s} = \frac{n}{2}.
\end{equation}
By reversing the procedure above, we see that if we can express $\frac{n}{2}$ as a sum of triangular numbers $T_i$, where $i < \frac{n-2}{3}$, then we can get the multiplicities $a_i$ and, correspondingly, the set of coins to add to the right pan to achieve the minimum weight.

Suppose we find an expression for $n=z$ using triangular numbers as in Equation~\ref{eq:t}. Then the expression works for $n= z+42$: we increase $b_7$ by 1, so the left side of Equation~\ref{eq:t} increases by $T_6 = 21$, which is matched by increasing $n$ by 42; at the same time, the coefficient sum $b_1 + 2b_2 + \cdots$ increases by 7, matching the new value of $\frac{n+4}{6}$. It follows that if the minimum bounding weight can be achieved for $n$, then it can be achieved for $n+42$. By computer search, we found solutions for $50 < n < 10000$. It follows that there are solutions for any $n > 50$.
\end{proof}

For $3k+2$, where $k = 2j$, the minimum bounding weight as a function of $j >0$ starts as: 70, 330, 910, 1938, 3542, and so on. This sequence is not in the OEIS \cite{OEIS}, but the sequence corresponding to $W_L(m,n)$ is there. It is sequence A015219 of odd tetrahedral numbers.

\subsubsection{Subcase $n = 3k+2$, $k$ even. Small values}

We showed that for $n > 50$ the bounding weight is achievable.
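The $+42$ periodicity step used in the proof above can be checked mechanically. A short sketch (Python; recall that $n = 3k+2$ with $k$ even means $n \equiv 2 \pmod 6$):

```python
T = lambda m: m * (m + 1) // 2   # triangular numbers

assert T(6) == 21
for n in range(56, 2000, 6):     # n = 2 (mod 6), past the exceptional range
    # the target n/2 in the triangular-number identity grows by T_6 = 21,
    # and the target (n+4)/6 for the coefficient sum grows by 7,
    # exactly the contribution of one extra b_7
    assert (n + 42) // 2 - n // 2 == T(6)
    assert (n + 42 + 4) // 6 - (n + 4) // 6 == 7
```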
We found the minimum weight for smaller values of $n$ by an exhaustive search. The results are represented in Table~\ref{table:Minimalweight}. The last column shows the number of coins used in our weight-optimal weighings. \begin{table}[htb] \centering \begin{tabular}{| c |c| c|c|c|} \hline \#bags & Min Weight & Min Bounding Weight & Difference & \#coins\\ \hline 8 & 75 & 70 & 5 & 22 \\ 14 & 337 & 330 & 7 & 60 \\ 20 & 917 & 910 & 7 & 118 \\ 26 & 1943 & 1938 & 5 & 196 \\ 32 & 3543 & 3542 & 1 & 292 \\ 38 & 5857 & 5850 & 7 & 412 \\ 44 & 8991 & 8990 & 1 & 548 \\ 50 & 13095 & 13090 & 5 & 708 \\ \hline \end{tabular} \caption{Minimal weight for $n = 3k+2$, $k$ even, $n < 51$.}\label{table:Minimalweight} \end{table} The sequence of differences for minimum bounding weight is: 5, 7, 7, 5, 1, 7, 1, 5, 0, 0, 0, 0, 0, 0, $\dots$. Here we provide the multiplicities for the minimum weight for the exceptional cases. 8 bags: \{7, 4, 3, 2, 1, 0, $-$2, $-$3\} 14 bags: \{10, 9, 7, 6, 5, 4, 3, 2, 1, 0, $-$1, $-$3, $-$4, $-$5\} 20 bags: \{14, 13, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, $-$1, $-$2, $-$4, $-$5, $-$6, $-$7\} 26 bags: \{19, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, $-$1, $-$2, $-$3, $-$5, $-$6, $-$7, $-$8, $-$9\} 32 bags: \{21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, $-$1, $-$2, $-$3, $-$4, $-$6, $-$7, $-$8, $-$9, $-$10, $-$11\} 38 bags: \{26, 25, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, $-$1, $-$2, $-$3, $-$4, $-$5, $-$6, $-$8, $-$9, $-$10, $-$11, $-$12, $-$14\} 44 bags: \{29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, $-$1, $-$2, $-$3, $-$4, $-$5, $-$6, $-$7, $-$9, $-$10, $-$11, $-$12, $-$13, $-$14, $-$16\} 50 bags: \{35, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, $-$1, $-$2, $-$3, $-$4, $-$5, $-$6, $-$7, $-$8, $-$9, $-$11, $-$12, $-$13, 
$-$14, $-$15, $-$17, $-$18\}

\section{Minimum Weight Data}\label{sec:data}

We combined the formulae for the minimum weight in Table~\ref{table:mw}. Recall that the case $n= 3k+2$ has exceptions when $n \le 50$.

\begin{table}[htb]
\centering
\begin{tabular}{| c |c|}
\hline
\#bags & Min Weight \\
\hline
$3k+1$ & $\frac{8n^3+12n^2-12n-8}{81}$ \\
$3k$ & $\frac{8n^3+ 27n^2 +9n-81}{81}$ \\
$3k+2$ & $\frac{8n^3+24n^2+6n-10}{81}$ \\
\hline
\end{tabular}
\caption{Minimal weight.}\label{table:mw}
\end{table}

In any case, including the exceptional ones, the minimum weight is not more than
\[\frac{8n^3+56n^2+9n-8}{81}.\]
The sequence of minimal bounding weights $W_B(n)$, which starts with three bags, is: 5, 8, 20, 33, 40, 70, 99, 112, 168, 219, 240, 330, 409, 440, 572, 685, 728, 910, 1063, 1120, 1360, 1559, 1632, 1938, 2189, 2280, 2660, 2969, 3080, 3542, 3915, 4048, 4600, 5043, 5200, 5850, 6369, 6552, 7308, 7909, 8120, 8990, 9679, 9920, 10912, 11695, 11968, 13090, 13973, 14280, 15540, 16529, 16872, 18278, 19379, 19760, 21320, 22539, 22960, 24682, 26025, 26488, 28380, 29853, 30360, 32430, 34039, 34592, 36848, 38599, 39200, 41650, 43549, 44200, 46852, 48905, 49608, 52470, 54683, 55440, 58520, 60899, 61712, 65018, 67569, 68440, 71980, 74709, 75640, 79422, 82335, 83328, 87360, 90463, 91520, 95810, 99109, 100232, and so on.
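As a cross-check, the bounding weights above can be regenerated directly from the pan weights at the minimal separation point of each residue class. The case split in the sketch below follows the earlier sections and is not itself part of the derivation:

```python
from math import comb

def W_B(n):
    # bounding weight at the minimal separation point for each residue of n mod 3
    W_L = lambda s: comb(s, 3)                                # left-pan weight
    W_R = lambda s: sum(i * (i - s + 1) for i in range(s, n + 1))
    k = n // 3
    if n % 3 == 0:
        return 2 * W_R(2 * k + 1) - 1   # right pan heavier: tight imbalance
    if n % 3 == 1:
        return 2 * W_L(2 * k + 2)       # pans balance exactly at s = 2k+2
    return 2 * W_L(2 * k + 3)           # left pan heavier: coins added to the right

assert [W_B(n) for n in range(3, 13)] == [5, 8, 20, 33, 40, 70, 99, 112, 168, 219]
```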
The sequence of minimal weights $W_M(n)$ differs from the previous sequence in eight places and is: 5, 8, 20, 33, 40, \textbf{75}, 99, 112, 168, 219, 240, \textbf{337}, 409, 440, 572, 685, 728, \textbf{917}, 1063, 1120, 1360, 1559, 1632, \textbf{1943}, 2189, 2280, 2660, 2969, 3080, \textbf{3543}, 3915, 4048, 4600, 5043, 5200, \textbf{5857}, 6369, 6552, 7308, 7909, 8120, \textbf{8991}, 9679, 9920, 10912, 11695, 11968, \textbf{13095}, 13973, 14280, 15540, 16529, 16872, 18278, 19379, 19760, 21320, 22539, 22960, 24682, 26025, 26488, 28380, 29853, 30360, 32430, 34039, 34592, 36848, 38599, 39200, 41650, 43549, 44200, 46852, 48905, 49608, 52470, 54683, 55440, 58520, 60899, 61712, 65018, 67569, 68440, 71980, 74709, 75640, 79422, 82335, 83328, 87360, 90463, 91520, 95810, 99109, 100232, and so on. We highlighted the places where the two sequences differ. \section{Minimizing the Number of Coins}\label{sec:mincoins} \subsection{Minimizing Weight Versus Minimizing Coins} Weight-optimal and coin-optimal weighings are not always the same. Below is an example of when more coins produce less total weight. Consider the case where we have 9 bags. Using the optimal separation point of 7, we get the starting multiplicities $\{5, 4, 3, 2, 1, 0, -1, -2, -3\}$. Here, the LHS weighs 35 and the RHS weighs 50. Hence, we can add 14 to the LHS to optimize the total weight to 99. The corresponding multiplicities for the smallest number of coins are $\{8, 6, 4, 3, 1, 0, -1, -2, -3\}$ for a total of 28 coins. However, adding 15 to the LHS still keeps the weighing verifying, and we can do this with multiplicities $\{6, 5, 4, 3, 2, 0, -1, -2, -3\}$ for a total of 26 coins. Notice that the second example uses fewer coins while having a greater total weight. Still, the number of coins used in the weight-optimal weighing is close to the minimum, so we will calculate this number in this section. 
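The bookkeeping in the 9-bag example is easy to mechanize. The sketch below (Python; \texttt{check} is our helper, treating a downhill weighing as a strictly decreasing sequence of signed multiplicities) recomputes the pan weights and coin counts for the three weighings above:

```python
def check(mults):
    # mults[i] is the signed multiplicity of the coin of weight i+1:
    # positive entries go to the left pan, negative entries to the right pan
    assert all(a > b for a, b in zip(mults, mults[1:])), "not downhill"
    left = sum((i + 1) * m for i, m in enumerate(mults) if m > 0)
    right = sum((i + 1) * (-m) for i, m in enumerate(mults) if m < 0)
    coins = sum(abs(m) for m in mults)
    return left, right, coins

assert check([5, 4, 3, 2, 1, 0, -1, -2, -3]) == (35, 50, 21)  # starting point
assert check([8, 6, 4, 3, 1, 0, -1, -2, -3]) == (49, 50, 28)  # weight 99, 28 coins
assert check([6, 5, 4, 3, 2, 0, -1, -2, -3]) == (50, 50, 26)  # weight 100, 26 coins
```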
\subsection{Estimate for the number of coins in our strategy for the minimum weight}

We start by calculating the number of coins used in the weight-optimal weighings described in the previous sections. We do this by analyzing individual cases. Note that the upper bound we find in this section is approximately $5n^2/18$, which is a big improvement over our initial bound of $n^3/3$ in Section~\ref{sec:introUB}.

We can estimate the lower bound on the number of coins using the fact that the multiplicities must be different on each pan. As before, we call an integer $s$ a \textit{separation point} if $s$ is the smallest label on the right pan in a downhill weighing. If the separation point is $s$, then the multiplicities on the left pan have to be at least $\{0, 1, 2, \ldots, s-2\}$ for a total of $\frac{(s-2)(s-1)}{2}$ coins, which we denote as $F_L(s,n)$. The multiplicities on the right have to be at least $\{1, 2, \ldots, n-s, n-s+1\}$ for a total of $\frac{(n-s+2)(n-s+1)}{2}$ coins, which we denote as $F_R(s,n)$. We denote $F_L(s,n) + F_R(s,n)$ as $F(s,n)$. Thus, we have proven the following lemma.

\begin{lemma}\label{lemma:multiplicitiesbound}
If the total number of bags is $n$, and the separation point is $s$ in a downhill weighing, we need at least $F(s,n) = F_L(s,n)+F_R(s,n)$ coins, which is equal to
\[\left(s - \frac{n+3}{2}\right)^2 + \frac{n^2-1}{4}.\]
\end{lemma}

\subsection{Case $n = 3k+1$.}

If $n = 3k+1$, then the separation point is $2k+2$. Then the left pan has $k(2k+1)$ coins, and the right pan has $\frac{k(k+1)}{2}$ coins. Thus, the total number of coins used is
\[\frac{k(5k+3)}{2}=\frac{(n-1)(5(n-1)+9)}{18}=\frac{(n-1)(5n+4)}{18}= \frac{5n^2-n-4}{18}.\]
The first few numbers that this bound produces are: 4, 13, 27, 46, 70, 99. They form sequence A147875 in the OEIS \cite{OEIS}, the second heptagonal numbers.

\subsection{Case $n = 3k$.}

For $n = 3k$, the minimal separation point is $2k+1$.
It follows that $W_L(2k+1,3k) = \frac{8k^3-2k}{6}$ and $W_R(2k+1,3k)=\frac{8k^3+ 9k^2 +k}{6}$. In this case, $W_R(s,n) > W_L(s,n)$, so the bounding weight is $2W_R(s,n)-1$, which is achievable. We can add the required weight by using the coins from bag 1. However, we can do it by using fewer coins.

To estimate the number of coins, we start with $(s - \frac{n+3}{2})^2 + \frac{n^2-1}{4}$ from Lemma~\ref{lemma:multiplicitiesbound}, which is the number of coins used in the weighing before we add coins to balance the left and the right pan. This is equal to $\frac{5n^2-3n}{18}$. Now we need to add some number of coins to the left pan to get to the weight $W_R(s,n)-1$. That is, we need to add the weight $D = W_R(s,n)-1 - W_L(s,n)$, which is equal to
\[D =\frac{8k^3+ 9k^2 +k}{6}- 1 - \frac{8k^3-2k}{6} = \frac{(3k-2)(k+1)}{2} = \frac{(n-2)(n+3)}{6}.\]
To keep this weighing downhill, we add coins in groups of $1+2+\cdots +j$. Each group weighs a triangular number $T_j$. The largest triangular number available is
\[T_{2k-1} = k(2k-1) = \frac{n}{3}\left(\frac{2n}{3}-1\right)=\frac{2n^2-3n}{9}.\]
Our weight $D = \frac{(n-2)(n+3)}{6}$ that needs to be added is no more than $T_{2k-1}$ for $n \geq 6$. It is a well-known fact, proved by Gauss in 1796, that each integer can be written as a sum of three triangular numbers, possibly including zero. Thus, we can represent $D$ as $T_a+T_b+T_c$. We want to find an upper bound on $a+b+c$, which is the corresponding number of coins. We have
\[a(a+1) + b(b+1) +c(c+1) = 2D.\]
Equivalently,
\[(a+0.5)^2 + (b+0.5)^2 + (c+0.5)^2 = 2D + 0.75.\]
By the Cauchy–Schwarz inequality, if
\[x^2 + y^2 +z^2 = R,\]
then
\[x + y +z \leq \sqrt{3R}.\]
Thus,
\[a+b+c \leq \sqrt{6D + 2.25} - 1.5 = \sqrt{n^2 +n - 3.75} - 1.5 \leq n+ 0.5 - 1.5 = n-1.\]
Thus, there is a way to add the weight to the left pan that uses no more than $n-1$ coins.
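The argument above says that $D$ can always be written as $T_a+T_b+T_c$ with $a+b+c\le n-1$. A brute-force check on small $n=3k$ (Python; \texttt{tri\_sum} is our helper name):

```python
from math import isqrt

def tri_sum(D):
    # smallest a+b+c with T_a + T_b + T_c = D, found by exhaustive search
    T = lambda m: m * (m + 1) // 2
    best = None
    for a in range(isqrt(2 * D) + 2):
        for b in range(a + 1):
            rest = D - T(a) - T(b)
            if rest < 0:
                break
            c = (isqrt(8 * rest + 1) - 1) // 2
            if T(c) == rest:                 # rest is itself triangular
                s = a + b + c
                best = s if best is None else min(best, s)
    return best

for k in range(2, 30):
    n = 3 * k
    D = (n - 2) * (n + 3) // 6              # the weight to add to the left pan
    assert tri_sum(D) is not None and tri_sum(D) <= n - 1
```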
The total number of coins is no more than:
\[\frac{5n^2-3n}{18}+n-1 = \frac{5n^2+15n-18}{18}.\]
The first few numbers that this bound produces are 4, 14, 29, 49, 74, 104, 139. This sequence is not in the OEIS.

\subsection{Case $n = 3k+2$ and $k$ is odd.}

For $n = 3k+2$ where $k$ is odd, the minimal separation point is $2k+3$. To calculate the number of coins, we start with $(s - \frac{n+3}{2})^2 + \frac{n^2-1}{4}$ from Lemma~\ref{lemma:multiplicitiesbound}, which equals $\frac{5n^2+n-4}{18}$. We also need to add some number of coins on the right pan. As we showed before, we add $\frac{k+1}{2}$ or, equivalently, $\frac{n+1}{6}$ coins. The total is
\[\frac{5n^2+4n-1}{18}.\]
In this case, this is exactly the number of coins used in the corresponding weight-optimal weighing. The first few numbers that this calculation produces are: 8, 36, 84, 152, 240, 348, 476, 624, 792. This sequence is not in the OEIS.

\subsection{Case $n = 3k+2$ and $k$ is even and $n > 50$.}

For $n = 3k+2$ where $k$ is even, the minimal separation point is $2k+3$. To calculate the number of coins, we start with $(s - \frac{n+3}{2})^2 + \frac{n^2-1}{4}$ from Lemma~\ref{lemma:multiplicitiesbound}, which equals $\frac{5n^2+n-4}{18}$. The algorithm we used above adds exactly $\lceil \frac{k+1}{2}\rceil = \frac{k+2}{2}$ coins. The latter expression equals $\frac{n+4}{6}$. Thus, the total number of coins is $\frac{5n^2+n-4}{18} + \frac{n+4}{6}$, which equals
\[\frac{5n^2+4n+8}{18}.\]
The first few numbers that this bound produces are: 2, 20, 58, 116, 194, 292, 410, 548, 706, 884, 1082, 1300, 1538, 1796, 2074, 2372, 2690. These numbers divided by 2 produce sequence A079273 in the OEIS, the Octo numbers. The weighing algorithm that we use in this description is valid for $n > 50$. Thus, the calculation for the number of coins is exact starting from $n = 56$, where it produces 884. For smaller values we found weight-optimal weighings using 2, 22, 60, 118, 196, 292, 412, 548, 708 coins, and this sequence is not in the OEIS.
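The per-case counts above, and the closed form from Lemma~\ref{lemma:multiplicitiesbound} that they build on, can be verified in a few lines (a sketch):

```python
def F(s, n):
    # forced coin counts: left multiplicities 0..s-2, right multiplicities 1..n-s+1
    return (s - 2) * (s - 1) // 2 + (n - s + 2) * (n - s + 1) // 2

# Lemma: 4*F(s,n) = (2s - (n+3))^2 + n^2 - 1
for n in range(3, 60):
    for s in range(2, n + 1):
        assert 4 * F(s, n) == (2 * s - (n + 3)) ** 2 + n * n - 1

# first terms of the quoted per-case coin-count bounds
assert [(5*n*n + 15*n - 18) // 18 for n in range(3, 24, 3)] == [4, 14, 29, 49, 74, 104, 139]
assert [(5*n*n - n - 4) // 18 for n in range(4, 22, 3)] == [4, 13, 27, 46, 70, 99]
assert [(5*n*n + 4*n - 1) // 18 for n in range(5, 32, 6)] == [8, 36, 84, 152, 240]
```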
\subsection{Lower Bound for the Number of Coins}

As we showed in Lemma~\ref{lemma:multiplicitiesbound}, if the total number of bags is $n$, and the separation point is $s$ in a downhill weighing, we need at least
\[\left(s - \frac{n+3}{2}\right)^2 + \frac{n^2-1}{4}\]
coins. This value is minimized when $s = \frac{n+3}{2}$. This is equivalent to saying that to minimize the total number of coins, the zero multiplicity should be in the middle. This provides our first lower bound of $\frac{n^2-1}{4}$ coins.

We want to improve this bound by considering weight. We consider the separation point $s' = \lfloor\frac{2n+4}{3}\rfloor$, the largest separation point such that the left minimal weight does not exceed the right minimal weight. First, we need a preliminary lemma.

\begin{lemma}\label{lemma:movingsptoleft}
If $s$ is a separation point and $s < k$, then to have the weight $\binom{k}{3}$ on the left pan with the separation point $s$, we need more than $\frac{(k-2)(k-1)}{2}$ coins on the left pan.
\end{lemma}

\begin{proof}
The weight $\binom{k}{3}$ can be represented as a sum
\[1\cdot(k-2) + 2\cdot(k-3) + \cdots + (k-3) \cdot 2 + (k-2)\cdot 1.\]
This uses $\frac{(k-2)(k-1)}{2}$ coins. However, this expression for the LHS currently has separation point $k$. In order to have the same weight with separation point $s$, where $s < k$, we must remove some coins that weigh more than $s-1$ and redistribute their weight among coins that weigh less than $s$ while keeping the weighing downhill. As each removed coin weighs more than each added coin, the number of added coins exceeds the number of removed coins. This means that we need more than $\frac{(k-2)(k-1)}{2}$ coins on the left pan.
\end{proof}

\begin{theorem}
In a coin-optimal weighing with $n$ bags, we need at least $F(s',n)$ coins.
\end{theorem}

\begin{proof}
By Lemma~\ref{lemma:movingsptoleft}, the number of coins needed on the left pan for a separation point $j < s'$ is at least $F_L(s',n)$. If $j < s'$, then $F_R(j,n) > F_R(s',n)$.
Therefore, the total number of coins needed for $j < s'$ is more than $F(s',n)$. We also know that, for $j > s'$, we have $F(j,n) > F(s',n)$. That means we need at least $F(s',n)$ coins.
\end{proof}

The calculation of $F(s',n)$ is presented in Table~\ref{table:Minimalseparationpointcoins}, where $s' = \lfloor\frac{2n+4}{3}\rfloor$ and $s'' = \lceil\frac{2n+4}{3}\rceil$. The last two columns show $F(s')$ expressed in terms of $k$ and $n$.

\begin{table}[htb]
\centering
\begin{tabular}{| c |c| c|c|c|}
\hline
Number of bags & $s'$ & $s''$& $F(s')$ & $F(s')$\\
\hline
$n=3k$ & $2k+1$ & $2k+2$ & $\frac{5k^2- k}{2}$ & $\frac{5n^2 -3n}{18}$\\
$n=3k+1$ & $2k+2$ & $2k+2$ & $\frac{5k^2+3k}{2}$ & $\frac{5n^2 -n -4}{18}$\\
$n=3k+2$ & $2k+2$ & $2k+3$ & $\frac{5k^2+5k + 2}{2}$ & $\frac{5n^2-5n+8}{18}$\\
\hline
\end{tabular}
\caption{Minimal bounding number of coins}\label{table:Minimalseparationpointcoins}
\end{table}

As we can see, in any case our lower bound is approximately $\frac{5n^2}{18}$, which has the same growth as the upper bound.

\subsection{Summary}

We merge the lower and the upper bounds into Table~\ref{table:mergeLU}.

\begin{table}[htb]
\centering
\begin{tabular}{| c |c| c|}
\hline
Number of bags & Lower bound & Upper bound \\
\hline
$n=3k$ & $\frac{5n^2 -3n}{18}$ & $\frac{5n^2 + 15n - 18}{18}$ \\
$n=3k+1$ & $\frac{5n^2 -n -4}{18}$ & $\frac{5n^2 -n -4}{18}$ \\
$n=3k+2$, $k$ is odd & $\frac{5n^2-5n+8}{18}$ & $\frac{5n^2 +4n -1}{18}$ \\
$n=3k+2$, $n > 50$, $k$ is even & $\frac{5n^2-5n+8}{18}$ & $\frac{5n^2 +4n +8}{18}$ \\
\hline
\end{tabular}
\caption{Lower and Upper bound}\label{table:mergeLU}
\end{table}

We see that, for $n=3k+1$, both bounds are the same.

\begin{corollary}
For $n=3k+1$, the minimum number of coins is $\frac{5n^2 -n -4}{18}$.
\end{corollary}

Table~\ref{table:coinResults} below shows our computational results. In the middle row we present the smallest number of coins that we found. The numbers in bold are proven to be optimal by an exhaustive search or by the corollary above.
\begin{table}[htb]
\centering
\begin{tabular}{| c |c| c|c|c| c| c|c|c|c|c|c|c|c|c|}
\hline
number of bags & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15\\
upper bound & N/A & 4 & 4 & 8 & 14 & 13 & N/A & 29 & 27 & 36 & 49 & 46 & N/A & 74\\
best found & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{8} & \textbf{12} & \textbf{13} & 22 & 26 & \textbf{27} & 36 & 47 & \textbf{46} & 60 & 70 \\
lower bound & 1 & 2 & 4 & 6 & 9 & 13 & 16 & 21 & 27 & 31 & 38 & 46 & 51 & 60\\
\hline
\end{tabular}
\caption{Computational results for a small number of bags}\label{table:coinResults}
\end{table}

\section{Downhill weighings that are not tight}\label{sec:nottight}

Suppose a downhill verifying weighing is an imbalance that is not tight. As before, a separation point $s$ is the smallest weight used on the right pan. Then, the smallest set of multiplicities that is used on both pans is:
\[\{2(s-2),2(s-3),2(s-4),\ldots, 4,2,0,-1, -3, -5, \ldots,-2n+2s-1\}.\]
We want to calculate the number of coins used in this weighing as well as the weights on the right and the left pans.

\begin{theorem}
The smallest number of coins cannot be achieved in a downhill imbalance that is not tight.
\end{theorem}

\begin{proof}
We can use the previous calculations by noticing that
\begin{multline*}
\{2(s-2),2(s-3),2(s-4),\ldots, 4,2,0,-1, -3, -5, \ldots,-2n+2s-1\} = \\
2\{s-2,s-3,\ldots,2,1,0,-1,-2,\ldots, -n+s-1\} + \{0,\ldots, 0,0,0,1,1,\ldots,1\}.
\end{multline*}
The number of coins used is $2(s - \frac{n+3}{2})^2 + \frac{n^2-1}{2} - (n-s+1)$. The minimum is reached for $s=\frac{2n+5}{4}$, and the corresponding number of coins is $ \frac{4n^2-4n-1}{8}$. Asymptotically, this function grows faster than our worst bound of $\frac{5n^2 + 15n - 18}{18}$. In fact, it is bigger than our bound for $n > 6$. For $n \leq 6$, we calculated the exact minimum number of coins. This means that the smallest number of coins cannot be achieved in a weighing that is not tight.
\end{proof}

Now we direct our attention to the weight.
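As a quick numerical confirmation of the coin-count formula in the proof above (a sketch; the helper name \texttt{nontight\_coins} is ours):

```python
def nontight_coins(s, n):
    # multiplicities 2(s-2),...,4,2,0 on the left and -1,-3,...,-(2(n-s)+1) on the right
    left = sum(2 * m for m in range(s - 1))
    right = sum(2 * m + 1 for m in range(n - s + 1))
    return left + right

for n in range(3, 60):
    # closed form: 8 * coins = (4s - 2n - 5)^2 + 4n^2 - 4n - 1
    for s in range(2, n + 1):
        assert 8 * nontight_coins(s, n) == (4 * s - 2 * n - 5) ** 2 + 4 * n * n - 4 * n - 1
    best = min(nontight_coins(s, n) for s in range(2, n + 1))
    if n > 6:  # non-tight weighings use strictly more coins than our upper bound
        assert 18 * best > 5 * n * n + 15 * n - 18
```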
\begin{theorem}
The smallest weight cannot be achieved in a downhill imbalance that is not tight.
\end{theorem}

\begin{proof}
Let us denote the weight of the coins with multiplicities above as $V(s,n)$. We see that, for $s < n$, the difference of the weights of multiplicities for separation points $s+1$ and $s$ is
\[2,2,2,\ldots,2,-1,-2,-2,\ldots,-2,-2,\]
where $-1$ is the multiplicity for the coin of type $s+1$. Thus,
\[V(s+1,n) - V(s,n) = 2(1+2+3 +\cdots+s) -(s+1) - 2((s+2)+(s+3)+\cdots +n).\]
Simplifying, we get
\begin{multline*}
V(s+1,n) - V(s,n) = s(s+1) -(s+1) - (n+s+2)(n-s-1) = \\
s^2 -1 -n^2 -n(s+2) + n(s+1) +(s+2)(s+1) = -n^2 - n + (s+1)(2s+1).
\end{multline*}
We see that, for small $s$, this difference is negative, so $V(s,n)$ is decreasing, until $s$ reaches the value that satisfies the equation
\begin{equation}\label{equation:criticalPoint}
(s+1)(2s+1) = n^2+n.
\end{equation}
Then, $V(s,n)$ starts increasing. Let us denote the root of Equation~\ref{equation:criticalPoint} by $s'$. We see that $s' \approx \frac{\sqrt2 n}{2}$.

The weight of the left pan in a non-tight weighing with separation point $s$ is at least $2W_L(s,n)$. The weight on the right is at least $2W_R(s,n) - s - (s+1) - \cdots - n$. Thus, the total weight is at least
\begin{equation}\label{equation:lowerboundV}
V(s,n) = 2W_L(s,n) + 2W_R(s,n) - s - (s+1) - \cdots - n.
\end{equation}
We chose $s'$ to be the minimizer of $V(s,n)$ over $s$. This means that the total weight of a non-tight downhill weighing is at least $V(s',n)$.

Now we notice that $s' > \frac{2n}{3}$. To prove this, assume for the sake of contradiction that $s' \le \frac{2n}{3}$. Then,
\[(s' + 1)(2s' + 1) \le \left(\frac{2n}{3} + 1\right)\left(\frac{4n}{3} + 1\right) = \frac{8}{9}n^2 + 2n + 1.\]
Moreover,
\[\frac{8}{9}n^2 + 2n + 1 < n^2 + n\]
for all $n > \frac{9 + 3\sqrt{13}}{2} \approx 9.90$. Thus, for all $n \ge 10$, we must have $s' > \frac{2n}{3}$.
This means that
\[V(s',n) > 4W_L\left(\frac{2n}{3},n\right) = 4\binom{\frac{2n}{3}}{3}.\]
We showed in Section~\ref{sec:data} that the minimum weight for balanced and tight imbalanced weighings does not exceed
\[\frac{8n^3+56n^2+9n-8}{81}.\]
We now show that our lower bound on $V(s', n)$ exceeds this upper bound for all large $n$. First, we have
\[V(s',n) > 4\binom{\frac{2n}{3}}{3} = 4 \frac{\frac{2n}{3}(\frac{2n}{3} - 1)(\frac{2n}{3} - 2)}{6} > \frac{8n^3+56n^2+9n-8}{81}\]
whenever
\[D(n) = 4 \frac{\frac{2n}{3}(\frac{2n}{3} - 1)(\frac{2n}{3} - 2)}{6} - \frac{8n^3+56n^2+9n-8}{81} = \frac{8 n^3 - 128 n^2 + 63 n + 8}{81} > 0.\]
Since the leading coefficient of $D$ is positive, and the largest root of $D$ is approximately $15.48$, we have $D(n) > 0$ for all integers $n \ge 16$. This means that, for all $n \ge 16$, non-tight imbalances are strictly worse in terms of weight.

We now resolve the cases when $3 \le n \le 15$ ($n = 1$ does not require using the scale, and non-tight weighings are clearly worse for $n = 2$). Recall that $s'$ is the root of Equation~\ref{equation:criticalPoint}, which, when solved explicitly, yields a single positive root,
$$s' = \frac{-3 + \sqrt{1 + 8 n + 8 n^2}}{4}.$$
For $n = 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15$, the corresponding values of $s'$ are approximately \newline $1.71, 2.42, 3.13, 3.84, 4.55, 5.26, 5.96, 6.67, 7.38, 8.09, 8.79, 9.50, 10.21$. Note that we can evaluate Equation~\ref{equation:lowerboundV} for real $s$ by using
$$V(s, n) = 2\frac{s(s - 1)(s - 2)}{6} + 2\frac{(s - n - 1)(s - n - 2)(s + 2n)}{6} - \frac{(n + s)(n - s + 1)}{2}.$$
Plugging our values of $s'$ into this approximation yields Table~\ref{table:notTightVersusPrevBound}.
\begin{table}[htb]
\centering
\begin{tabular}{| c | c | c | c |}
\hline
$n$ & $s'$ & $\lceil V(s', n)\rceil$ & $W_M(n)$\\
\hline
$3$ & $1.71$ & $14$ & $5$\\
$4$ & $2.42$ & $25$ & $8$\\
$5$ & $3.13$ & $40$ & $20$\\
$6$ & $3.84$ & $61$ & $33$\\
$7$ & $4.55$ & $89$ & $40$\\
$8$ & $5.26$ & $126$ & $75$\\
$9$ & $5.96$ & $172$ & $99$\\
$10$ & $6.67$ & $228$ & $112$\\
$11$ & $7.38$ & $297$ & $168$\\
$12$ & $8.09$ & $378$ & $219$\\
$13$ & $8.79$ & $474$ & $240$\\
$14$ & $9.50$ & $585$ & $337$\\
$15$ & $10.21$ & $712$ & $409$\\
\hline
\end{tabular}
\caption{Non-tight weighings compared to previous upper bound}\label{table:notTightVersusPrevBound}
\end{table}

Note that $W_M(n) < \lceil V(s', n)\rceil$ for all $3 \le n \le 15$. Thus, we have now shown that non-tight weighings are never better than our previous upper bound $W_M(n)$ for any $n$.
\end{proof}

\section{Solo weighings}\label{sec:solo}

In this section, we consider the case when the right-hand side has only one type of coin. We call such weighings \textit{solo weighings}. We want to find downhill verifying solo weighings such that the multiplicities on the left side are consecutive numbers. As we are mostly interested in weighings with smaller weights and fewer coins, we consider two possibilities: the range on the left is $[0\ldots n-2]$, or it is $[1\ldots n-1]$.

\begin{lemma}
A balanced downhill verifying solo weighing such that the multiplicities on the left pan are consecutive numbers in the range $[1\ldots n-1]$ exists if and only if $n \equiv \pm 1 \pmod{6}$. An imbalanced downhill verifying solo weighing such that the multiplicities on the left pan are consecutive numbers in the range $[1\ldots n-1]$ exists if and only if $n = 2$ or $n=6$.
\end{lemma}

\begin{proof}
Consider the multiplicities on the left pan: $n - 1, n - 2, n - 3, \ldots, 2, 1$ for a total weight of $\binom{n + 1}{3}$. If the weighing is a balance, then $n \mid \binom{n + 1}{3}$.
For the ratio $\binom{n + 1}{3} : n = \frac{(n + 1)(n - 1)}{6}$ to be an integer, the number $n$ must be congruent to $1$ or $5$ mod 6.

If the weighing is an imbalance, then, for $n > 2$, it must be a tight imbalance. Consequently, the RHS must have a weight of $\binom{n + 1}{3} + 1$, and thus, for this weighing to be possible, we must have $n \mid \binom{n + 1}{3} + 1$. Note that this is true if and only if
$$\frac{\binom{n + 1}{3} + 1}{n} = \frac{\frac{(n + 1)n(n - 1)}{6} + 1}{n} = \frac{(n + 1)(n - 1)}{6} + \frac{1}{n}$$
is an integer. For $n \leq 6$, we see that $n = 1, 2, 6$ give integers for this expression. However, for $n >6$, this expression is never an integer because the fractional part of $\frac{(n - 1)(n + 1)}{6}$ is one of $0, \frac{1}{6}, \frac{1}{3}, \frac{1}{2}, \frac{2}{3}, \frac{5}{6}$, and adding $\frac{1}{n}$ for $n > 6$ cannot make the sum an integer.
\end{proof}

In the case of a balance in the above lemma, the total weight is $2\binom{n + 1}{3}$ and the total number of coins is
$$\frac{(n - 1)n}{2} (\text{on the LHS}) + \frac{(n + 1)(n - 1)}{6} (\text{on the RHS}) = \frac{4n^2 - 3n - 1}{6}.$$
For example, for 5 bags, we have multiplicities $\{4, 3, 2, 1, -4\}$ with 14 coins, and the total weight is 40. Notice that this solo weighing is more efficient than our first example, when we had only one type of coin on the left pan.

\begin{lemma}
A downhill verifying solo weighing such that the multiplicities on the left pan are consecutive numbers in the range $[0\ldots n-2]$ exists if and only if $n \equiv \pm 1 \pmod{3}$.
\end{lemma}

\begin{proof}
Consider the multiplicities on the LHS: $n - 2, n - 3, n - 4, \ldots, 2, 1$ for the total weight of $\binom{n}{3}$. If the weighing is balanced, then $n \mid \binom{n}{3}$. For the ratio $\binom{n}{3} : n = \frac{(n - 1)(n - 2)}{6}$ to be an integer, the number $n$ must be congruent to $1$ or $2$ mod 3. If the weighing is imbalanced, then, for $n > 2$, it must be a tight imbalance.
Consequently, the RHS must have a weight of $\binom{n}{3} + 1$. Thus, for this weighing to be possible, we must have $n \mid \binom{n}{3} + 1$. Note that this is true if and only if
$$\frac{\binom{n}{3} + 1}{n} = \frac{\frac{n(n - 1)(n - 2)}{6} + 1}{n} = \frac{(n - 1)(n - 2)}{6} + \frac{1}{n}$$
is an integer. For $n \leq 6$, this expression is an integer only for $n = 1$. For $n > 6$, this expression is never an integer for the same reasons as in the previous lemma.
\end{proof}

The total weight on both sides is $2\binom{n}{3}$, and the total number of coins is
$$\frac{(n - 2)(n - 1)}{2} (\text{on the LHS}) + \frac{(n - 1)(n - 2)}{6} (\text{on the RHS}) = \frac{2}{3}(n - 1)(n - 2).$$
For example, for 5 bags, we have multiplicities $\{3, 2, 1, 0, -2\}$ with 8 coins, and the total weight is 20.

\section{Multiplicities are in an arithmetic progression}\label{sec:ap}

In this section, we discuss verifying balanced weighings whose multiplicities form an arithmetic progression. We are interested only in primitive weighings. We denote the common difference of this arithmetic progression by $d$.

\begin{lemma}
A balanced verifying primitive weighing whose multiplicities form an arithmetic progression exists if and only if either $d=3$ (with no restriction on $n$), or $d=1$ and $n=3k+1$.
\end{lemma}

\begin{proof}
Assume our multiplicities are
\[a,\ a-d,\ a-2d,\ \ldots,\ a-(n-1)d,\]
where $a$ is the first multiplicity. As our weighing is primitive, we have $\gcd(a,d)=1$. As the weighing balances, we know that
\[a + 2(a-d) + 3(a-2d) + \cdots + n(a-(n-1)d)=0.\]
We can group all the $a$'s and $d$'s together to get:
\begin{align*}
0 &=a(1+2+3+\cdots +n) - d(2 \cdot 1 + 3 \cdot 2 + \cdots + n(n-1)) \\
&= a\left(\cfrac{n(n+1)}{2}\right) - d\left(\cfrac{(n-1)n(n+1)}{3}\right).
\end{align*}
Dividing by $\frac{n(n+1)}{2}$ yields:
\[a = \cfrac{2}{3}(n-1)d.\]
Since $a$, $d$, and $n$ are all integers, $n \equiv 1$ (mod 3) or $d \equiv 0$ (mod 3).
In addition, as $\gcd(a,d)=1$, if $d \equiv 0$ (mod 3), then $d=3$, and if $n \equiv 1$ (mod 3), then $d=1$. Therefore, we can only get primitive verifying balances in an arithmetic progression when either $d= 3$, or $d=1$ and there are $3k+1$ bags. \end{proof} For a small number of bags, the weighings with a difference of 3 are as follows: \begin{center} \begin{tabular}{ |c|c| } \hline Number of Bags: & Weighing: \\ \hline 3 & (4, 1, -2) \\ \hline 4 & (6, 3, 0, -3) \\ \hline 5 & (8, 5, 2, -1, -4) \\ \hline 6 & (10, 7, 4, 1, -2, -5) \\ \hline 7 & (12, 9, 6, 3, 0, -3, -6) \\ \hline 8 & (14, 11, 8, 5, 2, -1, -4, -7) \\ \hline \end{tabular} \end{center} We can notice that the multiplicities are of the general form \[2n-2, 2n-5, 2n-8, \ldots, 1-n,\] where the next multiplicity is the previous one minus 3 and $n\geq 3$. \section{Acknowledgements} This project was done as part of MIT PRIMES STEP, a program that allows students in grades 6 through 9 to try research in mathematics. Tanya Khovanova is the mentor of this project. We are grateful to PRIMES STEP and to its director, Slava Gerovitch, for this opportunity.
https://arxiv.org/abs/1805.08340
Reducing Parameter Space for Neural Network Training
For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to the standard NNs with parameters in the whole space. The reduced parameter space shall facilitate the optimization procedure for the network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
\section{Summary} \label{sec:summary}

In this paper we presented a set of constraints on multi-layer feedforward NNs with ReLU and binary activation functions. The weights in each neuron are constrained on the unit sphere, as opposed to the entire space. This effectively reduces the number of parameters in weights by one per neuron. The threshold in each neuron is constrained to a bounded interval, as opposed to the entire real line. We prove that the constrained NN formulation is equivalent to the standard unconstrained NN formulation. The constraints on the parameters reduce the search space for network training and can notably improve the training results. Our numerical examples for both single hidden layer and multiple hidden layers verify this finding.

\section{Numerical Examples} \label{sec:examples}

In this section we present numerical examples to demonstrate the properties of the constrained NN training. We focus exclusively on the ReLU activation function \eqref{ReLU} due to its overwhelming popularity in practice. Given a set of training samples $\{(\mathbf{x}_i, y_i)\}_{i=1}^n$, the weights and thresholds are trained by minimizing the following mean squared error (MSE)
\begin{equation*}
E=\frac{1}{n}\sum_{i=1}^n |N(\mathbf{x}_i)-y_i|^2.
\end{equation*}
We conduct the training using the standard unconstrained NN formulation \eqref{Nx} and our new constrained formulation \eqref{Nnew2} and compare the training results. In all tests, both formulations use exactly the same randomized initial conditions for the weights and thresholds. Since our new constrained formulation is irrespective of the numerical optimization algorithm, we use one of the most accessible optimization routines from MATLAB\textregistered, the function \texttt{fminunc} for unconstrained optimization and the function \texttt{fmincon} for constrained optimization.
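The equivalence of the two formulations rests on the positive homogeneity of the ReLU function: any neuron $c\,\sigma(wx+b)$ with $w \neq 0$ can be rescaled so that its weight lies on the unit sphere (in one dimension, $w \in \{-1, 1\}$). The following sketch (Python, not from the paper) checks this identity numerically in the scalar case:

```python
import math
import random

relu = lambda t: max(t, 0.0)

random.seed(0)
for _ in range(1000):
    c, w, b, x = (random.uniform(-5, 5) for _ in range(4))
    if abs(w) < 1e-3:
        continue   # skip near-degenerate weights
    lhs = c * relu(w * x + b)
    # rescaled neuron: unit-sphere weight sign(w), threshold b/|w|, coefficient c*|w|
    rhs = (c * abs(w)) * relu(math.copysign(1.0, w) * x + b / abs(w))
    assert abs(lhs - rhs) <= 1e-6 * (1.0 + abs(lhs))
```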
It is natural to explore the specific form of the constraints in \eqref{Nnew2} to design more effective constrained optimization algorithms. This is, however, beyond the scope of the current paper. After training the networks, we evaluate the network approximation errors using another set of samples --- a validation sample set, which consists of randomly generated points that are independent of the training sample set. Even though our discussion applies to functions in arbitrary dimension $d\geq 1$, we present only the numerical results in $d=1$ and $d=2$ because they can be easily visualized. \subsection{Single Hidden Layer} We first examine the approximation results using NNs with a single hidden layer, with and without constraints. \subsubsection{One dimensional tests}\label{example1} We first consider a one-dimensional smooth function \begin{equation}\label{testf1} f(x)=\sin(4\pi x), \quad x\in [0, 1]. \end{equation} The constrained formulation \eqref{Nnew2} becomes \begin{equation} \label{Nx_1D1} \widehat{N}(x) = \sum_{j=1}^{\widehat{N}} \hat{c}_j\sigma(\hat{w}_j x + \hat{b}_j), \quad \hat{w}_j\in \{-1,1\}, \quad -1 \leq \hat{b}_j \leq 1. \end{equation} Due to the simple form of the weights and the domain $D=[0,1]$, the proof of Proposition \ref{prop:eq2} also gives us the following tighter bounds for the thresholds for this specific problem, \begin{equation} \label{Nx_1D2} \left\{ \begin{split} -1\leq \hat{b}_j \leq 0, &\qquad \textrm{if } \hat{w}_j = 1;\\ 0\leq \hat{b}_j \leq 1, &\qquad \textrm{if } \hat{w}_j=-1. \end{split} \right. \end{equation} We approximate \eqref{testf1} with a NN with one hidden layer consisting of $20$ neurons. The size of the training data set is $200$. Numerical tests were performed for different choices of random initializations. It is known that NN training performance depends on the initialization.
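The constrained 1D form \eqref{Nx_1D1} and the tighter bounds \eqref{Nx_1D2} can be sketched as follows (a hypothetical numpy illustration with names of our choosing, not the code used for the experiments):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def constrained_net_1d(x, w, b, c):
    """Constrained 1D network, eq. (Nx_1D1): each w_j in {-1, +1}, b_j in [-1, 1]."""
    x = np.asarray(x, dtype=float)[:, None]
    return relu(x * w + b) @ c

def satisfies_tight_bounds(w, b):
    """Tighter thresholds, eq. (Nx_1D2), valid on D = [0, 1]:
    w_j = +1 -> b_j in [-1, 0];  w_j = -1 -> b_j in [0, 1]."""
    w, b = np.asarray(w), np.asarray(b)
    return bool(np.all(np.where(w > 0, (-1 <= b) & (b <= 0), (0 <= b) & (b <= 1))))
```

In a training loop, `satisfies_tight_bounds` corresponds to the feasible region that a constrained solver such as \texttt{fmincon} would enforce for this specific problem.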
In Figures \ref{fig:1L1D_IC1}, \ref{fig:1L1D_IC2} and \ref{fig:1L1D_IC3}, we show the numerical results for three sets of different random initializations. In each set, the unconstrained NN \eqref{Nx}, the constrained NN \eqref{Nx_1D1} and the specialized constrained NN with \eqref{Nx_1D2} use the same random sequence for initialization. We observe that the standard NN formulation without constraints \eqref{Nx} produces training results critically dependent on the initialization. This is widely acknowledged in the literature. On the other hand, our new constrained NN formulations are more robust and produce better results that are less sensitive to the initialization. The tighter constraint \eqref{Nx_1D2} performs better than the general constraint \eqref{Nx_1D1}, which is not surprising. However, the tighter constraint is a special case for this particular problem and not available in the general case. \begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \label{fig1:con0} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/con0.eps} \caption{Unconstrained NN $N(x)$ \eqref{Nx}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \label{fig1:con1} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/con1.eps} \caption{Constrained NN $\widehat{N}(x)$ \eqref{Nx_1D1}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \label{fig1:con2} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/con2.eps} \caption{Constrained NN $\widehat{\widehat{N}}(x)$ \eqref{Nx_1D2}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \label{fig1:MSE} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/MSE.eps} \caption{Convergence history} \end{subfigure} \caption{Numerical results for \eqref{testf1} with one sequence of random initialization.} \label{fig:1L1D_IC1} \end{figure} \begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/con0.eps} \caption{Unconstrained NN $N(x)$ \eqref{Nx}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/con1.eps} \caption{Constrained NN $\widehat{N}(x)$ \eqref{Nx_1D1}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/con2.eps} \caption{Constrained NN $\widehat{\widehat{N}}(x)$ \eqref{Nx_1D2}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/MSE.eps} \caption{Convergence history} \end{subfigure} \caption{Numerical results for \eqref{testf1} with a second sequence of random initialization.} \label{fig:1L1D_IC2} \end{figure} \begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/con0.eps} \caption{Unconstrained NN $N(x)$ \eqref{Nx}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/con1.eps} \caption{Constrained NN $\widehat{N}(x)$ \eqref{Nx_1D1}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/con2.eps} \caption{Constrained NN $\widehat{\widehat{N}}(x)$ \eqref{Nx_1D2}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/MSE.eps} \caption{Convergence history} \end{subfigure} \caption{Numerical results for \eqref{testf1} with a third sequence of random initialization.} \label{fig:1L1D_IC3} \end{figure} \subsubsection{Two-dimensional tests} We next consider two-dimensional functions.
In particular, we show results for Franke's function \cite{franke1982} \begin{equation}\label{testf2} \begin{split} f(x,y)&=\frac{3}{4}\exp\left(-\frac{(9x-2)^2}{4}-\frac{(9y-2)^2}{4}\right)+\frac{3}{4} \exp\left(-\frac{(9x+1)^2}{49}-\frac{9y+1}{10}\right)\\ &+\frac{1}{2}\exp\left(-\frac{(9x-7)^2}{4}-\frac{(9y-3)^2}{4}\right)-\frac{1}{5}\exp\left(-(9x-4)^2-(9y-7)^2\right) \end{split} \end{equation} with $(x, y)\in [0,1]^2$. Again, we compare training results for both the standard NN without constraints \eqref{Nx} and our new constrained NN formulation \eqref{Nnew2}, using the same random sequence for initialization. The NNs have one hidden layer with $40$ neurons. The size of the training set is $500$ and that of the validation set is $1,000$. The numerical results are shown in Figure \ref{fig:1L2D}. On the left column, the contour lines of the training results are shown, as well as those of the exact function. Here all contour lines are at the same values, from 0 to 1 with an increment of $0.1$. We observe that the constrained NN formulation produces a visually better result than the standard unconstrained formulation. On the right column, we plot the function value along $y=0.2 x$. Again, the improvement of the constrained NN is visible.
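For reference, Franke's function \eqref{testf2} can be coded directly. The following is a numpy sketch (the helper name is ours); training and validation samples would then be drawn uniformly from $[0,1]^2$ and labeled with this function.

```python
import numpy as np

def franke(x, y):
    """Franke's test function, eq. (testf2), for (x, y) in [0, 1]^2."""
    return (0.75 * np.exp(-(9*x - 2)**2 / 4 - (9*y - 2)**2 / 4)
            + 0.75 * np.exp(-(9*x + 1)**2 / 49 - (9*y + 1) / 10)
            + 0.5  * np.exp(-(9*x - 7)**2 / 4 - (9*y - 3)**2 / 4)
            - 0.2  * np.exp(-(9*x - 4)**2 - (9*y - 7)**2))
```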
\begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/con0.eps} \caption{Unconstrained \eqref{Nx}: contours} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/cut_con0.eps} \caption{Unconstrained \eqref{Nx}: $y=0.2x$ cut} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/con1.eps} \caption{Constrained \eqref{Nnew2}: contours} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/cut_con1.eps} \caption{Constrained \eqref{Nnew2}: $y=0.2x$ cut} \end{subfigure} \caption{Numerical results for \eqref{testf2}. Top row: unconstrained formulation $N(x)$ \eqref{Nx}; Bottom row: constrained formulation $\widehat{N}(x)$ \eqref{Nnew2}.} \label{fig:1L2D} \end{figure} \subsection{Multiple Hidden Layers} We now consider feedforward NNs with multiple hidden layers. We present results for both the standard NN without constraints \eqref{MultNx} and the constrained ReLU NNs with the constraints \eqref{weight_general} and \eqref{b_general}. We use the standard notation $\{J_1,\dots, J_M\}$ to denote the network structure, where $J_m$ is the number of neurons in the $m$-th layer. The hidden layers are $J_2,\dots, J_{M-1}$. Again, we focus on 1D and 2D functions for ease of visualization, i.e., $J_1=1, 2$. \subsubsection{One dimensional tests} We first consider the one-dimensional function \eqref{testf1}. In Figure \ref{fig:2L1D}, we show the numerical results by NNs of $\{1, 20, 10, 1\}$, using three different sequences of random initializations, with and without constraints. We observe that the standard NN formulation without constraints \eqref{MultNx} produces widely different results.
This is because of the potentially large number of local minima in the cost function and is not entirely surprising. On the other hand, using exactly the same initialization, the NN formulation with constraints \eqref{weight_general} and \eqref{b_general} produces notably better results, and more importantly, is much less sensitive to the initialization. In Figure \ref{fig:4L1D}, we show the results for NNs with $\{1,5,5,5,5,1\}$ structure. We observe similar performance -- the constrained NN produces better results and is less sensitive to initialization. \begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D2L/rng7_2010/con0.eps} \caption{Unconstrained NN} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D2L/rng7_2010/con1.eps} \caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D2L/rng4_2010/con0.eps} \caption{Unconstrained NN} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D2L/rng4_2010/con1.eps} \caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D2L/rng5_2010/con0.eps} \caption{Unconstrained NN} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D2L/rng5_2010/con1.eps} \caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}} \end{subfigure} \caption{Numerical results for 1D function \eqref{testf1} with feedforward NNs of $\{1, 20, 10, 1\}$. From top to bottom: training results using three different random sequences for initialization.
Left column: results by unconstrained NN formulation \eqref{MultNx}; Right column: results by NN formulation with constraints \eqref{weight_general} and \eqref{b_general}.} \label{fig:2L1D} \end{figure} \begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D4L/rng11_10by4_m500/con0.eps} \caption{Unconstrained NN} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D4L/rng11_10by4_m500/con1.eps} \caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D4L/rng2_5by4_m500/con0.eps} \caption{Unconstrained NN} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D4L/rng2_5by4_m500/con1.eps} \caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D4L/rng6_10by4_m500/con0.eps} \caption{Unconstrained NN} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/1D4L/rng6_10by4_m500/con1.eps} \caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}} \end{subfigure} \caption{Numerical results for 1D function \eqref{testf1} with feedforward NNs of $\{1, 5, 5, 5, 5, 1\}$. From top to bottom: training results using three different random sequences for initialization. Left column: results by unconstrained NN formulation \eqref{MultNx}; Right column: results by NN formulation with constraints \eqref{weight_general} and \eqref{b_general}.} \label{fig:4L1D} \end{figure} \subsubsection{Two-dimensional tests} We now consider the two-dimensional Franke's function \eqref{testf2}. In Figure \ref{fig:2L2D}, the results by NNs with $\{2, 20, 10, 1\}$ structure are shown.
In Figure \ref{fig:4L2D}, the results by NNs with $\{2, 10, 10, 10, 10, 1\}$ structure are shown. Both the contour lines (with exactly the same contour values: from 0 to 1 with increment $0.1$) and the function value at $y=0.2x$ are plotted, for both the unconstrained NN \eqref{MultNx} and the constrained NN with the constraints \eqref{weight_general} and \eqref{b_general}. Once again, the two cases use the same random sequence for initialization. The results again show the notable improvement of the training results achieved by the constrained formulation. \begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/con0.eps} \caption{Unconstrained: contours} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/cut_con0.eps} \caption{Unconstrained: $y=0.2x$ cut} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/con1.eps} \caption{Constrained: contours} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/cut_con1.eps} \caption{Constrained: $y=0.2x$ cut} \end{subfigure} \caption{Numerical results for the 2D function \eqref{testf2} with NNs of the structure $\{2, 20, 10, 1\}$. Top row: results by unconstrained NN formulation \eqref{MultNx}; Bottom row: results by constrained NN with \eqref{weight_general} and \eqref{b_general}. Left column: contour plots; Right column: function cut along $y=0.2x$.
Dashed lines are the exact function.} \label{fig:2L2D} \end{figure} \begin{figure}[ht] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/con0.eps} \caption{Unconstrained: contours} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/cut_con0.eps} \caption{Unconstrained: $y=0.2x$ cut} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/con1.eps} \caption{Constrained: contours} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/cut_con1.eps} \caption{Constrained: $y=0.2x$ cut} \end{subfigure} \caption{Numerical results for the 2D function \eqref{testf2} with NNs of the structure $\{2, 10, 10, 10, 10, 1\}$. Top row: results by unconstrained NN formulation \eqref{MultNx}; Bottom row: results by constrained NN with \eqref{weight_general} and \eqref{b_general}. Left column: contour plots; Right column: function cut along $y=0.2x$. Dashed lines are the exact function.} \label{fig:4L2D} \end{figure}
\section*{Acknowledgment} \clearpage \bibliographystyle{plain} \section{Introduction} \label{sec:intro} Interest in neural networks (NNs) has significantly increased in recent years because of the successes of deep networks in many practical applications. Complex and deep neural networks are known to be capable of learning very complex phenomena that are beyond the capabilities of many other traditional machine learning techniques. The literature is too vast to survey here; we cite just a few review-type publications \cite{montufar2014number, bianchini2014complexity, eldan2016power, poggio2017, DuSwamy2014, GoodfellowBC-2016, Schmidhuber2015}. In a NN, each neuron produces an output of the following form \begin{equation} \sigma(\mathbf{w}\cdot \mathbf{x}_{in} + b), \end{equation} where the vector $\mathbf{x}_{in}$ represents the signal from all incoming connecting neurons, $\mathbf{w}$ contains the weights for the input, $\sigma$ is the activation function, and $b$ is its threshold. In a complex (and deep) network with a large number of neurons, the total number of free parameters $\mathbf{w}$ and $b$ can be exceedingly large. Their training thus poses a tremendous numerical challenge, as the objective function (loss function) to be optimized becomes highly non-convex, with a highly complicated landscape \cite{li2017visualizing}. Any numerical optimization procedure can be trapped in a local minimum and produce unsatisfactory training results. This paper is not concerned with the numerical algorithm aspects of NN training. Instead, the purpose of this paper is to show that the training of NNs can be conducted in a reduced parameter space, thus providing any numerical optimization algorithm a smaller space to search for the parameters.
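As a concrete illustration of the neuron output above, a single neuron can be evaluated as follows (a minimal numpy sketch; the function names are ours, not part of the paper):

```python
import numpy as np

def neuron_output(x_in, w, b, sigma):
    """Single-neuron output sigma(w . x_in + b)."""
    return sigma(np.dot(w, x_in) + b)

def relu(z):
    return np.maximum(0.0, z)

# w . x + b = 0.5*1 + 0.25*(-2) + 0.1 = 0.1, so the ReLU output is 0.1
y = neuron_output(np.array([1.0, -2.0]), np.array([0.5, 0.25]), 0.1, relu)
```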
This reduction applies to the type of activation functions with the following scaling property: for any $\alpha\geq 0$, $\sigma(\alpha\cdot y) = \gamma(\alpha) \sigma(y)$, where $\gamma$ depends only on $\alpha$. The binary activation function \cite{mcculloch1943logical}, one of the earliest activation functions, satisfies this property with $\gamma\equiv 1$. The rectified linear unit (ReLU) \cite{nair2010rectified, glorot2011deep}, one of the most widely used activation functions nowadays, satisfies this property with $\gamma(\alpha) = \alpha$. For NNs with this type of activation functions, we show that they can be equivalently trained in a reduced parameter space. More specifically, let $d$ be the dimension of the weights $\mathbf{w}$. Instead of training $\mathbf{w}$ in ${\mathbb R}^d$, they can be equivalently trained as unit vectors with $\|\mathbf{w}\|=1$, which means $\mathbf{w}\in \mathbb{S}^{d-1}$, the unit sphere in ${\mathbb R}^d$. Moreover, if one is interested in approximating a function defined in a bounded domain $D$, the threshold can also be trained in a bounded interval $b\in [-X_B, X_B]$, where $X_B=\sup_{\mathbf{x}\in D} \|\mathbf{x}\|$, as opposed to the entire real line $b\in {\mathbb R}$. It is well known that the standard NN with a single hidden layer is a universal approximator, cf. \cite{pinkus1999, barron1993universal, hornik1991approximation}. Here we prove that our new formulation in the reduced parameter space is also a universal approximator, in the sense that its span is dense in $C({\mathbb R}^d)$. We then further extend the parameter space constraints to NNs with multiple hidden layers. The major advantage of the constraints in the weights and thresholds is that they significantly reduce the search space for these parameters during training.
Consequently, this eliminates a potentially large number of undesirable local minima that may cause the training optimization algorithm to terminate prematurely, which is one of the reasons for unsatisfactory training results. We then present examples of function approximation to verify this numerically. Using the same network structure, optimization solver, and identical random initialization, our numerical tests show that the training results in the new reduced parameter space are notably better than those from the standard network. More importantly, the training using the reduced parameter space is much more robust against initialization. This paper is organized as follows. In Section \ref{sec:single}, we first derive the constraints on the parameters using a network with a single hidden layer, while enforcing the equivalence of the network. In Section \ref{sec:proof}, we prove that the constrained NN formulation remains a universal approximator. In Section \ref{sec:multiple}, we present the constraints for NNs with multiple hidden layers. Finally, in Section \ref{sec:examples}, we present numerical experiments to demonstrate the improvement of the network training using the reduced parameter space. We emphasize that this paper is not concerned with any particular training algorithm. Therefore, in our numerical tests we used the most standard optimization algorithm from Matlab\textregistered. \section{Constraints for NN with Single Hidden Layer} \label{sec:single} Let us first consider the standard NN with a single hidden layer, in the context of approximating an unknown response function $f:{\mathbb R}^d \to {\mathbb R}$.
The NN approximation using activation function $\sigma$ takes the following form, \begin{equation} \label{Nx} N(\mathbf{x}) = \sum_{j=1}^N c_j\sigma(\mathbf{w}_j\cdot\mathbf{x} + b_j),\quad \mathbf{x}\in{\mathbb R}^d, \end{equation} where $\mathbf{w}_j\in{\mathbb R}^d$ is the weight vector, $b_j\in{\mathbb R}$ the threshold, $c_j\in{\mathbb R}$, and $N$ is the width of the network. We restrict our discussion to the following two activation functions. One is the rectified linear unit (ReLU), \begin{equation} \label{ReLU} \sigma(x) = \max(0, x). \end{equation} The other one is the binary activation function, also known as the Heaviside/step function, \begin{equation} \label{step} \sigma(x) = \left\{ \begin{array}{ll} 1, & x> 0,\\ 0, & x<0, \end{array} \right. \end{equation} with $\sigma(0)=\frac12$. We remark that these two activation functions satisfy the following scaling property: For any $y\in {\mathbb R}$ and $\alpha\geq 0$, there exists a constant $\gamma\geq 0$ such that \begin{equation} \label{scale} \sigma (\alpha\cdot y) = \gamma(\alpha)\sigma(y), \end{equation} where $\gamma$ depends only on $\alpha$ but not on $y$. The ReLU satisfies this property with $\gamma(\alpha) = \alpha$, which is known as {\em scale invariance}. The binary activation function satisfies the scaling property with $\gamma(\alpha)\equiv 1$. We also list the following properties, which are important for the method we present in this paper. \begin{itemize} \item For the binary activation function \eqref{step}, for any $x\in{\mathbb R}$, \begin{equation} \label{step-1} \sigma(x) +\sigma(-x)\equiv 1. \end{equation} \item For the ReLU activation function \eqref{ReLU}, for any $x\in{\mathbb R}$ and any $\alpha$, \begin{equation} \label{ReLU-1} L(x;\alpha):=\sigma\left(x+\frac{\alpha}{2}\right) -\sigma\left(x-\frac{\alpha}{2}\right) + \sigma\left(-x+\frac{\alpha}{2}\right) - \sigma\left(-x-\frac{\alpha}{2}\right)\equiv\alpha.
\end{equation} \end{itemize} \subsection{Weight Constraints} We first show that the training of \eqref{Nx} can be equivalently conducted with constraint $\|\mathbf{w}_j\|=1$, i.e., unit vector. This is a straightforward result of the scaling property \eqref{scale} of the activation function. It effectively reduces the search space for the weights from $\mathbf{w}_j\in{\mathbb R}^d$ to $\mathbf{w}_j\in\mathbb{S}^{d-1}$, the unit sphere in ${\mathbb R}^d$. \begin{proposition} \label{prop:eq1} Any neural network construction \eqref{Nx} using the ReLU \eqref{ReLU} or the binary \eqref{step} activation functions has an equivalent form \begin{equation} \label{Nnew} \widetilde{N}(\mathbf{x}) = \sum_{j=1}^{\widetilde{N}} \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x} + \widetilde{b}_j), \quad \|\widetilde{\w}_j\|=1. \end{equation} \end{proposition} \begin{proof} Let us first assume $\|\mathbf{w}_j\|\neq 0$ for all $j=1,\dots,N$. We then have \begin{equation*} \begin{split} N(\mathbf{x}) &= \sum_{j=1}^N c_j\sigma(\mathbf{w}_j\cdot\mathbf{x} + b_j) \\ & =\sum_{j=1}^N c_j \sigma\left(\|\mathbf{w}_j\|\left(\frac{\mathbf{w}_j}{\|\mathbf{w}_j\|}\cdot\mathbf{x} + \frac{b_j}{\|\mathbf{w}_j\|}\right)\right) \\ &= \sum_{j=1}^N c_j \gamma(\|\mathbf{w}_j\|) \sigma\left(\frac{\mathbf{w}_j}{\|\mathbf{w}_j\|}\cdot\mathbf{x} + \frac{b_j}{\|\mathbf{w}_j\|}\right), \end{split} \end{equation*} where $\gamma$ is the factor in the scaling property \eqref{scale} satisfied by both ReLU and binary activation functions. Upon defining \begin{equation*} \widetilde{c}_j = c_j \gamma(\|\mathbf{w}_j\|), \quad \widetilde{\w}_j = \frac{\mathbf{w}_j}{\|\mathbf{w}_j\|}, \quad \widetilde{b}_j = \frac{b_j}{\|\mathbf{w}_j\|}, \end{equation*} we have the following equivalent form as in \eqref{Nnew} \begin{equation*} N(\mathbf{x})=\sum_{j=1}^N \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x}+\widetilde{b}_j). 
\end{equation*} Next, let us consider the case $\|\mathbf{w}_j\|=0$ for some $j\in\{1,\dots,N\}$. The contribution of this term to the construction \eqref{Nx} is thus \begin{equation*} c_j \sigma(b_j) =\left\{ \begin{array}{ll} \widehat{c}_j, & b_j\geq 0,\\ 0, & b_j<0, \end{array} \right. \end{equation*} where $\widehat{c}_j = c_j$ for the binary activation function and $\widehat{c}_j = c_j b_j$ for the ReLU function. Therefore, if $b_j<0$, this term in \eqref{Nx} vanishes. If $b_j\geq 0$, the contributions of these terms in \eqref{Nx} are constants, which can be represented by a combination of neuron outputs using the relations \eqref{step-1} and \eqref{ReLU-1}, for the binary and ReLU activation functions, respectively. We thus obtain a new representation in the form of \eqref{Nnew} that includes all the terms in the original expression \eqref{Nx}. This completes the proof. \end{proof} The proof immediately gives us another equivalent form, by first combining all the constant terms from \eqref{Nx} into a single constant and then explicitly including it in the expression. \begin{corollary} Any neural network construction \eqref{Nx} using the ReLU \eqref{ReLU} or the binary \eqref{step} activation functions has an equivalent form \begin{equation} \label{Nnew0} \widetilde{N}(\mathbf{x}) = \widetilde{c}_0 + \sum_{j=1}^{\widetilde{N}} \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x} + \widetilde{b}_j), \quad \|\widetilde{\w}_j\|=1. \end{equation} \end{corollary} \subsection{Threshold Constraints} We now present constraints on the thresholds in \eqref{Nx}. This is applicable when the target function $f$ is defined on a bounded domain, i.e., $f:D\to {\mathbb R}$, with $D\subset {\mathbb R}^d$ bounded. This is often the case in practice. We demonstrate that any NN \eqref{Nx} can be trained equivalently with each threshold restricted to a bounded interval. This (significantly) reduces the search space for the thresholds.
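The reductions above are easy to check numerically. The following sketch (illustrative numpy code, not the paper's MATLAB implementation) verifies the weight normalization of Proposition \ref{prop:eq1} for the ReLU, the identities \eqref{step-1} and \eqref{ReLU-1}, and the observation, used in the next proposition, that a unit-weight ReLU neuron with threshold below $-X_B$ vanishes on the domain:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def binary(z):
    # binary (Heaviside) activation, eq. (step), with sigma(0) = 1/2
    return np.where(z > 0, 1.0, np.where(z < 0, 0.0, 0.5))

def net(X, W, b, c):
    # N(x) = sum_j c_j * sigma(w_j . x + b_j), here with sigma = ReLU
    return relu(X @ W.T + b) @ c

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
W = rng.standard_normal((8, 3))
b, c = rng.standard_normal(8), rng.standard_normal(8)

# Proposition prop:eq1 for the ReLU: normalize each w_j and absorb the
# factor ||w_j|| (gamma(alpha) = alpha) into b_j and c_j
s = np.linalg.norm(W, axis=1)
assert np.allclose(net(X, W, b, c), net(X, W / s[:, None], b / s, c * s))

# identity (step-1): sigma(x) + sigma(-x) = 1 for the binary activation
x = np.linspace(-3.0, 3.0, 201)
assert np.allclose(binary(x) + binary(-x), 1.0)

# identity (ReLU-1): L(x; alpha) = alpha for every x
alpha = 0.7
L = (relu(x + alpha / 2) - relu(x - alpha / 2)
     + relu(-x + alpha / 2) - relu(-x - alpha / 2))
assert np.allclose(L, alpha)

# a unit-weight ReLU neuron with threshold b < -X_B vanishes on D
X_B = 1.0
xD = np.linspace(-X_B, X_B, 101)
assert np.all(relu(1.0 * xD - 1.5) == 0.0)
```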
\begin{proposition} \label{prop:eq2} With the ReLU \eqref{ReLU} or the binary \eqref{step} activation function, let \eqref{Nx} be an approximator to a function $f: D\to {\mathbb R}$, where $D\subset {\mathbb R}^d$ is a bounded domain. Let \begin{equation} \label{Xb} X_B = \sup_{\mathbf{x}\in D} \|\mathbf{x}\|. \end{equation} Then, \eqref{Nx} has an equivalent form \begin{equation} \label{Nnew2} \widehat{N}(\mathbf{x}) = \sum_{j=1}^{\widehat{N}} \hat{c}_j\sigma(\hat{\w}_j\cdot\mathbf{x} + \hat{b}_j), \quad \|\hat{\w}_j\|=1, \quad -X_B\leq \hat{b}_j \leq X_B. \end{equation} \end{proposition} \begin{proof} Proposition \ref{prop:eq1} establishes that \eqref{Nx} has an equivalent form \eqref{Nnew} \begin{equation*} \widetilde{N}(\mathbf{x}) = \sum_{j=1}^{\widetilde{N}} \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x} + \widetilde{b}_j), \quad \|\widetilde{\w}_j\|=1. \end{equation*} Since $\widetilde{\w}_j$ is a unit vector, we have \begin{equation} \widetilde{\w}_j\cdot \mathbf{x} \in \left[-\|\mathbf{x}\|,\|\mathbf{x}\|\right] \subseteq [-X_B, X_B], \end{equation} where the bound \eqref{Xb} is used. Let us first consider the case $\widetilde{b}_j < -X_B$. Then, for all $\mathbf{x}\in D$, $\widetilde{\w}_j\cdot \mathbf{x} + \widetilde{b}_j <0$ and hence $\sigma(\widetilde{\w}_j\cdot \mathbf{x} + \widetilde{b}_j) = 0$, for both the ReLU and binary activation functions. Subsequently, this term has no contribution to the approximation and can be eliminated. Next, let us consider the case $\widetilde{b}_j > X_B$. Then $\widetilde{\w}_j\cdot \mathbf{x} + \widetilde{b}_j >0$ for all $\mathbf{x}\in D$. Let $J=\{j_1,\dots, j_L\}$, $L\geq 1$, be the set of terms in \eqref{Nnew} that satisfy this condition. We then have $\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell} >0$ for all $\mathbf{x}\in D$ and $\ell=1,\dots, L$. We now show that the net contribution of these terms in \eqref{Nnew} is included in the equivalent form \eqref{Nnew2}.
\begin{enumerate} \item For the binary activation function \eqref{step}, the contribution of these terms to the approximation \eqref{Nnew} is \begin{equation*} N_J(\mathbf{x}) = \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \sigma\left(\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell}\right) = \sum_{\ell=1}^L \widetilde{c}_{j_\ell} = \mathrm{const}. \end{equation*} Again, using the relation \eqref{step-1}, any constant can be expressed by a combination of binary activation terms with thresholds $\widetilde{b}_j = 0$. Such terms are already included in \eqref{Nnew2}. \item For the ReLU activation \eqref{ReLU}, the contribution of these terms to \eqref{Nnew} is \begin{equation} \begin{split} N_J(\mathbf{x}) &= \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \sigma\left(\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell}\right) \\ &= \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \left(\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell}\right) \\ &= \sum_{\ell=1}^L \left(\widetilde{c}_{j_\ell} \widetilde{\w}_{j_\ell}\right)\cdot \mathbf{x} + \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \widetilde{b}_{j_\ell} \\ &= \widetilde{\w}^*_J \cdot \mathbf{x} + b^*_J \\ &= \sigma(\widetilde{\w}^*_J \cdot \mathbf{x}) - \sigma(-\widetilde{\w}^*_J \cdot \mathbf{x}) + b^*_J, \end{split} \end{equation} where the last equality follows from the simple property of the ReLU function $\sigma(y) - \sigma(-y) = y$. By Proposition \ref{prop:eq1}, the first two terms then have an equivalent form using the unit weight $\widetilde{\w}^*_J/\|\widetilde{\w}^*_J\|$ and zero threshold, which is included in \eqref{Nnew2}. For the constant $b_J^*$, we again invoke the relation \eqref{ReLU-1} and represent it by $\frac{b_J^*}{\alpha} L(\hat{\w}_J^*\cdot\mathbf{x}; \alpha)$, where $\hat{\w}_J^*$ is an arbitrary unit vector and $0<\alpha<X_B$. This expression consists of a collection of terms ($4$ terms, as in \eqref{ReLU-1}), all of which are included in \eqref{Nnew2}.
This completes the proof. \end{enumerate} \end{proof} The equivalence between the standard NN expression \eqref{Nx} and the constrained expression \eqref{Nnew2} indicates that the NN training can be conducted in a reduced parameter space. For the weights $\mathbf{w}_j$ in each neuron, the training can be conducted on $\mathbb{S}^{d-1}$, the unit sphere in ${\mathbb R}^d$, as opposed to the entire space ${\mathbb R}^d$. For the threshold, the training can be conducted in the bounded interval $[-X_B, X_B]$, as opposed to the entire real line ${\mathbb R}$. The reduction of the parameter space can eliminate many potential local minima and therefore enhance the performance of numerical optimization during the training. We remark that the equivalent form in the reduced parameter space \eqref{Nnew2} may have a different number of ``active'' neurons than the original unrestricted case \eqref{Nx}. \section{Universal approximation property} \label{sec:proof} By universal approximation property, we mean that the constrained formulations \eqref{Nnew} and \eqref{Nnew2} can approximate any continuous function. To establish this, we define the following set of functions on ${\mathbb R}^d$ \begin{equation} \mathcal{N} (\sigma;\Lambda,\Theta):=\textrm{Span}\left\{\sigma(\mathbf{w}\cdot\mathbf{x}+b): \mathbf{w}\in \Lambda,b\in \Theta \right\}, \end{equation} where $\Lambda\subseteq {\mathbb R}^d$ is the weight set and $\Theta\subseteq{\mathbb R}$ the threshold set. We also denote by $\mathcal{N}_D(\sigma; \Lambda, \Theta)$ the same set of functions when restricted to a bounded domain $D\subseteq {\mathbb R}^d$. With this definition, the standard NN expression and our two constrained expressions correspond to the following spaces.
\begin{equation}
\begin{split}
\eqref{Nx} &\in \mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R}), \\
\eqref{Nnew} &\in \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R}), \\
\eqref{Nnew2} &\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, [-X_B, X_B]),
\end{split}
\end{equation}
where $\mathbb{S}^{d-1}$ is the unit sphere in ${\mathbb R}^d$, since $\|\widetilde{\w}\|=1$.

The universal approximation property of the standard unconstrained NN expression \eqref{Nx} has been well studied, cf. \cite{cybenko1989approximation, hornik1991approximation, mhaskar1992approximation, barron1993universal, leshno1993multilayer}, and \cite{pinkus1999} for a survey. Here we cite the following result for $\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})$.
\begin{theorem}[\cite{leshno1993multilayer}, Theorem 1] \label{propLeshno}
Let $\sigma$ be a function in $L_{loc}^\infty({\mathbb R})$ whose set of discontinuities has Lebesgue measure zero. Then the set $\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})$ is dense in $C({\mathbb R}^d)$, in the topology of uniform convergence on compact sets, if and only if $\sigma$ is not almost everywhere an algebraic polynomial.
\end{theorem}

\subsection{Universal approximation property of \eqref{Nnew}} \label{sec:Dense1}

We now examine the universal approximation property of the first constrained formulation \eqref{Nnew}.
\begin{theorem} \label{thm1}
Let $\sigma$ be the binary function \eqref{step} or the ReLU function \eqref{ReLU}. Then
\begin{equation} \label{thm1eq}
\mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})=\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R}),
\end{equation}
and the set $\mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})$ is dense in $C({\mathbb R}^d)$, in the topology of uniform convergence on compact sets.
\end{theorem}
\begin{proof}
Obviously, we have $\mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})\subseteq\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})$.
By Proposition \ref{prop:eq1}, any element $N(\mathbf{x})\in \mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})$ can be reformulated as an element $\widetilde{N}(\mathbf{x})\in \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})$. Therefore, we have $\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})\subseteq \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})$. This establishes the first statement \eqref{thm1eq}. Given the equivalence \eqref{thm1eq}, the denseness result follows immediately from Theorem \ref{propLeshno}, as neither the ReLU nor the binary activation function is a polynomial, and both are continuous everywhere except on a set of zero Lebesgue measure.
\end{proof}

\subsection{Universal approximation property of \eqref{Nnew2}}

We now examine the second constrained NN expression \eqref{Nnew2}.
\begin{theorem}
Let $\sigma$ be the binary \eqref{step} or the ReLU \eqref{ReLU} activation function. Let $\mathbf{x}\in D\subset {\mathbb R}^d$, where $D$ is closed and bounded with $X_B=\sup_{\mathbf{x}\in D} \|\mathbf{x}\|$, and define $\Theta=[-X_B, X_B]$. Then
\begin{equation} \label{thm2eq}
\mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)=\mathcal{N}_D(\sigma;{\mathbb R}^{d}, {\mathbb R}).
\end{equation}
Furthermore, $\mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ is dense in $C(D)$ in the topology of uniform convergence.
\end{theorem}
\begin{proof}
Obviously we have $\mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)\subseteq \mathcal{N}_D(\sigma; {\mathbb R}^d, {\mathbb R})$. On the other hand, Proposition \ref{prop:eq2} establishes that for any element $N(\mathbf{x})\in \mathcal{N}_D(\sigma; {\mathbb R}^{d}, {\mathbb R})$, there exists an equivalent formulation $\widehat{N}(\mathbf{x})\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ for any $\mathbf{x}\in D$. This implies $\mathcal{N}_D(\sigma;{\mathbb R}^d, {\mathbb R})\subseteq \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$. We then have \eqref{thm2eq}.
For the denseness of $\mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ in $C(D)$, consider any function $f\in C(D)$. By the Tietze extension theorem (cf. \cite{folland1999}), there exists an extension $F\in C({\mathbb R}^d)$ with $F(\mathbf{x})=f(\mathbf{x})$ for all $\mathbf{x}\in D$. The denseness of the standard unconstrained NN expression (Theorem \ref{propLeshno}) then implies that, for the compact set $E=\{\mathbf{x}\in {\mathbb R}^d: \|\mathbf{x}\|\leq X_B\}$ and any given $\epsilon >0$, there exists $N(\mathbf{x})\in \mathcal{N}(\sigma; {\mathbb R}^{d}, {\mathbb R})$ such that
\begin{equation*}
\sup_{\mathbf{x}\in E} |N(\mathbf{x})-F(\mathbf{x})|\leq\epsilon.
\end{equation*}
By Proposition \ref{prop:eq2}, there exists an equivalent constrained NN expression $\widehat{N}(\mathbf{x})\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ such that $\widehat{N}(\mathbf{x})=N(\mathbf{x})$ for all $\mathbf{x}\in D$. Since $D\subseteq E$ and $F=f$ on $D$, we immediately have, for any $f\in C(D)$ and any given $\epsilon>0$, a $\widehat{N}(\mathbf{x})\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ such that
\begin{equation*}
\sup_{\mathbf{x}\in D} |\widehat{N}(\mathbf{x})-f(\mathbf{x})|=\sup_{\mathbf{x}\in D}|N(\mathbf{x})-f(\mathbf{x})|\leq \sup_{\mathbf{x}\in E}|N(\mathbf{x})-F(\mathbf{x})|\leq \epsilon.
\end{equation*}
The proof is now complete.
\end{proof}

\section{Constraints for NNs with Multiple Hidden Layers} \label{sec:multiple}

We now generalize the previous results to feedforward NNs with multiple hidden layers. Let us again consider approximation of a multivariate function $f:D\to{\mathbb R}$, where $D\subseteq {\mathbb R}^d$ with $\sup_{\mathbf{x}\in D}\|\mathbf{x}\|=X_B<\infty$. Consider a feedforward NN with $M$ layers, $M\geq 3$, where $m=1$ is the input layer and $m=M$ the output layer. Let $J_m$, $m=1,\dots,M$, denote the number of neurons in layer $m$. Obviously, we have $J_1=d$ and $J_M=1$ in our case.
Let $\mathbf{y}^{(m)}\in {\mathbb R}^{J_m}$ be the output of the neurons in the $m$-th layer. Then, following the notation of \cite{DuSwamy2014}, we can write
\begin{equation}\label{MultNx}
\begin{split}
\mathbf{y}^{(1)} &= \mathbf{x}, \\
\mathbf{y}^{(m)} &= \sigma\left(\left[\mathbf{W}^{(m-1)}\right]^T \mathbf{y} ^{(m-1)}+\mathbf{b}^{(m)}\right), \qquad m=2,\dots, M-1,\\
\mathbf{y}^{(M)} &= \left[\mathbf{W}^{(M-1)}\right]^T \mathbf{y}^{(M-1)},
\end{split}
\end{equation}
where $\mathbf{W}^{(m-1)}\in {\mathbb R}^{J_{m-1}\times J_m}$ is the weight matrix and $\mathbf{b}^{(m)}$ is the threshold vector. In component form, the output of the $j$-th neuron in the $m$-th layer is
\begin{equation} \label{NN_layer}
y_j^{(m)} = \sigma\left( \left[\mathbf{w}_j^{(m-1)}\right]^T \mathbf{y}^{(m-1)} + b_j^{(m)}\right), \qquad j=1,\dots, J_m, \quad m=2,\dots, M-1,
\end{equation}
where $\mathbf{w}^{(m-1)}_{j}$ is the $j$-th column of $\mathbf{W}^{(m-1)}$.

\subsection{Weight constraints}

The derivation of the constraints on the weight vectors $\mathbf{w}_j^{(m-1)}$ generalizes directly from the single-layer case, and we obtain the following weight constraints,
\begin{equation} \label{weight_general}
\left\|\mathbf{w}^{(m-1)}_{j}\right\| = 1, \qquad j=1,\dots, J_m, \quad m=2,\dots, M-1.
\end{equation}

\subsection{Threshold constraints}

The constraints on the thresholds $b_j^{(m)}$ depend on the bounds of the output from the previous layer $\mathbf{y}^{(m-1)}$.
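As a concrete illustration, the forward recursion \eqref{MultNx} together with the unit-norm weight constraint \eqref{weight_general} can be sketched in a few lines of Python; the layer structure, the random parameter values, and the use of NumPy here are illustrative assumptions, not part of the method itself.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, Ws, bs):
    """Evaluate the feedforward recursion: hidden layers apply the
    activation, the output layer is a plain linear map without threshold."""
    y = x
    for W, b in zip(Ws[:-1], bs):
        y = relu(W.T @ y + b)   # y^(m) = sigma([W^(m-1)]^T y^(m-1) + b^(m))
    return Ws[-1].T @ y          # y^(M) = [W^(M-1)]^T y^(M-1)

# Hypothetical structure {2, 20, 10, 1}: input dim 2, two hidden layers.
rng = np.random.default_rng(0)
sizes = [2, 20, 10, 1]
Ws = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [rng.standard_normal(n) for n in sizes[1:-1]]

# Enforce the weight constraint: each column of each hidden-layer weight
# matrix (one column per neuron) is normalized to unit Euclidean norm.
for W in Ws[:-1]:
    W /= np.linalg.norm(W, axis=0, keepdims=True)

x = rng.standard_normal(2)
print(forward(x, Ws, bs))
```

Note that normalizing the columns does not change the set of representable networks, by the scaling property of the ReLU; it only restricts where the optimizer searches.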
For the ReLU activation function \eqref{ReLU}, we derive from \eqref{NN_layer} that for $m=2,\dots,M-1$,
\begin{equation}
\begin{split}
\left|y_j^{(m)}\right| &\leq \left| \left[\mathbf{w}_j^{(m-1)}\right]^T \mathbf{y}^{(m-1)} + b_j^{(m)}\right| \\
&\leq \left\|\mathbf{w}_j^{(m-1)}\right\| \left\|\mathbf{y}^{(m-1)}\right\| + \left|b_j^{(m)}\right| \\
&\leq \left\|\mathbf{y}^{(m-1)}\right\| + \left|b_j^{(m)}\right|.
\end{split}
\end{equation}
If the domain $D$ is bounded with $X_B = \sup_{\mathbf{x}\in D} \|\mathbf{x}\| <+\infty$, the constraints on the thresholds can be derived recursively. Starting from $\|\mathbf{y}^{(1)}\| = \|\mathbf{x}\| \leq X_B$ and $b_j^{(2)}\in [-X_B,X_B]$, the bound on the layer output at most doubles from one layer to the next, and we obtain
\begin{equation} \label{b_general}
b_j^{(m)} \in X_B\cdot[-2^{m-2}, 2^{m-2}], \qquad m=2,\dots,M-1, \quad j=1,\dots, J_m.
\end{equation}
For the binary activation function \eqref{step}, we derive from \eqref{NN_layer} that for $m=2,\dots,M-1$,
\begin{equation}
\left|y_j^{(m)}\right| \leq 1.
\end{equation}
The bounds for the thresholds are then
\begin{equation}
\begin{split}
b_j^{(2)} &\in X_B\cdot[-1, 1], \qquad j=1,\dots, J_2,\\
b_j^{(m)} &\in [-1, 1], \qquad m=3,\dots,M-1, \quad j=1,\dots, J_m.
\end{split}
\end{equation}

\section{Numerical Examples} \label{sec:examples}

In this section we present numerical examples to demonstrate the properties of the constrained NN training. We focus exclusively on the ReLU activation function \eqref{ReLU} due to its overwhelming popularity in practice. Given a set of training samples $\{(\mathbf{x}_i, y_i)\}_{i=1}^n$, the weights and thresholds are trained by minimizing the following mean squared error (MSE),
\begin{equation*}
E=\frac{1}{n}\sum_{i=1}^n |N(\mathbf{x}_i)-y_i|^2.
\end{equation*}
We conduct the training using the standard unconstrained NN formulation \eqref{Nx} and our new constrained formulation \eqref{Nnew2}, and compare the training results.
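A minimal sketch of this constrained training setup, in Python, for the 1D test function used in the next subsection. Here SciPy's \texttt{L-BFGS-B} is a hypothetical stand-in for MATLAB's \texttt{fmincon}: the weights are fixed at the unit values $\pm 1$ (the ``unit sphere'' in 1D), and the thresholds are box-constrained to $[-X_B, X_B]$. The solver choice, sample generation, and initialization scale are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
target = lambda x: np.sin(4 * np.pi * x)        # test function f(x) = sin(4 pi x)
xs = rng.uniform(0.0, 1.0, 200)                 # 200 training samples on D = [0, 1]
ys = target(xs)

J, X_B = 20, 1.0                                # J neurons; X_B = sup_{x in D} |x|
w = np.where(np.arange(J) % 2 == 0, 1.0, -1.0)  # unit "sphere" in 1D: w_j in {-1, +1}

def mse(p):
    # p packs [c_1..c_J, b_1..b_J]; the network is sum_j c_j ReLU(w_j x + b_j).
    c, b = p[:J], p[J:]
    pred = (c * np.maximum(w * xs[:, None] + b, 0.0)).sum(axis=1)
    return np.mean((pred - ys) ** 2)

p0 = 0.1 * rng.standard_normal(2 * J)
# Thresholds confined to [-X_B, X_B]; outer coefficients unconstrained.
bounds = [(None, None)] * J + [(-X_B, X_B)] * J
res = minimize(mse, p0, method="L-BFGS-B", bounds=bounds)
print(res.fun)   # training MSE after constrained optimization
```

The point of the sketch is only the shape of the search space: compared with the unconstrained formulation, the inner weights contribute no continuous degrees of freedom and the thresholds are searched over a compact box.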
In all tests, both formulations use exactly the same randomized initial conditions for the weights and thresholds. Since our new constrained formulation is independent of the particular numerical optimization algorithm, we use one of the most accessible optimization routines from MATLAB\textregistered: the function \texttt{fminunc} for unconstrained optimization and the function \texttt{fmincon} for constrained optimization. It is natural to exploit the specific form of the constraints in \eqref{Nnew2} to design more effective constrained optimization algorithms. This is, however, beyond the scope of the current paper.

After training the networks, we evaluate the network approximation errors using another set of samples --- a validation sample set, which consists of randomly generated points that are independent of the training sample set. Even though our discussion applies to functions in arbitrary dimension $d\geq 1$, we present only numerical results in $d=1$ and $d=2$ because they can be easily visualized.

\subsection{Single Hidden Layer}

We first examine the approximation results using NNs with a single hidden layer, with and without constraints.

\subsubsection{One dimensional tests}\label{example1}

We first consider a one-dimensional smooth function
\begin{equation}\label{testf1}
f(x)=\sin(4\pi x), \quad x\in [0, 1].
\end{equation}
The constrained formulation \eqref{Nnew2} becomes
\begin{equation} \label{Nx_1D1}
\widehat{N}(x) = \sum_{j=1}^{J} \hat{c}_j\sigma(\hat{w}_j x + \hat{b}_j), \quad \hat{w}_j\in \{-1,1\}, \quad -1 \leq \hat{b}_j \leq 1.
\end{equation}
Due to the simple form of the weights and the domain $D=[0,1]$, the proof of Proposition \ref{prop:eq2} also gives us the following tighter bounds on the thresholds for this specific problem,
\begin{equation} \label{Nx_1D2}
\left\{
\begin{split}
-1\leq \hat{b}_j \leq 0, &\qquad \textrm{if } \hat{w}_j = 1;\\
0\leq \hat{b}_j \leq 1, &\qquad \textrm{if } \hat{w}_j=-1.
\end{split}
\right.
\end{equation}
We approximate \eqref{testf1} using NNs with one hidden layer of $20$ neurons. The size of the training data set is $200$. Numerical tests were performed for different choices of random initialization, as it is known that NN training performance depends on the initialization. In Figures \ref{fig:1L1D_IC1}, \ref{fig:1L1D_IC2} and \ref{fig:1L1D_IC3}, we show the numerical results for three different random initializations. In each set, the unconstrained NN \eqref{Nx}, the constrained NN \eqref{Nx_1D1} and the specialized constrained NN with \eqref{Nx_1D2} use the same random sequence for initialization. We observe that the standard NN formulation without constraints \eqref{Nx} produces training results that depend critically on the initialization, as is widely acknowledged in the literature. On the other hand, our new constrained NN formulations are more robust and produce better results that are less sensitive to the initialization. The tighter constraint \eqref{Nx_1D2} performs better than the general constraint \eqref{Nx_1D1}, which is not surprising. However, the tighter constraint is specific to this particular problem and is not available in the general case.
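The equivalence underlying \eqref{Nx_1D1} can also be verified numerically. For the ReLU activation, $c\,\sigma(wx+b) = (c\,|w|)\,\sigma(\mathrm{sign}(w)\,x + b/|w|)$, so every unconstrained neuron has an exact counterpart with weight in $\{-1,1\}$ and rescaled threshold. A quick check (the random coefficients here are arbitrary):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 101)       # points in D = [0, 1]
c, w, b = rng.standard_normal(3)
w += np.sign(w) * 0.5                # keep w bounded away from zero

# Unconstrained neuron vs. its normalized counterpart, using the ReLU
# scaling property sigma(a*y) = a*sigma(y) for a >= 0 with a = |w|.
unconstrained = c * relu(w * x + b)
constrained = (c * abs(w)) * relu(np.sign(w) * x + b / abs(w))

print(np.max(np.abs(unconstrained - constrained)))   # agrees up to rounding
```

The same rescaling argument, applied columnwise, is what allows the weights of every neuron to be confined to $\mathbb{S}^{d-1}$ in the general $d$-dimensional case.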
\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/con0.eps}
\caption{Unconstrained NN $N(x)$ \eqref{Nx}}
\label{fig1:con0}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/con1.eps}
\caption{Constrained NN $\widehat{N}(x)$ \eqref{Nx_1D1}}
\label{fig1:con1}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/con2.eps}
\caption{Constrained NN $\widehat{\widehat{N}}(x)$ \eqref{Nx_1D2}}
\label{fig1:con2}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng911_N20_m500/MSE.eps}
\caption{Convergence history}
\label{fig1:MSE}
\end{subfigure}
\caption{Numerical results for \eqref{testf1} with one sequence of random initialization.}
\label{fig:1L1D_IC1}
\end{figure}

\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/con0.eps}
\caption{Unconstrained NN $N(x)$ \eqref{Nx}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/con1.eps}
\caption{Constrained NN $\widehat{N}(x)$ \eqref{Nx_1D1}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/con2.eps}
\caption{Constrained NN $\widehat{\widehat{N}}(x)$ \eqref{Nx_1D2}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng12_N20_m500/MSE.eps}
\caption{Convergence history}
\end{subfigure}
\caption{Numerical results for \eqref{testf1} with a second sequence of random initialization.}
\label{fig:1L1D_IC2}
\end{figure}

\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/con0.eps}
\caption{Unconstrained NN $N(x)$ \eqref{Nx}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/con1.eps}
\caption{Constrained NN $\widehat{N}(x)$ \eqref{Nx_1D1}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/con2.eps}
\caption{Constrained NN $\widehat{\widehat{N}}(x)$ \eqref{Nx_1D2}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D1L/rng565_N20_m500/MSE.eps}
\caption{Convergence history}
\end{subfigure}
\caption{Numerical results for \eqref{testf1} with a third sequence of random initialization.}
\label{fig:1L1D_IC3}
\end{figure}

\subsubsection{Two-dimensional tests}

We next consider two-dimensional functions. In particular, we show results for Franke's function \cite{franke1982},
\begin{equation}\label{testf2}
\begin{split}
f(x,y)&=\frac{3}{4}\exp\left(-\frac{(9x-2)^2}{4}-\frac{(9y-2)^2}{4}\right)+\frac{3}{4} \exp\left(-\frac{(9x+1)^2}{49}-\frac{9y+1}{10}\right)\\
&+\frac{1}{2}\exp\left(-\frac{(9x-7)^2}{4}-\frac{(9y-3)^2}{4}\right)-\frac{1}{5}\exp\left(-(9x-4)^2-(9y-7)^2\right)
\end{split}
\end{equation}
with $(x, y)\in [0,1]^2$. Again, we compare training results for the standard NN without constraints \eqref{Nx} and our new constrained NN formulation \eqref{Nnew2}, using the same random sequence for initialization. The NNs have one hidden layer with $40$ neurons. The size of the training set is $500$ and that of the validation set is $1{,}000$. The numerical results are shown in Figure \ref{fig:1L2D}. The left column shows the contour lines of the training results, as well as those of the exact function. All contour lines are at the same values, from 0 to 1 with an increment of $0.1$.
We observe that the constrained NN formulation produces a visually better result than the standard unconstrained formulation. The right column plots the function value along $y=0.2 x$. Again, the improvement of the constrained NN is visible.

\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/con0.eps}
\caption{Unconstrained \eqref{Nx}: contours}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/cut_con0.eps}
\caption{Unconstrained \eqref{Nx}: $y=0.2x$ cut}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/con1.eps}
\caption{Constrained \eqref{Nnew2}: contours}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D1L/rng4rng1_40_m500/cut_con1.eps}
\caption{Constrained \eqref{Nnew2}: $y=0.2x$ cut}
\end{subfigure}
\caption{Numerical results for \eqref{testf2}. Top row: unconstrained formulation $N(x)$ \eqref{Nx}; Bottom row: constrained formulation $\widehat{N}(x)$ \eqref{Nnew2}.}
\label{fig:1L2D}
\end{figure}

\subsection{Multiple Hidden Layers}

We now consider feedforward NNs with multiple hidden layers. We present results for both the standard NN without constraints \eqref{MultNx} and the constrained ReLU NNs with the constraints \eqref{weight_general} and \eqref{b_general}. We use the standard notation $\{J_1,\dots, J_M\}$ to denote the network structure, where $J_m$ is the number of neurons in each layer; the hidden layers are $J_2,\dots, J_{M-1}$. Again, we focus on 1D and 2D functions for ease of visualization, i.e., $J_1=1, 2$.

\subsubsection{One dimensional tests}

We first consider the one-dimensional function \eqref{testf1}.
In Figure \ref{fig:2L1D}, we show the numerical results of NNs with $\{1, 20, 10, 1\}$ structure, using three different sequences of random initializations, with and without constraints. We observe that the standard NN formulation without constraints \eqref{MultNx} produces widely different results. This is because of the potentially large number of local minima in the cost function and is not entirely surprising. On the other hand, using exactly the same initialization, the NN formulation with the constraints \eqref{weight_general} and \eqref{b_general} produces notably better results and, more importantly, is much less sensitive to the initialization. In Figure \ref{fig:4L1D}, we show the results for NNs with $\{1,5,5,5,5,1\}$ structure. We observe similar performance --- the constrained NN produces better results and is less sensitive to initialization.

\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D2L/rng7_2010/con0.eps}
\caption{Unconstrained NN}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D2L/rng7_2010/con1.eps}
\caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D2L/rng4_2010/con0.eps}
\caption{Unconstrained NN}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D2L/rng4_2010/con1.eps}
\caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D2L/rng5_2010/con0.eps}
\caption{Unconstrained NN}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D2L/rng5_2010/con1.eps}
\caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}}
\end{subfigure}
\caption{Numerical results for the 1D function \eqref{testf1} with feedforward NNs of $\{1, 20, 10, 1\}$ structure. From top to bottom: training results using three different random sequences for initialization. Left column: results by the unconstrained NN formulation \eqref{MultNx}; Right column: results by the NN formulation with constraints \eqref{weight_general} and \eqref{b_general}.}
\label{fig:2L1D}
\end{figure}

\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D4L/rng11_10by4_m500/con0.eps}
\caption{Unconstrained NN}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D4L/rng11_10by4_m500/con1.eps}
\caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D4L/rng2_5by4_m500/con0.eps}
\caption{Unconstrained NN}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D4L/rng2_5by4_m500/con1.eps}
\caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D4L/rng6_10by4_m500/con0.eps}
\caption{Unconstrained NN}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/1D4L/rng6_10by4_m500/con1.eps}
\caption{Constrained NN with \eqref{weight_general} and \eqref{b_general}}
\end{subfigure}
\caption{Numerical results for the 1D function \eqref{testf1} with feedforward NNs of $\{1, 5, 5, 5, 5, 1\}$ structure. From top to bottom: training results using three different random sequences for initialization.
Left column: results by the unconstrained NN formulation \eqref{MultNx}; Right column: results by the NN formulation with constraints \eqref{weight_general} and \eqref{b_general}.}
\label{fig:4L1D}
\end{figure}

\subsubsection{Two dimensional tests}

We now consider the two-dimensional Franke's function \eqref{testf2}. In Figure \ref{fig:2L2D}, we show the results of NNs with $\{2, 20, 10, 1\}$ structure. In Figure \ref{fig:4L2D}, we show the results of NNs with $\{2, 10, 10, 10, 10, 1\}$ structure. Both the contour lines (with exactly the same contour values, from 0 to 1 with increment $0.1$) and the function values along $y=0.2x$ are plotted, for both the unconstrained NN \eqref{MultNx} and the constrained NN with the constraints \eqref{weight_general} and \eqref{b_general}. Once again, the two cases use the same random sequence for initialization. The results again show the notable improvement of the training obtained with the constrained formulation.

\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/con0.eps}
\caption{Unconstrained: contours}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/cut_con0.eps}
\caption{Unconstrained: $y=0.2x$ cut}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/con1.eps}
\caption{Constrained: contours}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D2L/rng2rng2_2010_m500/cut_con1.eps}
\caption{Constrained: $y=0.2x$ cut}
\end{subfigure}
\caption{Numerical results for the 2D function \eqref{testf2} with NNs of the structure $\{2, 20, 10, 1\}$. Top row: results by the unconstrained NN formulation \eqref{MultNx}; Bottom row: results by the constrained NN with \eqref{weight_general} and \eqref{b_general}.
Left column: contour plots; Right column: function cut along $y=0.2x$. Dashed lines are the exact function.}
\label{fig:2L2D}
\end{figure}

\begin{figure}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/con0.eps}
\caption{Unconstrained: contours}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/cut_con0.eps}
\caption{Unconstrained: $y=0.2x$ cut}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/con1.eps}
\caption{Constrained: contours}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{./Figures/2D4L/rng1rng2_10by4_m500/cut_con1.eps}
\caption{Constrained: $y=0.2x$ cut}
\end{subfigure}
\caption{Numerical results for the 2D function \eqref{testf2} with NNs of the structure $\{2, 10, 10, 10, 10, 1\}$. Top row: results by the unconstrained NN formulation \eqref{MultNx}; Bottom row: results by the constrained NN with \eqref{weight_general} and \eqref{b_general}. Left column: contour plots; Right column: function cut along $y=0.2x$. Dashed lines are the exact function.}
\label{fig:4L2D}
\end{figure}

\section{Introduction} \label{sec:intro}

Interest in neural networks (NNs) has increased significantly in recent years, owing to the successes of deep networks in many practical applications. Complex and deep neural networks are known to be capable of learning very complex phenomena that are beyond the capabilities of many other traditional machine learning techniques. The literature is too large to survey here; we cite just a few review-type publications \cite{montufar2014number, bianchini2014complexity, eldan2016power, poggio2017, DuSwamy2014, GoodfellowBC-2016, Schmidhuber2015}.
In a neural network, each neuron produces an output of the form
\begin{equation}
\sigma(\mathbf{w}\cdot \mathbf{x}_{in} + b),
\end{equation}
where the vector $\mathbf{x}_{in}$ represents the signal from all incoming connecting neurons, $\mathbf{w}$ contains the corresponding weights, $b$ is the threshold, and $\sigma$ is the activation function. In a complex (and deep) network with a large number of neurons, the total number of free parameters $\mathbf{w}$ and $b$ can be exceedingly large. Their training thus poses a tremendous numerical challenge, as the objective (loss) function to be optimized becomes highly non-convex, with a highly complicated landscape \cite{li2017visualizing}. Any numerical optimization procedure can be trapped in a local minimum and produce unsatisfactory training results.

This paper is not concerned with the numerical algorithm aspects of NN training. Instead, its purpose is to show that the training of NNs can be conducted in a reduced parameter space, thus providing any numerical optimization algorithm a smaller space in which to search for the parameters. This reduction applies to activation functions with the following scaling property: for any $\alpha\geq 0$, $\sigma(\alpha\cdot y) = \gamma(\alpha) \sigma(y)$, where $\gamma$ depends only on $\alpha$. The binary activation function \cite{mcculloch1943logical}, one of the first activation functions used, satisfies this property with $\gamma\equiv 1$. The rectified linear unit (ReLU) \cite{nair2010rectified, glorot2011deep}, one of the most widely used activation functions nowadays, satisfies this property with $\gamma(\alpha) = \alpha$. For NNs with activation functions of this type, we show that they can be equivalently trained in a reduced parameter space. More specifically, let $d$ be the length of the weight vector $\mathbf{w}$.
Instead of training $\mathbf{w}$ in ${\mathbb R}^d$, the weights can be equivalently trained as unit vectors with $\|\mathbf{w}\|=1$, i.e., $\mathbf{w}\in \mathbb{S}^{d-1}$, the unit sphere in ${\mathbb R}^d$. Moreover, if one is interested in approximating a function defined on a bounded domain $D$, the threshold can also be trained in a bounded interval, $b\in [-X_B, X_B]$ with $X_B=\sup_{\mathbf{x}\in D} \|\mathbf{x}\|$, as opposed to the entire real line $b\in {\mathbb R}$. It is well known that the standard NN with a single hidden layer is a universal approximator, cf. \cite{pinkus1999, barron1993universal, hornik1991approximation}. Here we prove that our new formulation in the reduced parameter space is also a universal approximator, in the sense that its span is dense in $C({\mathbb R}^d)$. We then further extend the parameter constraints to NNs with multiple hidden layers.

The major advantage of the constraints on the weights and thresholds is that they significantly reduce the search space for these parameters during training. Consequently, they eliminate a potentially large number of undesirable local minima that may cause the training optimization algorithm to terminate prematurely, which is one of the reasons for unsatisfactory training results. We then present examples of function approximation to verify this numerically. Using the same network structure, the same optimization solver, and identical random initializations, our numerical tests show that the training results in the new reduced parameter space are notably better than those from the standard network. More importantly, the training in the reduced parameter space is much more robust with respect to initialization.

This paper is organized as follows. In Section \ref{sec:single}, we first derive the constraints on the parameters using a network with a single hidden layer, while enforcing the equivalence of the network.
In Section \ref{sec:proof}, we prove that the constrained NN formulation remains a universal approximator. In Section \ref{sec:multiple}, we present the constraints for NNs with multiple hidden layers. Finally in Section \ref{sec:examples}, we present numerical experiments to demonstrate the improvement of the network training using the reduced parameter space. We emphasize that this paper is not concerned with any particular training algorithm. Therefore, in our numerical tests we used the most standard optimization algorithm from Matlab\textregistered \section*{Acknowledgment} \clearpage \bibliographystyle{plain} \section{Constraints for NNs with Multiple Hidden Layers} \label{sec:multiple} We now generalize the previous result to feedforward NNs with multiple hidden layers. Let us again consider approximation of a multivariate function $f:D\to{\mathbb R}$, where $D\subseteq {\mathbb R}^d$ with $\sup_{\mathbf{x}\in D}\|\mathbf{x}\|=X_B<\infty$. Consider a feedforward NN with $M$ layers, $M\geq 3$, where $m=1$ is the input layer and $m=M$ the output layer. Let $J_m$, $m=1,\dots,M$ be the number of neurons in each layer. Obviously, we have $J_1=d$ and $J_M=1$ in our case. Let $\mathbf{y}^{(m)}\in {\mathbb R}^{J_m}$ be the output of the neurons in the $m$-th layer. Then, by following the notation from \cite{DuSwamy2014}, we can write \begin{equation}\label{MultNx} \begin{split} \mathbf{y}^{(1)} &= \mathbf{x}, \\ \mathbf{y}^{(m)} &= \sigma\left(\left[\mathbf{W}^{(m-1)}\right]^T \mathbf{y} ^{(m-1)}+\mathbf{b}^{(m)}\right), \qquad m=2,\dots, M-1,\\ \mathbf{y}^{(M)} &= \left[\mathbf{W}^{(M-1)}\right]^T \mathbf{y}^{(M-1)}. \end{split} \end{equation} where $\mathbf{W}^{(m-1)}\in {\mathbb R}^{J_{m-1}\times J_m}$ is the weight matrix and $\mathbf{b}^{(m)}$ is the threshold vector. 
In component form, the output of the $j$-th neuron in the $m$-th layer is \begin{equation} \label{NN_layer} y_j^{(m)} = \sigma\left( \left[\mathbf{w}_j^{(m-1)}\right]^T \mathbf{y}^{(m-1)} + b_j^{(m)}\right), \qquad j=1,\dots, J_m, \quad m=2,\dots, M-1, \end{equation} where $\mathbf{w}^{(m-1)}_{j}$ be the $j$-th column of $\mathbf{W}^{(m-1)}$. \subsection{Weight constraints} The derivation for the constraints on the weights vector $\mathbf{w}_j^{(m-1)}$ can be generalized directly from the single-layer case and we have the following weight constraints, \begin{equation} \label{weight_general} \left\|\mathbf{w}^{(m-1)}_{j}\right\| = 1, \qquad j=1,\dots, J_m, \quad m=2,\dots, M-1. \end{equation} \subsection{Threshold constraints} The constraints on the threshold $b_j^{(m)}$ depend on the bounds of the output from the previous layer $\mathbf{y}^{(m-1)}$. For the ReLU activation function \eqref{ReLU}, we derive from \eqref{NN_layer} that for $m=2,\dots,M$, \begin{equation} \begin{split} \left|y_j^{(m)}\right| &\leq \left| \left[\mathbf{w}_j^{(m-1)}\right]^T \mathbf{y}^{(m-1)} + b_j^{(m)}\right| \\ &\leq \left\|\mathbf{w}_j^{(m-1)}\right\| \left\|\mathbf{y}^{(m-1)}\right\| + \left|b_j^{(m)}\right| \\ &\leq \left\|\mathbf{y}^{(m-1)}\right\| + \left|b_j^{(m)}\right| \end{split} \end{equation} If the domain $D$ is bounded and with $X_B = \sup_{\mathbf{x}\in D} \|\mathbf{x}\| <+\infty$, then the constraints on the threshold can be recursively derived. Starting from $\|\mathbf{y}^{(1)}\| = \|\mathbf{x}\|\in [-X_B, X_B]$ and $b_j^{(2)}\in [-X_B,X_B]$, we then have \begin{equation} \label{b_general} b_j^{(m)} \in X_B\cdot[-2^{m-2}, 2^{m-2}], \qquad m=2,\dots,M-1, \quad j=1,\dots, J_m. \end{equation} For the binary activation function \eqref{step}, we derive from \eqref{NN_layer} that for $m=2,\dots,M-1$, \begin{equation} \left|y_j^{(m)}\right| \leq 1. 
\end{equation} Then, the bounds for the thresholds are \begin{equation} \begin{split} b_j^{(2)} &\in X_B\cdot[-1, 1], \qquad j=1,\dots, J_2,\\ b_j^{(m)} &\in [-1, 1], \qquad m=3,\dots,M-1, \quad j=1,\dots, J_m. \end{split} \end{equation} \section{Universal approximation property} \label{sec:proof} By the universal approximation property, we mean that the constrained formulations \eqref{Nnew} and \eqref{Nnew2} can approximate any continuous function. To establish this, we define the following set of functions on ${\mathbb R}^d$ \begin{equation} \mathcal{N} (\sigma;\Lambda,\Theta):=\textrm{Span}\left\{\sigma(\mathbf{w}\cdot\mathbf{x}+b): \mathbf{w}\in \Lambda,b\in \Theta \right\}, \end{equation} where $\Lambda\subseteq {\mathbb R}^d$ is the weight set and $\Theta\subseteq{\mathbb R}$ the threshold set. We write $\mathcal{N}_D(\sigma; \Lambda, \Theta)$ for the same set of functions restricted to a bounded domain $D\subseteq {\mathbb R}^d$. Under this definition, the standard NN expression and our two constrained expressions correspond to the following spaces: \begin{equation} \begin{split} \eqref{Nx} &\in \mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R}), \\ \eqref{Nnew} &\in \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R}), \\ \eqref{Nnew2} &\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, [-X_B, X_B]), \end{split} \end{equation} where $\mathbb{S}^{d-1}$ is the unit sphere in ${\mathbb R}^d$, since the constrained weights satisfy $\|\widetilde{\w}\|=1$. The universal approximation property for the standard unconstrained NN expression \eqref{Nx} has been well studied, cf. \cite{cybenko1989approximation, hornik1991approximation, mhaskar1992approximation, barron1993universal, leshno1993multilayer}, and \cite{pinkus1999} for a survey. Here we cite the following result for $\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})$. 
\begin{theorem}[\cite{leshno1993multilayer}, Theorem 1] \label{propLeshno} Let $\sigma\in L_{loc}^\infty({\mathbb R})$ be a function whose set of discontinuities has Lebesgue measure zero. Then the set $\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})$ is dense in $C({\mathbb R}^d)$, in the topology of uniform convergence on compact sets, if and only if $\sigma$ is not an algebraic polynomial almost everywhere. \end{theorem} \subsection{Universal approximation property of \eqref{Nnew}} \label{sec:Dense1} We now examine the universal approximation property of the first constrained formulation \eqref{Nnew}. \begin{theorem} \label{thm1} Let $\sigma$ be the binary function \eqref{step} or the ReLU function \eqref{ReLU}. Then \begin{equation} \label{thm1eq} \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})=\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R}), \end{equation} and the set $ \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})$ is dense in $C({\mathbb R}^d)$, in the topology of uniform convergence on compact sets. \end{theorem} \begin{proof} Obviously, we have $\mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})\subseteq\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})$. By Proposition \ref{prop:eq1}, any element $N(\mathbf{x})\in \mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R}) $ can be reformulated as an element $\widetilde{N}(\mathbf{x})\in \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})$. Therefore, we have $\mathcal{N}(\sigma; {\mathbb R}^d, {\mathbb R})\subseteq \mathcal{N}(\sigma; \mathbb{S}^{d-1}, {\mathbb R})$. This proves the first statement \eqref{thm1eq}. Given the equivalence \eqref{thm1eq}, the denseness result immediately follows from Theorem \ref{propLeshno}, as both the ReLU and the binary activation functions are not polynomials and are continuous everywhere except on a set of zero Lebesgue measure. 
\end{proof} \subsection{Universal approximation property of \eqref{Nnew2}} We now examine the second constrained NN expression \eqref{Nnew2}. \begin{theorem} Let $\sigma$ be the binary \eqref{step} or the ReLU \eqref{ReLU} activation function. Let $\mathbf{x}\in D\subset {\mathbb R}^d$, where $D$ is closed and bounded with $X_B=\sup_{\mathbf{x}\in D} \|\mathbf{x}\|$. Define $\Theta=[-X_B, X_B]$. Then \begin{equation} \label{thm2eq} \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)=\mathcal{N}_D(\sigma;{\mathbb R}^{d}, {\mathbb R}). \end{equation} Furthermore, $\mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ is dense in $C(D)$ in the topology of uniform convergence. \end{theorem} \begin{proof} Obviously, we have $\mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)\subseteq \mathcal{N}_D(\sigma; {\mathbb R}^d, {\mathbb R})$. On the other hand, Proposition \ref{prop:eq2} establishes that for any element $N(\mathbf{x})\in \mathcal{N}_D(\sigma; {\mathbb R}^{d}, {\mathbb R})$, there exists an equivalent formulation $\widehat{N}(\mathbf{x})\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ for any $\mathbf{x}\in D$. This implies $\mathcal{N}_D(\sigma;{\mathbb R}^d, {\mathbb R})\subseteq \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$. We then have \eqref{thm2eq}. For the denseness of $\mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ in $C(D)$, let us consider any function $f\in C(D)$. By the Tietze extension theorem (cf. \cite{folland1999}), there exists an extension $F\in C({\mathbb R}^d)$ with $F(\mathbf{x})=f(\mathbf{x})$ for any $\mathbf{x}\in D$. Then, the denseness result for the standard unconstrained NN expression (Theorem \ref{propLeshno}) implies that, for the compact set $E=\{\mathbf{x}\in {\mathbb R}^d: \|\mathbf{x}\|\leq X_B\}$ and any given $\epsilon >0$, there exists $N(\mathbf{x})\in \mathcal{N}(\sigma; {\mathbb R}^{d}, {\mathbb R})$ such that \begin{equation*} \sup_{\mathbf{x}\in E} |N(\mathbf{x})-F(\mathbf{x})|\leq\epsilon. 
\end{equation*} By Proposition \ref{prop:eq2}, there exists an equivalent constrained NN expression $\widehat{N}(\mathbf{x})\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ such that $\widehat{N}(\mathbf{x})=N(\mathbf{x})$ for any $\mathbf{x}\in D$. We then immediately have, for any $f\in C(D)$ and any given $\epsilon>0$, there exists $\widehat{N}(\mathbf{x})\in \mathcal{N}_D(\sigma; \mathbb{S}^{d-1}, \Theta)$ such that \begin{equation*} \sup_{\mathbf{x}\in D} |\widehat{N}(\mathbf{x})-f(\mathbf{x})|=\sup_{\mathbf{x}\in D}|N(\mathbf{x})-f(\mathbf{x})|\leq \sup_{\mathbf{x}\in E}|N(\mathbf{x})-F(\mathbf{x})|\leq \epsilon. \end{equation*} The proof is now complete. \end{proof} \section{Constraints for NN with Single Hidden Layer} \label{sec:single} Let us first consider the standard NN with a single hidden layer, in the context of approximating an unknown response function $f:{\mathbb R}^d \to {\mathbb R}$. The NN approximation using activation function $\sigma$ takes the following form, \begin{equation} \label{Nx} N(\mathbf{x}) = \sum_{j=1}^N c_j\sigma(\mathbf{w}_j\cdot\mathbf{x} + b_j),\quad \mathbf{x}\in{\mathbb R}^d, \end{equation} where $\mathbf{w}_j\in{\mathbb R}^d$ is the weight vector, $b_j\in{\mathbb R}$ the threshold, $c_j\in{\mathbb R}$, and $N$ is the width of the network. We restrict our discussion to the following two activation functions. One is the rectified linear unit (ReLU), \begin{equation} \label{ReLU} \sigma(x) = \max(0, x). \end{equation} The other one is the binary activation function, also known as the Heaviside or step function, \begin{equation} \label{step} \sigma(x) = \left\{ \begin{array}{ll} 1, & x> 0,\\ 0, & x<0, \end{array} \right. \end{equation} with $\sigma(0)=\frac12$. 
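Both activation functions, and the identities \eqref{step-1} and \eqref{ReLU-1} listed in the remark that follows, are easy to check numerically. A minimal sketch (assuming NumPy; the sample points and the value of $\alpha$ are arbitrary):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def binary(x):
    # Heaviside step with the convention sigma(0) = 1/2
    return np.heaviside(x, 0.5)

x = np.array([-1.5, -0.3, 0.0, 0.3, 1.5])

# sigma(x) + sigma(-x) = 1 for the binary activation (identity (step-1));
# the convention sigma(0) = 1/2 makes this hold at x = 0 as well
binary_sum = binary(x) + binary(-x)

# L(x; alpha) = alpha for the ReLU activation (identity (ReLU-1))
alpha = 0.8
L = (relu(x + alpha / 2) - relu(x - alpha / 2)
     + relu(-x + alpha / 2) - relu(-x - alpha / 2))
```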
We remark that these two activation functions satisfy the following scaling property: For any $y\in {\mathbb R}$ and $\alpha\geq 0$, there exists a factor $\gamma(\alpha)\geq 0$ such that \begin{equation} \label{scale} \sigma (\alpha\cdot y) = \gamma(\alpha)\sigma(y), \end{equation} where $\gamma$ depends only on $\alpha$ and not on $y$. The ReLU satisfies this property with $\gamma(\alpha) = \alpha$, which is known as {\em scale invariance}. The binary activation function satisfies the scaling property with $\gamma(\alpha)\equiv 1$. We also list the following properties, which are important for the method we present in this paper. \begin{itemize} \item For the binary activation function \eqref{step}, for any $x\in{\mathbb R}$, \begin{equation} \label{step-1} \sigma(x) +\sigma(-x)\equiv 1. \end{equation} \item For the ReLU activation function \eqref{ReLU}, for any $x\in{\mathbb R}$ and any $\alpha$, \begin{equation} \label{ReLU-1} L(x;\alpha):=\sigma\left(x+\frac{\alpha}{2}\right) -\sigma\left(x-\frac{\alpha}{2}\right) + \sigma\left(-x+\frac{\alpha}{2}\right) - \sigma\left(-x-\frac{\alpha}{2}\right)\equiv\alpha. \end{equation} \end{itemize} \subsection{Weight Constraints} We first show that the training of \eqref{Nx} can be conducted equivalently with the constraint $\|\mathbf{w}_j\|=1$, i.e., with unit weight vectors. This is a straightforward consequence of the scaling property \eqref{scale} of the activation function. It effectively reduces the search space for the weights from $\mathbf{w}_j\in{\mathbb R}^d$ to $\mathbf{w}_j\in\mathbb{S}^{d-1}$, the unit sphere in ${\mathbb R}^d$. \begin{proposition} \label{prop:eq1} Any neural network construction \eqref{Nx} using the ReLU \eqref{ReLU} or the binary \eqref{step} activation function has an equivalent form \begin{equation} \label{Nnew} \widetilde{N}(\mathbf{x}) = \sum_{j=1}^{\widetilde{N}} \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x} + \widetilde{b}_j), \quad \|\widetilde{\w}_j\|=1. 
\end{equation} \end{proposition} \begin{proof} Let us first assume $\|\mathbf{w}_j\|\neq 0$ for all $j=1,\dots,N$. We then have \begin{equation*} \begin{split} N(\mathbf{x}) &= \sum_{j=1}^N c_j\sigma(\mathbf{w}_j\cdot\mathbf{x} + b_j) \\ & =\sum_{j=1}^N c_j \sigma\left(\|\mathbf{w}_j\|\left(\frac{\mathbf{w}_j}{\|\mathbf{w}_j\|}\cdot\mathbf{x} + \frac{b_j}{\|\mathbf{w}_j\|}\right)\right) \\ &= \sum_{j=1}^N c_j \gamma(\|\mathbf{w}_j\|) \sigma\left(\frac{\mathbf{w}_j}{\|\mathbf{w}_j\|}\cdot\mathbf{x} + \frac{b_j}{\|\mathbf{w}_j\|}\right), \end{split} \end{equation*} where $\gamma$ is the factor in the scaling property \eqref{scale} satisfied by both the ReLU and binary activation functions. Upon defining \begin{equation*} \widetilde{c}_j = c_j \gamma(\|\mathbf{w}_j\|), \quad \widetilde{\w}_j = \frac{\mathbf{w}_j}{\|\mathbf{w}_j\|}, \quad \widetilde{b}_j = \frac{b_j}{\|\mathbf{w}_j\|}, \end{equation*} we have the following equivalent form, as in \eqref{Nnew}, \begin{equation*} N(\mathbf{x})=\sum_{j=1}^N \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x}+\widetilde{b}_j). \end{equation*} Next, let us consider the case $\|\mathbf{w}_j\|=0$ for some $j\in\{1,\dots,N\}$. The contribution of this term to the construction \eqref{Nx} is then \begin{equation*} c_j \sigma(b_j) =\left\{ \begin{array}{ll} \widehat{c}_j, & b_j\geq 0,\\ 0, & b_j<0, \end{array} \right. \end{equation*} where $\widehat{c}_j = c_j$ for the binary activation function (with $\widehat{c}_j = c_j/2$ when $b_j=0$) and $\widehat{c}_j = c_j b_j$ for the ReLU function. Therefore, if $b_j<0$, this term in \eqref{Nx} vanishes. If $b_j\geq 0$, the contributions of these terms in \eqref{Nx} are constants, which can be represented by a combination of neuron outputs using the relations \eqref{step-1} and \eqref{ReLU-1} for the binary and ReLU activation functions, respectively. We thus obtain a new representation in the form of \eqref{Nnew} that includes all the terms in the original expression \eqref{Nx}. This completes the proof. 
\end{proof} The proof immediately gives us another equivalent form, obtained by first combining all the constant terms from \eqref{Nx} into a single constant and then explicitly including it in the expression. \begin{corollary} Any neural network construction \eqref{Nx} using the ReLU \eqref{ReLU} or the binary \eqref{step} activation function has an equivalent form \begin{equation} \label{Nnew0} \widetilde{N}(\mathbf{x}) = \widetilde{c}_0 + \sum_{j=1}^{\widetilde{N}} \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x} + \widetilde{b}_j), \quad \|\widetilde{\w}_j\|=1. \end{equation} \end{corollary} \subsection{Threshold Constraints} We now present constraints on the thresholds in \eqref{Nx}. These are applicable when the target function $f$ is defined on a bounded domain, i.e., $f:D\to {\mathbb R}$, with $D\subset {\mathbb R}^d$ bounded. This is often the case in practice. We demonstrate that any NN \eqref{Nx} can be trained equivalently with each threshold confined to a bounded interval. This (significantly) reduces the search space for the thresholds. \begin{proposition} \label{prop:eq2} With the ReLU \eqref{ReLU} or the binary \eqref{step} activation function, let \eqref{Nx} be an approximator to a function $f: D\to {\mathbb R}$, where $D\subset {\mathbb R}^d$ is a bounded domain. Let \begin{equation} \label{Xb} X_B = \sup_{\mathbf{x}\in D} \|\mathbf{x}\|. \end{equation} Then, \eqref{Nx} has an equivalent form \begin{equation} \label{Nnew2} \widehat{N}(\mathbf{x}) = \sum_{j=1}^{\widehat{N}} \hat{c}_j\sigma(\hat{\w}_j\cdot\mathbf{x} + \hat{b}_j), \quad \|\hat{\w}_j\|=1, \quad -X_B\leq \hat{b}_j \leq X_B. \end{equation} \end{proposition} \begin{proof} Proposition \ref{prop:eq1} establishes that \eqref{Nx} has an equivalent form \eqref{Nnew} \begin{equation*} \widetilde{N}(\mathbf{x}) = \sum_{j=1}^{\widetilde{N}} \widetilde{c}_j\sigma(\widetilde{\w}_j\cdot\mathbf{x} + \widetilde{b}_j), \quad \|\widetilde{\w}_j\|=1. 
\end{equation*} Since $\widetilde{\w}_j$ is a unit vector, we have \begin{equation} \widetilde{\w}_j\cdot \mathbf{x} \in \left[-\|\mathbf{x}\|,\|\mathbf{x}\|\right] \subseteq [-X_B, X_B], \end{equation} where the bound \eqref{Xb} is used. Let us first consider the case $\widetilde{b}_j < -X_B$. Then, for all $\mathbf{x}\in D$, we have $\widetilde{\w}_j\cdot \mathbf{x} + \widetilde{b}_j <0$ and hence $\sigma(\widetilde{\w}_j\cdot \mathbf{x} + \widetilde{b}_j) = 0$, for both the ReLU and binary activation functions. Consequently, this term has no contribution to the approximation and can be eliminated. Next, let us consider the case $\widetilde{b}_j > X_B$, in which $\widetilde{\w}_j\cdot \mathbf{x} + \widetilde{b}_j >0$ for all $\mathbf{x}\in D$. Let $J=\{j_1,\dots, j_L\}$, $L\geq 1$, be the set of indices of the terms in \eqref{Nnew} that satisfy this condition. We then have $\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell} >0$, for all $\mathbf{x}\in D$ and $\ell=1,\dots, L$. We now show that the net contribution of these terms in \eqref{Nnew} is included in the equivalent form \eqref{Nnew2}. \begin{enumerate} \item For the binary activation function \eqref{step}, the contribution of these terms to the approximation \eqref{Nnew} is \begin{equation*} N_J(\mathbf{x}) = \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \sigma\left(\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell}\right) = \sum_{\ell=1}^L \widetilde{c}_{j_\ell} = \mathrm{const}. \end{equation*} Again, using the relation \eqref{step-1}, any constant can be expressed by a combination of binary activation terms with thresholds $\widetilde{b}_j = 0$. Such terms are already included in \eqref{Nnew2}. 
\item For the ReLU activation \eqref{ReLU}, the contribution of these terms to \eqref{Nnew} is \begin{equation} \begin{split} N_J(\mathbf{x}) &= \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \sigma\left(\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell}\right) \\ &= \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \left(\widetilde{\w}_{j_\ell}\cdot \mathbf{x} + \widetilde{b}_{j_\ell}\right) \\ &= \sum_{\ell=1}^L \left(\widetilde{c}_{j_\ell} \widetilde{\w}_{j_\ell}\right)\cdot \mathbf{x} + \sum_{\ell=1}^L \widetilde{c}_{j_\ell} \widetilde{b}_{j_\ell} \\ &= \widetilde{\w}^*_J \cdot \mathbf{x} + b^*_J \\ &= \sigma(\widetilde{\w}^*_J \cdot \mathbf{x}) - \sigma(-\widetilde{\w}^*_J \cdot \mathbf{x}) + b^*_J, \end{split} \end{equation} where the last equality follows from the simple property of the ReLU function, $\sigma(y) - \sigma(-y) = y$. Using Proposition \ref{prop:eq1}, the first two terms then have an equivalent form using the unit weight $\widetilde{\w}_J^*/\|\widetilde{\w}_J^*\|$ and zero threshold, which is included in \eqref{Nnew2}. For the constant $b_J^*$, we again invoke the relation \eqref{ReLU-1} and represent it by $\frac{b_J^*}{\alpha} L(\hat{\w}_J^*\cdot\mathbf{x}; \alpha)$, where $\hat{\w}_J^*$ is an arbitrary unit vector and $0<\alpha<X_B$. This expression consists of a collection of terms (the four terms in \eqref{ReLU-1}), all of which are included in \eqref{Nnew2}. This completes the proof. \end{enumerate} \end{proof} The equivalence between the standard NN expression \eqref{Nx} and the constrained expression \eqref{Nnew2} indicates that the NN training can be conducted in a reduced parameter space. The weight vector $\mathbf{w}_j$ of each neuron can be trained on $\mathbb{S}^{d-1}$, the unit sphere in ${\mathbb R}^d$, as opposed to the entire space ${\mathbb R}^d$. Each threshold can be trained in the bounded interval $[-X_B, X_B]$, as opposed to the entire real line ${\mathbb R}$. 
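In practice, one way to exploit the reduced space is to keep the iterates feasible during training. The following sketch is our own illustration with a hypothetical helper \verb|project_params|; the text prescribes the constraint set, not any particular enforcement scheme.

```python
import numpy as np

def project_params(W, b, X_B):
    """Project each neuron's weight vector onto the unit sphere S^{d-1}
    and clip each threshold to [-X_B, X_B].  W has shape (d, N), one
    column per neuron; b has shape (N,).  Hypothetical helper."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    norms[norms == 0.0] = 1.0      # leave any zero weight column unchanged
    return W / norms, np.clip(b, -X_B, X_B)

# example: two neurons in d = 2
W = np.array([[3.0, 0.6], [4.0, 0.8]])
b = np.array([5.0, -0.1])
Wp, bp = project_params(W, b, X_B=2.0)
```

Such a projection could be applied after each optimizer step to keep the parameters inside $\mathbb{S}^{d-1}\times[-X_B,X_B]$.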
The reduction of the parameter space can eliminate many potential local minima and therefore enhance the performance of numerical optimization during training. We remark that the equivalent form in the reduced parameter space \eqref{Nnew2} may have a different number of ``active'' neurons than the original unrestricted form \eqref{Nx}.
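As a numerical sanity check of the equivalence in Proposition \ref{prop:eq1}, the following sketch (assuming NumPy, with randomly chosen parameters) verifies that normalizing each ReLU neuron's weight vector, with $c_j$ and $b_j$ rescaled accordingly, leaves the network output unchanged.

```python
import numpy as np

relu = lambda x: np.maximum(0.0, x)

def net(x, W, b, c):
    # single-hidden-layer network N(x) = sum_j c_j sigma(w_j . x + b_j)
    return float(c @ relu(W.T @ x + b))

rng = np.random.default_rng(1)
d, width = 3, 8
W = rng.standard_normal((d, width))
b = rng.standard_normal(width)
c = rng.standard_normal(width)

# equivalent form with weights on the unit sphere (ReLU: gamma(alpha) = alpha):
# c~_j = c_j ||w_j||,  w~_j = w_j / ||w_j||,  b~_j = b_j / ||w_j||
n = np.linalg.norm(W, axis=0)
Wt, bt, ct = W / n, b / n, c * n

x = rng.standard_normal(d)
original, normalized = net(x, W, b, c), net(x, Wt, bt, ct)
```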
https://arxiv.org/abs/1102.3394
Poincare Analyticity and the Complete Variational Equations
\section*{Abstract} According to a theorem of Poincar\'{e}, the solutions to differential equations are analytic functions of (and therefore have Taylor expansions in) the initial conditions and various parameters providing the right sides of the differential equations are analytic in the variables, the time, and the parameters. We describe how these Taylor expansions may be obtained, to any desired order, by integration of what we call the {\em complete} variational equations. As illustrated in a Duffing equation stroboscopic map example, these Taylor expansions, truncated at an appropriate order thereby providing polynomial approximations, can well reproduce the behavior (including infinite period doubling cascades and strange attractors) of the solutions of the underlying differential equations. \section{Introduction} \setcounter{equation}{0} In his prize-winning essay [1,2] and subsequent monumental work on celestial mechanics [3], Poincar\'{e} established and exploited the fact that solutions to differential equations are frequently analytic functions of the initial conditions and various parameters (should any occur). This result is often referred to as {\em Poincar\'{e} analyticity} or {\em Poincar\'{e}'s theorem} on analyticity. Specifically, consider any set of $m$ first-order differential equations of the form \begin{equation} \dot{z}_a = f_a(z_1,\cdots,z_m;t;\lambda_1,\cdots,\lambda_n),\quad a= 1,\cdots,m. \end{equation} Here $t$ is the independent variable, the $z_a$ are dependent variables, and the $\lambda_b$ are possible parameters. Let the quantities $z^0_a$ be initial conditions specified at some initial time $t=t^0$, \begin{equation} z_a(t^0)=z^0_a. 
\end{equation} Then, under mild conditions imposed on the functions $f_a$ that appear on the right side of (1.1) and thereby define the set of differential equations, there exists a \emph{unique} solution \begin{equation} z_a(t) = g_a(z^0_1, \cdots, z^0_m; t^0, t;\lambda_1,\cdots,\lambda_n), \ \ a = 1,m \end{equation} of (1.1) with the property \begin{equation} z_a(t^0) = g_a(z^0_1, \cdots, z^0_m; t^0, t^0;\lambda_1,\cdots,\lambda_n) = z^0_a, \ \ a = 1,m. \end{equation} Now assume that the functions $f_a$ are analytic (within some domain) in the quantities $z_a$, the time $t$, and the parameters $\lambda_b$. Then, according to Poincar\'{e}'s Theorem, the solution given by (1.3) will be analytic (again within some domain) in the initial conditions $z^0_a$, the times $t^0$ and $t$, and the parameters $\lambda_b$. Poincar\'{e} established this result on a case-by-case basis as needed using {\em Cauchy's} method of {\em majorants}. It is now more commonly established in general using {\em Picard iteration}, and appears as background material in many standard texts on ordinary differential equations [4]. If the solution $z_a(t)$ is analytic in the initial conditions $z^0_a$ and the parameters $\lambda_b$, then it is possible to expand it in the form of a Taylor series, with time-dependent coefficients, in the variables $z^0_a$ and $\lambda_b$. The aim of this paper is to describe how these Taylor coefficients can be found as solutions to what we will call the {\em complete variational} equations. To aid further discussion, it is useful to also rephrase our goal in the context of maps. Suppose we rewrite the set of first-order differential equations (1.1) in the more compact vector form \begin{equation} \dot{\mbox{\boldmath $z$}} = \mbox{\boldmath $f$} (\mbox{\boldmath $z$}; t;\mbox{\boldmath $\lambda$}). 
\end{equation} Then, again using vector notation, their solution can be written in the form \begin{equation} \mbox{\boldmath $z$}(t) = \mbox{\boldmath $g$} (\mbox{\boldmath $z$}^0; t^0,t;\mbox{\boldmath $\lambda$}). \end{equation} That is, the quantities $\mbox{\boldmath $z$}(t)$ at any time $t$ are uniquely specified by the initial quantities $\mbox{\boldmath $z$}^0$ given at the initial time $t^0$. We capitalize on this fact by introducing a slightly different notation. First, use $t^i$ instead of $t^0$ to denote the \emph{initial} time. Similarly use $\mbox{\boldmath $z$}^i$ to denote initial conditions by writing \begin{equation} \mbox{\boldmath $z$}^i = \mbox{\boldmath $z$}^0 = \mbox{\boldmath $z$}(t^i). \end{equation} Next, let $t^f$ be some \emph{final} time, and define final conditions $\mbox{\boldmath $z$}^f$ by writing \begin{equation} \mbox{\boldmath $z$}^f = \mbox{\boldmath $z$}(t^f). \end{equation} Then, with this notation, (1.6) can be rewritten in the form \begin{equation} \mbox{\boldmath $z$}^f = \mbox{\boldmath $g$} (\mbox{\boldmath $z$}^i; t^i,t^f;\mbox{\boldmath $\lambda$}). \end{equation} We now view (1.9) as a {\em map} that sends the initial conditions $\mbox{\boldmath $z$}^i$ to the final conditions $\mbox{\boldmath $z$}^f$. This map will be called the \emph{transfer map} between the times $t^i$ and $t^f$, and will be denoted by the symbol $\cal{M}$. What we have emphasized is that a set of first-order differential equations of the form (1.5) can be integrated to produce a transfer map $\cal{M}$. We express the fact that $\cal{M}$ sends $\mbox{\boldmath $z$}^i$ to $\mbox{\boldmath $z$}^f$ in symbols by writing \begin{equation} {\mbox{\boldmath $z$}^f} = {\cal{M}} {\mbox{\boldmath $z$}}^i, \end{equation} and illustrate this relation by the picture shown in Figure 1. 
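Numerically, the transfer map $\cal{M}$ can be realized with any ODE integrator, and integrating backward in time realizes its inverse. A minimal sketch (our own illustration, assuming NumPy and a classical fourth-order Runge--Kutta step), applied to $\dot{z}=-z$, whose exact map is $z^f=e^{-(t^f-t^i)}z^i$:

```python
import numpy as np

def transfer_map(f, zi, ti, tf, steps=1000):
    """Realize the transfer map M: z^i -> z^f for dz/dt = f(z, t) by
    classical Runge-Kutta integration; taking tf < ti integrates
    backward in time and therefore realizes the inverse map."""
    z, t = np.asarray(zi, dtype=float), ti
    h = (tf - ti) / steps
    for _ in range(steps):
        k1 = f(z, t)
        k2 = f(z + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(z + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(z + h * k3, t + h)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

f = lambda z, t: -z
zi = np.array([1.0])
zf = transfer_map(f, zi, 0.0, 2.0)         # forward map z^i -> z^f
zi_back = transfer_map(f, zf, 2.0, 0.0)    # backward map recovers z^i
```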
We also note that $\cal{M}$ is always invertible: Given $\mbox{\boldmath $z$}^f$, $t^f$, and $t^i$, we can always integrate (1.5) backward in time from the moment $t=t^f$ to the moment $t=t^i$ and thereby find the initial conditions $\mbox{\boldmath $z$}^i$. In the context of maps, our goal is to find a Taylor representation for $\cal M$. If parameters are present, we may wish to have an expansion in some or all of them as well. \begin{figure}[htp] \centering \includegraphics*[height=2.1in,angle=0]{Figure1} \caption{The transfer map $\cal{M}$ sends the initial conditions $\mbox{\boldmath $z$}^i$ to the final conditions $\mbox{\boldmath $z$}^f$.} \end{figure} The organization of this paper is as follows: Section 2 derives the complete variational equations without and with dependence on some or all parameters. Sections 3 and 4 describe their solution using forward and backward integration. As an example, Section 5 treats the Duffing equation and describes the properties of an associated stroboscopic map $\cal{M}$. Section 6 sets up the complete variational equations for the Duffing equation, including some parameter dependence, and studies some of the properties of the map obtained by solving these variational equations numerically. There we will witness the remarkable fact that a truncated Taylor map approximation to $\cal M$ can reproduce the infinite period-doubling Feigenbaum cascade and associated strange attractor exhibited by the exact $\cal M$. Section 7 describes how the variational equations can be solved numerically. A final section provides a concluding summary. \section{Complete Variational Equations} \setcounter{equation}{0} This section derives the complete variational equations, first without parameter dependence, and then with parameter dependence. \subsection{Case of No or Ignored Parameter Dependence} Suppose the equations (1.1) do not depend on any parameters $\lambda_b$ or we do not wish to make expansions in them. 
We may then suppress their appearance to rewrite (1.1) in the form \begin{equation} \dot{z}_a = f_a(z,t), \ \ a = 1,m. \end{equation} Suppose that $z^d(t)$ is some given {\em design} solution to these equations, and we wish to study solutions in the vicinity of this solution. That is, we wish to make expansions about this solution. Introduce deviation variables $\zeta_a$ by writing \begin{equation} z_a = z^d_a + \zeta_a . \end{equation} Then the equations of motion (2.1) take the form \begin{equation} \dot{z}^d_a + \dot{\zeta}_a = f_a (z^d + \zeta ,t). \end{equation} In accord with our hypothesis of analyticity, assume that the right side of (2.3) is analytic about $z^d$. Then we may write the relation \begin{equation} f_a (z^d + \zeta ,t) = f_a (z^d,t) + g_a(z^d,t,\zeta ) \end{equation} where each $g_a$ has a Taylor expansion of the form \begin{equation} g_a(z^d,t,\zeta ) = \sum_r g_a^r(t) G_r(\zeta ). \end{equation} Here the $G_r(\zeta )$ are the various monomials in the $m$ variables $\zeta_b$ labeled by an {\em index} $r$ using some convenient labeling scheme, and the $g^r_a$ are (generally) time-dependent coefficients which we call {\em forcing terms}.\footnote{Here and in what follows the quantities $g_a$ are not to be confused with those appearing in (1.3).} By construction, all the monomials occurring in the right side of (2.5) have degree one or greater. We note that the $g^r_a(t)$ are known once $z^d(t)$ is given. By assumption, $z^d$ is a solution of (2.3) and therefore satisfies the relations \begin{equation} \dot{z}^d_a = f_a(z^d,t). \end{equation} It follows that the deviation variables satisfy the equations of motion \begin{equation} \dot{\zeta}_a = g_a(z^d,t,\zeta ) = \sum_r g_a^r(t)G_r(\zeta ). \end{equation} These equations are evidently generalizations of the usual first-degree (linear) variational equations, and will be called the {\em complete variational} equations. 
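For a concrete scalar example ($m=1$), take $\dot{z}=z+z^2$. Expanding about a design solution $z^d(t)$ gives $f(z^d+\zeta)=f(z^d)+(1+2z^d)\zeta+\zeta^2$, so the forcing terms are $g^1(t)=1+2z^d(t)$ and $g^2(t)=1$. The sketch below (our own illustration, assuming NumPy; step counts and initial data are arbitrary) integrates the design orbit and the complete variational equation jointly and checks that $z^d(t)+\zeta(t)$ agrees with direct integration of the full equation:

```python
import numpy as np

def rk4(f, y0, t0, t1, steps=2000):
    # classical fourth-order Runge-Kutta integration of dy/dt = f(y, t)
    y, t = np.asarray(y0, dtype=float), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(y, t); k2 = f(y + 0.5*h*k1, t + 0.5*h)
        k3 = f(y + 0.5*h*k2, t + 0.5*h); k4 = f(y + h*k3, t + h)
        y = y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

# design orbit and deviation variable, integrated jointly:
# d/dt z^d = z^d + (z^d)^2,   d/dt zeta = (1 + 2 z^d) zeta + zeta^2
def coupled(y, t):
    zd, zeta = y
    return np.array([zd + zd**2, (1.0 + 2.0*zd)*zeta + zeta**2])

zd0, zeta0, T = 0.1, 0.05, 1.0
zd_T, zeta_T = rk4(coupled, [zd0, zeta0], 0.0, T)

# direct integration of the full equation from z(0) = z^d(0) + zeta(0)
z_T = rk4(lambda z, t: z + z**2, [zd0 + zeta0], 0.0, T)[0]
```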
Consider the solution to the complete variational equations with {\em initial} conditions $\zeta^i_b$ specified at some initial time $t^i$. Following Poincar\'{e}, we expect that this solution will be an analytic function of the initial conditions $\zeta^i_b$. Also, since the right side of (2.7) vanishes when all $\zeta_b = 0$ [all the monomials $G_r$ in (2.7) have degree one or greater], $\zeta (t) = 0$ is a solution to (2.7). It follows that the solution to the complete variational equations has a Taylor expansion of the form \begin{equation} \zeta_a(t) = \sum_r h^r_a(t) G_r(\zeta^i) \end{equation} where the $h^r_a(t)$ are functions to be determined, and again all the monomials $G_r$ that occur have degree one or greater. When the quantities $h^r_a(t)$ are evaluated at some {\em final} time $t^f$, (2.8) provides a representation of the transfer map ${\cal M}$ about the design orbit in the Taylor form \begin{equation} \zeta^f_a = \zeta_a(t^f) = \sum_r h^r_a(t^f) G_r(\zeta^i). \end{equation} \subsection{Complete Variational Equations with Parameter Dependence} What can be done if we desire to have an expansion in parameters as well? Suppose that there are $n$ such parameters, or that we wish to have expansions in $n$ of them. The work of the previous section can be extended to handle this case by means of a simple trick: View the $n$ parameters as additional {\em variables}, and ``augment'' the set of differential equations by additional differential equations that ensure these additional variables remain constant. In detail, suppose we label the parameters so that those in which we wish to have an expansion are $\lambda_1,\cdots,\lambda_n$. Introduce $n$ additional variables $z_{m+1},\cdots, z_\ell$, where $\ell=m+n$, by making the replacements \begin{equation} \lambda_b\rightarrow z_{m+b}, \ \ b=1,n. \end{equation} Next augment the equations (1.1) by $n$ more of the form \begin{equation} \dot{z}_a=0, \ \ a=m+1,\ell. 
\end{equation} By this device we can rewrite the equations (1.1) in the form \begin{equation} \dot{z}_a = f_a(z,t), \ \ a = 1,\ell \end{equation} with the understanding that \begin{equation} f_a=f_a(z;t;\lambda^{\rm{rem}}),\ \ a=1,m, \end{equation} where $\lambda^{\rm{rem}}$ denotes the other {\em remaining} parameters, if any, and \begin{equation} f_a=0, \ \ a=m+1,\ell. \end{equation} For the first $m$ equations we impose, as before, the initial conditions \begin{equation} z_a(t^i)=z_a^i, \ \ a=1,m. \end{equation} For the remaining equations we impose the initial conditions \begin{equation} z_a(t^i)=\lambda_{a-m}, \ \ a=m+1,\ell. \end{equation} Note that the relations (2.14) then ensure that the $z_a$ for $a>m$ retain these values for all $t$. To continue, let $z^d(t)$ be some design solution. Then, by construction, we have the result \begin{equation} z^d_a(t)=\lambda^d_{a-m}=\lambda_{a-m}, \ \ a=m+1,\ell. \end{equation} Again introduce deviation variables by writing \begin{equation} z_a=z_a^d+\zeta_a, \ \ a=1,\ell. \end{equation} Then the quantities $\zeta_a$ for $a>m$ will describe deviations in the parameter values. Moreover, because we have assumed analyticity in the parameters as well, relations of the forms (2.4) and (2.5) will continue to hold except that the $G_r(\zeta)$ are now the various monomials in the $\ell$ variables $\zeta_b$. Relations of the forms (2.6) and (2.7) will also hold with the provisos (2.13) and (2.14) and \begin{equation} g_a^r(t)=0,\ \ a=m+1,\ell. \end{equation} Therefore, we will only need to integrate the equations of the forms (2.6) and (2.7) for $a\le m$. Finally, relations of the form (2.9) will continue to hold for $a\le m$ supplemented by the relations \begin{equation} \zeta_a^f=\zeta_a^i, \ \ a=m+1,\ell. 
\end{equation} Since the $G_r(\zeta^i)$ now involve $\ell$ variables, the relations of the form (2.9) will provide an expansion of the final quantities $\zeta^f_a$ (for $a\le m$) in terms of the initial quantities $\zeta^i_a$ (for $a\le m$) and also the parameter deviations $\zeta_a^i$ with $a=m+1,\ell$. \section{Solution of Complete Variational Equations Using Forward Integration} \setcounter{equation}{0} This section and the next describe two methods for the solution of the complete variational equations. This section describes the method that employs integration forward in time, and is the conceptually simpler of the two methods. \subsection{Method of Forward Integration} To determine the functions $h^r_a$, let us insert the expansion (2.8) into both sides of (2.7). With $r^{\prime \prime}$ as a dummy index, the left side becomes the relation \begin{equation} \dot{\zeta}_a = \sum_{r^{\prime \prime}} \dot{h}^{r^{\prime \prime}}_a(t) G_{r^{\prime \prime}}(\zeta^i). \end{equation} For the right side we find the intermediate result \begin{equation} \sum_r g^r_a(t) G_r(\zeta ) = \sum_r g^r_a(t) \ G_r\!\!\left( \sum_{r^{\prime}} h_1^{r^{\prime}}(t) G_{r^{\prime}}(\zeta^i), \cdots \sum_{r^{\prime}} h^{r^{\prime}}_m(t) G_{r^{\prime}}(\zeta^i)\right) . \end{equation} However, since the $G_r$ are monomials, there are relations of the form \begin{equation} G_r\!\!\left( \sum_{r^{\prime}} h_1^{r^{\prime}}(t) G_{r^{\prime}}(\zeta^i), \cdots \sum_{r^{\prime}} h^{r^{\prime}}_m(t) G_{r^{\prime}}(\zeta^i)\right) = \sum_{r^{\prime \prime}} U^{r^{\prime \prime}}_r (h^s_n) G_{r^{\prime \prime}} (\zeta^i), \end{equation} and therefore the right side of (2.7) can be rewritten in the form \begin{equation} \sum_r g^r_a(t) G_r(\zeta ) = \sum_{r^{\prime \prime}} \sum_r g^r_a(t) U^{r^{\prime \prime}}_r (h^s_n) G_{r^{\prime \prime}} (\zeta^i) . 
\end{equation} The notation $U^{r^{\prime \prime}}_r (h^s_n)$ is employed to indicate that these quantities might (at this stage of the argument) depend on all the $h^s_n$ with $n$ ranging from $1$ to $m$, and $s$ ranging over all possible values. Now, in accord with (2.7), equate the right sides of (3.1) and (3.4) to obtain the relation \begin{equation} \sum_{r^{\prime \prime}} \dot{h}^{r^{\prime \prime}}_a (t) G_{r^{\prime \prime}}(\zeta^i) = \sum_{r^{\prime \prime}} \sum_r g^r_a(t) U^{r^{\prime \prime}}_r (h^s_n) G_{r^{\prime \prime}} (\zeta^i). \end{equation} Since the monomials $G_{r^{\prime \prime}} (\zeta^i)$ are linearly independent, we must have the result \begin{equation} \dot{h}^{r^{\prime \prime}}_a (t) = \sum_r g^r_a(t) U^{r^{\prime \prime}}_r(h^s_n). \end{equation} We have found a set of differential equations that must be satisfied by the $h^r_a$. Moreover, from (2.8) there is the relation \begin{equation} \zeta_a(t^i) = \sum_r h^r_a(t^i) G_r(\zeta^i) = \zeta^i_a. \end{equation} Thus, all the functions $h^r_a(t)$ have a known value at the initial time $t^i$, and indeed are mostly initially zero. When the equations (3.6) are integrated {\em forward} from $t=t^i$ to $t=t^f$ to obtain the quantities $h^r_a(t^f)$, the result is the transfer map ${\cal M}$ about the design orbit in the Taylor form (2.9). Let us now examine the structure of this set of differential equations. A key observation is that the functions $U^{r^{\prime \prime}}_r(h^s_n)$ are {\em universal}. That is, as (3.3) indicates, they describe certain {\em combinatorial} properties of monomials. They depend only on the dimension $m$ of the system under study, and are the {\em same} for all such systems. As (2.7) shows, what are peculiar to any given system are the forcing terms $g^r_a(t)$. \subsection{Application of Forward Integration to the Two-variable Case} To see what is going on in more detail, it is instructive to work out the first nontrivial case, that with $m=2$. 
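Before doing so by hand, it may be helpful to see that the forward scheme (3.6)--(3.7) can also be realized directly in code. The sketch below is a minimal illustration, not part of the text's development: rather than tabulating the universal functions $U^{r^{\prime\prime}}_r$ explicitly, it evaluates the right side of (2.7) by composing truncated polynomials, which by (3.3) amounts to the same thing. The system integrated is a toy example assumed only for illustration, $\dot{\zeta}_1 = \zeta_2$, $\dot{\zeta}_2 = -\zeta_1 - \zeta_1^2$, whose nonzero forcing coefficients in the labeling of Table 1 are $g^2_1 = 1$, $g^1_2 = -1$, $g^3_2 = -1$.

```python
import math

# Minimal sketch of the forward scheme (3.6)-(3.7), truncated at degree D.
# Toy system (an illustrative assumption, not from the text):
#   zeta1' = zeta2,  zeta2' = -zeta1 - zeta1**2,
# so the nonzero forcing coefficients are g^2_1 = 1, g^1_2 = -1, g^3_2 = -1.
m, D = 2, 2                       # number of variables, truncation degree

# forcing terms g^r_a: for each a, {exponents of G_r : coefficient (constant here)}
g = [{(0, 1): 1.0},
     {(1, 0): -1.0, (2, 0): -1.0}]

def pmul(p, q):
    """Product of truncated polynomials, stored as {exponent tuple: coeff}."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(i + j for i, j in zip(e1, e2))
            if sum(e) <= D:
                out[e] = out.get(e, 0.0) + c1 * c2
    return out

def compose(exp, polys):
    """G_r(poly_1, ..., poly_m) for the monomial with exponents exp, cf. (3.3)."""
    out = {(0,) * m: 1.0}
    for b, k in enumerate(exp):
        for _ in range(k):
            out = pmul(out, polys[b])
    return out

def rhs(h):
    """Right side of (3.6); h[a] is the polynomial zeta_a = sum_r h^r_a G_r."""
    dh = []
    for a in range(m):
        acc = {}
        for exp, coeff in g[a].items():
            for mono, c in compose(exp, h).items():
                acc[mono] = acc.get(mono, 0.0) + coeff * c
        dh.append(acc)
    return dh

def axpy(h, k, s):                # h + s*k on lists of polynomials
    out = []
    for p, q in zip(h, k):
        r = dict(p)
        for e, c in q.items():
            r[e] = r.get(e, 0.0) + s * c
        out.append(r)
    return out

def rk4_step(h, dt):              # one Runge-Kutta step for the h equations
    k1 = rhs(h)
    k2 = rhs(axpy(h, k1, dt / 2))
    k3 = rhs(axpy(h, k2, dt / 2))
    k4 = rhs(axpy(h, k3, dt))
    for k, s in ((k1, dt / 6), (k2, dt / 3), (k3, dt / 3), (k4, dt / 6)):
        h = axpy(h, k, s)
    return h

h = [{(1, 0): 1.0}, {(0, 1): 1.0}]   # initial condition (3.7): h^r_a(t^i) = delta^r_a
dt, nsteps = 1e-3, 1000              # integrate forward from t^i = 0 to t^f = 1
for _ in range(nsteps):
    h = rk4_step(h, dt)

# Compare the degree-2 Taylor map against direct integration of the toy system.
zi = (0.05, 0.02)
def f(z):
    return (z[1], -z[0] - z[0] ** 2)
z = zi
for _ in range(nsteps):
    k1 = f(z)
    k2 = f(tuple(w + dt / 2 * k for w, k in zip(z, k1)))
    k3 = f(tuple(w + dt / 2 * k for w, k in zip(z, k2)))
    k4 = f(tuple(w + dt * k for w, k in zip(z, k3)))
    z = tuple(w + dt / 6 * (p1 + 2 * p2 + 2 * p3 + p4)
              for w, p1, p2, p3, p4 in zip(z, k1, k2, k3, k4))

pred = [sum(c * zi[0] ** e[0] * zi[1] ** e[1] for e, c in h[a].items())
        for a in range(m)]
```

At the end of the integration the dictionaries `h[a]` hold the coefficients $h^r_a(t^f)$ of the Taylor map (2.9) through degree two, and `pred` agrees with the directly integrated trajectory up to the (cubic) truncation error.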
For two variables, all monomials in $(\zeta_1,\zeta_2)$ are of the form $(\zeta_1)^{j_1}(\zeta_2)^{j_2}$. Here, to simplify notation, we have dropped the superscript $i$. Table 1 below shows a convenient way of labeling such monomials, and for this labeling we write \begin{equation} G_r (\zeta ) = (\zeta_1)^{j_1} (\zeta_2)^{j_2} \end{equation} with \begin{equation} j_1=j_1(r) \ \ {\rm{and}} \ \ j_2=j_2(r). \end{equation} \begin{table}[ht] \caption{A labeling scheme for monomials in two variables.} \begin{center} \begin{tabular}{cccc} $r$ & $j_1$ & $j_2$ & $D$ \\ \hline 1 & 1 & 0 & 1 \\ 2 & 0 & 1 & 1 \\ 3 & 2 & 0 & 2 \\ 4 & 1 & 1 & 2 \\ 5 & 0 & 2 & 2 \\ 6 & 3 & 0 & 3 \\ 7 & 2 & 1 & 3 \\ 8 & 1 & 2 & 3 \\ 9 & 0 & 3 & 3 \\ \end{tabular} \end{center} \end{table} \noindent Thus, for example, \begin{equation} G_1 = \zeta_1, \end{equation} \begin{equation} G_2 = \zeta_2, \end{equation} \begin{equation} G_3 = \zeta_1^2, \end{equation} \begin{equation} G_4 = \zeta_1\zeta_2, \end{equation} \begin{equation} G_5 = \zeta_2^2, \ {\rm etc}. \end{equation} For more detail about monomial labeling schemes, see Section 7. Let us now compute the first few $U^{r^{\prime \prime}}_r(h^s_n)$. From (3.3) and (3.10) we find the relation \begin{equation} G_1 \!\!\left( \sum_{r^{\prime}} h^{r^{\prime}}_1 G_{r^{\prime}}(\zeta), \sum_{r^{\prime}} h^{r^{\prime}}_2 G_{r^{\prime}}(\zeta)\right) = \sum_{r^{\prime}} h^{r^{\prime}}_1 G_{r^{\prime}}(\zeta) = \sum_{r^{\prime \prime}} U^{r^{\prime \prime}}_1 G_{r^{\prime \prime}}(\zeta). \end{equation} It follows that there is the result \begin{equation} U^{r^{\prime \prime}}_1 = h^{r^{\prime \prime}}_1. \end{equation} Similarly, from (3.3) and (3.11), we find the result \begin{equation} U^{r^{\prime \prime}}_2 = h^{r^{\prime \prime}}_2.
\end{equation} From (3.3) and (3.12) we find the relation \begin{eqnarray} G_3 &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left( \sum_{r^{\prime}} h^{r^{\prime}}_1 G_{r^{\prime}}(\zeta), \sum_{r^{\prime}} h^{r^{\prime}}_2 G_{r^{\prime}}(\zeta)\right) = \left( \sum_{r^{\prime}} h^{r^{\prime}}_1 G_{r^{\prime}}(\zeta)\right)^2 \nonumber \\ &=& \sum_{s,t} h_1^sh^t_1 G_s(\zeta ) G_t(\zeta )=\sum_{r^{\prime \prime}} U^{r^{\prime \prime}}_3 G_{r^{\prime \prime}}(\zeta). \end{eqnarray} Use of (3.18) and inspection of (3.10) through (3.14) yields the results \begin{equation} U^1_3 = 0, \end{equation} \begin{equation} U^2_3 = 0, \end{equation} \begin{equation} U^3_3 = (h^1_1)^2, \end{equation} \begin{equation} U^4_3 = 2h^1_1h^2_1, \end{equation} \begin{equation} U^5_3 = (h^2_1)^2. \end{equation} From (3.3) and (3.13) we find the relation \begin{eqnarray} G_4 &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left( \sum_{r^{\prime}} h^{r^{\prime}}_1 G_{r^{\prime}}(\zeta), \sum_{r^{\prime}} h^{r^{\prime}}_2 G_{r^{\prime}}(\zeta)\right) = \left( \sum_{r^{\prime}} h_1^{r^{\prime}} G_{r^{\prime}}(\zeta )\right) \left( \sum_{r^{\prime}} h_2^{r^{\prime}} G_{r^{\prime}}(\zeta )\right) \nonumber \\ &=& \sum_{s,t} h_1^s h_2^t G_s(\zeta ) G_t(\zeta ) = \sum_{r^{\prime \prime}} U^{r^{\prime \prime}}_4 G_{r^{\prime \prime}}(\zeta). \end{eqnarray} It follows that there are the results \begin{equation} U^1_4 = 0, \end{equation} \begin{equation} U^2_4 = 0, \end{equation} \begin{equation} U^3_4 = h^1_1h_2^1, \end{equation} \begin{equation} U^4_4 = h^1_1h^2_2 + h^2_1h^1_2, \end{equation} \begin{equation} U^5_4 = h^2_1 h^2_2. \end{equation} Finally, from (3.3) and (3.14), we find the results \begin{equation} U^1_5 = 0, \end{equation} \begin{equation} U^2_5 = 0, \end{equation} \begin{equation} U^3_5 = (h^1_2)^2, \end{equation} \begin{equation} U^4_5 = 2h^1_2h^2_2, \end{equation} \begin{equation} U^5_5 = (h^2_2)^2. \end{equation} Two features now become apparent. 
First, as in Table 1, let $D(r)$ be the {\em degree} of the monomial with label $r$. Then, from the examples worked out, and quite generally from (3.3), we see that there is the relation \begin{equation} U^{r^{\prime \prime}}_r = 0 \ {\rm when} \ D(r) > D(r^{\prime \prime}). \end{equation} It follows that the sum on the right side of (3.6) always terminates. Second, for the arguments $h^s_n$ possibly appearing in $U^{r^{\prime \prime}}_r (h^s_n)$, we see that there is the relation \begin{equation} D(s) \leq D(r^{\prime \prime}). \end{equation} It follows, again see (3.6), that the right side of the differential equation for any $h^{r^{\prime \prime}}_a$ involves only the $h^s_n$ for which (3.36) holds. Therefore, to determine the coefficients $h^r_a(t^f)$ of the Taylor expansion (2.9) through terms of some degree $D$, it is only necessary to integrate a finite number of equations, and the right sides of these equations involve only the coefficients for this degree and lower. For example, to continue our discussion of the case of two variables, the equations (3.6) take the explicit form \begin{equation} \dot{h}^1_1(t) = \sum^2_{r=1} g^r_1(t) U^1_r = g^1_1(t) h^1_1(t) + g^2_1(t) h^1_2(t), \end{equation} \begin{equation} \dot{h}^1_2(t) = \sum^2_{r=1} g^r_2(t) U^1_r = g^1_2(t) h^1_1(t) + g^2_2(t) h^1_2(t), \end{equation} \begin{equation} \dot{h}^2_1(t) = \sum^2_{r=1} g^r_1(t) U^2_r = g^1_1(t) h^2_1(t) + g^2_1(t) h^2_2(t), \end{equation} \begin{equation} \dot{h}^2_2(t) = \sum^2_{r=1} g^r_2(t) U^2_r = g^1_2(t) h^2_1(t) + g^2_2(t) h^2_2(t), \end{equation} \begin{eqnarray} \dot{h}^3_1(t) &=& \sum^5_{r=1} g^r_1(t) U^3_r \nonumber \\ &=& g^1_1(t) h^3_1(t) + g^2_1(t) h^3_2(t) + g^3_1(t) [h^1_1(t)]^2 \nonumber\\ & & +g^4_1(t)h^1_1(t) h^1_2(t) + g^5_1(t) [h^1_2(t)]^2, \end{eqnarray} \begin{eqnarray} \dot{h}^3_2(t) &=& \sum^5_{r=1} g^r_2(t) U^3_r \nonumber \\ &=& g^1_2(t) h^3_1(t) + g^2_2(t) h^3_2(t) + g^3_2(t) [h^1_1(t)]^2 \nonumber\\ & & +g^4_2(t)h^1_1(t) h^1_2(t) + g^5_2(t) [h^1_2(t)]^2, \end{eqnarray} \begin{eqnarray} \dot{h}^4_1(t) &=& \sum^5_{r=1} g^r_1(t) U^4_r \nonumber \\ &=& g^1_1(t) h^4_1(t) + g^2_1(t) h^4_2(t) + 2g^3_1(t) h^1_1(t) h^2_1(t) \nonumber\\ & & +g^4_1(t)[h^1_1(t) h^2_2(t) + h^2_1(t)h^1_2(t)] + 2g^5_1(t) h^1_2(t)h^2_2(t), \end{eqnarray} \begin{eqnarray} \dot{h}^4_2(t) &=& \sum^5_{r=1} g^r_2(t) U^4_r \nonumber \\ &=& g^1_2(t) h^4_1(t) + g^2_2(t) h^4_2(t) + 2g^3_2(t) h^1_1(t) h^2_1(t) \nonumber\\ & &+ g^4_2(t)[h^1_1(t) h^2_2(t) + h^2_1(t)h^1_2(t)] + 2g^5_2(t) h^1_2(t)h^2_2(t), \end{eqnarray} \begin{eqnarray} \dot{h}^5_1(t) &=& \sum^5_{r=1} g^r_1(t) U^5_r \nonumber \\ &=& g^1_1(t) h^5_1(t) + g^2_1(t) h^5_2(t) + g^3_1(t) [h^2_1(t)]^2 \nonumber\\ & &+ g^4_1(t)h^2_1(t) h^2_2(t) + g^5_1(t) [h^2_2(t)]^2, \end{eqnarray} \begin{eqnarray} \dot{h}^5_2(t) &=& \sum^5_{r=1} g^r_2(t) U^5_r \nonumber \\ &=& g^1_2(t) h^5_1(t) + g^2_2(t) h^5_2(t) + g^3_2(t) [h^2_1(t)]^2 \nonumber\\ & &+ g^4_2(t)h^2_1(t) h^2_2(t) + g^5_2(t) [h^2_2(t)]^2, \ {\rm etc}. \end{eqnarray} And, from (3.7), we have the initial conditions \begin{equation} h^r_a(t^i) = \delta^r_a. \end{equation} We see that if we desire only the degree one terms in the expansion (2.8), then it is only necessary to integrate the equations (3.37) through (3.40) with the initial conditions (3.47). A moment's reflection shows that so doing amounts to integrating the first-degree variational equations. We also observe that if we desire only the degree one and degree two terms in the expansion (2.8), then it is only necessary to integrate the equations (3.37) through (3.46) with the initial conditions (3.47), etc. \section{Solution of Complete Variational Equations Using Backward Integration} \setcounter{equation}{0} There is another method of determining the $h^r_a$ that is surprising, ingenious, and in some ways superior to that just described. It involves integrating backward in time [5].
\subsection{Method of Backward Integration} Let us rewrite (2.9) in the slightly more explicit form \begin{equation} \zeta^f_a = \sum_r h^r_a(t^i,t^f) G_r(\zeta^i) \end{equation} to indicate that there are two times involved, $t^i$ and $t^f$. From this perspective, (3.6) is a set of differential equations for the quantities $(\partial /\partial t)h^r_a(t^i,t)$ that is to be integrated and evaluated at $t=t^f$. An alternate procedure is to seek a set of differential equations for the quantities $(\partial /\partial \bar{t})h^r_a(\bar{t},t^f)$ that is to be integrated and evaluated at $\bar{t}=t^i$. As a first step in considering this alternative, rewrite (4.1) in the form \begin{equation} \zeta^f_a = \sum_r h^r_a(\bar{t},t^f) G_r(\zeta (\bar{t})). \end{equation} Now reason as follows: \ If $\bar{t}$ is varied and at the same time the quantities $\zeta (\bar{t})$ are varied (evolve) so as to remain on the solution to (2.7) having final conditions $\zeta^f$, then the quantities $\zeta^f$ must remain {\em unchanged}. Consequently, there is the differential equation result \begin{equation} 0 = d\zeta^f_a/d\bar{t} = \sum_r [(\partial /\partial \bar{t}) h^r_a(\bar{t},t^f)] G_r(\zeta (\bar{t})) + \sum_r h^r_a (\bar{t},t^f) (d/d\bar{t}) G_r(\zeta (\bar{t})). \end{equation} Let us introduce the notation $\dot{h}^r_a(\bar{t},t^f)$ for $(\partial /\partial \bar{t}) h_a^r(\bar{t},t^f)$ so that the first term on the right side of (4.3) can be rewritten in the form \begin{equation} \sum_r [(\partial /\partial \bar{t}) h_a^r(\bar{t},t^f)] G_r(\zeta ) = \sum_r \dot{h}^r_a G_r(\zeta ). \end{equation} Next, begin working on the second term on the right side of (4.3) by replacing the summation index $r$ by the dummy index $r^{\prime}$, \begin{equation} \sum_r h^r_a(\bar{t},t^f) (d/d\bar{t}) G_r(\zeta (\bar{t})) = \sum_{r^{\prime}} h^{r^{\prime}}_a(\bar{t},t^f) (d/d\bar{t}) G_{r^{\prime}}(\zeta (\bar{t})).
\end{equation} Now carry out the indicated differentiation using the chain rule and the relation (2.7) which describes how the quantities $\zeta$ vary along a solution, \begin{equation} (d/d\bar{t}) G_{r^{\prime}} (\zeta (\bar{t})) = \sum_b (\partial G_{r^{\prime}}/\partial \zeta_b) (d\zeta_b/d\bar{t}) = \sum_{br^{\prime \prime}} (\partial G_{r^{\prime}}/\partial \zeta_b) g^{r^{\prime \prime}}_b(\bar{t}) G_{r^{\prime \prime}} (\zeta (\bar{t})). \end{equation} Watch closely: \ Since the $G_r$ are simply standard monomials in the $\zeta$, there must be relations of the form \begin{equation} [(\partial /\partial \zeta_b) G_{r^{\prime}} (\zeta )] G_{r^{\prime \prime}} (\zeta ) = \sum_r C^r_{br^{\prime}r^{\prime \prime}} G_r(\zeta ) \end{equation} where the $C^r_{br^{\prime}r^{\prime \prime}}$ are {\em universal constant coefficients} that describe certain combinatorial properties of monomials. As a result, the second term on the right side of (4.3) can be written in the form \begin{equation} \sum_{r^{\prime}} h^{r^{\prime}}_a (\bar{t},t^f) (d/d\bar{t}) G_{r^{\prime}} (\zeta (\bar{t})) = \sum_r G_r(\zeta ) \sum_{br^{\prime}r^{\prime \prime}} C^r_{br^{\prime}r^{\prime \prime}} g^{r^{\prime \prime}}_b(\bar{t}) h^{r^{\prime}}_a (\bar{t},t^f). \end{equation} Since the monomials $G_r$ are linearly independent, the relations (4.3) through (4.8) imply the result \begin{equation} \dot{h}^r_a (\bar{t},t^f) = -\sum_{br^{\prime}r^{\prime \prime}} C^r_{br^{\prime}r^{\prime \prime}} g^{r^{\prime \prime}}_b(\bar{t}) h^{r^{\prime}}_a (\bar{t},t^f). \end{equation} This result is a set of differential equations for the $h^r_a$ that are to be integrated from $\bar{t} = t^f$ {\em back} to $\bar{t} = t^i$. 
Also, evaluating (4.2) for $\bar{t} = t^f$ gives the result \begin{equation} \zeta^f_a = \sum_r h^r_a (t^f,t^f) G_r(\zeta^f), \end{equation} from which it follows that (with the usual polynomial labeling) the $h^r_a$ satisfy the final conditions \begin{equation} h^r_a(t^f,t^f) = \delta^r_a. \end{equation} Therefore the solution to (4.9) is uniquely defined. Finally, it is evident from the definition (4.7) that the coefficients $C^r_{br^{\prime}r^{\prime \prime}}$ satisfy the relation \begin{equation} C^r_{br^{\prime}r^{\prime \prime}} = 0 \ {\rm unless} \ [D(r^{\prime})-1] + D(r^{\prime \prime}) = D(r). \end{equation} Therefore, since $D(r^{\prime \prime}) \geq 1$ in (4.9), it follows from (4.12) that the only $h^{r^{\prime}}_a$ that occur on the right side of (4.9) are those that satisfy \begin{equation} D(r^{\prime}) \leq D(r). \end{equation} Similarly, the only $g^{r^{\prime \prime}}_b$ that occur are those that satisfy \begin{equation} D(r^{\prime \prime}) \leq D(r). \end{equation} Therefore, as before, to determine the coefficients $h^r_a$ of the Taylor expansion (2.9) through terms of some degree $D$, it is only necessary to integrate a finite number of equations, and the right sides of these equations involve only the coefficients for this degree and lower. Comparison of the differential equation sets (3.6) and (4.9) shows that the latter has the remarkable property of being {\em linear} in the unknown quantities $h^r_a$. This feature means that the evaluation of the right side of (4.9) involves only the retrieval of certain universal constants $C^r_{br^{\prime}r^{\prime \prime}}$ and straightforward multiplication and addition. By contrast, the use of (3.6) requires evaluation of the fairly complicated {\em nonlinear} functions $U^{r^{\prime \prime}}_r(h^s_n)$. Finally, it is easier to ensure that a numerical integration procedure is working properly for a set of linear differential equations than it is for a nonlinear set.
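The backward scheme lends itself to an equally compact numerical sketch. In the illustration below (which again uses the toy system $\dot{\zeta}_1 = \zeta_2$, $\dot{\zeta}_2 = -\zeta_1 - \zeta_1^2$, an assumption for demonstration only), the universal coefficients $C^r_{br^{\prime}r^{\prime\prime}}$ of (4.7) are generated combinatorially, since $[\partial G_{r^{\prime}}/\partial\zeta_b]\,G_{r^{\prime\prime}}$ is the single monomial with exponents $e^{\prime} - \delta_b + e^{\prime\prime}$ and coefficient $e^{\prime}_b$; the linear system (4.9) is then integrated backward from the final conditions (4.11).

```python
import math
from itertools import product

# Minimal sketch of the backward scheme (4.7)-(4.11), truncated at degree D.
# Toy system (an illustrative assumption, not from the text):
#   zeta1' = zeta2,  zeta2' = -zeta1 - zeta1**2.
m, D = 2, 2
monos = [e for e in product(range(D + 1), repeat=m) if 1 <= sum(e) <= D]
idx = {e: r for r, e in enumerate(monos)}

# forcing coefficients g^{r''}_b, keyed by (exponents of G_{r''}, b)
g = {((0, 1), 0): 1.0, ((1, 0), 1): -1.0, ((2, 0), 1): -1.0}

# sparse table of universal coefficients: entries (r, b, r', r'', C), cf. (4.7):
# [d G_{r'}/d zeta_b] G_{r''} = e'_b * zeta^(e' - delta_b + e'')
C = []
for b in range(m):
    for rp, ep in enumerate(monos):
        if ep[b] == 0:
            continue
        for rpp, epp in enumerate(monos):
            e = tuple(ep[i] - (i == b) + epp[i] for i in range(m))
            if sum(e) <= D:
                C.append((idx[e], b, rp, rpp, float(ep[b])))

def rhs(h):
    """Right side of (4.9); h[r'][a] holds the coefficient h^{r'}_a(tbar, t^f)."""
    dh = [[0.0] * m for _ in monos]
    for r, b, rp, rpp, c in C:
        gv = g.get((monos[rpp], b), 0.0)
        if gv:
            for a in range(m):
                dh[r][a] -= c * gv * h[rp][a]
    return dh

def axpy(h, k, s):
    return [[h[r][a] + s * k[r][a] for a in range(m)] for r in range(len(monos))]

def rk4_step(h, dt):
    k1 = rhs(h)
    k2 = rhs(axpy(h, k1, dt / 2))
    k3 = rhs(axpy(h, k2, dt / 2))
    k4 = rhs(axpy(h, k3, dt))
    for k, s in ((k1, dt / 6), (k2, dt / 3), (k3, dt / 3), (k4, dt / 6)):
        h = axpy(h, k, s)
    return h

# final conditions (4.11), then integrate *backward* from tbar = t^f = 1 to t^i = 0
h = [[1.0 if monos[r] == tuple(int(i == a) for i in range(m)) else 0.0
      for a in range(m)] for r in range(len(monos))]
dt, nsteps = -1e-3, 1000
for _ in range(nsteps):
    h = rk4_step(h, dt)

# check the resulting degree-2 Taylor map against direct forward integration
zi = (0.05, 0.02)
def f(z):
    return (z[1], -z[0] - z[0] ** 2)
z, dtf = zi, 1e-3
for _ in range(nsteps):
    k1 = f(z)
    k2 = f(tuple(w + dtf / 2 * k for w, k in zip(z, k1)))
    k3 = f(tuple(w + dtf / 2 * k for w, k in zip(z, k2)))
    k4 = f(tuple(w + dtf * k for w, k in zip(z, k3)))
    z = tuple(w + dtf / 6 * (p1 + 2 * p2 + 2 * p3 + p4)
              for w, p1, p2, p3, p4 in zip(z, k1, k2, k3, k4))

pred = [sum(h[r][a] * zi[0] ** monos[r][0] * zi[1] ** monos[r][1]
            for r in range(len(monos))) for a in range(m)]
```

As noted above, the right side of (4.9) involves only table lookups, multiplications, and additions, which is visible here: `rhs` is a single pass over the sparse table `C`.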
The only complication in the use of (4.9) is that the equations must be integrated backwards in $\bar{t}$. Correspondingly, the equations (2.6) for the design solution must also be integrated backwards since they supply the required quantities $g^r_a$ through use of (2.4) and (2.5). This is no problem if the final quantities $z^d(t^f)$ are known. However, if only the initial quantities $z^d(t^i)$ are known, then the equations (2.6) for $z^d$ must first be integrated forward in time to find the final quantities $z^d(t^f)$. \subsection{The Two-variable Case Revisited} For clarity, let us also apply this second method to the two-variable case. Table 2 shows the nonzero values of $C^r_{br^{\prime}r^{\prime \prime}}$ for $r \in [1,5]$ obtained using (3.10) through (3.14) and (4.7). Note that the rules (4.12) hold. Use of this Table shows that in the two-variable case the equations (4.9) take the form \begin{equation} \dot{h}^1_1(\bar{t},t^f) = - \ g^1_1(\bar{t}) h^1_1(\bar{t},t^f) - g^1_2(\bar{t}) h^2_1(\bar{t},t^f), \end{equation} \begin{equation} \dot{h}^1_2(\bar{t},t^f) = - \ g^1_1(\bar{t}) h^1_2(\bar{t},t^f) - g^1_2(\bar{t}) h^2_2(\bar{t},t^f), \end{equation} \begin{equation} \dot{h}^2_1(\bar{t},t^f) = - \ g^2_1(\bar{t}) h^1_1(\bar{t},t^f) - g^2_2(\bar{t}) h^2_1(\bar{t},t^f), \end{equation} \begin{equation} \dot{h}^2_2(\bar{t},t^f) = - \ g^2_1(\bar{t}) h^1_2(\bar{t},t^f) - g^2_2(\bar{t}) h^2_2(\bar{t},t^f), \end{equation} \begin{equation} \dot{h}^3_1(\bar{t},t^f) = - \ 2g^1_1(\bar{t}) h^3_1(\bar{t},t^f) - g^3_1(\bar{t}) h^1_1(\bar{t},t^f) -g^1_2(\bar{t}) h^4_1(\bar{t},t^f) - g^3_2(\bar{t}) h^2_1(\bar{t},t^f), \end{equation} \begin{equation} \dot{h}^3_2(\bar{t},t^f) = - \ 2g^1_1(\bar{t}) h^3_2(\bar{t},t^f) - g^3_1(\bar{t}) h^1_2(\bar{t},t^f) -g^1_2(\bar{t}) h^4_2(\bar{t},t^f) - g^3_2(\bar{t}) h^2_2(\bar{t},t^f), \end{equation} \begin{eqnarray} \dot{h}^4_1(\bar{t},t^f) &=& - \ g^1_1(\bar{t}) h^4_1(\bar{t},t^f) - 2g^2_1(\bar{t}) h^3_1(\bar{t},t^f)
-g^4_1(\bar{t}) h^1_1(\bar{t},t^f) \nonumber \\ & & -2g^1_2(\bar{t}) h^5_1(\bar{t},t^f) - g^2_2(\bar{t}) h^4_1(\bar{t},t^f) - g^4_2(\bar{t}) h^2_1(\bar{t},t^f), \end{eqnarray} \begin{eqnarray} \dot{h}^4_2(\bar{t},t^f) &=& - \ g^1_1(\bar{t}) h^4_2(\bar{t},t^f) - 2g^2_1(\bar{t}) h^3_2(\bar{t},t^f) -g^4_1(\bar{t}) h^1_2(\bar{t},t^f) \nonumber \\ & &- 2g^1_2(\bar{t}) h^5_2(\bar{t},t^f) - g^2_2(\bar{t}) h^4_2(\bar{t},t^f) - g^4_2(\bar{t}) h^2_2(\bar{t},t^f), \end{eqnarray} \begin{equation} \dot{h}^5_1(\bar{t},t^f) = - \ g^2_1(\bar{t}) h^4_1(\bar{t},t^f) - g^5_1(\bar{t}) h^1_1(\bar{t},t^f) - 2g^2_2(\bar{t}) h^5_1(\bar{t},t^f) - g^5_2(\bar{t}) h^2_1(\bar{t},t^f), \end{equation} \begin{equation} \dot{h}^5_2(\bar{t},t^f) = - \ g^2_1(\bar{t}) h^4_2(\bar{t},t^f) - g^5_1(\bar{t}) h^1_2(\bar{t},t^f) - 2g^2_2(\bar{t}) h^5_2(\bar{t},t^f) - g^5_2(\bar{t}) h^2_2(\bar{t},t^f), \ {\rm etc.} \end{equation} As advertised, the right sides of (4.15) through (4.24) are indeed simpler than those of (3.37) through (3.46). \begin{table}[htp] \caption{Nonzero values of $C^r_{br^{\prime}r^{\prime \prime}}$ for $r\in [1,5]$ in the two-variable case.} \begin{center} \begin{tabular}{ccccc} $r$ & $b$ & $r^{\prime}$ & $r^{\prime \prime}$ & $C$ \\ \hline 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 1 & 1 \\ 2 & 1 & 1 & 2 & 1 \\ 2 & 2 & 2 & 2 & 1 \\ 3 & 1 & 1 & 3 & 1 \\ 3 & 1 & 3 & 1 & 2 \\ 3 & 2 & 2 & 3 & 1 \\ 3 & 2 & 4 & 1 & 1 \\ 4 & 1 & 1 & 4 & 1 \\ 4 & 1 & 3 & 2 & 2 \\ 4 & 1 & 4 & 1 & 1 \\ 4 & 2 & 2 & 4 & 1 \\ 4 & 2 & 4 & 2 & 1 \\ 4 & 2 & 5 & 1 & 2 \\ 5 & 1 & 1 & 5 & 1 \\ 5 & 1 & 4 & 2 & 1 \\ 5 & 2 & 2 & 5 & 1 \\ 5 & 2 & 5 & 2 & 2 \\ \end{tabular} \end{center} \end{table} \section{Duffing Equation Example} \setcounter{equation}{0} \subsection{Introduction} As an example application, this section studies some aspects of the {\em Duffing} equation. The behavior of the driven Duffing oscillator, like that of generic nonlinear systems, is enormously complicated.
Consequently, we will only be able to touch on some of the highlights of this fascinating problem. Duffing's equation describes the behavior of a periodically driven damped {\em nonlinear} oscillator governed by the equation of motion \begin{equation} \ddot{x} + a\dot{x} + bx + cx^3 = d \cos (\Omega t + \psi). \end{equation} Here $\psi$ is an arbitrary phase factor that is often set to zero. For our purposes it is more convenient to set \begin{equation} \psi = \pi/2. \end{equation} Evidently any particular choice of $\psi$ simply results in a shift of the origin in time, and this shift has no physical consequence since the left side of (5.1) is independent of time. We assume $b,c>0$, which is the case of a positive hard spring restoring force.\footnote{Other authors consider other cases, particularly the `double well' case $b<0$ and $c>0$.} We make these assumptions because we want the Duffing oscillator to behave like an ordinary harmonic oscillator when the amplitude is small, and we want the motion to be bounded away from infinity when the amplitude is large. Then, by a suitable choice of time and length scales that introduces new variables $q$ and ${\tau}$, the equation of motion can be brought to the form \begin{equation} \ddot{q} + 2\beta \dot{q} + q + q^3 = -\epsilon \sin \omega \tau , \end{equation} where now a dot denotes $d/d\tau$ and we have made use of (5.2). In this form it is evident that there are 3 free parameters: $\beta$, $\epsilon$, and $\omega$. \subsection{Stroboscopic Map} While the Duffing equation is nonlinear, it does have the simplifying feature that the driving force is periodic with period \begin{equation} T = 2\pi /\omega . \end{equation} Let us convert (5.3) into a pair of first-order equations by making the definition \begin{equation} p = \dot{q}, \end{equation} with the result \[ \dot{q} = p, \] \begin{equation} \dot{p} = -2\beta p - q - q^3 - \epsilon \sin \omega \tau .
\end{equation} Let $q^0,p^0$ denote initial conditions at $\tau = 0$, and let $q^1,p^1$ be the final conditions resulting from integrating the pair (5.6) one full period to the time $\tau = T$. Let $\cal{M}$ denote the transfer map that relates $q^1,p^1$ to $q^0,p^0$. Then, using the notation $z=(q,p)$, we may write \begin{equation} z^1 = {\cal{M}} z^0. \end{equation} Suppose we now integrate for a second full period to find $q^2,p^2$. Since the right side of (5.6) is periodic, the rules for integrating from $\tau = T$ to $\tau = 2T$ are the same as the rules for integrating from $\tau = 0$ to $\tau = T$. Therefore we may write \begin{equation} z^2 = {\cal{M}}z^1 = {\cal{M}}^2z^0, \end{equation} and in general \begin{equation} z^{n+1} = {\cal{M}}z^n = {{\cal{M}}}^{n+1}z^0. \end{equation} We may regard the quantities $z^n$ as the result of viewing the motion in the light provided by a stroboscope that flashes at the times\footnote{Note that, with the choice (5.2) for $\psi$, the driving term described by the right side of (5.3) vanishes at the stroboscopic times $\tau^n$.} \begin{equation} \tau^n = nT. \end{equation} Because of the periodicity of the right side of the equations of motion, the rule for sending $z^n$ to $z^{n+1}$ over the intervals between successive flashes is always the same, namely $\cal{M}$. For these reasons $\cal{M}$ is called a \emph{stroboscopic map}. Despite the explicit time dependence in the equations of motion, because of periodicity we have been able to describe the long-term motion by the repeated application of a single fixed map. \subsection{Feigenbaum Diagram Overview} One way to study a map and analyze its properties, in this case the Duffing stroboscopic map, is to find its fixed points. When these fixed points are found, one can then display how they appear, move, and vanish as various parameters are varied. Such a display is often called a {\em Feigenbaum} diagram. 
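As an illustration, the stroboscopic map can be sampled numerically by integrating (5.6) over one driving period. The following minimal sketch (the integrator and its step count are illustrative choices, not prescribed by the text) constructs one application of $\cal M$ and then iterates it as in (5.9); because of damping, the iterates settle onto the attractor.

```python
import math

# One application of the stroboscopic map M of (5.7): integrate the pair (5.6)
# over one driving period T = 2*pi/omega with fixed-step RK4 (step count is an
# illustrative choice).
def duffing_map(z, beta, eps, omega, nsteps=800):
    def f(tau, z):
        q, p = z
        return (p, -2.0 * beta * p - q - q ** 3 - eps * math.sin(omega * tau))
    dt = 2.0 * math.pi / omega / nsteps
    tau = 0.0
    for _ in range(nsteps):
        k1 = f(tau, z)
        k2 = f(tau + dt / 2, tuple(w + dt / 2 * k for w, k in zip(z, k1)))
        k3 = f(tau + dt / 2, tuple(w + dt / 2 * k for w, k in zip(z, k2)))
        k4 = f(tau + dt, tuple(w + dt * k for w, k in zip(z, k3)))
        z = tuple(w + dt / 6 * (a + 2 * b + 2 * c + d)
                  for w, a, b, c, d in zip(z, k1, k2, k3, k4))
        tau += dt
    return z

# Eq. (5.9): repeated application of the single fixed map M.  With damping
# (beta > 0) the transients die out and z^n approaches a fixed point of M.
z = (0.0, 0.0)
for _ in range(200):
    z = duffing_map(z, beta=0.10, eps=0.15, omega=1.0)
znext = duffing_map(z, 0.10, 0.15, 1.0)   # z is now (numerically) a fixed point
```

At these parameter values ($\beta = 0.10$, $\epsilon = 0.15$, $\omega = 1$) the iteration converges to the single stable fixed point discussed in the next subsection.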
This subsection will present selected Feigenbaum diagrams, including the infinite period-doubling cascade and its associated strange attractor, for the stroboscopic map obtained by high-accuracy numerical integration of the equations of motion (5.6). They will be made by observing the behavior of fixed points as the driving frequency $\omega$ is varied. For simplicity, the damping parameter will be held constant at the value $\beta=0.10$. Various sample values will be used for the driving strength $\epsilon$.\footnote{Of course, one can also make Feigenbaum diagrams in which some other parameter, say $\epsilon$, is varied while the others, including $\omega$, are held fixed.} Let us begin with the case of small driving strength. When the driving strength is small, we know from energy considerations that the steady-state response will be small, and therefore the behavior of the steady-state solution will be much like that of the driven damped {\em linear} harmonic oscillator. That is, for small amplitude motion, the $q^3$ term in (5.3) will be negligible compared to the other terms. We also know that, because of damping, there will be only {\em one} steady-state solution, and therefore $\cal M$ has only {\em one} fixed point $z^f$ such that \begin{equation} {\cal M}z^f=z^f. \end{equation} Finally, again because of damping, we know that this fixed point is {\em stable}. That is, if $\cal M$ is applied repeatedly to a point near $z^f$, the result is a sequence of points that approach ever closer to $z^f$. For this reason a stable fixed point is also called an {\em attractor}. \subsubsection{A Simple Feigenbaum Diagram} Figure 2 shows the values of $q_f(\omega)$ for the case $\epsilon=0.150$, and Figure 3 shows $p_f(\omega)$. In the figures the phase-space axes are labeled as $q_\infty$ and $p_\infty$ to indicate that what are being displayed are steady-state values reached after a large number of applications of $\cal M$.
As anticipated, we observe from Figures 2 and 3 that the response is much like the resonance response of the driven damped linear harmonic oscillator.\footnote{It was the desire for $q_\infty$ to exhibit a resonance-like peak as a function of $\omega$ that dictated the choice (5.2) for $\psi$.} Note that the coefficient of $q$ in (5.3) is 1, and therefore at small amplitudes, where $q^3$ can be neglected, the Duffing oscillator described by (5.3) has a natural frequency near 1. Correspondingly, Figure 2 displays a large response when the driving frequency has the value $\omega\simeq1$. Observe, however, that the response, while similar, is not exactly like that of the driven damped linear harmonic oscillator. For example, the resonance peak at $\omega\simeq1$ is slightly tipped to the right, and there is also a small peak for $\omega\simeq1/3$. \begin{figure}[htp] \centering \includegraphics*[height=4.5in,angle=0]{Figure2} \caption{Feigenbaum diagram showing limiting values $q_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 0.15$) for the stroboscopic Duffing map.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[width=0.7\textwidth]{Figure3} \caption{Feigenbaum diagram showing limiting values $p_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 0.15$) for the stroboscopic Duffing map.} \end{figure} Our strategy for further exploration will be to increase the value of the driving strength $\epsilon$, all the while observing the stroboscopic Duffing map Feigenbaum diagram as a function of $\omega$. We hasten to add the disclaimer that the driven Duffing oscillator displays an enormously rich behavior that varies widely with the parameter values $\beta$, $\epsilon$, $\omega$, and we shall only be able to give a brief summary of some of it. Also, for brevity, we shall generally only display $q_f(\omega)$.
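A rudimentary version of a Feigenbaum diagram like Figure 2 can be computed directly from the stroboscopic map: for each $\omega$, iterate until the transients have died out and then record the surviving stroboscopic $q$ values. The sketch below is a minimal illustration (the integrator, transient lengths, and step counts are assumed choices, not taken from the text); sweeping `omega` over a fine grid and plotting the recorded values against it traces out the diagram.

```python
import math

# Stroboscopic Duffing map: integrate (5.6) over one period T = 2*pi/omega
# with fixed-step RK4 (step count is an illustrative choice).
def duffing_map(z, beta, eps, omega, nsteps=800):
    def f(tau, z):
        q, p = z
        return (p, -2.0 * beta * p - q - q ** 3 - eps * math.sin(omega * tau))
    dt = 2.0 * math.pi / omega / nsteps
    tau = 0.0
    for _ in range(nsteps):
        k1 = f(tau, z)
        k2 = f(tau + dt / 2, tuple(w + dt / 2 * k for w, k in zip(z, k1)))
        k3 = f(tau + dt / 2, tuple(w + dt / 2 * k for w, k in zip(z, k2)))
        k4 = f(tau + dt, tuple(w + dt * k for w, k in zip(z, k3)))
        z = tuple(w + dt / 6 * (a + 2 * b + 2 * c + d)
                  for w, a, b, c, d in zip(z, k1, k2, k3, k4))
        tau += dt
    return z

def q_infinity(omega, beta=0.10, eps=0.15, transients=150, keep=5):
    """Steady-state stroboscopic q values at a given driving frequency."""
    z = (0.0, 0.0)
    for _ in range(transients):          # discard transients
        z = duffing_map(z, beta, eps, omega)
    qs = []
    for _ in range(keep):                # record the surviving values
        z = duffing_map(z, beta, eps, omega)
        qs.append(z[0])
    return qs

# two sample frequencies: near the resonance peak and well above it
near, far = q_infinity(1.0), q_infinity(2.0)
```

A single repeated value of $q_\infty$ at a given $\omega$ signals a period-1 fixed point; after a period-doubling bifurcation the recorded values would alternate between two (then four, eight, ...) distinct values.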
\subsubsection{Saddle-Node (Blue-Sky) Bifurcations} Figure 4 shows the $q$ Feigenbaum diagram for the case of somewhat larger driving strength, $\epsilon=1.50$. For this driving strength the resonance peak, which previously occurred at $\omega\simeq1$, has shifted to a higher frequency and taken on a more complicated structure. There are now also noticeable resonances at lower frequencies, with the most prominent one being at $\omega\simeq1/2$. Examination of Figure 4 shows that for $\omega\le1.5$ there is a single stable fixed point whose trail is shown in black. Then, as $\omega$ is increased, a pair of fixed points is born at $\omega\simeq1.8$.\footnote{Actually, in the analytic spirit of Poincar\'{e}, these fixed points also exist for smaller values of $\omega$, but are then complex. They first become purely real, and therefore physically apparent, when $\omega\simeq1.8$.} One of them is stable. The other, whose trail as $\omega$ is varied is shown in red, is {\em unstable}. That is, if ${\cal M}$ is applied repeatedly to a point near this fixed point, the result is a sequence of points that move ever farther away from the fixed point. For this reason an unstable fixed point is also called a {\em repellor}. This appearance of two fixed points out of nowhere is called a saddle-node {\em bifurcation} or a {\em blue-sky} bifurcation, and the associated Feigenbaum diagram is then sometimes called a bifurcation diagram.\footnote{Strictly speaking, a Feigenbaum diagram displays only the trails of stable fixed points while a bifurcation diagram displays the trails of all fixed points.} The original stable fixed point persists as $\omega$ is further increased so that over some $\omega$ range there are 3 fixed points.
Then, as $\omega$ is further increased, the original fixed point and the unstable fixed point move until they meet and annihilate when $\omega\simeq2.6$.\footnote{Actually, they are not destroyed, but instead become complex and therefore disappear from view.} This disappearance is called an inverse saddle-node or inverse blue-sky bifurcation. Finally, for still larger $\omega$ values there is again only one fixed point, and it is stable. \begin{figure}[htp] \centering \includegraphics*[width=5in,angle=0]{Figure4} \caption{Feigenbaum/bifurcation diagram showing limiting values $q_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 1.5$) for the stroboscopic Duffing map. Trails of the stable fixed points are shown in black. Also shown, in red, is the trail of the unstable fixed point. Finally, jumps in the steady-state amplitude are illustrated by vertical dashed lines at $\omega \simeq 1.8$ and $\omega \simeq 2.6$.} \end{figure} The appearance and disappearance of stable-unstable fixed-point pairs, as $\omega$ is varied, has a striking dynamical consequence. Suppose, for example in the case of Figure 4, that the driving frequency $\omega$ is below the value $\omega \simeq 1.8$ where the saddle-node bifurcation occurs. Then there is only one fixed point, and it is attracting. Now suppose $\omega$ is {\em slowly} increased. Then, since the fixed-point solution is attracting, the solution for the slowly increasing $\omega$ case will remain near this solution. See the upper black trail in Figure 4. This ``tracking" will continue until $\omega$ reaches the value $\omega \simeq 2.6$ where the inverse saddle-node bifurcation occurs. At this value the fixed point being followed disappears. Consequently, since the one remaining fixed point is also an attractor, the solution evolves very quickly to that of the remaining fixed point. 
It happens that the oscillation amplitude associated with this fixed point is much smaller, and therefore there appears to be a sudden jump in oscillation amplitude to a smaller value. Now suppose $\omega$ is slowly decreased from a value above the value $\omega \simeq 2.6$ where the inverse saddle-node bifurcation occurs. Then the solution will remain near that of the fixed point lying on the bottom black trail in Figure 4. This tracking will continue until $\omega$ reaches the value $\omega \simeq 1.8$ where the fixed point being followed disappears. Again, since the remaining fixed point is attracting, the solution will now evolve to that of the remaining fixed point. The result is a jump to a larger oscillation amplitude. Evidently the steady-state oscillation amplitude exhibits {\em hysteresis} as $\omega$ is slowly varied back and forth over an interval that begins below the value where the first saddle-node bifurcation occurs and ends at a value above that where the inverse saddle-node bifurcation occurs. \subsubsection{Pitchfork Bifurcations} Let us continue to increase $\epsilon$. Figure 5 shows that a qualitatively new feature appears when $\epsilon$ is near 2.2: a {\em bubble} is formed between the major resonant peak (the one that has saddle-node bifurcated) and the subresonant peak immediately to its left. To explore the nature of this bubble, let us make $\epsilon$ still larger, which, we anticipate, will result in the bubble becoming larger. Figure 6 shows the Feigenbaum diagram in the case $\epsilon = 5.5$. Now the major resonant peak and the subresonant peak have moved to larger $\omega$ values. Correspondingly, the bubble between them has also moved to larger $\omega$ values. Moreover, it is larger, yet another smaller bubble has formed, and the subresonant peak between them has also undergone a saddle-node bifurcation. 
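The Feigenbaum diagrams discussed here can be generated directly by iterating the stroboscopic map. The following Python fragment is an illustrative sketch only (the integrator choice, step counts, transient length, and starting point are our own assumptions, not the authors' code); it applies one drive period of the damped, driven Duffing equation written as $q'' + 2\beta q' + q + q^3 = -\epsilon\sin\omega\tau$, the form used in Section 6.

```python
import math

def duffing_strobe_map(q, p, omega, beta=0.1, eps=1.5, steps=200):
    """One application of the stroboscopic map M: integrate
    q'' + 2*beta*q' + q + q**3 = -eps*sin(omega*tau)
    over one drive period T = 2*pi/omega with classic RK4."""
    T = 2.0 * math.pi / omega
    h = T / steps
    tau = 0.0

    def f(tau, q, p):
        # First-order form: q' = p, p' = -2*beta*p - q - q**3 - eps*sin(omega*tau)
        return p, -2.0 * beta * p - q - q**3 - eps * math.sin(omega * tau)

    for _ in range(steps):
        k1q, k1p = f(tau, q, p)
        k2q, k2p = f(tau + h / 2, q + h / 2 * k1q, p + h / 2 * k1p)
        k3q, k3p = f(tau + h / 2, q + h / 2 * k2q, p + h / 2 * k2p)
        k4q, k4p = f(tau + h, q + h * k3q, p + h * k3p)
        q += h / 6 * (k1q + 2 * k2q + 2 * k3q + k4q)
        p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        tau += h
    return q, p

def attractor_sample(omega, n_transient=400, n_keep=8):
    """Discard a transient, then collect strobe values q_infinity."""
    q, p = 0.0, 0.0
    for _ in range(n_transient):
        q, p = duffing_strobe_map(q, p, omega)
    pts = []
    for _ in range(n_keep):
        q, p = duffing_strobe_map(q, p, omega)
        pts.append(q)
    return pts
```

Sweeping $\omega$ over a grid and plotting the collected strobe values $q_\infty$ against $\omega$ yields diagrams of the kind shown in Figures 4 through 6; a period-$n$ attractor appears as $n$ distinct strobe values.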
For future use, we will call the major resonant peak the {\em first} or {\em leading} saddle-node bifurcation, and we will call the subresonant peak between the two bubbles the {\em second} saddle-node bifurcation, etc. Also, we will call the bubble just to the left of the first saddle-node bifurcation the {\em first} or {\em leading} bubble, and the next bubble will be called the {\em second} bubble, etc. We also note that three short trails have appeared in Figure 6 just to the right of $\omega=4$. They correspond to a period-three bifurcation followed shortly thereafter by an inverse bifurcation. Actually, much closer examination shows that there are six trails consisting of three closely-spaced pairs. Each pair comprises a stable and an unstable fixed point of the map ${\cal {M}}^3$. They are not fixed points of $\cal M$ itself, but rather are sent into each other in cyclic fashion under the action of $\cal M$. \begin{figure}[htp] \centering \includegraphics*[width=0.75\textwidth]{Figure5} \caption{Feigenbaum diagram showing limiting values $q_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 2.2$) for the stroboscopic Duffing map. It shows that a bubble has now formed at $\omega\approx.8$.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[width=0.75\textwidth]{Figure6} \caption{Feigenbaum diagram showing limiting values $q_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 5.5$) for the stroboscopic Duffing map. The first bubble has grown, a second smaller bubble has formed to its left, and the subresonant peak between them has saddle-node bifurcated to become the second saddle-node bifurcation.} \end{figure} Figure 7 shows the larger (leading) bubble in Figure 6 in more detail and with the addition of red lines indicating the trails of unstable fixed points. It reveals that the bubble describes the {\em simultaneous} bifurcation of a single fixed point into three fixed points. 
Two of these fixed points are stable and the third, whose $q$ coordinate as a function of $\omega$ is shown as a red line, is unstable. What happens is that, as $\omega$ is increased, a {\em single} stable fixed point becomes a {\em triplet} of fixed points, two of which are stable and one of which is unstable. This is called a {\em pitchfork} bifurcation. Then, as $\omega$ is further increased, these three fixed points again merge, in an inverse pitchfork bifurcation, to form what is again a single stable fixed point. \begin{figure}[htp] \centering \includegraphics*[width=5in,angle=0]{Figure7} \caption{An enlargement of Figure 6 with the addition of red lines indicating the trails of unstable fixed points.} \end{figure} \newpage \subsubsection{A Plethora of Bifurcations and Period Doubling Cascades} We end our study of the Duffing equation by increasing $\epsilon$ from its earlier value $\epsilon=5.5$ to much larger values. First we will set $\epsilon=22.125$. Based on our experience so far, we might anticipate that the Feigenbaum diagram would become much more complicated. That is indeed the case. Figure 8 displays $q_\infty$ when $\beta=0.1$ and $\epsilon=22.125$, as a function of $\omega$, for the range $\omega \in (0,12)$. Evidently the behavior of the attractors for the stroboscopic Duffing map, which is what is shown in Figure 8, is extremely complicated. There are now a great many fixed points both of $\cal M$ itself and various powers of $\cal M$. Of particular interest to us are the two areas around $\omega=.8$ and $\omega=1.25$. They contain what has become of the first two bubbles in Figure 7, and are shown in greater magnification in Figure 9. What has happened is that bubbles have formed within bubbles, and bubbles have formed within these bubbles, etc. to form a cascade. However, these interior bubbles are not the result of pitchfork bifurcations, but rather the result of {\em period-doubling} bifurcations. 
For example, the bifurcation that creates the first bubble at $\omega\simeq1.2$ is a pitchfork bifurcation. But the successive bifurcations within the bubble are period-doubling bifurcations. In a period-doubling bifurcation a fixed point that is initially stable becomes unstable as $\omega$ is increased. When this happens, simultaneously two stable fixed points of ${\cal {M}}^2$ are born. They are not fixed points of $\cal M$ itself, but rather are sent into each other under the action of $\cal M$. Hence the name ``period doubling''. The map $\cal M$ must be applied twice to send such a fixed point back into itself. In the next period doubling, fixed points of ${\cal {M}}^4$ are born, etc. However, we note that, as $\omega$ increases, the sequence of period-doubling bifurcations only occurs a {\em finite} number of times and then undoes itself. Remarkably, when $\epsilon$ is just somewhat larger, {\em infinite} sequences of period doubling cascades can occur. Figure 10 shows what happens when $\epsilon=25$. \begin{figure}[htp] \centering \includegraphics*[height=5in,angle=90]{Figure8} \caption{Feigenbaum diagram showing limiting values $q_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 22.125$) for the stroboscopic Duffing map.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[height=4in,angle=0]{Figure9} \caption{Enlarged portion of the Feigenbaum diagram of Figure 8 displaying limiting values $q_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 22.125$) for the stroboscopic Duffing map. It shows part of the first bubble at the far right, the second bubble, and part of a third bubble at the far left. Examine the first and second bubbles. Each initially consists of two stable period-one fixed points. Each also contains the beginnings of period-doubling cascades. These cascades do not complete, but rather remerge to again result in a pair of stable period-one fixed points. 
There are also many higher-period fixed points and their associated cascades.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[height=5in,angle=90]{Figure10} \caption{Feigenbaum diagram showing limiting values $q_{\infty}$ as a function of $\omega$ (when $\beta = 0.1$ and $\epsilon = 25$) for the stroboscopic Duffing map.} \end{figure} \subsubsection{More Detailed View of Infinite Period Doubling Cascades} To display the infinite period doubling cascades in more detail, Figure 11 shows an enlargement of part of Figure 10. And Figures 12 and 13 show successive enlargements of parts of the first bubble in Figure 10. From Figure 11 we see that the first bubble forms as a result of a pitchfork bifurcation just to the right of $\omega=1.2$, and from Figures 12 and 13 we see that the first period doubling bifurcation occurs in the vicinity of $\omega=1.268$. From Figure 13 it is evident that successive period-doublings occur an infinite number of times to ultimately produce a chaotic region when $\omega$ exceeds $\omega\simeq1.29$. \begin{figure}[htp] \centering \includegraphics*[height=4in,angle=0]{Figure11} \caption{Enlargement of a portion of Figure 10 showing the first, second, and third bubbles. The period-doubling cascades in each of the first and second bubbles complete. Then they undo themselves as $\omega$ is further increased. There is no period doubling in the third bubble when $\epsilon=25$.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[height=3.5in]{Figure12} \caption{Detail of part of the first bubble in Figure 11 showing upper and lower infinite period-doubling cascades. Part of the trail of the stable fixed point associated with the second saddle-node bifurcation accidentally appears to overlay the upper period doubling bifurcation. 
Finally, there are numerous cascades and remergings associated with higher-period fixed points.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[height=3.5in]{Figure13} \caption{Detail of part of the upper cascade in Figure 12 showing an infinite period-doubling cascade, followed by chaos, for what was initially a stable period-one fixed point.} \end{figure} \subsubsection{Strange Attractor} As evidence that the behavior in this region is chaotic, Figures 14 and 15 show portions of the {\em full} phase space, the $q,p$ plane, when $\omega=1.2902$. Note the evidence for fractal structure. The points appear to lie on a {\em strange attractor}. \begin{figure}[htp] \centering \includegraphics*[height=3.5in]{Figure14} \caption{Limiting values of $q_{\infty},p_{\infty}$ for the stroboscopic Duffing map when $\omega=1.2902$ (and $\beta=.1$ and $\epsilon=25$). They appear to lie on a strange attractor.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[height=3.5in]{Figure15} \caption{Enlargement of boxed portion of Figure 14 illustrating the beginning of self-similar fractal structure.} \end{figure} \newpage \section{Polynomial Approximation to Duffing Stroboscopic Map} \setcounter{equation}{0} In this section we will find the complete variational equations for the Duffing equation including dependence on the parameter $\omega$. The two remaining parameters, $\beta$ and $\epsilon$, will remain fixed. We will then study the properties of the resulting polynomial approximation to the Duffing stroboscopic map obtained by truncating its Taylor expansion. \subsection{Complete Variational Equations with Selected Parameter Dependence} To formulate the complete variational equations we could proceed as in Section 2.2 by setting $\omega=z_3$ and $z_3=\omega^d+\zeta_3$. So doing would require expanding the function $\sin(\omega\tau)=\sin[(\omega^d+\zeta_3)\tau]$ as a power series in $\zeta_3$. 
Such an expansion is, of course, possible, but it leads to variational equations with a large number of forcing terms $g_a^r$ since the expansion of $\sin[(\omega^d+\zeta_3)\tau]$ contains an infinite number of terms. In the case of Duffing's equation this complication can be avoided by a further change of variables. Recall that the first change of variables brought the Duffing equation to the form (5.3), which we now slightly rewrite as \begin{equation} q^{\prime \prime} + 2\beta q^\prime + q + q^3 = -\epsilon \sin \omega \tau \end{equation} where $d/d\tau$ is now denoted by a prime. Next make the further change of variables \begin{equation} q = \omega Q, \end{equation} \begin{equation} \omega = 1/\sigma , \end{equation} \begin{equation} \omega \tau = t. \end{equation} When this is done, there are the relations \begin{equation} q^\prime =\omega^2\dot{Q} \end{equation} and \begin{equation} q^{\prime \prime} = \omega^3 \ddot{Q} \end{equation} where now a dot denotes $d/dt$. [Note that the variable $t$ here is different from that in (5.1).] Correspondingly, Duffing's equation takes the form \begin{equation} \ddot{Q} + 2\beta \sigma \dot{Q} + \sigma^2Q + Q^3 = -\epsilon \sigma^3 \sin t. \end{equation} This equation can be converted to a first-order set of the form (2.1) by writing \begin{equation} Q = z_1 \end{equation} and \begin{equation} \dot{Q} = z_2. \end{equation} Following the method of Section 2.2, we augment the first-order equation set associated with (6.7) by adding the equation \begin{equation} \dot{\sigma} = 0. \end{equation} Then we may view $\sigma$ as a variable, and (6.10) guarantees that this variable remains a constant. Taken together, (6.7) and (6.10) may be converted to a first-order triplet of the form (2.12) by writing (6.8), (6.9), and \begin{equation} \sigma = z_3. 
\end{equation} Doing so gives the system \begin{equation} \dot{z}_1 = z_2, \end{equation} \begin{equation} \dot{z}_2 = -2\beta z_3z_2 -z^2_3z_1-z^3_1-\epsilon z^3_3 \sin t, \end{equation} \begin{equation} \dot{z}_3 = 0, \end{equation} and we see that there are the relations \begin{equation} f_1(z,t) = z_2, \end{equation} \begin{equation} f_2(z,t) = - \ 2\beta z_3z_2 -z^2_3z_1-z^3_1-\epsilon z^3_3 \sin t, \end{equation} \begin{equation} f_3(z,t) = 0. \end{equation} Note that, with the change of variables just made, the stroboscopic map is obtained by integrating the equations of motion from $t=0$ to $t=2\pi$. As before, we introduce deviation variables using (2.2) and carry out the steps (2.3) through (2.9). In particular, we write \begin{equation} z_3=z_3^d+\zeta_3=\sigma^d+\zeta_3. \end{equation} This time we are working with monomials in the three variables $\zeta_1$, $\zeta_2$, and $\zeta_3$. [That is, $a$ ranges from 1 to 3 in (2.12).] They are conveniently labeled using the indices $r$ given in Table 3 below. With regard to the expansions (2.4) and (2.5), we find the results \begin{equation} f_1(z^d+\zeta ,t) = z^d_2 + \zeta_2, \end{equation} \begin{eqnarray} f_2(z^d+\zeta ,t) &=& - \ 2\beta (z^d_3 + \zeta_3)(z^d_2+\zeta_2) - (z^d_3+\zeta_3)^2(z^d_1+\zeta_1) \nonumber \\ & &- (z^d_1+\zeta_1)^3 - \epsilon (z^d_3+\zeta_3)^3 \sin t\nonumber \\ &=&[-2\beta z_2^dz_3^d-z_1^d(z_3^d)^2-(z_1^d)^3 -\epsilon (z_3^d)^3\sin t]\nonumber \\ & &-[3(z_1^d)^2+(z_3^d)^2]\zeta_1-2\beta z_3^d\zeta_2 -[2\beta z_2^d+2z_1^dz_3^d+3\epsilon(z_3^d)^2\sin t ]\zeta_3 \nonumber \\ & &-2\beta\zeta_2\zeta_3-[z_1^d+3\epsilon z_3^d\sin t]\zeta_3^2-2z_3^d\zeta_1\zeta_3-3z_1^d\zeta_1^2 \nonumber \\ & &-\zeta_1^3-\zeta_1\zeta_3^2-\epsilon(\sin t)\zeta_3^3, \end{eqnarray} \begin{equation} f_3(z^d+\zeta ,t) = 0. \end{equation} Note that the right sides of (6.19) through (6.21) are at most cubic in the deviation variables $\zeta_a$. 
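Since the right side (6.16) is an exact cubic polynomial in $z$, the collected expansion of $f_2(z^d+\zeta,t)$ can be checked numerically. The following Python sketch is illustrative only (not part of the authors' code); it assembles the expansion from the monomial coefficients, which are the nonzero forcing terms $g_2^r$ of (6.23) through (6.32), and compares the result with direct evaluation.

```python
import math

beta, eps = 0.1, 25.0   # illustrative parameter values (cf. Section 5)

def f2(z1, z2, z3, t):
    """Right side (6.16): f_2 = -2*beta*z3*z2 - z3**2*z1 - z1**3 - eps*z3**3*sin(t)."""
    return -2 * beta * z3 * z2 - z3**2 * z1 - z1**3 - eps * z3**3 * math.sin(t)

def f2_about_design(z1d, z2d, z3d, t, dz1, dz2, dz3):
    """f_2(z^d + zeta, t) collected by monomials in the deviation variables zeta.
    The coefficients of the non-constant terms are the forcing terms g_2^r."""
    s = math.sin(t)
    return ((-2 * beta * z2d * z3d - z1d * z3d**2 - z1d**3 - eps * z3d**3 * s)
            - (3 * z1d**2 + z3d**2) * dz1                        # g_2^1
            - 2 * beta * z3d * dz2                               # g_2^2
            - (2 * beta * z2d + 2 * z1d * z3d
               + 3 * eps * z3d**2 * s) * dz3                     # g_2^3
            - 3 * z1d * dz1**2                                   # g_2^4
            - 2 * z3d * dz1 * dz3                                # g_2^6
            - 2 * beta * dz2 * dz3                               # g_2^8
            - (z1d + 3 * eps * z3d * s) * dz3**2                 # g_2^9
            - dz1**3                                             # g_2^10
            - dz1 * dz3**2                                       # g_2^15
            - eps * s * dz3**3)                                  # g_2^19
```

Because $f_2$ is exactly cubic, the two evaluations agree to rounding error at any point, not merely to cubic order in $\zeta$.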
Therefore, from Table 3, we see that the index $r$ for the $g^r_a$ should range from 1 through 19. It follows that for Duffing's equation (with $\sigma$ parameter expansion) the only nonzero forcing terms are given by the relations \begin{equation} g^2_1 = 1, \end{equation} \begin{equation} g^1_2 = -3(z^d_1)^2 - (z^d_3)^2, \end{equation} \begin{equation} g^2_2 = -2\beta z^d_3, \end{equation} \begin{equation} g^3_2 = -2\beta z^d_2 - 2z^d_1z^d_3 - 3\epsilon (z^d_3)^2 \sin t, \end{equation} \begin{equation} g^4_2 = -3z^d_1, \end{equation} \begin{equation} g^6_2 = -2z^d_3, \end{equation} \begin{equation} g^8_2 = -2\beta , \end{equation} \begin{equation} g^9_2 = -z^d_1 - 3\epsilon z^d_3 \sin t, \end{equation} \begin{equation} g^{10}_2 = -1, \end{equation} \begin{equation} g^{15}_2 = -1, \end{equation} \begin{equation} g^{19}_2 = -\epsilon\sin t. \end{equation} \begin{table}[h,t] \caption{A labeling scheme for monomials in three variables.} \begin{center} \begin{tabular}{ccccc} $r$ & $j_1$ & $j_2$ & $j_3$ & $D$ \\ \hline 1 & 1 & 0 & 0 & 1 \\ 2 & 0 & 1 & 0 & 1 \\ 3 & 0 & 0 & 1 & 1 \\ 4 & 2 & 0 & 0 & 2 \\ 5 & 1 & 1 & 0 & 2 \\ 6 & 1 & 0 & 1 & 2 \\ 7 & 0 & 2 & 0 & 2 \\ 8 & 0 & 1 & 1 & 2 \\ 9 & 0 & 0 & 2 & 2 \\ 10 & 3 & 0 & 0 & 3 \\ 11 & 2 & 1 & 0 & 3 \\ 12 & 2 & 0 & 1 & 3 \\ 13 & 1 & 2 & 0 & 3 \\ 14 & 1 & 1 & 1 & 3 \\ 15 & 1 & 0 & 2 & 3 \\ 16 & 0 & 3 & 0 & 3 \\ 17 & 0 & 2 & 1 & 3 \\ 18 & 0 & 1 & 2 & 3 \\ 19 & 0 & 0 & 3 & 3 \\ \end{tabular} \end{center} \end{table} \newpage \subsection{Performance of Polynomial Approximation} Let ${\cal{M}}_8$ denote the $8^{\rm{th}}$-order polynomial map (with parameter dependence) approximation to the stroboscopic Duffing map $\cal M$. Provided the relevant phase-space region is not too large, we have found that ${\cal{M}}_8$ reproduces all the features, described in Section 5.3, of the exact map [6]. (The phase-space region must lie within the convergence domain of the Taylor expansion.) 
This reproduction might not be too surprising in the cases of elementary bifurcations such as saddle-node and pitchfork bifurcations. What is more fascinating, as we will see, is that ${\cal{M}}_8$ also reproduces the infinite period doubling cascade and associated strange attractor. \subsubsection{Infinite Period Doubling Cascade} Figure 16 shows the partial Feigenbaum diagram for the map ${\cal{M}}_8$ in the case that $\beta=0.1$ and $\epsilon=25$. The Taylor expansion is made about the point indicated by the {\em black dot}. This point has the coordinates \begin{equation} q_{\rm{bd}}=1.26082,{\;}p_{\rm{bd}}=2.05452,{\;}\omega_{\rm{bd}}=1.285. \end{equation} It was selected to be an unstable fixed point of $\cal M$, but that is not essential. Any nearby expansion point would have served as well. Note the remarkable resemblance of Figures 13 and 16. We have referred to Figure 16 as a {\em partial} Feigenbaum diagram because it shows only $q_\infty$ and not $p_\infty$. In order to give a complete picture, Figure 17 displays them both. \begin{figure}[htp] \centering \includegraphics*[width=4.5in]{Figure16} \caption{Partial Feigenbaum diagram for the map ${\cal {M}}_8$. The black dot marks the point about which ${\cal M}$ is expanded to yield ${\cal {M}}_8$.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[width=4.5in]{Figure17} \caption{Full Feigenbaum diagram for the map ${\cal {M}}_8$. The black dot again marks the expansion point.} \end{figure} \newpage \subsubsection{Strange Attractor} As displayed in Figures 18 through 21, ${\cal {M}}_8$, like $\cal M$, appears to have a strange attractor. Note the remarkable agreement between Figures 14 and 15 for $\cal M$ and their ${\cal {M}}_8$ counterparts, Figures 18 and 19. In the case of ${\cal {M}}_8$ we have been able to obtain additional {\em enlargements}, Figures 20 and 21, further illustrating a self-similar fractal structure. 
Analogous figures are more difficult to obtain for the exact map $\cal M$ due to the excessive numerical integration time required. By contrast the map ${\cal {M}}_8$, because it is a simple polynomial, is easy to evaluate repeatedly. \begin{figure}[htp] \centering \includegraphics*[height=4.5in]{Figure18} \caption{Limiting values of $q_{\infty},p_{\infty}$ for the map ${\cal {M}}_8$ when $\omega=1.2902$. They appear to lie on a strange attractor.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[width=5.5in]{Figure19} \caption{Enlargement of boxed portion of Figure 18 illustrating the beginning of self-similar fractal structure.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[height=5.5in]{Figure20} \caption{Enlargement of boxed portion of Figure 19 illustrating the continuation of self-similar fractal structure.} \end{figure} \begin{figure}[htp] \centering \includegraphics*[height=5.5in]{Figure21} \caption{Enlargement of boxed portion of Figure 20 illustrating the further continuation of self-similar fractal structure.} \end{figure} \newpage \section{Numerical Implementation} \setcounter{equation}{0} The forward integration method (Section~3.1) can be implemented by a code employing the tools of {\em automatic differentiation} (AD) described by Neidinger [7].\footnote{Some authors refer to AD as {\em truncated power series algebra} (TPSA) since AD algorithms arise from manipulating multivariable truncated power series. Other authors refer to AD as {\em Differential Algebra} (DA).} In this approach arrays of Taylor coefficients of various functions are referred to as AD variables or {\em pyramids} since, as will be seen, they have a hyper-pyramidal structure. Generally the first entry in the array will be the value of the function at some expansion point, and the remaining entries will be the higher-order Taylor coefficients about that expansion point, truncated beyond some specified order. 
Such truncated Taylor expansions are also commonly called {\em jets}. In our application elements in these arrays will be addressed and manipulated with the aid of scalar indices associated with look-up tables generated at run time. We have also replaced the original APL implementation of Neidinger with a code written in the language of {\it Mathematica} (Version 6 or 7) [8]. Where necessary, for those unfamiliar with the details of {\it Mathematica}, we will explain the consequences of various {\it Mathematica} commands. The inputs to the code are the right sides (RS) of (1.1), or (2.1). Other input parameters are the number of variables $m$, the desired order of the Taylor map $p$, and the initial conditions $(z_a^d)^i$ for the design-solution equation (2.3). Various AD tools for describing and manipulating pyramids are outlined in Section 7.1. There we show how pyramid operations are encoded in the case of polynomial RS, as needed for the Duffing equation. For brevity, we omit the cases of rational, fractional power, and transcendental RS. These cases can also be handled using various methods based on functional identities and known Taylor coefficients, or the differential equations that such functions obey along with certain recursive relations [7]. In Section 7.2, based on the work of Section 7.1, we in effect obtain and integrate numerically the set of differential equations (3.6) in pyramid form, i.e. valid for any map order and any number of variables. Section 7.3 treats the specific case of the Duffing equation. A final Section 7.4 describes in more detail the relation between integrating equations for pyramids and the complete variational equations. \subsection{AD tools} This section describes how arithmetic expressions representing $f_a(\bmv z,t)$, the right sides of (1.1) where $\bmv z$ denotes the dependent variables, are replaced with expressions for arrays (pyramids) of Taylor coefficients. These pyramids in turn constitute the input to our code. 
Such an ad hoc replacement, tailored to the problem at hand (as opposed to operator overloading, where the kind of operation depends on the type of its argument), is also the approach taken in [7,9,10]. Let $u,v,w$ be general arithmetic expressions, i.e. scalar-valued functions of $\bmv z$. They contain various arithmetic operations such as addition, multiplication $(*)$, and raising to a power $(\wedge)$. (They may also entail the computation of various transcendental functions such as the sine function, etc. However, as stated earlier, for simplicity we will omit these cases.) The arguments of these operations may be a constant, a single variable or multiple variables $z_a$, or even some other expression. The idea of AD is to redefine the arithmetic operations in such a way (see Definition 1) that all functions $u,v,w$ can be consistently replaced with the arrays of coefficients of their Taylor expansions. For example, by redefining the usual product of numbers ($*$) and introducing the pyramid operation ${\tt PROD}$, $u*v$ is replaced with {\tt PROD[U,V]}. We use uppercase typewriter font for pyramids ({\tt U,V,\dots}) and for operations on pyramids ({\tt PROD, POW, \dots}). Everywhere, equalities written in typewriter fonts have equivalent {\it Mathematica} expressions. That is, they have associated realizations in {\it Mathematica} and directly correspond to various operations and commands in {\it Mathematica}. In effect, our code operates entirely on pyramids. However, as we will see, any pyramid expression contains, as its first entry, its usual arithmetic counterpart. We begin with a description of our method of monomial labeling. In brief, we list all monomials in a polynomial in some sequence, and label them by where they occur in the list. Next follow Definition~1 and the recipes for encoding operations on pyramids. 
Subsequently, by using Definition~2, which simply states the rule by which an arithmetic expression is replaced with its pyramid counterpart, we show how a general expression can be encoded by using only the pyramid of a constant and of a single variable. \subsubsection{Labeling Scheme} A monomial $G_{\bmv j}(\bmv z )$ in $m$ variables is of the form \begin{equation} G_{\bmv j}(\bmv z )=(z_1)^{j_1} (z_2)^{j_2}\cdots (z_m)^{j_m}. \end{equation} Here we have introduced an exponent vector ${\bmv j}$ by the rule \begin{equation} {\bmv j}=(j_1,j_2,\cdots j_m). \end{equation} Evidently $\bmv j$ is an $m$-tuple of non-negative integers. The degree of $G_{\bmv j}(\bmv z )$, denoted by $|\bmv j|$, is given by the sum of exponents, \begin{equation} |\bmv j|=j_1+j_2+\cdots + j_m. \end{equation} The set of all exponents for monomials in $m$ variables with degree less than or equal to $p$ will be denoted by $\Gamma_m^p$, \begin{equation} \Gamma_m^p=\{\bmv j \; |\; |\bmv j| \leq p \}. \end{equation} It can be shown that this set has $L(m,p)$ entries with $L(m,p)$ given by a binomial coefficient, \begin{equation} L(m,p)=\binom{p+m}{p}. \end{equation} In [6] this quantity is called $S_0(m,p)$. Assuming that $m$ and $p$ are fixed input variables, we will often write $\Gamma$ and $L$. With this notation, a Taylor series expansion (about the origin) of a scalar-valued function $u$ of $m$ variables $\bmv z = (z_1,z_2,\dots z_m)$, truncated beyond terms of degree $p$, can be written in the form \begin{equation} u(\bmv z)=\sum_{\bmv j \;\in\; \Gamma_m^p }{\tt U}(\bmv j)\; G_{\bmv j}(\bmv z ). \end{equation} Here, for now, ${\tt U}$ simply denotes an array of numerical coefficients. When employed in code that has symbolic manipulation capabilities, each ${\tt U}(\bmv j)$ may also be a symbolic quantity. To proceed, what is needed is some way of listing monomials systematically. 
With such a list, as already mentioned, we may assign a label $r$ to each monomial based on where it appears in the list. A summary of labeling methods, and an analysis of storage requirements, may be found in [6,9]. Here we describe one of them that is particularly useful for our purposes. The first step is to {\em order} the monomials or, equivalently, the exponent vectors. One possibility is {\em lexicographic} ({\em lex}) order. Consider two exponent vectors $\bmv j$ and $\bmv k$. Let $\bmv j - \bmv k$ be the vector whose entries are obtained by component-wise subtraction, \begin{equation} \bmv j - \bmv k=(j_1-k_1,j_2-k_2,\cdots j_m-k_m). \end{equation} We say that the exponent vector $\bmv j$ is lexicographically greater than the exponent vector $\bmv k$, and write $\bmv j >_{\rm{lex}}\bmv k$, if the {\em left-most nonzero} entry in $\bmv j - \bmv k$ is {\em positive}. Thus, for example in the case of monomials in three variables $(z_1,z_2,z_3)$ with exponents $(j_1,j_2,j_3)$, we have the ordering \begin{equation} (1,0,0)>_{\rm{lex}}(0,1,0)>_{\rm{lex}}(0,0,1) \end{equation} and \begin{equation} (4,2,1)>_{\rm{lex}}(4,2,0)>_{\rm{lex}}(2,5,1). \end{equation} For our purposes we have found it convenient to label the monomials in such a way that monomials of a given degree $D=|\bmv j|$ occur together. One possibility is to employ {\em graded} lexicographic ({\em glex}) ordering. If $\bmv j$ and $\bmv k$ are two exponent vectors, we say that $\bmv j>_{\rm{glex}}\bmv k$ if either $|\bmv j|>|\bmv k|$, or $|\bmv j|=|\bmv k|$ and $\bmv j>_{\rm{lex}}\bmv k$. Table 4 shows a list of monomials in three variables. As one goes down the list, first the monomial of degree $D=0$ appears, then the monomials of degree $D=1$, etc. Within each group of monomials of fixed degree the individual monomials appear in descending lex order. Note that Table 4 is similar to Table 3 except that it begins with the monomial of degree 0. 
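The ordering just described is compact to state in code. The following Python sketch is illustrative only (the authors' implementation uses {\it Mathematica}); it lists the exponent vectors of $\Gamma_m^p$ in ascending total degree, and in descending lex order within each degree, reproducing Table 4 for $m=3$, $p=4$.

```python
from itertools import product
from math import comb

def glex_table(m, p):
    """Exponent vectors j with |j| <= p, listed in ascending total degree
    and descending lex order within each degree (the ordering of Table 4)."""
    js = [j for j in product(range(p + 1), repeat=m) if sum(j) <= p]
    # Sorting key: total degree first; negating the components makes the
    # ordinary ascending tuple comparison act as descending lex order.
    js.sort(key=lambda j: (sum(j), tuple(-c for c in j)))
    return js

# For m = 3, p = 4 the list has L(3,4) = binomial(4+3, 4) = 35 entries,
# in agreement with (7.5), and its rows reproduce Table 4.
table = glex_table(3, 4)
```

The label $r$ of a monomial is then simply its (1-based) position in this list.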
\newpage \begin{table}[h,t] \caption{A labeling scheme for monomials in three variables.} \begin{center} \begin{tabular}{ccccc} $r$ & $j_1$ & $j_2$ & $j_3$ & $D$ \\ \hline 1 & 0 & 0 & 0 & 0\\ 2 & 1 & 0 & 0 & 1\\ 3 & 0 & 1 & 0 & 1\\ 4 & 0 & 0 & 1 & 1\\ 5 & 2 & 0 & 0 & 2\\ 6 & 1 & 1 & 0 & 2\\ 7 & 1 & 0 & 1 & 2\\ 8 & 0 & 2 & 0 & 2\\ 9 & 0 & 1 & 1 & 2\\ 10 & 0 & 0 & 2 & 2\\ 11 & 3 & 0 & 0 & 3\\ 12 & 2 & 1 & 0 & 3\\ 13 & 2 & 0 & 1 & 3\\ 14 & 1 & 2 & 0 & 3\\ 15 & 1 & 1 & 1 & 3\\ 16 & 1 & 0 & 2 & 3\\ 17 & 0 & 3 & 0 & 3 \\ 18 & 0 & 2 & 1 & 3\\ 19 & 0 & 1 & 2 & 3\\ 20 & 0 & 0 & 3 & 3\\ $.$ & $.$ & $.$ & $.$ & $.$\\ $.$ & $.$ & $.$ & $.$ & $.$\\ $.$ & $.$ & $.$ & $.$ & $.$\\ 28 & 1 & 2 & 1 & 4\\ $.$ & $.$ & $.$ & $.$ & $.$\\ $.$ & $.$ & $.$ & $.$ & $.$\\ $.$ & $.$ & $.$ & $.$ & $.$\\ \end{tabular} \end{center} \end{table} We give the name {\em modified glex sequencing} to a monomial listing of the kind shown in Table 4. This is the labeling scheme we will use. Other possible listings include ascending true glex order in which monomials appear in ascending lex order within each group of degree $D$, and lex order for the whole monomial list as in [7]. With the aid of the scalar index $r$ the relation (7.6) can be rewritten in the form \begin{equation} u(\bmv z)=\sum_{r=1}^{L(m,p)} {\tt U}(r) G_r(\bmv z), \end{equation} because (by construction and with fixed $m$) for each positive integer $r$ there is a unique exponent ${\bmv j}(r)$, and for each ${\bmv j}$ there is a unique $r$. Here $\tt U$ may be viewed as a vector with entries ${\tt U(r)}$, and $G_r(\bmv z)$ denotes $G_{{\bmv j}(r)}(\bmv z)$. Consider, in an $m$-dimensional space, the points defined by the vectors ${\bmv j}\in\Gamma^p_m$. See (7.4). Figure 22 displays them in the case $m=3$ and $p=4$. Evidently they form a grid that lies on the surface and interior of what can be viewed as an $m$-dimensional {\em pyramid} in $m$-dimensional space. At each grid point there is an associated coefficient ${\tt U(r)}$. 
Because of its association with this pyramidal structure, we will refer to the entire set of coefficients in (7.6) or (7.10) as the {\it pyramid} ${\tt U}$ of $u(\bmv z)$. \begin{figure}[h] \begin{center} \includegraphics[angle=0,width=.35\textwidth]{Figure22} \caption{\label{fig:pyram} A grid of points representing the set $\Gamma^4_3$. For future reference a subset of $\Gamma^4_3$, called a {\em box}, is shown in blue.} \end{center} \end{figure} \subsubsection{Implementation of Labeling Scheme} We have seen that use of modified glex sequencing, for any specified number of variables $m$, provides a labeling rule such that for each positive integer $r$ there is a unique exponent ${\bmv j}(r)$, and for each ${\bmv j}$ there is a unique $r$. That is, there is an invertible function $r({\bmv j})$ that provides a 1-to-1 correspondence between the positive integers and the exponent vectors ${\bmv j}$. To proceed further, it would be useful to have this function and its inverse in more explicit form. First, there is an explicit formula for $r({\bmv j})$, which we will call the {\em Giorgilli} formula [6]. For any specified $m$ the exponent vectors ${\bmv j}$ take the form (7.2) where all the entries $j_i$ are non-negative integers. Begin by defining the integers \begin{equation} n(\ell ;j_1,\cdots ,j_m) = \ell -1 + \sum^{\ell -1}_{k=0} j_{m-k} \end{equation} for $\ell \in \{ 1,2,\cdots m\}$. Then, to the general monomial $G_{\bmv j}({\bmv z})$ or exponent vector ${\bmv j}$, assign the label \begin{equation} r({\bmv j}) = r(j_1,\cdots j_m) = 1+\sum^m_{\ell =1} \ {\rm Binomial} \ [n(\ell ;j_1,\cdots j_m),\ell ]. \end{equation} Here the quantities \begin{equation} {\rm Binomial} \ [n,\ell ] = \left( \begin{array}{c} n \\ \ell \end{array}\right) = \left\{ \begin{array}{ccc} \frac{n!}{\ell !(n-\ell )!} & , & 0 \leq \ell \leq n \\ 0 & , & {\rm otherwise}\end{array}\right\} \ \end{equation} denote the usual binomial coefficients. 
It can be shown that this formula reproduces the results of modified glex sequencing [6]. Below is simple {\em Mathematica} code that implements the Giorgilli formula in the case of three variables, and evaluates it for selected exponents $\bm j$. Observe that these evaluations agree with results in Table 4. \begin{eqnarray} &&{\tt Gfor[j1\_, j2\_, j3\_] := (}\nonumber\\ &&{\tt s1 = j3;{\;}s2 = 1 + j3 + j2;{\;}s3 = 2 + j3 + j2 + j1;}\nonumber\\ &&{\tt t1 = Binomial[s1, 1];{\;}t2 = Binomial[s2, 2];{\;}t3 = Binomial[s3, 3];}\nonumber\\ &&{\tt r = 1 + t1 + t2 + t3;{\;}r}\nonumber\\ &&{\tt )}\nonumber\\ &&{\tt Gfor[0, 0, 0]}\nonumber\\ &&{\tt Gfor[1, 0, 0]}\nonumber\\ &&{\tt Gfor[2, 0, 1]}\nonumber\\ &&{\tt Gfor[1, 2, 1]}\nonumber\\ &&{ 1}\nonumber\\ &&{ 2}\nonumber\\ &&{ 13}\nonumber\\ &&{ 28} \end{eqnarray} Second, for the inverse relation, we have found it convenient to introduce a rectangular matrix associated with the set $\Gamma_m^p$. By abuse of notation, it will also be called $\Gamma$. It has $L(m,p)$ rows and $m$ columns with entries \begin{equation} \Gamma_{r,a}=j_a(r). \end{equation} For example, looking at Table 4, we see (when $m=3$) that $\Gamma_{1,1}=0$ and $\Gamma_{17,2}=3$. Indeed, if the first and last columns of Table 4 are removed, what remains (when $m=3$) is the matrix $\Gamma_{r,a}$. In the language of [6], $\Gamma$ is a {\em look up table} that, given $r$, produces the associated ${\bmv j}$. In our {\em Mathematica} implementation $\Gamma$ is the matrix ${\tt GAMMA}$ with elements ${\tt GAMMA[[r,a]]}$. 
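The look-up table $\Gamma$ can also be generated directly from the definition of modified glex sequencing, without any combinatorics package. The short Python sketch below is an illustrative translation (the name {\tt glex\_table} is our own); it rebuilds the exponent list of Table 4.

```python
from itertools import product

def glex_table(m, p):
    """Exponent vectors of Gamma_m^p in modified glex order:
    ascending total degree D, descending lex order within each degree."""
    table = []
    for d in range(p + 1):
        group = [j for j in product(range(d + 1), repeat=m) if sum(j) == d]
        group.sort(reverse=True)  # descending lex within a fixed degree
        table.extend(group)
    return table

GAMMA = glex_table(3, 4)
# Row r of Table 4 is GAMMA[r - 1]; e.g. GAMMA[16] is (0, 3, 0), matching r = 17.
```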
The matrix ${\tt GAMMA}$ is constructed using the {\em Mathematica} code illustrated below, \begin{eqnarray} &&{\tt Needs[\texttt{"}Combinatorica`\texttt{"}];}\nonumber\\ &&{\tt m=3;p=4;}\nonumber\\ &&{\tt GAMMA = Compositions[0, m];}\nonumber\\ &&{\tt Do[GAMMA = Join[GAMMA, Reverse[Compositions[d, m]]], \{d, 1, p, 1\}];}\nonumber\\ &&{\tt L=Length[GAMMA]}\nonumber\\ &&{\tt r=17; a=2;}\nonumber\\ &&{\tt GAMMA[[r]]}\nonumber\\ &&{\tt GAMMA[[r,a]]}\nonumber\\ &&{ 35\nonumber}\\ &&{ \{0,3,0\}}\nonumber\\ &&{ 3} \end{eqnarray} It employs the {\em Mathematica} commands \p{Compositions}, \p{Reverse}, and \p{Join}. We will first describe the ingredients of this code and illustrate the function of each: \begin{itemize} \item The command ${\tt Needs[\texttt{"}Combinatorica`\texttt{"}];}$ loads a combinatorial package. \item The command \p{Compositions[i, m]} produces, as a list of arrays (a rectangular array), all {\em compositions} (under addition) of the integer $i$ into $m$ integer parts. Furthermore, the compositions appear in {\em ascending} lex order. For example, the command \p{Compositions[0, 3]} produces the single row \begin{eqnarray} &&0\ \ 0\ \ 0 \end{eqnarray} As a second example, the command \p{Compositions[1, 3]} produces the rectangular array \begin{eqnarray} &&0\ \ 0\ \ 1 \nonumber\\ &&0\ \ 1\ \ 0 \nonumber\\ &&1\ \ 0\ \ 0 \end{eqnarray} As a third example, the command \p{Compositions[2, 3]} produces the rectangular array \begin{eqnarray} &&0\ \ 0\ \ 2 \nonumber\\ &&0\ \ 1\ \ 1 \nonumber\\ &&0\ \ 2\ \ 0 \nonumber\\ &&1\ \ 0\ \ 1 \nonumber\\ &&1\ \ 1\ \ 0 \nonumber\\ &&2\ \ 0\ \ 0 \end{eqnarray} \item The command \p{Reverse} acts on the list of arrays, and reverses the order of the list while leaving the arrays intact. 
For example, the nested sequence of commands \p{Reverse[Compositions[1, 3]]} produces the rectangular array \begin{eqnarray} &&1\ \ 0\ \ 0\nonumber\\ &&0\ \ 1\ \ 0\nonumber\\ &&0\ \ 0\ \ 1 \end{eqnarray} As a second example, the nested sequence of commands \p{Reverse[Compositions[2, 3]]} produces the rectangular array \begin{eqnarray} &&2\ \ 0\ \ 0\nonumber\\ &&1\ \ 1\ \ 0\nonumber\\ &&1\ \ 0\ \ 1\nonumber\\ &&0\ \ 2\ \ 0\nonumber\\ &&0\ \ 1\ \ 1\nonumber\\ &&0\ \ 0\ \ 2 \end{eqnarray} Now the compositions appear in {\em descending} lex order. \item Look, for example, at Table 4. We see that the exponents $j_a$ for the $r=1$ entry are those appearing in (7.17). Next, exponents for the $r=2$ through $r=4$ entries are those appearing in (7.20). Following them, the exponents for the $r=5$ through $r=10$ entries are those appearing in (7.21), etc. Evidently, to produce the exponent list of Table 4, what we must do is successively {\em join} various lists. That is what the {\em Mathematica} command \p{Join} can accomplish. \end{itemize} We are now ready to describe how ${\tt GAMMA}$ is constructed: \begin{itemize} \item The second line in (7.16) sets the values of $m$ and $p$. They are assigned the values $m=3$ and $p=4$ for this example, which will construct ${\tt GAMMA}$ for the case of Table 4. The third line in (7.16) initially sets ${\tt GAMMA}$ to a row of $m$ zeroes. The fourth line is a \p{Do} loop that successively redefines ${\tt GAMMA}$ by generating and joining to it successive descending lex order compositions. The net result is the exponent list of Table 4. \item The quantity $L=L(m,p)$ is obtained by applying the {\em Mathematica} command \p{Length} to the rectangular array ${\tt GAMMA}$. \item The last 6 lines of (7.16) illustrate that $L$ is computed properly and that the command ${\tt GAMMA[[r,a]]}$ accesses the array ${\tt GAMMA}$ in the desired fashion. 
Specifically, in this example, we find from (7.5) that $L(3,4)=35$ in agreement with the {\em Mathematica} output for $L$. Moreover, ${\tt GAMMA[[17]]}$ produces the exponent array $\{0,3,0\}$, in agreement with the $r=17$ entry in Table 4, and ${\tt GAMMA[[17,2]]}$ produces $\Gamma_{17,2}=3$, as expected. \end{itemize} \subsubsection{Pyramid Operations: Addition and Multiplication} Here we {\it derive} the pyramid operations in terms of $\bmv j$-vectors by using the ordering previously described, and provide scripts to {\it encode} them in the $r$-representation (7.10). \newtheorem{mydef}{Definition} \begin{mydef} Suppose that $w(\bm z)$ arises from carrying out various {\it arithmetic operations} on $u(\bmv z)$ and $v(\bmv z)$, and the pyramids \p{U} and \p{V} are known. The corresponding pyramid operation on \p{U} and \p{V} is defined so that it yields the pyramid \p{W} of $w(\bm z)$. \end{mydef} Here we assume that $u,v,w$ are polynomials such as (7.6). We begin with the operations of scalar multiplication and addition, which are easy to implement. If \begin{equation} w(\bmv z) = c\; u(\bmv z), \end{equation} then \begin{equation} {\tt W}(r)=c\;{\tt U}(r), \end{equation} and we write \begin{equation} {\tt W}=c\;{\tt U }. \end{equation} If \begin{equation} w(\bmv z) = u(\bmv z)+ v(\bmv z), \end{equation} then \begin{equation} {\tt W}(r)={\tt U}(r)+{\tt V}(r), \end{equation} and we write \begin{equation} {\tt W}={\tt U}+{\tt V}. \end{equation} In both cases all operations are performed coordinate-wise (as for vectors). Implementation of scalar multiplication and addition is easy in {\em Mathematica} because, as the example below illustrates, it has built-in vector routines. There we define two vectors, multiply them by scalars, and add the resulting vectors. 
\begin{eqnarray} &&{\tt Unprotect[V];}\nonumber\\ &&{\tt U=\{1,2,3\};}\nonumber\\ &&{\tt V=\{4,5,6\};}\nonumber\\ &&{\tt W=.1U+.2V}\nonumber\\ && \{.9,1.2,1.5\} \end{eqnarray} Since \p{V} is a ``protected" symbol in the {\em Mathematica} language, and, for purposes of illustration, we wish to use it as an ordinary vector variable, it must first be unprotected as in line 1 above. The last line shows that the {\em Mathematica} output is indeed the desired result. The operation of polynomial multiplication is more involved. Now we have the relation \begin{equation} w(\bmv z) = u(\bmv z)*v(\bmv z), \end{equation} and we want to encode \begin{equation} {\tt W} = {\tt PROD[U,V]}. \end{equation} Let us write $u(\bmv z)$ in the form (7.6), but with a change of dummy indices, so that it has the representation \begin{equation} u(\bmv z)=\sum_{\bmv i \;\in\; \Gamma_m^p }{\tt U}(\bmv i)\; G_{\bmv i}(\bmv z ). \end{equation} Similarly, write $v(\bmv z)$ in the form \begin{equation} v(\bmv z)=\sum_{\bmv j \;\in\; \Gamma_m^p }{\tt V}(\bmv j)\; G_{\bmv j}(\bmv z ). \end{equation} Then, according to Leibniz, there is the result \begin{equation} u(\bmv z)*v(\bmv z)= \sum_{\bmv i \;\in\; \Gamma_m^p }{\;}\sum_{\bmv j \;\in\; \Gamma_m^p } {\tt U}(\bmv i){\tt V}(\bmv j)G_{\bmv i}(\bmv z )*G_{\bmv j}(\bmv z ). \end{equation} From (7.1) we observe that \begin{eqnarray} G_{\bmv i}(\bmv z )*G_{\bmv j}(\bmv z )&=& (z_1)^{i_1} (z_2)^{i_2}\cdots (z_m)^{i_m}*(z_1)^{j_1} (z_2)^{j_2}\cdots (z_m)^{j_m}\nonumber\\ &=&(z_1)^{i_1+j_1} (z_2)^{i_2+j_2}\cdots (z_m)^{i_m+j_m}= G_{{\bmv i}+{\bmv j}}(\bmv z ). \end{eqnarray} Therefore, we may also write \begin{equation} u(\bmv z)*v(\bmv z)= \sum_{\bmv i \;\in\; \Gamma_m^p }{\;}\sum_{\bmv j \;\in\; \Gamma_m^p } {\tt U}(\bmv i){\tt V}(\bmv j)G_{{\bmv i}+{\bmv j}}(\bmv z ). \end{equation} Now we see that there are two complications. First, there may be terms on the right side of (7.35) whose degree is higher than $p$ and therefore need not be computed. 
Second, there are generally many terms on the right side of (7.35) that contribute to a given monomial term in $w(\bmv z) = u(\bmv z)*v(\bmv z)$. Suppose we write \begin{equation} w(\bmv z)=\sum_{\bmv k} {\tt W}(\bmv k)\; G_{\bmv k}(\bmv z ). \end{equation} Upon comparing (7.35) and (7.36) we conclude that \begin{equation} {\tt W}(\bmv k)= \sum_{{\bmv i}+ {\bmv j}={\bmv k}} {\tt U}(\bmv i){\tt V}(\bmv j)=\sum_{\bmv j \le \bmv k } {\tt U}({\bmv k-\bmv j}) {\tt V} ({\bmv j}). \end{equation} Here, by $\bmv j \le \bmv k $, we mean that the sum ranges over all ${\bmv j}$ such that $j_a\le k_a$ for all $a\in[1,m]$. That is, \begin{equation} \bmv j \le \bmv k{\;} \Leftrightarrow{\;} j_a\le k_a{\;}{\rm{for}}{\;}{\rm{all}}{\;}a\in[1,m]. \end{equation} Evidently, to implement the relation (7.37) in terms of $r$ labels, we need to describe the exponent relation $\bmv j \le \bmv k $ in terms of $r$ labels. Suppose ${\bmv k}$ is some exponent vector with label $r({\bmv k})$ as, for example, in Table 4. Introduce the notation \begin{equation} k=r({\bmv k}). \end{equation} This notation may be somewhat confusing because $k$ is not the norm of the vector ${\bmv k}$, but rather the label associated with ${\bmv k}$. However, this notation is very convenient. Now, given a label $k$, we can find ${\bmv k}$. Indeed, from (7.15), we have the result \begin{equation} k_a=\Gamma_{k,a}. \end{equation} Having found ${\bmv k}$, we define a set of exponents $B_k$ by the rule \begin{equation} B_k=\{{\bmv j}|\bmv j \le \bmv k\}. \end{equation} This set of exponents is called the $k^{\rm{th}}$ {\em box}. For example (when $m=3$), suppose $k=28$. Then we see from Table 4 that ${\bmv k}$ = $(1,2,1)$. Table 5 lists, in modified glex order, all the vectors in $B_{28}$, i.e. all vectors ${\bmv j}$ such that $\bmv j \le (1,2,1)$. These are the vectors shown in blue in Figure 22. 
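To make the box construction concrete, here is a hedged Python sketch (the function names are our own; this is an illustrative translation, not the {\em Mathematica} implementation presented later). It builds the exponent table, extracts the box $B_{28}$, and evaluates the truncated convolution (7.37) using boxes. As a check, multiplying $(1+z_1)(1+z_2)$ must yield unit coefficients exactly at the labels of $1$, $z_1$, $z_2$, and $z_1 z_2$, i.e.\ at $r=1,2,3,6$ in Table 4.

```python
from itertools import product

def glex_table(m, p):
    """Gamma_m^p in modified glex order (Table 4 for m = 3, p = 4)."""
    table = []
    for d in range(p + 1):
        group = sorted((j for j in product(range(d + 1), repeat=m) if sum(j) == d),
                       reverse=True)  # descending lex within a fixed degree
        table.extend(group)
    return table

def box(table, k):
    """1-based labels r of all exponent vectors j <= table[k-1] componentwise."""
    kk = table[k - 1]
    return [r + 1 for r, j in enumerate(table)
            if all(ja <= ka for ja, ka in zip(j, kk))]

def prod_pyramids(table, U, V):
    """W(k) = sum over j in B_k of U(k - j) V(j), cf. (7.37), truncated at degree p."""
    index = {j: r for r, j in enumerate(table)}  # exponent vector -> 0-based label
    W = []
    for kk in table:
        s = 0
        for r in box(table, index[kk] + 1):
            j = table[r - 1]
            i = tuple(ka - ja for ka, ja in zip(kk, j))  # i = k - j stays in the table
            s += U[index[i]] * V[r - 1]
        W.append(s)
    return W

GAMMA = glex_table(3, 4)
B28 = box(GAMMA, 28)  # the labels listed in Table 5
```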
Finally, with this notation, we can rewrite (7.37) in the form \begin{equation} {\tt W}({\bmv k})= \sum_{\bmv j \in B_k } {\tt U}({\bmv k-\bmv j}) {\tt V} ({\bmv j}). \end{equation} \begin{table}[h,t] \caption{The vectors in $B_{28}=\{{\bmv j}|{\bmv j}\le(1,2,1)\}$.} \begin{center} \begin{tabular}{ccccc} $r$ & $j_1$ & $j_2$ & $j_3$ & $D$ \\ \hline 1 & 0 & 0 & 0 & 0\\ 2 & 1 & 0 & 0 & 1\\ 3 & 0 & 1 & 0 & 1\\ 4 & 0 & 0 & 1 & 1\\ 6 & 1 & 1 & 0 & 2\\ 7 & 1 & 0 & 1 & 2\\ 8 & 0 & 2 & 0 & 2\\ 9 & 0 & 1 & 1 & 2\\ 14 & 1 & 2 & 0 & 3\\ 15 & 1 & 1 & 1 & 3\\ 18 & 0 & 2 & 1 & 3\\ 28 & 1 & 2 & 1 & 4\\ \end{tabular} \end{center} \end{table} What can be said about the vectors $({\bmv k-\bmv j})$ as $\bmv j$ ranges over $B_k$? Table 6 lists, for example, the vectors $\bmv j\in B_{28}$ and the associated vectors $\bmv i$ with ${\bmv i}=({\bmv k-\bmv j})$. Also listed are the labels $r(\bmv j)$ and $r(\bmv i)$. Compare columns 2,3,4, which specify the $\bmv j\in B_{28}$, with columns 5,6,7, which specify the associated ${\bmv i}$ vectors. We see that every vector that appears in the $\bmv j$ list also occurs somewhere in the $\bmv i$ list, and vice versa. This is to be expected because the operation of multiplication is commutative: we can also write (7.37) in the form \begin{equation} {\tt W}({\bmv k})= \sum_{\bmv j \in B_k } {\tt U}({\bmv j}) {\tt V} ({\bmv k-\bmv j}). \end{equation} We also observe the more remarkable feature that the two lists are {\em reverses} of each other: running down the $\bmv j$ list gives the same vectors as running up the $\bmv i$ list, and vice versa. This feature is a consequence of our ordering procedure. As indicated earlier, what we really want is a version of (7.37) that involves labels instead of exponent vectors. Looking at Table 6, we see that this is easily done. 
We may equally well think of $B_k$ as containing a collection of labels $r({\bmv j})$, and we may introduce a {\em reversed} array $Brev_k$ of {\em complementary} labels $r^c({\bmv j})$ where \begin{equation} r^c({\bmv j})=r({\bmv i}). \end{equation} That is, for example, $B_{28}$ would consist of the first column of Table 6 and $Brev_{28}$ would consist of the last column of Table 6. Finally, we have already introduced $k$ as being the label associated with ${\bmv k}$. With these understandings in mind, we may rewrite (7.37) in the label form \begin{equation} {\tt W}(k)= \sum_{r \in B_k } {\tt U}(r^c) {\tt V} (r)=\sum_{r \in B_k } {\tt U}(r) {\tt V} (r^c). \end{equation} This is the rule ${\tt W} = {\tt PROD[U,V]}$ for multiplying pyramids. In the language of [6], $B_k$ and $Brev_k$ are {\em look back tables} that, given a $k$, look back to find all monomial pairs with labels $r,r^c$ which produce, when multiplied, the monomial with label $k$. \begin{table}[h,t] \caption{The vectors ${\bmv j}$ and ${\bmv i}= ({\bmv k-\bmv j})$ for ${\bmv j}\in B_{28}$ and $k_a=\Gamma_{28,a}$.} \begin{center} \begin{tabular}{cccccccc} $r({\bmv j})$&$j_1$&$j_2$& $j_3$&$i_1$&$i_2$&$i_3$&$r({\bmv i})$ \\ \hline 1 & 0 & 0 & 0 & 1&2&1&28\\ 2 & 1 & 0 & 0 & 0&2&1&18\\ 3 & 0 & 1 & 0 & 1&1&1&15\\ 4 & 0 & 0 & 1 & 1&2&0&14\\ 6 & 1 & 1 & 0 & 0&1&1&9\\ 7 & 1 & 0 & 1 & 0&2&0&8\\ 8 & 0 & 2 & 0 & 1&0&1&7\\ 9 & 0 & 1 & 1 & 1&1&0&6\\ 14 & 1 & 2 & 0 & 0&0&1&4\\ 15 & 1 & 1 & 1 & 0&1&0&3\\ 18 & 0 & 2 & 1 & 1&0&0&2\\ 28 & 1 & 2 & 1 & 0&0&0&1\\ \end{tabular} \end{center} \end{table} \subsubsection{Implementation of Multiplication} The code shown below in (7.46) illustrates how $B_k$ and $Brev_k$ are constructed using {\em Mathematica}. 
\begin{eqnarray} &&{\tt JSK[list\_, K\_] :=}\nonumber\\ &&{\tt Position[Apply[And,Thread[\texttt{\#}1\texttt{<=\#2\&[\#,K]]]\& /@}\; list, True]//Flatten;} \nonumber\\ &&{\tt B = Table[JSK[GAMMA, GAMMA[[k]]], \{k, 1, L\}];}\nonumber\\ &&{\tt Brev = Reverse\; \texttt{/@}\; B;} \end{eqnarray} As before, some explanation is required. The main tasks are to implement the $\bmv j \le \bmv k $ operation (7.38) and then to employ this implementation. We will begin by implementing the $\bmv j \le \bmv k $ operation. Several steps are required, and each of them is described briefly below: \begin{itemize} \item When {\em Mathematica} is presented with a statement of the form $j<=k$, with $j$ and $k$ being {\em integers}, it replies with the answer True or the answer False. (Here $j<=k$ denotes $j\le k$.) Two sample {\em Mathematica} runs are shown below: \begin{eqnarray} &&\texttt{3\;<=\;4}\nonumber\\ &&{\rm{True}} \end{eqnarray} \begin{eqnarray} &&\texttt{5\;<=\;4}\nonumber\\ &&{\rm{False}} \end{eqnarray} \item A {\em Mathematica} function can be constructed that does the same thing. It takes the form \begin{eqnarray} &&{\tt \texttt{\#1\;<=\;\#2\;\&\;}[j,k]} \end{eqnarray} Here the symbols {\tt \#1} and {\tt \#2} set up two {\em slots} and the symbol {\tt \&} means the operation to its left is to be regarded as a function and is to be applied to the arguments to its right by inserting the arguments into the slots. Below is a short {\em Mathematica} run illustrating this feature. \begin{eqnarray} &&{\tt j=3; k=4;}\nonumber\\ &&{\tt \texttt{\#1\;<=\;\#2\;\&\;}[j,k]}\nonumber\\ &&{\rm{True}} \end{eqnarray} Observe that the output of this run agrees with that of (7.47). \item The same operation can be performed on pairs of {\em arrays} (rather than pairs of numbers) in such a way that corresponding entries from each array are compared, with the output then being an array of True and False answers. This is done using the {\em Mathematica} command \p{Thread}. 
Below is a short {\em Mathematica} run illustrating this feature. \begin{eqnarray} &&{\tt j=\{1,2,3\};k=\{4,5,1\};}\nonumber\\ &&{\tt Thread[ \texttt{\#1\;<=\;\#2\;\&\;}[j,k]]}\nonumber\\ &&\{{\rm{True}},{\rm{True}},{\rm{False}}\} \end{eqnarray} Note that the first two answers in the output array are True because the statements $1\le4$ and $2\le5$ are true. The last answer in the output array is False because the statement $3\le1$ is false. \item Suppose, now, that we are given two arrays $\bm j$ and $\bm k$ and we want to determine if ${\bm j}\le {\bm k}$ in the sense of (7.38). This can be done by {\em applying} the logical \p{And} operation (using the {\em Mathematica} command \p{Apply}) to the True/False output array described above. Below is a short {\em Mathematica} run illustrating this feature. \begin{eqnarray} &&{\tt j=\{1,2,3\};k=\{4,5,1\};}\nonumber\\ &&{\tt Apply[And,Thread[\texttt{\#1\;<=\;\#2{\;}\&{\;}}[j,k]]]}\nonumber\\ &&{\rm{False}} \end{eqnarray} Note that the output answer is False because at least one of the entries in the output array in (7.51) is False. The output answer would be True if, and only if, all entries in the output array in (7.51) were True. \item Now that the ${\bm j}\le {\bm k}$ operation has been defined for two exponent arrays, we would like to construct a related operator/function, to be called \p{JSK}. (Here the letter $\tt S$ stands for {\em smaller than or equal to}.) It will depend on the exponent array $\bm k$, and its task will be to search a list of exponent arrays to find those $\bm j$ within it that satisfy ${\bm j}\le {\bm k}$. The first step in this direction is to slightly modify the function appearing in (7.52). Below is a short {\em Mathematica} run that specifies this modified function and illustrates that it has the same effect. 
\begin{eqnarray} &&{\tt j=\{1,2,3\};k=\{4,5,1\};}\nonumber\\ &&{\tt Apply[And,Thread[ \texttt{\#1\;<=\;\#2\;\&\;}[\texttt{\#},k]]]\;\texttt{\&}\;}[j]\nonumber\\ &&{\rm{False}} \end{eqnarray} Comparison of the functions in (7.52) and (7.53) reveals that what has been done is to replace the argument $j$ in (7.52) by a slot {\tt \#}, then follow the function by the character {\tt \&}, and finally add the symbols ${\tt [j]}$. What this modification does is to redefine the function in such a way that it acts on what follows the second {\tt \&}. \item The next step is to extend the function appearing in (7.53) so that it acts on a list of exponent arrays. To do this, we replace the symbols ${\tt [j]}$ by the symbols {\tt /@} ${\tt list}$. The symbols {\tt /@} indicate that what stands to their left is to act on what stands to their right, and what stands to their right is a list of exponent arrays. The result of this action will be a list of True/False results with one result for each exponent array in the list. Below is a short {\em Mathematica} run that illustrates how the further modified function acts on lists. \begin{eqnarray} &&{\tt k=\{4,5,1\};}\nonumber\\ &&{\tt ja=\{3,4,1\};jb=\{1,2,3\};jc=\{1,2,1\};}\nonumber\\ &&{\tt list=\{ja,jb,jc\};}\nonumber\\ &&{\tt Apply[And,Thread[\texttt{\#1\;<=\;\#2\;\&\;}[ \texttt{\#},k]]]\; \texttt{\&\;/@}\;list }\nonumber\\ &&\{\rm{True},\rm{False},\rm{True}\} \end{eqnarray} Observe that the output answer list is $\{\rm{True},\rm{False},\rm{True}\}$ because $\{3,4,1\}\le\{4,5,1\}$ is True, $\{1,2,3\}\le\{4,5,1\}$ is False, and $\{1,2,1\}\le\{4,5,1\}$ is True. \item What we would really like to know is where the True items are in the list, because that will tell us where the $\bm j$ that satisfy ${\bm j}\le {\bm k}$ reside. This can be accomplished by use of the {\em Mathematica} command \p{Position} in conjunction with the result True. Below is a short {\em Mathematica} run that illustrates how this works. 
\begin{eqnarray} &&{\tt k=\{4,5,1\};}\nonumber\\ &&{\tt ja=\{3,4,1\};jb=\{1,2,3\};jc=\{1,2,1\};}\nonumber\\ &&{\tt list=\{ja,jb,jc\};}\nonumber\\ &&{\tt Position[Apply[And,Thread[\texttt{\#1\;<=\;\#2\;\&\;}[\texttt{\#},k]]]\;\texttt{\&\;/@}\;list,True]} \nonumber\\ &&\{\{1\},{\;}\{3\}\} \end{eqnarray} Note that the output is an array of positions in the list for which ${\bm j}\le {\bm k}$. There is, however, still one defect. Namely, the output array is an array of single-element subarrays, and we would like it to be simply an array of location numbers. This defect can be remedied by appending the {\em Mathematica} command \p{Flatten}, preceded by {\tt //}, to the instruction string in (7.55). The short {\em Mathematica} run below illustrates this modification. \begin{eqnarray} &&{\tt k=\{4,5,1\};}\nonumber\\ &&{\tt ja=\{3,4,1\};jb=\{1,2,3\};jc=\{1,2,1\};}\nonumber\\ &&{\tt list=\{ja,jb,jc\};}\nonumber\\ &&{\tt Position[Apply[And,Thread[\texttt{\#1\;<=\;\#2\;\&\;}[\texttt{\#},k]]]\;\texttt{\&\;/@}\;list,True]\texttt{//}Flatten} \nonumber\\ &&\{1,{\;}3\} \end{eqnarray} Now the output is a simple array containing the positions in the list for which ${\bm j}\le {\bm k}$. \item The last step is to employ the ingredients in (7.56) to define the operator ${\tt JSK[list,k]}$. The short {\em Mathematica} run below illustrates how this can be done. \begin{eqnarray} &&{\tt k=\{4,5,1\};}\nonumber\\ &&{\tt ja=\{3,4,1\};jb=\{1,2,3\};jc=\{1,2,1\};}\nonumber\\ &&{\tt list=\{ja,jb,jc\};}\nonumber\\ &&{\tt JSK[list\_,k\_]\;:=}\nonumber\\ &&{\tt Position[Apply[And,Thread[\texttt{\#1\;<=\;\#2\;\&\;}[\texttt{\#},k]]]\;\texttt{\&\;/@}\;list,True]\texttt{//}Flatten;}\nonumber\\ &&{\tt JSK[list,\;k]}\nonumber\\ &&\{1,{\;}3\} \end{eqnarray} Lines 4 and 5 above define the operator ${\tt JSK[list,k]}$, line 6 invokes it, and line 7 displays its output, which agrees with the output of (7.56). 
\item With the operator ${\tt JSK[list,k]}$ in hand, we are prepared to construct tables $B$ and $Brev$ that will contain the $B_k$ and the $Brev_k$. The short {\em Mathematica} run below illustrates how this can be done. \begin{eqnarray} &&{\tt B=Table[JSK[GAMMA, GAMMA[[k]]], \{k, 1, L, 1\}] ;}\nonumber\\ &&{\tt Brev=Reverse\;\texttt{/@}\; B;}\nonumber\\ &&{\tt B[[8]]}\nonumber\\ &&{\tt Brev[[8]]}\nonumber\\ &&{\tt B[[28]]}\nonumber\\ &&{\tt Brev[[28]]}\nonumber\\ &&\{1,3,8\}\nonumber\\ &&\{8,3,1\}\nonumber\\ &&\{1,2,3,4,6,7,8,9,14,15,18,28\}\nonumber\\ && \{28,18,15,14,9,8,7,6,4,3,2,1\} \end{eqnarray} The first line employs the {\em Mathematica} command \p{Table} in combination with an implied Do loop to produce a two-dimensional array $\tt B$. Values of $k$ in the range $[1,L]$ are selected sequentially. For each $k$ value the associated exponent array ${\bmv k}(k)={\tt GAMMA[[k]]}$ is obtained. The operator \p{JSK} then searches the full \p{GAMMA} array to find the list of $r$ values associated with the ${\bmv j}\le{\bmv k}$. All these $r$ values are listed in a row. Thus, the array $\tt B$ consists of a list of $L$ rows, of varying width. The rows are labeled by $k\in[1,L]$, and in each row are the $r$ values associated with the ${\bmv j}\le{\bmv k}$. In the second line the {\em Mathematica} command \p{Reverse} is applied to $\tt B$ to produce a second array called ${\tt Brev}$. Its rows are the reverse of those in $\tt B$. For example, as the {\em Mathematica} run illustrates, ${\tt B[[8]]}$, which is the 8$^{th}$ row of $\tt B$, contains the list $\{1,3,8\}$, and ${\tt Brev[[8]]}$ contains the list $\{8,3,1\}$. Inspection of the $r=8$ monomial in Table 4, that with exponents $\{0,2,0\}$, shows that it has the monomials with exponents \{0,0,0\}, \{0,1,0\}, and \{0,2,0\} as factors. And further inspection of Table 4 shows that the exponents of these factors have the $r$ values $\{1,3,8\}$. 
Similarly ${\tt B[[28]]}$, which is the 28$^{th}$ row of $B$, contains the same entries that appear in the first column of Table 6. And ${\tt Brev[[28]]}$, which is the 28$^{th}$ row of $Brev$, contains the same entries that appear in the last column of Table 6. \end{itemize} Finally, we need to explain how the arrays $B$ and $Brev$ can be employed to carry out polynomial multiplication. This can be done using the {\em Mathematica} dot product command: \begin{itemize} \item The exhibit below shows a simple {\em Mathematica} run that illustrates the use of the dot product command. \begin{eqnarray} &&{\tt Unprotect[V];}\nonumber\\ &&{\tt U=\{.1,.2,.3,.4,.5,.6,.7,.8\} ;}\nonumber\\ &&{\tt V=\{1.1,1.2,1.3,1.4,1.5,1.6,1.7,1.8\};}\nonumber\\ &&{\tt U.V}\nonumber\\ &&{\tt u=\{1,3,5\};}\nonumber\\ &&{\tt v=\{6,4,2\};}\nonumber\\ &&{\tt U[[u]]}\nonumber\\ &&{\tt V[[v]]}\nonumber\\ &&{\tt U[[u]].V[[v]]}\nonumber\\ &&5.64\nonumber\\ &&\{.1,.3,.5\}\nonumber\\ &&\{1.6,1.4,1.2\}\nonumber\\ &&1.18 \end{eqnarray} As before, $\tt V$ must be unprotected. See line 1. The rest of the first part of this run (lines 2 through 4) defines two vectors $\tt U$ and $\tt V$ and then computes their dot product. Note that if we multiply the entries in $\tt U$ and $\tt V$ pairwise and add, we get the result \begin{equation} .1\times1.1+.2\times1.2 +\cdots+.8\times1.8=5.64,\nonumber\\ \end{equation} which agrees with the {\em Mathematica} result for ${\tt U}\cdot {\tt V}$. See line 10. The second part of this {\em Mathematica} run, lines 5 through 9, illustrates a powerful feature of the {\em Mathematica} language. Suppose, as illustrated, we define two arrays $\tt u$ and $\tt v$ of integers, and use these arrays as {\em arguments} for the vectors by writing ${\tt U[[u]]}$ and ${\tt V[[v]]}$. Then {\em Mathematica} uses the integers in the two arrays $\tt u$ and $\tt v$ as labels to select the corresponding entries in $\tt U$ and $\tt V$, and from these entries it makes new corresponding vectors. 
In this example, the 1$^{st}$, 3$^{rd}$, and 5$^{th}$ entries in $\tt U$ are $.1$, $.3$, and $.5$. And the 6$^{th}$, 4$^{th}$, and 2$^{nd}$ entries in $\tt V$ are $1.6$, $1.4$, and $1.2$. Consequently, we find that \begin{equation} {\tt U[[u]]}=\{.1,.3,.5\},\nonumber\\ \end{equation} \begin{equation} {\tt V[[v]]}=\{1.6,1.4,1.2\},\nonumber\\ \end{equation} in agreement with lines 11 and 12 of the {\em Mathematica} results. Correspondingly, we expect that ${\tt U[[u]]}\cdot {\tt V[[v]]}$ will have the value \begin{equation} {\tt U[[u]]}\cdot {\tt V[[v]]}=.1\times1.6+.3\times1.4+.5\times1.2=1.18,\nonumber\\ \end{equation} in agreement with the last line of the {\em Mathematica} output. \item Now suppose, as an example, that we set $k=8$ and use ${\tt B[[k]]}$ and ${\tt Brev[[k]]}$ in place of the arrays $\tt u$ and $\tt v$. The {\em Mathematica} fragment below shows what happens when this is done. \begin{eqnarray} &&{\tt k=8;}\nonumber\\ &&{\tt B[[k]]}\nonumber\\ &&{\tt Brev[[k]]}\nonumber\\ &&{\tt U[[B[[k]]]]}\nonumber\\ &&{\tt V[[Brev[[k]]]]}\nonumber\\ &&{\tt U[[B[[k]]]]\cdot V[[Brev[[k]]]]}\nonumber\\ &&\{1,3,8\}\nonumber\\ &&\{8,3,1\}\nonumber\\ &&\{.1,.3,.8\}\nonumber\\ &&\{1.8,1.3,1.1\}\nonumber\\ &&1.45 \end{eqnarray} From (7.58) we see that ${\tt B[[8]]}=\{1,3,8\}$ and ${\tt Brev[[8]]}=\{8,3,1\}$ in agreement with lines 7 and 8 of the {\em Mathematica} output above. Also, the 1$^{st}$, 3$^{rd}$, and 8$^{th}$ entries in $\tt U$ are .1, .3, and .8. And the 8$^{th}$, 3$^{rd}$, and 1$^{st}$ entries in $\tt V$ are 1.8, 1.3, and 1.1. Therefore we expect the results \begin{equation} {\tt U[[B[[k]]]]}= \{.1,.3,.8\},\nonumber\\ \end{equation} \begin{equation} {\tt V[[Brev[[k]]]]}= \{1.8,1.3,1.1\},\nonumber\\ \end{equation} \begin{equation} {\tt U[[B[[k]]]]}\cdot {\tt V[[Brev[[k]]]]}= .1\times1.8+.3\times1.3+.8\times1.1=1.45,\nonumber\\ \end{equation} in agreement with the last three lines of (7.60). 
\item Finally, suppose we carry out the operation ${\tt U[[B[[k]]]]}\cdot {\tt V[[Brev[[k]]]]}$ for all $k\in[1,L]$ and put the results together in a Table with entries labeled by $k$. According to (7.45), the result will be the pyramid for the product of the two polynomials whose individual pyramids are $\tt U$ and $\tt V$. The {\em Mathematica} fragment below shows how this can be done to define a {\em product} function, called \p{PROD}, that acts on general pyramids $\tt U$ and $\tt V$, using the command Table with an implied Do loop over $k$. \begin{equation} {\tt PROD[U\_, V\_] := Table[U[[B[[k]]]]\cdot V[[Brev[[k]]]], \{k, 1, L, 1\}]}; \nonumber\\ \end{equation} \end{itemize} \subsubsection{Implementation of Powers} With the operation of multiplication in hand, it is easy to implement the operation of raising a pyramid to a power. The code shown below in (7.61) demonstrates how this can be done. \begin{eqnarray} &&{\tt POWER[U\_, 0] := C1;}\nonumber\\ &&{\tt POWER[U\_, 1] := U;}\nonumber\\ &&{\tt POWER[U\_, 2] := PROD[U, U];}\nonumber\\ &&{\tt POWER[U\_, 3] := PROD[U, POWER[U, 2]];}\nonumber\\ &&... \end{eqnarray} Here ${\tt C1}$ is the pyramid for the Taylor series having {\em one} as its {\em constant} term and all other terms zero, \begin{equation} {\tt C1=\{1,0,0,0,\cdots\}}. \end{equation} It can be set up by the {\em Mathematica} code \begin{equation} {\tt C1=Table[KroneckerDelta[k,1],\{k,1,L,1\}];} \end{equation} which employs the Table command, the Kronecker delta function, and an implied Do loop over $k$. This code should be executed before executing (7.61), but after the value of $L$ has been established. \subsubsection{Replacement Rule} \begin{mydef} The transformation $A(\bmv z) \leadsto {\tt A} $ means replacement of every real variable $z_a$ in the {\it arithmetic expression} $A(\bmv z)$ with an associated pyramid, and of every operation on real variables in $A(\bmv z)$ with the associated operation on pyramids. 
\end{mydef} Automatic Differentiation is based on the following corollary: if $A(\bmv z) \leadsto {\tt A} $, then ${\tt A}$ is the pyramid of $A(\bmv z)$. For simplicity, we will begin our discussion of the replacement rule with examples involving only a single variable $z$. In this case the monomial labeling, i.e.\ the relation between labels and exponents, is given directly by the simple rules \begin{equation} r(j)=1+j{\;}{\rm{and}}{\;}j(r)=r-1. \end{equation} See Table 7. \begin{table}[h,t] \caption{A labeling scheme for monomials in one variable.} \begin{center} \begin{tabular}{cc} $r$ & $j$ \\ \hline 1 & 0 \\ 2 & 1 \\ 3 & 2 \\ 4 & 3 \\ $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ \\ \end{tabular} \end{center} \end{table} As a first example, consider the expression \begin{equation} A=2+3(z*z). \end{equation} We have agreed to consider the case $m=1$. Suppose we also set $p=2$, in which case $L=3$. In ascending glex order, see Table 7, the pyramid for $A$ is then \begin{equation} 2+3z^2\leadsto{\tt A}=(2,0,3). \end{equation} Now imagine that $A$ was not such a simple polynomial, but some complicated expression. Then the pyramid ${\tt A}$ could be generated by computing derivatives of $A$ at $z=0$ and dividing them by the appropriate factorials. Automatic differentiation offers another way to find \p{A}. Assume that all operations in the arithmetic expression $A$ have been encoded according to {\it Definition 1}. For our example, these are $+$ and ${\tt PROD}$. Let $\tt C1$ and $\tt Z$ be the pyramids associated with $1$ and $z$, \begin{equation} 1\leadsto{\tt C1}=(1,0,0), \end{equation} \begin{equation} z\leadsto\tt Z=(0,1,0). \end{equation} The quantity $2+3z^2$ results from performing various arithmetic operations on $1$ and $z$. {\it Definition 1} says that the pyramid of $2+3z^2$ is identical to the pyramid obtained by performing the same operations on the pyramids ${\tt C1}$ and $\tt Z$. 
That is, suppose we replace $1$ and $z$ with correctly associated pyramids $\tt C1$ and $\tt Z$, and also replace $*$ with ${\tt PROD}$. Then, upon evaluating ${\tt PROD}$, multiplying by the appropriate scalar coefficients, and summing, the result will be the same pyramid $\tt A$, \begin{equation} 2{\;}{\tt C1}+3{\;}{\tt PROD[Z,Z]}= {\tt A}. \end{equation} In this way, by knowing only the basic pyramids ${\tt C1}$ and ${\tt Z}$ (prepared beforehand), one can compute the pyramid of an arbitrary $A(z)$. Finally, in contrast to numerical differentiation, all numerical operations involved are accurate to machine precision. {\em Mathematica} code that implements (7.69) will be presented shortly in (7.70). Frequently, if $A(z)$ is some complicated expression, the replacement rule will result in a long chain of nested pyramid operations. At every step in the chain, the pyramid resulting from the previous step will be combined with some other pyramid to produce a new pyramid. Each such operation has two arguments, and {\it Definition 1} applies to each step in the chain. Upon evaluating all pyramid operations, the final result will be the pyramid of $A(z)$. By using the replacement operator the above procedure can be represented as: $$1\leadsto {\tt C1}, \ \ z\leadsto {\tt Z}, \ \ A\leadsto {\tt A}. $$ The following general recipe then applies: In order to derive the pyramid associated with some arithmetic expression, apply the $\leadsto$ rule to all its variables, or parts, and replace all operations with operations on pyramids. Here ``apply the $\leadsto$ rule'' to something means replace that something with the associated pyramid. And the term ``parts'' means sub-expressions. {\it Definition 1} guarantees that the result will be the same pyramid $\tt A$ no matter how we split the arithmetic expression $A$ into sub-expressions. It is only necessary to recognize, in case of using sub-expressions, that one pyramid expression should be viewed as a function of another. 
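The computation (7.69) is also easy to transcribe into other languages. Below is an illustrative Python sketch (the name {\tt prod1} is our own for the one-variable specialization of the pyramid product); it reproduces the pyramid $(2,0,3)$ of $2+3z^2$ from the basic pyramids of $1$ and $z$.

```python
def prod1(U, V):
    """One-variable truncated pyramid product: W[k] = sum_{j <= k} U[k-j] V[j]."""
    return [sum(U[k - j] * V[j] for j in range(k + 1)) for k in range(len(U))]

# Pyramids of 1 and z for m = 1, p = 2 (so L = 3), as in (7.67) and (7.68).
C1 = [1, 0, 0]
Z = [0, 1, 0]

# The replacement rule (7.69): 2 C1 + 3 PROD[Z, Z].
A = [2 * c + 3 * w for c, w in zip(C1, prod1(Z, Z))]
# A is [2, 0, 3], agreeing with (7.66).
```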
For illustration, suppose we regard the $A$ given by (7.65) to be the composition of two functions, $F(z)=2+3z$ and $G(z)=z^2$, so that $A(z)=F(G(z))$. Instead of associating a constant and a single variable with their respective pyramids, let us now associate whole sub-expressions. In addition, let us label the pyramid expressions on the right of $\leadsto$ with some names, $\tt F$ and $\tt G$: $$2+3z\leadsto2{\;}{\tt C1}+3{\;}{\tt Z}={\tt F[Z]}$$ $$z^2 \leadsto{\tt PROD[Z,Z]=G[Z] }$$ $$A(z) \leadsto{\tt F[G[Z]] =A}.$$ We have indicated the explicit dependence on $\tt Z$. It is important to note that $\tt F[Z]$ is a pyramid {\em expression} prior to executing any of the pyramid operations, i.e., it is not yet a pyramid, but is simply the result of formal replacements that follow the association rule. {\em Mathematica} code for the simple example (7.69) is shown below, \begin{eqnarray} &&{\tt C1=\{1,0,0 \}; }\nonumber\\ &&{\tt Z=\{0,1,0\}; }\nonumber\\ &&{\tt2\;C1+3\;PROD[Z,Z] }\nonumber\\ && \{2,0,3\} \end{eqnarray} Note that the result (7.70) agrees with (7.66). This example does not use any nested expressions. We will now illustrate how the same results can be obtained using nested expressions. We begin by displaying a simple {\em Mathematica} program/execution that employs ordinary variables, and uses {\em Mathematica}'s intrinsic abilities to handle nested expressions. The program/execution is \begin{eqnarray} &&{\tt f[z\_]:=2+3z;}\nonumber\\ &&{\tt g[z\_]:=z^2;}\nonumber\\ &&{\tt f[g[z]]}\nonumber\\ &&2+3z^2 \end{eqnarray} With {\it Mathematica} the underscore in ${\tt z\_}$ indicates that $\tt z$ is a dummy variable name, and the symbols ${\tt :=}$ indicate that $\tt f$ is defined with a delayed assignment. That is what is done in line one above. The same is done in line two for $\tt g$. Line three requests evaluation of the nested function $ f(g(z))$, and the result of this evaluation is displayed in line four. Note that the result agrees with (7.65).
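The same nesting carries over to pyramids in any language with first-class functions. In the Python sketch below (again with our own minimal {\tt PROD}, not the text's code), {\tt F} and {\tt G} act on pyramids, and the composition {\tt F(G(Z))} reproduces the pyramid of $A$:

```python
L = 3  # m = 1, p = 2

def PROD(u, v):
    # truncated Cauchy product of single-variable pyramids
    return [sum(u[i] * v[k - i] for i in range(k + 1)) for k in range(L)]

C1 = [1, 0, 0]
Z = [0, 1, 0]

def F(z):  # pyramid version of F(z) = 2 + 3 z
    return [2 * a + 3 * b for a, b in zip(C1, z)]

def G(z):  # pyramid version of G(z) = z^2
    return PROD(z, z)

print(F(G(Z)))  # [2, 0, 3]
```

As {\it Definition 1} promises, splitting $A$ into the sub-expressions $F$ and $G$ changes nothing in the final pyramid.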
With this background, we are ready to examine a program with analogous nested pyramid operations. The same comments apply regarding the use of underscores and delayed assignments. The program is \begin{eqnarray} &&{\tt C1=\{1,0,0 \};} \nonumber\\ &&{\tt Z=\{0,1,0\}; }\nonumber\\ &&{\tt F[Z\_]:=2{\;}C1+3{\;}Z;}\nonumber\\ &&{\tt G[Z\_]:=PROD[Z,Z];}\nonumber\\ &&{\tt F[G[Z]]}\nonumber\\ &&{\tt \{2,0,3\}} \end{eqnarray} Note that line (7.72) agrees with line (7.70), and is consistent with line (7.66). We close this subsection with an important consequence of the replacement rule and nested operations, which we call the {\em Taylor} rule. We begin by considering functions of a single variable. Suppose the function $G(x)$ has the special form \begin{equation} G(x)=z^d+x \end{equation} where $z^d$ is some constant. Let $F$ be some other function. Consider the composite (nested) function $A$ defined by \begin{equation} A(x)=F(G(x))=F(z^d+x). \end{equation} Then, assuming the necessary analyticity, by the chain rule $A$ evidently has a Taylor expansion in $x$ about the origin of the form \begin{eqnarray} A&=&A(0)+A^\prime(0)x+(1/2)A^{\prime\prime}(0)x^2+\cdots\nonumber\\ &=&F(z^d)+F^\prime(z^d)x+(1/2)F^{\prime\prime}(z^d)x^2+\cdots. \end{eqnarray} We conclude that if we know the Taylor expansion of $A$ about the origin, then we also know the Taylor expansion of $F$ about $z^d$, and vice versa. Suppose, for example, that \begin{equation} F(z)=1+2z+3z^2 \end{equation} and \begin{equation} z^d=4. \end{equation} Then there is the result \begin{equation} A(x)=F(G(x))=F(z^d+x)=1+2(4+x)+3(4+x)^2=57+26x+3x^2. \end{equation} We now show that this same result can be obtained using pyramids. The {\em Mathematica} fragment below illustrates how this can be done. 
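The same Taylor-rule computation can also be carried out in plain Python; in the sketch below the helpers {\tt PROD}, {\tt C1}, and {\tt X} are our own stand-ins that mirror, but are not, the {\em Mathematica} objects of the text:

```python
L = 3  # m = 1, p = 2

def PROD(u, v):
    # truncated Cauchy product of single-variable pyramids
    return [sum(u[i] * v[k - i] for i in range(k + 1)) for k in range(L)]

C1 = [1, 0, 0]  # pyramid of the constant 1
X = [0, 1, 0]   # pyramid of the monomial x
zd = 4          # expansion point z^d of (7.77)

def F(z):  # 1 C1 + 2 Z + 3 PROD[Z, Z], i.e. F(z) = 1 + 2 z + 3 z^2
    return [c + 2 * a + 3 * b for c, a, b in zip(C1, z, PROD(z, z))]

def G(x):  # zd C1 + X, i.e. G(x) = z^d + x
    return [zd * c + a for c, a in zip(C1, x)]

print(F(G(X)))  # [57, 26, 3]
```

The output is the Taylor expansion of $F$ about $z^d=4$, in agreement with (7.78).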
\begin{eqnarray} &&{\tt C1 = \{1, 0, 0\};}\nonumber\\ &&{\tt X = \{0, 1, 0\};}\nonumber\\ &&{\tt zd = 4;}\nonumber\\ &&{\tt F[Z\_] := 1\;C1 + 2\;Z + 3\;PROD[Z, Z];}\nonumber\\ &&{\tt G[X\_] := zd{\;}C1 + X;}\nonumber\\ &&{\tt F[G[X]]}\nonumber\\ &&\{57, 26, 3\} \end{eqnarray} Note that (7.79) agrees with (7.78). Let us also illustrate the Taylor rule in the two-variable case. Let $F(z_1,z_2)$ be some function of two variables. Introduce the functions $G(x_1)$ and $H(x_2)$ having the special forms \begin{equation} G(x_1)=z_1^d+x_1, \end{equation} \begin{equation} H(x_2)=z_2^d+x_2, \end{equation} where $z_1^d$ and $z_2^d$ are some constants. Consider the function $A$ defined by \begin{equation} A(x_1,x_2)=F(G(x_1),H(x_2))=F(z_1^d+x_1,z_2^d+x_2). \end{equation} Then, again assuming the necessary analyticity, by the chain rule $A$ evidently has a Taylor expansion in $x_1$ and $x_2$ about the origin $(0,0)$ of the form \begin{eqnarray} A&=&A(0,0)+[\partial_1A(0,0)]x_1+[\partial_2A(0,0)]x_2\nonumber\\ &&+(1/2)[(\partial_1)^2A(0,0)]x_1^2+[\partial_1\partial_2A(0,0)]x_1x_2+(1/2)[(\partial_2)^2A(0,0)]x_2^2+\cdots\nonumber\\ &=&F(z_1^d,z_2^d)+[\partial_1F(z_1^d,z_2^d)]x_1+[\partial_2F(z_1^d,z_2^d)]x_2\nonumber\\ &&+(1/2)[(\partial_1)^2F(z_1^d,z_2^d)]x_1^2+[\partial_1\partial_2F(z_1^d,z_2^d)]x_1x_2+(1/2)[(\partial_2)^2F(z_1^d,z_2^d)]x_2^2+\cdots\nonumber\\ \end{eqnarray} where \begin{equation} \partial_1=\partial/\partial x_1, {\;}{\;}\partial_2=\partial/\partial x_2 \end{equation} when acting on $A$, and \begin{equation} \partial_1=\partial/\partial z_1, {\;}{\;}\partial_2=\partial/\partial z_2 \end{equation} when acting on $F$. We conclude that if we know the Taylor expansion of $A$ about the origin $(0,0)$, then we also know the Taylor expansion of $F$ about $(z_1^d,z_2^d)$, and vice versa. As a concrete example, suppose that \begin{equation} F(z_1,z_2)=1 + 2 z_1 + 3 z_2 + 4 z_1^2 + 5 z_1 z_2 + 6 z_2^2 \end{equation} and \begin{equation} z_1^d=7,{\;}{\;}z_2^d=8.
\end{equation} Then, hand calculation shows that $F(G(x_1),H(x_2))$ takes the form \begin{eqnarray} F(z_1^d+x_1,z_2^d+x_2)&=&F(G(x_1),H(x_2))\nonumber\\ &=&899 + 98 x_1 + 4 x_1^2 + 134 x_2 + 5 x_1 x_2 + 6 x_2^2. \end{eqnarray} Below is a {\em Mathematica} execution that finds the same result, \begin{eqnarray} &&{\tt F[z1\_, z2\_] := 1 + 2{\;}z1 + 3{\;}z2 + 4{\;}z1^2 + 5{\;}z1{\;} z2 + 6{\;} z2^2}\nonumber\\ &&{\tt G[x1\_] := zd1 + x1;}\nonumber\\ &&{\tt H[x2\_] := zd2 + x2;}\nonumber\\ &&{\tt zd1 = 7;}\nonumber\\ &&{\tt zd2 = 8;}\nonumber\\ &&{\tt A=F[G[x1], H[x2]]}\nonumber\\ &&{\tt Expand[A]}\nonumber\\ &&1 + 2{\;}(7 + x1) + 4{\;}(7 + x1)^2 + 3{\;}(8 + x2) + 5{\;}(7 + x1){\;}(8 + x2) + 6{\;}(8 + x2)^2\nonumber\\ && 899 + 98{\;}x1 + 4{\;}x1^2 + 134{\;}x2 + 5{\;}x1{\;}x2 + 6{\;}x2^2\nonumber\\ \end{eqnarray} \newpage The calculation above dealt with the case of a function of two ordinary variables. We now illustrate, for the same example, that there is an analogous result for pyramids. For future reference, Table 8 shows our standard modified glex sequencing applied to the case of two variables. \begin{table}[h,t] \caption{A labeling scheme for monomials in two variables.} \begin{center} \begin{tabular}{ccc} $r$ & $j_1$ & $j_2$ \\ \hline 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 0 & 1 \\ 4 & 2 & 0 \\ 5 & 1 & 1 \\ 6 & 0 & 2 \\ 7 & 3 & 0 \\ 8 & 2 & 1 \\ 9 & 1 & 2 \\ 10 & 0 & 3 \\ $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ \\ \end{tabular} \end{center} \end{table} Following the replacement rule, we should make the substitutions \begin{equation} z^d_1 + x_1\leadsto{\tt zd1{\;}C1+X1}, \end{equation} \begin{equation} z^d_2 + x_2\leadsto{\tt zd2{\;}C1+X2}, \end{equation} \begin{eqnarray} &&1 + 2{\;}z_1 + 3{\;}z_2 + 4{\;}z_1^2 + 5{\;}z_1{\;}z_2 + 6{\;}z_2^2\leadsto\nonumber\\ &&{\tt C1+2{\;}Z1+3{\;}Z2+4{\;}PROD[Z1,Z1]+5{\;}PROD[Z1,Z2]+6{\;}PROD[Z2,Z2]}.
\nonumber\\ \end{eqnarray} The {\em Mathematica} fragment below, executed for the case $m=2$ and $p=2$, in which case $L=6$, illustrates how the analogous result is obtained using pyramids, \begin{eqnarray} &&{\tt C1 = \{1, 0, 0, 0, 0, 0\};}\nonumber\\ &&{\tt X1 = \{0, 1, 0, 0, 0, 0\};}\nonumber\\ &&{\tt X2 = \{0, 0, 1, 0, 0, 0\};}\nonumber\\ &&{\tt F[Z1\_, Z2\_] := C1 + 2{\;}Z1 + 3{\;}Z2 + 4{\;}PROD[Z1, Z1] + 5{\;}PROD[Z1, Z2]}\nonumber\\ &&{\tt + 6{\;}PROD[Z2, Z2];}\nonumber\\ &&{\tt G[X1\_] := zd1{\;}C1 + X1;}\nonumber\\ &&{\tt H[X2\_] := zd2{\;}C1 + X2;}\nonumber\\ &&{\tt zd1 = 7;}\nonumber\\ &&{\tt zd2 = 8;}\nonumber\\ &&{\tt F[G[X1], H[X2]]}\nonumber\\ &&\{899, 98, 134, 4, 5, 6\} \end{eqnarray} Note that, when use is made of Table 8, the last line of (7.93) agrees with (7.88) and the last line of (7.89). \subsection{Replacement Rule and Numerical Integration} \subsubsection{Numerical Integration} Consider the set of differential equations (1.5). A standard procedure for their numerical integration from an initial time $t^i=t^0$ to some final time $t^f$ is to divide the time axis into a large number of steps $N$, each of small duration $h$, thereby introducing successive times $t^n$ defined by the relation \begin{equation} t^n = t^0 + nh {\;}{\;}{\rm{with}}{\;}{\;}n=0,1,\cdots,N. \end{equation} By construction, there will also be the relation \begin{equation} Nh=t^f-t^i. \end{equation} The goal is to compute the vectors $\bm{z}^n$, where \begin{equation} \bm{z}^n = \bm{z} (t^n), \end{equation} starting from the vector $\bm{z}^0$. The vector $\bm{z}^0$ is assumed given as a set of definite numbers, i.e., the initial conditions at $t^0$. If we assume Poincar\'{e} analyticity in $t$, we may convert the set of differential equations (1.5) into a set of recursion relations for the $\bm{z}^n$ in such a way that the $\bm{z}^n$ obtained by solving the recursion relations differ from the true $\bm{z}^n$ by only small truncation errors of order $h^m$.
(Here $m$ is {\em not} the number of variables, but rather some fixed integer describing the accuracy of the integration method.) One such procedure, fourth-order {\em Runge Kutta} (RK4), is the set of marching/recursion rules \begin{equation} \bm{z}^{n+1} = \bm{z}^n + \frac{1}{6} (\bm{a} + 2\bm{b} + 2\bm{c} + \bm{d}) \end{equation} where, at each step, \begin{equation} \bm{a} = h \bm{f} (\bm{z}^n, t^n), \end{equation} \[ \bm{b} = h \bm{f} (\bm{z}^n + \frac{1}{2} \bm{a}, t^n + \frac{1}{2} h), \] \[ \bm{c} = h \bm{f} (\bm{z}^n + \frac{1}{2} \bm{b}, t^n + \frac{1}{2} h), \] \[ \bm{d} = h \bm{f} (\bm{z}^n + \bm{c}, t^n + h). \] Thanks to the genius of Runge and Kutta, the relations (7.97) and (7.98) have been constructed in such a way that the method is locally (at each step) correct through order $h^4$, and makes local truncation errors of order $h^5$. In the case of a single variable, and therefore a single differential equation, the relations (7.97) and (7.98) may be encoded in the {\em Mathematica} form shown below. Here ${\tt Zvar}$ is the dependent variable, $\tt t$ is the time, ${\tt Zt}$ is a temporary variable, ${\tt tt}$ is a temporary time, and $\tt ns$ is the number of integration steps. The program employs a Do loop over $\tt i$ so that the operations (7.97) and (7.98) are carried out ${\tt ns}$ times. 
\begin{eqnarray} &&\hspace{-.5cm}{\tt RK4:=(} \nonumber \\ &&{\tt t0=t};\nonumber\\ &&\hspace{-.5cm}{\tt Do[ } \nonumber \\ &&{\tt Aa=h{\;}F[Zvar,t]; }\nonumber \\ &&{\tt Zt=Zvar+(1/2)Aa; }\nonumber \\ &&{\tt tt=t+h/2; }\nonumber \\ &&{\tt Bb=h{\;}F[Zt,tt]; }\nonumber \\ &&{\tt Zt=Zvar+(1/2)Bb; } \nonumber \\ &&{\tt Cc=h{\;}F[Zt,tt]; }\nonumber \\ &&{\tt Zt=Zvar+Cc; }\nonumber \\ &&{\tt tt=t+h;}\nonumber \\ &&{\tt Dd=h{\;}F[Zt,tt]; }\nonumber \\ &&{\tt Zvar=Zvar+(1/6)(Aa+2{\;}Bb +2{\;}Cc+Dd); }\nonumber \\ &&{\tt t=t0+i{\;}h;},\nonumber \\ &&{\tt \{i,1,ns,1\} }\nonumber \\ &&{\tt ] } \nonumber \\ &&\hspace{-.3cm}{\tt ) } \nonumber\\ \end{eqnarray} \subsubsection{Replacement Rule, Single Equation/Variable Case} We now make what, for our purposes, is a fundamental observation: The operations that occur in the Runge Kutta recursion rules (7.97) and (7.98) and realized in the code above can be extended to pyramids by application of the replacement rule. In particular, the dependent variable $\bm{z}$ can be replaced by a pyramid, and the various operations involved in the recursion rules can be replaced by pyramid operations. Indeed if we look at the code above, apart from the evaluation of $\tt F$, we see that the quantities ${\tt Zvar}$, ${\tt Zt}$, ${\tt Aa}$, ${\tt Bb}$, ${\tt Cc}$, and ${\tt Dd}$ can be viewed, if we wish, as pyramids since the only operations involved are scalar multiplication and addition. The only requirement for a pyramidal interpretation of the {\tt RK4} {\em Mathematica} code is that the right side of the differential equation, ${\tt F[*,*]}$, be defined for pyramids. Finally, we remark that the features that make it possible to interpret the {\tt RK4} {\em Mathematica} code either in terms of ordinary variables or pyramidal variables will hold for {\em Mathematica} realizations of many other familiar numerical integration methods including other forms of Runge Kutta, predictor-corrector methods, and extrapolation methods. 
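For readers working outside {\em Mathematica}, the marching rules (7.97) and (7.98) translate directly. The Python sketch below is our own minimal version, not the text's code; like the {\tt RK4} routine above, it touches the state only through addition and scalar multiplication, and so is indifferent to whether {\tt z} is a scalar or an array:

```python
import math

def rk4(f, z, t, h, ns):
    """Classical fourth-order Runge-Kutta marching, the rules (7.97)-(7.98).
    Because only addition and scalar multiplication act on z, the same
    code serves scalar states, numpy arrays, or arrays of pyramids."""
    for _ in range(ns):
        a = h * f(z, t)
        b = h * f(z + 0.5 * a, t + 0.5 * h)
        c = h * f(z + 0.5 * b, t + 0.5 * h)
        d = h * f(z + c, t + h)
        z = z + (a + 2.0 * b + 2.0 * c + d) / 6.0
        t = t + h
    return z, t

# quick scalar check: dz/dt = -z with z(0) = 1 has the exact solution e^{-t}
zfin, tfin = rk4(lambda z, t: -z, 1.0, 0.0, 0.1, 10)
print(zfin)  # close to exp(-1) = 0.367879..., up to O(h^4) truncation error
```

The local error per step is $O(h^5)$, so the accumulated error over a fixed interval is $O(h^4)$, consistent with the remarks above.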
To make these ideas concrete, and to understand their implications, let us begin with a simple example. Suppose, in the single variable case, that the right side of the differential equation has the simple form \begin{equation} f(z,t)=-2tz^2. \end{equation} The differential equation with this right side can be integrated analytically to yield the solution \begin{equation} z(t)=z^0/[1+z^0(t-t^0)^2]. \end{equation} In particular, for the case $t^0=0$, $z^0=1$, and $t=1$, there is the result \begin{equation} z(1)=z^0/[1+z^0]=1/2. \end{equation} Let us also integrate the differential equation with the right side (7.100) numerically. Shown below is the result of running the associated {\em Mathematica} Runge Kutta code for this case. \begin{eqnarray} &&{\tt Clear[\texttt{"}Global`*\texttt{"}];}\nonumber \\ &&{\tt F[Z\_,t\_]:=-2{\;} t{\;} Z^2;}\nonumber\\ &&{\tt h=.1;}\nonumber\\ &&{\tt ns=10;}\nonumber\\ &&{\tt t=0;}\nonumber\\ &&{\tt Zvar=1.;}\nonumber\\ &&{\tt RK4;}\nonumber\\ &&{\tt t}\nonumber\\ &&{\tt Zvar}\nonumber\\ &&1.\nonumber\\ &&0.500001\nonumber\\ \end{eqnarray} Note that the last line of (7.103) agrees with (7.102) save for a ``1'' in the last entry. As expected, and as experimentation shows, this small difference, due to accumulated truncation error, becomes even smaller if ${\tt h}$ is decreased (and correspondingly, ${\tt ns}$ is increased). Suppose we expand the solution (7.102) about the design initial condition $z^{d0}=1$ by replacing $z^0$ by $z^{d0}+x$ and expanding the result in a Taylor series in $x$ about the point $x=0$. Below is a {\em Mathematica} run that performs this task. \begin{eqnarray} &&{\tt zd0 = 1;}\nonumber\\ &&{\tt Series[(zd0 + x)/(1 + zd0 + x), \{x, 0, 5\}]}\nonumber\\ &&\frac{1}{2}+\frac{x}{4}-\frac{x^2}{8}+\frac{x^3}{16}-\frac{x^4}{32}+\frac{x^5}{64}+O[x]^6\nonumber\\ \end{eqnarray} We will now see that the same Taylor series can be obtained by the operation of numerical integration applied to pyramids.
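The corresponding pyramid computation can be reproduced in plain Python before examining the {\em Mathematica} version. In the sketch below (our own construction, not the text's code) a pyramid is a numpy coefficient vector and {\tt PROD} is a truncated convolution:

```python
import numpy as np

p = 5
L = p + 1  # pyramid length for m = 1

def PROD(u, v):
    # truncated Cauchy product: convolution of coefficient vectors, cut at degree p
    return np.convolve(u, v)[:L]

def rk4(f, z, t, h, ns):
    # the same marching rules as the RK4 code of (7.99)
    for _ in range(ns):
        a = h * f(z, t)
        b = h * f(z + 0.5 * a, t + 0.5 * h)
        c = h * f(z + 0.5 * b, t + 0.5 * h)
        d = h * f(z + c, t + h)
        z = z + (a + 2.0 * b + 2.0 * c + d) / 6.0
        t = t + h
    return z, t

def F(Z, t):
    # replacement rule applied to f(z,t) = -2 t z^2
    return -2.0 * t * PROD(Z, Z)

C1 = np.eye(L)[0]    # pyramid of the constant 1
X = np.eye(L)[1]     # pyramid of the monomial x
Zvar = 1.0 * C1 + X  # expand about the design initial condition zd0 = 1
Zfin, tfin = rk4(F, Zvar, 0.0, 0.01, 100)
print(Zfin)  # close to [0.5, 0.25, -0.125, 0.0625, -0.03125, 0.015625]
```

The entries reproduce the Taylor coefficients of (7.104) to within RK4 truncation error.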
The {\em Mathematica} code below shows, for our example differential equation, the application of numerical integration to pyramids. \begin{eqnarray} &&{\tt Clear[\texttt{"}Global`*\texttt{"}];}\nonumber\\ &&{\tt Needs[\texttt{"}Combinatorica`\texttt{"}];}\nonumber\\ &&{\tt m = 1; p = 5;}\nonumber\\ &&{\tt GAMMA = Compositions[0, m];}\nonumber\\ &&{\tt Do[GAMMA = Join[GAMMA, Reverse[Compositions[d, m]]], \{d, 1, p, 1\}];}\nonumber\\ &&{\tt L = Length[GAMMA];}\nonumber\\ &&{\tt JSK[list\_,k\_]\;:=}\nonumber\\ &&{\tt Position[Apply[And,Thread[\texttt{\#1\;<=\;\#2\;\&\;}[\texttt{\#},k]]]\;\texttt{\&\;/@}\;list,True]\texttt{//}Flatten;}\nonumber\\ &&{\tt B = Table[JSK[GAMMA, GAMMA[[r]]], \{r, 1, L, 1\}];}\nonumber\\ &&{\tt Brev = Reverse \texttt{/@}\; B;}\nonumber\\ &&{\tt PROD[U\_, V\_] := Table[U[[B[[k]]]].V[[Brev[[k]]]], \{k, 1, L, 1\}];}\nonumber\\ &&{\tt F[Z\_, t\_] := -2{\;}t{\;}PROD[Z, Z];}\nonumber\\ &&{\tt h = .01;}\nonumber\\ &&{\tt ns = 100;}\nonumber\\ &&{\tt t = 0;}\nonumber\\ &&{\tt zd0 = 1;}\nonumber\\ &&{\tt C1 = \{1, 0, 0, 0, 0, 0\};}\nonumber\\ &&{\tt X = \{0, 1, 0, 0, 0, 0\};}\nonumber\\ &&{\tt Zvar = zd0{\;}C1 + X;}\nonumber\\ &&{\tt RK4;}\nonumber\\ &&{\tt t}\nonumber\\ &&{\tt Zvar}\nonumber\\ &&1.\nonumber\\ &&\{0.5, 0.25, -0.125, 0.0625, -0.03125, 0.015625\} \end{eqnarray} The first 11 lines of the code set up what should be by now the familiar procedure for labeling and multiplying pyramids. In particular, $m=1$ because we are dealing with a single variable, and $p=5$ since we wish to work through fifth order. The line \begin{equation} {\tt F[Z\_, t\_] := -2{\;}t{\;}PROD[Z, Z]} \end{equation} defines ${\tt F[*,*]}$ for the case of pyramids, and is the result of applying the replacement rule to the right side of $f$ as given by (7.100), \begin{equation} -2{\;}t{\;}z^2\leadsto -2{\;}{\tt t}{\;}{\tt PROD[Z, Z]}. 
\end{equation} Lines 13 through 15 are the same as lines 3 through 5 in (7.103) except that, in order to improve numerical accuracy, the step size $\tt h$ has been decreased and correspondingly the number of steps ${\tt ns}$ has been increased. Lines 16 through 19 now initialize ${\tt Zvar}$ as a pyramid with constant part ${\tt zd0}$ and first-order monomial part 1, \begin{equation} {\tt Zvar = zd0{\;}C1 + X}. \end{equation} These lines are the pyramid equivalent of line 6 in (7.103). Finally lines 20 through 22 are the same as lines 7 through 9 in (7.103). In particular, the line {\tt RK4} in (7.103) and the line {\tt RK4} in (7.105) refer to exactly the {\em same} code, namely that in (7.99). Let us now compare the outputs of (7.103) and (7.105). Comparing the penultimate lines in each we see that the final time $t=1$ is the same in each case. Comparing the last lines shows that the output ${\tt Zvar}$ for (7.105) is a pyramid whose first entry agrees with the last line of (7.103). Finally, all the entries in the pyramid output agree with the Taylor coefficients in the expansion (7.104). We see, in the case of numerical integration (of a single differential equation), that replacing the dependent variable by a pyramid, with the initial value of the pyramid given by (7.108), produces a Taylor expansion of the final condition in terms of the initial condition. What accounts for this near miraculous result? It's the Taylor rule described at the end of Section 7.1.6. We have already learned that to expand some function $F(z)$ about some point $z^d$ we must evaluate $F(z^d+x)$. See (7.74). We know that the final $Zvar$, call it $Zvar^{\rm{fin}}$, is an analytic function of the initial $Zvar$, call it $Zvar^{\rm{in}}$, so that we may write \begin{equation} Zvar^{\rm{fin}}=Zvar^{\rm{fin}}(Zvar^{\rm{in}})=g(Zvar^{\rm{in}}) \end{equation} where $g$ is the function that results from following the trajectory from $t=t^{\rm{in}}$ to $t=t^{\rm{fin}}$. 
Therefore, by the Taylor rule, to expand $Zvar^{\rm{fin}}$ about $Zvar^{\rm{in}}=z^{d0}$, we must evaluate $Zvar^{\rm{fin}}(z^{d0}+x)$. That, with the aid of pyramids, is what the code (7.105) accomplishes. \subsubsection{Multi Equation/Variable Case} Because of {\em Mathematica's} built-in provisions for handling arrays, the work of the previous section can easily be extended to the case of several differential equations. Consider, as an example, the two-variable case for which $\bm{f}$ has the form \begin{eqnarray} &&f_1(\bm{z},t)=-z_1^2,\nonumber\\ &&f_2(\bm{z},t)=+2z_1z_2. \end{eqnarray} The differential equations associated with this $\bm{f}$ can be solved in closed form to yield, with the understanding that $t^0=0$, the solution \begin{eqnarray} &&z_1(t)=z_1^0/(1+tz_1^0),\nonumber\\ &&z_2(t)=z_2^0(1+tz_1^0)^2. \end{eqnarray} For the final time $t=1$ we find the result \begin{eqnarray} &&z_1(1)=z_1^0/(1+z_1^0),\nonumber\\ &&z_2(1)=z_2^0(1+z_1^0)^2. \end{eqnarray} Let us expand the solution (7.112) about the design initial conditions \begin{eqnarray} &&z_1^{d0}=1,\nonumber\\ &&z_2^{d0}=2, \end{eqnarray} by writing \begin{eqnarray} &&z^0_1=z_1^{d0}+x_1=1+x_1,\nonumber\\ &&z^0_2=z_2^{d0}+x_2=2+x_2. \end{eqnarray} Doing so gives the results \begin{eqnarray} z_1(1)&=&(1+x_1)/(2+x_1)=(2+x_1-1)/(2+x_1)=1-1/(2+x_1)\nonumber\\ &=&1-(1/2)(1+x_1/2)^{-1}=1-(1/2)[1-x_1/2+(x_1/2)^2-(x_1/2)^3+\cdots]\nonumber\\ &=&(1/2)+(1/4)x_1-(1/8)x_1^2+(1/16)x_1^3+\cdots,\nonumber\\ \end{eqnarray} \begin{eqnarray} z_2(1)&=&(2+x_2)(2+x_1)^2\nonumber\\ &=&8+8x_1+4x_2+2x_1^2+4x_1x_2+x_1^2x_2. \end{eqnarray} We will now explore how this same result can be obtained using the replacement rule applied to the operation of numerical integration. As before, we will label individual monomials by an integer $r$. Recall that Table 8 shows our standard modified glex sequencing applied to the case of two variables.
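The hand expansions (7.115) and (7.116) can be double-checked with exact truncated-series arithmetic. The short Python sketch below (helper names are ours) computes $z_1(1)$ via a truncated reciprocal and $z_2(1)$ via exponent-dictionary polynomial multiplication:

```python
p = 3  # keep terms through total degree 3

def prod(u, v):
    # truncated product of single-variable coefficient lists
    return [sum(u[i] * v[k - i] for i in range(k + 1)) for k in range(p + 1)]

def recip(u):
    """Truncated reciprocal of a series with u[0] != 0."""
    v = [1.0 / u[0]]
    for k in range(1, p + 1):
        v.append(-sum(u[i] * v[k - i] for i in range(1, k + 1)) / u[0])
    return v

# z1(1) = (1 + x1)/(2 + x1), expanded about x1 = 0
z1 = prod([1, 1, 0, 0], recip([2, 1, 0, 0]))
print(z1)  # [0.5, 0.25, -0.125, 0.0625]

def dprod(u, v):
    # product of two-variable polynomials stored as {(j1, j2): coeff}
    w = {}
    for ea, ca in u.items():
        for eb, cb in v.items():
            e = (ea[0] + eb[0], ea[1] + eb[1])
            w[e] = w.get(e, 0) + ca * cb
    return w

# z2(1) = (2 + x2)(2 + x1)^2
two_plus_x1 = {(0, 0): 2, (1, 0): 1}
two_plus_x2 = {(0, 0): 2, (0, 1): 1}
z2 = dprod(two_plus_x2, dprod(two_plus_x1, two_plus_x1))
print(z2)  # {(0,0): 8, (1,0): 8, (2,0): 2, (0,1): 4, (1,1): 4, (2,1): 1}
```

Both outputs agree with (7.115) and (7.116).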
The {\em Mathematica} code below shows, for our two-variable example differential equation, the application of numerical integration to pyramids. Before describing the code in some detail, we take note of the bottom two lines. When interpreted with the aid of Table 8, we see that the penultimate line of (7.117) agrees with (7.115), and the last line of (7.117) nearly agrees with (7.116). The only discrepancy is that for the monomial with label $r=7$ in the last line of (7.117). In the {\em Mathematica} output it has the value $-1.16563\times10^{-7}$ while, according to (7.116), the true value should be zero. This small discrepancy arises from the truncation error inherent in the RK4 algorithm, and becomes smaller as the step size {\tt h} is decreased (and {\tt ns} is correspondingly increased), or if some more accurate integration algorithm is used. We conclude that, with the use of pyramids, it is also possible in the two-variable case to obtain Taylor expansions of the final conditions in terms of the initial conditions. Indeed, what is involved is again the Taylor rule applied, in this instance, to the case of two variables. 
\begin{eqnarray} &&{\tt Clear[\texttt{"}Global`*\texttt{"}];}\nonumber\\ &&{\tt Needs[\texttt{"}Combinatorica`\texttt{"}];}\nonumber\\ &&{\tt m = 2; p = 3;}\nonumber\\ &&{\tt GAMMA = Compositions[0, m];}\nonumber\\ &&{\tt Do[GAMMA = Join[GAMMA, Reverse[Compositions[d, m]]], \{d, 1, p, 1\}];}\nonumber\\ &&{\tt L = Length[GAMMA];}\nonumber\\ &&{\tt JSK[list\_,k\_]\;:=}\nonumber\\ &&{\tt Position[Apply[And,Thread[\texttt{\#1\;<=\;\#2\;\&\;}[\texttt{\#},k]]]\;\texttt{\&\;/@}\;list,True]\texttt{//}Flatten;}\nonumber\\ &&{\tt B = Table[JSK[GAMMA, GAMMA[[r]]], \{r, 1, L, 1\}];}\nonumber\\ &&{\tt Brev = Reverse \texttt{/@}\; B;}\nonumber\\ &&{\tt PROD[U\_, V\_] := Table[U[[B[[k]]]].V[[Brev[[k]]]], \{k, 1, L, 1\}];}\nonumber\\ &&{\tt F[Z\_, t\_] := \{-PROD[Z[[1]], Z[[1]]], 2.{\;}PROD[Z[[1]], Z[[2]]]\};}\nonumber\\ &&{\tt h = .01;}\nonumber\\ &&{\tt ns = 100;}\nonumber\\ &&{\tt t = 0;}\nonumber\\ &&{\tt zd0 = \{1.,2.\};}\nonumber\\ &&{\tt C1 = Table[KroneckerDelta[k,1],\{k,1,L,1\}];}\nonumber\\ &&{\tt X[1] = Table[KroneckerDelta[k,2],\{k,1,L,1\}];}\nonumber\\ &&{\tt X[2] = Table[KroneckerDelta[k,3],\{k,1,L,1\}];}\nonumber\\ &&{\tt Zvar = \{zd0[[1]]{\;}C1 + X[1],zd0[[2]]{\;}C1 + X[2]\};}\nonumber\\ &&{\tt RK4;}\nonumber\\ &&{\tt t}\nonumber\\ &&{\tt Zvar}\nonumber\\ &&1.\nonumber\\ &&\{\{0.5, 0.25, 0., -0.125, 0., 0., 0.0625, 0., 0., 0.\},\nonumber\\ &&{\;}{\;}\{8.,8.,4.,2.,4.,0.,-1.16563\times10^{-7},1.,0.,0.\}\} \end{eqnarray} Let us compare the structures of the routines for the single variable case and multi (two) variable case as illustrated in (7.105) and (7.117). The first difference occurs at line 3 where the number of variables $m$ and the maximum degree $p$ are specified. In (7.117) $m$ is set to 2 because we wish to treat the case of two variables, and $p$ is set to 3 simply to limit the lengths of the output arrays. The next difference occurs in line 12 where the right side $\tt F$ of the differential equation is specified.
The major feature of the definition of $\tt F$ in (7.117) is that it is specified as two pyramids because the right side of the definition has the structure ${\tt \{*,*\}}$ where each item $ *$ is an instruction for computing a pyramid. In particular, the two pyramids are those for the two components of $\bmv f$ as given by (7.110) and use of the replacement rule, \begin{equation} -z_1^2\leadsto {\tt -PROD[Z[[1]], Z[[1]]]}, \end{equation} \begin{equation} 2z_1z_2\leadsto 2.{\tt {\;}PROD[Z[[1]], Z[[2]]]}. \end{equation} The next differences occur in lines 16 through 20 of (7.117). In line 16, since specification of the initial conditions now requires two numbers, see (7.113), ${\tt zd0}$ is specified as a two-component array. In lines 17 and 18 of (7.105) the pyramids ${\tt C1}$ and $\tt X$ are set up explicitly for the case $p=5$. By contrast, in lines 17 through 19 of (7.117), the pyramids ${\tt C1}$, ${\tt X[1]}$, and ${\tt X[2]}$ are set up for general $p$ with the aid of the Table command and the Kronecker delta function. Recall (7.62) and observe from Tables 4, 7, and 8 that, no matter what the values of $m$ and $p$, the constant monomial has the label $r=1$ and the monomial $x_1$ has the label $r=2$. Moreover, as long as $m\ge2$ and no matter what the value of $p$, the $x_2$ monomial has the label $r=3$. Finally, compare line 19 in (7.105) with line 20 in (7.117), both of which define the initial ${\tt Zvar}$. We see that the difference is that in (7.105) ${\tt Zvar}$ is defined as a single pyramid while in (7.117) it is defined as a pair of pyramids of the form $\{*,*\}$. Most remarkably, all other corresponding lines in (7.105) and (7.117) are the same. In particular, the {\em same} RK4 code, namely that given by (7.99), is used in the scalar case (7.103), the single pyramid case (7.105), and the two-pyramid case (7.117). This multi-use is possible because of the convenient way in which {\em Mathematica} handles arrays. 
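As a cross-check on this two-pyramid integration, the computation can be reproduced outside {\em Mathematica}. In the Python sketch below (our own construction, not the text's code) a pyramid is stored as a 2-D coefficient array ${\tt coef[j_1,j_2]}$ and {\tt PROD} is a truncated 2-D convolution; entries of total degree at most $p$ come out exact, which is all that is read off at the end:

```python
import numpy as np

p = 3
n = p + 1  # per-variable extent of the coefficient arrays

def PROD(u, v):
    # truncated product of 2-D coefficient arrays coef[j1, j2]
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if u[i, j] != 0.0:
                w[i:, j:] += u[i, j] * v[:n - i, :n - j]
    return w

def F(Z, t):
    # replacement rule applied to f1 = -z1^2 and f2 = +2 z1 z2 of (7.110)
    return np.array([-PROD(Z[0], Z[0]), 2.0 * PROD(Z[0], Z[1])])

def rk4(f, z, t, h, ns):
    # the same marching rules as the RK4 code of (7.99)
    for _ in range(ns):
        a = h * f(z, t)
        b = h * f(z + 0.5 * a, t + 0.5 * h)
        c = h * f(z + 0.5 * b, t + 0.5 * h)
        d = h * f(z + c, t + h)
        z = z + (a + 2.0 * b + 2.0 * c + d) / 6.0
        t = t + h
    return z, t

C1 = np.zeros((n, n)); C1[0, 0] = 1.0
X1 = np.zeros((n, n)); X1[1, 0] = 1.0
X2 = np.zeros((n, n)); X2[0, 1] = 1.0
Zvar = np.array([1.0 * C1 + X1, 2.0 * C1 + X2])  # design z1(0) = 1, z2(0) = 2

Zfin, tfin = rk4(F, Zvar, 0.0, 0.01, 100)

# glex labels of Table 8, for reading the arrays back out as pyramids
labels = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2),
          (3, 0), (2, 1), (1, 2), (0, 3)]
z2 = [float(Zfin[1][e]) for e in labels]
print(z2)  # close to [8, 8, 4, 2, 4, 0, 0, 1, 0, 0], matching (7.116)
```

As in the {\em Mathematica} run, the $x_1^3$ entry of the second pyramid is not exactly zero but of the order of the RK4 truncation error.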
We conclude that the pattern for the multivariable case is now clear. Only the following items need to be specified in an $m$-dependent way: \begin{itemize} \item The value of $m$. \item The entries in $\tt F$, entered as an array $\{*,*,\cdots\}$ of $m$ pyramids. \item The design initial condition array ${\tt zd0}$. \item The pyramids for ${\tt C1}$ and ${\tt X[1]}$ through ${\tt X[m]}$. \item The entries for the initial ${\tt Zvar}$ specified as an array ${\tt \{zd0[[1]]} \;{\tt C1 + X[1]}{\tt ,zd0[[2]]} \;{\tt C1 + X[2]}{\tt ,\cdots,zd0[[m]]} \;{\tt C1 + X[m]\}}$ of $m$ pyramids. \end{itemize} \subsection{Duffing Equation Application} Let us now apply the methods just developed to the case of the Duffing equation with parameter dependence as described by the relations (6.12) through (6.17). {\em Mathematica} code for this purpose is shown below. By looking at the final lines that result from executing this code, we see that the final output is an array of the form ${\tt \{\{*\},\{*\},\{*\}\}}$. That is, the final output is an array of three pyramids. This is what we expect, because now we are dealing with three variables. See line 3 of the code, which sets ${\tt m=3}$. Also, for convenience of viewing, results are calculated and displayed only through third order as a consequence of setting ${\tt p=3}$.
\begin{eqnarray*} &&{\tt Clear[\texttt{"}Global`*\texttt{"}];}\nonumber\\ &&{\tt Needs[\texttt{"}Combinatorica`\texttt{"}];}\nonumber\\ &&{\tt m = 3; p = 3;}\nonumber\\ &&{\tt GAMMA = Compositions[0, m];}\nonumber\\ &&{\tt Do[GAMMA = Join[GAMMA, Reverse[Compositions[d, m]]], \{d, 1, p, 1\}];}\nonumber\\ &&{\tt L = Length[GAMMA];}\nonumber\\ &&{\tt JSK[list\_,k\_]\;:=}\nonumber\\ &&{\tt Position[Apply[And,Thread[\texttt{\#1\;<=\;\#2\;\&\;}[\texttt{\#},k]]]\;\texttt{\&\;/@}\;list,True]\texttt{//}Flatten;}\nonumber\\ &&{\tt B = Table[JSK[GAMMA, GAMMA[[r]]], \{r, 1, L, 1\}];}\nonumber\\ &&{\tt Brev = Reverse \texttt{/@}\; B;}\nonumber\\ &&{\tt PROD[U\_, V\_] := Table[U[[B[[k]]]].V[[Brev[[k]]]], \{k, 1, L, 1\}];}\nonumber\\ &&{\tt POWER[U\_, 2] := PROD[U, U];}\nonumber\\ &&{\tt POWER[U\_, 3] := PROD[U, POWER[U, 2]];}\nonumber\\ &&{\tt C0 = Table[0, \{k, 1, L, 1\}];}\nonumber\\ &&{\tt F[Z\_, t\_] := \{Z[[2]],}\nonumber\\ &&{\tt -2.{\;}beta{\;}PROD[Z[[3]], Z[[2]]] - PROD[POWER[Z[[3]], 2], Z[[1]]] -}\nonumber\\ &&{\tt POWER[Z[[1]], 3] - eps{\;}Sin[t]{\;}POWER[Z[[3]], 3],}\nonumber\\ &&{\tt C0\};}\nonumber\\ \end{eqnarray*} \begin{eqnarray} &&{\tt ns = 100;}\nonumber\\ &&{\tt t = 0;}\nonumber\\ &&{\tt h = (2 Pi)/ns;}\nonumber\\ &&{\tt beta = .1;eps = 1.5;}\nonumber\\ &&{\tt zd0 = \{.3,.4,.5\};}\nonumber\\ &&{\tt C1 = Table[KroneckerDelta[k,1],\{k,1,L,1\}];}\nonumber\\ &&{\tt X[1] = Table[KroneckerDelta[k,2],\{k,1,L,1\}];}\nonumber\\ &&{\tt X[2] = Table[KroneckerDelta[k,3],\{k,1,L,1\}];}\nonumber\\ &&{\tt X[3] = Table[KroneckerDelta[k, 4], \{k, 1, L, 1\}];}\nonumber\\ &&{\tt Zvar = \{zd0[[1]]{\;}C1 + X[1], zd0[[2]]{\;}C1 + X[2], zd0[[3]]{\;}C1 + X[3]\};}\nonumber\\ &&{\tt RK4;}\nonumber\\ &&{\tt t}\nonumber\\ &&{\tt Zvar}\nonumber\\ &&2\pi\nonumber\\ &&\{\{-0.0493158, 0.973942, -0.110494, 5.51271, 3.54684, 3.46678, \nonumber\\ &&{\;}{\;}{\;}11.2762, 2.36463, 1.0985, 23.3332, -1.03541, -3.23761, -12.8064, \nonumber\\ &&{\;}{\;}{\;}4.03421, -23.4342, -17.8967, 1.96148, 
5.07403, -36.9009\},\nonumber\\ &&{\;}{\;}\{0.439713, 1.05904, 0.427613, 3.3177, 0.0872459, 0.635397, -3.02822, \nonumber\\ &&{\;}{\;}{\;}1.77416, -4.10115, 3.16981, -2.43002, -5.33643, -7.77038, -6.08476,\nonumber\\ &&{\;}{\;}{\;}-0.541465, -21.1672, -1.4091, -9.54326, 14.6334, -39.2312\},\nonumber\\ &&{\;}{\;}\{0.5, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0\}\}\nonumber\\ \end{eqnarray} The first unusual fragments in the code are lines 12 and 13, which define functions that implement the calculation of second and third powers of pyramids. Recall Section 7.1.5. The next new fragment is line 14, which defines the pyramid ${\tt C0}$ with the aid of the Table command and an implied Do loop. As a result of executing this code, ${\tt C0}$ is an array of $L$ zeroes. The next four lines, lines 15 through 18, define $\tt F$, which specifies the right sides of equations (6.12) through (6.14). See (6.15) through (6.18). The right side of $\tt F$ is of the form ${\tt \{*,*,*\}}$, an array of three pyramids. By looking at (6.15) and recalling the replacement rule, we see that the first pyramid should be ${\tt Z[[2]]}$, \begin{equation} z_2\leadsto {\tt Z[[2]]}. \end{equation} The second pyramid on the right side of $\tt F$ is more complicated. It arises by applying the replacement rule to the right side of (6.16) to obtain the associated pyramid, \begin{eqnarray} &&- \ 2\beta z_3z_2 -z^2_3z_1-z^3_1-\epsilon z^3_3 \sin t\leadsto\nonumber\\ &&{\tt -2.{\;}beta{\;}PROD[Z[[3]], Z[[2]]] - PROD[POWER[Z[[3]], 2], Z[[1]]] -}\nonumber\\ &&{\tt POWER[Z[[1]], 3] - eps{\;}Sin[t]{\;}POWER[Z[[3]], 3]}. \end{eqnarray} The third pyramid on the right side of $\tt F$ is simplicity itself. From (6.17) we see that this pyramid should be the result of applying the replacement rule to the number $0$. Hence, this pyramid is $\tt C0$, \begin{equation} 0\leadsto {\tt C0=\{0,0,\cdots,0\}}. \end{equation} The remaining lines of the code require little comment.
Line 20 sets the initial time to $0$, and line 21 defines $h$ in such a way that the final value of $t$ will be $2\pi$. Line 22 establishes the parameter values $\beta=.1$ and $\epsilon=1.5$, which are those for Figure 4. Line 23 specifies that the design initial condition is \begin{equation} z_1(0)=z_1^{d0}=.3,{\;}z_2(0)=z_2^{d0}=.4,{\;}z_3(0)=z_3^{d0}=.5=\sigma, \end{equation} and consequently \begin{equation} \omega=1/\sigma=2. \end{equation} See (6.3). Also, it follows from (6.2) and (6.5) that \begin{equation} q(0)=\omega Q(0)=\omega z_1(0)=(2)(.3)=.6, \end{equation} \begin{equation} q^\prime(0)=\omega^2\dot{Q}(0)=\omega^2z_2(0)=(2^2)(.4)=1.6. \end{equation} Next, lines 24 through 28 specify that the expansion is to be carried out about the initial conditions (7.124). Finally, line 29 invokes the {\tt RK4} code given by (7.99). That is, as before, {\em no} modifications are required in the integration code. A few more comments about the output are appropriate. Line 32 shows that the final time $t$ is indeed $2\pi$, as desired. The remaining output lines display the three pyramids that specify the final value of {\tt Zvar}. From the first entry in each pyramid we see that \begin{equation} z_1(2\pi)=-0.0493158, \end{equation} \begin{equation} z_2(2\pi)=0.439713, \end{equation} \begin{equation} z_3(2\pi)=.5, \end{equation} when there are no deviations in the initial conditions. The remaining entries in the pyramids are the coefficients in the Taylor series that describe the changes in the final conditions that occur when changes are made in the initial conditions (including the parameter $\sigma$). We are, of course, particularly interested in the first two pyramids. The third pyramid has entries only in the first place and the fourth place, and these entries are the same as those in the third pyramid for ${\tt Zvar}$ at the start of the integration, namely those in ${\tt zd0[[3]]{\;}C1+X[3]}$.
The fact that the third pyramid in ${\tt Zvar}$ remains constant is the expected consequence of (6.17). We should also describe how the ${\cal{M}}_8$ employed in Section 6.2 was actually computed. It could have been computed by setting $p=8$ in (7.120) and specifying a large number of steps $ns$ to ensure good accuracy. Of course, when $p=8$, the pyramids are large. Therefore, one does not usually print them out, but rather writes them to files or sends them directly to other programs for further use. However, rather than using RK4 in (7.120), we replaced it with an adaptive 4-$5^{\rm{th}}$ order Runge-Kutta-Fehlberg routine that dynamically adjusts the time step $h$ during the course of integration to achieve a specified local accuracy, and we required that the error at each step be no larger than $10^{-12}$. Like the RK4 routine, the Runge-Kutta-Fehlberg routine, when implemented in {\em Mathematica}, has the property that it can integrate any number of equations in both scalar-variable and pyramid form without any changes in the code.\footnote{A {\em Mathematica} version of this code is available from the first author upon request.} \subsection{Relation to the Complete Variational Equations} At this point it may not be obvious to the reader that the use of pyramids in integration routines to obtain Taylor expansions is the same as integrating the complete variational equations. We now show that the integration of pyramid equations is equivalent to the forward integration of the complete variational equations. For simplicity, we will examine the single variable case with no parameter dependence. The reader who has mastered this case should be able to generalize the results obtained to the general case. In the single variable case with no parameter dependence (2.1) becomes \begin{equation} \dot{z}=f(z,t). \end{equation} Let $z^d(t)$ be some design solution and introduce a deviation variable $\zeta$ by writing \begin{equation} z=z^d+\zeta.
\end{equation} Then the equation of motion (7.131) takes the form \begin{equation} \dot{z}^d+\dot{\zeta}=f(z^d+\zeta,t). \end{equation} Also, the relations (2.4) and (2.5) take the form \begin{equation} f(z^d+\zeta,t)=f(z^d,t)+g(z^d,t,\zeta) \end{equation} where $g$ has an expansion of the form \begin{equation} g(z^d,t,\zeta)=\sum_{j=1}^\infty g^j(t)\zeta^j. \end{equation} Finally, (2.6) and (2.7) become \begin{equation} \dot{z}^d=f(z^d,t), \end{equation} \begin{equation} \dot{\zeta}=g(z^d,t,\zeta)=\sum_{j=1}^\infty g^j(t)\zeta^j, \end{equation} and (2.8) becomes \begin{equation} \zeta=\sum_{j=1}^\infty h^j(t)(\zeta_i)^j. \end{equation} Insertion of (7.138) into both sides of (7.137) and equating like powers of $\zeta_i$ now yields the set of differential equations \begin{equation} \dot{h}^{j^{\prime \prime}} (t) = \sum_{j=1}^\infty g^j(t) U^{j^{\prime \prime}}_j(h^s){\;}{\rm{with}}{\;}j,j^{\prime\prime}\ge1 \end{equation} where the (universal) functions $U^{j^{\prime \prime}}_j(h^s)$ are given by the relations \begin{equation} \left(\sum_{j^\prime=1}^\infty h^{j^\prime}(\zeta_i)^{j^\prime}\right)^j= \sum_{j^{\prime\prime}=1}^\infty U^{j^{\prime \prime}}_j (h^s)(\zeta_i)^{j^{\prime \prime}}. \end{equation} The equations (7.136) and (7.139) are to be integrated from $t=t^{\rm{in}}=t^0$ to $t=t^{\rm{fin}}$ with the initial conditions \begin{equation} z^d(t^0)=z^{d0}, \end{equation} \begin{equation} h^1(t^0)=1, \end{equation} \begin{equation} h^{j^{\prime\prime}}(t^0)=0{\;}{\rm{for}}{\;}j^{\prime\prime}>1. \end{equation} Let us now consider the numerical integration of pyramids. Upon some reflection, we see that the numerical integration of pyramids is equivalent to finding the numerical solution to a differential equation with pyramid arguments. For example, in the single-variable case, let ${\tt {Zvar}}(t)$ be the pyramid appearing in the integration process. 
Then, its integration is equivalent to solving numerically the pyramid differential equation \begin{equation} (d/dt){\tt {Zvar}}(t)={\tt {F}}({\tt {Zvar}},t). \end{equation} We now work out the consequences of this observation. By the inverse of the replacement rule, we may associate a Taylor series with the pyramid ${\tt {Zvar}}(t)$ by writing \begin{equation} {\tt {Zvar}}(t)\leadsto c_0(t)+\sum_{j\ge 1}c_j(t)x^j. \end{equation} By (1.45) it is intended that the entries in the pyramid ${\tt {Zvar}}(t)$ be used to construct a corresponding Taylor series with variable $x$. In view of (7.108), there are the initial conditions \begin{equation} c_0(t_0)=z^d(t_0), \end{equation} \begin{equation} c_1(t_0)=1, \end{equation} \begin{equation} c_j(t_0)=0{\;}{\rm{for}}{\;}j>1. \end{equation} We next seek the differential equations that determine the time evolution of the $c_j(t)$. Under the inverse replacement rule, there is also the correspondence \begin{equation} (d/dt){\tt {Zvar}}(t)\leadsto {\dot{c}}_0(t)+\sum_{j\ge 1}{\dot{c}}_j(t)x^j. \end{equation} We have found a representation for the left side of (7.144). We need to do the same for the right side. That is, we need the Taylor series associated with the pyramid ${\tt {F}}({\tt {Zvar}},t)$. By the inverse replacement rule, it will be given by the relation \begin{equation} {\tt {F}}({\tt {Zvar}},t)\leadsto f(\sum_{j\ge0}c_j(t)x^j,t). \end{equation} Here it is understood that the right side of (7.150) is to be expanded in a Taylor series about $x=0$. From (7.134), (7.135), and (7.140) we have the relations \begin{eqnarray} f(\sum_{j\ge0}c_j(t)x^j,t)&=&f(c_0(t),t)+g(c_0(t),t,\sum_{j\ge1}c_j(t)x^j)\nonumber\\ &=&f(c_0(t),t)+\sum_{k\ge1}g^k(t)(\sum_{j\ge1}c_j(t)x^j)^k\nonumber\\ &=&f(c_0(t),t)+\sum_{k\ge1}g^k(t)\sum_{j\ge1}U_k^j(c_\ell)x^j.\nonumber\\ \end{eqnarray} Therefore, there is the inverse replacement rule \begin{equation} {\tt {F}}({\tt {Zvar}},t)\leadsto f(c_0(t),t)+\sum_{k\ge1}g^k(t)\sum_{j\ge1}U_k^j(c_\ell)x^j.
\end{equation} Upon comparing like powers of $x$ in (7.149) and (7.152), we see that the pyramid differential equation (7.144) is equivalent to the set of differential equations \begin{equation} {\dot{c}}_0(t)=f(c_0(t),t), \end{equation} \begin{equation} {\dot{c}}_j(t)=\sum_{k\ge1}g^k(t)U_k^j(c_\ell). \end{equation} Finally, compare the initial conditions (7.141) through (7.143) with the initial conditions (7.146) through (7.148), and compare the differential equations (7.136) and (7.139) with the differential equations (7.153) and (7.154). We conclude that there must be the relations \begin{equation} c_0(t)=z^d(t), \end{equation} \begin{equation} c_j(t)=h^j(t){\;}{\rm{for}}{\;}j\ge1. \end{equation} We have verified, in the single variable case, that the use of pyramids in integration routines is equivalent to the solution of the complete variational equations using forward integration. As stated earlier, verification of the analogous $m$-variable result is left to the reader. We also observe the wonderful convenience that, when pyramid operations are implemented and employed, it is not necessary to explicitly work out the forcing terms $g_a^r(t)$ and the functions $U^{r^{\prime\prime}}_r(h^s_n)$, nor is it necessary to set up the equations (3.6). All these complications are handled implicitly and automatically by the pyramid routines. \section{Concluding Summary} \setcounter{equation}{0} Poincar\'{e} analyticity implies that transfer maps arising from ordinary differential equations can be expanded as Taylor series in the initial conditions and also in whatever parameters may be present. Section 2 showed that the determination of these expansions is equivalent to solving the complete variational equations, and Sections 3 and 4 showed that the complete variational equations can be solved either by forward or backward integration.
Sections 5 and 6 applied this procedure to the Duffing stroboscopic map and found, remarkably, that an $8^{\rm{th}}$ order polynomial approximation to this map produced an infinite period doubling cascade and an apparent strange attractor that closely resembled those of the exact map. A final section described computer methods for automatically setting up and numerically integrating the complete variational equations.
https://arxiv.org/abs/1810.06275
Functional limit theorems for random walks
We survey some geometrical properties of trajectories of $d$-dimensional random walks via the application of functional limit theorems. We focus on the functional law of large numbers and functional central limit theorem (Donsker's theorem). For the latter, we survey the underlying weak convergence theory, drawing heavily on the exposition of Billingsley, but explicitly treat the multidimensional case. Our two main applications are to the convex hull of a random walk and the centre of mass process associated to a random walk. In particular, we establish the limit sets of the convex hull in the two distinct cases of zero and non-zero drift, which provides insight into the diameter, mean width, volume and surface area functionals. For the centre of mass process, we find the limiting processes in both the law of large numbers and central limit theorem domains.
\section{Introduction} The early days of limit theorems in the form of a \lq law of averages\rq~saw slow progress. The first direct study was the theorem of Bernoulli \cite{bernoulli} on the sums of binary random variables, but this was only stated in 1713, over a century after comments of Cardano in his work on dice games \cite{cardano} and 50 years after Halley's treatise on mortality rates \cite{halley}, which clearly expressed a knowledge of decreasing errors in large samples. The term \lq law of large numbers\rq~itself was not coined until one of Poisson's late works on probability theory in 1837 \cite{poisson}, in which the sum of Bernoulli random variables with varying probabilities of success was shown to converge to the sum of the probabilities; the theorem was only rigorously proved by Chebyshev in 1867 \cite{chebyshev}. The first description of a law for more general random variables was produced in 1929 by Khinchin \cite{khinchin}, and this became the weak law of large numbers. In the succeeding couple of years, Kolmogorov \cite{kolmogorov} improved the result to establish the well-known strong law; in particular, he showed that if $\xi_1,\xi_2,\ldots,\xi_n$ are independent, identically distributed (i.i.d.) random variables with finite expectation $\mu$, then \begin{align*} \frac{1}{n} \left(\sum_{i=1}^n \xi_i-n\mu\right) \overset{\textup{a.s.}}{\longrightarrow} 0, \text{ as } n \to \infty. \end{align*} All of these laws of large numbers capture the same idea that was understood by Cardano $500$ years ago. The next natural question, which Cardano could only associate with \lq luck\rq, is: how large are the errors likely to be for a given number of trials? The Lindeberg--L\'{e}vy central limit theorem, see \cite[pp.~247--249]{fischer}, answers this question when the random variables are i.i.d.
with finite variance $\sigma^2$: \begin{align} \label{eqn:clt} \frac{1}{\sqrt{n}} (S_n - n \mu) \overset{\textup{d}}{\longrightarrow} \mathcal{N}( 0, \sigma^2 ), \text{ as } n \to \infty, \end{align} where $\mathcal{N}( 0, \sigma^2 )$ is the normal distribution with mean $0$ and variance $\sigma^2$. These early results refer only to the sums of random variables because this was the quantity of interest in the contexts of long run profit in gambling games, and errors when sampling large amounts of data. However, if the results are considered as within the topic of random walks there are some further natural questions that arise. If the law of large numbers says that after $n$ steps, the position of the walk is approximately $n \mu$ from where you started, can a stronger claim be made, namely that you move towards that point at a constant rate? If so, what does the deviation from this direct path look like? Do the results extend to higher dimensions, or trajectories with discontinuities? The answer to all of these questions is yes, with the fact that the trajectory convergence is equivalent to the strong law of large numbers found in \cite{glynn}, and the higher dimensional and discontinuous path results can be found in Whitt's book \cite{whitt2002stochastic}. Many of these results build on the theory presented in the book of Billingsley \cite{cpm}, and all together they are the subject of the first half of this survey. To formally describe these problems, we introduce a couple of definitions and assumptions, starting with our description of a random walk: \begin{description} \item [\namedlabel{ass:walk}{\textbf{W}$_{\mu}$}] Let $d \in \mathbb{N}$, and suppose that $\xi, \xi_1, \xi_2, \ldots$ are i.i.d.~random variables in $\mathbb{R}^d$ with $\Exp\|\xi\| < \infty$ and $\Exp\xi=\mu$. The random walk $(S_n,n\in \mathbb{Z}_+)$ is the sequence of partial sums $S_n := \sum_{i=1}^n \xi_i$ with $S_0 := 0$. 
\end{description} Often this description is too broad, and just as the Lindeberg--L\'evy central limit theorem requires finite variance, we will use some restriction on the higher moments of the underlying process, in the form of one of the following two conditions: \begin{description} \item [\namedlabel{ass:Sigma}{\textbf{V}}] Suppose that $\Exp[\|\xi\|^2] < \infty$ and write $\Sigma := \Exp[(\xi-\mu)(\xi-\mu)^{\scalebox{0.6}{$\top$}}]$. Here $\Sigma$ is a nonnegative-definite, symmetric $d$ by $d$ matrix; we write $\sigma^2 := \trace \Sigma = \Exp [ \| \xi - \mu \|^2 ]$. \end{description} \begin{description} \item [\namedlabel{ass:momentp}{\textbf{M}$_p$}] Suppose that $\Exp[\|\xi\|^p]<\infty$. \end{description} In order to answer the question of progress towards $n\mu$, it is necessary to define the trajectory of the walk. First, consider the discrete jump process of the partial sums with each time step rescaled by $1/n$, so the partial sums are indexed by times in the interval $[0,1]$; in fact they are at the times $\frac{k}{n}$ with $k = 0, 1, \ldots, n$. However, we wish to consider a continuous-time trajectory, so we have two choices on how to fill in the gaps. Either we can say the walk moves linearly from each partial sum to the next, in which case the trajectory is \[X_n(t) := n^{-1} \left(S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor) \xi_{\lfloor nt \rfloor + 1}\right),\] or we can consider the trajectory where the walk \lq stays still\rq~and makes small jumps when it reaches the next time indexing a new partial sum, in which case we have \[X'_n(t) := \frac{1}{n} S_{\lfloor nt \rfloor}.\] Using this notation, the trajectory equivalent of Kolmogorov's strong law is as expected, see for example \cite[p.~20]{whitt2002stochastic}, \[X_n(t) \overset{\textup{a.s.}}{\longrightarrow} \mu t, ~{\rm as~} n \to \infty,\] in the sense of convergence of continuous functions of $t \in [0,1]$ (a formal definition is given later).
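The uniformity in $t$ of this convergence is easy to observe numerically. The sketch below is illustrative only: the Uniform$(0,2)$ increment law (so that $\mu = 1$) and the sample sizes are arbitrary choices. It computes $\max_k |S_k/n - \mu k/n|$, the largest deviation of the rescaled trajectory from the limiting line over the grid $t = k/n$:

```python
import random

# Monte Carlo sketch of the functional law of large numbers: the rescaled
# trajectory S_{floor(nt)} / n stays uniformly close to mu * t.
# Increments are Uniform(0, 2), so mu = 1; this choice is illustrative.
random.seed(42)  # fixed seed for reproducibility

def sup_deviation(n):
    """Max over the grid t = k/n of |S_k / n - mu * (k/n)|."""
    s, dev = 0.0, 0.0
    for k in range(1, n + 1):
        s += random.uniform(0.0, 2.0)
        dev = max(dev, abs(s / n - k / n))
    return dev

devs = {n: sup_deviation(n) for n in (100, 10_000)}
```

For the larger $n$ the supremum deviation is markedly smaller, consistent with the fluctuation scale $n^{-1/2}$ that the functional central limit theorem discussed next makes precise.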
It is not unreasonable to expect a similar result for $X'_n(t)$, but it is clear that this will require some care in dealing with the discontinuities. We discuss these results in Section \ref{FLLN} and show their use by applying the theory to the maximum functional. As for developing the central limit theorem, the invariance principle proved by Donsker in 1951 \cite{donsker}, which can be generalised to $d$ dimensions (see for example \cite[Theorem~4.3.5]{whitt2002stochastic}), states that the appropriately-scaled trajectories converge weakly to a continuous-time process with normally-distributed increments --- namely the $d$-dimensional Brownian motion $b_d$. In particular, if we have the covariance matrix $\Sigma$ as defined at \eqref{ass:Sigma}, then \begin{align*} n^{-1/2} \left( S_{\lfloor nt \rfloor} - n \mu t \right) \Rightarrow \Sigma^{1/2} b_d(t), \mathrm{\ as\ } n \to \infty, \end{align*} in the sense of weak convergence of functions (again, formal definitions come later). This is the subject of Section \ref{FCLT}, where we provide a proof of the result which differs from that in \cite{whitt2002stochastic}: it follows more closely the work of Billingsley \cite{cpm}, using Etemadi's inequality \cite[Theorem~22.5]{billpm}, which we extend to $d$ dimensions, instead of Tychonoff's theorem. This section also includes the mapping theorem for weak convergence. Here we also exhibit the usefulness of these results by applying them to the maximum functional again, and also by generalising the classical arcsine law, see for example \cite[p.~93]{feller1}. With all the theory now presented, we introduce some set theory notation in Section \ref{sec:set}, view the partial sums as a random point set, and consider the set's convergence to the path from $0$ to $n \mu$.
This theory can stand alone and produce some results on the diameter of these points, but will also combine with the limit theorems in Section \ref{sec:hulls}, where we consider the convex hull of the random walk. Many results on the convex hull already exist in the literature, but with the theory described above we can extend many of these results to higher dimensions and consider some new functionals. Finally, in Section~\ref{sec:COM}, we apply the mapping theorem and limit theorems to the centre of mass process of a random walk in $\mathbb{R}^d$, $G_n:= \frac{1}{n}\sum_{i=1}^n S_i$. For comparison, we establish the analogues of the results of Kolmogorov and of Lindeberg and L\'evy, before using the theory to find the trajectory equivalent of the strong law of large numbers, \[\frac{1}{n}G_{\lfloor nt \rfloor} \overset{\textup{a.s.}}{\longrightarrow} \frac{\mu t}{2} {~\rm as~} n \to \infty.\] In the special case of $\mu=0$, we go further and consider the central limit theorem with the scaling $1/\sqrt{n}$, under the usual extra assumption \eqref{ass:Sigma}. We get \[\frac{1}{\sqrt{n}}\left(G_{\lfloor n t \rfloor}\right)_{t\in [0,1]} \Rightarrow \mathcal{GP}(0,K) {\rm ~as~} n\rightarrow \infty,\] where $\mathcal{GP}(0,K)$ is a Gaussian process with mean $0$ and some symmetric covariance matrix $K$ which depends on $\Sigma$. The appendix contains the proof of Etemadi's inequality in $d$ dimensions. \subsection{Further development and other applications} For many of the results in this paper the second moment assumption on the increments of the random walk is required. It is possible to relax this assumption and retrieve more general theorems in the case where the increments are in the domain of attraction of a stable law, in which case we would see L\'{e}vy processes as the limit process for the functional central limit theorem, instead of Brownian motion.
Some references relating to this area of study are \cite{press,samorodnitsky1994stable,whitt2002stochastic,KampfLastMolchanov}. Of course there are also numerous further functionals of the random walk processes which may be relevant to different applications; for the convex hull, one could consider the number of faces, for example. It would also be of interest to combine our two main functionals and study the convex hull of the centre of mass process. \section{Notation and preliminaries} \subsection{Basic notation} We use the notation $(\Omega, \mathcal{F}, P)$ to denote a probability triple, where $\Omega$ is the sample space, the $\sigma$-algebra $\mathcal{F}$ is the collection of all events, and $P$ is a probability measure. On occasion it will be necessary to use the subscript $P_n$ to represent a sequence of probability measures, and where the measure is specified, either explicitly or via an associated random variable, we will use $\Pr$ or $\Pr_n$. Thus, we say the distribution of a random variable $X$ taking values in a measurable space $(S, \mathcal{S})$ is determined by $\Pr ( X \in B)$ for all $B \in \mathcal{S}$. This survey is concerned with various types of convergence for random variables. We use the notation $\overset{\textup{a.s.}}{\longrightarrow}$ for almost-sure convergence, $\overset{\textup{p}}{\longrightarrow}$ for convergence in probability, $\overset{\textup{d}}{\longrightarrow}$ for convergence in distribution, and $\Rightarrow$ for weak convergence; we give definitions later. Given $x \in \mathbb{R}^d$ we write $x = (x_1, \ldots, x_d)^{\scalebox{0.6}{$\top$}}$ for the vector in Cartesian coordinates; for definiteness, vectors are viewed as column vectors throughout. We write $\| \, \cdot \, \|$ for the Euclidean norm on $\mathbb{R}^d$, so that $\|x\| := ( x_1^2 + \cdots + x_d^2 )^{1/2}$. For $x, y \in \mathbb{R}^d$ we write the Euclidean distance between points $x$ and $y$ as $\rho_E ( x, y) := \| x -y \|$.
For $x \in \mathbb{R}^d \setminus \{ 0 \}$ we write $\hat x := x / \| x \|$ for the unit vector in the $x$ direction; it is convenient to set $\hat{0}:=0$. The unit sphere in $\mathbb{R}^d$ is denoted by $\mathbb{S}^{d-1} := \{ x \in \mathbb{R}^d : \| x \| =1 \}$ and for the unit ball in $\mathbb{R}^d$ we write $\mathbb{B}^d:= \{ x \in \mathbb{R}^d : \| x \| \leq 1 \}$. Let $(S,\rho)$ be a metric space. For $A \subseteq S$ and $x \in S$ we set $\rho ( x, A ) := \inf_{y \in A} \rho (x, y)$, and for $A, B \subseteq S$ we set $\rho (A, B) := \inf_{x \in A} \inf_{y \in B} \rho (x,y)$. In the case where $S = \mathbb{R}^d$, we use $\rho_E$ for $\rho$ in the same way. We denote the $d$-dimensional normal distribution by $\mathcal{N}(\mu,\Sigma)$, where $\mu$ is the $d$-dimensional mean vector, and $\Sigma$ is the $d \times d$ covariance matrix; $\Sigma$ is nonnegative-definite and symmetric. We permit the notation $\mathcal{N} (\mu , 0)$, where $0$ is the matrix of all $0$s, to stand for the degenerate normal distribution concentrated at $\mu$. We write $I_d$ for the $d$-dimensional identity matrix. The nonnegative-definite symmetric matrix $\Sigma$ has a (unique) nonnegative-definite symmetric square-root $\Sigma^{1/2}$ satisfying $( \Sigma^{1/2} )^2 = \Sigma$. The matrix $\Sigma^{1/2}$ induces the linear map $x \mapsto \Sigma^{1/2} x$ on $\mathbb{R}^d$. We write $b_d = (b_d(t), t \in \mathbb{R}_+)$ for standard $d$-dimensional Brownian motion started at $b_d(0) = 0$; we work in a probability space on which $b_d$ is continuous. The process $\Sigma^{1/2} b_d$ is \emph{correlated} Brownian motion with covariance matrix $\Sigma$. If $d=1$ we write simply $b$ for $b_1$. For $A \subseteq \mathbb{R}^d$, we denote the closure of $A$ by $\cl A$, the complement of $A$ by $A^\mathrm{c} := \mathbb{R}^d \setminus A$, and the boundary of $A$ by $\partial A := \cl A \cap \cl A^\mathrm{c}$. 
The interior of $A \subseteq \mathbb{R}^d$ is $\inte A := A \setminus \partial A$ and we use $\mathds{1}_A(x)$ to denote the indicator function of a set, that is \[\mathds{1}_A(x)=\begin{cases}1 ~{\rm if~} x\in A;\\ 0 {\rm~otherwise.~}\end{cases}\] We write $\mathcal{B}_d$ for the Borel $\sigma$-algebra on $\mathbb{R}^d$. For a measurable $A \subseteq \mathbb{R}^d$ we write $\mu_d (A)$ for the $d$-dimensional Lebesgue measure of $A$. For the set of points at distance of at most $\varepsilon$ from $A$ we use $A^{\varepsilon}=\{ x \in \mathbb{R}^d : \rho_E (x,A) \leq \varepsilon \}$. The projection of $A$ on to the space perpendicular to $u \in \mathbb{S}^{d-1}$ will be denoted by $A|u^{\perp}$. For $x \in \mathbb{R}$ we set $x^+ := \max \{ x, 0\}$, $x^- := \max \{ -x, 0\}$, so that $x = x^+ - x^-$ and \[\mathrm{sgn}(x)=\begin{cases}1 ~{\rm if~} x>0;\\ 0 {\rm~if~} x=0;\\ -1 {\rm~if~} x<0.\end{cases}\] For $x, y \in \mathbb{R}$ we write $x \wedge y := \min \{ x, y\}$ and $x \vee y := \max \{ x, y\}$. Given functions $f$ and $g$, the function $f \circ g$ is defined by $(f \circ g) (x) = f (g (x))$. \subsection{Spaces of trajectories} \label{subsec:CandD} Since our aim is to consider the limiting behaviour of trajectories of random walks we first discuss the spaces that these trajectories and their scaling limits inhabit. Throughout this paper our trajectories will be indexed over the interval $[0,1]$. Let $\mathcal{M}^d := \mathcal{M}^d [0,1]$ denote the set of all bounded measurable $f : [0,1] \to \mathbb{R}^d$; we call elements of $\mathcal{M}^d$ \emph{trajectories}. Let $\mathcal{M}_0^d:= \{ f \in \mathcal{M}^d : f(0) =0 \}$. For $t \in [0,1]$ and $f \in \mathcal{M}^d$ we define the \emph{interval image} $f[0,t] := \{ f(x) : x \in [0,t]\}$. For $f \in \mathcal{M}^d$ we write $\| f \|_\infty := \sup_{0 \leq t \leq 1} \| f(t) \|$ for the supremum norm of $f$. 
We endow $\mathcal{M}^d$ with the supremum metric \begin{equation} \label{eqn:supnorm} \rho_{\infty}(f,g) := \|f-g\|_{\infty} := \sup_{0\leq t\leq 1} \|f(t)-g(t)\|, \text{ for } f, g \in \mathcal{M}^d. \end{equation} For $d=1$ we write simply $\mathcal{M} := \mathcal{M}^1$. It will also be occasionally useful to consider the canonical projection at time $t \in [0,1]$, $\pi_t : \mathcal{M}^d \to \mathbb{R}^d$, defined as $\pi_t f = f(t)$ for $f \in \mathcal{M}^d$. Our limit theorems are stated in terms of convergence of elements of a metric space. We will primarily be interested in trajectories that are either continuous, or have at most countably many jump discontinuities. We will restrict our attention to the corresponding subspaces of $\mathcal{M}^d$. For $f \in \mathcal{M}^d$, we write $D_f \subseteq [0,1]$ for the set of discontinuities of $f$. The set $\mathcal{C}^d:=\mathcal{C}^d[0,1]$ is the set of continuous $d$-dimensional functions on the unit interval (the set of $f:[0,1]\rightarrow \mathbb{R}^d$ such that $\lim_{x\rightarrow c}f(x)=f(c)$ for all $c\in [0,1]$). The space $(\mathcal{C}^d,\rho_\infty)$ is the corresponding metric space with the distance between two elements defined by~\eqref{eqn:supnorm}. Note that functions in $\mathcal{C}^d$ are bounded. Let $\mathcal{C}_0^d:= \{ f \in \mathcal{C}^d : f(0) =0 \}$. In the case $d=1$, we write simply $\mathcal{C} := \mathcal{C}^1$. We consider also the set $\mathcal{D}^d := \mathcal{D}^d[0,1]$, the set of right-continuous $d$-dimensional functions with left-hand limits on the unit interval, that is, \begin{enumerate} \item For $0\leq t<1, f(t+)=\lim_{s\downarrow t}f(s)$ exists and $f(t+)=f(t)$. \item For $0<t\leq 1, f(t-)=\lim_{s\uparrow t}f(s)$ exists. \end{enumerate} Functions in $\mathcal{D}^d$ are bounded, and have (at most) countably many discontinuities of the first type (jump discontinuities): see \cite[pp.~121--122]{cpm}. Let $\mathcal{D}_0^d:= \{ f \in \mathcal{D}^d : f(0) =0 \}$. 
In the case $d=1$, we write simply $\mathcal{D} := \mathcal{D}^1$. The supremum metric \eqref{eqn:supnorm} is well-defined on $\mathcal{D}^d$, and in some cases we will consider the metric space $(\mathcal{D}^d, \rho_\infty)$. However, the following example motivates the consideration of an alternative metric. \begin{example}\label{ex:threefunctions} Consider the following three functions, \begin{minipage}{0.4\textwidth} \begin{align*} f(t)&=\begin{cases} 1 & \mathrm{for\ }t\in[0,1/2);\\ 0 & \mathrm{for\ }t\in[1/2,1]; \end{cases}\\ g(t)&=\begin{cases} 0.8 & \mathrm{for\ }t\in[0,1/2);\\ 0.2 & \mathrm{for\ }t \in[1/2,1]; \end{cases}\\ h(t)&= \begin{cases} 0.95 & \mathrm{for\ } t\in [0,0.49);\\ 0.05 & \mathrm{for\ }t\in [0.49,1]. \end{cases} \end{align*} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[scale=0.28]{example1_1.pdf} \end{minipage} Taking an overview of the plot, it seems reasonable to suggest that the blue function, $h(t)$, is \lq closer\rq~to the light-green function, $f(t)$, than the red function, $g(t)$, is to the light-green function. However, if we consider the supremum metric, we find that $\rho_{\infty}(f,h)=0.95$ whilst $\rho_{\infty}(f,g)=0.2$. This is due to the slightly earlier jump at $t=0.49$ for $h$, which, for a small interval of $t$, takes the function to the larger Euclidean distance of $0.95$ from $f$. So for processes with jumps, we may wish to consider a different measure of distance. \end{example} Our candidate for a more suitable metric on $\mathcal{D}^d$ is the \emph{Skorokhod metric}, which allows some small perturbations in the $t$ direction to be considered when we measure the distance between two functions. Roughly speaking, we define the distance as the smallest perturbation needed in either the space or time direction to map the functions onto each other. Formally (see \cite[p.~123]{pollard}), let $\Lambda$ denote the class of strictly increasing, continuous mappings of $[0,1]$ onto itself. 
If $\lambda \in \Lambda$, then $\lambda(0)=0$ and $\lambda(1)=1$, and the inverse $\lambda^{-1}$ is also in $\Lambda$. For $f$ and $g$ in $\mathcal{M}^d$, define $\rho_S(f,g)$ to be the infimum of those positive $\varepsilon$ for which there exists in $\Lambda$ a $\lambda$ satisfying \begin{equation} \label{eqn:Skorokhod1} \sup_{0\leq t\leq 1}|\lambda (t)-t|=\sup_{0\leq t\leq 1}|t-\lambda^{-1}(t)|<\varepsilon \end{equation} and \begin{equation} \label{eqn:Skorokhod2} \sup_{0\leq t\leq 1}\|f(t)-g(\lambda(t))\|=\sup_{0\leq t\leq 1}\|f(\lambda^{-1}(t))-g(t)\|<\varepsilon. \end{equation} To express this in more compact form, let $I$ be the identity map on $[0,1]$; then \begin{equation} \label{eq:skor-def} \rho_S(f,g) \coloneqq \inf_{\lambda\in \Lambda}\left\{ \left\|\lambda-I\right\|_{\infty}\vee\left\|f-g\circ\lambda\right\|_{\infty}\right\} . \end{equation} \begin{example} Consider the functions $f(t)$, $g(t)$ and $h(t)$ from Example \ref{ex:threefunctions}. The distance $\rho_S(f,g)=0.2$ because there is no perturbation of the time which would decrease the Euclidean distance between $f$ and $g$. However, when we consider $f$ and $h$, we could define $$\lambda(t) \coloneqq \begin{cases} \frac{49}{50}t & \mathrm{for\ }t \in [0,1/2)\\ \frac{51}{50}t-\frac{1}{50} & \mathrm{for\ }t \in [1/2,1]. \end{cases}$$ It turns out that this $\lambda$ is optimal giving $\rho_S(f,h)=0.05$ because equation \eqref{eqn:Skorokhod1} gives us a lower bound of $\varepsilon=0.01$ attained when $t=0.5$ and $\lambda(t)=0.49$, and equation \eqref{eqn:Skorokhod2} gives the lower bound of $\varepsilon=0.05$, which is attained at any $t\in [0,1]$. \end{example} We state the following simple fact so we can refer to it later. \begin{lemma} \label{lem:skor} For any $f, g \in \mathcal{M}^d$ we have $\rho_S (f,g) \leq \rho_\infty (f,g)$. \end{lemma} \begin{proof} The infimum in~\eqref{eq:skor-def} is bounded above by the value at $\lambda = I$. 
\end{proof} For certain technical reasons, it is sometimes more convenient to work with an alternative metric on $\mathcal{D}^d$, first described by Kolmogorov \cite{kolmogorov1956}, denoted $\rho_S^\circ$. For $\lambda \in \Lambda$, we set \begin{equation*} \label{lambdacirc} \|\lambda \|^\circ := \sup_{s<t} \left\lvert \log \frac{\lambda (t) - \lambda (s)}{t - s}\right\rvert. \end{equation*} If the slope of the chord between $(s, \lambda(s))$ and $(t, \lambda(t))$ is always close to 1, then $\|\lambda\|^\circ$ is close to 0. We then define \[ \rho_S^\circ(f,g) \coloneqq \inf_{\lambda \in \Lambda} \left\lbrace \left\| \lambda \right\|^\circ \vee \left\| f - g\circ \lambda\right\|_{\infty} \right\rbrace. \] We also make the following technical observation about $\|\lambda\|^\circ$ which will be required in a couple of proofs. \begin{lemma} \label{lem:estlam} Let $\lambda \in \Lambda$. Define $c ( \lambda ) := \max \{ \mathrm{e}^{\|\lambda\|^\circ}-1, 1- \mathrm{e}^{-\|\lambda\|^\circ}\}$. Then we have \begin{equation} \label{s3} |\lambda(t)-t| \le t c (\lambda) , \text{ for all } t \in [0,1]; \end{equation} and \begin{equation} \label{s1} | \lambda' (t) - 1 | \leq c ( \lambda ) ,\text{ almost everywhere on } t \in (0,1) . \end{equation} \end{lemma} \begin{proof} From the definition of $\|\lambda\|^\circ$, we have that for any $t \in [0,1)$ and $h>0$ sufficiently small, \[ \log \left| \frac{\lambda(t+h)-\lambda(t)}{h} \right| \le \|\lambda\|^\circ \] so that \begin{equation} \label{lambdabound} \mathrm{e}^{-\|\lambda\|^\circ} \le \frac{\lambda(t+h)-\lambda(t)}{h} \le \mathrm{e}^{\|\lambda\|^\circ}. \end{equation} By Lebesgue's theorem on the differentiability of monotone functions, see \cite[p.~321]{kolmogorov2012}, $\lambda' (t)$ exists almost everywhere on $t \in (0,1)$, and when it does exist, we have from \eqref{lambdabound} that \[ \mathrm{e}^{-\|\lambda\|^\circ} \le \lambda'(t) \le \mathrm{e}^{\|\lambda\|^\circ} . 
\] Hence we see that \eqref{s1} holds as required. For the first assertion, since $\lambda (0 ) = 0$, another application of the definition of $\| \lambda\|^\circ$ shows that $| \log ( \lambda (t) / t ) | \leq \|\lambda\|^\circ$ for all $t \in (0,1)$, and hence \[ t \mathrm{e}^{-\|\lambda\|^\circ} \le \lambda(t) \le t\mathrm{e}^{\|\lambda\|^\circ}, \text{ for all } t \in [0,1]. \] It follows that\[ -t\left(1-\mathrm{e}^{-\|\lambda\|^\circ}\right) \le \lambda(t)-t \le t \left(\mathrm{e}^{\|\lambda\|^\circ}-1\right), \] and so we get \eqref{s3} as required. This completes the proof. \end{proof} One important result about the two metrics $\rho_S$ and $\rho_S^\circ$ concerns their equivalence. \begin{proposition}\cite[Theorem~7]{kolmogorov1956} \label{prop:equivmetrics} The metrics $\rho_S^\circ$ and $\rho_S$ are equivalent. That is, for a sequence of functions $f,f_1,f_2,\ldots$ on $\mathcal{D}^d$, $\rho_S(f_n,f)\rightarrow 0$ as $n \rightarrow \infty$ if and only if $\rho_S^\circ(f_n,f) \rightarrow 0$ as $n\rightarrow \infty$. \end{proposition} Since the metrics are equivalent, a functional on $\mathcal{D}^d$ that is continuous with respect to one of them is continuous with respect to the other; we will use whichever metric is simplest for each application. Likewise, almost sure statements using one metric carry over to the other. Note also that, as equivalent metrics, $\rho_S$ and $\rho_S^\circ$ generate the same topology (open sets) on $\mathcal{D}^d$, and hence also the same Borel sets. One reason to consider $\rho_S^\circ$ is that it has the advantage that $\mathcal{D}^d$ is a complete metric space under it. \begin{theorem} \label{thm:Cdsep} The space $\mathcal{C}^d$ is separable and complete under $\rho_{\infty}$.
\end{theorem} \begin{theorem} \label{thm:Ddsep} The space $\mathcal{D}^d$ is separable under $\rho_S$ and $\rho^\circ_{S}$, and complete under $\rho^\circ_{S}$. \end{theorem} The one-dimensional case of Theorem~\ref{thm:Cdsep} is discussed at \cite[p.~11]{cpm}. The separability extends to higher dimensions by, for example, \cite[\S 4A2Q]{fremlin}. Completeness also extends: it is a simple exercise to see that each coordinate of a Cauchy sequence under $\rho_\infty$ in $d$ dimensions is itself a one-dimensional Cauchy sequence, and the coordinatewise limits provide a limit in the product space. As mentioned above, the one-dimensional case of Theorem~\ref{thm:Ddsep} was proven by Kolmogorov in \cite{kolmogorov1956}, but is also discussed at \cite[Theorem~12.2]{cpm}. The separability for higher dimensions extends as in the continuous case, using \cite[\S 4A2Q]{fremlin}, and the completeness of the space under the metric $\rho_S^\circ$ also follows by a similar simple calculation. \subsection{Modulus of continuity} \label{sec:modulus} For $f \in \mathcal{C}^d$, the associated modulus of continuity is defined by \begin{equation*} \label{eqn:Cmodulusofcont} w_f(\delta):=\sup_{|s-t|<\delta} \|f(s)-f(t)\|, \text{ for } 0<\delta\leq 1. \end{equation*} In $\mathcal{D}^d$, the analogous concept is a little more involved (see \cite[p.~122]{cpm}). A set $\{t_i : 0 \leq i \leq v\}$ with $0=t_0<t_1<\cdots< t_v=1$ is called $\delta$-sparse if it also satisfies $\min_{1\leq i\leq v}(t_i-t_{i-1})>\delta$. Then define, for $0<\delta \leq 1$, \begin{align} \label{eqn:Dmodulusofcont} w'_f(\delta) & := \inf_{\{t_i\}} \max_{1 \leq i \leq v} \sup_{t,s \in [t_{i-1}, t_i)} \|f(t) - f(s)\|, \end{align} where the infimum extends over all $\delta$-sparse sets $\{t_i\}$.
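On a finite grid both moduli are easy to approximate numerically. The sketch below is our own illustration (the names \texttt{w\_cont} and \texttt{w\_cadlag\_upper} are not from the text): \texttt{w\_cont} evaluates $w_f(\delta)$ over grid points, while \texttt{w\_cadlag\_upper} evaluates the inner part of \eqref{eqn:Dmodulusofcont} for one fixed partition $\{t_i\}$, so it only upper-bounds $w'_f(\delta)$. The step function used as an example shows why $w'_f$ is the right notion on $\mathcal{D}$: its jump is invisible to $w'_f$ once a partition point sits at the jump, but $w_f$ sees it at every scale.

```python
def w_cont(f, delta, grid):
    """Approximate w_f(delta) = sup_{|s-t|<delta} |f(s)-f(t)| over a grid."""
    vals = [f(t) for t in grid]
    best = 0.0
    for i, s in enumerate(grid):
        for j, t in enumerate(grid):
            if abs(s - t) < delta:
                best = max(best, abs(vals[i] - vals[j]))
    return best

def w_cadlag_upper(f, partition, grid):
    """Oscillation of f over the cells [t_{i-1}, t_i) of one partition;
    an upper bound for w'_f(delta) whenever the partition is delta-sparse."""
    best = 0.0
    for a, b in zip(partition[:-1], partition[1:]):
        vals = [f(t) for t in grid if a <= t < b]
        if vals:
            best = max(best, max(vals) - min(vals))
    return best

grid = [k / 200 for k in range(201)]
step = lambda t: 0.0 if t < 0.5 else 1.0        # one jump at t = 1/2
print(w_cont(step, 0.01, grid))                  # 1.0: the jump dominates w_f
print(w_cadlag_upper(step, [0.0, 0.5, 1.0], grid))  # 0.0: the partition absorbs the jump
```

Taking the partition point at the jump makes each cell's oscillation zero, mirroring the infimum over $\delta$-sparse sets in \eqref{eqn:Dmodulusofcont}.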
\subsection{Random elements of metric spaces} \label{sec:MMeasSpaces} Let $(S, \rho)$ denote a metric space and let $\mathcal{S}$ denote the Borel $\sigma$-algebra, that is, the $\sigma$-algebra on $S$ generated by the open sets. On a probability space $(\Omega, \mathcal{F}, \Pr)$, a random variable $X$ taking values in a measurable space $(S, \mathcal{S})$ is a mapping $X : \Omega \to S$ that is measurable, i.e., $X^{-1}(B) \in \mathcal{F}$ for any $B \in \mathcal{S}$. Note that $\Pr$ induces a measure on $(S, \mathcal{S})$ via $\Pr ( X \in \, \cdot \,)$. We call the triple $(S,\mathcal{S},\rho)$ a \emph{metric measure space}; here $\mathcal{S}$ is always understood to be the Borel $\sigma$-algebra, and we speak of $X$ as a random element of $(S,\mathcal{S},\rho)$. In what follows we discuss convergence of sequences of such random elements, starting with the definition of almost-sure convergence in the next section. \section{Functional laws of large numbers} \label{FLLN} \subsection{Almost-sure convergence and the strong law} \begin{definition}[Almost-sure convergence] Let $X, X_1, X_2, \ldots$ be random variables on the probability space $(\Omega, \mathcal{F}, \Pr)$ taking values in the metric measure space $(S, \mathcal{S}, \rho)$. We write $X_n \overset{\textup{a.s.}}{\longrightarrow} X$ if $\rho ( X_n, X) \overset{\textup{a.s.}}{\longrightarrow} 0$, i.e., if \[ \Pr \left( \left\{ \omega \in \Omega : \lim_{n \to \infty} \rho ( X_n (\omega) , X(\omega) ) = 0 \right\} \right) = 1 .\] \end{definition} Given two metric measure spaces $(S, \mathcal{S}, \rho)$ and $(S', \mathcal{S}', \rho')$ and a measurable function $h : S \to S'$, the set $D_h$ of discontinuities of $h$ satisfies $D_h \in \mathcal{S}$: see \cite[p.~243]{cpm}. Hence $\Pr(X\in D_h)$ is well defined. Using this, the following result gives conditions under which almost-sure convergence is preserved under mappings.
\begin{theorem}[Mapping theorem for almost-sure convergence] \label{thm:mapping} Let $X, X_1, X_2, \ldots$ be random variables on the probability space $(\Omega, \mathcal{F}, \Pr)$ taking values in the metric measure space $(S, \mathcal{S}, \rho)$. Let $(S',\mathcal{S}',\rho')$ be another metric measure space, and let $h: (S, \mathcal{S}, \rho) \to (S', \mathcal{S}', \rho')$ be measurable. If $X_n \overset{\textup{a.s.}}{\longrightarrow} X$ and $\Pr ( X \in D_h) = 0$, then $h(X_n) \overset{\textup{a.s.}}{\longrightarrow} h(X)$. \end{theorem} \begin{proof} For any $\omega$ such that $h$ is continuous at $X(\omega)$ and $X_n(\omega) \to X(\omega)$, we have $h(X_n(\omega)) \to h(X(\omega))$. Then \begin{align*} \Pr(\{\omega \in \Omega : \rho' ( h(X_n(\omega)) , h(X(\omega)) ) \to 0 \mathrm{\ as\ } n \to \infty\}) &\geq \Pr(\{\omega \in \Omega : h \mathrm{\ is\ continuous\ at\ } X(\omega)\} \cap \{\omega \in \Omega : X_n(\omega) \to X(\omega)\})\\ &= 1, \end{align*} since both events in the intersection have probability one: the first because $\Pr(X \in D_h^\mathrm{c}) = 1$, and the second because $X_n \overset{\textup{a.s.}}{\longrightarrow} X$. Hence $h(X_n) \overset{\textup{a.s.}}{\longrightarrow} h(X)$. \end{proof} Since the $\sigma$-algebra $\mathcal{S}$ is always the Borel $\sigma$-algebra, we often omit this and talk about almost-sure convergence on $(S, \rho)$ instead of $(S, \mathcal{S}, \rho)$. The formal result of the classical strong law of large numbers for $d$-dimensional random walks is as follows; see, for example, \cite[Theorem~2.4.1]{durrett}. \begin{theorem}[Strong law of large numbers] \label{thm:SLLN} Consider the random walk defined at~\eqref{ass:walk}. Then, $n^{-1} S_n \overset{\textup{a.s.}}{\longrightarrow} \mu$ in the sense of almost-sure convergence on $(\mathbb{R}^d,\rho_E)$. \end{theorem} \subsection{The functional law of large numbers} Consider the random walk on $\mathbb{R}^d$ as defined at \eqref{ass:walk}. We construct a trajectory with time scaled to $[0,1]$ and space scaled as in the strong law of large numbers; the trajectory is an element of $\mathcal{M}^d$ that takes values $n^{-1} S_k$ at times $t = k/n$ for $k \in \{0,1,\ldots, n\}$.
There are two standard ways to construct this trajectory: either use linear interpolation between the partial sums at the times $k/n$ and $(k+1)/n$, or make it piecewise constant (constant over intervals $[k/n,(k+1)/n)$). The first approach ensures a continuous trajectory and so gives an element of $\mathcal{C}^d$; the second gives a path with up to $n$ jump discontinuities, so it yields an element of $\mathcal{D}^d$. The formal definitions are as follows. Define for $n \in \mathbb{N}$ and all $t \in [0,1]$, \begin{align} X_n(t) & := n^{-1} \left(S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor) \xi_{\lfloor nt \rfloor + 1}\right); \nonumber \\ X'_n(t) & := n^{-1} S_{\lfloor nt \rfloor}. \label{eqn:markovXn} \end{align} Then $X_n \in \mathcal{C}^d_0$ and $X_n' \in \mathcal{D}_0^d$. See Figure~\ref{ex:traj} for an illustration of the two variants. \begin{figure} \centering \includegraphics[scale=0.4]{trajectories.pdf} \caption{An example of a possible random walk, and the two continuous-time trajectories we can create for it; the continuous interpolating $X_n(t)$ in red and the piecewise constant process $X'_n(t)$ in black.} \label{ex:traj} \end{figure} Intuitively, we expect the trajectories to \lq look like\rq~they increase at rate $\mu$. The strong law of large numbers allows us to develop a functional law of large numbers, which shows that this is indeed the case. Theorem~\ref{thm:flln} is apparently stronger than Theorem~\ref{thm:SLLN}, since convergence in the $\rho_\infty$ metric implies convergence of the endpoints $X_n (1) = n^{-1} S_n \overset{\textup{a.s.}}{\longrightarrow} \mu = I_\mu (1)$ and $X_n' (1) = n^{-1} S_n \overset{\textup{a.s.}}{\longrightarrow} \mu = I_\mu (1)$. However, we will see that Theorem~\ref{thm:flln} is in fact just a recasting of Theorem~\ref{thm:SLLN}, so the two results are equivalent. See Figure~\ref{XnCS} for a simulation, and, for example, \cite[p.~26]{whitt2002stochastic} for a reference.
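The constructions in \eqref{eqn:markovXn} are straightforward to simulate. The following sketch (our own code; the Bernoulli increment distribution and the parameter choices are purely illustrative) builds both trajectories for a walk with $\mu = 1/2$ and evaluates $\sup_{0 \leq k \leq n} | X'_n(k/n) - I_\mu(k/n)|$ directly; for the piecewise-constant path this grid supremum is within $\|\mu\|/n$ of $\rho_\infty(X'_n, I_\mu)$, and the functional law of large numbers asserts that it tends to 0.

```python
import random

def trajectories(xi):
    """Build X_n (interpolated) and X'_n (piecewise constant) from the
    increments xi[0], ..., xi[n-1]; xi[k] plays the role of xi_{k+1}."""
    n = len(xi)
    S = [0.0]
    for x in xi:
        S.append(S[-1] + x)            # partial sums S_0, ..., S_n

    def X(t):                          # continuous interpolation
        k = min(int(n * t), n - 1)
        return (S[k] + (n * t - k) * xi[k]) / n

    def Xp(t):                         # piecewise constant
        return S[min(int(n * t), n)] / n

    return S, X, Xp

random.seed(1)
n, mu = 10000, 0.5
xi = [random.choice([0.0, 1.0]) for _ in range(n)]   # mean-1/2 increments
S, X, Xp = trajectories(xi)
# sup over the grid t = k/n of |X'_n(t) - mu * t|:
dist = max(abs(S[k] / n - mu * k / n) for k in range(n + 1))
print(dist)   # small for large n, consistent with the functional LLN
```

Both trajectories agree at the grid points $t = k/n$ and at $t=1$, which is why the theorem can be proved for one and transferred to the other.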
\begin{theorem}[Functional law of large numbers] \label{thm:flln} Consider the random walk defined at~\eqref{ass:walk}. Let $I_\mu \in \mathcal{C}^d$ be the function defined by $I_\mu (t) := \mu t$ for $t \in [0,1]$. \begin{itemize} \item[(a)] We have $X_n \overset{\textup{a.s.}}{\longrightarrow} I_\mu$ on $(\mathcal{C}_0^d, \rho_\infty)$. \item[(b)] We have $X_n' \overset{\textup{a.s.}}{\longrightarrow} I_\mu$ on $(\mathcal{D}_0^d, \rho_\infty)$. \end{itemize} \end{theorem} \begin{remark} By Lemma~\ref{lem:skor}, part~(b) also shows that $X_n' \overset{\textup{a.s.}}{\longrightarrow} I_\mu$ on $(\mathcal{D}^d_0, \rho_S )$ and Proposition~\ref{prop:equivmetrics} in turn shows that $X_n' \overset{\textup{a.s.}}{\longrightarrow} I_\mu$ on $(\mathcal{D}^d_0, \rho_S^\circ)$. \end{remark} \begin{figure} \centering \includegraphics[scale=0.4]{XnCS} \caption{Simulation of the trajectories $X_n(t)$ of a random walk on $\mathbb{R}$ with $\mu = 0.5$ for $n=100$ in blue and $n=10000$ in red.} \label{XnCS} \end{figure} \begin{proof}[Proof of Theorem~\ref{thm:flln}] Let $\varepsilon > 0$. By Theorem~\ref{thm:SLLN}, there exists $N_{\varepsilon}$ with $\Pr ( N_\varepsilon < \infty) =1 $ such that, for all $n \geq N_{\varepsilon}$, $\| n^{-1} S_n - \mu \| \leq \varepsilon$. Then \begin{align} \label{eq:fslln1} \sup_{N_{\varepsilon}/n \leq t \leq 1} \left \| X'_n (t) - \mu t \right \| & \leq \sup_{N_{\varepsilon}/n \leq t \leq 1} \left \| X'_n(t) - \frac{\lfloor nt \rfloor}{n} \mu \right \| + \sup_{N_{\varepsilon}/n \leq t \leq 1} \left \| \frac{\lfloor nt \rfloor}{n} \mu - t \mu \right \| \nonumber\\ & \leq \sup_{N_{\varepsilon}/n \leq t \leq 1} \left( \frac{\lfloor nt \rfloor}{n} \right) \left \| \frac{S_{\lfloor nt \rfloor}}{\lfloor nt \rfloor} - \mu \right \| + \sup_{0 \leq t \leq 1} \left | \frac{\lfloor nt \rfloor}{n} - t \right | \| \mu \| \nonumber\\ & \leq \varepsilon + \frac{\| \mu \|}{n}. 
\end{align} On the other hand, \begin{align} \label{eq:fslln2} \sup_{0 \leq t \leq N_{\varepsilon}/n} \left \| X'_n(t) - \mu t \right \| & \leq \frac{1}{n} \max_{0 \leq k \leq N_{\varepsilon}} \| S_k\| + \frac{N_{\varepsilon} \| \mu \|}{n} \overset{\textup{a.s.}}{\longrightarrow} 0 {~\rm as~} n\rightarrow \infty, \end{align} since $\Pr (N_\varepsilon < \infty) =1$. Thus combining~\eqref{eq:fslln1} and~\eqref{eq:fslln2} we obtain \[ \limsup_{n \to \infty} \sup_{0 \leq t \leq 1} \left \| X'_n (t) - \mu t \right \| \leq \varepsilon ,\] and since $\varepsilon >0$ was arbitrary, we get $\rho_\infty ( X'_n , I_\mu ) \overset{\textup{a.s.}}{\longrightarrow} 0$, proving part (b). Let $X''_n(t) := n^{-1} S_{\lfloor nt \rfloor +1}$. A similar argument to that above shows that, for $n \geq 1$, \begin{align*} \sup_{N_\varepsilon/n \leq t \leq 1} \| X''_n(t) - \mu t\| & \leq \sup_{N_\varepsilon /n \leq t \leq 1} \left( \frac{ \lfloor nt \rfloor + 1}{n} \right) \left\| \frac{S_{\lfloor nt \rfloor +1}}{\lfloor nt \rfloor + 1} - \mu \right\| + \sup_{0 \leq t \leq 1 } \left| \frac{\lfloor nt \rfloor +1}{n} - t \right| \| \mu \| \\ & \leq 2 \varepsilon + \frac{\| \mu \|}{n} , \end{align*} and \begin{align*} \sup_{0 \leq t \leq N_\varepsilon/n} \left \| X''_n(t) - \mu t \right \| & \leq \frac{1}{n} \max_{0 \leq k \leq N_\varepsilon +1 } \| S_k\| + \frac{N_{\varepsilon} \| \mu \|}{n} \overset{\textup{a.s.}}{\longrightarrow} 0 {~\rm as~} n\rightarrow \infty. \end{align*} It follows that $\rho_\infty (X''_n, I_\mu) \overset{\textup{a.s.}}{\longrightarrow} 0$ as well. Let $\alpha_n (t) = nt - \lfloor nt \rfloor $; note that $\alpha_n(t) \in [0,1)$ for all $n \geq 1$ and all $t \in [0,1]$.
Then \[ X_n (t) = X_n'(t) + n^{-1} \alpha_n(t) \xi _{\lfloor nt \rfloor +1} = (1-\alpha_n (t) ) X_n ' (t) + \alpha_n (t) X_n'' (t) ,\] so that \begin{align*} \rho_\infty (X_n, I_\mu) & = \sup_{0\leq t \leq 1} \|(1-\alpha_n(t))(X_n'(t)-I_\mu(t)) + \alpha_n(t) (X_n''(t)-I_\mu(t))\| \\ & \leq \sup_{0 \leq t \leq 1} | 1 - \alpha_n (t) | \| X_n' (t) - I_\mu (t) \| + \sup_{0 \leq t \leq 1} | \alpha_n (t) | \| X_n'' (t) - I_\mu (t) \| \\ & \leq \rho_\infty ( X_n' , I_\mu) + \rho_\infty (X_n'' , I_\mu ) ,\end{align*} which tends to $0$ a.s., establishing part (a). \end{proof} \subsection{The maximum functional} \label{sec:max} As a first example of the theory developed above, we take $d=1$ and consider the \emph{maximum functional} $M : \mathcal{M} \to \mathbb{R}$ defined by $M (f) := \sup_{0 \leq t \leq 1} f (t)$. Note that $|M(f)| \leq \| f \|_\infty$. The next result shows that $M$ is a continuous map from $(\mathcal{M}, \rho_\infty)$ to $(\mathbb{R}, \rho_E)$ and also a continuous map from $(\mathcal{M}, \rho_S)$ to $(\mathbb{R}, \rho_E)$. \begin{theorem} \label{thm:maximum} Let $d=1$. For any $f, g \in \mathcal{M}$ we have $| M (f ) - M (g) | \leq \rho_S ( f,g ) \leq \rho_\infty (f,g)$. \end{theorem} \begin{proof} Take $f, g \in \mathcal{M}$, and suppose without loss of generality that $\sup_{s \in [0,1]} f(s) \geq \sup_{t \in [0,1]} g(t)$. Let $\Lambda'$ be the set of $\lambda : [0,1] \to [0,1]$ that are surjective, i.e., $\lambda [0,1] = [0,1]$. Note that $\Lambda \subseteq \Lambda'$. Then for any $\lambda \in \Lambda'$, \begin{align*} | M(f) - M(g) | & = \sup_{s \in [0,1]} f(s) - \sup_{t \in [0,1]} g(t) \\ & = \sup_{s \in [0,1]} f(s) - \sup_{t \in [0,1]} g \circ \lambda (t),\end{align*} since $\lambda [0,1] = [0,1]$. 
Hence \begin{align*} | M (f) - M(g) | & = \sup_{s \in [0,1]} \left( f(s) - \sup_{t \in [0,1]} g \circ \lambda (t) \right) \\ & \leq \sup_{s \in [0,1]} \left( f(s) - g \circ \lambda (s) \right) \\ & \leq \sup_{s \in [0,1]} \left| f(s) - g \circ \lambda (s) \right| \\ & = \| f - g \circ \lambda \|_{\infty} \\ & \leq \| \lambda - I \|_{\infty} \vee \| f - g \circ \lambda \|_{\infty}. \end{align*} We therefore have that \begin{align*} |M(f) - M(g)| & \leq \inf_{\lambda \in \Lambda'} \left\{ \| \lambda - I \|_{\infty} \vee \| f - g \circ\lambda \|_{\infty} \right\} \leq \rho_S(f,g), \end{align*} since $\Lambda \subseteq \Lambda'$. Lemma~\ref{lem:skor} completes the proof. \end{proof} Since we have shown that the maximum functional is continuous, we can apply the mapping theorem to the functional law of large numbers to obtain the following result. \begin{theorem} Consider the random walk defined at~\eqref{ass:walk} with $d=1$. Then, as $n \to \infty$, \[ \frac{1}{n} \max_{0 \leq k \leq n} S_k \overset{\textup{a.s.}}{\longrightarrow} \mu^+ .\] \end{theorem} \begin{proof} Let $X'_n(t)$ be as defined at \eqref{eqn:markovXn}. The functional law of large numbers, Theorem~\ref{thm:flln}, says that $X_n' \overset{\textup{a.s.}}{\longrightarrow} I_\mu$ on $(\mathcal{D},\rho_\infty)$, while Theorem~\ref{thm:maximum} says that $M$ is continuous. Thus the mapping theorem, Theorem~\ref{thm:mapping}, implies that $M ( X_n' ) \overset{\textup{a.s.}}{\longrightarrow} M (I_\mu)$ on $(\mathbb{R},\rho_E)$. But $M ( X'_n ) = n^{-1} \max_{0 \leq k \leq n} S_k$ and $M ( I_\mu ) = \mu^+$, giving the result.
\end{proof} \section{Functional central limit theorems}\label{FCLT} \subsection{Weak convergence and the central limit theorem} In this section we state and prove the functional central limit theorem, which extends the Lindeberg-L\'evy central limit theorem~\eqref{eqn:clt} in a similar way to how the functional law of large numbers of Section~\ref{FLLN} extended the usual strong law. First we recall the notion of convergence in distribution for random variables on $\mathbb{R}^d$, and then extend it appropriately so that we can state the functional theorem. Given a random variable $X$ taking values in $\mathbb{R}^d$, we write $X = (X_1, \ldots, X_d)^{\scalebox{0.6}{$\top$}}$ in components. The distribution function $F$ of $X$ is defined for $t = (t_1,\ldots, t_d)^{\scalebox{0.6}{$\top$}} \in \mathbb{R}^d$ by $F(t) := \Pr ( X_1 \leq t_1, \ldots, X_d \leq t_d )$. \begin{definition} Let $X, X_1, X_2, \ldots$ be a sequence of $\mathbb{R}^d$-valued random variables with corresponding distribution functions $F, F_1, F_2, \ldots$. Then we say that $X_n$ converges in distribution to $X$, and write $X_n \overset{\textup{d}}{\longrightarrow} X$, if $\lim_{n\to\infty}F_n(t)=F(t)$ for all continuity points $t$ of $F$. \end{definition} For random variables taking values in general metric spaces (such as our spaces of trajectories) the concept that generalizes convergence in distribution is \emph{weak convergence}. First we define the concept for measures. \begin{definition}\label{def:wc} The probability measures $P_1,P_2,\ldots$ defined on a metric measure space $(S, \mathcal{S}, \rho)$ converge weakly to $P$, written $P_n \Rightarrow P$, if \[ \int_{S}f \textup{d} P_n \rightarrow \int_{S}f \textup{d} P \] for all bounded, continuous $f: S \to \mathbb{R}$. \end{definition} It is often more convenient to speak of weak convergence of random variables.
Consider a random variable $X$ on $(\Omega, \mathcal{F}, \Pr)$, taking values in a metric measure space $(S, \mathcal{S}, \rho)$. Consider also a sequence of random variables $X_n$, defined on possibly different probability spaces $(\Omega_n, \mathcal{F}_n, \Pr_n)$, but all taking values in the same metric measure space $(S, \mathcal{S}, \rho)$. We associate with $X, X_1, X_2, \ldots$ probability measures $P, P_1, P_2, \ldots$ on $(S, \mathcal{S}, \rho)$ in the natural way: for any $B \in \mathcal{S}$, \begin{align} \label{eq:measure-rv} P ( B ) = \Pr ( X \in B ), \text{ and } P_n ( B ) = \Pr_n ( X_n \in B ) . \end{align} \begin{definition} In this context, we say that $X_n \Rightarrow X$ if $P_n \Rightarrow P$. \end{definition} Equivalently, $X_n \Rightarrow X$ if $\lim_{n\to\infty} \Exp_n f(X_n) = \Exp f(X)$ for all bounded, uniformly continuous $f : S \to \mathbb{R}$, where $\Exp$ and $\Exp_n$ are expectations under $\Pr$ and $\Pr_n$, respectively. \begin{remark} In the case where $(S,\mathcal{S}, \rho)$ is $(\mathbb{R}^d,\mathcal{B}_d,\rho_E)$, where $\mathcal{B}_d$ is the Borel $\sigma$-algebra of $\mathbb{R}^d$, weak convergence reduces to convergence in distribution: see \cite[p.~203]{gut}. \end{remark} As with almost-sure convergence in the previous section, it will be necessary, and fruitful, to consider mappings of random variables, and thus to show that weak convergence is preserved under suitable mappings. For this we state an analogue of Theorem~\ref{thm:mapping} which holds for weak convergence. This result can be found, for example, as \cite[Theorem~2.7]{cpm}. We defer the proof until Section~\ref{sec:weak-theory}. Recall that given two metric measure spaces $(S, \mathcal{S}, \rho)$ and $(S', \mathcal{S}', \rho')$ and a function $h : S \to S'$, the set of discontinuities of $h$ is denoted by $D_h$.
\begin{theorem}[Mapping theorem for weak convergence] \label{thm:mapweak} Let $P, P_1, P_2, \ldots$ be a sequence of probability measures on a metric measure space $(S, \mathcal{S}, \rho)$. Let $(S',\mathcal{S}',\rho')$ be another metric measure space, and let $h: (S, \mathcal{S}, \rho) \to (S', \mathcal{S}', \rho')$ be measurable. For each $n$, define $P_n h^{-1}$, a probability measure on $(S', \mathcal{S}', \rho')$, by $P_nh^{-1} ( A ) = P_n ( h^{-1} ( A ) )$ for $A \in \mathcal{S}'$. If $P_n \Rightarrow P$ and $P ( D_h ) = 0$, then $P_n h^{-1} \Rightarrow P h^{-1}$. \end{theorem} Again, we may recast this result about weak convergence of measures in the language of weak convergence of random variables. Consider a random variable $X$ on $(\Omega, \mathcal{F}, \Pr)$, taking values in a metric measure space $(S, \mathcal{S}, \rho)$. Consider also a sequence of random variables $X_n$, defined on possibly different probability spaces $(\Omega_n, \mathcal{F}_n, \Pr_n)$, but all taking values in the same metric measure space $(S, \mathcal{S}, \rho)$. Let $(S', \mathcal{S}', \rho')$ be another metric measure space, and let $h: (S, \mathcal{S}, \rho) \to (S', \mathcal{S}', \rho')$ be measurable. By~\eqref{eq:measure-rv}, we may thus deduce from Theorem~\ref{thm:mapweak} the following. \begin{corollary} \label{cor:weak-mapping} If $X_n \Rightarrow X$ and $\Pr ( X \in D_h ) =0$, then $h(X_n) \Rightarrow h(X)$. \end{corollary} The final classical ingredient for this section is the multidimensional version of the central limit theorem, which can be found, for example, as \cite[Theorem~3.9.6]{durrett}. We also state a theorem of P\'{o}lya, see for example~\cite[Theorem~2.6.1]{lehmann}, which allows us to upgrade the convergence in the central limit theorem to uniform convergence of the distribution functions. \begin{theorem}[Multidimensional central limit theorem] \label{thm:CLT} Suppose that we have a random walk as defined at \eqref{ass:walk} satisfying \eqref{ass:Sigma}.
Then as $n \to \infty$, \[ \frac{1}{\sqrt{n}} \left( S_n - n\mu \right) \overset{\textup{d}}{\longrightarrow} \mathcal{N}(0,\Sigma). \] \end{theorem} \begin{theorem}\label{thm:polya} Let $F_1,F_2,\ldots$ be a sequence of cumulative distribution functions such that $F_n \overset{\textup{d}}{\longrightarrow} F$. If $F$ is continuous, then $F_n(x)$ converges to $F(x)$ uniformly in $x$. \end{theorem} \subsection{Functional central limit theorem and some applications} \label{Donsker} The functional version of the central limit theorem is known as \emph{Donsker's theorem}. For simplicity of exposition, and with the applications that come later in mind, we consider only the zero drift case where $\mu =0$. Again we work with trajectories indexed by $[0,1]$; now the scaling is the central limit theorem scaling rather than the law of large numbers scaling. Precisely, for $n \in \mathbb{N}$ and $t \in [0,1]$ we define \begin{align} \label{eqn:trajectories} Y_n(t) & := \frac{1}{ \sqrt{n}} \left( S_{\lfloor nt \rfloor} + (nt - \lfloor nt \rfloor) \xi_{\lfloor nt \rfloor + 1} \right);\\ Y'_n(t) & := \frac{1}{\sqrt{n}} S_{\lfloor nt \rfloor}.\nonumber \end{align} Here $Y_n \in \mathcal{C}_0^d$ and $Y'_n \in \mathcal{D}_0^d$. Although the latter lives in $\mathcal{D}_0^d$, in the regime of the central limit theorem no single increment should be macroscopic on the scale of $\sqrt{n}$, so we expect a limiting process with continuous trajectories and with increments that are independent and normally distributed. The obvious candidate for such a limit process is $d$-dimensional Brownian motion. \begin{theorem}[Donsker's theorem] \label{thm:Donsker-dd} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$ and satisfying \eqref{ass:Sigma}. \begin{itemize} \item[(a)] We have $Y_n \Rightarrow \Sigma^{1/2}b_d$ in the sense of weak convergence on $(\mathcal{C}_0^d, \rho_{\infty})$.
\item[(b)] We have $Y'_n \Rightarrow \Sigma^{1/2} b_d$ in the sense of weak convergence on $(\mathcal{D}_0^d, \rho_S)$. \end{itemize} \end{theorem} The one-dimensional case of this result was proved by Donsker in 1951~\cite{donsker}. We point the reader to \cite[\S 5]{ethier} for a comprehensive discussion of both $d$-dimensional Brownian motion and the steps leading to this result. Figure~\ref{sim:fclt} shows some simulations of one-dimensional processes. \begin{remark} \label{rem:wcequivalence} Part (b) of Theorem~\ref{thm:Donsker-dd} is stated for the space $(\mathcal{D}_0^d,\rho_S)$, but weak convergence on $(\mathcal{D}_0^d,\rho_S)$ is equivalent to weak convergence on $(\mathcal{D}_0^d,\rho_S^\circ)$. To see this, recall the definition of weak convergence from Definition~\ref{def:wc}, and note that by Proposition~\ref{prop:equivmetrics} a function continuous with respect to one metric is continuous with respect to the other. Thus the set of bounded continuous functions is the same for both metrics, and so the two notions of weak convergence coincide. \end{remark} We discuss the proof of Theorem~\ref{thm:Donsker-dd} later in this section, but first we demonstrate the power of Donsker's theorem with the mapping theorem by returning to our example of the maximum functional in $d=1$ as defined in Section~\ref{sec:max}, followed by a further example --- a generalisation of the arcsine law. \begin{figure} \centering \includegraphics[scale=0.4]{YnCS} \caption{Simulation of a sample path of $Y_n(t)$ in the case where $d=1$, for $n=100$ in blue and $n=10000$ in red.}\label{sim:fclt} \end{figure} \begin{theorem} \label{thm:max-dist} Suppose that we have a random walk as defined at \eqref{ass:walk} with $d=1$, $\mu =0$, and satisfying \eqref{ass:Sigma}. Then as $n \to \infty$, \[ \frac{1}{\sqrt{n}} \max_{0 \leq k\leq n} S_k \overset{\textup{d}}{\longrightarrow} \sigma \sup_{t \in [0,1]} b(t). \] \end{theorem} \begin{proof} In the case $d=1$, $\Sigma$ is the scalar $\sigma^2$.
Donsker's theorem, Theorem~\ref{thm:Donsker-dd}, together with the mapping theorem, Corollary~\ref{cor:weak-mapping}, and continuity of the function $M : (\mathcal{D}, \rho_S ) \to (\mathbb{R}_+, \rho_E)$, Theorem~\ref{thm:maximum}, shows that $$ M ( Y_n') = \sup_{t \in [0,1]} Y'_n(t) \overset{\textup{d}}{\longrightarrow} M ( \sigma b ) = \sigma \sup_{t \in [0,1]} b(t).$$ But we have that $\sup_{t\in[0,1]}Y'_n(t) = n^{-1/2} \max \{S_0,S_1,\ldots,S_n\}$, completing the proof. \end{proof} The distribution of $\sup_{t \in [0,1]} b(t)$ can be determined by the reflection principle for Brownian motion, and so Theorem~\ref{thm:max-dist} gives us the limiting distribution of $\max_{1 \leq k \leq n} S_k$: see~\cite[pp.~91--93]{cpm}. We now turn to our second example. The classical arcsine law, first established for the simple symmetric random walk, states the following \cite[p.~82]{feller1}. \begin{theorem} If $0 < \gamma < 1$, the probability that an $n$-step simple symmetric random walk spends less than $\gamma n$ time on the positive side tends to $2\pi^{-1} \arcsin \sqrt{\gamma}$ as $n\rightarrow \infty$. \end{theorem} We wish to extend this to higher dimensions, which requires a generalisation of the functional itself. In \cite{bingham} the functional which generalises \lq time on the positive side\rq~to \lq time in the positive quadrant\rq~is considered and shown not to follow an arcsine distribution by a comparison of moments. The generalisation that we will consider is $\pi_n(A)$, defined to be the proportion of time the normalised walk spends in a given subset of the sphere. Formally, recall that $\hat{x}:=x/\|x\|$ for $x \neq 0$ and $\hat{0}:=0$; then, for a measurable set $A \subseteq \mathbb{S}^{d-1}$, \[ \pi_n(A):= \frac{1}{n} \sum_{i=1}^n \1{ \hat S_i \in A } .\] \begin{theorem}\label{thm:arcsineconv} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu=0$, and satisfying \eqref{ass:Sigma}.
Let $\hat{b}^\Sigma_d(t):=\Sigma^{1/2} b_d(t)/\|\Sigma^{1/2} b_d(t)\|$, the $d$-dimensional Brownian motion projected onto the sphere, and let $A \subseteq \mathbb{S}^{d-1}$ satisfy $\mu_{d-1}(\partial A)=0$, where $\mu_{d-1}$ here denotes Haar measure on $\mathbb{S}^{d-1}$. Then as $n\rightarrow \infty$, \[ \pi_n(A) \overset{\textup{d}}{\longrightarrow} \int_0^1 \1{\hat{b}_d^\Sigma(t)\in A} \textup{d} t.\] \end{theorem} As with all our examples, we must prove the continuity of the relevant functional in order to complete the proof. First, for measurable $A \subseteq \mathbb{S}^{d-1}$ and $f \in \mathcal{D}^d$, define \[ \varpi_A (f) := \int_0^1 \mathds{1} \left\{ \widehat{ f(t) } \in A\right\} \textup{d} t .\] Note that $| \pi_n(A) - \varpi_A (Y_n') | \leq 1/n$, since the two quantities differ only in the contribution of a single endpoint term, which does not affect distributional limits. \begin{lemma} \label{lem:occupation_continuity} Fix a measurable $A \subseteq \mathbb{S}^{d-1}$. Then, as a function from $(\mathcal{D}^d,\rho_S)$ to $([0,1],\rho_E)$, $f \mapsto \varpi_A (f)$ is continuous on the set \[ F_A := \left\{ f \in \mathcal{D}^d : \int_0^1 \mathds{1} \left\{ \widehat{ f(t) } \in \{ 0 \} \cup \partial A \right\} \textup{d} t = 0 \right\}. \] \end{lemma} \begin{proof} Since $\rho_S$ and $\rho_S^\circ$ are equivalent (see Proposition~\ref{prop:equivmetrics}), it suffices to work with the latter. For $f \in \mathcal{D}^d$ define, for all measurable $B \subseteq \mathbb{R}^d$, \[ \nu_f (B) := \int_0^1 \1{ f(t) \in B} \textup{d} t .\] Note that $\nu_f$ is a finite measure on $\mathbb{R}^d$. Now let $\tilde A = \{ r x : x \in A, \, r > 0\}$; then $\varpi_A (f) = \nu_f (\tilde A)$.
Take $f, g \in \mathcal{D}^d$ and suppose, without loss of generality, that $\nu_f (\tilde A) \geq \nu_g (\tilde A)$. Let $\tilde A^\varepsilon = \{ x \in \mathbb{R}^d : \rho (x, \tilde A ) \leq \varepsilon \}$ and let $\tilde A_\varepsilon = \{ x \in \mathbb{R}^d : \rho (x, \mathbb{R}^d \setminus \tilde A ) \geq \varepsilon\}$; then $\tilde A_\varepsilon \subseteq \tilde A \subseteq \tilde A^\varepsilon$ and \begin{align*} | \varpi_A (f) - \varpi_A (g) | = \nu_f(\tilde A)- \nu_g (\tilde A) = \nu_f ( \tilde A_\varepsilon ) - \nu_g ( \tilde A^\varepsilon ) + \nu_f ( \tilde A \setminus \tilde A_\varepsilon) + \nu_g (\tilde A^\varepsilon \setminus \tilde A) .\end{align*} If $f, g \in F_A$ then, since $x \in \partial \tilde A$ implies $\hat x \in \{ 0 \} \cup \partial A$, we have $\nu_f ( \partial \tilde A ) = \nu_g ( \partial \tilde A ) = 0$, and so, as $\varepsilon \to 0$, by continuity of measures along monotone limits, $\nu_f ( \tilde A \setminus \tilde A_\varepsilon) \to \nu_f ( \tilde A \setminus \inte \tilde A ) \leq \nu_f ( \partial \tilde A ) =0 $, and $\nu_g (\tilde A^\varepsilon \setminus \tilde A) \to \nu_g ( \overline{\tilde A} \setminus \tilde A ) \leq \nu_g ( \partial \tilde A) = 0$.
Moreover, with the change of variable $t = \lambda(s)$ in the $\nu_g$-integral, \begin{align*} \nu_f ( \tilde A_\varepsilon ) - \nu_g ( \tilde A^\varepsilon ) & = \int_0^1 \1{ f(t) \in \tilde A_\varepsilon } \textup{d} t - \int_0^1 \1{ g ( t) \in \tilde A^\varepsilon} \textup{d} t \\ & = \int_0^1 \1{ f(t) \in \tilde A_\varepsilon} \textup{d} t - \int_0^1 \lambda' (s) \1{ g ( \lambda (s) ) \in \tilde A^\varepsilon} \textup{d} s\\ & \leq \| \lambda' - 1 \|_\infty + \int_0^1 \1{ f(t) \in \tilde A_\varepsilon, \, g(\lambda(t)) \notin \tilde A^\varepsilon} \textup{d} t .\end{align*} Here we have that \[ \int_0^1 \mathds{1} \left\{ f(t) \in \tilde A_\varepsilon, \, g(\lambda(t)) \notin \tilde A^\varepsilon \right\} \textup{d} t \leq \mathds{1} \left\{\| f - g \circ \lambda \|_\infty \geq 2 \varepsilon \right\} ,\] since $f(t) \in \tilde A_\varepsilon$ and $g(\lambda(t)) \notin \tilde A^\varepsilon$ together force $\| f(t) - g(\lambda(t)) \| \geq 2\varepsilon$. In particular, given $f \in F_A$ and $\varepsilon >0$, we can choose $\delta$ sufficiently small so that any $g$ with $\rho^\circ_S (f,g) < \delta$ has a $\lambda$ for which, by Lemma~\ref{lem:estlam}, $\| \lambda' - 1 \|_\infty \leq c(\lambda) < \varepsilon$ and $\| f -g \circ \lambda \|_\infty < \varepsilon$. Hence, since $\varepsilon>0$ was arbitrary, $| \varpi_A (f) - \varpi_A (g) | \to 0$ as $\rho^\circ_S (f,g) \to 0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:arcsineconv}] Fix $A \subseteq \mathbb{S}^{d-1}$ with $\mu_{d-1} (\partial A ) = 0$. By Donsker's theorem, Theorem~\ref{thm:Donsker-dd}(b), $Y_n' \Rightarrow \Sigma^{1/2} b_d$, and since $F_A$ has measure $1$ under the law of $\Sigma^{1/2} b_d$, the mapping theorem, Corollary~\ref{cor:weak-mapping}, with Lemma~\ref{lem:occupation_continuity} shows that $\varpi_A ( Y_n') \overset{\textup{d}}{\longrightarrow} \varpi_A ( \Sigma^{1/2} b_d )$; since $\pi_n(A)$ differs from $\varpi_A(Y_n')$ by at most $1/n$, it has the same limit. \end{proof} In particular, we can use this result to show that the proportion of time spent in any non-trivial set has no almost-sure limit.
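Before turning to that corollary, here is a quick illustration of Theorem~\ref{thm:arcsineconv} in dimension one. Take $A = \{+1\} \subseteq \mathbb{S}^0$, so that $\pi_n(A)$ is the proportion of time the walk is strictly positive; the limit is then the arcsine law $\Pr ( L \leq x ) = \frac{2}{\pi} \arcsin \sqrt{x}$, which assigns probability $1/3$ to each of $[0,1/4]$ and $[3/4,1]$, so roughly two thirds of the samples below should land in the extreme quarters. The code is our own sketch, with illustrative parameter values.

```python
import random

def pi_n(S):
    """pi_n(A) for A = {+1} in S^0: proportion of times 1..n with S_i > 0.
    (Times with S_i = 0 count for neither A nor A^c, since hat(0) = 0.)"""
    n = len(S) - 1
    return sum(1 for s in S[1:] if s > 0) / n

def walk(n, rng):
    """Simple symmetric random walk S_0, ..., S_n on the integers."""
    S = [0]
    for _ in range(n):
        S.append(S[-1] + rng.choice([-1, 1]))
    return S

rng = random.Random(7)
samples = [pi_n(walk(2000, rng)) for _ in range(500)]
# Arcsine law: P(L <= 1/4) = (2/pi) arcsin(1/2) = 1/3, and by symmetry
# P(L >= 3/4) = 1/3, so about 2/3 of the samples should be extreme.
extreme = sum(1 for p in samples if p <= 0.25 or p >= 0.75) / len(samples)
print(extreme)
```

The heavy mass near 0 and 1 is exactly the phenomenon exploited in the corollary that follows: individual trajectories spend almost all of their time on one side infinitely often.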
\begin{corollary} For any set $A \subseteq \mathbb{S}^{d-1}$ with $0<\mu_{d-1}(A)<\mu_{d-1}(\mathbb{S}^{d-1})$ and $\mu_{d-1}(\partial A)=0$, \begin{equation*} \liminf_{n \rightarrow \infty} \pi_n(A) = 0 ~\text{a.s.} \qquad \text{and} \qquad \limsup_{n \rightarrow \infty} \pi_n(A) = 1 ~\text{a.s.} \end{equation*} \end{corollary} \begin{proof} We will use the Hewitt-Savage zero-one law~\cite[p.~180]{durrett}. In order to do so, we need to show that $\limsup_{n\rightarrow \infty} \pi_n(A)$ and $\liminf_{n\rightarrow \infty} \pi_n(A)$ are exchangeable random variables. For this, note \begin{align*} \limsup_{n\rightarrow \infty} \pi_n(A) &= \limsup_{n\rightarrow \infty} \left( \frac{1}{n} \sum_{i=1}^k \1{\hat{S}_i \in A} + \frac{1}{n} \sum_{i=k+1}^n \1{\hat{S}_i \in A} \right)\\ & = \limsup_{n\rightarrow \infty} \frac{1}{n} \sum_{i=k+1}^n \1{\hat{S}_i \in A} ~\text{a.s.} \end{align*} which does not depend on the order of the first $k$ increments; since $k$ was arbitrary, $\limsup_{n\rightarrow \infty} \pi_n(A)$ is exchangeable. The same argument applies to the $\liminf$. Thus, it suffices to show that $\Pr(\limsup_{n\rightarrow \infty} \pi_n(A) \geq 1-\varepsilon) >0$ for any $\varepsilon >0$, and $\Pr(\liminf_{n\rightarrow \infty} \pi_n(A) \leq \varepsilon) >0$ for any $\varepsilon >0$. For the former, note \begin{align} \Pr(\limsup_{n\rightarrow \infty} \pi_n(A) \geq 1-\varepsilon) &\geq \Pr(\pi_n(A)>1-\varepsilon {\rm\ i.o.}) \nonumber \\ &\geq \Pr(\cap_{n=1}^\infty \cup_{m\geq n} \{ \pi_m(A) > 1-\varepsilon \})\nonumber\\ &= \lim_{n \rightarrow \infty} \Pr(\cup_{m\geq n} \{ \pi_m(A) > 1-\varepsilon \})\nonumber\\ &\geq \lim_{n\rightarrow \infty} \Pr(\pi_n(A) > 1-\varepsilon).
\label{eqn:arcsine1} \end{align} Then Theorem~\ref{thm:arcsineconv} states that $\pi_n(A) \overset{\textup{d}}{\longrightarrow} \int_0^1 \1{\hat{b}_d^\Sigma(t) \in A} \textup{d} t$ so for all but countably many $\varepsilon>0$, \[\lim_{n\rightarrow \infty} \Pr(\pi_n(A) > 1-\varepsilon) = \Pr\left( \int_0^1 \1{\hat{b}_d^\Sigma(t) \in A} \textup{d} t > 1-\varepsilon \right).\] Now, recall $\inte A:=A \setminus \partial A$ is the interior of $A$, which is an open set, see for example \cite[pp.~44--46]{kelley}. By the assumptions $\mu_{d-1}(A)>0$ and $\mu_{d-1}(\partial A)=0$ it follows that $\mu_{d-1}(\inte A)>0$ and so the interior is non-empty. Since the interior is an open, non-empty subset of $A$, for all sufficiently small $\varepsilon>0$ there exists a ball of radius $\varepsilon$, call it $A_\varepsilon$, such that $A_\varepsilon \subseteq A$. Then it is easy to see that there is positive probability that $\hat{b}_d^\Sigma (t)$ stays in $A_\varepsilon$ for all $t\in [\varepsilon,1]$ for any such $\varepsilon>0$ (allowing the path to move away from $0$). Thus, combining this with \eqref{eqn:arcsine1}, we have \[\Pr\left(\limsup_{n\rightarrow \infty} \pi_n(A) \geq 1-\varepsilon\right) \geq \lim_{n\rightarrow \infty}\Pr(\pi_n(A) > 1-\varepsilon) = \Pr\left( \int_0^1 \1{\hat{b}_d^\Sigma(t) \in A} \textup{d} t > 1-\varepsilon \right)>0\] for all sufficiently small $\varepsilon>0$, and hence, by monotonicity in $\varepsilon$, for all $\varepsilon>0$; the Hewitt-Savage zero-one law then gives us $\limsup_{n\rightarrow \infty}\pi_n(A) = 1 ~\text{a.s.}$ as required. Finally, note that $\pi_n(A^c)\leq 1-\pi_n(A)$ (the inequality is due to possible visits to $0$) and since $\mu_{d-1}(A)+\mu_{d-1}(A^c) = \mu_{d-1}(\mathbb{S}^{d-1})$, we get $0 < \mu_{d-1}(A^c)<\mu_{d-1}(\mathbb{S}^{d-1})$. Also, $\partial A = \partial A^c$, see for example \cite[p.~46]{kelley}, so $\mu_{d-1}(\partial A^c)=\mu_{d-1}(\partial A)=0$.
Thus, the conditions of the previous calculation are in fact satisfied for $A^c$, so $\liminf_{n\rightarrow \infty}\pi_n(A) \leq 1- \limsup_{n\rightarrow \infty}\pi_n(A^c) = 1-1 = 0 ~\text{a.s.}$ which completes the proof. \end{proof} \subsection{Proof overviews and a motivating example} In order to prove both the mapping theorem and Donsker's theorem, we will need to delve further into weak convergence theory. First, in Section \ref{sec:weak-theory} we will present different characterisations of weak convergence and note Slutsky's theorem in this context, all of which will be necessary for the proofs. Then we will present the proof of the mapping theorem. For the proof of Donsker's theorem, one might hope that it suffices to take some finite number of points on the trajectory and check that the joint distribution of their locations converges to the corresponding distribution for such points on a Brownian path. We now demonstrate why this is not sufficient. First, for $t \in [0,1]$ and $f \in \mathcal{M}^d$, recall the \emph{projection} $\pi_t : \mathcal{M}^d \to \mathbb{R}^d$ is defined by $\pi_t f := f (t)$. More generally, for $k \in \mathbb{N}$ and $t_1, t_2, \ldots, t_k \in [0,1]$, we define $\pi_{t_1, t_2, \ldots, t_k} : \mathcal{M}^d \to (\mathbb{R}^d )^k$ by \[ \pi_{t_1, \ldots, t_k} f := ( f(t_1 ), \ldots, f(t_k) ). \] We say the finite-dimensional distributions converge if we have the following. \begin{description} \item [\namedlabel{ass:finiteDimDist}{\textbf{FDD}}] \begin{itemize} \item[(i)] If $X, X_1, X_2, \ldots$ is a sequence in $\mathcal{C}^d$ then, for all $k \in \mathbb{N}$ and all $t_1, t_2, \ldots, t_k \in [0,1]$, \[\pi_{t_1, t_2, \ldots, t_k} X_n = \left(X_n(t_1), X_n(t_2), \ldots , X_n(t_k)\right) \Rightarrow \left( X(t_1), X(t_2), \ldots, X(t_k) \right) = \pi_{t_1, t_2, \ldots, t_k} X,\] where the convergence is on $(\mathbb{R}^d)^k$.
\item[(ii)] If $X, X_1, X_2, \ldots$ is a sequence in $\mathcal{D}^d$ then, \[\pi_{t_1, t_2, \ldots, t_k} X_n = \left(X_n(t_1), X_n(t_2), \ldots , X_n(t_k)\right) \Rightarrow \left( X(t_1), X(t_2), \ldots, X(t_k) \right) = \pi_{t_1, t_2, \ldots, t_k} X, \] where the convergence is on $(\mathbb{R}^d)^k$ and holds for all $(t_1, t_2, \ldots, t_k)$ such that each $\pi_{t_i}$ is continuous at $X$ with probability $1$. \end{itemize} Note that, in both cases, the weak convergence on $(\mathbb{R}^d)^k$ is convergence in distribution. \end{description} Noting that $\|\pi_t f - \pi_t g\| \leq \|f-g\|_{\infty}$ and so $\|\pi_{t_1,t_2,\ldots,t_k} f -\pi_{t_1,t_2,\ldots,t_k} g\| \leq \sqrt{k}\|f-g\|_{\infty}$, it follows that each projection is a continuous function from $(\mathcal{C}^d,\rho_\infty)$ to $((\mathbb{R}^d)^k,\rho_E)$, hence it is a direct consequence of the mapping theorem, Theorem \ref{thm:mapweak}, that, if $X_n \Rightarrow X$ on $\mathcal{C}^d$, then the finite-dimensional distributions also converge. Unfortunately, the converse is not necessarily true; there exist sequences of probability measures whose finite-dimensional distributions converge weakly, though the measures themselves do not. \begin{example}\label{ex:FDD} Consider the following functions, with examples $z_3$ plotted in blue, $z_4$ plotted in light-green and $z_{10}$ plotted in red: \begin{minipage}{0.4\textwidth} \begin{align*} z_n(t)&=\begin{cases} nt & \mathrm{for\ }t\in[0,1/n);\\ 2-nt & \mathrm{for\ }t\in[1/n,2/n);\\ 0 & \mathrm{for\ }t \geq 2/n .
\end{cases}\\ \end{align*} \end{minipage} \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.28]{zn1.pdf} \end{minipage} If we set $P_n = \delta_{z_n}$, the point mass at the function $z_n$, and $P= \delta_0$, then, once $n$ is large enough that $2 n^{-1} \leq t_i$ for each nonzero $t_i$ (the case $t_i = 0$ being trivial since $z_n(0)=0$), $\pi_{t_1, \dots, t_k}z_n = (0, \dots, 0) = \pi_{t_1, \dots, t_k} 0$, so weak convergence of finite-dimensional distributions holds; but, since $\rho_{\infty}(z_n,0)=1$ for all $n$, we have $z_n \nrightarrow 0$, and indeed the bounded, continuous function $f(z):= \max ( 0, 1- \rho_\infty (z,0) )$ satisfies $\int f \textup{d} P_n = f(z_n) = 0$ for all $n$ while $\int f \textup{d} P = f(0) = 1$, so $P_n \nRightarrow P$; we do not have weak convergence. \end{example} Based on this example, it is clear that we need a further condition on the family $\{P_n\}$. For trajectories in $\mathcal{C}_0^d$ it happens that such a sufficient condition is relative compactness, but it is hard to prove directly that a family of measures is relatively compact. However, Prokhorov's theorem tells us that tightness implies relative compactness, so we can work with tightness. Finally, we will use a couple of probability bounds on the running maximum of the trajectory to prove the tightness in $\mathcal{C}_0^d$. We complete the proof by showing the finite-dimensional distributions do in fact converge in this case. Of course, the results for continuous trajectories are only enough to prove part (a) of Theorem \ref{thm:Donsker-dd}. For part (b), we will show that tightness is still a sufficient condition to ensure the finite-dimensional limit is in fact the weak limit of the family of measures. Then we note some relevant changes to the conditions for tightness which we will prove are satisfied in $\mathcal{D}_0^d$. Finally, we show the finite-dimensional distribution convergence, enabling us to conclude the weak convergence statement of Theorem~\ref{thm:Donsker-dd} part (b). \begin{remark} \label{rem:donsker} It suffices to prove Donsker's theorem for the case $\Sigma=I_d$. To see this, consider the walk defined at \eqref{ass:walk} with $\mu =0$, for which \eqref{ass:Sigma} holds with some arbitrary $\Sigma$.
Assuming $\sigma^2>0$, if any of the eigenvalues of $\Sigma$ are zero, then the walk is not truly $d$-dimensional and can be mapped to a walk with smaller dimension such that the covariance matrix for this walk is positive definite; see for example \cite[p.~4]{lawler}. Any results in this case would then relate to weak convergence to the Brownian path in the lower-dimensional subspace, and such statements can be mapped back to the original space. Hence, it suffices to assume $\Sigma$ is positive definite. Indeed, if $\Sigma$ is positive definite, then the (unique) symmetric square-root $\Sigma^{1/2}$ is also positive definite, and $\Sigma^{1/2}$ has inverse $\Sigma^{-1/2}$. Then set $\zeta := \Sigma^{-1/2} \xi$, and let $\zeta_i= \Sigma^{-1/2}\xi_i$ for $i \in \mathbb{N}$. By linearity of expectation, $\Exp \zeta = \Sigma^{-1/2} \Exp \xi = 0$ and \[ \Exp [ \zeta \zeta^{\scalebox{0.6}{$\top$}} ] = \Exp [ \Sigma^{-1/2}\xi \xi^{\scalebox{0.6}{$\top$}} \Sigma^{-1/2} ] = \Sigma^{-1/2} \Exp [ \xi \xi^{\scalebox{0.6}{$\top$}} ] \Sigma^{-1/2} = I_d .\] Let $\tilde S_n := \sum_{i=1}^n \zeta_i$ be the random walk associated with $\zeta$. Then $\tilde S_n = \Sigma^{-1/2} S_n$, and $\tilde S_n$ satisfies \eqref{ass:walk} and \eqref{ass:Sigma} with $\mu =0$ and $\Sigma=I_d$. The analogue of $Y'_n$ for $\tilde S_n$ is \[ \tilde Y'_n (t) = n^{-1/2} \tilde S_{\lfloor nt \rfloor} = \Sigma^{-1/2} Y'_n (t) ,\] so $Y'_n = \Sigma^{1/2} \tilde Y'_n$. The case of Theorem~\ref{thm:Donsker-dd}(b) where $\Sigma = I_d$ yields $\tilde Y'_n \Rightarrow b_d$. Since $x \mapsto \Sigma^{1/2} x$ is continuous, the mapping theorem, Theorem~\ref{thm:mapweak}, shows that $ Y'_n = \Sigma^{1/2} \tilde Y'_n \Rightarrow \Sigma^{1/2} b_d$, which is the conclusion of Theorem~\ref{thm:Donsker-dd}(b) in the general case. A similar argument holds for Theorem~\ref{thm:Donsker-dd}(a).
Thus Donsker's theorem for general $\Sigma$ follows from the special case where $\Sigma = I_d$. \end{remark} \subsection{Some weak convergence theory} \label{sec:weak-theory} The Portmanteau theorem (see e.g.~\cite[Theorem~2.1]{cpm}) gives several different characterisations of weak convergence. We only state them in terms of probability measures, but throughout consider random variables $X_n$ to be endowed with the respective measure $P_n$, and hence statements like (ii) could be written as convergence of expectations of the respective random variables; this is notation we will use later. \begin{theorem}[Portmanteau theorem] \label{thm:port} Let $P, P_1, P_2, \ldots$ be probability measures on a metric measure space $(S,\mathcal{S},\rho)$. The following statements are equivalent. \begin{itemize} \item[(i)] $P_n \Rightarrow P$. \item[(ii)] $\int_{S} f \textup{d} P_n \rightarrow \int_{S} f \textup{d} P$ for all bounded, uniformly continuous $f$. \item[(iii)] $\limsup_{n\to\infty} P_n ( F ) \leq P ( F )$ for all closed sets $F$. \item[(iv)] $\liminf_{n\to\infty} P_n ( G ) \geq P ( G )$ for all open sets $G$. \item[(v)] $\lim_{n\to\infty} P_n ( A ) = P ( A )$ for all $A$ such that $P( \partial A ) = 0$. \end{itemize} \end{theorem} \begin{proof} First, note that (i) implies (ii) by definition. Next we show that (ii) implies (iii). Let $F$ be a closed set, let $\varepsilon>0$ and recall $\mathds{1}_A$ is the indicator function of the set $A$. Take $f$ defined by $f(x) = (1- \varepsilon^{-1} \rho(x,F))^+$, so $f(x) =1$ for $x \in F$ and $f(x) = 0$ for $x \notin F^\varepsilon$, which gives $\mathds{1}_F(x) \leq f(x) \leq \mathds{1}_{F^{\varepsilon}}(x)$. Thus $f$ is bounded. A simple calculation also shows $|f(x)-f(y)|\leq \varepsilon^{-1} \rho(x,y)$, so $f$ is also uniformly continuous.
Then, \[ \limsup_{n\to \infty} P_n ( F ) = \limsup_{n\to \infty} \int \mathds{1}_F \textup{d} P_n \leq \limsup_{n\to\infty} \int f \textup{d} P_n .\] So by (ii) we get \[ \limsup_{n\to \infty} P_n ( F ) \leq \int f \textup{d} P \leq \int \mathds{1}_{F^{\varepsilon}} \textup{d} P = P ( F^{\varepsilon} ). \] Take $\varepsilon = 1/k$. Since $F$ is closed, $F = \cap_{k \in \mathbb{N}} F^{1/k}$. Then continuity along monotone limits shows that $P( F^{1/k} ) \downarrow P(F)$ as $k \to \infty$, and we obtain (iii). Next, observe that (iii) is equivalent to (iv) by complementation. We next show that (iii) and (iv) together imply (v). Indeed, \begin{align*} P (\cl A) &\geq \limsup_{n\to\infty} P_n ( \cl A ) \geq \limsup_{n \to \infty} P_n (A) \\ & \geq \liminf_{n \to \infty} P_n ( A ) \geq \liminf_{n \to \infty} P_n ( \inte A ) \geq P ( \inte A ). \end{align*} If $P ( \partial A ) = 0$, then the extreme terms have the same value, and we obtain (v). Finally, we show that (v) implies (i). Take $f$ bounded and continuous; assume without loss of generality that $0 < f(x) < 1$ for all $x$. Let $t \geq 0$. Note that $\{ x \in S : f(x) > t \}^\mathrm{c} = \{ x \in S : f(x) \leq t \}$, and, since $f$ is continuous, $\cl \{ x \in S : f(x) > t \} \subseteq \{ x \in S : f(x) \geq t \}$. Hence \[ \partial \{ x \in S : f(x) > t \} \subseteq \{ x \in S : f(x) = t \}. \] Here we have that $P ( \{ x \in S : f(x) = t \} ) =0$ except for countably many $t$. To see this, consider $\{t:P(\{ x \in S : f(x)=t\})\in (1/(n+1),1/n]\}$ for each $n \in \mathbb{N}$. Since the sets $\{ x \in S : f(x)=t\}$ are disjoint for distinct $t$, each of these sets of $t$ can contain at most $n+1$ elements, else the total probability would exceed $1$. A countable union of finite sets is countable, so there are only countably many such $t$.
Using the short-hand $\{ f > t \} = \{ x \in S : f(x) > t \}$, we have by Fubini's theorem that \[ \int_S f \textup{d} P_n = \int_S \int_0^1 \mathds{1}_{\{ f > t \}} \textup{d} t \textup{d} P_n = \int_0^1 \int_S \mathds{1}_{\{ f > t \}} \textup{d} P_n \textup{d} t = \int_0^1 P_n ( \{ f > t \} ) \textup{d} t,\] and then by (v) and the bounded convergence theorem, we obtain \begin{align*} \int_S f \textup{d} P_n = \int_0^{1} P_n ( \{ f > t \} ) \textup{d} t \to \int_0^1 P ( \{ f > t \} ) \textup{d} t = \int_S f \textup{d} P , \end{align*} which completes the proof. \end{proof} Another useful consequence of the Portmanteau theorem is the following characterisation of weak convergence~\cite[Theorem~2.6]{cpm}, which we do state in terms of random variables. \begin{theorem} \label{thm:subsequence} $X_n \Rightarrow X$ if and only if every subsequence $\{X_{n_i}\}$ contains a further subsequence converging weakly to $X$. \end{theorem} \begin{proof} The \lq only if\rq~part is easy: if $X_n \Rightarrow X$, then for any bounded, continuous $f$ we have $\Exp f(X_n) = \int f \textup{d} P_n \to \int f \textup{d} P = \Exp f(X)$, and since every subsequence of a convergent sequence of real numbers converges to the same limit, $\Exp f(X_{n_i}) = \int f \textup{d} P_{n_i} \to \int f \textup{d} P = \Exp f(X)$ along any subsequence, i.e., $X_{n_i} \Rightarrow X$. For the \lq if\rq~part, we prove the contrapositive. Suppose that $X_n \nRightarrow X$; then $\Exp f(X_n) = \int f \textup{d} P_n \not\to \int f \textup{d} P = \Exp f(X)$ for some bounded, continuous $f$. We then have that for some subsequence $n_i$ of $\mathbb{N}$ and some $\varepsilon > 0$, $|\Exp f(X_{n_i})-\Exp f(X)| = | \int f \textup{d} P_{n_i} - \int f \textup{d} P | > \varepsilon$ for all $i$, so that no further subsequence of $X_{n_i}$ can converge weakly to $X$. \end{proof} Here we also note Slutsky's result (see e.g.~\cite[Theorem~3.1]{cpm}), stated in the context of weak convergence.
\begin{theorem} \label{thm:slutsky} Suppose that $X, X_1, Y_1, X_2, Y_2, \ldots$ are random variables on a probability space $(\Omega, \mathcal{F}, \Pr)$ taking values in a metric measure space $(S, \mathcal{S}, \rho)$. If $X_n \Rightarrow X$ and $\rho (X_n, Y_n) \overset{\textup{p}}{\longrightarrow} 0$, then $Y_n \Rightarrow X$. \end{theorem} \subsection{Proof of the mapping theorem} The Portmanteau theorem is enough for us to prove the mapping theorem. \begin{proof}[Proof of Theorem~\ref{thm:mapweak}.] Given that $P_n \Rightarrow P$, it follows that for any $F \in \mathcal{S}'$, \begin{equation}\label{eqn:map2} \limsup_{n\to\infty} P_n \left( h^{-1}F\right) \leq \limsup_{n\to\infty} P_n \left( \cl ( h^{-1} F ) \right) \leq P \left( \cl ( h^{-1} F ) \right) , \end{equation} by the equivalence of parts (i) and (iii) of the Portmanteau theorem, Theorem~\ref{thm:port}. Also, let $F \in \mathcal{S}'$ be closed; then, since $h$ is measurable, $h^{-1} F \in \mathcal{S}$. If $x \in \cl ( h^{-1} F )$, then there exist $x_n \in h^{-1}F$ such that $\rho(x_n,x)\to 0$. Since $h ( x_n) \in F$, we have $h(x_n) \to h ( x ) \in \cl F = F$ if $h$ is continuous at $x$. We therefore have \begin{equation}\label{eqn:map1} D_h^\mathrm{c} \cap \cl (h^{-1}F ) \subseteq h^{-1} F. \end{equation} Combining \eqref{eqn:map1} and \eqref{eqn:map2} gives \begin{align*}\limsup_{n\to\infty} P_n \left( h^{-1}F\right) \leq P(\cl (h^{-1} F)) = P \left(D^\mathrm{c}_h \cap \cl ( h^{-1} F ) \right) \leq P( h^{-1} F ), \end{align*} since $P ( D_h^\mathrm{c} ) = 1$. This holds for all closed $F$; thus another application of parts (i) and (iii) of the Portmanteau theorem yields weak convergence of $P_n h^{-1}$ to $P h^{-1}$.
\end{proof} \subsection{Weak convergence conditions for continuous trajectories} \label{sec:donsker-proof} In order to show weak convergence in the case of $\mathcal{C}^d$ we need to show that a collection of probability measures on $\mathcal{C}^d$ is \emph{relatively compact}, for which we have the following definition, stated for an arbitrary measure space. \begin{description} \item [\namedlabel{ass:relCompact}{\textbf{RC}}] A collection of probability measures $\Pi$ on $(S, \mathcal{S})$ is called \emph{relatively compact} if for every sequence $P_n$ of elements of $\Pi$, there exists a weakly convergent subsequence $P_{n_m}$. \end{description} We say that \eqref{ass:relCompact} holds for random variables $X_1,X_2,\ldots$ if \eqref{ass:relCompact} holds for probability measures $P_1,P_2,\ldots$ and the random variables and probability measures are associated as described at \eqref{eq:measure-rv}. Considering Theorem \ref{thm:subsequence}, it seems that the two concepts of relative compactness and convergence of finite-dimensional distributions would be sufficient to determine weak convergence. The following result confirms that this is in fact the case. We state the result for random variables; the result for probability measures can be found as Example 5.1 of \cite{cpm}. \begin{theorem}\label{thm:rc} For elements $X,X_1,X_2,\ldots$ of $\mathcal{C}^d$, if \eqref{ass:finiteDimDist} and \eqref{ass:relCompact} hold, then $X_n \Rightarrow X$. \end{theorem} \begin{proof} By \eqref{ass:relCompact} we have that any subsequence $X_{n_m}$ has a further subsequence $X_{n_{m_i}}$ such that $X_{n_{m_i}}\Rightarrow Y$ for some random variable $Y$, possibly depending on the subsequences chosen. Then the mapping theorem implies $\pi_{t_1, \dots, t_k} X_{n_{m_i}} \Rightarrow \pi_{t_1, \dots, t_k} Y$.
But by \eqref{ass:finiteDimDist}, we have $\pi_{t_1, \dots, t_k}X_{n_{m_i}} \Rightarrow \pi_{t_1, \dots, t_k}X$, so $\pi_{t_1, \dots, t_k} X$ has the same distribution as $\pi_{t_1, \dots, t_k} Y$. Since the class of finite-dimensional sets is a separating class for $\mathcal{C}^d$, see \cite[p.~12]{cpm}, this implies that $X$ and $Y$ have the same distribution, and since the subsequences were arbitrary, we have that all such subsequences contain a further subsequence which weakly converges to $X$. By the \lq if\rq~statement in Theorem \ref{thm:subsequence}, we complete the proof. \end{proof} It is difficult to prove relative compactness directly; however, a more convenient condition that we can work with and which implies relative compactness in certain spaces is \emph{tightness}. For a family of probability measures tightness is defined as follows. \begin{description} \item [\namedlabel{ass:tight}{\textbf{T}}] A family $\Pi$ of probability measures on a metric measure space $(S, \mathcal{S},\rho)$ is called \emph{tight} if for every $\varepsilon > 0$ there exists a compact $K \in \mathcal{S}$ such that for all $P \in \Pi$, $P ( K ) > 1 - \varepsilon$. \end{description} Again, we use the terminology in the natural way for random variables: a collection $(X_\alpha, \alpha \in I)$ of random variables on a probability space $(\Omega,\mathcal{F},\Pr)$ and taking values in a metric measure space $(S,\mathcal{S},\rho)$ is tight if the collection of probability measures $(P_\alpha, \alpha \in I)$, defined by $P_\alpha ( B) = \Pr ( X_\alpha \in B )$ for $B \in \mathcal{S}$, is tight. To formalise the statement that tightness implies relative compactness, we state the following theorem of Prokhorov \cite[Theorems~5.1~\&~5.2]{cpm}. \begin{theorem}[Prokhorov's theorem]\label{thm:Prok} \eqref{ass:tight} implies \eqref{ass:relCompact}. If $S$ is separable and complete, and $\Pi$ satisfies \eqref{ass:relCompact}, then $\Pi$ also satisfies \eqref{ass:tight}.
\end{theorem} \noindent Here we only need the implication \eqref{ass:tight} implies \eqref{ass:relCompact}; however, note that Theorem \ref{thm:Cdsep} tells us $\mathcal{C}^d$ is separable and complete, so we do indeed have that tightness and relative compactness are equivalent in this space. Instead of replicating the full proof of Billingsley here \cite[pp.~59--63]{cpm}, we only give an outline of the proof of the first statement; the proof of the second is brief, so we provide it here. \begin{proof} Using the tightness, one can construct a sequence of increasing compact sets which cover all but an arbitrarily small amount of the probability mass, uniformly over the probability measures $P_n$. Then a measure theory result states that we can use this sequence to construct a countable class of sets for which any element of an arbitrary open set $G$ must lie in one of these sets. Taking the $\sigma$-algebra of the compact sets and these countable sets, we get a countable class of compact sets which contain good approximating sets of the arbitrary open set $G$; we will call this class $\mathcal{H}$. Now, since the class was countable, a Cantor diagonal method allows us to be sure that there exists a subsequence $P_{n_i}$ for which $\lim_{i\rightarrow \infty} P_{n_i} H$ exists for all $H \in \mathcal{H}$. Then we will try to find a probability measure $P$ such that \[ P(G)= \sup_{H\subset G} \lim_{i\rightarrow \infty} P_{n_i} H.\] If this were true, then since the supremum is over $H\subset G$ we have $P(G) \leq \liminf_{i\rightarrow \infty} P_{n_i} G$, which is condition (iv) of the Portmanteau theorem, Theorem~\ref{thm:port}, so we have $P_{n_i} \Rightarrow P$ as desired. The proof that such a measure exists can be found in \cite[pp.~61--63]{cpm}; we move on to the reverse implication. Consider a non-decreasing sequence of open sets $G_n$ with $\lim_{n\rightarrow \infty} G_n = S$.
For each $\varepsilon>0$, there exists an $n$ for which $P(G_n)>1-\varepsilon$ for all $P \in \Pi$: otherwise, for each $n$ there would be some $P_n \in \Pi$ with $P_n(G_n) \leq 1-\varepsilon$; by relative compactness a subsequence of these would converge weakly to some limit $Q$, and condition (iv) of the Portmanteau theorem applied to each $G_n$ would give $Q(G_n) \leq 1-\varepsilon$ for every $n$, hence $Q(S) \leq 1-\varepsilon$, contradicting that $Q$ is a probability measure. Now consider a sequence of open balls $A_{k_1},A_{k_2},\ldots$ with radius $1/k$ which cover $S$ (such a cover exists by separability), and take $n_k$ such that $P(\cup_{i\leq n_k} A_{k_i}) > 1- 2^{-k}\varepsilon$ for all $P \in \Pi$, which we can do by the previous fact. Then the set \[K_0= \cap_{k\geq 1} \cup_{i\leq n_k} A_{k_i} \] is totally bounded, so by completeness of $S$ its closure $K$ is compact, and $P(K) \geq P(K_0) \geq 1-\varepsilon$ for all $P \in \Pi$; since $\varepsilon>0$ was arbitrary, tightness holds. \end{proof} \begin{corollary} \label{thm:tight} For elements $X,X_1,X_2,\ldots$ of $\mathcal{C}^d$, if \eqref{ass:finiteDimDist} and \eqref{ass:tight} hold, then $X_n \Rightarrow X$. \end{corollary} \subsection{Tightness conditions for continuous trajectories} Having proven that tightness is sufficient, we need a way to prove that tightness holds. In order to do this, we first need to state the Arzel\`{a}-Ascoli theorem in $d$ dimensions. The proof of \cite[Theorem~7.25]{rudin} is not dimension-dependent, so it carries across. Recall that a subset $A$ of a topological space is relatively compact if it has a compact closure. \begin{theorem}\label{thm:AAinCd} A set $A$ in $\mathcal{C}^d$ is relatively compact if and only if \[\sup_{f \in A}\|f(0)\|<\infty \quad \text{and} \quad \lim_{\delta\rightarrow 0} \sup_{f\in A} w_f(\delta)=0.\] \end{theorem} This allows us to generalise the conditions for tightness at \cite[Theorem~7.3]{cpm} to $d$ dimensions. \begin{lemma} \label{lem:tightC} Let $P_n$ be a sequence of probability measures on $\mathcal{C}^d$. Then \eqref{ass:tight} holds if and only if the following two conditions hold.
\begin{enumerate} \item[(i)] We have \begin{equation} \lim_{a\rightarrow \infty}\limsup_{n\to\infty} P_n ( \{ f :\| f(0) \| \geq a \} )=0.\label{eqn:tightCone} \end{equation} \item[(ii)] For each $\varepsilon>0$, \begin{equation} \lim_{\delta \downarrow 0 }\limsup_{n \to \infty} P_n \left( \{ f : w_f (\delta) \geq \varepsilon \} \right) = 0. \label{eqn:tightCtwo} \end{equation} \end{enumerate} \end{lemma} \begin{proof} For the \lq only if\rq~case, given some $\gamma>0$, consider a compact $K$ such that $P_n(K)>1-\gamma$ for all $n$; such a $K$ exists by the tightness. Since $K$ is compact, Theorem~\ref{thm:AAinCd} tells us that $\sup_{f\in K}\|f(0)\|<\infty$ so $K \subseteq \{f:\|f(0)\|\leq a\}$ for a large enough choice of $a$. Further, $\lim_{\delta\rightarrow 0} \sup_{f\in K} w_f(\delta)=0$ so for a small enough choice of $\delta$, $K \subseteq \{f: w_f(\delta)\leq \varepsilon\}$. These two facts imply \eqref{eqn:tightCone} and \eqref{eqn:tightCtwo} respectively. For the reverse implication, we start by recalling Theorem~\ref{thm:Cdsep} which says that $\mathcal{C}^d$ is separable and complete under $\rho_\infty$. Noting that a single measure clearly satisfies \eqref{ass:relCompact}, it follows from Prokhorov's theorem, Theorem~\ref{thm:Prok} that a single measure is tight. Then, using the \lq only if\rq~part of this lemma, for a fixed probability measure $P$, and a given $\gamma>0$ there is an $a$ such that $P(\{f:\|f(0)\|\geq a\})\leq \gamma$, and for a given $\varepsilon$ and $\gamma$ there is a $\delta$ such that $P(\{f: w_f(\delta)\geq\varepsilon\})\leq \gamma$. If we have \eqref{eqn:tightCone} and \eqref{eqn:tightCtwo}, then there exists a finite $n_0$ such that, for all $n> n_0$, \begin{equation}\label{eqn:prob1} P_n(\{f : \|f(0)\|\geq a\}) \leq \gamma, \end{equation} holds for some large enough $a$ and \begin{equation}\label{eqn:prob2}P_n(\{f:w_f(\delta)\geq \varepsilon\})\leq \gamma, \end{equation} holds for some small enough $\delta$. 
Then, for each of the finitely many measures $P_1,P_2,\ldots,P_{n_0}$ we have tightness, so \eqref{eqn:prob1} and \eqref{eqn:prob2} still hold for these measures, possibly requiring a larger choice of $a$ or smaller choice of $\delta$. Using this, we can assume there exists some $a$ and some $\delta$ for which \eqref{eqn:prob1} and \eqref{eqn:prob2} hold for all $n$. Using this assumption, given $\gamma$, we can choose $a$ and $\delta_k$ such that the sets $B=\{f:\|f(0)\|\leq a\}$ and $B_k=\{f:w_f(\delta_k)<1/k\}$ have probabilities $P_n(B)\geq 1-\gamma$ and $P_n(B_k) \geq 1-\gamma 2^{-k}$ for all $n$. Consider the set $K=\cl (B \cap (\cap_{k\geq 1} B_k ))$ which has $P_n(K)\geq 1-\gamma - \sum_{k \geq 1} \gamma 2^{-k} = 1-2\gamma$ for all $n$. This closed set satisfies both conditions of Theorem~\ref{thm:AAinCd}, so it is compact, hence the $\{P_n\}$ are tight. \end{proof} The next ingredient we need is a theorem bounding the modulus of continuity, which is the $d$-dimensional equivalent to \cite[Theorem~7.4]{cpm}. \begin{theorem} Suppose that $0=t_0<t_1<\ldots<t_k=1$ and $\min_{1<i<k}(t_i-t_{i-1})\geq \delta$. Then, for arbitrary $f\in \mathcal{C}^d$, \begin{equation} \label{eqn:modContBound} w_f(\delta)\leq 3 \max_{1\leq i \leq k} \sup_{t_{i-1}\leq s \leq t_i}\|f(s)-f(t_{i-1})\|, \end{equation} and, for any probability measure $P$ on $\mathcal{C}^d$, \begin{equation}\label{eqn:probModCont}P\{f: w_f(\delta)\geq 3 \varepsilon\} \leq \sum_{i=1}^k P \left\{ f: \sup_{t_{i-1}\leq s \leq t_i} \|f(s)-f(t_{i-1})\|\geq \varepsilon \right\}. \end{equation} \end{theorem} \begin{proof} Let $m$ be the maximum in \eqref{eqn:modContBound}. If $s$ and $t$ lie in the same interval $I_i = [t_{i-1},t_i]$, then $\|f(s)-f(t)\|\leq \|f(s)-f(t_{i-1})\|+\|f(t)-f(t_{i-1})\| \leq 2m$. If $s$ and $t$ lie in adjacent intervals $I_i$ and $I_{i+1}$, then $\|f(s)-f(t)\| \leq \|f(s)-f(t_{i-1})\|+\|f(t_{i-1})-f(t_i)\|+\|f(t_i)-f(t)\| \leq 3m$.
If $|s-t| \leq \delta$ then $s$ and $t$ must either lie in the same interval, or adjacent ones, which proves \eqref{eqn:modContBound}. The second statement follows by Boole's inequality. \end{proof} Next we present a lemma that gives a sufficient condition for tightness in $\mathcal{C}^d_0$. \begin{lemma} \label{lem:bill} Suppose that we have a random walk as defined at \eqref{ass:walk}, and define $Y_n$ as at~\eqref{eqn:trajectories}. Then a sufficient condition for $\{Y_n:n\in \mathbb{N}\}$ to be tight is \begin{align} \label{eqn:lemma} \lim_{\lambda \to \infty} \limsup_{n \to \infty} \lambda^2 \Pr \left( \max_{0 \leq j \leq n} \| S_j \| \geq \lambda \sqrt{n} \right) = 0. \end{align} \end{lemma} \begin{proof} We will show the two conditions in Lemma~\ref{lem:tightC} hold. The first, \eqref{eqn:tightCone}, clearly holds, since $Y_n(0)=0$. For the second condition, we use the bound in \eqref{eqn:probModCont}. In particular, we take $t_i = m_i/n$ for integers $m_i$ satisfying $0=m_0<m_1<\ldots<m_k=n$. Then the supremum in \eqref{eqn:probModCont} becomes a maximum of differences as follows, \[ \Pr\left( w_{Y_n}(\delta)\geq 3\varepsilon\right) \leq \sum_{i=1}^k \Pr \left( \max_{m_{i-1}\leq j\leq m_i} \frac{\|S_j-S_{m_{i-1}}\|}{\sqrt{n}}\geq \varepsilon \right) = \sum_{i=1}^k \Pr \left( \max_{0\leq j\leq m_i-m_{i-1}} \|S_j\| \geq \varepsilon \sqrt{n}\right),\] where the equality is due to the identical distribution of the increments. For this to hold, of course we need the choice of $m_i$ to satisfy the condition $\min_{1<i<k} (m_i - m_{i-1})n^{-1} \geq \delta$. We can further simplify this choice by taking $m_i=im$ for each $i<k$ and some $m>1$. In order to satisfy the criterion we take $m = \lceil n\delta \rceil$. By this choice, we naturally fix $k=\lceil n/m \rceil$, with $m_k=n$. Note that this means, for large enough $n$, $|k- \delta^{-1}| \leq 1$, so for large enough $n$ and $\delta<1$, we have $k < 2\delta^{-1}$. 
Also, for large enough $n$, $|n/m- \delta^{-1}|<1$ so for large enough $n$ and $\delta<1/2$, we have $n>m/(2\delta)$. Using these inequalities, we have, for large enough $n$ and small enough $\delta$, \[ \Pr\left( w_{Y_n}(\delta)\geq 3\varepsilon\right) \leq \sum_{i=1}^k \Pr \left( \max_{0\leq j\leq m_i-m_{i-1}} \|S_j\| \geq \varepsilon \sqrt{n}\right) \leq \frac{2}{\delta} \Pr\left( \max_{0\leq j\leq m}\|S_j\| \geq \frac{\varepsilon\sqrt{m}}{\sqrt{2\delta}}\right). \] If we now take $\lambda= \varepsilon/\sqrt{2\delta}$, so that $2/\delta = 4\lambda^2/\varepsilon^2$, we get \[\limsup_{n\rightarrow \infty}\Pr\left( w_{Y_n}(\delta)\geq 3\varepsilon\right) \leq \frac{4\lambda^2}{\varepsilon^2} \limsup_{m\rightarrow \infty}\Pr\left( \max_{0\leq j\leq m}\|S_j\| \geq \lambda\sqrt{m}\right). \] Now, under the suggested condition \eqref{eqn:lemma}, for a fixed $\varepsilon$ and any $\gamma>0$, there exists a $\lambda$ such that \[ \frac{4\lambda^2}{\varepsilon^2} \limsup_{m\rightarrow \infty}\Pr\left( \max_{0\leq j\leq m}\|S_j\| \geq \lambda\sqrt{m}\right) < \gamma. \] Fixing $\varepsilon$ and taking $\lambda$ large enough corresponds to taking $\delta = \varepsilon^2/(2\lambda^2)$ small enough. The second condition in Lemma~\ref{lem:tightC} follows, and the proof is complete. \end{proof} We state the following estimate separately because it will be useful in the proof of both parts of Theorem~\ref{thm:Donsker-dd}, not just in the continuous case. \begin{lemma} \label{lem:rwmax} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$, and satisfying \eqref{ass:Sigma} with $\Sigma = I_d$. Then there exists a constant $C \in \mathbb{R}_+$ such that for all $k \in \mathbb{N}$ and all $\lambda \geq 0$, \[ \limsup_{n \to \infty} \Pr \left( \max_{0 \leq j \leq \lfloor n/ k \rfloor} \| S_j \| \geq \lambda \sqrt{n} \right) \leq C k^{-2} \lambda^{-4} .\] \end{lemma} \begin{proof} Let $Z\sim \mathcal{N} (0,I_d)$.
Then by Markov's inequality there is a constant $C \in \mathbb{R}_+$ depending only on $d$ such that, for all $a \geq 0$, \begin{equation} \label{eq:markov} \Pr \left( \| Z \| \geq \frac{a}{3} \right) = \Pr \left(\|Z\|^4 \geq \left( \frac{a}{3}\right)^4\right) \leq C a^{-4} .\end{equation} We apply the $d$-dimensional version of Etemadi's inequality (see Lemma~\ref{lem:etemadi}) to obtain, for $\lambda \geq 0$, \begin{align*} \Pr \left( \max_{0 \leq j \leq \lfloor n/k \rfloor} \| S_j \| \geq \lambda \sqrt{n} \right) \leq 3 \max_{0 \leq j \leq \lfloor n/k \rfloor} \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right). \end{align*} Now for any $n_0 \in \mathbb{N}$, \begin{align*} \max_{0 \leq j \leq \lfloor n/k \rfloor} \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right) & \leq \max_{0 \leq j \leq n_0} \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right)+ \max_{ n_0 \leq j \leq \lfloor n/k \rfloor } \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right) \\ & \leq \max_{0 \leq j \leq n_0} \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right)+ \max_{ n_0 \leq j \leq \lfloor n/k \rfloor} \Pr \left( j^{-1/2} \| S_j \| \geq \frac{\lambda \sqrt{n/j}}{3} \right)\\ & \leq \max_{0 \leq j \leq n_0} \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right)+ \max_{n_0 \leq j \leq \lfloor n/k \rfloor} \Pr \left( j^{-1/2} \| S_j \| \geq \frac{\lambda \sqrt{k}}{3} \right) \\ & \leq \max_{0 \leq j \leq n_0} \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right)+ \max_{j \geq n_0} \Pr \left( j^{-1/2} \| S_j \| \geq \frac{\lambda \sqrt{k}}{3} \right) .
\end{align*} Now if we consider Theorem~\ref{thm:CLT} in conjunction with Theorem~\ref{thm:polya}, and the $a = \lambda \sqrt{k}$ case of~\eqref{eq:markov}, then we can choose $n_0$ sufficiently large so that for all $k \in \mathbb{N}$ and all $\lambda \geq 0$, \[ \max_{j \geq n_0} \Pr \left( j^{-1/2} \| S_j \| \geq \frac{\lambda \sqrt{k}}{3} \right) \leq 2 C k^{-2} \lambda^{-4} .\] Therefore, \begin{align*} \limsup_{n \rightarrow \infty} \left\{ \max_{0 \leq j \leq n_0} \Pr \left( \| S_j \| \geq \frac{\lambda \sqrt{n}}{3} \right) + \max_{j \geq n_0} \Pr \left( j^{-1/2} \| S_j \| \geq \frac{\lambda \sqrt{k}}{3} \right) \right\} \leq 2 C k^{-2} \lambda^{-4}, \end{align*} since, for fixed $n_0$, the first maximum vanishes as $n \to \infty$. This gives the claimed result (after renaming the constant). \end{proof} \subsection{Donsker's theorem for \texorpdfstring{$d$}{}-dimensional continuous trajectories - proof} We now have the tools needed to show that the measures associated with trajectories in $\mathcal{C}_0^d$ are tight, so we turn our attention to showing that the finite-dimensional distributions do in fact converge to those of Brownian motion. The following lemma will again be useful for both the continuous and discontinuous cases, hence we state it as a separate result. \begin{lemma} \label{lem:fdd} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$, and satisfying \eqref{ass:Sigma} with $\Sigma = I_d$. Then for any $0 \leq t_1 < t_2 < \cdots < t_k \leq 1$, we have that as $n \to \infty$, \[ n^{-1/2} \left( S_{\lfloor n t_1 \rfloor} , S_{\lfloor n t_2 \rfloor} - S_{\lfloor n t_1 \rfloor} , \ldots, S_{\lfloor n t_k \rfloor} - S_{\lfloor n t_{k-1} \rfloor} \right) \overset{\textup{d}}{\longrightarrow} \left( b_d (t_1) , b_d(t_2) - b_d(t_1) , \ldots, b_d(t_k) - b_d(t_{k-1} ) \right) .\] \end{lemma} \begin{proof} The idea is contained already in the case $k=2$, so for simplicity we present that case here.
By the Markov property, $S_{\lfloor nt_2 \rfloor} - S_{\lfloor nt_1 \rfloor}$ and $S_{\lfloor nt_1 \rfloor}$ are independent. By the multidimensional central limit theorem, Theorem~\ref{thm:CLT}, we have \[ \frac{1}{\sqrt{n}} S_{\lfloor nt_1 \rfloor} = \left( \frac{ \sqrt{\lfloor nt_1 \rfloor}}{\sqrt{n}} \right) \frac{1}{\sqrt{\lfloor nt_1 \rfloor}} S_{\lfloor nt_1 \rfloor} \overset{\textup{d}}{\longrightarrow} t_1^{1/2} Z_1 ,\] where $Z_1 \sim \mathcal{N} (0, I_d)$, using the fact that, if $\alpha_n \to \alpha$ in $\mathbb{R}$ and $\zeta_n \overset{\textup{d}}{\longrightarrow} \zeta$ in $\mathbb{R}^d$, then $\alpha_n \zeta_n \overset{\textup{d}}{\longrightarrow} \alpha \zeta$ in $\mathbb{R}^d$. Similarly, \[ \frac{1}{\sqrt{n}} \left( S_{\lfloor nt_2 \rfloor} - S_{\lfloor nt_1 \rfloor} \right) \overset{\textup{d}}{\longrightarrow} (t_2 - t_1)^{1/2} Z_2 ,\] where $Z_2 \sim \mathcal{N} (0,I_d)$. Here $Z_2$ is independent of $Z_1$ because if $X_n \overset{\textup{d}}{\longrightarrow} X$ and $Y_n \overset{\textup{d}}{\longrightarrow} Y$, and $X_n$ and $Y_n$ are pairwise independent, then $(X_n,Y_n)\overset{\textup{d}}{\longrightarrow} (X,Y)$ where $(X,Y)$ are independent. \end{proof} Now we can complete the proof of part (a) of Donsker's theorem. \begin{proof}[Proof of Theorem~\ref{thm:Donsker-dd}(a)] We follow~\cite[\S 8]{cpm}, and aim to apply Corollary~\ref{thm:tight}. Recall from Remark~\ref{rem:donsker} that it suffices to consider the case where $\Sigma = I_d$. First we must establish convergence of the finite-dimensional distributions of $Y_n$. 
We need to show that for any $0 \leq t_1 < t_2 < \cdots < t_k \leq 1$ we have \[ \left( Y_{n} (t_1) , Y_{n} (t_2) , \ldots, Y_{n} ( t_k ) \right) \overset{\textup{d}}{\longrightarrow} ( b (t_1) , b(t_2 ) , \ldots, b(t_k) ) .\] By continuity of the function $(x_1,x_2,\ldots,x_k) \mapsto \left(x_1, x_1+x_2,\ldots,\sum_{i=1}^k x_i\right)$, it is sufficient to prove that \[ ( Y_{n} (t_1), Y_n (t_2) - Y_n (t_1 ) , \ldots, Y_n (t_k) - Y_n (t_{k-1} ) ) \overset{\textup{d}}{\longrightarrow} ( b (t_1) , b(t_2) - b(t_1) ,\ldots, b(t_k) - b(t_{k-1} ) ) .\] Lemma~\ref{lem:fdd} provides the main step here, but there is a little more work due to the definition of $Y_n$ in terms of interpolation. Again, the main idea is contained in the case $k=2$ so we describe only that case here. Let $0 \leq t_1 < t_2 \leq 1$. Using~\eqref{eqn:trajectories} we may write \begin{align*} \left( Y_n(t_1), Y_n(t_2) - Y_n(t_1) \right) &= \frac{1}{\sqrt{n}} \left( S_{\lfloor nt_1 \rfloor}, S_{\lfloor nt_2 \rfloor} - S_{\lfloor nt_1 \rfloor} \right) + \left( \psi_{n,t_1}, \psi_{n,t_2} - \psi_{n,t_1} \right), \end{align*} where $\psi_{n,t} := \frac{nt- \lfloor nt \rfloor}{\sqrt{n}} \xi_{\lfloor nt \rfloor + 1}$. Using Markov's inequality, we have that for $r>0$, \begin{align*} \Pr(\|\xi \| \ge r ) \le \frac{\Exp [ \| \xi \|^2 ]}{r^2} = \frac{ \mathrm{tr~} \Sigma }{r^2} = \frac{d}{r^2} , \end{align*} since $\mu=0$ and $\Sigma = I_d$. Since $\| \psi_{n,t} \| \leq n^{-1/2} \| \xi_{\lfloor nt \rfloor + 1} \|$, we get \begin{align*} \Pr \left( \| \psi_{n,t} \| > r \right) & \le \Pr \left(\|\xi_{\lfloor nt \rfloor +1}\| \ge r \sqrt{n} \right) \leq \frac{d}{r^2 n} . \end{align*} It follows that $\psi_{n,t_1} \overset{\textup{p}}{\longrightarrow} 0$, and similarly for $\psi_{n,t_2} - \psi_{n,t_1}$. Hence $(\psi_{n,t_1}, \psi_{n,t_2} - \psi_{n,t_1}) \overset{\textup{p}}{\longrightarrow} 0$.
Thus by Lemma~\ref{lem:fdd} and Theorem~\ref{thm:slutsky}, we get \[ \left( Y_n(t_1), Y_n(t_2) - Y_n(t_1) \right) \overset{\textup{d}}{\longrightarrow} \left( t_1^{1/2} Z_1 , (t_2 - t_1)^{1/2} Z_2 \right) ,\] which is exactly the distribution of $(b(t_1), b(t_2) - b(t_1))$, as required. Next we use Lemma~\ref{lem:bill} to establish tightness. The $k=1$ case of Lemma~\ref{lem:rwmax} shows that \[ \limsup_{n \to \infty} \lambda^2 \Pr \left( \max_{0 \leq j \leq n} \| S_j \| \geq \lambda \sqrt{n} \right) \leq C \lambda^{-2} ,\] which converges to $0$ as $\lambda \to \infty$. Thus Lemma~\ref{lem:bill} gives tightness, and Corollary~\ref{thm:tight} completes the proof of part (a) of Theorem~\ref{thm:Donsker-dd}. \end{proof} \subsection{Weak convergence conditions in the Skorokhod topology} Now we turn to part (b) of Theorem~\ref{thm:Donsker-dd}. The first difference for trajectories with discontinuities is that the spaces $\mathcal{D}$ and $\mathcal{D}^d$ do not automatically have the class of finite-dimensional sets as a separating class. This means the proof of Theorem \ref{thm:rc} does not translate to this setting. However, we extract the following result from Theorem 12.5 of \cite{cpm}, which will help us. \begin{theorem}\label{Thm:FDDSepClass} Let $T\subseteq [0,1]$ be dense in $[0,1]$ with $1\in T$. Then the class of finite-dimensional sets with time points in $T$ is a separating class for $\mathcal{D}^d$. \end{theorem} To prove this result we recall, without proof, some standard results from measure theory; see e.g. \cite[Theorem~A.1.4]{durrett}. \begin{definition} A non-empty collection of sets $\mathcal{P}$ is a $\pi$-system if $A \cap B \in \mathcal{P}$ for any $A,B \in \mathcal{P}$.
\end{definition} \begin{theorem} \cite[Theorem~3.3]{billpm} \label{thm:pisystem} Suppose that $P_1$ and $P_2$ are probability measures on $\sigma(\mathcal{P})$, where $\mathcal{P}$ is a $\pi$-system and $\sigma(\mathcal{P})$ is the $\sigma$-algebra generated by $\mathcal{P}$. If $P_1$ and $P_2$ agree on $\mathcal{P}$ then they agree on $\sigma(\mathcal{P})$. \end{theorem} We omit the proof of this result because it would require a considerable diversion into Dynkin's $\pi$--$\lambda$ theorem, which is already well-covered ground in the literature; see \cite[Theorem~3.2]{billpm}. \begin{proof}[Proof of Theorem~\ref{Thm:FDDSepClass}] For the duration of this proof, let $\mathcal{B}$ denote the Borel subsets of $(\mathcal{D}^d,\rho_S)$, and recall that $\mathcal{B}_d$ denotes the Borel subsets of $\mathbb{R}^d$. Let $\mathcal{C}$ denote the finite cylinder sets over $T$, that is, the collection of all subsets of $\mathcal{D}^d$ of the form \begin{equation} \label{eq:cylinder} \left\{ f \in \mathcal{D}^d : \pi_{t_1, \ldots, t_k} f \in \prod_{i=1}^k A_i \right\} ,\end{equation} where $k \in \mathbb{Z}_+$, $t_1, \ldots, t_k \in T$, and $A_1, A_2, \ldots, A_k \in \mathcal{B}_d$. If $C_1, C_2 \in \mathcal{C}$ are of the form~\eqref{eq:cylinder} with $k = k_1, k_2$ respectively, then $C_1 \cap C_2$ is also a set of the form~\eqref{eq:cylinder} with $k = k_1 + k_2$. Thus $\mathcal{C}$ is a $\pi$-system. It generates the $\sigma$-algebra $\sigma (\mathcal{C})$. By the assumption that $T$ is dense, there is a sequence $t_1 > t_2 > \cdots$ of elements of $T$ such that $t_n \downarrow 0$ as $n \to \infty$, and then any $f \in \mathcal{D}^d$ has $\pi_0 f = \lim_{n \to \infty} \pi_{t_n} f$ by right continuity. Hence $\pi_0 = \lim_{n \to \infty} \pi_{t_n}$ pointwise, and so $\pi_0$ is a limit of functions measurable with respect to $\sigma (\mathcal{C})$, and hence is itself measurable with respect to $\sigma (\mathcal{C})$. Thus we may assume that $0 \in T$.
Then, for a given $m \in \mathbb{N}$, choose a positive integer $k$ and points $s_0, s_1, \ldots, s_k$ of $T$ such that $0 = s_0 < \cdots < s_k =1$ and $\max_{1 \leq i \leq k} (s_i - s_{i-1} ) < m^{-1}$. For $\alpha = (\alpha_0, \ldots, \alpha_k)$ in $(\mathbb{R}^d)^{k+1}$, let $V_m \alpha$ be the element of $\mathcal{D}^d$ such that $V_m \alpha (t) = \alpha_{i-1}$ for $t \in [s_{i-1},s_i)$ for each $1 \leq i \leq k$, and $V_m \alpha (1) = \alpha_k$. Since $V_m : (\mathbb{R}^d)^{k+1} \to \mathcal{D}^d$ is continuous, it is measurable, i.e., $V_m^{-1} ( B) \in \mathcal{B}_{d(k+1)}$ for each $B \in \mathcal{B}$. Since $\pi_{s_0,\ldots,s_k}$ is measurable from $(\mathcal{D}^d, \sigma(\mathcal{C}))$ to $((\mathbb{R}^d)^{k+1},\mathcal{B}_{d(k+1)})$, the composition $V_m \pi_{s_0,\ldots,s_k}$ is measurable from $(\mathcal{D}^d, \sigma(\mathcal{C}))$ to $(\mathcal{D}^d, \mathcal{B})$. It is a straightforward exercise to show that $\rho_S (f, V_m \pi_{s_0,\ldots,s_k} f ) \leq \max ( m^{-1}, w_f' (m^{-1} ) )$ for any $f \in \mathcal{D}^d$, which implies that $f = \lim_{m \to \infty} V_m \pi_{s_0,\ldots,s_k} f$. Hence the identity function on $\mathcal{D}^d$ is a limit of a sequence of functions measurable from $(\mathcal{D}^d, \sigma(\mathcal{C}))$ to $(\mathcal{D}^d, \mathcal{B})$ and hence is itself measurable from $(\mathcal{D}^d, \sigma(\mathcal{C}))$ to $(\mathcal{D}^d, \mathcal{B})$. It follows that $\sigma(\mathcal{C}) = \mathcal{B}$, i.e., the $\pi$-system $\mathcal{C}$ generates the full Borel $\sigma$-algebra. Theorem~\ref{thm:pisystem} now completes the proof. \end{proof} In applications, we take $T\subseteq [0,1]$ to be the set of times $t$ at which the limit process $X\in \mathcal{D}^d$ is almost surely continuous; this set contains $1$, and it is dense in $[0,1]$ because its complement is at most countable.
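The grid approximation $V_m \pi_{s_0,\ldots,s_k}$ used in the proof above is easy to experiment with numerically. The sketch below is illustrative only: the uniform grid, the test function $f(t)=\sin(2\pi t)$, and the helper name `v_m` are our own choices, and since $f$ is continuous the Skorokhod bound reduces to a sup-norm bound via the ordinary modulus of continuity.

```python
import math

def v_m(f, m):
    """Left-endpoint piecewise-constant approximation of f on the uniform
    grid s_i = i/m, playing the role of V_m pi_{s_0,...,s_k} in the proof."""
    def g(t):
        if t >= 1.0:
            return f(1.0)
        i = int(t * m)          # index i with s_i <= t < s_{i+1}
        return f(i / m)
    return g

f = lambda t: math.sin(2.0 * math.pi * t)   # continuous, with f(0) = 0
m = 100
g = v_m(f, m)

# estimate the sup-norm distance between f and its approximation
# on a fine sample of [0, 1]
err = max(abs(f(j / 10**5) - g(j / 10**5)) for j in range(10**5 + 1))
```

Since $f$ is $2\pi$-Lipschitz, its modulus of continuity at scale $m^{-1}$ is at most $2\pi/m$, and `err` respects that bound, in line with the estimate $\rho_S(f, V_m \pi_{s_0,\ldots,s_k} f) \leq \max(m^{-1}, w'_f(m^{-1}))$.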
Thus we have the following replacement for Corollary~\ref{thm:tight}; the proof is identical to that of Theorem~\ref{thm:rc}, with Prokhorov's theorem allowing us to phrase the hypothesis in terms of tightness rather than relative compactness. \begin{theorem}\label{thm:FDDandTinD} For elements $X, X_1, X_2, \ldots$ of $\mathcal{D}^d$, if \eqref{ass:finiteDimDist} and \eqref{ass:tight} hold, then $X_n \Rightarrow X$. \end{theorem} \subsection{Tightness conditions in Skorokhod topology} First we need a generalised form of the Arzel\`{a}-Ascoli theorem, not only for the Skorokhod topology but also in $d$ dimensions. The Skorokhod case in one dimension is proved in \cite[Theorem~12.3]{cpm}; the argument has no dimensional dependency, so we do not reproduce it here. Recall the definition of $w'_f$ from~\eqref{eqn:Dmodulusofcont}. \begin{theorem}\label{thm:AAinDd} A set $A$ in $\mathcal{D}^d$ is relatively compact if and only if \[\sup_{f \in A}\|f\|_{\infty}<\infty {\rm \quad and \quad} \lim_{\delta\rightarrow 0} \sup_{f\in A} w'_f(\delta)=0.\] \end{theorem} Now we can also generalise the tightness conditions of \cite[Theorem~13.2]{cpm} to $d$ dimensions; the proof reads the same as that for Lemma~\ref{lem:tightC} with the modulus of continuity $w_f$ replaced by $w'_f$, so we omit it. \begin{lemma} \label{lem:tightD} Let $P_n$ be a sequence of probability measures on $\mathcal{D}^d$. Then \eqref{ass:tight} holds if and only if the following two conditions hold. \begin{enumerate} \item[(i)] We have \begin{equation} \lim_{a\rightarrow \infty}\limsup_{n\to\infty} P_n ( \{ f :\| f \|_\infty \geq a \} )=0.\label{eqn:tightDone} \end{equation} \item[(ii)] For each $\varepsilon>0$, \begin{equation} \lim_{\delta \downarrow 0 }\limsup_{n \to \infty} P_n \left( \{ f : w^{\prime}_f (\delta) \geq \varepsilon \} \right) = 0.
\label{eqn:tightDtwo} \end{equation} \end{enumerate} \end{lemma} \subsection{Donsker's theorem in \texorpdfstring{$d$}{}-dimensional Skorokhod space - proof} \begin{proof}[Proof of Theorem~\ref{thm:Donsker-dd}(b)] The convergence of the finite-dimensional distributions is a consequence of Lemma~\ref{lem:fdd} and the continuous mapping theorem, Theorem~\ref{thm:mapping}, which is applicable because the mapping $(x_1,x_2,\ldots,x_k)\mapsto (x_1,x_1+x_2,\ldots,\sum_{i=1}^k x_i)$ defined for $x_1,\ldots,x_k \in \mathbb{R}^d$ is continuous. For tightness, it will be sufficient to check the conditions in Lemma~\ref{lem:tightD} applied to the measures $P_n$ defined by $P_n ( B) = \Pr ( Y'_n \in B )$. The condition~\eqref{eqn:tightDone} then becomes \begin{equation} \lim_{a\to\infty}\limsup_{n\to\infty} \Pr \left( \max_{0 \leq j \leq n} \left\| S_j \right\|\geq a \sqrt{n} \right) =0, \nonumber \end{equation} which is easily verified by the $k=1$ case of Lemma~\ref{lem:rwmax}. The condition~\eqref{eqn:tightDtwo} becomes $$ \lim_{\delta\downarrow 0}\limsup_{n\to\infty} \Pr \left( \inf_{\{t_i\}} \max_{1\leq i\leq v} \sup_{t,s\in [t_{i-1},t_i)}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) = 0,$$ where the infimum is over all $\delta$-sparse sets $\{t_0, t_1, \ldots, t_v\}$. It suffices to suppose $\delta = 1/(2k)$, with $k \in \mathbb{N}$, and then choose $t_i = i/k$ and $v=k$ to obtain an upper bound for the probability. This gives \begin{align*} \Pr \left( \inf_{\{t_i\}} \max_{1\leq i\leq v} \sup_{t,s\in [t_{i-1},t_i)}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) & \leq \Pr \left( \max_{1\leq i\leq v} \sup_{t,s\in [\frac{i-1}{k},\frac{i}{k})}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) \\ & = \Pr \left( \bigcup_{i=1}^k \left\{ \sup_{t,s\in [\frac{i-1}{k},\frac{i}{k})}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right\} \right) \\ & \leq \sum_{i=1}^k \Pr \left( \sup_{t,s\in [\frac{i-1}{k},\frac{i}{k})}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) .
\end{align*} Here we have $Y_n' (t) - Y_n' (s) = n^{-1/2} \sum_{j=\lfloor ns \rfloor +1}^{\lfloor nt \rfloor} \xi_j$ if $s<t$ (and we can restrict the supremum to such $s,t$), so that the distribution of $\sup_{t,s\in [\frac{i-1}{k},\frac{i}{k})}\|Y'_n(s)-Y'_n(t)\|$ is the same for each $i$. Hence \begin{align*} & \lim_{\delta\downarrow 0}\limsup_{n\to\infty} \Pr \left( \inf_{\{t_i\}} \max_{1\leq i\leq v} \sup_{t,s\in [t_{i-1},t_i)}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) \\ &{} \qquad \qquad \qquad {} \leq \lim_{k \to \infty} \limsup_{n \to \infty} k \Pr \left( \sup_{t,s\in [0,\frac{1}{k})}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) . \end{align*} Here we have that \begin{align*} \Pr \left( \sup_{t,s\in [0,\frac{1}{k})}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) & \leq \Pr \left( \sup_{t,s\in [0,\frac{1}{k})}( \|Y'_n(s)\| + \|Y'_n(t)\| ) \geq \varepsilon \right) \\ &= \Pr \left( \sup_{t \in [0,\frac{1}{k})} \|Y'_n(t)\| \geq \varepsilon /2 \right) \\ & = \Pr \left( \max_{0 \leq j \leq \lfloor n/k \rfloor} \| S_j \| \geq (\varepsilon/2) \sqrt{n} \right) . \end{align*} Then by Lemma~\ref{lem:rwmax} we have that \[\lim_{k \to \infty} \limsup_{n \to \infty} k \Pr \left( \sup_{t,s\in [0,\frac{1}{k})}\|Y'_n(s)-Y'_n(t)\|\geq \varepsilon \right) \leq \lim_{k \to \infty} C k^{-1} (\varepsilon/2)^{-4} = 0, \] which verifies condition~\eqref{eqn:tightDtwo}. This establishes tightness, which, together with the convergence of the finite-dimensional distributions and Theorem~\ref{thm:FDDandTinD}, completes the proof. \end{proof} \section{Set convergence} \label{sec:set} \subsection{Hausdorff distance} \label{sec:Haus} In this section we establish convergence of the set $\{S_0, S_1, \ldots, S_n \}$, suitably scaled, in the context of the law of large numbers and the central limit theorem. We present some consequences of this convergence in the subsequent sections. Note that we need an appropriate metric space of sets on which this convergence should take place.
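Before setting up that metric space formally, here is a numerical preview. The sketch below is purely illustrative: the drift $\mu = (1,0)$, the walk length, the discretisation of the limit segment, and the helper name `hausdorff` are our own choices. It computes the Hausdorff distance of \eqref{eqn:Hausdef} between the rescaled point set $n^{-1}\{S_0,\ldots,S_n\}$ and the segment $I_\mu[0,1]$, anticipating Theorem~\ref{thm:SetFLLN}.

```python
import math
import random

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^2,
    computed directly from the sup-inf definition."""
    def directed(X, Y):
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

random.seed(1)
n = 1000
S = [(0.0, 0.0)]
for _ in range(n):                      # walk with drift mu = (1, 0)
    x, y = S[-1]
    S.append((x + 1.0 + random.gauss(0, 1), y + random.gauss(0, 1)))

scaled = [(x / n, y / n) for (x, y) in S]          # n^{-1} {S_0, ..., S_n}
segment = [(t / 300, 0.0) for t in range(301)]     # I_mu[0,1], discretised
dist = hausdorff(scaled, segment)
```

With these (arbitrary) parameters `dist` is already small, consistent with the almost sure convergence $n^{-1}\{S_0,\ldots,S_n\} \to I_\mu[0,1]$ established below.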
The purpose of this subsection is to set this up. Let $\mathfrak{S}^d_0$ denote the collection of bounded subsets of $\mathbb{R}^d$ containing $0$. Let $\mathfrak{K}^d_0$ denote the set of compact subsets of $\mathbb{R}^d$ containing $0$. Recall that $A^\varepsilon=\{x\in \mathbb{R}^d : \rho_E(x,A)\leq \varepsilon\}$ and $\rho_E(x,A)$ is the distance between a point $x$ and a set $A$. For $A, B \in \mathfrak{S}^d_0$, the \emph{Hausdorff distance} between $A$ and $B$ is given by either of the following two equivalent definitions \cite[p.~84]{gruber}: \begin{align} \rho_H(A,B) &\coloneqq \max \left\{ \sup_{x\in A}\rho_E(x,B),\sup_{y\in B}\rho_E (y,A)\right\}, \label{eqn:Hausdef}\\ \rho_H(A,B) &\coloneqq \inf\{\varepsilon \geq 0: A \subseteq B^\varepsilon \text{ and } B \subseteq A^\varepsilon\}. \label{eqn:Hausinf} \end{align} Note that $\rho_H$ is a metric on $\mathfrak{K}^d_0$. On $\mathfrak{S}^d_0$, $\rho_H$ is only a pseudometric, since while the triangle inequality still holds, $\rho_H(A,B) = 0$ does not imply $A = B$ (e.g.~take an open set $A$ and take $B$ to be its closure; see Lemma~\ref{lemma:clHauss} below). Thus convergence must take place in $(\mathfrak{K}^d_0, \rho_H)$. We need the following observations about the Hausdorff distance. \begin{lemma}\label{lemma:HaussSkorSup} Consider functions $f, g \in \mathcal{M}_0^d$. Then $f[0,1], g[0,1] \in \mathfrak{S}^d_0$ and \[\rho_H ( f[0,1],g[0,1] ) \leq \rho_S(f,g) \leq \rho_{\infty}(f,g).\] \end{lemma} \begin{proof} Let $\Lambda'$ denote the set of all $\lambda : [0,1] \rightarrow [0,1]$ such that $\lambda[0,1]=[0,1]$.
Then by~\eqref{eqn:Hausdef}, \begin{align*} \rho_H (f[0,1],g[0,1] ) & =\sup_{t \in [0,1]} \rho_E (f(t),g[0,1] ) \vee \sup_{t \in [0,1]} \rho_E (g(t),f[0,1] ) \\ &= \sup_{t\in[0,1]} \inf_{s\in[0,1]} \|f(t)-g(s)\| \vee \sup_{t\in[0,1]}\inf_{s\in[0,1]} \|g(t)-f(s)\|\\ &= \sup_{t\in[0,1]} \inf_{s\in[0,1]} \|f(t)-g\circ\lambda(s)\| \vee \sup_{t\in[0,1]}\inf_{s\in[0,1]} \|g\circ\lambda(t)-f(s)\|, \end{align*} for any $\lambda \in\Lambda'$. Using the fact that for any $t \in [0, 1]$, $\inf_{s\in[0,1]} h(s) \leq h(t)$, we get \[ \rho_H (f[0,1],g[0,1] ) \leq \sup_{t \in [0,1] }\| f (t) - g \circ \lambda (t)\| ,\] for any $\lambda \in \Lambda'$, and hence \[\rho_H (f[0,1],g[0,1] ) \leq \inf_{\lambda \in \Lambda'} \|f-g\circ \lambda\|_{\infty} .\] It follows that $\rho_H (f[0,1],g[0,1] ) \leq \rho_S (f,g)$, and Lemma~\ref{lem:skor} completes the proof. \end{proof} Note that if $ f \in \mathcal{C}^d_0$ then $f[0,1]$ is the continuous image of a compact set, containing $f(0) = 0$, and hence $f[0, 1] \in \mathfrak{K}^d_0$. Thus Lemma~\ref{lemma:HaussSkorSup} shows that $f \mapsto f[0, 1]$ is a continuous map from $(\mathcal{C}^d_0 , \rho_\infty)$ to $(\mathfrak{K}^d_0 , \rho_H)$. For $f\in \mathcal{D}^d_0$, we need to work instead with $\cl f[0, 1]$. We need the following simple fact. \begin{lemma}\label{lemma:clHauss} For any $A,B \in \mathfrak{S}^d_0$, \[\rho_H( \cl A,B)=\rho_H(A,B).\] \end{lemma} \begin{proof} Clearly $A\subseteq\cl A$, so \begin{equation} \label{eq:clHauss1} \sup_{x\in\cl A} \rho_E(x,B) \geq \sup_{x\in A} \rho_E(x,B). \end{equation} For any $z \in \cl A$, there exist $z_n \in A$ such that $z_n \to z$; by continuity, $\rho_E(z_n,B) \to \rho_E(z,B)$. Also, since $z_n \in A$, it is clear that $\rho_E(z_n,B) \leq \sup_{x\in A} \rho_E(x,B)$, which gives $\rho_E(z,B) \leq \sup_{x\in A} \rho_E(x,B)$, and hence \begin{equation} \label{eq:clHauss2} \sup_{z \in \cl A} \rho_E(z,B) \leq \sup_{x \in A} \rho_E(x,B). 
\end{equation} Combining~\eqref{eq:clHauss1} and~\eqref{eq:clHauss2} shows that $\sup_{x \in\cl A} \rho_E (x,B) = \sup_{x \in A} \rho_E(x,B)$. Since $A \subseteq \cl A$ we have $\rho_E(y, \cl A) \leq \rho_E(y,A)$ for all $y \in B$. For any $z \in \cl A$, there exist $z_n \in A$ such that $z_n\to z$. Then \[ \rho_E(y, z) = \lim_{n \to \infty} \rho_E(y , z_n) \geq \rho_E (y,A), \] so that for any $y \in B$, \[ \rho_E (y, \cl A) = \inf_{z \in \cl A} \rho_E(y , z) \geq \rho_E(y,A). \] Hence $\rho_E(y, \cl A) = \rho_E (y,A)$ for any $y \in B$, so the result follows from \eqref{eqn:Hausdef}. \end{proof} Combining the preceding two lemmas gives the following result, which shows that $f \mapsto \cl f[0, 1]$ is a continuous map from $(\mathcal{D}^d_0, \rho_S)$ to $(\mathfrak{K}^d_0,\rho_H)$. \begin{corollary}\label{cor:HclSkor} Consider functions $f, g \in \mathcal{D}^d_0$. Then $\cl f[0, 1], \cl g[0, 1] \in \mathfrak{K}^d_0$ and \[\rho_H ( \cl f[0,1], \cl g[0,1] )\leq \rho_S(f,g) \leq \rho_\infty (f,g) .\] \end{corollary} \begin{proof} First note that $\{ f(x) : x\in [0,1]\}$ is contained in the closed Euclidean ball centred at the origin with radius $\| f \|_\infty$, which is finite for $f \in \mathcal{D}_0^d$ \cite[p.~121]{cpm}. Thus if $f, g \in \mathcal{D}^d_0$, then $f[0,1], g[0,1]$ are bounded, and hence their closures are compact. We use Lemma~\ref{lemma:clHauss} twice to see $\rho_H (\cl f[0,1],\cl g[0,1])=\rho_H (f[0,1],g[0,1] )$, and the result then follows from Lemma~\ref{lemma:HaussSkorSup}. \end{proof} \subsection{Convergence of random walk points} Now we can present our limit theorems for the set $\{S_0, S_1, \ldots, S_n \}$. First we state a law of large numbers. Recall that $I_\mu(t) =\mu t$. \begin{theorem}\label{thm:SetFLLN} Suppose that we have a random walk as defined at~\eqref{ass:walk}. Then, as elements of $(\mathfrak{K}^d_0,\rho_H)$, \[n^{-1}\{S_0,S_1,\ldots,S_n\}\overset{\textup{a.s.}}{\longrightarrow} I_{\mu}[0,1] . 
\] \end{theorem} \begin{proof} The functional law of large numbers, Theorem~\ref{thm:flln}(b), shows that $X'_n\overset{\textup{a.s.}}{\longrightarrow} I_\mu$ on $(\mathcal{D}^d_0, \rho_\infty)$. Corollary~\ref{cor:HclSkor} shows that $f \mapsto \cl f[0, 1]$ is continuous from $(\mathcal{D}^d_0, \rho_\infty)$ to $(\mathfrak{K}^d_0, \rho_H)$, so the mapping theorem, Theorem~\ref{thm:mapping}, shows that $\cl X'_n [0, 1] \overset{\textup{a.s.}}{\longrightarrow} \cl I_\mu [0, 1]$; note that $\cl X'_n[0, 1] = X'_n[0, 1] = n^{-1} \{ S_0, S_1, \ldots, S_n \}$ and $\cl I_\mu[0, 1] = I_\mu[0, 1]$. \end{proof} \begin{theorem}\label{thm:SetWeakConv} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$ and satisfying \eqref{ass:Sigma}. Then, as elements of $(\mathfrak{K}^d_0, \rho_H)$, \[ n^{-1/2}\{S_0,S_1,\ldots,S_n\} \Rightarrow \Sigma^{1/2}b_d[0,1] . \] \end{theorem} \begin{proof} Donsker's theorem, Theorem~\ref{thm:Donsker-dd}(b), shows that $Y'_n \Rightarrow \Sigma^{1/2} b_d$ on $(\mathcal{D}^d_0, \rho_S)$. Corollary~\ref{cor:HclSkor} shows that $f \mapsto \cl f[0, 1]$ is continuous from $(\mathcal{D}^d_0, \rho_S)$ to $(\mathfrak{K}^d_0, \rho_H)$, so the mapping theorem, Theorem~\ref{thm:mapweak}, shows that $\cl Y'_n [0, 1] \Rightarrow \cl \Sigma^{1/2}b_d[0,1]$; note that $\cl Y'_n [0, 1] = Y'_n [0,1] = n^{-1/2}\{S_0,S_1,\ldots,S_n\}$ and $\cl \Sigma^{1/2}b_d[0,1] = \Sigma^{1/2}b_d[0,1]$. \end{proof} \subsection{Diameter of random walks} As a first application of the results of this section (we see another application in Section~\ref{sec:hulls}), we consider the \emph{diameter} of the random walk. Define \[ D_n := \diam \{S_0,\ldots,S_n\} = \max_{0 \leq i, j \leq n} \| S_i - S_j \| .\] The following is a generalisation to $d$-dimensions of the $2$-dimensional almost-sure result contained in \cite[Theorem~1.3]{mcrwade}. \begin{theorem}\label{thm:RWDiam} Suppose that we have a random walk as defined at \eqref{ass:walk}. 
\begin{itemize} \item[(a)] $n^{-1}D_n \overset{\textup{a.s.}}{\longrightarrow} \|\mu\|$ as $n \to \infty$. \item[(b)] If $\mu=0$ and \eqref{ass:Sigma} holds, then $n^{-1/2}D_n \overset{\textup{d}}{\longrightarrow} \diam ( \Sigma^{1/2} b_d[0,1] )$ as $n \to \infty$. \end{itemize} \end{theorem} The theorem rests on the following result, which shows that $A \mapsto \diam A$ is continuous from $(\mathfrak{K}^d_0, \rho_H)$ to $(\mathbb{R}_+, \rho_E)$ which can also be found at \cite[Lemma~3.5]{mcrwade}. \begin{lemma} \label{lemma:DiamCont} For any $A,B \in \mathfrak{S}^d_0$, \[| \diam A-\diam B |\leq 2 \rho_H(A,B) .\] \end{lemma} \begin{proof} Let $\rho_H(A,B)=r$. From~\eqref{eqn:Hausinf} we have that for any $x_1,x_2 \in A$ and any $s>r$, there exist $y_1,y_2\in B$ such that $\rho_E(x_i,y_i)\leq s$. Then, \[\rho_E(x_1,x_2)\leq \rho_E(x_1,y_1)+\rho_E(y_1,y_2)+\rho_E(y_2,x_2) \leq 2s + \diam B .\] Hence \[ \diam A = \sup_{x_1, x_2 \in A} \rho_E ( x_1, x_2 ) \leq 2s + \diam B ,\] and since $s > r$ was arbitrary we get $\diam A - \diam B \leq 2r$. Similarly, $\diam B - \diam A \leq 2 r$, giving the result. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:RWDiam}] For part (a), we have from the law of large numbers for sets, Theorem~\ref{thm:SetFLLN}, that $n^{-1} \{ S_0, S_1, \ldots, S_n\} \overset{\textup{a.s.}}{\longrightarrow} I_\mu [0,1]$ on $(\mathfrak{K}^d_0, \rho_H)$, while Lemma~\ref{lemma:DiamCont} shows that $A \mapsto \diam A$ is continuous from $(\mathfrak{K}^d_0, \rho_H)$ to $(\mathbb{R}_+, \rho_E)$. Thus the mapping theorem, Theorem~\ref{thm:mapping}, yields $n^{-1} D_n \overset{\textup{a.s.}}{\longrightarrow} \diam ( I_\mu [0,1] ) = \| \mu \|$. For part (b), we have from the central limit theorem for sets, Theorem~\ref{thm:SetWeakConv}, that $n^{-1/2} ~\{ S_0 , S_1, \ldots, S_n\}$ $\Rightarrow \Sigma^{1/2} b_d[0,1]$ on $(\mathfrak{K}^d_0, \rho_H)$. Lemma~\ref{lemma:DiamCont} together with the mapping theorem, Theorem~\ref{thm:mapweak}, yield the result. 
\end{proof} \section{Convex hulls} \label{sec:hulls} \subsection{Introduction} Given $A \subseteq \mathbb{R}^d$, we denote the \emph{convex hull} of $A$ by $\hull A$, the smallest convex set containing $A$. Given the random walk $S_n$, we are interested in this section in its associated convex hull, $\hull \{ S_0, \ldots, S_n\}$. We apply the compact-set convergence results for $\{ S_0,\ldots, S_n\}$ from Section~\ref{sec:set} to deduce compact-set convergence of $\hull \{ S_0, \ldots, S_n\}$, and then we give some applications to properties of the convex hull such as its volume or surface area. First, we give a brief survey of existing results on convex hulls of random walks. The first considerations of the subject were largely combinatorial in nature. Sparre Andersen \cite{sparreandersen1954} considered the space-time picture of a one-dimensional random walk and studied the number of vertices of the convex minorant, the bottom half of the convex hull. He established that this number, $H_n$, grows like $\log n$, where $n$ is the length of the walk. This was contrasted more recently by Qiao and Steele \cite{qiaosteele1994}, who proved that, for any natural number $m$, $H_n = m$ infinitely often for some increment distribution $Z$. Another functional that has been widely studied is the perimeter length of the convex hull. Spitzer and Widom \cite{spitzer1961circumference} combined combinatorial results with Cauchy's formula for the length of convex polygons, which states \[ L=\int_0^\pi D(\theta) \textup{d}\theta, \] where $L$ is the length of the perimeter of a convex polygon, and $D(\theta)$ is the width of the projection of the polygon in the direction $\theta$. From this they established the elegant formula $\Exp L_n = 2 \sum_{i=1}^n \frac{1}{i} \Exp\|S_i\|$. This was extended by Snyder and Steele in 1993 \cite{snyder1993convex}, who proved a law of large numbers and some bounds on the variance of the perimeter length.
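The Spitzer--Widom formula invites a quick Monte Carlo sanity check. In the sketch below (illustrative only: the walk length, sample size, seed, and helper names `hull` and `perimeter` are our own; we use standard normal increments in two dimensions, for which $\Exp\|S_i\| = \sqrt{\pi i/2}$), the empirical mean hull perimeter is compared with $2 \sum_{i=1}^n \frac{1}{i} \Exp\|S_i\|$.

```python
import math
import random

def hull(points):
    """Andrew's monotone chain convex hull; returns vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def perimeter(points):
    h = hull(points)
    return sum(math.dist(h[i], h[(i + 1) % len(h)]) for i in range(len(h)))

random.seed(2)
n, trials = 40, 1000
total = 0.0
for _ in range(trials):
    walk, x, y = [(0.0, 0.0)], 0.0, 0.0
    for _ in range(n):
        x, y = x + random.gauss(0, 1), y + random.gauss(0, 1)
        walk.append((x, y))
    total += perimeter(walk)
mc_mean = total / trials

# Spitzer-Widom: E L_n = 2 sum_{i<=n} E||S_i||/i, with E||S_i|| = sqrt(pi*i/2)
sw = 2 * sum(math.sqrt(math.pi * i / 2) / i for i in range(1, n + 1))
```

Since the Spitzer--Widom identity is exact for every $n$, the Monte Carlo mean should agree with `sw` up to sampling error.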
Steele further developed results of this type by relating functionals of the convex hull to functions of permutations of random vectors, reaffirming the results of Sparre Andersen, and Snyder and Steele. In particular he commented that \begin{quote} This \ldots tells us that the expected length of the concave majorant grows exactly like the length of the line from $(0,0)$ to the point $(n, \Exp S_n) = (n,n\mu)$. \end{quote} Of course this is the heuristic behind our result, Theorem~\ref{thm:flln}, which formalises the convergence of the walk to exactly this line. In even more recent work, Wade and Xu studied the perimeter length and the area of the convex hull in two papers concerning the zero drift and non-zero drift case separately \cite{wade2015convex,wade2015drift}. In the zero drift case, they established limit theorems for the expectation and variance of both the perimeter length and area, using similar analysis to that presented here, considering the convergence of the hull to that of a Brownian path. In the non-zero drift case, a different technique was used, and a variance convergence and a central limit theorem were established for walks, provided the distribution of $Z$ was not supported on a single straight line. Other notable works relating to the perimeter length and area include a large deviation study by Akopyan and Vysotsky \cite{akopyanVysotsky} and a paper describing further expansions of the expectation of both functionals, restricted to the case of symmetric or Gaussian increments, which was written by Grebenkov, Lanoisel\'{e}e and Majumdar \cite{GrebenkovLanoiseleeMajumdar}. In what follows, we improve Snyder and Steele's result by using the functional law of large numbers to remove the need for the finite second moment condition, and then extend Wade and Xu's results to higher dimensions using the $d$-dimensional Donsker's theorem. We also consider an additional functional, the mean width of the convex hull.
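As a brief numerical aside before we turn to hulls proper, the diameter asymptotics of Theorem~\ref{thm:RWDiam}(a) from the previous section are already visible at moderate $n$. The sketch below is illustrative only: the drift $\mu = (0.6, 0.8)$ (so that $\|\mu\| = 1$), the walk length, and the seed are arbitrary choices.

```python
import math
import random

random.seed(3)
n = 600
mu = (0.6, 0.8)                      # drift with ||mu|| = 1
S = [(0.0, 0.0)]
for _ in range(n):
    x, y = S[-1]
    S.append((x + mu[0] + random.gauss(0, 1), y + mu[1] + random.gauss(0, 1)))

# D_n = max_{i,j} ||S_i - S_j||, computed by brute force over all pairs
D_n = max(math.dist(S[i], S[j])
          for i in range(n + 1) for j in range(i + 1, n + 1))
ratio = D_n / n                      # should be close to ||mu|| = 1
```

The fluctuations of `ratio` around $\|\mu\|$ are of order $n^{-1/2}$, matching the almost sure convergence $n^{-1} D_n \to \|\mu\|$.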
\subsection{Trajectories and hulls} We use the notation for sets of subsets of $\mathbb{R}^d$ and for the Hausdorff distance $\rho_H$ from Section~\ref{sec:Haus}. We need the following result. \begin{lemma}\label{lemma:hullHaus} For any $A,B \in \mathfrak{S}^d_0$, \[\rho_H( \hull A, \hull B ) \leq \rho_H(A,B) .\] \end{lemma} \begin{proof} For any $x \in \hull A$ there exist finitely many points $x_1, x_2, \ldots, x_n \in A$ and $\lambda_1, \lambda_2, \ldots, \lambda_n$ with $\lambda_i \geq 0$, $\sum_{i=1}^n \lambda_i =1$, for which $x = \sum_{i=1}^n \lambda_i x_i$ (see e.g.~\cite[p.~42]{gruber}). Let $r := \rho_H (A, B)$. For any $s>r$, we have from~\eqref{eqn:Hausinf} that for each $x_i \in A$ there exists $y_i \in B$ such that $\rho_E (x_i, y_i ) \leq s$. Now consider $y=\sum_{i=1}^n \lambda_i y_i \in \hull B$. Then \[ \rho_E(x,y) \leq \sum_{i=1}^n \lambda_i \rho_E ( x_i,y_i ) \leq s . \] This calculation implies that $\hull A \subseteq ( \hull B )^s$, and by a similar argument we get $\hull B \subseteq (\hull A)^s$. With~\eqref{eqn:Hausinf} we get $\rho_H ( \hull A, \hull B) \leq s$. Since $s >r$ was arbitrary, the result follows. \end{proof} Let $\mathfrak{C}^d_0$ denote the set of compact convex subsets of $\mathbb{R}^d$ containing $0$. For $A \in \mathfrak{C}^d_0$, we define the \emph{support function} of $A$ by \begin{equation} \label{eq:support} h_A(x) \coloneqq \sup_{y\in A}(x\cdot y), \text{ for any } x\in \mathbb{R}^d .\end{equation} Then for $A, B \in \mathfrak{C}^d_0$ we have another equivalent description of $\rho_H(A,B)$ (see e.g.~\cite[p.~84]{gruber}): \begin{equation} \rho_H(A,B) = \sup_{e\in \mathbb{S}^{d-1}}|h_A(e)-h_B(e)|. \label{eqn:Haussupport} \end{equation} Given $f \in \mathcal{D}^d_0$, we have that $\cl f[0,1]$ is compact and contains $0 = f(0)$. A theorem of Carath\'eodory \cite[p.~44]{gruber} says that if $A$ is compact then so is $\hull A$; hence $\hull \cl f[0,1]$ is compact.
Moreover, we have that $\hull \cl A = \cl \hull A$ \cite[p.~45]{gruber}. Hence if $f \in \mathcal{D}^d_0$ then $\cl \hull f[0,1] \in \mathfrak{C}^d_0$. Of course, if $f \in \mathcal{C}^d_0$ then $f[0,1]$ and hence $\hull f[0,1]$ is already compact. The following result shows that $f \mapsto \cl \hull f[0,1]$ is a continuous map from $(\mathcal{D}^d_0,\rho_S)$ to $(\mathfrak{C}^d_0,\rho_H)$. This fact is also found as Lemma~5.1 in the recent paper of Molchanov and Wespi~\cite{molchanov2016convex}. \begin{lemma}\label{lemma:hullHausSkor} Consider two functions $f, g \in \mathcal{M}^d_0$. Then, \[\rho_H \left(\cl \hull f[0,1], \cl \hull g[0,1]\right) \leq \rho_S (f,g ).\] \end{lemma} \begin{proof} First, Lemma~\ref{lemma:clHauss} (twice) and Lemma~\ref{lemma:hullHaus} yield \begin{align*} \rho_H \left(\cl\hull f[0,1],\cl \hull g[0,1]\right) = \rho_H \left( \hull f[0,1], \hull g[0,1]\right) \leq \rho_H ( f[0,1], g[0,1] ) .\end{align*} Lemma~\ref{lemma:HaussSkorSup} completes the proof. \end{proof} \subsection{Limit theorems for convex hulls} The following is our law of large numbers for the convex hull. \begin{theorem}\label{thm:hullFLLN} Suppose that we have a random walk as defined at~\eqref{ass:walk}. Then, as elements of $(\mathfrak{C}_0^d, \rho_H)$, \[n^{-1}\hull \{S_0,\ldots,S_n\} \overset{\textup{a.s.}}{\longrightarrow} I_{\mu}[0,1].\] \end{theorem} \begin{proof} Theorem~\ref{thm:SetFLLN} shows that $n^{-1} \{ S_0, \ldots, S_n\} \overset{\textup{a.s.}}{\longrightarrow} I_{\mu} [0,1]$ on $(\mathfrak{K}^d_0, \rho_H)$. Lemma~\ref{lemma:hullHaus} shows that $A \mapsto \hull A$ is a continuous map from $(\mathfrak{K}^d_0, \rho_H)$ to $(\mathfrak{C}^d_0, \rho_H)$, so the mapping theorem, Theorem~\ref{thm:mapping}, implies that $\hull n^{-1} \{ S_0, \ldots, S_n \} \overset{\textup{a.s.}}{\longrightarrow} \hull I_{\mu} [0,1]$. 
Here $\hull I_{\mu} [0,1] = I_{\mu} [0,1]$, and, since the convex hull is preserved under scaling, $\hull n^{-1} \{ S_0, \ldots, S_n \} = n^{-1} \hull \{ S_0, \ldots, S_n \}$. \end{proof} Next we state the accompanying central limit theorem. Let $h_d := \hull b_d[0,1]$, the convex hull of $d$-dimensional Brownian motion run for unit time. \begin{theorem} \label{thm:hullCLT} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$ and satisfying \eqref{ass:Sigma}. Then, as elements of $(\mathfrak{C}_0^d, \rho_H)$, \[n^{-1/2} \hull \{S_0,\ldots,S_n\} \Rightarrow \Sigma^{1/2}h_d.\] \end{theorem} \begin{proof} Theorem~\ref{thm:SetWeakConv} shows that $n^{-1/2} \{ S_0, \ldots, S_n\} \Rightarrow \Sigma^{1/2} b_d[0,1]$ on $(\mathfrak{K}^d_0, \rho_H)$. Lemma~\ref{lemma:hullHaus} shows that $A \mapsto \hull A$ is a continuous map from $(\mathfrak{K}^d_0, \rho_H)$ to $(\mathfrak{C}^d_0, \rho_H)$, so the mapping theorem, Theorem~\ref{thm:mapweak}, implies that $\hull n^{-1/2} \{ S_0, \ldots, S_n \} \Rightarrow \hull \Sigma^{1/2} b_d[0,1]$. Since the convex hull is preserved under affine transformations, $\hull \Sigma^{1/2} b_d[0,1] = \Sigma^{1/2} \hull b_d[0,1]$. \end{proof} \begin{remark} Alternatively, we could obtain Theorems~\ref{thm:hullFLLN} and~\ref{thm:hullCLT} directly from the functional law of large numbers, Theorem~\ref{thm:flln}, and Donsker's theorem, Theorem~\ref{thm:Donsker-dd}, using Lemma~\ref{lemma:hullHausSkor}. \end{remark} Suppose now $d\geq 2$. To obtain second-order results in the case where $\mu \neq 0$, an additional scaling limit is required. Let $\{e_1,\ldots,e_d\}$ be the standard orthonormal basis of $\mathbb{R}^d$, and supposing that $\mu \neq 0$, let $\{u_1, \ldots, u_d \}$ be another orthonormal basis of $\mathbb{R}^d$ with $u_1 = \hat \mu$. 
Then we transform $\xi$ into $\xi'$ by taking \[ \xi' = (\xi'_1,\xi'_2,\ldots,\xi'_d) := (\xi\cdot u_1, \xi \cdot u_2,\ldots , \xi \cdot u_d),\] and consider $\xi'_{\perp} := (\xi'_2,\ldots, \xi'_d)$. Note that, since $\Exp \xi \cdot u_k = \mu \cdot u_k = 0$ for $k \neq 1$, we have $\Exp \xi'_\perp = 0$. Then set \begin{equation} \label{eqn:sigmamuperp} \Sigma_{\mu_{\perp}} := \Exp [ \xi'_\perp (\xi'_\perp)^{\scalebox{0.6}{$\top$}} ] . \end{equation} This defines a $(d-1)$-dimensional covariance matrix, describing the covariances of the process projected onto the hyperplane orthogonal to the mean vector. Note that $\Sigma_{\mu_{\perp}}$ is non-negative definite and hence has a unique non-negative definite symmetric square root matrix $\Sigma_{\mu_{\perp}}^{1/2}$. It will be useful to have notation for $\Sigma_{\mu_{\perp}}^{1/2}$ extended back to a $d$-dimensional matrix, which we denote by $\tilde{\Sigma}_{\mu_\perp}^{1/2}$; specifically, we define \begin{equation}\label{eqn:sigmaextended} \tilde{\Sigma}_{\mu_\perp}^{1/2} := \left( \begin{array}{cccc} 1 & 0 & \hdots & 0\\ 0 & & & \\ \vdots & &\scalebox{1.3}{ $\Sigma_{\mu_{\perp}}^{1/2}$} & \\ 0 & & & \end{array}\right). \end{equation} We will need a new weak convergence result, and, just as we transformed the increments above, we need a corresponding mapping for the walk process itself; for this we use a $d$-dimensional analogue of the mapping used in \cite{wade2015convex}. Namely, for $n\in \mathbb{N}$, define $\psi_{n,\mu}:\mathbb{R}^d \rightarrow \mathbb{R}^d$ by giving the image of $x \in \mathbb{R}^d$ in Cartesian components: \begin{equation*} \psi_{n,\mu}(x)=\left( \frac{x \cdot u_1}{n\|\mu\|},\frac{x\cdot u_2}{\sqrt{n}},\ldots,\frac{x\cdot u_d}{\sqrt{n}}\right), \end{equation*} where $\{u_1,\ldots,u_d\}$ is the orthonormal basis defined above. We extend this, and subsequent similar notation, to sets in the usual way: $\psi_{n,\mu}(A)=\{\psi_{n,\mu}(x):x\in A\}$.
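The construction of $\Sigma_{\mu_\perp}$ can be sketched numerically (assuming NumPy; the drift vector, the QR-based completion of $\hat\mu$ to an orthonormal basis, and the standard Gaussian increments are all illustrative choices, under which the true $\Sigma_{\mu_\perp}$ is the identity):

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([2.0, 1.0, -1.0])
mu_hat = mu / np.linalg.norm(mu)

# Complete mu_hat to an orthonormal basis {u_1, ..., u_d}: QR of a full-rank
# matrix whose first column is mu_hat; the columns of U are the basis vectors.
M = np.eye(3)
M[:, 0] = mu_hat
U, _ = np.linalg.qr(M)
if U[:, 0] @ mu_hat < 0:
    U = -U  # fix the overall sign so that u_1 = +mu_hat

# Increments xi with mean mu; xi' = (xi.u_1, ..., xi.u_d); dropping the
# first coordinate gives xi'_perp, which has mean zero.
xi = rng.normal(size=(200000, 3)) + mu
xi_perp = (xi @ U)[:, 1:]

# Empirical version of Sigma_{mu_perp} = E[xi'_perp (xi'_perp)^T]; for
# standard Gaussian increments it is close to the 2x2 identity.
Sigma_perp = xi_perp.T @ xi_perp / xi_perp.shape[0]
assert np.allclose(U.T @ U, np.eye(3), atol=1e-10)
assert np.linalg.norm(xi_perp.mean(axis=0)) < 0.02
assert np.allclose(Sigma_perp, np.eye(2), atol=0.05)
```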
This mapping is the natural extension of its $2$-dimensional analogue: it rotates $\mathbb{R}^d$ so that $\hat{\mu}$ is mapped to the unit vector in the horizontal direction, and it scales space by a horizontal shrinking factor of $n\|\mu\|$, but now also by a factor of $\sqrt{n}$ in each of the $d-1$ directions orthogonal to the horizontal. We will also need some notation for the first component of the mapping, and for the $(d-1)$-vector containing the components orthogonal to the mean, so we define the following: \[\psi_{n,\mu}^1(x):= \frac{x\cdot u_1}{n\|\mu\|} \qquad {\rm and} \qquad \psi_{n,\mu}^{\perp}(x):= \left( \frac{x\cdot u_2}{\sqrt{n}},\ldots,\frac{x\cdot u_d}{\sqrt{n}}\right).\] Naturally, we also need to define a new limiting process, which combines the drift and the Brownian fluctuations in a space--time fashion. We denote this process by $\tilde b_d(t)$, defined as \begin{equation} \label{eqn:btilde} \tilde b_d (t) = (t, b_{d-1}(t)), {\rm \ for\ }t \in [0,1], \end{equation} where we write $b_{d-1}$ to emphasise that we mean $(d-1)$-dimensional Brownian motion. We use the notation $\tilde h_d^\Sigma := \hull \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde b_d[0,1]$, the hull of $\tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde b_d$ run for unit time. \begin{lemma}\label{lemma:driftmapping} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu \neq 0$ and satisfying \eqref{ass:Sigma}. Then, as $n\rightarrow \infty$, as elements of $(\mathfrak{C}^d_0, \rho_H)$, \[ \psi_{n,\mu}( \hull\{S_0,S_1,\ldots,S_n\} ) \Rightarrow \tilde{h}_d^\Sigma .
\] \end{lemma} \begin{proof} First, note that, since $\psi_{n,\mu}$ is an affine transformation, we have \[\psi_{n,\mu}(\hull\{S_0,\ldots,S_n\}) = \hull\left(\psi_{n,\mu} (\{S_0,\ldots,S_n\})\right).\] Noting that $A \mapsto \hull A$ is continuous from $(\mathfrak{K}^d_0,\rho_H)$ to $(\mathfrak{C}^d_0,\rho_H)$ by Lemma~\ref{lemma:hullHaus}, the continuous mapping theorem, Theorem~\ref{thm:mapweak}, means it is sufficient to show \begin{equation}\label{eqn:btildewc} \psi_{n,\mu} ( \left\{ S_0,\ldots,S_n\right\}) \Rightarrow \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde{b}_d [0,1] {\rm\ on\ } (\mathfrak{K}^d_0,\rho_H). \end{equation} In order to show this, we first define a new unscaled trajectory $W'_n(t):=S_{\lfloor nt \rfloor}$. Then we will show that \begin{equation} \label{eqn:drifttrajectories} \psi_{n,\mu}(W'_n)\Rightarrow \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde{b}_d, {\rm \ on\ } (\mathcal{D}_0^d,\rho_S). \end{equation} First, recall Theorem~\ref{thm:slutsky}: if $X_n, Y_n,$ and $X$ are elements of a metric space $(S,\rho)$ such that $X_n \Rightarrow X$ and $\rho(X_n,Y_n) \overset{\textup{p}}{\longrightarrow} 0$, then $Y_n \Rightarrow X$. Taking $X_n = (I, \psi_{n,\mu}^\perp(W'_n))$, where we recall $I$ is the identity map on $[0,1]$, $Y_n = \psi_{n,\mu}(W'_n)$ and $X = \tilde{\Sigma}_{\mu_\perp}^{1/2}\tilde{b}_d$, all elements of $(\mathcal{D}_0^d,\rho_S)$, it suffices to show that \begin{equation}\label{eqn:ethier1} \rho_S(\psi_{n,\mu}(W'_n),(I, \psi_{n,\mu}^\perp(W'_n))) \overset{\textup{p}}{\longrightarrow} 0, \end{equation} and \begin{equation}\label{eqn:ethier2} (I, \psi_{n,\mu}^\perp(W'_n)) \Rightarrow \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde{b}_d, {\rm \ on \ } (\mathcal{D}_0^d,\rho_S).
\end{equation} To prove \eqref{eqn:ethier1}, notice that $\psi_{n,\mu}^1 (W'_n)$ is the piecewise constant trajectory of a one-dimensional walk with $\|\mu\|>0$, now normalised by $\|\mu\|^{-1}n^{-1}$, so Theorem~\ref{thm:flln} applies and we have \begin{equation}\label{eqn:psi1} \lim_{n\rightarrow\infty} \psi_{n,\mu}^1(W'_n) = I \quad \text{a.s.} \end{equation} Using Lemma~\ref{lem:skor}, it is a simple exercise to see that, for $f \in \mathcal{C}_0^{d-1}$ and $g,h \in \mathcal{C}_0$, we have $\rho_S((f,g),(f,h)) \leq \rho_\infty((f,g),(f,h)) = \rho_\infty(g,h)$, which shows that \eqref{eqn:psi1} implies \eqref{eqn:ethier1}. For \eqref{eqn:ethier2}, note that $\psi_{n,\mu}^\perp W'_n$ is the piecewise constant trajectory of a $(d-1)$-dimensional walk with $\mu=0$, normalised by $n^{-1/2}$, so Theorem~\ref{thm:Donsker-dd} gives \begin{equation*} \psi_{n,\mu}^\perp (W'_n) \Rightarrow \Sigma_{\mu_{\perp}}^{1/2} b_{d-1} {\rm \ on \ } (\mathcal{D}_0^{d-1},\rho_S). \end{equation*} This implies that, for all bounded, continuous $f:\mathcal{D}_0^{d-1} \to \mathbb{R}$, \begin{equation}\label{eqn:finnonzero} \Exp[f(\psi_{n,\mu}^\perp (W'_n))] \to \Exp[f(\Sigma_{\mu_{\perp}}^{1/2} b_{d-1})], {\rm \ as \ } n \rightarrow \infty. \end{equation} Now consider $\Exp[g(I,\psi_{n,\mu}^\perp (W'_n))]$ for any bounded, continuous $g:\mathcal{D}_0^d \to \mathbb{R}$. Then, since $I$ is a non-random function, there exists a function $f$, chosen such that $f(\cdot)=g(I,\cdot)$, which is itself bounded and continuous on $\mathcal{D}_0^{d-1}$. By \eqref{eqn:finnonzero}, it follows that \[ \Exp[g(I,\psi_{n,\mu}^\perp (W'_n))]= \Exp[f(\psi_{n,\mu}^\perp (W'_n))] \to \Exp[f(\Sigma_{\mu_{\perp}}^{1/2} b_{d-1})] = \Exp[g(I,\Sigma_{\mu_{\perp}}^{1/2} b_{d-1})], \] and noting $g(I,\Sigma_{\mu_{\perp}}^{1/2} b_{d-1})=g(\tilde{\Sigma}_{\mu_\perp}^{1/2}\tilde{b}_d)$, we have proven \eqref{eqn:ethier2} and hence \eqref{eqn:drifttrajectories}.
The final step is to notice that Corollary~\ref{cor:HclSkor} shows that $f \mapsto \cl f[0, 1]$ is continuous from $(\mathcal{D}^d_0, \rho_S)$ to $(\mathfrak{K}^d_0, \rho_H)$, so the mapping theorem, Theorem~\ref{thm:mapweak}, with \eqref{eqn:drifttrajectories} shows that $\cl \psi_{n,\mu}(W'_n[0, 1]) \Rightarrow \cl \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde{b}_d[0,1]$. Observing that $\cl \psi_{n,\mu}(W'_n[0, 1]) = \psi_{n,\mu}(\{S_0,\ldots,S_n\})$ and $\cl \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde{b}_d[0,1] = \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde{b}_d[0,1]$, we have proven \eqref{eqn:btildewc} and so the proof is complete. \end{proof} \subsection{Applications to functionals of convex hulls} We consider three functionals defined on non-empty convex compact sets. First, let $\calW : \mathfrak{C}^d_0 \to \mathbb{R}_+$ denote the \emph{mean width} defined by \[ \calW ( A ) := \int_{\mathbb{S}^{d-1}} h_A (e) \textup{d} e ,\] where $h_A$ is the support function of $A$ as defined at~\eqref{eq:support}. Define the \emph{volume} functional by \[ \mathcal{V} (A ) := \mu_d (A) ,\] the $d$-dimensional Lebesgue measure of $A$. We also follow Gruber \cite[p.~104]{gruber} and define the \emph{surface area} functional by \[ \mathcal{S} ( A ) := \lim_{\varepsilon \downarrow 0} \left( \frac{ \mathcal{V} (A^\varepsilon) - \mathcal{V}(A)}{ \varepsilon } \right), \] a definition originally due to Minkowski; the limit exists by the Steiner formula of integral geometry \cite[Theorem~6.6]{gruber}, which states, for $S \in \mathfrak{C}^d$, \begin{equation}\label{eqn:steiner} \mu_d(S^\lambda) = \sum_{i=0}^d \binom{d}{i} Q_i(S) \lambda^i, \end{equation} where $\binom{x}{y}$ is the binomial coefficient with the convention $\binom{x}{0}=1$, and the $Q_i(S)$ are the quermassintegrals of $S$.
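For example, when $d=2$ and $S = [0,1]^2$ is the unit square, the parallel body $S^\lambda$ consists of the square itself, four rectangular strips of width $\lambda$ along the edges, and four quarter-discs of radius $\lambda$ at the corners, so that \[ \mu_2 ( S^\lambda ) = 1 + 4 \lambda + \pi \lambda^2 , \] which agrees with \eqref{eqn:steiner} upon taking $Q_0(S) = \mu_2(S) = 1$, $Q_1(S) = 2$ (half the perimeter, so that $\binom{2}{1} Q_1(S) = 4$) and $Q_2(S) = \pi$.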
For the random walk, we use the notation \[ \calW_n := \calW ( \hull \{S_0, \ldots, S_n\} ) ; ~~~ \mathcal{V}_n := \mathcal{V} ( \hull \{S_0, \ldots, S_n\} ) ; ~~~ \mathcal{S}_n := \mathcal{S} ( \hull \{S_0, \ldots, S_n\} ) .\] We first investigate basic continuity properties of these functionals. We define the Euler gamma function by \[ \Gamma (t) := \int_{0}^{\infty } x^{t-1} \mathrm{e}^{-x} \textup{d} x, \text{ for } t >0. \] \begin{lemma}\label{lemma:convexhullcont} Suppose that $A, B \in \mathfrak{C}^d_0$. Then \begin{align} \rho_E(\calW(A),\calW(B))&\leq 2\pi \rho_H(A,B)^{d-1}\ ;\label{eqn:WcurlCont}\\[0.2in] \rho_E(\mathcal{S}(A),\mathcal{S}(B))&\leq (d-1)\left(\frac{2\pi^{(d-1)/2}(\diam(B)+\rho_H(A,B))^{d-2}}{\Gamma(\frac{d-1}{2})}\right)^{d-1}\cdot \rho_H(A,B)^{d-1}; \label{eqn:LcurlCont}\\[0.2in] \rho_E(\mathcal{V}(A),\mathcal{V}(B))&\leq \pi^{d-1} \rho_H(A,B)^d \nonumber\\ &\quad + \max\limits_{S\in\{A,B\}}\left( \mathcal{S}(S)+\sum\limits_{i=2}^{d-1}2\pi\max\left\{\frac{\mathrm{diam}(S)}{2},1\right\}^d \rho_H(A,B)^{i-1}\right)\rho_H(A,B)\ .\label{eqn:AcurlCont} \end{align} \end{lemma} Before proving these inequalities, we note Cauchy's surface area formula and a further geometric lemma. Recall that if $\nu_{d}$ is the volume of the unit ball in $d$ dimensions, then Cauchy's surface area formula \cite[p.~106]{gruber} states that for $A \in \mathfrak{C}^d$, \[ \mathcal{S}(A)=\frac{1}{\nu_{d-1}}\int_{\mathbb{S}^{d-1}}\mu_{d-1}(A|u^{\perp})\textup{d} u, \] where $A|u^{\perp}$ denotes the projection of $A$ onto the $(d-1)$-dimensional subspace of $\mathbb{R}^d$ perpendicular to $u$. \begin{remark}\label{rem:Cauchy} When $d=2$, Cauchy's formula says $\mathcal{S}(A)=\calW(A)$. \end{remark} The geometric lemma is a bound on the Lebesgue measure of the difference of two convex sets.
\begin{lemma}\label{lemma:lebmeashulldiff} Consider two sets $S_1,S_2 \in\mathfrak{C}^{d}_0$ with $\rho_H(S_1,S_2)=r$. Then \[\mu_{d}(S_1 \backslash S_2)\leq \frac{2\pi^{d/2}(\diam(S_2)+r)^{d-1}}{\Gamma (\frac{d}{2})}\cdot r.\] \end{lemma} \begin{proof} First we recall \eqref{eqn:steiner} and note that $Q_0(S)=\mu_d(S)$; for a comprehensive discussion of quermassintegrals see \cite[Ch.~6]{gruber}. We also note a further result of Steiner (see \cite[Theorem~6.14]{gruber}), which states, for $S \in \mathfrak{C}^d$, \begin{equation*} \mu_{d-1}(\partial (S^\lambda)) = d \sum_{i=0}^{d-1} \binom{d-1}{i} Q_{i+1}(S) \lambda^i = \sum_{i=1}^{d} i \binom{d}{i} Q_{i}(S) \lambda^{i-1}. \end{equation*} Comparing terms in the two summations and using the fact that $Q_0(S)=\mu_d(S)$, it is a simple exercise to see that \begin{equation}\label{eqn:volSAdifference} \mu_d(S^\lambda) - \mu_d(S) = \lambda \sum_{i=1}^d \binom{d}{i} Q_{i}(S)\lambda^{i-1} \leq \lambda \mu_{d-1}(\partial (S^\lambda)). \end{equation} Now, since $\rho_H(S_1,S_2)=r$, for any $s>r$ we have $S_1 \subseteq S_2^s$, so $S_1 \setminus S_2 \subseteq S_2^s \setminus S_2$. It follows from \eqref{eqn:volSAdifference} that \begin{equation}\label{eqn:setlem1} \mu_{d}(S_1\setminus S_2)\leq \mu_d(S_2^s \setminus S_2) = \mu_d(S_2^s) - \mu_d(S_2) \leq s \mu_{d-1}(\partial(S_2^s)). \end{equation} Now, recall that $\mathbb{B}^d$ is the $d$-dimensional unit ball. Notice that it follows from Cauchy's formula that for convex sets $A$ and $B$ with $A\subseteq B$ we have $\mathcal{S}(A)\leq \mathcal{S}(B)$; so, because $S_2^s \subseteq (\diam(S_2)+s)\mathbb{B}^d$, we have \begin{equation}\label{eqn:setlem2} s\mu_{d-1}(\partial (S_2^s))\leq s\mu_{d-1}\big(\partial\big((\diam(S_2)+s)\mathbb{B}^d\big)\big). \end{equation} Since $s>r$ was arbitrary, the statement of the lemma follows from \eqref{eqn:setlem1}, \eqref{eqn:setlem2} and the surface area formula for $\mathbb{B}^d$, see for example \cite[p.~136]{sommerville}.
\end{proof} \noindent Now we turn to the proof of Lemma \ref{lemma:convexhullcont}. \begin{proof}[Proof of Lemma~\ref{lemma:convexhullcont}] We first prove \eqref{eqn:WcurlCont}. By Cauchy's formula and the triangle inequality, $$|\calW(A)-\calW(B)|=\left|\int_{\mathbb{S}^{d-1}}(h_A(e)-h_B(e))\textup{d} e\right|\leq 2\pi \left(\sup\limits_{e\in \mathbb{S}^{d-1}}|h_A(e)-h_B(e)|\right)^{d-1},$$ which holds because $2\pi$ is the constant when $d=2$, and all other constants are less than this. This inequality with \eqref{eqn:Haussupport} gives \eqref{eqn:WcurlCont}. Next we consider \eqref{eqn:LcurlCont}. Suppose, without loss of generality, that $\mathcal{S}(A) \geq \mathcal{S}(B)$. Then, by Cauchy's surface area formula and the formula for the volume of $\mathbb{B}^d$ (see \cite[p.~136]{sommerville}), \begin{align*} \rho_E(\mathcal{S}(A),\mathcal{S}(B))&=\frac{\Gamma(\frac{d+1}{2})}{\pi^{(d-1)/2}} \int_{\mathbb{S}^{d-1}}\left( \mu_{d-1}(A|u^{\perp})-\mu_{d-1}(B|u^{\perp})\right)\textup{d} u\\ &\leq \frac{\Gamma(\frac{d+1}{2})}{\pi^{(d-1)/2}} \int_{\mathbb{S}^{d-1}}\left( \mu_{d-1}(A|u^{\perp})-\mu_{d-1}(A \cap B|u^{\perp})\right) \textup{d} u\\ &\leq \frac{\Gamma(\frac{d+1}{2})}{\pi^{(d-1)/2}} \int_{\mathbb{S}^{d-1}}\mu_{d-1}(A\setminus B|u^{\perp})\textup{d} u.\\ \intertext{Now, noting that $A\setminus B | u^{\perp} = (A|u^{\perp}) \setminus (B| u^{\perp})$, that for a set $B$, $\diam(B|u^{\perp})\leq \diam(B)$, and that $\rho_H(A|u^{\perp},B|u^{\perp})\leq \rho_H(A,B)=r$, we can apply Lemma~\ref{lemma:lebmeashulldiff} to get} \rho_E(\mathcal{S}(A),\mathcal{S}(B))&\leq \frac{\Gamma(\frac{d+1}{2})}{\pi^{(d-1)/2}} \int_{\mathbb{S}^{d-1}}\frac{2\pi^{(d-1)/2}(\diam(B)+r)^{d-2}\cdot r}{\Gamma(\frac{d-1}{2})}\textup{d} u\\ & = \frac{\Gamma(\frac{d+1}{2})}{\pi^{(d-1)/2}} \cdot \frac{2\pi^{(d-1)/2}}{\Gamma(\frac{d-1}{2})}\cdot \left(\frac{2\pi^{(d-1)/2}(\diam(B)+r)^{d-2}\cdot r}{\Gamma(\frac{d-1}{2})}\right)^{d-1}\\ & =
(d-1)\left(\frac{2\pi^{(d-1)/2}(\diam(B)+r)^{d-2}}{\Gamma(\frac{d-1}{2})}\right)^{d-1}\cdot r^{d-1} \end{align*} and the result follows. Finally, we consider \eqref{eqn:AcurlCont}. Set $r=\rho_H(A,B)$. Then, by \eqref{eqn:Hausinf}, $A\subseteq B^s$ for any $s>r$. Hence, \begin{align*} \mathcal{V}(A) &\leq \mathcal{V}(B^s)\\ &\leq \mathcal{V}(B) + \mathcal{S}(B)s + \pi^{d-1}s^d + \sum_{i=2}^{d-1}Q_i(B)s^i\ , \end{align*} by the Steiner formula \eqref{eqn:steiner}, where the $Q_i$ are the quermassintegrals. However, as discussed in \cite[p.~109]{gruber}, the quermassintegrals can be expressed as means of the $(d-i)$-dimensional volumes of the projections of the set $B$ onto $(d-i)$-dimensional subspaces. Thus we can establish the crude bound $Q_i \leq 2\pi \left(\max\left\{\frac{\diam B}{2},1\right\}\right)^d$ for all $i\in\{2,\ldots,d-1\}$, and so each $Q_i$ is finite because $B$ is compact (for fixed $d$). By symmetry we can get a similar inequality starting from $\mathcal{V}(B)$, and since $s>r$ was arbitrary, \eqref{eqn:AcurlCont} follows. \end{proof} Now that we have the weak convergence result, the continuity of the relevant functionals, and the mapping theorem, we can return to the weak convergence of the functionals. The $2$-dimensional statements for the surface area and volume were previously studied in \cite{wade2015convex}. \begin{theorem}\label{thm:ddimLcurlAcurlzerodrift} Suppose we have the walk defined at \eqref{ass:walk} with $\mu=0$ and satisfying \eqref{ass:Sigma}, and let $\calW_n$, $\mathcal{S}_n$ and $\mathcal{V}_n$ be the mean width, surface area and volume, respectively, of the hull of the $d$-dimensional random walk.
Then, as $n\rightarrow \infty$, \begin{align*} n^{-1/2} \calW_n &\overset{\textup{d}}{\longrightarrow} \calW \left( \Sigma^{1/2} h_d \right),\\ n^{-(d-1)/2}\mathcal{S}_n &\overset{\textup{d}}{\longrightarrow} \mathcal{S}\left(\Sigma^{1/2}h_d\right),\\ n^{-d/2} \mathcal{V}_n &\overset{\textup{d}}{\longrightarrow} \mathcal{V}\left(\Sigma^{1/2}h_d\right)=v_d \sqrt{\det \Sigma} , \end{align*} where $v_d$ is the volume of $h_d$. \begin{proof} Notice that Theorem \ref{thm:hullCLT} gives \[n^{-1/2}\mathrm{hull}\{S_0,S_1,\ldots,S_n\}\Rightarrow \Sigma^{1/2}h_d, {\rm \ on\ } (\mathfrak{C}^d_0,\rho_H),\] where $h_d$ is the hull of the $d$-dimensional Brownian motion starting at $b_d(0)=0$. Using this fact and the continuity statements of Lemma \ref{lemma:convexhullcont}, it only remains to observe that rescaling the walk by $n^{-1/2}$ in all directions rescales $\calW$ by $n^{-1/2}$, $\mathcal{S}$ by $n^{-(d-1)/2}$ and $\mathcal{V}$ by $n^{-d/2}$, so the continuous mapping theorem yields the stated limits. The additional equality for the volume functional follows since the Jacobian of the transformation $x \mapsto \Sigma^{1/2} x$ is $\sqrt{\det \Sigma}$. \end{proof} In the special case $d=2$, $\calL_n := \mathcal{S}_n$ is the perimeter length of $\hull \{S_0, \ldots, S_n\}$; Cauchy's formula also confirms that $\calL_n$ is equal to $\calW_n$ in this case, see Remark~\ref{rem:Cauchy}. \begin{theorem} \label{thm:perimeter-lln} Suppose that we have a random walk as defined at~\eqref{ass:walk}. Then \[n^{-1} \calL_n \overset{\textup{a.s.}}{\longrightarrow} 2 \| \mu \| .\] \end{theorem} \begin{remark} This result was proven in \cite{mcrwade} \lq directly\rq~from the strong law of large numbers and Cauchy's surface area formula.
Snyder and Steele~\cite{snyder1993convex} had previously obtained the result under the stronger condition $\Exp ( \| \xi \|^2 ) < \infty$ as a consequence of an upper bound on $\Var \calL_n$ deduced from Steele's version of the Efron--Stein inequality. In fact, Snyder and Steele state the result only for the case $\mu \neq 0$, but their proof works equally well when $\mu = 0$. \end{remark} \begin{proof} Using $\calL_n = \calW_n$ in the case $d=2$, we combine the almost-sure convergence of Theorem~\ref{thm:hullFLLN}, the continuity of $\calW$ from Lemma~\ref{lemma:convexhullcont}, and the continuous mapping theorem, Theorem~\ref{thm:mapping}, to establish $n^{-1} \calL_n \overset{\textup{a.s.}}{\longrightarrow} \calW(I_\mu [0,1])$. Without loss of generality, we assume $\mu = \|\mu\| e_{\pi/2}$ in order to calculate the right-hand side explicitly: \begin{align*} \calW(I_\mu [0,1]) = \int_{\mathbb{S}^1} h_{I_\mu[0,1]}(e)\textup{d} e &= \int_0^\pi (0, \|\mu\|) \cdot (\cos \theta, \sin \theta) \textup{d} \theta + \int_\pi^{2\pi} (0,0)\cdot (\cos \theta,\sin \theta) \textup{d} \theta\\ &= -\|\mu\|\cos \pi + \|\mu\| \cos 0 = 2\|\mu\|.\qedhere \end{align*} \end{proof} We finish this section with the weak convergence statement for the $d$-dimensional volume of the hull of the walk with drift. This was also studied in \cite{wade2015convex} for the specific case $d=2$. \begin{theorem}\label{thm:ddimAcurlwithdrift} Suppose we have the walk defined at \eqref{ass:walk} with $\|\mu\| > 0$ and satisfying \eqref{ass:Sigma}, and let $\mathcal{V}_n$ be the volume of the hull of the $d$-dimensional random walk.
Then, as $n\rightarrow \infty$, \[n^{-(d+1)/2} \mathcal{V}_n\overset{\textup{d}}{\longrightarrow} \|\mu\| \sqrt{\det \Sigma_{\mu_\perp}} \tilde{v}_d,\] where $\tilde{v}_d$ is the volume of $\tilde{h}_d := \hull \tilde{b}_d[0,1]$, with $\tilde{b}_d[0,1]=\left\{\tilde{b}_d(t):t\in [0,1]\right\}$ for $\tilde{b}_d(t)$ as described at \eqref{eqn:btilde}, and with $\Sigma_{\mu_\perp}$ as described at \eqref{eqn:sigmamuperp}. \end{theorem} \begin{proof} Recall the definition of $\tilde{\Sigma}_{\mu_\perp}^{1/2}$ from~\eqref{eqn:sigmaextended}. Then note that $\hull \tilde{\Sigma}_{\mu_\perp}^{1/2} \tilde{b}_d[0,1] = \tilde{\Sigma}_{\mu_\perp}^{1/2} \hull \tilde{b}_d[0,1]$, because left multiplication by $\tilde{\Sigma}_{\mu_\perp}^{1/2}$ is a linear transformation, and that $\mathcal{V}(\tilde{\Sigma}_{\mu_\perp}^{1/2} A) = \sqrt{\det\tilde{\Sigma}_{\mu_\perp}}\,\mathcal{V}(A) = \sqrt{\det\Sigma_{\mu_\perp}}\,\mathcal{V}(A)$, because $\sqrt{\det\tilde{\Sigma}_{\mu_\perp}}$ is the Jacobian of the transformation. Moreover, since $\psi_{n,\mu}$ is a rotation followed by a diagonal scaling with entries $(n\|\mu\|)^{-1}, n^{-1/2}, \ldots, n^{-1/2}$, \begin{equation}\label{eqn:voltransform} \mathcal{V}(\psi_{n,\mu}(A))=n^{-(d+1)/2}\|\mu\|^{-1} \mathcal{V}(A) \end{equation} for $A\in \mathfrak{C}^d_0$. Then we use Lemma \ref{lemma:driftmapping}, the continuous mapping theorem, and the continuity of the functional, Lemma \ref{lemma:convexhullcont}, in the usual way with~\eqref{eqn:voltransform} to complete the proof. \end{proof} \section{Centre of mass}\label{sec:COM} \subsection{Introduction} \label{introduction} Given a random walk, as defined at \eqref{ass:walk}, we define the centre of mass process $(G_n, n \in \mathbb{Z}_+)$ by $G_n := \frac{1}{n}\sum_{i=1}^{n}{S_i}$ for all $n \ge 1$, with the convention $G_0:=0$. Random walks can be used to model physical polymer molecules \cite{CMVW,RC}, in which case the centre of mass is of obvious physical relevance.
The random walk can also be used to model animal behaviour, and the motion of both macroscopic and microscopic organisms \cite{PH, PB3}. In this context the centre of mass is a natural summary statistic of an animal's roaming behaviour. We apply the theory of the functional law of large numbers and the functional central limit theorem to state convergence results for the centre of mass process. To give an idea of the behaviour exhibited by this functional, we have performed some simulations, which are exhibited in Figure~\ref{sim:comlln} and Figure~\ref{sim:comclt}. The former shows a comparison of the centre of mass and the random walk itself under the scaling $1/n$, for two differing values of $n$ and of the mean drift $\mu$. We can see that the respective centre of mass processes exhibit the behaviour described in Theorem~\ref{comt1} and Theorem \ref{comt3}, specifically that they converge to a straight line with slope $\mu /2$ as $n \to \infty$. The second simulation also compares the centre of mass processes to random walks, but this time under the scaling $1 / \sqrt{n}$; we again show the two different values of $n$, but fix $\mu=0$ in both cases. These trajectories show Theorem \ref{comt2} and Theorem \ref{comt4} in action.
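A minimal numerical check in the same spirit (a sketch assuming NumPy; the drift, dimension and sample size are illustrative choices) verifies the identity $G_n = \sum_{i=1}^n \frac{n-i+1}{n}\xi_i$, obtained by exchanging the order of summation in the definition of $G_n$, and illustrates that $G_n/n$ is close to $\mu/2$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, mu = 20000, np.array([1.0, -0.5])

xi = rng.normal(size=(n, 2)) + mu      # increments xi_1, ..., xi_n
S = np.cumsum(xi, axis=0)              # partial sums S_1, ..., S_n
G = S.mean(axis=0)                     # G_n = (1/n) sum_{i=1}^n S_i

# Exchanging the order of summation gives G_n as a weighted sum of increments.
w = (n - np.arange(1, n + 1) + 1) / n
assert np.allclose(G, (w[:, None] * xi).sum(axis=0))

# First-order behaviour: G_n / n is close to mu / 2.
assert np.linalg.norm(G / n - mu / 2) < 0.05
```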
\begin{figure}[!ht] \centering \begin{multicols}{2} \includegraphics[width=245pt]{XnCSG} \caption{Sample paths of $X_n(t)$, the lighter colour, and $G_{\lfloor nt \rfloor}/n$, the darker colour, for the cases $n=100$ in blue with $\mu=-1$ and $n=10000$ in red with $\mu=1$.} \label{sim:comlln} \includegraphics[width=245pt]{YnCSG} \caption{Sample paths of $Y_n(t)$, the lighter colour, and $G_{\lfloor nt \rfloor}/\sqrt{n}$, the darker colour, for the cases $n=100$ in blue and $n=10000$ in red, both with $\mu=0$.} \label{sim:comclt} \end{multicols} \end{figure} Throughout this section it is more convenient to work with the metric $\rho_S^\circ$ instead of $\rho_S$; thus, the continuity lemmas, Lemma~\ref{coml1} and Lemma~\ref{coml2}, are stated in terms of $\rho_S^\circ$. However, in light of Proposition~\ref{prop:equivmetrics} and Remark~\ref{rem:wcequivalence}, we state the results in the theorems as convergence under the standard metric $\rho_S$, despite the proofs using $\rho_S^\circ$. \subsection{Law of large numbers and central limit theorem} The aim of this section is to establish the following limit theorems. The first result is a law of large numbers. A direct proof of the case $t=1$, using the strong law for the random walk, Theorem~\ref{thm:SLLN}, is given in \cite{lowade}. In Section~\ref{sec:comflt} we extend these results to functional limit theorems. \begin{theorem} \label{comt1} Consider the random walk defined at~\eqref{ass:walk}. Let $t \in [0,1]$. Then, \[ \frac{1}{n} G_{\lfloor n t \rfloor} \overset{\textup{a.s.}}{\longrightarrow} \frac{\mu t }{2}, \text{ as } n \to \infty. \] \end{theorem} The next result is a central limit theorem. \begin{theorem} \label{comt2} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$ and satisfying \eqref{ass:Sigma}. Let $t \in [0,1]$.
Then, \[ \frac{1}{\sqrt{n}}G_{\lfloor nt \rfloor} \overset{\textup{d}}{\longrightarrow} \mathcal{N} \left(0,\frac{t\Sigma}{3} \right) , \text{ as } n \to \infty. \] \end{theorem} Again, an alternative proof, using the representation \begin{equation} \label{eq:weighted-sum} G_n = \sum_{i=1}^n \left( \frac{n-i+1}{n} \right) \xi_i \end{equation} and the central limit theorem for triangular arrays, is given in \cite{lowade}. \begin{remark} The central limit theorem for $G_n$ is similar to the central limit theorem for $S_n$, Theorem \ref{thm:CLT}, but with a factor of $1/3$ in the variance; the recurrence and transience behaviour, however, is very different, see \cite{lowade}. \end{remark} The method of proof for these two theorems is to view the centre of mass as an appropriate functional from $(\mathcal{D}^d, \rho_S^\circ)$ to $(\mathbb{R}^d, \rho_E)$ and then apply our functional limit theorem results. First, for all $f \in \mathcal{M}^d_0$, we define for $t \in [0,1]$ a functional $g_t: \mathcal{M}^d_0 \to \mathbb{R}^d$ given by $g_0(f) :=f( 0 ) = 0$ and \[ g_t(f) :=\frac{1}{t}\int_0^t f(s) \textup{d} s , \text{ for } t \in (0,1] . \] Note that the function $g_t$ is \emph{homogeneous} in the sense that $g_t (c f) = c g_t (f)$ for all $c \in \mathbb{R}$ and all $t \in [0,1]$. The slight complication is that $g_t (X_n')$, for example, is not exactly equal to $n^{-1} G_{\lfloor nt \rfloor}$. To deal with this, we will need the following estimate, both here and when we consider functional limit theorems below. We write $S_{\lfloor n \cdot \rfloor}$ to represent the function $t \mapsto S_{\lfloor n t \rfloor}$ over $t \in [0,1]$. \begin{lemma} \label{coml5} For $n \in \mathbb{N}$ and $t \in [0,1]$, define $\Delta_n (t) := g_t ( S_{\lfloor n \cdot \rfloor} ) - G_{\lfloor n t \rfloor}$. Then for any $\alpha >0$, we have $n^{-\alpha} \| \Delta_n \|_\infty \overset{\textup{a.s.}}{\longrightarrow} 0$ as $n \to \infty$.
\end{lemma} \begin{proof} First note that $\Delta_n (0) = S_0 - G_0 = 0$ since $S_0 = G_0 = 0$. Now for $t>0$, we have \begin{align*} g_t( S_{\lfloor n \cdot \rfloor} ) &= \frac{1}{t}\int_0^t S_{\lfloor ns \rfloor} \textup{d} s \\ &= \frac{1}{t} \left( \sum_{k=0}^{\lfloor nt \rfloor -1} \int_{k/n}^{(k+1)/n} S_{\lfloor ns \rfloor} \textup{d} s + \int_{\lfloor nt \rfloor/n}^t S_{\lfloor ns \rfloor} \textup{d} s \right) \\ &= \frac{1}{nt} \left[ \sum_{k=0}^{\lfloor nt \rfloor -1}S_k + (nt- \lfloor nt \rfloor )S_{\lfloor nt \rfloor} \right] \\ &= \frac{\lfloor nt \rfloor}{nt} G_{\lfloor nt \rfloor} - \frac{1}{nt} S_{\lfloor nt \rfloor} + \frac{nt-\lfloor nt \rfloor }{nt} S_{\lfloor nt \rfloor} \\ &= G_{\lfloor nt \rfloor} - \frac{nt- \lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} + \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor}. \end{align*} Hence for $t >0$ we have \begin{equation*} \Delta_n(t) = \left( \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor} \right) - \left( \frac{nt- \lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} \right). \end{equation*} Now, since $-1 \leq nt - \lfloor nt \rfloor -1 \leq 0$ and $S_0 = 0$ we have \begin{align*} \sup_{0 < t \le 1} \left\| \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor} \right\| & \leq \max_{1 \leq k \leq n-1} \sup_{t \in \left[ \frac{k}{n} , \frac{k+1}{n} \right]} \frac{1}{nt} \| S_{\lfloor n t \rfloor} \| \\ & \leq \sup_{1 \leq k \leq n} \frac{1}{k} \| S_k \| .\end{align*} Similarly, \[ \sup_{0 < t \le 1} \left\| \frac{nt-\lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} \right\| \leq \sup_{1 \leq k \leq n} \frac{1}{k} \| G_k \| . \] The strong law of large numbers for $S_n$, Theorem~\ref{thm:SLLN}, implies that $\| S_n \| \leq n \left(1 + \| \mu \| \right)$ for all $n \geq N$, with $\Pr (N < \infty) =1$.
Moreover, \[ \| G_n \| \leq \frac{1}{n} \sum_{i=1}^n \| S_i \| \leq \frac{1}{n} \sum_{i=1}^N \| S_i \| + \frac{1}{n} \sum_{i=N}^n i \left(1 + \| \mu \| \right) \leq \frac{1}{n} \sum_{i=1}^N \| S_i \| + n \left(1 + \| \mu \| \right) . \] Hence \[ \limsup_{n \to \infty} n^{-\alpha} \sup_{0 < t \leq 1} \left\| \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor} \right\| \leq \limsup_{n \to \infty} n^{-\alpha} \left( \sup_{1 \leq k \leq N} \| S_k \| + \left(1 + \| \mu \| \right) \right) = 0 , ~\text{a.s.,} \] and \[ \limsup_{n \to \infty} n^{-\alpha} \sup_{0 < t \leq 1} \left\| \frac{nt- \lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} \right\| \leq \limsup_{n \to \infty} n^{-\alpha} \left( \sum_{i=1}^N \| S_i \| + \left(1 + \| \mu \| \right) \right) = 0 , ~\text{a.s.} \] This completes the proof. \end{proof} To prove Theorems~\ref{comt1} and~\ref{comt2}, we will need to show that the functional $g_t$ is continuous. This is the content of the next result. \begin{lemma} \label{coml1} For any $t \in [0,1]$, the functional $f \mapsto g_t(f)$ is continuous as a map from $(\mathcal{D}^d_0, \rho_S^\circ)$ to $(\mathbb{R}^d, \rho_E)$. \end{lemma} \begin{proof} Consider $f_1, f_2 \in \mathcal{D}^d_0$. For $t=0$ we have $\rho_E ( g_0 (f_1) , g_0 (f_2 ) ) = \rho_E (0,0) = 0 \leq \rho_S^\circ ( f_1, f_2)$. Thus fix $t \in (0,1]$ and let $\lambda \in \Lambda$. Then, by the triangle inequality, \begin{align} \rho_E (g_t(f_1),g_t(f_2)) & = \left\| \frac{1}{t} \int_0^t\left[f_1(s)-f_2(s)\right] \textup{d} s\right\| \nonumber\\ &\leq \left\|\frac{1}{t}\int_0^t\left[f_1(s)-f_2\circ \lambda (s)\right] \textup{d} s\right\| + \left\|\frac{1}{t}\int_0^t\left[f_2 \circ \lambda (s)-f_2(s)\right] \textup{d} s\right\| \nonumber\\ & \leq \left\| f_1 - f_2 \circ \lambda \right\|_\infty + \left\|\frac{1}{t}\int_0^t\left[f_2 \left( \lambda (s)\right) -f_2(s)\right] \textup{d} s\right\| .
\label{eqn:comEbound} \end{align} Then for the second term we have \begin{align*} \left\| \frac{1}{t} \int_0^t [ f_2 (\lambda (s)) - f_2 (s) ] \textup{d} s \right\| & \leq \left\| \frac{1}{t} \int_0^t f_2 (\lambda (s)) \textup{d} s - \frac{1}{t} \int_0^{\lambda(t)} f_2 (s) \textup{d} s \right\| + \frac{1}{t} \int_{t \wedge \lambda(t)}^{t \vee \lambda(t)} \| f_2 (s) \| \textup{d} s \\ & \leq \left\| \frac{1}{t} \int_0^t f_2 (\lambda (s)) \textup{d} s - \frac{1}{\lambda(t)} \int_0^{\lambda(t)} f_2 (s) \textup{d} s \right\| \\ & {} \qquad {} + \left| \frac{1}{t} - \frac{1}{\lambda(t)} \right| \int_0^{\lambda(t)} \| f_2 (s) \| \textup{d} s + \frac{| t - \lambda(t) |}{t} \| f_2 \|_\infty . \end{align*} Now in the second integral on the right-hand side put $s = \lambda(u)$, so that $\textup{d} s = \lambda'(u) \textup{d} u$ for almost every $u \in (0,1)$, and then \begin{align} \left\| \frac{1}{t} \int_0^t [ f_2 (\lambda (s)) - f_2 (s) ] \textup{d} s \right\| & \leq \left\| \frac{1}{t} \int_0^t f_2 (\lambda (s)) \textup{d} s - \frac{1}{t} \int_0^{t} f_2 ( \lambda(u)) \lambda'(u) \textup{d} u \right\| \nonumber\\ & {} \qquad {} + \frac{| \lambda(t) - t |}{t} \| f_2 \|_\infty + \| f_2 \|_\infty c (\lambda) \nonumber\\ & \leq \frac{1}{t} \int_0^t \| f_2 (\lambda (s) ) \| | 1 - \lambda'(s) | \textup{d} s + \frac{| \lambda(t) - t |}{t} \| f_2 \|_\infty + \| f_2 \|_\infty c (\lambda) \nonumber\\ & \leq 3 \| f_2 \|_\infty c(\lambda) ,\label{eqn:combound2} \end{align} using Lemma~\ref{lem:estlam} several times. Then combining \eqref{eqn:comEbound} and \eqref{eqn:combound2}, we get \begin{align*} \rho_E (g_t(f_1),g_t(f_2)) &\le \left\| f_1 - f_2 \circ \lambda \right\|_\infty + 3 \|f_2\|_\infty c ( \lambda ) . \end{align*} Let $\varepsilon >0$. Recall that $\rho_S^\circ(f,g) \coloneqq \inf_{\lambda \in \Lambda} \left\lbrace \left\| \lambda \right\|^\circ \vee \left\| f - g\circ \lambda\right\|_{\infty} \right\rbrace$.
Now $c (\lambda ) \to 0$ as $\| \lambda \|^\circ \to 0$, so if $\rho_S^\circ (f_1, f_2) \to 0$ we can find $\lambda \in \Lambda$ such that $\| f_1 - f_2 \circ \lambda \|_\infty < \varepsilon$ and $\| \lambda \|^\circ$ is small enough that $\| f_2 \|_\infty c (\lambda ) < \varepsilon$ too (note that $\| f_2 \|_\infty < \infty$). Then $\rho_E (g_t(f_1),g_t(f_2)) \leq 4 \varepsilon$, and since $\varepsilon>0$ was arbitrary, the result follows. \end{proof} \noindent Now we are ready to prove Theorem \ref{comt1}. \begin{proof}[Proof of Theorem \ref{comt1}] First, we know that $X'_n$ converges to $I_\mu$ a.s., where $I_\mu(t)=\mu t$, as defined in Theorem~\ref{thm:flln}. With Lemma \ref{coml1}, we can apply the continuous mapping theorem, Theorem~\ref{thm:mapping}, to the function $g_t$; thus we get, for $t \in [0,1]$, \begin{equation} g_t(X'_n) \to g_t(I_\mu), \quad \text{a.s.} \label{cal10} \end{equation} With a quick calculation we see that, for $t>0$, \begin{equation} g_t(I_\mu) = \frac{1}{t}\int_0^t \mu s \textup{d} s = \frac{\mu t}{2} = \frac{1}{2} I_\mu(t), \label{cal1} \end{equation} and $g_0(I_\mu)=I_\mu (0)=0$. Now we see that \begin{equation*} \frac{1}{n} G_{\lfloor nt \rfloor} = \frac{1}{n} g_t \left( S_{\lfloor n \cdot \rfloor} \right) -\frac{1}{n} \Delta_n(t) = g_t(X_n') - \frac{1}{n} \Delta_n(t). \end{equation*} Now using the convergence in \eqref{cal10} and equation \eqref{cal1}, together with the $\alpha=1$ case of Lemma~\ref{coml5}, we complete the proof. \end{proof} \noindent The next theorem to prove is the central limit theorem. Recall that we impose the extra assumption that $\mu = 0$, which is needed for a meaningful analysis. This time we use the same function $g_t(f)$, but we take \begin{equation*} Y'_n(t) := \frac{1}{\sqrt{n}}S_{\lfloor nt \rfloor} \end{equation*} instead of $X'_n$ to get the right scaling. We shall rewrite $g_t(b_d)$ in the following stochastic integral form to simplify later calculations.
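Before doing so, the $t\Sigma/3$ variance predicted by Theorem \ref{comt2} can be illustrated numerically. The following Python sketch (an illustration only, not part of the formal development) takes $d=1$ with centred $\pm 1$ increments, so that $\mu = 0$ and $\Sigma = 1$, and estimates $\Var(G_n/\sqrt{n})$ at $t=1$, which should be close to $1/3$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 5000

# d = 1 random walk with centred +/-1 increments: mu = 0, Sigma = 1.
xi = rng.choice([-1.0, 1.0], size=(trials, n))
S = np.cumsum(xi, axis=1)          # S_1, ..., S_n for each trial
G = S.mean(axis=1)                 # centre of mass G_n = (1/n) * sum_i S_i

# Theorem comt2 with t = 1 predicts Var(G_n / sqrt(n)) to be near Sigma / 3.
v = float(np.var(G / np.sqrt(n)))
print(v)
```

By the representation \eqref{eq:weighted-sum}, the exact value for finite $n$ is $(n+1)(2n+1)/(6n^2)$, which is within $10^{-3}$ of $1/3$ for $n = 1000$.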
\begin{lemma} \label{sitrick} For any $t \ge 0$, \begin{equation*} g_t(b_d) = \int_0^t \left( 1 - \frac{s}{t} \right) \textup{d} b_d(s). \end{equation*} \end{lemma} \begin{remark} This is the continuous analogue of equation~\eqref{eq:weighted-sum}. \end{remark} \begin{proof} We apply the integration by parts formula for stochastic calculus, see e.g.~\cite[p.~129]{medvegyev}, to get \begin{align*} \int_0^t b_d(s) \textup{d} s + \int_0^t s \textup{d} b_d(s) = t b_d(t) = \int_0^t t \textup{d} b_d(s). \end{align*} Dividing by $t>0$ and rearranging, we get \begin{align*} \frac{1}{t}\int_0^t b_d(s) \textup{d} s = \int_0^t \left(1- \frac{s}{t}\right) \textup{d} b_d(s). \end{align*} Hence we obtain the statement we need from the definition of $g_t$. \end{proof} Now we are ready to prove Theorem \ref{comt2}. \begin{proof}[Proof of Theorem \ref{comt2}] Recall that Donsker's theorem implies that $Y'_n \Rightarrow \Sigma^{1/2}b_d$ on $(\mathcal{D}^d_0, \rho_S^\circ)$, where $\Sigma$ is the covariance matrix of $\xi$ and $b_d$ is the standard Brownian motion in $d$ dimensions. Using Lemma~\ref{coml1} and the continuous mapping theorem, Theorem~\ref{thm:mapweak}, we get, for $t \in [0,1]$, \begin{equation} g_t(Y'_n) \overset{\textup{d}}{\longrightarrow} g_t\left( \Sigma^{1/2}b_d \right). \label{gc1} \end{equation} Using Lemma~\ref{sitrick}, we see that \begin{align*} g_t\left(\Sigma^{1/2}b_d\right) = \Sigma^{1/2} \int_0^t \left(1- \frac{s}{t}\right) \textup{d} b_d(s), \end{align*} and as the integrand is deterministic, the integral is a Wiener integral, which is normally distributed, see \cite[p.~11]{kuo}; hence $g_t\left( \Sigma^{1/2}b_d \right)$ is also normal. So all that is left to do now is to find the variance of $g_t\left( \Sigma^{1/2}b_d \right)$.
We first consider the expression \begin{equation*} B(t) = \int_0^t b_d(s) \textup{d} s . \end{equation*} We calculate that \begin{align} \Var (B(t))= \Exp \left[B(t)B(t)^{\scalebox{0.6}{$\top$}} \right] &= \Exp \left[\int_0^t b_d(r)\textup{d} r \times \int_0^t b_d^{\scalebox{0.6}{$\top$}} (s) \textup{d} s \right] \nonumber \\ &= \Exp \left[\int_0^t \int_0^t b_d(r)b_d^{\scalebox{0.6}{$\top$}} (s) \textup{d} r\textup{d} s \right] \nonumber \\ &= \int_0^t \int_0^t \Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}} (s)\right] \textup{d} r\textup{d} s. \label{b1} \end{align} The change of order of the expectation and the integration in the last step is justified by Fubini's theorem, as the integrand is integrable. To evaluate the expectation, first consider the case $r>s$. Writing $b_d(r)=b_d(s)+(b_d(r)-b_d(s))$, and using the fact that $b_d(s)$ is independent of $b_d(r)-b_d(s)$, we get \begin{equation*} \Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s)\right] = \Exp \left[(b_d(r)-b_d(s))b_d^{\scalebox{0.6}{$\top$}}(s)\right] + \Exp \left[b_d(s) b_d^{\scalebox{0.6}{$\top$}}(s) \right] = \Var(b_d(s))= s I_d , \end{equation*} where $I_d$ is the $d$-dimensional identity matrix. Similarly, for the case $r<s$, we get $\Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s)\right]=r I_d$. Combining the two cases, we get $\Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s)\right]=\min(r,s)I_d$. Substituting this back into equation \eqref{b1}, we have, for $t \ge 0$, \begin{equation*} \Var (B(t)) = \int_0^t \int_0^t \min(r,s) I_d \textup{d} r\textup{d} s = \frac{t^3}{3} I_d. \end{equation*} The integral is essentially the volume of a pyramid with a square base of side length $t$ and height $t$, attained at the point $(t,t)$. So we get $B(t) \sim \mathcal{N}(0,t^3 I_d/3)$.
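The identity $\Var(B(t)) = (t^3/3) I_d$ can also be checked by simulation. The following Python sketch (an illustration only, for $d=1$ and $t=1$) approximates $B(1)$ by a Riemann sum over a discretized Brownian path and compares the empirical variance with $1/3$:

```python
import numpy as np

rng = np.random.default_rng(1)
steps, trials, t = 1000, 5000, 1.0
dt = t / steps

# Discretized d = 1 Brownian motion: b[:, k] approximates b_1((k + 1) * dt).
b = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(trials, steps)), axis=1)

# Riemann-sum approximation of B(t) = int_0^t b_1(s) ds.
B = b.sum(axis=1) * dt

var_B = float(np.var(B))  # should be close to t**3 / 3 = 1/3
print(var_B)
```

The discretization bias of the Riemann sum is of order $1/\mathrm{steps}$, so with $1000$ steps the empirical variance agrees with $t^3/3$ up to Monte Carlo error.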
Hence for $t>0$, \begin{equation*} g_t\left(\Sigma^{\frac{1}{2}}b_d\right) = \frac{\Sigma^{\frac{1}{2}}}{t} B(t) \sim \mathcal{N}\left(0,\frac{t\Sigma}{3}\right) . \end{equation*} Together with the convergence \eqref{gc1}, for all $t>0$, we get \begin{equation*} g_t(Y'_n) = \frac{1}{\sqrt{n}} \left( G_{\lfloor nt \rfloor} + \Delta_n(t) \right) \overset{\textup{d}}{\longrightarrow} \mathcal{N}\left(0,\frac{t\Sigma}{3}\right) \mathrm{\ as\ } n \to \infty. \end{equation*} Since Lemma \ref{coml5} implies that $\Delta_n (t)/ \sqrt{n} \to 0$ a.s.~as $n \to \infty$, Slutsky's theorem, Theorem~\ref{thm:slutsky}, gives, for all $t>0$, \begin{equation} \frac{1}{\sqrt{n}} G_{\lfloor nt \rfloor} \overset{\textup{d}}{\longrightarrow} \mathcal{N}\left(0,\frac{t\Sigma}{3}\right) \mathrm{\ as\ } n \to \infty, \label{tfix2} \end{equation} which is also true for $t=0$, since $G_0 = 0$ and the degenerate normal distribution $\mathcal{N}(0,0)$ is the unit mass at $0$. So equation \eqref{tfix2} holds for all $t \in [0,1]$, which completes the proof. \end{proof} \subsection{Functional limit theorems} \label{sec:comflt} The aim of this section is to extend the law of large numbers and the central limit theorem to the whole trajectory of the centre of mass process. Here are the results. Recall that $I_\mu (t) = \mu t$. \begin{theorem} \label{comt3} Consider the random walk defined at~\eqref{ass:walk}. Then, as $n \to \infty$, as elements of $(\mathcal{D}^d_0, \rho_S)$, \[ \frac{1}{n}\left(G_{\lfloor nt \rfloor}\right)_{t \in [0,1]} \overset{\textup{a.s.}}{\longrightarrow} \frac{1}{2} I_\mu . \] \end{theorem} \begin{theorem} \label{comt4} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$ and satisfying \eqref{ass:Sigma}.
Then, as $n \to \infty$, as elements of $(\mathcal{D}^d_0, \rho_S)$, \[ \frac{1}{\sqrt{n}}\left(G_{\lfloor nt \rfloor}\right)_{t \in [0,1]} \Rightarrow \mathcal{GP}(0,K), \] where $\mathcal{GP}(0,K)$ is a Gaussian process with mean $0$ and symmetric covariance function $K$, defined for $t_1 \le t_2$ by \[ K(t_1,t_2) = \begin{cases} t_1 (3t_2 -t_1) \Sigma /( 6 t_2), & \text{for } 0 < t_1 \le t_2; \\ 0 , & \text{for } t_1=0 . \end{cases} \] \end{theorem} In order to prove Theorem \ref{comt3} and Theorem \ref{comt4}, we need to introduce a richer functional, one which contains all the information in the trajectory. Define the functional $g$ acting on $f \in \mathcal{M}^d_0$ by $g(f) (t) = g_t (f)$ for all $t \in [0,1]$. Note that if $f \in \mathcal{M}^d_0$ then $g(f)(0) = g_0 (f) = f(0) =0$. For any fixed $f \in \mathcal{D}^d_0$, observe that $g_t(f)$ is continuous in $t$. Indeed, $t \mapsto \frac{1}{t}$ and $t \mapsto \int_0^t f(s) \textup{d} s$ are both continuous on $(0,1]$, so $t \mapsto g_t(f)$ is continuous on $(0,1]$. To check the continuity at $t=0$, we use l'H\^opital's rule to see that \begin{equation*} \lim_{t \to 0^+}\left(\frac{\int_0^t f(s) \textup{d} s}{t}\right) = \lim_{t \to 0^+}\left(\frac{\frac{\textup{d}}{\textup{d} t}\int_0^t f(s) \textup{d} s}{1}\right) = \lim_{t \to 0^+}f(t)=f(0), \end{equation*} using the fact that $f$ is right-continuous at $0$. So we conclude that if $f \in \mathcal{D}^d_0$, then $g(f) \in \mathcal{C}^d_0$. The next result shows that $f \mapsto g(f)$ is continuous. \begin{lemma} \label{coml2} The functional $f \mapsto g(f)$ is continuous as a map from $(\mathcal{D}^d_0, \rho_S^\circ)$ to $(\mathcal{C}^d_0, \rho_S^\circ)$. \end{lemma} \begin{proof}
For $f_1, f_2 \in \mathcal{D}^d_0$, \begin{align*} \rho_S^\circ (g (f_1) , g(f_2) ) & = \inf_{\lambda \in \Lambda} \{ \| \lambda \|^\circ \vee \| g (f_1) - g(f_2) \circ \lambda \|_\infty \} ,\end{align*} where \[ \| g (f_1) - g(f_2) \circ \lambda \|_\infty = \sup_{0\leq t \leq 1} \left\| g_t (f_1) - g_{\lambda(t)} (f_2 ) \right\| .\] Now since $\lambda (0) =0$ we have $g_0 (f_1) - g_{\lambda(0)} (f_2) = 0$. For $t \in (0,1]$, we have \begin{align*} g_t (f_1) - g_{\lambda(t)} (f_2 ) & = \frac{1}{t} \int_0^t f_1(s) \textup{d} s - \frac{1}{\lambda(t)} \int_0^{\lambda(t)} f_2(s) \textup{d} s \\ &= \frac{1}{t} \int_0^t f_1(s) \textup{d} s - \frac{1}{t} \int_0^{\lambda(t)} f_2(s) \textup{d} s - \left( \frac{1}{\lambda(t)}- \frac{1}{t} \right) \int_0^{\lambda(t)} f_2(s) \textup{d} s \\ &= \frac{1}{t} \int_0^t f_1(s) \textup{d} s - \frac{1}{t} \int_0^{t} f_2(\lambda(u)) \lambda'(u) \textup{d} u - \left( \frac{t- \lambda(t)}{t \lambda(t)} \right) \int_0^{\lambda(t)} f_2(s)\textup{d} s \\ &= \frac{1}{t} \int_0^t ( f_1(s) - f_2 \circ \lambda (s) ) \textup{d} s - \left( \frac{t- \lambda(t)}{t \lambda(t)} \right) \int_0^{\lambda(t)} f_2(s) \textup{d} s - \frac{1}{t}\int_0^t f_2(\lambda(u))(\lambda'(u) -1 ) \textup{d} u. \end{align*} It follows that, for $t \in (0,1]$, \begin{align*} \left\| g_t (f_1) - g_{\lambda(t)} (f_2 ) \right\| &\le \left\| f_1 - f_2 \circ \lambda \right\|_\infty + \frac{|t- \lambda(t)|}{t} \|f_2\|_\infty + \|f_2\|_\infty \| \lambda' - 1 \|_\infty .\end{align*} Applying the estimates~\eqref{s1} and~\eqref{s3} in Lemma~\ref{lem:estlam}, we get \begin{align*} \inf_{\lambda \in \Lambda} \left\| g (f_1) - g (f_2 )\circ \lambda \right\|_\infty &\le \inf_{\lambda \in \Lambda} \left\| f_1 - f_2 \circ \lambda \right\|_\infty + 2 \left\| f_2 \right\|_\infty \inf_{\lambda \in \Lambda} c(\lambda).
\end{align*} Moreover, $\frac{c(\lambda)}{\|\lambda\|^\circ} \to 1$ as $\|\lambda\|^\circ \to 0$, so for any $\varepsilon >0$ we can find $\delta>0$ such that $\rho_S^\circ(f_1, f_2) \le \delta$ implies that $\inf_{\lambda \in \Lambda} \left\| f_1 - f_2 \circ \lambda \right\|_\infty \le \varepsilon$ and $\inf_{\lambda \in \Lambda} c(\lambda) \le \varepsilon$. Finally, noting that $\|\lambda\|^\circ \leq \rho_S^\circ(f_1,f_2)$, we have \begin{align*} \rho_S^\circ \left(g(f_1),g(f_2)\right) \le \rho_S^\circ(f_1,f_2) \vee (\varepsilon + 2 \left\| f_2 \right\|_\infty \varepsilon). \end{align*} Since $\varepsilon>0$ was arbitrary, the result follows. \end{proof} \noindent Now we are ready to prove Theorem \ref{comt3} and Theorem \ref{comt4}. \begin{proof}[Proof of Theorem \ref{comt3}] Using the fact that $\frac{1}{n} g_t \left(S_{\lfloor n \cdot \rfloor}\right) = g_t \left(\frac{1}{n} S_{\lfloor n \cdot \rfloor} \right)$, by the functional law of large numbers, Theorem~\ref{thm:flln}(b), Lemma~\ref{coml2}, and the mapping theorem for almost-sure convergence, Theorem~\ref{thm:mapping}, we get \begin{equation*} \left(\frac{1}{n} g_t \left( S_{\lfloor n \cdot \rfloor}\right)\right)_{t \in [0,1]} \overset{\textup{a.s.}}{\longrightarrow} \left( g_t(I_\mu)\right)_{t \in [0,1]} \quad \text{on } (\mathcal{D}^d_0,\rho_S).
\end{equation*} By Lemma~\ref{coml5} we also have \begin{align*} \rho_S^\circ \left(\left( \frac{1}{n} G_{\lfloor n t \rfloor} \right)_{t \in [0,1]} , \left(\frac{1}{n} g_t \left( S_{\lfloor n \cdot \rfloor}\right)\right)_{t \in [0,1]}\right) &\le \left\| \left( \frac{1}{n} G_{\lfloor n t \rfloor} \right)_{t \in [0,1]} - \left(\frac{1}{n} g_t \left( S_{\lfloor n \cdot \rfloor}\right)\right)_{t \in [0,1]} \right\|_\infty \\ &= \frac{1}{n} \left\|\Delta_n \right\|_\infty \to 0 \quad \text{a.s.} \end{align*} Hence by equation~\eqref{cal1}, we have \begin{equation*} \left( \frac{1}{n} G_{\lfloor n t \rfloor} \right)_{t \in [0,1]} \overset{\textup{a.s.}}{\longrightarrow} \frac{1}{2} I_\mu \quad \text{on }(\mathcal{D}^d_0,\rho_S). \qedhere \end{equation*} \end{proof} \begin{proof}[Proof of Theorem \ref{comt4}] Similarly to the proof of Theorem \ref{comt2}, applying Donsker's theorem and the continuous mapping theorem to the functional $g$, whose continuity is given by Lemma \ref{coml2}, we get \begin{equation} \label{convwt} g(Y'_n) \Rightarrow g(\Sigma^{\frac{1}{2}}b_d) \quad \text{on } (\mathcal{D}^d_0,\rho_S). \end{equation} The next step is to prove that $g(\Sigma^{\frac{1}{2}}b_d)$ is a (non-stationary) Gaussian process, i.e.~that every finite collection $\left(g_{t_1}(\Sigma^{\frac{1}{2}}b_d), \ldots, g_{t_k}(\Sigma^{\frac{1}{2}}b_d)\right)$ has a multivariate normal distribution, see \cite[p.~4]{piterbarg}. We will use the characterization of the multivariate normal distribution, see e.g.~\cite[p.~121]{gutintermediate}, that $X \in \mathbb{R}^m$ is multivariate normal if and only if $u^{\scalebox{0.6}{$\top$}} X$ is normal for all $u \in \mathbb{S}^{m-1}$. We will use this fact with $m=dk$ and, without loss of generality, assume $t_1 \le t_2 \le \cdots \le t_k$.
Using Lemma~\ref{sitrick}, consider \begin{align*} \sum_{l=1}^k \alpha_l\, g_{t_l}(b_d) &= \sum_{l=1}^k \int_0^{t_l} \alpha_l \left( 1-\frac{s}{{t_l}} \right) \textup{d} b_d(s) \\ &= \int_0^{t_k} f(s) \textup{d} b_d(s), \end{align*} where \begin{align*} f(s) &= \begin{cases} \sum_{l_1=1}^k \alpha_{l_1} \left( 1-\frac{s}{{t_{l_1}}} \right), & \text{if } 0 \le s \leq t_1; \\ \sum_{l_2=2}^k \alpha_{l_2} \left( 1-\frac{s}{{t_{l_2}}} \right), & \text{if } t_1 < s \leq t_2; \\ \quad \vdots \quad \\ \alpha_k \left( 1-\frac{s}{{t_k}} \right), & \text{if } t_{k-1} < s \leq t_k; \\ 0, & \text{otherwise.} \end{cases} \end{align*} Now as $f(s)$ is piecewise continuous, the whole integral is a Wiener integral, which is normal, by \cite[p.~11]{kuo}. Hence $g(\Sigma^{\frac{1}{2}}b_d)$ is a Gaussian process. All that is left to do now is to find the covariance function, which, together with the mean, completely characterizes the Gaussian process. Denote this non-stationary Gaussian process by $\mathcal{GP}(0,K)$, where $K(t_1,t_2)$ is the covariance function of $g(\Sigma^{\frac{1}{2}}b_d)$ evaluated at the points $t_1$ and $t_2$. The mean is $0$ because $g_t\left(\Sigma^{\frac{1}{2}}b_d\right)$ has zero mean at any fixed $t$.
Now to calculate $K$, for non-zero $t_1$ and $t_2$, without loss of generality suppose $0< t_1 \le t_2$; then \begin{align*} K(t_1,t_2) &= \Cov \left(g_{t_1} \left(\Sigma^\frac{1}{2}b_d \right), g_{t_2} \left(\Sigma ^\frac{1}{2}b_d \right)\right) \\ &= \Exp \left[ \left(g_{t_1}\left(\Sigma^{\frac{1}{2}}b_d \right)\right) \left(g_{t_2}\left(\Sigma ^{\frac{1}{2}} b_d\right)\right)^{\scalebox{0.6}{$\top$}} \right] \\ &= \Exp\left[\left(\frac{1}{t_1} \int_0^{t_1}\Sigma^{\frac{1}{2}} b_d(r) \textup{d} r\right) \left(\frac{1}{t_2} \int_0^{t_2}\Sigma^{\frac{1}{2}} b_d(s) \textup{d} s\right)^{\scalebox{0.6}{$\top$}} \right] . \end{align*} Now we apply Fubini's theorem to swap the integral and expectation to get \begin{align} \frac{1}{t_1 t_2} \int_0^{t_1} \int_0^{t_2} \Sigma^{\frac{1}{2}} \Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s) \right] \Sigma^{\frac{1}{2}} \textup{d} r \textup{d} s &= \frac{\Sigma}{t_1 t_2} \int_0^{t_1} \int_0^{t_2} \min(r,s) \textup{d} r \textup{d} s \nonumber \\ &= \frac{\Sigma}{t_1 t_2} \left[ \frac{t_1^3}{3} + \frac{t_1^2}{2}(t_2-t_1) \right] \nonumber \\ &= \frac{t_1 (3t_2 -t_1) }{6 t_2}\Sigma . \label{callst} \end{align} If one of $t_1$ or $t_2$ is $0$, say $t_1 = 0$, then $K(0,t_2)=0$, since $g_0\left(\Sigma^{\frac{1}{2}}b_d\right)=0$ by definition; this is consistent with letting $t_1 \to 0$ in equation \eqref{callst}. So we conclude that $g\left(\Sigma^{\frac{1}{2}}b_d\right)$ is the Gaussian process $\mathcal{GP}(0,K)$. Together with the convergence~\eqref{convwt}, we have \begin{equation*} g(Y'_n) = \frac{1}{\sqrt{n}}\left(G_{\lfloor nt \rfloor} + \Delta_n (t)\right)_{t \in [0,1]} \Rightarrow \mathcal{GP}(0,K). \end{equation*} Lastly, choosing $\alpha = 1/2$ in Lemma~\ref{coml5} so that $\Delta_n / \sqrt{n} \to 0$ a.s.~on $(\mathcal{D}^d_0,\rho_S)$ as $n \to \infty$, we apply Slutsky's theorem, Theorem~\ref{thm:slutsky}, to get the desired result.
So we have proved the last theorem of this chapter. \end{proof} \begin{comment} \section{Centre of mass} \subsection{Introduction} \label{introduction} {\bf TO DO:} tidy up this section. Suppose that $(G_n, n \in \mathbb{Z}_+)$ is the center of mass of a time and spatially homogeneous random walk as in \ref{ass:walk}, which will be denoted $G_n := \frac{1}{n}\sum_{i=1}^{n}{S_i}$ for all $n \ge 1$ and $G_0=0$. We state a law of large numbers of the center of mass process. We will end this subsection with some simulations on the asymptotic behaviour of the center of mass process. The first diagram shows a comparison of both the center of mass and the random walk itself with the scaling of $1/n$, on two cases, with different size of $n$ and mean drift $\mu$. We can see the processes follow Theorem \ref{coml1} and Theorem \ref{coml3}; that they converge to a straight line with slope $\mu /2$ and $\mu$ respectively when $n \to \infty$. \begin{figure}[!ht] \centering \begin{multicols}{2} \includegraphics[width=245pt]{XnCSG} \caption{Sample paths of $X_n(t)$, the lighter colour, and $G_{\lfloor nt \rfloor}/n$, the darker colour, for the cases $n=100$ in blue with $\mu=-1$ and $n=10000$ in red with $\mu=1$.} \label{sim:comlln} \includegraphics[width=245pt]{YnCSG} \caption{Sample paths of $Y_n(t)$, the lighter colour, and $G_{\lfloor nt \rfloor}/\sqrt{n}$, the darker colour, for the cases $n=100$ in blue and $n=10000$ in red, both with $\mu=0$.} \label{sim:comclt} \end{multicols} \end{figure} The second simulation has a different scaling $1 / \sqrt{n}$ of the same processes, in the special case of $\mu=0$, to show Theorem \ref{coml2} and Theorem \ref{coml4} in action. \subsection{Law of large numbers and central limit theorem} The aim of this section is to establish the following limit theorems, which contain a law of large numbers and central limit theorem in the case $t=1$. In Section~\ref{sec:comflt} we strengthen these results to functional limit theorems. 
\begin{theorem} \label{comt1} Consider the random walk defined at~\eqref{ass:walk}. Let $t \in [0,1]$. Then, \[ \frac{1}{n} G_{\lfloor n t \rfloor} \overset{\textup{a.s.}}{\longrightarrow} \frac{\mu t }{2}, \text{ as } n \to \infty. \] \end{theorem} \begin{theorem} \label{comt2} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$ and satisfying \eqref{ass:Sigma}. Let $t \in [0,1]$. Then, \[ \frac{1}{\sqrt{n}}G_{\lfloor nt \rfloor} \overset{\textup{d}}{\longrightarrow} \mathcal{N} \left(0,\tfrac{t}{3} \Sigma \right) , \text{ as } n \to \infty. \] \end{theorem} The method of proof for these two theorems is to view the centre of mass as an appropriate functional from $(\mathcal{D}^d, \rho_S)$ to $(\mathbb{R}^d, \rho_E)$ and then apply our functional limit theorem results. First, for all $f \in \mathcal{M}^d_0$ and $t \in [0,1]$ , we define for $t \in [0,1]$ a functional $g_t: \mathcal{M}^d_0 \to \mathbb{R}^d$ given by $g_0(f) :=f( 0 ) = 0$ and \[ g_t(f) :=\frac{1}{t}\int_0^t f(s) \textup{d} s , \text{ for } t \in (0,1] . \] Note that for any $c \in \mathbb{R}$, the function $g_t$ is \emph{homogeneous} in the sense that $g_t (c f) = c g_t (f)$ for all $t \in [0,1]$. The slight complication is that $g_t (X_n')$, for example, is not exactly equal to $n^{-1} G_{\lfloor nt \rfloor}$. To deal with this, we will need the following estimate both here and when we consider functional limit theorems, below. We write $S_{\lfloor n \cdot \rfloor}$ to represent the function $t \mapsto S_{\lfloor n t \rfloor}$ over $t \in [0,1]$. \begin{lemma} \label{coml5} For $n \in \mathbb{N}$ and $t \in [0,1]$, define $\Delta_n (t) := g_t ( S_{\lfloor n \cdot \rfloor} ) - G_{\lfloor n t \rfloor}$. Then for any $\alpha >0$, we have $n^{-\alpha} \| \Delta_n \|_\infty \overset{\textup{a.s.}}{\longrightarrow} 0$ as $n \to \infty$. \end{lemma} \begin{proof} First note that $\Delta_n (0) = X_n'(0) - \frac{1}{n} G_0 = 0$ since $S_0 = G_0 = 0$. 
Now for $t>0$, we have \begin{align} g_t( S_{\lfloor n \cdot \rfloor} ) &= \frac{1}{t}\int_0^t S_{\lfloor ns \rfloor} \textup{d} s \nonumber \\ &= \frac{1}{t} \left( \sum_{k=0}^{\lfloor nt \rfloor -1} \int_{k/n}^{(k+1)/n} S_{\lfloor ns \rfloor} \textup{d} s + \int_{\lfloor nt \rfloor/n}^t S_{\lfloor ns \rfloor} \textup{d} s \right) \nonumber \\ &= \frac{1}{nt} \left[ \sum_{k=0}^{\lfloor nt \rfloor -1}S_k + (nt- \lfloor nt \rfloor )S_{\lfloor nt \rfloor} \right] \nonumber \\ &= G_{\lfloor nt \rfloor} + \frac{\lfloor nt \rfloor}{nt} \left( \frac{\lfloor nt \rfloor -1}{\lfloor nt \rfloor}G_{\lfloor nt \rfloor -1} \right) - G_{\lfloor nt \rfloor} + \frac{1}{n t} (nt- \lfloor nt \rfloor) S_{\lfloor nt \rfloor} \nonumber \\ &= G_{\lfloor nt \rfloor} + \frac{\lfloor nt \rfloor}{nt} \left( G_{\lfloor nt \rfloor} - \frac{S_{\lfloor nt \rfloor}}{\lfloor nt \rfloor} \right) - G_{\lfloor nt \rfloor} + \frac{1}{n t} (nt- \lfloor nt \rfloor) S_{\lfloor nt \rfloor} \nonumber \\ &= G_{\lfloor nt \rfloor} + \frac{\lfloor nt \rfloor - nt}{nt} G_{\lfloor nt \rfloor} -\frac{1}{nt} S_{\lfloor nt \rfloor} + \frac{nt-\lfloor nt \rfloor }{nt} S_{\lfloor nt \rfloor} \nonumber \\ &= G_{\lfloor nt \rfloor} - \frac{nt- \lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} + \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor} \nonumber \\ &= G_{\lfloor nt \rfloor} + \Delta_n(t) . \label{g2} \end{align} Hence for $t >0$ we have \begin{equation} \Delta_n(t) = \left( \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor} \right) - \left( \frac{nt- \lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} \right). 
\label{delta} \end{equation} Now, since $| nt - \lfloor nt \rfloor -1 | \leq 1$ and $S_0 = 0$ we have \begin{align*} \sup_{0 < t \le 1} \left\| \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor} \right\| & \leq \sup_{0 < t \leq 1} \frac{1}{nt} \| S_{\lfloor n t \rfloor} \| \\ & = \sup_{\frac{1}{n} \leq t \leq 1} \frac{1}{nt} \| S_{\lfloor n t \rfloor} \| \\ & = \max_{1 \leq k \leq n} \sup_{t \in [ \frac{k}{n} , \frac{k+1}{n} )} \frac{1}{nt} \| S_{\lfloor n t \rfloor} \| \\ & \leq \sup_{1 \leq k \leq n} \frac{1}{k} \| S_k \| .\end{align*} Similarly, \[ \sup_{0 < t \le 1} \left\| \frac{nt-\lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} \right\| \leq \sup_{1 \leq k \leq n} \frac{1}{k} \| G_k \| . \] The strong law of large numbers for $S_n$ (Theorem~\ref{thm:SLLN}) implies that $\| S_n \| \leq 2 n \| \mu \|$ for all $n \geq N$ with $\Pr (N < \infty) =1$. Moreover, \[ \| G_n \| \leq \frac{1}{n} \sum_{i=1}^n \| S_i \| \leq \frac{1}{n} \sum_{i=1}^N \| S_i \| + \frac{1}{n} \sum_{i=N}^n 2 i \| \mu \| \leq \frac{1}{n} \sum_{i=1}^N \| S_i \| + 2 n \| \mu \| . \] Hence \[ \limsup_{n \to \infty} n^{-\alpha} \sup_{0 < t \leq 1} \left\| \frac{nt-\lfloor nt \rfloor -1 }{nt} S_{\lfloor nt \rfloor} \right\| \leq \limsup_{n \to \infty} n^{-\alpha} \left( \sup_{1 \leq k \leq N} \| S_k \| + 2 \| \mu \| \right) = 0 , ~\text{a.s.} ,\] and \[ \limsup_{n \to \infty} n^{-\alpha} \sup_{0 < t \leq 1} \left\| \frac{nt- \lfloor nt \rfloor }{nt} G_{\lfloor nt \rfloor} \right\| \leq \limsup_{n \to \infty} n^{-\alpha} \left( \frac{1}{n} \sum_{i=1}^N \| S_i \| + 2 \| \mu \| \right) = 0 , ~\text{a.s.} \] This completes the proof. \end{proof} To prove Theorems~\ref{comt1} and~\ref{comt2}, we will need to show that the functional $g_t$ is continuous. This is the content of the next result. \begin{lemma} \label{coml1} For any $t \in [0,1]$, the functional $f \mapsto g_t(f)$ is continuous as a map from $(\mathcal{D}^d_0, \rho_S)$ to $(\mathbb{R}^d, \rho_E)$. 
\end{lemma} {\bf Doesn't the proof of Lemma~\ref{coml2} work here too?} \begin{proof} Consider $f_1, f_2 \in \mathcal{D}^d_0$. For $t=0$ we have $\rho_E ( g_0 (f_1) , g_0 (f_2 ) ) = \rho_E (0,0) = 0 \leq \rho_S ( f_1, f_2)$. Thus fix $t \in (0,1]$. We aim to show that $\rho_S (f_1,f_2) \to 0$ implies $\rho_E (g_t(f_1),g_t(f_2)) \to 0$. Let $\lambda \in \Lambda$. Then, by the triangle inequality, \begin{align*} \rho_E (g_t(f_1),g_t(f_2)) & = \left\| \frac{1}{t} \int_0^t\left[f_1(s)-f_2(s)\right] \textup{d} s\right\| \\ &\leq \left\|\frac{1}{t}\int_0^t\left[f_1(s)-f_2\circ \lambda (s)\right] \textup{d} s\right\| + \left\|\frac{1}{t}\int_0^t\left[f_2 \circ \lambda (s)-f_2(s)\right] \textup{d} s\right\| \\ & \leq \left\| f_1 - f_2 \circ \lambda \right\|_\infty + \left\|\frac{1}{t}\int_0^t\left[f_2 \circ \lambda (s)-f_2(s)\right] \textup{d} s\right\| . \end{align*} {\bf need to fix this:} For the first term, note $\rho_S^\circ = \{\rho_\infty(\lambda^*,I) \vee \rho_\infty(f_1,f_2\circ \lambda^*)\}$ so the integrand is bounded above by $\rho_S^\circ(f_1,f_2)$, and thus so is the whole first term. For the second term, we consider two step functions that give upper and lower bounds for the centre of mass of $f_2(s)$, namely $H_n^+(s)$ and $H_n^-(s)$. Let the step functions have steps at $(t_i)_{i=0,\ldots,n}=\{t_0=0 \cup \{t^D\} \cup \{t^{\pi}\} \cup t_n=t\}$ where $t^D$ is the set of points where $f_2$ is discontinuous and $t^{\pi}$ is a set of evenly distributed points on $[0,t]$. Let, \[H_n^+(s)= \begin{cases} \max\limits_{t_i\leq s < t_{i+1}}f_2(s) \ &\textrm{for\ } t_i\leq s < t_{i+1},\ \textrm{when\ } i=0,\ldots,n-2, \\ \max\limits_{t_{n-1}\leq s \leq t_{n}}f_2(s) \ &\textrm{for\ } t_{n-1} \leq s \leq t_{n}=t, \end{cases}\] and, \[H_n^-(s)= \begin{cases} \min\limits_{t_i\leq s < t_{i+1}}f_2(s) \ &\textrm{for\ } t_i\leq s < t_{i+1},\ \textrm{when\ } i=0,\ldots,n-2, \\ \min\limits_{t_{n-1}\leq s \leq t_{n}}f_2(s) \ &\textrm{for\ } t_{n-1} \leq s \leq t_{n}=t. 
\end{cases}\] We now consider an upper bound of the second term of REF, using $H_n^+(s)$ and $H_n^-(s)$: \begin{align*} \left\|\frac{1}{t}\int_0^t\left[f_2 \circ \lambda^*(s)-f_2(s)\right]\textup{d} s\right\| &\leq \left\|\frac{1}{t}\int_0^t\left[H_n^+ \circ \lambda^*(s)-H_n^-(s)\right]\textup{d} s\right\| \\ &\leq \left\|\frac{1}{t}\sum_{i=0}^{n}\left[H_n^+(t_i)\cdot (\lambda^*(t_{i+1})-\lambda^*(t_i))-H_n^-(t_i)\cdot (t_{i+1}-t_i)\right]\right\|\\ &\leq \left\|\frac{1}{t}\sum_{i=0}^{n}\left[(H_n^+(t_i)-H_n^-(t_i))(t_{i+1}-t_i) + 2\rho_\infty(\lambda^*,I)H_n^+(t_i)\right]\right\|\\ &\leq \left\|\frac{2\rho_\infty(\lambda^*,I)}{t}\sum_{i=0}^{n}H_n^+(t_i)\right\| +\left\|\frac{1}{t}\sum_{i=0}^{n}(H_n^+(t_i)-H_n^-(t_i))(t_{i+1}-t_i)\right\|. \end{align*} From this last display we can see that the first term depends linearly on $\rho_\infty(\lambda^*,I)$ which is bound by $\rho_S^\circ(f_1,f_2)$, since the sum is finite. For the second term, we note this bound is true for all partitions, and the sum goes to $0$ for a suitably chosen increasing partition $t^{\pi}$, by monotone convergence. This gives the continuity of $\rho_\infty(g_t(f_1),g_t(f_2))$. \end{proof} \noindent Now we are ready to prove our Theorem \ref{comt1}. \begin{proof}[Proof of Theorem \ref{comt1}] First, we know that $X'_n$ converges to $I_\mu$ a.s. where $I_\mu(t)=\mu t$, as defined in Theorem \ref{thm:flln}. With Lemma \ref{coml1}, we can apply the continuous mapping theorem on the function $g_t(X'_n)$, thus we get, for $t \in [0,1]$, \begin{equation*} g_t(X'_n) \to g_t(I_\mu) \quad \text{a.s.} \end{equation*} With a quick calculation we see for $t>0$, \begin{equation} g_t(I_\mu) = \frac{1}{t}\int_0^t \mu s \textup{d} s = \frac{1}{t}\left[\frac{\mu s^2}{2}\right]_0^t = \frac{\mu t}{2} \label{cal1} \end{equation} and $g_0(I_\mu)=I_\mu (0)=0$. 
Using equation \eqref{g2}, for any fixed $t>0$, we get \begin{equation*} \frac{1}{n} \times \frac{\lfloor nt \rfloor}{nt} G_{\lfloor nt \rfloor} + \frac{1}{n} \Delta_n(t) \to \frac{\mu t}{2} \quad \text{a.s.} \end{equation*} Now, using the fact that for any $t>0$ we have $\lfloor nt \rfloor / nt \to 1$, together with the consequence of Lemma \ref{coml5} that $\Delta_n(t) /n \to 0$ a.s. as $n \to \infty$, we get \begin{equation} \frac{1}{n} G_{\lfloor nt \rfloor} \to \frac{\mu t}{2} \quad \text{a.s.} \label{tfix} \end{equation} which is also true for $t=0$. So equation \eqref{tfix} holds for all $t \in [0,1]$. Setting $t=1$ in \eqref{tfix} gives the statement of the theorem. \end{proof} \noindent The next theorem to prove is the central limit theorem. Recall that here we impose the additional assumption that $\mu = 0$. This time we use the same function $g_t(f)$, but we take \begin{equation} Y'_n := \frac{1}{\sqrt{n}}S_{\lfloor nt \rfloor} \end{equation} instead of $X'_n$, to get the appropriate scaling. The calculation in equation \eqref{g2} goes through with the scaling $n$ replaced by $\sqrt{n}$, so we obtain \begin{equation} g_t(Y'_n)=\frac{1}{\sqrt{n}} \times \frac{\lfloor nt \rfloor}{nt} G_{\lfloor nt \rfloor} + \frac{1}{\sqrt{n}} \Delta_n(t) \label{g4} \end{equation} for all $t>0$, with the same $\Delta_n(t)$ as before. Now we are ready to prove Theorem \ref{comt2}. \begin{proof}[Proof of Theorem \ref{comt2}] This time we apply Donsker's theorem and the continuous mapping theorem to $g_t$, using the fact that $Y'_n \Rightarrow M^{1/2}b_d$; recall that $M$ is the covariance matrix of $Z_n$ and $b_d$ is standard Brownian motion in $d$ dimensions. Then we get, for $t \in [0,1]$, \begin{equation} g_t(Y'_n) \overset{\textup{d}}{\longrightarrow} g_t(M^{\frac{1}{2}}b_d). \label{gc1} \end{equation} Notice that the expression on the right-hand side involves an integral of Brownian motion.
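The law of large numbers just proved is easy to illustrate by simulation: for a walk with drift vector $\mu$, the centre of mass satisfies $G_n/n \to \mu/2$. The following Python sketch does this; the drift vector, Gaussian step distribution, and sample size are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: illustrate G_n / n -> mu / 2 for the centre of mass
# G_n = (1/n) * (S_1 + ... + S_n) of a random walk with drift mu.
# Drift, step distribution, and n are arbitrary illustrative choices.
rng = np.random.default_rng(1)
mu = np.array([0.5, -1.0])
n = 200_000
steps = rng.standard_normal((n, 2)) + mu  # i.i.d. increments with mean mu
S = np.cumsum(steps, axis=0)              # partial sums S_1, ..., S_n
G_n = S.mean(axis=0)                      # centre of mass after n steps
ratio = G_n / n
print(ratio)                              # close to mu / 2 = [0.25, -0.5]
```

The fluctuations of $G_n/n$ around $\mu/2$ are of order $n^{-1/2}$, consistent with the central limit theorem proved next.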
We should first consider the expression \begin{equation} B(t) = \int_0^t b_d(s) \textup{d} s . \end{equation} Considering the right Riemann sum with $n$ intervals, we define \begin{equation} B_n(t) = \frac{t}{n} \sum_{k=1}^n b_d \left(\frac{kt}{n} \right) ; \end{equation} then, as Brownian motion has continuous sample paths, the Riemann sum converges and $B_n(t) \to B(t)$ a.s. The sum on the right-hand side has a multivariate normal distribution, hence so does each $B_n(t)$; as an a.s.\ limit of Gaussian random vectors, $B(t)$ is also Gaussian. It is easy to see that $B(t)$ has mean zero, as the standard Brownian motion has mean zero. Next we need to find the variance of $B(t)$: \begin{align} \Var (B(t))= \Exp \left[B(t)B(t)^{\scalebox{0.6}{$\top$}} \right] &= \Exp \left[\int_0^t b_d(r)\textup{d} r \times \int_0^t b_d^{\scalebox{0.6}{$\top$}} (s) \textup{d} s \right] \nonumber \\ &= \Exp \left[\int_0^t \int_0^t b_d(r)b_d^{\scalebox{0.6}{$\top$}} (s) \textup{d} r\textup{d} s \right] \nonumber \\ &= \int_0^t \int_0^t \Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}} (s)\right] \textup{d} r\textup{d} s . \label{b1} \end{align} The interchange of the expectation and the integration in the last step is justified by Fubini's theorem, as the function is integrable. To evaluate the expectation, first consider the case $r>s$. Writing $b_d(r)=b_d(s)+(b_d(r)-b_d(s))$, and using that $b_d(s)$ is independent of $b_d(r)-b_d(s)$ and both have mean zero, we get \begin{equation} \Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s)\right] = \Exp \left[(b_d(r)-b_d(s))b_d^{\scalebox{0.6}{$\top$}}(s)\right] + \Exp \left[b_d(s) b_d^{\scalebox{0.6}{$\top$}}(s) \right] = \Var(b_d(s))= s I_d , \end{equation} where $I_d$ is the $d$-dimensional identity matrix. Similarly, for the case $r<s$, we get $\Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s)\right]=r I_d$. Combining the two cases, we get $\Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s)\right]=\min(r,s)I_d$.
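The covariance identity $\Exp [b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s)]=\min(r,s)I_d$ can also be checked by a quick Monte Carlo sketch, using exactly the decomposition above, $b_d(s) = b_d(r) + (b_d(s)-b_d(r))$ for $r<s$. Here $d=1$; the times and sample size are illustrative choices.

```python
import numpy as np

# Sketch: Monte Carlo check of E[b(r) b(s)] = min(r, s) for standard
# one-dimensional Brownian motion, via b(s) = b(r) + (b(s) - b(r))
# with independent Gaussian pieces. r, s, and n are illustrative.
rng = np.random.default_rng(2)
n = 1_000_000
r, s = 0.3, 0.7
b_r = rng.normal(0.0, np.sqrt(r), size=n)            # b(r) ~ N(0, r)
b_s = b_r + rng.normal(0.0, np.sqrt(s - r), size=n)  # add the increment
est = np.mean(b_r * b_s)
print(est)                                           # close to min(r, s) = 0.3
```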
Substituting this back into equation \eqref{b1}, we have, for $t \ge 0$, \begin{equation} \Var (B(t)) = \int_0^t \int_0^t \min(r,s) I_d \textup{d} r\textup{d} s = \frac{t^3}{3} I_d . \end{equation} The integral amounts to finding the volume of a pyramid with a square base of side length $t$ and height $t$, the maximum height being attained at the point $(t,t)$. So we get $B(t) \sim \mathcal{N}(0,(t^3/3)I_d)$. Hence, for $t>0$, \begin{equation} g_t(M^{\frac{1}{2}}b_d) = \frac{1}{t}\int_0^t M^{\frac{1}{2}} b_d(s) \textup{d} s = \frac{M^{\frac{1}{2}}}{t} B(t) \sim \mathcal{N}\left(0,\frac{tM}{3}\right) . \label{cal2} \end{equation} Substituting this into equation \eqref{gc1}, for all $t>0$, we get \begin{equation} \frac{1}{\sqrt{n}} \left( G_{\lfloor nt \rfloor} + \Delta_n(t) \right) \overset{\textup{d}}{\longrightarrow} \mathcal{N}\left(0,\frac{tM}{3}\right) \mathrm{\ as\ } n \to \infty. \end{equation} Since Lemma \ref{coml5} implies that $\Delta_n (t)/ \sqrt{n} \to 0$ a.s. as $n \to \infty$, we obtain, for all $t>0$, \begin{equation} \frac{1}{\sqrt{n}} G_{\lfloor nt \rfloor} \overset{\textup{d}}{\longrightarrow} \mathcal{N}\left(0,\frac{tM}{3}\right) \mathrm{\ as\ } n \to \infty, \label{tfix2} \end{equation} which is also true for $t=0$: at $t=0$, the normal distribution $\mathcal{N}(0,0) \equiv 0$. So equation \eqref{tfix2} holds for all $t \in [0,1]$. Setting $t=1$ gives the statement of the theorem, completing the proof. \end{proof} \subsection{Functional limit theorems} \label{sec:comflt} The aim of this section is to extend the law of large numbers and central limit theorem to the whole trajectory of the centre of mass process. Here are the results. Recall that $I_\mu (t) = \mu t$. \begin{theorem} \label{comt3} Consider the random walk defined at~\eqref{ass:walk}.
Then, as $n \to \infty$, as elements of $(\mathcal{D}^d, \rho_S)$, \[ \frac{1}{n}\left(G_{\lfloor nt \rfloor}\right)_{t \in [0,1]} \overset{\textup{a.s.}}{\longrightarrow} \frac{1}{2} I_\mu . \] \end{theorem} \begin{theorem} \label{comt4} Suppose that we have a random walk as defined at \eqref{ass:walk} with $\mu =0$ and satisfying \eqref{ass:Sigma}. Then, as $n \to \infty$, as elements of $(\mathcal{D}^d, \rho_S)$, \[ \frac{1}{\sqrt{n}}\left(G_{\lfloor nt \rfloor}\right)_{t \in [0,1]} \Rightarrow \mathcal{GP}(0,K), \] where $\mathcal{GP}(0,K)$ is a Gaussian process with mean $0$ and the symmetric covariance function $K$ defined by \[ K(t_1,t_2) = \begin{cases} t_1 M (3t_2 -t_1)/( 6 t_2), & \text{for } 0 < t_1 \le t_2 , \\ 0 , & \text{for } t_1=0 , \end{cases} \] the second case holding because the limit process vanishes at time $0$. \end{theorem} In order to prove Theorem \ref{comt3} and Theorem \ref{comt4}, we need to introduce a larger functional that retains all the information in the trajectory. Define the functional $g$ acting on $f \in \mathcal{M}^d_0$ by $g(f) (t) = g_t (f)$ for all $t \in [0,1]$. Note that if $f \in \mathcal{M}^d_0$ then $g(f)(0) = g_0 (f) = f(0) =0$. For any fixed $f \in \mathcal{D}^d_0$, observe that $g_t(f)$ is continuous in $t$. Indeed, $t \mapsto \frac{1}{t}$ and $t \mapsto \int_0^t f(s) \textup{d} s$ are both continuous on $(0,1]$, so $t \mapsto g_t(f)$ is continuous on $(0,1]$. To check the continuity at $t=0$, we use l'H\^opital's rule to see that \begin{equation*} \lim_{t \to 0^+}\left(\frac{\int_0^t f(s) \textup{d} s}{t}\right) = \lim_{t \to 0^+}\left(\frac{\frac{\textup{d}}{\textup{d} t}\int_0^t f(s) \textup{d} s}{1}\right) = \lim_{t \to 0^+}f(t)=f(0), \end{equation*} using the fact that $f$ is right-continuous at $0$. So we conclude that if $f \in \mathcal{D}^d_0$, then $g(f) \in \mathcal{C}^d_0$. The next result shows that $f \mapsto g(f)$ is continuous.
\begin{lemma} \label{coml2} The functional $f \mapsto g(f)$ is continuous as a map from $(\mathcal{D}^d_0, \rho_S)$ to $(\mathcal{C}^d_0, \rho_S)$. \end{lemma} \begin{proof} Take $f_1, f_2 \in \mathcal{D}^d_0$. It suffices to show that $\rho_{S}^\circ ( f_1, f_2 ) \to 0$ implies that $\rho_S ( g(f_1), g(f_2) ) \to 0$, since $\rho_S^\circ$ and $\rho_S$ are equivalent metrics on $\mathcal{D}^d_0$ (see {\bf REF}). First we make the following observations about the metric $\rho_S^\circ$. Let $\lambda \in \Lambda$. From the definition of $\|\lambda\|^\circ$, we have that for any $t \in [0,1)$ and $h>0$ sufficiently small, \[ \log \left| \frac{\lambda(t+h)-\lambda(t)}{h} \right| \le \|\lambda\|^\circ , \] so that \[ \mathrm{e}^{-\|\lambda\|^\circ} \le \frac{\lambda(t+h)-\lambda(t)}{h} \le \mathrm{e}^{\|\lambda\|^\circ} .\] Since $\lambda$ is increasing, $\lambda' (t)$ exists for Lebesgue-almost every $t \in (0,1)$ [{\bf REF}], and when it does exist, we thus have \[ \mathrm{e}^{-\|\lambda\|^\circ} \le \lambda'(t) \le \mathrm{e}^{\|\lambda\|^\circ} . \] Setting $c ( \lambda ) := \max \{ \mathrm{e}^{\|\lambda\|^\circ}-1, 1- \mathrm{e}^{-\|\lambda\|^\circ}\}$, we see that \begin{equation} \label{s1} | \lambda' (t) - 1 | \leq c ( \lambda ) , \text{ for almost every } t \in (0,1) . \end{equation} Since $\lambda (0 ) = 0$, another application of the definition of $\| \lambda\|^\circ$ shows that $\log | \lambda (t) / t | \leq \|\lambda\|^\circ$ for all $t \in (0,1)$, so that \[ t \mathrm{e}^{-\|\lambda\|^\circ} \le \lambda(t) \le t\mathrm{e}^{\|\lambda\|^\circ}, \text{ for all } t \in [0,1]. \] It follows that \[ -t\left(1-\mathrm{e}^{-\|\lambda\|^\circ}\right) \le \lambda(t)-t \le t \left(\mathrm{e}^{\|\lambda\|^\circ}-1\right), \] and hence we get \begin{equation} \label{s3} |\lambda(t)-t| \le t c (\lambda) , \text{ for all } t \in [0,1]. \end{equation} Now we are ready to prove the claim.
For $f_1, f_2 \in \mathcal{D}^d_0$, \begin{align*} \rho_S (g (f_1) , g(f_2) ) & = \inf_{\lambda \in \Lambda} \{ \| \lambda \|^\circ \vee \| g (f_1) - g(f_2) \circ \lambda \|_\infty \} ,\end{align*} where \[ \| g (f_1) - g(f_2) \circ \lambda \|_\infty = \sup_{0\leq t \leq 1} \left\| g_t (f_1) - g_{\lambda(t)} (f_2 ) \right\| .\] Now since $\lambda (0) =0$ we have $g_0 (f_1) - g_{\lambda(0)} (f_2) = 0$. For $t \in (0,1]$, we have \begin{align*} g_t (f_1) - g_{\lambda(t)} (f_2 ) & = \frac{1}{t} \int_0^t f_1(s) \textup{d} s - \frac{1}{\lambda(t)} \int_0^{\lambda(t)} f_2(s) \textup{d} s \\ &= \frac{1}{t} \int_0^t f_1(s) \textup{d} s - \frac{1}{t} \int_0^{\lambda(t)} f_2(s) \textup{d} s - \left( \frac{1}{\lambda(t)}- \frac{1}{t} \right) \int_0^{\lambda(t)} f_2(s) \textup{d} s \\ &= \frac{1}{t} \int_0^t f_1(s) \textup{d} s - \frac{1}{t} \int_0^{t} f_2(\lambda(u)) \lambda'(u) \textup{d} u - \left( \frac{t- \lambda(t)}{t \lambda(t)} \right) \int_0^{\lambda(t)} f_2(s)\textup{d} s \\ &= \frac{1}{t} \int_0^t ( f_1(s) - f_2 \circ \lambda (s) ) \textup{d} s - \left( \frac{t- \lambda(t)}{t \lambda(t)} \right) \int_0^{\lambda(t)} f_2(s) \textup{d} s - \frac{1}{t}\int_0^t f_2(\lambda(u))(\lambda'(u) -1 ) \textup{d} u. \end{align*} It follows that, for $t \in (0,1]$, \begin{align*} \left\| g_t (f_1) - g_{\lambda(t)} (f_2 ) \right\| &\le \left\| f_1 - f_2 \circ \lambda \right\|_\infty + \frac{|t- \lambda(t)|}{t} \|f_2\|_\infty + \|f_2\|_\infty \| \lambda' - 1 \|_\infty .\end{align*} Applying the estimates~\eqref{s1} and~\eqref{s3} we get \begin{align*} \sup_{0 \leq t \leq 1} \left\| g_t (f_1) - g_{\lambda(t)} (f_2 ) \right\| &\le \left\| f_1 - f_2 \circ \lambda \right\|_\infty + 2 \|f_2\|_\infty c ( \lambda ) . \end{align*} Let $\varepsilon >0$.
Now $c (\lambda ) \to 0$ as $\| \lambda \|^\circ \to 0$, so if $\rho_S^\circ (f_1, f_2) \to 0$ we can find $\lambda \in \Lambda$ such that $\| f_1 - f_2 \circ \lambda \|_\infty < \varepsilon$ and $\| \lambda \|^\circ$ is small enough so that $\| f_2 \|_\infty c (\lambda ) < \varepsilon$ too (note $\| f_2 \|_\infty < \infty$). Then $\rho_S (g(f_1), g(f_2) ) \leq 3 \varepsilon$, and since $\varepsilon>0$ was arbitrary, the result follows. \end{proof} Another important observation before we prove Theorem \ref{comt3} and Theorem \ref{comt4} is that equations \eqref{g2} and \eqref{g4} only give convergence pointwise in $t$, whereas here we need uniform convergence. So we need to prove \begin{equation} \left\| \frac{1}{n}G_{\lfloor n \cdot \rfloor} - \frac{1}{2} I_\mu \right\|_\infty \overset{\textup{a.s.}}{\longrightarrow} 0 \quad \text{as } n \to \infty \end{equation} in $D$, and \begin{equation} \frac{1}{\sqrt{n}}G_{\lfloor n \cdot \rfloor} \Rightarrow g(M^{\frac{1}{2}}b_d) \end{equation} in the sense of weak convergence on $D$ as $n \to \infty$. Hence we first establish the following lemmas. \begin{lemma} \label{coml3} Suppose that assumption \eqref{ass:walk} holds. Then \begin{equation} \sup_{0 < t \le 1} \left\| \frac{1}{n}G_{\lfloor nt \rfloor} - \left[ \frac{1}{n} \times \frac{\lfloor nt \rfloor}{nt} G_{\lfloor nt \rfloor} + \frac{1}{n} \Delta_n(t) \right] \right\| \overset{\textup{a.s.}}{\longrightarrow} 0 , \text{ as }n \to \infty. \label{gsup} \end{equation} \end{lemma} \begin{lemma} \label{coml4} Suppose that assumption \eqref{ass:walk} holds. Then \begin{equation} \sup_{0 < t \le 1} \left\| \frac{1}{\sqrt{n}}G_{\lfloor nt \rfloor} - \left[ \frac{1}{\sqrt{n}} \times \frac{\lfloor nt \rfloor}{nt} G_{\lfloor nt \rfloor} + \frac{1}{\sqrt{n}} \Delta_n(t) \right] \right\| \overset{\textup{a.s.}}{\longrightarrow} 0 , \text{ as }n \to \infty.
\end{equation} \end{lemma} We exclude the case $t=0$ in both lemmas, because $\frac{1}{n}G_{0}$, $\frac{1}{2} I_\mu(0)$ and $g(M^{\frac{1}{2}}b_d)(0)$ are all zero. It suffices to prove Lemma \ref{coml4}, since dividing through by $\sqrt{n}$ yields Lemma \ref{coml3}. \begin{proof}[Proof of Lemma \ref{coml4}] First, consider the main terms with $G_{\lfloor nt \rfloor}$. Now \begin{align*} & \quad \sup_{0 < t \le 1} \left\| \left( \frac{1}{\sqrt{n}}G_{\lfloor nt \rfloor} - \frac{1}{\sqrt{n}} \times \frac{\lfloor nt \rfloor}{nt} G_{\lfloor nt \rfloor} \right) \right\| \\ &\le \sup_{0 < t \le 1} \frac{|nt-\lfloor nt \rfloor|}{n^{3/2}t} \|G_{\lfloor nt \rfloor} \| \\ &\le \sup_{0 < t \le 1/n} \frac{nt}{n^{3/2}t} \|G_{0} \| + \sup_{1/n \le t < 1/n^{3/4}} \left\| \frac{G_{\lfloor nt \rfloor}}{n^{3/2}t} \right\| + \sup_{1/n^{3/4} \le t \le 1/\sqrt{n}} \left\| \frac{G_{\lfloor nt \rfloor}}{n^{3/2}t} \right\| \\ & \qquad + \sup_{1/\sqrt{n} \le t \le 1/\sqrt[4]{n}} \left\| \frac{G_{\lfloor nt \rfloor}}{n^{3/2}t} \right\| + \sup_{1/\sqrt[4]{n} \le t \le 1} \left\| \frac{G_{\lfloor nt \rfloor}}{n^{3/2}t} \right\| \\ &\le \left\| \frac{G_0}{\sqrt{n}} \right\| + \sup_{1 < k \le \sqrt[4]{n}} \left\| \frac{G_{\lfloor k \rfloor}}{\sqrt{n}} \right\| + \sup_{\sqrt[4]{n} < k \le \sqrt{n}} \left\| \frac{G_{\lfloor k \rfloor}}{n^{3/4}} \right\| + \sup_{\sqrt{n} < k \le n^{3/4}} \left\| \frac{G_{\lfloor k \rfloor}}{n} \right\| + \sup_{n^{3/4} < k \le n} \left\| \frac{G_{\lfloor k \rfloor}}{n^{5/4}} \right\| . \end{align*} The first term is $0$ as $G_0=0$. For the other terms, Theorem \ref{comt1} implies that $\|G_n\| \le (\|\mu\|+1) n$ for all $n \ge N$, where $N < \infty$ a.s.
So we get \begin{equation*} \sup_{1 < k \le \sqrt[4]{n}} \left\| \frac{G_{\lfloor k \rfloor}}{\sqrt{n}} \right\| \le \frac{1}{\sqrt{n}} \sup_{0 < k \le N} \left\| G_{\lfloor k \rfloor} \right\| + \sup_{N < k \le \sqrt[4]{n}} (\|\mu\|+1) \left(\frac{\lfloor k \rfloor}{\sqrt{n}}\right) \to 0 \end{equation*} as $n \to \infty$, because the supremum in the first term is over a finite number of the $G_{\lfloor k \rfloor}$. Similarly, we also have \begin{align*} \sup_{\sqrt[4]{n} < k \le \sqrt{n}} \left\| \frac{G_{\lfloor k \rfloor}}{n^{3/4}} \right\| &\le \frac{1}{n^{3/4}} \sup_{0 < k \le N} \left\| G_{\lfloor k \rfloor} \right\| + \sup_{N < k \le \sqrt{n}} (\|\mu\|+1) \left( \frac{\lfloor k \rfloor}{n^{3/4}} \right) \to 0 \\ \sup_{\sqrt{n} < k \le n^{3/4}} \left\| \frac{G_{\lfloor k \rfloor}}{n} \right\| &\le \frac{1}{n} \sup_{0 < k \le N} \left\| G_{\lfloor k \rfloor} \right\| + \sup_{N < k \le n^{3/4}} (\|\mu\|+1) \left( \frac{\lfloor k \rfloor}{n} \right) \to 0 \\ \sup_{n^{3/4} < k \le n} \left\| \frac{G_{\lfloor k \rfloor}}{n^{5/4}} \right\| &\le \frac{1}{n^{5/4}} \sup_{0 < k \le N} \left\| G_{\lfloor k \rfloor} \right\| + \sup_{N < k \le n} (\|\mu\|+1) \left( \frac{\lfloor k \rfloor}{n^{5/4}} \right) \to 0 \end{align*} as $n \to \infty$. Combining all of these bounds with Lemma \ref{coml5}, using the triangle inequality, we obtain the statement of Lemma \ref{coml4}, as desired. \end{proof} \begin{proof}[Alternative proof of Lemma \ref{coml4}] By definition, $G_n$ is an average of $S_1, S_2, \ldots, S_n$, so we immediately have $\sup_{0 < t \le 1} \left\|G_{\lfloor nt \rfloor} \right\| \le \sup_{0 < t \le 1} \left\|S_{\lfloor nt \rfloor} \right\|$.
So we can get \begin{align*} & \quad \sup_{0 < t \le 1} \left\| \left( \frac{1}{\sqrt{n}}G_{\lfloor nt \rfloor} - \frac{1}{\sqrt{n}} \times \frac{\lfloor nt \rfloor}{nt} G_{\lfloor nt \rfloor} \right) \right\| \\ &\le \sup_{0 < t \le 1} \frac{| nt-\lfloor nt \rfloor |}{n^{3/2}t} \| G_{\lfloor nt \rfloor} \| \\ &\le \sup_{0 < t \le 1} \left\| \frac{S_{\lfloor nt \rfloor}}{n^{3/2}t} \right\| , \end{align*} because $0 \le nt-\lfloor nt \rfloor \le 1$. Now, from the calculation in the proof of Lemma \ref{coml5} together with Lemma \ref{coml5} itself, the result follows. \end{proof} \noindent Now we are ready to prove Theorem \ref{comt3} and Theorem \ref{comt4}. \begin{proof}[Proof of Theorem \ref{comt3}] Similarly to the proof of Theorem \ref{comt1}, we apply the continuous mapping theorem to the functional $g$; with the continuity of $g$ given by Lemma \ref{coml2}, we immediately see that \begin{equation} g(X'_n) \overset{\textup{a.s.}}{\longrightarrow} g(I_\mu) . \label{c1} \end{equation} Also, from equation \eqref{cal1}, we know that \begin{equation} g(I_\mu)=\left( \frac{1}{2} \mu t \right)_{t \in [0,1]} . \end{equation} Finally, with equation \eqref{g2} and Lemma \ref{coml3}, using the triangle inequality, we see that \begin{equation*} \left\| \frac{1}{n}G_{\lfloor n \cdot \rfloor} - \frac{1}{2} I_\mu \right\|_\infty \le \sup_{0 < t \le 1} \left\| \frac{1}{n}G_{\lfloor nt \rfloor} - g_t(X'_n) \right\| + \sup_{0 < t \le 1} \left\| g_t(X'_n) - \frac{1}{2} I_\mu(t) \right\| \overset{\textup{a.s.}}{\longrightarrow} 0 . \end{equation*} Hence the proof is completed. \end{proof} \begin{proof}[Proof of Theorem \ref{comt4}] Similarly to the proof of Theorem \ref{comt2}, we apply Donsker's theorem and the continuous mapping theorem to the functional $g$; with the continuity of $g$ given by Lemma \ref{coml2}, we get \begin{equation} g(Y'_n) \Rightarrow g(M^{\frac{1}{2}}b_d) . \end{equation} Next, from equation \eqref{cal2}, we know that for any fixed $t$, $g_t(M^{\frac{1}{2}}b_d)$ is normally distributed with mean $0$ and variance $tM/3$.
Furthermore, given fixed $0 < t_1 \le t_2$, the increment satisfies \begin{align*} g_{t_2}(M^{\frac{1}{2}}b_d) - g_{t_1}(M^{\frac{1}{2}}b_d) &= \frac{1}{t_2} \int_0^{t_2}M^{\frac{1}{2}} b_d(r) \textup{d} r - \frac{1}{t_1} \int_0^{t_1}M^{\frac{1}{2}} b_d(s) \textup{d} s \\ &= \frac{1}{t_2} \int_{t_1}^{t_2}M^{\frac{1}{2}} b_d(r) \textup{d} r + \left(\frac{1}{t_2} - \frac{1}{t_1}\right) \int_{0}^{t_1}M^{\frac{1}{2}} b_d(r) \textup{d} r \\ &=\frac{M^{\frac{1}{2}}}{t_2}\left[(t_2-t_1)b_d(t_1) + \widetilde{B}(t_2-t_1)\right] + \left(\frac{1}{t_2} - \frac{1}{t_1}\right)M^{\frac{1}{2}}B(t_1) , \end{align*} where $\widetilde{B}(t_2-t_1):=\int_{t_1}^{t_2}\left[b_d(r)-b_d(t_1)\right]\textup{d} r$ is independent of $(b_d(s))_{s \le t_1}$ and has the same distribution as $B(t_2-t_1)$, namely $\mathcal{N}(0,\frac{(t_2-t_1)^3}{3}I_d)$. Hence, conditionally on $(b_d(s))_{s \le t_1}$, the increment is Gaussian with mean $\frac{(t_2-t_1)M^{\frac{1}{2}}}{t_2}b_d(t_1) + (\frac{1}{t_2}-\frac{1}{t_1})M^{\frac{1}{2}}B(t_1)$ and covariance $\frac{(t_2-t_1)^3}{3t_2^2}M$. So the increments, jointly with the past of the process, are normally distributed. This means that the finite-dimensional distributions of $g(M^{\frac{1}{2}}b_d)$ are multivariate normal, and thus $g(M^{\frac{1}{2}}b_d)$ is a non-stationary Gaussian process $\mathcal{GP}(0,K)$, where $K(t_1,t_2)$ is the covariance function of $g(M^{\frac{1}{2}}b_d)$ evaluated at the points $t_1$ and $t_2$. The mean function is $0$ because $g_t(M^{\frac{1}{2}}b_d)$ has mean zero for each fixed $t$.
Now to calculate $K$: for non-zero $t_1$ and $t_2$, without loss of generality suppose that $0< t_1 \le t_2$; then \begin{align*} K(t_1,t_2) &= \Cov \left(g_{t_1} \left(M^\frac{1}{2}b_d \right), g_{t_2} \left(M^\frac{1}{2}b_d\right)\right) \\ &= \Exp \left[g_{t_1}\left(M^{\frac{1}{2}}b_d \right) g_{t_2}\left(M^{\frac{1}{2}} b_d\right)^{\scalebox{0.6}{$\top$}} \right] \\ &= \Exp\left[\frac{1}{t_1} \int_0^{t_1}M^{\frac{1}{2}} b_d(r) \textup{d} r \times\frac{1}{t_2} \int_0^{t_2} b_d^{\scalebox{0.6}{$\top$}}(s) M^{\frac{1}{2}} \textup{d} s \right] \\ &= \frac{1}{t_1 t_2} \int_0^{t_1} \int_0^{t_2} M^{\frac{1}{2}} \Exp \left[b_d(r)b_d^{\scalebox{0.6}{$\top$}}(s) \right] M^{\frac{1}{2}} \textup{d} r \textup{d} s \\ &= \frac{M}{t_1 t_2} \int_0^{t_1} \int_0^{t_2} \min(r,s) \textup{d} r \textup{d} s \\ &= \frac{M}{t_1 t_2} \left[ \frac{t_1^3}{3} + \frac{t_1^2}{2}(t_2-t_1) \right] \\ &= \frac{M t_1}{6 t_2}(3t_2 -t_1) . \end{align*} If one of $t_1$ or $t_2$ is $0$, then $K(t_1,t_2)=0$, since $g_0(M^{\frac{1}{2}}b_d)=0$ by definition. Lastly, with equation \eqref{g4} and Lemma \ref{coml4}, by Slutsky's theorem (Theorem~\ref{thm:slutsky}), we get the result as desired. So we have proved the last theorem of this chapter. \end{proof} \end{comment}
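As a sanity check, the covariance formula just derived can be verified by simulation for $d=1$ and $M=1$: with $t_1=0.4$ and $t_2=0.8$ we have $t_1(3t_2-t_1)/(6t_2)=1/6$. The following Monte Carlo sketch discretizes the Brownian paths on a grid; grid size, path count, and times are illustrative choices.

```python
import numpy as np

# Sketch: Monte Carlo check of Cov(g_{t1}(b), g_{t2}(b)) = t1*(3*t2 - t1)/(6*t2)
# for g_t(b) = (1/t) * int_0^t b(s) ds, with d = 1 and M = 1.
rng = np.random.default_rng(4)
n_paths, n_steps = 40_000, 200
t1, t2 = 0.4, 0.8
dt = t2 / n_steps
incs = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(incs, axis=1)               # Brownian paths on the grid
k1 = int(round(t1 / dt))
g1 = W[:, :k1].sum(axis=1) * dt / t1      # Riemann sum for g_{t1}(b)
g2 = W.sum(axis=1) * dt / t2              # Riemann sum for g_{t2}(b)
est = np.mean(g1 * g2)                    # both averages have mean zero
print(est)                                # close to 0.4*(2.4 - 0.4)/4.8 = 1/6
```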
https://arxiv.org/abs/1706.05975
The discrete yet ubiquitous theorems of Carathéodory, Helly, Sperner, Tucker, and Tverberg
We discuss five discrete results: the lemmas of Sperner and Tucker from combinatorial topology and the theorems of Carathéodory, Helly, and Tverberg from combinatorial geometry. We explore their connections and emphasize their broad impact in application areas such as game theory, graph theory, mathematical optimization, computational geometry, etc.
\section*{Acknowledgments} J. A. De Loera was partially supported by the LabEx Bezout (ANR-10-LABX-58). He is grateful to the labex BEZOUT and the CERMICS research center at \'Ecole Nationale des Ponts et Chauss\'ees for the support received, and the enjoyable and welcoming environment in which the topics in this paper were discussed and built over several visits. De Loera was also partially supported by NSF grant DMS-1522158. Mustafa was supported by the grant ANR SAGA (JCJC-14-CE25-0016-01). Goaoc was partially supported by Institut Universitaire de France. The authors are truly grateful to the two anonymous referees who provided a very large and very detailed set of comments, insights, and corrections. Their careful reading of this work was invaluable and clearly took a great deal of effort and time. Very special thanks to Peter Landweber, who gave us numerous corrections too. We are also grateful to the following colleagues who gave us support and many comments and references: Ian Agol, Imre B\'ar\'any, Pavle Blagojevi\'c, Boris Bukh, Sabrina Enriquez, Florian Frick, Alexei Garber, Tommy Hogan, Andreas Holmsen, Wolfgang Mulzer, Oleg Musin, Deborah Oliveros, J\'anos Pach, D\H{o}m\H{o}t\H{o}r P\'alv\H{o}lgyi, Axel Parmentier, Lily Silverstein, Pablo Sober\'on, Francis Su, Uli Wagner, and G\"unter M. Ziegler. We are grateful for their help. \bibliographystyle{acm} \section{Data point sets} \label{s:datapoints} In this section we consider some computer science results that are either applications of the main theorems or strongly related to them. There are simply far too many results for us to do justice to even a small number of them; thus we restrict ourselves to a few central themes, including classification, geometric shape analysis, and partitioning of $n$ points in $d$-dimensional Euclidean spaces.
These will involve an interplay between affine geometric and topological techniques, offering the usual mix of advantages and drawbacks of the two: affine tools -- Radon's lemma, Helly's theorem, linear programming duality, simplicial decompositions -- will imply fast algorithms, though they apply to a restricted class of geometric objects. Topological tools -- the Borsuk-Ulam theorem, Tucker's lemma -- yield broader structural statements, though, at this moment, settling the algorithmic feasibility of these methods remains a major open problem. \subsection{Equipartitioning: ham sandwich theorem and its relatives} We say that a hyperplane $h$ \emph{bisects} a set $P$ of points if each of the two open halfspaces defined by $h$ contains at most $ \frac{|P|}{2} $ points of $P$. Note that if $|P|$ is odd, then a point of $P$ must necessarily lie on $h$. The famous \emph{ham sandwich theorem} is the starting point of a large number of results concerning equipartitioning of geometric objects with other geometric objects. \begin{theorem} Let $P_1, \ldots, P_d$ be finite point sets in $\mathbb{R}^d$. Then there exists a hyperplane $h$ that simultaneously bisects each $P_i$, $i = 1, \ldots, d$. \label{thm:hamsandwich} \end{theorem} The theorem holds more generally for finite Borel measures that evaluate to zero on every affine hyperplane. All known proofs of Theorem~\ref{thm:hamsandwich} are essentially topological in nature. A classical proof follows from the Borsuk-Ulam theorem by identifying points of $\mathbb{S}^d$ with hyperplanes in $\mathbb{R}^d$, where the function $f: \mathbb{S}^d \to \mathbb{R}^d$ encodes the ``imbalance'' of the $d$ point sets (more generally, measures) with respect to that hyperplane~\cite{matousek2003using}. We now outline another proof of the ham sandwich theorem using Tucker's lemma. The proof we present was found independently by Holmsen and by the third author (both unpublished).
For simplicity we will assume that the given point sets $P_1, \ldots, P_d$ are in general position, and let $\bigcup_{i=1}^d P_i = \{\ve p_1, \ldots, \ve p_n\}$. Each pair $(\boldsymbol{a},b)\in(\mathbb{R}^d\times\mathbb{R}_+)\setminus\{\boldsymbol{0},0\}$ induces a {\em sign pattern} $\boldsymbol{x}\in\{+,-,0\}^n$ with $x_j$ being the sign of $\boldsymbol{a}\cdot p_j+b$. We will apply Tucker's lemma (Proposition~\ref{tuckerlemma}) on an abstract simplicial complex induced by these sign patterns to show that there is a pair $(\boldsymbol{a},b)$ whose sign pattern $\boldsymbol{x}$ is such that $\{p_j\colon j\in \boldsymbol{x}^+\}$ and $\{p_j\colon j\in \boldsymbol{x}^-\}$ each contain at most half of each $P_i$. Then the hyperplane $\{\boldsymbol{y}\in\mathbb{R}^d\colon\boldsymbol{a}\cdot\boldsymbol{y}+b=0\}$ bisects each $P_i$. Denote by $\mathcal{P}$ the partially ordered set of all achievable sign patterns endowed with the partial order $\preceq$ (see Section~\ref{s:topology} before Tucker's octahedral lemma). It is a well-known result from oriented matroid theory that the order complex $\mathsf{T}$ of $\mathcal{P}$ is a triangulation of $\mathbb{B}^d$ that is symmetric on its boundary. Suppose, for contradiction, that there is no hyperplane bisecting each $P_i$. Given an $\boldsymbol{x}$ in $\mathcal{P}$, we define $\lambda(\boldsymbol{x})$ to be $\varepsilon i$, where $i$ is the smallest index such that either $\{\ve p_j\colon j\in \boldsymbol{x}^+\}\cap P_i$ or $\{\ve p_j\colon j\in \boldsymbol{x}^-\}\cap P_i$ contains more than half of the points of $P_i$, and where $\varepsilon$ is $+$ if it is the first set and is $-$ if it is the second. This map $\lambda$ is clearly antipodal on the boundary of $\mathsf{T}$ and labels the vertices of $\mathsf{T}$ with the elements of $\{\pm 1,\ldots,\pm d\}$. According to the Tucker lemma, there exists an edge $\boldsymbol{u}\boldsymbol{v}$ of $\mathsf{T}$ with $\lambda(\boldsymbol{u})+\lambda(\boldsymbol{v})=0$. 
Without loss of generality, we assume that $\boldsymbol{u}\preceq\boldsymbol{v}$ and that $-\lambda(\boldsymbol{u})=\lambda(\boldsymbol{v})=k$ for some $k\in[d]$. By definition of $\lambda$, we then have $|\{\ve p_j\colon j\in \boldsymbol{u}^-\}\cap P_k|>|P_k|/2$ and $|\{\ve p_j\colon j\in \boldsymbol{v}^+\}\cap P_k|>|P_k|/2$. Combined with $\boldsymbol{u}^-\subseteq\boldsymbol{v}^-$, this implies that $|\{\ve p_j\colon j\in \boldsymbol{v}^-\cup\boldsymbol{v}^+\}\cap P_k|>|P_k|$, a contradiction. We will see further applications of the ham sandwich theorem later on, but for now we point out that it gives another proof of Theorem~\ref{thm:necklace}: given an open necklace with $t$ types of beads to be divided equally between two thieves, embed the beads of the necklace along a moment curve in $\mathbb{R}^t$, and use a hyperplane $h$ guaranteed by Theorem~\ref{thm:hamsandwich} to bisect each type of bead. As any hyperplane intersects a moment curve in at most $t$ points, $h$ splits the open necklace into $t+1$ pieces that can then be divided among the two thieves. Note that from the above discussion, we have the following ``computational hierarchy'': computing a solution to this variation of the octahedral Tucker problem is harder than computing a ham sandwich cut (note also that this implies that the latter is in PPA), which is harder than computing a solution to the fair splitting necklace problem. In particular, the paper of Filos-Ratsikas and Goldberg \cite{2018Filos-Ratsikas} proves that computing a ham sandwich cut is PPA-complete (see Section \ref{cakes+necklaces}). As far as the efficiency aspects of Theorem~\ref{thm:hamsandwich} are concerned, a line bisecting two given point sets with $n$ points in total in $\mathbb{R}^2$ can be computed in $O(n)$ time.
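To make the planar case concrete, here is a brute-force sketch (not the $O(n)$ algorithm): if $|P_1|$ and $|P_2|$ are odd and the points are in general position, a ham sandwich line must pass through one point of each set, so trying all pairs takes $O(n^3)$ time. The point sets, sizes, and helper names below are our own illustrative choices.

```python
import itertools

import numpy as np

def bisects(p, q, pts):
    # The line through p and q bisects pts if each open half-plane
    # contains at most len(pts) // 2 of the points.
    side = (q[0] - p[0]) * (pts[:, 1] - p[1]) - (q[1] - p[1]) * (pts[:, 0] - p[0])
    return (side > 0).sum() <= len(pts) // 2 and (side < 0).sum() <= len(pts) // 2

def ham_sandwich_line(P1, P2):
    # For odd-sized sets in general position, some ham sandwich line
    # passes through one point of each set, so checking every pair
    # of points (one from each set) is guaranteed to find one.
    for p, q in itertools.product(P1, P2):
        if bisects(p, q, P1) and bisects(p, q, P2):
            return p, q
    return None

rng = np.random.default_rng(3)
P1 = rng.standard_normal((25, 2))
P2 = rng.standard_normal((35, 2)) + 2.0
p, q = ham_sandwich_line(P1, P2)  # a bisecting line through one point of each set
```

The existence argument behind the brute force: a ham sandwich line leaves at most $\lfloor|P_i|/2\rfloor$ points strictly on each side, so for odd $|P_i|$ it must contain a point of $P_i$; in general position it therefore passes through exactly one point of each set.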
In $\mathbb{R}^d$, the best known algorithm for computing a ham sandwich cut for $d$ point sets in $\mathbb{R}^d$ runs in time $O(n^{d-1})$~\cite{LMS94}; in fact, the proposed algorithm is based on a new proof of Theorem~\ref{thm:hamsandwich} that proceeds by induction on the dimension and thus is more amenable to efficient algorithm design. By now there are dozens of variants of the ham sandwich theorem, and generalizations to other types of bisecting and bisected objects. We now present a few nice examples. For the ham sandwich theorem, see also \cite{matousek2003using,Zivaljevic:Handbook} and references therein. One variation is called the \emph{center transversal theorem}: given $s+1$ point sets $P_0, \ldots, P_s$ in $\mathbb{R}^d$, where $s \in \{0, \ldots, d-1\}$, there exists an $s$-dimensional affine subspace $h$ of $\mathbb{R}^d$ such that any hyperplane containing $h$ has at least a $\frac{1}{d-s+1}$ fraction of the points of $P_i$ on each side, for $i= 0, \ldots, s$. In fact, it has been conjectured that the constant $\frac{1}{d-s+1}$ can be replaced by $\frac{s+1}{d+s+1}$; see \cite{BMN08}. Theorem~\ref{thm:hamsandwich} is the case $s = d-1$, and the case $s=0$ is the important centerpoint theorem that we encountered in the introduction and will visit again later. We refer the reader to the book~\cite{matousek2003using} for many variants of this and related theorems in $\mathbb{R}^2$ and higher dimensions. Here are now two famous conjectures. \begin{conjecture} Let $P$ be a set of points in $\mathbb{R}^4$. Then there exists a set $\mathcal{H}$ of four hyperplanes such that each of the resulting $16$ open regions of $\mathbb{R}^4 \setminus \mathcal{H}$ contains at most $\frac{|P|}{16}$ points of $P$. \end{conjecture} In the papers \cite{pfag2015,pfag2016}, the authors showed that in $\mathbb{R}^5$ it is indeed possible to find four hyperplanes that divide any given point set into 16 equal parts. See also \cite{SimonS}.
We should mention here a related theorem of Yao and Yao~\cite{YaoYao85} (see also Theorem~\ref{thm:simplicialpartitions}): Given a set $P$ of $n$ points in $\mathbb{R}^d$, one can partition $\mathbb{R}^d$ into $2^d$ regions such that the interior of each region contains at most $\frac{n}{2^d}$ points of $P$, and any hyperplane intersects the interior of at most $2^d-1$ regions. Next we consider another conjecture by Tverberg and Vre{\'c}ica~\cite{TverbergVrecica}. \begin{conjecture} Let $0 \le k \le d-1$ be a given parameter, and let $P_0, P_1, \ldots, P_k$ be finite point sets in $\mathbb{R}^d$. If $|P_i| = (r_i-1)(d-k+1)+1$ for $i=0,1,\ldots, k$, then each $P_i$ can be partitioned into $r_i$ parts $P_{i,1}, P_{i,2}, \ldots, P_{i,r_i}$ such that the sets $\{\conv(P_{i,j}) \colon 0 \le i \le k, 1 \le j \le r_i\}$ can be intersected by a $k$-dimensional affine subspace. \end{conjecture} Note that Tverberg's theorem is the case $k=0$ in the above statement. If one wants to partition more than $d$ point sets in $\mathbb{R}^d$, then hyperplanes are often insufficient; however, the following important variant of the ham sandwich theorem, due to Stone and Tukey~\cite{ST42}, shows that polynomials of sufficiently high degree can then be used to perform the partition. We say that a $d$-variate polynomial $f \in \mathbb{R}[x_1, \ldots, x_d]$ \emph{bisects} a point set $P \subseteq \mathbb{R}^d$ if it is negative on at most $\frac{|P|}{2}$ points of $P$ and likewise positive on at most $\frac{|P|}{2}$ points of $P$. Note that for polynomials of degree one, this coincides with our earlier definition of bisection for hyperplanes. \begin{theorem} Let $P_1, \ldots, P_s$ be finite point sets in $\mathbb{R}^d$. Then there exists a $d$-variate polynomial $f \in \mathbb{R}[x_1, \ldots, x_d]$ of degree $O(s^{\frac{1}{d}})$ such that $f$ bisects each $P_i$, $i= 1, \ldots, s$.
\label{thm:tukeystone} \end{theorem} The idea is to reduce the above problem to the usual ham sandwich theorem in a suitably high dimension. As a $d$-variate polynomial of degree $D$ has $d' = \binom{D+d}{d}-1$ monomials (aside from the constant term), identify each such monomial with a distinct dimension of $\mathbb{R}^{d'}$. Then each $d$-variate polynomial $f$ of degree $D$ can be identified with a hyperplane in $\mathbb{R}^{d'}$, where the coefficients defining the hyperplane in $\mathbb{R}^{d'}$ (i.e., the $d'$ coordinates of the normal vector of the hyperplane) correspond to the coefficients of the corresponding monomials of $f$. This also gives a mapping -- called the \emph{Veronese mapping} -- of the points in $P_1 \cup \cdots \cup P_s$ to $\mathbb{R}^{d'}$, where the $i$-th coordinate of the image of a point $\ve p \in \mathbb{R}^d$ is the value of the corresponding monomial at $\ve p$. One can now use Theorem~\ref{thm:hamsandwich} on the $s$ sets corresponding to the lifted points of $P_1, \ldots, P_s$ to get a hyperplane $h$ in $\mathbb{R}^{d'}$ that bisects each of the $s$ lifted sets. Note that to use the ham sandwich theorem, we require $s \leq d' = \binom{D+d}{d}-1$ and thus need to satisfy the constraint $D = \Omega( s^{\frac{1}{d}} )$. The ham sandwich hyperplane $h$ corresponds to the required $d$-variate polynomial in $\mathbb{R}^{d}$, of degree $O( s^{\frac{1}{d}} )$. \subsection{Parametrized partitioning of data via geometric methods} \label{s:geompart} So far the partitioning statements have been of the type where the input geometric configuration precisely fixes the type of the output -- e.g., an input of $d$ point sets in $\mathbb{R}^d$ fixes the output of Theorem~\ref{thm:hamsandwich} to be a hyperplane, and an input of $s$ point sets in $\mathbb{R}^d$ in Theorem~\ref{thm:tukeystone} fixes the output to be a polynomial of degree $O(s^{\frac{1}{d}})$.
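The Veronese lifting used in the proof of Theorem~\ref{thm:tukeystone} above is straightforward to implement; a small Python sketch (the helper name is ours), listing one coordinate per monomial of degree $1$ to $D$:

```python
from itertools import combinations_with_replacement
from math import comb

def veronese(p, D):
    # lift a point p in R^d to R^{d'}, d' = C(D+d, d) - 1: one coordinate
    # per monomial of degree between 1 and D in the d variables
    d = len(p)
    coords = []
    for deg in range(1, D + 1):
        for mono in combinations_with_replacement(range(d), deg):
            v = 1.0
            for i in mono:
                v *= p[i]
            coords.append(v)
    assert len(coords) == comb(D + d, d) - 1
    return coords

# for (x, y) and D = 2 the monomials are x, y, x^2, xy, y^2
assert veronese((2.0, 3.0), 2) == [2.0, 3.0, 4.0, 6.0, 9.0]
```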
Now we consider statements where, besides the input geometric configuration, one is also given an independent parameter $r$, and the complexity of the output is a function of both the geometric configuration and the value of $r$. Thus one gets a hierarchy of output structures (varying with $r$), and one is free to choose the value of $r$ depending on the precise problem at hand. This turns out to be very useful for designing hierarchical data structures, where one can pick the value of $r$ to maximize computational efficiency. These kinds of partitioning statements -- which we call \emph{parameterized spatial partitioning} -- have been a key theme in discrete and computational geometry for both algorithmic and proof purposes. Consider, for example, \emph{Hopcroft's problem} studied in the early 1980s: given a set of lines $L$ and a set of points $P$, an \emph{incidence} is a pair $(\ve p, l)$, where $\ve p \in P$, $l \in L$, and the point $\ve p$ lies on the line $l$. Then given $L$ and $P$, is there an efficient method to determine if there exists at least one incidence between them? It is not difficult to see that a spatial partition that either partitions the points or partitions the lines (in a suitable sense) is useful for decomposing the original problem into several problems of smaller size; the current best algorithm, with a running time of $2^{O(\log^* n)} \cdot n^{\frac{4}{3}}$, is based on such techniques. We refer the reader to~\cite{E96} for details on this and other related results on Hopcroft's problem. For a more mathematical application in a similar setting, consider the following question posed by Erd\H os~\cite{E46}: what is the maximum number of incidences between any $n$ points and any $n$ lines in the plane? Erd\H os observed that as the bipartite incidence graph between points and lines is $K_{2,2}$-free, this number is upper-bounded by $O\big(n^{\frac{3}{2}}\big)$.
More generally, the maximum number of incidences between $m$ lines and $n$ points is $O( n \sqrt{m} + m )$. On the other hand, a set of points in a `grid-like' configuration exhibits $\Omega\big(n^{\frac{4}{3}}\big)$ incidences, and Erd\H os conjectured this to be, asymptotically, the right bound. This question was resolved affirmatively by Szemer\'edi and Trotter by a complicated combinatorial argument~\cite{ST83}. We sketch here a beautiful and simple proof of this theorem by Clarkson et al.~\cite{CEGSW90} that showcases the use of spatial partitioning for proving combinatorial bounds. Given a set $L$ of $n$ lines in the plane, suppose that for any parameter $r>1$, there exists a partition of the plane into $t = O(r^2)$ (possibly unbounded) interior-disjoint triangles $\Pi = \{ \triangle_1, \ldots, \triangle_t \}$ such that each triangle $\triangle_i$, for $i= 1, \ldots, t$, intersects at most $\frac{n}{r}$ lines of $L$ in its interior. Now one can partition the incidences between $P$ and $L$ into those for which the points lie on the boundary of some triangle, and those for which the points lie in the interior of some triangle of $\Pi$. It is not difficult to see that the former number is only $O(nr)$; on the other hand, a triangle $\triangle_i$ intersecting $m_i$ lines of $L$ in its interior and containing $n_i$ points of $P$ can contain $O( n_i \sqrt{m_i} )$ incidences in its interior (via the graph-theoretic bound above). This gives the overall number of incidences lying in the interior of triangles to be $\sum_i O( n_i \sqrt{m_i} ) = O(n \sqrt{ \frac{n}{r} })$. Thus for any $r>1$, the total number of incidences is bounded by $O\left(nr + \frac{n^{\frac{3}{2}}}{\sqrt{r}}\right)$, and setting $r= \Theta(n^{\frac{1}{3}})$ gives the desired bound! We now elaborate on this structural partitioning problem and its variations. The key behind the proof is the partition of $\mathbb{R}^2$ into triangles, each of which intersects ``proportionally few'' lines of $L$.
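The balancing step at the end of the Clarkson et al.\ sketch can be checked numerically; the function below (ours) evaluates the two terms of the bound, with constants dropped:

```python
def incidence_bound(n, r):
    # O(nr) incidences on triangle boundaries plus the interior term
    # sum_i n_i * sqrt(n / r) = n * sqrt(n / r)  (constants dropped)
    return n * r + n ** 1.5 / r ** 0.5

n = 10 ** 6
r = round(n ** (1 / 3))          # the choice r = Theta(n^(1/3)) from the text
balanced = incidence_bound(n, r)
assert balanced <= 3 * n ** (4 / 3)             # both terms are Theta(n^(4/3))
assert incidence_bound(n, 1) > balanced          # too coarse a partition
assert incidence_bound(n, n ** 0.5) > balanced   # too fine a partition
```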
More generally, in any dimension one can show the existence of a similar partition for hyperplanes~\cite{C93}. \begin{theorem} Given a set $\mathcal{H}$ of hyperplanes in $\mathbb{R}^d$ and a parameter $r \geq 1$, there exists a partition of $\mathbb{R}^d$ into $\Theta( r^d )$ interior-disjoint simplices such that the interior of any simplex intersects at most $\frac{|\mathcal{H}|}{r}$ hyperplanes of $\mathcal{H}$. \label{thm:cuttings} \end{theorem} Such a partition is called a $ \frac{1}{r} $-cutting of $\mathcal{H}$. Note that the bound of $\Theta(r^d)$ cannot be improved: each simplex can contain at most $(\frac{|\mathcal{H}|}{r})^d$ vertices induced by $d$-tuples of $\mathcal{H}$ in its interior, and so there must be $\Omega(r^d)$ simplices to account for all the $\Theta( |\mathcal{H}|^d )$ vertices induced by $\mathcal{H}$. The intuition behind Theorem~\ref{thm:cuttings} is as follows. Include each hyperplane of $\mathcal{H}$ in a random sample $R$ independently with probability $p$ (to be set later). Then $\mathrm{E} [|R| ]= |\mathcal{H}| \cdot p$, and so $R$ partitions $\mathbb{R}^d$ into $O\big( (|\mathcal{H}|p)^d \big)$ induced cells, each of which can then be further partitioned into simplices, to get a partition of $\mathbb{R}^d$ into an expected total of $O\big( (|\mathcal{H}|p)^d \big)$ simplices. Furthermore, note that each such simplex intersects, in expectation, $\frac{1}{p}$ hyperplanes of $\mathcal{H}$ in its interior. Setting $p = \frac{r}{|\mathcal{H}|}$ gives the required statement. This argument was done `in expectation', and it is non-trivial to convert it, with the same asymptotic bounds, into a partition in which each simplex is guaranteed to intersect no more than $\frac{|\mathcal{H}|}{r}$ hyperplanes of $\mathcal{H}$. The proof of Theorem~\ref{thm:cuttings} is usually presented in the more general abstract framework of the theory of $\varepsilon$-nets, whose setting we briefly describe now.
Given a base set of elements $X$, a set system $\mathcal{R}$ on $X$, and a parameter $0 \leq \varepsilon \leq 1$, call a set $N \subseteq X$ an $\varepsilon$-net for $\mathcal{R}$ if $N$ contains at least one element from each $R \in \mathcal{R}$ with $|R| \geq \varepsilon \cdot |X|$. In the case of $\frac{1}{r}$-cuttings, the set $X$ is the set of hyperplanes $\mathcal{H}$, and $R \in \mathcal{R}$ if and only if there exists a simplex $\Delta$ with $R = \mathrm{int}(\Delta) \cap \mathcal{H}$. Then a $\frac{1}{r}$-cutting can be constructed by taking an $\varepsilon$-net $N$ for $\mathcal{R}$ with $\varepsilon = \frac{1}{r}$ and partitioning $\mathbb{R}^d$ into $O( |N|^d )$ simplices using $N$. As the interior of each simplex induced by $N$ does not intersect any hyperplane of $N$, it must intersect fewer than $\varepsilon \cdot |\mathcal{H}| = \frac{|\mathcal{H}|}{r}$ hyperplanes of $\mathcal{H}$, as desired. Bounds on $\varepsilon$-nets have been extensively studied in terms of a combinatorial parameter called the \emph{VC dimension}~\cite{VC71}: given a set system $(X, \mathcal{R})$, define the projection of $\mathcal{R}$ onto a set $Y \subseteq X$, denoted $\mathcal{R}|_Y$, as the set system $\mathcal{R}|_Y = \{ Y \cap R \colon R \in \mathcal{R}\}$. We say that a subset $Y$ is \emph{shattered} by $\mathcal{R}$ if all $2^{|Y|}$ subsets of $Y$ can be realized by intersection with some set of $\mathcal{R}$, i.e., if $|\mathcal{R}|_Y| = 2^{|Y|}$. Then the VC dimension of $\mathcal{R}$ is defined to be the size of the largest set that is shattered by $\mathcal{R}$. The VC dimension plays an important role in the theory of set systems derived from geometric configurations due to the fact that the VC dimension of such systems is usually quite small.
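For finite set systems, both shattering and the VC dimension can be computed by brute force directly from these definitions; a short Python sketch (function names ours):

```python
from itertools import combinations

def is_shattered(Y, sets):
    # Y is shattered if every subset of Y arises as Y ∩ R for some R
    traces = {frozenset(Y) & frozenset(R) for R in sets}
    return len(traces) == 2 ** len(Y)

def vc_dimension(X, sets):
    # size of the largest shattered subset of X (exponential brute force)
    for k in range(len(X), -1, -1):
        if any(is_shattered(Y, sets) for Y in combinations(X, k)):
            return k
    return 0

# the traces of intervals on {1, 2, 3}: VC dimension of intervals is 2,
# since {1, 3} cannot be cut out of {1, 2, 3} by an interval
X = [1, 2, 3]
intervals = [[], [1], [2], [3], [1, 2], [2, 3], [1, 2, 3]]
assert vc_dimension(X, intervals) == 2
```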
For example, consider the set system where $X$ is a finite set of $n$ points in $\mathbb{R}^d$, and the subsets in $\mathcal{R}$ are derived from intersection with halfspaces; here $R \in \mathcal{R}$ if and only if there exists a halfspace $h$ such that $R = h \cap X$. The VC dimension of this set system is $d+1$; in other words, given any set $Y$ of $d+2$ points in $\mathbb{R}^d$, one cannot `separate' all subsets of $Y$ by intersection with halfspaces. This is an immediate consequence of Radon's lemma (recall that Radon's lemma is the case of Tverberg's theorem for two parts, $r = 2$, namely that any set $P$ of $d+2$ points in $\mathbb{R}^d$ can be partitioned into two subsets $P_1, P_2$ such that $\conv(P_1) \cap \conv(P_2) \neq \varnothing$): the convex hulls of the two parts of a Radon partition intersect, and thus the parts cannot be separated by a hyperplane. On the other hand, any set of $d+1$ points in general position can be shattered by halfspaces in $\mathbb{R}^d$. By Veronese maps, this implies more generally bounded VC dimension for set systems induced by geometric objects of bounded algebraic complexity (see~\cite[Chapter 10]{Mbook}). Returning to $\varepsilon$-nets, building on the work of Vapnik and Chervonenkis~\cite{VC71}, Haussler and Welzl~\cite{Haussler:1987fr} showed the existence of small $\varepsilon$-nets as a function of the VC dimension of a set system. \begin{theorem} Let $(X, \mathcal{R})$ be a finite set system with VC dimension at most $d\geq 1$. Then for any real parameter $\varepsilon > 0$, there exists an $\varepsilon$-net for $\mathcal{R}$ of size $O(\frac{d}{\varepsilon} \log \frac{1}{\varepsilon} )$. \label{thm:hwepsnetsthm} \end{theorem} The power of this theorem derives from the fact that the size is independent of the number of elements in $X$ and the number of sets in $\mathcal{R}$.
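As a toy illustration of the $\varepsilon$-net property, consider the set system of intervals on a finite ground set (VC dimension $2$), where even a deterministic net of size $\frac{1}{\varepsilon}$ works; Theorem~\ref{thm:hwepsnetsthm} guarantees small nets for every bounded-VC system via random sampling. All names below are ours:

```python
def is_eps_net(N, X, sets, eps):
    # N is an eps-net if it hits every set containing >= eps * |X| elements
    thr = eps * len(X)
    return all(set(R) & set(N) for R in sets if len(R) >= thr)

X = list(range(100))
intervals = [X[i:j] for i in range(100) for j in range(i + 1, 101)]
# every interval with >= 10 elements contains a multiple of 10:
N = list(range(0, 100, 10))
assert is_eps_net(N, X, intervals, 0.1)       # a 0.1-net of size 10 = 1/eps
assert not is_eps_net(N, X, intervals, 0.05)  # but not a 0.05-net
```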
Combined with the VC dimension bound of $d+1$ for set systems induced on points in $\mathbb{R}^d$ by halfspaces, Theorem~\ref{thm:hwepsnetsthm} implies the existence of $\varepsilon$-nets of size $O \left( \frac{d}{\varepsilon} \log \frac{1}{\varepsilon} \right)$, which has been shown to be optimal~\cite{KMP16}. The bounds of Theorem~\ref{thm:hwepsnetsthm} can be further improved for many geometric set systems, and recent work presents a unified framework for these bounds~\cite{CGKS12, MDG16,V10}. We refer the reader to the books~\cite[Chapter 15]{PA95},~\cite[Chapter 10]{Mbook},~\cite[Chapter 4]{C00} and~\cite[Chapter 5]{M99} for a more detailed exposition on $\varepsilon$-nets and their many applications. The theory of VC dimension does not help in the construction of $\varepsilon$-nets for set systems whose VC dimension is unbounded. A basic case is the set system induced by convex objects in $\mathbb{R}^d$; namely, given a set $P$ of $n$ points in $\mathbb{R}^d$ and a parameter $\varepsilon>0$, one would like to show the existence of a small set $Q \subset \mathbb{R}^d$, called a \emph{weak $\varepsilon$-net} for $P$ induced by convex objects, such that any convex object containing at least $\varepsilon n$ points of $P$ must contain at least one point of $Q$. Note here that $Q$ can be any set of points in $\mathbb{R}^d$, and is not just limited to being a subset of $P$ -- hence the term ``weak''. An initial bound of $O(\frac{1}{\varepsilon^{d+1}})$ on the size of $Q$ was shown by Alon et al.~\cite{ABFK92} (their proof uses the colorful Carath\'eodory theorem together with Tverberg's theorem; see the discussion about simplicial depth in Section \ref{depthsec}), and this was improved to $\tilde{O}(\frac{1}{\varepsilon^{d}})$ by Chazelle et al.~\cite{CEGGSW93}. This was improved even further, by logarithmic factors, by Matou{\v{s}}ek and Wagner~\cite{MW02}, whose elegant proof we outline now.
Partition $P$ into $t$ equal sized subsets $\mathcal{P} = \{P_1, \ldots, P_t\}$, for a parameter $t$ that is chosen optimally, such that any hyperplane intersects the convex hulls of $O(t^{1-1/d})$ subsets of $\mathcal{P}$. Let $C$ be a convex object containing $\varepsilon n$ points of $P$. When $C$ intersects ``few'' sets of $\mathcal{P}$, some $P_i \in \mathcal{P}$ has a fraction higher than $\varepsilon$ of its points contained in $C$, hence $C$ can be hit by a set constructed inductively for $P_i$. Otherwise, $C$ intersects many sets of $\mathcal{P}$. In this case, pick one arbitrary point from each set of $\mathcal{P}$ intersecting $C$. Let $\ve q$ be the centerpoint of those points. Then $\ve q$ must be contained in $C$, as otherwise a hyperplane separating $\ve q$ from $C$ would intersect the convex hulls of many sets in $\mathcal{P}$, contradicting the definition of $\mathcal{P}$. For the case $d=2$, Rubin~\cite{rubin18} proved the bound of $O( \frac{1}{\varepsilon^{1.5+\delta}} )$, where $\delta > 0$ is an arbitrarily small constant. Finding asymptotically optimal bounds for weak $\varepsilon$-nets induced by convex objects in $\mathbb{R}^d$ is a tantalizing open problem. The best known lower bounds of $\Omega( \frac{1}{\varepsilon} (\log \frac{1}{\varepsilon})^d )$~\cite{BMV11} are quite far from the upper bounds. On the other hand, there are partial results indicating that the upper bounds can be improved; e.g., it is known~\cite{MR08} that one can construct weak $\varepsilon$-nets from $\tilde{O}(\frac{1}{\varepsilon})$ points of $P$ in $\mathbb{R}^d$: pick a set $Q'$ of $\tilde{O}(\frac{1}{\varepsilon})$ points that form an $\varepsilon$-net for the set system induced by the intersection of $d$ halfspaces. Then adding points lying in Tverberg partitions of carefully chosen subsets of $Q'$ results in a weak $\varepsilon$-net, though of size $O(\frac{1}{\varepsilon^{d+2}})$.
For a different formulation, one can also fix an integer parameter $k > 0$ and ask for the minimization problem: find the smallest $\varepsilon = \varepsilon(k)$ such that for any set $P$ of points in $\mathbb{R}^d$, there exists a set $Q \subset \mathbb{R}^d$ of $k$ points such that all convex objects containing at least $\varepsilon \cdot |P|$ points of $P$ are hit by $Q$ (see~\cite{MR09b}). \begin{oproblem} What is the asymptotically best bound for the size of the smallest weak $\varepsilon$-net for the set system induced by convex objects in $\mathbb{R}^d$? \end{oproblem} We next turn to the algorithmic aspects of spatial partitioning. There has been substantial work on improving the constants in the bounds on $\varepsilon$-nets, as they are directly linked to the approximation ratios of algorithms for the geometric hitting set problem~\cite{BG95, ERS05}. Given a set $P$ of points in $\mathbb{R}^d$ and a set $\mathcal{O}$ of geometric objects, the goal is to compute a minimum subset of $P$ that hits all the objects in $\mathcal{O}$. This can be written as an integer program, which is then approximated as follows: (a) solve the linear relaxation of the integer program (i.e., the linear program obtained by replacing the integral constraints with real ones), (b) assign weights to the points of $P$ according to the linear program, and finally (c) compute a $\frac{1}{W}$-net $N$ with respect to these weights for the set system induced by $\mathcal{O}$, where $W$ is the value of the linear programming relaxation. As the weight of the points contained in each object of $\mathcal{O}$ is at least $1$ by the linear program constraints, $N$ is a hitting set for $\mathcal{O}$. The size of $N$ is then bounded by the size of a $\frac{1}{W}$-net; e.g., if an $\varepsilon$-net of size $\frac{c}{\varepsilon}$ exists for some constant $c$, then $N$ has size at most $c \cdot W$ -- i.e., at most $c$ times the optimal solution.
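The rounding argument in steps (a)--(c) can be checked on a toy instance; in the sketch below (names ours) the fractional LP solution is supplied by hand rather than solved:

```python
def lp_round_check(weights, objects, N):
    # weights: a feasible fractional LP solution (each object has total
    # weight >= 1); N: a (1/W)-net with respect to the weighted measure,
    # where W is the total weight.  Every object then carries a weight
    # fraction >= 1/W of the total, so the net N must hit it.
    W = sum(weights.values())
    for O in objects:
        assert sum(weights[p] for p in O) >= 1 - 1e-9  # LP feasibility
        assert set(O) & set(N)                         # the net hits O
    return True

# toy instance: the fractional solution w(b) = w(d) = 1 has value W = 2,
# and {b, d} is a (1/2)-net for the weighted system
weights = {'a': 0, 'b': 1, 'c': 0, 'd': 1, 'e': 0}
objects = [['a', 'b'], ['b', 'c'], ['c', 'd'], ['d', 'e']]
assert lp_round_check(weights, objects, ['b', 'd'])
```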
We refer the reader to the recent surveys~\cite{CMR16,MV16} for precise bounds on $\varepsilon$-nets and cuttings for various geometric set systems as well as the current-best algorithms for computing such nets. Cuttings, together with linear programming duality (or alternatively, Farkas' lemma), can be used to derive another important partitioning tool we have already encountered in the construction of weak $\varepsilon$-nets -- the simplicial partition theorem~\cite{M92}. \begin{theorem} Let $P$ be a set of points in $\mathbb{R}^d$, and $t > 1$ a given integer parameter. Then there exists a partition of $P$ into $t$ sets $\mathcal{P} = \{P_1, \ldots, P_t\}$ such that $(a)$ $|P_i| \leq \frac{2 |P|}{t}$ for all $i = 1, \ldots, t$ and $(b)$ any hyperplane intersects the convex hull of $O\big( t^{1-\frac{1}{d}} \big)$ sets of $\mathcal{P}$. \label{thm:simplicialpartitions} \end{theorem} We outline the general idea of the proof. Given $P$, one can first ``discretize'' the space of all possible hyperplanes in $\mathbb{R}^d$ into a finite set $\mathcal{H}$ of hyperplanes, so that one only needs to construct a partition $\mathcal{P}$ such that each hyperplane of $\mathcal{H}$ intersects the convex hulls of $O(t^{1-\frac{1}{d}})$ sets of $\mathcal{P}$. Construct a $\Theta(\frac{1}{t^{1/d}})$-cutting for $\mathcal{H}$, consisting of at most $t$ simplices, and let $\mathcal{P}'$ be the collection of subsets of $P$ lying in the simplices of the cutting. Then, on average, $|P'| = \Theta(\frac{|P|}{t})$ for each $P' \in \mathcal{P}'$, and furthermore the convex hull of each $P'$ is intersected by at most $\frac{|\mathcal{H}|}{t^{1/d}}$ hyperplanes of $\mathcal{H}$. This is, in a suitable sense, the dual of the statement that we want, where each hyperplane should intersect few cells in the cutting (and thus few convex hulls of sets in $\mathcal{P}'$).
However we are not far off -- the total number of intersecting pairs of hyperplanes of $\mathcal{H}$ with convex hulls of sets in $\mathcal{P}'$ is $O(t \cdot \frac{|\mathcal{H}|}{t^{1/d}}) = O( t^{1-\frac{1}{d}} \cdot |\mathcal{H}|)$. In other words, the ``average'' hyperplane in $\mathcal{H}$ intersects the convex hulls of $O(t^{1-\frac{1}{d}})$ sets of $\mathcal{P}'$. Now LP duality~\cite{HP09} (or Farkas' lemma~\cite{GKM14}) shows the existence of the required partition. \subsection{Parametrized partitioning of data via topological methods} Now we move to more recent approaches to parameterized spatial partitioning. These methods use equipartitioning results such as the ham sandwich theorem and Theorem~\ref{thm:tukeystone} to show the existence of polynomials that induce parametrized partitions of $\mathbb{R}^d$. The resulting statements are similar in spirit and use to those we saw earlier for affine objects. The main advantage of these newer approaches is that partitioning space with polynomials circumvents several difficult technical issues that arise when dealing with piecewise linear objects. On the other hand, new computational challenges arise in the topological approaches, as they often do not lend themselves to efficient algorithm design. We present now a ``polynomial version'' of Theorem~\ref{thm:simplicialpartitions}, discovered by Guth and Katz~\cite{GK15}. For more on the impact of this theorem, see the book \cite{GuthBook}. In what follows, given a polynomial $f$, we denote by $Z(f)$ the zero set of $f$. \begin{theorem} Let $P$ be a set of $n$ points in $\mathbb{R}^d$, and let $r>1$ be a given parameter. Then there exists a $d$-variate polynomial $f$ of degree $O(r^{\frac{1}{d}})$ such that each connected component of $\mathbb{R}^d \setminus Z(f)$ contains at most $\frac{|P|}{r}$ points.
\label{thm:polynomialpartitions} \end{theorem} Here the points lying in the components of $\mathbb{R}^d \setminus Z(f)$ play the role of the sets in Theorem~\ref{thm:simplicialpartitions}. Observe that any hyperplane $h$ in $\mathbb{R}^d$ intersects $O\big( (\deg(f))^{d-1} \big) = O(r^{1-\frac{1}{d}})$ different components of $\mathbb{R}^d \setminus Z(f)$, quantitatively the same bound as in Theorem~\ref{thm:simplicialpartitions}. In contrast with Theorem~\ref{thm:simplicialpartitions}, these components are interior disjoint, though they will, in general, not be convex. We sketch a proof: partition $P$ into two equal sized sets $P_1, P_2$ by a polynomial, say $f_0$, of degree $O(1)$ (using Theorem~\ref{thm:hamsandwich}, for example). Then partition these two point sets $P_1$ and $P_2$ into four equal disjoint subsets $P_3, P_4, P_5, P_6$ using a polynomial, say $f_1$, of degree $O(2^{\frac{1}{d}})$ via Theorem~\ref{thm:tukeystone}. Continuing, let $f_i$ be a polynomial of degree at most $O((2^i)^{\frac{1}{d}})$ that simultaneously bisects the $2^{i}$ equal-sized disjoint subsets of $P$ obtained so far, for $i = 0, \ldots, \log r$. Note here that as long as $2^i \leq d$, a hyperplane suffices for $f_i$. The key observation now is that the polynomial $f$ formed by taking the product of all these polynomials -- namely $f = \prod_{i=0}^{\log r} f_i$ -- is the required polynomial: as the zero set of $f$ is simply the union of the zero sets of all the $f_i$'s, each connected region of $\mathbb{R}^d \setminus Z(f)$ can contain at most $\frac{|P|}{2^{\log r}} = \frac{|P|}{r}$ points of $P$.
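The degrees of the $f_i$ form a geometric series, which is why the product stays of low degree; a quick numeric sanity check (function name ours, constants dropped):

```python
from math import log2

def degree_bound(r, d):
    # sum of deg(f_i) <= (2^i)^{1/d} over i = 0..log r: a geometric
    # series with ratio 2^{1/d}, hence O(r^{1/d})
    return sum((2 ** i) ** (1 / d) for i in range(int(log2(r)) + 1))

for d in (2, 3, 4):
    r = 2 ** 12
    # the series is at most r^{1/d} / (1 - 2^{-1/d})
    assert degree_bound(r, d) <= r ** (1 / d) / (1 - 2 ** (-1 / d)) + 1e-9
```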
The degree of $f$ can be bounded as $$\deg(f) \leq \sum_{i=0}^{\log r} \deg(f_i) \leq \sum_{i=0}^{\log r} O\left( (2^i)^{\frac{1}{d}} \right) = O\left( r^{\frac{1}{d}} \right).$$ Theorem~\ref{thm:polynomialpartitions} gives another proof of the Szemer\'edi-Trotter theorem to bound the number of incidences between a set $P$ of $n$ points and a set $L$ of $n$ lines in the plane, this time by partitioning the points instead of the lines: apply Theorem~\ref{thm:polynomialpartitions} on $P$ with $r = n^{\frac{2}{3}}$ to get a polynomial $f$, with $\deg(f) = O(n^{\frac{1}{3}})$. Note that each line of $L$ intersects $O(\sqrt{r}) = O( n^{\frac{1}{3}})$ components of $\mathbb{R}^2 \setminus Z(f)$ and each of the $O(r) = O(n^{\frac{2}{3}})$ components of $\mathbb{R}^2 \setminus Z(f)$ contains at most $O(\frac{n}{r})= O(n^{\frac{1}{3}})$ points of $P$. A simple calculation by summing up the incidences across each component induced by $f$ shows that the overall number of incidences is bounded by $O(n^{\frac{4}{3}})$. On the computational side, an efficient algorithm to compute the partition guaranteed by Theorem~\ref{thm:polynomialpartitions} was discovered by Agarwal et al.~\cite{AMS13}, who then used it to construct efficient data structures for answering range queries with constant-complexity semi-algebraic sets as ranges, in time close to $O(n^{1-\frac{1}{d}})$. We now move on to our last topic in this section, a recent beautiful and very general theorem of Guth~\cite{G15} that implies many statements that were previously regarded as unconnected. \begin{theorem} Let $\Gamma$ be a set of $k$-dimensional varieties in $\mathbb{R}^d$, each defined by at most $m$ polynomial equations of degree at most $D$. For any parameter $r \geq 1$, there exists a $d$-variate polynomial $f$ of degree $O(r^{\frac{1}{d-k}})$, so that each component of $\mathbb{R}^d \setminus Z(f)$ intersects at most $\frac{|\Gamma|}{r}$ varieties of $\Gamma$.
The constant in the asymptotic notation depends on $D, m$ and $d$. \label{thm:polynomialvaritiespartitioning} \end{theorem} Note that this theorem implies Theorem~\ref{thm:polynomialpartitions} by setting $k = 0$ and implies Theorem~\ref{thm:cuttings} (strictly speaking, a polynomial version of it where $\mathbb{R}^d$ is partitioned into the components of $\mathbb{R}^d \setminus Z(f)$ instead of simplices) by setting $k= d-1$. See~\cite{BBZ16} for an exciting new alternate proof of Theorem~\ref{thm:polynomialvaritiespartitioning} that extends it to a setting with several algebraic varieties. We present a brief sketch of the proof of Theorem~\ref{thm:polynomialvaritiespartitioning}, which builds upon the proof of Theorem~\ref{thm:polynomialpartitions} in a natural way. The goal, as before, is to find polynomials $f_0, \ldots, f_s$, where $s \leq \log r^{\frac{d}{d-k}}$ and $\deg(f_i) \leq 2^{\frac{i}{d}}$ for each $i=0, \ldots, s$. Then the required polynomial will be $$ f = \prod_{i=0}^{ \log r^{\frac{d}{d-k}}} f_i, \quad \text{with} \quad \deg(f) \leq \sum_{i=0}^{\log r^{\frac{d}{d-k}}} \deg(f_i) \leq \sum_{i=0}^{\log r^{\frac{d}{d-k}}} O\left( 2^{\frac{i}{d}} \right) = O\left( r^{\frac{1}{d-k}} \right),$$ as required. To see the idea behind the proof of Theorem~\ref{thm:polynomialvaritiespartitioning}, trace the proof of Theorem~\ref{thm:polynomialpartitions} backwards: the polynomials $f_i$'s were constructed using Theorem~\ref{thm:tukeystone}, whose proof used the ham sandwich theorem in a suitably high dimension, whose proof identified the coefficients of the polynomial with points of the sphere $\mathbb{S}^{d'}$ in a suitable dimension $d'$ and then applied the Borsuk-Ulam theorem. Note that the $f_i$'s were constructed \emph{independently} via an iterative argument, where first $f_0$ was used to partition the given point set $P$ into two sets; these two sets were then equipartitioned with $f_1$ and so on. 
This approach relied crucially on the fact that a point outside the zero set of every polynomial $f_i$ lies in precisely one cell induced by their product. This fact fails for $k$-dimensional varieties when $k > 0$: assume one has constructed polynomials $f_0, \ldots, f_{s'-1}$, $s' < s$, such that each component induced by $f' = \prod_{i=0}^{s'-1} f_i$ is intersected by the same number of varieties of $\Gamma$. Then, even if the next polynomial $f_{s'}$ equipartitions the intersections within each component of $f'$, that does not imply that each component induced by the polynomial $f' \cdot f_{s'}$ will be intersected by the same number of varieties of $\Gamma$. The key idea here is to compute $f_0, \ldots, f_s$ \emph{simultaneously}. Thus we will identify, as before, the coefficients of each $f_i$ with some $\mathbb{S}^{d_i}$, but now instead of applying the Borsuk-Ulam theorem separately for each $i$, we will consider the product of these spheres, and apply the following natural variant of the Borsuk-Ulam theorem to get the required polynomials $f_0, \ldots, f_s$ in one step. For an integer $s\geq 1$, let $X_s = \prod_{i=1}^s {\mathbb{S}}^{2^{i-1}}$, and for each $i \in[s]$, define the functions \begin{align*} Fl_i(\ve x_1, \ldots, \ve x_{i-1}, \ve x_i, \ve x_{i+1}, \ldots, \ve x_s) = (\ve x_1, \ldots, \ve x_{i-1}, -\ve x_i, \ve x_{i+1}, \ldots, \ve x_s). \end{align*} \noindent For each $v \in \mathbb{Z}_2^s \setminus \{\ve 0\}$, let $f_v: X_s \rightarrow \mathbb{R}$ be a continuous function with the property that $f_v(Fl_i(\ve x)) = (-1)^{v_i}f_v(\ve x)$. Then there exists a point $\ve x \in X_s$ where $f_v(\ve x) = 0$ for all $v \in \mathbb{Z}_2^s \setminus \{\ve 0\}$. We conclude this section with an open problem -- the affine version of Theorem~\ref{thm:polynomialvaritiespartitioning}.
\begin{conjecture} For any set $\mathcal{H}_1$ of $n$ $k_1$-dimensional flats, any set $\mathcal{H}_2$ of $m$ $k_2$-dimensional flats in $\mathbb{R}^d$, and any integer $r$, there exists a partition of $\mathbb{R}^d$ into $O(r^d)$ simplices such that (a) each simplex intersects $O\big(\frac{n}{r^{d-k_1}}\big)$ flats of $\mathcal{H}_1$, and (b) each flat in $\mathcal{H}_2$ intersects $O\big(r^{k_2}\big)$ simplices. \end{conjecture} \subsection{Depth of point sets} \label{depthsec} Data science aims to understand the features of data sets. The goal of data depth measures is to generalize the idea of the statistical median of a set of reals to higher dimensions: the data consists of a finite set $P$ of points in $\R^d$, and the goal is to compute a point $\ve q \in \R^d$ that is a ``combinatorial center'' of the data $P$. As we will see, there are several natural ways to measure data depth, and they are related to each other in sometimes surprising ways. Figure~\ref{fig:datadepth} shows a set of points in $\R^2$ (circles), with ``combinatorial centers'' under three different measures: halfspace depth (cross), simplicial depth (box) and Oja depth (shaded disk). As the figure shows, the points for these three measures are geometrically close. \begin{figure} \centering \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=50mm,page=24]{figures-final} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[width=50mm,page=25]{figures-final} \end{minipage} \caption{(left) A set of points in $\mathbb{R}^2$ together with three centers under halfspace (cross), simplicial (box) and Oja (disk) depth measures, and (right) the $\beta$-deep regions under halfspace depth measure.} \label{fig:datadepth} \end{figure} Given integers $d$ and $n$, let $\P^d_n$ be the point set in $\R^d$ of size $n$, with at least $\lfloor \frac{n}{d+1} \rfloor$ points placed at each of the vertices of the standard simplex. 
Slightly perturb each point so that all $n$ points are distinct, and in general position. This point set will be very useful for the remainder of this section, as it essentially captures both the intuition as well as the worst-case behavior with respect to many depth measures. \subsubsection*{Halfspace depth.} \label{half-space depth:subsec} Given a set $P$ of $n$ points in $\R^d$, the \emph{halfspace depth} of a point $\ve q \in \R^d$ with respect to $P$ is the minimum number of points of $P$ in any closed halfspace containing $\ve q$: $$ \textsc{Halfspace-Depth}(\ve q, P) = \min_{\textrm{halfspace } H, \ \ve q \in H} |H \cap P|.$$ Define the halfspace depth of $P$ as the maximum halfspace depth of any point in $\R^d$ (this has also been called \emph{Tukey depth}~\cite{T75}). The separation theorem implies that any point outside $\conv(P)$ has halfspace depth zero. It is a non-trivial fact, first shown by Rado in 1947, that points of high halfspace depth exist for every point set. \begin{theorem}[Centerpoint theorem~\cite{R47}] Any set of $n$ points in $\R^d$ has half\-space depth at least $\lceil \frac{n}{d+1} \rceil$. \label{thm:centerpointthm} \end{theorem} Recall that such a point is called a \emph{centerpoint} of $P$; we saw its importance and generalizations in various places along this survey (e.g., in Section \ref{centerpointsinoptima}). It turns out the centerpoint theorem is optimal, in the sense that the bound $\lceil \frac{n}{d+1} \rceil$ cannot be improved: $\P^d_n$ is an example of a point set where it is not possible to do better. By now there are several proofs of the centerpoint theorem: using Brouwer's fixed-point theorem~\cite{C69}, using Helly's theorem~\cite{Mbook}, following from Tverberg's theorem, and an elementary extremal argument by induction on the dimension $d$~\cite{MR09b}.
Perhaps the following proof is the simplest: observe that any point $\ve q \in \R^d$ hitting all convex objects containing more than $\frac{d}{d+1}n$ points of $P$ is a centerpoint, and the existence of such a point follows from Helly's theorem. The centerpoint theorem and its generalizations have found several applications in combinatorial geometry, statistics, geometric algorithms, meshing, and related areas. A beautiful example is by Miller and Thurston~\cite{MT90}, who showed that given a set $\mathcal{D}$ of $n$ disjoint disks in the plane, there exists another disk $B \subset \R^2$ intersecting $O(\sqrt{n})$ disks of $\mathcal{D}$, and with at least $\frac{n}{4}$ disks of $\mathcal{D}$ lying completely in each of the two connected components of $\R^2$ induced by $B$. To see this, use an inverse stereographic projection to lift the centers of the disks of $\mathcal{D}$ to a set $P$ of points lying on a carefully chosen sphere in $\R^3$; then, with high probability, the image of the intersection of a random hyperplane through the centerpoint of $P$ with the sphere is the required disk $B$! A point of highest halfspace depth with respect to $P$ is called a \emph{Tukey median} of $P$. It may not be unique. In general, the set of points of halfspace depth at least $\beta n$, for $0 \leq \beta \leq 1$, forms a convex region called the $\beta$-deep region of $P$. It is the intersection of all halfspaces containing more than $(1-\beta)n$ points of $P$. Each facet of this region is supported by a hyperplane that passes through $d$ points of $P$. Figure~\ref{fig:datadepth} shows the set of all such regions for the earlier point set. Mart{\'{\i}}nez-Sandoval and Tamam~\cite{leo+tamam} gave a generalization of Tukey depth which connects depth to fractional Helly theorems. \textit{Algorithms.} There has been considerable work on the algorithmic question of computing points of large halfspace depth. We first discuss the case in $\mathbb{R}^2$, which is by now settled.
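Before turning to the specialized algorithms, note that the definition can be evaluated directly on small inputs. A brute-force sketch in the plane (the code and function names are ours, not from the cited works): a minimizing closed halfplane may be assumed to have $\ve q$ on its boundary, and the number of points in such a halfplane changes only at finitely many critical normal directions, so evaluating it at those directions and at the midpoints of the arcs between them suffices, in $O(n^2)$ total time.

```python
from math import atan2, cos, sin, pi

def halfspace_depth(q, points):
    """Brute-force halfspace (Tukey) depth of q w.r.t. a planar point set.

    depth(q) = min over closed halfplanes H containing q of |H ∩ P|.
    A minimizing halfplane can be assumed to have q on its boundary, so it
    suffices to minimize, over unit normals u, the number of points p with
    <p - q, u> >= 0.  That count only changes at finitely many critical
    angles, hence we evaluate it at those angles and at arc midpoints.
    """
    qx, qy = q
    others = [(x - qx, y - qy) for (x, y) in points if (x, y) != (qx, qy)]
    at_q = len(points) - len(others)          # copies of q itself
    if not others:
        return len(points)
    crit = sorted({(atan2(y, x) + pi / 2) % (2 * pi) for (x, y) in others}
                  | {(atan2(y, x) - pi / 2) % (2 * pi) for (x, y) in others})
    candidates = list(crit)
    for a, b in zip(crit, crit[1:] + [crit[0] + 2 * pi]):
        candidates.append((a + b) / 2)        # midpoint of each open arc
    best = len(points)
    for theta in candidates:
        ux, uy = cos(theta), sin(theta)
        cnt = at_q + sum(1 for (x, y) in others if x * ux + y * uy >= -1e-12)
        best = min(best, cnt)
    return best
```

For instance, for the four vertices of a square the maximum depth is attained at the center and equals $2$, matching the centerpoint bound $\lceil 4/3 \rceil = 2$.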
A centerpoint of $n$ points in $\R^2$ can be computed in $O(n)$ time~\cite{JM93}, the key tool being the linear-time algorithm for computing ham sandwich partitions of two point sets in the plane. Chan~\cite{C04} gave an $O(n \log n)$ time randomized algorithm for computing a point of highest halfspace depth, i.e., a Tukey median, for a set of points in the plane. The set of all depth contours of $n$ points in $\R^2$ can be computed in time $O(n^2)$~\cite{MRRSSSS01}. A real-time GPU-based algorithm for computing the set of all deep regions of a two-dimensional point set was given in~\cite{KMV02}. Turning to $\R^d$, $d \geq 3$, the current-best algorithms for computing a centerpoint and for computing a point of highest depth both take $O(n^{d-1})$ time~\cite{C04}. Clarkson et al.~\cite{CEMST96} presented an iterative method to compute approximate centerpoints: the algorithm constructs a $(d+2)$-ary tree $T$, where the $n$ leaves of $T$ are the input points, and each internal node represents the Radon point (namely, the unique point lying in the common intersection of the convex hulls of the two parts of the Radon partition of these $d+2$ points) of its $d+2$ children. This method was improved to the current best algorithm~\cite{MW13}, which computes a point of halfspace depth at least $\frac{n}{4(d+1)^3}$ in time $d^{O(\log d)}n$. In fact, this method computes an approximate Tverberg partition; namely, a partition of $P$ into $\lceil \frac{n}{4(d+1)^3} \rceil$ sets whose convex hulls have a common intersection. \begin{oproblem} Can a centerpoint of $n$ points in $\R^3$ be computed in $\tilde{O} (n)$ time? \end{oproblem} \subsubsection*{Simplicial Depth.} A straightforward implication of Proposition~\ref{p:cara++} is that given a set $P$ of $n$ points in $\mathbb{R}^{d}$, any point $\ve q \in \conv(P)$ lies in the convex hull of at least $n - d$ of the $(d+1)$-tuples in $\binom{P}{d+1}$. In fact, any centerpoint must be contained in many more simplices spanned by $(d+1)$-tuples of points in $P$.
Not surprisingly, the number of $(d+1)$-tuples of $P$ whose convex hull contains $\ve q$ is positively correlated with the halfspace depth of $\ve q$. This leads to the related depth measure of \emph{simplicial depth}, first defined by Liu~\cite{L88}, which is the number of simplices spanned by points of $P$ containing a given point: $$\textsc{Simplicial-Depth}(\ve q, P) = \left| \left\{ Q \in \binom{P}{d+1} \colon \ve q \in \conv(Q) \right\} \right|.$$ The simplicial depth of $P$ is the highest simplicial depth of any point $\ve q \in \R^d$. As mentioned earlier, there is a close relation between halfspace depth and simplicial depth; the current best bound~\cite{W03} shows that a point of halfspace depth $\tau n$ has simplicial depth at least \begin{equation*} \frac{(d+1)\tau^d - 2d\tau^{d+1}}{(d+1)!} \cdot n^{d+1} - O(n^d). \end{equation*} B\'{a}r\'{a}ny~\cite{baranys-caratheodory} showed that the colorful Carath\'eodory theorem together with Tverberg's theorem implies that there always exists a point contained in at least $\frac{1}{d! (d+1)^{d+1}} \cdot n^{d+1}$ simplices spanned by $P$. Let $c_d$ be a constant such that any set $P$ of $n$ points has simplicial depth at least $c_d \cdot n^{d+1}$. Determining the optimal value of $c_d$ is a long-standing open problem. Bukh, Matou\v{s}ek, and Nivasch~\cite{BMN08} constructed $n$ points in $\R^d$ so that no point in $\R^d$ is contained in, up to lower-order terms, more than $(\frac{n}{d+1})^{d+1}$ simplices defined by $P$. Furthermore, they conjectured that this is the optimal bound. \begin{conjecture} Any set of $n$ points in $\R^d$ has simplicial depth at least $(\lceil \frac{n}{(d+1)}\rceil)^{d+1}$. \end{conjecture} For $d=2$, a positive answer to the above conjecture was known already in 1984 by Boros and F\"{u}redi~\cite{BF82}.
Bukh~\cite{B05} gave a beautiful short proof: the required point is the common intersection point of three lines in $\R^2$, chosen so that each of the six induced cones contains at least $\frac{n}{6}$ points of $P$; the existence of three such lines follows via an elementary topological argument. For $d=3$, an elementary argument shows that $c_3 \geq 0.0023$~\cite{BMRR10b}. Using algebraic topology machinery, Gromov~\cite{G10} improved the bound to the value $c_3 \geq 0.0026 $. This bound for $\R^3$ has since been improved even further by Matou\v{s}ek and Wagner~\cite{MW14} to $0.00263$, and then by Kr\'al et al.~\cite{KMS12} to $0.0031$. In fact, Gromov proved the bound for general $d$, showing that $$ c_d \geq \frac{2d}{(d+1)!^2(d+1)}.$$ His proof has since been simplified by Karasev~\cite{K12}. Using the concepts of Gale diagrams and secondary polytopes, one can observe that the concept of simplicial depth is equivalent to a different maximization problem: given a set of points on the sphere $\mathbb{S}^d$, which triangulation using these points (possibly not all of them) as vertices has the largest number of $d$-dimensional simplices? (see~\cite[Chapter 5]{triangbook} and the references therein). We conclude this discussion of simplicial depth with \emph{colorful simplicial depth}, which was introduced by Deza et al.~\cite{Dezaetal2006}. Consider a set of points $P$ in $\mathbb{R}^d$ partitioned into $(d+1)$ color classes $P = P_0 \cup \cdots \cup P_d$. Suppose that $P$ has the property that the origin $\ve o$ is in the relative interior of each $\conv(P_i)$, for $0 \le i \le d$. Recall from Section~\ref{caratheodory-thms} that a colorful simplex is a simplex where each of the vertices comes from a different $P_i$.
While the colorful Carath\'eodory theorem asserts the existence of at least one colorful simplex containing $\ve o$, one can further ask about the number of distinct colorful simplices containing $\ve o$ that must always exist. Define the \emph{colorful simplicial depth} of $P$, denoted $\textsc{ColorfulSimp-Depth}(P)$, as the number of colorful simplices in $P$ containing $\ve o$. Deza et al.~\cite{Dezaetal2006} proved initial lower bounds on the colorful simplicial depth and conjectured that if $|P_i| = d+1$ for $0 \leq i \leq d$, then $\textsc{ColorfulSimp-Depth}(P) \geq d^2 + 1$. This was proven by Sarrabezolles~\cite{S15}. The bound is optimal by the work of Deza et al.~\cite{Dezaetal2006}. They also conjectured the following upper bound, which was shown by Adiprasito \emph{et al.}~\cite{adiprasito2016colorful}: let $P = P_0 \cup \ldots \cup P_d$ be a point set in $\mathbb{R}^d$ with $|P_i| \ge 2$ for all $0 \le i \le d$. If no $(d-1)$-dimensional colorful simplex spanned by $P$ contains the origin $\ve o$ in its convex hull, then $ \textsc{ColorfulSimp-Depth}(P) \ \le \ 1 + \prod_{i=0}^d (|P_i| - 1)$. \textit{Algorithms.} For the case of the plane, computing the simplicial depth of a query point can be done in time $O(n \log n)$~\cite{GSW92}, which is optimal. Aloupis et al.~\cite{ALST03} presented an algorithm to compute a point of highest simplicial depth in $\R^2$ in time $O(n^4)$. Using the fact that finding the highest simplicial depth is Gale dual to the problem of finding a maximum triangulation of points on the sphere, one can set up an integer program to find a point of largest simplicial depth for any point configuration (see \cite[Chapter 8]{triangbook}). A GPU-based algorithm for computing simplicial depth and colorful simplicial depth of point sets in the plane was given in~\cite{KMV06}.
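As with halfspace depth, the definition of simplicial depth can be checked by brute force on small inputs. A sketch in the plane (our code, $O(n^3)$ time, assuming general position), deciding containment in a closed triangle via orientation signs:

```python
from itertools import combinations

def _sign(o, a, b):
    """Sign of twice the signed area of the triangle (o, a, b)."""
    v = (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    return (v > 0) - (v < 0)

def _in_triangle(q, a, b, c):
    """Closed-triangle membership: q never lies strictly on both sides."""
    s = [_sign(q, a, b), _sign(q, b, c), _sign(q, c, a)]
    return not (1 in s and -1 in s)

def simplicial_depth(q, points):
    """Number of triangles spanned by `points` that contain q (d = 2)."""
    return sum(1 for t in combinations(points, 3) if _in_triangle(q, *t))
```

For the four vertices of a square, the center lies (on the boundary of) all four spanned triangles, so its simplicial depth is $4$, consistent with the conjectured planar bound $(\lceil 4/3 \rceil)^3 \geq$ the depth of some point.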
\subsubsection*{Ray-shooting depth.} It turns out that the previous two measures -- halfspace depth and simplicial depth -- are further related to each other via an even more general depth measure, called \emph{ray-shooting depth}. Given a set $P$ of $n$ points in $\R^d$, let $E_P$ be the set of all ${n \choose d}$ $(d-1)$-simplices spanned by points of $P$. Given a point $\ve q \in \R^d$ and a direction $\ve u\in \mathbb{S}^{d-1}$, let $r(\ve q, \ve u)$ be the half-infinite ray from $\ve q$ in direction $\ve u$. Then the \emph{ray-shooting depth} of a point $\ve q \in \R^d$ is defined as $$\textsc{Rayshooting-Depth}(\ve q, P) = \min_{\ve u\in \mathbb{S}^{d-1}} \big| \big\{ e \in E_P \colon r(\ve q,\ve u) \cap e \neq \varnothing \big\} \big|.$$ The ray-shooting depth of $P$ is the maximum ray-shooting depth of any point in $\R^d$. The notion of ray-shooting depth was first introduced by Fox et al.~\cite{FGLNP11}, who proved the following using Brouwer's fixed point theorem. \begin{theorem} Any set of $n$ points in $\R^2$ has ray-shooting depth at least $\frac{n^2}{9}$. \end{theorem} Note that any point realizing the maximum ray-shooting depth must have halfspace depth at least $\frac{n}{3}$ \emph{and} simplicial depth at least $\frac{n^3}{27}$: let $\ve q$ be a point with ray-shooting depth at least $\frac{n^2}{9}$. Then any line through $\ve q$ must intersect at least $\frac{2n^2}{9}$ segments in $E_P$, so both halfplanes defined by it must contain at least $\frac{n}{3}$ points. For simplicial depth, consider, for each point $\ve p \in P$, the ray from $\ve q$ in the direction $\overrightarrow{\ve p \ve q}$. Then for every edge $\{\ve p_i, \ve p_j\}$ that intersects this ray, the triangle defined by $\{\ve p, \ve p_i, \ve p_j\}$ must contain $\ve q$. Summing up these triangles over all points, each triangle is counted at most three times, and so $\ve q$ lies in at least $\frac{n^3}{27}$ distinct triangles spanned by $P$.
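The planar case is easy to experiment with: the number of segments met by $r(\ve q, \ve u)$ changes only when $\ve u$ sweeps past the direction of a point of $P$ as seen from $\ve q$, so one sample direction per angular interval suffices. A brute-force sketch (our code, not an efficient algorithm):

```python
from math import atan2, cos, sin, pi
from itertools import combinations

EPS = 1e-9

def _ray_hits_segment(q, u, a, b):
    """Does the ray q + t*u (t >= 0) meet the closed segment [a, b]?
    Solves q + t*u = a + s*(b - a) by Cramer's rule; exactly parallel
    configurations are ignored, which is fine for generic directions."""
    ax, ay = a[0] - q[0], a[1] - q[1]
    dx, dy = b[0] - a[0], b[1] - a[1]
    det = dx * u[1] - dy * u[0]
    if abs(det) < EPS:
        return False
    t = (dx * ay - dy * ax) / det       # parameter along the ray
    s = (u[0] * ay - u[1] * ax) / det   # parameter along the segment
    return t >= -EPS and -EPS <= s <= 1 + EPS

def ray_shooting_depth(q, points):
    """Min over directions u of the number of segments spanned by `points`
    met by the ray r(q, u) (d = 2, brute force).  The count is constant
    between consecutive point directions, so midpoints of the angular
    intervals cover all cases."""
    angles = sorted({atan2(y - q[1], x - q[0]) for (x, y) in points})
    mids = [(a + b) / 2
            for a, b in zip(angles, angles[1:] + [angles[0] + 2 * pi])]
    segs = list(combinations(points, 2))
    return min(sum(1 for a, b in segs
                   if _ray_hits_segment(q, (cos(t), sin(t)), a, b))
               for t in mids)
```

For the centroid of a triangle every ray exits through exactly one edge, giving depth $1 = 3^2/9$, which matches the theorem above with $n = 3$.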
The problem of showing the existence of a point with large ray-shooting depth is open in higher dimensions. \begin{conjecture} Any set of $n$ points in $\R^d$ has ray-shooting depth at least $\big( \lceil \frac{n}{d+1} \rceil\big)^d$. \end{conjecture} Other notions of ray-shooting depth, for convex sets instead of point sets, were studied in \cite{Fulek:2009vz}. \textit{Algorithms.} The proof in~\cite{FGLNP11} is topological and does not give a method to compute such a point. A combinatorial proof, together with efficient algorithms, was obtained by Mustafa et al.~\cite{MRS11}, who showed how to compute a point of ray-shooting depth at least $\frac{n^2}{9}$ in time $\tilde{O}(n^2)$. \subsubsection*{Oja depth.} It turns out that ray-shooting depth is related to another older depth measure, the \emph{Oja depth} of a point set, first defined by Oja~\cite{O83}. Assume, without loss of generality, that $\vol(\conv(P)) = 1$. Then define the Oja depth of a point $\ve q$ with respect to $P$ as $$\textsc{Oja-Depth}(\ve q, P) = \sum_{\substack{P' \subseteq P\\|P'|=d}} \vol \big(\conv(P' \cup \{\ve q\})\big).$$ The Oja depth of $P$ is the minimum Oja depth over all $\ve q \in \R^d$. It is easy to see that the Oja depth of $\P^d_n$ is at least $(\frac{n}{d+1})^d$. The conjecture~\cite{CDILM10} is that the lower bound given by $\P^d_n$ is essentially tight. \begin{conjecture} The Oja depth of any set of $n$ points in $\R^d$ is at most $\big(\frac{n}{d+1}\big)^d.$ \end{conjecture} The conjecture has been resolved only for the case $d=2$ by Mustafa \emph{et al.}~\cite{MTW14}. For general $d$, it can be shown that the center of mass of $P$ has Oja depth at most ${n \choose d}/(d+1)$~\cite{CDILM10}. This estimate can be improved via ray-shooting depth, as the Oja depth of any point $\ve q$ which has ray-shooting depth at least $\frac{n^2}{9}$ is at most $\frac{n^2}{7.2}$.
The reason is that the number of triangles spanned by $\ve q$ and pairs of points of $P$ that contain any given point $\ve p\in \R^2$ is at most the number of edges spanned by $P$ intersecting the ray $\overrightarrow{\ve q \ve p}$, which is at most $\frac{n^2}{4} - \frac{n^2}{9} = \frac{n^2}{7.2}$. Integrating over all $\ve p \in \R^2$ gives the required bound. A calculation in $\R^d$ gives the current-best bound~\cite{MTW14}: \begin{theorem} Every set of $n$ points in $\R^d$, $d \geq 3$, has Oja depth at most $$ \frac{2n^{d}}{2^{d}d!} - \frac{2d}{(d+1)^2(d+1)!} {n \choose d} + O(n^{d-1}). $$ \end{theorem} \textit{Algorithms.} For the case $d=2$, Rousseeuw and Ruts~\cite{RR96} presented an $O(n^5 \log n)$ time algorithm for computing a point of lowest Oja depth, which was then improved to the current-best algorithm with running time $O(n \log^3 n)$~\cite{ALST03}. A point of Oja depth at most $\frac{n^2}{9}$ can be computed in $O(n \log n)$ time~\cite{MTW14}. For general $d$, various heuristics for computing points with low Oja depth were given by Ronkainen, Oja and Orponen~\cite{ROO01}. \subsubsection*{Regression depth.} The next depth measure, unlike the earlier ones, is a combinatorial analogue of fitting a hyperplane through a set of points, and it is therefore more convenient to state it in the dual. Given a point $\ve p \in \R^d$, let $p^*$ be its dual hyperplane, and for a set of points $P$, let $P^* = \{p^* \colon \ve p \in P\}$. Then define the \emph{regression depth} of a point as $$\textsc{Regression-Depth}(\ve q, P) = \min_{\ve u\in \mathbb{S}^{d-1}} \big| \big\{ H \in P^* \colon r(\ve q, \ve u) \cap H \neq \varnothing \big\} \big|.$$ The regression depth of a set $P$ of points in $\R^d$ is the maximum regression depth of any point $\ve q \in \R^d$. It was introduced by Rousseeuw and Hubert~\cite{RH99}, who showed that any set $P$ of $n$ points in $\R^2$ has regression depth at least $\lceil \frac{n}{3} \rceil$.
Their proof is elegant: given the set $P$ of $n$ points, let $P_1, P_2, P_3$ be a partition of $P$ by consecutive $x$-coordinate values, with $|P_i| \leq \lceil \frac{n}{3} \rceil$ for $i=1,2,3$. Then the required line is the ham sandwich cut of the two sets $P_1 \cup P_2$ and $P_2 \cup P_3$. The optimal bound for general $d$ was discovered later. \begin{theorem}[\cite{ABET00, K08, Mi02}] Any set of $n$ points in $\R^d$ has regression depth at least $\lceil \frac{n}{d+1} \rceil$. \end{theorem} \noindent Given a set $X \subseteq \mathbb{R}^d$ and a point $\ve q \in \mathbb{R}^d$, the closest point in $X$ to $\ve q$ (if it exists) is denoted by $\texttt{c}(\ve q, X)$. The proof in~\cite{K08} deduces it from the centerpoint theorem: define the function $f(\ve q)$ that maps $\ve q \in \R^d$ to a centerpoint of the set $\{ \texttt{c}(\ve q, p^*) \colon \ve p \in P\}$. This can be done so that $f(\cdot)$ is continuous and maps a sufficiently large ball to itself; then observe that the dual of any fixed point of $f(\cdot)$ is the required hyperplane. \textit{Algorithms.} The method of~\cite{RH99} immediately gives a linear-time algorithm for computing a point of regression depth at least $\lceil \frac{n}{3} \rceil$, as it uses only ham sandwich cuts. A point of maximum regression depth in $\R^2$ can be computed in time $O(n \log n)$~\cite{MMRSSS08}, improving upon an earlier $O(n\log^2 n)$ time algorithm~\cite{LS03}. For $d \geq 3$, the best algorithm takes time $O(n^d)$~\cite{MMRSSS08} to compute a point of maximum regression depth. \subsubsection*{The $k$-centerpoint conjectures.} It turns out that many of the discussed depth measures are special cases of the following more general conjecture, first proposed by Mustafa et al.~\cite{MRS15}. See also the related paper \cite{Karasev:2011jv}.
\begin{conjecture} For any set $P$ of $n$ points in $\R^d$ and any integer $0 \leq k \leq d$, there exists a point $\ve q \in \R^d$ such that any $(d-k)$-dimensional half-flat through $\ve q$ intersects at least $\big(\lceil \frac{n}{d+1} \rceil \big)^{k+1}$ of the $k$-simplices spanned by $P$. \end{conjecture} The case $k=0$ corresponds to halfspace depth, $k=d$ to simplicial depth, and $k=d-1$ to ray-shooting depth. It is not hard to show that given a set $P$ of $n$ points in $\R^d$, and an integer $0 \leq k \leq d-1$, there exists a point $\ve q \in \R^d$ such that any $(d-k)$-dimensional half-flat through $\ve q$ intersects at least $$ \max \left\{ {\frac{n}{d+1} \choose k+1}, \ \ \ \frac{2d}{(d+1)(d+1)! {n \choose d-k}} \cdot {n \choose d+1} \right\} $$ $k$-simplices spanned by $P$. For simplicity assume that $|P|$ is a multiple of $(d+1)$. The proof uses Tverberg's theorem to partition $P$ into $t = \frac{n}{(d+1)}$ sets $P_1, \ldots, P_t$ such that there exists a point $\ve q$ with $\ve q \in \conv(P_i)$ for all $i$. Consider any $(d-k)$-dimensional half-flat $\mathcal{F}$ through $\ve q$, where $\partial \mathcal{F}$ is a $(d-k-1)$-dimensional flat containing $\ve q$. Project $\mathcal{F}$ onto a $(k+1)$-dimensional subspace $\mathcal{H}$ orthogonal to $\partial \mathcal{F}$ such that the projection of $\mathcal{F}$ is a ray $\ve r$ in $\mathcal{H}$, and $\partial \mathcal{F}$ and $\ve q$ are projected to the point $\ve q'$. Let $P'_1, \ldots, P'_t$ be the projected sets, whose convex hulls now contain the point $\ve q'$. Then note that the $k$-dimensional simplex spanned by $(k+1)$ points $Q' \subset P'$ intersects the ray $\ve r$ if and only if the $k$-dimensional simplex defined by the corresponding set $Q$ in $\R^d$ intersects the flat $\mathcal{F}$.
Now apply the single-point version of the colorful Carath\'{e}odory theorem (i.e., given any point $\ve s\in \R^d$ and $d$ sets $P_1, \ldots, P_d$ in $\R^d$ such that each $\conv(P_i)$ contains the origin, there exists a $d$-simplex spanned by $\ve s$ and one point from each $P_i$ which also contains the origin) to every $(k+1)$-tuple of sets, say $P'_1, \ldots, P'_{k+1}$, together with the point $\ve s$ at infinity in the direction antipodal to the direction of $\ve r$, to get a `colorful' simplex, defined by $\ve s$ and one point from each $P'_i$, containing $\ve q'$. Then the ray $\ve r$ must intersect the $k$-simplex defined by the $(k+1)$ points of $P'$, and so the corresponding points of $P$ in $\R^d$ span a $k$-simplex intersecting $\mathcal{F}$. In total, we get ${n/(d+1) \choose k+1}$ of the $k$-simplices intersecting $\mathcal{F}$. Another way is to use the simplicial depth bound of Gromov: given any set $P$ of $n$ points in $\R^d$, there exists a point $\ve q$ lying in $2d/((d+1)(d+1)!) \cdot {n \choose d+1}$ $d$-simplices. Now take any $(d-k)$-dimensional half-flat through $\ve q$. It must intersect at least one $k$-simplex of each $d$-simplex containing $\ve q$, where each $k$-simplex is counted at most ${n \choose d-k}$ times. This implies the stated bound. In the plane ($d=2$), the centerpoint theorem can be re-stated as follows: given a set $P$ of $n$ points in the plane, there exists a point $\ve q$ such that if one takes any line $L$ passing through $\ve q$ and moves it continuously until it lies outside the convex hull of $P$, then along this motion the line will intersect at least $n/3$ points of $P$.
The following more general statement has also been conjectured in~\cite{MRS15}: \begin{conjecture}\label{lastconjecture} Given a set $P$ of $n$ points in $\mathbb{R}^d$, and an integer $0 \leq k \leq d$, there exists a point $q \in \mathbb{R}^d$ such that the following holds: Let $F_q, F_{o}$ be two $\left(d-k-1\right)$-flats, such that $q \in F_q$ and $F_{o}$ does not intersect the convex-hull of $P$. Then any continuous motion family of $\left(d-k-1\right)$-flats, starting at $F_q$ and ending at $F_o$, must intersect at least $\left( \lceil \frac{n}{d+1} \rceil \right)^{k+1}$ $k$-simplices spanned by $P$. \end{conjecture} In Conjecture \ref{lastconjecture}, the case $k=d$ gives a `$(-1)$'-flat moving to infinity, which can be treated as a stationary point. The validity of these conjectures for the planar case $d=2$ follows from the work of Gromov~\cite{G10}. \section{Combinatorial convexity} \label{s:convexgeo} We now focus on three classical combinatorial theorems about convex sets first identified in the early 20th century. These are the theorems of Carath\'eodory, Helly, and Tverberg. The importance of convexity in applications, and hence of these three theorems, owes much to the computational effectiveness of convex optimization algorithms both in practice and in theory \cite{bertsekasbook,boyd2004convex}. This encourages applied mathematicians to look for convexity, or for ways to approximate complicated sets using convex sets. Surprisingly, convexity appears in unexpected settings. Extensive surveys were devoted to (subsets and variations on) some of these three theorems by Danzer, Gr\"unbaum, and Klee~\cite{DGKsurvey63}, Eckhoff~\cite{Eckhoff:1993survey}, Holmsen and Wenger~\cite{holmsen+wenger}, and Amenta, De Loera, and Sober\'on~\cite{amentaetal2017helly}. An account of early variations of Carath\'eodory's theorem is in the memoir by Reay~\cite{Reay-mem}.
There is an abundant literature on \emph{axiomatic convexity}, which studies analogues of the theorems of Carath\'eodory, Helly, and Tverberg, not over Euclidean spaces as we do here, but over purely combinatorial abstract settings, for instance in the convexity spaces defined by arbitrary graphs, finite geometries, matroids, greedoids, etc. The three theorems play a significant and interesting role there too, but we do not cover this topic here. We refer the interested reader to the references~\cite{Duchetsurvey1987,Kay:1971uf,convexityspaces-vandevel}. \subsection{Carath\'eodory} \label{caratheodory-thms} We will first consider Carath\'eodory-type theorems that certify membership of a point in the convex hull of a set via non-negative linear combinations. The original theorem of Carath\'eodory~\cite{originalCaratheodory} asserts that any point in the convex hull of a finite point set in $\mathbb{R}^d$ is a convex combination of at most $d+1$ of these points. Equivalently, if a vector $\ve b$ belongs to the \emph{cone} of $X = \{\boldsymbol{v}_1, \boldsymbol{v}_2, \ldots, \boldsymbol{v}_n\} \subset \mathbb{R}^d$ (i.e., the \emph{positive hull} of all non-negative \emph{real} linear combinations of vectors in $X$), then $\ve b$ is a non-negative combination of at most $d$ vectors of $X$. To see this, let $A = \pth{\begin{matrix} \boldsymbol{v}_1 & \boldsymbol{v}_2 & \cdots & \boldsymbol{v}_n \end{matrix}}$ and assume that $\ve{\tilde{x}}$ is a solution of \begin{equation}\label{eq:Cara} \begin{array}{rcl} A \ve x & = & \ve b\\ \ve x & \ge & \ve 0. \end{array} \end{equation} If the support of $\ve{\tilde{x}}$ has size at least $d+1$, then $A \ve x = \ve 0$ has some nontrivial solution $\ve z$ with support contained \smallskip\noindent \begin{minipage}{12cm} in the support of $\ve{\tilde{x}}$. For an adequate value of $t$, the vector $\ve{\tilde{x}}+t\ve z$ is a solution of System~\eqref{eq:Cara} with smaller support than $\ve{\tilde{x}}$.
A closer examination of this argument yields that, in the plane, any point in the convex hull of four points lies in two of the triangles they span (as illustrated on the right). The following strengthening of Carath\'eodory's Theorem will be useful in optimization. \end{minipage} \hfill \begin{minipage}{4cm} \centering \includegraphics[page=14]{figures-final} \end{minipage} \begin{proposition}\label{p:cara++} Any point in the convex hull of (at least) $d+2$ points in $\mathbb{R}^d$ lies in the convex hull of at least two $(d+1)$-element subsets of these points. \end{proposition} \noindent A geometric proof of Proposition~\ref{p:cara++} starts with any one of the $d+2$ points, say $\ve p$, and the point $\ve x$ in their convex \smallskip\noindent \begin{minipage}{4cm} \centering \includegraphics[page=15]{figures-final} \end{minipage} \hfill \begin{minipage}{12cm} hull. Shoot a ray from $\ve p$ to $\ve x$ and collect the (at most) $d$ vertices of the face of a (triangulation of the) convex hull through which this ray exits; these $d$ points and $\ve p$ contain the point $\ve x$ in their convex hull, and any of the $d+2$ points can be used as origin of the ray. The figure on the left illustrates this process in the case of a (blue) point $\ve x$ contained in the convex hull of the (white) vertices of a cube in $\mathbb{R}^3$. The green projections to faces help us find the four white points containing $\ve x$. \end{minipage} \bigskip Several variants of Carath\'eodory's theorem have been developed. For instance, Steinitz~\cite{originalsteinitz} proved that a point in the interior of the convex hull of a set $S$ lies in the interior of the convex hull of some $2d$ points of $S$. A related classical result is the \emph{Krein-Milman theorem}~\cite{originalkreinmilman}: if $C$ is a compact convex set, then every point in $C$ is a convex combination of its extreme points, i.e., those that are not convex combinations of others in the set.
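The support-reduction argument used above to derive the conic version of Carath\'eodory's theorem is effectively an algorithm. A numpy sketch (ours; floating-point with tolerances, whereas the argument itself is exact):

```python
import numpy as np

def caratheodory_reduce(A, x, tol=1e-9):
    """Given x >= 0 with A @ x = b, return y >= 0 with A @ y = b whose
    support has size at most rank(A) (hence at most d for d rows).

    While the support columns are linearly dependent, pick a nontrivial
    kernel vector z supported there and move x along -z until the first
    positive coordinate hits zero; A @ x is preserved and supp(x) shrinks.
    """
    x = np.asarray(x, dtype=float).copy()
    while True:
        supp = np.flatnonzero(x > tol)
        B = A[:, supp]
        if len(supp) <= np.linalg.matrix_rank(B):
            x[np.abs(x) <= tol] = 0.0
            return x
        z = np.linalg.svd(B)[2][-1]     # kernel vector of the support columns
        if not np.any(z > tol):         # ensure some entry can be driven to 0
            z = -z
        t = np.min(x[supp][z > tol] / z[z > tol])
        x[supp] = x[supp] - t * z
        x[np.abs(x) <= tol] = 0.0
```

The convex-hull version follows by appending a row of ones to $A$, which turns conic combinations into affine (hence convex) ones.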
Allowing more flexible representations, Dobbins~\cite{dobbins2015point} proved that any point in an $ab$-dimensional polytope is the barycenter of $a$ points on its $b$-dimensional skeleton (another proof of Dobbins' theorem is shown in \cite{blafrickzie}). \bigskip One of the most applicable and powerful variants is the \emph{colorful Carath\'eodory} theorem. We saw this in the introduction already. \begin{namedtheorem}[Colorful Carath\'eodory theorem] Let $C_1, C_2, \ldots, C_{d+1}$ be point sets in $\mathbb{R}^d$. If a point $\ve p$ is in the convex hull of every $C_i$, then there exist $\ve x_1 \in C_1$, $\ve x_2 \in C_2$, \ldots, $\ve x_{d+1} \in C_{d+1}$ such that $\ve p$ lies in the convex hull of $\{\ve x_1,\ve x_2, \ldots, \ve x_{d+1}\}$. \end{namedtheorem} \noindent In other words, if the origin is contained in the convex hull of each of $d+1$ point sets $C_1, C_2, \ldots, C_{d+1}$ (the color classes), then it is contained in a colorful ``simplex'', i.e., one where each of the vertices comes from a different $C_i$ (with the understanding that this ``simplex'' may be degenerate). This theorem was discovered by B\'ar\'any~\cite{baranys-caratheodory} who showed that a colorful simplex that minimizes the distance to the origin must contain it. Indeed, when the distance is still positive, it is attained on a facet and the vertex opposite to that facet may be changed to further decrease the distance to the origin. This approach inspired new proofs and algorithms, by minimization, of the colorful Carath\'eodory theorem \cite{barany1995caratheodory} and other results such as Tverberg's theorem~\cite{Roudneff-ejc,Tverberg1981,TverbergVrecica} (see more on Tverberg's theorem later in Section \ref{tv-section}). 
An alternative proof of the colorful Carath\'eodory theorem applies Meshulam's lemma (Proposition~\ref{p:meshulamlemma}) to the join of two abstract simplicial complexes built on top of $\bigcup_{i=1}^{d+1}C_i$: one has a simplex for any subset of points with no repeated color, and the other has a simplex for every subset of points not surrounding the origin. (The labeling is given by the identification of the vertices of the join with $\bigcup_{i=1}^{d+1}C_i$.) This approach emerged from a colorful Helly theorem of Kalai and Meshulam~\cite{Kalai:2005cm} (see below) and later allowed a purely combinatorial generalization of the colorful Carath\'eodory theorem by Holmsen~\cite{Holmsen-colorful}, where the geometry and the colorfulness are abstracted away into, respectively, an oriented matroid and a matroid. See also the paper \cite{Holmsen+Karasev}. The assumption of the colorful Carath\'eodory theorem ensures that not only one, but actually many colorful simplices exist; we come back to this question when discussing simplicial depth in Section~\ref{depthsec}. This also underlies its connection to Tverberg's theorem, which we discussed in Section \ref{tv-section}. Many variations and strengthenings of the colorful Carath\'eodory theorem have been explored, starting with B\'ar\'any's seminal paper~\cite{baranys-caratheodory} and subsequent collaborations \cite{barany1995caratheodory}. Recent strengthenings include Deza et al.'s colorful simplicial depth \cite{Dezaetal2006} (discussed in Section \ref{depthsec}) and Frick and Zerbib's common generalization of the colorful Carath\'eodory theorem and the KKM theorem \cite{frick+zerbib}. Another key variation, discovered independently by Arocha et al.~\cite{ABBFM09} and Holmsen et al.~\cite{HPT08}, is that the assumption that the convex hull of each $C_i$, $1 \leq i \leq d+1$, contains the origin can be weakened to only require that the convex hull of each pair $C_i \cup C_j$, $1 \leq i < j \leq d+1$, contains the origin.
There are examples showing that it is not sufficient that the convex hulls of triples contain the origin, but weaker relaxations are possible~\cite{deza+meunier:cara}. Arocha et al.~\cite{ABBFM09} also proved another ``very colorful Carath\'eodory theorem''. Via point-hyperplane duality, one can derive from the colorful Carath\'eodory theorem a colorful theorem of Helly. We will not discuss this in detail, but let us explain the basics. Consider $d+1$ families $F_1, F_2,\dots,F_{d+1}$ of convex sets inside $\mathbb{R}^d$ (researchers think of these as colors). Assume that every \emph{colorful} selection of $d+1$ of the sets, i.e., one set from each $F_i$, has a non-empty common intersection. Then the classical colorful Helly theorem of Lov\'asz (see \cite{baranys-caratheodory}) says that there is at least one family $F_i$ whose sets have a non-empty intersection. Here is now the dual of the \emph{very} colorful Carath\'eodory theorem of \cite{ABBFM09}: given a finite family of halfspaces in $\mathbb{R}^d$ colored with $d+1$ colors, if every colorful selection of $d+1$ halfspaces has a non-empty common intersection, then there exist \emph{two} color classes all of whose members intersect. For more examples and references, see \cite{Martinez-Sandovaletal18} and references therein. The proof of the colorful Cara\-th\'eo\-dory theorem also implies that given $d+1$ point sets $C_1, \ldots, C_{d+1}$ and a convex set $C$, either one $C_i$ can be separated from~$C$ by a hyperplane, or there exists a colorful simplex intersecting $C$. Building on this, Mustafa and Ray~\cite{MR16} showed that given $\lfloor \frac{d}{2} \rfloor +1$ sets of points in $\mathbb{R}^d$ and a convex object $C$, either one of the sets can be separated from $C$ by a constant number of hyperplanes, or there is a $\lfloor \frac{d}{2} \rfloor$-dimensional colorful simplex intersecting $C$.
\bigskip The \emph{integer} Carath\'eodory problem considers a finite set $X \subset \mathbb{Z}^d$ and $\ve v \in \mathbb{Z}^d$ in its positive hull and asks whether $\ve v$ can be written as a non-negative, \emph{integer} linear combination of some elements of $X$, and, if true, how many elements are needed. The answer to the first question is negative in general. For instance, consider \begin{equation}\label{eq:non-hilbert} X = \left\{ \left(\begin{array}{c} 1 \\ 0 \end{array}\right), \left(\begin{array}{c} 1 \\ 2 \end{array}\right)\right\} \quad \hbox{and} \quad \ve v = \left(\begin{array}{c} 1 \\ 1 \end{array}\right): \end{equation} $\ve v$ is an integral vector in the positive hull of $X$, but it is not an \emph{integer} non-negative combination of elements of $X$. It is thus natural to restrict one's attention to subsets $X \subset \mathbb{Z}^d$ such that every integral point of the cone of $X$ can be written as a non-negative integer combination of elements of $X$; such sets are called \emph{Hilbert generating sets}. This restriction is reasonable given that the integer points of any rational polyhedral cone $C$ have a finite Hilbert generating set. However, even in this setting, there is no version of an integer Carath\'eodory theorem with a bound on the size of the representation depending only on the dimension. Take for example \begin{equation}\label{eq:hilbert-unbounded} \begin{aligned} X_n & = \{2^i\ve e_j + \ve e_d \colon 0 \le i \le n-1 \hbox{ and } 1 \le j \le d-1\} \subset \mathbb{Z}^d \\ & \hbox{and} \quad \ve v_n = (2^n-1,2^n-1, \ldots, 2^n-1, n(d-1))^T: \end{aligned} \end{equation} $\ve v_n$ can be written as an integer combination over $X_n$, but any such combination requires at least $n$ summands. Notice that the coefficients in Example~\eqref{eq:hilbert-unbounded} grow quickly with $n$.
Such growth is necessary to force larger and larger sums: a Carath\'eodory-type theorem does hold if one is willing to bound the number of summands in terms of both the dimension and the size of the coordinates. The best upper bound in that direction was recently obtained in \cite{sparseintcaratheo}. An earlier bound appears in \cite{eisenbrandshmonin-caratheodory}. \begin{theorem}\label{Bound_via_siegel} Let $X=\{{\ve x}_1,\ldots, {\ve x}_t\} \subseteq \mathbb{Z}^d\setminus \{\ve 0\}$ be a finite set, let $\| X \|_{\infty} = \max_{{\ve x} \in X} \| {\ve x} \|_{\infty}$, let $W = \pth{\begin{matrix} \ve x_1 & \cdots & \ve x_t\end{matrix}}$, and let $\Lambda$ denote the sublattice of $\mathbb{Z}^t$ of the integer points in the row space of $W$. Any vector representable as a non-negative integer combination over $X$ can be written as a combination of at most $\min\{ \operatorname{rank} W + \log \det(\Lambda), \ 2d\log(2\sqrt{d}||X||_{\infty}) \}$ terms. \end{theorem} \noindent The proof of Theorem~\ref{Bound_via_siegel} starts from some non-negative integer combination and uses some element of the kernel of $W$ to eliminate one of the summands. This is very similar to the classical proof of the real-valued Carath\'eodory theorem, but now the kernel element must be an \emph{integral} vector with coordinate entries in $\{-1,0,1\}$. That such a vector exists no longer follows from a rank argument, but from Siegel's lemma~(see \cite{siegel,vaalerbest}). Interestingly, a full-fledged integer Carath\'eodory theorem, depending only on the dimension, does exist for Hilbert bases of pointed cones. Let us explain. First of all, a \emph{Hilbert basis} is an inclusion-minimal Hilbert generating set. We say a cone is \emph{pointed} if it contains no linear subspace other than $\{\ve 0\}$. It is known that a pointed cone has a unique Hilbert basis (see, e.g., \cite[Corollary 2.6.4]{alggeo4optimization}).
In contrast, when the cone is not pointed, there is no uniqueness: for instance, $\{ (x,y) : x+y=0 \}$ has two Hilbert bases $\{(1,-1),(-1,1)\}$ and $\{(2,-2), (-1,1)\}$. As we will see in Sections \ref{s:hilbertbasisinIP1} and \ref{s:hilbertbasisinIP2}, Hilbert bases play an important role in optimization theory and in the solution of integer optimization problems. Here is the best known upper bound on the number of Hilbert basis elements necessary to write a vector, due to Seb\H{o}~\cite{sebocaratheodory}: \begin{theorem}\label{thm:sebo} If the pointed cone $C$ is generated by a Hilbert basis $X \subseteq \mathbb{Z}^d$, then any of its integral points can be written as a non-negative integer combination of at most $2d-2$ elements of $X$. \end{theorem} \noindent A weaker version of Seb\H{o}'s theorem, with the constant $2d-1$, was previously obtained by Cook, Fonlupt, and Schrijver~\cite{cook+fonlupt+schrijver}. Note that the sets $X_n$ in Example~\eqref{eq:hilbert-unbounded} do define pointed cones, but are not the Hilbert bases of those cones. Seb\H{o}'s theorem gives an upper bound, but the best known lower bound on the size of the linear combination is only $d+\lfloor \frac{d}{6} \rfloor$ for $d \ge 6$~\cite{brunsetal}, which leaves an important open problem: \begin{oproblem} Determine the best possible constant for the integer Carath\'eo\-dory theorem on Hilbert bases of pointed cones. \end{oproblem} \noindent The answer is known to be $d$ for $d=3$~\cite{sebocaratheodory} and in some special cases such as the cone formed by the bases of any matroid~\cite{gijswijt+regts}. \bigskip We conclude with an \emph{approximate} Carath\'{e}odory theorem, recently revisited by Barman~\cite{Barman}, which has an interesting application in game theory (see Section~\ref{s:nash-complexity}). Informally, it says that any point in the convex hull of a point set $X \subseteq \mathbb{R}^d$ can be approximated by a convex combination of few elements of $X$.
The precise relation between the quality of approximation and the size of the convex combination is quantified as follows: \begin{theorem}\label{thm:caratheodory-p} Let $p \in [2, \infty)$ and let $X \subseteq \mathbb{R}^d$. For any point $\ve a \in \conv(X)$ and any $\varepsilon >0$, there is a point $\ve b$ such that $(i)$ $\|\ve a-\ve b\|_p \le \varepsilon$, and $(ii)$ $\ve b$ can be expressed as a convex combination of at most $4p \pth{\frac{\max_{\ve x \in X} \| \ve x \|_p}{\varepsilon}}^2$ vectors from $X$. \end{theorem} \noindent Observe that the number of points used to represent the approximation $\ve b$ to $\ve a$ is independent of the ambient dimension $d$. The point $\ve b$ can in fact be chosen as the barycenter of points of $X$ with non-negative integer weights, the sum of the weights being at most $k = 4p \pth{\frac{\max_{\ve x \in X} \| \ve x \|_p}{\varepsilon}}^2$. Barman's nice probabilistic proof writes $\ve a$ as a barycentric combination of $d+1$ points of $X$ (by Carath\'eodory's theorem) and finds the $k$ points adding to $\ve b$ by sampling those $d+1$ points using the weights as probabilities (some special care must be taken to ensure a bound independent of the dimension). Theorem~\ref{thm:caratheodory-p} can also be derived from Maurey's lemma in functional analysis (see Pisier~\cite{pisier1980remarques} and Carl~\cite{carl1985inequalities}). See also~\cite{BHPR16} for the derivation of a related theorem using the Perceptron algorithm~\cite{N62}. A very recent generalization of Theorem \ref{thm:caratheodory-p} was presented by Adiprasito, B\'ar\'any and Mustafa in \cite{adiprasito+barany+mustafa}. They proved that, given a point set $P\subset \mathbb{R}^d$ of cardinality $n$, a point $\ve a \in \mathrm{conv} (P)$, and an integer $r\le d$, $r \le n$, there exists a subset $Q\subset P$ of $r$ elements such that the distance between $\ve a$ and $\mathrm{conv} (Q)$ is less than $\mathrm{diam}\, P/\sqrt {2r}$.
Here the diameter of $P$ is the largest distance between a pair of points in $P$. \subsection{Helly}\label{s:Helly} \emph{Helly's theorem} asserts that for a finite family of convex subsets of $\mathbb{R}^d$ with at least $d+1$ members, if every $d+1$ members intersect, then the whole family intersects. In the contrapositive, the empty intersection of finitely many convex sets in $\mathbb{R}^d$ is always witnessed by the empty intersection of some $d+1$ of the sets. See Figure~\ref{f:Helly-pic}. \begin{figure}[h] \begin{center} \includegraphics[page=16]{figures-final} \caption{Helly's theorem in the plane.\label{f:Helly-pic}} \end{center} \end{figure} The special case of Helly's theorem where each subset is a halfspace is of particular interest. Since a family of halfspaces not containing the origin has empty intersection if and only if their inner normals positively span the space, Helly's theorem for halfspaces is equivalent to Carath\'eodory's theorem for their polars. Note that the polar of a set of hyperplanes is a set of ray vectors. The case of halfspaces implies the general case because, given a family of convex sets in $\mathbb{R}^d$, we can replace each set by a polytope that it contains without altering the intersection patterns. It suffices to take a witness point in the intersection of every subfamily, and then replace each set by the convex hull of the witness points that it contains. Helly's original proof used the separation of compact convex sets by hyperplanes to set up an induction on the dimension. It is spelled out in the survey of Danzer et al.~\cite[Section~1]{DGKsurvey63} along with references to eight other proofs. The most common proof deduces Helly's theorem from \emph{Radon's lemma} (the case $r=2$ of Tverberg's theorem)~\cite[Section~1.3]{Mbook}. Starting with $k\ge d+2$ convex sets $C_1, C_2, \ldots, C_k$ where any $k-1$ intersect, one picks a witness point $\ve w_i \in \bigcap_{j \neq i} C_j$.
Partition $\ve w_1,\ve w_2, \ldots,\ve w_k$ into two subsets with intersecting convex hulls, and observe that any point in this intersection lies in every convex set. A proof of Helly's theorem, due to Krasnosselsky~\cite{krano}, fits well with the theme of our survey as it uses the KKM theorem. Lift the $\ve w_i$ to the vertex $\ve e_i$ of the simplex $\Delta_{k-1}$. The map $\ve e_i \mapsto \ve w_i$ extends to a linear map $f:\Delta_{k-1} \to \mathbb{R}^d$, and setting $D_i = f^{-1}(\conv \{\ve w_j : j \neq i\})$ produces $k$ closed subsets of $\Delta_{k-1}$. Each facet of $\Delta_{k-1}$ is covered by a distinct $D_i$ and the $D_i$'s cover~$\Delta_{k-1}$ by Carath\'eodory's theorem in $\mathbb{R}^d$. The KKM theorem (see Section~\ref{subsec:continuousversionspernertucker}) \smallskip\noindent \begin{minipage}{12cm} yields a point $\ve w \in \bigcap_i D_i$ and the point $f(\ve w)$ is contained in every~$C_i$. Chakerian showed in~\cite{Chakerian-HellyBrouwer} that Helly's theorem also follows from Brouwer's fixed point theorem, in a fashion similar to the proof of the KKM theorem: the function, instead of moving every point $\ve x$ by the vector of its distances to each $C_i$, moves every point $\ve x$ to the barycenter of its projections on each $C_i$. (See the right-hand picture for an illustration: the projections of $\ve x$ to the three convex sets, red, green, and blue, are shown by the white points; the black square is their barycenter). \end{minipage} \hfill \begin{minipage}{4cm} \centering \includegraphics[page=17]{figures-final} \end{minipage} \bigskip \noindent \begin{minipage}{3.5cm} \centering \includegraphics[page=18]{figures-final} \end{minipage} \hfill \begin{minipage}{12cm} \quad Helly's theorem for integral points was first established by Doignon~\cite[Proposition~4.2]{Doi1973}, and rediscovered later by Scarf~\cite{Sca1977} and Bell~\cite{Bell:1977tm}.
Hoffman~\cite{Hoffman:1979ix} observed that the techniques apply to more settings than the integer lattice. The proofs of Doignon, Bell, and Hoffman hinge on the following insight: if a polytope in $\mathbb{R}^k$ has $m$ vertices, each with integer coordinates, and contains no other integral point, then no two vertices may have all their coordinates of the same parity (else their midpoint would yield a contradiction) and thus $m$ is at most~$2^k$. Bell's proof starts with a family of $m$ halfspaces in $\mathbb{R}^k$ whose intersection contains no integer point and such that removing any \makebox[\linewidth][s]{halfspace would enlarge the intersection to include some integer point.} \end{minipage} \smallskip\noindent Translating each halfspace in the direction of its outer normal until every facet contains a witness point with integer coordinates, one gets witness points that must be distinct and form a polytope as above, so $m$ is at most $2^k$. Hoffman's proof is more complicated but holds in a more general axiomatic setting. Scarf's proof is algorithmic and relies on Sperner's lemma (see also the variation by Todd~\cite{todd1977number}). Note that the equivalence via polarity between Helly's and Carath\'eodory's theorems in~$\mathbb{R}^d$ does not carry over to~$\mathbb{Z}^d$, as the bounds are respectively $2^d$ and at most $2d-2$ (by Theorem~\ref{thm:sebo}). \bigskip Some of our applications will use the following version where some of the coordinates in the intersection may be required to be integers. \begin{namedtheorem}[Mixed Helly theorem] \label{mixedhelly} Let $\mathcal{F}$ be a finite family of convex sets in $\mathbb{R}^{d+k}$ of cardinality at least $(d+1)2^k$. If every $(d+1)2^k$ members of $\mathcal{F}$ have a common point whose last $k$ coordinates are integer numbers, then all members of $\mathcal{F}$ have an intersection point whose last $k$ coordinates are integer numbers.
\end{namedtheorem} \noindent The mixed Helly theorem was announced by Hoffman~\cite{Hoffman:1979ix} as one of the outcomes of his axiomatic setting; however, he deferred the details of the proof of this mixed analogue of Doignon's theorem to a forthcoming paper, which never appeared. A complete proof came decades later and is due to Averkov and Weismantel~\cite{AW2012}. Their proof proceeds in two steps. They start with a family of halfspaces in $\mathbb{R}^{d+k}$ whose intersection is a nonempty full-dimensional polytope $P$ containing no point whose last $k$ coordinates are integer (the general case follows). They project $P$ onto the last $k$ coordinates, obtaining a polytope $T(P)$ in $\mathbb{R}^k$ with no integer point; Doignon's theorem then ensures that at most $2^k$ of the halfspaces supporting the facets of $T(P)$ already intersect with no integer point. By Carath\'eodory's theorem in $\mathbb{R}^{d+k}$, each halfspace in $\mathbb{R}^k$ is the projection of the intersection of some at most $d+1$ of the original halfspaces; the bound follows. \bigskip \emph{Fractional} versions of Helly's theorem play an important role in the study of sampling and hitting geometric set systems. Here, \emph{fractional} means that one only assumes that some constant fraction of the subfamilies (of a given size) intersect, and concludes that some constant fraction of the whole family has a common point. \begin{theoremp}[Fractional Helly theorem]\label{thm:fracHelly} Let $0 < \alpha \le 1$ and $\mathcal{F}$ be a family of $n$ convex sets in $\mathbb{R}^d$. If $\alpha \binom{n}{d+1}$ of the $(d+1)$-element subsets of $\mathcal{F}$ have non-empty intersection, then some $(1-(1-\alpha)^{\frac1{d+1}})n$ elements of $\mathcal{F}$ intersect. \end{theoremp} \noindent The first result in this direction was proven by Katchalski and Liu~\cite{Kat79frac}. Starting with a family of $n$ convex sets, they assign to any subfamily the lexicographically minimum point in their intersection.
The set of points lexicographically \emph{larger} than a given point is convex, so Helly's theorem ensures that the minimal point of the intersection of $k \ge d$ convex sets is also the minimal point in the intersection of some $d$ among them. A weak version of the above theorem, where the size of the intersecting subfamily is only guaranteed to be at least~$\frac{\alpha}{d+1}n$, then follows from a pigeonhole argument. There are few settings in which fractional Helly theorems are known. On the one hand, Matou\v{s}ek~\cite{Matousek:2004cs} proved, via a general sampling technique due to Clarkson~\cite{clarkson-rs,clarkson-shor}, that any set system with bounded VC dimension affords a fractional Helly theorem (his approach holds for other measures of complexity than the VC dimension~\cite{pinchasi-fracHelly}); we come back to the notion of VC dimension in Section~\ref{s:geompart}. On the other hand, B\'ar\'any and Matou\v{s}ek~\cite{baranymatousek} established a fractional Helly theorem for lattices, including over the integers. It is surprising that they only have to check the non-empty integral intersection of a positive fraction of the $(d+1)$-tuples, instead of the expected $2^d$-tuples. The bound of $(1-(1-\alpha)^{\frac1{d+1}})n$ in Theorem~\ref{thm:fracHelly} is sharp and was obtained by Kalai~\cite{Kalai:1984bg} and, independently, by Eckhoff~\cite{eckhoff1985upper}, via a study of nerve complexes that led to a more general \emph{topological} point of view. The \emph{nerve} of a family of convex sets is the abstract simplicial complex with a vertex for every set in the family, and a simplex for every intersecting sub-family.
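For a small family, the nerve just defined can be computed by exhaustive enumeration. Here is a minimal sketch of ours (the interval family is a hypothetical example); in dimension one, Helly's theorem says that pairwise intersection already forces the nerve to contain the full simplex.

```python
from itertools import combinations

def intersects(intervals):
    """Non-empty common intersection of closed intervals (lo, hi)."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return lo <= hi

def nerve(family):
    """All non-empty index sets whose members have a common point."""
    n = len(family)
    simplices = []
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if intersects([family[i] for i in idx]):
                simplices.append(idx)
    return simplices

# Four intervals on the line, pairwise intersecting.
family = [(0, 3), (1, 4), (2, 6), (2.5, 5)]
N = nerve(family)
pairs_ok = all((i, j) in N for i, j in combinations(range(4), 2))
# Helly in dimension 1: all pairwise intersections force the full simplex.
print(pairs_ok, tuple(range(4)) in N)
```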
Helly's theorem and its fractional version translate easily into the language of nerves: the former states that the nerve cannot contain the boundary of a $(\ge d)$-dimensional simplex without containing the simplex, and the latter asserts that if the nerve contains a positive fraction of the $d$-dimensional faces, then it must contain a simplex of dimension a positive fraction of $n$. Kalai's proof uses his technique of \emph{algebraic shifting}~\cite{Kalai01algebraicshifting} to study how the number of simplices of various dimensions behaves as the nerve is simplified through a sequence of \emph{$d$-collapses}, a type of filtration available to nerves of convex sets~\cite{tancer2013intersection}. If a complex is $d$-collapsible, then all its subcomplexes have trivial homology in dimension $d$ and above, i.e., it is \emph{$d$-Leray}. Kalai's proof of Theorem~\ref{thm:fracHelly} extends to $d$-Leray complexes (see~\cite[$\mathsection 5.2$]{HT06}), and Alon et al.~\cite{AKMM} further proved that families of subsets of $\mathbb{R}^d$ that are closed under intersection and whose nerve is $d$-Leray also admit weak $\varepsilon$-nets and $(p,q)$-theorems; examples of such families include good covers (this follows from the Nerve theorem~\cite{nerve}) and acyclic covers~\cite{acyclic}. Topological versions of Helly's theorem have a further application in geometric group theory \cite{Farb}. \bigskip Fairly general \emph{topological} Helly theorems can be derived from non-embeddability results via a construction reminiscent, again, of the setup of the KKM theorem. Let us illustrate the basic idea with five sets in the plane.
If any four intersect and any three have path-connected intersection, then we can draw $K_5$, the complete graph on $5$ vertices, \emph{inside} the family by placing each vertex in the intersections of four sets (different vertices missing different sets) and connecting any two vertices by a path contained in the intersection of the three sets that contain them both. By a classical theorem of Kuratowski for planar graphs \cite{diestel-graphtheory}, there exist two edges that have no common vertex and that intersect. This intersection point must be in all five sets. An induction based on the same idea yields that in a family of planar sets, where intersections are empty or path-connected, if every four sets intersect, then they all must intersect. In higher dimensions, where all graphs can be drawn without crossing, the same approach can be combined with non-embeddability results derived from the Borsuk-Ulam theorem, e.g., the \emph{Van Kampen-Flores theorem} which states that $\Delta_{2k+2}^{(k)}$ does not embed in $\mathbb{R}^{2k}$~\cite{vanKampen:KomplexeInEuklidischenRaeumen-1932,Flores:NichtEinbettbar-1933}; cf.~\cite[Chapter~5]{matousek2003using}. The discussion we present below on the topological Tverberg theorem is also connected to embeddability of complexes, e.g., the paper \cite{Blagojevic:2014ul} proves that the topological Radon theorem implies the Van Kampen-Flores theorem. The most general result in this direction~\cite{BH-journal} is that any family $\mathcal{F}$ of subsets in~$\mathbb{R}^d$ admits a Helly-type theorem in which the constant that replaces $d+1$ in the case of convex sets is bounded as a function of the dimension $d$ and \[ b = \max_{G \subseteq \mathcal{F}, 0 \le i \le \lceil \frac{d}2\rceil-1} \tilde{\beta}_i(\cap G),\] where $\tilde{\beta}_i(X)$ denotes the $i$-th reduced Betti number, over $\mathbb{Z}_2$, of a space $X$. Non-embeddability arguments and the study of nerves offer two different pathways to topological Helly theorems.
While the former allows more flexible assumptions, the latter offers more powerful conclusions in the form of a sharp fractional Helly theorem. It is not known whether the benefits of both approaches could be combined. \begin{oproblem} Given $b$ and $d$, is there a fractional Helly theorem for families $\mathcal{F}$ of subsets in~$\mathbb{R}^d$ where $\tilde{\beta}_i(\cap G) \le b$ for any $G \subseteq \mathcal{F}$ and any $0 \le i \le \lceil\frac{d}2\rceil-1$? \end{oproblem} \noindent This open question relates to a more systematic effort to build a theory of \emph{homological VC dimension}~\cite[Conjectures~6 and~7]{kalai_conjectures}. There are some recent results for planar sets with connected intersections (the case $d=2$ and $b=0$)~\cite{NervesMinors}. Helly-type theorems are too numerous to list in full in this survey, but we wish to point out at least one more variation. \emph{Quantitative} Helly theorems were introduced by B\'ar\'any, Katchalski, and Pach in \cite{baranykatchalskipach}. In this family of Helly-style theorems one is not content with a non-empty intersection of a family: both the hypothesis and the conclusion involve quantitative information about the intersections, typically their volume, their diameter, or the number of lattice points they contain. Motivated by applications in optimization, in the last two years several papers have been published on this subject, both for continuous \cite{Brazitikos2017,DeLoeraetalquant,Naszodi2016} and for discrete \cite{alievetal,Averkovetal-tightbounds,chestnutetal,Rolnick+Soberon} quantitative Helly-type theorems. For other recent Helly-type theorems see \cite{amentaetal2017helly}. We next discuss the fifth remarkable theorem of our survey. \subsection{Tverberg} \label{tv-section} \noindent Tverberg-type theorems allow for the partition of finite point sets so that the convex hulls of the parts intersect.
In its original form we have: \begin{namedtheorem}[Tverberg theorem] \label{thm:Tverberg} Any set of at least $(r-1)(d+1)+1$ points in $\mathbb{R}^d$ can be partitioned into $r$ subsets whose convex hulls all have at least one point in common. \end{namedtheorem} \begin{figure} \centering \includegraphics[page=19]{figures-final} \caption{Tverberg's theorem in the plane: the two types of partitions for $r=3$ (left) and for $r=4$ (right).\label{f:Tverberg-partitions}} \end{figure} \noindent Such a division into parts is often called a \emph{Tverberg partition}; see Figure~\ref{f:Tverberg-partitions}. The case $r=2$ is known as \emph{Radon's lemma}~\cite{originalRadon} and the case $d=2$, but general $r$, was proven by Birch~\cite{Birch:1959ii}, before Tverberg proved the general statement. Tverberg's first proof of his theorem~\cite{Tverberg:1966tb} relies on a deformation argument: start with a configuration with a known Tverberg partition, and move the points continuously to the target configuration. This process is such that, while the number of Tverberg partitions may change, there will always be one present. A simpler proof consists in arguing that a partition of the point set minimizing an adequate function must be a Tverberg partition; this idea, which originates in B\'ar\'any's proof of the colorful Carath\'eodory theorem, was gradually refined by Tverberg~\cite{Tverberg1981}, Tverberg and Vre\'cica~\cite{TverbergVrecica} and Roudneff~\cite{Roudneff-ejc} (Roudneff minimizes the sum of the squared distances between a point and the convex hulls of the parts). Another proof, due to Sarkaria~\cite{Sarkaria:1992vt}, uses bilinear algebra to deduce Tverberg's theorem from the colorful Carath\'eodory theorem; the idea behind Sarkaria's proof was later made simpler (using explicit tensors instead of number fields) and more algorithmic by B\'ar\'any and Onn in~\cite{baranyonn-colorfulLP}.
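The base case $r=2$ is effectively computable: any $d+2$ points in $\mathbb{R}^d$ satisfy an affine dependence, and splitting its coefficients by sign yields the two parts of a Radon partition, together with the common point of the two hulls. A minimal numerical sketch of ours (assuming numpy is available):

```python
import numpy as np

def radon_partition(points):
    """Split d+2 points in R^d into two parts with intersecting hulls."""
    pts = np.asarray(points, dtype=float)        # shape (d+2, d)
    # An affine dependence: sum c_i p_i = 0 with sum c_i = 0, c != 0,
    # obtained as a null-space vector of the lifted point matrix.
    A = np.vstack([pts.T, np.ones(len(pts))])    # (d+1) x (d+2)
    c = np.linalg.svd(A)[2][-1]                  # last right-singular vector
    pos, neg = c > 1e-9, c < -1e-9
    # A common point of the two hulls, witnessing the partition.
    witness = pts[pos].T @ c[pos] / c[pos].sum()
    return ([points[i] for i in np.flatnonzero(pos)],
            [points[i] for i in np.flatnonzero(neg)],
            witness)

# Four points in the plane: the diagonals of the unit square cross at (0.5, 0.5).
P, Q, w = radon_partition([(0, 0), (1, 0), (0, 1), (1, 1)])
print(P, Q, w)
```

This is only the two-part case; the algorithmic versions of the full theorem cited above are considerably more subtle.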
Recently, B\'ar\'any and Sober\'on revisited these ideas and proved a new generalization of Tverberg's theorem using affine combinations \cite{Tverbergplusminus}. \bigskip Just as for Carath\'eodory's and Helly's theorems, there is an \emph{integer Tverberg} theorem. Its most recent version guarantees that any set of at least $(r-1)d 2^d +1$ integer points in $\mathbb{Z}^d$ can be partitioned into $r$ parts whose convex hulls have a point of $\mathbb{Z}^d$ in common~\cite{integertverberg2017}. The proof of this upper bound goes as follows. From the integer Helly theorem, one can prove that any finite set of integer points $S \subset {\mathbb Z}^d$ has an integer \emph{centerpoint}: a point $\ve p$ such that for every hyperplane $H$ containing $\ve p$, one of its half-spaces contains at least $|S|/2^d$ points from $S$. Using this centerpoint, it is not difficult to see that $(r-1)d 2^d +1$ points can be partitioned into $r$ pairwise disjoint simplices all containing $\ve p$. We will see more about centerpoints in Sections \ref{centerpointsinoptima} and \ref{half-space depth:subsec}. The upper bound on the integer Tverberg number is not known to be sharp, and the best lower bound of $2^d (r-1)+1$ is due to Doignon (this result was communicated in \cite{Eckhoff:2000jw}). Recently, in \cite{lowdimTverberg}, the authors showed that the Tverberg number in $\mathbb{Z}^2$ is exactly $4r-3$ when $r\geq 3$ and improved the upper bounds for the Tverberg numbers of $\mathbb{Z}^3$. \begin{figure} \centering \includegraphics[page=20]{figures-final} \caption{The lower bound for the integral Radon Theorem. Left: Five integral points in the plane with no integral Radon partition.
Right: A configuration of $k$ integral points in $\mathbb{R}^d$ with no Radon partition can be turned into a configuration of $2k$ integral points in $\mathbb{R}^{d+1}$ with no Radon partition.\label{f:Radon-integral}} \end{figure} The special case of bipartition, i.e., $r=2$, is called the \emph{integral Radon theorem}. A sharper upper bound of $ d2^d-d+3$ and a lower bound of $\frac54 2^{d} +1$ were established by Onn~\cite{onn+radon} (see Figure~\ref{f:Radon-integral}). Even low dimensional cases are hard. The only sharp bound known, also due to Onn, is for $r=d=2$: any six integral points in the plane have an integral Radon partition. An upper bound of $17$ for the case $d=3$ was proven by Bezdek and Blokhuis~\cite{Bezdek2003}. \begin{oproblem} Determine the exact value of the integer Tverberg numbers. In particular, is the integer Radon number for $d=3$ less than 17? Is it bigger than 11? \end{oproblem} \noindent More generally, there is the notion of an \emph{integer quantitative Tverberg number}: any set of at least \linebreak $\pth{(2^d-2)\left\lceil \tfrac23(k+1)\right\rceil+2}(r-1)kd+k$ integer points can be partitioned into $r$ parts whose convex hulls have $k$ integral points in common~\cite{integertverberg2017}; similar results hold more generally for sets that are discrete, i.e., intersect any compact set in only finitely many points, for instance the difference between a lattice and one of its sublattices. Recent improvements on the quantitative integer Helly theorem~\cite{Averkovetal-tightbounds,chestnutetal} lead to sharper upper bounds for the Tverberg version. \begin{oproblem} Determine tighter lower and upper bounds on the integer quantitative Tverberg numbers.
\end{oproblem} \bigskip Tverberg's theorem can be understood as stating that for any \emph{linear} map from $\Delta_{(r-1)(d+1)}^{(d)}$, the $d$-dimensional skeleton of the simplex with $(r-1)(d+1)+1$ vertices, into $\mathbb{R}^d$ there must exist $r$ disjoint simplices whose images intersect. This reformulation invites the question, going as far back as 1979 (see the end of the important paper~\cite{Bajmoczy:1979bj}), whether the same conclusion holds for all \emph{continuous} maps. In other words, is there a topological Tverberg theorem? For $r=2$, this is the question of non-embeddability discussed above in relation to topological Helly theorems. \begin{figure} \centering \includegraphics[page=21]{figures-final} \caption{The topological Tverberg theorem in the plane ($r=3$). Left: A configuration of seven points and its Tverberg partition into three parts. Center: The linear map used in the left is deformed continuously so as to break the previous Tverberg partition, every edge not represented is kept straight. Right: Nonetheless, a new Tverberg partition emerges.\label{f:Tverberg-topo} Note that the topological Tverberg conjecture remains open in dimension two when $r$ is not a prime-power. See~\cite{50tverbergsurvey} and, for the two-dimensional case,~\cite{topotverbergwinding}.} \end{figure} Positive answers were obtained first for $r=2$ (the \emph{topological Radon theorem}) by Bajm\'oczy and B\'ar\'any~\cite{Bajmoczy:1979bj}, then for $r$ prime by B\'ar\'any et al.~\cite{Barany:1981vh}, and for $r$ a power of a prime by \"Ozaydin~\cite{Oza87} and independently, but later, by Volovikov~\cite{Volovikov-TT} and Sarkaria~\cite{Sarkaria-TT}. Matou\v{s}ek~\cite[Chapters~5 and~6]{matousek2003using} offers an accessible introduction to the techniques behind the topological Tverberg theorem.
For $r=2$, the proof of the topological Radon theorem uses the notion of \emph{deleted product} of a geometric simplicial complex $\mathsf{K}$ with itself, defined as \[ {\mathsf{K}_{\Delta}^{2}}=\{ \sigma \times \tau : \sigma, \tau \in \mathsf{K}, \ \sigma \cap \tau = \varnothing \}. \] Now, for contradiction, suppose there exists a continuous map $f:|\mathsf{K}| \to \mathbb{R}^d$ with the property that the images of any two disjoint faces are disjoint. It induces another continuous map $\tilde{f}: | {\mathsf{K}_{\Delta}^{2}}| \to \mathbb{S}^{d-1}$ where $ \tilde{f}(x_1,x_2) =\frac{f(x_1)-f(x_2)}{\|f(x_1)-f(x_2)\|}.$ The map $\tilde{f}$ commutes with the central symmetries of $\mathbb{S}^{d-1}$ and ${\mathsf{K}_{\Delta}^{2}}$, where the central symmetry of ${\mathsf{K}_{\Delta}^{2}}$ is the exchange of the two components. When $\mathsf{K}$ is the $(d+1)$-dimensional simplex, ${\mathsf{K}_{\Delta}^{2}}$ is homotopy equivalent to $\mathbb{S}^d$ (this is not trivial). Since the Borsuk-Ulam theorem prevents the existence of an antipodal map from $\mathbb{S}^d$ to $\mathbb{S}^{d-1}$, we obtain a contradiction: any continuous $f$ must map two disjoint faces to intersecting images. More generally, to prove the topological Tverberg theorem when $r$ is a prime, one may start with a map from $\mathsf{K}=\Delta_{(r-1)(d+1)}^{(d)}$ to $\mathbb{R}^d$ with no $r$-wise intersection and use it to build another map from an $r$-fold deleted product ${\mathsf{K}_{\Delta}^{r}}$, replace the antipodality by the action of the symmetric group, and apply a generalization of the Borsuk-Ulam theorem such as Dold's theorem~\cite{Dold:1983wr}. The case of $r$ a prime power is technically more involved. (Note that this outline leaves out some issues such as dimension-reduction considerations~\cite{topotverbergwinding}.)
By the late 1990's, the widespread belief that B\'ar\'any's question had a positive answer for every $r$ and $d$ was known as the \emph{topological Tverberg conjecture}. It was only recently refuted by Frick~\cite{frick15}, who completed an approach of Mabillard and Wagner~\cite{Mabillard2014} building on \"Ozaydin's work~\cite{Oza87}. In a nutshell, \"Ozaydin proved that an equivariant map from the adequate $r$-fold product $\tilde{X}$ exists if and only if $r$ is not a prime power. Mabillard and Wagner proposed an isotopy-based approach to construct a map with no $r$-wise intersection when such an equivariant map exists, but could only develop it in codimension larger than what the topological Tverberg conjecture allows. Frick overcame this codimension restriction, producing the first series of counter-examples to the topological Tverberg conjecture. The current state of affairs is that a counter-example is known for every $r$ that is not a prime power and for every $d \ge 2r$. See the survey \cite{50tverbergsurvey} for more details. \bigskip To conclude our discussion of ``all things Tverberg'', let us highlight some natural variants inspired by Tverberg's theorem for which only very partial results are known. One can find the most recent variants and extensions of Tverberg's theorem in \cite{barany+soberonsurvey, DeLoeraetal-tv-2018, Por-2018}. A \emph{tolerant Tverberg theorem}, due to Sober\'{o}n and Strausz~\cite{Soberon:2012er}, asserts that any set of $(t+1)(r-1)(d+1)+1$ points can be partitioned into $r$ parts such that, after deletion of any $t$ points, what remains is a Tverberg partition. This bound was improved to $r(t+2)-1$ for $d=1$ and to $2r(t+2)-1$ for $d=2$~\cite{Mulzer:2013je} (the bound for $d=1$ is tight). Recently there have been two significant improvements: Garc{\'i}a-Col{\'i}n et al.~\cite{nataliaetal2017} gave an asymptotically tight bound for the tolerant Tverberg theorem when the dimension and the size of the partition are fixed.
Later, in \cite{pablo2018}, Sober\'on used the probabilistic method to give another asymptotic bound that is polynomial in all three parameters. Still, we can ask for precise values. \begin{oproblem} What is the smallest number $n$ such that any set of $n$ points in $\mathbb{R}^d$ has a Tverberg partition into $r$ parts that tolerates the deletion of $t$ points? \end{oproblem} \noindent A related Carath\'eodory-type variation of Tverberg's theorem~\cite{ABBFM09} considers $r$ linear maps $f_1,\dots,f_r$, assumes that $f_1(e) \cap f_2(e) \cap \dots \cap f_r(e)$ is non-empty for every $1$-dimensional edge $e$ of $\Delta_{(r-1)(d+1)}$, and concludes the existence of disjoint faces in the simplex $\Delta_{(r-1)(d+1)}$, $\sigma_1,\sigma_2,\dots,\sigma_r$ of dimensions summing to $(r-1)(d+1)+1-r$ and such that $f_1(\sigma_1) \cap f_2(\sigma_2) \cap \dots \cap f_r(\sigma_r)$ is not empty. A conjectured \emph{relaxed} version of Tverberg's theorem, due to Reay, goes as follows. Denote by $T(d,r,k)$ the minimum positive integer~$n$ such that any set of~$n$ points $a_1,\ldots,a_n$ in~$\mathbb{R}^d$ (not necessarily distinct) admits a partition into~$r$ pairwise disjoint sets $A_1, \dots, A_r$ such that any size~$k$ subfamily of $\{\conv(A_1), \conv(A_2),$ $\ldots,\conv(A_r) \}$ has a nonempty intersection. Note that Tverberg's theorem says $T(d,r,r) = (r-1)(d+1)+1$; Reay conjectured that the Tverberg constant is tight even for smaller values of~$k$: \begin{conjecture} $T(d,r,k) =(r-1)(d+1)+1$ for all $2 \le k \le r$. \end{conjecture} \noindent Note that if all $\conv(A_i)$ intersect then they intersect $k$-wise, so $T(d,r,k)$ is at most $T(d,r,r)$ and in particular $T(d,r,k)$ is finite for any $k \le r$. Moreover, Helly's theorem ensures that $T(d,r,k) = T(d,r,r)$ for every $d+1 \le k \le r$.
The conjecture is known to hold for $d + 1 \le 2k - 1$ or $k < r < \frac{d+1}{d+1-k}k$, and some weaker bounds are known in several other cases; we refer the interested reader to~\cite{asadaetal2} and the discussion therein. Let us take this opportunity to mention another famous conjecture of a flavor similar to Reay's conjecture. A \emph{thrackle} is a graph that can be drawn in the plane in such a way that any pair of edges intersects precisely once, either at a common vertex or at a transverse intersection point. \begin{conjecture}[Conway's Thrackle conjecture] For any thrackle, the number of edges is at most the number of vertices. \end{conjecture} \noindent The conjecture is known to hold if all edges are drawn as straight line segments~\cite{E46} (it is akin to Reay's setup for $k=2$). We refer the interested reader to the recent progress of Fulek and Pach~\cite{fulek2010} and the discussion and references therein. The next big open question was stated in 1979 by Sierksma, who offered an entire Dutch cheese as a prize for whoever could solve it. In unpublished mimeographed notes he made a conjecture about the \emph{number} of distinct Tverberg partitions guaranteed to exist for any set of $(r-1)(d+1)+1$ points in $\mathbb{R}^d$. \begin{conjecture}[Sierksma] Any set of $(r-1)(d+1)+1$ points in $\mathbb{R}^d$ has at least $((r-1)!)^d$ distinct Tverberg partitions into $r$ parts. \end{conjecture} \noindent The best known lower bounds are a bit cumbersome, so we do not state them here; lower bounds for the number of Tverberg partitions were first obtained by Vu{\v{c}}i{\'{c}} and {\v{Z}}ivaljevi{\'{c}}~\cite{vucicrade} for $r$ prime, using topological tools. Hell showed that these bounds also hold when $r$ is a prime power~\cite{hell1} and later, without any topology, provided better bounds for the case of the plane~\cite{Hell2008}.
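In the smallest cases Sierksma's bound can be checked by direct enumeration. The following sketch (an illustration of ours, not taken from the references) treats $d=1$ and $r=3$, where convex hulls are intervals and, by Helly's theorem on the line, a family of intervals has a common point exactly when the largest left endpoint does not exceed the smallest right endpoint:

```python
from itertools import product

def tverberg_partitions_1d(points, r):
    """Enumerate the partitions of points on a line into r nonempty parts
    whose convex hulls (intervals) share a common point."""
    n = len(points)
    found = set()
    for labels in product(range(r), repeat=n):   # part label for each point
        parts = [tuple(i for i in range(n) if labels[i] == j) for j in range(r)]
        if any(not p for p in parts):
            continue
        key = frozenset(parts)                   # forget the order of the parts
        if key in found:
            continue
        # Helly on the line: intervals meet iff max of minima <= min of maxima.
        lo = max(min(points[i] for i in p) for p in parts)
        hi = min(max(points[i] for i in p) for p in parts)
        if lo <= hi:
            found.add(key)
    return found

# d = 1, r = 3: (r-1)(d+1)+1 = 5 points; Sierksma's bound is ((r-1)!)^d = 2.
partitions = tverberg_partitions_1d([0, 1, 2, 3, 4], 3)
```

For the five points $0,1,2,3,4$ the only Tverberg partitions into three parts are $\{0,3\},\{1,4\},\{2\}$ and $\{0,4\},\{1,3\},\{2\}$, matching the bound $((r-1)!)^d=2$.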
Last, but certainly not least, the \emph{colorful} Tverberg conjecture was formulated by B\'ar\'any and Larman~\cite{Barany:1992tx} some 25 years ago, but only a few results are known today (see~\cite{barany+soberonsurvey,blamaszie,Blagojevicetal2015,colortv-ZivaljevicV92} and the references therein). \begin{conjecture}\label{conjecture-colorful-Tverberg} Let $F_1, F_2, \ldots, F_{d+1} \subset \mathbb{R}^d$ be sets of $r$ points each. There exists a partition of $\bigcup_{i=1}^{d+1}F_i$ into $r$ sets $A_1, A_2, \ldots, A_{r}$, of $d+1$ points each, such that every $A_i$ contains exactly one point from every $F_j$ and $\bigcap_{j=1}^r \conv(A_j) \not= \varnothing$. \end{conjecture} \noindent Further conjectures along the lines of the B\'ar\'any-Larman conjecture, with colorful, discrete and quantitative flavors, were formulated by De Loera et al.~\cite{integertverberg2017}. \subsection{Computational considerations} \label{compu-convgeo} We continue our discussion of computational issues begun in Section \ref{fixed-point-computing}. We remark that the Carath\'eodory and Helly theorems, in their classical real-valued versions from Section \ref{s:convexgeo}, are dual to each other: essentially, if one has an algorithm to find the Carath\'eodory decomposition of a vector in terms of other vectors, one also has an algorithm for finding an intersection point for a family of convex sets. For Helly's theorem, one wishes to find a point in the intersection of convex sets. The problem of finding such an intersection point can be thought of as a special case of the problem of minimizing a convex function over convex sets.
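For instance, when the convex sets are halfspaces, finding a point in the intersection is a pure linear feasibility problem; a minimal sketch using SciPy's \texttt{linprog} (the triangle below is an illustrative instance of ours):

```python
import numpy as np
from scipy.optimize import linprog

def point_in_intersection(A, b):
    """Return a point x with A @ x <= b (a point in the intersection of the
    halfspaces a_i . x <= b_i), or None if the intersection is empty."""
    n = A.shape[1]
    # Feasibility only: minimize the zero objective subject to A x <= b,
    # with all coordinates of x unrestricted in sign.
    res = linprog(np.zeros(n), A_ub=A, b_ub=b, bounds=[(None, None)] * n)
    return res.x if res.success else None

# Three halfspaces in the plane whose intersection is a triangle.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
x = point_in_intersection(A, b)
```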
We will see explicit cases for this problem later in Section \ref{s:optimization}, where the convex sets are explicitly given by constraints (i.e., equations and inequalities), but for now all that we need to know is that: (a) a whole range of different algorithms for solving such problems exist, (b) some of these algorithms are in fact efficient, and (c) depending on the type of input constraints (e.g., convex sets defined by linear inequalities versus arbitrary constraints) one can be even more efficient \cite{bertsekasbook,boyd2004convex}. By the convexity assumption, a local minimum is also a global minimum and, thanks to the Helly and Carath\'eodory theorems, there are nice necessary and sufficient conditions for when we have found an optimum. We discuss more about this in Section \ref{s:optimization}. Convex optimization problems have been classified in levels of increasing computational difficulty, and different specialized algorithms are available (e.g., least squares, linear programming, conic optimization, semidefinite programming, etc.). For example, Carath\'eodory's theorem, in the simplest real-valued form presented in Section \ref{caratheodory-thms}, can be formulated as a linear programming problem, and Helly's theorem for halfspaces is reducible to linear programming too. The good news is that linear programs have efficient algorithmic solutions, both in theory and in practice \cite{Sch86}. Still, even if we move to the most general version of (real-valued) Helly's theorem for a finite family of arbitrary convex sets, the challenge is to solve a \emph{convex programming} problem, for which many types of algorithms exist. We cannot cover them here, but we recommend \cite{bertsekasbook,boyd2004convex} for excellent introductions to convexity algorithms and computational methods. Compare now the good news above to the bad news involving the integer and mixed-integer versions of the Carath\'eodory and Helly theorems.
We are now in the realm of combinatorial, integer, and mixed-integer programming where, for the most part, the problems are not efficiently solvable. Solving combinatorial or mixed-integer optimization problems, that is, finding an optimal solution to such problems, can be a difficult task. Unlike linear programming, whose feasible region is a convex (polyhedral) set, in combinatorial problems one must search a lattice of feasible points or, in the mixed-integer case, a set of disjoint half-lines or line segments to find an optimal solution. Thus, unlike linear programming, finding a global optimum requires us to relax, approximate, or decompose the solution space, and sometimes we are forced to enumerate all possibilities. As an example of the higher computational difficulty of discrete versions of the Helly and Carath\'eodory theorems, let us look at the problem of computing a Hilbert basis. This was a particularly simple case for the integer version of Carath\'eodory's theorem presented in Section \ref{caratheodory-thms}. Alas, it was proved in \cite{durandetal} that deciding whether a given solution belongs to the Hilbert basis of a given system is coNP-complete. Thus, even in this tame case, the integral Carath\'eodory property is hard to realize computationally. We chose to highlight the colorful Carath\'eodory theorem because it is so general and because it can be used to prove the original Carath\'eodory theorem and many other existence theorems in high-dimensional discrete geometry, such as Tverberg's theorem or the centerpoint theorem (see Section \ref{s:datapoints} for details). While the original Carath\'eodory theorem can be cast as a linear program, and thus a solution can be found in polynomial time, much less is known about the algorithmic complexity of its colorful version.
More precisely, the algorithmic colorful Carath\'eodory problem is the computational problem of finding such a colorful choice of elements as described in the theorem. Despite several efforts in the past, the computational complexity of the colorful Carath\'eodory problem in arbitrary dimension is still open. In \cite{PPAD-ColorfulCaratheodory}, Meunier et al. showed that the problem lies in the complexity class PPAD. \begin{oproblem} What is the complexity of finding a colorful simplex under the hypotheses of the colorful Carath\'eodory theorem? \end{oproblem} This question was formulated for the first time by B\'ar\'any and Onn in \cite{baranyonn-colorfulLP}, where they also formulated a general family of related questions that come under the name \emph{colorful linear programming}. Meunier and Sarrabezolles~\cite{meunier-sarrabezolles2} have shown that a closely related problem is PPAD-complete: given $d+1$ pairs of points in $\mathbb{Q}^d$ and a colorful choice that contains the origin in its convex hull, find another colorful choice of points that contains the origin in its convex hull. Since we have no exact combinatorial polynomial-time algorithms for the colorful Carath\'eodory theorem, iterative approximation algorithms are of interest. This was first considered in \cite{baranyonn-colorfulLP}, but other researchers, e.g., \cite{mulzerstein2015}, have approached this problem too. Let us now speak about the computational complexity of Tverberg's theorem. Sarka\-ria's proof of Tverberg's theorem (later simplified by B\'ar\'any and Onn \cite{barany1995caratheodory}) gives a polynomial-time way to compute a Tverberg partition from a colorful Carath\'eodory choice with the origin in its convex hull. In this way, the computational issues about Tverberg's theorem are closely connected to computational issues regarding the colorful Carath\'eodory theorem.
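In tiny instances the colorful Carath\'eodory problem can, of course, be solved by exhaustive search over all colorful selections, testing containment of the origin by linear programming. A sketch (assuming SciPy; the three scaled triangles around the origin are an illustrative instance of ours):

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def origin_in_hull(points):
    """Is the origin a convex combination of the given points? (LP feasibility)"""
    P = np.array(points, dtype=float).T            # columns are the points
    k = P.shape[1]
    A_eq = np.vstack([P, np.ones((1, k))])         # sum lam_i p_i = 0, sum lam_i = 1
    b_eq = np.zeros(P.shape[0] + 1)
    b_eq[-1] = 1.0
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success

def colorful_choice(colors):
    """Brute force: one point per color class whose convex hull contains the origin."""
    for choice in product(*colors):
        if origin_in_hull(choice):
            return choice
    return None

# Three color classes in R^2, each a triangle containing the origin in its hull
# (scaled copies of one triangle), so a colorful choice is guaranteed to exist.
t = [(np.cos(a), np.sin(a)) for a in (0.0, 2.1, 4.2)]
colors = [[(r * x, r * y) for (x, y) in t] for r in (1.0, 2.0, 3.0)]
choice = colorful_choice(colors)
```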
Since one can calculate a Tverberg partition from a colorful Carath\'eodory selection, one can show that Tverberg's theorem belongs to the class PPAD. One of the simplest, yet most frustrating, aspects of Tverberg's result is that it is not clear how to find a Tverberg partition efficiently. So it is natural to ask: \begin{oproblem} Is there a polynomial-time algorithm to find a Tverberg partition? That is, given $n=(m-1)(d+1)+1$ points in $\mathbb{R}^d$, compute, in time polynomial in $n$, a partition into $m$ parts with intersecting convex hulls. \end{oproblem} Since finding such a partition in polynomial time is an open question, approximate versions of Tverberg's theorem are of interest. Mulzer et al. \cite{mulzerstein2015} designed a deterministic algorithm that finds a Tverberg partition into $n/(4(d+1)^3)$ parts in time $d^{O(\log d)} n$. This means that for every fixed dimension one can compute an approximate Tverberg point (and hence also an approximate centerpoint) in linear time. Rolnick and Sober\'on \cite{rolnick+soberon2} proposed probabilistic algorithms for computing Tverberg partitions into $n/d^3$ parts with error probability $\epsilon$, and with time complexity that is weakly polynomial in $n$, $d$, and $\log(1/\epsilon)$. \section{Games and fair division} \label{s-games+fairness+independence} Mathematics and the social sciences have had rich interactions since Condorcet's seminal work on the analysis of voting systems. The relevance of (combinatorial) convexity and topology to this interdisciplinary research was first established in the 1940s and 1950s through the work of Nash, von Neumann, Gale, Shapley, Scarf, and many others, and it has been confirmed in the following decades by the development of fair-division algorithms and computational social choice (see e.g., \cite{bramsetal1,bramsetal2, collectionmathdecisionsgames,Nisanetal-2007} and the many references therein). This section shows how our five discrete theorems appear in these topics too.
\subsection{Strategic games} Game theory studies a broad range of ``games'' that model situations where several agents collaborate or compete. Strategic games model situations where $N$ players interact, each choosing from finitely many ``strategies'' to play and enjoying a payoff that depends on the strategies chosen by all players (his or her own choice included). Formally, player $i$ is modeled by a pair $(S_i,u^i)$, where $S_i$ is a finite set of strategies available to him or her and $u^i : S_1\times\cdots\times S_N \to \mathbb{R}$ is a payoff function. A central theme in the theory of strategic games is the search for equilibria, where each player's choice is the best response to the other players' choices. \subsubsection{Nash equilibria} Formally, a \emph{Nash equilibrium in pure strategies} is a choice of strategy for each player $s_1 \in S_1, \ldots, s_N \in S_N$ such that for $i=1,\ldots,N$ and all $g \in S_i$ \[ u^i(s_1, \ldots, s_{i-1},s_i,s_{i+1}, \ldots, s_N) \ge u^i(s_1, \ldots, s_{i-1}, g, s_{i+1}, \ldots, s_N).\] Let us illustrate pure Nash equilibria with the \emph{max-cut game}, where an arbitrary graph $G = (V, E)$ is fixed and each vertex $x \in V$ represents a player. Each player $x$ chooses from two strategies $S_x=\{1, -1\}$, and his/her payoff function is the number of neighbors of $x$ in $G$ with a different strategy: \[ u^x(s_1, \dots, s_{|V|}) = |\{y \colon xy \in E, \ \hbox{and} \ s_x \not= s_y\}|.\] Any bipartition of $V$ that maximizes the number of edges between the two parts, also called a \emph{maximum cut}, is a pure Nash equilibrium. Indeed, if a player could strictly increase his or her payoff by switching strategy, then this switch would increase the value of the cut. \medskip Unfortunately, not every $N$-player game has a \emph{pure} Nash equilibrium. For example, consider the \emph{matching pennies game}. Players Alice and Bob simultaneously select heads or tails of a coin.
If the choice is the same, then Alice wins one penny and Bob loses a penny. If they choose differently, then Bob wins a penny and Alice loses a penny. Each player thus has a choice of two strategies, and the payoffs for each player can be recorded in two $2 \times 2$ matrices ($A$ and $B$, one for each player). Note that the pure strategies alone offer no Nash equilibrium, as this is a winner-takes-all situation. (In the matching pennies game, the payoff matrices satisfy $A+B=0$; this is an example of a \emph{zero-sum game}, a notion to which we come back later.) \bigskip It turns out that equilibria always exist when considering a \emph{randomized choice of the strategies}. We now allow more freedom to the players by making them choose not a single strategy, but a probability distribution over all their strategies. Once choices are made, a random strategy is selected for each player from his or her distribution, and the (random) payoff is determined. Each player then wants to maximize her or his expected payoff. Formally, a \emph{mixed strategy} for player $i$ is a probability distribution $m_i$ on the set of pure strategies $S_i$. For instance, in the matching pennies game, this means that each player decides his or her move according to a biased coin flip and is free to choose the bias. Note that the mixed strategies include all the pure strategies as special cases. The set of all possible mixed strategies is the convex polytope $M=\prod \Delta_{|S_i|-1}$. We define a product probability measure on $S=S_1\times\cdots\times S_N$ by $P_m(s)=\prod_{i=1}^N m_i(s_i)$ where $s=(s_1,s_2,\dots,s_N)$.
Therefore the expected payoff of the $i$-th player under this probability distribution $P_m$ is $$U^i(m_1, \ldots, m_N)=U^i(m)=\sum_{s=(s_1,s_2,\dots,s_N) \in S} P_m(s) u^i(s).$$ The mixed strategies $m=(m_1, \ldots, m_N)$ form a \emph{Nash equilibrium in mixed strategies} if for each player $i$ and for all probability distributions $p$ on $S_i$, modifying $m_i$ to $p$ does not increase the expected payoff with respect to the choices of the other players, that is \[ U^i(m_1, \ldots,m_i, \ldots, m_N) \ge U^i(m_1, \ldots, m_{i-1}, p, m_{i+1}, \ldots, m_N).\] The literature has plenty of examples of two-player games~\cite[$\mathsection 8.1$]{matousek2007understanding}. Thinking about three or more players is more delicate, as illustrated by Nash's three-man poker game~\cite[p. 293]{nashpaper2}. The existence of Nash equilibria for mixed strategies -- the theorem for which John Nash received the Nobel prize -- is one of the most celebrated applications of combinatorial topology, following from Brouwer's theorem. See \cite{nashpaper,nashpaper2}. \begin{theorem}[Nash's theorem]\label{thm:Nash} Every $N$-player game with continuous payoff functions has at least one Nash equilibrium in mixed strategies. \end{theorem} \noindent Nash's original, very short, proof~\cite{nashpaper} makes strong use of combinatorial topology and convexity arguments. It considers the set-valued function that maps each $N$-tuple of mixed strategies $m=(m_1, \ldots, m_N)$ to the set of $N$-tuples $(t_1, \ldots, t_N)$ where $t_i$ is a best response to $(m_1, \ldots, m_{i-1}, m_{i+1}, \ldots, m_N)$. Because the probability distributions on $S_i$ are the points of the simplex $\Delta_{|S_i|-1}$, Kakutani's theorem (we saw this in Section \ref{s:topology} after Brouwer) ensures this function has a fixed point, which is the desired equilibrium. Nash gave a second proof using Brouwer's fixed point theorem \cite{nashpaper2}.
For this, Nash constructed a continuous function $f$ from the polytope $M$, associated to the game above, into itself. For $m \in M$, we define $f(m)$ component-wise as follows: $f(m)=(f_1(m),\dots,f_N(m))$, and each entry $f_i(m)$ is equal to $(f_{i1}(m),f_{i2}(m),\dots,f_{i|S_i|}(m))$ where %
{\small $$f_{ij}(m)=\frac{m_{ij} + \max(0, u^i(m_1, \ldots, m_{i-1}, s^{(i)}_j, m_{i+1}, \ldots, m_N)- u^i(m_1, \ldots,m_i, \ldots, m_N) )} {1 + \sum_{k=1}^{|S_i|} \max(0, u^i(m_1, \ldots, m_{i-1}, s^{(i)}_k, m_{i+1}, \ldots, m_N)- u^i(m_1, \ldots,m_i, \ldots, m_N) )},$$ } where the $s_j^{(i)}$'s are the pure strategies available for player $i$. \medskip Nash showed that this function $f$ is a continuous map from the polytope $M$ into itself and thus, by Brouwer's fixed point theorem, it must have at least one fixed point. Nash then went on to show that any fixed point of this function is in fact a Nash equilibrium. \bigskip The polytope $M=\prod \Delta_{|S_i|-1}$ is actually a Cartesian product of simplices, sometimes called a \emph{simplotope}. The special structure of simplotopes has been exploited for the computation of fixed points (see \cite{simplotopes,todd1993new} and references there) and in the algebraic solution of equilibrium problems via polynomial equations \cite{mclennan2005}. There is one more reason why knowing that $M$ is a polyhedron is useful. Despite its geometric beauty and intricacy, the notion of Nash equilibrium has several modeling limitations. For example, the assumption behind Nash's mixed-strategy equilibria is that the choices of each player are independent of those of his/her opponents, but that may not hold. Alternative mathematical models that adjust the definition of Nash mixed strategies to allow \emph{correlated equilibria} appear in \cite{aumann}. In other words, the expression for the payoff function no longer uses the simple product probability structure of a simplotope, but can have a more complicated polyhedral geometry.
For example, it is known that the set of correlated equilibria is described by finitely many linear inequalities and is non-empty, independently of Nash's theorem. See \cite{gilboa+zemel} and its references. \subsubsection{Two-player games}\label{s:nash-complexity} Nash equilibria have been extensively investigated in the area of algorithmic game theory; see for instance the introductory chapter of~\cite[$\mathsection 2$]{Nisanetal-2007}. We only discuss here some of their relations to combinatorial topology and convexity. \bigskip The Nash equilibria for two players can be formulated as a \emph{linear complementarity problem}, the theory of which subsumes both linear programming and two-player game theory (an introduction is in \cite{cottleetalbook}). Let $A$ and $B$ denote the $m \times n$ payoff matrices of the first and second players, respectively. By definition, a pair $(\boldsymbol{x}^*, \boldsymbol{y}^*) \in \Delta_{m-1} \times \Delta_{n-1}$ is a Nash equilibrium if and only if \begin{equation}\label{eq:NELP} \boldsymbol{x}^{*T} A\boldsymbol{y}^* \geq\boldsymbol{x}^T A\boldsymbol{y}^* \quad \forall \boldsymbol{x} \in \Delta_{m-1} \quad \textrm{ and } \quad\boldsymbol{x}^{*T} B\boldsymbol{y}^* \geq\boldsymbol{x}^{*T} B\boldsymbol{y} \quad \forall \boldsymbol{y} \in \Delta_{n-1}.
\end{equation} Here comes the linear complementarity formulation: \begin{proposition}\label{p:lincomp} The pair $(\boldsymbol{x}^*, \boldsymbol{y}^*) \in \Delta_{m-1} \times \Delta_{n-1}$ satisfies~\eqref{eq:NELP} if and only if there exist $\ve u^* \ge \ve 0$, $\ve v^* \ge \ve 0$, $s \ge 0$, and $t \ge 0$ such that %
\[ A\boldsymbol{y}^* + \ve u^* = s \ve 1, \quad B^T\boldsymbol{x}^* + \ve v^* = t \ve 1, \quad \text{and} \quad \boldsymbol{x}^{*T}\ve u^* = \boldsymbol{y}^{*T}\ve v^* = 0.\] \end{proposition} \noindent Since all vectors are non-negative, the conditions on inner products imply that the supports of $\boldsymbol{x}^*$ and $\ve u^*$ are disjoint, and similarly for $\boldsymbol{y}^*$ and $\ve v^*$, hence the aforementioned complementarity. The proof of Proposition~\ref{p:lincomp} goes as follows. Start with a Nash equilibrium $(\boldsymbol{x}^*, \boldsymbol{y}^*)$ and let $s = \boldsymbol{x}^{*T} A\boldsymbol{y}^*$ and $t = \boldsymbol{x}^{*T} B\boldsymbol{y}^*$; the values of $\ve u^*$ and $\ve v^*$ are forced and the complementarity conditions follow from multiplying the first equation by $\boldsymbol{x}^{*T}$ and the second equation by $\boldsymbol{y}^{*T}$. Conversely, the complementarity $\boldsymbol{x}^{*T}\ve u^* =0$ and $\boldsymbol{x}^* \in \Delta_{m-1}$ yield that $s = \boldsymbol{x}^{*T} A\boldsymbol{y}^*$ (and similarly $t = \boldsymbol{x}^{*T} B\boldsymbol{y}^*$); the non-negativity of $\boldsymbol{x}^T \ve u^*$ for any $\boldsymbol{x} \in \Delta_{m-1}$ implies that $\boldsymbol{x}^*$ is a best response to $\boldsymbol{y}^*$; by a similar argument, $\boldsymbol{y}^*$ is a best response to~$\boldsymbol{x}^*$.
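As a sanity check, the conditions of Proposition~\ref{p:lincomp} are easy to test numerically for a claimed equilibrium. The sketch below (the $2\times 2$ coordination game and its mixed equilibrium are illustrative choices of ours, not taken from the text) builds $s$, $t$, $\ve u^*$, $\ve v^*$ and verifies non-negativity and complementarity:

```python
import numpy as np

def check_equilibrium_lcp(A, B, x, y, tol=1e-9):
    """Test the linear complementarity conditions: with s = x.A.y and
    t = x.B.y, the slacks u = s*1 - A y and v = t*1 - B^T x must be
    nonnegative and orthogonal to x and y respectively."""
    s, t = x @ A @ y, x @ B @ y
    u = s * np.ones(A.shape[0]) - A @ y
    v = t * np.ones(A.shape[1]) - B.T @ x
    return bool(u.min() >= -tol and v.min() >= -tol
                and abs(x @ u) <= tol and abs(y @ v) <= tol)

# A 2x2 coordination game; x = (2/3, 1/3), y = (1/3, 2/3) is a mixed equilibrium.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
x, y = np.array([2/3, 1/3]), np.array([1/3, 2/3])
```

For full-support equilibria such as this one the slacks vanish, so complementarity holds trivially; for equilibria with smaller support the conditions encode that only best-response pure strategies are played.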
\bigskip The linear complementarity formulation of Proposition~\ref{p:lincomp} yields, after adequate rescaling, the standard method to compute two-player Nash equilibria: find a non-trivial solution (i.e., other than $\boldsymbol{x} = \boldsymbol{y} = \ve 0$) to the linear complementarity system \begin{equation}\label{eq:lincompspe} \pth{\begin{matrix} A & I_m\end{matrix}} \pth{\begin{array}{c} \boldsymbol{y} \\ \ve u\end{array}} = \ve 1, \quad \pth{\begin{matrix} I_n & B^T \end{matrix}} \pth{\begin{array}{c} \ve v \\ \boldsymbol{x} \end{array}} = \ve 1, \quad \text{and} \quad \boldsymbol{x}^T\ve u = \boldsymbol{y}^T\ve v = 0. \end{equation} The standard method to solve a linear complementarity system of this form is the \emph{Lemke-Howson pivoting algorithm}~\cite{lemkehowson64}, which operates on feasible bases. A \emph{feasible basis} of a linear system with non-negativity constraints is a set of indices of columns whose induced sub-matrix has the same rank as the system and defines a non-negative solution. The trivial solution $\boldsymbol{x}=\boldsymbol{y}=\ve 0$ gives feasible bases $I_1$ for $\pth{\begin{array}{c} \ve v \\ \boldsymbol{x} \end{array}}$ and $J_1$ for $\pth{\begin{array}{c} \boldsymbol{y} \\ \ve u\end{array}}$, and these bases are \emph{complementary}, i.e., have disjoint supports. Pick an (arbitrary) element $k_1 \notin I_1$. By Carath\'eodory's theorem (in the form of Proposition~\ref{p:cara++}), $I_1 \cup \{k_1\}$ contains another feasible basis $I_2$ for the system $B^T\boldsymbol{x} + \ve v = \ve 1$. Switching from $(I_1, J_1)$ to $(I_2,J_1)$ is our first pivot step. Remark that $I_2$ and $J_1$ are no longer disjoint, as they share $k_1$. To remedy this, note that the part of $I_2$ corresponding to $\ve v$ lost some element $k_2$ which is also absent from the part of $J_1$ corresponding to $\boldsymbol{y}$.
We can thus make $k_2$ enter $J_1$ without further degrading the complementarity; Carath\'eodory's theorem ensures that $J_1 \cup \{k_2\}$ contains another feasible basis $J_2$, and we pivot from $(I_2,J_1)$ to $(I_2,J_2)$. The part of $J_2$ corresponding to $\ve u$ lost some element $k_3$. If $k_3 = k_1$ then $I_2$ and $J_2$ are complementary and determine our non-trivial solution; otherwise we continue pivoting by making $k_3$ enter $I_2$, etc. until a pivot makes $k_1$ exit one of the bases; the pair at hand is then complementary. The Lemke-Howson algorithm is guaranteed to terminate under the non-dege\-ne\-ra\-cy assumption that $\ve 1$ is not a positive combination of fewer than $m$ columns of $\pth{\begin{matrix} A & I_m\end{matrix}}$ or fewer than $n$ columns of $\pth{\begin{matrix} I_n & B^T \end{matrix}}$. This follows from a non-degenerate Carath\'eodory theorem (after Proposition~\ref{p:cara++}): any point in the convex hull of $d+2$ points of $\mathbb{R}^d$ that is non-degenerate, i.e., is not in the convex hull of some $d$ of them, lies in the convex hulls of exactly two $(d+1)$-element subsets. (Note that the two non-degeneracy assumptions stated above are equivalent via the convex/conic change of viewpoint.) Now, for any pair~$(I, J)$ of non-complementary feasible bases encountered by the algorithm, there is exactly one index $k$ not in $I \cup J$. A pivot step makes $k$ enter either $I$ or $J$; in each case, the non-degenerate Carath\'eodory theorem yields exactly one other pair of feasible bases. Every pair~$(I, J)$ of non-complementary feasible bases encountered by the algorithm thus has exactly two neighbors through pivot steps. Since the trivial solution $(I_1,J_1)$ has exactly one neighbor, there is no place where the walk can loop back. Remark that this algorithm gives an alternate (constructive) proof of the existence of a Nash equilibrium for two-player games.
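The non-degenerate Carath\'eodory statement just invoked can be watched in action in a tiny instance: for $d+2$ generic points in the plane and a generic point of their convex hull, exactly two of the four triangles contain that point. A brute-force sketch (an illustration of ours; containment is tested via signed areas):

```python
from itertools import combinations

def in_triangle(p, a, b, c, eps=1e-12):
    """Strict containment test via signed areas (assumes generic position)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 > eps and s2 > eps and s3 > eps) or \
           (s1 < -eps and s2 < -eps and s3 < -eps)

# d = 2: four points in general position and a generic point of their hull.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
target = (0.5, 0.3)
containing = [tri for tri in combinations(pts, 3) if in_triangle(target, *tri)]
```

Here exactly two of the four triangles contain the target point, as the non-degenerate Carath\'eodory theorem predicts.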
The argument that proves termination also reveals that, from a computational complexity point of view, linear complementarity systems of the form of Equation~\eqref{eq:lincompspe} are in the PPAD class. Let us point out that the Lemke-Howson algorithm can be understood as a Sperner-type search for a fully-labeled simplex in a pseudomanifold. Let $V = \{\pm1, \pm2,$ $\ldots, \pm(m+n)\}$, where positive integers are understood as indices of columns of $\pth{\begin{matrix} A & I_m\end{matrix}}$ and negative integers are understood as (minus) indices of columns of $\pth{\begin{matrix} I_n & B^T \end{matrix}}$. Consider the simplicial complex $\mathsf{K}$ on~$V$ whose maximal simplices are the union of the complement of a feasible basis of $\pth{\begin{matrix} A & I_m\end{matrix}}$ and the complement of a feasible basis of $\pth{\begin{matrix} I_n & B^T \end{matrix}}$. The non-degenerate Carath\'eodory theorem spelled out above ensures that $\mathsf{K}$ is a pseudomanifold without boundary. If every $i\in V$ is labeled by its absolute value $|i|$, the fully-labeled simplices correspond exactly to the complementary feasible bases. \bigskip As a byproduct of the linear complementarity formulation of Proposition~\ref{p:lincomp}, we also get that the problem of computing a Nash equilibrium for two players is well-posed from the point of view of computational complexity: if the input involves only rational data, there is an equilibrium that involves only rational data and has encoding size polynomial in the input size (see for instance the discussion in the survey of McKelvey and McLennan~\cite{mckelvey1996computation}). This is in sharp contrast with the case of three or more players: Nash's three-player poker game~\cite{nashpaper2} shows that a three-player game with finitely many strategies and rational payoff arrays may have a (unique) Nash equilibrium with irrational coordinates. 
The Lemke-Howson algorithm has exponential complexity in the worst case~\cite{savani-vonstengel} and solving general linear complementarity problems is NP-hard~\cite{Chung1989}. The problem of computing a Nash equilibrium for two players is PPAD-complete~\cite{chen+deng+teng2009} (see also~\cite{daskalakis+goldberg+papadimitriu2006}). The intractability for games with three or more players is even more stringent, as many decision problems are $\exists \mathbb{R}$-complete, i.e., as difficult as deciding the non-emptiness of a general semi-algebraic set; this includes in particular deciding whether a $3$-player game has a Nash equilibrium within $\ell_\infty$-distance $r$ from a given distribution~$\ve x$~\cite[Corollary~5.5]{Schaefer2015} or the existence of more than one equilibrium or of equilibria with payoff or support conditions~\cite{gargetal2015}. Behind this $\exists \mathbb{R}$-completeness lurks a more daunting fact: Datta's universality theorem~\cite{Datta03} asserts that arbitrarily complicated semi-algebraic sets can be encoded as sets of Nash equilibria (formally: every real algebraic variety is isomorphic to the set of mixed Nash equilibria of some $3$-player game). Whether the $\exists \mathbb{R}$-completeness results stated above could follow from Datta's proof is an interesting open question~\cite[Remark~5.6]{Schaefer2015}. \begin{oproblem} Can Datta's universality theorem be improved to yield an efficient polynomial-time reduction between any semi-algebraic set and the Nash equilibria of a game? \end{oproblem} \noindent Another open question is whether both few players \emph{and} few strategies per player already give rise to universality. \begin{oproblem} Is there a universality result for the set of Nash equilibria of games with a constant number of players and a constant number of strategies?
\end{oproblem} \subsubsection{Zero-sum games.} The two-player games where what is won by a player is lost by the other are called \emph{zero-sum} games; this is the case when the payoff matrices satisfy $A=-B$ in the formulation~\eqref{eq:NELP}. In this special case, it is customary to consider that one player aims at maximizing the payoff while the other tries to minimize it. Nash's theorem then asserts that there exist $\boldsymbol{x}^* \in \Delta_{n-1}$ and $\boldsymbol{y}^* \in \Delta_{m-1}$ such that \begin{equation}\label{eq:nzs} \begin{array}{ll} \forall \boldsymbol{y} \in \Delta_{m-1}, & {\boldsymbol{x}^*}^T A \boldsymbol{y} \ge {\boldsymbol{x}^*}^T A \boldsymbol{y}^* \\ \forall \boldsymbol{x} \in \Delta_{n-1}, & {\boldsymbol{x}}^T A \boldsymbol{y}^* \le {\boldsymbol{x}^*}^T A \boldsymbol{y}^*. \end{array} \end{equation} \noindent This readily implies \begin{equation}\label{eq:minmax} \begin{aligned} {\boldsymbol{x}^*}^T A \boldsymbol{y}^* \ge & \max_{\boldsymbol{x}\in\Delta_{n-1}} {\boldsymbol{x}}^T A \boldsymbol{y}^* \ge \min_{\boldsymbol{y}\in\Delta_{m-1}} \max_{\boldsymbol{x}\in\Delta_{n-1}} {\boldsymbol{x}}^T A \boldsymbol{y} \\ & \ge \max_{\boldsymbol{x}\in\Delta_{n-1}} \min_{\boldsymbol{y}\in\Delta_{m-1}} {\boldsymbol{x}}^T A \boldsymbol{y} \ge \min_{\boldsymbol{y}\in\Delta_{m-1}} {\boldsymbol{x}^*}^T A \boldsymbol{y} \ge {\boldsymbol{x}^*}^T A \boldsymbol{y}^*. \end{aligned} \end{equation} The only inequality that does not follow from Equation~\eqref{eq:nzs}, the central one, holds in fact for arbitrary bivariate functions (see Section~\ref{s:lpkkt}). Altogether, this yields the min-max theorem of von Neumann. 
\begin{theorem}\label{thm:VonNeumann} For any $A \in \mathbb{R}^{n\times m}$, $$\max_{\boldsymbol{x}\in\Delta_{n-1}}\min_{\boldsymbol{y}\in\Delta_{m-1}}\boldsymbol{x}^TA\boldsymbol{y}= \min_{\boldsymbol{y}\in\Delta_{m-1}}\max_{\boldsymbol{x}\in\Delta_{n-1}}\boldsymbol{x}^TA\boldsymbol{y}.$$ \end{theorem} \noindent In game-theoretic language, the real number $\boldsymbol{x}^{*T}A\boldsymbol{y}^{*}$ is the \emph{value} of the game. Von Neumann's theorem has a nice ``asynchronous'' interpretation: for any choice of a strategy by the minimizing player, the maximizing player can respond so as to ensure a payoff of at least the value of the game. Moreover, if the maximizing player cares only about achieving the value of the game, the strategy $\boldsymbol{x}^*$ will work regardless of what the opponent plays. (Of course, symmetric statements hold for the minimizing player.) In zero-sum games, every Nash equilibrium (and there may be many) gives each player the same payoff. This is specific to zero-sum games and fails already for broader types of two-player games. A classical result of Dantzig \cite{dantzig-minmax} says that the min-max identity of von Neumann's theorem can be proved, without any help from combinatorial topology, via linear programming duality; optimal strategies can thus be computed in polynomial time. We will discuss more about this in Section~\ref{s:optimization}. More generally, if the rank of $A+B$ is constant then the problem is polynomial~\cite{kannan+theobald}. \bigskip Formulation~\eqref{eq:NELP} suggests an approximate relaxation and the approximate Cara\-th\'e\-odory theorem~\ref{thm:caratheodory-p} provides a positive complexity result.
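Before turning to approximation, let us record a minimal sketch of the linear-programming route to the value of a zero-sum game just mentioned (an illustration assuming SciPy; rock-paper-scissors has value $0$ and the uniform strategy as its unique optimum):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Value and an optimal mixed strategy for the row (maximizing) player of
    the zero-sum game with row-player payoff matrix A, by linear programming:
    maximize v subject to sum_i A[i,j] x_i >= v for all j, x in the simplex."""
    n, m = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # maximize v = minimize -v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])     # v - (A^T x)_j <= 0 for all j
    A_eq = np.zeros((1, n + 1))
    A_eq[0, :n] = 1.0                             # x is a probability vector
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[:n], res.x[-1]

# Rock-paper-scissors: the value is 0 and the uniform strategy is optimal.
A = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
x, value = solve_zero_sum(A)
```

The dual of this program yields the minimizing player's optimal strategy, which is precisely how linear programming duality proves Theorem~\ref{thm:VonNeumann}.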
We say a mixed strategy pair $(\boldsymbol{x},\boldsymbol{y})$ is {\em an $\varepsilon$-Nash equilibrium} if $$\boldsymbol{x}^T A\boldsymbol{y} \geq\boldsymbol{e}_i^T A\boldsymbol{y} - \varepsilon \quad \forall \ i \in [n] \qquad \textrm{ and }\qquad \boldsymbol{x}^T B\boldsymbol{y}\geq\boldsymbol{x}^T B\boldsymbol{e}_j - \varepsilon \quad \forall \ j \in [m].$$ Intuitively, a mixed strategy pair is an $\varepsilon$-Nash equilibrium if no player can benefit more than $\varepsilon$, in expectation, by a unilateral deviation. The case when $A+B=0$ is precisely the case of zero-sum games, for which we know efficient algorithms exist. Barman, using the approximate Carath\'eodory theorem presented in Theorem \ref{thm:caratheodory-p}, provided an extension in \cite{Barman}. \begin{theorem} \label{thm:nash} Suppose that all entries of the payoff matrices $A, B$ lie in $[-1,1]$. If the number of non-zero entries in each column of $A+B$ is at most $s$, then an $\varepsilon$-Nash equilibrium of $(A,B)$ can be computed in time $n^{ O \left( \frac{\log (\max{(s,4)})}{\varepsilon^2} \right)}$. \end{theorem} \noindent This, in particular, gives a polynomial-time approximation scheme for Nash equilibrium in games with fixed column sparsity $s$. Moreover, for arbitrary bi-matrix games -- since $s$ can be at most $n$ -- the running time of this algorithm matches the best-known upper bound, which was obtained by Lipton, Markakis, and Mehta \cite{liptonetal}. \subsection{Two fair-division problems: cakes and necklaces} \label{cakes+necklaces} In various situations players are eager to divide goods in a ``fair way''. There are several examples of such fair-division problems where our five theorems (or their variations) play a key role. We review some famous examples, all with combinatorial-topological proofs.
Before we start, we remark that there are other interesting mathematical challenges arising in distributing resources that we will not cover here, such as \emph{gerrymandering}, which is the practice of drawing political maps to gain an advantage. See e.g., \cite{soberon-gerrymanderingetc} for connections to the five theorems in this survey. \subsubsection*{Cake cutting} A cake is a metaphor for a heterogeneous, divisible good, such as a piece of land or an inheritance. We consider the problem of dividing a cake between $r$ players in such a way that each player prefers his or her part to any other part. We call this \emph{envy-free}. Let us point out that the literature about fair division, including this and other types of cake-cutting problems, is both old and huge; see e.g.,~\cite{brams+taylor,bramsetal1, bramsetal2,roberson+webb,rothe-surveys, Steinhaus}. For example, one of the first envy-free division results was shown by Dubins and Spanier \cite{dubins+spanier}. One setting where pieces are connected is the following: The cake is identified with the unit interval $\left[ 0, 1 \right]$ and a {\em division} of the cake into $r$ pieces is an $r$-tuple $\boldsymbol{x}=\left( x_{1}, \ldots , x_{r} \right) $, with $x_{j}\geq 0$ for all $j \in[r]$ and $\sum_{j=1}^{r} x_{j} = 1$; in other words, a division is a point $\boldsymbol{x}$ in the $(r-1)$-dimensional simplex $\Delta_{r-1}$. Here, $x_{j}$ represents the size of the $j$-th piece, when ordered from left to right. The preferences of player $i$ are modeled by a function $P^i$ mapping each division $\boldsymbol{x} \in \Delta_{r-1}$ to a nonempty subset of $[r]$ (indexing the pieces that he or she prefers). A division $\boldsymbol{x}$ is \emph{envy-free} if there exists a choice of pairwise distinct indices, one from each $P^i(\boldsymbol{x})$.
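Deciding whether a given division is envy-free thus amounts to choosing pairwise distinct preferred pieces, one per player (a system of distinct representatives). A brute-force sketch (the preference sets are illustrative, not from the text):

```python
from itertools import permutations

# Envy-freeness of a fixed division: pick pairwise distinct preferred
# pieces, one per player.  prefs[i] is the set P^i(x) of pieces that
# player i prefers at this division (illustrative data).

def envy_free(prefs):
    r = len(prefs)
    # try every assignment of distinct pieces to the r players
    return any(all(perm[i] in prefs[i] for i in range(r))
               for perm in permutations(range(r)))

assert envy_free([{0, 1}, {1}, {0, 2}])     # give piece 1 to player 2, etc.
assert not envy_free([{0}, {0}, {0, 1}])    # players 1 and 2 both demand piece 0
```

For large $r$ one would use a bipartite matching algorithm instead of enumerating permutations, but the brute force suffices to illustrate the definition.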
It is natural to assume that the set $P^i(\boldsymbol{x})$ of preferences of player $i$ never contains the index of a piece of size zero (i.e., all players are hungry), and it is common to suppose that the $P^i$'s are \emph{closed}, that is, if $\lim_{n \to \infty} \boldsymbol{x}_n = \boldsymbol{x}$ and $j \in P^i(\boldsymbol{x}_n)$ for every $n$, then $j \in P^i(\boldsymbol{x})$. Stromquist~\cite{stromquist1} and Woodall \cite{woodall} proved the following result independently. \begin{theorem}\label{thm:division} Under the assumptions that all $r$ players are hungry and the preference functions $P^i$ are closed, there exists an envy-free division with connected pieces. \end{theorem} \noindent The original proof relies on the KKM theorem we saw in Section \ref{s:topology}. An unpublished proof due to Forest Simmons was improved and adapted by Su~\cite{Su99}. The idea is to refine the usual derivation of Brouwer's theorem from Sperner's lemma. Take a sequence $(\mathsf{T}_n)$ of triangulations of $\Delta_{r-1}$ whose edge-length goes to $0$. Assign every vertex to a player in a way that every full-dimensional simplex of $\mathsf{T}_n$ has a vertex assigned to each player; this may not be possible for an arbitrary sequence of triangulations, but taking iterated barycentric subdivisions does the job. We label or color every vertex $\boldsymbol{x}$ assigned to player $i$ by some (arbitrary) element from $P^i(\boldsymbol{x})$. The assumption that players are hungry ensures that this is a Sperner labeling. The limit of a converging subsequence of fully-labeled simplices is, by the closedness assumption, an envy-free division. It may be disappointing in practice that one only gets an iterative infinite process converging to an envy-free division, but it has been shown in \cite{stromquist2} that no finite procedure exists if one requires each person to get a connected piece (i.e., the cake is cut by a minimal number of cuts).
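For two players, by contrast, a finite procedure with a single cut is classical: cut-and-choose, where one player equalizes the two pieces and the other chooses. A minimal sketch, assuming (purely for illustration, and unlike the Sperner-based scheme above) that preferences come from value densities on $[0,1]$:

```python
# Cut-and-choose for two players: player 1 finds, by bisection, a cut c
# splitting [0, 1] into two pieces she values equally; player 2 then takes
# whichever piece he prefers.  Neither player envies the other.  Modeling
# preferences by value densities is an illustrative assumption.

def value(density, a, b, steps=2000):
    # crude midpoint Riemann sum of the density over [a, b]
    h = (b - a) / steps
    return sum(density(a + (k + 0.5) * h) for k in range(steps)) * h

def cut_and_choose(d1, d2):
    lo, hi = 0.0, 1.0
    for _ in range(60):                      # bisection on the cut position
        c = (lo + hi) / 2
        if value(d1, 0, c) < value(d1, c, 1):
            lo = c
        else:
            hi = c
    c = (lo + hi) / 2
    # player 2 chooses the piece he values more
    side = "left" if value(d2, 0, c) >= value(d2, c, 1) else "right"
    return c, side

# uniform player 1; player 2's value is concentrated on the right
c, side = cut_and_choose(lambda t: 1.0, lambda t: 2 * t)
assert abs(c - 0.5) < 1e-6 and side == "right"
```

This is of course special to $r=2$; the impossibility result of \cite{stromquist2} concerns three or more players.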
In fact, Aziz and Mackenzie \cite{Aziz+Mackenzie2016} showed that there is a finite procedure for $r$-person cake cutting, bounded by a huge number of steps, but it involves breaking the cake into a ridiculous number of pieces. Thus, for now, one cannot achieve a minimal number of cuts with a finite procedure; if a lot of cuts are allowed, one gets a division that is impractical, as it destroys the cake. This is why an infinite process converging to an envy-free solution with a minimal number of cuts makes sense. Let us comment that the difficulty of the process is perhaps not so surprising. It is known that, in the polynomial-time function model, where the utility functions are given explicitly by polynomial-time algorithms, the envy-free cake-cutting problem has the same complexity as finding a Brouwer fixed point, or, more formally, it is PPAD-complete~\cite{DeQiSa12}. \medskip The polytopal version of Sperner's lemma (Proposition \ref{p:polytopalsperner} in Section \ref{s:convexgeo}) has many interesting game-theoretic applications. Using it, Su \cite{su-hex} recently gave a simple and elegant proof that Hex cannot end in a draw. Cloutier, Nyman, and Su \cite{multicake} applied the polytopal Sperner's lemma to \emph{multi-cake multi-player fair-division} problems. In this type of problem the players have several cakes to choose from, but choices from one cake influence each other; e.g., for a player the amount of vanilla cake may influence how much chocolate cake to order, or after some vanilla cake the player may not want any chocolate cake. Cloutier et al. asked whether there exists an integer $r(q, m)$, independent of the preferences, such that there exists an envy-free division of the $m$ cakes that does not require dividing any cake into more than $r(q, m)$ pieces, some of which are assigned to each of the $q$ players (some of the pieces can remain unassigned). Note that Theorem \ref{thm:division} for a single cake asserts that $r(q, 1) = q$.
Using the polytopal version of Sperner's lemma, they proved that $r(2, 2)$ and $r(2, 3)$ exist, with $r(2, 2) = 3$ and $r(2, 3)\leq 4$. This means that two cakes can be divided into three pieces each in such a way that the two players receive the pieces and everyone is satisfied with the fairness of the division. Moreover, they asked whether $r(2, m)\leq m + 1$. Recently Lebert, Meunier, and Carbonneaux \cite{lebertetal} have shown that $r(2,m)$ exists for any $m \geq 2$ and that its value is at most $m(m-1)+1$. They again used the polytopal version of Sperner's lemma, together with an inequality between the matching number and the fractional matching number in $m$-partite hypergraphs. Similarly, they showed that $r(3, 2)$ exists and that $r(3, 2) \leq 5$. Several interesting open questions remain; consider, for example: \begin{oproblem} Can the bound on $r(2,m)$ be improved? Can one assure the existence of $r(q,m)$ for all values of $q,m$? \end{oproblem} Finally, there are other surprising variations of the cake-division theorem. Consider one where there are no cuts involved and one divides objects that are not physically divisible. Consider a house with $n$ rooms and a total rent amount to be divided among $n$ roommates. Assume that for each possible division of the rent amount each roommate can point to one or more rooms as preferred. Then, the theorem proved by Su in \cite{Su99} states that there exists a division of the rent and an assignment of rooms to the participants, such that each player receives one of his/her preferred rooms. The preference functions must satisfy assumptions similar to those made for cake-cutting before Theorem \ref{thm:division}; for instance, it is now assumed that each roommate prefers a free room over paying rent. Once more the proof of this theorem is grounded in Sperner's lemma.
In most results on fair division it is assumed that no player is happy with an empty piece of cake, but imagine that a part of the cake is undesirable (burnt cake, anyone?). Another recent variation, in \cite{Meunier+Zerbib,Segal-Halevi}, considers the possibility that players may prefer an empty piece. It is well-known that Gale's colorful KKM theorem (see Section \ref{subsec:continuousversionspernertucker}) has interesting applications in economics, e.g., for the existence of economic equilibria. Now, Asada et al. \cite{asadaetal} used Gale's colorful KKM theorem to prove an extension, due to Woodall \cite{woodall}, of Theorem \ref{thm:division} and a similar extension of the rental-harmony result of Su. It turns out that there are envy-free cake divisions for any number of players, even if the preferences of one person remain secret! Say the situation is one where one of the cake-cutters (maybe the person celebrating a birthday) is not providing preference information; still, the cutting of the cake can be made without anyone being envious. Similarly, for deciding what the rent should be, it suffices to consider the information of all but one of the roommates, and still none of them will be jealous. The authors provided a rather nice existence proof of such fair divisions. Recently Frick, Houston-Edwards, and Meunier gave an iterative approximation algorithm for the solution \cite{frickhoustonmeunier}. \subsubsection*{Necklace splitting} Another fair-division problem asks for the fair splitting, between two thieves, of an open necklace with beads threaded along the string. Here, fair means that, for each type of bead, the numbers of beads assigned to the two thieves differ by at most one (say because the thieves are unaware of the value of the beads). Perhaps surprisingly, this can be achieved using only a few cuts (which turns out to be convenient should the string material be precious).
Note that, contrary to the classical statement, with this notion of fairness we do not need any divisibility conditions. \begin{theorem}[Necklace theorem]\label{thm:necklace} There exists a fair splitting of a necklace that uses no more cuts than there are bead types. \end{theorem} \noindent The result is optimal, as a number of cuts equal to the number of bead types is sometimes necessary (e.g., if the beads of the same type are consecutive). Theorem~\ref{thm:necklace} was first proved by Goldberg and West~\cite{GW85}, and a simpler proof using the Borsuk-Ulam theorem was proposed by Alon and West~\cite{AW86}, who also came up with the above popular formulation. More generally, the challenge of finding a division of an object into two portions so that each of $n$ people believes the portions are equal is called \emph{consensus-halving}. Note that necklace-splitting is a special case because different preferences are represented by different beads. In \cite{simmons+su} Simmons and Su showed how a non-constructive existence result on consensus-halving can be obtained from the Borsuk-Ulam theorem. They also showed, by a direct application of Tucker's lemma, how one can construct an approximate consensus-halving (up to a pre-specified tolerance level). Later on, a combinatorial proof due to P\'alv\"olgyi~\cite{Pal09} used the \emph{octahedral} Tucker lemma, instead of the Borsuk-Ulam theorem, for necklace-splitting. Here is a sketch of that idea. Let $n$ and $t$ denote, respectively, the numbers of beads and bead types, let $a_j$ denote the number of beads of type $j$, and assume, \emph{ad absurdum}, that any fair splitting uses more than $t$ cuts. Every vector $\boldsymbol{x} \in \{+,-,0\}^n$ defines a partial assignment of the beads to the two thieves (identified with $+$ and $-$ respectively).
If every extension of $\boldsymbol{x}$ into a complete assignment has at most $t$ cuts, then no such extension can be a fair splitting, and there is some $j$ such that more than $\frac{a_j}2$ beads of type $j$ are assigned to the same thief by $\boldsymbol{x}$; define $\lambda(\boldsymbol{x})$ to be the smallest such index $j$, signed by the thief who gets more than $\frac{a_j}2$ beads of that type. If there is an extension of $\boldsymbol{x}$ into a complete assignment with more than $t$ cuts, define $\lambda(\boldsymbol{x})$ to be the maximum number of cuts achieved by a completion, signed by the first component of that completion (that this sign is well-defined is straightforward). This map $\lambda$ satisfies the condition of the octahedral Tucker lemma with $m=n-1$ and therefore cannot exist, contradicting the initial assumption. \bigskip The necklace-splitting problem naturally generalizes from $2$ to any number $q$ of thieves. Alon~\cite{Alon:1987ur} showed that $(q-1)t$ cuts suffice (note that they are sometimes necessary). His proof first assumes $q$ to be prime and applies (in a non-trivial way) a stronger form of the Borsuk-Ulam theorem due to B\'ar\'any, Shlosman, and Sz\H{u}cs (see Lemma 4.1 in \cite{Alon:1987ur} or Statement B' in \cite{Barany:1981vh}). The case of general $q$ follows easily by a recursive argument. (The original proof assumed $a_j$ to be divisible by $q$, but this was subsequently relaxed by Alon et al.~\cite{alon2006algorithmic}, who obtained ``fair roundings'' via integrality properties of flows.) It is not known, however, whether P\'alv\"olgyi's proof can be adapted to the case with more than two thieves; perhaps an ingredient for this could be $\mathbb{Z}_q$-generalizations of the octahedral Tucker lemma~\cite{meunier2011chromatic,Z-GK}. Another open question~\cite{Pal09} relates to the rounding: for those $a_j$'s not divisible by $q$, can one choose the thieves who get the additional beads in the fair splitting?
This is easily seen to be true for two thieves and it is also true for three~\cite{alishahi2017fair}; it is open for $q \ge 4$. \begin{oproblem} Is it possible to choose for each type $j$ the thieves who get $\lceil a_j/q\rceil$ and those who get $\lfloor a_j/q\rfloor$ in the fair splitting? \end{oproblem} \bigskip For two thieves, a fair splitting can be computed in linear time for $t=2$ and in $O(n^{t-2})$ time for $t \ge 3$~\cite{GW85}. Let us mention again consensus-halving: the problem of dividing an object into two portions so that each of $n$ players believes the portions are equal appears in many contexts. In \cite{simmons+su} the Borsuk-Ulam theorem and Tucker's lemma were used for this purpose. A well-known challenge of computational complexity was to decide whether the computation of a fair splitting is a PPA-complete problem \cite{ppad-original}. This was just recently settled by Filos-Ratsikas and Goldberg \cite{Filos-Ratsikas:2018,2018Filos-Ratsikas}. The fact that necklace splitting is PPA-complete implies that the algorithmic version of the octahedral Tucker lemma is PPA-complete as well; there is also a paper directly proving that the algorithmic octahedral Tucker lemma is PPA-complete \cite{dengetal2017} (see Section \ref{fixed-point-computing}). The topic of fair division is very active and once more we can only point to a few additional types of results. One may, for instance, explore necklace splittings with the added constraint that adjacent pieces of the necklace cannot be claimed by certain pairs of thieves; for example, Asada et al. \cite{asadaetal} prove that four thieves on a circle can share the beads of the necklace, with the restriction that the two pairs of non-adjacent thieves will not receive adjacent pieces of the necklace. There are also several nice high-dimensional generalizations of (convex) splitting of booty; see \cite{plave+pablo,mark+rade} and the references therein.
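On small instances, the two-thief necklace theorem (Theorem~\ref{thm:necklace}) can be confirmed by exhaustive search over the cut positions. A brute-force sketch (the necklace below, with the beads of each type consecutive, is an illustrative worst case):

```python
from itertools import combinations

# Brute-force fair splitting of a small necklace between two thieves: try
# every set of at most t cuts, smallest number first, and both alternating
# assignments of the resulting intervals.  "Fair": the thieves' counts of
# each bead type differ by at most one.

def fair_split(necklace, num_types):
    n = len(necklace)
    for k in range(num_types + 1):               # number of cuts
        for cuts in combinations(range(1, n), k):
            bounds = (0, *cuts, n)
            for first in (0, 1):                 # thief taking the first interval
                got = [[0] * num_types, [0] * num_types]
                for i in range(len(bounds) - 1):
                    for b in necklace[bounds[i]:bounds[i + 1]]:
                        got[(first + i) % 2][b] += 1
                if all(abs(g0 - g1) <= 1 for g0, g1 in zip(*got)):
                    return cuts, first
    return None

# beads of each type consecutive: t = 2 cuts are necessary and sufficient
res = fair_split([0, 0, 0, 0, 1, 1, 1, 1], 2)
assert res is not None and len(res[0]) == 2
```

Since the outer loop tries fewer cuts first, the returned splitting uses the minimum number of cuts; the theorem guarantees this minimum never exceeds the number of bead types.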
\section{Graphs} \label{s-graphs} Graphs are often used to model problems where pairwise interactions are prominent. This includes situations where graph-like structures are apparent, for instance road or train networks, or situations as in Euler's famous problem on the bridges of K\"onigsberg (although the curious reader may check that Euler's original article does not use any graph-related notion, but argues purely in terms of words coding paths). In other situations, graphs are not evident, but exist implicitly; for instance, to describe time dependencies between tasks in scheduling problems. Graph theory developed in many independent directions, driven both by applications (e.g., finding graph matchings to resolve assignments \cite{Sch03}) and deep structural questions (e.g., the graph minor theory~\cite{lovasz2006graph}). Its interaction with (combinatorial) topology started in the mid-1970s with the proof by Lov\'asz~\cite[Theorem~6.1]{Bjorner-survey} of the conjecture of Frank and Maurer that any $k$-connected graph $G=(V,E)$ can be partitioned into $k$ subsets that induce connected subgraphs, have prescribed size (summing to $|V|$), and each contains a prescribed element. (This result was independently given a non-topological proof by Gy\H{o}ri~\cite{gyori1976division}.) Lov\'asz~\cite{Lovasz:1978wu} followed up shortly after with an astonishing solution to the Kneser conjecture based on the Borsuk-Ulam theorem. \subsection{Chromatic number of graphs} \label{s-chromatic} A \emph{coloring} of a graph by $k$ \emph{colors} is a map from its vertex set into $[k]$; a coloring is \emph{proper} if any adjacent vertices have different colors. The \emph{chromatic number} of a graph is the smallest integer $k$ such that a proper coloring by $k$ colors exists. Graph colorings arise in applications such as frequency assignment~\cite{Aardal2007} or scheduling~\cite{de1997combinatorics}.
Proving good lower bounds on chromatic numbers is usually a difficult task, as one needs to show that \emph{all} colorings with fewer colors are improper. (In contrast, proving an upper bound on the chromatic number of a given graph only requires exhibiting \emph{one} proper coloring.) In some cases, such as the perfect graphs discussed in Section~\ref{s:kernels}, sharp lower bounds can be obtained from the existence of large cliques. Kneser graphs are archetypes where this clique criterion fails dramatically. The {\em Kneser graph} $\KG(n,k)$, where $n$ and $k$ are two integers, is the graph with vertex set $\binom{[n]}{k}$ -- the $k$-element subsets of $[n]$ -- and where two subsets form an edge if they are disjoint. When $n \ge 2k-1$, a natural coloring of $\KG(n,k)$ assigns to every $k$-element subset that intersects $[n-2k+1]$ its minimal element, and $n-2k+2$ to all remaining subsets (see Figure~\ref{fig:kg52}: $\KG(5,2)$ is also known as Petersen's graph). Kneser conjectured this to be optimal in 1955. \begin{figure}[ht] \centering \includegraphics[page=22]{figures-final} \caption{A proper coloring with three colors: 1-\emph{red}, 2-\emph{blue}, 3-\emph{yellow} on the Kneser graph $\KG(5,2)$.} \label{fig:kg52} \end{figure} \bigskip Lov\'asz approached the Kneser conjecture via a topological invariant. Given a graph $G=(V,E)$, consider the simplicial complex $N(G)$ that encodes its neighborhoods: $N(G)$ has the same vertex set as $G$, and a subset of the vertices forms a simplex whenever they have a common neighbor in $G$. The key invariant is the (homotopy) connectivity of $N(G)$: if $N(G)$ is $(k-1)$-connected, then $G$ is not $(k+1)$-colorable. In the case of Kneser graphs, this yields a lower bound that proves the conjecture. The interested reader may refer to the book of Matou\v{s}ek~\cite[Section~3.3]{matousek2003using} and the surveys of Bj\"orner~\cite[Section~6]{Bjorner-survey} and B\'ar\'any~\cite[Section~5]{barany1993geometric} for details.
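The natural coloring of $\KG(5,2)$ described above, and the fact that two colors do not suffice, can be checked mechanically. A small sketch:

```python
from itertools import combinations, product

# KG(5,2) is the Petersen graph.  We verify that the natural coloring
# described above is proper and that no 2-coloring is, so its chromatic
# number is 3 = n - 2k + 2, as Kneser conjectured.

n, k = 5, 2
vertices = list(combinations(range(1, n + 1), k))
edges = [(u, v) for u, v in combinations(vertices, 2) if not set(u) & set(v)]

def natural_color(S):
    # subsets meeting [n-2k+1] get their minimum; the rest get n-2k+2
    meet = [i for i in S if i <= n - 2 * k + 1]
    return min(meet) if meet else n - 2 * k + 2

coloring = {v: natural_color(v) for v in vertices}
assert all(coloring[u] != coloring[v] for u, v in edges)   # proper, 3 colors

index = {v: i for i, v in enumerate(vertices)}
two_colorable = any(all(c[index[u]] != c[index[v]] for u, v in edges)
                    for c in product((0, 1), repeat=len(vertices)))
assert not two_colorable                                   # hence chi = 3
```

The coloring is proper because two disjoint $2$-subsets cannot share a minimum, and two $2$-subsets of $\{3,4,5\}$ always intersect.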
The idea of associating a simplicial complex to a graph and relating the chromatic number of the latter to topological properties of the former has been especially fruitful, and there are now various complexes that can be used to obtain lower bounds for the chromatic number of a graph; see \cite{Matousek:2002wl} and~\cite[Chapter~19]{kozlov2007combinatorial} for surveys on that approach. Recently, Frick used Tverberg-type results to prove bounds for chromatic numbers of (generalized) Kneser graphs and hypergraphs \cite{FrickNerves,FrickKneser}. \bigskip The octahedral Tucker lemma emerged from a purely combinatorial proof of the Kneser conjecture due to Matou\v{s}ek~\cite{Matousek:2004hm}. Ziegler~\cite{Z-GK} then showed that Matou\v{s}ek's method can combinatorialize other topological arguments for chromatic numbers. Let us illustrate this method on a bound due to Dol'nikov which deals with hypergraphs. Recall that a \emph{hypergraph} is a pair $\mathcal{H} = (V,E)$ where $V$ is a finite set (the vertices) and $E$ a set of subsets of $V$ (the edges); in particular, hypergraphs whose edges all have size two are graphs. Given a hypergraph $\mathcal{H} = (V,E)$ and a subset $S \subseteq V$ of its vertices, the hypergraph $\mathcal{H}[S]$ induced on $S$ has vertex set $S$ and edge set $\{e\in E \colon e\subseteq S\}$. A hypergraph $\mathcal{H}$ is \emph{$2$-colorable} if $V$ can be colored with two colors so that no edge is monochromatic. To any hypergraph $\mathcal{H}= (V,E)$ we associate the {\em generalized Kneser graph} $\KG(\mathcal{H}) = (V',E')$ where \[ V' = E \quad \hbox{and} \quad E' = \{ef \colon e,f \in E \mbox{ and } e \cap f = \varnothing\}.\] In particular, the generalized Kneser graph of the complete $k$-uniform hypergraph with vertex set $[n]$ is the usual Kneser graph $\KG(n,k)$.
The {\em colorability defect} $\cd(\mathcal{H})$ of a hypergraph $\mathcal{H}$ is the minimum number of vertices to be removed from $\mathcal{H}$ to ensure that the remaining vertices can be $2$-colored so that no edge of $\mathcal{H}$ is monochromatic (edges with at least one removed vertex are discarded). \begin{theorem}[Dol'nikov~\cite{dol1981transversals}]\label{t:Dol} For any hypergraph $\mathcal{H}$, the chromatic number of $\KG(\mathcal{H})$ is at least $\cd(\mathcal{H})$. \end{theorem} \noindent The combinatorial proof of Theorem~\ref{t:Dol} goes essentially as follows. Consider a hypergraph $\mathcal{H}$ with vertex set (identified with) $[n]$. Given a proper coloring $c$ of $\KG(\mathcal{H})$ by $[t]$, we define a map $\lambda:\{+,-,0\}^n\setminus\{\boldsymbol{0}\}\rightarrow\{\pm 1,\ldots,\pm m\}$ as follows. Consider a nonzero $\boldsymbol{x} \in \{+,-,0\}^n$. If at least one edge of $\mathcal{H}$ is entirely contained in $\boldsymbol{x}^+$ or in $\boldsymbol{x}^-$, then we choose such an edge with smallest color $a$ and we set \[ \lambda(\boldsymbol{x}) = \varepsilon(n-\cd(\mathcal{H})+a),\] where $\varepsilon \in \{-,+\}$ records which of $\boldsymbol{x}^+$ or $\boldsymbol{x}^-$ contains the edge. (The sign $\varepsilon$ is unambiguously defined because the coloring is proper.) If neither $\boldsymbol{x}^+$ nor $\boldsymbol{x}^-$ contains an edge of $\mathcal{H}$, then we define $\lambda(\ve x)$ to be $\varepsilon(|\ve x^+|+|\ve x^-|)$, where the sign $\varepsilon$ is the first nonzero entry of $\ve x$. In the latter case, the edges of $\mathcal{H}$ contained in $\boldsymbol{x}^+ \cup \boldsymbol{x}^-$ induce a subhypergraph of $\mathcal{H}$ that is $2$-colorable, so $|\boldsymbol{x}^+ \cup \boldsymbol{x}^-| \le n-\cd(\mathcal{H})$. It then follows that $m = n-\cd(\mathcal{H})+t$ suffices and that the two cases use disjoint sets of labels, which helps checking that $\lambda$ satisfies the condition of the octahedral Tucker lemma.
As a consequence, $n-\cd(\mathcal{H}) + t\ge n$ and the announced inequality follows. Since every graph is isomorphic to some (actually, many) generalized Kneser graph, Theorem~\ref{t:Dol} provides a lower bound on the chromatic number of any graph. In the case of Kneser graphs, this bound is sharp. There exist refinements of the colorability defect that yield better combinatorial bounds~\cite{alishahi2015chromatic}. Cases of equality for Theorem~\ref{t:Dol} are remarkable for reasons related to circular chromatic numbers~\cite{alishahi2016strengthening} (see~\cite{zhu2006recent} for an introduction to circular chromatic numbers). Deciding whether the chromatic number of $\KG(\mathcal{H})$ equals $\cd(\mathcal{H})$ is a natural problem asked in \cite{alishahi2016strengthening}. Very recently this has been proved to be NP-hard by Meunier and Mizrahi (personal communication). \bigskip Some generalizations of the Kneser conjecture are still open. A $k$-element subset $A$ of $[n]$ is \emph{$s$-stable} if for any distinct $i,j \in A$ we have $s \le |i-j|\le n-s$ (or, equivalently, if $i$ and $j$ are at distance at least $s$ in $\mathbb{Z}_n$). Let $\KG_{s\text{-stab}}(n,k)$ denote the graph with vertices the $s$-stable $k$-element subsets of $[n]$, and where two vertices span an edge if they are disjoint. The graph $\KG_{2\text{-stab}}(n,k)$, known as Schrijver's graph~\cite{schgraph}, has the same chromatic number as the Kneser graph; since $\KG_{2\text{-stab}}(n,k)$ is a subgraph of $\KG(n,k)$, this strengthens Lov\'asz's result. As a special case of a conjecture on hypergraphs, Meunier~\cite{meunier2011chromatic} proposed: \begin{conjecture} For any $s \ge 2$ and $n \ge ks \ge 1$, the chromatic number of $\KG_{s\text{-stab}}(n,k)$ is $n-s(k-1)$. \end{conjecture} \noindent Besides the case $s=2$, the conjecture is known for all even $s$~\cite{chen2015multichromatic} and for $s\ge 4$ and $n$ sufficiently large~\cite{jonsson2012chromatic}.
Some progress has been made by Chen \cite{chen2017}. See also \cite{FrickKneser} for questions related to Kneser hypergraphs. \medskip A more systematic viewpoint recasts graph colorings as special cases of graph homomorphisms~\cite{hahn1997graph}. A \emph{homomorphism} from a graph $G=(V,E)$ to a graph $H=(W,F)$ is a map $f:V \to W$ such that for every edge $vv' \in E$ the image $f(v)f(v')$ is an edge of $H$, that is $f(v)f(v') \in F$. A proper coloring of $G$ with $k$ colors corresponds to a homomorphism from $G$ to the complete graph with $k$ vertices. More generally, associating with every vertex of $G$ a $k$-element subset of $[n]$ so that adjacent vertices have disjoint subsets amounts to finding a homomorphism from $G$ to $\KG(n,k)$. The structure of homomorphisms of Kneser graphs remains to be elucidated, as is shown by the following, broadly open, conjecture. \begin{conjecture}[Stahl~\cite{stahl1976n}] Let $n,n',k,k'$ be integers and let $q$ and $r$ be such that $k' = qk-r$ with $0 \le r <k$. There is a homomorphism $\KG(n,k) \to \KG(n',k')$ if and only if $n' \ge qn - 2r$. \end{conjecture} \noindent For more details on partial progress on Stahl's conjecture, see~\cite[$\mathsection 3.4$]{hahn1997graph}. \subsection{Colorful independent sets}\label{subsubsec:hall} A subset of vertices of a graph is \emph{independent} if there is no edge between any pair of them. Independent sets are sometimes called \emph{stable sets}. This notion is central in graph theory. For instance, a proper coloring of a graph can be understood as a partition of its vertices into independent sets, namely the sets of vertices with the same color. The search for independent sets is also natural in applications such as the design of error-correcting codes~\cite[$\mathsection 29$]{matouvsek2010thirty}. \bigskip Methods from combinatorial topology were particularly effective in finding independent sets with color constraints, in the spirit of the colorful theorems in combinatorial convexity.
The following example was first stated explicitly by Aharoni et al.~\cite{ABZ07}, who traced it back to the proof of a result of Haxell~\cite[Theorem~3]{Ha95} on hypergraph matchings. \begin{theorem}\label{thm:haxell} Let $G$ be a colored graph with maximum degree $\Delta$. There exists an independent set of $G$ that intersects every color class of size at least $2\Delta$. \end{theorem} \noindent (Note that the coloring of $G$ is not required to be proper.) It suffices to prove the statement in the case where every color class has size at least $2\Delta$, because deleting every vertex in a color class of size less than $2\Delta$ preserves independence and does not increase the maximum degree. The gist of the proof of Aharoni et al.~\cite{ABZ07} is to apply Meshulam's lemma (Proposition~\ref{p:meshulamlemma}) to the \emph{independence complex} $\mathsf{K}$ of $G$, the simplicial complex consisting of its independent sets. The connectivity of $\mathsf{K}$ can be controlled via a variety of domination graph parameters; this principle, which underlies some proofs of Meshulam~\cite{meshulam2003domination}, was made explicit by Aharoni et al.~\cite[Theorem~2.3]{ABZ07} and given a detailed proof by Adamaszek and Barmak~\cite{adamaszek2011lower}. In particular, given a coloring of $V(G)$ by $[k]$, a subset $I \subseteq [k]$ and an integer $j$, if no $2j+3$ vertices of $G$ dominate the vertices with colors in $I$, then $\widetilde{H}_j\pth{\mathsf{K}[\lambda^{-1}(I)],\mathbb{Z}_2}$ is trivial. (Recall that $X$ \emph{dominates} $Y$ if every vertex of $Y$ has a neighbor in $X$.) Here, the condition holds for $j = |I|-2$ because dominating $2\Delta|I|$ vertices requires at least $2|I|$ vertices when the maximum degree is $\Delta$. By Meshulam's lemma (Proposition~\ref{p:meshulamlemma}), the independence complex therefore contains a colorful simplex; this is an independent set that intersects every color class.
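Theorem~\ref{thm:haxell} can be checked by brute force on small instances; the sketch below uses an illustrative graph of maximum degree $\Delta=1$ (a perfect matching) whose color classes have size $2\Delta=2$:

```python
from itertools import product

# Brute-force search for an independent transversal: an independent set
# picking one vertex from each color class.  The theorem guarantees one
# exists here since every class has size 2 * Delta.  Illustrative data.

edges = {(0, 1), (2, 3), (4, 5)}     # a perfect matching, so Delta = 1
classes = [(0, 2), (1, 4), (3, 5)]   # color classes of size 2

def independent(S):
    return all((u, v) not in edges and (v, u) not in edges
               for u in S for v in S if u != v)

transversals = [c for c in product(*classes) if independent(c)]
assert transversals                   # e.g. (0, 4, 3) is independent
```

The enumeration is exponential in the number of classes; the point of the topological proof is precisely that existence is guaranteed without any search.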
Theorem~\ref{thm:haxell} can be improved for graphs with more structure, as the following example shows~\cite{alishahi2017fair}. \begin{theorem}\label{thm:fair_rep} In any coloring of a path, it is possible to delete a vertex of each color so that the remaining vertices can be partitioned into two independent sets $A$ and $B$ such that $-1 \le |A \cap U| - |B \cap U| \le 1$ for every color class $U$. \end{theorem} \noindent (Again, the coloring does not need to be proper.) We sketch here a direct proof based on the octahedral Tucker lemma, in a way reminiscent of P\'alv\"olgyi's proof of the necklace theorem (Theorem~\ref{thm:necklace}). We identify the vertex set of the path with $[n]$ and denote the color classes by $U_1,\ldots,U_t$. The existence of the two disjoint independent sets will be ensured via the notion of alternating subsequences, which has been useful in other similar contexts. A sequence of elements of $\{+,-,0\}$ is {\em alternating} if all its terms are nonzero and any two consecutive terms are different. Given $\ve x=(x_1,\ldots,x_n)\in\{+,-,0\}^n$, we denote by $\alt(\ve x)$ the maximum length of an alternating subsequence of $x_1,\ldots,x_n$. The definition of the map $\lambda$ to which we will apply the octahedral Tucker lemma requires the quantity $s=\max\big\{\alt(\ve x)\colon \ve x\in\{+,-,0\}^n\;\,\mbox{s.t.}\;I(\ve x)=\varnothing\big\}$, where \[\begin{aligned} I(\ve x)=\big\{i\in[t]\colon & |\ve x^+\cap U_i|=|\ve x^-\cap U_i|=|U_i|/2\\ & \mbox{or}\quad \max(|\ve x^+\cap U_i|,|\ve x^-\cap U_i|)>|U_i|/2\big\}.\end{aligned}\] Note that $s\geq 0$. Consider a nonzero vector $\ve x\in\{+,-,0\}^n$. We distinguish two cases. In the case where $I(\ve x)\neq\varnothing$, we set $\lambda(\ve x)=\pm (s+i')$, where $i'$ is the maximum element of $I(\ve x)$ and where the sign is defined as follows. When $|\ve x^+\cap U_{i'}|=|\ve x^-\cap U_{i'}|=|U_{i'}|/2$, the sign is $+$ if $\min(\ve x^+\cap U_{i'})<\min(\ve x^-\cap U_{i'})$ and $-$ otherwise.
When $\max(|\ve x^+\cap U_{i'}|,|\ve x^-\cap U_{i'}|)>|U_{i'}|/2$, the sign is $+$ if $|\ve x^+\cap U_{i'}|>|U_{i'}|/2$, and $-$ otherwise. In the case where $I(\ve x)=\varnothing$, we set $\lambda(\ve x)=\pm\alt(\ve x)$, where the sign is the first nonzero element of $\ve x$. As in the proof of Theorem~\ref{thm:necklace}, it can be checked that the map $\lambda$ satisfies the condition of the octahedral Tucker lemma with $m=s+t$. We thus have $s+t\geq n$ and there exists $\ve z'\in\{+,-,0\}^n$ such that $I(\ve z')=\varnothing$ and $\alt(\ve z')\geq n-t$. This implies that there exists $\ve z\in\{+,-,0\}^n$ such that $I(\ve z)=\varnothing$ and $\alt(\ve z)=|\ve z^+|+|\ve z^-|=n-t$. Let $A=\ve z^+$ and $B=\ve z^-$. They are both independent sets. Since $I(\ve z)=\varnothing$, we have $|A\cap U_i|+|B\cap U_i|\leq |U_i|-1$ for all $i$. Since $|A|+|B|=n-t$, this forces $|A\cap U_i|+|B\cap U_i|=|U_i|-1$ for all $i$, and the conclusion follows. \bigskip Many statements about independent sets have analogues in terms of \emph{matchings}, where a matching in a graph is a set of disjoint edges. This is natural since the matchings of a graph $G$ are the independent sets of its \emph{line graph}, that is, the graph whose vertices are the edges of $G$, two of them being adjacent when they share an endpoint. Matchings are important both in theory and in applications (see \cite{lovasz+plummer2009matching} for an excellent book about matchings). For instance, many resource management problems follow the pattern of this example: given a set of tasks, a set of workers, and for each task the list of compatible workers, assign to each task a different worker or report that no such assignment exists. The worker/task compatibilities can be modeled by a bipartite graph, so the question is whether there exists a matching that covers the vertex class modeling the tasks.
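The task-assignment question just described can be decided by the classical augmenting-path method for bipartite matching. The sketch below is ours (the function names and the toy instance are not from the text); it assumes the compatibilities are given as a list of workers per task:

```python
def max_bipartite_matching(compat):
    """Augmenting-path (Kuhn's) algorithm for bipartite matching.
    compat[t] lists the workers compatible with task t.
    Returns a dict mapping each matched task to a worker."""
    worker_of = {}  # worker -> task currently assigned to that worker

    def try_assign(task, seen):
        for w in compat[task]:
            if w in seen:
                continue
            seen.add(w)
            # w is free, or the task currently using w can move elsewhere
            if w not in worker_of or try_assign(worker_of[w], seen):
                worker_of[w] = task
                return True
        return False

    for task in compat:
        try_assign(task, set())
    return {t: w for w, t in worker_of.items()}

# Toy instance: 3 tasks, 3 workers, limited compatibilities.
compat = {"t1": ["w1", "w2"], "t2": ["w1"], "t3": ["w2", "w3"]}
assignment = max_bipartite_matching(compat)
# Here all three tasks can be covered, e.g. t2 must take w1,
# which pushes t1 to w2 and leaves w3 for t3.
```

If the returned assignment covers every task, the desired matching exists; otherwise no valid assignment exists, by the correctness of augmenting paths.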
Colorful matchings still raise many questions; consider for instance the following well-known conjecture due to Brualdi~\cite{B91} and Stein~\cite{S75}. \begin{conjecture} If the edge set of $K_{n,n}$ (the complete bipartite graph with $n$ vertices on each side) is partitioned into sets $E_1,\ldots,E_n$ of size $n$ each, then there exists a matching in $K_{n,n}$ consisting of one edge from all but possibly one $E_i$. \end{conjecture} \noindent A famous conjecture of Ryser about Latin squares~\cite{R67} asserts that if $n$ is odd, then under the same condition as the Brualdi-Stein conjecture, there exists a perfect matching intersecting each $E_i$ exactly once. \subsection{Kernels in graphs} \label{s:kernels} A \emph{kernel} in a directed graph is a subset $K$ of the vertices that is \emph{independent} (no two vertices of $K$ are joined by an arc) and \emph{absorbing} (every vertex $v \notin K$ has an outgoing arc $v \to u$ to a vertex $u \in K$). Kernels naturally arise in certain combinatorial games, where they model the set of winning positions~\cite{VoNMo44}, or in stable matchings, as the stable matchings of a graph with preferences are the kernels of the associated line graph~\cite[$\mathsection 33$]{TheBook}. Kernels have proved effective in revisiting classical questions and are for instance at the heart of Galvin's proof of Dinitz's conjecture on list colorings~\cite[$\mathsection 33$]{TheBook}. Not every directed graph has a kernel (consider a directed cycle of length three); this is in sharp contrast with the non-directed case, where the independent absorbing sets are the inclusion-maximal independent sets. As shown by a series of works by Richardson, Duchet, Meyniel, Galeana-S\'anchez and Neumann-Lara~\cite{Duc80,DuMe83,GaNe84,Ric53}, a sufficient condition for the existence of a kernel is that each odd directed cycle has two chords whose heads are two consecutive vertices of the cycle.
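On small digraphs, the two defining properties of a kernel can be checked by exhaustive search. The following sketch (the instance and names are ours) illustrates both the definition and the fact, mentioned above, that a directed triangle has no kernel:

```python
from itertools import combinations

def find_kernel(vertices, arcs):
    """Brute-force search for a kernel in a small digraph.
    arcs: set of ordered pairs (u, v), meaning an arc u -> v."""
    out = {v: {w for (u, w) in arcs if u == v} for v in vertices}
    for r in range(len(vertices) + 1):
        for cand in combinations(sorted(vertices), r):
            K = set(cand)
            # independent: no arc between two vertices of K
            independent = all(not (out[u] & K) for u in K)
            # absorbing: every vertex outside K has an arc into K
            absorbing = all(out[v] & K for v in vertices - K)
            if independent and absorbing:
                return K
    return None

# A directed 3-cycle has no kernel, while the directed path
# 1 -> 2 -> 3 has the kernel {1, 3}.
no_kernel = find_kernel({1, 2, 3}, {(1, 2), (2, 3), (3, 1)})
path_kernel = find_kernel({1, 2, 3}, {(1, 2), (2, 3)})
```

The exponential enumeration is of course unavoidable in general, in line with the NP-completeness result cited below.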
In particular, any acyclic directed graph has a kernel; this situation is actually what motivated, in the context of combinatorial games~\cite{VoNMo44}, the introduction of kernels. In general, however, deciding if a directed graph has a kernel is NP-complete~\cite{Chv73}. \bigskip Sperner's lemma comes up in the following relation between kernels and perfect graphs. A graph is \emph{perfect} if for all its induced subgraphs, including itself, the chromatic number is equal to the clique number. In other words, these are precisely the graphs for which the trivial lower bound on the chromatic number, the clique number, is sharp for the graph itself and for all its induced subgraphs. The relation between kernels and perfect graphs is a special case of a conjecture of Berge and Duchet~\cite{BeDu83} proved by Boros and Gurvich~\cite{BoGu96}. \begin{theorem}\label{thm:BG} Any orientation of a perfect graph with no directed cycle of length three has a kernel. \end{theorem} \noindent The original proof translates any directed graph into a coalitional game, where the players are the cliques, and the outcomes are the stable sets. Under the theorem's assumptions, the game has stability properties that ensure, via results from coalitional game theory, the existence of a ``non-rejecting'' outcome, which translates into the desired kernel. A simpler and much more direct approach based on Scarf's lemma was proposed by Aharoni and Holzman~\cite{AhHo98} and further simplified by Kir\'aly and Pap~\cite{KiPa09} using Sperner's lemma. The proof of Kir\'aly and Pap goes as follows. Consider an orientation $D=(V,A)$ of a perfect graph with no directed cycle of length $3$. Let $P \subseteq \mathbb{R}^V$ denote the polyhedron of (possibly negative) vertex weights summing to at most $1$ on every clique. This polyhedron has at least one extreme point (assigning $1$ to any maximal independent set of $D$ and $0$ to the remaining vertices does the job), so it is pointed.
Moreover, it has exactly $n$ independent extreme directions (the $-\ve e_v$ where $\ve e_v$ is the unit vector associated to vertex $v$). Every facet of $P$ corresponds to a clique on which the weights sum to exactly~$1$. Since there is no directed cycle of length three, every clique has a {\em source}, i.e., a vertex that is absorbed by all other vertices of the clique. Label each facet by the source of the corresponding clique. Note that a facet containing $-\ve e_v$ is not labeled by $v$. By Corollary~\ref{cor:KP}, the polyhedron has an extreme point $\ve\omega$ that is incident to facets of each label. Now, consider the weights on $V$ defined by $\ve\omega$. If we could find an independent set $K$ of $D$ intersecting every clique of weight~$1$, this set $K$ would also be absorbing: indeed, every vertex $v$ labels a facet incident to $\ve\omega$, so it is the source of a clique of weight one, and this clique intersects $K$. The existence of $K$ follows from a classical lemma of Lov\'asz: in a perfect graph, there exists an independent set intersecting every clique of maximum weight. (This commonly used lemma is perhaps difficult to find spelled out in this form; a standard way to prove it is to use perfectness and coloring for cliques of maximum cardinality, then apply the vertex replication lemma of perfect graphs~\cite{lovasz1972} to allow rational weights, and finally generalize to real weights using linear programming.) \bigskip Theorem~\ref{thm:BG} ensures the existence of a kernel, but the proof does not give any efficient method to compute one. \begin{oproblem} What is the complexity of computing a kernel in an orientation of a perfect graph with no directed cycle of length three? \end{oproblem} \section{Introduction} \label{s:intro} This article surveys the theory and applications of five elementary theorems.
Two of them, due to Sperner and Tucker, are from combinatorial topology and are well-known for being the discrete analogues of Brouwer's fixed point theorem and the Borsuk-Ulam theorem. The other three, due to Carath\'eodory, Helly, and Tverberg, are the pillars of combinatorial convexity. These theorems are between fifty and one hundred years old, which is not very old as far as mathematics goes, but they have already produced a closely-knit family of results in combinatorial geometry and topology. They have also found spectacular applications in, among other places, mathematical optimization, equilibrium theorems for games, graph theory, fair-division problems, the theory of geometric algorithms, and data analysis. The first goal of this paper is to introduce some of the many reformulations and variations of our five theorems and explore how these results fit together. It is convenient to split this presentation into two parts. Sections~\ref{s:topology} and~\ref{s:convexgeo} discuss Sperner-Tucker and Carath\'eodory-Helly-Tverberg, respectively. At a coarse level, the former deals with combinatorial topology and the latter deals with combinatorial geometry. In each case, we include a special section on algorithmic aspects of these results relevant later for applications. The second goal of this survey is to sample some of the many applications of our five theorems. There, we proceed by broad areas and examine, in Sections~\ref{s-games+fairness+independence} to~\ref{s:datapoints}, examples from game theory and fair division, from graph theory, from optimization, and from geometric data analysis. Some of our illustrations are classical (e.g., Nash equilibria, von Neumann's min-max theorem, linear programming), others are more specialized (e.g., Dol'nikov's colorability defect or the polynomial partitioning technique). We aim to show that our five theorems provide simple proofs of them all.
This led us to present some new proofs, for instance for \emph{Meshulam's lemma} (Section~\ref{s:topology}) or for the \emph{ham sandwich theorem} (Section~\ref{s:datapoints}). The research topics that we discuss are vibrant and have already prompted a number of prior surveys~\cite{amentaetal2017helly,50tverbergsurvey,barany+soberonsurvey,Blagojevic+Zieglersurvey,DGKsurvey63,Eckhoff:1993survey,kalai1995survey,holmsen+wenger,3nziegler}, but those surveys each focused on a single one of the five theorems or did not cover applications. Many of the important developments that we present here are from the past few years, and we emphasize both a global view and the value of geometric and topological ideas for modern applied and computational mathematics. This research area abounds with open questions, all the more enticing because they can often be stated without much technical apparatus. We made a particular effort to stress some of them. \subsection{The five theorems at a glance} Let us start with a classical rendition of \emph{Brouwer's fixed point theorem}. If you stand in your favorite Parisian boulangerie holding a map of the city in your hands, then crumple and squeeze it (without ripping it apart, \makebox[\linewidth][s]{mind you) and throw it to the ground, some point on the map must have landed right on top of its precise} \smallskip\noindent \begin{minipage}{4.1cm} \begin{center} \includegraphics[page=1,width=4 cm]{figures-final} \end{center} \end{minipage} \hfill \begin{minipage}{12cm} location. Brouwer's theorem follows from the classical \emph{Sperner lemma} on the labeling of triangulations (see Figure~\ref{figsperner}). Surprisingly, all that is needed to prove Sperner's lemma is to understand why a house with an odd number of openings (doors and windows) must have a room with an odd number of openings. This simplicity and its amazing applications attracted the attention of popular newspapers~\cite{NYT-sperner} and video sites~\cite{PBS-sperner}.
Sperner's lemma is one of our five theorems, and we present it in detail in Section \ref{sec:sperner+tucker}. \end{minipage} \begin{figure}[hptb] \begin{center} \includegraphics[page=2]{figures-final} \end{center} \caption{Sperner's lemma in the plane. Start with a triangle with vertices colored red, green, blue (left). Subdivide it into smaller triangles that pairwise meet only along a common edge or at a common vertex (center). Color every new vertex on an edge of the original triangle like either of the two endpoints of that original edge, and color the remaining vertices arbitrarily (right). At least one of the smaller triangles has vertices with pairwise distinct colors (cf. the shaded triangles on the right). \label{figsperner}} \end{figure} \noindent \begin{minipage}{12cm} ~In the game of Hex, two players take turns coloring, black and white, the hexagonal cells of an $11 \times 11$ diamond-shape board (see picture on the right); the opposite sides of the board have matching colors, and the player who manages to connect the two sides of his/her color wins (here, black wins). Since its invention by Hein in 1942, there has never been a draw in Hex. The fact that there is always a winner happens to have a geometric explanation: for any triangulation of the projective plane and any two-coloring of its vertices, one of the color classes spans a \makebox[\linewidth][s]{non-contractible cycle~\cite{Hex-Tucker}. (To see that this implies the impossibility of a draw} \end{minipage} \hfill \begin{minipage}{4cm} \begin{center} \includegraphics[page=3]{figures-final} \end{center} \end{minipage} \smallskip\noindent in Hex, take the dual of the hexagonal cell decomposition to obtain a triangulation of the diamond, then carefully identify the boundaries to turn that diamond into a projective plane.) This geometric property is equivalent to the two-dimensional case of Tucker's lemma, whose statement is given in the caption of Figure~\ref{figtucker}.
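For tiny instances, the two-dimensional Tucker lemma can even be verified exhaustively. The following sketch uses a five-vertex triangulation of the disk of our own choosing (four boundary vertices in antipodal pairs, plus a center vertex) and checks every antipodally symmetric labeling:

```python
from itertools import product

# Triangulation of the disk: boundary vertices b1..b4, where b3 and b4
# are antipodal to b1 and b2, plus a center vertex c.  (Toy instance.)
edges = [("c", "b1"), ("c", "b2"), ("c", "b3"), ("c", "b4"),
         ("b1", "b2"), ("b2", "b3"), ("b3", "b4"), ("b4", "b1")]

def has_complementary_edge(lab):
    # An edge whose endpoints carry opposite labels i and -i.
    return any(lab[u] == -lab[v] for u, v in edges)

# Tucker's lemma predicts: for every labeling by {-2,-1,1,2} that gives
# antipodal boundary vertices opposite labels, some edge is complementary.
labels_ok = True
for l1, l2, lc in product([-2, -1, 1, 2], repeat=3):
    lab = {"b1": l1, "b3": -l1, "b2": l2, "b4": -l2, "c": lc}
    if not has_complementary_edge(lab):
        labels_ok = False
```

All $4^3$ admissible labelings of this particular triangulation indeed contain a complementary edge, in accordance with the lemma.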
Tucker's lemma is discussed in Section \ref{sec:sperner+tucker}; see in particular the detailed discussion following Proposition~\ref{tuckerlemma}. As a matter of fact, Gale~\cite{gale+hex} proved that the game of Hex cannot end in a draw using Brouwer's fixed point theorem, and Nash~\cite{nashhex} proved that for boards of arbitrary size, the first player has a winning strategy. Another application of Tucker's lemma is the \emph{ham sandwich theorem}, which says that any three finite measures in $\mathbb{R}^3$ (such as a piece of bread, a slice of cheese, and a slice of ham for an open-faced sandwich) can be simultaneously bisected by a plane. \begin{figure}[hptb] \begin{center} \includegraphics[page=4]{figures-final} \end{center} \caption{Tucker's lemma in the plane. Start with a symmetric subdivision of the circle (left), and extend it into a triangulation of the disk (center). Label every vertex of the triangulation with a label from $\{-2,-1,1,2\}$ so that antipodal points on the circle get opposite labels (right). There must exist an edge whose endpoints have opposite labels. \label{figtucker}} \end{figure} \noindent \begin{minipage}{4cm} \begin{center} \includegraphics[page=5]{figures-final} \end{center} \end{minipage} \hfill \begin{minipage}{12cm} Let us now consider finite point sets in the plane. It turns out that any seven points can be partitioned into three parts so that the triangles, segments, and points that they form have a point in common; for example, the seven points on the left admit $\{1,7\}$, $\{2,4,6\}$, and $\{3,5\}$ as such a partition. This is the simplest case of \emph{Tverberg's theorem}. Tverberg's theorem will be discussed at length in Section \ref{tv-section}. As the number of points grows, so does the number of possible parts in which we can partition the points while ensuring that all the convex hulls intersect: $10$ points allow four parts, $13$ points allow five parts, \ldots, and in general $3r-2$ points allow $r$ parts.
A similar phenomenon holds in arbitrary dimension: any set of $(r-1)(d+1)+1$ points \makebox[\linewidth][s]{can be partitioned into $r$ parts whose convex hulls intersect. Coming back to our} \end{minipage} \vspace{0.1cm}\noindent (uncrumpled) map of Paris, consider the $302$ points that represent the subway stations. By Tverberg's theorem, they can be partitioned into $101$ parts, so that the corresponding $101$ triangles and segments all intersect in a common point $c$. Observe that any line passing through $c$ must leave at least $101$ subway stations on either of its (closed) sides. The \emph{median} of a list of real numbers separates the list into two halves of equal size. The properties of point $c$ make it an acceptable two-dimensional generalization of the median, but now for the set of subway stations. More generally, the \emph{centerpoint theorem}, which follows from Tverberg's theorem, asserts that for any finite measure $\mu$ in the plane there is a point $c_\mu$ such that $\mu(H) \ge \frac13 \mu(\mathbb{R}^2)$ for every halfplane $H$ that contains $c_\mu$. As we will show in Section \ref{depthsec}, centerpoints are very important objects in applications and have influenced geometry too: e.g., Tverberg himself was motivated to prove his famous theorem (which we discuss at length later) by the intention of finding an elegant proof of the centerpoint theorem. See the end of his classic paper \cite{Tverberg:1966tb}. \bigskip \noindent \begin{minipage}{12cm} \quad In the (fully supervised) \emph{classification} problem in machine learning, one is given a data set (e.g., images), each with a tag (e.g., indicating whether the image depicts a cat or a car), and one is presented with new data to be tagged.
A natural approach is to map the set of data to a set of points in some geometric space and look for a simple separation in the space (e.g., a line or a circle in the plane) that separates the points with different tags; for instance, perceptron neural networks -- some of the basic \makebox[\linewidth][s]{classifiers -- look for a hyperplane best separating the tagged point sets.} \end{minipage} \hfill \begin{minipage}{4cm} \begin{center} \includegraphics[page=6]{figures-final} \end{center} \end{minipage} \vspace{0.1cm}\noindent It is easy to conclude that a separator exists by producing an explicit hyperplane. \emph{Kirchberger's theorem} states that it is also easy to certify when \emph{no} line separator exists. In the plane, when no line separator exists, \makebox[\linewidth][s]{then there must exist a point of one of the colors, say blue, contained in a triangle of the opposite color, red, or} \vspace{0.1cm}\noindent \begin{minipage}{4cm} \begin{center} \includegraphics[page=7]{figures-final} \end{center} \end{minipage} \hfill \begin{minipage}{12cm} a red segment intersecting a blue segment. The set of all lines that define a halfplane containing a given point of $\mathbb{R}^2$ defines a convex cone in $\mathbb{R}^3$, so Kirchberger's theorem essentially reduces to an intersection property: if a finite family of convex sets in $\mathbb{R}^d$ has empty intersection, then some $d+1$ of these sets already have empty intersection. This property is also known as \emph{Helly's theorem} and is one of our main theorems. The curious reader may check that the centerpoint theorem, discussed above, also follows easily from Helly's theorem. Helly's theorem will be carefully studied in Section \ref{s:Helly}. \end{minipage} \bigskip \noindent \begin{minipage}{12cm} \quad Let us finally turn our attention to the geometry underlying the popular \emph{magic squares} from ancient China.
A \emph{magic square} is an $n \times n$ square grid of non-negative real numbers such that the entries along each row, each column, and each of the two diagonals all add up to the same value. Look at the four $3 \times 3$ examples on the right. It turns out that any $3 \times 3$ magic square can be written as a linear combination, with non-negative coefficients, of only three of these four magic squares! In fact, for any $n$ there exists a finite set $X_n$ of $n\times n$ magic squares \parbox[s]{\linewidth}{such that any other $n \times n$ magic square can be written using only $(n-1)^2-1$} \end{minipage} \hfill \begin{minipage}{5cm} \begin{center} \begin{tabular}{|c|c|c|} \hline 0&2&1\\ \hline 2&1&0\\ \hline 1&0&2\\ \hline \end{tabular} \quad \begin{tabular}{|c|c|c|} \hline 2&0&1\\ \hline 0&1&2\\ \hline 1&2&0\\ \hline \end{tabular} \\ \medskip \begin{tabular}{|c|c|c|} \hline 1&2&0\\ \hline 0&1&2\\ \hline 2&0&1\\ \hline \end{tabular} \quad \begin{tabular}{|c|c|c|} \hline 1&0&2\\ \hline 2&1&0\\ \hline 0&2&1\\ \hline \end{tabular} \end{center} \end{minipage} \vspace{0.1cm}\noindent elements of $X_n$. This last statement follows from \emph{Carath\'eodory's theorem}, which we will study carefully in Section \ref{caratheodory-thms}: any vector in a cone in $\mathbb{R}^d$ is a non-negative linear combination of extreme rays of the cone, and at most dimension-many of them are needed in the linear combination. Indeed, the set of $n \times n$ magic squares forms a polyhedral cone in a vector space of dimension $(n-1)^2-1$. It may come as a surprise that no one knows what $X_n$ is for all $n \ge 6$. See \cite{Maya+Yo+Raymond}.
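In the plane, the convex-hull version of Carath\'eodory's theorem can be illustrated by brute force: a point in the convex hull of finitely many points already lies in a triangle spanned by three of them. A sketch, with a point set of our own choosing:

```python
from itertools import combinations

def in_triangle(p, a, b, c):
    """Test whether p lies in the (closed) triangle abc via cross products."""
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    # p is inside iff the three signed areas have a consistent sign
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or \
           (d1 <= 0 and d2 <= 0 and d3 <= 0)

def caratheodory_triple(p, points):
    """Return three of the points whose triangle contains p, if any."""
    for a, b, c in combinations(points, 3):
        if in_triangle(p, a, b, c):
            return a, b, c
    return None

points = [(0, 0), (4, 0), (0, 4), (4, 4), (2, 1)]
triple = caratheodory_triple((1, 1), points)  # some containing triangle
```

Carath\'eodory's theorem guarantees that, whenever the query point lies in the convex hull of the set, this search succeeds.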
\smallskip\noindent \begin{minipage}{4cm} \begin{center} \includegraphics[page=8]{figures-final} \end{center} \end{minipage} \hfill \begin{minipage}{12cm} \quad A colorful generalization of Carath\'eodory's theorem asserts that if three polygons in the plane, one with red vertices, one with green vertices, and one with blue vertices, all contain a given point $p \in \mathbb{R}^2$, then there exists a colorful triangle, using a vertex of each color, that also contains $p$. This implies that for the centerpoint $c$ that we constructed earlier for the Parisian subway stations from Tverberg's theorem, at least $\binom{101}{3}$ of the triangles spanned by the subway stations contain $c$. In fact, \makebox[\linewidth][s]{there is a quantitatively stronger statement given by the \emph{first selection}} \end{minipage} \smallskip \noindent \emph{lemma}. It states that for any set of $n$ points in the plane, there exists a point covered by at least $\frac29 \binom{n}3$ of the triangles they span. We will see more about this topic in Section \ref{depthsec}. \subsection{Notation and preliminaries} We collect in this subsection notation, terminology, and basic background on combinatorics, geometry, and topology that will be used in the rest of this survey. The advanced reader may want to skip or move quickly through this section. For a more thorough introduction to the topics listed here, we recommend the classical books and textbooks in combinatorial convexity~\cite{Bar2002,handbookofconvexgeo,gruber2007convex,Mbook} as well as~\cite[$\mathsection 5.3$]{Sch03}. For topological combinatorics and combinatorial aspects of algebraic topology see~\cite{Longueville-book,matousek2003using,munkres1984elements}. Given $n \in \mathbb{N}$, we write $[n]$ to denote the set $\{1,2,\ldots, n\}$. If $X$ is a set and $k \in \mathbb{N}$, we write $\binom{X}{k}$ for the set of $k$-element subsets of $X$.
The notation $\widetilde{O}(\cdot)$ denotes asymptotic notation where we ignore poly-logarithmic factors: $f(n) = \widetilde{O}(g(n))$ if there exists $k \in \mathbb{N}$ such that $f(n) = O(g(n) \log^k g(n))$. We denote by $(\ve e_1, \ve e_2, \ldots, \ve e_d)$ the standard orthonormal basis of $\mathbb{R}^d$. Given two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ in $\mathbb{R}^d$, we write $\boldsymbol{x} \le \boldsymbol{y}$ to mean that $x_i \le y_i$ for $i =1, 2, \ldots, d$. We write $\mathbb{B}^d = \{ \ve x \in \mathbb{R}^d \colon \sum_{i=1}^d x_i^2 \leq 1 \}$ for the unit ball in $\mathbb{R}^d$ and $\mathbb{S}^{d} = \{ \ve x \in \mathbb{R}^{d+1} \colon \sum_{i=1}^{d+1} x_i^2 = 1 \}$ for the unit sphere in $\mathbb{R}^{d+1}$. \subsubsection{Polytopes, simplices, polyhedra, cones} Let $A \subseteq \mathbb{R}^d$ be a set. The {\em convex hull} of $A$, denoted by $\conv(A)$, is the intersection of all convex sets containing $A$. In other words, $\conv(A)$ is the smallest convex set containing $A$. It is well-known that \[ \conv(A)=\left\{\sum_{i=1}^{n} \gamma_i \ve a_i\colon n\in \mathbb{N},\; \ve a_i \in A,\; \gamma_i \geq 0, \ \text{and} \ \gamma_1+\cdots+\gamma_n=1\right\}.\] A {\em polytope} is the convex hull of a finite set of points in $\mathbb{R}^d$. Here are a few examples. The convex hull of affinely independent points is a \emph{simplex}; the {\em standard} $k$-dimensional simplex $\Delta_{k}$ is $\conv(\{\ve e_1,\dots,\ve e_{k+1}\})$, where $\ve e_i$ is the $i$-th standard unit vector. The convex hull of $\ve e_1,-\ve e_1,\ve e_2,-\ve e_2,$ $\dots,\ve e_k,-\ve e_k$ is the $k$-dimensional \emph{cross-polytope}. The convex hull of all vectors with $0,1$ entries is the $d$-dimensional \emph{hypercube}. A \emph{face} of a polytope is its intersection with a hyperplane that avoids its relative interior. Faces of dimension $0$ are \emph{vertices} and inclusion-maximal faces are \emph{facets}. A face of a polytope (resp. simplex) is also a polytope (resp. simplex).
There is a face of dimension $-1$, the empty set. A {\em polyhedron} is the intersection of finitely many halfspaces in $\mathbb{R}^d$. In particular, any polyhedron can be represented as $\{\boldsymbol{x} \in \mathbb{R}^d\colon A \boldsymbol{x} \le \ve b\}$ where $A$ is an $n \times d$ matrix and $\ve b \in \mathbb{R}^n$. A \emph{polyhedral cone} is a polyhedron that is closed under addition and scaling by a positive constant. In particular, any polyhedral cone can be written as $\{\boldsymbol{x} \in \mathbb{R}^d: A \boldsymbol{x} \ge \ve 0\}$. The \emph{polar} of a point $\ve x \in \mathbb{R}^d\setminus\{\ve 0\}$ is the halfspace $\{\ve y \in \mathbb{R}^d\colon \ve x \cdot \ve y \le 1\}$ (and vice-versa) and the \emph{dual} of $\ve x$ is the hyperplane $\{\ve y \in {\mathbb R}^d\colon \ve x \cdot \ve y = 1\}$ bounding its polar; we speak of polarity or duality \emph{relative to $\ve o$} to represent the polarity or duality after the coordinate system has been translated to have its origin at $\ve o$. The \emph{Weyl-Minkowski theorem} asserts that the polytopes are exactly the bounded polyhedra. (This can be proven using polarity arguments.) Polytopes can thus be represented either as convex hulls of finitely many points or as intersections of finitely many halfspaces. To see that these viewpoints are complementary, we invite the reader to prove that projections or intersections of polytopes are polytopes. The Weyl-Minkowski theorem similarly implies that any polyhedral cone can also be written as the set of non-negative linear combinations of a finite set of vectors, that is, as generated by finitely many rays emanating from the origin. By a theorem of Motzkin et al.~\cite{motzkin-double}, every polyhedron $P$ decomposes into the \emph{Minkowski sum} of a polytope $Q$ and a polyhedral cone $C$: $P = \{\boldsymbol{x} + \boldsymbol{y}: \boldsymbol{x} \in Q, \boldsymbol{y} \in C\}$. A polyhedron is \emph{pointed} if the largest affine subspace it contains is zero-dimensio\-nal.
Any polyhedron can be decomposed into the Minkowski sum of a pointed polyhedron and a vector space. \subsubsection{Simplicial complexes} A \emph{geometric simplicial complex} is a family of simplices with two properties: the intersection of any two distinct simplices is a face of both of them; and it contains all the faces of every member of the family. Simplicial complexes may also be considered abstractly by retaining only which sets of vertices span a simplex. Formally, an \emph{abstract simplicial complex} $\mathsf{K}$ is a family of finite subsets (the \emph{faces}) of some ground set (the \emph{vertices}) that is closed under taking subsets: if $\sigma \in\mathsf{K}$ and $\tau \subseteq \sigma$, then $\tau \in\mathsf{K}$. We write $\sigma_n$ for the \emph{abstract $n$-dimensional simplex} consisting of all subsets of $[n+1]$. A \emph{facet} of a simplicial complex is an inclusion-maximal face. Let us stress that the meaning of the word \emph{face} (or \emph{facet}) depends on the context and can denote a polytope (for polytopes), a simplex (for simplices), or a set of vertices (for abstract simplicial complexes). In particular, we consider $\Delta_n$ to be a polytope, so it has $n+1$ faces of maximal dimension $n-1$; in contrast, $\sigma_n$ has a single face of maximal dimension $n$. Given a geometric simplicial complex $\mathsf{K}$, we let $|\mathsf{K}|$ denote the underlying topological space, that is, $|\mathsf{K}| = \bigcup_{\sigma \in \mathsf{K}} \sigma$. If $\mathsf{L}$ is the abstract simplicial complex obtained from $\mathsf{K}$, we say that $\mathsf{K}$ is a \emph{geometric realization} of $\mathsf{L}$ and put $|\mathsf{L}| = |\mathsf{K}|$. (The reader can check that all this is well-defined up to homeomorphism.) A \emph{triangulation} of a topological space $X$ is a (geometric or abstract) simplicial complex whose underlying topological space is homeomorphic to $X$.
\subsubsection{Homology} We will use some basic notions of homology, mostly simplicial homology over $\mathbb{Z}$ or $\mathbb{Z}_q = \mathbb{Z}/q\mathbb{Z}$. To allow readers unacquainted with homology to appreciate at least our simplest examples, we recall here the basic definitions. An important idea in homology theory is that topological spaces can be studied by associating to them some groups, called homology groups. These groups can be defined geometrically (in singular homology) or combinatorially (in simplicial homology) from a triangulation of the space. In the cases that we consider, these approaches produce isomorphic groups, and we mostly work with simplicial homology. Given a simplicial complex $\mathsf{K}$, we denote by $C_i(\mathsf{K},\mathbb{Z}_2)$ the set of finite formal sums of $i$-dimensional faces of~$\mathsf{K}$, and $C_{\bullet}(\mathsf{K},\mathbb{Z}_2) = \bigoplus_{i=0}^{\infty} C_i(\mathsf{K},\mathbb{Z}_2)$ is the \emph{chain complex} of $\mathsf{K}$. The map that sends every $i$-face of $\mathsf{K}$ to the formal sum, with coefficients in $\mathbb{Z}_2$, of its $(i-1)$-faces extends linearly, over $\mathbb{Z}_2$, to a map $\partial_i: C_i(\mathsf{K},\mathbb{Z}_2) \to C_{i-1}(\mathsf{K},\mathbb{Z}_2)$. Notice that $C_{\bullet}(\mathsf{K},\mathbb{Z}_2)$ has an additive group structure and that the sum of the $\partial_i$ is a morphism from $C_{\bullet}(\mathsf{K},\mathbb{Z}_2)$ into itself; this morphism is called the \emph{boundary map} of $C_{\bullet}(\mathsf{K},\mathbb{Z}_2)$. Note that with $\mathbb{Z}_2$-coefficients, a finite formal sum of faces is simply a set of faces, and the boundary operator sends a set of $i$-faces to the set of those $(i-1)$-faces that appear in an odd number of its elements. It turns out that $\partial_{i-1} \circ \partial_i = 0$, so we can define the \emph{$i$-th homology group} of $\mathsf{K}$ with $\mathbb{Z}_2$ coefficients as the quotient group $H_i(\mathsf{K},\mathbb{Z}_2) = \ker \partial_i / \im \partial_{i+1}$.
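Over $\mathbb{Z}_2$, these definitions reduce to linear algebra: the mod-$2$ Betti numbers (ranks of the homology groups) are differences of ranks of boundary matrices. A minimal sketch for a triangulated circle (the hollow triangle), an instance of our own choosing:

```python
def rank_gf2(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    basis = {}  # highest set bit -> reduced row
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h not in basis:
                basis[h] = r
                break
            r ^= basis[h]  # eliminate the leading bit
    return len(basis)

# Hollow triangle (a triangulated circle): vertices 0, 1, 2 and edges
# {0,1}, {0,2}, {1,2}; no 2-dimensional faces.
edges = [{0, 1}, {0, 2}, {1, 2}]
d1 = [sum(1 << v for v in e) for e in edges]  # boundary of each edge

n_vertices, n_edges = 3, len(edges)
rank_d1 = rank_gf2(d1)
betti0 = n_vertices - rank_d1      # dim ker d0 - rank d1, with d0 = 0
betti1 = (n_edges - rank_d1) - 0   # dim ker d1 - rank d2, with d2 = 0
# betti0 == 1 (one connected component), betti1 == 1 (one 1-dim hole)
```

The same two-line rank bookkeeping extends to any finite complex, one boundary matrix per dimension.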
Intuitively, the rank of $H_i(\mathsf{K},\mathbb{Z}_2)$ relates to the number of independent holes of dimension $i$ in $|\mathsf{K}|$; for example, the rank of $H_0(\mathsf{K},\mathbb{Z}_2)$ counts the number of connected components of $|\mathsf{K}|$. In particular, if $\mathsf{K}$ is a single vertex then all its homology groups are trivial except the $0$-th one. The \emph{reduced homology groups} $\widetilde{H}_i$ slightly modify $H_0$ so that it is trivial for connected sets; in dimension $i \ge 1$, homology groups and reduced homology groups coincide, and $\widetilde{H}_{-1}$ is defined as $0$ for nonempty complexes. Going from $\mathbb{Z}_2$ to other coefficient groups involves only one technical complication: the definition of $\partial_i$ involves some sign book-keeping, so as to ensure that $\partial_{i-1} \circ \partial_i = 0$. See~\cite{munkres1984elements} for details. \section{Optimization} \label{s:optimization} Broadly speaking, mathematical optimization develops mathematical tools for solving optimization problems. We illustrate in this section how the theorems of Carath\'eodory, Sperner, and Helly and their variations provide original viewpoints on different aspects of this field. \subsection{Linear programming} A \emph{linear program} (LP) asks for the minimum of a linear function under a set of linear constraints and is usually written \begin{equation}\label{eq:lp} \begin{aligned} \min \quad & \ve c^T \ve x \\ \text{s.t.}\quad & A \ve x = \ve b\\ & \ve x \geq \ve 0. \end{aligned} \end{equation} Here, $A$ is an $m \times n$ matrix, $\ve x$ a vector of $n$ indeterminates, $\ve b$ and $\ve c$ vectors in $\mathbb{R}^m$ and~$\mathbb{R}^n$, respectively, and $\boldsymbol{x} \geq \ve 0$ means that each entry of $\ve x$ is non-negative.
Linear programs may come in different presentations, with $\max$ in place of $\min$ or possibly inequalities in place of equalities; these presentations are essentially equivalent~\cite[$\mathsection 4$]{matousek2007understanding}. Linear programming is by now a central tool in operations research as it makes it possible to model a variety of resource management problems~\cite[$\mathsection 2$]{matousek2007understanding} and can be solved fairly effectively in practice. The theory of linear programming builds on the study of systems of linear inequalities. While this seems to be just a small variation on linear algebra, linear programming was only systematized in the late 1940s. \subsubsection{The simplex algorithm}\label{s:simplex} Carath\'eodory's theorem underlies the \emph{simplex algorithm}, arguably the standard method to solve linear optimization problems. On the one hand, Carath\'eodory's theorem gives a way to discretize an a priori continuous problem. Indeed, the cone version of Carath\'eodory's theorem ensures that if the system $A\ve x = \ve b$ with $\ve x \ge \ve 0$ admits a solution, then it admits a solution with support of size at most the rank of $A$. Such a support, understood as a set of indices of columns of $A$, is a \emph{feasible basis}. A closer inspection of the proof of Proposition~\ref{p:cara++} reveals that the optimum of~\eqref{eq:lp}, if one exists, is attained on a solution supported by a feasible basis. Since a feasible basis determines a unique solution of $A\ve x = \ve b$, the optimum can be found in finite (but possibly long) time by enumerating feasible bases, which are combinatorially described by their support. On the other hand, Carath\'eodory's theorem, in the form of Proposition~\ref{p:cara++}, also explains the pivoting mechanics of the \emph{simplex algorithm}. Suppose there exists an optimum, and that we have a feasible basis~$B$ determining a solution $\ve x^*$. 
It turns out that if $\boldsymbol{x}^*$ is not optimal, there exists $i \notin B$ such that increasing $x^*_i$ improves (i.e., decreases) the objective $\ve c^T \ve x$. The set $B \cup \{i\}$ contains another feasible basis, and it cannot define a worse solution than $\boldsymbol{x}^*$ (again, a consequence of the proof of Proposition~\ref{p:cara++}). Switching to that new basis is a \emph{pivot step}. It is a non-trivial result of Bland that there exist rules for choosing non-cycling sequences of pivot steps; see~\cite[$\mathsection$11.3]{Sch86} and~\cite[$\mathsection$5.8]{matousek2007understanding}. Broadly speaking, the simplex algorithm starts by computing a feasible basis, and then it performs such pivot steps until no entry outside the basis can be used to improve the objective; the final basis then determines an optimal solution. \subsection{Integer programming} An \emph{integer program} (IP) adds integrality constraints to linear programs by restricting all of the variables to take their values over $\mathbb{Z}$ rather than $\mathbb{R}$, for instance \begin{equation}\label{eq:ip} \begin{aligned} \min \quad & \ve c^T \ve x \\ \text{s.t.}\quad & A \ve x = \ve b\\ & \ve x \geq \ve 0, \ve x \in \mathbb{Z}^n. \end{aligned} \end{equation} This variation arises naturally in the management of indivisible resources or yes/no decision making; the emblematic example is the \emph{knapsack problem}, which asks, given a set of objects with weights and values, for the subset of maximal value whose weight does not exceed a given threshold. The \emph{relaxation} of an integer linear program is the linear program obtained by forgetting the integrality conditions, as is~\eqref{eq:lp} for~\eqref{eq:ip}. In general, the solution to the relaxed linear program provides a bound on the solution to the integer program, a lower bound in the case of~\eqref{eq:lp} and~\eqref{eq:ip}. 
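The relaxation bound is easy to observe on a toy knapsack instance (stated as a maximization, so the relaxation yields an upper bound): the LP relaxation of a knapsack is the fractional knapsack, which a greedy pass by value/weight ratio solves exactly. A minimal sketch with hypothetical data:

```python
from itertools import combinations

def knapsack_ip(items, capacity):
    """Exact 0/1 knapsack by brute-force enumeration of subsets."""
    n = len(items)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            weight = sum(items[i][0] for i in subset)
            value = sum(items[i][1] for i in subset)
            if weight <= capacity:
                best = max(best, value)
    return best

def knapsack_lp_bound(items, capacity):
    """Optimal value of the LP relaxation: items may be taken
    fractionally, so the greedy by value/weight ratio is optimal."""
    bound, remaining = 0.0, capacity
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        take = min(w, remaining)
        bound += v * take / w
        remaining -= take
    return bound

items = [(3, 10), (4, 12), (5, 11)]  # (weight, value), hypothetical data
print(knapsack_ip(items, 6))         # integral optimum: 12
print(knapsack_lp_bound(items, 6))   # relaxation bound: 19.0
```

The gap between the two values ($12$ versus $19$) is the integrality gap of this instance.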
Linear programming and relaxation play a fundamental role in combinatorial algorithms; we refer the reader to the books~\cite{V01, WS11} for more detail. What we just said also applies to \emph{mixed} integer programs, in which only some of the variables are required to be integer. \subsubsection{Sparsity of integer solutions} Carath\'eodory's theorem readily measures the sparsity of optimal solutions. For example, Theorem~\ref{Bound_via_siegel} provides the following bound on the support of an optimal solution. \begin{corollary} Let $A \in \mathbb{Z}^{m \times d}$, $\ve b \in \mathbb{Z}^m$ and $\ve c \in \mathbb{Z}^d$. If $\ve c^T \ve x$ attains a minimum over the integer points of the polyhedron $\{\ve x \in \mathbb{R}^d \colon A\ve x= \ve b, \ \ve x\geq \ve 0\}$, then it does so at a point with at most $$2(m+1) \log(2\sqrt{(m+1)}M)$$ non-zero components, where $M$ is the largest of the entries of $A, \ve c$ in absolute value. \end{corollary} \noindent Similar results have been used, for instance, for solving bin-packing problems; see, e.g., \cite{eisenbrandshmonin-caratheodory, goemans+rothvoss}. See \cite{Alievetal2018} for an application to the sparsity of optimal solutions and tighter bounds for special cases such as knapsack problems. \subsubsection{Graver bases} \label{s:hilbertbasisinIP1} Another example of the influence of Carath\'eodory's theorem is the use of \emph{Hilbert bases} by \emph{Graver's optimization methods}. Although we present these ideas for integer programs, they apply more broadly, for instance to convex integer optimization problems, with respect to a convex objective function composed with linear functions, or convex separable functions, see~\cite[$\mathsection 3$ and $\mathsection 4$]{alggeo4optimization} and~\cite{shmuelbook}. Consider the integer program~\eqref{eq:ip} and assume, as is usually the case in practice, that $A \in \mathbb{Q}^{m \times n}$. 
We can decompose the solution set of $A \ve x = \ve 0$ into $2^n$ cones \[\{A \ve x = \ve 0\} = \bigcup_{\ve \varepsilon \in \{-1,1\}^n} \{A \ve x = \ve 0, \varepsilon_1x_1 \ge 0, \varepsilon_2x_2 \ge 0, \ldots, \varepsilon_nx_n \ge 0\}.\] Each of these cones is pointed and rational, so it has a Hilbert basis~\cite[Corollary 2.6.4]{alggeo4optimization}. The union of these $2^n$ Hilbert bases is called the \emph{Graver basis} of $A$. Note that Seb\H{o}'s integer Carath\'eodory theorem (Theorem~\ref{thm:sebo}) ensures that any integer point in the polyhedron $A \ve x = \ve b$ can be written as a non-negative integer linear combination of at most $2n-2$ vectors from the Graver basis of $A$. Moreover, Graver~\cite{Graver1975} established the following remarkable augmentation property: any non-optimal feasible solution of the integer program~\eqref{eq:ip} can be improved by adding some suitable vector from the Graver basis of $A$. Hence, any integer program of the form~\eqref{eq:ip} can be solved by first computing a Graver basis of $A$, then computing a feasible solution, and finally improving this solution by a greedy walk on the set of integer solutions, the candidate steps being provided by the vectors of the Graver basis. \begin{figure}[ht] \centering \includegraphics[page=23]{figures-final} \caption{Illustration, in a planar projection, of Graver basis methods for $A \ve x = \ve b$ for $A = (1 \ 2 \ 1)$. The Graver basis of $A$ consists of $(2,-1,0),(0,-1,2),(1,0,-1),(1,-1,1)$, and their opposites. On the left, the neighbors of an integer point through the Graver basis. On the right, the cone $\ve x \ge 0$ (shaded) and a walk from an arbitrary (black) feasible integer point to the (red) integer point optimal for the given direction $\ve c$.} \label{fig:graveraction} \end{figure} The main obstacle to the practical use of Graver bases is their potentially exponential size. 
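On small instances, however, the augmentation walk is easy to carry out explicitly. The sketch below uses the matrix $A = (1\ 2\ 1)$ and the Graver basis listed in the caption of Figure~\ref{fig:graveraction}; the objective vector is a hypothetical choice:

```python
def graver_walk(x, cost, graver):
    """Greedy Graver augmentation: repeatedly add a Graver element that
    keeps x non-negative (feasibility for A x = b is automatic, since
    A g = 0 for every Graver element g) and strictly decreases the cost."""
    directions = list(graver) + [tuple(-gi for gi in g) for g in graver]
    improved = True
    while improved:
        improved = False
        for g in directions:
            y = tuple(xi + gi for xi, gi in zip(x, g))
            if all(yi >= 0 for yi in y) and cost(y) < cost(x):
                x, improved = y, True
                break
    return x

graver = [(2, -1, 0), (0, -1, 2), (1, 0, -1), (1, -1, 1)]  # for A = (1 2 1)
cost = lambda x: 3 * x[0] + x[1] + 2 * x[2]  # hypothetical objective
x = graver_walk((5, 0, 0), cost, graver)     # start: feasible for b = 5
print(x, cost(x))  # the walk reaches the optimum (0, 2, 1), of value 4
```

Graver's augmentation property is what guarantees that this greedy walk cannot get stuck at a non-optimal feasible point.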
For general matrices, deciding if a given set of vectors is a Hilbert basis is already coNP-complete~\cite{durandetal}. The good news, from the last decade of work, is that for highly structured matrices, such as those with regular block decompositions, Graver bases can be computed efficiently and are actually manageable for optimization (see details in Chapters 3 and 4 of~\cite{alggeo4optimization} and the extensive presentation in \cite{shmuelbook}). \subsection{LP duality} \label{s:lpduality} An important idea for the study of linear programs is the notion of \emph{LP duality}. This idea naturally arises from the question of certifying the quality of a solution to a linear program. For example, the objective function value of an optimal solution to \begin{equation}\label{eq:lp2} \begin{aligned} \min \quad & 5 x_1 + 3 x_2 \\ \text{s.t.}\quad & \pth{\begin{matrix} 2 & 3 \\ 1 & 2 \end{matrix}} \pth{\begin{matrix}x_1\\x_2\end{matrix}} \ge \pth{\begin{matrix} 2 \\ 5\end{matrix}}\\ & \ve x \geq \ve 0 \end{aligned} \end{equation} can be seen to be at least $2$ by looking at the first constraint and at least $\frac{15}2$ by scaling the second constraint by $\frac32$. More generally, the optimal value of a linear program~\eqref{primal} can be bounded from below by the optimal value of a linear program~\eqref{dual}, where \noindent \hfill \begin{minipage}{5cm} \begin{align*}\tag{P} \label{primal} \min \quad & \ve c^T \ve x\\ \text{s.t.}\quad & A \ve x \geq \ve b\\ & \ve x \geq \ve 0 \end{align*} \end{minipage} \quad and \quad \begin{minipage}{5cm} \begin{align*}\tag{D} \label{dual} \max \quad & \ve b^T \ve y\\ \text{s.t.}\quad & A^T \ve y \leq \ve c\\ & \ve y\geq \ve 0. \end{align*} \end{minipage} \hfill \noindent The linear programs~\eqref{primal} and~\eqref{dual} are said to be \emph{dual} to one another; the variable $\ve y$ of the dual program can be interpreted as the weights of a linear combination of the constraints of the primal program (and conversely). 
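This bounding mechanism is easy to check numerically. The sketch below verifies, for the small program~\eqref{eq:lp2}, that the dual-feasible vector $\ve y = (0, \tfrac32)$ (the scaling used above) bounds the objective at a few hand-picked primal-feasible points; the sample points are arbitrary:

```python
A = [[2, 3], [1, 2]]
b = [2, 5]
c = [5, 3]

def primal_feasible(x):
    return all(xi >= 0 for xi in x) and all(
        sum(A[i][j] * x[j] for j in range(2)) >= b[i] for i in range(2))

def dual_feasible(y):
    return all(yi >= 0 for yi in y) and all(
        sum(A[i][j] * y[i] for i in range(2)) <= c[j] for j in range(2))

y = (0, 1.5)  # the scaling of the second constraint used above
assert dual_feasible(y)
bound = sum(bi * yi for bi, yi in zip(b, y))  # b^T y = 7.5
for x in [(0, 2.5), (5, 0), (1, 2)]:          # a few primal-feasible points
    assert primal_feasible(x)
    assert sum(ci * xi for ci, xi in zip(c, x)) >= bound
print(bound)  # 7.5, which is in fact the optimal value of the program
```

Weak duality guarantees the inequality for \emph{every} primal-feasible point; strong duality (below) says the bound $7.5$ is tight here.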
This relation, called \emph{weak linear programming duality}, can be strengthened. \begin{theorem}[Strong Duality] \label{strongLPduality} Given two dual linear programs, if at least one is feasible, then they have the same optimal value. \end{theorem} \noindent The duality theory of linear optimization has many applications, such as fast certification of solutions, primal-dual algorithms, and proofs of combinatorial theorems~\cite{Sch86}. But it also plays a role in discrete geometry, for example in the proof of $(p,q)$ theorems \cite{Alon92pq}. In the following subsection we are going to prove Theorem \ref{strongLPduality} in an atypical way. \subsubsection{LP duality from the MinMax theorem} LP duality is the classical approach for proving von Neumann's MinMax theorem (Theorem~\ref{thm:VonNeumann}) for two-player zero-sum games, as we mentioned in Section~\ref{s:nash-complexity}. Going in the other direction, Dantzig~\cite{dantzig-minmax} proposed a deduction of the strong duality theorem from the minimax theorem; as we explain below, Dantzig's proof required a detour, in some cases, via Farkas' lemma, another result equivalent to Theorem~\ref{strongLPduality}. The impression of equivalence between the minimax theorem, Nash equilibria for zero-sum two-player games and the strong duality theorem nevertheless lingered and became a folklore theorem. It is only recently that Adler~\cite{adler-minmaxvsduality} filled in the missing case to give a genuine direct equivalence between these three cornerstones. Dantzig's approach proceeds as follows. The weak duality already proves one inequality. 
The other inequality reduces to finding a solution $(\ve x, \ve y) \ge 0$ of the system~\eqref{combinedPD}: \[\tag{P+D}\label{combinedPD} \left\{\begin{array}{rr} A\ve x -\ve b&\geq 0,\\ -A^T\ve y+\ve c&\geq 0,\\ \ve b^T \ve y-\ve c^T \ve x&\geq 0.\\ \end{array}\right.\] This system can be rewritten as \begin{align*} \underbrace{\pth{\begin{matrix} 0 & A & -\ve b \\ -A^T & 0 & \ve c \\ \ve b^T & -\ve c^T & 0 \end{matrix}}}_{M} \pth{\begin{matrix} \ve y \\ \ve x \\ 1 \end{matrix}} \ge \ve 0, \end{align*} so our task is to find a vector $\ve z$ with positive last component and such that $M\ve z \ge \ve 0$. Consider the zero-sum game with payoff matrix $-M$ and let $(\ve s^*,\ve t^*)$ be a Nash equilibrium. The matrix $-M$ is square, of order $N = m+n+1$, so both players have the same strategy set and one may pit each strategy against itself to see that the value $v$ of the game satisfies \[ {\ve s^*}^T (-M) \ve s^* \ge v \ge {\ve t^*}^T (-M) \ve t^*.\] Since $M$ is antisymmetric, this implies that $v=0$. Moreover, for any $\ve z \in \Delta_{N-1}$ we have ${\ve s^*}^T (-M) \ve z \ge 0$, so $M {\ve s^*} \ge \ve 0$. Since $\ve s^* \in \Delta_{N-1}$, writing $\ve s^* = (\begin{matrix} \widetilde{\ve y} & \widetilde{\ve x} & \widetilde{u}\end{matrix})^T$ leads to the desired solution whenever $\widetilde{u} \neq 0$. When $\widetilde{u} = 0$, Dantzig concluded by a separate use of Farkas' lemma (the ``incompleteness'' in his derivation). Adler was able to complete this missing case, without appealing to Farkas. \subsubsection{Totally Dual Integral polyhedra} \label{s:hilbertbasisinIP2} Applied optimization models typically involve \emph{rational polyhedra}, which are expressed as systems of linear inequalities with rational coefficients. An important question for computation is whether a rational polyhedron is \emph{integral}, that is whether all its vertices have integer coordinates. Indeed, for integral polyhedra, integer optimization (which is typically very hard) becomes linear optimization (which is considered tractable). 
Let us see how Hilbert bases help when looking for rational polyhedra that are integral. In what follows, we consider a rational polyhedron $P=\{\boldsymbol{x} \colon A\boldsymbol{x} \le \ve b\}$ with $A$ and $\ve b$ rational. Checking whether $P$ is integral is a finite process, as one can simply list all the vertices. The following structural result allows us to bypass this tedious enumeration in various situations. Observe that if $P$ is integral, then for every integral vector $\ve w$, the value $\max \{\ve w^T\boldsymbol{x} : \ve x \in P\}$ is an integer (indeed, it is the inner product of two integral vectors). Surprisingly, a rational polyhedron is integral if and only if it satisfies this condition. This equivalence, due to Edmonds and Giles~\cite{edmondsgiles}, is still not a practical way to detect integral polyhedra (the set of candidate vectors $\ve w$ is infinite) but it suggests looking at the question through duality. Indeed, the strong LP duality (Theorem \ref{strongLPduality}) states $$\max\{\ve w^T \ve x \colon A \boldsymbol{x} \leq \ve b\}=\min\{\ve y^T \ve b \colon A^T \ve y = \ve w, \ve y\geq 0\},$$ so, when $\ve b$ is integral, $P$ is integral as soon as the right-hand side minimization problem has an integral optimal \emph{solution} for every integral vector $\ve w$: the optimal value $\ve y^T \ve b$ is then an integer. (Note that in general integrality properties are not preserved through linear programming duality.) A system of inequalities $A \ve x \leq \ve b$ is \emph{totally dual integral} (TDI) if the right-hand side minimization problem above has an integral optimal solution for every integral vector $\ve w$ (for which the optimum is finite). A rational polyhedron $P$ that can be represented by a TDI system where $\ve b$ is integral is thus integral. The converse is also true: any integral polyhedron can be represented by a TDI system of inequalities (but which, in practice, may not be easy to find). 
Let us stress that TDIness is a property of the system of inequalities, not of the underlying polyhedron: Giles and Pulleyblank~\cite{gilespulleyblanktdi} proved that for every rational system of inequalities $A \ve x \leq \ve b$, there is a rational number $\alpha$ such that $\alpha A \ve x \leq \alpha \ve b$ is TDI. They also proved, using Carath\'eodory-style properties, that a system of inequalities $A \ve x \leq \ve b$ is TDI if and only if for every face $F$ of $P$, the rows of $A$ which are active in $F$ form a Hilbert basis of the cone they generate. This makes checking TDIness a finite process, but still not a practical one, as checking whether a system of vectors forms a Hilbert basis is not efficient. See~\cite{durandetal} and references therein for computational issues. TDIness and related notions such as box-TDIness often shed new light on results in combinatorial optimization, for instance on the matroid intersection theorem. Consider two matroids $M_1$ and $M_2$ over the same ground set $S$, understood as their sets of independent sets. Any matroid has an associated \emph{matroid polytope}, obtained by taking the convex hull of the indicator vectors of its independent sets. It turns out that the convex hull of the independent sets in $M_1 \cap M_2$ coincides with the intersection of the matroid polytopes of both matroids. This is remarkable, as in general $\conv (A \cap B)$ is different from $\conv(A) \cap \conv(B)$. See~\cite[$\mathsection$41]{Sch03} for more on this topic. A special case of TDIness allows for linear programming proofs of combinatorial results. A matrix $A$ is \emph{totally unimodular} (TU) if every square submatrix has determinant in $\{0,-1,+1\}$. For such a matrix, the polyhedron $\{\ve x: A\ve x \leq \ve b, \ve x\geq \boldsymbol{0}\}$ is integral for every integral vector $\ve b$. TU matrices give rise to TDI systems. 
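Total unimodularity can be checked by brute force on small matrices, directly from the definition. A minimal sketch (the three-node digraph below is a hypothetical example of a node-arc incidence matrix):

```python
from itertools import combinations

def det(M):
    """Integer determinant by Laplace expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_unimodular(M):
    """Check the definition: every square submatrix has det in {-1, 0, 1}."""
    m, n = len(M), len(M[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[M[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Node-arc incidence matrix of the digraph with arcs 0->1, 1->2, 0->2:
# rows = nodes, columns = arcs, +1 at the tail and -1 at the head.
M = [[ 1,  0,  1],
     [-1,  1,  0],
     [ 0, -1, -1]]
print(is_totally_unimodular(M))                  # True
print(is_totally_unimodular([[1, 1], [-1, 1]]))  # False: that det is 2
```

Of course, enumerating all square submatrices takes exponential time; recognizing TU matrices in polynomial time is possible but rests on the much deeper decomposition theory cited below.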
They are completely characterized and are very important in combinatorics and optimization (see \cite{seymourTU,truemper+walter}). Since the transpose of a TU matrix is TU again, the strong duality theorem in linear programming (Theorem~\ref{strongLPduality}) provides alternative proofs of the K\H{o}nig theorem (the maximum cardinality of a matching in a bipartite graph is equal to the minimum cardinality of a set of nodes intersecting each edge), the K\H{o}nig-Rado theorem (the maximum cardinality of a stable set in a bipartite graph without isolated vertices is equal to the minimum number of edges needed to cover all nodes), and the integrality of the Max-Flow-Min-Cut theorem. In all three cases, the matrix $A$ is the node-arc incidence matrix of a directed graph, and it is an easy exercise to check that such a matrix is TU. \subsection{Convex optimization} Using linearization techniques, one may apply ideas from the theory of linear programming, and its duality, to more general optimization problems of the form \begin{equation}\tag{P'} \label{program} \begin{array}{rlll}\min & f(\ve x) \\ \mbox{s.t.} & h_j(\ve x)\leq 0 & j=1,\ldots,q \end{array} \end{equation} where $f$ and the $h_j$ are differentiable functions $\mathbb{R}^n\rightarrow\mathbb{R}\cup\{-\infty,+\infty\}$ (not just linear as before). \subsubsection{The KKT conditions from LP duality.} A milestone in mathematical programming is the following necessary optimality condition, due to Karush, Kuhn, and Tucker~\cite{boyd2004convex}. \begin{theorem}[KKT condition]\label{kkt} Let $\ve x^*$ be a feasible solution of the problem~\eqref{program} such that the constraints are qualified at $\ve x^*$. 
If $\ve x^*$ is a local optimum, then there are nonnegative real numbers $\mu_1, \mu_2, \ldots, \mu_q$ such that \[\displaystyle{\nabla f(\ve x^*)+\sum_{j=1}^q\mu_j\nabla h_j(\ve x^*)=\boldsymbol{0}}\qquad\mbox{and}\qquad \forall j, \quad \mu_jh_j(\ve x^*)=0.\] \end{theorem} \noindent The requirement that ``the constraints are qualified'' is a regularity condition on the feasible domain $F = \{\ve x \colon h_j(\ve x) \le 0, j \in [q]\}$ near $\ve x^*$. We do not spell out this rather technical condition but give a sufficient requirement. Call a direction $\ve d$ \emph{feasible at} $\ve x^*$ if $F$ contains a segment of positive length with endpoint $\ve x^*$ and direction $\ve d$. (This is where we are approximating: the adequate notion of feasibility is somewhat more flexible.) The constraints are qualified at $\boldsymbol{x}^*$ if the closure of the cone of feasible directions coincides with the tangent cone at $\ve x^*$, that is $\{\ve d \colon \nabla h_j(\ve x^*)\cdot\ve d\leq 0 \text{ for all } j \in [q] \hbox{ s.t. } h_j(\ve x^*) =0\}$. Even this coarser requirement may prove tedious to check, and several simpler sufficient conditions were investigated; the above criterion readily yields that affine constraints are qualified at any feasible point; another important case is that of convex differentiable constraints, which are qualified at any feasible point provided some point strictly satisfies every constraint~\cite[$\mathsection$5.5]{boyd2004convex}. The factors $\mu_j$ in Theorem~\ref{kkt} are the \emph{Lagrange multipliers}. The strong duality theorem is classically equivalent to the following lemma of Farkas~\cite[$\mathsection$ 7.3]{Sch86}. \begin{lemma}[Farkas' lemma] \label{farkas} Let $A$ be a real matrix and let $\boldsymbol{b}$ be a vector. There exists $\ve x \geq\boldsymbol{0}$ such that $A\ve x=\boldsymbol{b}$ if and only if $\ve y\cdot\boldsymbol{b}\geq 0$ for every $\ve y$ such that $A^T\boldsymbol{y}\geq\boldsymbol{0}$. 
\end{lemma} \noindent Theorem~\ref{kkt} can be deduced from Farkas' lemma via the following linearization argument. Let $\ve x^*$ be a local optimum of~\eqref{program} and write \[ A=- \pth{\pth{\nabla h_j(\ve x^*)}_{j\in J}} \quad \hbox{where} \quad J = \{j\in[q] \colon h_j(\ve x^*)=0\}.\] Since $f$ is differentiable, $\nabla f(\ve x^*)\cdot\ve d\geq 0$ for every direction~$\ve d$ feasible at $\boldsymbol{x}^*$. Moreover, since the constraints are qualified at $\ve x^*$, any direction $\ve d$ satisfying $A^T \ve d \ge \ve 0$ is in the closure of the cone of directions feasible at $\boldsymbol{x}^*$, hence \[ \forall \ve d \in \mathbb{R}^n \hbox{ s.t. } A^T \ve d \ge \ve 0, \quad \nabla f(\ve x^*)\cdot\ve d\geq 0.\] By Farkas' lemma, this is equivalent to the existence of a vector $\ve \mu'$ in $\mathbb{R}_+^{J}$ such that $A \ve \mu'=\nabla f(\ve x^*)$. Completing $\ve \mu'$ into $\ve \mu$ by zeroes yields Theorem \ref{kkt}. \subsubsection{Strong duality in convex programming from the KKT conditions.} \label{s:lpkkt} The KKT condition (Theorem~\ref{kkt}) can, in turn, be used to prove a strong duality theorem for convex programming, generalizing Theorem~\ref{strongLPduality}. Let us introduce the Lagrangian function \[ \L(\ve x,\ve \mu)=f(\ve x)+\sum_{j=1}^q\mu_j h_j(\ve x).\] Since \[ \sup_{\ve \mu \in \mathbb{R}_+^{q}} \L(\ve x,\ve \mu) = \left\{\begin{array}{ll} f(\boldsymbol{x}) & \hbox{ if } h_j(\ve x) \le 0 \hbox{ for all } j\\ +\infty & \hbox{ otherwise,} \end{array}\right.\] the program~\eqref{program} is equivalent to \[ \begin{array}{rl}\min & \sup \{\L(\ve x, \ve \mu) \colon \ve \mu \in \mathbb{R}_+^q\} \\ \mbox{s.t.} & \ve x\in\mathbb{R}^n. 
\end{array} \] The same argument as in Equation~\eqref{eq:minmax} yields \begin{equation}\label{eq:dualitefaibleconvex} \displaystyle{ \inf_{\ve x\in\mathbb{R}^n} \ \sup \{\L(\ve x, \ve \mu) \colon \ve \mu \in \mathbb{R}_+^q\} \ge \sup_{\ve \mu\in\mathbb{R}_+^q} \ \inf \{\L(\ve x, \ve \mu) \colon \ve x \in \mathbb{R}^n\}}. \end{equation} Computing the right-hand side amounts to solving the following \emph{dual} program: \begin{equation}\tag{D'}\label{dualprogram} \begin{array}{rlll}\max & g(\ve \mu) \qquad \hbox{where} \quad g(\ve \mu) = \inf\{ \L(\ve x, \ve \mu) \colon \ve x \in \mathbb{R}^n\}\\ \mbox{s.t.} & \ve \mu \ge \ve 0. \end{array} \end{equation} Note that the dual program always maximizes a concave function. In the case where Problem~\eqref{program} is a linear program, this notion of duality coincides with the LP duality introduced in Section~\ref{s:lpduality}. \begin{proposition}[Strong duality for convex optimization] Suppose that in~\eqref{program}, $f$ and $h_1, \ldots, h_q$ are convex functions. Suppose moreover that the constraints are qualified at every feasible solution. If \eqref{program} has an optimal solution, then the dual program has one too and the optimal values of \eqref{program} and \eqref{dualprogram} coincide. \end{proposition} \noindent The proof goes as follows. Let $\ve x^*$ be an optimal solution of \eqref{program}. By Theorem~\ref{kkt}, there exists $\ve \mu^*\in\mathbb{R}_+^q$ such that \[\displaystyle{\nabla f(\ve x^*)+\sum_{j=1}^q\mu_j^*\nabla h_j(\ve x^*)=\boldsymbol{0}}\qquad\mbox{and}\qquad \forall j, \quad \mu_j^*h_j(\ve x^*)=0.\] On the one hand, we have \[ \L(\boldsymbol{x}^*,\ve \mu^*)=f(\ve x^*)+\sum_{j=1}^q\mu_j^* h_j(\ve x^*) = f(\boldsymbol{x}^*).\] On the other hand, we have \[ \nabla_{\boldsymbol{x}}\L(\boldsymbol{x},\ve \mu^*) = \nabla f(\ve x)+\sum_{j=1}^q\mu_j^*\nabla h_j(\ve x), \quad \hbox{so} \quad \nabla_{\boldsymbol{x}}\L(\boldsymbol{x}^*,\ve \mu^*)=\boldsymbol{0}. 
\] The map $\boldsymbol{x} \mapsto \L(\boldsymbol{x},\ve \mu^*)$ is convex because $\ve \mu^* \ge \boldsymbol{0}$, so $\boldsymbol{x}^*$ is a global minimum and $g(\ve \mu^*)= \L(\boldsymbol{x}^*,\ve \mu^*) = f(\boldsymbol{x}^*)$. Together with the weak duality of Equation~\eqref{eq:dualitefaibleconvex}, this ensures that $\ve \mu^*$ is an optimal solution for \eqref{dualprogram}. \subsection{Sampling approaches} Let us now consider optimization problems of the form \begin{equation}\label{eq:minwit} \begin{aligned} \min \quad & f(\ve x)\\ \text{s.t.}\quad & \ve x \in C_1 \cap C_2 \cap \cdots \cap C_m \end{aligned} \end{equation} where one minimizes a function over an intersection of subsets $C_i$, the \emph{constraints}. Such problems include linear programming (when $f$ is linear and the $C_i$ are halfspaces), convex programming (when $f$ is convex and the $C_i$ are convex sets) or their integral or mixed analogues (via restrictions to $\mathbb{Z}^k \times \mathbb{R}^d$). \subsubsection{Witness sets of constraints} A first use of Helly's theorem concerns the removal of redundant constraints defining an optimal solution. To begin, consider the linear program \begin{equation}\label{eq:lpwit} \begin{aligned} \min \quad & \ve c^T \ve x \\ \text{s.t.}\quad & A \ve x \le \ve b\\ & \boldsymbol{x} \in \mathbb{R}^d, \end{aligned} \end{equation} where $A \in \mathbb{R}^{m \times d}$ (this form differs from those seen so far but is equivalent~\cite[$\mathsection 4$]{matousek2007understanding}). Assume that the problem is feasible and let $t$ denote its optimal value. The set $\{\ve x \colon A \ve x \le \ve b\}$ of feasible solutions is an intersection of $m$ halfspaces, and its intersection with the open halfspace $\ve c^T \ve x < t$ is empty. By Helly's theorem, some $d+1$ of these $m+1$ halfspaces must have empty intersection, and $\ve c^T \ve x < t$ must be one of them. 
It follows that we may drop from~\eqref{eq:lpwit} all but some (carefully chosen) $d$ constraints without changing the solution. Recall that Helly's theorem for halfspaces is dual to Carath\'eodory's theorem; in the dual, this argument yields that an optimal solution can be realized by a feasible basis (as introduced in Section~\ref{s:simplex}). More generally, given a problem of the form~\eqref{eq:minwit}, a subset $W \subseteq \{C_1, C_2, \ldots, C_m\}$ is a \emph{witness set of constraints} if \[ \begin{aligned} & \min \quad f(\ve x) & \quad = \qquad & \min \quad f(\ve x)\\ & \text{s.t.}\quad \ve x \in \bigcap_{i=1}^m C_i & & \text{s.t.} \quad \ve x \in \bigcap_{C \in W} C, \\ \end{aligned}\] and $W$ is inclusion-minimal for that property; in other words, a witness set is a non-redundant set of constraints that defines the same optimum as the entire problem. The above argument gives, \emph{mutatis mutandis}, that the witness sets of any feasible mixed-convex program over $\mathbb{Z}^k \times \mathbb{R}^d$ have size at most $2^k(d+1)-1$; this bound increases by $1$ for infeasible programs. The relation between Helly's theorem and witness sets extends beyond convexity, as observed by Amenta~\cite{amenta94}. We say that a family $\mathcal{F}$ admits a \emph{Helly-type theorem} with constant $h$ if the non-empty intersection of every $h$-element subset of $\mathcal{F}$ implies the non-empty intersection of $\mathcal{F}$. Consider a minimization problem of the form~\eqref{eq:minwit}, where $f$ is a function from a space $X$ to a space $V$. We assume that $V$ is equipped with a total order $\prec$, so that the minimization question makes sense. For any $v \in V$ we let $L_v = \{x \in X\colon f(x) \prec v\}$. \begin{proposition}\label{prop:Amenta} Let $h \in \mathbb{N}$. 
If every family $\{C_1,C_2, \ldots, C_m, L_v\}$ admits a Helly-type theorem with constant $h$, then any witness set of constraints of~\eqref{eq:minwit} has size at most $h$ (actually, $h-1$ if the problem is feasible). \end{proposition} \noindent Note that we make no assumption on $f$ (not even continuity!), $X$ or $V$. The proof for the feasible case goes as follows. Let $\mathcal{F} = \{C_1,C_2, \ldots, C_m\}$ and let $s$ denote the optimal value of~\eqref{eq:minwit}. For $G \subseteq \{C_1, C_2, \ldots, C_m\}$ define $s(G)$ as the minimum of $f$ over $\bigcap_{C \in G}C$ and put \[ s' = \max\{ s(G)\colon G \subseteq \mathcal{F} \hbox{ and } |G| = h-1\}.\] On the one hand, since $s \ge s(G)$ for every $G \subseteq \mathcal{F}$, we have $s \ge s'$. On the other hand, every $h$ elements of $\mathcal{F}$ intersect (because~\eqref{eq:minwit} is feasible) and $L_{s'}$ intersects any $h-1$ elements of $\mathcal{F}$ (by definition of $s'$); thus, every $h$ elements in $\mathcal{F} \cup \{L_{s'}\}$ intersect, and the Helly-type theorem on $\mathcal{F} \cup \{L_{s'}\}$ ensures that $s \le s'$. By minimality, any witness set thus has size at most $h-1$. \subsubsection{Combinatorial algorithms for linear programming} Devising algorithms for linear programming with provably good complexity has been a major challenge for the past 70 years. The interior point method of Karmarkar~\cite{Karmarkar1984} and the analysis of the ellipsoid method by Khachiyan~\cite{khachiyan1980polynomial} only showed that the complexity of LP is polynomial in the number $m$ of constraints, the number $d$ of variables, and the bit complexity $L$ of the entries of the matrix $A$ and vectors $\ve b$ and $\ve c$. A major question thus remains: \begin{oproblem} Is there an algorithm that solves linear programming in time polynomial in the number of constraints and the number of variables, assuming that arithmetic operations on input numbers have unit cost? 
\end{oproblem} \noindent (This is problem number nine in Smale's list of open problems~\cite{smale1998mathematical}.) An algorithm with complexity polynomial in $m$ and $d$ in the unit cost model is called \emph{strongly polynomial}. Although the simplex algorithm proves effective in practice, no choice of pivoting rule is known to ensure a number of steps polynomial in the number $m$ of constraints and the number $d$ of variables; in fact, for every pivot rule whose worst-case complexity is established, that complexity is at least exponential in $m$ and $d$~(see \cite{avis+friedmann,cunninghampivot,friedmann,klee+minty} and the references therein). Although no strongly polynomial time algorithm is known, partial progress was made through the 1980s and 1990s via combinatorial random sampling algorithms; this approach hinges on the fact that the bounded size of witness sets makes it possible to discard redundant constraints quickly. Let us illustrate the basic idea of combinatorial random sampling algorithms in its simplest form, due to Seidel~\cite{seidel1991small} (see also~\cite[$\mathsection 4$]{MMMC}). Consider \begin{equation}\label{eq:lpsei} \begin{aligned} \min \quad & \ve c^T \ve x \\ \text{s.t.}\quad &\boldsymbol{x} \in \mathbb{R}^d,\\ & \ve x \in H_1 \cap H_2 \cap \cdots \cap H_m,\\ \end{aligned} \end{equation} where each $H_i$ is a halfspace in $\mathbb{R}^d$. Pick $t \in [m]$ uniformly at random and let $\ve s_t$ denote the solution to the linear program with the constraint $H_t$ removed. The idea is to compute $\ve s_t$ recursively, then deduce the solution $\ve s$ of~\eqref{eq:lpsei} from $\ve s_t$: we check in $O(d)$ time whether $\ve s_t$ belongs to $H_t$. 
If $\ve s_t \in H_t$, then $\ve s = \ve s_t$, and if $\ve s_t \notin H_t$, then $\ve s$ must belong to the hyperplane bounding $H_t$ and be the solution to the linear program \begin{equation}\label{eq:lpsei2} \begin{aligned} \min \quad & \ve c^T \ve x \\ \text{s.t.}\quad &\boldsymbol{x} \in \partial H_t,\\ & \ve x \in \bigcap_{i \in [m]\setminus \{t\}} (H_i \cap \partial H_t). \end{aligned} \end{equation} This new linear program has $m-1$ constraints in $d-1$ variables and can be obtained from~\eqref{eq:lpsei} in time $O(dm)$. Altogether, the expected time $T(m,d)$ to compute $\ve s$ satisfies \[ T(m,d) = T(m-1,d) + O(d) + \Pr\left[\ve s_t \notin H_t\right] T(m-1,d-1).\] Observe that $\ve s_t \notin H_t$ if and only if $H_t$ belongs to \emph{every} witness set for~\eqref{eq:lpsei}. Since the size of witness sets is at most $d$, the event $\ve s_t \notin H_t$ occurs with probability at most $\frac{d}{m}$ when $t$ is chosen uniformly at random. Altogether, this recursion solves to $T(m,d) = O(d!m)$, which is the running time of Seidel's algorithm. In fact, Seidel's algorithm builds on an idea by Clarkson, which we describe next. The \emph{iterated reweighting} method of Clarkson~\cite{clarkson-lv} consists of assigning a \emph{weight} $w_i$, initially set to $1$, to every constraint $H_i$ and iterating a simple process: Sample $O(d^2)$ constraints with probabilities proportional to their current weights and solve the problem on these constraints. If the solution is feasible, we are done; otherwise, double the weights of all violated constraints and reiterate. It remains to be shown that, almost surely, the algorithm terminates. This can be seen by comparing the growth rates of the total weight of the system and of the weight of some witness set $W$. On the one hand, every unsuccessful iteration must double the weight of at least one constraint in $W$. 
On the other hand, as the constraints are chosen in each iteration with probability proportional to their current weight, the expected total weight of the constraints violated at any given iteration is $O\pth{\frac{\sum_i w_i}{d}}$~\cite{C16}. Thus, after $k$ iterations, $W$ has weight $\Omega(d 2^{k/d})$ but the total weight of all the constraints in the system is $O\pth{m\pth{1+\frac{O(1)}d}^k}$. Putting these two bounds together implies that the algorithm terminates, with high probability, within $O(d \log m)$ iterations. At each iteration one has to solve a linear program on $O\left(d^2\right)$ constraints, which can be done in time $O(d)^{d/2+O(1)}$, say by the simplex method, and compute the set of violating constraints, which takes time $O(md)$. This implies an overall running time of $O(d^2 m \log m + O(d)^{d/2+O(1)} \log m)$. Clarkson's approach was later improved by Matou\v{s}ek et al.~\cite{msw-sblp-92} to achieve an expected time complexity of~$O(d^2m+e^{O(\sqrt{d \log d})})$. A similar bound was obtained independently by Kalai~\cite{kalai1992subexponential} via a randomized pivot rule for the simplex algorithm. Clarkson's algorithm was subsequently derandomized; see~\cite{C16} and the references therein for the latest developments. \subsubsection{Combinatorial abstractions of LP} Clarkson's algorithm uses very little structure from linear programming, namely the abilities to solve a small-size problem and to decide if a given solution violates a given constraint. Surprisingly, there are many computational problems that are not ``linear'' but for which these two operations can be performed efficiently to find a solution. A simple example is the computation of the smallest enclosing circle of a finite point set $P\subset \mathbb{R}^2$. Here, the constraints are the points of $P$, the candidate solutions are the circles, and a circle violates a point if it does not enclose it.
Observe that any subset of points that minimally determines its smallest enclosing circle has size at most three, so in this case witness sets again have size at most three. It turns out that Clarkson's algorithm readily applies to this problem. This is not an LP in disguise: a generic instance may have witness sets of size two \emph{or} three. Various combinatorial abstractions of LP were studied in order to understand precisely what class of problems can be solved with the randomized approach we described before. Consider an abstract set of constraints numbered from $1$ to $m$, and an objective function that associates to any set $S \subseteq [m]$ the value $f(S) \in \mathbb{R}$ of the optimum when only the constraints in $S$ are considered. A natural black-box model allows one to compute $f(S)$ when $S$ has bounded size (independently of $m$), or to decide violations by asking whether $f(S \cup \{i\}) \stackrel{?}= f(S)$. It turns out that Clarkson's algorithm can effectively compute $f([m])$ in this abstract model under three assumptions~\cite{msw-sblp-92}: (i) that $f$ be monotone under inclusion, that is $f(S) \le f(T)$ whenever $S \subseteq T \subseteq [m]$, (ii) that $f$ be \emph{local} in the sense that \[\begin{aligned} f(S) \neq f(S \cup \{i\}) & \Leftrightarrow f(T) \neq f(T \cup \{i\})\\ & \forall S \subseteq T \subseteq [m] \hbox{ such that } f(S)=f(T), \hbox{ and } \forall i \in [m], \end{aligned}\] and (iii) that witness sets have bounded size, where a witness set $S$ is a minimal subset of $[m]$ with $f(S)=f([m])$. Functions $f$ satisfying~(i) and~(ii) are called \emph{LP-type problems} or \emph{generalized linear programming} problems. Any generic problem of the form~\eqref{eq:minwit} is LP-type; here by generic we mean that for every subset of constraints, $f$ achieves its minimum over the intersection of those constraints in only a bounded number of points.
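To make the smallest enclosing circle example concrete, here is a minimal sketch of the randomized recursion in the style of Seidel's algorithm (Welzl's variant for this problem). The function and variable names are ours, general position of the input points is assumed, and for the expected-time guarantee the input should be randomly shuffled first; the tuple `boundary` plays the role of the witness set, which the Helly-type bound limits to three points.

```python
def circumcircle(p, q, r):
    # Circumcircle (center_x, center_y, radius) of three non-collinear points.
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

def diametral_circle(p, q):
    # Smallest circle through two points: they span a diameter.
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return (cx, cy, ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 / 2)

def encloses(c, p, eps=1e-9):
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= (c[2] + eps) ** 2

def smallest_circle(points, boundary=()):
    # Recursive scheme: drop one constraint, solve, and check whether the
    # dropped point violates the solution; if so, it must lie on the
    # boundary of the optimal circle and joins the witness set.
    if not points or len(boundary) == 3:
        if len(boundary) == 3:
            return circumcircle(*boundary)
        if len(boundary) == 2:
            return diametral_circle(*boundary)
        if len(boundary) == 1:
            return (boundary[0][0], boundary[0][1], 0.0)
        return (0.0, 0.0, 0.0)
    p = points[0]
    c = smallest_circle(points[1:], boundary)
    if encloses(c, p):          # the removed constraint is not violated
        return c
    return smallest_circle(points[1:], boundary + (p,))
```

For instance, on the four corners of the unit square the recursion returns the circle centered at $(0.5, 0.5)$ with radius $\sqrt{2}/2$.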
As noted in Proposition~\ref{prop:Amenta}, controlling the size of witness sets for such problems, and thus the effectiveness of Clarkson's approach, is a matter of Helly-type theorems. Later a generalization, called \emph{violator spaces}~\cite{ViolatorSpaces2008}, was shown to give the precise family of problems solved by Clarkson's approach. \subsubsection{Chance-constrained optimization}\label{s:CCP} Consider the problem of computing, given $n$ points in the plane, the smallest disk containing a given proportion of these points (say $70\%$). More generally, given a probability measure $\mu$ in the plane, and a positive number $\varepsilon$, one may consider the optimization problem \begin{equation*} \begin{split} \min \quad & r \\ \text{s.t.} \quad & \Pr\left[\|\boldsymbol{x}-\boldsymbol{y}\|_2 \le r \right] \geq 1-\varepsilon, \\ & \boldsymbol{x} \in \mathbb{R}^2, \end{split} \end{equation*} where $\boldsymbol{y}$ is a random point chosen from the probability distribution $\mu$. This quantitative variation of the smallest enclosing circle problem, discussed above, is no longer LP-type but can still be solved effectively. The technique relies, again, on the fact that witness sets have bounded size and applies more generally to \emph{chance-constrained problems} (CCP). A CCP asks to optimize a function of a variable $\boldsymbol{x} \in \mathbb{R}^d$ under constraints depending on a parameter $\ve w \in \Delta$, of the form: \begin{equation*} \begin{split} CCP(\varepsilon) =\min \quad & g(\boldsymbol{x}) \\ \text{s.t.} \quad & \Pr\pth{f(\ve x,\ve w) \leq 0} \geq 1-\varepsilon,\; \boldsymbol{x} \in K. \end{split} \end{equation*} Here, $g$ is a convex function, the probability is taken relative to a measure $\mu$ on the space $\Delta$ of parameters, $f(\boldsymbol{x},\ve w)$ is measurable with respect to $\ve w$, $f(\cdot, \ve w)$ is convex for every $\ve w$, and $K$ is a convex set.
This type of optimization problem naturally arises when modeling with uncertain constraints~\cite{shapirodentchevarusz}. An approach to solve CCP, initiated by Calafiore and Campi \cite{calafiorecampi2005,calafiorecampi2006}, is to sample $\ve w^1, \ve w^2, \ldots, \ve w^N$ from $\mu$ and solve the deterministic convex program \begin{align*} SCP(N)= \min \quad & g(\boldsymbol{x}) \\ \text{s.t.} \quad & f(\boldsymbol{x}, \ve w^i) \leq 0 ,\quad i=1,2,\ldots, N,\; \boldsymbol{x} \in K. \end{align*} For any $\delta \in (0,1)$, if $N \geq \frac{2d}{\varepsilon} \ln\frac{1}{\varepsilon} + \frac{2}{\varepsilon} \ln\frac{1}{\delta}+2d$, then the solution to $SCP(N)$ is a solution to $CCP(\varepsilon)$ with probability at least $1-\delta$~\cite{calafiorecampi2005,calafiorecampi2006}. The proof in~\cite{DLetal-ccopt} goes as follows. For $\ve x\in \mathbb{R}^d$, let $V(\ve x)= \Pr\pth{f(\ve x,\ve w)>0}$, so that we are interested in ensuring $V(\boldsymbol{x}) < \varepsilon$. Each of the $N$ constraints $f(\cdot, \ve w^i) \leq 0$ is convex, so any witness set has size at most $d$. Now, for every $I \in \binom{[N]}d$ we define \[ \Gamma_N^I=\left\{(\ve w^1,\dots, \ve w^N) \in \Delta^N\colon (\ve w^i)_{i \in I} \text{ is a witness set} \right\}.\] Note that the $\Gamma_N^I$ decompose $\Delta^N$ according to which are the indices of the $d$ witness constraints. Let $\boldsymbol{x}^*$ denote the optimal solution of $SCP(N)$ and $\boldsymbol{x}_I$ the solution to the convex program defined by the constraints $\{\ve w^i \colon i\in I\}$ alone. The probability of failure $\Pr \pth{V(\boldsymbol{x}^*)\ge\varepsilon}$ is less than or equal to \[ \sum_{I \in \binom{[N]}d} \Pr\pth{ \left\{\left(\ve w^1,\dots, \ve w^N\right) \in \Gamma_N^I \colon V(\boldsymbol{x}_I) \geq \varepsilon\right\}}.
\] The summand corresponding to index set $I$ can be rewritten as \[\begin{aligned} \Pr&\left[ \left\{\left(\ve w^i\right)_{i\in I}\colon V(\boldsymbol{x}_I) \geq {\varepsilon} \right\}\right] \\ & \quad \times \prod_{j \notin I} \Pr\left[ \left\{\ve w^j \colon f\left(\boldsymbol{x}_I, \ve w^j\right) \leq 0\right\} \Bigm| \left\{\left(\ve w^i\right)_{i\in I}\colon V(\boldsymbol{x}_I) \geq {\varepsilon} \right\} \right]. \end{aligned}\] The first factor is at most $1$ and each of the following $N-d$ factors is at most $1-\varepsilon$. Altogether, we get $ \Pr \pth{V(\boldsymbol{x}^*) \geq \varepsilon} \le \binom{N}d \pth{1-\varepsilon}^{N-d}$ and the announced bound follows. \subsubsection{$S$-optimization} \label{centerpointsinoptima} There are many situations where one wants to optimize under convex constraints while restricting the solutions to belong to some set $S$; these are called \emph{$S$-optimization problems}. This makes it possible to model complicated constraints like mixed-integer constraints ($S=\mathbb{R}^d \times \mathbb{Z}^k$), sparsity constraints (e.g., compressed sensing), or complementarity constraints. Several of the techniques described above generalize if the intersections of $S$ with convex sets of the ambient space admit a Helly-type theorem. We denote its Helly number by $h(S)$; for example, $h(\mathbb{R}^d\times \mathbb{Z}^k) = 2^k(d+1)$. Proposition~\ref{prop:Amenta} yields that witness sets have size at most $h(S)-1$, and the reader can check that the analysis of chance-constrained programs in Section~\ref{s:CCP} applies: the solution to $SCP(N)$ is a solution to $CCP(\varepsilon)$ with probability at least $1-\delta$ if $N \geq \frac{2h(S)-2}{\varepsilon} \ln\frac{1}{\varepsilon} + \frac{2}{\varepsilon} \ln\frac{1}{\delta}+2h(S)-2$. In fact, the proof presented above differs from the initial argument~\cite{calafiorecampi2005,calafiorecampi2006} and was found when generalizing CCP to the $S$-optimization setup~\cite{DLetal-ccopt}.
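The sampling scheme of Calafiore and Campi can be checked numerically on a toy instance. The sketch below (our own construction; all names and constants are illustrative) treats the one-dimensional analogue of the smallest-disk problem: minimize $r$ subject to $\Pr[|x - w| \le r] \ge 1-\varepsilon$ for $w$ uniform on $[0,1]$. The decision variables are $(x,r)$, so $d = 2$ here, and the sampled program $SCP(N)$ is solved in closed form by the midpoint of the sampled range, with the extreme samples as witness set.

```python
import math
import random

def scenario_solution(n_samples, rng):
    # SCP(N): minimize r s.t. |x - w^i| <= r for all sampled w^i.
    # Solved by the midpoint of the range; the witness set is
    # {min sample, max sample}, of size 2 = d.
    ws = [rng.random() for _ in range(n_samples)]
    lo, hi = min(ws), max(ws)
    return (lo + hi) / 2, (hi - lo) / 2

def violation_probability(x, r):
    # V(x, r) = Pr[|x - w| > r] for w uniform on [0, 1], in closed form.
    covered = min(1.0, x + r) - max(0.0, x - r)
    return 1.0 - max(0.0, covered)

# Sample size from the bound N >= (2d/eps) ln(1/eps) + (2/eps) ln(1/delta) + 2d,
# with illustrative choices eps = 0.1, delta = 0.01, d = 2.
eps, delta, d = 0.1, 0.01, 2
N = math.ceil(2 * d / eps * math.log(1 / eps)
              + 2 / eps * math.log(1 / delta) + 2 * d)
rng = random.Random(0)   # seeded for reproducibility
x, r = scenario_solution(N, rng)
```

With these parameters $N = 189$ samples suffice, and the sampled solution indeed satisfies the chance constraint by a wide margin on this instance.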
Helly-type theorems have been obtained for various sets $S$~\cite{Averkov:2013uo,deloeraetal2015Helly,garber,queyranne+tardella}. Let us now turn our attention to the problem of minimizing a convex function~$g$ over an arbitrary subset $S \subseteq \mathbb{R}^d$. We assume that $g$ is given by a first-order evaluation oracle and that $S$ is nonempty and closed. The {\em cutting plane method} that we now present makes it possible to approximate the solution. To control the quality of the approximation, we fix a finite measure $\mu$ supported on $S$. The algorithm starts with a convex set $E_0$ that contains the solution in its interior and is such that $\mu(\inter E_0) >0$. It then builds a sequence $\{E_i\}$ where each $E_i$ is also a convex set that contains the solution in its interior. Given $E_{i-1}$, we select a point $\boldsymbol{x}_i \in (\inter E_{i-1})\cap S$ and compute $g(\boldsymbol{x}_i)$ and a subgradient $\ve h_i \in \partial g(\boldsymbol{x}_i)$. We set $\boldsymbol{x}^\star:=\textrm{argmin}_{\boldsymbol{x} \in \{\boldsymbol{x}_1, \ldots, \boldsymbol{x}_i\}}g(\boldsymbol{x})$ and define $E_i$ in a way that ensures that \[ E_{i}\; \supseteq \;\{ \boldsymbol{x} \in E_0 \colon g(\boldsymbol{x}^\star)-g(\boldsymbol{x}_j)\ge \ve h_j^T (\boldsymbol{x}-\boldsymbol{x}_j),\; \forall j\in[i] \},\] and that $\mu(\inter E_i)$ is non-increasing. We stop when $\mu(\inter E_i)$ is smaller than the desired error and return $\boldsymbol{x}^\star$. This approach leaves many details unspecified, in particular the precise definition of $E_i$ and the way to choose the points $\boldsymbol{x}_i$. When $S=\mathbb{R}^d$ and $\mu$ is the Lebesgue measure, a possible implementation is the classical \emph{ellipsoid method}~\cite{khachiyan1980polynomial}. When $S = \mathbb{Z}^d$ and $\mu$ is the counting measure for $\mathbb{Z}^d$, we obtain cutting plane algorithms for convex integer optimization problems.
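In the simplest setting, $d = 1$ with $S = \mathbb{Z}$ and the counting measure on an interval, the scheme specializes to a binary search driven by subgradient signs: the point maximizing the worst-case measure of a halfline is the median integer, and each cut discards roughly half of the remaining measure. A minimal sketch (our own naming and interface), assuming $g$ is convex on the given range:

```python
def minimize_over_integers(g, lo, hi):
    """Cutting-plane scheme for d = 1, S = Z, counting measure on [lo, hi]:
    each iteration queries the median integer and keeps the halfline that
    must contain an integer minimizer of the convex function g."""
    while lo < hi:
        mid = (lo + hi) // 2          # median integer of the current range
        if g(mid + 1) >= g(mid):      # convexity: a minimizer lies in [lo, mid]
            hi = mid
        else:                         # g strictly decreasing up to mid + 1
            lo = mid + 1
    return lo
```

For instance, `minimize_over_integers(lambda x: (x - 7.3) ** 2, -1000, 1000)` returns `7` after about eleven evaluations of `g`, as opposed to the two thousand a linear scan would use.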
Another variant of this method which uses random sampling was explored by Bertsimas and Vempala~\cite{Bertsimas:2004fp}. The choice of the points $\boldsymbol{x}_i$ in the cutting plane method is important. A particularly good choice is given by the Tukey centers. Given a vector $\ve u \in \mathbb{S}^{d-1}$ and a point $\boldsymbol{x} \in \mathbb{R}^d$ we let $H^+(\ve u,\boldsymbol{x})$ denote the halfspace $\{\ve y\in\mathbb{R}^d \colon \ve u^T (\ve y-\boldsymbol{x}) \ge 0\}$. Consider the function \begin{equation}\label{eq:max-value} \mathcal{F}(S,\mu):=\max_{\boldsymbol{x}\in S}\inf_{\ve u\in \mathbb{S}^{d-1}}\mu(H^+(\ve u,\boldsymbol{x})). \end{equation} A {\em Tukey center} is a point $\boldsymbol{x}$ attaining the maximum in~\eqref{eq:max-value}. Lower bounds on $\mathcal{F}(S,\mu)$, and therefore on the depth of Tukey centers (see Section~\ref{depthsec}), can often be obtained from Helly-type theorems. For instance, if $S = \mathbb{R}^d$ and $\mu$ is the counting measure of a finite subset $P$ of $\mathbb{R}^d$, then the centerpoint theorem (\ref{thm:centerpointthm}) ensures that $\mathcal{F}(S,\mu) \ge \frac1{d+1}|P|$. The proof of the centerpoint theorem from Helly's theorem extends to the setting of $S$-convexity: if $S\subseteq \mathbb{R}^d$ is nonempty and closed and $\mu$ is finite and supported on $S$, then $\mathcal{F}(S,\mu)\ge \frac{1}{h(S)}\mu(\mathbb{R}^d)$~\cite{basu+oertel}. For instance, Doignon's theorem ensures that if $S = \mathbb{Z}^d$ and $\mu_C$ counts integer points inside a compact set $C$, then $\mathcal{F}(S,\mu_C) \ge \frac{|C \cap \mathbb{Z}^d|}{2^d}$. It turns out that choosing $\boldsymbol{x}_i$ among the points maximizing $\mathcal{F}(S,\mu_i)$, where $\mu_i$ is the restriction of $\mu$ to $\inter(E _{i-1})$, gives the best running times among cutting plane algorithms for convex minimization over~$S$~\cite{basu+oertel}. More notions generalize to $S$-optimization.
For instance there exists an analogue of the strong duality theorem for $S$-convex optimization under some natural conditions~\cite{basuetal:S-optimality}. We will meet Tukey centers again in Section \ref{depthsec}, and we conclude this section with a challenge: \begin{oproblem} What is the complexity of computing Tukey centers for given $S$ and $\mu$? E.g., can one compute an exact Tukey center of the integer points of a convex polytope in polynomial time in the input size? \end{oproblem} \section{Combinatorial topology} \label{s:topology} From combinatorial topology, we will focus on two results about labeled or colored triangulations of simplicial complexes: \emph{Sperner's lemma and Tucker's lemma}. The importance of these two lemmas owes much to their special position at the crossroads of topology and combinatorics. As we alluded to in the introduction, Sperner's and Tucker's lemmas are combinatorial counterparts of the famous topological theorems of Brouwer and Borsuk-Ulam, respectively. Their combinatorial nature makes them particularly well-suited for computations and applications too. Combinatorial structures have been used in algebraic topology since Poincar\'e's foundational \emph{analysis situs}, so it is not surprising that some topological questions may be studied by combinatorial methods. The lemmas of Sperner and Tucker are well-known for offering an elementary access, via labelings of combinatorial structures, to important results in topology such as the theorems of Brouwer and Borsuk-Ulam. It is perhaps less obvious that some combinatorial problems may be studied by topological methods. A seminal example of topological methods applied to combinatorics was the use by L. Lov\'asz of the Borsuk-Ulam theorem to settle a conjecture of Kneser on the chromatic number of certain graphs (see Section~\ref{s-graphs}). His paper opened up the application of topological methods in combinatorics, which are now common tools.
These techniques appear in several books~\cite{Longueville-book,matousek2003using} and surveys~\cite{Bjorner-survey,Karasev-survey}. In many cases the topological methods hinge on the theorems of Brouwer or Borsuk-Ulam; as we discuss in the application sections, on several occasions the topological machinery can be made implicit, and the combinatorial question settled directly by the lemmas of Sperner or Tucker. \subsection{Sperner and Tucker} \label{sec:sperner+tucker} A \emph{labeling} of a simplicial complex $\mathsf{K}$ by a set $S$ is a map from the vertices of $\mathsf{K}$ to $S$; the \emph{label} of a vertex is its image. Sperner's lemma gives a sufficient condition for the existence of a \emph{fully-labeled} facet, that is, a facet whose vertices have pairwise distinct labels. (Sometimes the labels are called \emph{colors} and fully-labeled faces are called \emph{colorful}; we avoid this terminology here to prevent confusion with the colorful theorems in convex geometry that we discuss in Sections~\ref{s:convexgeo} and~\ref{s-graphs}.) \begin{namedtheorem}[Sperner's lemma] Assume that the vertices of a finite triangulation $\mathsf{T}$ of a simplex~$\Delta$ are labeled so that any vertex lying in a face of $\Delta$ has the same label as one of the vertices of that face. If the vertices of $\Delta$ are given pairwise distinct labels, then the number of facets of $\mathsf{T}$ whose vertices have pairwise distinct labels is odd. \end{namedtheorem} \bigskip \begin{figure}[h] \begin{center} \includegraphics[page=9]{figures-final} \end{center} \caption{A Sperner labeling of a triangulation of $\Delta_2$ illustrated by colors. The fully-labeled triangles are shown shaded. The gray edges augment the triangulation of $\Delta_2$ into a triangulation of~$\mathbb{S}^2$. \label{figspernerproof}} \end{figure} \noindent We call a labeling that satisfies the assumptions of Sperner's lemma a \emph{Sperner labeling}.
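Sperner's lemma lends itself to brute-force verification. The sketch below (our own construction) works on the regular $n$-subdivision of a triangle, draws a random valid Sperner labeling, where each grid vertex receives the index of one of its positive barycentric coordinates, and counts the fully-labeled triangles; the count is always odd, as the lemma asserts.

```python
import random

def sperner_count(n, seed=0):
    """Count fully-labeled triangles of the regular n-subdivision of a
    triangle under a random Sperner labeling: vertex (i, j) has barycentric
    coordinates (i, j, n-i-j), and its label is a uniformly chosen index of
    a positive coordinate (this is exactly the Sperner condition)."""
    rng = random.Random(seed)
    label = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            bary = (i, j, n - i - j)
            label[(i, j)] = rng.choice([k for k in range(3) if bary[k] > 0])
    count = 0
    for i in range(n):
        for j in range(n - i):
            up = [(i, j), (i + 1, j), (i, j + 1)]          # upward triangle
            if {label[v] for v in up} == {0, 1, 2}:
                count += 1
            if i + j <= n - 2:                             # downward triangle
                down = [(i + 1, j), (i, j + 1), (i + 1, j + 1)]
                if {label[v] for v in down} == {0, 1, 2}:
                    count += 1
    return count
```

Changing the seed changes the labeling and the count, but never its parity.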
A more general version holds for \emph{pseudomanifolds}, i.e., for pure $d$-dimensional simplicial complexes where every face of dimension $d-1$ is contained in at most two facets. (Recall that a simplicial complex is \emph{pure} if all its inclusion-maximal faces have the same dimension.) In particular, any triangulation of a $d$-dimensional manifold is a pseudomanifold of dimension $d$. The {\em boundary} of a pseudomanifold is the subcomplex generated by its $(d-1)$-dimensional simplices that are faces of exactly one $d$-dimensional simplex. \begin{proposition} \label{prop:sperner_even} Any labeling by $[d+1]$ of a $d$-dimensional pseudomanifold without boundary has an even number of fully-labeled facets. \end{proposition} \noindent Proposition~\ref{prop:sperner_even} follows from a simple parity argument. Consider the graph where the nodes are the facets and where the edges connect pairs of facets that share a $(d-1)$-dimensional face whose vertices use every label in $[d]$. This graph has only nodes of degree $0$, $1$, or~$2$, so it consists of vertex-disjoint cycles and paths. The nodes of degree~$1$ are exactly the fully-labeled facets, and there is an even number of them (twice the number of paths). Coming back to the remark in the introduction: this parity argument explains why a house with an odd number of openings has a room with an odd number of openings. Clearly Sperner's lemma follows from Proposition~\ref{prop:sperner_even}. To see this, observe that any Sperner labeling of a $d$-dimensional simplex $\Delta_d$ extends into a triangulation of $\mathbb{S}^d$ where (i) the outer vertices of $\Delta_d$ form a fully-labeled facet, and (ii) no other added facet is fully-labeled (as illustrated in Figure~\ref{figspernerproof}). Knowing one vertex of degree one makes it easy to find another by path following; we come back to algorithmic applications of this idea in Section~\ref{fixed-point-computing}.
Other arguments can be used to prove Sperner's lemma \cite{McLennan+Tourky2008}. \bigskip Now we will state the octahedral Tucker lemma. This is a rather streamlined version of Tucker's lemma that already suffices for all our applications. Given vectors of signs $\ve x, \ve y \in\{+,-,0\}^n$, we write $\ve x\preceq\ve y$ if every nonzero coordinate of~$\ve x$ is the same as in $\ve y$. We let $\boldsymbol{x}^+$ denote the set of indices $i$ such that $x_i = +$, and $\boldsymbol{x}^-$ similarly. In particular, $\boldsymbol{x} \preceq \boldsymbol{y}$ if and only if $\boldsymbol{x}^+ \subseteq \boldsymbol{y}^+$ and $\boldsymbol{x}^- \subseteq \boldsymbol{y}^-$. Note that each vector of signs uniquely identifies a coordinate (sub)-orthant, and that the order $\preceq$ indicates containment. There is an interpretation of $(\{+,-,0\}^n, \preceq)$ as a simplicial complex illustrated in Figure~\ref{f:geometry-octahedral-Tucker}. By $\pm a$ we mean a choice of one of the two scalars $-a$ or $a$. \begin{namedtheorem}[Octahedral Tucker lemma] \label{tuckerOlemma} Let $\lambda:\{+,-,0\}^n\setminus\{\boldsymbol{0}\}\rightarrow \{\pm 1, \pm 2, \ldots, \pm m\}$ be such that $\lambda(-\ve x)=-\lambda(\ve x)$ for all $\boldsymbol{x}$. If $\lambda(\boldsymbol{x})+\lambda(\boldsymbol{y})\neq 0$ for all $\boldsymbol{x}\preceq\boldsymbol{y}$, then $n \le m$. \end{namedtheorem} \begin{figure} \begin{center} \includegraphics[page=10]{figures-final} \caption{Geometric interpretation of the dominance graph for $\preceq$ in the octahedral Tucker lemma for $n=2$ (left) and $n=3$ (right).\label{f:geometry-octahedral-Tucker}} \end{center} \end{figure} \noindent The octahedral Tucker lemma was apparently first stated explicitly by Ziegler~\cite[Lemma~4.1]{Z-GK}, following its implicit use by Matou\v{s}ek~\cite{Matousek:2004hm} in his proof of the lower bound on the chromatic number of Kneser graphs from Tucker's lemma (see Section~\ref{s-chromatic}). 
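The octahedral Tucker lemma can be verified exhaustively in the smallest nontrivial case. The sketch below (our own construction) takes $n = 2$ and $m = n - 1 = 1$ and checks the contrapositive: every antipodal labeling $\lambda$ of $\{+,-,0\}^n\setminus\{\boldsymbol{0}\}$ by $\{\pm 1, \ldots, \pm(n-1)\}$ must contain $\boldsymbol{x} \preceq \boldsymbol{y}$ with $\lambda(\boldsymbol{x}) + \lambda(\boldsymbol{y}) = 0$.

```python
from itertools import product

def check_octahedral_tucker(n):
    """Brute-force check of the octahedral Tucker lemma with m = n - 1:
    every antipodal labeling by {+-1, ..., +-(n-1)} of the nonzero sign
    vectors must contain a dominated pair with opposite labels."""
    V = [v for v in product((1, -1, 0), repeat=n) if any(v)]
    neg = lambda x: tuple(-c for c in x)
    # x dominates into y iff every nonzero coordinate of x agrees with y
    dominates = lambda x, y: all(a == 0 or a == b for a, b in zip(x, y))
    reps = [v for v in V if v < neg(v)]   # one vector per antipodal pair
    labels = [l for k in range(1, n) for l in (k, -k)]
    for assignment in product(labels, repeat=len(reps)):
        lam = {}
        for v, l in zip(reps, assignment):
            lam[v], lam[neg(v)] = l, -l   # enforce lambda(-x) = -lambda(x)
        if not any(dominates(x, y) and lam[x] + lam[y] == 0
                   for x in V for y in V):
            return False                  # a counterexample to the lemma
    return True
```

For $n = 2$ this enumerates all $2^4$ antipodal labelings of the eight nonzero sign vectors and confirms the lemma; the enumeration grows too quickly to be practical much beyond that.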
Several classical proofs of Proposition~\ref{tuckerlemma} can be found in Matou\v{s}ek's book~\cite{matousek2003using}. As we explain in Section~\ref{subsec:continuousversionspernertucker}, the lemmas of Sperner and Tucker are indeed ``topological'' in that they essentially state that certain chain maps, namely those induced by the labeling maps, are non-trivial in simplicial homology with coefficients over $\mathbb{Z}_2$. \begin{figure}[h] \centering \includegraphics[page=11]{figures-final} \caption{An illustration of Tucker's lemma: a triangulation of $\mathbb{B}^2$ that induces a symmetric triangulation of $\mathbb{S}^1$. On the left the antipodal simplices on the boundary are painted with the same color. On the right side, a labeling of the vertices that is antipodal on the boundary.} \label{fig:anti_sym_tri} \end{figure} A more common version of Tucker's lemma deals with triangulations of a ball instead of an octahedron. Tucker's lemma gives a lower bound on the number of distinct labels used by labelings that avoid certain local patterns. We say that a triangulation $\mathsf{T}$ of $\mathbb{B}^d$ induces a \emph{symmetric triangulation} of $\mathbb{S}^{d-1}$ if its boundary $\partial\mathsf{T}$ forms a centrally-symmetric triangulation of~$\mathbb{S}^{d-1}$. A labeling of a symmetric triangulation of $\mathbb{S}^d$ by integers is \emph{antipodal} if antipodal vertices have opposite labels. \begin{proposition}[Tucker lemma] \label{tuckerlemma} Let $\mathsf{T}$ be a triangulation of $\mathbb{B}^d$ that induces a symmetric triangulation of $\mathbb{S}^{d-1}$. Let $\lambda$ be a labeling of the vertices of $\mathsf{T}$ by $\{\pm 1,\ldots,\pm d\}$. If $\lambda(-\ve v)=-\lambda(\ve v)$ for every vertex $\ve v$ of $\partial\mathsf{T}$, then there exists an edge $\ve u \ve v$ in $\mathsf{T}$ with $\lambda(\ve u)+\lambda(\ve v)=0$. \end{proposition} \noindent (See Figure~\ref{fig:anti_sym_tri} for an illustration.) 
The octahedral version is obtained by applying Proposition~\ref{tuckerlemma} to the barycentric subdivision $\mathsf{T}$ of the $n$-dimensional \emph{cross-polytope} $\Diamond^n = \conv \{\pm \ve e_i\}_{1 \le i \le n}$~\cite[Theorem~2.3.2]{matousek2003using}. Indeed, consider a map $\lambda: \{+,-,0\}^n\setminus\{\ve 0\} \to \{\pm 1, \ldots, \pm m\}$ such that $\lambda(-\ve x)=-\lambda(\ve x)$ for all $\boldsymbol{x}$ and $\lambda(\boldsymbol{x})+\lambda(\boldsymbol{y})\neq 0$ for all $\boldsymbol{x}\preceq\boldsymbol{y}$. Any $\ve x \in \{+,-,0\}^n\setminus\{\ve 0\}$ can be interpreted as the vertex in $\mathsf{T}$ corresponding to the face $\conv(\{\ve e_i: i\in \ve x^+\} \cup \{-\ve e_i: i \in \ve x^-\})$ of $\Diamond^n$. The edges in $\mathsf{T}$ connect pairs $\ve x$ and $\ve y$ such that $\boldsymbol{x}\preceq\boldsymbol{y}$. Defining $\lambda(\ve 0) = m+1$, we get a labeling of all vertices of $\mathsf{T}$ satisfying $\lambda(-\ve v)=-\lambda(\ve v)$ for every vertex $\ve v$ of $\partial\mathsf{T}$ and having no edge $\ve u \ve v$ in $\mathsf{T}$ with $\lambda(\ve u)+\lambda(\ve v)=0$. By Proposition~\ref{tuckerlemma}, we must have $m+1 > n$. As stated in the introduction, Tucker's lemma is equivalent to the fact that for any triangulation of the projective plane and any two-coloring of its vertices, one of the color classes spans a non-contractible cycle. Indeed, such a two-coloring can be seen as a two-coloring of a triangulation of the disk, with symmetric vertices of the boundary getting identical colors. If all monochromatic cycles were contractible, we could easily choose a sign for each vertex and get a labeling that would contradict Tucker's lemma. The reverse implication is also easy and left to the reader.
\bigskip Consider a triangulation $\mathsf{T}$ as in Proposition~\ref{tuckerlemma} and a labeling $\lambda$ of its vertices by $\{\pm 1, \ldots, \pm m\}$; a $k$-dimensional face of $\mathsf{T}$ is \emph{alternating} if its vertices can be indexed $\ve v_{i_0}, \ldots, \ve v_{i_k}$ so that $0 < \lambda(\ve v_{i_0}) < -\lambda(\ve v_{i_1}) < \cdots < (-1)^{k}\lambda(\ve v_{i_{k}})$ or if $0 > \lambda(\ve v_{i_0}) > -\lambda(\ve v_{i_1}) > \cdots > (-1)^{k}\lambda(\ve v_{i_{k}})$. In the first case we call the simplex \emph{positively alternating} and in the second case \emph{negatively alternating}. A lemma due to Fan~\cite{kyfan1952} generalizes Tucker's lemma in terms of a parity counting of alternating simplices. \begin{theorem}[Fan's lemma] \label{our-fan} Let $\mathsf{T}$ be a triangulation of $\mathbb{B}^d$ that induces a symmetric triangulation of $\mathbb{S}^{d-1}$. Let $\lambda$ be a labeling of the vertices of $\mathsf{T}$ by $\{\pm 1,\ldots,\pm m\}$ such that $\lambda(-\ve v)=-\lambda(\ve v)$ for every vertex $\ve v$ of $\partial\mathsf{T}$. If no two adjacent vertices have opposite labels, then $\mathsf{T}$ has an odd number of alternating facets. \end{theorem} \noindent Fan's lemma readily implies Tucker's lemma since the existence of an alternating $d$-dimensional face implies $m \ge d+1$. Going in the other direction, it was only recently observed by Alishahi~\cite{alishahi2017} that an existential version of Fan's lemma is easily derived from Tucker's lemma. Let us illustrate that, surprisingly, Fan's lemma can be easier to prove than Tucker's lemma: this is one example where a stronger hypothesis facilitates induction.
We give an inductive proof for a \emph{flag of hemispheres}, i.e., a triangulation $\mathsf{T}$ of $\mathbb{B}^d$ such that the restriction of $\mathsf{T}$ to $H_i^+$ and to $H_i^-$ triangulates each of them, where $H_i^+$ and $H_i^-$ are the $i$-dimensional hemispheres $$H_i^+=\{\ve x\in\mathbb{S}^{d-1}\colon x_{i+1}\geq 0, x_{i+2}=\cdots=x_d=0\},$$ $$H_i^-=\{\ve x\in\mathbb{S}^{d-1}\colon x_{i+1}\leq 0, x_{i+2}=\cdots=x_d=0\}.$$ (Prescott and Su~\cite{prescott+su} gave another combinatorial constructive proof for this special case.) Consider the graph whose nodes are the facets of $\mathsf{T}$ and whose edges connect pairs of facets that share a $(d-1)$-dimensional face that is positively alternating. We augment this graph with an extra node $s$ and add edges connecting $s$ to all facets of $\mathsf{T}$ that have a $(d-1)$-dimensional positively alternating face on $\partial\mathsf{T}$; in a sense, $s$ represents the ``outer facet''. Apart from $s$, all nodes have degree $0$, $1$, or $2$. The nodes of degree $1$ are exactly the $d$-dimensional alternating facets. The triangulation $\mathsf{T}$ refines $H_{d-1}^+$, which is homeomorphic to $\mathbb{B}^{d-1}$. So, by induction on the dimension, the number of $(d-1)$-dimensional alternating faces of $\partial\mathsf{T}$ in $H_{d-1}^+$ is odd; the same holds for the $(d-1)$-dimensional alternating faces of $\partial\mathsf{T}$ in $H_{d-1}^-$. The symmetry of $\partial\mathsf{T}$, and that of the labeling, imply that the degree of $s$ is odd; it follows that there is an odd number of $d$-dimensional alternating facets in $\mathsf{T}$. The above elementary proof requires that the triangulation restricts nicely to lower-dimensional spheres to allow induction. Two proofs of Theorem~\ref{our-fan} can be found in the literature, both for an equivalent version with a sphere instead of a ball (later we will explain this equivalence for our own proof of Theorem \ref{our-fan}).
On the one hand, \v{Z}ivaljevi\'c~\cite{MR2728499} observed that the labeling is essentially a classifying map that is unique up to $\mathbb{Z}_2$-homotopy, so the number of alternating facets (mod $2$) can be reformulated as the cap product of a certain cohomology class with a certain homology class. On the other hand, Musin~\cite{Musin:2012tq} builds a simplicial $\mathbb{Z}_2$-map from the triangulation to a $d$-dimensional polytope for which the following holds: a simplex is alternating if and only if its image by this simplicial map contains $0$ in its convex hull; a degree argument then concludes the proof. It turns out that Alishahi's idea to derive Fan's lemma from Tucker's lemma leads to a short proof of Theorem~\ref{our-fan}, also based on a degree argument. Before we spell out this (original) proof, let us stress that the following question remains: \begin{oproblem} Give a direct combinatorial proof of Fan's lemma (as stated in Theorem \ref{our-fan}) and of Tucker's lemma (Proposition~\ref{tuckerlemma}) valid for \emph{any} centrally symmetric triangulation. \end{oproblem} Let us now prove Theorem~\ref{our-fan} for an arbitrary triangulation $\mathsf{T}$ of $\mathbb{B}^d$. Let $\lambda$ be a labeling of the vertices of $\mathsf{T}$ satisfying the conditions of Theorem~\ref{our-fan}. We first turn $\mathsf{T}$ into a triangulation $\mathsf{T}'$ of $\mathbb{S}^d$ by gluing two antipodal copies of $\mathbb{B}^d$, each triangulated by $\mathsf{T}$; notice that the number of \emph{positively alternating} facets of $\mathsf{T}'$ equals the number of \emph{alternating} facets of $\mathsf{T}$ (both positive and negative ones), since the negatively alternating facets in one copy of $\mathsf{T}$ become positively alternating in the other copy.
We define next a labeling $\mu$ of the vertices of $\sd\mathsf{T}'$ by $\{\pm 1,\ldots,\pm (d+1)\}$: a vertex $v$ of $\sd\mathsf{T}'$ corresponds to a simplex $\tau_v$ of $\mathsf{T}'$, and we set the absolute value of $\mu(v)$ to be the number of vertices of the largest alternating face of $\tau_v$, and its sign according to alternation. This sign is defined uniquely since there cannot be maximal alternating faces of both types in $\tau_v$ (this can be checked using the fact that no adjacent vertices in $\mathsf{T}'$ can have opposite labels). Now, a crucial observation is that if $\sigma$ is an alternating facet (for $\lambda$), then $\sd\sigma$ contains exactly one alternating facet (for $\mu$) of the same type; and if $\sigma$ is not an alternating facet, then $\sd\sigma$ contains no alternating facet. At this point, to establish Theorem~\ref{our-fan}, it thus suffices to prove that $\sd\mathsf{T}'$ contains an odd number of positively alternating facets. This fact follows from basic degree theory. The assumptions on $\lambda$ guarantee that $\mu$ induces an antipodal simplicial map from $\sd\mathsf{T}'$ to $\partial\Diamond^{d+1}$, the boundary of the $(d+1)$-dimensional cross-polytope, whose vertices are identified with the elements in $\{\pm 1,\ldots,\pm (d+1)\}$. Any antipodal self-map of $\mathbb{S}^d$ is of odd degree~\cite[Theorem 4.3.32]{MR1430097}. Thus, denoting by $t\in C_d(\sd\mathsf{T}',\mathbb{Z}_2)$ the formal sum of all facets of $\sd\mathsf{T}'$ and by $z\in C_d(\partial\Diamond^{d+1},\mathbb{Z}_2)$ the formal sum of all facets of $\partial\Diamond^{d+1}$, we must have $\mu_{\sharp}(t)=z$. The only simplices that are mapped to the simplex $\{+1,-2,\ldots,(-1)^d(d+1)\}$ by $\mu_{\sharp}$ are the positively alternating ones, so there are an odd number of them.
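In the smallest case $d = 1$, Fan's lemma admits an exhaustive check: a triangulation of the interval is a path of vertices, boundary antipodality means the two endpoints carry opposite labels, and an edge is alternating exactly when its labels have opposite signs and distinct absolute values. The sketch below (our own construction; sizes chosen small enough for full enumeration) confirms the parity assertion.

```python
from itertools import product

def fan_check_dimension_one(num_vertices, max_label):
    """Brute-force check of Fan's lemma for d = 1. The path of num_vertices
    vertices triangulates B^1; labels range over {+-1, ..., +-max_label};
    the two endpoints must carry opposite labels and no edge may be
    complementary. The number of alternating edges must then be odd."""
    labels = [l for k in range(1, max_label + 1) for l in (k, -k)]
    for lam in product(labels, repeat=num_vertices):
        if lam[0] != -lam[-1]:
            continue                          # boundary antipodality
        if any(a + b == 0 for a, b in zip(lam, lam[1:])):
            continue                          # complementary edge: excluded
        alternating = sum(1 for a, b in zip(lam, lam[1:])
                          if a * b < 0 and abs(a) != abs(b))
        if alternating % 2 == 0:
            return False                      # would contradict Fan's lemma
    return True
```

The check also makes the one-dimensional proof transparent: opposite endpoint labels force an odd number of sign changes along the path, and every non-complementary sign-change edge is alternating.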
\subsection{Continuous versions} \label{subsec:continuousversionspernertucker} One of the most famous theorems about fixed points is due to the Dutch mathematician L.\,E.\,J.\ Brouwer and states that any continuous function from a ball into itself has a fixed point. Brouwer's theorem is often seen as a continuous version of Sperner's lemma (without the oddness assertion): they can be deduced easily from one another. Let us sketch how Brouwer's theorem follows from Sperner's lemma (we discuss the other direction in Section~\ref{s:s-t-generalizations}). Contrary to Brouwer's original proof, which says nothing about how to find the fixed point or a good approximation of it, this proof has computational implications (see Section~\ref{fixed-point-computing}). Without loss of generality we take the $d$-dimensional standard simplex $\Delta_d \subset {\mathbb R}^{d+1}$ as a realization of a ball (it is easy to set up a homeomorphism). Then we triangulate the simplex $\Delta_d$ and design a labeling of that triangulation tailored to the continuous function~$f$ under consideration. Specifically, we associate to a vertex $\ve v$ of the triangulation the label $i$ if the $i$-th barycentric coordinate of $\ve v$ is larger than the $i$-th barycentric coordinate of its image $f(\ve v)$. (So, intuitively, if $\ve v$ is labeled $i$, then $f$ moves $\ve v$ away from the $i$-th vertex of the standard simplex.) Note that, unless the vertex $\ve v$ is a fixed point, there must be at least one such index. If there are several such indices, simply make an arbitrary choice among them. Now, the labeling we provided satisfies the assumptions of Sperner's lemma, so we can find a fully-labeled simplex of $\mathsf{T}$ such that the $i$-th barycentric coordinate of the vertex labeled $i$ is decreased by $f$. Re-triangulate $\Delta_d$ again and again, adding more and more points in such a way that the maximum diameter of the simplices appearing in the triangulation goes to zero in the limit.
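The barycentric labeling rule used in this argument is easy to make concrete. The sketch below (in Python, with $0$-indexed labels, on the standard $2$-simplex) uses a hypothetical illustrative map $f$ that pulls every point towards the barycenter; both the map and the tie-breaking rule are our own choices for illustration, not part of the original proof.

```python
def f(v):
    # Hypothetical illustrative map on the standard 2-simplex: pull every
    # point towards the barycenter; its unique fixed point is (1/3, 1/3, 1/3).
    return [(vi + 0.1) / 1.3 for vi in v]

def sperner_label(v, f):
    """Return a label i with v[i] > f(v)[i], i.e., a barycentric coordinate
    that f decreases, breaking ties by taking the smallest such i.
    Labels are 0-indexed; returns None only when v is a fixed point of f."""
    w = f(v)
    for i, (vi, wi) in enumerate(zip(v, w)):
        if vi > wi:
            return i
    return None
```

By construction, vertex $i$ of the simplex receives label $i$, and a vertex lying on a face only receives labels of that face, so for this $f$ the rule indeed produces a Sperner labeling.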
At each step we find a fully-labeled simplex. The barycenters of all such simplices form an infinite sequence of points and, since it is a bounded sequence, it contains a convergent subsequence. Let $\boldsymbol{x}^*$ be the limit of this subsequence. Since the map $f$ is continuous, the $i$-th barycentric coordinate of $\boldsymbol{x}^*$ is larger than or equal to the $i$-th barycentric coordinate of $f(\boldsymbol{x}^*)$ for every $i$, and therefore $\boldsymbol{x}^*$ is a fixed point of the map. \bigskip The Knaster-Kuratowski-Mazurkiewicz theorem, also known as the KKM theorem, is a classical consequence of Sperner's lemma or Brouwer's theorem. It is used for instance in game theory or for the study of variational inequalities. Consider the $d$-dimensional simplex $\Delta_d =\conv \{\ve e_i \colon 1 \le i \le d+1\}$ and $d+1$ closed sets $C_1, C_2, \ldots, C_{d+1}$ in $\mathbb{R}^d$. This theorem, illustrated in Figure~\ref{f:KKM}, ensures that if for every $I \subseteq [d+1]$ the face $\conv\{\ve e_i \colon i \in I\}$ of $\Delta_d$ is covered by $\bigcup_{i \in I} C_i$, then $\bigcap_{i=1}^{d+1}C_i$ is nonempty. This statement is somewhat reminiscent of Helly's theorem. A corollary of the KKM theorem can actually be used to prove it, see Section~\ref{s:Helly}. This corollary states that if $d+1$ closed sets $C_1, C_2, \ldots, C_{d+1}$ are such that each of them contains a distinct facet of $\Delta_d$ and such that their union covers $\Delta_d$, then their intersection is nonempty (this statement is also called a \emph{dual KKM theorem} in \cite{asadaetal}). To see that it is a consequence of the KKM theorem, assign number $i$ to the facet covered by $C_i$; number the vertices of $\Delta_d$ so that vertex $i$ is on facet $i$; the $C_i$'s then satisfy the condition of the KKM theorem.
\begin{figure} \begin{center} \includegraphics[page=12]{figures-final} \caption{\small An illustration of the KKM lemma in two dimensions.} \label{f:KKM} \end{center} \end{figure} Several variations of the original KKM theorem exist. Gale~\cite{Gale-colorfulkkm} proved the following \emph{colorful} version: given $d+1$ different KKM covers $\{C^1_i\}_{i=1}^{d+1}, \{C^2_i\}_{i=1}^{d+1}$, \ldots, $\{C^{d+1}_i\}_{i=1}^{d+1}$ of the $d$-simplex, there exists a permutation $\pi$ of the numbers $[d+1]$ such that $\bigcap_{i=1}^{d+1} C_{\pi(i)}^i \neq \varnothing$. Clearly, choosing all the covers to be the same recovers the classical version of KKM stated above. Gale's colorful KKM theorem has an intuitive interpretation first stated by Gale himself: ``if each of three people paint a triangle red, white and blue according to the KKM rules of covering, then there will be a point which is in the red set of one person, the white set of another, the blue of the third''. A recent strengthening of this colorful theorem~\cite{asadaetal} states that, given $d$ KKM covers $\{C^1_i\}_{i=1}^{d+1}, \{C^2_i\}_{i=1}^{d+1}, \ldots, \{C^{d}_i\}_{i=1}^{d+1}$ of the $d$-simplex $\Delta_d$, there exist a point $\boldsymbol{x}$ in $\Delta_d$ and $d+1$ bijections $\pi_i\colon[d] \rightarrow[d+1] \setminus \{i\}$ for $i=1,\dots,d+1$, such that $\boldsymbol{x} \in \bigcap_{j=1}^d C_{\pi_i(j)}^j$ for every $i$. It is interesting to note that the proofs of these colorful results combine degree theory with Birkhoff's theorem on doubly-stochastic matrices. Finally, we note that \cite{Musin:2017} provides a common generalization of the lemmas of Sperner, Tucker, KKM, and Fan. \bigskip Another fascinating and very useful consequence of Brouwer's theorem is Kakutani's 1941 fixed-point theorem. It deals not with ordinary point-valued functions, but with \emph{set-valued functions}, where points are mapped to subsets.
For a suitable notion of continuity, it ensures that for any continuous function $F$ mapping points of a ball to convex subsets of it, there is an $\ve x$ such that $\ve x\in F(\ve x)$. Kakutani's theorem is especially useful in game theory, its most traditional application being the Nash theorem; see Section~\ref{s-games+fairness+independence}. \bigskip Similar to Sperner's lemma, Tucker's lemma has continuous and covering versions. The continuous version is the celebrated \emph{Borsuk-Ulam theorem}, which has many applications in discrete geometry, combinatorics, and topology. It asserts that there is no continuous function from $\mathbb{S}^d$ into $\mathbb{S}^{d-1}$ that commutes with the central symmetry. Nice proofs of the Borsuk-Ulam theorem from Tucker's lemma, as well as equivalent formulations and many applications, can be found for instance in the books of Matou\v{s}ek~\cite[Chapter~2.3]{matousek2003using} and de Longueville~\cite[Chapter~1]{Longueville-book}. Just as KKM is the covering version of Sperner's lemma, Tucker's lemma has a covering version, the \emph{Lyusternik-Schnirel'mann theorem}~\cite{LStheorem}. It states that, if the sphere $\mathbb{S}^d$ is covered by $d+1$ closed subsets, then one of the sets must contain two antipodal points. This theorem and some of its extensions (e.g., those due to K. Fan) have found many applications in other areas of mathematics, for instance, as for the KKM theorem, in the study of variational inequalities. \subsection{Generalizations and variations}\label{s:s-t-generalizations} A labeling $\lambda$ of a simplicial complex $\mathsf{T}$ by a set $S$ can be interpreted as a map from the vertices of $\mathsf{T}$ to the vertices of some abstract simplicial complex $\mathsf{K}$ with vertex set $S$. This viewpoint leads to several interesting developments. \bigskip A first idea is to extend $\lambda$ into a \emph{linear map} $f$ from $|\mathsf{T}|$ into $|\mathsf{K}|$.
For a Sperner labeling, $f$ maps $\Delta_d$ into itself. Composing $f$ with a suitable orthogonal transformation ensures that any fixed point of the resulting map, which must exist by Brouwer's theorem, lies in a fully-labeled simplex; this is the standard proof of Sperner's lemma from Brouwer's theorem~\cite[Section 1.1]{Longueville-book}. Using this idea, De Loera, Peterson, and Su~\cite{DeLoera:2002hj} generalized Sperner's lemma to triangulations of arbitrary polytopes. \begin{proposition}[Polytopal Sperner lemma] \label{p:polytopalsperner} Let $P \subset \mathbb{R}^d$ be a polytope with $n$ vertices, $\mathsf{T}$ a triangulation of $P$, and $\lambda$ a labeling of the vertices of $\mathsf{T}$ by $[n]$. If the vertices of $P$ have pairwise distinct labels and every vertex of~$\mathsf{T}$ lying on a face $F$ of the boundary of $P$ has the same label as some vertex of $F$, then $\mathsf{T}$ contains at least $n-d$ fully-labeled facets. \end{proposition} \noindent The gist of the proof is that $\lambda$ extends into a piecewise linear map from $|\mathsf{T}|$ to~$P$ (as illustrated in Figure~\ref{f:polytopalsperner}). This map can be shown to be surjective, so its image provides a covering of $P$ by simplices spanned by its vertices. The number of such simplices required to cover $P$ is at least $n-d$, and each of them is the image (under the extension of $\lambda$) of a fully-labeled facet of $\mathsf{T}$. This approach generalizes to non-convex polyhedral manifolds~\cite{meunier-sperner} and broader classes of manifolds~\cite{Musin:2012tq, Musin:2016}. The reader interested in recent progress on lower bounds for the number of fully-labeled facets is referred to the work of Asada et al.~\cite{asadaetal}.
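As a small sanity check of the bound in the polytopal Sperner lemma, consider a square ($n=4$, $d=2$) triangulated by four triangles around one interior point. The labeling below is a hypothetical toy instance of ours, chosen to satisfy the boundary condition; a brute-force count confirms the lower bound $n-d=2$.

```python
# Square with boundary vertices labeled 1..4 and one interior point 'c';
# the interior point may receive any label (here 1).
labels = {'v1': 1, 'v2': 2, 'v3': 3, 'v4': 4, 'c': 1}
triangles = [('v1', 'v2', 'c'), ('v2', 'v3', 'c'),
             ('v3', 'v4', 'c'), ('v4', 'v1', 'c')]

def fully_labeled(tri):
    """A facet is fully-labeled when its d+1 vertices carry pairwise
    distinct labels."""
    ls = [labels[v] for v in tri]
    return len(set(ls)) == len(ls)

count = sum(fully_labeled(t) for t in triangles)
print(count)  # -> 2, matching the lower bound n - d = 4 - 2
```

Here the two fully-labeled triangles are those avoiding the repeated label of the center.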
\begin{figure}[h] \begin{center} \includegraphics[page=13]{figures-final} \end{center} \caption{A Sperner-type labeling of a triangulation of a polygon and the associated surjective map to the polygon.\label{f:polytopalsperner}} \end{figure} \bigskip Another idea is to extend $\lambda$ into a chain map $\lambda_\sharp: C_{\bullet}(\mathsf{T},R) \to C_{\bullet}(\mathsf{K},R)$. Depending on the coefficient ring $R$ and the complex $\mathsf{K}$, one gets different generalizations of the classical statements. This extension goes as follows. Send every simplex $\{\ve v_{0}, \ldots, \ve v_{k}\}$ of $\mathsf{T}$ to $\{\lambda(\ve v_{0}), \ldots, \lambda(\ve v_{k})\}$ if the $\lambda(\ve v_{i})$ are pairwise distinct and to $0$ otherwise; the linear extension of this map to $C_{\bullet}(\mathsf{T},R)$ is the map $\lambda_\sharp$, and it commutes with $\partial$ (it is a \emph{chain map}). \medskip For $R=\mathbb{Z}_2$, we obtain a short inductive proof of Sperner's lemma. Assume that $\lambda$ is a Sperner labeling of a triangulation $\mathsf{T}$ of $\Delta_d$, so $\lambda_\sharp: C_{\bullet}(\mathsf{T},\mathbb{Z}_2) \to C_{\bullet}(\sigma_d,\mathbb{Z}_2)$. Let $t$ denote the sum of $d$-simplices of $\mathsf{T}$ and $\sigma$ the unique $d$-simplex of $\sigma_d$. Observe that $\lambda_\sharp(t) = \ell \sigma$, where $\ell$ is the number of fully-labeled simplices in $\mathsf{T}$ mod $2$. A simple induction shows that $\lambda_\sharp(\partial t) = \partial \sigma$. Thus, $\partial \lambda_\sharp(t) = \lambda_\sharp(\partial t) \neq 0$. It follows that $\ell \neq 0$, and $\mathsf{T}$ has an odd number of fully-labeled simplices. \medskip For $R=\mathbb{Z}$, the same proof gives a Sperner-type result for oriented simplices.
The \emph{orientation} of a fully-labeled simplex $\{\ve v_1,\ve v_2, \ldots, \ve v_{d+1}\}$ of $\mathsf{T}$, where $\ve v_i$ has label $i$, is defined as the sign ($1$ or $-1$) of the determinant \[ \left|\begin{array}{cccc} \ve v_1 & \ve v_2 & \ldots & \ve v_{d+1} \\ 1 & 1 & \ldots & 1\end{array}\right|.\] Specifically, this proves that in any Sperner labeling of a triangulation of $\Delta_d$ by $[d+1]$ the orientations of the fully-labeled simplices add up to $1$~\cite[Theorem~2]{Fan68}. This approach also yields a proof of (a signed version of) Fan's Lemma~\cite[Theorem~1.10]{Longueville-book}, again for triangulations with flags of hemispheres. \medskip For $R = \mathbb{Z}_q$, this idea leads to generalizations of the lemmas of Tucker and Fan with more than two ``signs''~\cite{HaSaScZi09,Me05}. The general insight, following a generalization by Dold~\cite{Dold:1983wr} of the Borsuk-Ulam theorem, is to replace $\mathbb{S}^d$ and the antipodality by a suitable simplicial complex on which $\mathbb{Z}_q$ acts freely. The resulting $\mathbb{Z}_q$-Fan lemmas actually provide combinatorial proofs of Dold's theorem. The deduction of the $\mathbb{Z}_q$-Fan lemmas from their $\mathbb{Z}_q$-Tucker versions works as explained above in the $\mathbb{Z}_2$ case~\cite{alishahi2017}. \medskip A variant of the above argument yields a new and elementary proof of a relative of Sperner's lemma, due to Meshulam~\cite{Mes01}. It is a powerful result that has found many applications in graph theory and in discrete geometry, such as the recently-found generalization of the colorful Carath\'eodory theorem by Holmsen and Karasev~\cite{Holmsen+Karasev}. We come back later to some of its applications in discrete geometry (Section \ref{caratheodory-thms}) and in graph theory (Section~\ref{subsubsec:hall}). 
Meshulam's lemma was first explicitly derived from (a special version of) Leray's acyclic cover theorem (its first use was implicit; see Kalai and Meshulam~\cite[Proposition~3.1]{Kalai:2005cm} for an explicit statement and proof for rational homology). For the sake of presentation, we consider here homology over $\mathbb{Z}_2$ but the proof generalizes, \emph{mutatis mutandis}, to an arbitrary ring. Given a simplicial complex $\mathsf{K}$ and a subset $X$ of its vertices, we denote by $\mathsf{K}[X]$ the simplicial complex formed by the simplices of $\mathsf{K}$ whose vertices are in~$X$. \begin{prop}[Meshulam lemma]\label{p:meshulamlemma} Let $\lambda$ be a labeling of the vertices of a simplicial complex $\mathsf{K}$ by $[d+1]$. If $\widetilde{H}_{|I|-2}\pth{\mathsf{K}[\lambda^{-1}(I)]}$ is trivial for every nonempty $I \subseteq [d+1]$, then $\mathsf{K}$ contains a fully-labeled $d$-dimensional face. \end{prop} \noindent The proof goes as follows. Let $\lambda_{\sharp}:C_\bullet(\mathsf{K}) \to C_\bullet(\sigma_d)$ denote the chain map induced by $\lambda$ and recall that it maps simplices with repeated labels to $0$. We build a chain map $f_\sharp\colon C_\bullet(\sigma_d) \to C_\bullet(\mathsf{K})$ such that $\lambda_\sharp \circ f_\sharp = \operatorname{id}_{\sigma_d}$; the identity $(\lambda_\sharp \circ f_\sharp)([d+1]) = [d+1]$ then ensures that $f_\sharp([d+1])$ contains an odd number of fully-labeled simplices, proving the statement. We build $f_\sharp$ by increasing dimension. We start by setting $f_\sharp(\{i\})$ to be some (arbitrary) vertex in $\lambda^{-1}(\{i\})$ for every $i \in [d+1]$; this is possible because $\widetilde{H}_{-1}(\mathsf{K}[\lambda^{-1}(\{i\})])$ is trivial. Assume that $f_\sharp$ is defined over all chains up to dimension $k$, that it maps any subset $I$ of cardinality at most $k+1$ to $C_\bullet(\mathsf{K}[\lambda^{-1}(I)])$, and that it commutes with the boundary operator. 
Now, for any subset $I$ of cardinality $k+2$, we have \[ \partial\pth{\sum_{i \in I} f_\sharp(I\setminus \{i\})} = \partial f_\sharp\pth{ \sum_{i \in I} I\setminus \{i\}} = \partial f_\sharp(\partial I) = f_\sharp(\partial^2 I) = 0,\] so the $k$-chain $\sum_{i \in I} f_\sharp(I\setminus \{i\})$ is a cycle in $C_\bullet(\mathsf{K}[\lambda^{-1}(I)])$. The assumption of the lemma ensures that it is the boundary of some chain $\gamma \in C_{|I|-1}(\mathsf{K}[\lambda^{-1}(I)])$, and we set $f_\sharp(I)$ to be $\gamma$. To see that $(\lambda_\sharp \circ f_\sharp )(I) = I$ for any $I\subseteq [d+1]$, first note that this is straightforward for $|I|=1$. For the general case, remark that \[ \partial \lambda_\sharp \pth{ f_\sharp(I)} = \lambda_\sharp \pth{\partial f_\sharp(I)} = \sum_{i \in I} \lambda_\sharp \pth{f_\sharp(I\setminus \{i\})} = \sum_{i \in I} I\setminus \{i\},\] where the last equality holds by the induction hypothesis; hence $\partial \lambda_\sharp \pth{ f_\sharp(I)} = \partial I$. We conclude by observing that $f_\sharp(I) \in C_{|I|-1}(\mathsf{K}[\lambda^{-1}(I)])$ means that $\lambda_\sharp\pth{f_\sharp(I)}$ is supported only on subsets of $I$, so it must be that $\lambda_\sharp \pth{ f_\sharp(I)} =I$. \bigskip The parity argument used to prove Proposition~\ref{prop:sperner_even} can also be found, specialized to a certain labeled pseudomanifold, in Scarf's proofs of the integer Helly theorem~\cite{Sca1977} and his classical lemma in game theory~\cite{game-scarf}. There is a related unbounded polar version that will be useful in Section~\ref{s:kernels}. \begin{corollary}[{\cite[Theorem~3]{KiPa09}}]\label{cor:KP} Let $P$ be a $d$-dimensional pointed polyhedron whose characteristic cone is generated by $d$ linearly independent vectors and whose facets are labeled by~$[d]$. If no facet containing the $i$-th extreme direction is labeled $i$, then there exists an extreme point incident to a facet of each label.
\end{corollary} Another recent variation of Sperner's lemma, motivated by applications in approximation algorithms, asks for the minimum possible number of facets in a Sperner labeling whose vertices receive at least two different labels. Mirzakhani and Vondrak~\cite{mirza+vondrak,MR3726616} settled this question for certain triangulations of the simplex, for which they provided the optimal Sperner labeling. They then proposed two very interesting open questions. \begin{oproblem} Is there a theorem that interpolates between the result above (lower bound on the number of simplices with at least two different colors) and the original Sperner's lemma (lower bound on the number of simplices with vertices of different color) by predicting how many simplices are colored with at least $j$ different colors? How does such a theorem depend on the structure of the particular triangulation? \end{oproblem} It must be mentioned that Tucker-type theorems and Sperner-type theorems are related to each other in interesting ways. For example, it is known that the Borsuk-Ulam theorem implies the Brouwer fixed-point theorem, and at the combinatorial level Nyman and Su \cite{Nyman+Su2013} proved that, likewise, Fan's lemma implies Sperner's lemma. Other interconnections can be found in \cite{prescott+su,spencer+su}. \subsection{Computational considerations} \label{fixed-point-computing} The proof of Sperner's lemma given for Proposition~\ref{prop:sperner_even} builds a graph where every vertex has degree zero, one, or two, then exhibits a vertex of degree one and argues that any \emph{other} vertex of degree one must correspond to a fully-labeled simplex. This provides a simple algorithm for finding a fully-labeled simplex: just follow the path!
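In dimension one the path-following idea reduces to a scan along the subdivided segment. The sketch below is an illustrative toy of ours, not the general algorithm (which, in higher dimension, walks from simplex to simplex through shared faces): it starts at the boundary vertex and stops at the first edge whose endpoints carry both labels.

```python
def follow_path(vertex_labels):
    """Scan a labeled subdivision of a segment from its left end and
    return the index i of the first fully-labeled edge (i, i+1),
    i.e., the first place where the label changes."""
    for i in range(len(vertex_labels) - 1):
        if vertex_labels[i] != vertex_labels[i + 1]:
            return i
    return None

# Sperner boundary condition in dimension one: the two endpoints of
# the segment receive the two distinct labels.
labels = [1, 1, 2, 1, 1, 2, 2, 2]
print(follow_path(labels))  # -> 1

# The parity phenomenon: the total number of fully-labeled edges is odd.
changes = sum(labels[i] != labels[i + 1] for i in range(len(labels) - 1))
assert changes % 2 == 1
```

Any other label sequence with distinct endpoint labels gives the same odd parity, which is exactly the one-dimensional case of the parity argument.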
We can combine this simple path-following algorithm for finding fully-labeled simplices with the proof of Brouwer's theorem, presented at the beginning of Section \ref{subsec:continuousversionspernertucker}, and obtain a method for finding an \emph{approximate fixed point} of the continuous map $f$. Again assume we are given a continuous map $f: \Delta_d \to \Delta_d$ and an $\varepsilon>0$. Our goal is to find $\boldsymbol{x} \in \Delta_d$ such that $\|f(\boldsymbol{x})-\boldsymbol{x}\| \le \varepsilon$. For this, it suffices to compute a triangulation of $\Delta_d$ with simplices of diameter sufficiently small, depending on $\varepsilon$ and the modulus of continuity of $f$, and label it as in the proof of Brouwer's theorem; then any vertex $\boldsymbol{x}$ of a fully-labeled simplex does the job (this fact is more easily formalized by using the $\ell_\infty$ norm on the barycentric coordinates). This template of proof was first presented in \cite{scarf1967} and is quite flexible, e.g., it applies to non-contracting functions. We have left out many details, for instance the choice of the triangulation to speed up the algorithm and the estimation of the modulus of continuity. The interested reader can find more details on methods to compute approximate fixed points based on these ideas in~\cite{todd1993new}. \medskip The theory of computational complexity is a formal way for computer scientists to classify the inherent difficulty of computational problems. Families of problems, called \emph{complexity classes}, collect problems of equivalent difficulty (a complete introduction can be found in \cite{Arora+Barak:complexity}). Famous complexity classes of course include the class P and the class NP, but here, and in Section \ref{compu-convgeo}, we briefly discuss the complexity classes that relate to computational versions of our five central theorems.
The \emph{path-following algorithm} for computing the fully-labeled simplex for Sperner's lemma is representative of the \emph{PPAD class}, a complexity class well-suited for studying computational problems whose solution is known to exist, but finding it is not that easy. This class was presented by Papadimitriou~\cite{ppad-original}, see also~\cite{Nisanetal-2007}. The prototypical problem of the class PPAD (which abbreviates \emph{Polynomial Parity Argument for Directed graphs}) assumes an underlying directed graph where every vertex has in- and out-degrees at most one; the graph may be implicit, and all that is required is the existence of a function that computes the neighborhood of a given vertex in time polynomial in the encoding of that vertex. The PPAD problem asks, given the encoding of an unbalanced vertex (that is, with different in- and out-degrees), to compute the encoding of \emph{another} unbalanced vertex. (Note that this computational problem does not easily reduce to a meaningful associated decision problem, since the existence of this other unbalanced vertex follows systematically from parity considerations.) A problem is in the \emph{PPAD class} if it has a polynomial reduction to the PPAD problem, and a problem from the PPAD class is \emph{PPAD-complete} if every problem from the PPAD-class has a polynomial reduction to that problem. (All reductions here are meant in the usual sense of polynomial reductions~\cite[$\mathsection 2$]{Nisanetal-2007}.) PPAD-completeness results imply conditional lower bounds in the following sense: one cannot solve a PPAD-complete problem substantially faster than by path-following, unless there is also a substantially better method than path-following for the PPAD problem (and similarly for every other problem in the PPAD class). 
As in the case of the P vs NP problem, failure over time to improve on even the most streamlined algorithms for these problems supports the conjecture that \emph{none} of these methods can be substantially improved. The ``Sperner problem'' -- where one asks, given a Sperner labeling, for a fully-labeled simplex -- is well known to be PPAD-complete in any fixed dimension. (Formalizing this problem properly requires some care, for instance the definition of a canonical sequence of triangulations with simplices of vanishing diameter, which we do not discuss.) Papadimitriou's seminal paper, which started the theory of PPAD problems~\cite{ppad-original}, settled the three-dimensional case and listed the two-dimensional case as an important open problem; it was settled in the affirmative a decade later by Chen and Deng~\cite{chen2009complexity}. While Tucker's lemma can also be proved via a path-following argument~\cite{freundtodd}, the computational problem associated to Tucker's lemma is not known to belong to the PPAD class: contrary to Sperner's lemma, there is no natural orientation of the edges of the underlying graph. The suitable complexity class to use for the ``Tucker problem'' is a superclass of the PPAD class, the class {\em PPA}. Here PPA abbreviates \emph{Polynomial Parity Argument for graphs}. This class was introduced at the same time as the PPAD class, and its definition is almost the same: instead of working with directed graphs, one works with undirected ones. The underlying graph defining the PPA problem has all its vertices of degree at most two, and the problem asks, given the encoding of a vertex of degree one, to output the encoding of another degree-one vertex. PPA contains PPAD, but whether the two classes are actually equal is a famous open problem, already asked in the paper founding this topic \cite{ppad-original}. Experts believe that these two classes are different \cite{GoldbergPapa17}. \begin{oproblem} Are the complexity classes PPA and PPAD equal?
\end{oproblem} As for the Sperner problem, the Tucker problem is PPA-complete already in dimension two (see Aisenberg et al.~\cite{aisenbergetal2015}, who corrected an earlier, wrong, assertion of PPADness). P{\'a}lv{\"o}lgyi~\cite{palvolgyi20092d} introduced a clean variation of this problem -- the octahedral Tucker problem: given a function $\lambda:\{+,-,0\}^n\setminus\{\boldsymbol{0}\}\rightarrow \{\pm 1, \pm 2, \ldots, \pm m\}$, computable in time polynomial in $n$ and such that $n > m$ and $\lambda(-\boldsymbol{x})=-\lambda(\boldsymbol{x})$ for all $\boldsymbol{x}$, compute $\boldsymbol{x}\preceq\boldsymbol{y}$ such that $\lambda(\boldsymbol{x})+\lambda(\boldsymbol{y}) = 0$. Note that, contrary to the Tucker problem we just discussed, the dimension is not part of the input. The computational complexity of the octahedral Tucker problem had long been an outstanding open question, until \cite{dengetal2017} resolved it by proving the problem to be PPA-complete. \bigskip We will also discuss in our applications, in particular Section~\ref{s:nash-complexity}, some consequences of the lemmas of Sperner or Tucker whose computational versions may be complete for two related complexity classes. The first class is FIXP, introduced by Etessami and Yannakakis~\cite{etessami+yannakakis}. It consists of the problems whose resolution on an instance $\ell$ reduces to the computation of a fixed point of some function $F_\ell$ that can be expressed by the operations $\{+,*,-,/,\max,\min\}$ with rational constants, and computed in time polynomial in the size of $\ell$; this extends PPAD, which coincides with the case of linear functions.
The second class, called $\exists\mathbb{R}$~\cite{Schaefer2015} (sometimes abbreviated ETR, for the existential theory of the reals; see \cite{gargetal2015}), concerns problems that reduce to deciding the emptiness of a general semi-algebraic set, i.e., the set of real solutions of a system of polynomial inequalities. These two complexity classes are relevant in Section \ref{s-games+fairness+independence}.
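Returning to the octahedral Tucker problem discussed above, a brute-force search makes the statement concrete for tiny instances (it is exponential in $n$, which is precisely why the computational question is interesting). The labeling \texttt{lam} below is a hypothetical example of ours, and we use the standard reading of $\boldsymbol{x}\preceq\boldsymbol{y}$: $\boldsymbol{y}$ agrees with $\boldsymbol{x}$ on every nonzero coordinate of $\boldsymbol{x}$.

```python
from itertools import product

def precedes(x, y):
    # x <= y in the octahedral order: y extends x on its nonzero coordinates
    return all(xi == 0 or xi == yi for xi, yi in zip(x, y))

def find_complementary_pair(lam, n):
    points = [p for p in product((-1, 0, 1), repeat=n) if any(p)]
    for x in points:
        for y in points:
            if precedes(x, y) and lam(x) + lam(y) == 0:
                return x, y
    return None  # cannot happen when n > m, by the octahedral Tucker lemma

# Hypothetical antipodal labeling with n = 2 > m = 1: the sign of the
# first nonzero coordinate; it satisfies lam(-x) = -lam(x).
def lam(p):
    for c in p:
        if c != 0:
            return c

x, y = find_complementary_pair(lam, 2)
print(x, y, lam(x) + lam(y))
```

Since every antipodal labeling into $\{\pm 1\}$ with $n=2$ is forced to create such a pair, the search always succeeds here.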
https://arxiv.org/abs/2009.05922
An algorithm for finding minimal generating sets of finite groups
In this article, we study connections between components of the Cayley graph $\mathrm{Cay}(G,A)$, where $A$ is an arbitrary subset of a group $G$, and cosets of the subgroup of $G$ generated by $A$. In particular, we show how to construct generating sets of $G$ if $\mathrm{Cay}(G,A)$ has finitely many components. Furthermore, we provide an algorithm for finding minimal generating sets of finite groups using their Cayley graphs.
\section{Introduction}\label{sec: introdcution} The problem of determining (minimal) generating sets of groups has been studied widely; see, for instance, \cite{MR3814347, MR2361465, MR572868, MR3073659, MR2023255, MR1289999, MR643291}. Even when a generating set of a given group is known to exist, it may be a difficult task to find one explicitly. In fact, there is no general effective method to determine a generating set of a given group. From a complexity point of view, MIN-GEN---finding a minimum-size generating set---for groups is in DSPACE\,($\log^2{n}$); see Proposition 3 of \cite{MR2296106}. Cayley graphs prove useful in studying various mathematical structures \cite{MR1957965, MR1927074, MR1939667, MR2228536, MR1955389, MR2574829, MR1934292, MR2548555} and have many applications in several fields \cite{MR2548552, MR2195356, MR2080457, MR2455531, MR2885431}. It is clear that a Cayley graph of any group $G$ encodes a large amount of information about algebraic, combinatorial, and geometric structures of $G$. In addition, Cayley graphs allow us to visualize groups and then to examine properties of groups in a convenient way. Often, a Cayley graph of a (finite) group $G$ relative to $A$ is drawn under the condition that $A$ be a generating set of $G$ (and often also that $A = A^{-1}$, e.g., in geometric group theory). In this case, the Cayley graph is strongly connected. In fact, it is a standard result in the literature that a Cayley graph $\Cay{G, A}$ is connected if and only if $A$ generates $G$. In the present article, we investigate Cayley graphs in a more general setting by relaxing the requirement that $A$ be a generating set of $G$. In particular, we count the number of components of a Cayley graph in terms of the index of a subgroup. As a consequence of this result, we are able to construct a generating set of a given group whenever its Cayley graph has finitely many components.
This leads to an algorithm for finding some minimal generating sets of finite groups. We remark that some results presented in the next section are known in algebraic graph theory; see, for instance, \cite[Lemma 5.1]{MR1957965}, \cite[p.1]{MR2320601}, and \cite[pp. 302--303]{MR1927074}. \section{Main results} \subsection{Definitions and basic properties}\label{subsec: basic property} Let $G=(V,E)$ be a graph. The equivalence relation $\p$ on $V$ is defined by $u\p v$ if and only if either $u=v$ or there is a path from $u$ to $v$ in $G$. We say that $C$ is a component of $G$ if and only if there is an equivalence class $X$ of the relation $\p$ such that $C$ is the subgraph of $G$ induced by $X$, that is, $C=G[X]$. Note that $u\p v$ if and only if $u$ and $v$ are in the same component of $G$. Let $G$ be a (finite or infinite) group and let $A$ be an arbitrary subset of $G$. Recall that the {\it Cayley digraph} of $G$ with respect to $A$, denoted by $\dCay{G, A}$, is a directed graph with the vertex set $G$ and the arc set $\cset{(g, ga)}{g\in G, a\in A\textrm{ and }a\ne e}$. Remark that we exclude the identity $e$ in order to avoid loops in Cayley digraphs. The (undirected) {\it Cayley graph} of $G$ with respect to $A$, denoted by $\Cay{G, A}$, is defined as a graph with the vertex set $G$ such that $\set{u, v}$ is an edge if and only if $u = va$ or $v = ua$ for some $a\in A\setminus\set{e}$. It is not difficult to check that $\Cay{G, A}$ is the underlying graph of $\dCay{G, A}$; that is, the vertex sets of $\Cay{G, A}$ and $\dCay{G, A}$ coincide and $\set{u, v}$ is an edge in $\Cay{G, A}$ if and only if $(u, v)$ or $(v, u)$ is an arc in $\dCay{G,A}$. To nonidentity elements of $A$, we can associate distinct colors, labeled by their names. The {\it Cayley color digraph} of $G$ with respect to $A$, denoted by $\dcCay{G, A}$, consists of the digraph $\dCay{G, A}$ in which any arc $(g, ga)$ is given color $a$ for all $a\in A\setminus\set{e}$. 
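The definitions above are straightforward to realize on a computer. The following sketch (in Python, taking the cyclic group $\mathbb{Z}_6$ written additively and the toy choice $A=\{2,3\}$, both our own assumptions for illustration) builds the arc set of $\dCay{G,A}$ and the edge set of its underlying graph $\Cay{G,A}$, and checks regularity: here $e\notin A$, so every vertex has in- and out-degree $|A|=2$, while the undirected degree is $2|A|-|A\cap A^{-1}|=3$ since $A\cap A^{-1}=\{3\}$.

```python
# Cayley digraph of Z_6 (written additively) with respect to A = {2, 3}.
n, A = 6, {2, 3}
A_inv = {(-a) % n for a in A}

# Arc set of the Cayley digraph (identity excluded, so no loops).
arcs = {(g, (g + a) % n) for g in range(n) for a in A if a % n != 0}
# Edge set of the underlying (undirected) Cayley graph.
edges = {frozenset(arc) for arc in arcs}

outdeg = {g: sum(1 for u, v in arcs if u == g) for g in range(n)}
indeg = {g: sum(1 for u, v in arcs if v == g) for g in range(n)}
deg = {g: sum(1 for e in edges if g in e) for g in range(n)}

print(set(outdeg.values()), set(indeg.values()), set(deg.values()))
# -> {2} {2} {3}: in/out-degree |A| and degree 2|A| - |A ∩ A^{-1}|
```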
Next, we mention some basic properties of Cayley graphs and Cayley digraphs. \begin{theorem}\label{thm: indegree & outdegree of dCay} Let $G$ be a group and let $A$ be a finite subset of $G$. Then $$ \indeg{v}=\outdeg{v}= \begin{cases} \abs{A} & \textrm{ if } e\notin A;\\ \abs{A}-1 & \textrm{ if } e\in A \end{cases} $$ for all vertices $v$ in $\dCay{G,A}$. Therefore, $\dCay{G,A}$ is regular. \end{theorem} \begin{proof} Note that $va=vb$ if and only if $a=b$ for all $u,v\in G, a,b\in A$. In the case when $e\notin A$, we obtain $$\outdeg{v}=\abs{\set{va:a\in A}}=\abs{A}$$ and $$\indeg{v}=\abs{\set{va^{-1}:a\in A}}=\abs{\set{a^{-1}:a\in A}}=\abs{A}.$$ In the case when $e\in A$, we obtain $$\outdeg{v}=\abs{\set{va:a\in A-\set{e}}}=\abs{A-\set{e}}=\abs{A}-1$$ and \begin{equation*}\tag*{\qedsymbol} \indeg{v}=\abs{\set{va^{-1}:a\in A-\set{e}}}=\abs{\set{a^{-1}:a\in A-\set{e}}}=\abs{A-\set{e}}=\abs{A}-1. \end{equation*} \let\qed\relax \end{proof} \begin{theorem}\label{thm: degree of Cayley graph} Let $G$ be a group and let $A$ be a finite subset of $G$ not containing $e$. Then \begin{equation} \deg{v}=2\abs{A}-\abs{A\cap A^{-1}}, \end{equation} where $A^{-1} = \cset{a^{-1}}{a\in A}$, for all vertices $v$ in $\Cay{G,A}$. Therefore, $\Cay{G,A}$ is regular. \end{theorem} \begin{proof} Suppose that $vA=\set{va:a\in A}$ and $vA^{-1}=\set{va^{-1}:a\in A}$. Then $vA\cap vA^{-1}=v(A\cap A^{-1})$. Note that \begin{align*} \textrm{$\set{u,v}\in E(\Cay{G,A})$}\quad &\Leftrightarrow\quad \textrm{$(u,v)\in E(\dCay{G,A})$ or $(v,u)\in E(\dCay{G,A})$}\\ {} &\Leftrightarrow\quad \textrm{$u=va^{-1}$ for some $a\in A$ or $u=va$ for some $a\in A$}. 
\end{align*} Hence, \begin{align*} \deg v &=\abs{\set{u\in G:\set{u,v}\in E(\Cay{G,A})}}\\ {} &= \abs{vA\cup vA^{-1}}\\ {} &= \abs{vA}+\abs{vA^{-1}}-\abs{vA\cap vA^{-1}}\\ {} &= \abs{vA}+\abs{vA^{-1}}-\abs{v(A\cap A^{-1})}\\ {} &= 2\abs{A}-\abs{A\cap A^{-1}}\tag*{\qedsymbol} \end{align*} \let\qed\relax \end{proof} The following lemma gives a characterization of the existence of paths in Cayley graphs, which is quite well known in the literature. \begin{lemma}[See, e.g., Lemma 5.1 of \cite{MR1957965}]\label{lem: path from g to h} Let $g$ and $h$ be distinct elements in a group $G$ and let $A\subseteq G$. Then there is a path from $g$ to $h$ in $\Cay{G,A}$ if and only if $g^{-1}h = a_1^{\varepsilon_1}a_2^{\varepsilon_2}\cdots a_n^{\varepsilon_n}$ for some $a_1, a_2,\ldots, a_n\in A$, $\varepsilon_1, \varepsilon_2,\ldots, \varepsilon_n\in\set{\pm 1}$. \end{lemma} \subsection{Components of Cayley graphs and Cayley digraphs} In what follows, if $A$ is a subset of a group $G$, then $\gen{A}$ denotes the subgroup of $G$ generated by $A$. That is, $\gen{A}$ is the smallest subgroup of $G$ containing $A$. Henceforward, $A$ is a subset of a (finite or infinite) group $G$ unless stated otherwise. An equivalence class of the relation $\p$ induced by the Cayley graph $\Cay{G, A}$, defined in the beginning of Section \ref{subsec: basic property}, turns out to be a coset of $\gen{A}$ in $G$, as shown in the following theorem. \begin{theorem}\label{theorem: coset of gen{A} iff p relation} Let $u, v\in G$. Then $u$ and $v$ are in the same coset of $\gen{A}$ in $G$ if and only if $u\p v$, where $\p$ is the equivalence relation induced by $\Cay{G,A}$. \end{theorem} \begin{proof} $(\Rightarrow)$ Let $X$ be a coset of $\gen{A}$ in $G$. Then $X=g\gen{A}$ for some $g\in G$. Let $u,v\in X$. 
Then $u=ga_1^{\varepsilon_1}a_2^{\varepsilon_2}\cdots a_n^{\varepsilon_n}$ and $v=gb_1^{\delta_1}b_2^{\delta_2}\cdots b_m^{\delta_m}$, where $a_1,a_2,\ldots,a_n,b_1,b_2,\ldots,b_m\in A$ and $\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n,\delta_1,\delta_2,\ldots,\delta_m\in\set{\pm 1}$. In the case when $u\neq v$, $$u^{-1}v=a_n^{-\varepsilon_n}\cdots a_2^{-\varepsilon_2}a_1^{-\varepsilon_1}b_1^{\delta_1}b_2^{\delta_2}\cdots b_m^{\delta_m}$$ implies that there is a path from $u$ to $v$ by Lemma \ref{lem: path from g to h}. Thus $u\p v$. $(\Leftarrow)$ Let $u,v\in G$ and suppose that $u\p v$. If $u=v$, then $u$ and $v$ trivially lie in the same coset of $\gen{A}$, so we may assume that there exists a path from $u$ to $v$ in $\Cay{G,A}$. Then, by Lemma \ref{lem: path from g to h}, $u^{-1}v=a_1^{\varepsilon_1}a_2^{\varepsilon_2}\cdots a_n^{\varepsilon_n}$, where $a_1,a_2,\ldots,a_n\in A, \varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n\in\set{\pm 1}$. Suppose that $u\in g\gen{A}$ for some $g\in G$. Then $u=gb_1^{\delta_1}b_2^{\delta_2}\cdots b_m^{\delta_m}$ with \mbox{$b_1,b_2,\ldots,b_m\in A$}, $\delta_1,\delta_2,\ldots,\delta_m \in\set{\pm 1}$. Thus $v=gb_1^{\delta_1}b_2^{\delta_2}\cdots b_m^{\delta_m}a_1^{\varepsilon_1}a_2^{\varepsilon_2}\cdots a_n^{\varepsilon_n}\in g\gen{A}$. \end{proof} \begin{corollary}\label{cor: equivalent class iff coset of gen{A}} Let $X\subseteq G$. Then $X$ is an equivalence class of the relation $\p$ induced by $\Cay{G,A}$ if and only if $X$ is a coset of $\gen{A}$ in $G$. \end{corollary} \begin{corollary}\label{cor: component of Cay is in the form of coset} Let $G$ be a group and let $A\subseteq G$. \begin{enumerate} \item\label{item: component of Cayley graph Cay(G,A)} $C$ is a component of $\Cay{G,A}$ if and only if there is a unique coset $X$ of $\gen{A}$ such that $C = \Cay{G,A}[X]$. \item $C$ is a component of $\dCay{G,A}$ if and only if there is a unique coset $X$ of $\gen{A}$ such that $C = \dCay{G,A}[X]$. \end{enumerate} \end{corollary} In general, finding the subgroup of $G$ generated by $A$ might be a complicated and \mbox{tedious} task.
Corollary \ref{cor: component of Cay is in the form of coset} enables us to find this subgroup by looking at the component of $\Cay{G, A}$ that contains the identity of $G$ (see Theorem \ref{thm: component in terms of cosets}). Furthermore, it \mbox{indicates} that every other component of $\Cay{G, A}$ is simply a left translation of the \mbox{identity} component (see the proof of Theorem \ref{thm: two components isomorphic}). We remark that part \eqref{item: component of Cayley graph Cay(G,A)} of Corollary \ref{cor: component of Cay is in the form of coset} is known in the literature; see, for instance, \cite[p. 1]{MR2320601}. \begin{theorem}\label{thm: component in terms of cosets} The subgroup $\gen{A}$ is equal to the set of vertices in the component of $\Cay{G,A}$ containing the identity of $G$. More generally, $C$ is a component of $\Cay{G,A}$ containing a vertex $v$ if and only if the vertex set of $C$ equals $v\gen{A}$. \end{theorem} \begin{proof} Let $C$ be the component of $\Cay{G,A}$ containing the identity of $G$. By Corollary \ref{cor: component of Cay is in the form of coset}, $C=\Cay{G,A}[g\gen{A}]$ for some $g\in G$. Since $C$ contains the identity of $G$, that is, $e\in g\gen{A}$, it follows that $g\in\gen{A}$ and so $g\gen{A}=\gen{A}$. The remaining statement can be proved in a similar fashion. \end{proof} \begin{theorem}\label{thm: two components isomorphic} If $B$ and $C$ are components of $\Cay{G,A}$, then $B$ and $C$ are isomorphic as graphs. \end{theorem} \begin{proof} Suppose that $B$ and $C$ are components of $\Cay{G,A}$. From Corollary \ref{cor: component of Cay is in the form of coset}, we obtain $B=\Cay{G,A}[g_1\gen{A}]$ and $C=\Cay{G,A}[g_2\gen{A}]$ for some $g_1,g_2\in G$. Let $\varphi$ be the map defined by $\varphi(x)=g_2g_1^{-1}x$ for all $x\in g_1\gen{A}$. It is straightforward to check that $\varphi$ is a graph isomorphism from $B$ to $C$. So $B$ and $C$ are isomorphic.
\end{proof} Another application of Corollary \ref{cor: component of Cay is in the form of coset} reveals a geometric aspect of Cayley digraphs and Cayley graphs: they are disjoint unions of smaller Cayley digraphs (or graphs). \begin{theorem} Let $G$ be a group and let $A\subseteq G$. If $C_i, i\in I$, are all the components of $\dCay{G, A}$ and if $v_i$ is a vertex in $C_i$ for all $i\in I$, then $$\dCay{G,A}=\dot{\bigcup\limits_{i\in I}}\dCay{G,A}[v_i\gen{A}],$$ where the dot notation indicates that the union is disjoint. \end{theorem} \begin{corollary} Let $G$ be a group and let $A\subseteq G$. If $C_i, i\in I$, are all the components of $\Cay{G, A}$ and if $v_i$ is a vertex in $C_i$ for all $i\in I$, then $$\Cay{G,A}=\dot{\bigcup\limits_{i\in I}}\Cay{G,A}[v_i\gen{A}].$$ \end{corollary} Next, we show that the number of components of $\Cay{G, A}$ is indeed the number of cosets of $\gen{A}$ in $G$. This result refines the well-known fact that the Cayley graph $\Cay{G, A}$ is connected if and only if $A$ generates $G$. \begin{lemma}\label{lem: number of component of Cay(G,A) equal number of component of Cay(G,genA)} The numbers of components of $\Cay{G,A}$ and $\Cay{G,\gen{A}}$ are equal. \end{lemma} \begin{proof} Set $E=\set{X:X~ \textrm{is a coset of}~ \gen{A}},~S=\set{C:C~ \textrm{is a component of}~ \Cay{G,A}}$, and $T=\set{C:C~ \textrm{is a component of}~ \Cay{G,\gen{A}}}$. By Corollary \ref{cor: component of Cay is in the form of coset} applied to $A$ and to $\gen{A}$ (note that $\gen{\gen{A}}=\gen{A}$), $\abs{S}=\abs{E}=\abs{T}$. \end{proof} \begin{theorem}\label{thm: number of component Cay, subset} The number of components of $\Cay{G,A}$ equals $[G:\gen{A}]$, the index of $\gen{A}$ in $G$. \end{theorem} \begin{proof} The theorem follows directly from Corollary \ref{cor: component of Cay is in the form of coset} and Lemma \ref{lem: number of component of Cay(G,A) equal number of component of Cay(G,genA)}.
\end{proof} As a consequence of Theorem \ref{thm: number of component Cay, subset}, we immediately obtain a few properties of \mbox{Cayley} graphs related to algebraic properties of groups. \begin{corollary} If $G$ is a finite group, then the number of components of $\Cay{G,A}$ divides $\abs{G}$. \end{corollary} \begin{proof} This follows from Theorem \ref{thm: number of component Cay, subset} and the fact that $[G:\gen{A}]$ divides $\abs{G}$. \end{proof} \begin{corollary}\label{cor: characterization of connectivity in Cayly graph} The Cayley graph $\Cay{G,A}$ is connected if and only if $G=\gen{A}$. \end{corollary} \begin{proof} This follows from the fact that $[G:\gen{A}]=1$ if and only if $\gen{A}=G$. \end{proof} \begin{corollary} Let $G$ be a group. Then $G$ is cyclic if and only if there exists an element $a\in G$ such that $\Cay{G,\set{a}}$ is connected. \end{corollary} Among other things, we obtain a graph-theoretic version of the famous \mbox{Lagrange} theorem in abstract algebra, as shown in the following theorem. \begin{theorem}\label{thm: Main Theorem} Let $G$ be a group and let $H$ be a subgroup of $G$. Then the following hold: \begin{enumerate} \item\label{item: vertex component equal left coset} Each component of $\Cay{G, H}$ has a left coset of $H$ as its vertex set and is the complete graph $K_{\abs{H}}$. In particular, there is a one-to-one correspondence between the vertex sets of components of $\Cay{G, H}$ and the left cosets of $H$ in $G$. \item\label{item: number of components} The Cayley graph $\Cay{G, H}$ has $[G\colon H]$ components. Hence, if $H$ is proper in $G$, then $\Cay{G, H}$ is disconnected. \end{enumerate} \end{theorem} In view of Theorem \ref{thm: Main Theorem}, the index formula $\abs{G} = [G\colon H]\abs{H}$ can be recovered by counting the number of vertices of $\Cay{G, H}$ in the case when $G$ is finite. 
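The component structure described in Theorem \ref{thm: Main Theorem} is easy to verify computationally. The following Python sketch is hypothetical and not part of the paper: the choice of the symmetric group $S_3$, its encoding as tuples, and the order-$2$ subgroup $H$ are ours for illustration.

```python
# Hypothetical sanity check of the Lagrange-type theorem: in Cay(G, H) the
# components are exactly the left cosets of H, each a complete graph K_|H|.
from itertools import permutations

def mul(p, q):
    # Composition of permutations encoded as tuples: (p*q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))   # the symmetric group S_3
e = (0, 1, 2)                      # identity permutation
H = [e, (1, 0, 2)]                 # subgroup of order 2

# Adjacency in Cay(G, H): u ~ v iff v = u*h for some h in H \ {e}.
adj = {g: {mul(g, h) for h in H if h != e} for g in G}

def component(v):
    # Vertex set of the component containing v (depth-first traversal).
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return frozenset(seen)

comps = {component(g) for g in G}
cosets = {frozenset(mul(g, h) for h in H) for g in G}

assert comps == cosets                  # components = left cosets of H
assert len(comps) == len(G) // len(H)   # [G : H] components
assert all(v in adj[u] for c in comps for u in c for v in c if u != v)
```

The first assertion corresponds to part \eqref{item: vertex component equal left coset}, the second to part \eqref{item: number of components}, and the last one checks that every component is complete.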
Moreover, a simple application of Theorem \ref{thm: Main Theorem} shows that $2$ always divides $\abs{G}(\abs{H}-1)$ for any subgroup $H$ of a finite group $G$. Indeed, the result is trivial when $H = \set{e}$. Therefore, we assume that $H\ne\set{e}$. Let $C$ be a component of $\Cay{G, H}$. Then $C$ is the complete graph $K_{\abs{H}}$ and so there are ${\abs{H}\choose 2} = \frac{\abs{H}(\abs{H}-1)}{2}$ edges in $C$. Hence, the total number of edges in $\Cay{G, H}$ equals $$\frac{\abs{H}(\abs{H}-1)}{2}[G\colon H] = \frac{\abs{G}(\abs{H}-1)}{2}.$$ This shows that $\frac{\abs{G}(\abs{H}-1)}{2}$ must be an integer. The next theorem shows how to construct a generating set of $G$ from an arbitrary subset $A$ of $G$ whenever $\Cay{G, A}$ has a finite number of components (equivalently, by Theorem \ref{thm: number of component Cay, subset}, whenever $\gen{A}$ has finite index in $G$; in particular, whenever $G$ is finite). \begin{theorem}\label{thm: generating set of digraph k component} If $\Cay{G,A}$ has finitely many components $C_1, C_2,\ldots, C_k$ and if $v_i$ is a vertex in $C_i$ for all $i = 1,2,\ldots, k$, then \begin{equation*} S_1 = A\cup\set{v_1^{-1}v_2, v_2^{-1}v_3,\ldots, v_{k-1}^{-1}v_k}\quad\textrm{and}\quad S_2 = A\cup\set{v_1^{-1}v_2, v_1^{-1}v_3,\ldots, v_1^{-1}v_k} \end{equation*} form generating sets of $G$. \end{theorem} \begin{proof} First, we will show that $\Cay{G,S_1}$ is connected. Let $u$ and $v$ be distinct vertices in $\Cay{G,S_1}$. If $u$ and $v$ are in the same component of $\Cay{G,A}$, then there is a path from $u$ to $v$ in $\Cay{G,A}$. Since $\Cay{G,A}$ is a subgraph of $\Cay{G,S_1}$, there is a path from $u$ to $v$ in $\Cay{G,S_1}$. Therefore, we may suppose that $u$ and $v$ are in distinct components of $\Cay{G,A}$, namely the $i^{\rm th}$ and $j^{\rm th}$ components, respectively. Hence, $u\p_1 v_i$ and $v\p_1 v_j$, where $\p_1$ is the equivalence relation induced by $\Cay{G,A}$. It follows that $u\p_2 v_i$ and $v\p_2 v_j$, where $\p_2$ is the equivalence relation induced by $\Cay{G,S_1}$.
Since $v_{i+1}=v_i(v_i^{-1}v_{i+1})$ and $v_i^{-1}v_{i+1}\ne e$ for all $i=1,2,\ldots,k-1$, we obtain that $\set{v_i,v_{i+1}}$ is an edge in $\Cay{G,S_1}$ for all $i=1,2,\ldots,k-1$. This implies that $v_i\p_2 v_j$. By symmetry and transitivity, $u\p_2 v$ and so there is a path from $u$ to $v$ in $\Cay{G,S_1}$. Thus $\Cay{G,S_1}$ is connected, and so $S_1$ generates $G$ by Corollary \ref{cor: characterization of connectivity in Cayly graph}. The verification that $S_2$ is a generating set of $G$ is similar to the case of $S_1$. \end{proof} \begin{theorem} Let $A$ be a finite subset of a group $G$ not containing $e$. If $\Cay{G,A}$ has $k$ components, where $k\in\mathbb{N}$, then $G$ is generated by $\abs{A}+k-1$ elements and so $$ \mathrm{rank}(G) \leq \abs{A}+k-1. $$ \end{theorem} \begin{proof} Let $S_2$ be the set defined in Theorem \ref{thm: generating set of digraph k component}. We claim that $\abs{S_2}=\abs{A}+k-1$. Let $B=\set{v_1^{-1}v_2, v_1^{-1}v_3,\ldots, v_1^{-1}v_k}$. By the left cancellation law, the elements in $B$ are all distinct. So $\abs{B}=k-1$. Next, we show that $A\cap B=\emptyset$. If there is an element $a\in A\cap B$, then $a=v_1^{-1}v_i$ for some $i\in\set{2,3,\ldots,k}$. Since $v_i=v_1(v_1^{-1}v_i)=v_1a$ and $a\in A\setminus\set{e}$, $\set{v_1,v_i}$ is an edge in $\Cay{G,A}$, contradicting the fact that $v_1$ and $v_i$ lie in distinct components of $\Cay{G,A}$. Thus $A\cap B=\emptyset$. Hence, $$\abs{S_2}=\abs{A\cup B}=\abs{A}+\abs{B}-\abs{A\cap B}=\abs{A}+k-1.$$ By Theorem \ref{thm: generating set of digraph k component}, $G$ is generated by $S_2$. By definition, $$ \mathrm{rank}(G) = \min{\cset{\abs{X}}{X\subseteq G\textrm{ and }\gen{X} = G}} \leq \abs{A}+k-1, $$ which completes the proof. \end{proof} \begin{corollary} Let $G$ be a group and let $a\in G\setminus\set{e}$. If $\Cay{G,\set{a}}$ has $k$ components, where $k\in\mathbb{N}$, then $G$ is generated by $k$ elements and so $\mathrm{rank}(G) \leq k$. \end{corollary} \section{An application to finding minimal generating sets} A generating set $A$ of a (finite or infinite) group $G$ is {\it minimal} if no proper subset of $A$ generates $G$.
It is clear that any finitely generated group has a minimal generating set, but finding one is quite difficult in certain circumstances. In this section, we provide an algorithm for finding minimal generating sets of finite groups as an application of Theorem \ref{thm: generating set of digraph k component}. Let $G$ be a finite group. A formal presentation of this algorithm is as follows. \begin{enumerate} \item Set $A:=\set{a_1}$, where $a_1\in G$ and $a_1\neq e$. \item Set $v_1:=a_1$ and $i:=1$. \item\label{item: skip process} Draw $\Cay{G,A}$. \item If $\Cay{G,A}$ is connected, skip to step \eqref{item: stop process}. Otherwise, set $i:=i+1$ and $v_2:=b_i$, where $b_i$ is an element of $G$ not in the component of $v_1$. \item Set $a_i:=v_1^{-1}v_2$ and $A:=A\cup\set{a_i}$. \item Return to step \eqref{item: skip process}. \item\label{item: stop process} If $i=1$, stop. Otherwise, set $i:=i-1$. \item Draw $\Cay{G,A\backslash\set{a_i}}$. \item If $\Cay{G,A\backslash\set{a_i}}$ is connected, set $A:=A\backslash\set{a_i}$. Otherwise, go to step \eqref{item: stop}. \item\label{item: stop} Return to step \eqref{item: stop process}. \end{enumerate} Theorem \ref{thm: generating set of digraph k component} ensures that this algorithm stops after finitely many steps and turns $A$ into a minimal generating set of $G$. We illustrate how this algorithm works in the next example. \begin{example} Let $G$ be the group defined by the presentation \begin{equation}\label{presentation of group G} G = \gen{a, b, c\colon a^2 = b^2 = (ab)^2 = c^3 = acabc^{-1} = abcbc^{-1}}. \end{equation} Its Cayley table is given by Table \ref{tab: group K4:C3(A4)} (cf. \cite{GAP4.10.0}).
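Before tracing the algorithm by hand on $G$, we record a hypothetical Python sketch of it (ours, not the paper's). The group is assumed to be given by an explicit multiplication rule with inverses, and for brevity the sketch is run on the cyclic group of order $12$, written additively, rather than on $G$.

```python
# Hypothetical sketch of the minimal-generating-set algorithm above, on the
# cyclic group of order 12 (addition mod 12); the identity e is 0.
n = 12
G = list(range(n))
mul = lambda x, y: (x + y) % n      # group operation
inv = lambda x: (-x) % n            # inverses
e = 0

def components(A):
    """Vertex sets of the components of Cay(G, A)."""
    gens = [a for a in A if a != e]
    comps, seen = [], set()
    for g in G:
        if g in seen:
            continue
        comp, stack = {g}, [g]
        while stack:
            u = stack.pop()
            for a in gens:
                for w in (mul(u, a), mul(u, inv(a))):
                    if w not in comp:
                        comp.add(w)
                        stack.append(w)
        comps.append(comp)
        seen |= comp
    return comps

def is_connected(A):
    return len(components(A)) == 1

def minimal_generating_set(a1):
    A, v1 = [a1], a1                      # steps (1)-(2)
    while not is_connected(A):            # steps (3)-(6): grow A
        comp_v1 = next(c for c in components(A) if v1 in c)
        v2 = next(g for g in G if g not in comp_v1)
        A.append(mul(inv(v1), v2))        # a_i := v1^{-1} v2
    for a in list(reversed(A[:-1])):      # steps (7)-(10): prune
        B = [x for x in A if x != a]
        if is_connected(B):
            A = B
    return A

A = minimal_generating_set(2)
assert is_connected(A)                                        # A generates G
assert all(not is_connected([x for x in A if x != a]) for a in A)  # minimal
```

Starting from $a_1 = 2$, the forward pass adds $2^{-1}\cdot 1 = 11$, and the pruning pass then discards $2$, since $11$ alone already generates the cyclic group of order $12$.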
\begin{table}[ht]\centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hline $\cdot$ & $e$ & $a$ & $b$ & $ab$ & $c$ & $ac$ & $bc$ & $abc$ & $cc$ & $acc$ & $bcc$ & $abcc$\\ \hline $e$ & $e$ & $a$ & $b$ & $ab$ & $c$ & $ac$ & $bc$ & $abc$ & $cc$ & $acc$ & $bcc$ & $abcc$\\ \hline $a$ & $a$ & $e$ & $ab$ & $b$ & $ac$ & $c$ & $abc$ & $bc$ & $acc$ & $cc$ & $abcc$ & $bcc$\\ \hline $b$ & $b$ & $ab$ & $e$ & $a$ & $bc$ & $abc$ & $c$ & $ac$ & $bcc$ & $abcc$ & $cc$ & $acc$\\ \hline $ab$ & $ab$ & $b$ & $a$ & $e$ & $abc$ & $bc$ & $ac$ & $c$ & $abcc$ & $bcc$ & $acc$ & $cc$\\ \hline $c$ & $c$ & $bc$ & $abc$ & $ac$ & $cc$ & $bcc$ & $abcc$ & $acc$ & $e$ & $b$ & $ab$ & $a$\\ \hline $ac$ & $ac$ & $abc$ & $bc$ & $c$ & $acc$ & $abcc$ & $bcc$ & $cc$ & $a$ & $ab$ & $b$ & $e$\\ \hline $bc$ & $bc$ & $c$ & $ac$ & $abc$ & $bcc$ & $cc$ & $acc$ & $abcc$ & $b$ & $e$ & $a$ & $ab$\\ \hline $abc$ & $abc$ & $ac$ & $c$ & $bc$ & $abcc$ & $acc$ & $cc$ & $bcc$ & $ab$ & $a$ & $e$ & $b$\\ \hline $cc$ & $cc$ & $abcc$ & $acc$ & $bcc$ & $e$ & $ab$ & $a$ & $b$ & $c$ & $abc$ & $ac$ & $bc$\\ \hline $acc$ & $acc$ & $bcc$ & $cc$ & $abcc$ & $a$ & $b$ & $e$ & $ab$ & $ac$ & $bc$ & $c$ & $abc$\\ \hline $bcc$ & $bcc$ & $acc$ & $abcc$ & $cc$ & $b$ & $a$ & $ab$ & $e$ & $bc$ & $ac$ & $abc$ & $c$\\ \hline $abcc$ & $abcc$ & $cc$ & $bcc$ & $acc$ & $ab$ & $e$ & $b$ & $a$ & $abc$ & $c$ & $bc$ & $ac$\\ \hline \end{tabular} \caption{Cayley table of the group $G$ defined by (\ref{presentation of group G}) (cf. \cite{GAP4.10.0}).}\label{tab: group K4:C3(A4)} \end{table} We can use the algorithm mentioned previously to find a minimal generating set of $G$ as follows: \begin{enumerate} \item Set $A:=\set{b}$. \item Set $v_1:=b,i:=1$. \item Draw $\Cay{G,\set{b}}$, as shown in Figure \ref{fig: dcCay{A4,{b}}}. \item Since $\Cay{G,\set{b}}$ is not connected, set $i:=2$ and $v_2:=ab$. \item Set $a_2:=b^{-1}(ab)=a$ and $A:=\set{b,a}$. \item Draw $\Cay{G,\set{b,a}}$, as shown in Figure \ref{fig: dcCay{A4,{b,a}}}. 
\item Since $\Cay{G,\set{b,a}}$ is not connected, set $i:=3$ and $v_2:=bc$. \item Set $a_3:=b^{-1}(bc)=c$ and $A:=\set{b,a,c}$. \item Draw $\Cay{G,\set{b,a,c}}$, as shown in Figure \ref{fig: dcCay{A4,{b,a,c}}}. \item Since $\Cay{G,\set{b,a,c}}$ is connected and $i=3$, set $i:=2$. \item Draw $\Cay{G,\set{b,c}}$, as shown in Figure \ref{fig: dcCay{A4,{b,c}}}. \item Since $\Cay{G,\set{b,c}}$ is connected, set $A:=\set{b,c}$. \item Since $i=2$, set $i:=1$. \item Draw $\Cay{G,\set{c}}$, as shown in Figure \ref{fig: dcCay{A4,{c}}}. \item Since $\Cay{G,\set{c}}$ is not connected, go to the next step. \item Since $i=1$, stop. \end{enumerate} This shows that $A=\set{b,c}$ is a minimal generating set of $G$. \end{example} \begin{figure}[h] \centering \definecolor{qqqqff}{rgb}{0.,0.,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-3.,-3.) rectangle (3.,3.); \draw (-2.2,2.5) node[anchor=north west] {$e$}; \draw (1.8,2.5) node[anchor=north west] {$a$}; \draw (1.7,-2.1) node[anchor=north west] {$ab$}; \draw (-2.2,-2.1) node[anchor=north west] {$b$}; \draw (-0.8,1.9) node[anchor=north west] {$cc$}; \draw (0.2,1.9) node[anchor=north west] {$acc$}; \draw (1.4,0.8) node[anchor=north west] {$ac$}; \draw (1.3,-0.3) node[anchor=north west] {$abc$}; \draw (0.1,-1.4) node[anchor=north west] {$abcc$}; \draw (-0.9,-1.4) node[anchor=north west] {$bcc$}; \draw (-1.9,-0.3) node[anchor=north west] {$bc$}; \draw (-1.8,0.8) node[anchor=north west] {$c$}; \draw [line width=2.pt,color=qqqqff] (-2.,2.)-- (-2.,-2.); \draw [line width=2.pt,color=qqqqff] (2.,2.)-- (2.,-2.); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766)-- (0.5411961001461969,-1.3065629648763766); \draw [line width=2.pt,color=qqqqff] (-1.3065629648763766,-0.541196100146197)-- (1.3065629648763766,0.5411961001461969); \draw [line 
width=2.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969)-- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (-2.,-2.) -- (-2.,2.); \draw [->,line width=1.pt,color=qqqqff] (-2.,2.) -- (-2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (2.,-2.) -- (2.,2.); \draw [->,line width=1.pt,color=qqqqff] (2.,2.) -- (2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461971,1.3065629648763766) -- (-0.5411961001461969,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461969,-1.3065629648763766) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,-0.541196100146197) -- (1.3065629648763766,0.5411961001461968); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,0.5411961001461969) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969) -- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,-0.5411961001461971) -- (-1.3065629648763766,0.5411961001461969); \begin{scriptsize} \draw [fill=black] (-2.,2.) circle (2.5pt); \draw [fill=black] (2.,2.) circle (2.5pt); \draw [fill=black] (2.,-2.) circle (2.5pt); \draw [fill=black] (-2.,-2.) 
circle (2.5pt); \draw [fill=black] (-1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (-1.3065629648763766,-0.541196100146197) circle (2.5pt); \draw [fill=black] (-0.5411961001461971,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (0.5411961001461969,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (1.3065629648763766,-0.5411961001461971) circle (2.5pt); \draw [fill=black] (1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (0.5411961001461971,1.3065629648763766) circle (2.5pt); \draw [fill=black] (-0.5411961001461969,1.3065629648763766) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \caption{$\protect\overrightarrow{\mathrm{Cay}}_c(G,\set{b})$; blue arcs are induced by $b$.} \label{fig: dcCay{A4,{b}}} \end{figure} \begin{figure}[h] \centering \definecolor{ffqqqq}{rgb}{1.,0.,0.} \definecolor{qqqqff}{rgb}{0.,0.,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-3.,-3.) rectangle (3.,3.); \draw (-2.2,2.5) node[anchor=north west] {$e$}; \draw (1.8,2.5) node[anchor=north west] {$a$}; \draw (1.7,-2.1) node[anchor=north west] {$ab$}; \draw (-2.2,-2.1) node[anchor=north west] {$b$}; \draw (-0.8,1.9) node[anchor=north west] {$cc$}; \draw (0.2,1.9) node[anchor=north west] {$acc$}; \draw (1.4,0.8) node[anchor=north west] {$ac$}; \draw (1.3,-0.3) node[anchor=north west] {$abc$}; \draw (0.1,-1.4) node[anchor=north west] {$abcc$}; \draw (-0.9,-1.4) node[anchor=north west] {$bcc$}; \draw (-1.9,-0.3) node[anchor=north west] {$bc$}; \draw (-1.8,0.8) node[anchor=north west] {$c$}; \draw [line width=2.pt,color=qqqqff] (-2.,2.)-- (-2.,-2.); \draw [line width=2.pt,color=qqqqff] (2.,2.)-- (2.,-2.); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766)-- (0.5411961001461969,-1.3065629648763766); \draw [line width=2.pt,color=qqqqff] 
(-1.3065629648763766,-0.541196100146197)-- (1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969)-- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (-2.,-2.) -- (-2.,2.); \draw [->,line width=1.pt,color=qqqqff] (-2.,2.) -- (-2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (2.,-2.) -- (2.,2.); \draw [->,line width=1.pt,color=qqqqff] (2.,2.) -- (2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461971,1.3065629648763766) -- (-0.5411961001461969,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461969,-1.3065629648763766) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,-0.541196100146197) -- (1.3065629648763766,0.5411961001461968); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,0.5411961001461969) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969) -- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,-0.5411961001461971) -- (-1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=ffqqqq] (-2.,2.)-- (2.,2.); \draw [line width=2.pt,color=ffqqqq] (-2.,-2.)-- (2.,-2.); \draw [line width=2.pt,color=ffqqqq] (-1.3065629648763766,0.5411961001461969)-- (-1.3065629648763766,-0.541196100146197); \draw [line width=2.pt,color=ffqqqq] (1.3065629648763766,0.5411961001461969)-- (1.3065629648763766,-0.5411961001461971); \draw [line width=2.pt,color=ffqqqq] (-0.5411961001461971,-1.3065629648763766)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=ffqqqq] 
(-0.5411961001461969,1.3065629648763766)-- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (-2.,2.) -- (2.,2.); \draw [->,line width=1.pt,color=ffqqqq] (2.,2.) -- (-2.,2.); \draw [->,line width=1.pt,color=ffqqqq] (-2.,-2.) -- (2.,-2.); \draw [->,line width=1.pt,color=ffqqqq] (2.,-2.) -- (-2.,-2.); \draw [->,line width=1.pt,color=ffqqqq] (-1.3065629648763766,-0.541196100146197) -- (-1.3065629648763766,0.5411961001461968); \draw [->,line width=1.pt,color=ffqqqq] (-1.3065629648763766,0.5411961001461969) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=ffqqqq] (1.3065629648763766,-0.5411961001461971) -- (1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=ffqqqq] (1.3065629648763766,0.5411961001461969) -- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=ffqqqq] (-0.5411961001461971,-1.3065629648763766) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (0.5411961001461971,1.3065629648763766) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (-0.5411961001461969,1.3065629648763766) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (0.5411961001461969,-1.3065629648763766) -- (-0.5411961001461969,1.3065629648763766); \begin{scriptsize} \draw [fill=black] (-2.,2.) circle (2.5pt); \draw [fill=black] (2.,2.) circle (2.5pt); \draw [fill=black] (2.,-2.) circle (2.5pt); \draw [fill=black] (-2.,-2.) 
circle (2.5pt); \draw [fill=black] (-1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (-1.3065629648763766,-0.541196100146197) circle (2.5pt); \draw [fill=black] (-0.5411961001461971,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (0.5411961001461969,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (1.3065629648763766,-0.5411961001461971) circle (2.5pt); \draw [fill=black] (1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (0.5411961001461971,1.3065629648763766) circle (2.5pt); \draw [fill=black] (-0.5411961001461969,1.3065629648763766) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \caption{$\protect\overrightarrow{\mathrm{Cay}}_c(G,\set{b,a})$; blue arcs are induced by $b$ and red arcs are induced by $a$.} \label{fig: dcCay{A4,{b,a}}} \end{figure} \begin{figure}[h] \centering \definecolor{qqwuqq}{rgb}{0.,0.39215686274509803,0.} \definecolor{zzttqq}{rgb}{0.6,0.2,0.} \definecolor{ffqqqq}{rgb}{1.,0.,0.} \definecolor{qqqqff}{rgb}{0.,0.,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-3.,-3.) 
rectangle (3.,3.); \draw (-2.2,2.5) node[anchor=north west] {$e$}; \draw (1.8,2.5) node[anchor=north west] {$a$}; \draw (1.7,-2.1) node[anchor=north west] {$ab$}; \draw (-2.2,-2.1) node[anchor=north west] {$b$}; \draw (-0.8,1.9) node[anchor=north west] {$cc$}; \draw (0.2,1.9) node[anchor=north west] {$acc$}; \draw (1.4,0.8) node[anchor=north west] {$ac$}; \draw (1.3,-0.3) node[anchor=north west] {$abc$}; \draw (0.1,-1.4) node[anchor=north west] {$abcc$}; \draw (-0.9,-1.4) node[anchor=north west] {$bcc$}; \draw (-1.9,-0.3) node[anchor=north west] {$bc$}; \draw (-1.8,0.8) node[anchor=north west] {$c$}; \draw [line width=2.pt,color=qqqqff] (-2.,2.)-- (-2.,-2.); \draw [line width=2.pt,color=qqqqff] (2.,2.)-- (2.,-2.); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766)-- (0.5411961001461969,-1.3065629648763766); \draw [line width=2.pt,color=qqqqff] (-1.3065629648763766,-0.541196100146197)-- (1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969)-- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (-2.,-2.) -- (-2.,2.); \draw [->,line width=1.pt,color=qqqqff] (-2.,2.) -- (-2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (2.,-2.) -- (2.,2.); \draw [->,line width=1.pt,color=qqqqff] (2.,2.) 
-- (2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461971,1.3065629648763766) -- (-0.5411961001461969,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461969,-1.3065629648763766) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,-0.541196100146197) -- (1.3065629648763766,0.5411961001461968); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,0.5411961001461969) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969) -- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,-0.5411961001461971) -- (-1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=ffqqqq] (-2.,2.)-- (2.,2.); \draw [line width=2.pt,color=ffqqqq] (-2.,-2.)-- (2.,-2.); \draw [line width=2.pt,color=ffqqqq] (-1.3065629648763766,0.5411961001461969)-- (-1.3065629648763766,-0.541196100146197); \draw [line width=2.pt,color=ffqqqq] (1.3065629648763766,0.5411961001461969)-- (1.3065629648763766,-0.5411961001461971); \draw [line width=2.pt,color=ffqqqq] (-0.5411961001461971,-1.3065629648763766)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=ffqqqq] (-0.5411961001461969,1.3065629648763766)-- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (-2.,2.) -- (2.,2.); \draw [->,line width=1.pt,color=ffqqqq] (2.,2.) -- (-2.,2.); \draw [->,line width=1.pt,color=ffqqqq] (-2.,-2.) -- (2.,-2.); \draw [->,line width=1.pt,color=ffqqqq] (2.,-2.) 
-- (-2.,-2.); \draw [->,line width=1.pt,color=ffqqqq] (-1.3065629648763766,-0.541196100146197) -- (-1.3065629648763766,0.5411961001461968); \draw [->,line width=1.pt,color=ffqqqq] (-1.3065629648763766,0.5411961001461969) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=ffqqqq] (1.3065629648763766,-0.5411961001461971) -- (1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=ffqqqq] (1.3065629648763766,0.5411961001461969) -- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=ffqqqq] (-0.5411961001461971,-1.3065629648763766) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (0.5411961001461971,1.3065629648763766) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (-0.5411961001461969,1.3065629648763766) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=ffqqqq] (0.5411961001461969,-1.3065629648763766) -- (-0.5411961001461969,1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-2.,2.)-- (-0.5411961001461969,1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-0.5411961001461969,1.3065629648763766)-- (-1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqwuqq] (-1.3065629648763766,0.5411961001461969)-- (-2.,2.); \draw [line width=2.pt,color=qqwuqq] (0.5411961001461971,1.3065629648763766)-- (2.,2.); \draw [line width=2.pt,color=qqwuqq] (2.,2.)-- (1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqwuqq] (1.3065629648763766,0.5411961001461969)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (0.5411961001461969,-1.3065629648763766)-- (1.3065629648763766,-0.5411961001461971); \draw [line width=2.pt,color=qqwuqq] (1.3065629648763766,-0.5411961001461971)-- (2.,-2.); \draw [line width=2.pt,color=qqwuqq] (2.,-2.)-- (0.5411961001461969,-1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-2.,-2.)-- 
(-0.5411961001461971,-1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-0.5411961001461971,-1.3065629648763766)-- (-1.3065629648763766,-0.541196100146197); \draw [line width=2.pt,color=qqwuqq] (-1.3065629648763766,-0.541196100146197)-- (-2.,-2.); \draw [->,line width=1.pt,color=qqwuqq] (-2.,2.) -- (-1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (-1.3065629648763766,0.5411961001461969) -- (-0.5411961001461969,1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-0.5411961001461969,1.3065629648763766) -- (-2.,2.); \draw [->,line width=1.pt,color=qqwuqq] (0.5411961001461971,1.3065629648763766) -- (2.,2.); \draw [->,line width=1.pt,color=qqwuqq] (2.,2.) -- (1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (1.3065629648763766,0.5411961001461969) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-2.,-2.) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (-1.3065629648763766,-0.541196100146197) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-0.5411961001461971,-1.3065629648763766) -- (-2.,-2.); \draw [->,line width=1.pt,color=qqwuqq] (1.3065629648763766,-0.5411961001461971) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (0.5411961001461969,-1.3065629648763766) -- (2.,-2.); \draw [->,line width=1.pt,color=qqwuqq] (2.,-2.) -- (1.3065629648763766,-0.5411961001461971); \begin{scriptsize} \draw [fill=black] (-2.,2.) circle (2.5pt); \draw [fill=black] (2.,2.) circle (2.5pt); \draw [fill=black] (2.,-2.) circle (2.5pt); \draw [fill=black] (-2.,-2.) 
circle (2.5pt); \draw [fill=black] (-1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (-1.3065629648763766,-0.541196100146197) circle (2.5pt); \draw [fill=black] (-0.5411961001461971,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (0.5411961001461969,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (1.3065629648763766,-0.5411961001461971) circle (2.5pt); \draw [fill=black] (1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (0.5411961001461971,1.3065629648763766) circle (2.5pt); \draw [fill=black] (-0.5411961001461969,1.3065629648763766) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \caption{$\protect\overrightarrow{\mathrm{Cay}}_c(G,\set{b,a,c})$; blue arcs are induced by $b$, red arcs are induced by $a$, and green arcs are induced by $c$.} \label{fig: dcCay{A4,{b,a,c}}} \end{figure} \begin{figure}[h] \centering \definecolor{qqwuqq}{rgb}{0.,0.39215686274509803,0.} \definecolor{zzttqq}{rgb}{0.6,0.2,0.} \definecolor{qqqqff}{rgb}{0.,0.,1.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-3.,-3.) 
rectangle (3.,3.); \draw (-2.2,2.5) node[anchor=north west] {$e$}; \draw (1.8,2.5) node[anchor=north west] {$a$}; \draw (1.7,-2.1) node[anchor=north west] {$ab$}; \draw (-2.2,-2.1) node[anchor=north west] {$b$}; \draw (-0.8,1.9) node[anchor=north west] {$cc$}; \draw (0.2,1.9) node[anchor=north west] {$acc$}; \draw (1.4,0.8) node[anchor=north west] {$ac$}; \draw (1.3,-0.3) node[anchor=north west] {$abc$}; \draw (0.1,-1.4) node[anchor=north west] {$abcc$}; \draw (-0.9,-1.4) node[anchor=north west] {$bcc$}; \draw (-1.9,-0.3) node[anchor=north west] {$bc$}; \draw (-1.8,0.8) node[anchor=north west] {$c$}; \draw [line width=2.pt,color=qqqqff] (-2.,2.)-- (-2.,-2.); \draw [line width=2.pt,color=qqqqff] (2.,2.)-- (2.,-2.); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766)-- (0.5411961001461969,-1.3065629648763766); \draw [line width=2.pt,color=qqqqff] (-1.3065629648763766,-0.541196100146197)-- (1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969)-- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (-2.,-2.) -- (-2.,2.); \draw [->,line width=1.pt,color=qqqqff] (-2.,2.) -- (-2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (2.,-2.) -- (2.,2.); \draw [->,line width=1.pt,color=qqqqff] (2.,2.) 
-- (2.,-2.); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461969,1.3065629648763766) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461971,1.3065629648763766) -- (-0.5411961001461969,1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-0.5411961001461971,-1.3065629648763766) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (0.5411961001461969,-1.3065629648763766) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,-0.541196100146197) -- (1.3065629648763766,0.5411961001461968); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,0.5411961001461969) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=qqqqff] (-1.3065629648763766,0.5411961001461969) -- (1.3065629648763766,-0.5411961001461971); \draw [->,line width=1.pt,color=qqqqff] (1.3065629648763766,-0.5411961001461971) -- (-1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqwuqq] (-2.,2.)-- (-0.5411961001461969,1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-0.5411961001461969,1.3065629648763766)-- (-1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqwuqq] (-1.3065629648763766,0.5411961001461969)-- (-2.,2.); \draw [line width=2.pt,color=qqwuqq] (0.5411961001461971,1.3065629648763766)-- (2.,2.); \draw [line width=2.pt,color=qqwuqq] (2.,2.)-- (1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqwuqq] (1.3065629648763766,0.5411961001461969)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (0.5411961001461969,-1.3065629648763766)-- (1.3065629648763766,-0.5411961001461971); \draw [line width=2.pt,color=qqwuqq] (1.3065629648763766,-0.5411961001461971)-- (2.,-2.); \draw [line width=2.pt,color=qqwuqq] (2.,-2.)-- (0.5411961001461969,-1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-2.,-2.)-- 
(-0.5411961001461971,-1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-0.5411961001461971,-1.3065629648763766)-- (-1.3065629648763766,-0.541196100146197); \draw [line width=2.pt,color=qqwuqq] (-1.3065629648763766,-0.541196100146197)-- (-2.,-2.); \draw [->,line width=1.pt,color=qqwuqq] (-2.,2.) -- (-1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (-1.3065629648763766,0.5411961001461969) -- (-0.5411961001461969,1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-0.5411961001461969,1.3065629648763766) -- (-2.,2.); \draw [->,line width=1.pt,color=qqwuqq] (0.5411961001461971,1.3065629648763766) -- (2.,2.); \draw [->,line width=1.pt,color=qqwuqq] (2.,2.) -- (1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (1.3065629648763766,0.5411961001461969) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-2.,-2.) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (-1.3065629648763766,-0.541196100146197) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-0.5411961001461971,-1.3065629648763766) -- (-2.,-2.); \draw [->,line width=1.pt,color=qqwuqq] (1.3065629648763766,-0.5411961001461971) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (0.5411961001461969,-1.3065629648763766) -- (2.,-2.); \draw [->,line width=1.pt,color=qqwuqq] (2.,-2.) -- (1.3065629648763766,-0.5411961001461971); \begin{scriptsize} \draw [fill=black] (-2.,2.) circle (2.5pt); \draw [fill=black] (2.,2.) circle (2.5pt); \draw [fill=black] (2.,-2.) circle (2.5pt); \draw [fill=black] (-2.,-2.) 
circle (2.5pt); \draw [fill=black] (-1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (-1.3065629648763766,-0.541196100146197) circle (2.5pt); \draw [fill=black] (-0.5411961001461971,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (0.5411961001461969,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (1.3065629648763766,-0.5411961001461971) circle (2.5pt); \draw [fill=black] (1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (0.5411961001461971,1.3065629648763766) circle (2.5pt); \draw [fill=black] (-0.5411961001461969,1.3065629648763766) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \caption{$\protect\overrightarrow{\mathrm{Cay}}_c(G,\set{b,c})$; blue arcs are induced by $b$ and green arcs are induced by $c$.} \label{fig: dcCay{A4,{b,c}}} \end{figure} \begin{figure}[h] \centering \definecolor{qqwuqq}{rgb}{0.,0.39215686274509803,0.} \definecolor{zzttqq}{rgb}{0.6,0.2,0.} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(-3.,-3.) 
rectangle (3.,3.); \draw (-2.2,2.5) node[anchor=north west] {$e$}; \draw (1.8,2.5) node[anchor=north west] {$a$}; \draw (1.7,-2.1) node[anchor=north west] {$ab$}; \draw (-2.2,-2.1) node[anchor=north west] {$b$}; \draw (-0.8,1.9) node[anchor=north west] {$cc$}; \draw (0.2,1.9) node[anchor=north west] {$acc$}; \draw (1.4,0.8) node[anchor=north west] {$ac$}; \draw (1.3,-0.3) node[anchor=north west] {$abc$}; \draw (0.1,-1.4) node[anchor=north west] {$abcc$}; \draw (-0.9,-1.4) node[anchor=north west] {$bcc$}; \draw (-1.9,-0.3) node[anchor=north west] {$bc$}; \draw (-1.8,0.8) node[anchor=north west] {$c$}; \draw [line width=2.pt,color=qqwuqq] (-1.9834292323733376,1.9834292323733376)-- (-0.5411961001461969,1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-0.5411961001461969,1.3065629648763766)-- (-1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqwuqq] (-1.3065629648763766,0.5411961001461969)-- (-1.9834292323733376,1.9834292323733376); \draw [line width=2.pt,color=qqwuqq] (0.5411961001461971,1.3065629648763766)-- (2.,2.); \draw [line width=2.pt,color=qqwuqq] (2.,2.)-- (1.3065629648763766,0.5411961001461969); \draw [line width=2.pt,color=qqwuqq] (1.3065629648763766,0.5411961001461969)-- (0.5411961001461971,1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (0.5411961001461969,-1.3065629648763766)-- (1.3065629648763766,-0.5411961001461971); \draw [line width=2.pt,color=qqwuqq] (1.3065629648763766,-0.5411961001461971)-- (2.,-2.); \draw [line width=2.pt,color=qqwuqq] (2.,-2.)-- (0.5411961001461969,-1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-1.9834292323733376,-2.0165707676266624)-- (-0.5411961001461971,-1.3065629648763766); \draw [line width=2.pt,color=qqwuqq] (-0.5411961001461971,-1.3065629648763766)-- (-1.3065629648763766,-0.541196100146197); \draw [line width=2.pt,color=qqwuqq] (-1.3065629648763766,-0.541196100146197)-- (-1.9834292323733376,-2.0165707676266624); \draw [->,line width=1.pt,color=qqwuqq] 
(-1.9834292323733376,1.9834292323733376) -- (-1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (-1.3065629648763766,0.5411961001461969) -- (-0.5411961001461969,1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-0.5411961001461969,1.3065629648763766) -- (-1.9834292323733376,1.9834292323733376); \draw [->,line width=1.pt,color=qqwuqq] (0.5411961001461971,1.3065629648763766) -- (2.,2.); \draw [->,line width=1.pt,color=qqwuqq] (2.,2.) -- (1.3065629648763766,0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (1.3065629648763766,0.5411961001461969) -- (0.5411961001461971,1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-1.9834292323733376,-2.0165707676266624) -- (-1.3065629648763766,-0.5411961001461969); \draw [->,line width=1.pt,color=qqwuqq] (-1.3065629648763766,-0.541196100146197) -- (-0.5411961001461971,-1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (-0.5411961001461971,-1.3065629648763766) -- (-1.9834292323733376,-2.0165707676266624); \draw [->,line width=1.pt,color=qqwuqq] (1.3065629648763766,-0.5411961001461971) -- (0.5411961001461969,-1.3065629648763766); \draw [->,line width=1.pt,color=qqwuqq] (0.5411961001461969,-1.3065629648763766) -- (2.,-2.); \draw [->,line width=1.pt,color=qqwuqq] (2.,-2.) -- (1.3065629648763766,-0.5411961001461971); \begin{scriptsize} \draw [fill=black] (-1.9834292323733376,1.9834292323733376) circle (2.5pt); \draw [fill=black] (2.,2.) circle (2.5pt); \draw [fill=black] (2.,-2.) 
circle (2.5pt); \draw [fill=black] (-1.9834292323733376,-2.0165707676266624) circle (2.5pt); \draw [fill=black] (-1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (-1.3065629648763766,-0.541196100146197) circle (2.5pt); \draw [fill=black] (-0.5411961001461971,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (0.5411961001461969,-1.3065629648763766) circle (2.5pt); \draw [fill=black] (1.3065629648763766,-0.5411961001461971) circle (2.5pt); \draw [fill=black] (1.3065629648763766,0.5411961001461969) circle (2.5pt); \draw [fill=black] (0.5411961001461971,1.3065629648763766) circle (2.5pt); \draw [fill=black] (-0.5411961001461969,1.3065629648763766) circle (2.5pt); \end{scriptsize} \end{tikzpicture} \caption{$\protect\overrightarrow{\mathrm{Cay}}_c(G,\set{c})$; green arcs are induced by $c$.} \label{fig: dcCay{A4,{c}}} \end{figure} \par\noindent\textbf{Acknowledgments.} The authors would like to express their special gratitude to Professor Milton Ferreira for his hospitality at the University of Aveiro, Portugal with financial support by Institute for Promotion of Teaching Science and Technology (IPST), Thailand, via Development and Promotion of Science and Technology Talents Project (DPST). This research was supported by the Research Center in Mathematics and Applied Mathematics, Chiang Mai University. \bibliographystyle{amsplain}\addcontentsline{toc}{section}{References}
https://arxiv.org/abs/0912.0239
On k-crossings and k-nestings of permutations
We introduce k-crossings and k-nestings of permutations. We show that the crossing number and the nesting number of permutations have a symmetric joint distribution. As a corollary, the number of k-noncrossing permutations is equal to the number of k-nonnesting permutations. We also provide some enumerative results for k-noncrossing permutations for some values of k.
\section{Introduction} Nestings and crossings are equidistributed in many combinatorial objects, such as matchings, set partitions, permutations, and embedded labelled graphs~\cite{Chetal07, Corteel07, deMi07}. More surprising is the symmetric joint distribution of the crossing and nesting numbers: A set of $k$~arcs forms a $k$-crossing (nesting) if each of the $\binom{k}{2}$ pairs of arcs cross (nest). The crossing number of an object is the largest $k$ for which there is a $k$-crossing, and the nesting number is defined similarly. Chen \emph{et al.}~\cite{Chetal07} proved the symmetric joint distribution of the nesting and crossing numbers for set partitions and matchings. Although they describe explicit involutions, they do not use simple local operations on the partitions. De Mier~\cite{deMi07} interpreted the work of Krattenthaler~\cite{Kratt06} to show that $k$-crossings and $k$-nestings satisfy a similar distribution in embedded labelled graphs. A gap in this family of results is the extension of $k$-crossings and $k$-nestings to permutations. This note fills that gap. We also give exact enumerative formulas for permutations of size $n$ with crossing numbers $1$ (non-crossing) and $\lceil n/2\rceil$. \section{Introducing $k$-crossings and $k$-nestings of permutations} \subsection{Crossings and nestings} The \emph{arc annotated sequence} associated to the permutation $\sigma\in\mathfrak{S}_n$ is the directed graph on the vertex set $V(\sigma)=\{1,\dots, n\}$ with arc set $A(\sigma)=\{(a, \sigma(a)): 1\leq a\leq n\}$, drawn in a particular way. It is also known as the standard representation or, simply, the arc diagram. It is embedded in the plane by drawing an increasing linear sequence of the vertices, with arcs $(a, \sigma(a))$ satisfying $a\leq \sigma(a)$ drawn above the vertices (the upper arcs), and the remaining lower arcs, satisfying $a>\sigma(a)$, drawn below.
We refer to this graph as $A_\sigma$; the subgraph induced by the upper arcs and $V(\sigma)$ is $A_\sigma^{+}$; and the subgraph induced by the lower arcs and $V(\sigma)$ is $A_\sigma^{-}$. Additionally, we reverse the orientation of the arcs in $A_\sigma^{-}$, and view it as a classic arc diagram above the horizon. Because of these rules, the direction of the arcs is determined, and hence we simplify our drawings by not showing arrows on the arcs. These two subgraphs are arc diagrams in their own right: for example $A_\sigma^{-}$ represents a set partition, and $A_\sigma^+$ is a set partition with some additional loops. For convenience, we draw $A_{\sigma}^{-}$ flipped above the axis, so that it resembles other arc diagrams. \begin{figure}\center \large{$A_\sigma=$} \includegraphics[width=6cm]{perm.png} \\ \bigskip \large{$A_{\sigma}^{+}=$} \includegraphics[width=6cm]{sigmap.png}\\ \bigskip \large{$A_{\sigma}^{-}=$} \includegraphics[width=6cm]{sigmam.png}\\ \bigskip \caption{An arc diagram representation for the permutation \mbox{$\sigma=[9\,5\,6\,7\,8\,3\,2\,1\,4\,12\,11\,10]$}, and its decomposition into upper and lower arc diagrams $(A_\sigma^{+}, A_\sigma^{-})$. In this example, $\Cr(\sigma)=4$, $\Ne(\sigma)=3$, and the degree sequence is given by $D_{\sigma}={ (1,0)(1,0)(1,0)(1,0)(1,1)(0,1)(0,1)(0,1)(0,1)(1,0)(1,1)(0,1)}.$} \end{figure} Crossings and nestings are defined for permutations by considering the upper and lower arcs separately. A crossing is a pair of arcs $\left\{(a,\sigma(a)),(b, \sigma(b))\right\}$ satisfying either $a<b\leq \sigma(a)<\sigma(b)$ (an upper crossing) or $\sigma(a)<\sigma(b)<a<b$ (a lower crossing). A nesting is a pair of arcs $\left\{(a,\sigma(a)),(b, \sigma(b))\right\}$ satisfying $a<b\leq \sigma(b)< \sigma(a)$ (an upper nesting) or $\sigma(a)<\sigma(b)<b<a$ (a lower nesting). There is a slight asymmetry in the treatment of upper and lower arcs in this definition, which we shall see is inconsequential.
However, the reader should recall that what is considered a crossing (nesting) in the upper diagram is elsewhere called an \emph{enhanced} crossing (resp.\ enhanced nesting). Crossings and nestings were defined in this way by Corteel~\cite{Corteel07} because they represent better known permutation statistics. Corteel's Theorem 1 states that the number of top arcs in this representation of a permutation is equal to the number of weak exceedences, the number of arcs on the bottom is the number of descents, each crossing is equivalent to an occurrence of the pattern $2-31$, and each nesting is an occurrence of the pattern $31-2$. Corteel's Proposition 4 states that nestings and crossings occur in equal number across all permutations of length $n$. \subsection{$k$-nestings and $k$-crossings} To generalize her work we define $k$-crossings and $k$-nestings in the same spirit as for set partitions and matchings. A $k$-crossing in a permutation arc diagram $A_\sigma$ is a set of~$k$ arcs~$\{(a_i, \sigma(a_i)):1\leq i\leq k\}$ that satisfy either the relation $a_1<a_2<\dots<a_k\leq\sigma(a_1)<\sigma(a_2)<\dots<\sigma(a_k)$ (upper $k$-crossing) or $\sigma(a_1)<\sigma(a_2)<\dots<\sigma(a_k)<a_1<a_2<\dots<a_k$ (lower $k$-crossing). Similarly, a $k$-nesting is a set of $k$ arcs $\{(a_i, \sigma(a_i)):1\leq i\leq k\}$ that satisfy either the relation $a_1<a_2<\dots<a_k\leq\sigma(a_k)<\dots<\sigma(a_2)<\sigma(a_1)$ (upper $k$-nesting) or $\sigma(a_1)<\sigma(a_2)<\dots<\sigma(a_k)<a_k<\dots<a_2<a_1$ (lower $k$-nesting). The \emph{crossing number\/} of a permutation $\sigma$, denoted by $\Cr(\sigma)$, is the largest~$k$ such that $A_\sigma$ contains a $k$-crossing. In this case we also say $\sigma$ is $(k+1)$-noncrossing. Likewise, the nesting number $\Ne(\sigma)$ is the largest $k$ such that $A_\sigma$ contains a $k$-nesting, and $(k+1)$-nonnesting is defined similarly.
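These chain conditions are easy to check by brute force for small $n$. The following Python sketch (the function names are ours, purely for illustration) follows the pairwise definitions of the previous subsection: on the upper diagram the last opener may coincide with the first closer (the enhanced case), while on the lower diagram all inequalities are strict.

```python
from itertools import combinations, permutations
from collections import Counter

def arcs(sigma):
    """Split a permutation (1-indexed one-line notation) into upper and
    lower arc diagrams; each arc is stored as (left, right) with
    left <= right (lower arcs are reversed, as in the standard
    representation)."""
    n = len(sigma)
    upper = [(a, sigma[a - 1]) for a in range(1, n + 1) if a <= sigma[a - 1]]
    lower = [(sigma[a - 1], a) for a in range(1, n + 1) if a > sigma[a - 1]]
    return upper, lower

def largest_chain(arcset, crossing, enhanced):
    """Largest k such that k arcs of arcset form a k-crossing
    (crossing=True) or a k-nesting.  'enhanced' allows the last left
    endpoint to touch the first right endpoint, as in the upper diagram."""
    best = 1 if arcset else 0
    for k in range(2, len(arcset) + 1):
        for sub in combinations(sorted(arcset), k):  # left endpoints increase
            ls, rs = zip(*sub)
            if crossing:
                ok = all(rs[i] < rs[i + 1] for i in range(k - 1)) and \
                     (ls[-1] <= rs[0] if enhanced else ls[-1] < rs[0])
            else:  # nesting: right endpoints strictly decrease
                ok = all(rs[i] > rs[i + 1] for i in range(k - 1))
            if ok:
                best = max(best, k)
    return best

def cr(sigma):
    """Crossing number of the permutation sigma."""
    up, lo = arcs(sigma)
    return max(largest_chain(up, True, True), largest_chain(lo, True, False))

def ne(sigma):
    """Nesting number of the permutation sigma."""
    up, lo = arcs(sigma)
    return max(largest_chain(up, False, True), largest_chain(lo, False, False))
```

Enumerating $\mathfrak{S}_n$ for $n\leq 5$ with this sketch reproduces the first rows of the table of $\CR(k)$ values below, including the Catalan numbers $5$, $14$, $42$ in its first column, and gives the same counts for the nesting number.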
Occasionally we consider the upper and lower diagrams in their own right as \emph{graphs}, and then we use the definition of de Mier~\cite{deMi07}, and hence distinguish the enhanced crossing number of the \emph{graph} $A_\sigma^+$, denoted $\Cr^*(A_\sigma^+)$, from the permutation crossing number, and likewise for the enhanced nesting number $\Ne^*$. The number of permutations of $\mathfrak{S}_n$ with crossing number equal to $k$ is $\CR(k)$, and we likewise define $\NE(k)$ for nestings. The \emph{degree sequence} $D_g$ of a graph $g$ is the sequence of outdegrees and indegrees of the vertices, when considered as a directed graph: \[D_g \equiv(D_g(i))_i=\left(\operatorname{outdegree}_{g}(i), \operatorname{indegree}_{g}(i)\right)_{i=1}^n.\] Some sources call these left-right degree sequences since in arc diagrams the outgoing arcs of a vertex leave to the right, and the incoming arcs arrive from the left. As a graph, the degree sequence of a permutation is trivial: $(1,1)^n$, since a permutation is a map in which every point has a unique image, and a unique pre-image. To define a more useful entity, we define the degree sequence of a permutation to be the degree sequence of only the upper arc diagram: $D_\sigma\equiv D_{A_\sigma^+}$. The degree sequence defined by the lower arc diagram can be computed coordinate-wise directly from the upper by the simple transformations given in Table~\ref{tab:vtypes}, and we denote this sequence $\overline{D_\sigma}$. (The coordinate-wise sum of the two sequences need not be $(1,1)$ at every vertex, because the lower arcs have their orientation reversed, and hence their indegree and outdegree are interchanged.) An example is in Figure 1. The vertices with degree $(1,0)$ are called ``openers'' and those with degree $(0,1)$ are ``closers'', consistently with Table~\ref{tab:vtypes} and the example of Figure 1.
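For concreteness, the upper degree sequence can be computed with a few lines of Python (a sketch; the function name is ours). Each vertex's pair records how many upper arcs it opens and how many it closes, so a loop contributes $(1,1)$:

```python
def degree_sequence(sigma):
    """Degree sequence of the upper arc diagram of a permutation given in
    1-indexed one-line notation: entry i is the pair
    (upper arcs opened at vertex i, upper arcs closed at vertex i).
    A loop opens and closes at the same vertex, giving (1, 1)."""
    n = len(sigma)
    opens, closes = [0] * (n + 1), [0] * (n + 1)
    for a in range(1, n + 1):
        if sigma[a - 1] >= a:          # upper arc (a, sigma(a))
            opens[a] += 1
            closes[sigma[a - 1]] += 1
    return [(opens[i], closes[i]) for i in range(1, n + 1)]
```

Applied to the permutation of Figure 1, this returns exactly the sequence $D_\sigma$ given in its caption.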
\begin{table} \center \begin{tabular}{|l|c|c|l|}\hline {\bf Type} & {\bf vertex $i$} & $D_{\sigma}(i)$ &$\overline{D_{\sigma}}(i)$ \\\hline opener & \vtype{opener} &(1,0) & (1,0)\\ \hline closer & \vtype{closer} &(0,1) & (0,1)\\ \hline loop & \vtype{loop} & (1,1) & (0,0)\\\hline upper transient & \vtype{trans1} &(1,1) & (0,0)\\ \hline lower transient & \vtype{trans2} &(0,0) & (1,1)\\ \hline \end{tabular} \smallskip \caption{\it The five vertex types that appear in permutations, and their associated upper and lower degree values. } \label{tab:vtypes} \end{table} The main theorem can now be stated. \begin{theorem} \label{thm:main} Let $NC_n(i,j, D)$ be the number of permutations of~$n$ with crossing number $i$, nesting number $j$, and left-right degree sequence specified by $D$. Then \begin{equation} NC_n(i,j, D)=NC_n(j,i, D). \end{equation} \end{theorem} There is an explicit involution behind this enumerative result. \subsection{Preliminary enumerative results} The number of permutations of $\mathfrak{S}_n$ with crossing number equal to $k$ is directly computable for small values of~$n$ and~$k$. \begin{table}[h!]\center \begin{tabular}{c|rrrrrrrrr} $n\backslash k$ & 1 & 2 & 3 & 4 & 5\\ \hline 1&1\\ 2& 2 &\\ 3& 5& 1\\ 4 & 14 & 10 & \\ 5 & 42 & 76&2 \\ 6 & 132& 543& 45 \\ 7 & 429& 3904& 701& 6 \\ 8 & 1430& 29034&9623& 233\\ 9 & 4862& 225753& 126327& 5914& 24\\ \end{tabular} \smallskip \caption{\it $\CR(k)$: The number of permutations of $\mathfrak{S}_n$ with crossing number $k$. A crossing number of 1 is equivalent to non-crossing. } \label{tab:enum} \end{table} We immediately notice that the first column of Table~\ref{tab:enum}, the non-crossing permutations, is counted by the Catalan numbers: $\CR(1)=\frac{1}{n+1}\binom{2n}{n}$. This has a simple explanation: non-crossing \emph{partitions\/} have long been known to be counted by Catalan numbers, and there is a simple bijection between non-crossing permutations and non-crossing partitions.
Essentially, to go from a non-crossing permutation to a non-crossing partition, flip the arc diagram upside down, convert the loops to fixed points, and then remove the lower arcs. This defines a unique set partition, and the construction is easy to reverse. This bijection is easy to formalize, but it is not the main topic of this note. \section{Enumeration of maximum nestings and crossings} To get a sense of how Theorem 1 is proved, and to obtain some new enumerative results, we consider the set of maximum nestings and crossings. A \emph{maximum nesting} is the largest possible: a $\lceil n/2\rceil$-nesting is maximum in a permutation on~$n$ elements. We can compute $\NE( \lceil n/2\rceil )$ explicitly. \begin{theorem} The number of permutations with a maximum nesting satisfies the following formula: \begin{equation} \NE( \lceil n/2\rceil )= \left\{ \begin{array}{ll} m! & n=2m+1\\ 2(m+1)!-(m-1)!-1 & n=2m\\ \end{array} \right. \end{equation} \end{theorem} \begin{proof} We divide the result into a few cases, but each one is resolved the same way: for each permutation $\sigma\in\mathfrak{S}_n$ with a maximum nesting, the $ \lceil n/2\rceil $-nesting comes from either $A_\sigma^+$ or $A_\sigma^{-}$, and in most cases determines that subgraph. Once one side is fixed, the degree sequence is determined, and it is straightforward to compute the number of ways to place the remaining arcs. Some cases are overcounted, and tallying these gives the final result. \subsubsection*{Odd $n$: $n=2m+1$} To achieve an $(m+1)$-nesting, it must be an enhanced nesting in the upper arc diagram, and it uses all the vertices, including a loop: $\sigma(i)=n+1-i$ for $1\leq i\leq m+1$, with the loop at $i=m+1$. It remains to define $\sigma(i)$ for $m+1<i\leq n$. The lower degree sequence is fixed, and so $1\leq \sigma(i)\leq m$ for each such~$i$, but other than that there is no restriction. Thus, there are $m!$ possibilities.
\subsubsection*{Even $n$: $n=2m$} The even case is slightly more complicated, owing to the fact that there are three different ways to achieve an $m$-nesting: \begin{description} \item[An $m$-nesting in $A_\sigma^+$] These permutations satisfy $\sigma(i)=n+1-i, 1\leq i \leq m$. As before, there are $m!$ ways to define $\sigma(i), m<i\leq n$. \item[An $m$-nesting in $A_\sigma^{-}$] These permutations satisfy $\sigma(n+1-i)=i, 1\leq i\leq m$. Again, there are $m!$ possibilities to define $\sigma(i), 1\leq i\leq m$. Only the involution $[n\, n-1\, \dots 2\, 1]$ is in the intersection of these two sets. \item[An enhanced $m$-nesting in $A_\sigma^{+}$] If the $m$-nesting uses only $2m-1$ vertices, there is one vertex left over. It must be either a lower transient vertex or a loop, since there is nothing left to connect it to. We count these by considering the different ways to construct such a permutation from a smaller permutation diagram. Suppose we have a permutation $\sigma'$ with an $m$-nesting on $2m-1$ vertices. By the odd case above, we know there are $(m-1)!$ of these. We place it on $2m$ points, by first selecting our special vertex $i$, and placing the permutation $\sigma'$ on the rest. There are $2m$ ways to pick this special vertex. Finally, we create the new permutation $\sigma$ by connecting the new vertex to the rest of the structure. We choose a point $j$ to be the value $\sigma(i)$. We can choose $j=i$, in which case $i$ is a loop. Otherwise, $j$ must lie before the loop of $\sigma'$. We then set $\sigma^{-1}(i)$ to be $\sigma'^{-1}(j)$. In total, there are $m$ choices for $\sigma(i)$. \item[Overcounting] We have counted twice the family of diagrams with two loops in the center. There are $(m-1)!$ of these.
\end{description} Putting all of the pieces together and simplifying the expression, we get, for $n=2m$, \[\NE(m)= 2(m+1)!-(m-1)!-1.\] \end{proof} This proof suggests a direct involution on the permutation which exchanges a maximum nesting for a maximum crossing, since the degree sequences of a maximum nesting and of a maximum crossing have the same shape. Thus the formula for maximum crossings is the same. This involution might send a $k$-crossing to a $(k+1)$-nesting, and so it cannot be used for general $k$. \subsection{Other enumerative questions} From the formula, we see that a very small proportion of permutations have maximum crossings ($\leq 2(\lfloor n/2\rfloor+1)!/n!$) or are non-crossing ($\approx 4^nn^{-3/2}/n!$). What can be said of the nature of the distribution, or the average crossing number? What is the nature of the generating function $P(z;u)$ where $u$ marks the crossing number, or even simply the generating function for $k$-noncrossing permutations? Bousquet-M\'elou and Xin~\cite{BoXi05} consider this question for set partitions: 2-noncrossing partitions are counted by Catalan numbers (as we mentioned before), and thus the generating function is algebraic; the counting sequence for 3-noncrossing partitions is P-recursive, and so the generating function is D-finite; and they conjecture that the generating functions for $k$-noncrossing partitions, $k>3$, are not D-finite. How can these results be adapted to permutations, given the similar structure? \section{Proof of main theorem} The proof of the main theorem first decomposes a permutation into its upper and lower arc diagrams and then applies the results for set partitions separately to each part. \begin{proof} We can apply a result of de Mier~\cite[Theorem 3.3]{deMi07} almost directly to prove our result by describing an involution $\Psi:\mathfrak{S}_n\rightarrow\mathfrak{S}_n$ which preserves the left-right degree sequence and swaps the nesting and crossing numbers.
That is, $D_\sigma=D_{\Psi(\sigma)}$, $\Ne(\sigma)=\Cr(\Psi(\sigma))$, and $\Cr(\sigma)=\Ne(\Psi(\sigma))$. This involution operates on fillings of Ferrers diagrams, and is described in detail by Krattenthaler~\cite{Kratt06}. De Mier's Theorem 3.3 states that for any left-right degree sequence $D$, the number of $k$-noncrossing graphs on $D$ equals the number of $k$-nonnesting graphs on $D$. Actually, she proves more: the numbers of graphs with crossing number $i$ and nesting number $j$ have a symmetric joint distribution across all graphs with a given fixed degree sequence. In order to apply her results, the first step is to rewrite $A_\sigma^+$ so that we only consider proper crossings and nestings instead of enhanced crossings and nestings. This is a common trick, known as inflation. Essentially, we create the graph $g$ from $A_\sigma^+$ by adding some supplementary vertices to eliminate loops and transient vertices:\\ \begin{center} \includegraphics[height=2cm]{enhanced.png} \end{center} Now each nesting and crossing is proper, and by~\cite[Lemma 3.4]{deMi07} $\Ne^*(A_\sigma^+)=\Ne(g)$ and $\Cr^*(A_\sigma^+)=\Cr(g)$. In the course of her proof, she relies on an involution of Krattenthaler~\cite{Kratt06} on fillings of Ferrers diagrams. Let $\Psi$ be the map on embedded labelled graphs described implicitly in her proof. Because $\Psi$ is a left-right degree preserving map, we can identify the supplementary vertices in $\Psi(g)$ to get a graph with the correct kinds of vertices. Call this new graph $g'$. We now extend the definition of $\Psi$ to $A_\sigma^+$ by $\Psi(A_\sigma^+)\equiv g'.$ Consider the pair of graphs $(\Psi(A_\sigma^+), \Psi(A_\sigma^{-}))$. Proving our main theorem now reduces to showing that there is a unique $\tau\in\mathfrak{S}_n$ such that $A_\tau=(\Psi(A_\sigma^+), \Psi(A_\sigma^{-}))$, which we do next. For every vertex in $A_\tau$ the indegree and the outdegree are equal to one.
This is because the left-right degree sequences of both the top and the bottom diagrams are preserved by the map; hence the vector sum of their degree sequences is unchanged, namely $(1,1)^n$, and it has all the correct partial-sum properties. The map is a bijection, and so $\tau$ is unique. This map swaps the upper nesting and upper crossing numbers, and also the lower nesting and lower crossing numbers. Thus $\Cr(\tau)=\max\{\Cr^*(A_\tau^+),\Cr(A_\tau^{-})\}=\max \{ \Ne^*(A_\sigma^+), \Ne(A_\sigma^{-}) \}=\Ne(\sigma)$. Hence the crossing and nesting numbers are switched under the map $\Psi$. \end{proof} \subsection{Other proofs of this result} Set partitions can be viewed as an instance of fillings of Ferrers diagrams, and Krattenthaler's involution reduces to the involutions that Chen \emph{et al.} describe to prove their~\cite[Theorem 1]{Chetal07}. Figure~\ref{fig:psisigma} illustrates our involution on an example. Note that the degree sequence is fixed. \begin{figure}[h!]\center \large{$A_\sigma=$} \includegraphics[width=6cm]{perm.png} \\ \bigskip \large{$A_{\Psi(\sigma)}=$} \includegraphics[width=6cm]{psi-sigma.png} \\ \bigskip \caption{\it The permutation $\sigma$ and its image in the involution $\Psi(\sigma)$. Note that $\Ne(\Psi(\sigma))=4$, $\Cr(\Psi(\sigma))=3$.} \label{fig:psisigma} \end{figure} \subsection{Equidistribution in permutation subclasses} Involutions are in bijection with partial matchings, and have thus been considered previously. What of other subclasses of permutations? The map presented here does not fix involutions, because loops are mapped to upper transitory vertices, but it does fix any class that is closed under degree sequence, for example, permutations with no lower transitory vertices, or permutations with no upper transitory vertices nor loops. These conditions have interpretations in terms of other permutation statistics, if we consider the initial motivations of Corteel.
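As a concrete sanity check on the classical enumerative fact quoted earlier — that 2-noncrossing (i.e., noncrossing) set partitions are counted by the Catalan numbers — the following short Python sketch verifies the count by brute force for small $n$. This is our own illustration, not part of the paper's argument; it uses the standard arc representation in which consecutive elements of each block are joined by an arc.

```python
from itertools import combinations
from math import comb

def set_partitions(n):
    """Generate all set partitions of {1, ..., n} as lists of blocks."""
    if n == 0:
        yield []
        return
    for part in set_partitions(n - 1):
        # place n in each existing block, or in a new singleton block
        for i in range(len(part)):
            yield part[:i] + [part[i] + [n]] + part[i + 1:]
        yield part + [[n]]

def arcs(partition):
    """Standard arc representation: join consecutive elements of each block."""
    result = []
    for block in partition:
        b = sorted(block)
        result.extend(zip(b, b[1:]))
    return result

def is_noncrossing(partition):
    """A 2-crossing is a pair of arcs (a, c), (b, d) with a < b < c < d."""
    for (a, c), (b, d) in combinations(arcs(partition), 2):
        if a < b < c < d or b < a < d < c:
            return False
    return True

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(1, 8):
    assert sum(is_noncrossing(p) for p in set_partitions(n)) == catalan(n)
```

For instance, of the $15$ set partitions of $\{1,2,3,4\}$, only $\{1,3\}\{2,4\}$ is crossing, leaving the $14 = C_4$ noncrossing ones.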
\section{Conclusions and open questions} The main open question, aside from the enumerative and probabilistic questions we have already raised, is to find a direct permutation description of our involution, i.e. a description avoiding the passage through tableaux or fillings of Ferrers diagrams. Is this involution already part of the vast canon of permutation automorphisms? Which subclasses of permutations preserve the symmetric distribution? From our example, we remark that cycle type is not necessarily conserved (since loops are always mapped to upper transitory vertices), but non-intersecting intervals are preserved. Involutions are in bijection with partial matchings, and so this subclass has this property. Is there an interpretation of crossing and nesting numbers in terms of other permutation statistics? Which other statistics does this involution preserve? Ultimately, we have considered a type of graph with two edge colours and strict degree restrictions. Can this be generalized to a larger class of graphs with fewer degree restrictions? What of a generalization to graphs with multiple edge colours? \subsection*{Acknowledgements} The authors thank Cedric Chauve, Lily Yen, Sylvie Corteel and Eric Fusy for useful discussions. SB and MM are both funded by NSERC (Canada), and MM did part of this work as part of a CNRS Poste Rouge (France).
https://arxiv.org/abs/0912.0239
On k-crossings and k-nestings of permutations
We introduce k-crossings and k-nestings of permutations. We show that the crossing number and the nesting number of permutations have a symmetric joint distribution. As a corollary, the number of k-noncrossing permutations is equal to the number of k-nonnesting permutations. We also provide some enumerative results for k-noncrossing permutations for some values of k.
https://arxiv.org/abs/1404.6550
A note on coloring vertex-transitive graphs
We prove bounds on the chromatic number $\chi$ of a vertex-transitive graph in terms of its clique number $\omega$ and maximum degree $\Delta$. We conjecture that every vertex-transitive graph satisfies $\chi \le \max \left\{\omega, \left\lceil\frac{5\Delta + 3}{6}\right\rceil\right\}$ and we prove results supporting this conjecture. Finally, for vertex-transitive graphs with $\Delta \ge 13$ we prove the Borodin-Kostochka conjecture, i.e., $\chi\le\max\{\omega,\Delta-1\}$.
\section{Introduction} Many results and conjectures in the graph coloring literature have the form: \emph{if the chromatic number $\chi$ of a graph is close to its maximum degree $\Delta$, then the graph contains a big clique, i.e., $\omega$ is large} (\cite{brooks1941colouring, borodin1977upper, reed1999strengthening, reed1998omega, CatlinAnotherBound, molloy2002graph}). Generically, we call conjectures of this sort \emph{big clique conjectures}. In \cite{denseneighborhoods}, it was shown that many big clique conjectures hold under the added hypothesis that every vertex is in a medium sized clique. Partial results on big clique conjectures often guarantee a medium sized clique, but not a big clique. But in a vertex-transitive graph, the existence of one medium sized clique implies that every vertex is in a medium sized clique. By applying the idea in \cite{denseneighborhoods}, we now get a big clique. So, in essence, partial results on big clique conjectures are self-strengthening in the class of vertex-transitive graphs. In this short note, we give some examples of this phenomenon. There is not much new graph theory here, just combinations of known results that yield facts we did not know. The following conjecture is the best we could hope for. A good deal of evidence supports it, as we will detail below. \begin{mainconj} If $G$ is vertex-transitive, then $\chi(G) \le \max \set{\omega(G), \ceil{\frac{5\Delta(G) + 3}{6}}}$. \end{mainconj} Our Main Conjecture would be best possible, as shown by Catlin's counterexamples to the Haj{\'o}s conjecture \cite{catlin1979hajos}. Catlin computed the chromatic number of line graphs of odd cycles where each edge has been duplicated $k$ times; in particular, he showed that $\chi(G_{t,k}) = 2k + \ceil{\frac{k}{t}}$ for $t \ge 2$, where $G_{t,k} \mathrel{\mathop:}= L(kC_{2t+1})$. 
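The smallest tight case of Catlin's construction can be checked by brute force. The Python sketch below is our own illustration (all names are ours); it builds the line graph of $2C_5$, i.e. $G_{2,2}$, directly from the definition and confirms that $\Delta = 5$, $\omega = 4$, and $\chi = 5$.

```python
from itertools import combinations

# Vertices of L(2*C5): the 10 edges of the 5-cycle with every edge doubled.
# Edge (i, j) is copy j in {0, 1} of the cycle edge {i, (i + 1) % 5}.
V = [(i, j) for i in range(5) for j in range(2)]

def ends(e):
    return {e[0], (e[0] + 1) % 5}

# Two multigraph edges are adjacent in the line graph iff they share an
# endpoint (parallel copies share both endpoints, hence are adjacent).
adj = {e: {f for f in V if f != e and ends(e) & ends(f)} for e in V}

def is_clique(S):
    return all(v in adj[u] for u, v in combinations(S, 2))

def colorable(num_colors):
    # simple backtracking test for a proper num_colors-coloring
    color = {}
    def bt(i):
        if i == len(V):
            return True
        used = {color[u] for u in adj[V[i]] if u in color}
        for c in range(num_colors):
            if c not in used:
                color[V[i]] = c
                if bt(i + 1):
                    return True
                del color[V[i]]
        return False
    return bt(0)

Delta = max(len(adj[v]) for v in V)
omega = max(len(S) for r in range(1, 6) for S in combinations(V, r) if is_clique(S))
assert (Delta, omega) == (5, 4)           # Delta = 3k - 1, omega = 2k for k = 2
assert not colorable(4) and colorable(5)  # chi = 5 = 2k + ceil(k/t) for t = k = 2
```

The failure of $4$-colorability is forced fractionally: independent sets of $L(2C_5)$ are matchings of $2C_5$, which have size at most $2$, so $\chi \ge 10/2 = 5$.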
Since $\Delta(G_{t,k}) = 3k-1$ and $\omega(G_{t,k}) = 2k$, we have $\chi(G_{2, k}) = 2k+\ceil{\frac{k}2}=\ceil{\frac{5k}2} = \ceil{\frac{15k-2}6} = \max \set{\omega(G_{2,k}), \ceil{\frac{5\Delta(G_{2,k}) + 3}{6}}}$ for all $k \ge 1$. The Main Conjecture does not hold for graphs in general. To see this, let $H_t$ be $K_t$ joined to a $5$-cycle. For any $c \ge 2$, we can make $t$ large enough so that $\chi(H_t) > \max\set{\omega(H_t), \Delta(H_t) - c}$ and $\Delta(H_t)$ is as large as we like. So, no bound of this form can hold in general, not even for claw-free graphs (since $H_t$ is claw-free). In \cite{rabern2011strengthening}, it was shown that a bound of this form does hold for line graphs of multigraphs. In particular, they satisfy $\chi \le \max \set{\omega, \frac{7\Delta + 10}{8}}$. The bound in the Main Conjecture would be best possible for line graphs of multigraphs as well. Our main result is the following weakening of the Borodin-Kostochka conjecture for vertex-transitive graphs, which we prove in Section~\ref{BK}. This theorem likely holds for all $\Delta \ge 9$ and proving this may be a good deal easier than proving the full Borodin-Kostochka conjecture (note that the Main Conjecture implies the Main Theorem for all $\Delta \ge 9$). \begin{mainthm}\label{BKTransitive} If $G$ is vertex-transitive with $\Delta(G) \ge 13$ and $K_{\Delta(G)} \not \subseteq G$, then $\chi(G) \le \Delta(G) - 1$. \end{mainthm} As further evidence for the Main Conjecture, we show that the analogous upper bound holds for the fractional chromatic number. Also, we show that the Main Conjecture is true if all vertex-transitive graphs satisfy both Reed's $\omega$, $\Delta$, and $\chi$ conjecture and the strong $2\Delta$-colorability conjecture (see \cite{aharoni2007independent}; really we can get by with $\frac52\Delta$-colorability). Finally, we show the following. 
\begin{thm} There exists $c < 1$, such that for any vertex-transitive graph $G$, we have $\chi(G) \le \max \set{\omega(G), c(\Delta(G) + 1)}$. \end{thm} \section{Clustering of maximum cliques} Before coloring anything, we need a better understanding of the structure of maximum cliques in a graph. \subsection{The clique graph} \begin{defn} Let $G$ be a graph. For a collection of cliques $\fancy{Q}$ in $G$, let $X_\fancy{Q}$ be the intersection graph of $\fancy{Q}$; that is, the vertex set of $X_\fancy{Q}$ is $\fancy{Q}$ and there is an edge between $Q_1, Q_2 \in \fancy{Q}$ iff $Q_1 \ne Q_2$ and $Q_1$ and $Q_2$ intersect. \end{defn} When $\fancy{Q}$ is a collection of maximum cliques, we get a lot of information about $X_\fancy{Q}$. Kostochka \cite{kostochkaRussian} used the following lemma of Hajnal \cite{HajnalSaturation} to show that the components of $X_\fancy{Q}$ are complete in a graph with $\omega > \frac23 (\Delta + 1)$. \begin{lem}[Hajnal \cite{HajnalSaturation}]\label{HajnalLemma} If $G$ is a graph and $\fancy{Q}$ is a collection of maximum cliques in $G$, then \[\card{\bigcup \fancy{Q}} + \card{\bigcap \fancy{Q}} \geq 2\omega(G).\] \end{lem} Hajnal's lemma follows by an easy induction. The proof of Kostochka's lemma in \cite{kostochkaRussian} is in Russian; for a reproduction of his original proof in English, see \cite{rabernhitting}. Below we give a shorter proof from \cite{raberndiss}. \begin{lem}[Kostochka \cite{kostochkaRussian}]\label{KostochkaCliqueGraph} If $\fancy{Q}$ is a collection of maximum cliques in a graph $G$ with $\omega(G) > \frac23 (\Delta(G) + 1)$ such that $X_\fancy{Q}$ is connected, then $\cap \fancy{Q} \neq \emptyset$. \end{lem} \begin{proof} Suppose not and choose a counterexample $\fancy{Q} \mathrel{\mathop:}= \set{Q_1, \ldots, Q_r}$ minimizing $r$. Plainly, $r \geq 3$. Let $A$ be a noncutvertex in $X_{\fancy{Q}}$ and $B$ a neighbor of $A$. Put $\fancy{Z} \mathrel{\mathop:}= \fancy{Q} - \set{A}$. 
Then $X_{\fancy{Z}}$ is connected and hence by minimality of $r$, $\cap \fancy{Z} \neq \emptyset$. In particular, $\card{\cup \fancy{Z}} \leq \Delta(G) + 1$. By assumption, $\cap\fancy{Q}=\emptyset$, so $\card{\cap \fancy{Q}} + \card{\cup \fancy{Q}} \leq 0 + (\card{\cup \fancy{Z}} + \card{A - B}) \leq (\Delta(G) + 1) + (\Delta(G)+1 - \omega(G)) < 2\omega(G)$. This contradicts Lemma \ref{HajnalLemma}. \end{proof} As shown by Christofides, Edwards and King \cite{christofides2012note}, components of $X_\fancy{Q}$ have nice structure in the $\omega = \frac23 (\Delta + 1)$ case as well. We'll need this stronger result to get our bounds on coloring vertex-transitive graphs to be tight. \begin{lem}[Christofides, Edwards and King \cite{christofides2012note}]\label{TwoThirdsEqualityStructure} If $\fancy{Q}$ is a collection of maximum cliques in a graph $G$ with $\omega(G) \ge \frac23 (\Delta(G) + 1)$ such that $X_\fancy{Q}$ is connected, then either \begin{itemize} \item $\cap \fancy{Q} \ne \emptyset$; or \item $\Delta(X_\fancy{Q}) \le 2$ and if $B, C \in \fancy{Q}$ are different neighbors of $A \in \fancy{Q}$, then $B \cap C = \emptyset$ and $\card{A \cap B} = \card{A \cap C} = \frac12 \omega(G)$. \end{itemize} \end{lem} \subsection{In vertex-transitive graphs} Let $G$ be a vertex-transitive graph and let $\fancy{Q}$ be the collection of all maximum cliques in $G$. It is not hard to see that $X_\fancy{Q}$ is vertex-transitive as well; in fact, we have the following. \begin{observation}\label{transitiveClustering} Let $G$ be a vertex-transitive graph and let $\fancy{Q}$ be the collection of all maximum cliques in $G$. For each component $C$ of $X_\fancy{Q}$, put $G_C \mathrel{\mathop:}= G\brackets{\bigcup V(C)}$. Then $G_C$ is vertex-transitive for each component $C$ of $X_\fancy{Q}$ and $G_{C_1} \cong G_{C_2}$ for components $C_1$ and $C_2$ of $X_\fancy{Q}$. 
\end{observation} A basic consequence of Observation \ref{transitiveClustering} is that if $G$ is vertex-transitive and $G_C$ has a dominating vertex (or universal vertex), then every vertex of $G_C$ is dominating; so $G_C$ is complete. Let $G$ be a vertex-transitive graph with $\omega > \frac23 (\Delta + 1)$. Suppose that $X_\fancy{Q}$ has one or more edges. By Kostochka's lemma, $\cap\fancy{Q}_C$ is nonempty, where $\fancy{Q}_C$ is the set of maximum cliques in some component $G_C$. Choose a vertex $v\in\cap\fancy{Q}_C$, and note that $v$ is adjacent to each vertex in $G_C$. Since $G$ is vertex-transitive, each vertex of $G_C$ is a dominating vertex in $G_C$; so, in fact, $G_C$ is a clique, and $C$ is edgeless. Using Lemma \ref{TwoThirdsEqualityStructure}, we get a bit more. \begin{lem}\label{TransitiveClusteringBigCliques} Let $G$ be a connected vertex-transitive graph and let $\fancy{Q}$ be the collection of all maximum cliques in $G$. If $\omega(G) \ge \frac23 \parens{\Delta(G) + 1}$, then either \begin{itemize} \item $X_\fancy{Q}$ is edgeless; or \item $X_\fancy{Q}$ is a cycle and $G$ is the graph obtained from $X_\fancy{Q}$ by blowing up each vertex to a $K_{\frac12 \omega(G)}$. \end{itemize} \end{lem} \begin{proof} If $\omega(G) > \frac23 \parens{\Delta(G) + 1}$, then $X_\fancy{Q}$ is edgeless as shown above. Hence we may assume $\omega(G) = \frac23 \parens{\Delta(G) + 1}$. Let $Z$ be a component of $X_\fancy{Q}$ and put $\fancy{Z} \mathrel{\mathop:}= V(Z)$. By Lemma \ref{TwoThirdsEqualityStructure}, $\Delta(X_\fancy{Z}) \le 2$ and if $B, C \in \fancy{Z}$ are different neighbors of $A \in \fancy{Z}$, then $B \cap C = \emptyset$ and $\card{A \cap B} = \card{A \cap C} = \frac12 \omega(G)$. By Observation \ref{transitiveClustering}, $X_\fancy{Z}$ must be a cycle. But then every vertex in $G_Z$ has $\frac12 \omega(G) + \frac12 \omega(G) + \frac12 \omega(G) - 1 = \Delta(G)$ neighbors in $G_Z$ and thus $G = G_Z$. 
Hence $X_\fancy{Q} = X_\fancy{Z}$ is a cycle and $G$ is the graph obtained from $X_\fancy{Q}$ by blowing up each vertex to a $K_{\frac12 \omega(G)}$. \end{proof} \section{The fractional version} The problem of determining chromatic number can be phrased as an integer program: we aim to minimize the total number of colors used, subject to the constraints that (i) each vertex gets colored and (ii) the vertices receiving each color form an independent set. To reach a linear program from this integer program, we relax the constraint that each vertex is colored with a single color, and instead allow a vertex to be colored with a combination of colors, e.g., $1/2$ red, $1/3$ green, and $1/6$ blue. However, we still require that the total weight of any color on any clique is at most 1. The minimum value of this linear program is the fractional chromatic number, denoted $\chi_f$ (see~\cite{ScheinermanUllmanFrac} for a formal definition and many results on fractional coloring). It is an easy exercise to show that every vertex-transitive graph $G$ satisfies $\chi_f(G) = \frac{|G|}{\alpha(G)}$, where $|G|$ denotes $|V(G)|$ and $\alpha(G)$ denotes the maximum size of an independent set. We also need Haxell's condition \cite{haxell2001note} for the existence of an independent transversal. \begin{lem}[Haxell \cite{haxell2001note}] \label{HaxellTransversal} Let $H$ be a graph and $V_1 \cup \cdots \cup V_r$ a partition of $V(H)$. Suppose that $\card{V_i} \geq 2\Delta(H)$ for each $i \in \irange{r}$. Then $H$ has an independent set $\set{v_1, \ldots, v_r}$ where $v_i \in V_i$ for each $i \in \irange{r}$. \end{lem} \begin{lem}\label{TransitiveFractionalColoringWithBigCliques} If $G$ is a vertex-transitive graph with $\omega(G) \ge \frac23 \parens{\Delta(G) + 1}$, then $\alpha(G) = \floor{\frac{|G|}{\omega(G)}}$. Moreover, if $\omega(G) > \frac23 \parens{\Delta(G) + 1}$, then $\omega(G)$ divides $|G|$. \end{lem} \begin{proof} We may assume that $G$ is connected. 
Since $G$ is vertex-transitive, every vertex of $G$ is in an $\omega(G)$-clique. First, suppose $\omega(G) > \frac23 \parens{\Delta(G) + 1}$. Then Lemma \ref{TransitiveClusteringBigCliques} shows that the vertex set of $G$ can be partitioned into cliques $V_1, \ldots, V_r$ with $|V_i| \ge \ceil{\frac23 \parens{\Delta(G) + 1}}$ for each $i \in \irange{r}$. Let $H$ be the graph formed from $G$ by making each $V_i$ independent. Then $\Delta(H) \le \Delta(G) + 1 - \ceil{\frac23 \parens{\Delta(G) + 1}}$; now by Lemma \ref{HaxellTransversal}, $G$ has an independent set with a vertex in each $V_i$. Since $G$ is vertex-transitive, all $V_i$ have the same size; so, in fact, $\card{V_i}=\omega(G)$ for all $i$. But now $\card{G} = \alpha(G)\card{V_i}=\alpha(G)\omega(G)$, so we're done. So instead suppose $\omega(G) = \frac23 \parens{\Delta(G) + 1}$. Now Lemma \ref{TransitiveClusteringBigCliques} shows that $G$ is obtained from a cycle $C$ by blowing up each vertex of $C$ to a copy of $K_{\frac12 \omega(G)}$. Hence $\alpha(G) = \floor{\frac{|C|}{2}} = \floor{\frac{|G|}{\omega(G)}}$ as desired. \end{proof} Reed's $\omega$, $\Delta$, and $\chi$ conjecture states that every graph satisfies \[\chi \leq\ceil{\frac{\omega + \Delta + 1}{2}}.\] In \cite{molloy2002graph}, Molloy and Reed proved this upper bound without the round-up for the fractional chromatic number $\chi_f$. Since $\chi_f(G) = \frac{|G|}{\alpha(G)}$ for vertex-transitive graphs, an earlier result of Fajtlowicz \cite{fajtlowicz1984independence} suffices for our purposes. \begin{lem}[Fajtlowicz \cite{fajtlowicz1984independence}]\label{fajtlowicz} For every graph $G$, we have $\alpha(G) \ge \frac{2|G|}{\omega(G) + \Delta(G) + 1}$. \end{lem} \begin{thm} If $G$ is vertex-transitive, then $\alpha(G) \ge \frac{|G|}{\max\set{\omega(G), \frac56\parens{\Delta(G) + 1}}}$. \label{fracversion} \end{thm} \begin{proof} Suppose $\omega(G) > \frac23 \parens{\Delta(G) + 1}$. 
Then Lemma \ref{TransitiveFractionalColoringWithBigCliques} shows $\alpha(G) = \frac{|G|}{\omega(G)}$ and we're done. Otherwise, $\omega(G) \le \frac23 \parens{\Delta(G) + 1}$ and Lemma \ref{fajtlowicz} gives $\alpha(G) \ge \frac{2|G|}{\frac23 (\Delta(G) + 1) + \Delta(G) + 1} = \frac{|G|}{\frac56 (\Delta(G) + 1)}$ as desired. \end{proof} Restating Theorem~\ref{fracversion} in terms of fractional coloring, we have the following. \begin{cor} If $G$ is vertex-transitive, then $\chi_f(G) \le \max\set{\omega(G), \frac56\parens{\Delta(G) + 1}}$. \end{cor} \section{Reed's conjecture plus strong coloring} For a positive integer $r$, a graph $G$ with $\card{G} = rk$ for some integer $k$ is called \emph{strongly $r$-colorable} if for every partition of $V(G)$ into parts of size $r$ there is a proper coloring of $G$ that uses all $r$ colors on each part. If $\card{G}$ is not a multiple of $r$, then $G$ is strongly $r$-colorable iff the graph formed by adding $r\ceil{\frac{|G|}{r}} - |G|$ isolated vertices to $G$ is strongly $r$-colorable. The \emph{strong chromatic number} $s\chi(G)$ is the smallest $r$ for which $G$ is strongly $r$-colorable. Not surprisingly, if $G$ is strongly $r$-colorable, then $G$ is also strongly $(r+1)$-colorable, although the proof of this fact is non-trivial~\cite{Fellows1990}. In \cite{haxell2004strong}, Haxell proved that the strong chromatic number of any graph is at most $3\Delta - 1$. In \cite{haxell2008strong}, she proved further that for every $c>11/4$ there exists $\Delta_c$ such that if $G$ has maximum degree $\Delta$ at least $\Delta_c$, then $G$ has strong chromatic number at most $c\Delta$. The strong $2\Delta$-colorability conjecture \cite{aharoni2007independent} says that the strong chromatic number of any graph is at most $2\Delta$. If true, this conjecture would be sharp. We need the following intermediate conjecture. \begin{conjecture}\label{StrongTransitive} The strong chromatic number of any vertex-transitive graph is at most $\frac52 \Delta$.
\end{conjecture} We also need Reed's conjecture \cite{reed1998omega} restricted to vertex-transitive graphs. \begin{conjecture}\label{ReedTransitive} Every vertex-transitive graph satisfies $\chi \leq\ceil{\frac{\omega + \Delta + 1}{2}}$. \end{conjecture} \begin{thm}\label{ConjectureaImplyConjecture} If Conjecture \ref{StrongTransitive} and Conjecture \ref{ReedTransitive} both hold, then the Main Conjecture does as well. \end{thm} \begin{proof} We may assume that $G$ is connected. Put $\Delta \mathrel{\mathop:}= \Delta(G)$, $\omega \mathrel{\mathop:}= \omega(G)$ and $\chi \mathrel{\mathop:}= \chi(G)$. Suppose $\omega < \frac23 \parens{\Delta + 1}$. So, we have $\omega \le \frac{2\Delta + 1}{3}$ and moreover, when $\Delta \equiv 3 \text{ (mod $6$)}$, we have $\omega \le \frac23 \Delta$. Plugging the first inequality into Conjecture \ref{ReedTransitive} gives $\chi \le \ceil{\frac{5\Delta + 4}{6}} = \ceil{\frac{5\Delta + 3}{6}}$ when $\Delta \not \equiv 3 \text{ (mod $6$)}$; by using the improved upper bound on $\omega$ in the remaining case, we again prove the desired upper bound on $\chi$. Now suppose $\omega \ge \frac23 (\Delta + 1)$ and let $\fancy{Q}$ be the set of maximum cliques in $G$. Applying Lemma \ref{TransitiveClusteringBigCliques}, either $X_\fancy{Q}$ is edgeless or $G$ is obtained from an odd cycle by blowing up each vertex to a $K_{\frac{\omega}{2}}$. In the latter case, $G$ is one of Catlin's examples from \cite{catlin1979hajos} and the bound holds as mentioned in the introduction. Hence we may assume that $X_\fancy{Q}$ is edgeless; that is, $V(G)$ can be partitioned into $\omega(G)$-cliques. Suppose $\chi > \omega$. Now we show that Conjecture \ref{StrongTransitive} implies the Main Conjecture. Form $G'$ from $G$ by adding vertices to the maximum cliques of $G$ until they all have $\ceil{\frac{5\Delta + 3}{6}}$ vertices; each new vertex has no edges outside its clique, and $\Delta$ always denotes the maximum degree in $G$, not in $G'$. 
Now form $G''$ from $G'$ by removing all edges within each maximum clique. Each vertex now has at most $\Delta + 1 - \omega \le \frac13 (\Delta + 1)$ neighbors in $G'$ outside of its clique, hence the maximum degree of $G''$ is at most $\frac13(\Delta+1)$. Since $\ceil{\frac{5\Delta + 3}{6}} \ge \frac52 \parens{\frac13 (\Delta + 1)}$, Conjecture \ref{StrongTransitive} implies that $G''$ is strongly $\ceil{\frac{5\Delta + 3}{6}}$-colorable. By taking the $V_i$'s of $G''$ to be the vertex sets of the maximum cliques in $G'$, we see that $G'$ is $\ceil{\frac{5\Delta + 3}{6}}$-colorable, and hence so is $G$. \end{proof} Reed \cite{reed1998omega} has shown that there is $0 < \epsilon < 1$ such that every graph satisfies $\chi \le \epsilon\omega + (1-\epsilon)(\Delta+1)$; for a shorter and simpler proof, see~\cite{KingReed2012}. Combining this upper bound with Haxell's $3\Delta - 1$ strong colorability result, we get the following similarly to Theorem \ref{ConjectureaImplyConjecture}. \begin{thm} There exists $c < 1$, such that for any vertex-transitive graph $G$, we have $\chi(G) \le \max \set{\omega(G), c(\Delta(G) + 1)}$. \end{thm} \section{Borodin-Kostochka for vertex-transitive graphs} \label{BK} In \cite{bigcliques}, we proved the following. \begin{thm}\label{BigCliquesExist} If $G$ is a graph with $\Delta(G) \ge 13$ and $K_{\Delta(G) - 3} \not \subseteq G$, then $\chi(G) \le \Delta(G) - 1$. \end{thm} In \cite{denseneighborhoods}, the second author proved the following. \begin{thm}\label{DenseNeighbors} If $G$ is a graph with $\Delta(G) \ge 9$ and $K_{\Delta(G)} \not \subseteq G$ such that every vertex is in a clique on $\frac23 \Delta(G) + 2$ vertices, then $\chi(G) \le \Delta(G) - 1$. \end{thm} By combining these theorems, we immediately get that the Borodin-Kostochka conjecture holds for vertex-transitive graphs with $\Delta \ge 15$. We can improve this result using Lemma \ref{TransitiveClusteringBigCliques} and Haxell's $3\Delta - 1$ strong colorability result. 
\begin{mainthm}\label{BKTransitive} If $G$ is vertex-transitive with $\Delta(G) \ge 13$ and $K_{\Delta(G)} \not \subseteq G$, then $\chi(G) \le \Delta(G) - 1$. \end{mainthm} \begin{proof} Suppose that $\chi(G)\ge \Delta(G)$. By Theorem \ref{BigCliquesExist}, we have $\omega(G) \ge \Delta(G) - 3 > \frac23 (\Delta(G) + 1)$ since $\Delta(G) \ge 13$. Now Lemma \ref{TransitiveClusteringBigCliques} shows that $X_{\fancy{Q}}$ is edgeless, where $\fancy{Q}$ is the collection of all maximum cliques in $G$. Form $G'$ from $G$ by adding vertices to the maximum cliques of $G$ until they all have $\Delta(G) - 1$ vertices, where each new vertex has no edges outside its clique. Each vertex has at most $\Delta(G) + 1 - \omega(G) \le 4$ neighbors outside its clique. Since $\Delta(G) - 1 \ge 12 > 3\cdot 4 - 1$, Haxell's $3\Delta - 1$ strong colorability result implies that $G'$ is $\parens{\Delta(G) - 1}$-colorable and hence so is $G$. \end{proof} If Conjecture \ref{ReedTransitive} holds, then we get $\omega \ge \Delta - 2 > \frac23 (\Delta + 1)$ when $\Delta \ge 9$. So, since $\Delta + 1 - \omega \le 3$, the above argument works for $\Delta \ge 9$. That is, Conjecture \ref{ReedTransitive} by itself implies the Borodin-Kostochka conjecture for vertex-transitive graphs. \bibliographystyle{amsplain}
https://arxiv.org/abs/2002.05670
Experimental Design in Two-Sided Platforms: An Analysis of Bias
We develop an analytical framework to study experimental design in two-sided marketplaces. Many of these experiments exhibit interference, where an intervention applied to one market participant influences the behavior of another participant. This interference leads to biased estimates of the treatment effect of the intervention. We develop a stochastic market model and associated mean field limit to capture dynamics in such experiments, and use our model to investigate how the performance of different designs and estimators is affected by marketplace interference effects. Platforms typically use two common experimental designs: demand-side ("customer") randomization (CR) and supply-side ("listing") randomization (LR), along with their associated estimators. We show that good experimental design depends on market balance: in highly demand-constrained markets, CR is unbiased, while LR is biased; conversely, in highly supply-constrained markets, LR is unbiased, while CR is biased. We also introduce and study a novel experimental design based on two-sided randomization (TSR) where both customers and listings are randomized to treatment and control. We show that appropriate choices of TSR designs can be unbiased in both extremes of market balance, while yielding relatively low bias in intermediate regimes of market balance.
\subsection{Estimation with the $\ensuremath{\mathsf{TSR}}$ design} \label{ssec:TSR_est} The preceding sections reveal that each of the naive $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ estimators has its virtues, depending on market balance conditions. In this section, we explore whether we can develop $\ensuremath{\mathsf{TSR}}$ designs and estimators in which $a_C$ and $a_L$ are chosen {\em as a function of} $\lambda/\tau$, to obtain the beneficial asymptotic performance of the naive $\ensuremath{\mathsf{CR}}$ estimator in the highly demand-constrained regime, as well as that of the naive $\ensuremath{\mathsf{LR}}$ estimator in the highly supply-constrained regime. We also expect that an appropriate interpolation should yield a bias for $\ensuremath{\mathsf{TSR}}$ that is comparable to, if not lower than, that of $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ in intermediate regimes of market balance. Recall the naive $\ensuremath{\mathsf{TSRN}}$ estimator defined in \eqref{eq:naive_TSR_est}, and in particular the steady-state version of this estimator. Suppose the platform observes $\lambda/\tau$; note that this is reasonable from a practical standpoint, since $\lambda/\tau$ is a measure of market imbalance involving only the overall arrival rate of customers and the average rate at which listings become available. For example, consider the following heuristic choices of $a_C$ and $a_L$ for the $\ensuremath{\mathsf{TSR}}$ design, for some fixed values of $\b{a}_C$ and $\b{a}_L$: \begin{equation} a_C(\lambda/\tau) = \left(1 - e^{-\lambda/\tau}\right) + \b{a}_C e^{-\lambda/\tau}; \quad a_L(\lambda/\tau) = \b{a}_L\left(1 - e^{-\lambda/\tau}\right) + e^{-\lambda/\tau}.
\label{eq:aC_aL_naive_TSR} \end{equation} Then as $\lambda/\tau \to 0$, we have $a_C(\lambda/\tau) \to \b{a}_C$ and $a_L(\lambda/\tau) \to 1$, while as $\lambda/\tau \to \infty$ we have $a_C(\lambda/\tau) \to 1$ and $a_L(\lambda/\tau) \to \b{a}_L$.\footnote{Our choice of exponent here is somewhat arbitrary; the same analysis follows even if we replace $e^{-\lambda/\tau}$ with $e^{-c\lambda/\tau}$ for any value of $c > 0$.} With these choices, it follows that in the highly demand-constrained limit ($\lambda/\tau \to 0$), the $\ensuremath{\mathsf{TSRN}}$ estimator becomes equivalent to the naive $\ensuremath{\mathsf{CR}}$ estimator, while in the highly supply-constrained limit ($\lambda/\tau \to \infty$), the $\ensuremath{\mathsf{TSRN}}$ estimator becomes equivalent to the naive $\ensuremath{\mathsf{LR}}$ estimator. In particular, using Propositions \ref{prop:Qij_demand_constrained} and \ref{prop:Qij_supp_constrained}, it is straightforward to show that the steady-state naive $\ensuremath{\mathsf{TSRN}}$ estimator is unbiased in {\em both} limits; we state this as the following corollary and omit the proof. \begin{corollary} \label{cor:naive_TSR} For each $\lambda/\tau$, consider the $\ensuremath{\mathsf{TSR}}$ design with $a_C$ and $a_L$ defined as in \eqref{eq:aC_aL_naive_TSR}. Consider a sequence of systems where either $\lambda/\tau \to 0$, or $\lambda/\tau \to \infty$. Then in either limit: \[ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRN}}}(\infty | a_C(\lambda/\tau), a_L(\lambda/\tau)) - \ensuremath{\mathsf{GTE}}\to 0. \] \end{corollary} We are also led to ask whether we can improve upon the naive $\ensuremath{\mathsf{TSRN}}$ estimator when the market is moderately balanced. Note that the $\ensuremath{\mathsf{TSRN}}$ estimator does not explicitly correct for {\em either} the fact that there is interference across listings, or the fact that there is interference across customers.
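The limiting behavior claimed above for the allocation rule \eqref{eq:aC_aL_naive_TSR} is easy to verify numerically. The following Python sketch is our own illustration (the fixed values $\b{a}_C = \b{a}_L = 0.5$ are arbitrary); it evaluates both allocation functions near the two extremes of market balance $\lambda/\tau$.

```python
import math

def a_C(r, abar_C):
    # customer-side treatment allocation as a function of r = lambda / tau
    return (1 - math.exp(-r)) + abar_C * math.exp(-r)

def a_L(r, abar_L):
    # listing-side treatment allocation
    return abar_L * (1 - math.exp(-r)) + math.exp(-r)

abar_C, abar_L = 0.5, 0.5  # illustrative fixed allocations

# demand-constrained limit (lambda/tau -> 0): a_C -> abar_C, a_L -> 1 (CR-like)
assert abs(a_C(1e-9, abar_C) - abar_C) < 1e-6
assert abs(a_L(1e-9, abar_L) - 1.0) < 1e-6

# supply-constrained limit (lambda/tau -> infinity): a_C -> 1, a_L -> abar_L (LR-like)
assert abs(a_C(50.0, abar_C) - 1.0) < 1e-6
assert abs(a_L(50.0, abar_L) - abar_L) < 1e-6
```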
We now suggest a heuristic for correcting these effects, which we use to define two improved interpolating $\ensuremath{\mathsf{TSR}}$ estimators; these estimators are the fourth and fifth estimators appearing in Figure \ref{fig:numerics_homogeneous}, which we call ``$\ensuremath{\mathsf{TSR}}$-Improved (1)'' and ``$\ensuremath{\mathsf{TSR}}$-Improved (2)''. These effects are visualized in Figure \ref{fig:tsr_comp_terms}. First, abusing notation, let $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T|a_C, a_L)$ denote the estimator in \eqref{eq:naive_CR} using the same terms from a $\ensuremath{\mathsf{TSR}}$ design, and dividing through by $a_L$ on both terms as normalization. Similarly abusing notation, let $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T|a_C, a_L)$ denote the estimator in \eqref{eq:naive_LR} using the same terms from a $\ensuremath{\mathsf{TSR}}$ design, and dividing through by $a_C$ on both terms as normalization. Motivated by these naive estimators, we explicitly consider an interpolation between the $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ estimators of the form: \begin{align} \label{eq:interpolatingTSR} \beta \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T|a_C,a_L)+(1-\beta)\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T|a_C,a_L)& \\ = \beta \left(\frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{01}(T | a_C,a_L)}{(1-a_C)a_L}\right) &+ (1-\beta) \left(\frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{10}(T | a_C,a_L)}{a_C(1-a_L)}\right). \notag \end{align} Now, consider the quantity $Q_{00}(T|a_C,a_L)/((1-a_C)(1-a_L)) - Q_{10}(T|a_C,a_L)/(a_C(1-a_L))$ in a $\ensuremath{\mathsf{TSR}}$ design. This is the (appropriately normalized) difference between the rate at which control customers book control listings, and the rate at which treatment customers book control listings.
Note that both treatment and control customers have the same utility for control listings, due to the $\ensuremath{\mathsf{TSR}}$ design, but potentially different utilities for treatment listings. Hence, the difference in steady-state rates of booking among control and treatment customers on {\em control listings} must be driven by the fact that treatment customers substitute bookings from control listings to treatment listings (or vice versa). This difference captures the ``cannibalization'' effect (i.e., interference) that was found in $\ensuremath{\mathsf{LR}}$ designs in the demand-constrained regime. \begin{figure} \centering \includegraphics[scale=0.35]{figs_other/tsr_competition_terms.png} \caption{Illustration of $\ensuremath{\mathsf{TSR}}$ design and competition effects. Intervention only applies when treatment customers view treatment listings (green cell). Suppose that the intervention makes the listing more attractive.} \label{fig:tsr_comp_terms} \end{figure} Thus motivated, we can think of this difference as a ``correction term'' for the $\ensuremath{\mathsf{LR}}$ design, to be subtracted from our interpolating $\ensuremath{\mathsf{TSR}}$ estimator in \eqref{eq:interpolatingTSR}. Using a symmetric argument, we can also consider an appropriately weighted correction term associated with interference across customers in a $\ensuremath{\mathsf{CR}}$ design: $Q_{00}(T|a_C,a_L)/((1-a_L)(1-a_C))-Q_{01}(T|a_C,a_L)/((1-a_C)a_L)$. (Similar estimates were also studied in \cite{bajari2019double}; see the related work section for further details.) See Figure \ref{fig:tsr_comp_terms} for an illustration of these competition effect estimates. We can weight these correction terms with different factors $k > 0$ to control their impact. In addition, we can choose these weights in a market-balance-dependent fashion, based on the direction of market balance in which we have seen that the respective interference grows.
Combining these insights, for $\beta \in (0,1)$ and $k > 0$ we define a class of improved $\ensuremath{\mathsf{TSR}}$ estimators given by: \begin{multline} \label{eq:TSRI-A} \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRI\text{-}k}}}(T|a_C, a_L) = \\ \beta \left[\frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{01}(T | a_C,a_L)}{(1-a_C)a_L}-k(1-\beta)\left(\frac{Q_{00}(T|a_C, a_L)}{(1-a_C)(1-a_L)}-\frac{Q_{01}(T|a_C, a_L)}{(1-a_C)a_L}\right) \right] \\ +(1- \beta) \left[ \frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{10}(T | a_C,a_L)}{a_C(1-a_L)}-k\beta\left(\frac{Q_{00}(T|a_C,a_L)}{(1-a_C)(1-a_L)}-\frac{Q_{10}(T|a_C,a_L)}{a_C(1-a_L)}\right) \right] \ . \end{multline} Given market balance $\lambda/\tau$, we set $\beta=e^{-\lambda/\tau}$, and we choose $a_C$ and $a_L$ as in \eqref{eq:aC_aL_naive_TSR}. In the limit where $\lambda/\tau \to 0$, note that $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRI\text{-}k}}} (T|a_C(\lambda/\tau),a_L(\lambda/\tau))$ approaches $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T|\b{a}_C)$ as expected. Similarly, in the limit where $\lambda/\tau \to \infty$, $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRI\text{-}k}}} (T|a_C(\lambda/\tau),a_L(\lambda/\tau))$ approaches $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T|\b{a}_L)$. It is straightforward to show that $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRI\text{-}k}}}$ for any $k$ is unbiased in {\em both} the highly demand-constrained and highly supply-constrained regimes, since the correction terms play no role in the limits. For moderate values of market balance, both cannibalization correction terms kick in, leading to improvements over the naive $\ensuremath{\mathsf{TSRN}}$ estimator, as seen in Figure \ref{fig:numerics_homogeneous}.
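To make the definition concrete, the estimator in \eqref{eq:TSRI-A} can be computed mechanically from the four observed rates $Q_{ij}$. The sketch below (in Python, with hypothetical rate values chosen purely for illustration) also checks the two boundary cases: at $\beta = 1$ the factor $k(1-\beta)$ vanishes and the estimator reduces to the $\ensuremath{\mathsf{CR}}$-type difference, while at $\beta = 0$ it reduces to the $\ensuremath{\mathsf{LR}}$-type difference.

```python
def tsri_k(Q, a_C, a_L, beta, k):
    # Q[(i, j)] is the observed rate at which customers in arm i book
    # listings in arm j (1 = treatment, 0 = control); values hypothetical.
    q11 = Q[(1, 1)] / (a_C * a_L)
    q01 = Q[(0, 1)] / ((1 - a_C) * a_L)
    q10 = Q[(1, 0)] / (a_C * (1 - a_L))
    q00 = Q[(0, 0)] / ((1 - a_C) * (1 - a_L))
    # CR-type part with LR-style cannibalization correction ...
    cr_term = q11 - q01 - k * (1 - beta) * (q00 - q01)
    # ... and LR-type part with CR-style correction, per eq. (TSRI-k).
    lr_term = q11 - q10 - k * beta * (q00 - q10)
    return beta * cr_term + (1 - beta) * lr_term

# Hypothetical observed rates, not taken from the paper's experiments.
Q = {(1, 1): 0.30, (0, 1): 0.20, (1, 0): 0.25, (0, 0): 0.15}
print(tsri_k(Q, 0.5, 0.5, beta=1.0, k=2))  # reduces to q11 - q01 (CR-type)
print(tsri_k(Q, 0.5, 0.5, beta=0.0, k=2))  # reduces to q11 - q10 (LR-type)
```

In the two limiting values of $\beta$ the correction terms drop out entirely, which mirrors why the estimator remains unbiased in the extreme market-balance regimes.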
To simplify the exposition, we only consider two factors, $k=1,2$; we see that $\ensuremath{\mathsf{TSRI\text{-}1}}$ has lower bias than the naive $\ensuremath{\mathsf{TSRN}}$ estimator, but $\ensuremath{\mathsf{TSRI\text{-}2}}$, which has a higher weight in front of the correction terms, has a lower bias than both naive $\ensuremath{\mathsf{TSRN}}$ and $\ensuremath{\mathsf{TSRI\text{-}1}}$, as well as the naive $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimators. In Appendix \ref{app:simulations}, we explore the robustness of our results to other model primitives, specifically scenarios with smaller or larger utilities and the introduction of heterogeneity on one or both sides of the market. We find that the bias of the $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimators can increase with the introduction of these factors, but remarkably the bias of the $\ensuremath{\mathsf{TSRI}}$ estimators remains low across the ranges that we study. We emphasize that the three $\ensuremath{\mathsf{TSR}}$ estimators presented here are examples to illustrate the potential for bias reduction using this new design. There is of course a much broader range of both $\ensuremath{\mathsf{TSR}}$ designs and estimators; some of these may offer even better performance. We conclude this section with two additional observations. First, note that all our analysis in this section has been carried out in the mean field steady state; in particular, Figure \ref{fig:numerics_homogeneous} shows the bias of the estimators in steady state. For practical implementation, it is also important to consider the relative bias of the candidate estimators in the transient system, since experiments are typically run for relatively short time horizons. For discussion of the transient behavior for a finite time horizon, see Appendix \ref{app:transient}. Second, in the next section, we discuss the variance of the estimators we have studied.
There we find that the $\ensuremath{\mathsf{TSR}}$ estimators with the lowest bias also have the highest variance; in other words, there is a {\em bias-variance tradeoff}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figs_mean_field/homogeneous_listings_customers_mf_bias_vary_lambda_mod.png} \caption{Difference between estimator and $\ensuremath{\mathsf{GTE}}$ in steady state. We consider variation in $\lambda/\tau$ by fixing $\tau = 1$ and varying $\lambda$; analogous results are obtained if $\lambda$ is fixed and $\tau$ is varied. We consider a market with homogeneous customers and homogeneous listings, pre-treatment. We set $\epsilon=1$ and $\alpha=0.5$. In the $\ensuremath{\mathsf{CR}}$ design, $a_C=0.5$. In the $\ensuremath{\mathsf{LR}}$ design, $a_L=0.5$. Customers have utility $v=0.315$ for control listings and $\tilde{v}=0.394$ for treatment listings, which corresponds to a steady state booking probability of 20 percent in global control and 23 percent in global treatment when $\lambda=\tau$. } \label{fig:numerics_homogeneous} \end{figure} \subsection{Estimation with the $\ensuremath{\mathsf{TSR}}$ design} The preceding section reveals that each of the naive $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ estimators has its virtue, depending on market balance conditions. In this section, we explore whether we can develop $\ensuremath{\mathsf{TSR}}$ designs and estimators in which $a_C$ and $a_L$ are chosen {\em as a function of} $\lambda/\tau$, to obtain the beneficial asymptotic performance of the naive $\ensuremath{\mathsf{CR}}$ estimator in the highly demand-constrained regime, as well as that of the naive $\ensuremath{\mathsf{LR}}$ estimator in the highly supply-constrained regime.
We also explore the performance of $\ensuremath{\mathsf{TSR}}$ in intermediate regimes of market balance. Recall the naive $\ensuremath{\mathsf{TSR}}$ estimator defined in \eqref{eq:naive_TSR_est}, and in particular the steady-state version of this estimator. Suppose the platform observes $\lambda/\tau$; note that this is reasonable from a practical standpoint, as this is a measure of market imbalance involving only the overall arrival rate of customers and the average rate at which listings become available. For example, consider the following heuristic choices of $a_C$ and $a_L$ for the $\ensuremath{\mathsf{TSR}}$ design, for some fixed values of $\b{a}_C$ and $\b{a}_L$: \begin{equation} a_C(\lambda/\tau) = \left(1 - e^{-\lambda/\tau}\right) + \b{a}_C e^{-\lambda/\tau}; \quad a_L(\lambda/\tau) = \b{a}_L\left(1 - e^{-\lambda/\tau}\right) + e^{-\lambda/\tau}. \label{eq:aC_aL_naive_TSR} \end{equation} Then as $\lambda/\tau \to 0$, we have $a_C(\lambda/\tau) \to \b{a}_C$ and $a_L(\lambda/\tau) \to 1$, while as $\lambda/\tau \to \infty$ we have $a_C(\lambda/\tau) \to 1$ and $a_L(\lambda/\tau) \to \b{a}_L$.\footnote{Our choice of exponent here is somewhat arbitrary; the same analysis follows even if we replace $e^{-\lambda/\tau}$ with $e^{-c\lambda/\tau}$ for any value of $c > 0$.} With these choices, it follows that in the highly demand-constrained limit ($\lambda/\tau \to 0$), the naive $\ensuremath{\mathsf{TSR}}$ estimator becomes equivalent to the naive $\ensuremath{\mathsf{CR}}$ estimator, while in the highly supply-constrained limit ($\lambda/\tau \to \infty$), the naive $\ensuremath{\mathsf{TSR}}$ estimator becomes equivalent to the naive $\ensuremath{\mathsf{LR}}$ estimator.
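As a quick sanity check, the limiting behavior of these allocations can be verified numerically; the sketch below (in Python, with the illustrative choices $\b{a}_C = \b{a}_L = 0.5$, which are not prescribed by the design) evaluates \eqref{eq:aC_aL_naive_TSR} near both extremes of market balance.

```python
import math

def a_C(r, abar_C=0.5):
    # Customer-side allocation a_C(lambda/tau); abar_C is illustrative.
    return (1 - math.exp(-r)) + abar_C * math.exp(-r)

def a_L(r, abar_L=0.5):
    # Listing-side allocation a_L(lambda/tau); abar_L is illustrative.
    return abar_L * (1 - math.exp(-r)) + math.exp(-r)

# Demand-constrained limit (lambda/tau -> 0): a_C -> abar_C, a_L -> 1.
print(a_C(1e-9), a_L(1e-9))   # ~0.5, ~1.0
# Supply-constrained limit (lambda/tau -> infinity): a_C -> 1, a_L -> abar_L.
print(a_C(50.0), a_L(50.0))   # ~1.0, ~0.5
```

In the first limit all listings are effectively in treatment and only customers are randomized (a $\ensuremath{\mathsf{CR}}$-like design); in the second, all customers are treated and only listings are randomized (an $\ensuremath{\mathsf{LR}}$-like design).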
In particular, using Propositions \ref{prop:Qij_demand_constrained} and \ref{prop:Qij_supp_constrained}, it is straightforward to show that the steady-state naive $\ensuremath{\mathsf{TSR}}$ estimator is unbiased in {\em both} limits; we state this as the following corollary, and omit the proof. \begin{corollary} \label{cor:naive_TSR} For each $\lambda/\tau$, consider the $\ensuremath{\mathsf{TSR}}$ design with $a_C$ and $a_L$ defined as in \eqref{eq:aC_aL_naive_TSR}. Consider a sequence of systems where either $\lambda/\tau \to 0$, or $\lambda/\tau \to \infty$. Then in either limit: \[ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSR}}}(\infty | a_C(\lambda/\tau), a_L(\lambda/\tau)) - \ensuremath{\mathsf{GTE}}\to 0. \] \end{corollary} Figure \ref{fig:numerics} reveals the steady-state performance of the different naive estimators ($\ensuremath{\mathsf{CR}}$, $\ensuremath{\mathsf{LR}}$, and $\ensuremath{\mathsf{TSR}}$ with the preceding scaling of $a_C$ and $a_L$), as a function of market balance $\lambda/\tau$. The figure shows the difference between each estimator and $\ensuremath{\mathsf{GTE}}$; the estimators are all upward-biased because of the parameter values chosen, though the qualitative findings in the figure are robust across parameter choices. As we see, the naive $\ensuremath{\mathsf{TSR}}$ estimator performs well in each asymptotic regime. We are also led to ask whether we can improve upon the naive $\ensuremath{\mathsf{TSR}}$ estimator when the market is moderately balanced. Note that the naive $\ensuremath{\mathsf{TSR}}$ estimator does not explicitly correct for {\em either} the fact that there is interference across listings, or the fact that there is interference across customers.
We now suggest a heuristic for correction of these effects that leads to an improved interpolating $\ensuremath{\mathsf{TSR}}$ estimator; this is the fourth estimator that appears in Figure \ref{fig:numerics}, which we call ``Improved $\ensuremath{\mathsf{TSR}}$''. First, abusing notation, let $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T|a_C, a_L)$ denote the estimator in \eqref{eq:naive_CR} using the same terms from a $\ensuremath{\mathsf{TSR}}$ design, and dividing through by $a_L$ on both terms as normalization. Similarly abusing notation, let $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T|a_C, a_L)$ denote the estimator in \eqref{eq:naive_LR} using the same terms from a $\ensuremath{\mathsf{TSR}}$ design, and dividing through by $a_C$ on both terms as normalization. Motivated by these naive estimators, we explicitly consider an interpolation between the $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ estimators of the form: \begin{align} \label{eq:interpolatingTSR} \beta \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T|a_C,a_L)+(1-\beta)\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T|a_C,a_L)& \\ = \beta \left(\frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{01}(T | a_C,a_L)}{(1-a_C)a_L}\right) &+ (1-\beta) \left(\frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{10}(T | a_C,a_L)}{a_C(1-a_L)}\right). \notag \end{align} Now, consider the quantity $Q_{00}(T|a_C,a_L)/((1-a_C)(1-a_L)) - Q_{10}(T|a_C,a_L)/(a_C(1-a_L))$ in a $\ensuremath{\mathsf{TSR}}$ design. This is the (appropriately normalized) difference between the rate at which control customers book control listings, and the rate at which treatment customers book control listings. Note that the difference between control customers and treatment customers is that the latter are exposed to treatment listings, while the former are not.
Hence, the difference in steady-state rates of booking among these two groups on {\em control listings} must be driven by the fact that treatment customers substitute bookings from control listings to treatment listings (or vice versa). This difference is precisely the ``cannibalization'' effect (i.e., interference) that was found in $\ensuremath{\mathsf{LR}}$ designs in the highly demand-constrained regime. Following from this motivation, we subtract an appropriately weighted ``correction term'' for the $\ensuremath{\mathsf{LR}}$ design from our interpolating $\ensuremath{\mathsf{TSR}}$ estimator in \eqref{eq:interpolatingTSR}. (This argument is of course heuristic: it ignores dynamic effects in regimes of intermediate market balance.) Using a symmetric argument we also subtract an appropriately weighted correction term associated with interference across customers in a $\ensuremath{\mathsf{CR}}$ design: $Q_{00}(T|a_C,a_L)/((1-a_L)(1-a_C))-Q_{01}(T|a_C,a_L)/((1-a_C)a_L)$. (Similar correction terms were also studied in \cite{bajari2019double}; see the related work section for further details.) We weight these correction terms in a market-balance-dependent fashion, based on the direction of market balance in which we have seen that the respective interference grows.
Combining these insights, for $\beta \in (0,1)$ our improved $\ensuremath{\mathsf{TSR}}$ estimator is given by: \begin{multline} \label{eq:TSRI} \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRI}}}(T|a_C, a_L) = \\ \beta \left[\frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{01}(T | a_C,a_L)}{(1-a_C)a_L}-(1-\beta)\left(\frac{Q_{00}(T|a_C, a_L)}{(1-a_C)(1-a_L)}-\frac{Q_{01}(T|a_C, a_L)}{(1-a_C)a_L}\right) \right] \\ +(1- \beta) \left[ \frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{10}(T | a_C,a_L)}{a_C(1-a_L)}-\beta\left(\frac{Q_{00}(T|a_C,a_L)}{(1-a_C)(1-a_L)}-\frac{Q_{10}(T|a_C,a_L)}{a_C(1-a_L)}\right) \right]. \end{multline} Given market balance $\lambda/\tau$, we set $\beta=e^{-\lambda/\tau}$, and we choose $a_C$ and $a_L$ as in \eqref{eq:aC_aL_naive_TSR}. In the limit where $\lambda/\tau \to 0$, note that $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRI}}} (T|a_C(\lambda/\tau),a_L(\lambda/\tau))\to\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T|\b{a}_C)$ as expected. Similarly, in the limit where $\lambda/\tau \to \infty$, we have $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRI}}} (T|a_C(\lambda/\tau),a_L(\lambda/\tau))\to\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T|\b{a}_L)$. In particular, it is straightforward to show that this new estimator is also unbiased in {\em both} the highly demand-constrained and highly supply-constrained regimes. In these limits, the correction terms play no role. However, for moderate values of market balance, both the cannibalization correction terms kick in, improving over naive $\ensuremath{\mathsf{TSR}}$ as seen in the figure. As we mentioned, we leave the study of other $\ensuremath{\mathsf{TSR}}$ designs and estimators that may perform even better for future research. \begin{figure} \centering \begin{tabular}{c c} \includegraphics[width=0.5\textwidth]{estimator_minus_gte_lambdas.pdf} & \includegraphics[width=0.5\textwidth]{transient_dynamics_med_lambda.pdf} \end{tabular} \caption{Left: Difference between estimator and $\ensuremath{\mathsf{GTE}}$ in steady state. We consider variation in $\lambda/\tau$ by fixing $\tau = 1$ and varying $\lambda$; analogous results are obtained if $\lambda$ is fixed and $\tau$ is varied. Right: transient behavior of estimators. Again we fix $\tau = 1$. For both plots, we set $\epsilon_{\gamma,i}=1, \alpha_{\gamma,i}=0.5$ for all $\gamma, i$. In the $\ensuremath{\mathsf{CR}}$ design, $a_C=1/2$. In the $\ensuremath{\mathsf{LR}}$ design, $a_L=1/2$. There are three listing types and two customer types. Utilities for $\gamma_1$ are $v_{\gamma_1}(\theta_1)= 1.5, v_{\gamma_1}(\theta_2)=3, v_{\gamma_1}(\theta_3)=6$ and $\tilde{v}_{\gamma_1}(\theta_1)= 1, \tilde{v}_{\gamma_1}(\theta_2)=8, \tilde{v}_{\gamma_1}(\theta_3)=12$. Utilities for $\gamma_2$ are $v_{\gamma_2}(\theta_1)= 1, v_{\gamma_2}(\theta_2)=2, v_{\gamma_2}(\theta_3)=4$ and $\tilde{v}_{\gamma_2}(\theta_1)= 0.5, \tilde{v}_{\gamma_2}(\theta_2)=6, \tilde{v}_{\gamma_2}(\theta_3)=8$.} \label{fig:numerics} \end{figure} \subsection{Highly demand-constrained markets} Consider a hypothetical limit where each listing becomes instantly available again after being booked (i.e., $\tau \to \infty$ but $\lambda$ remains fixed). This is the demand-constrained extreme, where capacity constraints on listings become irrelevant. Note that on arrival of a customer, both listings are always available, and therefore included in the consideration set. In this limit, observe that the steady-state rate at which customers book listing $\ell = 1,2$ becomes: \begin{equation} \label{eq:example_rate_dc} \frac{\lambda v}{\epsilon + 2 v}.
\end{equation} (The factor 2 appears in the denominator as there are two listings.) Since the intervention changes $v$ to $\tilde{v}$, the $\ensuremath{\mathsf{GTE}}$ is: \[ \ensuremath{\mathsf{GTE}} = \frac{2 \lambda \tilde{v}}{\epsilon + 2 \tilde{v}} - \frac{2\lambda v}{\epsilon + 2v}. \] Now suppose we consider a $\ensuremath{\mathsf{CR}}$ design that randomizes a fraction $a_C$ of arriving customers to treatment. Observe that in this demand-constrained extreme, every arriving control (resp., treatment) customer sees the full global control (resp., treatment) market condition; there is no dynamic influence of one customer's booking behavior on any other customer. This suggests the naive $\ensuremath{\mathsf{CR}}$ estimator should correctly recover the global treatment effect. Indeed, the steady-state naive $\ensuremath{\mathsf{CR}}$ estimator becomes: \[ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(\infty | a_C) = \frac{1}{a_C} \frac{2 a_C \lambda \tilde{v}}{\epsilon + 2 \tilde{v}} - \frac{1}{1-a_C} \frac{2 (1-a_C) \lambda v}{\epsilon + 2 v} = \ensuremath{\mathsf{GTE}}. \] In other words, the naive $\ensuremath{\mathsf{CR}}$ estimator is perfectly {\em unbiased}. On the other hand, consider a $\ensuremath{\mathsf{LR}}$ design where listing 1 is (randomly) assigned to treatment, and listing 2 is (randomly) assigned to control. In this design, the steady-state naive $\ensuremath{\mathsf{LR}}$ estimator (with $a_L = 1/2$) becomes: \[ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty | a_L) = \frac{1}{a_L} \frac{\lambda \tilde{v}}{\epsilon + v + \tilde{v}} - \frac{1}{1-a_L} \frac{\lambda v}{\epsilon + v + \tilde{v}}. \] It is clear that in general this will {\em not} be equal to the $\ensuremath{\mathsf{GTE}}$, because there is {\em interference} between the two listings: {\em every} arriving customer sees a market environment that is neither quite global treatment nor global control, and the estimates reflect this imperfection. 
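The two steady-state computations above can be replicated numerically. In the sketch below (in Python, with illustrative parameter values that are not prescribed by the example), the naive $\ensuremath{\mathsf{CR}}$ estimator matches the $\ensuremath{\mathsf{GTE}}$ exactly, while the naive $\ensuremath{\mathsf{LR}}$ estimator does not.

```python
# Demand-constrained limit: the naive CR estimator recovers the GTE, while
# the naive LR estimator is biased. Parameter values are illustrative only.
lam, eps, v, vt = 2.0, 1.0, 0.315, 0.394
a_C = a_L = 0.5

gte = 2 * lam * vt / (eps + 2 * vt) - 2 * lam * v / (eps + 2 * v)

# Naive CR estimator: each customer arm sees its own global market condition.
cr = (1 / a_C) * (2 * a_C * lam * vt / (eps + 2 * vt)) \
     - (1 / (1 - a_C)) * (2 * (1 - a_C) * lam * v / (eps + 2 * v))

# Naive LR estimator: both listings share one consideration set, so the
# treatment listing cannibalizes bookings from the control listing.
lr = (1 / a_L) * (lam * vt / (eps + v + vt)) \
     - (1 / (1 - a_L)) * (lam * v / (eps + v + vt))

print(abs(cr - gte))  # 0 (up to floating point): CR is unbiased here
print(abs(lr - gte))  # strictly positive: LR is biased
```

The $a_C$ factors cancel term by term in the $\ensuremath{\mathsf{CR}}$ estimator, which is exactly why it is unbiased in this limit.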
Even with immediate replenishment, treatment listings compete for customers and ``cannibalize'' bookings from control listings, causing the naive $\ensuremath{\mathsf{LR}}$ estimator to be biased. (Note that such a violation would arise for virtually any reasonable choice model that could be considered.) \subsection{Highly supply-constrained markets} Now we consider the opposite extreme, where the market is heavily supply-constrained; in particular, we consider the hypothetical limit where $\lambda \to \infty$ but $\tau$ remains fixed. In this case, note that a listing that becomes available will {\em nearly instantaneously} be booked; therefore, virtually every arriving customer will find at most one of the two listings available, and their decision of whether to book will be entirely determined by comparison of that available listing against the outside option. As a result, when $\lambda \to \infty$ the steady-state rate at which listing $\ell = 1,2$ is booked approaches $\tau$. We thus require a more refined estimate of this booking rate as $\lambda \to \infty$. Suppose $\lambda$ is large, and suppose listing $\ell$ becomes available. Based on the intuition above, we make the approximation that the listing will be considered in isolation by a succession of customers until it is booked. Customers arrive at rate $\lambda$, and book an available listing with probability $v/(\epsilon + v)$; in other words, in this regime listings compete only with the outside option, and not with each other. Therefore the mean time until such a booking occurs is $(\epsilon + v)/(\lambda v)$; and once booked, the listing remains booked for mean time $1/\tau$, after which it becomes available again. Therefore for large $\lambda$, the long-run average rate at which a listing $\ell = 1,2$ is booked is approximately: \begin{equation} \label{eq:example_rate_sc} \left( \frac{\epsilon + v}{\lambda v} + \frac{1}{\tau}\right)^{-1}.
\end{equation} As expected, this rate approaches $\tau$ as $\lambda \to \infty$. The $\ensuremath{\mathsf{GTE}}$ is thus: \[ \ensuremath{\mathsf{GTE}} = 2\left( \frac{\epsilon + \tilde{v}}{\lambda \tilde{v}} + \frac{1}{\tau}\right)^{-1} - 2 \left( \frac{\epsilon + v}{\lambda v} + \frac{1}{\tau}\right)^{-1}. \] With this observation in hand, suppose we again consider the same $\ensuremath{\mathsf{LR}}$ design where listing 1 is (randomly) assigned to treatment, and listing 2 is (randomly) assigned to control. Since in \eqref{eq:example_rate_sc} there is no influence of one listing on the other, observe that the naive $\ensuremath{\mathsf{LR}}$ estimator (with $a_L = 1/2$) becomes: \[ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty | a_L) = \frac{1}{a_L} \left( \frac{\epsilon + \tilde{v}}{\lambda \tilde{v}} + \frac{1}{\tau}\right)^{-1} - \frac{1}{1-a_L}\left( \frac{\epsilon + v}{\lambda v} + \frac{1}{\tau}\right)^{-1} = \ensuremath{\mathsf{GTE}}. \] In other words, the naive $\ensuremath{\mathsf{LR}}$ estimator is perfectly {\em unbiased}. This is intuitive: in the limit where $\lambda$ is large, since listings do not compete with each other for bookings, there is no interference when we implement the $\ensuremath{\mathsf{LR}}$ design. On the other hand, consider the naive $\ensuremath{\mathsf{CR}}$ design where a fraction $a_C$ of arriving customers are randomized to treatment. In this case we wish to establish the rate at which bookings are made by treatment and control customers respectively. Suppose listing $\ell$ is occupied and becomes available. Define: \[ \zeta(a_C) = a_C \tilde{v}/(\epsilon + \tilde{v}) + (1 - a_C)v/(\epsilon + v). \] This is the probability that an arriving customer books the available listing. Customers arrive at rate $\lambda$, so a mean time $1/(\lambda \zeta(a_C))$ elapses until a booking is made; the listing then remains occupied for mean time $1/\tau$.
Hence, the long-run average rate at which bookings of listing $\ell$ occur is given by $(1/(\lambda \zeta(a_C))+1/\tau)^{-1}$. Now, conditional on a booking, the booking was made by a treatment customer with probability: \[ \eta(a_C) = \frac{1}{\zeta(a_C)} \frac{a_C \tilde{v}}{\epsilon + \tilde{v}}. \] Thus the long-run average rate at which listing $\ell$ is booked by treatment customers is: \[ \eta(a_C) \cdot \left(\frac{1}{\lambda \zeta(a_C)} + \frac{1}{\tau}\right)^{-1} = \left( \frac{\epsilon + \tilde{v}}{a_C \lambda \tilde{v}} + \frac{1}{\eta(a_C) \tau} \right)^{-1}. \] We can use the same logic for the booking rate of control customers, and so we find the naive $\ensuremath{\mathsf{CR}}$ estimator is: \[ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(\infty | a_C) = \frac{2}{a_C} \left( \frac{\epsilon + \tilde{v}}{a_C \lambda \tilde{v}} + \frac{1}{\eta(a_C) \tau}\right)^{-1} - \frac{2}{1-a_C}\left( \frac{\epsilon + v}{(1-a_C)\lambda v} + \frac{1}{(1 - \eta(a_C))\tau}\right)^{-1}. \] In general, this estimator will be {\em biased}, i.e., not equal to $\ensuremath{\mathsf{GTE}}$. The issue is that in this case, customers have a {\em dynamic} influence on each other across the treatment groups: when a listing becomes available, whether or not it is available for booking by a subsequent control customer depends on whether or not a treatment customer had previously booked the listing. In this case, customers compete among each other for listings. This interference across customer groups leads to the biased expression for the naive $\ensuremath{\mathsf{CR}}$ estimator. We note that the $\ensuremath{\mathsf{GTE}}$ and naive $\ensuremath{\mathsf{LR}}$ estimators converge to zero in the limit where $\lambda \to \infty$; this is because the booking rate of each listing becomes $\tau$ in this limit.
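These observations can likewise be checked numerically. The sketch below (in Python, with illustrative parameter values and $\lambda$ large) confirms that the naive $\ensuremath{\mathsf{LR}}$ estimator coincides with the $\ensuremath{\mathsf{GTE}}$ computed from \eqref{eq:example_rate_sc}, while the naive $\ensuremath{\mathsf{CR}}$ estimator remains biased.

```python
# Supply-constrained regime: LR matches the GTE built from the refined
# booking-rate approximation; CR stays biased as lambda grows. Parameter
# values are illustrative only.
lam, tau, eps, v, vt = 1e6, 1.0, 1.0, 0.315, 0.394
a_C = a_L = 0.5

def rate(u):
    # Long-run booking rate of a listing with utility u (eq. example_rate_sc).
    return 1.0 / ((eps + u) / (lam * u) + 1.0 / tau)

gte = 2 * rate(vt) - 2 * rate(v)
lr = (1 / a_L) * rate(vt) - (1 / (1 - a_L)) * rate(v)

zeta = a_C * vt / (eps + vt) + (1 - a_C) * v / (eps + v)  # P(arrival books)
eta = (a_C * vt / (eps + vt)) / zeta                      # P(booker treated)
cr = (2 / a_C) / ((eps + vt) / (a_C * lam * vt) + 1 / (eta * tau)) \
     - (2 / (1 - a_C)) / ((eps + v) / ((1 - a_C) * lam * v)
                          + 1 / ((1 - eta) * tau))

print(abs(lr - gte))  # 0: LR is unbiased under this approximation
print(abs(cr - gte))  # bounded away from 0: CR is biased
```

As $\lambda$ grows, both the $\ensuremath{\mathsf{GTE}}$ and the $\ensuremath{\mathsf{LR}}$ estimate shrink toward zero, but the $\ensuremath{\mathsf{CR}}$ estimate does not, reflecting the dynamic competition between customer groups.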
The naive $\ensuremath{\mathsf{CR}}$ estimator does {\em not} converge to zero in general, however, as $\lambda \to \infty$, because in general $\eta(a_C) = (1/\zeta(a_C)) \cdot (a_C \tilde{v})/(\epsilon + \tilde{v}) \neq (1/\zeta(a_C)) \cdot ((1-a_C) {v})/(\epsilon + {v}) = 1-\eta(a_C)$. For example, if $\tilde{v} > v$ and $a_C = 1/2$, these expressions reveal that treatment customers book more often than control customers. \subsection{Discussion: Violation of SUTVA} Our simple example illustrates that the naive $\ensuremath{\mathsf{LR}}$ estimator is biased when $\lambda \to 0$, and unbiased when $\lambda \to \infty$; and the naive $\ensuremath{\mathsf{CR}}$ estimator is unbiased when $\lambda \to 0$, while it is biased when $\lambda \to \infty$. These findings can be interpreted through the lens of the classical potential outcomes model; an important result from this literature is that when the {\em stable unit treatment value assumption} (SUTVA) holds, then naive estimators of the sort we consider will be unbiased for the true treatment effect. SUTVA requires that the treatment condition of units other than a given customer or listing should not influence the potential outcomes of that given customer or listing. The discussion above illustrates that in the limit where $\lambda \to 0$, there is no interference across customers in the $\ensuremath{\mathsf{CR}}$ design; this is why the naive $\ensuremath{\mathsf{CR}}$ estimator is unbiased. Similarly, in the limit where $\lambda \to \infty$, there is no interference across listings in the $\ensuremath{\mathsf{LR}}$ design; this is why the naive $\ensuremath{\mathsf{LR}}$ estimator is unbiased. On the other hand, the cases where each estimator is biased involve interference across experimental units. \subsection{Simulation details} For every setting, we run 500 simulations with $N=5000$. 
We fix $\alpha=1$, $\epsilon=1$, and $\tau=1$ across all settings and for each we consider three levels of demand: $\lambda= 0.1, 1, 10$. The simulation runs for $T_1$ time units. We drop the burn-in time and calculate the value of the estimator on the time interval $[T_0, T_1]$ for some $0 < T_0 < T_1$. We set $T_0=5$ and $T_1=25$. We further scale time by $\min\{\lambda, \tau\}$ so that the number of ``events'' (i.e., bookings) that occur in the time interval is consistent across different values of $\lambda$ and $\tau$. More specifically, for a given $(\lambda, \tau)$ pair, we calculate the value of the estimator on the time interval $[T_0 / \min\{\lambda, \tau\}, \ T_1/\min\{\lambda, \tau\}]$. To see why this is necessary, note that in the case where there is no outside option and all customers book a listing if one is available, the rate of bookings is $\min\{\lambda, \tau\}$. We heuristically rescale time so that, if we removed the effect of choice set dynamics and utilities and all customers booked any available listing, the number of bookings would be consistent across market balance levels. This rescaling allows us to achieve similar precision in our estimates of bias across different levels of market balance; that is, under this rescaling the size of the standard error of the estimates is similar across different levels of market balance. Because of this rescaling, it is difficult to compare how standard errors of the estimators change when we fix the time horizon and change market balance. We suggest that the reader use the standard error figures to compare the standard error of different estimators \textit{within} the same level of market balance.
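The rescaling rule above is simple enough to state as a one-line helper; a minimal sketch:

```python
def estimation_window(T0, T1, lam, tau):
    # Rescale the estimation window by min{lambda, tau} so that the expected
    # number of booking events is comparable across market-balance levels.
    s = min(lam, tau)
    return (T0 / s, T1 / s)

# Low demand (lambda = 0.1): the window stretches by a factor of 10.
print(estimation_window(5, 25, lam=0.1, tau=1.0))
# High demand (lambda = 10): tau = 1 binds, so the window is unchanged.
print(estimation_window(5, 25, lam=10.0, tau=1.0))
```

The helper name and signature are ours, for illustration; the simulation code in the paper is not shown here.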
For the base setting shown in Section \ref{sec:variance} with homogeneous listings and customers, a customer has utility $v = 0.315$ for a control listing and $\tilde{v}=0.3937$ for a treatment listing, corresponding to a mean field steady state booking probability of 20 percent in the global control model and 23 percent in the global treatment model. This corresponds to a 25 percent increase in utility due to treatment. Unless otherwise noted, all sets of market parameters are chosen to maintain the 20 and 23 percent booking probabilities in global control and treatment, respectively. We fix $a_C=0.5$ in the $\ensuremath{\mathsf{CR}}$ experiments and $a_L=0.5$ in the $\ensuremath{\mathsf{LR}}$ experiments. For the $\ensuremath{\mathsf{TSR}}$ experiments, we vary $a_C$ and $a_L$ as defined in Section \ref{ssec:TSR_est}. For a general $\ensuremath{\mathsf{TSR}}$ experiment with randomization parameters $(a_C, a_L)$, we simulate a completely randomized design on the listing side, fixing $\lfloor N \cdot a_L \rfloor$ listings in treatment and $\lceil N \cdot (1-a_L) \rceil$ listings in control. Since customers arrive over time, we cannot run a completely randomized design on the customer side, and so we simulate Bernoulli randomization on the customers, randomizing each customer to treatment independently with probability $a_C$. For each statistic (bias, standard error, and RMSE), bootstrapped 95th percentile confidence intervals are presented. In each setting, we re-sample 500 simulation runs, with replacement, from the original set of 500 simulations. We calculate the value of the bias, standard error, and RMSE for each estimator and present the 95th percentile intervals of each. \subsection{Market scenarios} \label{ssec:robustness_market_scenarios} In this section, we analyze the effect of changes in customers' choice set parameters on the behavior of the estimators, while holding the market balance fixed with $\lambda = \tau = 1$.
In all of these scenarios, $\ensuremath{\mathsf{TSRI\text{-}2}}$ has the lowest bias and the highest variance. $\ensuremath{\mathsf{TSRI\text{-}2}}$ is the estimator that minimizes $\mathrm{RMSE}$ in these simulations, though this depends on the relative magnitudes of the bias and standard error, which in turn depend on the size of the market. For a larger market, the standard error is smaller than the magnitude of the bias, and vice versa for a smaller market. \textbf{Varying average utility.} In addition to the homogeneous system with one customer type and one listing type presented in Figure \ref{fig:simulations_hom}, we consider two additional settings, again in a homogeneous system, but now scaling the utility $v$. We fix the effect of the intervention such that the lift in utility $\tilde{v}/v = 1.25$ is constant in all three settings, and consider settings with smaller $v$ and larger $v$. Rescaling $v$ changes the steady state booking probabilities in global treatment and control, so that we no longer have a 23 percent booking probability in global treatment and 20 percent in global control. Note that changes to $\alpha$ and/or $\epsilon$ can be equivalently viewed as a rescaling of the utility, so changing the utility also allows us to explore the effect of changes in $\alpha$ or $\epsilon$. \begin{itemize} \item Low utility: $v = 0.155, \tilde{v} = 0.1938$. \item Medium utility: $v = 0.315, \tilde{v} = 0.3937$. \item High utility: $v = 0.62, \tilde{v} = 0.775$. \end{itemize} Results are presented in Figure \ref{fig:robustness_vary_avg_utility}. As the average utility increases (and so does the steady state booking probability of arriving customers), the biases of $\ensuremath{\mathsf{CR}}$, $\ensuremath{\mathsf{LR}}$, $\ensuremath{\mathsf{TSRN}}$, and $\ensuremath{\mathsf{TSRI\text{-}1}}$ all increase, whereas the bias of $\ensuremath{\mathsf{TSRI\text{-}2}}$ is largely consistent across different levels of utility.
The standard errors of the estimators do not change notably as we vary the average utility. A consequence of the change in bias is that the $\mathrm{RMSE}$-minimizing estimator depends on the utility level, with $\ensuremath{\mathsf{TSRI\text{-}2}}$ having the highest $\mathrm{RMSE}$ for $v=0.155$ but the smallest for $v=0.62$. \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/mf_varying_utility_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_bias_varying_utility_lam_1_error_bars_mod} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_se_varying_utility_lam_1_error_bars_mod} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_rmse_varying_utility_lam_1_error_bars_mod} \end{tabular} \caption{(Varying average utility.) Results in balanced market with $\lambda=\tau=1$. Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 500 runs (normalized by GTE). Bottom left: Standard error of estimates, calculated across 500 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $500$ runs (normalized by GTE). } \label{fig:robustness_vary_avg_utility} \end{figure} \textbf{Heterogeneity of customers.} There is one listing type $\theta$ and two customer types $\gamma_1$, $\gamma_2$. We fix the size of the treatment utility increase such that $\tilde{v}_{\gamma_1}(\theta) = 1.25 \cdot v_{\gamma_1}(\theta)$ and $\tilde{v}_{\gamma_2}(\theta) = 1.25 \cdot v_{\gamma_2}(\theta)$. We additionally fix the steady state booking probabilities in global control and global treatment to be 20 percent and 23 percent, respectively. Customers of type $\gamma_2$ have higher utility for the listing than customers of type $\gamma_1$.
We vary the heterogeneity of the customers by varying the ratio $v_{\gamma_2}(\theta)/ v_{\gamma_1}(\theta)$. \begin{itemize} \item Homogeneous: $v_{\gamma_2}(\theta)/ v_{\gamma_1}(\theta)=1$, with $v_{\gamma_1}(\theta) = 0.315, v_{\gamma_2}(\theta) = 0.315$. \item Low level of heterogeneity (Het-L): $v_{\gamma_2}(\theta)/ v_{\gamma_1}(\theta)=3$, with $v_{\gamma_1}(\theta) = 0.17, v_{\gamma_2}(\theta) = 0.51$. \item High level of heterogeneity (Het-H): $v_{\gamma_2}(\theta)/ v_{\gamma_1}(\theta)=3.83$, with $v_{\gamma_1}(\theta) = 0.12, v_{\gamma_2}(\theta) = 0.46$. \end{itemize} Results are presented in Figure \ref{fig:robustness_vary_customer_het}. As the heterogeneity of the customers increases, the bias of the $\ensuremath{\mathsf{CR}}$ estimator remains at similar levels, while the biases of $\ensuremath{\mathsf{LR}}$, $\ensuremath{\mathsf{TSRN}}$, and $\ensuremath{\mathsf{TSRI\text{-}1}}$ increase slightly, though not as appreciably as the increase seen when varying the average utility level. In all cases, $\ensuremath{\mathsf{TSRI\text{-}2}}$ has the lowest bias and highest standard error. In these simulations, $\ensuremath{\mathsf{TSRI\text{-}2}}$ is the estimator that minimizes $\mathrm{RMSE}$, although this can change depending on the size of the market and the relative sizes of the bias and the standard error. \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/mf_varying_customer_het_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_bias_varying_customer_het_lam_1_error_bars_mod} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_se_varying_customer_het_lam_1_error_bars_mod} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_rmse_varying_customer_het_lam_1_error_bars_mod} \end{tabular} \caption{(Varying heterogeneity of customers.) Results in balanced market with $\lambda=\tau=1$.
Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 500 runs (normalized by GTE). Bottom left: Standard error of estimates, calculated across 500 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $500$ runs (normalized by GTE). } \label{fig:robustness_vary_customer_het} \end{figure} \textbf{Heterogeneity of listings.} There are two listing types $\theta_1$ and $\theta_2$ and one customer type $\gamma$. We fix the treatment effect so that $\tilde{v}(\theta) = 1.25 \cdot v(\theta)$ for each listing type $\theta$. We additionally fix the steady state booking probabilities in global control and global treatment to be 20 percent and 23 percent, respectively. We vary the heterogeneity of the listings by letting $v(\theta_2)>v(\theta_1)$ and varying the ratio $v(\theta_2) / v(\theta_1)$. \begin{itemize} \item Homogeneous: $v(\theta_2) / v(\theta_1) = 1$, with $v(\theta_1) = v(\theta_2) = 0.315$. \item Low level of heterogeneity (Het-L): $v(\theta_2) / v(\theta_1) = 1.6$, with $v(\theta_1) = 0.25, v(\theta_2) = 0.4$. \item High level of heterogeneity (Het-H): $v(\theta_2) / v(\theta_1)=6$, with $v(\theta_1) = 0.1, v(\theta_2) = 0.6$. \end{itemize} Results are presented in Figure \ref{fig:robustness_vary_list_het}. As the heterogeneity of the listings increases, the bias of the $\ensuremath{\mathsf{CR}}$ estimator increases significantly, the bias of the $\ensuremath{\mathsf{TSRN}}$ estimator increases to a lesser extent, and the biases of the $\ensuremath{\mathsf{LR}}$, $\ensuremath{\mathsf{TSRI\text{-}1}}$, and $\ensuremath{\mathsf{TSRI\text{-}2}}$ estimators remain roughly constant. In all cases, $\ensuremath{\mathsf{TSRI\text{-}2}}$ has the lowest bias and highest standard error.
In these simulations, $\ensuremath{\mathsf{TSRI\text{-}2}}$ is the estimator that minimizes $\mathrm{RMSE}$, although this can change depending on the size of the market and the relative sizes of the bias and the standard error. \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/mf_varying_list_het_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_bias_varying_list_het_lam_1_error_bars_mod} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_se_varying_list_het_lam_1_error_bars_mod} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_rmse_varying_list_het_lam_1_error_bars_mod} \end{tabular} \caption{(Varying heterogeneity of listings.) Results in balanced market with $\lambda=\tau=1$. Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 500 runs (normalized by GTE). Bottom left: Standard error of estimates, calculated across 500 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $500$ runs (normalized by GTE). } \label{fig:robustness_vary_list_het} \end{figure} \textbf{Heterogeneity of treatment effect.} We consider the impact of heterogeneous treatment effects on the performance of the estimators. We restrict attention to settings in which the treatment increases utility, but the size of the increase can differ across the market. There are two listing types $\theta_1$ and $\theta_2$ and one customer type $\gamma$. We fix the customer's pre-treatment preferences such that the customer prefers $\theta_2$ to $\theta_1$, with $v_\gamma(\theta_2)/v_\gamma(\theta_1) = 1.3$.
We compare three settings: one where the treatment effect has the same multiplicative lift on both listing types, one where the treatment amplifies the existing preference order, and one where the treatment reverses the existing preference order. We fix the steady state booking probabilities in global control and global treatment to be 20 percent and 23 percent, respectively. In all of the scenarios, we have $v(\theta_1) = 0.27, v(\theta_2)=0.351$. \begin{itemize} \item Multiplicative: $\tilde{v}(\theta_1)=0.3375, \tilde{v}(\theta_2)=0.4388$. \item Heterogeneous treatment effects - amplify (HTE-amp): $\tilde{v}(\theta_1)=0.2727, \tilde{v}(\theta_2)=0.5265$. \item Heterogeneous treatment effects - reverse (HTE-rev): $\tilde{v}(\theta_1)=0.432, \tilde{v}(\theta_2)=0.355$. \end{itemize} Results are presented in Figure \ref{fig:robustness_vary_htes}. We find that when the treatment effect amplifies the existing preferences, the $\ensuremath{\mathsf{CR}}$ estimator has a larger bias. This is likely an artifact of the increase in listing heterogeneity after treatment (see the discussion on varying listing heterogeneity and Figure \ref{fig:robustness_vary_list_het}). The bias is similar in the two settings with a multiplicative treatment effect and with heterogeneous treatment effects that reverse the control preference order. In all cases, $\ensuremath{\mathsf{TSRI\text{-}1}}$ has the lowest bias, highest standard error, and lowest $\mathrm{RMSE}$.
\begin{figure} \centering \begin{tabular}{l l} \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/mf_varying_htes_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_bias_varying_htes_lam_1_error_bars_mod} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_se_varying_htes_lam_1_error_bars_mod} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/sims_rmse_varying_htes_lam_1_error_bars_mod} \end{tabular} \caption{(Varying heterogeneity of treatment lift on utilities.) Results in balanced market with $\lambda=\tau=1$. Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 500 runs (normalized by GTE). Bottom left: Standard error of estimates, calculated across 500 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $500$ runs (normalized by GTE). } \label{fig:robustness_vary_htes} \end{figure} \subsection{Robustness with varying market balance} \label{ssec:robustness_vary_lambda} We now replicate Figures \ref{fig:numerics_homogeneous} and \ref{fig:simulations_hom} for different market settings, verifying the results we present in the main text regarding the dependence of the estimators on market balance. In Section \ref{ssec:robustness_market_scenarios}, we consider four scenarios at a fixed market balance, by varying average utility, customer heterogeneity, listing heterogeneity, and treatment effect heterogeneity. In this section, we choose one representative set of parameters from each scenario and show how the performance of the estimators changes as we vary market balance. In particular, we show that the behavior of the estimators in a large market setting is still similar to the behavior obtained in the mean field limit, even in the presence of heterogeneity in the market.
The representative settings shown are low utility (varying average utility), high level of heterogeneity (heterogeneity of customers), high level of heterogeneity (heterogeneity of listings), and heterogeneous treatment effects amplifying existing preference (heterogeneity of treatment effects). The parameters are as defined in Section \ref{ssec:robustness_market_scenarios}. One might suspect that these settings, either with a low booking probability or high levels of heterogeneity, are the ones most likely to differ from the mean field model. We show, however, that the findings in the main text are indeed robust. Across our range of simulations, we see qualitatively similar behavior to our findings in the main text: the naive $\ensuremath{\mathsf{CR}}$ estimator has lower bias for small $\lambda/\tau$ and the naive $\ensuremath{\mathsf{LR}}$ estimator has lower bias for large $\lambda/\tau$, while the $\ensuremath{\mathsf{TSR}}$ estimators interpolate between the two. The $\ensuremath{\mathsf{TSR}}$ estimators offer these bias reductions at the cost of higher variance. Further, the bias of $\ensuremath{\mathsf{CR}}$ is higher at larger $\lambda/\tau$, and the bias of $\ensuremath{\mathsf{LR}}$ is higher at smaller $\lambda/\tau$. We note that although the qualitative findings are similar, the bias in the simulations is not identical to the bias in the mean field model, especially at the smaller and larger values of $\lambda$. We conjecture that this is due to the fact that stochastic effects matter in these extremes. When $\lambda$ is small, since $T$ is fixed, we see a (relatively) smaller number of customer arrivals in the same time horizon; indeed, this is why standard errors are higher overall in this setting. On the other hand, when $\lambda$ is large, then (relatively) few listings will be available. 
In this setting, if the steady state number of available listings is very small relative to $N$, then our mean field approximation will begin to be less accurate, even though the qualitative behavior is similar. The discrepancies may be larger with the $\ensuremath{\mathsf{TSR}}$ estimators, since there are even fewer interactions happening in each of the four cells. \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/avg_v_small_mf_bias_vary_lambda_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/avg_v_small_bias_simulations_error_bar_mod.png} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/avg_v_small_se_simulations_error_bar_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/avg_v_small_rmse_simulations_error_bar_mod.png} \end{tabular} \caption{(Varying market balance - small average utility.) Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 500 runs (normalized by GTE). Bottom left: Standard error of estimates, calculated across 500 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $500$ runs (normalized by GTE). 
} \label{fig:robustness_vary_lambda_small_utility} \end{figure} \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_customers_large_mf_bias_vary_lambda_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_customers_large_bias_simulations_error_bar_mod.png} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_customers_large_se_simulations_error_bar_mod} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_customers_large_rmse_simulations_error_bar_mod} \end{tabular} \caption{(Varying market balance - large heterogeneity between customers.) Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 200 runs (normalized by GTE). Bottom left: Standard error of estimates, calculated across 200 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $200$ runs (normalized by GTE). } \label{fig:robustness_vary_lambda_heterogeneous_customers} \end{figure} \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_listings_large_mf_bias_vary_lambda_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_listings_large_bias_simulations_error_bar_mod} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_listings_large_se_simulations_error_bar_mod} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/het_of_listings_large_rmse_simulations_error_bar_mod} \end{tabular} \caption{(Varying market balance - large heterogeneity between listings.) Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 500 runs (normalized by GTE). 
Bottom left: Standard error of estimates, calculated across 500 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $500$ runs (normalized by GTE). } \label{fig:robustness_vary_lambda_heterogeneous_listings} \end{figure} \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/hte_amp_large_mf_bias_vary_lambda_mod.png} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/hte_amp_large_bias_simulations_error_bar_mod} \\ \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/hte_amp_large_se_simulations_error_bar_mod} & \includegraphics[height=.26\textwidth]{figs_sims/robustness_across_param/hte_amp_large_rmse_simulations_error_bar_mod} \end{tabular} \caption{(Varying market balance - heterogeneous treatment effects amplifying global control preferences.) Top left: Bias of each estimator in the mean field model (normalized by GTE). Top right: Bias of each estimator in simulations, averaged across 200 runs (normalized by GTE). Bottom left: Standard error of estimates, calculated across 200 runs (normalized by GTE). Bottom right: $\mathrm{RMSE}$ of the estimates, calculated across $200$ runs (normalized by GTE). } \label{fig:robustness_vary_lambda_hte_amp} \end{figure} \medskip \subsection{Modifications of consideration sets} Here we consider a modification to the consideration set formation process described in Section \ref{sec:model}. Instead of a customer sampling each listing independently with probability $\alpha$, we now consider a situation where a customer samples a fixed set of $K = 50$ listings into their consideration set, drawn uniformly at random, as long as there are at least $50$ listings available. When there are fewer than $50$ listings, the customer samples all listings into their consideration set.
We note that another scenario to investigate is the one in which listings are drawn proportionally to their utility to the customer, to capture search and recommendation algorithms that recommend listings that the customer is likely to book. We leave deeper investigation of the formation of consideration sets (including sensitivity to the value of $K$) for future work. We calibrate the utilities such that the mean field model where customers sample listings into their choice set with probability $K/N = 50/N$ has a steady state booking probability of 20 percent in global control and 23 percent in global treatment, when $\lambda=\tau$. Note that with this change in consideration set formation, we can no longer guarantee that the finite system converges to the mean field model defined in Section \ref{sec:meanfield}. Thus we study these scenarios through simulations in the finite model and present the bias and standard error of the estimators in the simulations (see Figure \ref{fig:choice_set_k_50}). As noted above, we find qualitatively similar behavior as before. \begin{figure}[H] \centering \includegraphics[height=.27\textwidth]{figs_choice_set/fixed_k_50_bias_simulations_error_bar_mod.png} \includegraphics[height=.27\textwidth]{figs_choice_set/fixed_k_50_se_simulations_error_bar_mod.png} \\ \includegraphics[height=.27\textwidth]{figs_choice_set/fixed_k_50_rmse_simulations_error_bar_mod.png} \caption{(Fixed size $K = 50$ consideration set.) Top left: Average bias (normalized by $\ensuremath{\mathsf{GTE}}$) of each estimator across 500 runs. Top right: Standard error of estimates, calculated across 500 runs (normalized by $\ensuremath{\mathsf{GTE}}$). Bottom: RMSE of estimates, calculated across 500 runs (normalized by $\ensuremath{\mathsf{GTE}}$). There is one customer type and one listing type. Customers have utility $v=6$ for control listings and $\tilde{v}=7.5$ for treatment listings. 
} \label{fig:choice_set_k_50} \end{figure} \section{Analysis of bias} \label{sec:bias} We now utilize the framework defined above to analyze the bias of two common experiment types, $\ensuremath{\mathsf{LR}}$ experiments and $\ensuremath{\mathsf{CR}}$ experiments. Recall from Section \ref{sec:intro} that, in a setting where listings are immediately replenished, all customers see the full set of original listings as available. There is no competition between customers, but there is still competition between listings, and so, intuitively, we expect $\ensuremath{\mathsf{CR}}$ to be unbiased and $\ensuremath{\mathsf{LR}}$ to be biased. Meanwhile, in a setting where listings remain unavailable for some amount of time, the resulting dynamic linkage across customers creates a bias in $\ensuremath{\mathsf{CR}}$ as well. Now, consider the extreme where the market is highly supply-constrained: most customers who arrive see no available listings, but some customers arrive just as a booked listing becomes available and see a single available listing. Such customers compare the listing against the outside option but, since no other listings are available, do not compare listings against each other. In this regime, there is no competition across listings but there is competition between customers, and so we expect $\ensuremath{\mathsf{LR}}$ to be unbiased and $\ensuremath{\mathsf{CR}}$ to be biased. In this section, we formalize this intuition about the behavior of the estimators in the extremes of market balance. We establish two key theoretical results: in the limit of a highly supply-constrained market (where $\lambda/\tau \to \infty$), the naive $\ensuremath{\mathsf{LR}}$ estimator becomes an unbiased estimator of the $\ensuremath{\mathsf{GTE}}$, while the naive $\ensuremath{\mathsf{CR}}$ estimator is biased.
On the other hand, in the limit of a highly demand-constrained market (where $\lambda/\tau \to 0$), the naive $\ensuremath{\mathsf{CR}}$ estimator becomes an unbiased estimator of the $\ensuremath{\mathsf{GTE}}$, while the naive $\ensuremath{\mathsf{LR}}$ estimator is biased. In other words, each of the two naive designs is respectively unbiased in the limits of extreme market imbalance. At the same time, we find empirically that neither estimator performs well in the region of moderate market balance. Inspired by this finding, we consider $\ensuremath{\mathsf{TSR}}$ and associated estimators that naturally interpolate between the two naive designs depending on market balance. Given the findings above, we show that a simple approach to adjusting $a_C$ and $a_L$ as a function of market balance yields performance that balances between the naive $\ensuremath{\mathsf{LR}}$ estimator and the naive $\ensuremath{\mathsf{CR}}$ estimator. Nevertheless, we show there is significant room for improvement by adjusting for the types of interference that arise, using observations from the $\ensuremath{\mathsf{TSR}}$ experiment. In particular, we propose a heuristic for a novel interpolating estimator for the $\ensuremath{\mathsf{TSR}}$ design that aims to correct these biases, and yields surprisingly good numerical performance. \subsection{Theory: Steady-state bias of $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ in unbalanced markets} \label{ssec:ss_theory} In this subsection, we study theoretically the bias of the steady-state naive $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimators in the limits where the market is extremely unbalanced (either demand-constrained or supply-constrained). The key tool we employ is a characterization of the asymptotic behavior of $Q_{ij}(\infty | a_C, a_L)$ as defined in \eqref{eq:Qijinfty} in the limits where $\lambda/\tau \to 0$ and $\lambda/\tau \to \infty$.
We use this characterization in turn to quantify the asymptotic bias of the naive estimators relative to the $\ensuremath{\mathsf{GTE}}$. We derive these results in the next two subsections and provide a simple example in Section \ref{ssec:Qij_homogeneous} to illustrate the effects. \subsubsection{Highly demand-constrained markets.} We start by considering the behavior of the naive estimators in the limit where $\lambda/\tau \to 0$. We begin with the following proposition, which characterizes the behavior of $Q_{ij}(\infty|a_C,a_L)$ as $\lambda/\tau \to 0$. The proof is in Appendix \ref{app:proofs}. \begin{proposition} \label{prop:Qij_demand_constrained} Fix all system parameters except $\lambda$ and $\tau$, and consider a sequence of systems in which $\lambda/\tau \to 0$. Then along this sequence, \begin{equation} \label{eq:Qij_demand_constrained} \frac{1}{\lambda} Q_{ij}(\infty | a_C, a_L) \to \sum_{\theta} \sum_{\gamma} \phi_{\gamma,i} p_{\gamma,i}( \theta,j | \v{\rho} ). \end{equation} \end{proposition} The expression on the right-hand side depends on both $a_C$ and $a_L$, through $\phi_{\gamma,i}$ and $\v{\rho}$ respectively. In particular, we recall that $\phi_{\gamma,1} = a_C \phi_\gamma$, $\phi_{\gamma,0} = (1-a_C) \phi_\gamma$, and $\rho(\theta,1) = a_L \rho(\theta)$, $\rho(\theta,0) = (1-a_L) \rho(\theta)$. In our subsequent discussion of this regime, to emphasize the dependence of $\v{\rho}$ on $a_L$, we will write $\v{\rho}(a_L) = (\rho(\theta,j|a_L), \theta \in \Theta, j = 0,1)$. With this definition, we have $\rho(\theta,1|a_L) = a_L \rho(\theta)$ and $\rho(\theta,0|a_L) = (1-a_L) \rho(\theta)$. This proposition allows us to characterize the bias of both the $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimators in the demand-constrained limit.
Note that Proposition \ref{prop:Qij_demand_constrained} shows that, in this limit, the (scaled) rate of booking behaves {\em as if} the mass of available listings of type $(\theta,j)$ were {\em exactly} $\rho(\theta,j|a_L)$ for every $\theta$ and treatment condition $j = 0,1$. That is, it is as if every arriving customer sees the entire mass of listings as being available, and so bookings are immediately replenished. This observation drives our first main result: in the demand-constrained limit, the $\ensuremath{\mathsf{CR}}$ estimator is unbiased and the $\ensuremath{\mathsf{LR}}$ estimator is biased. \begin{theorem} \label{thm:demand_constrained} Consider a sequence of systems where $\lambda/\tau \to 0$. Then for all $a_C$ such that $0 < a_C < 1$, $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(\infty|a_C)/\lambda - \ensuremath{\mathsf{GTE}}/\lambda \to 0$. However, for $0 < a_L < 1$, generically over parameter values\footnote{Here ``generically'' means for all parameter values, except possibly for a set of parameter values of Lebesgue measure zero.} we have $\lim\ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty|a_L)/\lambda - \ensuremath{\mathsf{GTE}}/\lambda \neq 0$. \end{theorem} The full proof can be found in Appendix \ref{app:proofs}. The key insight is that as the market becomes more demand-constrained, the competition between arriving customers weakens, which leads to less interference in a $\ensuremath{\mathsf{CR}}$ experiment. In the limit, the $\ensuremath{\mathsf{CR}}$ estimator becomes unbiased. On the other hand, in an $\ensuremath{\mathsf{LR}}$ experiment there is a positive mass of control \textit{and} treatment listings available in steady state, leading to competition between listings and bias in the $\ensuremath{\mathsf{LR}}$ estimator. \subsubsection{Heavily supply-constrained markets.} We now characterize the behavior of the naive estimators in the limit where $\lambda/\tau \to \infty$.
We start with the next proposition, where we study the behavior of $Q_{ij}$ as $\lambda/\tau \to \infty$. The proof is in Appendix \ref{app:proofs}. To state the proposition, we define: \[ g_{\gamma,i}(\theta ,j) = \frac{\alpha_{\gamma,i}(\theta,j) v_{\gamma,i}(\theta, j)}{\epsilon_{\gamma,i}}. \] \begin{proposition} \label{prop:Qij_supp_constrained} Fix all system parameters except $\lambda$ and $\tau$, and consider a sequence of systems in which $\lambda/\tau \to \infty$. Along this sequence, the following limit holds: \begin{equation} \label{eq:Qij_supp_constrained} \frac{1}{\tau}Q_{ij}(\infty | a_C, a_L) \to \sum_\theta \left(\frac{\sum_\gamma \phi_{\gamma,i} g_{\gamma,i}(\theta,j)}{\sum_{i' = 0,1} \sum_{\gamma}\phi_{\gamma,i'} g_{\gamma,i'}(\theta,j)}\right) \rho(\theta,j) \nu(\theta). \end{equation} \end{proposition} As before, the expression on the right-hand side depends on both $a_C$ and $a_L$, through $\phi_{\gamma,i}$ and $\v{\rho}$ respectively. In particular, we recall that $\phi_{\gamma,1} = a_C\phi_\gamma$, $\phi_{\gamma,0} = (1-a_C) \phi_\gamma$, and $\rho(\theta,1) = a_L \rho(\theta)$, $\rho(\theta,0) = (1-a_L)\rho(\theta)$. A key intermediate result is that, in the steady state in this limit, $s^*(\theta, j|a_C, a_L) \to 0$ for all $\theta,j$. We know that in the steady state of the mean field limit, the rate at which occupied listings become available must match the rate at which available listings become occupied (flow conservation). We use this fact to show that, to first order in $\tau/\lambda$, in the limit where $\lambda/\tau \to \infty$ we have: \[ s^*(\theta, j | a_C, a_L) \approx \frac{1}{\lambda/\tau} \cdot \frac{\rho(\theta, j) \nu(\theta)}{\sum_{\gamma} \sum_{i = 0,1} \phi_{\gamma,i} g_{\gamma,i}(\theta, j)}. \] The proposition follows by using this limit to characterize the choice probabilities.
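As a numerical sanity check on Proposition \ref{prop:Qij_supp_constrained}, the following sketch evaluates the right-hand side of \eqref{eq:Qij_supp_constrained} for an illustrative homogeneous market (one customer type, one listing type, customer-side randomization with $a_C = 0.5$, and every listing in the treatment cell). The parameter values and all function names are ours; this is an illustration, not the paper's code. Summing the limit of $Q_{ij}/\tau$ over the customer cells $i$ recovers $\sum_\theta \rho(\theta,j)\nu(\theta)$, reflecting the flow-conservation argument above: in this limit, every listing that becomes available is immediately rebooked.

```python
# Right-hand side of the supply-constrained limit of Q_ij / tau, for
# arbitrary customer types `gammas` and listing types `thetas`.

def q_supply_limit(i, j, phi, g, rho, nu, thetas, gammas):
    """phi[(gamma, i)]: mass of type-gamma customers in customer cell i.
    g(gamma, i, theta, j): alpha * v / eps for that customer/listing pair.
    rho[(theta, j)]: mass of type-theta listings in listing cell j.
    nu[theta]: availability parameter nu(theta)."""
    total = 0.0
    for theta in thetas:
        num = sum(phi[(gam, i)] * g(gam, i, theta, j) for gam in gammas)
        den = sum(phi[(gam, ip)] * g(gam, ip, theta, j)
                  for ip in (0, 1) for gam in gammas)
        total += (num / den) * rho[(theta, j)] * nu[theta]
    return total

# Illustration: homogeneous market, customer-side randomization (a_C = 0.5)
# with every listing in the treatment cell (a_L = 1).
v, v_treat, eps, a_C = 0.315, 0.3937, 1.0, 0.5
phi = {(0, 0): 1.0 - a_C, (0, 1): a_C}
rho = {(0, 1): 1.0}
nu = {0: 1.0}
g_fun = lambda gam, i, theta, j: (v_treat if (i == 1 and j == 1) else v) / eps

share_treated = q_supply_limit(1, 1, phi, g_fun, rho, nu, [0], [0])
share_control = q_supply_limit(0, 1, phi, g_fun, rho, nu, [0], [0])
# Flow conservation: the two customer cells account for all rebookings.
assert abs(share_treated + share_control - 1.0) < 1e-12
```

The two shares are unequal (roughly 0.556 for treated customers versus 0.444 for control customers here, even though the two cells have equal mass), which is precisely the competition between customer cells that biases the naive $\ensuremath{\mathsf{CR}}$ estimator in this limit.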
The proof of the preceding proposition reveals that in the limit where $\lambda/\tau \to \infty$, we have: \[ p_{\gamma,i}(\theta,j | \v{s}^*(a_C, a_L)) \approx g_{\gamma,i}(\theta,j) s^*(\theta,j|a_C,a_L) = \frac{ \alpha_{\gamma,i}(\theta,j) v_{\gamma,i}(\theta,j) s^*(\theta,j|a_C, a_L)}{ \epsilon_{\gamma,i}}. \] The preceding expression formalizes our intuition that, in the limit where the market is heavily supply-constrained, it is as if each arriving customer who sees an available listing compares only that listing to the outside option; there is no longer any competition {\em between} listings. We can use the preceding proposition to understand the behavior of the $\ensuremath{\mathsf{GTE}}$, the naive $\ensuremath{\mathsf{LR}}$ estimator, and the naive $\ensuremath{\mathsf{CR}}$ estimator in steady state, as $\lambda/\tau \to \infty$. For simplicity, we hold $\tau$ constant and take the limit $\lambda \to \infty$. In this case, the preceding proposition shows that: \[ Q_{11}(\infty | 1, 1) \to \tau \sum_\theta \rho(\theta) \nu(\theta) ; \ \ Q_{00}(\infty | 0, 0) \to \tau \sum_\theta \rho(\theta) \nu(\theta). \] Thus the global treatment effect $\ensuremath{\mathsf{GTE}} \to 0$ in this limit: bookings occur essentially instantaneously after a listing becomes available, which happens at rate $\tau \sum_\theta \rho(\theta) \nu(\theta)$. We also note that: \[ Q_{11}(\infty | 1,a_L) \to a_L \tau \sum_{\theta} \rho(\theta) \nu(\theta); \ \ Q_{10}(\infty | 1,a_L) \to (1-a_L) \tau \sum_{\theta} \rho(\theta) \nu(\theta).\] The preceding two expressions reveal that the steady-state naive $\ensuremath{\mathsf{LR}}$ estimator $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty|a_L)$ in this setting approaches zero, matching the $\ensuremath{\mathsf{GTE}}$; thus it is asymptotically unbiased. It is also now straightforward to see why the $\ensuremath{\mathsf{CR}}$ design will be biased.
Note that: \[ Q_{11}(\infty | a_C,1) \to a_C \tau \sum_\theta \left(\frac{\sum_\gamma \phi_{\gamma} g_{\gamma,1}(\theta,1)}{\sum_{\gamma}\sum_{i' = 0,1} \phi_{\gamma,i'} g_{\gamma,i'}(\theta,1)}\right) \rho(\theta) \nu(\theta). \] An analogous expression holds for $Q_{01}(\infty | a_C, 1)$. We see that the right-hand side reflects the dynamic interference created between treatment and control customers: just as in our simple example, whether or not an available listing is seen by, e.g., a control customer depends on whether it has previously been booked by a treatment customer. That is, customers compete for listings. As in the example, the naive $\ensuremath{\mathsf{CR}}$ estimator will remain nonzero in general in the limit, even though the $\ensuremath{\mathsf{GTE}}$ approaches zero. We summarize our discussion in the following theorem. \begin{theorem} \label{thm:supp_constrained} Consider a sequence of systems where $\lambda/\tau \to \infty$. Then $\ensuremath{\mathsf{GTE}}/\tau \to 0$, and for all $a_L$ such that $0 < a_L < 1$, there also holds $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty | a_L)/\tau \to 0$. However, for $0 < a_C < 1$, generically over parameter values we have $\lim\ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(\infty|a_C)/\tau - \ensuremath{\mathsf{GTE}}/\tau \neq 0$. \end{theorem} Although the preceding theorem shows that the {\em absolute} bias of the naive $\ensuremath{\mathsf{LR}}$ estimator approaches zero, the {\em relative} bias $(\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty|a_L) - \ensuremath{\mathsf{GTE}})/\ensuremath{\mathsf{GTE}}$ will not in general approach zero; this is because the $\ensuremath{\mathsf{GTE}}$ is also approaching zero, and so the second-order behavior of the naive $\ensuremath{\mathsf{LR}}$ estimator matters.
This is in contrast to the behavior of the naive $\ensuremath{\mathsf{CR}}$ estimator in the demand-constrained limit: in that limit, the $\ensuremath{\mathsf{GTE}}$ remains nonzero in general, and so the naive $\ensuremath{\mathsf{CR}}$ estimator is both absolutely and relatively unbiased. Nevertheless, note that the relative bias of the naive $\ensuremath{\mathsf{LR}}$ estimator will be significantly smaller than the relative bias of the naive $\ensuremath{\mathsf{CR}}$ estimator in the supply-constrained limit, since the naive $\ensuremath{\mathsf{CR}}$ estimator has a nonzero absolute bias in this limit while the $\ensuremath{\mathsf{GTE}}$ approaches zero. We finish this subsection by noting that Theorems \ref{thm:demand_constrained} and \ref{thm:supp_constrained} are driven by fundamental competition dynamics in the respective limiting regimes, and therefore, we believe they hold under a much more general class of models than the ones considered here. We leave this for future investigation. \subsubsection{An example: Homogeneous customers and listings.} \label{ssec:Qij_homogeneous} To more clearly understand the behavior of the bias, in this section we apply Propositions \ref{prop:Qij_demand_constrained} and \ref{prop:Qij_supp_constrained} to a simpler setting where both listings and customers are homogeneous, i.e., there is only one type of customer and one type of listing. This example illustrates the symmetry between the two sides of the market and the resulting implications for bias in marketplace experiments. Let $v$ denote the control utility and $\tilde{v}$ the treatment utility of a customer for a listing. Let $\epsilon$ denote the outside option value of both control and treatment customers, $\alpha_0(0) = \alpha_1(1) = 1$, and $\nu(0) = \nu(1) = 1$. 
% In this example, we consider two limits: one where $\lambda$ is fixed and $\tau \to \infty$ (the demand-constrained regime), and one where $\tau$ is fixed and $\lambda \to \infty$ (the supply-constrained regime). In the first case, when $\tau \to \infty$ with $\lambda$ fixed, if we apply Proposition \ref{prop:Qij_demand_constrained}, we obtain: \begin{equation} \label{eq:Qij_homogeneous_demand_const} \begin{aligned}[c] Q_{00}(\infty | a_C, a_L) \to & \lambda \cdot \frac{(1-a_C)(1-a_L) \rho v}{\epsilon + \rho v}; % \\ Q_{10}(\infty | a_C, a_L) \to &\lambda \cdot \frac{a_C(1-a_L)\rho v}{\epsilon + (1-a_L)\rho v + a_L \rho \tilde{v}};% \\ \end{aligned} \qquad \begin{aligned}[c] Q_{01}(\infty | a_C, a_L) \to &\lambda \cdot \frac{(1-a_C)a_L \rho v}{\epsilon + \rho v};% \\ Q_{11}(\infty | a_C, a_L) \to & \lambda \cdot \frac{a_C a_L \rho \tilde{v}}{\epsilon + (1-a_L)\rho v + a_L \rho \tilde{v}}.% \end{aligned} \end{equation} In this limit, \[ \ensuremath{\mathsf{GTE}} \to \lambda \cdot \left(\frac{\rho \tilde{v}}{\epsilon + \rho \tilde{v}} - \frac{\rho v}{\epsilon + \rho v}\right). \] From these expressions it is clear that the naive $\ensuremath{\mathsf{CR}}$ estimator is unbiased, while the naive $\ensuremath{\mathsf{LR}}$ estimator is biased. Further, the expressions reveal that listing-side randomization creates interference across listings. 
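To make the demand-constrained limit concrete, the following Python sketch evaluates the limiting expressions in \eqref{eq:Qij_homogeneous_demand_const} numerically. The scaled-difference forms of the naive estimators (e.g., $Q_{11}/a_C - Q_{01}/(1-a_C)$ for $\ensuremath{\mathsf{CR}}$) are our reading of the definitions given earlier; all parameter values are illustrative.

```python
# Illustrative check of the homogeneous demand-constrained limit
# (tau -> infinity with lambda fixed), using the limiting expressions
# for Q_ij above. The naive estimator forms are scaled differences of
# the Q_ij, as described in the text; parameter values are arbitrary.

lam, rho, eps = 1.0, 1.0, 1.0   # arrival rate, listing mass, outside option
v, v_tilde = 1.0, 1.3           # control and treatment utilities (positive lift)

def Q_demand_limit(a_C, a_L):
    """Limiting booking rates Q_ij(inf | a_C, a_L) as tau -> infinity."""
    # Treated customers see utility v_tilde on the treated listing mass a_L*rho.
    D1 = eps + (1 - a_L) * rho * v + a_L * rho * v_tilde
    return {
        (0, 0): lam * (1 - a_C) * (1 - a_L) * rho * v / (eps + rho * v),
        (0, 1): lam * (1 - a_C) * a_L * rho * v / (eps + rho * v),
        (1, 0): lam * a_C * (1 - a_L) * rho * v / D1,
        (1, 1): lam * a_C * a_L * rho * v_tilde / D1,
    }

GTE = lam * (rho * v_tilde / (eps + rho * v_tilde) - rho * v / (eps + rho * v))

a_C, a_L = 0.5, 0.5
Q_cr = Q_demand_limit(a_C, 1.0)                 # CR design: all listings treated
Q_lr = Q_demand_limit(1.0, a_L)                 # LR design: all customers treated
est_cr = Q_cr[1, 1] / a_C - Q_cr[0, 1] / (1 - a_C)
est_lr = Q_lr[1, 1] / a_L - Q_lr[1, 0] / (1 - a_L)

print(abs(est_cr - GTE))  # CR matches the GTE in this limit
print(est_lr - GTE)       # LR bias is nonzero (positive here, since v_tilde > v)
```

The treated customers' common denominator `D1` is what couples control and treated listings for the $\ensuremath{\mathsf{LR}}$ design and produces its bias.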
In the second case, when $\lambda \to \infty$ with $\tau$ fixed, if we apply Proposition \ref{prop:Qij_supp_constrained}, we obtain: \begin{equation} \label{eq:Qij_homogeneous_supp_const} \begin{aligned}[c] Q_{00}(\infty | a_C, a_L) \to &\tau (1-a_C)(1-a_L) \rho; % \\ Q_{10}(\infty | a_C, a_L) \to &\tau a_C(1-a_L)\rho;% \end{aligned} \qquad \begin{aligned}[c] Q_{01}(\infty | a_C, a_L) \to &\tau \cdot \frac{(1-a_C)v}{(1-a_C)v + a_C \tilde{v}} a_L \rho; % \\ Q_{11}(\infty | a_C, a_L) \to &\tau \cdot \frac{a_C\tilde{v}}{(1-a_C)v + a_C \tilde{v}} a_L \rho.% \end{aligned} \end{equation} In this limit, $\ensuremath{\mathsf{GTE}} \to 0$. From these expressions it is clear that the naive $\ensuremath{\mathsf{CR}}$ estimator is biased, while the naive $\ensuremath{\mathsf{LR}}$ estimator is unbiased. Further, these expressions also reveal that customer-side randomization creates interference across customers. Interestingly, these expressions highlight a remarkable symmetry. As expected, in the limit of a highly demand-constrained market, customers choose among listings; thus there is competition for customers among listings, and this is the source of potential interference in $\ensuremath{\mathsf{LR}}$ designs. The expressions reveal that in the limit of a highly supply-constrained market, it is {\em as if} listings choose among customers; thus there is competition among customers, and this is the source of potential interference in $\ensuremath{\mathsf{CR}}$ designs. Indeed, the expressions for $Q_{01}$ and $Q_{11}$ in \eqref{eq:Qij_homogeneous_supp_const} take the form of a multinomial logit choice model of listings for customers. We believe this type of symmetry provides important insight into the nature of experimental design in two-sided markets, and in particular the roots of the interference typically observed in such settings. 
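The mirror-image computation for the supply-constrained limit, using \eqref{eq:Qij_homogeneous_supp_const}, makes the symmetry explicit; again the scaled-difference estimator forms are our reading of the earlier definitions and the parameter values are illustrative.

```python
# Illustrative check of the homogeneous supply-constrained limit
# (lambda -> infinity with tau fixed), using the limiting expressions
# for Q_ij above; parameter values are arbitrary.

tau, rho = 1.0, 1.0
v, v_tilde = 1.0, 1.3           # control and treatment utilities (positive lift)

def Q_supply_limit(a_C, a_L):
    """Limiting booking rates Q_ij(inf | a_C, a_L) as lambda -> infinity."""
    # Treated listings are split between customer groups via a
    # multinomial-logit-like share: it is as if listings choose customers.
    share = (1 - a_C) * v + a_C * v_tilde
    return {
        (0, 0): tau * (1 - a_C) * (1 - a_L) * rho,
        (1, 0): tau * a_C * (1 - a_L) * rho,
        (0, 1): tau * (1 - a_C) * v / share * a_L * rho,
        (1, 1): tau * a_C * v_tilde / share * a_L * rho,
    }

GTE = 0.0                       # bookings saturate at the replenishment rate

a_C, a_L = 0.5, 0.5
Q_cr = Q_supply_limit(a_C, 1.0)
Q_lr = Q_supply_limit(1.0, a_L)
est_cr = Q_cr[1, 1] / a_C - Q_cr[0, 1] / (1 - a_C)
est_lr = Q_lr[1, 1] / a_L - Q_lr[1, 0] / (1 - a_L)

print(est_lr - GTE)   # LR is unbiased in this limit
print(est_cr - GTE)   # CR bias is nonzero (positive here, since v_tilde > v)
```

Here the `share` term plays the role that `D1` played in the demand-constrained sketch: it couples treatment and control customers in the $\ensuremath{\mathsf{CR}}$ design and produces its bias.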
\subsubsection{Sign of the bias in $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimates.} Theorems \ref{thm:demand_constrained} and \ref{thm:supp_constrained} state that the $\ensuremath{\mathsf{LR}}$ estimate is biased in the demand constrained limit and the $\ensuremath{\mathsf{CR}}$ estimate is biased in the supply constrained limit, but make no claim as to whether the estimators overestimate or underestimate the $\ensuremath{\mathsf{GTE}}$. In general, we cannot provide guarantees for the sign of the bias, as it depends on the distribution of listings, the rates at which listings replenish, and the lift on the individual $\alpha_\gamma(\theta)$ and $v_\gamma(\theta)$ induced by the interventions. However, for a broad class of interventions, we can show that the $\ensuremath{\mathsf{LR}}$ estimate overestimates in the demand constrained limit and the $\ensuremath{\mathsf{CR}}$ estimate overestimates in the supply constrained limit. In such cases where we know the bias to be positive, $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ experiments can be used to bound the size of the $\ensuremath{\mathsf{GTE}}$. We call an intervention \textit{positive} if $\tilde{\alpha}_\gamma(\theta)\tilde{v}_\gamma(\theta) > \alpha_\gamma(\theta)v_\gamma(\theta)$ for all $\gamma$ and $\theta$. Such an intervention can be viewed as an improvement on the platform for all customer and listing type pairs, since for each pair at least one of the customer's consideration probability or utility for the listing type must increase. 
Note that this class of interventions is broad enough to allow for heterogeneous treatment effects across different $(\gamma, \theta)$ pairs.\footnote{A symmetric analysis can be applied for ``negative" interventions, where $\tilde{\alpha}_\gamma(\theta)\tilde{v}_\gamma(\theta) < \alpha_\gamma(\theta)v_\gamma(\theta)$ for all $\gamma$ and $\theta$; though, of course, interventions known to be negative in advance are less likely to be desirable from the platform's perspective.} For positive interventions, straightforward applications of Propositions \ref{prop:Qij_demand_constrained} and \ref{prop:Qij_supp_constrained} show that $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}$ overestimates the $\ensuremath{\mathsf{GTE}}$ in the demand constrained limit and $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}$ overestimates the $\ensuremath{\mathsf{GTE}}$ in the supply constrained limit. The result follows from the fact that in a customer-side (resp., listing-side) experiment in a supply constrained (resp., demand-constrained) setting, the individuals in the treatment group face less competition than they would in the global treatment setting, whereas the individuals in the control group face more competition than in the global control setting. \begin{proposition} \label{prop:cr_lr_positive_bias} Suppose that the treatment is positive, i.e., $\tilde{\alpha}_\gamma(\theta)\tilde{v}_\gamma(\theta) > \alpha_\gamma(\theta)v_\gamma(\theta)$ for all $\gamma, \theta$. Then we have the following. \begin{enumerate} \item $\ensuremath{\mathsf{LR}}$ bias when demand constrained: Consider a sequence of systems where $\lambda/\tau \to 0$. For any $0 < a_L < 1$, we have $\lim\ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty|a_L)/\lambda - \ensuremath{\mathsf{GTE}}/\lambda > 0$. \item $\ensuremath{\mathsf{CR}}$ bias when supply constrained: Consider a sequence of systems where $\lambda/\tau \to \infty$. 
For any $0 < a_C < 1$, we have $\lim\ \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(\infty|a_C)/\tau - \ensuremath{\mathsf{GTE}}/\tau > 0$. \end{enumerate} \end{proposition} Further, we find through simulations that $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ overestimate the $\ensuremath{\mathsf{GTE}}$ with positive treatments in intermediate ranges of market balance, for all parameter regimes that we study in the examples in this section (see Figure \ref{fig:simulations_hom} as well as Appendix \ref{app:simulations}). We do find in some cases that the $\ensuremath{\mathsf{TSRI}}$ estimators underestimate the $\ensuremath{\mathsf{GTE}}$, and so, since we plot bias on a log scale, we report the absolute value of the bias. % \subsection{Discussion: Violation of SUTVA} Our results on the bias of the naive $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ experiments can be interpreted through the lens of the classical potential outcomes model. An important result from this literature is that when the {\em stable unit treatment value assumption} (SUTVA) holds, then naive estimators of the sort we consider will be unbiased for the true treatment effect. SUTVA requires that the treatment condition of units other than a given customer or listing should not influence the potential outcomes of that given customer or listing. The discussion above illustrates that in the limit where $\lambda/\tau \to 0$, there is no interference across customers in the $\ensuremath{\mathsf{CR}}$ design; this is why the naive $\ensuremath{\mathsf{CR}}$ estimator is unbiased. Similarly, in the limit where $\lambda/\tau \to \infty$, there is no interference across listings in the $\ensuremath{\mathsf{LR}}$ design; this is why the naive $\ensuremath{\mathsf{LR}}$ estimator is unbiased. On the other hand, the cases where each estimator is biased involve interference across experimental units. 
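Returning to Proposition \ref{prop:cr_lr_positive_bias}, the sketch below checks the demand-constrained claim for two customer types with heterogeneous (but positive) lifts. The per-type limiting expressions are our direct generalization of \eqref{eq:Qij_homogeneous_demand_const} with $\alpha \equiv 1$ (so a positive intervention reduces to $\tilde{v}_\gamma > v_\gamma$); the types, utilities, and lifts are illustrative.

```python
# Demand-constrained limit with two customer types and heterogeneous
# positive lifts (v_tilde > v for each type, alpha = 1 throughout).
# The per-type expressions generalize the homogeneous formulas in the
# text; all parameter values are illustrative.

lam, rho, eps = 1.0, 1.0, 1.0
phi = {"g1": 0.5, "g2": 0.5}          # customer type mix
v = {"g1": 1.0, "g2": 0.5}            # control utilities
v_tilde = {"g1": 1.5, "g2": 0.55}     # treatment utilities (both lifted)

def lr_estimate(a_L):
    """Naive LR estimate in the demand-constrained limit (a_C = 1)."""
    est = 0.0
    for g in phi:
        # Common denominator couples treated and control listings.
        D = eps + (1 - a_L) * rho * v[g] + a_L * rho * v_tilde[g]
        Q11 = lam * phi[g] * a_L * rho * v_tilde[g] / D
        Q10 = lam * phi[g] * (1 - a_L) * rho * v[g] / D
        est += Q11 / a_L - Q10 / (1 - a_L)
    return est

GTE = sum(
    lam * phi[g] * (rho * v_tilde[g] / (eps + rho * v_tilde[g])
                    - rho * v[g] / (eps + rho * v[g]))
    for g in phi
)

for a_L in (0.25, 0.5, 0.75):
    assert lr_estimate(a_L) > GTE     # positive intervention => LR overestimates
```

The overestimation holds for every $a_L$ checked, consistent with part 1 of the proposition; by the symmetry discussed above, the analogous computation in the supply-constrained limit gives positive $\ensuremath{\mathsf{CR}}$ bias.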
\input{TSR} \section{Comparison with cluster-randomized experiments} \label{sec:cluster} In this section we compare the $\ensuremath{\mathsf{TSR}}$ approach to existing approaches to reduce bias in marketplace experiments. One such approach is to run a cluster-randomized experiment, which changes the unit of randomization in order to reduce interference effects across units. The typical approach is to divide the marketplace into clusters, such as geographical regions, so that there is less interaction of market participants across different clusters. All participants within a cluster receive the same treatment condition. The platform then estimates the $\ensuremath{\mathsf{GTE}}$ by comparing the outcomes within the treatment clusters versus the outcomes in the control clusters. It is important to note that many markets and social networks are highly connected and it is not possible to avoid all interference across clusters; see, e.g., \cite{holtz2020reducing} for an example in the context of Airbnb. Thus cluster-randomized experiments will reduce but not fully remove the bias. To compare the performance of the cluster-randomized and $\ensuremath{\mathsf{TSR}}$ approaches, we use our existing model to define a regime that gives the best-case performance for cluster-randomized experiments, where there are tightly clustered preferences in the marketplace and the platform knows ex-ante the true clusters (without having to learn them). The simulations suggest that cluster-randomized estimators offer substantial bias reductions when the market is tightly clustered, but these improvements diminish if the market becomes more interconnected. The $\ensuremath{\mathsf{TSR}}$ estimators, however, offer bias reductions in both clustered and interconnected markets and, in interconnected markets, are less biased than the cluster-randomized estimator. 
In our simulations, the variance of the cluster-randomized estimator is lower than that of $\ensuremath{\mathsf{TSRI\text{-}2}}$ in our example, although the variance of the cluster-randomized estimator will likely change if we deviate from this best-case scenario with perfect knowledge of the clusters, identical listings within clusters, and uniform treatment effects across different clusters. Hence, while our model for market clusters is stylized, we believe our results suggest that $\ensuremath{\mathsf{TSR}}$ designs can be a useful alternative to cluster-randomized experiments in interconnected markets. \subsection{Market setup for cluster randomization} We consider the case where clusters are defined on the listing side, so that the cluster-randomized experiment is expected to improve upon a listing-randomized experiment.\footnote{Because we analyze a model where customers are short-lived while listings remain on the platform, it is likely that the platform has more information on the listings, and is better able to learn clusters on the listing side.} To induce a clustered structure, we model a setting with two customer types and two listing types, where each type of customer prefers a different type of listing. Formally, there are customer types $\{\gamma_1, \gamma_2\}$ and listing types $\{\theta_1, \theta_2\}$. All customers consider all listings ($\alpha_\gamma(\theta)=1$ for all $\gamma, \theta$) but customers have different utilities for different listings.\footnote{Alternatively, we can induce a clustered structure by modifying the consideration probabilities $\alpha_\gamma(\theta)$. 
Both approaches of modifying the $\alpha_\gamma(\theta)$ and modifying the $v_\gamma(\theta)$ are equivalent in the mean field model.} The global control utilities $v_\gamma(\theta)$ have the following form: \medskip \begin{center} \begin{tabular}{ c | c c } & $\theta_1$ & $\theta_2$ \\ \hline $\gamma_1$ & $x$ & $y$ \\ $\gamma_2$ & $y$ & $x$ \\ \end{tabular} \end{center} \smallskip where $x\geq y \geq 0$. Note that if $y=0$, then the market can be perfectly decomposed into two sub-markets, where customers of type $\gamma_1$ (resp., $\gamma_2$) \textit{only} book listings of type $\theta_1$ (resp., $\theta_2$). If $y=x$, then each customer prefers both listings equally. Thus we can interpret the ratio $y/x$ as a measure of how equally a customer prefers both products, where intuitively the market is tightly clustered when $y/x$ is small. We call $y/x$ the \textit{preference ratio}. The platform then runs a cluster randomized experiment where it first assigns listings to clusters and then randomizes entire clusters to either treatment or control. In practice, the platform must \textit{learn} how to create the clusters, likely through observational data in the global control setting,\footnote{See \cite{holtz2020reducing} in the context of marketplaces and \cite{Ugander13} in the context of social networks.} but in these simulations, we assume that the platform observes the cluster structure perfectly. The platform assigns all $\theta_1$ listings to one cluster and $\theta_2$ listings to another, and runs a completely randomized design on the clusters, assigning one of the clusters to treatment and one to control. For simplicity, assume that the intervention has a multiplicative lift $\delta>1$ on all customer-listing pairs, so that the treatment utilities satisfy $\tilde{v}_\gamma(\theta) = \delta v_\gamma(\theta)$. 
% The cluster-randomized estimator $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{Cluster}}}$, with clusters defined on the listing side, compares the (scaled) rate of bookings of listings in treatment clusters to the rate of bookings of listings in control clusters. Formally, in the mean field setting, once the clusters are randomized, let $Z$ denote the mass of listings assigned to a treatment cluster. % Then \begin{equation} \label{eq:naive_cluster} \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{Cluster}}}(T | Z) = \frac{Q_{11}(T | 1,Z)}{Z} - \frac{Q_{10}(T | 1,Z)}{1-Z}. \end{equation} We can similarly define an analogous estimator in the finite model. The full set of parameters is as follows. The market has an equal number of listings of both types, so that $m^{(N)}(\theta_1) = m^{(N)}(\theta_2) = 0.5$, and the same arrival rate for both customer types ($\lambda_{\gamma_1}^{(N)} = \lambda_{\gamma_2}^{(N)}$). We set $x=0.5$ and vary $y \in [0,1]$. We set $\delta=1.3$. We fix $\tau=1$ and consider $\lambda \in \{0.1, 1, 10\}$. \subsection{Results} Figure \ref{fig:cluster_vary_pref_ratio} shows how the performance of the estimators changes when we vary the preference ratio (at a fixed market balance). We choose $\ensuremath{\mathsf{TSRI\text{-}2}}$ as a representative estimator for the $\ensuremath{\mathsf{TSR}}$ approach; see Figure \ref{fig:cluster_vary_pref_ratio_all_estimators} in Appendix \ref{app:cluster} for a comparison with all estimators. We see a clear takeaway that, perhaps unsurprisingly, the cluster randomized estimator offers substantial bias improvements when the market is tightly clustered (i.e., a customer strongly prefers one type of item over another) but offers little reduction in bias when the market is more interconnected. 
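To complement the simulations, the following sketch evaluates the cluster-randomized estimator \eqref{eq:naive_cluster} against the $\ensuremath{\mathsf{GTE}}$ in the demand-constrained limit of the two-type market above. It assumes all listing mass remains available ($s = \rho$) and multinomial logit choice, mirroring the earlier demand-constrained expressions; the closed forms are our reconstruction under those assumptions, with illustrative parameters.

```python
# Cluster-randomized estimator vs. GTE in the demand-constrained limit
# for the two-type clustered market (utility x on the preferred listing
# type, y on the other). Assumes all listing mass stays available
# (s = rho) with MNL choice, as in the earlier demand-constrained
# expressions; this is a reconstruction, not the paper's derivation.

lam, eps, delta = 1.0, 1.0, 1.3
x = 0.5
m = 0.5                          # mass of each listing type; theta_1 cluster treated

def rates(y):
    # v[g][theta]: gamma_1 prefers theta_1, gamma_2 prefers theta_2
    v = {"g1": {"t1": x, "t2": y}, "g2": {"t1": y, "t2": x}}
    phi = {"g1": 0.5, "g2": 0.5}

    # Cluster design: all theta_1 listings treated (Z = m), theta_2 control.
    Q_treat = Q_ctrl = 0.0
    for g in phi:
        D = eps + m * delta * v[g]["t1"] + m * v[g]["t2"]
        Q_treat += lam * phi[g] * m * delta * v[g]["t1"] / D
        Q_ctrl += lam * phi[g] * m * v[g]["t2"] / D
    est_cluster = Q_treat / m - Q_ctrl / (1 - m)

    def rate(scale):
        # Total booking rate with all utilities scaled (global treat/control).
        return sum(
            lam * phi[g] * (m * scale * v[g]["t1"] + m * scale * v[g]["t2"])
            / (eps + m * scale * v[g]["t1"] + m * scale * v[g]["t2"])
            for g in phi
        )
    gte = rate(delta) - rate(1.0)
    return est_cluster, gte

est0, gte0 = rates(0.0)          # perfectly clustered market (y = 0)
est1, gte1 = rates(x)            # fully interconnected market (y = x)
print(abs(est0 - gte0))          # ~0: no bias when clusters are perfect
print(est1 - gte1)               # positive bias when market is interconnected
```

In this limiting calculation the cluster estimator is exactly unbiased at $y = 0$ and inherits the usual listing-side interference bias as $y/x \to 1$, matching the qualitative takeaway from the simulations.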
In particular, the cluster-randomized estimator outperforms the $\ensuremath{\mathsf{TSR}}$ estimators when the market is tightly clustered, while the $\ensuremath{\mathsf{TSR}}$ estimators outperform the cluster-randomized estimator when the market is more connected. The standard error changes little across the preference ratios, with the standard error of the cluster-randomized estimate lower than that of $\ensuremath{\mathsf{TSRI\text{-}2}}$, subject to the caveat above regarding deviations from this best-case scenario. In our scenario, the cluster-randomized estimator has lower $\mathrm{RMSE}$ for tightly clustered markets and higher $\mathrm{RMSE}$ otherwise. \begin{figure} \centering \begin{tabular}{l l } \includegraphics[height=.28\textwidth]{figs_sims/cluster/sims_bias_varying_pref_ratio_lam_1_error_bars_mod} & \includegraphics[height=.28\textwidth]{figs_sims/cluster/sims_bias_over_LR_varying_pref_ratio_lam_1_error_bars_mod.png} \\ \includegraphics[height=.28\textwidth]{figs_sims/cluster/sims_se_varying_pref_ratio_lam_1_error_bars_mod.png} & \includegraphics[height=.28\textwidth]{figs_sims/cluster/sims_rmse_varying_pref_ratio_lam_1_error_bars_mod} \end{tabular} \caption{Simulations of cluster-randomized estimator and TSR estimators as preference ratio $y/x$ varies. Relative demand is fixed at $\lambda/\tau=1$. We fix $x=0.5$ and vary $y$. Bootstrapped 95\% percentile confidence intervals are provided for each statistic. } \label{fig:cluster_vary_pref_ratio} \end{figure} \section{Conclusion} \label{sec:conclusion} This paper proposes a general mean field framework to study the dynamics of inventory bookings in two-sided platforms, and we leverage this framework to study the design and analysis of a number of different experimental designs and estimators. 
We study both commonly used designs and estimators ($\ensuremath{\mathsf{CR}}$, $\ensuremath{\mathsf{LR}}$), and also introduce a family of more general two-sided randomization designs and estimators ($\ensuremath{\mathsf{TSR}}$). Our work sheds light on the market conditions in which each approach to estimation performs best, based on the relative supply and demand balance in the marketplace. For bias minimization, we suggest two directions for future work. The first is further optimization of the $\ensuremath{\mathsf{TSR}}$ design as a standalone experiment design. We have proposed three natural $\ensuremath{\mathsf{TSR}}$ estimators, but the space of both designs and estimators is much richer and it is worth asking which are optimal with respect to bias and variance, as well as how this answer may change with differing market conditions. The second direction is to develop the $\ensuremath{\mathsf{TSR}}$ design as a method to debias one-sided experiments. The design allows us to measure competition effects between customers and between listings; this observation suggests that these measurements can be used to approximately debias existing $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ experiments, providing another route for platforms to utilize two-sided randomization designs. To make technical progress in this paper, we employed several simplifying assumptions on the choice model and booking behavior. % We believe that our core insight of market balance mediating competition effects, and thus affecting the resulting bias in an experiment, extends to other settings as well. We hypothesize, however, that some of our results are more robust to modeling choices than others. For example, the result that the $\ensuremath{\mathsf{CR}}$ estimator is unbiased in the demand constrained limit depends only on the fact that booked listings are replenished in between customer arrivals, and likely extends to any reasonable choice model. 
On the other hand, the result that the absolute bias of the $\ensuremath{\mathsf{LR}}$ estimator approaches 0 in the supply constrained limit may be more sensitive to different choice models, and in particular how the customer weighs options on the platform compared to the outside option. In this supply constrained limit, we conjecture that the relative performance of the two experiment types still holds even in modified settings; that is, the $\ensuremath{\mathsf{LR}}$ estimator has lower bias than the $\ensuremath{\mathsf{CR}}$ estimator. Further, we believe that the approach of using $\ensuremath{\mathsf{TSR}}$ to observe competition effects and heuristically debias estimators also extends beyond our model. Another practical consideration for platforms is that not all experiment designs are suitable for all types of interventions. In this paper, we have largely focused on interventions such as user interface changes that change how an \textit{individual} customer perceives an \textit{individual} listing. There are also other interventions that operate between \textit{subsets} of customers or \textit{subsets} of listings. For example, a modification of the ranking algorithm over listings operates not on an individual customer-listing pair, but rather changes how a customer interacts with a subset of listings. This intervention is more conducive to a $\ensuremath{\mathsf{CR}}$ experiment than an $\ensuremath{\mathsf{LR}}$ or $\ensuremath{\mathsf{TSR}}$ experiment. Beyond these feasibility constraints, it remains an open question whether certain types of experiments lead to lower bias in different classes of interventions. Finally, we emphasize the importance of inference in these settings, which we do not study in this paper. In practice, standard errors are also estimated ``naively": they are typically computed assuming independence of observations. However, because of interference, observations are clearly not independent. 
% In these settings, how biased might the standard error estimates be? How can experimenters derive valid confidence intervals in these settings? Such questions are critical for any platform seeking to control the false positive and false negative rates arising from its experiments. \section{Experiments: Designs and estimators} \label{sec:experiments} In this section, we leverage the framework developed in the previous section to undertake a study of experimental designs a platform might employ to test interventions in the marketplace. For simplicity, we focus on interventions that change the {\em choice probability} of one or more types of customers for one or more types of listings, and we assume the platform is interested in estimating the resulting rate at which bookings take place. However, we believe the same approach we employ here can be applied to study other types of interventions and platform objectives as well. % Formally, the platform's goal is to design experiments with associated estimators to assess the performance of the intervention (the {\em treatment}), relative to the status quo (the {\em control}). In particular, the platform is interested in determining the steady-state rate of booking when the entire market is in the treatment condition (i.e., {\em global treatment}), compared to the steady-state rate of booking when the entire market is in the control condition (i.e., {\em global control}). We refer to the difference of these two rates as the {\em global treatment effect} ($\ensuremath{\mathsf{GTE}}$). We focus on these steady-state quantities as a platform is typically interested in the long-run effect of an intervention. Two types of canonical experimental designs are typically employed in practice: {\em listing-side randomization} (denoted $\ensuremath{\mathsf{LR}}$) and {\em customer-side randomization} (denoted $\ensuremath{\mathsf{CR}}$). 
In the former design, listings are randomized to treatment or control; in the latter design, customers are randomized to treatment or control. Each design also has an associated natural ``naive" estimator of booking rates, that is, a (scaled) difference-in-means estimator for the two groups. As we discuss, these estimators will typically be biased, due to interference effects. The $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ designs are special cases of a novel, more general {\em two-sided randomization} ($\ensuremath{\mathsf{TSR}}$) design that we introduce in this work, where {\em both} listings and customers are randomized to treatment and control simultaneously. As we discuss, this type of experiment can be combined with design and analysis techniques to reduce bias. On the design side, $\ensuremath{\mathsf{TSR}}$ designs allow us to construct experiments that interpolate between $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ designs in such a way that bias is reduced. On the analysis side, $\ensuremath{\mathsf{TSR}}$ designs allow us to observe different competition effects that we can use to heuristically debias our estimators. ($\ensuremath{\mathsf{TSR}}$ designs were also independently introduced and studied in recent work by \cite{bajari2019double}; see Section \ref{sec:related} for discussion.) In the next subsection we develop the relevant formalism for these designs; we then subsequently define natural ``naive" estimators that are commonly used for the $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ designs, as well as an estimator for a $\ensuremath{\mathsf{TSR}}$ design. In the remainder of the paper we study the bias of these different designs and estimators under different market conditions. 
\subsection{Experimental design} Since $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ are special cases of a $\ensuremath{\mathsf{TSR}}$ design, we first describe how to embed $\ensuremath{\mathsf{TSR}}$ experimental designs into our model, and then subsequently describe $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ designs in our model. \textbf{Treatment condition.} We consider a binary treatment: every customer and listing in the market will either be in {\em treatment} or {\em control}. (Generalization of our model to more than two treatment conditions is relatively straightforward.) We model the treatment condition by expanding the set of customer and listing types. For every customer type $\gamma \in \Gamma$, we create two new customer types $(\gamma, 0), (\gamma, 1)$; and for every listing type $\theta \in \Theta$, we create two new listing types $(\theta, 0), (\theta, 1)$. The types $(\gamma, 0), (\theta, 0)$ are {\em control types}; the types $(\gamma, 1), (\theta, 1)$ are {\em treatment types}. \textbf{Two-sided randomization.} In this design, randomization takes place on both sides of the market simultaneously. We assume that a fraction $a_C$ of customers are randomized to treatment, and a fraction $1 - a_C$ to control, independently; and we assume that a fraction $a_L$ of listings are randomized to treatment, and a fraction $1 - a_L$ to control, independently. % \textbf{Treatment as a choice probability shift.} Examples of interventions that platforms may wish to test include the introduction of higher quality photos for a hotel listing on a lodging site, showing previous job completion rates of a freelancer on an online labor market, or reducing the friction for an item in the checkout flow. Such interventions change the choice probability of listings by customers either through the consideration probabilities or perceived utility for a listing. 
In particular, we continue to assume the multinomial logit choice model, and we assume that for a type $\gamma$ customer and a type $\theta$ listing that have been given the intervention, the utility becomes $\tilde{v}_\gamma(\theta) > 0$ and the probability of inclusion in the consideration set becomes $\tilde{\alpha}_\gamma(\theta) > 0$. Since we focus on changes in choice probabilities, we assume that the holding time parameter of a listing of type $\theta$ is $\nu(\theta)$, regardless of whether it is assigned to treatment or control.\footnote{Our current work allows us to relatively easily incorporate $\nu$ depending on treatment condition of the listing, and as such we can extend our results to study $\ensuremath{\mathsf{LR}}$ designs where $\nu$ varies with treatment condition. In general, however, when customers are also randomized to treatment or control, the holding time parameter of a listing should also depend on the treatment condition of the customer who booked that listing. Adapting our framework to incorporate this possibility remains an interesting direction for future work.} In the $\ensuremath{\mathsf{TSR}}$ designs that we consider, a key feature is that the intervention is applied only when {\em a treated customer interacts with a treated listing}. For example, when an online labor marketplace decides to show previous job completion rates of a freelancer as an intervention, only treated customers can see these rates, and they only see them when they consider treated freelancers. 
We model this by redefining quantities in the experiment as follows: \begin{align} v_{\gamma,0}(\theta, 0) = v_{\gamma, 1}(\theta, 0) = v_{\gamma,0}(\theta, 1) = v_\gamma(\theta);&\quad v_{\gamma,1}(\theta, 1) = \tilde{v}_\gamma(\theta);\label{eq:v_expt}\\ \alpha_{\gamma,0}(\theta, 0) = \alpha_{\gamma, 1}(\theta, 0) = \alpha_{\gamma,0}(\theta, 1) = \alpha_\gamma(\theta);&\quad \alpha_{\gamma,1}(\theta, 1) = \tilde{\alpha}_\gamma(\theta); \label{eq:alpha_expt}\\ \epsilon_{\gamma,0} = \epsilon_{\gamma,1} &= \epsilon_\gamma; \label{eq:epsilon_expt}\\ \nu(\theta,0) = \nu(\theta,1) &= \nu(\theta).\label{eq:nu_expt} \end{align} This definition is a natural way to incorporate randomization on each side of the market. However, we remark here that it is not necessarily canonical; for example, an alternate design would be one where the intervention is applied when {\em either} the customer has been treated {\em or} the listing has been treated. Even more generally, the design might randomize whether the intervention is applied, based on the treatment condition of the customer and the listing. In all likelihood, the relative advantages of these designs would depend not only on the bias they yield in any resulting estimators, but also on the variance characteristics of those estimators. We leave further study and comparison of these designs to future work. % \textbf{Customer-side and listing-side randomization.} Two canonical special cases of the $\ensuremath{\mathsf{TSR}}$ design are as follows. When $a_L = 1$, all listings are in the treatment condition; in this case, randomization only takes place on the customer side of the market. This is the customer-side randomization ($\ensuremath{\mathsf{CR}}$) design. When $a_C = 1$, all customers are in the treatment condition and randomization only takes place on the listing side of the market. This is the listing-side randomization ($\ensuremath{\mathsf{LR}}$) design. 
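The redefinitions in \eqref{eq:v_expt}--\eqref{eq:nu_expt} can be sketched as a simple type expansion; the function name and dictionary layout below are illustrative, not from the paper.

```python
# Sketch of the type expansion: the intervention applies only when a
# treated customer meets a treated listing; all other pairs keep the
# control parameters. Names and data layout are illustrative.

def expand_for_tsr(v, alpha, v_tilde, alpha_tilde):
    """Map per-(gamma, theta) parameters to treatment-condition pairs."""
    v_e, alpha_e = {}, {}
    for (g, th) in v:
        for i in (0, 1):          # customer treatment condition
            for j in (0, 1):      # listing treatment condition
                treated_pair = (i == 1 and j == 1)
                v_e[(g, i), (th, j)] = v_tilde[g, th] if treated_pair else v[g, th]
                alpha_e[(g, i), (th, j)] = alpha_tilde[g, th] if treated_pair else alpha[g, th]
    return v_e, alpha_e

v_e, alpha_e = expand_for_tsr(
    v={("g", "t"): 1.0}, alpha={("g", "t"): 0.8},
    v_tilde={("g", "t"): 1.3}, alpha_tilde={("g", "t"): 0.9},
)
assert v_e[("g", 1), ("t", 1)] == 1.3       # treated-treated pair gets the lift
assert v_e[("g", 1), ("t", 0)] == 1.0       # every other pair keeps control values
assert alpha_e[("g", 0), ("t", 1)] == 0.8
```

The "either treated" variant discussed above would simply change the `treated_pair` predicate to `i == 1 or j == 1`.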
\textbf{System dynamics.} With the specification above, it is straightforward to adapt our mean field system of ODEs, cf.~\eqref{eq:ODE}, and the associated choice model \eqref{eq:meanfieldchoice}, to this setting. The key changes are as follows: \begin{enumerate} \item The mass of control (resp., treatment) listings of type $(\theta, 0)$ (resp., $(\theta, 1)$) becomes $(1-a_L)\rho(\theta)$ (resp., $a_L \rho(\theta)$). In other words, abusing notation, we define $\rho(\theta, 0) = (1-a_L) \rho(\theta)$, and $\rho(\theta, 1) = a_L\rho(\theta)$. \item The arrival rate of control (resp., treatment) customers of type $(\gamma, 0)$ (resp., $(\gamma, 1)$) becomes $(1-a_C)\lambda \phi_\gamma$ (resp., $a_C \lambda \phi_\gamma$). Thus we define $\phi_{\gamma,0} = (1-a_C)\phi_\gamma$, and $\phi_{\gamma,1} = a_C\phi_\gamma$. \item The choice probabilities are defined as in \eqref{eq:meanfieldchoice}, with the relevant quantities defined in \eqref{eq:v_expt}-\eqref{eq:epsilon_expt}. \end{enumerate} Using Proposition \ref{prop:ODE_trajectory} and Theorem \ref{thm:ODE_ss}, we know that there exists a unique solution to the resulting system of ODEs, and that there exists a unique limit point to which all trajectories converge, regardless of initial condition. This limit point is the steady state for a given experimental design. For a $\ensuremath{\mathsf{TSR}}$ experiment with treatment customer fraction $a_C$ and treatment listing fraction $a_L$, we use the notation $\v{s}_t(a_C, a_L) = (s_t(\theta, j \mid a_C, a_L),\ \theta \in \Theta,\ j \in \{0,1\})$ to denote the ODE trajectory, and we use $\v{s}^*(a_C, a_L) = (s^*(\theta, j \mid a_C, a_L),\ \theta \in \Theta,\ j \in \{0,1\})$ to denote the steady state. \textbf{Rate of booking.} In our subsequent development, it will be useful to have a shorthand notation for the rate at which bookings of listings of treatment condition $j \in \{0,1\}$ are made by customers of treatment condition $i \in \{0,1\}$, in the interval $[0,T]$.
In particular, we define: \begin{equation} \label{eq:QijT} Q_{ij}(T | a_C, a_L) = \frac{\lambda}{T} \int_0^T \sum_\theta \sum_\gamma \phi_{\gamma,i} p_{\gamma,i}(\theta, j | \v{s}_t(a_C, a_L))\; dt. \end{equation} Since $\v{s}_t(a_C, a_L)$ is globally asymptotically stable, bounded, and converges to $\v{s}^*(a_C, a_L)$ as $t \to \infty$, we have: \begin{equation} \label{eq:Qijinfty} Q_{ij}(\infty | a_C, a_L) \triangleq \lim_{T \to \infty} Q_{ij}(T | a_C, a_L) = \lambda \sum_\theta \sum_\gamma \phi_{\gamma,i} p_{\gamma,i}(\theta, j | \v{s}^*(a_C, a_L)). \end{equation} \textbf{Global treatment effect.} Recall we assume the {\em steady-state rate of booking} is the quantity of interest to the platform. In particular, the platform is interested in the change in this rate from the {\em global control} condition ($a_C = 0, a_L = 0$) to the {\em global treatment} condition ($a_C = 1, a_L = 1$). In the global control condition, the steady state rate at which customers book is: $Q^{\ensuremath{\mathsf{GC}}} = Q_{00}(\infty | 0, 0)$, and in the global treatment condition, the steady state rate at which customers book is $Q^{\ensuremath{\mathsf{GT}}} = Q_{11}(\infty | 1,1)$. Under these definitions, the global treatment effect is $\ensuremath{\mathsf{GTE}} = Q^{\ensuremath{\mathsf{GT}}} - Q^{\ensuremath{\mathsf{GC}}}$. We remark that the rate of booking decisions made by arriving customers will change over time, even if the market parameters are constant over time (including the arrival rates of different customer types, as well as the utilities that customers have for each listing type). This transient change in booking rates is driven by changes in the state $\v{s}_t$; in general, such fluctuations will lead the transient rate of booking to differ from the steady-state rate, for all values of $a_C$ and $a_L$ (including global treatment and global control). 
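As a concrete illustration of these steady-state quantities, the following Python sketch computes $Q^{\ensuremath{\mathsf{GC}}}$, $Q^{\ensuremath{\mathsf{GT}}}$, and hence the $\ensuremath{\mathsf{GTE}}$, in a toy market with one customer type and one listing type. All parameter values are illustrative, and the bisection solver is our own device for the scalar flow-balance equation, not part of the paper's analysis.

```python
# Toy steady-state GTE computation: one customer type, one listing type.
# At steady state, replenishment balances bookings: (rho - s) tau = lam * p(s),
# where p(s) is the mean field logit choice probability. Values are illustrative.

def steady_state_rate(v, alpha, rho=1.0, tau=1.0, lam=2.0, eps=1.0):
    def excess(s):
        # replenishment rate minus booking rate at availability level s
        p = alpha * v * s / (eps + alpha * v * s)
        return (rho - s) * tau - lam * p

    # excess(0) = rho * tau > 0 and excess(rho) < 0, so bisection finds the root.
    lo, hi = 0.0, rho
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
    s_star = (lo + hi) / 2
    return lam * alpha * v * s_star / (eps + alpha * v * s_star)

Q_GC = steady_state_rate(v=1.0, alpha=0.7)  # global control booking rate
Q_GT = steady_state_rate(v=1.5, alpha=0.9)  # global treatment booking rate
GTE = Q_GT - Q_GC
```

Since the treatment parameters make listings more attractive, the steady-state availability falls and the booking rate rises, so this toy $\ensuremath{\mathsf{GTE}}$ is positive.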
In this work, we focus on the steady state quantities to capture, informally, the long run change in behavior due to an intervention.\footnote{Note that in two-sided markets, certain types of interventions will also cause long-run {\em economic} equilibration due to strategic responses on the part of market participants; for example, if prices are lowered during an experiment, this may affect entry decisions of both buyers and sellers, and thus the long-run market equilibrium. While our model allows the choice probabilities to change due to treatment, a more complete analysis of long run economic equilibration due to interventions remains a direction for future work.} \subsection{Estimators: Transient and steady state} The goal of the platform is to use the experiment to estimate $\ensuremath{\mathsf{GTE}}$. In this section we consider estimators the platform might use to estimate this quantity. We first consider the $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ designs, and we define ``naive'' estimators that the platform might use to estimate the global treatment effect. These designs and estimators are those most commonly used in practice. We define these estimators during the transient phase of the experiment and then define the associated steady-state versions of these estimators. Finally, we combine these estimation approaches in a natural heuristic that can be employed for any general $\ensuremath{\mathsf{TSR}}$ design. {\bf Estimators for the $\ensuremath{\mathsf{CR}}$ design.} We start by considering the $\ensuremath{\mathsf{CR}}$ design, i.e., where $a_L = 1$ and $a_C \in (0,1)$. A simple naive estimate of the rate of booking is to measure the rate at which bookings are made in a given interval of time by control customers, and compare this to the analogous rate for treatment customers. Formally, suppose the platform runs the experiment for the interval $t \in [0, T]$, with a fraction $a_C$ of customers in treatment.
The rate at which customers of treatment condition $i \in \{0,1\}$ book in this period is $Q_{i1}(T | a_C, 1)$. The {\em naive $\ensuremath{\mathsf{CR}}$ estimator} is the difference between treatment and control rates, where we correct for differences in the size of the control and treatment groups by scaling with the respective masses: \begin{equation} \label{eq:naive_CR} \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T | a_C) = \frac{Q_{11}(T | a_C,1)}{a_C} - \frac{Q_{01}(T | a_C,1)}{1-a_C}. \end{equation} We let $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(\infty | a_C) = Q_{11}(\infty | a_C,1)/a_C - Q_{01}(\infty | a_C,1)/(1-a_C)$ denote the steady-state naive $\ensuremath{\mathsf{CR}}$ estimator. {\bf Estimators for the $\ensuremath{\mathsf{LR}}$ design.} Analogously, we can define a naive estimator for the $\ensuremath{\mathsf{LR}}$ design, i.e., where $a_C = 1$ and $a_L \in (0,1)$. Formally, suppose the platform runs the experiment for the interval $t \in [0, T]$, with fraction $a_L$ of listings in treatment. The rate at which listings with treatment condition $j \in \{0,1\}$ are booked in this period is $Q_{1j}(T | 1, a_L)$. The {\em naive $\ensuremath{\mathsf{LR}}$ estimator} is the difference between treatment and control rates, scaled by the mass of listings in each group: \begin{equation} \label{eq:naive_LR} \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T | a_L) = \frac{Q_{11}(T | 1,a_L)}{a_L} - \frac{Q_{10}(T | 1,a_L)}{1-a_L}. \end{equation} We let $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty | a_L) = Q_{11}(\infty | 1,a_L)/a_L - Q_{10}(\infty | 1,a_L)/(1-a_L)$ denote the corresponding steady-state naive $\ensuremath{\mathsf{LR}}$ estimator.
{\bf Estimators for the $\ensuremath{\mathsf{TSR}}$ design.} As with the $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ designs, it is possible to define a natural naive estimator for the $\ensuremath{\mathsf{TSR}}$ design as well. In particular, we have the following definition of the {\em naive $\ensuremath{\mathsf{TSR}}$ estimator}: \begin{equation} \label{eq:naive_TSR_est} \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRN}}}(T | a_C, a_L) = \frac{Q_{11}(T | a_C,a_L)}{a_Ca_L} - \frac{Q_{01}(T | a_C,a_L) + Q_{10}(T | a_C, a_L) + Q_{00}(T | a_C, a_L)}{1 - a_C a_L}. \end{equation} To interpret this estimator, observe that the first term is the normalized rate at which treatment customers booked treatment listings in the experiment; we normalize this by $a_C a_L$, since a mass $a_C$ of customers are in treatment, and a mass $a_L$ of listings are in treatment. This first term estimates the global treatment rate of booking. The sum $Q_{01}(T | a_C,a_L) + Q_{10}(T | a_C, a_L) + Q_{00}(T | a_C, a_L)$ is the total rate at which control bookings took place: either because the customer was in the control group, or because the listing was in the control group, or both. (Recall that in the $\ensuremath{\mathsf{TSR}}$ design, the intervention is only seen when treatment customers interact with treatment listings.) This is normalized by the complementary mass, $1 - a_C a_L$. This second term estimates the global control rate of booking. As before, we can define a steady-state version of this estimator as $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRN}}}(\infty | a_C, a_L)$, with the steady-state versions of the respective quantities on the right hand side of \eqref{eq:naive_TSR_est}.
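The three naive estimators are simple functions of the booking rates $Q_{ij}$, which the following Python sketch makes explicit; the $Q$ values used are illustrative placeholders, not quantities derived from the model.

```python
# Naive CR, LR, and TSR estimators as functions of booking rates Q[i][j]
# (customer condition i, listing condition j). Q values are illustrative.

def naive_CR(Q, a_C):
    # eq:naive_CR, with a_L = 1 (all listings treated)
    return Q[1][1] / a_C - Q[0][1] / (1 - a_C)

def naive_LR(Q, a_L):
    # eq:naive_LR, with a_C = 1 (all customers treated)
    return Q[1][1] / a_L - Q[1][0] / (1 - a_L)

def naive_TSRN(Q, a_C, a_L):
    # eq:naive_TSR_est: treated-x-treated bookings vs. all other bookings
    treated = Q[1][1] / (a_C * a_L)
    control = (Q[0][1] + Q[1][0] + Q[0][0]) / (1 - a_C * a_L)
    return treated - control

Q = [[0.10, 0.20],
     [0.15, 0.30]]               # Q[i][j], illustrative
est = naive_TSRN(Q, 0.5, 0.5)    # 0.3/0.25 - 0.45/0.75 = 0.6
```

In an actual $\ensuremath{\mathsf{CR}}$ design no listings are in control, so $Q_{10} = Q_{00} = 0$; plugging such a $Q$ into `naive_TSRN` with $a_L = 1$ recovers `naive_CR` exactly.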
It is straightforward to check that as $a_L \to 1$, we have $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRN}}}(T | a_C, a_L) \to \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T | a_C)$, the naive $\ensuremath{\mathsf{CR}}$ estimator. Similarly, as $a_C \to 1$, we have $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{TSRN}}}(T | a_C, a_L) \to \widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T | a_L)$, the naive $\ensuremath{\mathsf{LR}}$ estimator. In this sense, the naive $\ensuremath{\mathsf{TSR}}$ estimator naturally ``interpolates'' between the naive $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{CR}}$ estimators. In the next section, we exploit this interpolation to choose $a_C$ and $a_L$ as a function of market conditions in such a way as to reduce bias. More generally, the $\ensuremath{\mathsf{TSR}}$ design also contains much more information about competition in the marketplace, and the resulting interference effects, than either the $\ensuremath{\mathsf{CR}}$ or $\ensuremath{\mathsf{LR}}$ designs. Inspired by this observation, together with the idea of interpolating between the naive $\ensuremath{\mathsf{CR}}$ estimator and the naive $\ensuremath{\mathsf{LR}}$ estimator, in Section \ref{ssec:TSR_est} we explore alternative, more sophisticated $\ensuremath{\mathsf{TSR}}$ estimators that heuristically debias interference due to competition effects. As we show, these estimators can offer substantial bias reduction over the naive $\ensuremath{\mathsf{TSRN}}$ estimator above. \section{Introduction} \label{sec:intro} We develop a framework to study experiments (also known as {\em A/B tests}) that two-sided platform operators routinely employ to improve the platform.
Platforms use experiments to test many types of interventions that affect interactions between participants in the market; examples include features that change the process by which buyers search for sellers or interventions that alter the information the platform shares with buyers. The goal of the experiment is to introduce the intervention to some fraction of the market and use the resulting outcomes to estimate the effect if the intervention were introduced to the entire market. Platforms rely on these estimated effect sizes to make decisions about whether or not to launch the intervention to the entire market. However, in marketplace experiments, these estimates are often biased due to interference between market participants. Market participants interact and compete with each other and, as a result, the treatment assigned to one individual may influence the behavior of another individual. These interactions violate the Stable Unit Treatment Value Assumption (SUTVA) (\cite{ImbensRubin15}) that guarantees unbiased estimates of the treatment effect. Previous work has shown that the resulting bias can be quite large, and at times as large as the treatment effect itself (\cite{Blake14, Fradkin2015SearchFA, holtz2020reducing}). In this work, we model the platform competition dynamics, investigate how they influence the performance of different canonical experimental designs, and also introduce novel designs that can yield improved performance. We are particularly motivated by marketplaces where customers do not purchase goods, but rather {\em book} them for some amount of time. This covers a broad array of platforms, including freelancing (e.g., Upwork), lodging (e.g., Airbnb and Booking.com), and many services (tutoring, dogwalking, child care, etc.). While we explicitly model such a platform, the model we describe also captures features of a platform where goods are bought and supply must be replenished for future demand. 
Our model consists of a fixed number of {\em listings}; {\em customers} arrive sequentially over (continuous) time. For example, in an online labor platform, a freelancer offering work is a listing, and a client looking to hire a freelancer is a customer. On a lodging site, listings include hotel rooms or private rooms and customers are travelers wanting to book. Naturally, an arriving customer can only book {\em available} listings (i.e., those not currently booked). The customer forms a consideration set from the set of available listings and then, according to a choice model, chooses which listing to book from this set (including an outside option). Once a listing is booked, it is occupied and becomes unavailable until the end of its occupancy time. We focus on {\em interventions} by the platform that change the parameters governing the choice probability of the customer, such as those described above; we refer to the new choice parameters as the {\em treatment} model and the baseline as the {\em control} model.\footnote{The same modeling framework that we employ in this paper can be used to consider interventions that change other parameters, such as customer arrival rates or the time that listings remain occupied when booked; such application is outside the scope of our current work.} We assume the platform wants to use an experiment to assess the difference between the rate at which bookings would occur if all choices were made according to the treatment parameters and the corresponding rate if all choices were made according to the control parameters. This difference is the {\em global treatment effect} or $\ensuremath{\mathsf{GTE}}$.
In particular, we assume that the quantity of interest is the {\em steady-state} (or long-run) $\ensuremath{\mathsf{GTE}}$, i.e., the long-run average difference in booking rates.\footnote{Our framework can also be used to evaluate other metrics of interest based on experimental outcomes; for simplicity we focus on rate of booking in this work.} Most platforms employ one of two simple designs for testing such an intervention: either {\em customer-side randomization} (what we call the $\ensuremath{\mathsf{CR}}$ design) or {\em listing-side randomization} (what we call the $\ensuremath{\mathsf{LR}}$ design). In the $\ensuremath{\mathsf{CR}}$ design, customers are randomized to treatment or control. All customers in treatment make choices according to the treatment choice model and all customers in control make choices according to the control choice model. In the $\ensuremath{\mathsf{LR}}$ design, listings are randomized to treatment or control, and the utility of a listing is then determined by its treatment condition. As a result, in the $\ensuremath{\mathsf{LR}}$ design, in general each arriving customer will consider {\em some} listings in the treatment condition and {\em some} listings in the control condition. As an example, suppose the platform decides to test an intervention that shows badges for certain listings. In the $\ensuremath{\mathsf{CR}}$ design, all treatment customers see the badges and no control customers see the badges. In the $\ensuremath{\mathsf{LR}}$ design, all customers see the badges on treated listings, and do not see them on control listings. Each of these designs is associated with a natural estimator. In the $\ensuremath{\mathsf{CR}}$ design, the platform measures the difference in the rate of bookings between treatment and control customers; this is what we call the {\em naive $\ensuremath{\mathsf{CR}}$ estimator}.
In the $\ensuremath{\mathsf{LR}}$ design, the platform measures the difference in the rate at which treatment and control listings are booked; this is what we call the {\em naive $\ensuremath{\mathsf{LR}}$ estimator}. To develop some intuition for the potential biases, first consider an idealized {\em static} setting where listings are instantly replenished upon being booked; in other words, every arriving customer sees the full set of original listings as available. As a result, in the $\ensuremath{\mathsf{CR}}$ design there is no interference between treatment and control customers, and consequently the $\ensuremath{\mathsf{CR}}$ estimator is unbiased for the true $\ensuremath{\mathsf{GTE}}$. On the other hand, in the $\ensuremath{\mathsf{LR}}$ design, every arriving customer considers {\em both} treatment and control listings when choosing whether to book, creating a linkage across listings through customer choice. In other words, in the $\ensuremath{\mathsf{LR}}$ design there is interference between treatment and control, and in general the $\ensuremath{\mathsf{LR}}$ estimator will be biased for the true $\ensuremath{\mathsf{GTE}}$. Now return to a dynamic model where the inventory of listings is limited, and listings remain unavailable for some time after being booked. In this case, observe that on top of the preceding discussion, there is a dynamic linkage between customers: the set of listings available for consideration by a customer depends on the listings considered and booked by previously arriving customers. This dynamic effect introduces a new form of bias into estimation and is a distinctive feature of our work. In particular, because of this dynamic bias, in general the naive $\ensuremath{\mathsf{CR}}$ estimator will be biased as well. Our paper develops a dynamic model of two-sided markets with inventory dynamics, and uses this framework to compare and contrast both the designs and estimators above.
We also introduce and study a novel class of more general designs based on {\em two-sided randomization} (of which the two examples above are special cases). In more detail, our contributions and the organization of the paper are as follows. \noindent {\bf Benchmark model and formal mean field limit.} Our first main contribution is to develop a general, flexible theoretical model to capture the dynamics described above. In Section \ref{sec:model}, we present a model that yields a continuous-time Markov chain in which the state at any given time is the number of currently available listings of each type. In Section \ref{sec:meanfield}, we propose a formal mean field analog of this continuous-time Markov chain, by considering a limit where the number of listings in the system and the arrival rate of customers both grow to infinity. Scaling by the number of listings yields a continuum mass of listings in the limit. In the mean field model, the state at a given time is the mass of available listings and this mass evolves via a system of ODEs. Using a Lyapunov argument, we show that this system is globally asymptotically stable and give a succinct characterization of the resulting asymptotic steady state of the system as the solution to an optimization problem. We formally establish that the mean field limit arises as the fluid limit of the corresponding finite market model, as market size grows; in other words, the mean field model is a good approximation to large markets. The mean field model allows us to tractably analyze different estimators, as well as to study their biases in the large market regime.
\noindent {\bf Designs and estimators: Two-sided, customer-side, and listing-side randomization.} In Section \ref{sec:experiments}, we introduce a more general form of experimental design, called {\em two-sided randomization} ($\ensuremath{\mathsf{TSR}}$); an analogous idea was independently proposed recently by \cite{bajari2019double} (see also Section \ref{sec:related}). In a $\ensuremath{\mathsf{TSR}}$ design, both customers {\em and} listings are randomized to treatment and control. However, the intervention is only applied when a treatment customer considers a treatment listing; otherwise, if the customer is in control or the listing is in control, the intervention is not seen by the customer. (In the example above, a customer would see the badge on a listing only if the customer were treated {\em and} the listing were treated.) Notably, the $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ designs are special cases of $\ensuremath{\mathsf{TSR}}$. We also define natural naive estimators for each design. \noindent {\bf Analysis of bias: The role of market balance.} In Section \ref{sec:bias} we study the bias of the different designs and estimators proposed. Our main theoretical results characterize how the bias depends on the relative volumes of supply and demand in the market.
In particular, in the highly {\em demand-constrained} regime (where customers arrive slowly and/or listings replenish quickly): {\em the naive $\ensuremath{\mathsf{CR}}$ estimator becomes unbiased, while the naive $\ensuremath{\mathsf{LR}}$ estimator is biased.} On the other hand, in the highly {\em supply-constrained} regime (where customers arrive rapidly and/or listings replenish slowly) we find that in fact {\em the naive $\ensuremath{\mathsf{LR}}$ estimator becomes unbiased, while the naive $\ensuremath{\mathsf{CR}}$ estimator is biased.} These findings suggest that platforms can potentially reduce bias by choosing the type of experiment based on knowledge of market balance. Given the findings that $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ experiments offer benefits in different extremes, it is natural to ask whether good performance can be achieved in moderately balanced markets by ``interpolating'' between the naive $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimators. We define a naive $\ensuremath{\mathsf{TSR}}$ estimator that achieves this interpolation and has low bias in both market extremes, but still has large bias for moderate market balance. We then define more sophisticated $\ensuremath{\mathsf{TSR}}$ estimators that explicitly aim to correct for interference in regimes of moderate market balance. These latter estimators exhibit substantially improved performance in simulations. Appendix \ref{app:simulations} shows that these estimators perform well across a wide range of market parameters. \noindent{\bf Insights from simulations.} In Section \ref{sec:variance} we turn to simulations in the finite system to study the variance of the estimators. The simulations corroborate the theoretical findings that $\ensuremath{\mathsf{TSR}}$ offers benefits with respect to bias, albeit at the cost of moderate increases in variance.
Among the $\ensuremath{\mathsf{TSR}}$ estimators that we study, we find that those with smaller reductions in bias have smaller increases in variance, while those with larger reductions in bias have larger increases in variance, thus revealing a {\em tradeoff} between bias and variance for the $\ensuremath{\mathsf{TSR}}$ estimators. In Section \ref{sec:cluster}, we compare the $\ensuremath{\mathsf{TSR}}$ approach with cluster-randomized experiments, an existing approach that platforms utilize to reduce bias. The simulations suggest that while both approaches can reduce bias when the market is tightly clustered, $\ensuremath{\mathsf{TSR}}$ estimators can reduce bias in highly interconnected markets where cluster randomized experiments cannot. \vspace{0.1in} Taken together, our work sheds light on what experimental designs and associated estimators should be used by two-sided platforms depending on market conditions, to alleviate the biases from interference that arise in such contexts. We view our work as a starting point towards a comprehensive framework for experimental design in two-sided platforms; we discuss some directions for future work in Section \ref{sec:conclusion}. \ACKNOWLEDGMENT{We would like to thank Navin Sivanandam from Airbnb for numerous fruitful conversations. We would also like to thank Guillaume Basse, Dominic Coey, Nikhil Garg, David Holtz, Michael Luca, Johan Ugander, and Nicolas Stier for their feedback, as well as seminar participants at ACM EC'20, Facebook, Marketplace Algorithms and Design, CODE 2020, INFORMS 2020, Lyft, Vinted, Marketplace Innovations Workshop 2021, MSOM 2021, and Stanford. Finally, we thank the ACM EC'20 review team for their thoughtful comments and engagement with our work.
This work was supported by the National Science Foundation under grants 1931696 and 1839229 and the Dantzig-Lieberman Operations Research Fellowship.} \bibliographystyle{informs2014} \section{A mean field model of platform dynamics} \label{sec:meanfield} The continuous-time Markov process described in the preceding section is challenging to analyze directly because the customers' choices and consideration sets induce complex dynamics. Instead, to make progress we consider a formal {\em mean field} limit motivated by the regime where $N \to \infty$, in which the evolution of the system becomes deterministic. We first present a formal mean field analogue of the Markov process introduced in the previous section and provide intuition for its derivation. We then formally prove that the sequence of Markov processes converges to this mean field model as $N \rightarrow \infty$. The mean field model provides tractable expressions in the large market regime for the different estimators we consider, allowing us to study and compare their bias. The mean field model we study consists of a continuum unit mass of listings. The total mass of listings of type $\theta$ in the system is $\rho(\theta) > 0$ (recall that $\sum_\theta \rho(\theta) = 1$). We represent the state at time $t$ by $\v{s}_t = (s_t(\theta), \theta \in \Theta)$; $s_t(\theta)$ represents the mass of listings of type $\theta$ available at time $t$. The state space for this model is: \begin{equation} \label{eq:S} \mcal{S} = \{ \v{s} : 0 \leq s(\theta) \leq \rho(\theta) \}. \end{equation} We first present the intuition behind our mean field model. Consider a state $\v{s} \in \mcal{S}$ with $s(\theta) > 0$ for all $\theta$. We view this state as analogous to a state $\v{\sigma} \approx N \v{s}$ in the $N$'th system. We consider the system dynamics defined by \eqref{eq:R1}-\eqref{eq:R2}.
Note that the rate at which occupied listings of type $\theta$ become available is $(m^{(N)}(\theta) - \sigma(\theta)) \tau(\theta)$, from \eqref{eq:R1}. If we divide by $N$, then this rate becomes $(\rho(\theta) - s(\theta))\tau(\theta)$ as $N \to \infty$. On the other hand, note that for large $N$, if $D_\gamma(\theta|\v{\sigma})$ is Binomial$(\sigma(\theta), \alpha_\gamma(\theta))$, then $D_\gamma(\theta|\v{\sigma})/N$ concentrates on $\alpha_\gamma(\theta) s(\theta)$. Thus the choice probability $r_\gamma(\theta | \v{\sigma})$ is approximately: \begin{equation} \label{eq:meanfieldchoice} p_\gamma( \theta | \v{s}) \triangleq \frac{\alpha_\gamma(\theta) v_\gamma(\theta) s(\theta) }{\epsilon_\gamma + \sum_{\theta'} \alpha_\gamma(\theta') v_\gamma(\theta') s(\theta')}. \end{equation} (Here we use the fact that $\epsilon_\gamma^{(N)}/N \to \epsilon_\gamma$ as $N \to \infty$.) This is the mean field multinomial logit choice model for our system. In the finite model, the rate at which listings of type $\theta$ become occupied is $\sum_\gamma \lambda_\gamma^{(N)} r_\gamma(\theta | \v{\sigma})$, from \eqref{eq:R2}. If we divide by $N$, this rate becomes $\lambda \sum_\gamma \phi_\gamma p_\gamma(\theta | \v{s})$ as $N \to \infty$. Inspired by the preceding observations, we define the following system of differential equations for the evolution of $\v{s}_t$: \begin{equation} \label{eq:ODE} \frac{d}{dt} s_t(\theta) = (\rho(\theta) - s_t(\theta))\tau(\theta) - \lambda \sum_\gamma \phi_\gamma p_\gamma(\theta | \v{s}_t),\ \ \theta \in \Theta. \end{equation} This is our formal mean field model. In the remainder of this section, we first show that this system has a unique solution for any initial condition. Then we characterize the behavior of the system. By constructing an appropriate Lyapunov function, we show that the mean field model has a unique limit point to which all trajectories converge (regardless of initial condition). 
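To make these dynamics concrete, the following Python sketch integrates \eqref{eq:ODE} forward in time by forward-Euler on a toy instance (two listing types, one customer type) and checks that the trajectory settles at a point where replenishment balances bookings. All parameter values are illustrative, not taken from the paper.

```python
# Forward-Euler integration of the mean field ODE on a toy instance.
# Parameter values are illustrative placeholders.

def choice_prob(g, th, s, alpha, v, eps):
    """Mean field logit choice probability p_gamma(theta | s)."""
    denom = eps[g] + sum(alpha[g][k] * v[g][k] * s[k] for k in range(len(s)))
    return alpha[g][th] * v[g][th] * s[th] / denom

def ode_rhs(s, rho, tau, lam, phi, alpha, v, eps):
    """d s_t(theta)/dt: replenishment of occupied listings minus new bookings."""
    return [(rho[th] - s[th]) * tau[th]
            - lam * sum(phi[g] * choice_prob(g, th, s, alpha, v, eps)
                        for g in range(len(phi)))
            for th in range(len(s))]

# Toy instance: two listing types, one customer type.
rho = [0.6, 0.4]; tau = [1.0, 2.0]
lam = 1.5; phi = [1.0]
alpha = [[1.0, 0.8]]; v = [[1.0, 2.0]]; eps = [1.0]

s, dt = rho[:], 0.01          # start with all listings available
for _ in range(100_000):      # integrate to t = 1000
    ds = ode_rhs(s, rho, tau, lam, phi, alpha, v, eps)
    s = [s[th] + dt * ds[th] for th in range(len(s))]

# Near the limit point, the right-hand side of the ODE is (approximately) zero,
# and the state is strictly interior: 0 < s(theta) < rho(theta).
residual = max(abs(x) for x in ode_rhs(s, rho, tau, lam, phi, alpha, v, eps))
```

In this toy run the residual of the flow-balance condition shrinks to numerical noise, consistent with trajectories converging to a single interior limit point.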
This limit point is the unique steady state of the mean field limit. Finally, we prove that the sequence of Markov processes indeed converges to this mean field model (in an appropriate sense). Hence, the mean field model provides a close approximation to the evolution of large finite markets. \subsection{Existence and uniqueness of mean field trajectory} First, we show the straightforward result that the system of ODEs defined in \eqref{eq:ODE} possesses a unique solution. This follows by an elementary application of the Picard-Lindel\"{o}f theorem from the theory of differential equations. The proof is in Appendix \ref{app:proofs}. \begin{proposition} \label{prop:ODE_trajectory} Fix an initial state $\hat{\v{s}} \in \mcal{S}$. The system \eqref{eq:ODE} has a unique solution $\{\v{s}_t:t\geq 0\}$ satisfying $0 \leq s_t(\theta) \leq \rho(\theta)$ for all $t$ and $\theta$, and $\v{s}_0 = \hat{\v{s}}$. \end{proposition} \subsection{Existence and uniqueness of mean field steady state} Now we characterize the behavior of the mean field limit. We show that the system of ODEs in \eqref{eq:ODE} has a unique limit point, to which all trajectories converge regardless of the initial condition. We refer to this as the {\em steady state} of the mean field system. We prove the result via a convex optimization problem; the objective function of this problem is a Lyapunov function for the mean field dynamics that guarantees global asymptotic stability of the steady state. Formally, we have the following result. The proof is in Appendix \ref{app:proofs}. \begin{theorem} \label{thm:ODE_ss} There exists a unique steady state $\v{s}^* \in \mcal{S}$ for \eqref{eq:ODE}, i.e., a unique vector $\v{s}^* \in \mcal{S}$ solving the following system of equations: \begin{equation} \label{eq:FOC} (\rho(\theta) - s^*(\theta))\tau(\theta) = \lambda \sum_\gamma \phi_\gamma p_\gamma(\theta | \v{s}^*),\ \ \theta \in \Theta.
\end{equation} This limit point has the property that $0 < s^*(\theta) < \rho(\theta)$ for all $\theta$, i.e., it is in the interior of $\mcal{S}$. Further, this limit point is globally asymptotically stable, i.e., all trajectories of \eqref{eq:ODE} converge to $\v{s}^*$ as $t \to \infty$, for any initial condition $\v{s}_0 \in \mcal{S}$. The limit point $\v{s}^*$ is the unique solution to the following optimization problem: \begin{align} \text{minimize} \ \ \ & W(\v{s}) \triangleq \sum_\gamma \left( \lambda_\gamma \log\left(\epsilon_\gamma + \sum_\theta \alpha_\gamma(\theta) v_\gamma(\theta) s(\theta) \right) \right) \nonumber \\ & \hspace{5em}- \sum_\theta \tau(\theta) \rho(\theta) \log s(\theta) + \sum_\theta \tau(\theta) s(\theta) \label{eq:ss_objective} \\ \text{subject to} \ \ \ &0 \leq s(\theta) \leq \rho(\theta), \ \ \theta \in \Theta. \label{eq:ss_constraint} \end{align} \end{theorem} The function $W$ appearing in the theorem statement is {\em not} convex; our proof proceeds by first noting that it suffices to restrict attention to $\v{s}$ such that $s(\theta) > 0$ for all $\theta$, then making the transformation $y(\theta) = \log (s(\theta))$. The objective function redefined in terms of these transformed variables {\em is} strictly convex, and this allows us to establish the desired result. \subsection{Convergence to the mean field limit} Finally, we formally describe the sense in which our system converges to the system in \eqref{eq:ODE}. We first move from analyzing the number of listings available in the $N$'th system to analyzing the proportion of listings available. To this end, define the normalized process $\v{Y}_t^{(N)}$ where \begin{equation*} Y_t^{(N)}(\theta) = \sigma_t^{(N)}(\theta)/N , \ \ \theta \in \Theta.
\end{equation*}
Note that under this definition, $\v{Y}_t^{(N)}$ is also a continuous-time Markov process with dynamics induced by the dynamics of $\v{\sigma}_t^{(N)}$; in particular, the chain $\v{Y}_t^{(N)}$ has the same transition rates as $\v{\sigma}_t^{(N)}$, but increments are of size $1/N$. The following theorem establishes the convergence of $\v{Y}_t^{(N)}$ to the solution of the ODE described in \eqref{eq:ODE} as $N \rightarrow \infty$.

\begin{theorem} \label{thm:mf_convergence}
Assume that, as $N \rightarrow \infty$, $\epsilon_\gamma^{(N)} / N \rightarrow \epsilon_\gamma$ and $\lambda_\gamma^{(N)} / N \rightarrow \lambda_\gamma$ for all $\gamma$. Fix $\hat{\v{s}} \in \mcal{S}$ and assume that $\v{Y}_0^{(N)}$ is deterministic, with $Y_0^{(N)}(\theta) \rightarrow \hat{s}(\theta)$ for all $\theta$.
%
Let $\v{s}_t$ denote the unique solution to the system defined in \eqref{eq:ODE}, with initial condition $\v{s}_0 = \hat{\v{s}}$. Then for all $\delta>0$ and for all times $u>0$,
\begin{equation}
\P\left[\sup_{0 \leq t \leq u} \| \v{Y}_t^{(N)} - \v{s}_t \| > \delta \right] = O\left(\frac{1}{N}\right).
\end{equation}
\end{theorem}

The proof of this result relies on an application of Kurtz's Theorem for the convergence of pure jump Markov processes; full details are in Appendix \ref{app:proofs}. We note that this result holds for any sequence of initial conditions $\v{Y}_0^{(N)}$, as long as the proportion of available listings at time $t=0$ converges to a constant vector $\hat{\v{s}}$ as $N \rightarrow \infty$; further, the vector $\hat{\v{s}}$ can be any (feasible) initial state in the mean field model.
%
We now utilize the mean field model to study experimental designs and interference.

\section{A Markov chain model of platform dynamics}
\label{sec:model}

In this section, we first introduce the basic dynamic platform model that we study in this paper, with a finite number $N$ of listings.
In the next section, we describe a formal mean field limit of the model inspired by the regime where $N \to \infty$. This mean field limit model then serves as the framework within which we study the bias of different experimental designs and associated estimators in the remainder of the paper.

We consider a two-sided platform where we refer to the supply side as {\em listings} and the demand side as {\em customers}.
%
Customers arrive over time; at the time of arrival, a customer forms a consideration set from the set of available listings in the market and then decides whether to book one of them. If the customer books, then the selected listing is occupied for a random length of time, during which it is unavailable to other customers. At the end of this booking, the listing again becomes available for use by other customers.
%
The formal details of our model are as follows.

\textbf{Time.} The system evolves in continuous time $t \geq 0$.

\textbf{Listings.} The system consists of a fixed number $N$ of listings. We refer to ``the $N$'th system'' as the instantiation of our model with $N$ listings present.
%
We use a superscript ``$N$'' to denote quantities in the $N$'th system where appropriate. We allow for heterogeneity in the listings. Each listing $\ell$ has a {\em type} $\theta_\ell \in \Theta$, where $\Theta$ is a finite set (the {\em listing type space}). Note that in general, the type may encode both observable and unobservable covariates; in particular, our analysis does not presume that the platform is completely informed about the type of each listing. For example, on a lodging site $\theta_\ell$ may encode observed characteristics of a house, such as the number of bedrooms, but also characteristics that are unobserved by the platform because they may be difficult or impossible to measure. Let $m^{(N)}(\theta)$ denote the total number of listings of type $\theta$ in the $N$'th system.
For each $\theta \in \Theta$, we assume that $\lim_{N \to \infty} m^{(N)}(\theta)/N = \rho(\theta) > 0$. Note that $\sum_\theta \rho(\theta) = 1$.

\textbf{State description.} At each time $t$, each listing $\ell$ can be either {\em available} or {\em occupied} (i.e., occupied by a customer who previously booked it). The system state at time $t$ in the $N$'th system is described by $\v{\sigma}_t^{(N)} = (\sigma_t^{(N)}(\theta))$, where $\sigma_t^{(N)}(\theta)$ denotes the number of listings of type $\theta$ available in the system at time $t$. Let $S_t^{(N)} = \sum_\theta \sigma_t^{(N)}(\theta)$ be the total number of listings available at time $t$. The dynamics we specify below make $\v{\sigma}_t^{(N)}$ a continuous-time Markov process.

\textbf{Customers.} Customers arrive to the platform sequentially and decide whether to book, and if so, which listing to book. Each customer $j$ has a {\em type} $\gamma_j \in \Gamma$, where $\Gamma$ is a finite set (the {\em customer type space}) that represents customer heterogeneity. As with listings, the type may encode both observable and unobservable covariates, and again, our analysis does not presume that the platform is completely informed about the type of each customer.
%
Customers of type $\gamma$ arrive according to a Poisson process of rate $\lambda_\gamma^{(N)}$; these processes are independent across types. Let $\lambda^{(N)} = \sum_\gamma \lambda_\gamma^{(N)}$ be the total arrival rate of customers. Let $T_j$ denote the arrival time of the $j$'th customer. We assume that $\lim_{N \to \infty} \lambda^{(N)}/ N = \lambda > 0$, i.e., the arrival rate of customers grows proportionally with the number of listings when we take the large market limit. Further, we assume that for each $\gamma \in \Gamma$, we have $\lim_{N \to \infty} \lambda_\gamma^{(N)}/\lambda^{(N)} = \phi_\gamma > 0$. Note that $\sum_\gamma \phi_\gamma = 1$.

\textbf{Consideration sets.}
In practice, when customers arrive to a platform, they typically form a {\em consideration set} of possible listings to book; the initial formation of the consideration set may depend on various aspects of the search and recommendation algorithms employed by the platform. To simplify the model, we capture this process by assuming that on arrival, each listing of type $\theta$ that is available at time $t$ is included in the arriving customer's consideration set independently with probability $\alpha_\gamma(\theta) > 0$ for a customer of type $\gamma$. For example, $\alpha_\gamma(\theta)$ can capture the possibility that the platform's search ranking is more likely to highlight available listings of type $\theta$ that are more attractive for a customer of type $\gamma$, making these listings more likely to be part of the customer's consideration set; this effect is made clear via our choice model presented below. After the consideration set is formed, a choice model is applied to the consideration set to determine whether a booking (if any) is made.
%
Formally, the customer choice process unfolds as follows. Suppose that customer $j$ arrives at time $T_j$. For each listing $\ell$, let $C_{j\ell} = 0$ if the listing is unavailable at $T_j$. Otherwise, if listing $\ell$ is available, then let $C_{j\ell} = 1$ with probability $\alpha_{\gamma_j}(\theta_\ell)$, and let $C_{j\ell} = 0$ with probability $1-\alpha_{\gamma_j}(\theta_\ell)$, independently of all other randomness. The consideration set of customer $j$ is then $\{ \ell : C_{j\ell} = 1\}$.

Our theoretical results in this paper are developed with this model of consideration set formation. Other models of consideration set formation are also reasonable, however. As one example, customers might sample a consideration set of a fixed size, regardless of the total number of listings available.
We explore such a consideration set model through simulations in Appendix \ref{app:simulations} and show that similar insights hold.

\textbf{Customer choice.} Customers choose at most one listing to book and can choose not to book at all. We assume that customers have a {\em utility} for each listing that depends on both customer and listing types: a type $\gamma$ customer has utility $v_\gamma(\theta) > 0$ for a type $\theta$ listing.
%
Let $q_{j\ell}$ denote the probability that arriving customer $j$ of type $\gamma_j$ books listing $\ell$ of type $\theta_\ell$. In this paper we assume that customers make choices according to the well-known {\em multinomial logit choice model}. In particular, given the realization of $\v{C}_j$, we have:
\begin{equation} \label{eq:mnl}
q_{j\ell} = \frac{ C_{j\ell} v_{\gamma_j}(\theta_\ell)}{\epsilon_{\gamma_j}^{(N)} + \sum_{\ell' = 1}^N C_{j\ell'} v_{\gamma_j}(\theta_{\ell'})}.
\end{equation}
Here $\epsilon_\gamma^{(N)} > 0$ is the value of the {\em outside option} for type $\gamma$ customers in the $N$'th system. The probability that customer $j$ does not book any listing at all grows with $\epsilon_\gamma^{(N)}$. We let the outside option scale with $N$; this is motivated by the observation that in practical settings, the probability that a customer does not book should remain bounded away from zero even for very large systems. In particular, we assume that $\lim_{N \to \infty} \epsilon_\gamma^{(N)}/N = \epsilon_\gamma > 0$. We note that this specification of the choice model, although it relies on the multinomial logit model, can be quite flexible because we allow for arbitrary heterogeneity of listings and customers. For later reference, we define:
\begin{equation} \label{eq:qjtheta}
q_j(\theta) = \ensuremath{\mathbb{E}}\left[ \sum_{\ell : \theta_\ell = \theta} q_{j\ell}\right],
\end{equation}
where the expectation is over the randomness in $\v{C}_j$.
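To make the choice process concrete, the following sketch estimates $q_j(\theta)$ in \eqref{eq:qjtheta} by Monte Carlo: it repeatedly samples a consideration set $\v{C}_j$ and averages the MNL booking probabilities in \eqref{eq:mnl} by listing type. (This is an illustration, not code from the paper; all parameter values used with it are hypothetical.)

```python
import random

def estimate_q(theta_types, avail, alpha, v, eps, draws=20000, seed=0):
    """Monte Carlo estimate of q_j(theta): the probability that a customer
    books an available listing of type theta, averaging over the random
    consideration set C_j.

    theta_types : list with the type of each listing
    avail       : 0/1 availability flags, aligned with theta_types
    alpha[t]    : inclusion probability for an available type-t listing
    v[t]        : the customer's utility for a type-t listing
    eps         : the customer's outside-option value
    """
    rng = random.Random(seed)
    q = {t: 0.0 for t in set(theta_types)}
    for _ in range(draws):
        # Each available listing enters the consideration set
        # independently with probability alpha[type].
        C = [bool(a) and rng.random() < alpha[t]
             for t, a in zip(theta_types, avail)]
        denom = eps + sum(v[t] for t, c in zip(theta_types, C) if c)
        # MNL booking probabilities, accumulated by listing type.
        for t, c in zip(theta_types, C):
            if c:
                q[t] += v[t] / denom
    return {t: q[t] / draws for t in q}
```

For instance, with two available type-$a$ listings, $\alpha = 1/2$, $v = 1$, and $\epsilon = 1$, the estimate converges to $\mathbb{E}[D/(1+D)]$ with $D \sim \mathrm{Binomial}(2, 1/2)$, i.e., $5/12$.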
With this definition, $q_j(\theta)$ is the probability that customer $j$ books an available listing of type $\theta$, where the probability is computed prior to realization of the consideration set. \textbf{Dynamics: A continuous-time Markov chain.} The system evolves as follows. Initially all listings are available.\footnote{As the system we study is irreducible and we analyze its steady state behavior, it would not matter if we chose a different initial condition.} Every time a customer arrives, the choice process described above unfolds. An occupied listing remains occupied, independent of all other randomness, for an exponential time that is allowed to depend on the type of the listing.\footnote{An even more general model might allow the occupancy time to depend on {\em both} listing type {\em and} the type of the customer who made the booking; such a generalization remains an interesting open direction.} More formally, let $\tau>0$ and for each type $\theta$ define $\nu(\theta)$ such that, once booked, a listing of this type will remain occupied for an exponential time with parameter $\tau \nu(\theta)$. We overload notation and define $\tau(\theta) = \tau \nu(\theta)$. Once this time expires, the listing returns to being available. When fixing $\v{\nu} = (\nu(\theta), \theta \in \Theta)$ and all system parameters except for $\tau$, increasing $\tau$ will make the system less supply constrained and decreasing $\tau$ will make the system more supply constrained, while preserving the relative occupancy times of each listing type. Our preceding specification turns $\v{\sigma}_t^{(N)}$ into a continuous-time Markov process on a finite state space $\mcal{S}^{(N)} = \{ \v{\sigma} : 0 \leq \sigma(\theta) \leq m^{(N)}(\theta),\ \forall \theta \}$. We now describe the transition rates of this Markov process. For a state $\v{\sigma} \in \mcal{S}^{(N)}$, $\sigma(\theta)$ represents the number of available listings of type $\theta$. 
There are only two types of transitions possible: either (i) a listing that is currently occupied becomes available, or (ii) a customer arrives and books a listing that is currently available. (If a customer arrives but does not book anything, the state of the system is unchanged.) Let $\v{e}_\theta$ denote the unit basis vector in the direction $\theta$, i.e., $e_\theta(\theta) = 1$, and $e_\theta(\theta') = 0$ for $\theta' \neq \theta$.

The rate of the first type of transition is:
\begin{equation} \label{eq:R1}
R(\v{\sigma}, \v{\sigma} + \v{e}_\theta) = (m^{(N)}(\theta) - \sigma(\theta)) \tau(\theta),
\end{equation}
since there are $m^{(N)}(\theta) - \sigma(\theta)$ booked listings of type $\theta$, and each remains occupied for an exponential time with mean $1/\tau(\theta)$, independently of all other randomness.

The second type of transition requires a few more steps to formulate. In principle, our choice model suggests that the identities of both the arriving customer and the individual listings affect system dynamics; however, our state description only tracks the aggregate number of listings of each type available at each time $t$. The key here is that our entire specification depends on customers {\em only} through their type, and depends on listings {\em only} through their type. Formally, suppose a customer $j$ of type $\gamma_j = \gamma$ arrives to find the system in state $\v{\sigma}$. For each $\theta$ let $D_\gamma(\theta | \v{\sigma})$ be a Binomial$(\sigma(\theta), \alpha_\gamma(\theta))$ random variable, independently across $\theta$. Recall that for each available listing $\ell$, each $C_{j\ell}$ is a Bernoulli$(\alpha_\gamma(\theta_\ell))$ random variable.
Recalling $q_j(\theta)$ as defined in \eqref{eq:qjtheta}, it is straightforward to check that: \begin{equation} \label{eq:anonymizedchoice} q_j(\theta) = r_\gamma(\theta | \v{\sigma}) \triangleq \ensuremath{\mathbb{E}} \left[\frac{ D_\gamma(\theta|\v{\sigma}) v_{\gamma}(\theta)}{\epsilon_{\gamma}^{(N)} + \sum_{\theta'} D_\gamma(\theta'|\v{\sigma}) v_{\gamma}(\theta')}\right]. \end{equation} In other words, the probability an arriving customer of type $\gamma$ books a listing of type $\theta$ when the state is $\v{\sigma}$ is given by $r_\gamma(\theta | \v{\sigma})$; and this probability depends on the past history {\em only} through the state $\v{\sigma}$ (ensuring the Markov property holds). With this definition at hand, for states $\v{\sigma}$ with $\sigma(\theta) > 0$, the rate of the second type of transition is: \begin{equation} \label{eq:R2} R(\v{\sigma}, \v{\sigma} - \v{e}_\theta) = \sum_\gamma \lambda_\gamma^{(N)} r_\gamma(\theta | \v{\sigma}). \end{equation} Note that the resulting Markov chain is irreducible, since customers have positive probability of sampling into, and booking from, their consideration set, and every listing in the consideration set has positive probability of being booked. % \textbf{Steady state.} Since the Markov process defined above is irreducible on a finite state space, there is a unique steady state distribution $\pi^{(N)}$ on $\mcal{S}^{(N)}$ for the process. 
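As an illustration (not code from the paper; all parameter values used below are hypothetical), the transition structure above can be simulated directly. Rather than evaluating the expectation $r_\gamma(\theta | \v{\sigma})$ in \eqref{eq:anonymizedchoice}, each arrival samples a consideration set $D_\gamma(\cdot | \v{\sigma})$ and books via the MNL probabilities; this induces exactly the rates \eqref{eq:R1} and \eqref{eq:R2}.

```python
import random

def simulate(m, tau, lam, alpha, v, eps, horizon, seed=0):
    """Gillespie-style simulation of the availability process sigma_t^{(N)}.

    m[t]        : number of listings of type t        (m^{(N)}(theta))
    tau[t]      : rate at which an occupied type-t listing frees up
    lam[g]      : arrival rate of type-g customers    (lambda_gamma^{(N)})
    alpha[g][t] : consideration probability           (alpha_gamma(theta))
    v[g][t]     : utility of a type-g customer for a type-t listing
    eps[g]      : outside-option value                (epsilon_gamma^{(N)})

    Returns the time-average of sigma over [0, horizon].
    """
    rng = random.Random(seed)
    types, guests = list(m), list(lam)
    sigma = dict(m)                      # all listings start available
    lam_tot = sum(lam.values())
    t, avg = 0.0, {th: 0.0 for th in types}
    while t < horizon:
        # Freeing rates R(sigma, sigma + e_theta); customer arrivals occur
        # at total rate lam_tot regardless of the state.
        rates = {th: (m[th] - sigma[th]) * tau[th] for th in types}
        total = sum(rates.values()) + lam_tot
        dt = rng.expovariate(total)
        for th in types:
            avg[th] += sigma[th] * min(dt, horizon - t)
        t += dt
        if t >= horizon:
            break
        u = rng.random() * total
        for th in types:                 # (i) an occupied listing frees up
            if u < rates[th]:
                sigma[th] += 1
                break
            u -= rates[th]
        else:                            # (ii) a customer arrives
            g = rng.choices(guests, weights=[lam[x] for x in guests])[0]
            # Sample the consideration set D ~ Binomial(sigma, alpha) and
            # book via the MNL probabilities.
            D = {th: sum(rng.random() < alpha[g][th]
                         for _ in range(sigma[th])) for th in types}
            denom = eps[g] + sum(D[th] * v[g][th] for th in types)
            r = rng.random() * denom
            for th in types:             # book type th w.p. D[th]*v / denom
                if r < D[th] * v[g][th]:
                    sigma[th] -= 1
                    break
                r -= D[th] * v[g][th]
            # otherwise the customer takes the outside option (no booking)
    return {th: avg[th] / horizon for th in types}
```

The resulting time-averaged availability can then be compared against the mean field steady state characterized by \eqref{eq:FOC}.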
% \section{Response to Reviewer \thereviewer}}
\newcommand{\sectionrev}[1]{\bigskip \hrule \section{{#1}}}
\newenvironment{pointEditor} {\begin{sf} \refstepcounter{pointEditor} \bigskip \noindent { \textbf{Point~\thepointE} } ---\ } {\par \end{sf}}
\newenvironment{pointAE} {\begin{sf} \refstepcounter{pointAE} \bigskip \noindent { \textbf{Point~\thepointAE} } ---\ } {\par \end{sf}}
\newenvironment{point} {\begin{sf} \refstepcounter{point} \bigskip \noindent {\textbf{Point~\thepoint} } ---\ } {\par \end{sf}}
\newcommand{\shortpoint}[1]{\refstepcounter{point} \bigskip \noindent {\begin{sf}\textbf{Point~\thepoint} ---~#1\par \end{sf}}}
\newenvironment{reply} {\medskip \noindent \textbf{Reply}:\ } {\medskip}
\newcommand{\shortreply}[1][]{\medskip \noindent \textbf{Reply}:\ #1 \medskip}

\title{Experimental Design in Two-Sided Platforms: An Analysis of Bias\\ \textit{Authors' Response to Management Science Reviews}}
\date{\vspace{-3em}}
\author{}
\usepackage[round, sort]{natbib}

\begin{document}
\maketitle

\section{Summary of changes}
\label{sec:summary}

The authors would like to thank the reviewers for their feedback. Motivated by their suggestions, we have made several changes to better engage with both the existing literature on experiments with interference and the use of these experiments in practice. With respect to the existing literature on methods to reduce bias, we have added a section comparing the bias reduction of TSR designs with that of existing cluster-randomized designs. We show that cluster-randomized designs lead to lower bias when the market is tightly clustered, but TSR designs lead to lower bias in more interconnected markets. Regarding the use of these experiments in practice, we added discussions on robustness to our modeling assumptions, the application of the proposed experiment types to different kinds of interventions one may want to test in practice, and the open questions on statistical inference in these settings.
We also provide guarantees on the sign of the bias in some specific market settings, which platforms can use to heuristically bound the true global treatment effect. Additionally, we have updated the simulation figures to better allow the reader to compare the performance of the experiments across different market scenarios, and we have added 95\% bootstrapped confidence intervals to better illustrate the simulation error. We provide detailed replies to the editor, AE, and referees below.

\section{Response to editor}

Dear David: Thanks very much for a thorough and constructive review process. We also very much appreciate the encouragement with a minor revision recommendation. Our revision has followed the suggestions from the review team closely. As a result, we think the paper has further improved. As you pointed out, the main request was to compare with other approaches by using simulations. We have added extensive simulations to address the issues raised by the referees; further details are provided below. Indeed, we provide detailed replies to the AE and referees. We look forward to hearing back from you.

\sectionrev{Response to AE}

We would like to thank you for the very constructive and thoughtful reviews. We really appreciate your encouragement and that you think our work tackles a relevant problem and makes important contributions. Our revision has followed your suggestions and those from the review team closely. As a result, we think the paper has further improved. Below, we describe how we have addressed your individual comments. For your convenience, we include your comments.

\begin{pointAE}
1. Comparison with cluster-randomized designs. Reviewer 1 raises the question of how the proposed experimental designs compare with others in the literature. This seems like an important question to me. What would happen if other designs were used in these settings?
Even if not developed with this particular data generating process in mind, they might perform reasonably well — or not. Analytical results may be beyond the scope of this paper, but it seems valuable to include these in the simulations. Thinking about this also highlighted how some other approaches end up being inapplicable precisely because of (for many applications) unrealistic versions of the model: if customers and listings are all homogeneous, then there is no structure to exploit in a clustered design. But of course this is not realistic for many applications, whether ride sharing (where the market is segmented by geography and car type), lodging, or general freelancing. So this comparison would need to allow for meaningfully heterogeneous customers and listings (currently these simulations are in the appendix) and have some idea of how the clustered designs use this (typically this is through past interactions).
\end{pointAE}

\begin{reply}
In this version of the paper we added a section on comparisons between TSR and cluster-randomized experiments. To do so, we introduce a utility model with two customer types and two listing types that is parametrized to capture how tightly the market is clustered. Via numerical simulations we show that when the market is tightly clustered, cluster-randomized experiments offer substantial bias improvements. However, as there is more overlap between clusters, TSR estimators outperform cluster-randomized estimators. While our model for market clusters is stylized, our results suggest that TSR experiments can be a useful alternative to cluster-randomized experiments. Please see Section \ref{sec:cluster} and Appendix \ref{app:cluster} for more details.
\end{reply}

\begin{pointAE}
2. Sign of the bias. Reviewer 2 raises a question about signing the bias in the model in the simulations. This seems quite interesting because this would allow bounding the true estimand under weaker assumptions.
Other work on interference has taken an interest in such partial identification (e.g. Choi 2017, Eckles et al. 2016, Pouget-Abadie et al. 2018). My sense is that the kind of interference due to congestion that is possible here can straightforwardly produce bias across the null, which can occur in some models of contagion (Morozova et al. 2018) but is explicitly ruled out by others (prior references). More generally, perhaps the authors can help readers gain intuition about the bias, and they may want to make sure the simulations illustrate what they regard as a relevant range of biases. (I liked the stylized extremes in the simple examples in the analytical results as one way they currently do this.)
\end{pointAE}

\begin{reply}
We now show that for a broad class of positive treatment effects, the sign of the bias for LR and CR in the respective extremes of market balance is always positive (and negative for negative treatment effects). This is intuitive, as cannibalization will induce an overestimation of bookings in the treatment group relative to global treatment, and an underestimation of bookings in the control group relative to global control. We confirm the robustness of these results via simulations for intermediate regimes of market balance.
%
\end{reply}

\newpage

\begin{pointAE}
3. Statistical inference. One thing that is absent from the paper is any discussion of statistical inference (i.e., confidence intervals, hypothesis tests). Typical uses of randomized experiments require assessing statistical uncertainty. The simulations involve reporting standard errors, but these are the true (i.e. infeasible) standard errors computed through replications of the simulation. The authors should at least note that statistical inference is unaddressed here, but perhaps they can do a little more. It should be straightforward to test a sharp null hypothesis of no effects whatsoever (Fisher's null) using Fisherian randomization inference.
I wonder then whether the Type I error rate (size) of these tests is in the simulations. I think it is reasonable for a full treatment of statistical inference to be beyond the scope of this paper, but I'd also suggest that the paper's impact (especially on empirical research and industry practice) will be much greater if there are at least some pointers towards inference.
\end{pointAE}

\begin{reply}
We agree that statistical inference is an important direction to study, particularly with regard to the practical impact of this work. At the same time, there are significant challenges in this direction. Notably, the idea of deriving valid confidence intervals in these settings is complicated by the fact that the point estimates of the GTE are themselves biased. Even if we could perfectly estimate the standard errors, the bias in the point estimate could lead to invalid confidence intervals as well. Thus statistical inference involves a combination of debiasing the treatment effect estimate as well as the standard error estimate. While we don't address this topic in the current manuscript, motivated by your suggestion we have now added a discussion in Section \ref{sec:conclusion} highlighting open questions in this area. We discuss how platforms typically estimate standard errors ``naively,'' assuming independence across observations, which is clearly violated in these settings with interference and results in possible bias in the standard error estimates as well.
\end{reply}

\begin{pointAE}
4. Presentation of simulation results. Some of the reviewer comments may help the authors improve the presentation of the results, perhaps making some of the results more readily understood. For example, both Reviewers 1 \& 2 seem to have compared related results in Figures 4-11 — with some difficulty. Perhaps it is possible to make this clearer. One could imagine combining the panels that have the same outcome into a single (e.g.) line graph.
I will leave this to the authors, but I think many readers will want to make these comparisons.
\end{pointAE}

\begin{reply}
We have added figures that compare the performance of the estimators across different market conditions and levels of heterogeneity, in addition to the existing plots that compare across levels of supply and demand imbalance. Please see Appendix \ref{app:simulations}.
\end{reply}

\begin{pointAE}
I also wanted to compare MSE of estimators, but could not readily do so.
\end{pointAE}

\begin{reply}
Thanks for the suggestion. We have added plots for the RMSE of the estimators.
\end{reply}

\begin{pointAE}
I would suggest that the authors provide some guidance about the Monte Carlo error in all of these results. Maybe error bars are not needed everywhere, but as noted by Reviewer 1 it can be hard to tell, e.g., which estimators/designs are indistinguishable from unbiasedness.
\end{pointAE}

\begin{reply}
Thank you for your suggestion. To provide guidance on the simulation error, we have added 95\% bootstrap confidence intervals to all simulation plots.
\end{reply}

\begin{pointAE}
In summary, this paper provides a nice analysis of an increasingly important problem. Comparison with existing designs seems important for quantifying the advantages of the proposed design, and the authors might consider some further improvements for clarity and guidance for applying this (where statistical inference will be needed).
\end{pointAE}

\begin{reply}
We appreciate the positive feedback, and we have incorporated both the suggestions for comparison against existing (cluster-randomized) designs and for clarity in the figures and exposition.
\end{reply}

\reviewersection

We would like to thank you for the careful reading of our paper and your encouragement. We were pleased to learn that you think we are studying an important problem and that we made good progress replying to the EC reviews. We have followed your suggestions in this revision.
As a result, we think the paper is more streamlined and reads better. Below, we describe how we have addressed your individual comments. For your convenience, we include your comments. \begin{point} My largest concern with the paper is that the current draft is a bit hard to follow. I’ve spent a lot of time with this manuscript, and still feel unsure about some important points. I think there are a few things the authors can do to streamline the paper and make it easier to understand. • I think the paper is currently a bit overstuffed; some of the examples in the paper feel redundant, and some of the ideas in the paper seem underdeveloped and distracting from the main point in their current form. On the example front, the examples provided in Sections 6.1 and 6.2 feel very similar to the example in Section 7.1.3 (the authors even note in 7.1.3, “we consider a setting analogous to Section 6.”). My guess is that the examples in Section 6 are supposed to be toy examples that build intuition, whereas the examples in Section 7.1.3 are meant to be an application of the mathematical tools built up earlier in Section 7, but it ends up feeling like the paper is repeating itself. Maybe it is possible to move one of these sets of examples to the appendix? \end{point} \begin{reply} Thank you for the comment. We have removed the toy examples and instead use the application of our model to the homogeneous setting in Section \ref{ssec:Qij_homogeneous} (previous Section 7.1.3) to build intuition. \end{reply} \begin{point} I’m also not sure the discussion of transient effects belongs in the main body of the paper. I think this is an important topic, given that firms are often in the position of making long-term product decisions based on short experiments that run for weeks, if not days. However, the discussion of transient estimators here seems under-developed. What should my takeaway be after reading Section 7.3 and looking at Figure 2? 
What are the implications, both in terms of different estimators and in terms of thinking about short-term vs. long-term experiments? Given that the transient vs. steady state contrast is not key to the arguments in the paper, and the authors do not devote a great amount of time to comparing, say, the rate at which bias of different estimators approaches 0 while in the transient system, I’m wondering if this analysis is better suited to an appendix. \end{point} \begin{reply} That's a good point. We have moved the transient discussion to the appendix. \end{reply} \newpage \begin{point} When the TSR design is first presented in Section 5, it feels to me like it comes a bit out of left field. Yes, the LR and CR designs are mathematically special cases of the TSR design, but that doesn’t provide me any intuition for why the TSR design should actually be less biased than CR or LR, so as a reader I end up getting sidetracked trying to figure this out. When presenting the TSRI designs later in the paper, I think the authors do a better job providing intuition for what the different terms in the estimator are doing (e.g., “linear combination of LR and CR, with terms that account for cannibalization”). I’d love to see more of that type of intuition throughout the paper when it comes to the TSR designs and the corresponding estimators. \end{point} \begin{reply} We have provided more motivation for TSR when we first introduce it in Section 5 and when we introduce it as an interpolation between CR and LR. We have also provided better intuition of the cannibalization terms for TSRI in Section \ref{ssec:TSR_est} and Figure \ref{fig:tsr_comp_terms}.% \end{reply} \begin{point} I recognize this is appendix content, but I was quite interested in the performance of the TSR estimators in simulations as market conditions were varied (e.g., different levels of heterogeneity, etc). 
In order to try and reason about this, I found myself trying to compare numbers in Figures 6, 7, and 8, numbers in Figures 9, 10, and 11, etc. I wonder if these bar graphs are the right way to communicate these results? For instance, I could imagine Figures 6, 7 and 8 being combined into one figure, where rather than relative demand on the x-axis, the level of heterogeneity is on the x-axis, and the relative demand is fixed for a given row of results. Not sure how this would look, so just a suggestion. But I think my broader point is that having these results split across multiple figures (with different y-axes in some cases) makes the natural comparisons hard to do. \end{point} \begin{reply} Thanks for this comment. We have added figures that compare the performance of the estimators across different market conditions at a fixed level of supply and demand (Appendix \ref{ssec:robustness_market_scenarios}), in addition to the existing plots that compare across levels of supply and demand imbalance (Appendix \ref{ssec:robustness_vary_lambda}). We now have four figures that depict the effect of changing average utility, customer heterogeneity, listing heterogeneity, and treatment effect heterogeneity in a scenario with balanced supply and demand. \end{reply} \begin{point} In addition to the above points about writing clarity, it would be great to see the paper connect a bit more directly to the existing literature on experimentation in two-sided marketplaces, which the authors reference in the literature review (Holtz 2018, Holtz et al. 2020, Sneider et al. 2019). Is it possible to calculate the bias of cluster randomized estimators, or switchback estimators, using the analytical framework developed in this paper? If not, is it possible to at least code up these estimators in the simulations used in the back half of the paper, so that cluster randomized experiments and switchback experiments can be included in the bias and variance comparisons? 
It would be helpful to know how TSR compares not just to LR and CR designs (which we know have bias issues, but are pretty good on the variance front), but also to previously proposed solutions (which we know are at least a bit better on bias, but often much worse in terms of variance). In case it’s helpful, I think one way to represent cluster randomized experiments in your framework would be to model consumer choice using the type of nested choice model that you sometimes see in empirical IO, and then assign treatment and control at the level of the groups in that nested model. \end{point} \begin{reply} This is a great comment. In this version of the paper we have added a section on comparisons between TSR and cluster-randomized experiments. To do so, we introduce a utility model with two customer types and two listing types that is parametrized to capture how tightly the market is clustered. Via numerical simulations we show that when the market is tightly clustered, cluster-randomized experiments offer substantial bias improvements. However, as there is more overlap between clusters, TSR estimators outperform cluster-randomized estimators. While our model for market clusters is stylized, our results suggest that TSR experiments can be a useful alternative to cluster-randomized experiments. Please see Section \ref{sec:cluster} and Appendix \ref{app:cluster} for more details. \end{reply} \begin{point} Finally, I think it’s important for the authors to discuss some of the limitations of the framework presented and experiment designs proposed in this paper. For one, the stochastic market model proposed by the authors in many ways simplifies the dynamics that we would observe in an actual marketplace. For instance, as the authors note in footnote 4, their model allows the length of a booking to depend on the type of listing, but not the type of customer. Of course, simplifications like this are required in order to do this type of work.
That being said, I think it is important to spend at least a small amount of time in the discussion section of the paper discussing these assumptions, and some potential ways in which they might threaten the validity of the paper’s results when applied to real world scenarios (or, if the authors feel confident that none of these assumptions threaten the validity of the conclusions reached, a convincing argument making this point would be great also). \end{point} \begin{reply} Thank you for this note. We have added a discussion on robustness of our results to modeling assumptions in Section \ref{sec:conclusion}. We believe that our core insights of market balance mediating competition effects, and thus affecting bias in an experiment, broadly extend to other settings. We conjecture that some of our results are more robust to modeling choices than others. For example, the result that the CR estimator is unbiased in the demand-constrained regime may be more robust than the result that the LR estimator is unbiased in the supply-constrained regime. \end{reply} \begin{point} Similarly, I can think of a number of treatment interventions a two-sided platform might want to test that are not well-suited to the TSR design. For instance, if a platform wants to test a new search ranking algorithm, it is not clear to me how this change would only be applied to some listings, but not others, i.e., the TSR design as proposed does not seem possible. This isn’t meant to take away from the potential benefits of TSR – it seems like a helpful design. It would just be helpful for the authors to discuss scenarios where the TSR design is, and is not, appropriate. • Related to my comment above about not all treatments lending themselves to TSR, one of the examples provided on Page 14 (changing the frequency with which the recommendation system promotes listings of a certain type) seems like one such broken example.
Even if I only treat some of the would-be promoted listings, isn’t this going to by definition “treat” some of the other listings that are not being promoted (assuming there is some type of UI space constraint)? \end{point} \begin{reply} This is a very good observation. The TSR design is more suited to interventions that act on a single customer-listing pair. The ranking algorithm you mention would act on a customer and a set of listings, which would not be well suited to the TSR design. We now provide a discussion of this in Section \ref{sec:conclusion}. On the second point you bring up about the recommendation setting example, we agree that if there is a UI space constraint or, additionally, a constraint on customer attention, then changing one listing would affect others. This is another reason to further study modifications in the consideration set formation, where perhaps customers can only sample a fixed number of listings into their consideration set. We briefly study this scenario in Appendix \ref{app:simulations} but leave further study of the choice model for future work. For the time being, we have removed the recommendation system example and replaced it with an example where the platform reduces friction for a listing in the checkout flow. \end{reply} \begin{point} Minor points \end{point} \begin{reply} We have addressed all of your minor points. We list the most prominent ones below. \end{reply} \begin{point} • The paper really hits the ground running, which I didn’t mind as someone familiar with the literature on experimentation in two-sided platforms, but it might be good to ease the reader into this literature a bit more in the introduction, and explain why this is an important problem.
I could see someone less familiar with the existing research asking “OK, so sometimes these experiments yield biased estimates – so what?” \end{point} \begin{reply} We have provided a bit more context in the introduction, where we now discuss how experiments are used and previous literature on the magnitude of the bias in marketplace experiments. \end{reply} \begin{point} • When varying customer and listing heterogeneity in Appendix B, the authors change the utilities, but I’m wondering if there’s a way to look at how the bias/variance changes as the number of customer types or listing types changes? This might not be easy to do, since this might also depend on the probabilities of inclusion in the choice set, etc., but when I think of heterogeneity, I think of the number of types, whereas what the authors do now seems to be something more akin to heterogeneous treatment effects. \end{point} \begin{reply} We have added simulations where we increase the heterogeneity of the listings and/or customers before the intervention. Appendix \ref{app:simulations} now contains simulations where we fix the number of listing types, but vary the pre-treatment utilities that customers have for these types to consider varying levels of heterogeneity between the listings. Further, we fix the relative lift that treatment has on the utilities to be uniform across all types, to remove any heterogeneous treatment effects. We find that this heterogeneity affects the bias of the CR estimator, but remarkably does not affect the bias of the TSR estimators. We also provide a similar analysis where we vary the heterogeneity between customers and show that this heterogeneity affects the bias of the LR estimator, but again not the TSR estimators. \end{reply} \begin{point} • The consideration set analysis in the appendix that was added after EC is useful. I’d be curious to see how the bias/variance of different estimators changes as the size of the consideration sets changes.
Perhaps it is worth redoing the included analysis for a couple of additional values of K (e.g., 10, 50, 100, 500). \end{point} \begin{reply} We believe that modifications in customer behavior (including the size of the consideration set, how customers sample listings into the set, and the choice model used) are all important directions to study and require more analysis than we can provide with our current model. We leave much of this direction to future work in the area. \end{reply} \begin{point} • It may be worth doing a comparison of the simulation RMSEs, in addition to showing bias and variance separately. Granted, $\mathrm{RMSE}^2$ is just $\mathrm{bias}^2 + \mathrm{variance}$, but it may be tough for the reader to do that calculation in their head, especially when the y-axes in the subfigures are on different scales. \end{point} \begin{reply} We have now added the RMSE of the estimators to all simulation figures. We remark in Section \ref{sec:variance} that the performance of these estimators with respect to RMSE depends largely on the size of the platform (or the time horizon of the experiment). The bias of the experiment is relatively stable with changes in these two factors, but of course variance decreases with the size of the market and the length of the time horizon. Thus whether bias or variance contributes more to the RMSE depends on the scale of the market and experiment. \end{reply} \begin{point} Again, I really enjoyed reading this paper, and am glad to see people doing work in this area. Thank you, and good luck! \end{point} \begin{reply} Thanks again for your constructive comments and encouragement. \end{reply} \reviewersection We would like to thank you for the careful reading of our paper and your comments. We have followed your suggestions in this revision. As a result, we think the paper has further improved. Below, we describe how we have addressed your individual comments. For your convenience, we include your comments.
\begin{point} On page 28, the authors mentioned that "the estimators are all upward-biased because of the parameter values chosen." I was wondering if there could be any guarantee on the sign of the bias due to interference, for the logit choice model studied in this paper. In practice, I could imagine that the exact size of the treatment effect is not the most critical for decision making: a company might be comfortable launching a new feature, if the estimate is positive, and they know that the estimator is guaranteed to underestimate a positive treatment effect. From the results presented here, it seems like interference generally leads to overestimating a positive treatment effect. I tried to think about this but couldn't easily find an example going in the other direction. In Figure 2 in the original submission to EC the TSR estimators were underestimating in the transient phase, but the new Figure 2 in the current submission is showing a different dynamic. \end{point} \begin{reply} We now show that for a broad class of positive treatment effects, the sign of the bias for LR and CR in the respective extremes of market balance is always positive (and negative for negative treatment effects). This is intuitive, as cannibalization will induce an overestimation of bookings in the treatment group relative to global treatment, and an underestimation of bookings in the control group relative to global control. We confirm the robustness of these results via simulations for intermediate regimes of market balance. % \end{reply} \begin{point} Thanks for including the simulation results on page 47 where the size of the consideration set is fixed at K=50. In comparison to earlier figures, the biases seem to be substantially larger (>0.8 vs around 0.02 for oversupplied CR, for example). I wonder if there are any intuitive explanations for this.
\end{point} \begin{reply} This is a good observation, and unfortunately one that our current analysis using the mean field model is not sufficiently equipped to answer. However, we believe that modifications in customer behavior (including the size of the consideration set, how customers sample listings into the set, and the choice model used) are all important directions to study and require more analysis than we can provide with our current model. We leave much of this direction to future work in the area. \end{reply} \begin{point} With $\lambda / \tau = 10$, would the platform run out of listings, and would the actual consideration set be smaller than 50 in this case? \end{point} \begin{reply} When the platform has fewer than 50 listings, the customer will sample all listings available into the consideration set. We have modified the text in Appendix \ref{app:simulations} to make this clear. \end{reply} \begin{point} I found myself wondering why the correction terms in (40) are multiplied by $\beta$ and $(1-\beta)$, respectively. \end{point} \begin{reply} The motivation is to weight the correction term less when the corresponding type of competition is weaker. For example, as we approach the highly demand-constrained regime, the term corresponding to customer competition gets diminishing weight, since we know that type of interference vanishes in that regime. A symmetric argument applies for the supply-constrained regime and the other competition term. It is possible that the correction terms themselves will go to 0 as the competition weakens, in which case the $\beta$ and $(1-\beta)$ weights would be redundant, but we leave the optimization of the $\ensuremath{\mathsf{TSR}}$ estimator for future work. \end{reply} \begin{point} I probably missed this, but for Figure 2, is the initial state the stationary state equilibrium when all customers and listings are in control?
\end{point} \begin{reply} In the earlier version, the initial steady state for the transient numerics was a market with all listings available. However, motivated by your question, we have realized that it is more practical for the initial steady state (in the experiment) to be the steady state equilibrium when all customers and listings are in control. We have now modified the figure to start at the steady state for global control. \end{reply} \begin{point} It could be helpful if the y-axes are normalized for the first two subfigures in Figures 4-11. \end{point} \begin{reply} Thank you for the suggestion. We have now normalized the axes in the mean field bias and simulation bias figures. See Appendix \ref{app:simulations}. We find that the mean field bias and simulation bias are quite close in each of the scenarios we consider. \end{reply} \begin{point} Also, for a number of settings, we see that the mean field bias is substantially smaller than the simulation bias for TSRI-2 but not for TSRI-1, and I am curious why we see this. \end{point} \begin{reply} For settings with few customers $(\lambda = 0.1)$, we see that in some cases the simulation bias in TSRI-2 differs more from the mean field bias than for the other estimators. In some cases (small average utility) the simulation bias is lower, while in other cases (large heterogeneity between customers) the simulation bias is larger. We believe that these discrepancies are due to the larger standard errors associated with the TSRI-2 estimator. In this revised manuscript, we ran larger simulations than in the original submission ($N=5000$ instead of $N=1000$), and the discrepancies between the mean field and simulation bias are smaller. In addition, we have added 95 percentile bootstrapped intervals to each of the simulation plots, and we can see that the mean field bias lies within the corresponding interval.
\end{reply} \newpage \begin{point} I did find the left panel in Figure 1 of the original EC submission (bias as a function of $\lambda / \tau$ for different estimators) very helpful, and I'm also curious what this figure would look like if normalized by GTE. \end{point} \begin{reply} We have included figures where we give mean field and simulation results for the estimators at $\lambda \in \{0.1, 1, 10\}$. Thanks again for all the constructive comments. \end{reply} \reviewersection \begin{point} I am satisfied with the changes the authors have made to the manuscript. \end{point} \begin{reply} We really appreciate your encouragement and support of our work. \end{reply} \end{document} \section{Related work} \label{sec:related} {\bf SUTVA.} The types of interference described in these experiments are violations of the Stable Unit Treatment Value Assumption (SUTVA) in causal inference \citep{ImbensRubin15}. SUTVA requires that the (potential outcome) observation on one unit should be unaffected by the particular assignment of treatments to the other units. A large number of recent works have investigated experiment design in the presence of interference, particularly in the context of markets and social networks. \noindent {\bf Interference in marketplaces.} Biases from interference can be large: \cite{Blake14} empirically show in an auction experiment that the presence of interference among bidders caused the estimate of the treatment effect to be wrong by a factor of two. \cite{Fradkin_2019} finds through simulations that a marketplace experiment changing search and recommendation algorithms can overestimate the true effect by 50 percent. % More recent work by \cite{holtz2020reducing} randomizes clusters of similar listings to treatment or control, and finds the bias due to interference can be almost 1/3 of the treatment effect.
Interestingly, \cite{holtz2020reducing} also finds weak empirical evidence that the extent of interference depends on market balance; our paper provides strong theoretical grounding for such a claim. Inspired by the goal of reducing such bias, other work has developed approaches to bias characterization and reduction, both theoretically (e.g., \cite{Basse16} in the context of auctions with budgets) and via simulation (e.g., \cite{Holtz18}, who explores the performance of $\ensuremath{\mathsf{LR}}$ designs). Our work complements this line by developing a mathematical framework for the study of estimation bias in dynamic platforms. Key to our analysis is the use of a mean field model to model both transient and steady-state behavior of experiments. A related approach is taken in \cite{Wager19}, where a mean field analysis is used to study equilibrium effects of an experimental intervention where treatment is incrementally applied in a marketplace (e.g., through small pricing changes). \noindent {\bf Interference in social networks.} The bulk of the literature on experimental design with interference considers interference that arises through some underlying social network: e.g., \cite{Manski13} studies the identification of treatment responses under interference; \cite{Ugander13} introduces a graph cluster based randomization scheme and analyzes the bias and variance of the design; and many other papers, including \cite{Athey18, Basse19, Saveski17}, focus on estimating the spillover effects created by interference. In particular, \cite{pouget2019variance} and \cite{zigler2018bipartite} consider interference on a bipartite network, which is closer to a two-sided marketplace setting. In general, this line of work considers a fixed interference pattern (social network) over time. Our work is distinct because the interference caused by supply and demand competition is endogenous to the experiment and dynamically evolving over time.
\noindent {\bf Other experimental designs.} In practice, platforms currently mitigate the effects of interference through either clustering techniques that change the unit of observation to reduce spillovers among them (e.g., \cite{chamandy16}), similar to some of the works mentioned above (e.g., \cite{Holtz18, Ugander13}); or by {\em switchback testing} \citep{sneider19}, in which the treatment is turned on and off over time. Both cause a substantial increase in estimation variance due to a reduction in effective sample size, and thus the naive $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ designs remain popular workhorses in the platform experimentation toolkit. In addition to these broad classes of experiments, other work has also introduced modified experiment designs for specific types of interventions, such as \cite{hathuc2020counterfactual} for ranking experiments. \noindent {\bf Two-sided randomization.} Finally, a closely related paper is \cite{bajari2019double}. In work independent of our own, the authors propose a more general multiple randomization design of which $\ensuremath{\mathsf{TSR}}$ is a special case. They focus on a static model and provide an elegant and complete statistical analysis under a local interference assumption. By contrast, we focus on a dynamic platform model with market-wide interference patterns and develop a mean field analysis of bias. \subsection{Transient behavior} For practical implementation, it is important to consider the relative bias in the candidate estimators in the transient system, since experiments are typically run for relatively short time horizons. Theoretically, we can provide some insight when $\tau \to \infty$: in this case, the dominant term in the right hand side of \eqref{eq:ODE} is $(\rho(\theta) - s_t(\theta))\tau$.
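This dominant term drives exponentially fast relaxation of the state toward $\rho(\theta)$. As a heuristic sketch (not the formal argument; here $u_t := s_t(\theta) - \rho(\theta)$, and $C$ denotes an assumed bound, uniform in $\tau$, on the remaining terms of \eqref{eq:ODE}, which is plausible since booking rates are bounded by the demand arrival rate):
\[
\left| \frac{du_t}{dt} + \tau u_t \right| \le C
\quad \Longrightarrow \quad
|u_t| \le |u_0|\, e^{-\tau t} + \frac{C}{\tau} \longrightarrow 0
\quad \text{for each fixed } t > 0 \text{ as } \tau \to \infty,
\]
by an integrating factor (Gr\"onwall-type) argument.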
Using this fact, it can be shown that as $\tau \to \infty$, for each $t > 0$, there holds $\v{s}_t^*(a_C, a_L) \to \v{\rho}(a_L)$ (where we define $\v{\rho}(a_L)$ as in Section \ref{ssec:ss_theory}). In other words, the state {\em remains} at $\v{\rho}(a_L)$ at all times. As a result, in this limit the transient estimators $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T | a_C)$ and $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(T | a_L)$ are equivalent to $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(\infty | a_C)$ and $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{LR}}}(\infty | a_L)$, respectively. In particular, asymptotically as $\tau \to \infty$, the transient estimator $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}(T | a_C)$ will be an {\em unbiased} estimate of $\ensuremath{\mathsf{GTE}}$ at all times $T > 0$. (The same is true if $\lambda \to 0$, provided the initial state is $\v{s}_0 = \v{\rho}(a_L)$.) More generally, Figure \ref{fig:transient_dynamics} numerically investigates how the time horizon of the experiment affects the performance of the estimators in the mean field model; there we plot examples where $\lambda/\tau$ ranges over demand-constrained, balanced, and supply-constrained values. The relative performance of estimators depends on the time horizon of interest as well as market balance conditions. \begin{figure}[H] \centering \begin{tabular}{c c c } \includegraphics[height=0.26\textwidth]{figs_transient/dynamics_lam_01_mod.png} & \includegraphics[height=0.26\textwidth]{figs_transient/dynamics_lam_1_mod.png} & \includegraphics[height=0.26\textwidth]{figs_transient/dynamics_lam_10_mod.png} \end{tabular} \caption{Transient dynamics of estimators with $\lambda/\tau = 0.1, 1$, and $10$.
Parameters are as defined in Figure \ref{fig:numerics}.} \label{fig:transient_dynamics} \end{figure} \section{Bias-variance tradeoff of estimators} \label{sec:variance} Our mean field model is deterministic, so it does not allow us to study the {\em variance} of the different estimators. In practice, however, markets consist of finitely many listings and experiments are run for a finite time horizon $T$, and so the variance of any estimator will be nonzero.\footnote{We note that even if the system only consists of a finite number of listings $N$, as $T \to \infty$ the standard error of the various estimators proposed in this paper will converge to zero. However, for finite $T$, this is not the case; since A/B tests are always run to a finite horizon $T$, this nonzero variance will impact the accuracy of any estimates obtained.} In particular, the variance of estimators becomes an important consideration alongside bias, notably when choosing between multiple estimators with similar bias. The variance of the $\ensuremath{\mathsf{TSR}}$ estimators is especially important, given the earlier discussion that many heuristics that platforms use to minimize bias do so at the cost of increased variance, leading to under-powered experiments (see Section \ref{sec:related}). With this background as motivation, in this section we provide a preliminary yet suggestive simulation study of variance. % The simulations highlight two important considerations that a platform must take into account when designing and analyzing an experiment. First, similar to the results from Section \ref{sec:bias} on bias, we find that the estimator with the lowest variance depends on market balance. Second, we see a bias-variance tradeoff between the $\ensuremath{\mathsf{CR}}$, $\ensuremath{\mathsf{LR}}$ and $\ensuremath{\mathsf{TSR}}$ estimators, with the $\ensuremath{\mathsf{TSR}}$ estimators offering bias improvements at the cost of an increase in variance.
We emphasize the point that whether a platform should care more about bias or variance depends on the size of the platform (number of listings $N$) and the time horizon on which the experiment is run. The bias of the experiment is relatively unaffected by changes in these two factors, but of course variance decreases in the size of the market and the length of the time horizon. Thus these two factors dictate whether bias or variance contributes more to the overall $\mathrm{RMSE}$. Full details of the simulation environment and parameters are in Appendix \ref{app:simulations}, which we briefly summarize here. We simulate marketplace experiments with varying market parameters for a finite system with $N=5000$ listings and a fixed time horizon $T$. For each run of the simulation, we fix an experiment design (e.g., $\ensuremath{\mathsf{CR}}$, $\ensuremath{\mathsf{LR}}$, $\ensuremath{\mathsf{TSR}}$) and simulate customer arrivals and booking decisions until time $T$. System evolution is simulated according to the continuous time Markov chain specified in \eqref{eq:R1}-\eqref{eq:R2}. We calculate the estimator corresponding to the experiment design, defined in \eqref{eq:naive_CR}-\eqref{eq:naive_TSR_est} and \eqref{eq:TSRI-A} (for $k=1,2$), for the time interval $[T_0, T]$, where $T_0$ is chosen to eliminate the transient burn-in period. We then simulate multiple runs and compare the bias and standard error of the estimators across runs. Note that we report the true standard errors, calculated across simulation runs. For discussion on the estimation of standard errors, see Section \ref{sec:conclusion}. Figure \ref{fig:simulations_hom} shows simulations for a homogeneous system with only one customer type and one listing type, with the same parameters as the mean field numerics presented in Figure \ref{fig:numerics_homogeneous}.
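As a concrete illustration of how such summary statistics are produced, the normalized bias, standard error, and $\mathrm{RMSE}$ (with bootstrapped percentile intervals) can be computed from a collection of per-run estimates along the following lines. This is a minimal sketch, not the paper's code: the function name, the synthetic runs, and the true $\ensuremath{\mathsf{GTE}}$ value are placeholders.

```python
import numpy as np

def summarize(estimates, gte, n_boot=2000, seed=0):
    """Bias, standard error, and RMSE of an estimator across simulation runs,
    normalized by the true GTE, with a bootstrapped 95% interval for the bias."""
    rng = np.random.default_rng(seed)
    est = np.asarray(estimates, dtype=float)
    bias = est.mean() - gte                      # E[estimate] - GTE
    se = est.std(ddof=1)                         # spread across runs
    rmse = np.sqrt(bias ** 2 + est.var(ddof=1))  # MSE = bias^2 + variance
    # Percentile bootstrap over whole runs for the bias estimate.
    boot = rng.choice(est, size=(n_boot, est.size)).mean(axis=1) - gte
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return {"bias": bias / gte, "se": se / gte, "rmse": rmse / gte,
            "bias_ci": (lo / gte, hi / gte)}

# Hypothetical example: 500 runs of an estimator with mean 0.12 and standard
# deviation 0.005, when the true GTE is 0.1 (i.e., a 20% upward bias).
runs = 0.12 + 0.005 * np.random.default_rng(1).standard_normal(500)
stats = summarize(runs, gte=0.1)
```

Normalizing by $\ensuremath{\mathsf{GTE}}$ matches the convention in the figures, and resampling whole runs means the interval reflects only across-run variability.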
Note that the bias of the estimators in these large market simulations echoes the qualitative insights about bias obtained from the mean field model. Similar findings are obtained in more general scenarios, cf.~Appendix \ref{app:simulations}, where we investigate the effect of heterogeneity in the marketplace. \begin{figure} \centering \includegraphics[height=0.3\textwidth ]{figs_sims/homogeneous_listings_customers_bias_simulations_error_bar_mod.png} \includegraphics[height=0.3\textwidth]{figs_sims/homogeneous_listings_customers_se_simulations_error_bar_mod.png} \\ \hspace{2cm}\includegraphics[height=0.3\textwidth]{figs_sims/homogeneous_listings_customers_rmse_simulations_error_bar_mod.png} \caption{(Homogeneous listings and customers.) Top left: Bias of each estimator. Top right: Standard error of estimates. Bottom: $\mathrm{RMSE}$ of the estimates. Statistics are normalized by $\ensuremath{\mathsf{GTE}}$. All statistics are calculated across 500 runs, with bootstrapped 95 percentile confidence intervals provided for each statistic. We consider a setting with homogeneous listings and customers, with the same utilities as defined in Figure \ref{fig:numerics_homogeneous}. In the $\ensuremath{\mathsf{CR}}$ design, $a_C = 0.5$. In the $\ensuremath{\mathsf{LR}}$ design, $a_L = 0.5$. Simulation parameters are defined in Appendix \ref{app:simulations}. } \label{fig:simulations_hom} \end{figure} These simulations point to a bias-variance tradeoff between the $\ensuremath{\mathsf{TSR}}$ estimators and the naive $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimators, as well as between the three $\ensuremath{\mathsf{TSR}}$ estimators themselves. The $\ensuremath{\mathsf{TSR}}$ estimators, as discussed earlier, offer benefits over the naive $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ estimators with respect to bias, but they do so at the cost of an increase in variance.
Moreover, among the three $\ensuremath{\mathsf{TSR}}$ estimators that we explore, those with lower bias also have higher variance. The naive $\ensuremath{\mathsf{TSRN}}$ estimator has variance similar to the lower of $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$, but its bias is also only similar to the lower bias of $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$. On the other hand, $\ensuremath{\mathsf{TSRI\text{-}2}}$ shows a substantial improvement in bias over both $\ensuremath{\mathsf{CR}}$ and $\ensuremath{\mathsf{LR}}$ for several market conditions, but this estimator also has the largest variance among all five estimators, especially in the regime of intermediate market balance (cf. Appendix \ref{app:simulations}). Further, the minimum-variance estimator depends on market conditions. For example, in a demand-constrained market with $\lambda/\tau = 0.1$, $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}$ has the lowest standard error, whereas in a supply-constrained market with $\lambda/\tau=10$, $\widehat{\ensuremath{\mathsf{GTE}}}^{\ensuremath{\mathsf{CR}}}$ has the highest standard error. We conclude this section by highlighting the potential of the class of $\ensuremath{\mathsf{TSR}}$ experiments, which opens up a large class of both designs and estimators. Among the three estimators we explore, we see that there is a $\ensuremath{\mathsf{TSR}}$ estimator with low bias and another $\ensuremath{\mathsf{TSR}}$ estimator with low variance. It is possible that, with further optimization of these designs and estimators, one can devise a new estimator that optimizes this bias-variance tradeoff. %
https://arxiv.org/abs/1802.08015
Spanned lines and Langer's inequality
We collect some results in combinatorial geometry that follow from an inequality of Langer in algebraic geometry. Langer's inequality gives a lower bound on the number of incidences between a point set and its spanned lines, and was recently used by Han to improve the constant in the weak Dirac conjecture. Here we observe that this inequality also leads to improved constants in Beck's theorem, which states that a finite point set in the real or complex plane has many points on a line or spans many lines. Most of the proofs that we use are not original, and the goal of this note is mainly to carefully record the quantitative results in one place. We also include some discussion of possible further improvements to these statements.
\section{Introduction} We consider the following standard notions from combinatorial geometry. For a point set $P$ in a space $\mathbb R^d$ or $\mathbb C^d$, we write $L(P)$ for the set of lines spanned by $P$, and we set $n = |P|$. We write $\ell_i$ for the number of lines in $L(P)$ containing exactly $i$ points of $P$. We have the basic equalities \begin{equation}\label{eq:basic} \sum_{i\geq 2} \ell_i = |L(P)|,~~~ \sum_{i\geq 2}i\ell_i = I(P,L(P)), ~~~\text{and}~~~ \sum_{i\geq 2} \binom{i}{2}\ell_i = \binom{n}{2}, \end{equation} where $I(P,L(P))$ denotes the number of incidences between $P$ and $L(P)$, i.e., the number of pairs $(p,\ell)\in P\times L(P)$ such that $p\in \ell$. We consider the implications of the following result of Langer \cite[Proposition 11.3.1]{L}. \begin{theorem}[Langer]\label{thm:langer} Let $P$ be a set of $n$ points in $\mathbb C^2$, with at most $2n/3$ points collinear. Then \begin{equation}\label{eq:langer} \sum_{i\geq 2} i \ell_i\geq \frac{n(n+3)}{3}. \end{equation} \end{theorem} Langer derived this theorem from his generalization of the Bogomolov--Miyaoka--Yau inequality for algebraic surfaces (we will not try to explain the underlying theory here, and we will consider Theorem \ref{thm:langer} as a black box). Langer's work was published in 2003, but it appears that the inequality has long gone unnoticed in combinatorial geometry. For instance, it is sharper than a later result of Payne and Wood \cite[Theorem 4]{PW}. Recently, Han \cite{Ha} uncovered Theorem \ref{thm:langer} and used it to improve the constant in the weak Dirac conjecture. To be precise, Han obtained \eqref{eq:langer} in the middle of the proof of \cite[Theorem 10]{Ha} as a consequence of a Hirzebruch-type inequality (stated as \eqref{eq:bojanowski1} below), which can be found in the Master's thesis of Bojanowski \cite{Bo} (see also Pokora \cite{Po}).
This inequality from \cite{Bo} is in turn based on the work of Langer \cite{L}, so Han's result is indirectly derived from \cite{L}, but it can also be obtained directly from Langer \cite[Proposition 11.3.1]{L}, i.e., from Theorem \ref{thm:langer}. Let us introduce Han's improvement of the weak Dirac conjecture. The strong Dirac conjecture \cite{D} states that there is a constant $c$ such that any set of $n$ non-collinear points in $\mathbb R^2$ contains a point that lies on at least $n/2-c$ lines spanned by the point set. The weak Dirac conjecture states that there is a constant $c'>0$ such that any set of $n$ non-collinear points in $\mathbb R^2$ has a point on at least $c'n$ spanned lines. This weak form was proved by Beck \cite{Be} and Szemer\'edi and Trotter \cite{ST}, in both cases with $c'$ unspecified but very small. Payne and Wood \cite{PW} obtained $c' = 1/37$, Pham and Phi \cite{PP} improved it to $c'=1/26$, and Han \cite{Ha} made a large jump to $c' = 1/3$. We include the proof from Theorem \ref{thm:langer} here for the sake of exposition. \begin{corollary}[Han]\label{cor:han} Let $P$ be a set of $n$ points in $\mathbb C^2$, not contained in a line. Then there is a point in $P$ that is contained in at least $(n+3)/3$ lines spanned by $P$. \end{corollary} \begin{proof} If there is a line $\ell\in L(P)$ containing more than $2n/3$ points of $P$, then any point off $\ell$ lies on more than $2n/3$ lines of $L(P)$. Thus we can assume that $P$ has at most $2n/3$ points collinear, so that Theorem \ref{thm:langer} gives \[I(P,L(P)) = \sum_{i\geq 2} i \ell_i\geq \frac{n(n+3)}{3}.\] It follows that there is a point $p\in P$ that is involved in at least $(n+3)/3$ incidences, i.e., it lies on that many spanned lines. \end{proof} Remarkably, Theorem \ref{thm:langer} and Corollary \ref{cor:han} are both tight in $\mathbb C^2$, by Construction \ref{con:hesse} below. 
On the other hand, that construction is not possible in $\mathbb R^2$, so there may be room for improvement over $\mathbb R$. \paragraph{Hirzebruch-type inequalities.} Let us clarify what is meant by a ``Hirzebruch-type inequality''. For a point set $P$ in $\mathbb R^2$ that is not collinear, \emph{Melchior's inequality} \cite{M} states \begin{equation}\label{eq:melchior} \ell_2 \geq 3 + \sum_{i\geq 4} (i-3)\ell_i. \end{equation} This inequality, which can be derived from Euler's formula, has many applications in combinatorial geometry. For instance, it implies the Sylvester--Gallai theorem, which says that $\ell_2>0$ for any non-collinear point set (see for instance \cite{GT}). Melchior's inequality is false in $\mathbb C^2$, again because of Construction \ref{con:hesse} below. For a point set $P$ in $\mathbb C^2$ (or in $\mathbb R^2$) with at most $n-3$ points collinear, \emph{Hirzebruch's inequality} \cite{Hi} states that \begin{equation}\label{eq:hirzebruch} \ell_2 + \frac{3}{4}\ell_3 \geq n + \sum_{i\geq 5} (2i-9)\ell_i. \end{equation} One particular consequence is that $\ell_2 +\ell_3>0$ for any $n$ points in $\mathbb C^2$ with at most $n-3$ collinear. Kelly \cite{K} used this fact to solve a problem of Serre. Bojanowski \cite{Bo} and Pokora \cite{Po} used Langer's work \cite{L} to prove an inequality of a similar form. For a point set $P$ in $\mathbb C^2$ with at most $2n/3$ points collinear, we have \begin{equation}\label{eq:bojanowski1} \ell_2 + \frac{3}{4}\ell_3 \geq n + \sum_{i\geq 5} \frac{i^2-4i}{4}\ell_i, \end{equation} or equivalently \begin{equation}\label{eq:bojanowski2} \sum_{i\geq 2} (4i-i^2)\ell_i \geq 4n. \end{equation} This inequality is equivalent to \eqref{eq:langer}: Moving $\sum(i-i^2)\ell_i$ to the right in \eqref{eq:bojanowski2}, we get \[ 3\sum_{i\geq 2} i\ell_i \geq 4n + \sum_{i\geq 2} (i^2-i)\ell_i =4n + 2\sum_{i\geq 2} \binom{i}{2} \ell_i = 4n + 2\binom{n}{2} =n^2+3n. 
\] The inequality \eqref{eq:bojanowski1} is at least as good as \eqref{eq:hirzebruch}, and strictly better whenever there is a line with at least five points. Of course, its condition on the number of collinear points is also stronger, but in combinatorial problems the case where many points are collinear can usually be handled in other ways. \paragraph{Constructions.} As remarked by Langer \cite[Example 11.3.2]{L}, the inequality \eqref{eq:langer} is tight in $\mathbb C^2$ for infinitely many $n$, because of the following configuration. \begin{construction}[Fermat configuration]\label{con:hesse} For $n$ divisible by $3$, there is a set $P$ of $n$ points in $\mathbb C^2$ with the following properties. The points are contained in three non-concurrent lines, with $n/3$ points on each line. For one point of $P$ on one of the three lines and another point of $P$ on another of the lines, the line through the two points hits the third line in a point of $P$. Thus $P$ spans $(n/3)^2$ lines with three points, and none with two points. Therefore we have \[\sum_{i\geq 2}i\ell_i = \left(\frac{n}{3}\right)^2\cdot 3 + 3\cdot \frac{n}{3} = \frac{n(n+3)}{3},\] showing that \eqref{eq:langer} is best possible. Each point lies on exactly $(n+3)/3$ spanned lines, showing that Corollary \ref{cor:han} is tight. In total there are $n^2/9+3$ spanned lines. \end{construction} For a more explicit description of the construction, see \cite[Example 1]{BDSW}. Note that by adding or removing a few points, one can obtain constructions for any $n$ with a similar number of incidences (although one loses the property that no line has two points). Pokora \cite[Remark 2.5]{Po} lists several other examples for which \eqref{eq:langer} is tight, but these do not appear to be part of an infinite family. A crucial point is that Construction \ref{con:hesse} cannot be realized in $\mathbb R^2$, since it would violate the Sylvester--Gallai theorem. 
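The tightness claims above are easy to check by computer for small $n$. The following sketch (our illustration) uses one standard realization of the $n=9$ Fermat configuration (cf.\ \cite[Example 1]{BDSW}): the points $(1:-\omega^a:0)$, $(0:1:-\omega^b)$, $(-\omega^c:0:1)$ with $\omega=e^{2\pi i/3}$, where a triple taken from the three different lines is collinear exactly when $a+b+c\equiv 0 \pmod 3$.

```python
import cmath
from itertools import combinations

def cross(p, q):  # line through two projective points
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

def dot(p, q):
    return sum(a*b for a, b in zip(p, q))

w = cmath.exp(2j*cmath.pi/3)
pts = ([(1, -w**a, 0) for a in range(3)]      # n/3 points on each of
     + [(0, 1, -w**b) for b in range(3)]      # three non-concurrent lines
     + [(-w**c, 0, 1) for c in range(3)])
n = len(pts)

lines = set()
for p, q in combinations(pts, 2):
    ln = cross(p, q)
    lines.add(frozenset(i for i, r in enumerate(pts) if abs(dot(ln, r)) < 1e-9))

assert len(lines) == n*n//9 + 3                    # 12 spanned lines
assert all(len(ln) == 3 for ln in lines)           # no lines with two points
assert sum(len(ln) for ln in lines) == n*(n+3)//3  # Langer's bound with equality
# every point lies on exactly (n+3)/3 = 4 spanned lines
assert all(sum(i in ln for ln in lines) == (n+3)//3 for i in range(n))
```

All three tightness statements (the incidence count, the number of spanned lines, and the number of lines through each point) are verified by the assertions.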
The best construction in $\mathbb R^2$ that we are aware of is the following example, which is ascribed to B\"or\"oczky in \cite{CM}. \begin{construction}[B\"or\"oczky's example]\label{con:boroczky} For $n$ divisible by $2$, there is a set $P$ of $n$ points in $\mathbb R^2$ with $n/2$ points on a conic and $n/2$ points on a line that is disjoint from the conic, satisfying the following properties. For any two points of $P$ on the conic, the line through them hits the disjoint line in a point of $P$; this gives $\binom{n/2}{2}$ lines with three points. The only other spanned lines are the $n/2$ tangent lines to the conic at points of $P$, each of which hits another point of $P$ on the disjoint line. Thus we have \[\sum_{i\geq 2}i\ell_i = \frac{n}{2}\cdot 2 + \binom{n/2}{2}\cdot 3 + 1\cdot \frac{n}{2} = \frac{3n(n+2)}{8}.\] Each point of $P$ on the conic lies on $n/2$ spanned lines, and the total number of spanned lines is $\binom{n/2}{2}+n/2+1\geq n^2/8$. \end{construction} See \cite[Section 2]{GT} for a more detailed description, including an analysis of the constructions one obtains by adding or removing points. \newpage Construction \ref{con:boroczky} shows that, although it is possible that Theorem \ref{thm:langer} can be improved in $\mathbb R^2$, one cannot expect more than the following. \begin{conjecture} Let $P$ be a set of $n$ points in $\mathbb R^2$, with at most $n/2$ points collinear. Then \[\sum_{i\geq 2} i \ell_i\geq \frac{3}{8}n^2.\] \end{conjecture} If this conjecture holds, it would imply a further improvement to the weak Dirac conjecture. At the same time, Construction \ref{con:boroczky} shows that such a bound cannot directly prove the strong Dirac conjecture. Note that the strong Dirac conjecture does hold for Construction \ref{con:boroczky}, because any point on the conic lies on $n/2$ spanned lines.
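Construction \ref{con:boroczky} can also be checked numerically. The sketch below (our code) realizes the conic as the unit circle and the disjoint line as the line at infinity, which is one standard projective model of the construction (cf.\ \cite[Section 2]{GT}); here $n=12$.

```python
import math
from itertools import combinations

def cross(p, q):  # line through two projective points
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

def dot(p, q):
    return sum(a*b for a, b in zip(p, q))

m = 6  # n/2 points on the circle, n/2 direction points on the line at infinity
circle = [(math.cos(2*math.pi*j/m), math.sin(2*math.pi*j/m), 1) for j in range(m)]
infty = [(math.cos(math.pi*d/m + math.pi/2),
          math.sin(math.pi*d/m + math.pi/2), 0) for d in range(m)]
pts = circle + infty
n = len(pts)

lines = set()
for p, q in combinations(pts, 2):
    ln = cross(p, q)
    lines.add(frozenset(i for i, r in enumerate(pts) if abs(dot(ln, r)) < 1e-9))

ell = {}
for ln in lines:
    ell[len(ln)] = ell.get(len(ln), 0) + 1

assert ell == {3: m*(m-1)//2, 2: m, m: 1}            # chords, tangents, far line
assert len(lines) == m*(m-1)//2 + m + 1              # total spanned lines
assert sum(len(ln) for ln in lines) == 3*n*(n+2)//8  # incidence count above
```

The chord through the circle points at angles $2\pi j/m$ and $2\pi k/m$ has direction determined by $j+k \bmod m$, and the tangent at $j$ has direction $2j \bmod m$, which is exactly the incidence pattern described in the construction.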
Perhaps one could prove the strong Dirac conjecture by showing that Construction \ref{con:boroczky} is in some sense the only non-collinear example (up to projective equivalence and small modifications) in $\mathbb R^2$ with at most $n/2$ points collinear and with fewer than $n^2/2$ incidences. Indeed, the next best examples that we know have roughly $n^2/2$ incidences, namely two lines with $n/2$ points each, and Sylvester's group construction on an irreducible cubic (see \cite[Proposition 2.6]{GT}). \paragraph{Other applications.} We think that Theorem \ref{thm:langer} will have many more applications in combinatorial geometry. Let us mention two examples. Fulek et al. \cite{F} proved that there exists a $c>0$ such that if a finite point set in $\mathbb R^2$ is not covered by two lines, then there are three points in the set such that all three lines spanned by them contain at most $c$ points (they call this a \emph{$c$-ordinary triangle}). Their proof gave $c=12000$. Dubroff \cite{Du} found a more efficient argument that, with the help of Theorem \ref{thm:langer} and some of the corollaries obtained below, proves the same statement with $c=11$. De Zeeuw \cite{DZ} proves that for any $\alpha>0$ there exists $c_\alpha>0$ such that if $n$ points in $\mathbb R^3$ have at most $\alpha n$ points on a plane, then they span at least $c_{\alpha} n^2$ lines with exactly two points. This result can be proved without Theorem \ref{thm:langer}, but using it gives far better values for $c_\alpha$, and also simplifies the proof. \newpage \section{Spanned lines in \texorpdfstring{$\mathbb R^2$}{the real plane}} We consider a theorem of Beck \cite[Theorem 3.1]{Be}, sometimes referred to as ``Beck's theorem of two extremes''. It states that $n$ points either span a line with $c_1n$ points, or they span $c_2n^2$ lines.
Beck gave $c_1 = 1/100$ and left $c_2$ unspecified, but his proof would clearly give a very small value; Payne \cite[Theorem 2.4]{Pa} wrote out a proof with $c_1 = c_2 = 2^{-15}$. Payne and Wood \cite{PW} gave a more refined argument that roughly gives $c_1=c_2= 1/100$ (it is not stated explicitly in the paper, but in \cite[Theorem 5]{PW} one can for instance put $\ell =1/100$). Here we use Langer's inequality to obtain much better constants. The idea of the proof can already be found in Kelly and Moser \cite[Equation 4.62]{KM}. \begin{theorem}\label{thm:beckreal} For any set $P$ of $n$ points in $\mathbb R^2$, one of the following is true: \begin{itemize} \item There is a line that contains more than $\alpha n$ points of $P$, with $\alpha = (6+\sqrt{3})/9 > 0.85$; \item There are at least $n^2/9$ lines spanned by $P$. \end{itemize} \end{theorem} \begin{proof} First assume that $P$ has at most $2n/3$ points collinear, so that by Theorem \ref{thm:langer} inequality \eqref{eq:langer} holds. We can rearrange Melchior's inequality \eqref{eq:melchior} to \[\sum_{i\geq 2} (3-i)\ell_i\geq 3,\] and then add Langer's inequality \eqref{eq:langer} to get \[\sum_{i\geq 2} i\ell_i + \sum_{i\geq 2} (3-i)\ell_i \geq \frac{n(n+3)}{3} + 3,\] or equivalently \begin{equation}\label{eq:manylines} 3|L(P)| = 3\sum_{i\geq 2} \ell_i \geq \frac{n^2+3n+9}{3}. \end{equation} Thus $|L(P)|\geq n^2/9$, which proves the second alternative. Now suppose that $P$ has more than $2n/3$ points collinear. Let $L$ be the line with more than $2n/3$ points of $P$; say $|P\cap L| = \alpha n$ and $|P\backslash L| = (1-\alpha)n$. Then we count one line for every choice of a point from $P\cap L$ and a point from $P\backslash L$, but we may overcount once for every pair of points from $P\backslash L$ (when the line through that pair hits $L$ in a point of $P$), so \[|L(P)| \geq \alpha n\cdot (1-\alpha )n - \binom{(1-\alpha )n}{2} \geq \left(-\frac{3}{2}\alpha^2 +2\alpha - \frac{1}{2}\right)n^2. 
\] As long as we have $-3\alpha^2/2 +2\alpha - 1/2 \geq 1/9$, the second alternative of the theorem holds. Solving for $\alpha$ shows that this is the case for $\alpha\leq (6+\sqrt{3})/9$. Otherwise, the first alternative holds. \end{proof} The best example in $\mathbb R^2$ that we know of, in terms of having few spanned lines while having less than $2n/3$ points collinear, is Construction \ref{con:boroczky}, which spans roughly $n^2/8$ lines while having at most $n/2$ points collinear. Thus it seems that a small improvement to Theorem \ref{thm:beckreal} is still possible (one would have to decrease $\alpha$ in the first alternative to make this possible). \newpage Beck \cite{Be} also proved the closely related statement that if $n$ points have exactly $n-k$ collinear, then they determine at least $c_3kn$ lines. This solved a problem that was often mentioned by Erd\H os (see for instance \cite[Section 4]{Er}). Again Beck did not specify the constant, but Payne and Wood \cite[Theorem 5]{PW} refined the proof to obtain $c_3 = 1/98$ (and Payne \cite{Pa} refined this to $1/93$). Langer's inequality gives an improvement, using essentially the same proof as Beck. \begin{corollary} Let $P$ be a set of $n$ points in $\mathbb R^2$ with at most $n-k$ points contained in any line. Then \[|L(P)| \geq \frac{1}{9}kn.\] \end{corollary} \begin{proof} If the second alternative of Theorem \ref{thm:beckreal} holds, then we are done, since $n^2/9 \geq kn/9$. Suppose that the first alternative holds, so we have $k\leq (1-\alpha)n$. Then we count \[ k(n-k) - \binom{k}{2} = k\left(n-k - \frac{k-1}{2}\right) \geq k\left(n- \frac{3}{2}k\right) \geq \frac{3\alpha-1}{2} kn > \frac{1}{9}kn\] lines. \end{proof} Erd\H os specifically asked for $ckn$ lines with a constant $c$ independent of $k$ and $n$, and in that sense doing better than $c=1/9$ would require an improvement in Theorem \ref{thm:beckreal}. 
In Sylvester's cubic curve construction (see \cite{GT}), we have $k= n-3$ (at most three points are collinear), and we have about $n^2/6$ spanned lines, which shows that we cannot do better than $c = 1/6$. \bigskip The following corollary gives more detailed information on the spanned lines with very few points. The idea, due to Elliott \cite[proof of Theorem 2]{El}, is that in any non-collinear point set at least half the spanned lines have at most three points; see also Purdy and Smith \cite[Lemma 2.2]{PS} or Payne and Wood \cite[Observation 15]{PW}. \begin{corollary}\label{cor:twothreereal} Let $P$ be a set of $n$ points in $\mathbb R^2$ with at most $\alpha n$ points collinear, for $\alpha = (6+\sqrt{3})/9\approx 0.85$. Then \[ \ell_2+\ell_3 \geq \frac{n^2}{18}.\] \end{corollary} \begin{proof} Writing Melchior's inequality \eqref{eq:melchior} as \[\ell_2 \geq 3 + \sum_{i\geq 4} (i-3) \ell_i\] and adding $\ell_2 + 2\ell_3$ on both sides gives \[2\ell_2+2\ell_3 \geq 3 + \ell_2 + 2\ell_3 + \sum_{i\geq 4} (i-3) \ell_i \geq 3 + \sum_{i\geq 2} \ell_i.\] In other words, more than half the spanned lines have two or three points. Thus, by Theorem \ref{thm:beckreal}, we have $\ell_2 + \ell_3 \geq (n^2/9)/2 = n^2/18$. \end{proof} \newpage \section{Spanned lines in \texorpdfstring{$\mathbb C^2$}{the complex plane}} As mentioned in the introduction, Melchior's inequality fails in $\mathbb C^2$, so the arguments of the previous section do not work there. Nevertheless, we can obtain bounds that are slightly weaker than in $\mathbb R^2$, using only the Cauchy--Schwarz inequality (and Langer's inequality, of course). Beck's theorem of two extremes is known to hold in $\mathbb C^2$, because it follows from the Szemer\'edi--Trotter theorem, which was proved in $\mathbb C^2$ by T\'oth \cite{T} and Zahl \cite{Z}. However, their proofs would give extremely small constants in Beck's theorem. Langer's inequality gives reasonable constants. 
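The quadratic $-\frac{3}{2}\alpha^2+2\alpha-\frac{1}{2}$ from the proof of Theorem \ref{thm:beckreal} also governs the complex case below. The following quick numerical check (our code) confirms that the two stated thresholds are precisely its crossing points with $1/9$ and $1/12$:

```python
import math

# lower bound (in units of n^2) on the number of spanned lines when a
# fraction a of the points lies on one line, from the counting argument
f = lambda a: -1.5 * a * a + 2 * a - 0.5

alpha = (6 + math.sqrt(3)) / 9   # threshold in the real case (target 1/9)
beta = (4 + math.sqrt(2)) / 6    # threshold in the complex case (target 1/12)

assert abs(f(alpha) - 1 / 9) < 1e-12
assert abs(f(beta) - 1 / 12) < 1e-12
# f is decreasing to the right of its vertex at a = 2/3, so f(a) >= 1/9
# (resp. 1/12) holds exactly up to alpha (resp. beta) on that side
assert alpha > 2 / 3 and beta > 2 / 3
```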
\begin{theorem}\label{thm:beckcomplex} For any set $P$ of $n$ points in $\mathbb C^2$, one of the following is true: \begin{itemize} \item There is a line that contains more than $\beta n$ points of $P$, with $\beta = (4+\sqrt{2})/6 >0.90$; \item There are at least $n^2/12$ lines spanned by $P$. \end{itemize} \end{theorem} \begin{proof} We have \[\binom{n}{2} = \sum_{\ell\in L(P)} \binom{|\ell\cap P|}{2} = \frac{1}{2}\sum_{\ell\in L(P)} |\ell\cap P|^2 - \frac{1}{2}\sum_{\ell\in L(P)} |\ell\cap P|, \] so by the Cauchy--Schwarz inequality we get (writing $m =|L(P)|$ and $I = I(P,L(P))$) \[ 2\binom{n}{2} \geq \frac{1}{m} \left(\sum_{\ell\in L(P)} |\ell\cap P|\right)^2 - \sum_{\ell\in L(P)} |\ell\cap P| = \frac{I^2}{m} - I =I\left( \frac{I}{m} - 1\right). \] Assuming that $P$ has at most $2n/3$ points collinear, Langer's inequality \eqref{eq:langer} gives \[n^2-n \geq \frac{n(n+3)}{3}\left(\frac{n(n+3)}{3m} - 1\right), \] which is equivalent to \begin{equation}\label{eq:spannedcomplex} |L(P)| = m \geq \frac{(n+3)^2}{12}. \end{equation} On the other hand, if $P$ has more than $2n/3$ points collinear, then as in the proof of Theorem \ref{thm:beckreal}, if we have $-3\beta^2/2 +2\beta - 1/2 \geq 1/12$, then the second alternative holds. Solving for $\beta$ shows that this is the case for $\beta\leq (4+\sqrt{2})/6$. Otherwise, the first alternative holds. \end{proof} In Construction \ref{con:hesse} we have at most $n/3$ points collinear and $n^2/9+3$ spanned lines, which suggests that some improvement to Theorem \ref{thm:beckcomplex} is possible. \bigskip We would have liked to prove a complex analogue of Corollary \ref{cor:twothreereal} (see \cite[Section 4]{DZ} for an application where it would be very useful), but we do not know how to do that without Melchior's inequality. Here is an attempt that comes close.
By \eqref{eq:bojanowski1}, we have \[\ell_2+\ell_3\geq \ell_2 + \frac{3}{4}\ell_3 \geq n + \sum_{i\geq 5}\frac{i^2-4i}{4} \ell_i \geq \sum_{i\geq 5}\ell_i,\] so, adding $\ell_2+\ell_3+\ell_4$ on both sides, \[2(\ell_2+\ell_3) +\ell_4\geq \sum_{i\geq 2}\ell_i = |L(P)|.\] From \eqref{eq:basic} we have $6\ell_4\leq \binom{n}{2}\leq n^2/2$, so $\ell_4\leq n^2/12$, which together with \eqref{eq:spannedcomplex} gives \[\ell_2+\ell_3\geq \frac{1}{2}\left(|L(P)|-\ell_4\right) \geq \frac{1}{2}\left(\frac{n^2 + 6n +9}{12}-\frac{n^2}{12}\right) =\frac{1}{4}n +\frac{3}{8}. \] The problem seems to be that \eqref{eq:bojanowski1} gives no control over $\ell_4$. This bound is actually worse than the bound $\ell_2+\ell_3\geq n$ that follows directly from inequality \eqref{eq:hirzebruch} or \eqref{eq:bojanowski1}, but we think this argument is worth mentioning, because it shows that any bound of the form $\ell_4<cn^2$ with $c< 1/12$ would give a quadratic lower bound on $\ell_2+\ell_3$. In $\mathbb R^2$, Brass \cite{Br} proved $\ell_4< n^2/14$ using Melchior's inequality, but again we have no equivalent in $\mathbb C^2$. However, we can obtain the following bound on the number of lines with five or more points. A similar statement using \eqref{eq:hirzebruch} can be found in Payne \cite[Observation 3.18]{Pa}. \begin{corollary}\label{cor:manypoorcomplex} Let $P$ be a set of $n$ points in $\mathbb C^2$ with at most $2n/3$ points collinear. Then for any $k\geq 5$ we have \begin{equation}\label{eq:richlines} \sum_{i \geq k} \ell_i \leq \frac{4}{(k-2)^2}|L(P)|.
\end{equation} \end{corollary} \begin{proof} By \eqref{eq:bojanowski1} we have \[\sum_{i=2}^{k-1} \ell_i \geq \ell_2 + \frac{3}{4}\ell_3 \geq n + \sum_{i\geq 5} \frac{i^2-4i}{4}\ell_i > \frac{k^2-4k}{4}\sum_{i\geq k} \ell_i,\] so, adding $\frac{k^2-4k}{4}\sum_{i= 2}^{k-1} \ell_i$ on both sides, we get \[\frac{k^2-4k+4}{4}\sum_{i= 2}^{k-1} \ell_i > \frac{k^2-4k}{4}\sum_{i\geq 2} \ell_i.\] Thus we have \[\sum_{i = 2}^{k-1} \ell_i > \frac{(k-2)^2-4}{(k-2)^2}|L(P)|,\] which implies the statement. \end{proof} It follows in particular, using Theorem \ref{thm:beckcomplex}, that \[\ell_2+\ell_3+\ell_4 > \frac{5}{9}|L(P)|\geq \frac{5}{108}n^2.\] Since $|L(P)|\leq \binom{n}{2}$, \eqref{eq:richlines} implies that the number of lines with at least $k$ points satisfies \[\sum_{i \geq k} \ell_i \leq \frac{2n^2}{(k-2)^2}.\] But note that one can do better if one knows that the number of spanned lines is less than $\binom{n}{2}$ (see \cite{Du} for an application of this observation). Compare this with the Szemer\'edi--Trotter theorem \cite{ST, T, Z} in $\mathbb R^2$, which implies that the number of lines with at least $k$ points is at most $cn^2/k^3$ for some constant $c$ (when $k<\sqrt{n}$). This $c$ is fairly large (\cite{F} states $c=125$), so \eqref{eq:richlines} does better for small $k$ (specifically $k\leq 58$). \newpage
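The cutoff $k\leq 58$ in the last remark is a direct integer computation: \eqref{eq:richlines} with $|L(P)|\leq\binom{n}{2}$ beats $125n^2/k^3$ exactly when $2k^3\leq 125(k-2)^2$. A quick check (our code, using the constant $c=125$ quoted above from \cite{F}):

```python
# 2/(k-2)^2 <= 125/k^3  <=>  2*k^3 <= 125*(k-2)^2
holds = [k for k in range(5, 200) if 2 * k**3 <= 125 * (k - 2)**2]
# the comparison favours the bound from (eq:richlines) exactly for k <= 58
assert holds == list(range(5, 59))
```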
https://arxiv.org/abs/1503.04749
The height of multiple edge plane trees
Multi-edge trees as introduced in a recent paper of Dziemiańczuk are plane trees where multiple edges are allowed. We first show that $d$-ary multi-edge trees where the out-degrees are bounded by $d$ are in bijection with classical $d$-ary trees. This allows us to analyse parameters such as the height. The main part of this paper is concerned with multi-edge trees counted by their number of edges. The distribution of the number of vertices as well as the height are analysed asymptotically.
\section{Introduction} Dziemia{\'n}czuk~\cite{Dziemianczuk:2014:enumer-raney} has introduced a tree model based on plane (=planar) trees~\cite[p.~31]{Flajolet-Sedgewick:ta:analy}, which are enumerated by Catalan numbers. Instead of connecting two vertices by one edge, in his multi-edge model, two vertices can be connected by several edges. If one counts trees by vertices, one must somehow restrict the number of edges in order to avoid an infinity of objects with the same number of vertices. In \cite{Dziemianczuk:2014:enumer-raney}, the chosen restriction is that each vertex has out-degree at most $d$, i.e., there are at most $d$ edges going out from any vertex. However, if one counts trees with a given number of edges, the restriction with the parameter $d$ is no longer necessary. This is in contrast to the case of classical plane trees where the number of edges equals the number of vertices minus one. In \cite{Dziemianczuk:2014:enumer-raney}, several parameters of multi-edge trees were analysed, but some questions about the (average) height (i.e., the maximum distance from the root) of such multi-edge trees were left open. The present paper aims to close this gap. In Section~\ref{section:bijection}, a bijection is constructed which links $d$-ary multiple edge trees with standard $d$-ary trees. Since the bijection is height-preserving, and the height of $d$-ary trees is well understood, we can resort to results by Flajolet and Odlyzko~\cite{Flajolet-Odlyzko:1982} as well as by Flajolet, Gao, Odlyzko and Richmond~\cite{Flajolet-Gao-Odlyzko-Richmond:1993} and provide in this way a full analysis of the height of $d$-ary multi-edge trees, cf. Theorem~\ref{theorem:local-limit-theorem-d-ary-trees}. In Section~\ref{section:by-edges}, we count trees by the number of edges and drop the parameter $d$.
The analysis of the height of plane trees appears in a classic paper by de Bruijn, Knuth and Rice~\cite{Bruijn-Knuth-Rice:1972} (see also \cite{Prodinger:1983:height-planted}), with an average height of asymptotically $\sqrt{\pi n}$. Now, we can follow this approach to some extent, but combine it with a technique presented in \cite{Flajolet-Prodinger:1986:regis}. The expected height is asymptotically equal to $\frac{2}{\sqrt{5}}\sqrt{\pi n}$, with a more precise result in Theorem~\ref{theorem:expected-height}. The constant is smaller, which is also intuitive, since the multiple edges contribute to the size of the objects, but not to the height. We also give an exact counting formula in terms of weighted trinomial coefficients (Theorem~\ref{theorem:explicit-formula-height}) and a local limit theorem (Theorem~\ref{theorem:local-limit-theorem}). The distribution of the number of vertices in plane multi-edge trees with $n$ edges is analysed in Theorem~\ref{theorem:number-vertices}. The number of trees with given number of vertices and edges is given in Theorem~\ref{theorem:fixed-vertices}. \section{A bijection between \texorpdfstring{$d$}{d}-ary multi-edge trees and ordinary \texorpdfstring{$d$}{d}-ary trees}\label{section:bijection} As explained in the introduction, Dziemia{\'n}czuk \cite{Dziemianczuk:2014:enumer-raney} studies $d$-ary multi-edge trees, where a vertex can have at most $d$ edges going out from it. We present a simple bijection to ordinary (pruned) $d$-ary trees, where every vertex has $d$ possible positions for an edge to be attached (e.g., left, middle, right in the case $d = 3$). See \cite[Example~I.14]{Flajolet-Sedgewick:ta:analy} for a discussion of pruned $d$-ary trees. 
This bijection preserves (amongst other parameters, such as the number of leaves) the height, allowing us to reduce the problem of enumerating $d$-ary multi-edge trees by height to the analogous question for $d$-ary trees, which has been settled in \cite{Flajolet-Gao-Odlyzko-Richmond:1993}. \medskip Our bijection can be described as follows: suppose that a vertex $v$ of a $d$-ary multi-edge tree has $r$ children, which are connected to $v$ by $k_1$, $k_2$, $\ldots$, $k_r$ edges respectively. The corresponding vertex $v'$ in the $d$-ary tree also has $r$ children (corresponding to the children of $v$ in the natural way), which are attached to $v'$ by edges in the $k_1$-th, $(k_1+k_2)$-th, $(k_1+k_2+k_3)$-th, \ldots, $(k_1+k_2+\cdots+k_r)$-th position. Since we are assuming that $k_1+k_2+\cdots+k_r$ is always $\leq d$, this is possible, and clearly this process is bijective for each vertex, so it also describes a bijection between trees. Figures~\ref{fig:5multi} and~\ref{fig:5ary} illustrate an example in the case $d=5$. 
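In code, the only computation in the bijection is turning the multiplicities $(k_1,\dots,k_r)$ of a vertex into attachment positions via partial sums; the following minimal sketch (ours; the surrounding tree traversal is routine and omitted) illustrates this step on the root of the example in Figures~\ref{fig:5multi} and~\ref{fig:5ary}.

```python
from itertools import accumulate

def positions(multiplicities):
    """Attachment positions (1-based) in the d-ary tree for a vertex whose
    r children are joined to it by k_1, ..., k_r parallel edges."""
    return list(accumulate(multiplicities))

def multiplicities(pos):
    """Inverse direction: recover the edge multiplicities as differences."""
    return [p - q for p, q in zip(pos, [0] + pos[:-1])]

# The root of the multi-edge tree in the figure has two children, joined
# by 1 and 3 parallel edges; they land at positions 1 and 4 of the 5 slots.
assert positions([1, 3]) == [1, 4]
assert multiplicities([1, 4]) == [1, 3]
```

Since the partial sums are strictly increasing and bounded by $k_1+\cdots+k_r\leq d$, the two maps are inverse to each other at every vertex.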
\begin{figure}[htbp] \begin{center} \begin{tikzpicture} \node[fill=black,circle,inner sep=1.5pt] (v1) at (0,0) {}; \node[fill=black,circle,inner sep=1.5pt] (v2) at (-2,-1) {}; \node[fill=black,circle,inner sep=1.5pt] (v3) at (2,-1) {}; \node[fill=black,circle,inner sep=1.5pt] (v4) at (-3,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v5) at (-1,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v6) at (1,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v7) at (2,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v8) at (3,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v9) at (-3.5,-3) {}; \node[fill=black,circle,inner sep=1.5pt] (v10) at (-2.5,-3) {}; \node[fill=black,circle,inner sep=1.5pt] (v11) at (-1,-3) {}; \node[fill=black,circle,inner sep=1.5pt] (v12) at (2,-3) {}; \draw (v1)--(v2); \draw (v1)--(v3); \draw (v1) edge [bend left] (v3); \draw (v1) edge [bend right] (v3); \draw (v2)--(v4); \draw (v2) edge [bend left] (v5); \draw (v2) edge [bend right] (v5); \draw (v3)--(v6); \draw (v3)--(v7); \draw (v3)--(v8); \draw (v3) edge [bend left] (v8); \draw (v3) edge [bend right] (v8); \draw (v4) edge [bend left] (v9); \draw (v4) edge [bend right] (v9); \draw (v4)--(v10); \draw (v5)--(v11); \draw (v7) edge [bend left] (v12); \draw (v7) edge [bend right] (v12); \end{tikzpicture} \end{center} \caption{A $5$-ary multi-edge tree.}\label{fig:5multi} \end{figure} \begin{figure}[htbp] \begin{center} \begin{tikzpicture} \node[fill=black,circle,inner sep=1.5pt] (v1) at (0,0) {}; \node[fill=black,circle,inner sep=1.5pt] (v2) at (-2,-1) {}; \node[fill=black,circle,inner sep=1.5pt] (v3) at (1,-1) {}; \node[fill=black,circle,inner sep=1.5pt] (v4) at (-3.5,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v5) at (-2,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v6) at (-0.5,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v7) at (0.25,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v8) at (2.5,-2) {}; \node[fill=black,circle,inner sep=1.5pt] (v9) at (-4,-3) {}; 
\node[fill=black,circle,inner sep=1.5pt] (v10) at (-3.5,-3) {}; \node[fill=black,circle,inner sep=1.5pt] (v11) at (-3,-3) {}; \node[fill=black,circle,inner sep=1.5pt] (v12) at (-0.25,-3) {}; \draw (v1)--(v2); \draw[gray] (v1)--+(-0.2, -0.2); \draw[gray] (v1)--+(0, -0.2); \draw (v1)--(v3); \draw[gray] (v1)--+(0.4, -0.2); \draw (v2)--(v4); \draw[gray] (v2)--+(-0.2, -0.2); \draw[gray] (v2)--+(0.2, -0.2); \draw[gray] (v2)--+(0.4, -0.2); \draw (v2)--(v5); \draw (v3)--(v6); \draw (v3)--(v7); \draw (v3)--(v8); \draw[gray] (v3)--+(0.0, -0.2); \draw[gray] (v3)--+(0.2, -0.2); \draw (v4)--(v9); \draw (v4)--(v10); \draw[gray] (v4)--+(-0.4, -0.2); \draw[gray] (v4)--+(0.2, -0.2); \draw[gray] (v4)--+(0.4, -0.2); \draw (v5)--(v11); \draw[gray] (v5)--+(-0.1, -0.2); \draw[gray] (v5)--+(0, -0.2); \draw[gray] (v5)--+(0.2, -0.2); \draw[gray] (v5)--+(0.4, -0.2); \draw (v7)--(v12); \draw[gray] (v7)--+(-0.4, -0.2); \draw[gray] (v7)--+(0, -0.2); \draw[gray] (v7)--+(0.2, -0.2); \draw[gray] (v7)--+(0.4, -0.2); \end{tikzpicture} \end{center} \caption{The associated $5$-ary tree, where each vertex can have children in five different positions (far left, left, middle, right, far right).}\label{fig:5ary} \end{figure} From this bijection, we immediately obtain the following corollaries: \begin{corollary} The number of $d$-ary multi-edge trees with $n$ vertices equals the number of $d$-ary trees with $n$ vertices, which is the Fuss-Catalan number $\frac{1}{n} \binom{nd}{n-1}$. \end{corollary} This is for instance shown in \cite[Example~I.14]{Flajolet-Sedgewick:ta:analy}. \begin{corollary} The number of $d$-ary multi-edge trees of height $h$ with $n$ vertices equals the number of $d$-ary trees of height $h$ with $n$ vertices. 
\end{corollary} It is well known that $d$-ary trees belong to the general class of \emph{simply generated families of trees}, and the height of such families was studied in great detail in a paper by Flajolet, Gao, Odlyzko and Richmond \cite{Flajolet-Gao-Odlyzko-Richmond:1993}. They obtain the following local limit theorem (only stated for $d$-ary trees here, i.e. setting $\phi(y)=(1+y)^d$ and $\tau=1/(d-1)$ in the formul\ae{} given there), which refines earlier results of Flajolet and Odlyzko \cite{Flajolet-Odlyzko:1982} on the average height: \begin{theorem}[{\cite[Theorem 1.2]{Flajolet-Gao-Odlyzko-Richmond:1993}}]\label{theorem:local-limit-theorem-d-ary-trees} Let $N_h^{(d)}(n)$ be the number of $d$-ary trees ($d$-ary multi-edge trees) with $n$ vertices whose height is $h$ and $N^{(d)}(n)$ the total number of $d$-ary trees ($d$-ary multi-edge trees) with $n$ vertices. For any $\delta > 0$, we have the asymptotic formula \begin{align*} \frac{N_h^{(d)}(n)}{N^{(d)}(n)} &\sim 2c\beta^4\sqrt{\pi/n} \sum_{m \geq 1} (m\pi)^2 \left(2(\pi m\beta)^2-3 \right) e^{-(\pi m\beta)^2} \\ &= 2c\beta^{-1} n^{-1/2} \sum_{m \geq 1} m^2(2(m/\beta)^2 - 3) e^{-(m/\beta)^2}, \end{align*} where $c = \sqrt{2(d-1)/d}$, uniformly for \[\delta^{-1} (\log n)^{-1/2} \leq \beta = 2\sqrt{n}/(ch) \leq \delta (\log n)^{1/2}.\] \end{theorem} \begin{corollary}[{\cite[Theorem~S]{Flajolet-Odlyzko:1982}, \cite[Corollary~1.2]{Flajolet-Gao-Odlyzko-Richmond:1993}}] The average height of $d$-ary trees with $n$ vertices (and thus also the average height of $d$-ary multi-edge trees with $n$ vertices) is asymptotically equal to \[\sqrt{2\pi d n/(d-1)}.\] \end{corollary} Similar results for the average height were obtained by Kemp \cite{kemp:1980:average-height, kemp:1983:average-height} (see also \cite{Prodinger:1982:kemp-r-tuply}) for slightly different models of random plane trees, namely for trees with given root degree or number of leaves. 
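The equality of the two counting sequences can also be confirmed by a brute-force dynamic program over the recursive description of multi-edge trees (our verification code; the function names are ours). Here `forests(m, b)` counts ordered forests with $m$ vertices in total whose root edges have multiplicities summing to at most $b$.

```python
from math import comb
from functools import lru_cache

def multi_edge_tree_counts(d, nmax):
    """T[n] = number of d-ary multi-edge trees with n vertices."""
    T = [0] * (nmax + 1)

    @lru_cache(maxsize=None)
    def forests(m, b):
        # ordered forests with m vertices whose root-edge multiplicities
        # sum to at most b; only T[1..m] are used, and those entries are
        # already final when forests(m, b) is first evaluated
        if m == 0:
            return 1
        return sum(T[s] * forests(m - s, b - k)
                   for k in range(1, b + 1)   # multiplicity of first edge
                   for s in range(1, m + 1))  # vertices in first subtree

    for n in range(1, nmax + 1):
        T[n] = forests(n - 1, d)
    return T

for d in (2, 3, 4):
    T = multi_edge_tree_counts(d, 8)
    # Fuss-Catalan numbers (1/n) * binom(dn, n-1)
    assert all(T[n] == comb(d * n, n - 1) // n for n in range(1, 9))
```

For $d=2$ this reproduces the Catalan numbers $1, 2, 5, 14, 42, \dots$, as expected from the bijection.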
As it was mentioned earlier, other statistical results carry over from $d$-ary trees to $d$-ary multi-edge trees as well: \begin{corollary} The number of $d$-ary multi-edge trees with $n$ vertices and $k$ leaves equals the number of $d$-ary trees with $n$ vertices and $k$ leaves. \end{corollary} More generally, the following holds: \begin{corollary} For every $r \in \{0,1,\ldots,d\}$, the number of $d$-ary multi-edge trees with $n$ vertices, $k$ of which have exactly $r$ children, equals the number of $d$-ary trees with $n$ vertices, $k$ of which have exactly $r$ children. Thus the average number of vertices with exactly $r$ children is the same for $d$-ary multi-edge trees and $d$-ary trees with $n$ vertices. \end{corollary} It is not difficult to show that the average proportion of vertices with exactly $r$ children is asymptotically equal to $\binom{d}{r} (d-1)^{d-r} d^{-d}$ as $n \to \infty$ (cf.\ the paragraph following \cite[Theorem~3.13]{Drmota:2009:random} with $k=r$, $\phi_k=\binom{d}{k}$, $\Phi(y)=(1+y)^d$, $\tau=1/(d-1)$), which tends to $1/(r!e)$ as $d \to \infty$. This generalises the observation made in~\cite{Dziemianczuk:2014:enumer-raney} in the case $r=0$ that the asymptotic average proportion of leaves tends to $1/e$ as $d \to \infty$. \section{Trees with Given Number of Edges}\label{section:by-edges} In this section, we consider plane rooted multi-edge trees with a given number $n$ of edges (which we call the \emph{size} of a tree). The resulting counting sequence $A_n$ is sequence \href{http://oeis.org/A002212}{A002212} in \cite{OEIS:2015}, see also \cite{Priez:2013:lattic-combin-hopf-algeb}. It starts with 1, 1, 3, 10, 36, 137, 543, 2219, 9285, 39587. Asymptotically, the number $A_n$ of plane rooted multi-edge trees with $n$ edges is \begin{equation}\label{eq:number-trees} A_n=\frac{5^{n+1/2}}{2\sqrt{\pi n^3}}\biggl(1+O\Bigl(\frac1n\Bigr)\biggr). 
\end{equation} This will follow without further effort at the end of the proof of Theorem~\ref{theorem:expected-height}. We now analyse the height of multi-edge trees.
\subsection{Generating Functions}
In the following lemma, we introduce the fundamental transformation which will be used throughout this section. The principal branch of the square root function is chosen as usual, i.e., as a holomorphic function on $\mathbb{C}\setminus \mathbb{R}_{\le 0}$ such that $\sqrt{1}=1$.
\begin{lemma} Let $Z=\mathbb{C}\setminus[1/5, 1]$ and $U=\{u\in\mathbb{C} \mid \abs{u}<1; u\neq (-3+\sqrt{5})/2\}$. Let \begin{align*} \upsilon(z)&=\frac{1-3z-\sqrt{1-5z}\sqrt{1-z}}{2z}& \text{for }z&\in \mathbb{C}\setminus\{0\},\\ \zeta(u)&=\frac{u}{u^2+3u+1}& \text{for }u&\in \mathbb{C}\setminus\Bigl\{ \frac{-3\pm \sqrt 5}{2}\Bigr\}. \end{align*} The singularity of $\upsilon$ at $z=0$ is removable, and we set $\upsilon(0):=0$. Then $\upsilon\colon Z\to U$ and $\zeta\colon U\to Z$ are bijective holomorphic functions which are inverses of each other. \end{lemma}
\begin{proof} We first note that $\zeta$ is well-defined and holomorphic on $U$ with $\zeta'(u)\neq 0$ for all $u\in U$. If $\abs{u}=1$, then \begin{equation*} \zeta(u)=\frac{1}{u+\frac1u+3}=\frac{1}{3+2\Re u}. \end{equation*} Thus the image of the unit circle under $\zeta$ is the interval $[1/5, 1]$. For every $z\in \mathbb{C}\setminus\{0\}$, $z=\zeta(u)$ is equivalent to \begin{equation}\label{eq:quadratic-equation-u} u^2+u\Bigl(3-\frac{1}{z}\Bigr)+1=0 \end{equation} which has two not necessarily distinct solutions $u_1$, $u_2\in\mathbb{C}$ with $u_1u_2=1$. W.l.o.g., $\abs{u_1}\le \abs{u_2}$. Thus either $u_1\in U$ and $\abs{u_2}>1$ or $\abs{u_1}=\abs{u_2}=1$. In the latter case, we have $z\in[1/5, 1]$. For $z=0$, $z=\zeta(u)$ is equivalent to $u=0$. This implies that $\zeta\colon U\to Z$ is bijective. Furthermore, $\zeta\colon U\to Z$ has a holomorphic inverse $\zeta^{-1}$ defined on the simply connected region $\mathbb{C}\setminus[1/5, \infty)$.
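The maps $\upsilon$ and $\zeta$ lend themselves to a quick numerical sanity check (a sketch; Python's cmath.sqrt is the principal branch fixed above):

```python
import cmath

def upsilon(z):
    # the map from the lemma; principal square roots as fixed above
    return (1 - 3*z - cmath.sqrt(1 - 5*z) * cmath.sqrt(1 - z)) / (2*z)

def zeta(u):
    return u / (u*u + 3*u + 1)

# upsilon maps points of Z into the unit disk and inverts zeta there
for z in [0.1, -2.3, 0.3 + 0.4j, 2 - 1j, -0.05 - 7j]:
    u = upsilon(z)
    assert abs(u) < 1 and abs(zeta(u) - z) < 1e-9

# the unit circle is mapped by zeta onto the slit [1/5, 1]
w = zeta(cmath.exp(0.75j))
assert abs(w.imag) < 1e-12 and 1/5 <= w.real <= 1
```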
Solving \eqref{eq:quadratic-equation-u} explicitly yields \begin{equation*} u = \frac{1-3z\pm\sqrt{1-6z+5z^2}}{2z} =\frac{1-3z\pm\sqrt{1-5z}\sqrt{1-z}}{2z}. \end{equation*} In a neighbourhood of zero, we must have $\zeta^{-1}(z)=\upsilon(z)$, because \begin{equation*} \frac{1-3z+\sqrt{1-5z}\sqrt{1-z}}{2z} \end{equation*} has a pole at $z=0$. It is easily seen that $\sqrt{1-5z}\sqrt{1-z}$ is a holomorphic function on $Z$. By the identity theorem, $\zeta^{-1}=\upsilon$ holds in $\mathbb{C}\setminus[1/5, \infty)$. By continuity of $\upsilon$ in $Z$, $\upsilon$ is also the inverse of $\zeta$ on $(1, \infty)$. \end{proof}
For $h\ge 0$, consider the class $\mathcal{T}_h$ of plane rooted multi-edge trees of height at most $h$. Denote the ordinary generating function associated to $\mathcal{T}_h$ by $T_h(z)$.
\begin{lemma}The generating function $T_h(z)$ is given by \begin{equation} T_h(z) = (1-z)\frac{\alpha^{h+1}-\beta^{h+1}}{\alpha^{h+2}-\beta^{h+2}} =(u+1)\frac{1-u^{h+1}}{1-u^{h+2}}\label{eq:T_h-explicit} \end{equation} where \begin{equation}\label{eq:definition-alpha-beta} \begin{aligned} \alpha &= \frac{1-z+\sqrt{1-5z}\sqrt{1-z}}{2} = \frac{u+1}{u^2+3u+1},\\ \beta &= \frac{1-z-\sqrt{1-5z}\sqrt{1-z}}{2} = \frac{u(u+1)}{u^2+3u+1} \end{aligned}\end{equation} for $z=\zeta(u)\in Z$. \end{lemma}
\begin{proof} The class $\mathcal{T}_0$ consists of an isolated vertex. For $h>0$, $\mathcal{T}_h$ consists of a root and a sequence of branches of height at most $h-1$ such that each branch is attached by a positive number of edges to the root. If $\mathcal{E}=\{e\}$ is the class of one edge, we can write $\mathcal{T}_h$ symbolically as \begin{equation}\label{eq:symbolic-equation} \mathcal{T}_h = \circ \times (\mathcal{E}^+\mathcal{T}_{h-1})^*. \end{equation} The symbolic equation \eqref{eq:symbolic-equation} translates to \begin{equation*} T_h(z)=\frac1{1-\frac{z}{1-z}T_{h-1}(z)}=\frac{1-z}{1-z-zT_{h-1}(z)}. \end{equation*} This may be seen as a continued fraction.
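Iterating this recursion on truncated power series is also a convenient way to check the formulas: for $h\ge n$ the coefficient of $z^n$ in $T_h$ stabilizes to the counting sequence $1, 1, 3, 10, 36, 137, \dots$ from the beginning of this section. A short sketch (Python; series are plain integer coefficient lists, names ours):

```python
N = 9  # truncation order: a series is its list of coefficients of z^0..z^N

def mul(a, b):
    c = [0] * (N + 1)
    for i, x in enumerate(a):
        if x:
            for j, y in enumerate(b[:N + 1 - i]):
                c[i + j] += x * y
    return c

def inv(a):
    # reciprocal of a power series with constant term 1, mod z^(N+1)
    b = [0] * (N + 1)
    b[0] = 1
    for n in range(1, N + 1):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

one_minus_z = [1, -1] + [0] * (N - 1)
T = [1] + [0] * N                     # T_0 = 1: the isolated vertex
for _ in range(N):                    # T_h is exact mod z^(N+1) once h >= N
    zT = [0] + T[:N]                                # z * T_{h-1}
    den = [p - q for p, q in zip(one_minus_z, zT)]  # 1 - z - z*T_{h-1}
    T = mul(one_minus_z, inv(den))    # T_h = (1-z)/(1 - z - z*T_{h-1})
print(T)  # [1, 1, 3, 10, 36, 137, 543, 2219, 9285, 39587]
```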
To obtain an explicit expression for $T_h(z)$, we use the ansatz $T_h(z)=p_h(z)/q_h(z)$ with $p_0(z)=q_0(z)=1$ and \begin{align*} p_h(z)&=(1-z)q_{h-1}(z),\\ q_h(z)&=(1-z)q_{h-1}(z)-zp_{h-1}(z). \end{align*} Eliminating $p_h(z)$ yields the second order recurrence \begin{equation*} q_h(z)=(1-z)q_{h-1}(z)-z(1-z)q_{h-2}(z). \end{equation*} The characteristic equation is \begin{equation*} Q^2-(1-z)Q+z(1-z)=0. \end{equation*} This quadratic equation has the roots $\alpha$ and $\beta$ defined in \eqref{eq:definition-alpha-beta}. This yields the explicit expressions \begin{equation*} q_h(z) = \frac{\alpha^{h+2}-\beta^{h+2}}{(1-z)(\alpha-\beta)},\qquad p_h(z) = \frac{\alpha^{h+1}-\beta^{h+1}}{\alpha-\beta}, \end{equation*} which result in \begin{equation}\label{eq:T_h-z-explicit} T_h(z) = (1-z)\frac{\alpha^{h+1}-\beta^{h+1}}{\alpha^{h+2}-\beta^{h+2}}. \end{equation} Under the substitution $z=\zeta(u)$, we have \begin{equation*} 1-z = \frac{(u+1)^2}{u^2+3u+1},\qquad \beta = \frac{u(u+1)}{u^2+3u+1},\qquad \alpha = \frac{u+1}{u^2+3u+1}. \end{equation*} Inserting this in \eqref{eq:T_h-z-explicit} yields~\eqref{eq:T_h-explicit}. \end{proof} Let $T$ be the generating function of all plane, rooted multi-edge trees. \begin{lemma} For $z=\zeta(u)\in Z$, \begin{equation*} T(z)=\frac{\beta}{z}=u+1 \end{equation*} and \begin{equation}\label{eq:T-T_h-substituted-u} (T-T_h)(z)=\frac{1-u^2}{u}\frac{u^{h+2}}{1-u^{h+2}}. \end{equation} \end{lemma} \begin{proof} It is clear that $T$ is the limit of $T_h$ for $h\to\infty$. As $\abs{u}<1$, we have $T(z)=u+1$. The expression for $T-T_h$ follows. \end{proof} Note that $T$ could also have been determined by removing the restriction on $h$ in the symbolic equation and solving the resulting quadratic equation for $T$. \begin{lemma} The functions $T(z)$, $T_h(z)$ and $\sum_{h\ge 0}(T-T_h)(z)$ are analytic for $z\in Z$. \end{lemma} \begin{proof} By the explicit formula for $\beta$, it is clear that $T(z)$ is an analytic function on $Z$. 
For $u=\upsilon(z)$ and $z\in Z$, the function \begin{equation*} T_h(z)=(u+1)\frac{1-u^{h+1}}{1-u^{h+2}} \end{equation*} is clearly analytic. The sum $\sum_{h\ge 0}(T-T_h)(z)$ can be written as \begin{equation*} \sum_{h\ge 0}(T-T_h)(z)=\frac{1-u^2}{u}\sum_{h\ge 0}\frac{u^{h+2}}{1-u^{h+2}}. \end{equation*} We can bound the sum by \begin{equation*} \abs[\Big]{\sum_{h\ge 0}\frac{u^{h+2}}{1-u^{h+2}}}\le \frac1{1-\abs{u}^2}\sum_{h\ge 0}\abs{u}^{h+2}=\frac{\abs{u}^2}{(1-\abs{u}^2)(1-\abs u)}. \end{equation*} By the Weierstrass $M$-test, \begin{equation*} \sum_{h\ge 0}(T-T_h)(z) \end{equation*} converges uniformly on compact subsets of $U$ and is therefore analytic in $U$. The results for $z\in Z$ follow by the fact that $\upsilon(z)$ is analytic. \end{proof} \subsection{Explicit Formula for the Number of Trees of Given Height} At this stage, we can compute the number of rooted plane multi-edge trees of size $n$ and height $>h$ explicitly. Taking the difference for $h$ and $h-1$ results in a formula for the number of trees of height $h$. \begin{theorem}\label{theorem:explicit-formula-height} Let $h\ge 0$. The number of rooted plane multi-edge trees of size $n$ and height $>h$ is \begin{multline} \label{eq:explicit-trees-of-large-height} \sum_{k\ge 0}\Biggl(\binom{n-1; 1, 3, 1}{n-(h+1)-(h+2)k}-2\binom{n-1; 1, 3, 1}{n-(h+1)-(h+2)k-2}\\ +\binom{n-1; 1, 3, 1}{n-(h+1)-(h+2)k-4}\Biggr) \end{multline} where \begin{equation*} \binom{n; 1,3,1}{k}=[v^k](1+3v+v^2)^n \end{equation*} denotes a weighted trinomial coefficient. \end{theorem} \begin{proof} By the definition of the generating functions, we have to compute $[z^n](T-T_h)(z)$. By Cauchy's formula, we have \begin{equation}\label{eq:cauchy-1} [z^n](T-T_h)(z) =\frac{1}{2\pi i}\oint_{\abs{z} \text{ small}}\frac{(T-T_h)(z)}{z^{n+1}}\,dz. \end{equation} For sufficiently small $|u|$, the index of $0$ with respect to $\zeta(u)$ is $1$. 
Therefore, using the substitution $z=\zeta(u)$ and using Cauchy's formula again, we can rewrite \eqref{eq:cauchy-1} as \begin{multline*} [z^n](T-T_h)(z)\\ \begin{aligned} &=\frac{1}{2\pi i}\oint_{\abs{u} \text{ small}}\frac{(T-T_h)(\zeta(u))}{u^{n+1}}(u^2+3u+1)^{n-1}(1-u^2)\,du\\ &= [u^n](T-T_h)(\zeta(u))(u^2+3u+1)^{n-1}(1-u^2)\\ &=[u^n]\frac{(1-u^2)u^{h+1}}{1-u^{h+2}}(u^2+3u+1)^{n-1}(1-u^2). \end{aligned} \end{multline*} Expanding the denominator into a geometric series yields \begin{align*} [z^n](T-T_h)(z)&=[u^n]\sum_{k\ge 0}(1-u^2)^2u^{h+1+(h+2)k}(u^2+3u+1)^{n-1}\\ &=\sum_{k\ge 0}[u^{n-(h+1+(h+2)k)}] (1-2u^2+u^4)(u^2+3u+1)^{n-1}\\ &=\sum_{k\ge 0}\bigl([u^{n-(h+1+(h+2)k)}](u^2+3u+1)^{n-1} \\&\qquad\qquad- 2 [u^{n-(h+1+(h+2)k)-2}](u^2+3u+1)^{n-1} \\&\qquad\qquad+ [u^{n-(h+1+(h+2)k)-4}](u^2+3u+1)^{n-1}\bigr). \end{align*} By the definition of $\binom{n;1,3,1}{k}$, this is exactly \eqref{eq:explicit-trees-of-large-height}. \end{proof} \begin{remark} It would be possible to determine the asymptotic behaviour of the trinomial coefficients by means of the saddle point method (cf. \cite[Section 4.3.3]{Greene-Knuth:1990:mathem}) and to obtain asymptotics for the average height (Theorem~\ref{theorem:expected-height}) and the local limit theorem (Theorem~\ref{theorem:local-limit-theorem}) from that, but the calculations would be somewhat more involved. \end{remark} \subsection{Expected Height} We now compute the expected height of a random rooted plane multi-edge tree of size $n$. \begin{theorem}\label{theorem:expected-height} Let $H_n$ be the height of a random rooted plane multi-edge tree of size $n$. Then \begin{equation}\label{eq:expected-height} \mathbb{E}(H_n)=\frac{2}{\sqrt{5}}\sqrt{\pi n}- \frac{3}{2} + O\Bigl(\frac{1}{\sqrt{n}}\Bigr). \end{equation} \end{theorem} Before proving Theorem~\ref{theorem:expected-height}, we prove a lemma on the harmonic sum occurring in its proof. 
\begin{lemma}\label{lemma:harmonic-sum-asymptotics}We have \begin{multline}\label{eq:harmonic-sum-asymptotics} \sum_{h\ge 1}\frac{u^h}{1-u^h}= -\frac{\log(1-u)}{1-u} + \frac{\gamma}{1-u}\\+\frac{\log(1-u)}{2}-\frac14 - \frac{\gamma}2 + O((1-u)\log(1-u)) \end{multline} as $u\to 1$ with $\abs{\arg(1-u)}<\pi/3$, where $\gamma$ is the Euler-Mascheroni constant. \end{lemma} \begin{proof} Using the substitution $u=e^{-t}$ yields \begin{equation*} \sum_{h\ge 1}\frac{u^h}{1-u^h}= \sum_{h\ge 1}\frac{e^{-ht}}{1-e^{-ht}}= \sum_{h\ge 1}\sum_{k\ge 1}e^{-kht}= \sum_{m\ge 1}d(m)e^{-mt}, \end{equation*} where $d(m)$ is the number of positive divisors of $m$. By \cite[Example~11]{Flajolet-Gourdon-Dumas:1995:mellin}, we have \begin{equation*} \sum_{m\ge 1} d(m)e^{-mt} = \frac{1}{t}(-\log t+\gamma) + \frac{1}{4} + O(t) \end{equation*} for real $t\to 0^+$. However, the same argument can also be used for $\abs{\arg t}<\pi/4$ because the inverse Mellin transform \begin{equation*} e^{-t}=\frac1{2\pi i}\int_{c-i\infty}^{c+i\infty} t^{-s}\Gamma(s)\, ds \end{equation*} remains valid for complex $t$ with $\abs{\arg t}<2\pi/5$ by the identity theorem for analytic functions; cf. \cite{Flajolet-Prodinger:1986:regis}. As \begin{equation*} t=-\log u=-\log(1-(1-u))=(1-u)+\frac{(1-u)^2}{2}+O((1-u)^3), \end{equation*} substituting back yields \eqref{eq:harmonic-sum-asymptotics}. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:expected-height}] We use the well-known identity \begin{align*} \mathbb{E}(H_n) &= \sum_{k=0}^\infty k\P(H_n=k)=\sum_{k>h\ge 0} \P(H_n=k)=\sum_{h\ge 0}\P(H_n >h )\\ &=\sum_{h\ge 0}(1-\P(H_n\le h)) = \frac{[z^n]\sum_{h\ge 0}(T-T_h)(z)}{[z^n]T(z)}. \end{align*} We intend to compute $[z^n]\sum_{h\ge 0}(T-T_h)(z)$ via singularity analysis. The dominant singularity is at $z=1/5$. To perform singularity analysis, we need the expansion of $T-T_h$ around $z=1/5$, corresponding to $u=1$ under the substitution $z=\zeta(u)$. 
By \eqref{eq:T-T_h-substituted-u}, we have \begin{align*} \sum_{h\ge 0}(T-T_h)(\zeta(u)) &= \frac{1-u^2}{u}\sum_{h\ge 2}\frac{u^h}{1-u^h}\\ &=-(1+u) + \frac{1-u^2}{u}\sum_{h\ge 1}\frac{u^h}{1-u^h}. \end{align*} By Lemma~\ref{lemma:harmonic-sum-asymptotics}, this is \begin{equation}\label{eq:expectation-expansion-u} \begin{aligned} \sum_{h\ge 0}(T-T_h)(\zeta(u))&=-2\log(1-u)-(2-2\gamma)\\ &\qquad+\frac12(1-u)+O((1-u)^2\log(1-u)). \end{aligned} \end{equation} We have \begin{align*} 1-u &= \sqrt{5}\sqrt{1-5z}-\frac52(1-5z)+O((1-5z)^{3/2}),\\ \log(1-u)&= \frac12\log(1-5z)+\frac12\log 5 - \frac{\sqrt{5}}{2}\sqrt{1-5z}+ O((1-5z)). \end{align*} Inserting this in~\eqref{eq:expectation-expansion-u} yields \begin{multline*} \sum_{h\ge 0}(T-T_h)(z)=-\log(1-5z) - (2-2\gamma+\log 5)\\+ \frac{3}2\sqrt{5}\sqrt{1-5z} +O((1-5z)\log(1-5z)) \end{multline*} for $z\to \frac15$ and $\abs{\arg(\frac15-z)}< 3\pi/5$, i.e. $\abs{\arg(z-\frac15)}>2\pi/5$. Note that the exact bounds for the arguments are somewhat arbitrary: the essential property of $2\pi/5$ here is that it is less than $\pi/2$. Using the expansions of $1-u$ and $t$ in terms of $\sqrt{1-5z}$ and of $1-u$, respectively, the angles are transformed accordingly, but we have to allow for a small error. By singularity analysis \cite{Flajolet-Odlyzko:1990:singul}, this yields \begin{equation}\label{eq:expectation-numerator} \begin{aligned} [z^n]\sum_{h\ge 0}(T-T_h)(z) &= \frac{5^n}{n} +\frac{3\sqrt{5}}{2}\frac{5^n}{\Gamma(-1/2)n^{3/2}}+O\biggl(5^n\frac{\log n}{n^2}\biggr)\\ &= \frac{5^n}{n}-\frac{3\cdot 5^{n+1/2}}{4\sqrt{\pi n^3}}+O\biggl(5^n\frac{\log n}{n^2}\biggr). \end{aligned} \end{equation} The number of plane rooted multi-edge trees of size $n$ is \begin{align*} A_n=[z^n]T(z)&=[z^n](u+1)=[z^n](2-(1-u))\\ &= [z^n]\biggl(2-\sqrt{5}\sqrt{1-5z}+\frac52(1-5z)+O((1-5z)^{3/2})\biggr).
\end{align*} Singularity analysis yields \begin{equation*} A_n=-\sqrt{5}\frac{5^nn^{-3/2}}{\Gamma(-1/2)}+O\biggl(\frac{5^n}{n^{5/2}}\biggr) =\frac{5^{n+1/2}}{2\sqrt{\pi n^3}}\biggl(1+O\Bigl(\frac1n\Bigr)\biggr). \end{equation*} Combining this with \eqref{eq:expectation-numerator} yields \eqref{eq:expected-height}. \end{proof}
\subsection{Local Limit Theorem}
In this section, we prove a local limit theorem for the height of a plane rooted multi-edge tree. As our generating function is very explicit, we can give a result in a wider range than \cite{Flajolet-Gao-Odlyzko-Richmond:1993}.
\begin{theorem}\label{theorem:local-limit-theorem}Let $0<\varepsilon<\frac16$. Then, for \begin{equation}\label{eq:llt-range-h} \sqrt{\frac{4n\pi^2}{5\varepsilon\log n}}<h<n^{3/4-\varepsilon}, \end{equation} the probability that a plane rooted multi-edge tree has height $h$ is \begin{equation*} \frac{5h}{n}G\Bigl(\frac{\sqrt{5}h}{2\sqrt{n}}\Bigr)\Bigl(1+O\Bigl(\frac hn +\frac{h^4}{n^3}+\frac{\log n}{n^{1/2-2\varepsilon}}\Bigr)\Bigr) \end{equation*} where \begin{equation}\label{eq:poisson} \begin{aligned} G(\alpha)&=\sum_{m\ge 1}(2\alpha^2 m^2-3)m^2 \exp(-\alpha^2m^2)\\ &=\frac{\sqrt{\pi^5}}{\alpha^5}\sum_{m\ge 1} \Bigl(2\Bigl(\frac{\pi}{\alpha}\Bigr)^2m^2-3\Bigr) m^2\exp\Bigl(-\Bigl(\frac{\pi}{\alpha}\Bigr)^2 m^2\Bigr). \end{aligned} \end{equation} \end{theorem}
The fact that the two expressions for $G(\alpha)$ in \eqref{eq:poisson} are equal is Poisson's sum formula (cf.\ \cite[(3.12.1)]{Bruijn:1958}) for $f(x)=(2\alpha^2 x^2-3)x^2\exp(-\alpha^2 x^2)$. We first compute the integral which will appear by application of the saddle point method.
\begin{lemma}\label{lemma:integral} Let $0<a<1$, $b>0$ be real numbers and $c$, $d$ be complex numbers.
Then \begin{multline*} \int_{-\infty}^\infty \frac{(ct+d)^3\exp\bigl(-\frac{t^2}5\bigr)}{(1-ae^{ibt})^2}\, dt \\ = \sqrt{5\pi}\sum_{m\ge 0}(m+1)\Bigl(\frac{15}2\Bigl(\frac52 cibm+d\Bigr)c^2+\Bigl(\frac52 cibm+d\Bigr)^3\Bigr)\\\times a^m\exp\Bigl(-\frac{5}{4}b^2m^2\Bigr). \end{multline*} \end{lemma}
\begin{proof} We expand the denominator of the integrand as a binomial series, dominated by $(1-a)^{-2}$. Thus \begin{multline*} \int_{-\infty}^\infty \frac{(ct+d)^3\exp\bigl(-\frac{t^2}5\bigr)}{(1-ae^{ibt})^2}\, dt \\= \sum_{m\ge 0}(m+1)a^m \int_{-\infty}^\infty (ct+d)^3 \exp\Bigl(-\frac{t^2}{5}+ibmt\Bigr)\,dt. \end{multline*} Substituting $t=z+\frac{5}{2}ibm$ and shifting the path of integration back to the real line yields \begin{multline*} \int_{-\infty}^\infty(ct+d)^3 \exp\Bigl(-\frac{t^2}{5}+ibmt\Bigr)\,dt\\ = \exp\Bigl(-\frac{5}{4}b^2m^2\Bigr)\sqrt{5\pi}\Bigl(\frac{15}2\Bigl(\frac{5}{2}cibm+d\Bigr)c^2+\Bigl(\frac{5}{2}cibm+d\Bigr)^3\Bigr). \end{multline*} \end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:local-limit-theorem}] Instead of computing the number of trees of height exactly $h$, we compute the number $A_{nh}$ of trees of height exactly $h-1$ because this leads to more convenient formul\ae{} and does not matter asymptotically. By \eqref{eq:T-T_h-substituted-u}, we get \begin{align*} A_{nh}:=[z^n](T_{h-1}-T_{h-2})(z)&=[z^n]((T-T_{h-2})-(T-T_{h-1}))\\ &=[z^n]\frac{1-u^2}{u}\Bigl(\frac{u^{h}}{1-u^h}-\frac{u^{h+1}}{1-u^{h+1}}\Bigr)\\ &=[z^n]\frac{1-u^2}{u}\frac{u^h-u^{h+1}}{(1-u^h)(1-u^{h+1})}\\ &=[z^n]\frac{(1+u)(1-u)^2}{u}\frac{u^h}{(1-u^h)(1-u^{h+1})} \end{align*} for $z=\zeta(u)$. Using this transformation and Cauchy's formula as in the proof of Theorem~\ref{theorem:explicit-formula-height} yields \begin{align*} A_{nh}&=\frac{1}{2\pi i}\oint_{\abs z\text{ small}} \frac{(1+u)(1-u)^2}{u}\frac{u^h}{(1-u^h)(1-u^{h+1})}\frac{dz}{z^{n+1}}\\ &=\frac{1}{2\pi i}\oint_{\abs{u}\text{ small}}\frac{(1+u)^2(1-u)^3(u^2+3u+1)^{n-1}}{(1-u^h)(1-u^{h+1})u^{n-h+2}}\,du.
\end{align*} Now we apply the saddle point method to this integral. It turns out that the right choice for the contour of integration is given by the parametrisation $u=re^{i\varphi}$ with $r=\exp\bigl(-\frac52\frac{h}{n}\bigr)$ and $-\pi\le\varphi\le \pi$. This yields \begin{align*} A_{nh}&=\frac{1}{2\pi}\int_{-\pi}^\pi\frac{(1+u)^2(1-u)^3(u^2+3u+1)^{n-1}}{(1-u^h)(1-u^{h+1})u^{n-h+1}}\,d\varphi\\ &= \frac{1}{2\pi}\int_{-\pi}^\pi \frac{g(u)\exp(nf(u))}{(1-u^h)(1-u^{h+1})}\,d\varphi \end{align*} for \begin{align*} f(u)&=\log(1+3u+u^2)+\Bigl(\frac{h}{n}-1\Bigr)\log u,\\ g(u)&=\frac{(1+u)^2(1-u)^3}{u(u^2+3u+1)}. \end{align*} We set \begin{equation*} \alpha^2=\frac{5h^2}{4n}. \end{equation*} By the assumption \eqref{eq:llt-range-h}, we have \begin{equation}\label{eq:alpha-square-lower-bound} \alpha^2>\frac{\pi^2}{\varepsilon \log n}. \end{equation} Note that $r\to 1$ for $n\to\infty$. We also note that $g(u)=O(1)$ on the area of integration. If $\alpha^2\le\pi$, \[\abs{1-u^h}\ge 1-r^h= 1-\exp(-2\alpha^2)\ge 2\alpha^2\exp(-2\alpha^2)\ge 2\alpha^2\exp(-2\pi);\] thus \begin{equation}\label{eq:limit-theorem-denominator-bounds} \frac{1}{1-u^h}=O\Bigl(\frac{n}{h^2}\Bigr)=O(\log n), \qquad \frac{1}{1-u^{h+1}}=O\Bigl(\frac{n}{h^2}\Bigr)=O(\log n). \end{equation} Otherwise, $r^h\le \exp(-2\pi)$, i.e., $\frac{1}{1-u^h}$ and $\frac{1}{1-u^{h+1}}$ are bounded. Thus \eqref{eq:limit-theorem-denominator-bounds} can be used in any case. We first prune the tails. We set $\delta_n=n^{-1/2+\varepsilon}$ such that $n\delta_n^2=n^{2\varepsilon}$ and $n\delta_n^4=n^{-1+4\varepsilon}\le n^{-1/2+\varepsilon}$ and $n\delta_n/h^2=O(n^{-1/2+\varepsilon}\log n)$ for $n\to\infty$. In particular, we have $\delta_n=o(1)$. 
For $\abs{\varphi}>\delta_n$, we have \begin{align*} \abs{1+3u+u^2}&\le \abs{1+3u}+r^2= \sqrt{1+6r\cos\varphi+9r^2}+r^2\\ &\le \sqrt{1+6r\cos\delta_n+9r^2}+r^2\\ &\le\sqrt{1 + 9r^2 +6r - 6r\frac{\delta_n^2}{3}}+r^2 =\sqrt{(1+3r)^2-2r\delta_n^2}+r^2\\ &\le 1+3r+r^2-\frac{r}{1+3r}\frac{\delta_n^2}{2} \le 1+3r+r^2-\frac{\delta_n^2}{10} \end{align*} for sufficiently large $n$. We conclude that for $\abs{\varphi}>\delta_n$, \begin{equation*} \Re f(u)\le \log\Bigl(1+3r+r^2-\frac{\delta_n^2}{10}\Bigr) + \Bigl(\frac{h}{n}-1\Bigr)\log r \le f(r)-\frac{\delta_n^2}{100} \end{equation*} for sufficiently large $n$. Thus, by \eqref{eq:limit-theorem-denominator-bounds}, \begin{multline*} A_{nh} = \frac1{2\pi}\int_{-\delta_n}^{\delta_n}\frac{g(u)\exp(nf(u))}{(1-u^h)(1-u^{h+1})}\,d\varphi\\ + O\Bigl(\log^2n\exp(nf(r))\exp\Bigl(-\frac{n\delta_n^2}{100}\Bigr)\Bigr). \end{multline*} We now approximate the integrand in the central region. We have \begin{align*} f(u)&=\log 5 + \frac{h}{n}\Bigl(-\frac{5h}{2n}+i\varphi \Bigr) +\frac{1}{5}\Bigl(-\frac{5h}{2n}+i\varphi \Bigr)^2 \\ &\qquad\qquad\qquad+ O\Bigl(\Bigl(\frac{h}{n}+|\varphi|\Bigr)^4\Bigr)\\ &=\log 5 -\frac{5h^2}{4n^2} - \frac{\varphi^2}{5} + O\Bigl(\Bigl(\frac{h}{n}+|\varphi|\Bigr)^4\Bigr),\\ g(u) &= \frac45\Bigl(\frac{5h}{2n} - i\varphi\Bigl)^3\Bigl(1+O\Bigl(\frac{h}{n}+|\varphi|\Bigr)\Bigr),\\ \frac{1-u^{h+1}}{1-u^h}&=1 + \frac{u^h(1-u)}{1-u^h} = 1 + O\Bigl(\frac{n}{h^2}\Bigl(\frac{h}{n}+|\varphi|\Bigr)\Bigr)\\ &=1+O\Bigl(\frac{1}{h}+\frac{n|\varphi|}{h^2}\Bigr). 
\end{align*} Therefore, noting that $n (h/n+\delta_n)^4=O(h^4/n^3+ n\delta_n^4)=o(1)$ yields \begin{multline*} A_{nh}=\frac{2\cdot 5^n\exp\bigl(-\frac{5h^2}{4n}\bigr)}{5\pi} \int_{-\delta_n}^{\delta_n}\frac{\bigl(\frac{5h}{2n}-i\varphi\bigr)^3}{(1-u^h)^2}\exp\Bigl(-\frac{n\varphi^2}{5}\Bigr)\\ \times \Bigl(1+O\Bigl(\frac hn +\frac{h^4}{n^3}+\frac{\log n}{n^{1/2-\varepsilon}}\Bigr)\Bigr)\,d\varphi\\ + O\Bigl(5^n\log^2n\exp\Bigl(-\frac{5h^2}{4n} -\frac{n\delta_n^2}{100}\Bigr)\Bigr). \end{multline*} We now use the substitution $\sqrt{n}\varphi=t$, leading to \begin{multline*} \frac{A_{nh}5\pi\sqrt{n}}{2\cdot 5^n\exp\bigl(-\frac{5h^2}{4n}\bigr)} = \int_{-\delta_n\sqrt{n}}^{\delta_n\sqrt{n}}\frac{\bigl(\frac{5h}{2n}-i\frac{t}{\sqrt{n}}\bigr)^3}{(1-u^h)^2}\exp\Bigl(-\frac{t^2}{5}\Bigr)\\ \times \Bigl(1+O\Bigl(\frac hn +\frac{h^4}{n^3}+\frac{\log n}{n^{1/2-\varepsilon}}\Bigr)\Bigr)\,dt\\ + O\Bigl(\sqrt{n}\log^2n\exp\Bigl(-\frac{n\delta_n^2}{100}\Bigr)\Bigr). \end{multline*} We set \begin{align*} I_{hn}&=\int_{-\infty}^{\infty} \frac{\bigl(\frac{5h}{2n}-i\frac{t}{\sqrt{n}}\bigr)^3}{(1-u^h)^2}\exp\Bigl(-\frac{t^2}{5}\Bigr)\,dt,\\ E_{hn}&=\frac{1}{(1-r^h)^2}\int_{-\infty}^{\infty} \Bigl(\frac{5h}{2n}+\frac{\abs{t}}{\sqrt{n}}\Bigr)^3\exp\Bigl(-\frac{t^2}{5}\Bigr)\,dt, \end{align*} and note that the contribution of $\abs t>\delta_n\sqrt{n}$ is again negligible: we have \begin{multline*} \Bigg|\int_{\delta_n\sqrt{n}}^{\infty} \frac{\bigl(\frac{5h}{2n}-i\frac{t}{\sqrt{n}}\bigr)^3}{(1-u^h)^2}\exp\Bigl(-\frac{t^2}{5}\Bigr)\,dt \Bigg| \\\leq \frac{1}{(1-r^h)^2} \int_{\delta_n\sqrt{n}}^{\infty} \biggl(\frac{5h}{2n}+\frac{t}{\sqrt{n}}\biggr)^3 \exp \biggl( -\frac{t \delta_n \sqrt{n}}{5} \biggr)\,dt. \end{multline*} Now we can use the estimate~\eqref{eq:limit-theorem-denominator-bounds} for $1-r^h$ as before, and the integral in the upper bound can in principle be computed explicitly. 
It is $O((\sqrt{n}\delta_n)^{-1} \exp(-n\delta_n^2/5))$, so the total contribution of the tails (i.e., the regions where $\abs t>\delta_n\sqrt{n}$; of course the estimate for negative $t$ is analogous) is $O(n^{-1/2}\delta_n^{-1}\log^2n \exp(-n\delta_n^2/5))$. It would be possible to give an even better bound, but this is enough for our purposes. We obtain \begin{multline}\label{eq:1} \frac{A_{nh}5\pi\sqrt{n}}{2\cdot 5^n\exp\bigl(-\frac{5h^2}{4n}\bigr)} = I_{hn}+ O\Bigl(E_{hn}\Bigl(\frac hn +\frac{h^4}{n^3}+\frac{\log n}{n^{1/2-\varepsilon}}\Bigr)\Bigr)\\ + O\Bigl(\sqrt{n}\log^2n\exp\Bigl(-\frac{n\delta_n^2}{100}\Bigr)\Bigr). \end{multline} By Lemma~\ref{lemma:integral} with $a=\exp\bigl(-(5h^2)/(2n)\bigr)$, $b=h/\sqrt{n}$, $c=-i/\sqrt{n}$ and $d=(5h)/(2n)$ and by replacing $m+1$ by $m$, we obtain \begin{equation}\label{eq:I_h_n_G} \begin{aligned} I_{hn}&=\frac{25h\sqrt{5\pi}\exp\bigl(\frac{5h^2}{4n}\bigr)}{4n^2}\sum_{m\ge 1}\Bigl(\frac{5 h^{2}}{2n}m^2-3\Bigr)m^2 \exp\Bigl(-\frac{5h^2}{4n}m^2\Bigr)\\ &=\frac{25h\sqrt{5\pi}\exp(\alpha^2)}{4n^2}G(\alpha). \end{aligned} \end{equation} The integral $E_{hn}$ can be bounded by \begin{equation}\label{eq:limit-theorem-E-bound} E_{hn}=O\biggl(\frac{\frac{h^3}{n^3}+\frac{1}{n^{3/2}}}{(1-r^h)^2}\biggr). \end{equation} We first consider the case $\alpha^2\ge \pi$. In this case, we have $E_{hn}=O(h^3/n^3)$. All summands in the first expression in \eqref{eq:poisson} are positive and its first summand is at least $\alpha^2\exp(-\alpha^2)$, so that \begin{equation*} I_{hn}=\Omega\Bigl(\frac{h^3}{n^3}\Bigr)=\Omega(E_{hn}). \end{equation*} Then \eqref{eq:1} yields \begin{equation}\label{eq:probability-case-1} \frac{A_{nh}5\pi\sqrt{n}}{2\cdot 5^n\exp\bigl(-\frac{5h^2}{4n}\bigr)} = \frac{25h\sqrt{5\pi}\exp(\alpha^2)}{4n^2}G(\alpha)\Bigl(1+ O\Bigl(\frac hn +\frac{h^4}{n^3}+\frac{\log n}{n^{1/2-\varepsilon}}\Bigr)\Bigr). \end{equation} We now turn to the case $\alpha^2<\pi$. 
We now use the second expression for $G(\alpha)$ in \eqref{eq:poisson}. Again, all summands are positive and we bound $G(\alpha)$ by the first summand from below. This yields \begin{equation*} G(\alpha)=\Omega\Bigl(\frac{1}{\alpha^{7}} \exp\Bigl(-\frac{\pi^2}{\alpha^2}\Bigr)\Bigr) \end{equation*} and, by \eqref{eq:I_h_n_G} and \eqref{eq:alpha-square-lower-bound}, \begin{equation*} I_{hn}=\Omega\Bigl(\frac{n^{3/2}}{h^{6}}\exp(-\varepsilon\log n)\Bigr)=\Omega\Bigl(\frac{n^{3/2-\varepsilon}}{h^6}\Bigr). \end{equation*} For an upper bound of $E_{hn}$, we use the estimate $(1-r^h)^{-1}=O(n/h^2)$, cf.\ \eqref{eq:limit-theorem-denominator-bounds}. We get \begin{equation*} E_{hn}=O\Bigl(\frac{1}{n^{3/2}}\cdot \frac{n^2}{h^4}\Bigr)=O\Bigl(\frac{n^{1/2}}{h^{4}}\Bigr)= O\Bigl(\frac{n^{3/2-\varepsilon}}{h^6} \frac{h^{2}}{n} n^{\varepsilon}\Bigr)=O(n^{\varepsilon}I_{hn}). \end{equation*} Thus \eqref{eq:1} yields \begin{equation}\label{eq:probability-case-2} \frac{A_{nh}5\pi\sqrt{n}}{2\cdot 5^n\exp\bigl(-\frac{5h^2}{4n}\bigr)} = \frac{25h\sqrt{5\pi}\exp(\alpha^2)}{4n^2}G(\alpha)\Bigl(1+ O\Bigl(\frac{\log n}{n^{1/2-2\varepsilon}}\Bigr)\Bigr). \end{equation} Combining \eqref{eq:probability-case-1} and \eqref{eq:probability-case-2} with \eqref{eq:number-trees} yields the result. \end{proof} \subsection{Number of Vertices} In this section, we consider the number of vertices of a random rooted plane multi-edge tree of size $n$. We first give an explicit formula. \begin{theorem}\label{theorem:fixed-vertices} The number of rooted plane multi-edge trees of size $n$ with $k$ vertices is \begin{equation}\label{eq:exact-number} \frac1k\binom{2k-2}{k-1}\binom{n-1}{k-2}. \end{equation} \end{theorem} \begin{proof} We first provide a proof based on the generating function, which will also be needed later. Let $T(y,z)$ be the bivariate generating function for rooted plane multi-edge trees, where $y$ marks the number of vertices and $z$ the number of edges. 
Rooted plane multi-edge trees $\mathcal{T}$ can be represented symbolically as \begin{equation}\label{eq:symbolic-equation-marked} \mathcal{T} = \{y\} \times (\mathcal{E}^+\mathcal{T})^*. \end{equation} This symbolic equation translates to \begin{equation}\label{eq:vertices-bivariate-generating-function-equation-1} T(y, z)=\frac{y}{1-\frac{z}{1-z}T(y, z)}=\frac{y(1-z)}{1-z-zT(y, z)}. \end{equation} For a fixed $z$, we compute the coefficient $[y^k]T(y, z)$ using the Lagrange inversion formula. By~\eqref{eq:vertices-bivariate-generating-function-equation-1}, we have \begin{equation*} y= T(y, z) \frac{1-z-zT(y, z)}{1-z}. \end{equation*} Now the Lagrange inversion formula gives us \begin{align*} [y^k]T(y, z)&=\frac1k [T^{k-1}]\Bigl(\frac{1-z}{1-z-zT}\Bigr)^{k}\\ &=\frac1k[T^{k-1}]\Bigl(\frac1{1-\frac{z}{1-z}T}\Bigr)^k \\&=\frac1k \binom{-k}{k-1}(-1)^{k-1} \Bigl(\frac{z}{1-z}\Bigr)^{k-1}\\ &=\frac{1}{k}\binom{2k-2}{k-1}\Bigl(\frac{z}{1-z}\Bigr)^{k-1}. \end{align*} Finally, we extract the coefficient of $z^n$: \begin{align*} [z^n][y^k]T(y, z)&=\frac1k\binom{2k-2}{k-1}[z^{n-k+1}](1-z)^{1-k}\\ &= \frac1k\binom{2k-2}{k-1}\binom{1-k}{n-k+1}(-1)^{n-k+1}\\ &=\frac1k\binom{2k-2}{k-1}\binom{n-1}{n-k+1}\\ &=\frac1k\binom{2k-2}{k-1}\binom{n-1}{k-2}. \end{align*} \end{proof}
\begin{proof}[Combinatorial proof of Theorem~\ref{theorem:fixed-vertices}] It is well known that the number of plane rooted trees (without multiple edges) with $k$ vertices is given by the Catalan number $C_{k-1}=\frac1k\binom{2k-2}{k-1}$. Each such tree can be transformed into a multi-edge tree of size $n$ by distributing the $n$ edges among the $k-1$ edges of the non-multi-edge tree. This corresponds to a composition of $n$ into $k-1$ parts. There are $\binom{n-1}{k-2}$ such compositions. Thus there are $\frac1k\binom{2k-2}{k-1}\binom{n-1}{k-2}$ plane rooted multi-edge trees of size $n$ with $k$ vertices.
\end{proof} The distribution of the number of vertices can now be derived from the explicit formula in Theorem~\ref{theorem:fixed-vertices} using Stirling's formula. In order to determine the asymptotic behaviour of the moments, we use an approach via Hwang's quasi power theorem which turns out to be more convenient. \begin{theorem}\label{theorem:number-vertices} Let $V_n$ be the number of vertices of a random rooted plane multi-edge tree of size $n$. Then \begin{align*} \mathbb{E}(V_n)&=\frac{4}{5}n+\frac{9}{10}+O\left(\frac{1}{n}\right),\\ \mathbb{V}(V_n)&=\frac{4}{25}n + \frac{2}{25}+O\left(\frac{1}{n}\right),\\ \intertext{and} \P\left(\frac{V_n-\frac45n}{\frac25\sqrt{n}}\le v\right)&=\frac1{\sqrt{2\pi}}\int_{-\infty}^v e^{-t^2/2}\,dt + O\left(\frac{1}{\sqrt{n}}\right) \end{align*} holds uniformly for $v\in\mathbb{R}$. Furthermore, the local limit theorem \begin{equation*} \P(V_n=k)\sim \frac{5}{2\sqrt{2 n\pi}}\exp\Bigl(-\frac{1}{2}\Bigl(\frac{k-\frac{4}{5}n}{\frac{2}{5}\sqrt{n}}\Bigr)^2\Bigr) \end{equation*} holds for $k = \frac{4n}{5} + o(n^{2/3})$. \end{theorem} \begin{proof} Let $T(y,z)$ be the bivariate generating function as in the first proof of Theorem~\ref{theorem:fixed-vertices}. The functional equation~\eqref{eq:vertices-bivariate-generating-function-equation-1} is equivalent to \begin{equation}\label{eq:vertices-bivariate-generating-function-equation} zT(y, z)^2-(1-z)T(y, z)+y(1-z)=0. \end{equation} Solving this quadratic equation yields \begin{equation}\label{eq:bivariate-generating-function} T(y, z)=\frac{(1-z)-\sqrt{1-z}\sqrt{1-(4y+1)z}}{2z}; \end{equation} note that the negative sign has to be chosen to obtain regularity at $z=0$. The probability generating function of $V_n$ is then \begin{equation*} p_n(y)=\frac{[z^n]T(y, z)}{[z^n]T(1, z)}. \end{equation*} For $y$ in a neighbourhood of $1$, the dominant singularity is at $z=1/(1+4y)$. 
As \begin{align*} T(y, z)&=\frac{1-\frac1{1+4y}}{\frac{2}{1+4y}}-\frac{\sqrt{1-\frac{1}{1+4y}}}{\frac{2}{1+4y}}\sqrt{1-(4y+1)z} + O(1- (4y+1)z)\\ &= 2y - \sqrt{y(1+4y)}\sqrt{1-(4y+1)z} + O(1- (4y+1)z) \end{align*} for $z\to 1/(1+4y)$ except for one ray, singularity analysis \cite{Flajolet-Odlyzko:1990:singul} yields \begin{align*} [z^n]T(y, z) &= -\frac{\sqrt{y(1+4y)}}{\Gamma(-1/2)}(4y+1)^{n}n^{-3/2} + O((4y+1)^n n^{-5/2}) \\ &=\frac{\sqrt{y(1+4y)}}{2\sqrt{\pi n^3}}(4y+1)^n\Bigl(1+O\Bigl(\frac1n\Bigr)\Bigr). \end{align*} For $y=1$, this coincides with \eqref{eq:number-trees}. Thus \begin{equation*} p_n(y)=\frac{[z^n]T(y, z)}{[z^n]T(1, z)}=\sqrt{y}\left(\frac{4y+1}{5}\right)^{n+1/2}\left(1+O\left(\frac1n\right)\right). \end{equation*} The asymptotic formul\ae{} for mean and variance in Theorem~\ref{theorem:number-vertices} as well as the central limit theorem are an immediate consequence of Hwang's quasi power theorem~\cite{Hwang:1998} in the version of \cite[Theorem~IX.8]{Flajolet-Sedgewick:ta:analy}. The local limit theorem follows immediately from the explicit formula in~Theorem \ref{theorem:fixed-vertices}: applying Stirling's formula to~\eqref{eq:exact-number}, we find that the total number of multi-edge trees with $n$ edges and $k = \frac{4n}{5} + R$ vertices is equal to $$\frac{5^{n+3/2}}{4\pi \sqrt{2} n^2} \exp \Big( - \frac{25R^2}{8n} + O \Big( \frac{1}{n} + \frac{R}{n} + \frac{R^3}{n^2} \Big) \Big).$$ Combining this with the asymptotic formula~\eqref{eq:number-trees} for the total number $A_n$ of multi-edge trees with $n$ edges, we obtain the desired statement for $R = o(n^{2/3})$. \end{proof} \bibliographystyle{amsplainurl}
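The explicit formula of Theorem~\ref{theorem:fixed-vertices} can be cross-checked against a dynamic program built directly on the branch decomposition, independently of Lagrange inversion (a sketch; the array names are ours):

```python
import math

N = 8  # check all trees with up to N edges

# t[n][k]: multi-edge trees with n edges and k vertices
# s[n][k]: sequences of branches with n edges and k non-root vertices,
#          a branch being (j >= 1 parallel edges) plus a subtree
t = [[0] * (N + 2) for _ in range(N + 1)]
s = [[0] * (N + 2) for _ in range(N + 1)]
s[0][0] = t[0][1] = 1
for n in range(1, N + 1):
    for k in range(N + 2):
        s[n][k] = sum(t[w - j][v] * s[n - w][k - v]
                      for w in range(1, n + 1)        # edges used by the first branch
                      for j in range(1, w + 1)        # parallel edges to the root
                      for v in range(min(w, k) + 1))  # vertices in its subtree
        if k:
            t[n][k] = s[n][k - 1]                     # the root adds one vertex

for n in range(1, N + 1):
    assert sum(t[n]) == [1, 3, 10, 36, 137, 543, 2219, 9285][n - 1]
    for k in range(2, n + 2):
        assert t[n][k] == math.comb(2*k - 2, k - 1) * math.comb(n - 1, k - 2) // k
print(t[4][2:6])  # [1, 6, 15, 14]
```

The row sums reproduce the counting sequence $1, 3, 10, 36, \dots$ of trees by edges, as they must.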
https://arxiv.org/abs/1701.03203
Stability of the Heisenberg Product on Symmetric Functions
The Heisenberg product is an associative product defined on symmetric functions which interpolates between the usual product and the Kronecker product. In 1938, Murnaghan discovered that the Kronecker product of two Schur functions stabilizes. We prove an analogous result for the Heisenberg product of Schur functions.
\section{Introduction} \label{intro} Aguiar, Ferrer Santos, and Moreira introduced a new product on symmetric functions (and on representations of symmetric groups) \cite{M-2015}. Unlike the usual product and the Kronecker product, the terms appearing in the Heisenberg product of two Schur functions have different degrees. The highest degree component is the usual product. When the Schur functions have the same degree, the lowest degree component of the Heisenberg product is their Kronecker product. In 1938, Murnaghan \cite{F-1938} found that the Kronecker product of two Schur functions stabilizes in the following sense. Given a partition $\lambda$ of $l$ and a large integer $n$, let $\lambda[n]$ be the partition of $n$ obtained by prepending a part of size $n-l$ to $\lambda$. Given two partitions $\lambda$ and $\mu$, the coefficients appearing in the Schur expansion of the Kronecker product $s_{\lambda[n]}*s_{\mu[n]}$ do not depend upon $n$ when $n$ is large enough. The paper is organized as follows. In the second section, we give some basic notations and definitions. In the third section, we define the Aguiar coefficients and prove that the low degree terms of the Heisenberg product also stabilize. Section 4 offers an example of stabilization. In the last section, we define the stable Aguiar coefficients and show how to recover the usual Aguiar coefficients from the stable ones. \section{Preliminaries} \label{Preliminaries} We begin by defining the Kronecker coefficients in terms of representations of symmetric groups. For any partition $\alpha$, let $V_\alpha$ denote the irreducible representation of $S_{|\alpha|}$ indexed by $\alpha$. Let $\lambda$, $\mu$ and $\nu$ be partitions of $n$ (written as $\lambda, \mu, \nu \vdash n$). While the tensor product $V_\lambda\otimes V_\mu$ is a representation of $S_n\times S_n$, it can also be considered as a representation of $S_n$ (by viewing $S_n$ as a subgroup of $S_n\times S_n$ through the diagonal map).
Write it as $\text{Res}_{S_n}^{S_n\times S_n} (V_\lambda \otimes V_\mu)$. The Kronecker coefficient $g_{\lambda, \mu}^\nu$ is the multiplicity of $V_\nu$ in the decomposition of $\text{Res}_{S_n}^{S_n\times S_n} (V_\lambda \otimes V_\mu)$ into irreducibles. That is, $$\text{Res}_{S_n}^{S_n\times S_n} (V_\lambda \otimes V_\mu)=\bigoplus\limits_{\nu\vdash n} g_{\lambda, \mu}^{\nu} V_\nu.$$ There is a one-to-one correspondence between irreducible representations and Schur functions, given by the Frobenius map, which sends $V_\lambda$ to the Schur function $s_\lambda$. So we can also express the Kronecker product (denoted by $*$) in terms of symmetric functions: $$s_\lambda * s_\mu=\sum\limits_{\nu\vdash n} g_{\lambda, \mu}^\nu s_\nu.$$ We will switch between the two languages. Given a partition $\lambda=(\lambda_1, \lambda_2, \dotsc)$ and a positive integer $n$, let $\lambda[n]$ be the sequence $(n-|\lambda|,\lambda_1, \lambda_2, \dotsc)$. When $n \geq |\lambda| + \lambda_1$, $\lambda[n]$ is a partition of $n$. The stability of Kronecker coefficients says that for any partitions $\lambda$, $\mu$, $\nu$, the Kronecker coefficient $g_{\lambda[n], \mu[n]}^{\nu[n]}$ does not depend on $n$ when $n$ is large enough. Write $\overline{g}_{\lambda \mu}^\nu$ for the stable value of the above Kronecker coefficient and call it a reduced Kronecker coefficient. When all the Kronecker coefficients in $s_{\lambda[n]} * s_{\mu[n]}$ reach the reduced ones, we say that the Kronecker product stabilizes. Moreover, Murnaghan \cite{F-1938} also claimed that $\overline{g}_{\lambda \mu}^\nu$ vanishes unless $$|\lambda|\leq |\mu|+|\nu|,\hskip 5mm |\mu|\leq |\lambda|+|\nu|,\hskip 5mm |\nu|\leq |\lambda|+|\mu|,$$ which are the triangle inequalities. When $|\nu|= |\lambda|+|\mu|$, $\overline{g}_{\lambda \mu}^\nu$ is equal to the Littlewood-Richardson coefficient $c_{\lambda \mu}^\nu$ \cite{F-1938}.
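The padding construction $\lambda\mapsto\lambda[n]$ above is straightforward to make explicit; the following Python sketch (the helper name `pad` is ours) simply encodes the definition:

```python
def pad(lam, n):
    """Return lam[n] = (n - |lam|, lam_1, lam_2, ...): prepend a part of
    size n - |lam|.  This is a partition of n exactly when n >= |lam| + lam_1."""
    head = n - sum(lam)
    if lam and head < lam[0]:
        raise ValueError("need n >= |lam| + lam_1 for lam[n] to be a partition")
    return (head,) + tuple(lam)
```

For instance, `pad((2, 1), 7)` returns `(4, 2, 1)`, while `pad((2, 1), 4)` raises, since $(1,2,1)$ is not a partition.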
Briand et al.~\cite{E-2010} determined when the expansion of the Kronecker product stabilizes and provided another condition for the reduced Kronecker coefficients to be nonzero. \begin{prop}[\cite{E-2010} Theorem 1.2] Let $\lambda$ and $\mu$ be partitions. The expansion of the Kronecker product $s_{\lambda[n]} * s_{\mu[n]}$ stabilizes at $n=|\lambda|+|\mu|+\lambda_1+\mu_1$. \end{prop} \begin{prop}[\cite{E-2010} Theorem 3.2] Let $\lambda$ and $\mu$ be partitions. Then $$\max\{|\nu|+\nu_1\mid \nu \;\text{partition},\, \overline{g}_{\lambda, \mu}^\nu> 0\}=|\lambda|+|\mu|+\lambda_1+\mu_1.$$ \end{prop} Other than stability, we do not know much about the Kronecker coefficients. Aguiar et al.~\cite{M-2015} introduced a new product which interpolates between the Kronecker and the usual product on symmetric functions. \begin{defn}{(Heisenberg product)} Let $V$ and $W$ be representations of $S_n$ and $S_m$ respectively. Fix an integer $l\in[\text{max}\{m,n\}, m+n]$, and let $a=l-m$, $b=n+m-l$, and $c=l-n$. Observe that $S_x\times S_y$ is a subgroup of $S_{x+y}$: $S_x\times S_y\hookrightarrow S_{x+y}$, for any nonnegative integers $x$ and $y$. Also, $S_b$ can be considered as a subgroup of $S_b\times S_b$ through the diagonal embedding $\Delta_{S_b}: S_b\hookrightarrow S_b\times S_b$.
We have the diagram of inclusions: \begin{equation} \label{IncluDiag} \raisebox{25pt}{ \xymatrix{ {S_a \times S_b \times S_b \times S_c} \ar@{^{(}->}[r] &{S_{a+b} \times S_{b+c} =S_n\times S_m}\\ {S_a \times S_b \times S_c} \ar@{^{(}->}[u]^{id_{S_a}\times \Delta_{S_b}\times id_{S_c}} \ar@{^{(}->}[ur] \ar@{^{(}->}[r] &{S_{a+b+c} =S_l} }} \end{equation} The Heisenberg product (denoted by $\sharp$) of $V$ and $W$ is \begin{equation} \begin{split} V \sharp W= \bigoplus\limits_{l=\text{max}(n,m)}^{n+m} (V\sharp W)_l, \end{split} \end{equation} where \begin{equation} (V\sharp W)_l=\text{Ind}_{S_a\times S_b\times S_c}^{S_l}\text{Res}_{S_a\times S_b\times S_c}^{S_n\times S_m} (V\otimes W) \end{equation} is the degree $l$ component. \end{defn} When $l= m+ n$, $(V\sharp W)_l=\text{Ind}_{S_n\times S_m}^{S_{n+m}} (V\otimes W)$, which is the usual induction product of representations (corresponding to the usual product of symmetric functions); when $l=n=m$, $(V\sharp W)_l=\text{Res}_{S_l}^{S_l\times S_l} (V\otimes W)$, which is the Kronecker product of representations. The Heisenberg product connects the usual induction product and the Kronecker product. Remarkably, this product is associative (\cite{M-2015}, Theorems 2.3, 2.4, and 2.6). By the definition of the Heisenberg product, the lower degree components behave like the Kronecker product. A natural question is whether those components stabilize. \begin{thm} Given nonnegative integers $d$ and $h$ and two partitions $\lambda$ and $\mu$, the expansion of $(V_{\lambda[n]}\sharp V_{\mu[n-d]})_{n+h}$ (degree $h$ higher than the lowest degree) stabilizes when $n \geq |\lambda| +|\mu|+\lambda_1+\mu_1+ 3h+ 2d$. Moreover, this is where the stabilization begins. \end{thm} \section{Proof of Theorem 2.3} \label{Proof of Thm} Let $\lambda$ be a partition.
Define $\lambda^+$ to be the partition obtained from $\lambda$ by adding $1$ to the first part: $\lambda^+= (\lambda_1+1, \lambda_2, \lambda_3, \dotsc)$; similarly, set $\lambda^-= (\lambda_1-1, \lambda_2, \lambda_3, \dotsc)$. Let $\overline{\lambda}=(\lambda_2, \lambda_3, \dotsc)$ be the partition obtained from $\lambda$ by removing the first part. For partitions $\lambda$ and $\mu$, we set $\lambda+\mu=(\lambda_1+\mu_1, \lambda_2+\mu_2, \dotsc)$ and $\lambda-\mu=(\lambda_1-\mu_1, \lambda_2-\mu_2, \dotsc)$. \begin{lem} Let $\lambda$, $\mu$ and $\nu$ be partitions with $|\nu|=|\lambda|+|\mu|$.

(1) If $\nu_1-\nu_2\geq |\lambda|$, then $c_{\lambda, \mu}^\nu=c_{\lambda, \mu^+}^{\nu^+}$.

(2) If $\mu_1-\mu_2\geq |\lambda|$, then $c_{\lambda, \mu}^\nu=c_{\lambda, \mu^+}^{\nu^+}$. \end{lem} \begin{proof} By the Littlewood-Richardson Rule, $c_{\alpha, \beta}^\gamma$ ($\alpha$, $\beta$, and $\gamma$ are partitions) counts the number of semistandard skew tableaux of shape $\gamma/\beta$ and weight $\alpha$ whose row reading word is a lattice permutation (\cite{Mac}, Chapter 1 Section 9). Let $T_{\alpha, \beta}^\gamma$ be the set of these tableaux. We show that $|T_{\lambda, \mu}^\nu|=|T_{\lambda, \mu^+}^{\nu^+}|$. Note that $T_{\lambda, \mu}^\nu= \emptyset$ unless $\mu \subset \nu$, and $\mu \subset \nu$ if and only if $\mu^+ \subset \nu^+$, hence it is enough to consider the case $\mu \subset \nu$. The skew diagrams $\nu/\mu$ and ${\nu^+}/{\mu^+}$ differ only by a shift of the first row. Since $\nu_1-\nu_2\geq |\lambda|$, the first row of $\nu/\mu$ (which may be empty) is disconnected from the rest of the skew diagram, and similarly for ${\nu^+}/{\mu^+}$. This gives us a natural bijection between $T_{\lambda, \mu}^\nu$ and $T_{\lambda, \mu^+}^{\nu^+}$. Hence $|T_{\lambda, \mu}^\nu|=|T_{\lambda, \mu^+}^{\nu^+}|$, and (1) is proved. The proof of (2) is the same, as $\mu_1-\mu_2 \geq |\lambda|$ also implies that the first row of $\nu/\mu$ is disconnected from the rest of it.
\end{proof} \begin{rem} When $\lambda$, $\mu$, and $\nu$ do not satisfy the conditions in Lemma 3.1, the one unit shift of the first row may fail to be a bijection between $T_{\lambda, \mu}^\nu$ and $T_{\lambda, \mu^+}^{\nu^+}$. However, it is still a well-defined injection from $T_{\lambda, \mu}^\nu$ to $T_{\lambda, \mu^+}^{\nu^+}$, which means $c_{\lambda, \mu}^\nu \leq c_{\lambda, \mu^+}^{\nu^+}$. In other words, the sequence $c_{\lambda, \overline{\mu}[n]}^{\overline{\nu}[n+|\lambda|]}$ is weakly increasing and is constant when $n$ is large. \end{rem} Brion \cite{Brion} and Manivel \cite{Manivel} provide an analogous result for the Kronecker coefficients: \begin{prop} Let $\lambda$, $\mu$, and $\nu$ be partitions. The sequence $g_{\lambda[n],\mu[n]}^{\nu[n]}$ is weakly increasing. \end{prop} The Aguiar coefficient $a_{\lambda, \mu}^\nu$ is the multiplicity of $V_\nu$ in $V_\lambda\sharp V_\mu$, i.e. $$V_\lambda \sharp V_\mu = \bigoplus\limits_{l=\text{max}(n,m)}^{n+m} \bigoplus\limits_{\nu\vdash l} a_{\lambda, \mu}^\nu V_\nu,$$ and we set $a^{\nu}_{\lambda, \mu}=0$ if $\lambda$, $\mu$ or $\nu$ is not a partition. The first part of Theorem 2.3 states that when $n\geq |\lambda| +|\mu|+\lambda_1+\mu_1+ 3h+ 2d$, \begin{equation}\label{Aguiar} a_{\lambda[n], \mu[n-d]}^{\nu^-}= a_{\lambda[n+1], \mu[n-d+1]}^{\nu} \end{equation} for all $\nu\vdash n+h+1$. To prove \eqref{Aguiar}, we first express the Aguiar coefficient in terms of the Littlewood-Richardson coefficients and the Kronecker coefficients. \begin{lem} For each $\nu\vdash l$, \begin{equation} a_{\lambda, \mu}^\nu=\sum\limits_{\shortstack{\scriptsize {$\alpha \vdash a, \rho \vdash c$, \scriptsize $\tau \vdash n$}\\ \scriptsize {$\beta, \eta, \delta \vdash b$}}} c_{\alpha, \beta}^\lambda\,\, c_{\eta, \rho}^\mu\,\, g_{\beta, \eta}^\delta\,\, c_{\alpha, \delta}^\tau\,\, c_{\tau, \rho}^\nu \end{equation} where $\max(n,m)\leq l\leq n+m$, $a=l-m$, $b=m+n-l$, and $c=l-n$.
\end{lem} \begin{proof} Consider the diagram \eqref{IncluDiag} we used to define the Heisenberg product. Given partitions $\lambda\vdash n$ and $\mu\vdash m$, $V_\lambda \otimes V_\mu$ is a representation of $S_n\times S_m (=S_{a+b}\times S_{b+c})$. We compute the Heisenberg product of $V_\lambda$ and $V_\mu$ in three steps. \begin{equation} \label{Step} \raisebox{40pt}{ \xymatrix{ {S_a\! \times\! S_b\! \times\! S_b\! \times\! S_c} \ar@/_1pc/[d]_{(2)} &&{S_{a+b}\! \times\! S_{b+c} =S_n\!\times\! S_m} \ar@/_1pc/[ll]_{(1)} \ar@{^{(}->}[ll];[]\\ {S_a \times S_b \times S_c} \ar@{^{(}->}[u] \ar@{^{(}->}[urr] \ar@{^{(}->}[rr]_{(3)} \ar@{^{(}->}[dr]^{(3.1)}&&{S_{a+b+c} =S_l}\\ &{S_{a+b}\times S_c} \ar@{^{(}->}[ur]^{(3.2)}& \hskip -40mm =S_n\times S_c }} \end{equation} First, we restrict the representation from $S_n\times S_m$ to $S_a\times S_b\times S_b\times S_c$, \begin{equation} \tag*{(1)} \text{Res}_{S_a\times S_b\times S_b\times S_c}^{S_n\times S_m} (V_\lambda \otimes V_\mu)= \bigoplus\limits_{\shortstack{\scriptsize {$\alpha \vdash a$}\\ \scriptsize {$\beta \vdash b$}}} \bigoplus\limits_{\shortstack{\scriptsize {$\eta \vdash b$}\\\scriptsize {$\rho \vdash c$}}} c_{\alpha, \beta}^\lambda\,\, c_{\eta, \rho}^\mu\, V_\alpha\otimes V_\beta \otimes V_\eta \otimes V_\rho. \end{equation} Second, pull back to $S_a\times S_b\times S_c$ along the diagonal map of $S_b$. For $\alpha \vdash a$, $\rho \vdash c$, and $\beta, \eta\vdash b$ we have, \begin{align} \tag*{(2)} \text{Res}_{S_a\times S_b\times S_c}^{S_a\times S_b\times S_b\times S_c} (V_\alpha\otimes V_\beta \otimes V_\eta \otimes V_\rho)= \bigoplus\limits_{\delta\vdash b} g_{\beta, \eta}^\delta\, V_\alpha\otimes V_\delta\otimes V_\rho. \end{align} The final step is the induction from $S_a\times S_b\times S_c$ to $S_{a+b+c} (=S_l)$. Break this step into two substeps as in \eqref{Step}. 
Given $\alpha \vdash a$, $\delta\vdash b$, and $\rho\vdash c$, we have: \begin{align*} \tag*{(3)} \text{Ind}_{S_a\times S_b\times S_c}^{S_l} (V_\alpha\otimes V_\delta\otimes V_\rho)&= \text{Ind}_{S_n\times S_c}^{S_l}\text{Ind}_{S_a\times S_b\times S_c}^{S_n\times S_{c}} (V_\alpha\otimes V_\delta\otimes V_\rho)\\ &=\bigoplus\limits_{\shortstack{\scriptsize {$\tau \vdash n$}\\ \scriptsize {$\nu \vdash l$}}} c_{\alpha, \delta}^\tau\,\, c_{\tau, \rho}^\nu\,\, V_\nu. \end{align*} Combining $(1)$, $(2)$, and $(3)$ gives \begin{align*} (V_\lambda\sharp V_\mu)_l&=\text{Ind}_{S_a\times S_b\times S_c}^{S_{l}} \text{Res}_{S_a\times S_b\times S_c}^{S_a\times S_b\times S_b\times S_c} \text{Res}_{S_a\times S_b\times S_b\times S_c}^{S_{n}\times S_{m}} (V_\lambda\otimes V_\mu)\\ &=\bigoplus\limits_{\shortstack{\scriptsize {$\alpha \vdash a, \rho \vdash c, \tau \vdash n$}\\ \scriptsize ${\beta, \eta, \delta \vdash b, \nu\vdash l}$}} c_{\alpha, \beta}^\lambda\,\, c_{\eta, \rho}^\mu\,\, g_{\beta, \eta}^\delta\,\, c_{\alpha, \delta}^\tau\,\, c_{\tau, \rho}^\nu\,\, V_\nu. \end{align*} So for $\nu\vdash l$, \begin{equation*} a_{\lambda, \mu}^\nu=\sum\limits_{\shortstack{\scriptsize {$\alpha \vdash a, \rho \vdash c$, \scriptsize $\tau \vdash n$}\\ \scriptsize {$\beta, \eta, \delta \vdash b$}}} c_{\alpha, \beta}^\lambda\,\, c_{\eta, \rho}^\mu\,\, g_{\beta, \eta}^\delta\,\, c_{\alpha, \delta}^\tau\,\, c_{\tau, \rho}^\nu, \end{equation*} as claimed. \end{proof} We set $c_{\lambda, \mu}^\nu=0$ when $\lambda$, $\mu$, or $\nu$ is not a partition. Then (3.2) holds for all compositions $\nu$ of $l$.
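For small partitions, the Littlewood-Richardson coefficients appearing in this formula can be computed by brute force directly from the tableau description recalled in the proof of Lemma 3.1 (semistandard skew tableaux whose reverse reading word is a lattice permutation). A purely illustrative Python sketch, exponential in the number of cells (the helper name `lr_coefficient` is ours):

```python
from itertools import product

def lr_coefficient(alpha, beta, gamma):
    """Count semistandard skew tableaux of shape gamma/beta and weight
    alpha whose reverse reading word is a lattice permutation."""
    rows = len(gamma)
    beta = tuple(beta)
    if sum(beta[rows:]) > 0:
        return 0
    beta = beta[:rows] + (0,) * (rows - len(beta))
    if any(beta[i] > gamma[i] for i in range(rows)):
        return 0
    if sum(gamma) - sum(beta) != sum(alpha):
        return 0
    cells = [(i, j) for i in range(rows) for j in range(beta[i], gamma[i])]
    k = len(alpha)

    def valid(filling):
        entry = dict(zip(cells, filling))
        # weight alpha
        if any(filling.count(t + 1) != alpha[t] for t in range(k)):
            return False
        # rows weakly increase, columns strictly increase
        for (i, j), v in entry.items():
            if (i, j - 1) in entry and entry[(i, j - 1)] > v:
                return False
            if (i - 1, j) in entry and entry[(i - 1, j)] >= v:
                return False
        # lattice word: read rows right to left, top to bottom
        seen = [0] * (k + 1)
        for i in range(rows):
            for j in range(gamma[i] - 1, beta[i] - 1, -1):
                v = entry[(i, j)]
                seen[v] += 1
                if v > 1 and seen[v] > seen[v - 1]:
                    return False
        return True

    return sum(valid(f) for f in product(range(1, k + 1), repeat=len(cells)))
```

For instance, `lr_coefficient((2, 1), (2, 1), (3, 2, 1))` returns the classical value $2$, and one can spot-check instances of Lemma 3.1, e.g. `lr_coefficient((1,), (2, 1), (3, 1)) == lr_coefficient((1,), (3, 1), (4, 1))`.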
Combining (3.1) and (3.2) shows that, to prove the first part of Theorem 2.3, it is enough to show that, when $n\geq |\lambda| +|\mu|+\lambda_1+\mu_1+ 3h+ 2d$, \begin{equation} \begin{split} \sum_{(\alpha, \beta, \eta, \rho, \delta, \tau) \in T} &c_{\alpha, \beta}^{\lambda [n]}\,\, c_{\eta, \rho}^{\mu[n-d]}\,\, g_{\beta, \eta}^\delta\,\, c_{\alpha, \delta}^\tau\,\, c_{\tau, \rho}^{\nu^-}= \\ &\sum_{(\alpha^*, \beta^*, \eta^*, \rho^*, \delta^*, \tau^*)\in T^*} c_{\alpha^*, \beta^*}^{\lambda [n+1]}\,\, c_{\eta^*, \rho^*}^{\mu[n+1-d]}\,\, g_{\beta^*, \eta^*}^{\delta^*}\,\, c_{\alpha^*, \delta^*}^{\tau^*}\,\, c_{\tau^*, \rho^*}^{\nu} \end{split} \end{equation} \noindent for all $\nu \vdash n+h+1$, where \begin{eqnarray*} T&=&\{(\alpha, \beta, \eta, \rho, \delta, \tau)\mid\alpha \vdash d+h, \rho\vdash h, \tau\vdash n, \beta, \eta, \delta\vdash n-d-h \};\\ T^*&=&\{(\alpha^*, \beta^*, \eta^*, \rho^*, \delta^*, \tau^*)\mid \alpha^* \vdash d+h, \rho^*\vdash h,\tau^*\vdash n+1,\\ &&\hskip 70mm\beta^*,\eta^*, \delta^* \vdash n-d-h+1\}. \end{eqnarray*} Define $f: T\longrightarrow \mathbb{Z}_{\geq 0}$ and $f^*: T^*\longrightarrow \mathbb{Z}_{\geq 0}$ as follows: $$f(\alpha, \beta, \eta, \rho, \delta, \tau)=c_{\alpha, \beta}^{\lambda [n]}\,\, c_{\eta, \rho}^{\mu[n-d]}\,\, g_{\beta, \eta}^\delta\,\, c_{\alpha, \delta}^\tau\,\, c_{\tau, \rho}^{\nu^-},$$ $$f^*(\alpha^*, \beta^*, \eta^*, \rho^*, \delta^*, \tau^*)=c_{\alpha^*, \beta^*}^{\lambda [n+1]}\,\, c_{\eta^*, \rho^*}^{\mu[n+1-d]}\,\, g_{\beta^*, \eta^*}^{\delta^*}\,\, c_{\alpha^*, \delta^*}^{\tau^*}\,\, c_{\tau^*, \rho^*}^{\nu}.$$ Then equation (3.4) becomes: \begin{equation} \begin{split} \sum_{(\alpha, \beta, \eta, \rho, \delta, \tau) \in T} f(\alpha, \beta, &\eta, \rho, \delta, \tau) =\\ &\sum_{(\alpha^*, \beta^*, \eta^*, \rho^*, \delta^*, \tau^*)\in T^*} f^*(\alpha^*, \beta^*, \eta^*, \rho^*, \delta^*, \tau^*). \end{split} \end{equation} Some terms in the sums of (3.5) vanish. Let us consider only the nonvanishing terms.
Let $T_0 =T \smallsetminus f^{-1}(0)$ and $T^*_0= T^*\smallsetminus {f^*}^{-1}(0)$, so that $T_0$ and $T^*_0$ index the nonzero terms. Then (3.5) becomes \begin{equation} \begin{split} \sum_{(\alpha, \beta, \eta, \rho, \delta, \tau) \in T_0} f(\alpha, \beta, &\eta, \rho, \delta, \tau) =\\ &\sum_{(\alpha^*, \beta^*, \eta^*, \rho^*, \delta^*, \tau^*)\in T^*_0} f^*(\alpha^*, \beta^*, \eta^*, \rho^*, \delta^*, \tau^*). \end{split} \end{equation} \begin{lem} The natural embedding $\varphi$ from $T$ to $T^*$: $$\varphi (\alpha, \beta, \eta, \rho, \delta, \tau)= (\alpha, \beta^+, \eta^+, \rho, \delta^+, \tau^+)$$ induces a map $\varphi|_{T_0}$ from $T_0$ to $T^*_0$. Moreover $f=f^*\circ \varphi|_{T_0}$. \begin{displaymath} \xymatrix{ T \ar@{^{(}->}[rr]^\varphi& &T^*\\ T_0 \ar@{^{(}->}[u] \ar@{^{(}->}[rr]^{\varphi|_{T_0}} \ar@{^{(}->}[urr]^{\varphi|_{T_0}}& &\varphi(T_0) \ar@{^{(}->}[u] } \end{displaymath} \end{lem} \begin{proof} For all $(\alpha, \beta, \eta, \rho, \delta, \tau)\in T_0$, we show that $\beta$, $\eta$, $\delta$, and $\tau$ have large enough first parts so that we can apply Proposition 2.1 and Lemma 3.1 to the Littlewood-Richardson coefficients and the Kronecker coefficients appearing in the definition of $f$. Since $n \geq |\lambda| +|\mu|+\lambda_1+\mu_1+ 3h+ 2d$, we have $$\lambda[n]_1- \lambda[n]_2 \geq n-|\lambda|-\lambda_1 \geq |\mu|+\mu_1+ 3h+ 2d \geq h+d \hskip 2mm (=|\alpha|)$$ and $$\mu[n-d]_1- \mu[n-d]_2 \geq n-d- |\mu|-\mu_1 \geq |\lambda|+\lambda_1 + 3h+ d \geq h \hskip 2mm (=|\rho|).$$ Using Lemma 3.1 (1) (together with the symmetry $c_{\eta, \rho}^{\gamma}=c_{\rho, \eta}^{\gamma}$ of the Littlewood-Richardson coefficients for the second equality), we get $$c_{\alpha, \beta}^{\lambda[n]}= c_{\alpha, \beta^+}^{\lambda[n+1]} \quad \text{and} \hskip 5mm c_{\eta, \rho}^{\mu[n-d]}=c_{\eta^+, \rho}^{\mu[n+1-d]}.$$ As $\beta \subset \lambda[n]$, $|\overline{\beta}|\leq |\lambda| < n-d-h$ and $(\overline{\beta})_1\leq \lambda_1$. Similarly, we have $|\overline{\eta}|\leq |\mu| < n-d-h$ and $(\overline{\eta})_1\leq \mu_1$.
Since $\beta$ and $\eta$ are both partitions of $n-d-h$, they can be written as $\beta=\overline{\beta}[n-d-h]$ and $\eta=\overline{\eta}[n-d-h]$ respectively. They both have large first parts. More specifically, we have $$n-d-h\geq |\lambda|+|\mu|+\lambda_1+\mu_1+2h+d \geq |\overline{\beta}|+|\overline{\eta}|+(\overline{\beta})_1+(\overline{\eta})_1.$$ By Proposition 2.1, we have $$g_{\beta, \eta}^\delta = g_{\beta^+, \eta^+}^{\delta^+}.$$ It follows from Proposition 2.2 that $$|\overline{\delta}|+(\overline{\delta})_1\leq |\overline{\beta}|+|\overline{\eta}|+(\overline{\beta})_1+(\overline{\eta})_1,$$ for otherwise $g_{\beta, \eta}^\delta=0$, and hence $f=0$. Hence, $$|\delta|-\delta_1+\delta_2\leq |\lambda|+|\mu|+\lambda_1+\mu_1,$$ which gives us \begin{align*} \delta_1-\delta_2 \geq n-d-h-|\lambda|-|\mu|-\lambda_1-\mu_1\geq 2h+d\geq |\alpha|. \end{align*} Applying Lemma 3.1 (2), we get $$c_{\alpha, \delta}^\tau= c_{\alpha, \delta^+}^{\tau^+}.$$ Also, by the Littlewood-Richardson rule, we have $$\tau_2\leq \delta_2+|\alpha| \quad \text{and} \quad \tau_1\geq \delta_1.$$ So \begin{align*} \tau_1-\tau_2 \geq \delta_1-(\delta_2+|\alpha|) \geq 2h+d-h-d= h = |\rho|. \end{align*} Hence, by Lemma 3.1 (2), we get $$c_{\tau, \rho}^{\nu^-}= c_{\tau^+, \rho}^{\nu}.$$ So \begin{equation} f(\alpha, \beta, \eta, \rho, \delta, \tau)=f^*(\varphi (\alpha, \beta, \eta, \rho, \delta, \tau)) (\neq 0), \end{equation} \noindent which means $\varphi (T_0) \subset T^*_0$ and $f=f^*\circ \varphi$. \end{proof} \begin{proof}[Proof of Theorem 2.3] The map of Lemma 3.4 is reversible, as the map $(\alpha, \beta, \eta, \rho, \delta, \tau)\longrightarrow (\alpha, \beta^-,\eta^-, \rho, \delta^-, \tau^-)$ gives a well-defined injection from $T^*_0$ to $T_0$. So $\varphi\arrowvert_{T_0}$ is a bijection from $T_0$ to $T^*_0$. With this and (3.7), we prove (3.6), and hence the first part of Theorem 2.3.
To prove that the lower bound is where the stabilization begins, we just need to show that there is $\nu\vdash n+h$ with $\nu_1=\nu_2$ such that $a_{\lambda[n], \mu[n-d]}^\nu\neq 0$ when $n = |\lambda| +|\mu|+\lambda_1+\mu_1+ 3h+ 2d$. We use the formula (3.2) for $a_{\lambda[n], \mu[n-d]}^\nu$ (replace $\lambda$ and $\mu$ by $\lambda[n]$ and $\mu[n-d]$ respectively, and set $l=n+h$), and take $$\alpha=(a)=(d+h), \rho=(c)=(h),$$ $$\beta=\lambda[n]-\alpha=(n-|\lambda|-d-h, \lambda_1, \lambda_2, \dots)=(|\mu|+\lambda_1+\mu_1+2h+d, \lambda_1, \dots),$$ $$\eta=\mu[n-d]-\rho=(n-d-|\mu|-h, \mu_1, \mu_2, \dots)=(|\lambda|+\lambda_1+\mu_1+2h+d, \mu_1, \dots),$$ $$\delta=(\overline{\beta}+\overline{\eta})[n-d-h]=(n-d-h-|\overline{\beta}|-|\overline{\eta}|,\beta_2+\eta_2, \beta_3+\eta_3, \dots)=(\lambda_1+\mu_1+2h+d,\lambda_1+\mu_1, \dots),$$ $$\tau=(\delta_1, \delta_2+|\alpha|, \delta_3, \dots)=(\lambda_1+\mu_1+2h+d, \lambda_1+\mu_1+d+h, \lambda_2+\mu_2, \dots),$$ $$\nu=(\tau_1, \tau_2+|\rho|, \dots)=(\lambda_1+\mu_1+2h+d, \lambda_1+\mu_1+2h+d, \lambda_2+\mu_2, \dots).$$ By the Pieri Rule, $1=c_{\alpha, \beta}^{\lambda[n]}=c_{\eta, \rho}^{\mu[n-d]}=c_{\alpha, \delta}^\tau=c_{\tau, \rho}^\nu$, as $\alpha$ and $\rho$ have only one part each. Since $|\overline{\delta}|=|\overline{\beta}|+|\overline{\eta}|$, we have $g_{\beta, \eta}^\delta=\overline{g}_{\overline{\beta}, \overline{\eta}}^{\overline{\delta}}=c_{\overline{\beta}, \overline{\eta}}^{\overline{\delta}}$ (note that $\overline{\delta}=\overline{\beta}+\overline{\eta}$), which is also nonzero due to the Littlewood-Richardson Rule. So $a_{\lambda[n],\mu[n-d]}^{\nu}\neq 0$ and $\nu_1=\nu_2=\lambda_1+\mu_1+2h+d$, which proves that $n = |\lambda| +|\mu|+\lambda_1+\mu_1+ 3h+ 2d$ is where the stabilization begins. \end{proof} When $n<|\lambda| +|\mu|+\lambda_1+\mu_1+ 3h+ 2d$, Lemma 3.4 is not true for some $\nu$.
However, using Remark 3.1 and Proposition 3.2, we know that the map $\varphi$ in Lemma 3.4 induces an injection from $T_0$ to $T^*_0$ with $f\leq f^*\circ \varphi|_{T_0}$. This gives us the following corollary: \begin{cor} Given three partitions $\lambda$, $\nu$, and $\mu$ and two nonnegative integers $d$ and $h$, the sequence $a_{\lambda[n],\mu[n-d]}^{\nu[n+h]}$ is weakly increasing. \end{cor} \section{Example} \label{example} We give an example of stabilization of the low degree terms of the Heisenberg product. Let us take $\lambda=(1,1)$, $\mu=(1)$. We check the stability of the two lowest degree components of $s_{(1,1)[n]}\sharp s_{(1)[n-1]}$: $s_{1,1,1}\sharp s_{1,1}=(s_{2,1,1,1}+s_{1,1,1,1,1})+\color{red}{(s_{3,1}+s_{2,2}+2s_{2,1,1}+s_{1,1,1,1})}\color{black}+\color{blue}{(s_3+s_{2,1})}$ $s_{2,1,1}\sharp s_{2,1}= (s_{4,2,1}+s_{4,1,1,1}+s_{3,3,1}+s_{3,2,2}+2s_{3,2,1,1}+s_{3,1,1,1,1}+s_{2,2,2,1}+s_{2,2,1,1,1})+\\ \color{red}{(s_5+5s_{4,1}+7s_{3,2}+9s_{3,1,1}+8s_{2,2,1}+7s_{2,1,1,1}+2s_{1,1,1,1,1})}\color{black}+\color{blue}{(s_4+3s_{3,1}+2s_{2,2}+3s_{2,1,1}+}\\ {s_{1,1,1,1})}$ The lowest degree component $(s_{(1,1)[n]}\sharp s_{(1)[n-1]})_n$ for $n\geq 5$: $(s_{3,1,1}\sharp s_{3,1})_5= \color{blue}s_{5}+3s_{4,1}+4s_{3,2}+4s_{3,1,1}+4s_{2,2,1}+3s_{2,1,1,1}+s_{1,1,1,1,1}$ $(s_{4,1,1}\sharp s_{4,1})_6=\color{blue} s_6+ 3s_{5,1}+ 4s_{4,2}+ 4s_{4,1,1}+ 2s_{3,3}+ 5s_{3,2,1}+ 3s_{3,1,1,1}+ s_{2,2,2}+ 2s_{2,2,1,1}+ s_{2,1,1,1,1}$ $(s_{5,1,1}\sharp s_{5,1})_7 =\color{blue}s_7+ 3s_{6,1}+4s_{5,2}+4s_{5,1,1}+2s_{4,3}+5s_{4,2,1}+3s_{4,1,1,1}+s_{3,3,1}+s_{3,2,2}+2s_{3,2,1,1}+s_{3,1,1,1,1}$ $(s_{6,1,1}\sharp s_{6,1})_8 =\color{blue}s_8+ 3s_{7,1}+4s_{6,2}+4s_{6,1,1}+2s_{5,3}+5s_{5,2,1}+3s_{5,1,1,1}+s_{4,3,1}+s_{4,2,2}+2s_{4,2,1,1}+s_{4,1,1,1,1}$ $\cdots \cdots$ We create a table for this: \begin{table}[htb] \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|}\hline \backslashbox{$n$}{$s_\nu$} & $n$ & $n-1$ & $n-2$ & $n-2$ & $n-3$ & $n-3$ & $n-3$ & $n-4$ & $n-4$ & $n-4$ & $n-4$\\ & & 
$1$ & $2$ & $1$ & $3$ & $2$ & $1$ & $3$ & $2$ & $2$ & $1$\\ & & & & $1$ & & $1$ & $1$ & $1$ & $2$ & $1$ & $1$\\ & & & & & & & $1$ & & & $1$ & $1$\\ & & & & & & & & & & & $1$\\\hline $3$& \cellcolor{yellow}$\color{red}1$ & $1$ & & & & & & & & & \\\hline $4$& $\color{red}1$ & $\cellcolor{yellow}\color{red}3$ & $2$ & $3$ & & & $1$ & & & & \\\hline $5$& $\color{red}1$ & $\color{red}3$ & $\color{red}\cellcolor{yellow}4$ & $\color{red}\cellcolor{yellow}4$ & $\color{red}2$ & $\color{red}5$ & $\color{red}\cellcolor{yellow}3$ & & & & $\color{red}1$\\\hline $6$& $\color{red}1$ & $\color{red}3$ & $\color{red}4$ & $\color{red}4$ & $\color{red}\cellcolor{yellow}2$ & $\color{red}\cellcolor{yellow}5$ & $\color{red}3$ & & $\color{red}\cellcolor{yellow}1$ & $\color{red}\cellcolor{yellow}2$ & $\color{red}\cellcolor{yellow}1$\\\hline $7$& $\color{red}1$ & $\color{red}3$ & $\color{red}4$ & $\color{red}4$ & $\color{red}2$ & $\color{red}5$ & $\color{red}3$ & $\color{red}\cellcolor{yellow}1$ & $\color{red}1$ & $\color{red}2$ & $\color{red}1$\\\hline $8$& $\color{red}1$ & $\color{red}3$ & $\color{red}4$ & $\color{red}4$ & $\color{red}2$ & $\color{red}5$ & $\color{red}3$ & $\color{red}1$ & $\color{red}1$ & $\color{red}2$ & $\color{red}1$\\\hline \end{tabular} \caption{$(s_{(1,1)[n]}\sharp s_{(1)[n-1]})_n$ for $3\leq n\leq 8$.} \label{StableTable} \end{table} The first column gives the values of $n$. The first row lists all the terms which may appear in the component, and we use the indexing partitions to denote the corresponding Schur functions. We color a coefficient red if it reaches the stable value. From Table 1, we can see that different terms stabilize at different steps; we give an estimate for this in the next section. The stabilization of the lowest degree component happens at $n=7$ (using Theorem 2.3 ($d=1$ and $h=0$), the stabilization begins at $n= 2+1+1+1+2=7$).
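The bound of Theorem 2.3 that produces these values is a simple function of $(\lambda,\mu,d,h)$; an illustrative Python sketch (the helper name `stabilization_bound` is ours, and nonempty partitions are assumed):

```python
def stabilization_bound(lam, mu, d, h):
    """Theorem 2.3: the degree-(n+h) component of V_{lam[n]} # V_{mu[n-d]}
    stabilizes exactly from n = |lam| + |mu| + lam_1 + mu_1 + 3h + 2d on."""
    return sum(lam) + sum(mu) + lam[0] + mu[0] + 3 * h + 2 * d
```

For $\lambda=(1,1)$, $\mu=(1)$, and $d=1$, the lowest component ($h=0$) stabilizes at $n=7$ and the second lowest ($h=1$) at $n=10$, matching the computations in this section.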
When $n\geq 7$, we have \begin{eqnarray}\label{lowest} \begin{split} (s_{n-2,1,1}&\sharp s_{n-2,1})_{n}=s_{n}+3s_{n-1,1}+4s_{n-2,2}+4s_{n-2,1,1}+2s_{n-3,3}\\ &+5s_{n-3,2,1}+3s_{n-3,1,1,1}+s_{n-4,3,1}+s_{n-4,2,2}+2s_{n-4,2,1,1}\\ &+s_{n-4,1,1,1,1}. \end{split} \end{eqnarray} The second lowest degree component $(s_{(1,1)[n]}\sharp s_{(1)[n-1]})_{n+1}$ for $n\geq 5$: $(s_{3,1,1}\sharp s_{3,1})_6=\color{red} s_6+7s_{5,1}+13s_{4,2}+16s_{4,1,1}+7s_{3,3}+24s_{3,2,1}+16s_{3,1,1,1}+7s_{2,2,2}+13s_{2,2,1,1}+7s_{2,1,1,1,1}+s_{1,1,1,1,1,1}$ $(s_{4,1,1}\sharp s_{4,1})_7=\color{red} s_7+ 7s_{6,1}+ 15s_{5,2}+17s_{5,1,1}+13s_{4,3}+33s_{4,2,1}+19s_{4,1,1,1}+17s_{3,3,1}+16s_{3,2,2}+26s_{3,2,1,1}+10s_{3,1,1,1,1}+8s_{2,2,2,1}+7s_{2,2,1,1,1}+2s_{2,1,1,1,1,1}$ $(s_{5,1,1}\sharp s_{5,1})_8 = \color{red} s_8+ 7s_{7,1}+15s_{6,2}+17s_{6,1,1}+15s_{5,3}+34s_{5,2,1}+19s_{5,1,1,1}+6s_{4,4}+26s_{4,3,1}+18s_{4,2,2}+29s_{4,2,1,1}+10s_{4,1,1,1,1}+10s_{3,3,2}+13s_{3,3,1,1}+12s_{3,2,2,1}+10s_{3,2,1,1,1}+2s_{3,1,1,1,1,1}+s_{2,2,2,2}+2s_{2,2,2,1,1}+s_{2,2,1,1,1,1}$ $(s_{6,1,1}\sharp s_{6,1})_9 =\color{red} s_9+7s_{8,1}+15s_{7,2}+17s_{7,1,1}+15s_{6,3}+34s_{6,2,1}+19s_{6,1,1,1}+8s_{5,4}+27s_{5,3,1}+18s_{5,2,2}+29s_{5,2,1,1}+10s_{5,1,1,1,1}+9s_{4,4,1}+12s_{4,3,2}+16s_{4,3,1,1}+12s_{4,2,2,1}+10s_{4,2,1,1,1}+2s_{4,1,1,1,1,1}+s_{3,3,3}+4s_{3,3,2,1}+3s_{3,3,1,1,1}+s_{3,2,2,2}+2s_{3,2,2,1,1}+s_{3,2,1,1,1,1}$ $(s_{7,1,1}\sharp s_{7,1})_{10} = \color{red} s_{10}+ 7s_{9,1}+15s_{8,2}+17s_{8,1,1}+15s_{7,3}+34s_{7,2,1}+19s_{7,1,1,1}+8s_{6,4}+27s_{6,3,1}+18s_{6,2,2}+29s_{6,2,1,1}+10s_{6,1,1,1,1}+2s_{5,5}+10s_{5,4,1}+12s_{5,3,2}+16s_{5,3,1,1}+12s_{5,2,2,1}+10s_{5,2,1,1,1}+2s_{5,1,1,1,1,1}+2s_{4,4,2}+3s_{4,4,1,1}+s_{4,3,3}+4s_{4,3,2,1}+3s_{4,3,1,1,1}+s_{4,2,2,2}+2s_{4,2,2,1,1}+s_{4,2,1,1,1,1}$ $(s_{8,1,1}\sharp s_{8,1})_{11} =\color{red}
s_{11}+7s_{10,1}+15s_{9,2}+17s_{9,1,1}+15s_{8,3}+34s_{8,2,1}+19s_{8,1,1,1}+8s_{7,4}+27s_{7,3,1}+18s_{7,2,2}+29s_{7,2,1,1}+10s_{7,1,1,1,1}+2s_{6,5}+10s_{6,4,1}+12s_{6,3,2}+16s_{6,3,1,1}+12s_{6,2,2,1}+10s_{6,2,1,1,1}+2s_{6,1,1,1,1,1}+s_{5,5,1}+2s_{5,4,2}+3s_{5,4,1,1}+s_{5,3,3}+4s_{5,3,2,1}+3s_{5,3,1,1,1}+s_{5,2,2,2}+2s_{5,2,2,1,1}+s_{5,2,1,1,1,1}$ $(s_{9,1,1}\sharp s_{9,1})_{12} = \color{red} s_{12}+7s_{11,1}+15s_{10,2}+17s_{10,1,1}+15s_{9,3}+34s_{9,2,1}+19s_{9,1,1,1}+8s_{8,4}+27s_{8,3,1}+18s_{8,2,2}+29s_{8,2,1,1}+10s_{8,1,1,1,1}+2s_{7,5}+10s_{7,4,1}+12s_{7,3,2}+16s_{7,3,1,1}+12s_{7,2,2,1}+10s_{7,2,1,1,1}+2s_{7,1,1,1,1,1}+s_{6,5,1}+2s_{6,4,2}+3s_{6,4,1,1}+s_{6,3,3}+4s_{6,3,2,1}+3s_{6,3,1,1,1}+s_{6,2,2,2}+2s_{6,2,2,1,1}+s_{6,2,1,1,1,1}$ $\cdots \cdots$ This computation shows that the second lowest degree component of $s_{(1,1)[n]}\sharp s_{(1)[n-1]}$ stabilizes at $n=10$ (using Theorem 2.3 ($d=1$ and $h=1$), the stabilization begins at $n= 2+1+1+1+3+2=10$). When $n\geq 10$, we have \begin{eqnarray}\label{second} \begin{split} (s_{n-2,1,1}&\sharp s_{n-2,1})_{n+1}=s_{n+1}+7s_{n,1}+15s_{n-1,2}+17s_{n-1,1,1}+15s_{n-2,3}\\ &+34s_{n-2,2,1}+19s_{n-2,1,1,1}+8s_{n-3,4}+27s_{n-3,3,1}+18s_{n-3,2,2}\\& +29s_{n-3,2,1,1}+10s_{n-3,1,1,1,1}+2s_{n-4,5}+10s_{n-4,4,1}\\& +12s_{n-4,3,2}+16s_{n-4,3,1,1}+12s_{n-4,2,2,1}+10s_{n-4,2,1,1,1}\\& +2s_{n-4,1,1,1,1,1}+s_{n-5,5,1}+2s_{n-5,4,2}+3s_{n-5,4,1,1}+s_{n-5,3,3}\\& +4s_{n-5,3,2,1}+3s_{n-5,3,1,1,1}+s_{n-5,2,2,2}+2s_{n-5,2,2,1,1}\\& +s_{n-5,2,1,1,1,1}. \end{split} \end{eqnarray} \section {Stable Aguiar Coefficients} \label{StableAC} Given partitions $\lambda$, $\mu$, and $\nu$, Theorem 2.3 tells us that the sequence $\{a_{\overline{\lambda}[n+|\lambda|],\overline{\mu}[n+|\mu|]}^{\overline{\nu}[n+|\nu|]}\}_{n=0}^\infty$ is eventually constant. We write $\overline{a}_{\lambda,\mu}^\nu$ for that constant value, and call it a stable Aguiar coefficient. 
By the definition of the stable Aguiar coefficients, we have $$\overline{a}_{\lambda,\mu}^\nu=\overline{a}_{\overline{\lambda}[n+|\lambda|],\overline{\mu}[n+|\mu|]}^{\overline{\nu}[n+|\nu|]}, \hskip 3mm \text{for all nonnegative integers} \hskip 2mm n.$$ The reason we restrict $n$ to nonnegative integers is that $\lambda[n+|\lambda|]$, $\mu[n+|\mu|]$ and $\nu[n+|\nu|]$ need to be partitions. But we can remove this restriction if we extend the definition of $\overline{a}_{\lambda,\mu}^\nu$ to the case where $\lambda$, $\mu$, and $\nu$, starting from the second position, are finite weakly decreasing sequences of nonnegative integers, i.e. $\lambda_2\geq \lambda_3\geq \lambda_4\geq \cdots\geq 0$, $\mu_2\geq \mu_3\geq \mu_4\geq \cdots\geq 0$, and $\nu_2\geq \nu_3\geq \nu_4\geq \cdots\geq 0$. Then we have $$\overline{a}_{\lambda,\mu}^\nu=\overline{a}_{\overline{\lambda}[n+|\lambda|],\overline{\mu}[n+|\mu|]}^{\overline{\nu}[n+|\nu|]}, \hskip 3mm \text{for all integers} \hskip 2mm n.$$ Recall the Jacobi-Trudi determinant formula $$s_\lambda= \det (h_{\lambda_j+i-j})_{i,j},$$ where $h_k$ is the complete homogeneous symmetric function, and we set $h_k=0$ when $k$ is negative and $h_0=1$. We no longer require $\lambda$ to be a partition; $\lambda$ can be any finite integer sequence. Then the Jacobi-Trudi determinant gives $0$ or $\pm 1$ times some Schur function. Murnaghan \cite{F-1938} pointed out that the reduced Kronecker coefficients determine the Kronecker product. Analogously, the stable Aguiar coefficients also determine the Heisenberg product, even for small values of $n$. Consider the lowest degree component of $s_{2,1,1}\sharp s_{2,1}$ as an example. The formula \eqref{lowest} gives us (let $n=4$) \begin{equation} \begin{split} (s_{2,1,1}\sharp s_{2,1})_4&=s_{4}+3s_{3,1}+4s_{2,2}+4s_{2,1,1}+2s_{1,3}+5s_{1,2,1}\\&+3s_{1,1,1,1}+s_{0,3,1}+s_{0,2,2}+2s_{0,2,1,1}+s_{0,1,1,1,1}.
\end{split} \end{equation} Using the Jacobi-Trudi determinant, we have $$s_{1,3}=-s_{2,2}, \hskip 3mm s_{0,3,1}=-s_{2,1,1}, \hskip 3mm s_{0,2,1,1}=-s_{1,1,1,1}, \hskip 3mm \text{and}$$ $$s_{1,2,1}=s_{0,2,2}=s_{0,1,1,1,1}=0.$$ So (5.1) gives us $$(s_{2,1,1}\sharp s_{2,1})_4=s_4+3s_{3,1}+2s_{2,2}+3s_{2,1,1}+s_{1,1,1,1},$$ which coincides with the result we had in Section 4. This example shows how to recover the Aguiar coefficients from the stable ones. \begin{thm} Let $\lambda, \mu$, and $\nu$ be partitions with $|\nu|\geq |\lambda|\geq |\mu|$. Then \begin{equation} a_{\lambda,\mu}^\nu=\sum\limits_{i=1}^{4|\nu|-|\lambda|-|\mu|} (-1)^{i-1}\overline{a}_{\lambda,\mu}^{\nu^{\dag i}}, \end{equation} where $\nu^{\dag i}= (\nu_i-i+1, \nu_1+1, \nu_2+1, \dotsc, \nu_{i-1}+1, \nu_{i+1}, \nu_{i+2}, \dotsc)$. \end{thm} \begin{proof} From Theorem 2.3, we know that when $n\geq |\overline{\lambda}|+|\overline{\mu}|+\lambda_2+\mu_2+3(|\nu|-|\lambda|)+2(|\lambda|-|\mu|)-|\lambda|$, the Aguiar coefficients of $(s_{\overline{\lambda}[n+|\lambda|]}\sharp s_{\overline{\mu}[n+|\mu|]})_{n+|\nu|}$ stabilize, i.e. \begin{equation*} \begin{split} (s_{\overline{\lambda}[n+|\lambda|]}\sharp s_{\overline{\mu}[n+|\mu|]})_{n+|\nu|}&=\sum\limits_{\tau\vdash n+|\nu|} a_{\overline{\lambda}[n+|\lambda|],\overline{\mu}[n+|\mu|]}^\tau s_\tau \\&=\sum\limits_{\tau\vdash n+|\nu|} \overline{a}_{\overline{\lambda}[n+|\lambda|],\overline{\mu}[n+|\mu|]}^\tau s_\tau \\&=\sum\limits_{\tau\vdash n+|\nu|} \overline{a}_{\lambda,\mu}^{\overline{\tau}[-n+|\tau|]} s_\tau. \end{split} \end{equation*} So \begin{equation} \begin{split} (s_\lambda\sharp s_\mu)_{|\nu|}=\sum\limits_{\tau\vdash n+|\nu|} \overline{a}_{\lambda,\mu}^{\overline{\tau}[-n+|\tau|]} s_{\overline{\tau}[-n+|\tau|]}. \end{split} \end{equation} To get $a_{\lambda,\mu}^\nu$ from (5.3), we determine which $s_{\overline{\tau}[-n+|\tau|]}$'s would give us $\pm s_\nu$. Suppose the length of $\tau$ is $l$.
From the Jacobi-Trudi formula, we know that $s_{\overline{\tau}[-n+|\tau|]} = \pm s_\nu$ if and only if the length of $\nu$ is at most $l$ and $(\tau_1-n, \tau_2, \tau_3, \dotsc, \tau_l) + (l-1, l-2, \dotsc, 0)$ is a permutation of $(\nu_1, \nu_2, \dotsc, \nu_l)+(l-1, l-2, \dotsc, 0)$. This happens when there is an $i$ ($1\leq i\leq l$) such that $$\tau_1-n+(l-1)=\nu_i+l-i,$$ $$\tau_j+(l-j)=\nu_{j-1}+(l-j+1), \hskip 2mm j=2,3,4,\dotsc, i,$$ $$\tau_j+(l-j)=\nu_j+(l-j), \hskip 2mm j=i+1, i+2, \dotsc, l,$$ which is equivalent to $$\overline{\tau}[-n+|\tau|] = \nu^{\dag i},$$ and when this happens, $$s_{\overline{\tau}[-n+|\tau|]}= (-1)^{i-1}s_\nu.$$ So the coefficient of $s_\nu$ in $(s_\lambda\sharp s_\mu)_{|\nu|}$ is \begin{equation} a_{\lambda,\mu}^\nu=\sum\limits_{i=1}^{l} (-1)^{i-1}\overline{a}_{\lambda,\mu}^{\nu^{\dag i}}. \end{equation} Taking $n=3|\nu|-|\lambda|-|\mu|\geq |\overline{\lambda}|+|\overline{\mu}|+\lambda_2+\mu_2+3(|\nu|-|\lambda|)+2(|\lambda|-|\mu|)-|\lambda|$ and using $l\leq |\tau|= n+|\nu|$, (5.4) can be written as $$a_{\lambda,\mu}^\nu=\sum\limits_{i=1}^{4|\nu|-|\lambda|-|\mu|} (-1)^{i-1}\overline{a}_{\lambda,\mu}^{\nu^{\dag i}}.$$ \end{proof} Let us consider an example. From Section~\ref{example}, we know that $a_{(2,1,1), (2,1)}^{(2,2)}=2$. On the other hand, using the formula (5.2), we have \begin{equation} \label{key} \begin{split} a_{(2,1,1), (2,1)}^{(2,2)}&=\overline{a}_{(2,1,1), (2,1)}^{(2,2)}-\overline{a}_{(2,1,1), (2,1)}^{(1,3)}+\overline{a}_{(2,1,1), (2,1)}^{(-2,3,3)} \\&-\overline{a}_{(2,1,1), (2,1)}^{(-3,3,3,1)}+\cdots.
\end{split} \end{equation} From \eqref{lowest}, we have $$\overline{a}_{(2,1,1), (2,1)}^{(2,2)}=4, \hskip 4mm \overline{a}_{(2,1,1), (2,1)}^{(1,3)}=2,$$ and $$\overline{a}_{(2,1,1), (2,1)}^{(2,2)^{\dag i}}=0, \hskip 2mm \text{when} \hskip 2mm i\geq 3.$$ So \eqref{key} gives us $$a_{(2,1,1), (2,1)}^{(2,2)}=4-2=2.$$ Now we use Theorem 5.1 to estimate when $a_{\lambda[n], \mu[n-d]}^{\nu[n+h]}$ stabilizes for given partitions $\lambda$, $\mu$, and $\nu$ and nonnegative integers $d$ and $h$. \begin{cor} The Aguiar coefficient $a_{\lambda[n], \mu[n-d]}^{\nu[n+h]}$ stabilizes when $n \geq \frac{1}{2}(|\lambda|+|\mu|+|\nu|+\lambda_1+\mu_1+\nu_1-1)+h+d$. \end{cor} \begin{proof} The formula (5.2) gives us \begin{equation} \label{specialstable} a_{\lambda[n], \mu[n-d]}^{\nu[n+h]}=\sum\limits_{i=1}^{2n+4h+d} (-1)^{i-1}\overline{a}_{\lambda[n],\mu[n-d]}^{{\nu[n+h]}^{\dag i}}. \end{equation} Thus $a_{\lambda[n], \mu[n-d]}^{\nu[n+h]}$ reaches the stable value when $\overline{a}_{\lambda[n],\mu[n-d]}^{{\nu[n+h]}^{\dag i}}=0$ for all $i\geq 2$. By Theorem 2.3, $(s_{\lambda[n]}\sharp s_{\mu[n-d]})_{n+h}$ stabilizes at $n=|\lambda|+|\mu|+\lambda_1+\mu_1+3h+2d =:m$, so $$\overline{a}_{\lambda[n],\mu[n-d]}^{{\nu[n+h]}^{\dag i}}=a_{\lambda[m], \mu[m-d]}^{\overline{{\nu[n+h]}^{\dag i}}[m+h]}.$$ For $i\geq 2$, we have \begin{equation*} \begin{split} \overline{{\nu[n+h]}^{\dag i}}[m+h]=&\overline{(\nu_{i-1}-i+1, n+h-|\nu|+1, \nu_1+1, \nu_2+1, \dotsc,\nu_{i-2}+1, \nu_{i}, }\\ &\hskip 88mm \overline{\nu_{i+1}, \dotsc)}[m+h]\\ =&\overline{(\nu_{i-1}-i+1+m-n, n+h-|\nu|+1, \nu_1+1, \nu_2+1, \dotsc,}\\ &\hskip 80mm \overline{\nu_{i-2}+1,\nu_{i}, \nu_{i+1}, \dotsc)}. \end{split} \end{equation*} When $n\geq \frac{1}{2}(|\lambda|+|\mu|+|\nu|+\lambda_1+\mu_1+\nu_1-1)+h+d$, we have $$n+h-|\nu|+1>\nu_{i-1}-i+1+m-n, \hskip 3mm \text{for all} \hskip 3mm i\geq 2.$$ So $a_{\lambda[m], \mu[m-d]}^{\overline{{\nu[n+h]}^{\dag i}}[m+h]}=0$ for all $i\geq 2$, which proves the corollary.
\end{proof} We return to Table 1 and color yellow the cells corresponding to the lower bounds obtained from Corollary 5.2. Comparing the yellow cells with the red numbers, we see that, in this case, the lower bounds are exactly the places where the stabilization of the Aguiar coefficients begins, except for $a_{(n-2,1,1), (n-2,1)}^{(n-3,3)}$, $a_{(n-2,1,1), (n-2,1)}^{(n-3,2,1)}$, and $a_{(n-2,1,1), (n-2,1)}^{(n-4,1,1,1,1)}$. \bibliographystyle{amsplain}
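The straightening identities used above (e.g. $s_{1,3}=-s_{2,2}$ and $s_{1,2,1}=0$) can be computed mechanically from the Jacobi-Trudi determinant: add the staircase $(l-1,\dots,1,0)$ to the index sequence, return $0$ if two shifted entries coincide, and otherwise record the sign of the permutation sorting the shifted sequence. A minimal Python sketch of this rule (the function name is ours, not from the paper):

```python
def straighten(seq):
    """Return (sign, partition) with s_seq = sign * s_partition,
    or (0, None) if the Jacobi-Trudi determinant vanishes."""
    l = len(seq)
    staircase = [l - 1 - i for i in range(l)]          # delta = (l-1, ..., 1, 0)
    shifted = [s + d for s, d in zip(seq, staircase)]  # seq + delta
    if len(set(shifted)) < l:                          # repeated row => det = 0
        return 0, None
    # sign = parity of the permutation sorting `shifted` into decreasing order
    inversions = sum(1 for i in range(l) for j in range(i + 1, l)
                     if shifted[i] < shifted[j])
    sign = (-1) ** inversions
    part = [v - d for v, d in zip(sorted(shifted, reverse=True), staircase)]
    while part and part[-1] == 0:                      # drop trailing zeros
        part.pop()
    return sign, tuple(part)

# identities used in Section 5
print(straighten((1, 3)))        # (-1, (2, 2))
print(straighten((0, 3, 1)))     # (-1, (2, 1, 1))
print(straighten((0, 2, 1, 1)))  # (-1, (1, 1, 1, 1))
print(straighten((1, 2, 1)))     # (0, None)
```

For a genuine partition the function simply returns sign $+1$ and the partition itself, so it can be applied uniformly to every index appearing in formula (5.2).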
https://arxiv.org/abs/1701.03203
Stability of the Heisenberg Product on Symmetric Functions
The Heisenberg product is an associative product defined on symmetric functions which interpolates between the usual product and the Kronecker product. In 1938, Murnaghan discovered that the Kronecker product of two Schur functions stabilizes. We prove an analogous result for the Heisenberg product of Schur functions.
https://arxiv.org/abs/1207.3468
Minimal Convex Decompositions
Let $P$ be a set of $n$ points on the plane in general position. We say that a set $\Gamma$ of convex polygons with vertices in $P$ is a convex decomposition of $P$ if: Union of all elements in $\Gamma$ is the convex hull of $P,$ every element in $\Gamma$ is empty, and for any two different elements of $\Gamma$ their interiors are disjoint. A minimal convex decomposition of $P$ is a convex decomposition $\Gamma'$ such that for any two adjacent elements in $\Gamma'$ its union is a non convex polygon. It is known that $P$ always has a minimal convex decomposition with at most $\frac{3n}{2}$ elements. Here we prove that $P$ always has a minimal convex decomposition with at most $\frac{10n}{7}$ elements.
\section{Introduction} Let $P_n$ denote a set of $n$ points on the plane in general position. We denote by $Conv(P_n)$ the convex hull of $P_n$ and by $c$ the number of its vertices, and given a polygon $\alpha$ we denote by $\alpha^o$ its interior. We say that a set $\Gamma=\{\gamma_1,\gamma_2,...,\gamma_k\}$ of $k$ convex polygons with vertices in $P_n$ is a {\em convex decomposition} of $P_n$ if: (C1) Every $\gamma_i \in \Gamma$ is empty, that is, $P_n \cap \gamma_i^o = \emptyset$ for $i=1,2,...,k.$ (C2) For every two different $\gamma_i,$ $\gamma_j \in \Gamma,$ $\gamma_i^o \cap \gamma_j^o = \emptyset.$ (C3) $\gamma_1 \cup \gamma_2 \cup ... \cup \gamma_k = Conv(P_n)$. In \cite{openproblems} it was conjectured that for every $P_n$ there is a convex decomposition with at most $n+1$ elements. This was disproved in \cite{aichholzer} by exhibiting an $n$--point set such that every convex decomposition has at least $n+2$ elements. Later in this direction, a point set $P_n$ was given in \cite{garcia} on which every convex decomposition has at least $\frac{11n}{10}$ elements. We are interested in convex decompositions of $P_n$ with as few elements as possible. A {\em triangulation} of $P_n$ is a convex decomposition $T = \{t_1,t_2,...,t_k\}$ in which every $t_i$ is a triangle. In \cite{simflippingedges} it is proved that any triangulation $T$ of $P_n$ has a set $F$ of at least $\frac{n}{6}$ edges such that removing them produces $|F|$ convex quadrilaterals with disjoint interiors. So $\Gamma = T \setminus F$ is a convex decomposition yielding the bound $|\Gamma| \leq \frac{11n}{6}-c-2.$ We have the following definition. \begin{definition} Let $\Gamma$ be a convex decomposition of $P_n.$ If the union of any two adjacent elements in $\Gamma$ is a nonconvex polygon, then $\Gamma$ will be called a {\em minimal convex decomposition.} \end{definition} In \cite{descconv} it is shown that any given set $P_n$ always has a minimal convex decomposition with at most $\frac{3n}{2}-c$ elements.
Here we improve this bound by giving a minimal convex decomposition of $P_n$ with at most $\frac{10n}{7}-c$ elements. \section{Minimal Convex Decompositions} Let $p_1=(x_1,y_1)$ be the element in $P_n$ with the lowest $y$--coordinate. If there are two points with the same $y$--coordinate, we take $p_1$ as the element with the smallest $x$--coordinate. We label every $p \in P_n\setminus \{p_1\}$ according to the angle $\theta$ between the line $y = y_1$ and the line $\overline{p_1 p}.$ The point $p$ will be labeled $p_{i+1}$ if it has the $i$--th smallest angle $\theta,$ see Figure \ref{my_proof_03}(a). For $i = 3,4,...,n-1,$ we say $p_i$ is negative, labeled $-,$ if $p_i \in Conv( \{p_1,p_{i-1},p_{i+1}\})^o.$ Otherwise we say $p_i$ is positive, labeled $+.$ See Figure \ref{my_proof_03}(b). \begin{figure}[h] \centering \psfrag{pn}{$p_n$} \psfrag{pn-1}{$p_{n-1}$} \psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{p3}{$p_3$} \psfrag{(a)}{(a)} \psfrag{(b)}{(b)} \psfrag{+}{$+$} \psfrag{-}{$-$} \includegraphics[width =0.8\textwidth]{my_proof_03.eps} \caption{Labeling elements of $P_n.$} \label{my_proof_03} \end{figure} Let ${A}$ and ${B}$ be the subsets of $P_n$ containing all positive and all negative elements, respectively. We divide ${A}$ into subsets of consecutive points as follows: If $p_3 \in {A},$ we define $A_1 = \{p_3,...,p_{3+r-1}\}$ as the subset of $r$ consecutive positive points, where $p_{r+3}$ is negative or $p_{r+3} = p_n.$ If $p_3 \not \in {A}$ then $A_1 = \emptyset.$ Suppose that $p_{n-1} \in {A}.$ For $i \geq 2$ let $A_i' = {A} \setminus \left(A_1 \cup ... \cup A_{i-1} \right),$ and let $A_i = \{p_j,p_{j+1},...,p_{j+r-1}\},$ where $r\geq 1,$ $p_j$ has the smallest index in $A_i',$ and $p_{j+r}$ is negative or $p_{j+r} = p_{n}.$ Let $k$ be the number of such $A_i$ sets obtained.
If $p_{n-1} \not \in {A},$ we make $A_{k-1}$ the block containing the element in ${A}$ with the highest label, and then $A_k = \emptyset.$ In an analogous way we partition ${B}$ into $B_1$, $B_2$, ..., $B_{k-1}.$ Let $V$ be the polygon with vertex set ${A}\cup \{p_1,p_2,p_n\},$ and let $U'$ be the set of at most $c-2$ regions $Conv(P_n) \setminus V.$ We call $U$ the vertex set of $U'.$ We obtain a minimal convex decomposition $\Gamma$ of $P_n$ induced by polygons in $V$ and $U$ in the following way: (1) If $A_j=\{p_i,p_{i+1},...,p_{i+r-1}\}$, we make ${\cal A}_j = A_j \cup \{p_1,p_{i-1},p_{i+r}\}$. ${\cal A}_j$ is the vertex set of an empty convex $(|A_j|+3)$--gon. In case that $A_1 = \emptyset$ (or $A_k = \emptyset$) then ${\cal A}_1 = \{p_1,p_{2},p_{3}\}$ (${\cal A}_k = \{p_1,p_{n-1},p_{n}\}$). There are $k$ such polygons. (2) If $B_j=\{p_i,p_{i+1},...,p_{i+r-1}\},$ we make ${\cal B}_j = B_j\cup \{p_{i-1},p_{i+r}\}$. ${\cal B}_j$ is the vertex set of an empty convex $(|B_j|+2)$--gon. There are $k-1$ of them. (3) Every $B_j = \{p_i,p_{i+1},...,p_{i+r-1}\}$ induces $|B_j|-1$ triangles ${\triangle} p_1 p_m p_{m+1},$ for $m = i,i+1,...,i+r-2.$ There are $|B_1| - 1 + |B_2| - 1 + ... + |B_{k-1}|-1$ of these triangles. Let $T_B$ be the set of them. (4) $U'$ can be subdivided into $|A_1|+|A_2|+...+|A_k| - (c-3)$ triangles with vertices in $U$ satisfying (C1) and (C2). Let $T_U$ be the set of such triangles. Hence, $\Gamma = \cup_{i} ({\cal A}_i \cup {\cal B}_i) \cup T_U \cup T_B$ is a convex decomposition of $P_n.$ See Figure \ref{convdescAB}. We have that $|\Gamma| = k + k-1 + |T_B|+|T_U|,$ {\em i.e.} \begin{equation}\label{cardinalidad} |\Gamma| = n+k-c.
\end{equation} \begin{figure}[h] \centering \psfrag{A1}{$A_1$} \psfrag{A2}{$A_2$} \psfrag{A3}{$A_3$} \psfrag{A4}{$A_4$} \psfrag{B1}{$B_1$} \psfrag{B2}{$B_2$} \psfrag{B3}{$B_3$} \psfrag{U}{$U$} \psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{pn}{$p_n$} \includegraphics[width =0.6\textwidth]{conv_desc_ABs.eps} \caption{$P_{15}$ and $\Gamma$ described above. Here $k = 4$, so $|\Gamma| = n+k-3 = 16.$} \label{convdescAB} \end{figure} \subsection{Convex decomposition with at most $\frac{10n}{7}-c$ elements} We proceed now to show that every collection $P_n$ with $c$ vertices in $Conv(P_n)$ has a convex decomposition $\Gamma$ such that $|\Gamma| \leq \frac{10}{7}n-c.$ We use the following notation: If in a given collection $P_n$ we find that $p_3,$ $p_5,$ $p_7,$ ... are negative and $p_4,$ $p_6,$ $p_8,$ ... are positive, we say that $P_n$ is a $\pm$ set. The next result concerns $\pm$ sets. \begin{lemma}\label{lemamine} Let $P_n$ be a $\pm$ set. Then $P_n$ has a convex decomposition $\Gamma$ with $\frac{4n}{3}-c$ elements, where $c$ is the number of vertices in $Conv(P_n)$. \end{lemma} \noindent {\bf Proof: } For $i = 2,8,...,n-6,$ we make $Q_i = \{p_1, p_{i}, p_{i+1}, ..., p_{i+6}\},$ and let $T_i=\{t_1,t_2,...,t_9\}$ be a set of triangles with vertices in $Q_i$ such that $t_1 = {\triangle} p_1 p_{i} p_{i+1}$, $t_2 = {\triangle} p_1 p_{i+1} p_{i+3}$, $t_3 = {\triangle} p_1 p_{i+3} p_{i+5}$, $t_4 = {\triangle} p_1 p_{i+5} p_{i+6}$, $t_5 = {\triangle} p_i p_{i+1} p_{i+2}$, $t_6 = {\triangle} p_{i+1} p_{i+2} p_{i+3}$, $t_7 = {\triangle} p_{i+2} p_{i+3} p_{i+4}$, $t_8 = {\triangle} p_{i+3} p_{i+4} p_{i+5}$ and $t_9 = {\triangle} p_{i+4} p_{i+5} p_{i+6},$ as shown in Figure~\ref{qidelta}.
We obtain a set $\Gamma_i$ of convex polygons, joining elements in $T_i,$ to get a minimal convex decomposition of $P_n.$ We make a final modification on positive and negative points as follows: Given 3 consecutive points labeled $+,$ $p_i,$ $p_j$ and $p_k$ ($i<j<k$), if $p_j \in Conv(\{p_1,p_i, p_k\})^o$ we label $p_j$ as $+ -$, otherwise we label $p_j$ as $+ +.$ Analogously we modify labels $-$ to $- -$ and $- +.$ \begin{figure}[h] \centering \psfrag{p1}{$p_1$} \psfrag{pi+1}{$p_{i+1}$} \psfrag{pi+2}{$p_{i+2}$} \psfrag{pi+3}{$p_{i+3}$} \psfrag{pi+4}{$p_{i+4}$} \psfrag{pi+5}{$p_{i+5}$} \psfrag{pi+6}{$p_{i+6}$} \psfrag{pi}{$p_{i}$} \psfrag{t1}{$t_1$} \psfrag{t2}{$t_2$} \psfrag{t3}{$t_3$} \psfrag{t4}{$t_4$} \psfrag{t5}{$t_5$} \psfrag{t6}{$t_6$} \psfrag{t7}{$t_7$} \psfrag{t8}{$t_8$} \psfrag{t9}{$t_9$} \includegraphics[width =0.6\textwidth]{Q_i_y_delta.eps} \caption{$Q_i$ and its convex decomposition $T_i.$} \label{qidelta} \end{figure} We proceed now to make case analysis over the labels in $p_{i+2},$ $p_{i+3}$ and $p_{i+4}.$ Let $\ell$ be the line containing $p_{i+1}$ and $p_{i+3},$ make ${\cal D}$ the open half plane bounded by $\ell$ containing $p_1,$ and ${\cal U} = \mathbb{R}^2 \setminus ({\cal D} \cup \{\ell\}).$ Given two polygons $\alpha$ and $\beta$ sharing an edge $e,$ we denote as $\alpha \uplus \beta$ the polygon $\alpha \cup (\beta -\{e\}).$ \noindent {\bf Case (a).} $p_{i+2}$ and $p_{i+4}$ have label $+ +$ and $+ -$ respectively. We have: {\em Subcase 1.} Suppose that $p_{i+3}$ is $- -.$ If $p_{i}\in {\cal D}$ and the pentagon $P = t_6 \uplus t_7 \uplus t_8$ is convex, then $\Gamma_i =\{t_1 \uplus t_2,t_3,t_4,t_5,P, t_9\}.$ If $P$ is not convex, $\Gamma_i =\{t_1 \uplus t_2,t_3 \uplus t_8, t_4,t_5,t_6 \uplus t_7, t_9\}.$ See Figure \ref{casoA1}(a). 
If $p_i \in {\cal U},$ and the hexagon $H = t_5 \uplus t_6 \uplus t_7 \uplus t_8$ is convex, $\Gamma_i =\{t_1, t_2, t_3, t_4, H, t_9\}.$ If $H$ is not convex, $\Gamma_i =\{t_1,t_2,t_3 \uplus t_8,t_4,t_5 \uplus t_6 \uplus t_7, t_9\}.$ See Figure \ref{casoA1}(b). \begin{figure}[h] \centering \psfrag{p1}{$p_1$} \psfrag{pi}{$p_i$} \psfrag{pi+1}{$p_{i+1}$} \psfrag{pi+2}{$p_{i+2}$} \psfrag{pi+3}{$p_{i+3}$} \psfrag{pi+4}{$p_{i+4}$} \psfrag{pi+5}{$p_{i+5}$} \psfrag{pi+6}{$p_{i+6}$} \psfrag{(a)}{(a)} \psfrag{(b)}{(b)} \includegraphics[width = 0.9\textwidth]{caso_a_1.eps} \caption{$p_{i+2},$ $p_{i+4}$ and $p_{i+3}$ being $+ +,$ $+ -$ and $- -$ respectively.} \label{casoA1} \end{figure} {\em Subcase 2.} Suppose that $p_{i+3}$ is $- +.$ If $p_{i}$ and $p_{i+4}$ are in ${\cal D},$ then make $H=t_1 \uplus t_2 \uplus t_3 \uplus t_8$ and $\Gamma_i =\{H,t_4,t_5,t_6,t_7, t_9\}$ (see Figure \ref{casoA2}(a)). Now, if $p_i \in {\cal U}$ (and $p_{i+4} \in {\cal D}$) $H$ is missing $t_1,$ so $\Gamma_i =\{t_1,t_2 \uplus t_3 \uplus t_8,t_5 \uplus t_6,t_4,t_7, t_9\}$ (see Figure \ref{casoA2}(b)). On the other hand if $p_{i+4} \in {\cal U}$ (and $p_{i} \in {\cal D}$), $H$ is missing $t_8,$ so $\Gamma_i =\{t_1 \uplus t_2 \uplus t_3,t_4,t_5,t_6 \uplus t_7,t_8, t_9\}$ (see Figure \ref{casoA2}(c)). Finally if $p_i, p_{i+2}, p_{i+4} \in {\cal U},$ $\Gamma_i =\{t_1,t_2 \uplus t_3,t_4,t_5 \uplus t_6 \uplus t_7,t_8, t_9\}$ (see Figure \ref{casoA2}(d)). \begin{figure}[h] \centering \psfrag{p1}{$p_1$} \psfrag{pi}{$p_i$} \psfrag{pi+1}{$p_{i+1}$} \psfrag{pi+2}{$p_{i+2}$} \psfrag{pi+3}{$p_{i+3}$} \psfrag{pi+4}{$p_{i+4}$} \psfrag{pi+5}{$p_{i+5}$} \psfrag{pi+6}{$p_{i+6}$} \psfrag{(a)}{(a)} \psfrag{(b)}{(b)} \psfrag{(c)}{(c)} \psfrag{(d)}{(d)} \includegraphics[width = 0.9\textwidth]{caso_a_2.eps} \caption{Polygons we find when $p_{i+3}$ is $-+$ and $p_{i+2}$ and $p_{i+4}$ are $++$ and $+-$ respectively. 
We find the same polygons if both of $p_{i+2}$ and $p_{i+4}$ are labeled $++.$} \label{casoA2} \end{figure} \noindent {\bf Case (b).} Both $p_{i+2}$ and $p_{i+4}$ have label $+ -.$ Observe that $\{p_{i},p_{i+2},p_{i+4},p_{i+6}\}$ is the set of vertices of a convex quadrilateral $q,$ so we make $\Gamma_i = \{ t_1, t_2 \uplus t_6, t_3 \uplus t_8, t_4, t_5, t_7, t_9, q \},$ $U = U\setminus \{p_{i+2},p_{i+4}\}$ and $U' = U'\setminus q.$ \noindent {\bf Case (c).} $p_{i+2}$ and $p_{i+4}$ both have label $+ +.$ We make an analysis similar to that of Case (a): Suppose that $p_{i+3}$ is $- -.$ If $p_i \in {\cal D}$ and the hexagon $H = t_5 \uplus t_6 \uplus t_7 \uplus t_8$ is convex, $\Gamma_i =\{t_1,t_2,t_3,t_4,H, t_9\}.$ If $H$ is not convex, we make $\Gamma_i =\{t_1 \uplus t_2,t_3,t_4,t_5,t_6 \uplus t_7 \uplus t_8, t_9\}.$ See Figure \ref{casoC1}(a). If $p_{i}\in {\cal U},$ $H$ is always convex, so $\Gamma_i =\{t_1,t_2,t_3,t_4,H, t_9\}.$ See Figure \ref{casoC1}(b). When $p_{i+3}$ is $- +,$ $\Gamma_i$ has the same polygons as in subcase 2 of Case (a). And if $p_{i+2}$ and $p_{i+4}$ have label $+ -$ and $+ +$ respectively, we obtain $\Gamma_i$ analogously to Case (a). \begin{figure}[h] \centering \psfrag{p1}{$p_1$} \psfrag{pi}{$p_i$} \psfrag{pi+1}{$p_{i+1}$} \psfrag{pi+2}{$p_{i+2}$} \psfrag{pi+3}{$p_{i+3}$} \psfrag{pi+4}{$p_{i+4}$} \psfrag{pi+5}{$p_{i+5}$} \psfrag{pi+6}{$p_{i+6}$} \psfrag{(a)}{(a)} \psfrag{(b)}{(b)} \includegraphics[width =0.9\textwidth]{caso_c_1.eps} \caption{Polygons we find when $p_{i+3}$ is $- -,$ and $p_{i+2}$ and $p_{i+4}$ are both $++.$} \label{casoC1} \end{figure} Let us make $R_i = \gamma_i \uplus \gamma_{i+1}$ where $\gamma_i$ is the polygon containing $t_4$ in $Q_i,$ and $\gamma_{i+1}$ is the polygon containing $t_1$ in $Q_{i+1},$ and let $b$ be the number of $Q_i$ sets as in case (b). We obtain a minimal convex decomposition of $P_n$ by finding $\Gamma_2,$ $\Gamma_8,$ ...
,$\Gamma_{n-6},$ obtaining the $\frac{n}{2}-c-2b$ triangles in $T_U,$ and getting $R_i$ by removing edges $p_1p_{i},$ for $i = 8, 14, 20, ... ,n-6.$ So $\Gamma$ is such that $|\Gamma| = \left(6\frac{n}{6} + 2b \right) + \left( \frac{n}{2}-c-2b \right) - \left(\frac{n}{6} \right) = \frac{4n}{3} - c.$ \hfill \vrule height 5 pt width 0.05 in depth 0.8 pt {\medskip } We have the following observation. \noindent {\bf Observation 1.} Let $\gamma$ be the vertex set of a convex polygon, and let $p$ be a point in $Conv(\gamma)^o.$ Then $\gamma \cup \{p\}$ has a minimal convex decomposition with 3 elements. We proceed now to prove our main theorem. \begin{theorem} Let $P_n$ be an $n$--point set on the plane in general position. Then $P_n$ has a minimal convex decomposition with at most $\frac{10}{7}n-c$ elements. \end{theorem} \noindent {\bf Proof: } Let $k$ be the number of polygons ${\cal A}_i$ described above. If $k \leq \frac{3n}{7},$ we apply Equation~(\ref{cardinalidad}) to find a convex decomposition with $n + k - c \leq n + \frac{3n}{7}-c$ elements. If $k = \frac{n}{2}$, $P_n$ is a $\pm$ set, and it has a convex decomposition with $\frac{4n}{3}-c$ elements, by Lemma~\ref{lemamine}. In case that $\frac{3n}{7}< k < \frac{n}{2},$ we consider every ${\cal A}_i.$ Let $I = {\cal A}_i \cap Conv(P_n)^o.$ If $I={\cal A}_i$ let $q_i$ be the element in ${\cal A}_i$ with the highest $y$--coordinate (if two points share this coordinate, we take the one with the greater $x$--coordinate). If $I \not = {\cal A}_i,$ let $q_i$ be the element with the highest label in ${\cal A}_i - I,$ and let $r_i$ be the element of $B_i$ with minimum $y$--coordinate; if two points share this coordinate, we take $r_i$ to be the one with maximum $x$--coordinate. See Figure~\ref{coleccionPM}.
\begin{figure}[h] \centering \psfrag{A1}{$A_1$} \psfrag{A2}{$A_2$} \psfrag{A3}{$A_3$} \psfrag{A4}{$A_4$} \psfrag{B1}{$B_1$} \psfrag{B2}{$B_2$} \psfrag{B3}{$B_3$} \psfrag{p1}{$p_1$} \psfrag{p2}{$p_2$} \psfrag{pn}{$p_n$} \includegraphics[width =0.9\textwidth]{coleccionPM.eps} \caption{$P_n$ and its associated $\pm$ collection.} \label{coleccionPM} \end{figure} We make $P' = \{q_1, r_1, q_2, r_2, ... , q_{k-1}, r_{k-1}, q_k\} \cup \{p_1,p_2,p_n\}.$ $P'$ is a $\pm$ set with $2k+2$ elements. By Lemma~\ref{lemamine}, $P'$ has a convex decomposition $\Gamma'$ with $\frac{4}{3} (2k+2) -c$ elements. Let $S$ be the set $P_n - P',$ where $|S|= n - 2k - 2.$ By Observation 1, each element of $S$, when added, increases the number of polygons by 2, so $P'$ and $S$ induce a minimal convex decomposition $\Gamma$ of $P'\cup S=P_n$ with $\frac{4}{3} (2k+2) -c + 2|S|$ elements. Substituting $|S|$ we have that $|\Gamma| = 2n-\frac{4}{3}k-c-\frac{4}{3}.$ Using the fact that $k \geq \frac{3n}{7}$ we obtain that $\Gamma$ is such that $|\Gamma| \leq \frac{10n}{7}-c.$ \hfill \vrule height 5 pt width 0.05 in depth 0.8 pt {\medskip } \section{Concluding remarks} Analogously to triangulations of $P_n,$ we can define {\em convex quadrangulations}. It would be interesting to characterize $n$--point sets that admit a convex quadrangulation.
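The two element counts used above (the $\frac{4n}{3}-c$ total in Lemma 2.2 and the $\frac{10n}{7}-c$ bound in the main theorem) reduce to elementary exact arithmetic. A Python sketch of these checks (ours, not part of the paper's argument), using exact rationals to avoid floating-point error:

```python
from fractions import Fraction as F

# Lemma 2.2: (6*(n/6) + 2b) + (n/2 - c - 2b) - n/6  ==  4n/3 - c
for n in range(6, 61, 6):           # n a multiple of 6, as in the lemma's count
    for b in range(0, n // 6 + 1):  # b = number of Q_i sets as in case (b)
        for c in range(3, 10):      # c = number of hull vertices
            lhs = (6 * F(n, 6) + 2 * b) + (F(n, 2) - c - 2 * b) - F(n, 6)
            assert lhs == F(4 * n, 3) - c

# Theorem: |Gamma| = 2n - (4/3)k - c - 4/3 <= 10n/7 - c  whenever  k >= 3n/7
for n in range(7, 71, 7):           # n a multiple of 7 so 3n/7 is an integer
    for k in range(3 * n // 7, n // 2 + 1):
        for c in range(3, 10):
            assert 2 * n - F(4, 3) * k - c - F(4, 3) <= F(10 * n, 7) - c
print("all counting checks pass")
```

The second loop makes visible why the hypothesis $k\geq\frac{3n}{7}$ suffices: the expression $2n-\frac{4}{3}k-c-\frac{4}{3}$ is decreasing in $k$, so the bound is tightest at $k=\frac{3n}{7}$.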
https://arxiv.org/abs/0906.2795
Descent sets of cyclic permutations
We present a bijection between cyclic permutations of {1,2,...,n+1} and permutations of {1,2,...,n} that preserves the descent set of the first n entries and the set of weak excedances. This non-trivial bijection involves a Foata-like transformation on the cyclic notation of the permutation, followed by certain conjugations. We also give an alternate derivation of the consequent result about the equidistribution of descent sets using work of Gessel and Reutenauer. Finally, we prove a conjecture of the author in [SIAM J. Discrete Math. 23 (2009), 765-786] and a conjecture of Eriksen, Freij and Wästlund.
\section{Introduction}\label{sec:intro} \subsection{Permutations, cycles, and descents} Let $[n]=\{1,2,\dots,n\}$, and let $\S_n$ denote the set of permutations of $[n]$. We will use both the one-line notation of $\pi\in\S_n$ as $\pi=\pi(1)\pi(2)\dots\pi(n)$ and its decomposition as a product of cycles of the form $(i,\pi(i),\pi^2(i),\dots,\pi^{k-1}(i))$ with $\pi^k(i)=i$. For example, $\pi=2517364=(1,2,5,3)(4,7)(6)$. Sometimes it will be convenient to write each cycle starting with its largest element and order the cycles by increasing first element, e.g., $\pi=(5,3,1,2)(6)(7,4)$. We denote by $\C_n$ the set of permutations in $\S_n$ that consist of one single cycle of length~$n$. We call these {\em cyclic permutations} or {\em $n$-cycles}. For example, $$\C_3=\{(1,2,3),(1,3,2)\}=\{231,312\}.$$ It is clear that $|\C_n|=(n-1)!$. Given $\pi\in\S_n$, let $D(\pi)$ denote the {\em descent set} of $\pi$, that is, $$D(\pi)=\{i\,:\,1\le i\le n-1,\ \pi(i)>\pi(i+1)\}.$$ The descent set can be defined for any sequence of integers $a=a_1 a_2 \dots a_n$ by letting $D(a)=\{i\,:\,1\le i\le n-1,\ a_i>a_{i+1}\}$. The main result of this paper, which we present in Section~\ref{sec:main}, is a bijection $\bij$ between $\C_{n+1}$ and $\S_n$ with the property that for every $(n+1)$-cycle $\pi$, $$D(\pi(1)\pi(2)\dots\pi(n))=D(\bij(\pi)).$$ Despite the simplicity of the statement and the relatively natural description of $\bij$ that we will give, the proof that it is a bijection with the desired property is somewhat technical. Let us introduce some notation. For $\pi\in\S_n$, let $\wt\pi$ be the permutation defined by $\wt\pi(i)=n+1-\pi(n+1-i)$ for $1\le i\le n$. The cycle form of $\wt\pi$ can be obtained by replacing each entry $j$ with $n+1-j$ in the cycle form of $\pi$. For $1\le i\le n-1$, we have that $i\in D(\wt\pi)$ if and only if $n+1-i\notin D(\pi)$. We will write $I=\{i_1,i_2,\dots,i_k\}_<$ to indicate that the elements of $I$ are listed in increasing order.
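The definitions above are easy to experiment with. A small Python sketch (function names are ours, not from the paper) that computes descent sets and enumerates the cyclic permutations $\C_n$:

```python
from itertools import permutations

def descent_set(seq):
    """D(a) = { i : 1 <= i <= n-1, a_i > a_{i+1} } (1-based positions)."""
    return frozenset(i + 1 for i in range(len(seq) - 1) if seq[i] > seq[i + 1])

def is_n_cycle(perm):
    """True if the permutation (one-line notation, values 1..n) is a single n-cycle."""
    n, seen, j = len(perm), 0, 1
    for _ in range(n):
        j = perm[j - 1]      # follow the cycle containing 1
        seen += 1
        if j == 1:
            break
    return j == 1 and seen == n

C3 = [p for p in permutations(range(1, 4)) if is_n_cycle(p)]
print(C3)                                  # [(2, 3, 1), (3, 1, 2)], i.e. {231, 312}
print(descent_set((2, 5, 1, 7, 3, 6, 4)))  # descents of pi = 2517364
```

Enumerating with `is_n_cycle` also confirms $|\C_n|=(n-1)!$ for small $n$.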
Subsets $I\subseteq[n-1]$ are in bijective correspondence with compositions of $n$ via $I=\{i_1,i_2,\dots,i_k\}_<\mapsto (i_1,i_2-i_1,\dots,i_k-i_{k-1},n-i_k)$. The partition of $n$ obtained by listing the parts of this composition in non-increasing order is called the associated partition of $I$. For example, for $n=13$ and $I=\{3,5,8,12\}$, the associated partition is $(4,3,3,2,1)$. \subsection{Related work} Following the notation from~\cite{Eli}, let $\T^0_n$ be the set whose elements are $n$-cycles in one-line notation in which one entry has been replaced with $0$. For example, $\T^0_3=\{031,201,230,012,302,310\}$. Since there are $n$ ways to choose what entry to replace, and the value of the replaced entry can be recovered by looking at the other entries, it is clear that $|\T^0_n|=n!$. Note that if the $0$ in $\tau\in\T^0_n$ is in position $i$, then $i-1\in D(\tau)$ (if $i>1$) and $i\notin D(\tau)$. It was conjectured in~\cite{Eli} that descent sets in $\T^0_n$ behave like descent sets in $\S_n$: \begin{conj}[\cite{Eli}]\label{conj:Eli} For any $n$ and any $I\subseteq[n-1]$, $$|\{\tau\in\T^0_n\,:\,D(\tau)=I\}|=|\{\sigma\in\S_n\,:\,D(\sigma)=I\}|.$$ \end{conj} In Section~\ref{sec:consequences} we prove this conjecture as Corollary~\ref{cor:Elishift}, along with other consequences of our main bijection. There is some work in the literature relating the cycle structure of a permutation with its descent set. Gessel and Reutenauer~\cite{GR} showed that the number of permutations with given cycle structure and descent set can be expressed as a product of certain characters of the symmetric group. They also gave a statistic-preserving bijection between words and multisets of necklaces. In Section~\ref{sec:related} we discuss how their work relates to ours, and how their methods can be used to prove some of our results non-bijectively. More recently, Eriksen, Freij and W\"astlund~\cite{EFW} studied descent sets of derangements.
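Conjecture~\ref{conj:Eli} can be checked by brute force for small $n$. A Python sketch of such a check (helper names are ours):

```python
from collections import Counter
from itertools import permutations

def descent_set(seq):
    return frozenset(i + 1 for i in range(len(seq) - 1) if seq[i] > seq[i + 1])

def n_cycles(n):
    """All n-cycles of [n], in one-line notation."""
    out = []
    for p in permutations(range(1, n + 1)):
        j, steps = 1, 0
        while True:              # walk the cycle containing 1
            j = p[j - 1]
            steps += 1
            if j == 1:
                break
        if steps == n:
            out.append(p)
    return out

def T0(n):
    """The set T^0_n: n-cycles with one entry replaced by 0."""
    return {c[:i] + (0,) + c[i + 1:] for c in n_cycles(n) for i in range(n)}

for n in range(2, 6):
    by_T0 = Counter(descent_set(t) for t in T0(n))
    by_Sn = Counter(descent_set(p) for p in permutations(range(1, n + 1)))
    assert by_T0 == by_Sn
print("Conjecture 1 verified for n = 2, ..., 5")
```

For $n=3$ this reproduces the set $\T^0_3=\{031,201,230,012,302,310\}$, whose descent-set distribution ($\emptyset$: 1, $\{1\}$: 2, $\{2\}$: 2, $\{1,2\}$: 1) indeed matches that of $\S_3$.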
Recall that derangements are permutations with no fixed points. In~\cite[Problem~9.3]{EFW}, the authors pose the following question: \begin{prob}[\cite{EFW}]\label{prob:EFW} For any two subsets $I,J\subseteq[n-1]$ with the same associated partition, give a bijection between derangements of $[n]$ whose descent set is contained in $I$ and derangements of $[n]$ whose descent set is contained in $J$. \end{prob} At the end of Section~\ref{sec:related} we solve this problem by giving a bijection based on the work of Gessel and Reutenauer~\cite{GR}. \section{The main result}\label{sec:main} \begin{theorem}\label{thm:bij} For every $n$ there is a bijection $\bij:\C_{n+1}\rightarrow\S_n$ such that if $\pi\in\C_{n+1}$ and $\sigma=\bij(\pi)$, then $$D(\pi)\cap[n-1]=D(\sigma).$$ \end{theorem} We start this section by defining the map $\bij:\C_{n+1}\rightarrow\S_n$ and giving some examples. Next we describe a map $\inv:\S_n\rightarrow\C_{n+1}$. Finally we prove that $\inv=\bij^{-1}$ and that $\bij$ preserves the descent set of the first $n$ entries. \subsection{The map $\bij$} Given $\pi\in\C_{n+1}$, write it in cycle form with $n+1$ at the end, i.e., $\pi=(t_1,t_2,\dots,t_n,n+1)$. Let $t_1=t_{i_1}<t_{i_2}<\dots<t_{i_r}<t_{i_{r+1}}=n+1$ be the left-to-right maxima of the sequence $t_1,t_2,\dots,t_n,n+1$. Let $$\sigm=(t_{1},t_2,\dots,t_{i_2-1})(t_{i_2},t_{i_2+1},\dots,t_{i_3-1})\cdots(t_{i_r},t_{i_r+1},\dots,t_n).$$ To simplify notation, let $a_j=t_{i_j}$ and $b_j=t_{i_{j+1}-1}$ for $1\le j\le r$, so \beq\label{eq:sigm}\sigm=(a_1,\dots,b_1)(a_2,\dots,b_2)\cdots(a_r,\dots,b_r).\eeq To obtain $\bij(\pi)$ we make some switches in the cycle form of $\sigm$, which we describe next. At any stage of the algorithm we denote by $\Gamma_i$ the $i$-th cycle of $\sigm$, with the cycles written from left to right in the same order as in~(\ref{eq:sigm}).
The terms {\em left}, {\em right}, {\em first} (or {\em leftmost}) and {\em last} (or {\em rightmost}) will always assume that the entries within each cycle are also written in this order. Whenever we have two adjacent elements $s$ and $t$ in a cycle, with $s$ immediately to the left of $t$, we will say that $s$ {\em precedes} $t$. For $1\le x,y\le n$, let $P(x,y)$ be the condition $$\pi(x)>\pi(y) \ \textrm{ and }\ \sigm(x)<\sigm(y).$$ (For $x$ or $y$ outside of these bounds, $P(x,y)$ is defined to be false.)\ms Repeat the following steps for $i=1,2,\dots,r-1$: \bit \item Let $z$ be the rightmost entry of $\Gamma_i$. If $P(z,z+1)$ or $P(z,z-1)$ hold, let $\eps\in\{-1,1\}$ be such that $P(z,z+\eps)$ holds and $\sigm(z+\eps)$ is largest. \item Repeat for as long as $P(z,z+\eps)$ holds: \ben\renewcommand{\labelenumi}{\Roman{enumi}.} \item\label{it:a} Switch $z$ and $z+\eps$ in the cycle form of $\sigm$. \item\label{it:b} If the last switch did not involve the leftmost entry of $\Gamma_i$, let $x$ and $y$ be the elements preceding the switched entries. If $|x-y|=1$, switch $x$ and $y$ in the cycle form of $\sigm$, and repeat step~II. \item Let $z:=z+\eps$ (the new rightmost entry of $\Gamma_i$). \een \eit Define $\bij(\pi)=\sigm$. 
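The first stage of $\bij$, the Foata-like split of the cycle form into $\sigm$, is straightforward to implement; the subsequent switching stage (steps I-III above) is more delicate and is omitted here. A Python sketch of the first stage (function name is ours):

```python
def sigma_prime(cycle):
    """First stage of the map: `cycle` is (t_1, ..., t_n, n+1), the cycle form
    of pi in C_{n+1} written with n+1 last.  Split t_1, ..., t_n into blocks
    starting at the left-to-right maxima of the sequence."""
    assert cycle[-1] == len(cycle)          # n+1 must be the last entry
    blocks, current, best = [], [], 0
    for t in cycle[:-1]:
        if t > best:                        # left-to-right maximum: new cycle
            if current:
                blocks.append(tuple(current))
            current, best = [t], t
        else:
            current.append(t)
    blocks.append(tuple(current))
    return blocks

# the example worked out below in the text
pi = (11, 4, 10, 1, 7, 16, 9, 3, 5, 12, 20, 2, 6, 14, 18, 8, 13, 19, 15, 17, 21)
print(sigma_prime(pi))
# [(11, 4, 10, 1, 7), (16, 9, 3, 5, 12), (20, 2, 6, 14, 18, 8, 13, 19, 15, 17)]
```

Note that $t_1$ is always a left-to-right maximum, so the first block starts at $t_1$, and the final maximum $n+1$ is discarded, matching the definition of $\sigm$.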
\begin{table}[htb] $$\begin{array}{|c|c|c|} \hline \pi\in\C_5 & \sigma=\bij(\pi)\in\S_4 & D(\pi)\cap[3]=D(\sigma) \\[2pt] \hline (1,2,3,4,5) = 23451 & 1 2 3 4 & \emptyset \\ \hline (2,1,3,4,5) = 31452 & 2 1 3 4 & \multirow{3}{*}{\{1\}} \\ (3,2,1,4,5) = 41253 & 3 1 2 4 & \\ (4,3,2,1,5) = 51234 & 4 1 2 3 & \\ \hline (1,3,2,4,5) = 34251 & 1 3 2 4 & \multirow{5}{*}{\{2\}} \\ (1,4,3,2,5) = 45231 & 1 4 2 3 & \\ (3,1,2,4,5) = 24153 & 2 3 1 4 & \\ (3,1,4,2,5) = 45123 & 3 4 1 2 & \\ (4,3,1,2,5) = 25134 & 2 4 1 3 & \\ \hline (1,2,4,3,5) = 24531 & 1 2 4 3 & \multirow{3}{*}{\{3\}} \\ (2,4,1,3,5) = 34512 & 1 3 4 2 & \\ (4,1,2,3,5) = 23514 & 2 3 4 1 & \\ \hline (2,3,1,4,5) = 43152 & 3 2 1 4 & \multirow{3}{*}{\{1,2\}}\\ (2,4,3,1,5) = 54132 & 4 2 1 3 & \\ (4,2,3,1,5) = 53124 & 4 3 1 2 & \\ \hline (1,4,2,3,5) = 43521 & 3 2 4 1 & \multirow{5}{*}{\{1,3\}}\\ (2,1,4,3,5) = 41532 & 2 1 4 3 & \\ (2,3,4,1,5) = 53412 & 4 2 3 1 & \\ (3,4,2,1,5) = 51423 & 4 1 3 2 & \\ (4,2,1,3,5) = 31524 & 3 1 4 2 & \\ \hline (1,3,4,2,5) = 35421 & 1 4 3 2 & \multirow{3}{*}{\{2,3\}}\\ (3,4,1,2,5) = 25413 & 2 4 3 1 & \\ (4,1,3,2,5) = 35214 & 3 4 2 1 & \\ \hline (3,2,4,1,5) = 54213 & 4 3 2 1 & \{1,2,3\}\\ \hline \end{array}$$ \caption{\label{tab:n4} The images by $\bij$ of all elements in $\C_5$.} \end{table} \begin{example} Let $$\pi=(11,4,10,1,7,16,9,3,5,12,20,2,6,14,18,8,13,19,15,17,21)\in\C_{21}.$$ Finding the left-to-right maxima of the sequence, we get $$\sigm=(11,4,10,1,7)(16,9,3,5,12)(20,2,6,14,18,8,13,19,15,17).$$ Now we look at the first cycle, so $z=b_1=7$. Both $P(7,6)$ and $P(7,8)$ hold, but $\sigm(6)=14>13=\sigm(8)$, so $\eps=-1$.
Switching $7$ and $6$ we get $$\sigm=(11,4,10,1,{\bf 6})(16,9,3,5,12)(20,2,{\bf 7},14,18,8,13,19,15,17).$$ The entries preceding the switched ones are $1$ and $2$ so we switch them too: $$\sigm=(11,4,10,{\bf 2},6)(16,9,3,5,12)(20,{\bf 1},7,14,18,8,13,19,15,17).$$ Now $z=6$, and since $P(6,5)$ holds, we switch $6$ and $5$: $$\sigm=(11,4,10,2,{\bf 5})(16,9,3,{\bf 6},12)(20,1,7,14,18,8,13,19,15,17).$$ The entries to their left are $2$ and $3$, so they need to be switched, and then the entries preceding these are $10$ and $9$, so they need to be switched as well: $$\sigm=(11,4,{\bf 9},{\bf 3},5)(16,{\bf 10},{\bf 2},6,12)(20,1,7,14,18,8,13,19,15,17).$$ Since $P(5,4)$ is false, we now look at $\Gamma_2$, so $z=b_2=12$. Only $P(12,13)$ holds, so $\eps=1$ and we switch $12$ and $13$: $$\sigm=(11,4,9,3,5)(16,10,2,6,{\bf 13})(20,1,7,14,18,8,{\bf 12},19,15,17).$$ Now $z=13$ and $P(13,14)$ holds, so we switch $13$ and $14$, the preceding entries $6$ and $7$, and also $2$ and $1$: $$\sigm=(11,4,9,3,5)(16,10,{\bf 1},{\bf 7},{\bf 14})(20,{\bf 2},{\bf 6},{\bf 13},18,8,12,19,15,17).$$ Finally, $z=14$ and $P(14,15)$ holds, so we switch $14$ and $15$, and we stop here because $P(15,16)$ is false: $$\bij(\pi)=\sigm=(11,4,9,3,5)(16,10,1,7,{\bf 15})(20,2,6,13,18,8,12,19,{\bf 14},17)\in\S_{20}.$$ In one-line notation, \renewcommand{\tabcolsep}{0pt} $$\begin{array}{cc*{2}{c@{\,\cdot\,}}*{4}{c@{\ }}*{3}{c@{\,\cdot\,}}*{2}{c@{\ }}*{4}{c@{\,\cdot\,}}c@{\ }c@{\,\cdot\,}c@{\ }c@{\,\cdot\,}c@{\ }c} \pi&=&7&6&5&10&12&14&16&13&3&1&4&20&19&18&17&9&21&8&15&2&11\\ \bij(\pi)&=&7&6&5&9&11&13&15&12&3&1&4&19&18&17&16&10&20&8&14&2& \end{array}$$ where the descents have been marked with dots.
\end{example} \begin{example} Let $$\pi=(2,9,17,6,11,19,7,13,12,15,8,14,1,4,5,10,18,3,16,20)\in\C_{20}.$$ Inserting parentheses before the left-to-right maxima, we have $$\sigm=(2)(9)(17,6,11)(19,7,13,12,15,8,14,1,4,5,10,18,3,16).$$ Now $z=b_1=2$, and only $P(2,1)$ holds, so we switch $2$ and $1$: $$\sigm=({\bf 1})(9)(17,6,11)(19,7,13,12,15,8,14,{\bf 2},4,5,10,18,3,16).$$ In $\Gamma_2$ we have $z=b_2=9$ and $P(9,8)$ holds, so we switch $9$ and $8$. Now $P(8,7)$ holds, so we switch $8$ and $7$. Similarly, we switch $7$ and $6$, then $6$ and $5$, and then $5$ and $4$, obtaining $$\sigm=(1)({\bf 4})(17,{\bf 7},11)(19,{\bf 8},13,12,15,{\bf 9},14,2,{\bf 5},{\bf 6},10,18,3,16).$$ Finally, in $\Gamma_3$ we have $z=b_3=11$ and $P(11,10)$ holds, so we switch $11$ and $10$, and also the preceding entries $7$ and $6$: $$\bij(\pi)=\sigm=(1)(4)(17,{\bf 6},{\bf 10})(19,8,13,12,15,9,14,2,5,{\bf 7},{\bf 11},18,3,16)\in\S_{19}.$$ In one-line notation, \renewcommand{\tabcolsep}{0pt} $$\begin{array}{cc*{2}{c@{\ }}c@{\,\cdot\,}*{7}{c@{\ }}*{3}{c@{\,\cdot\,}}*{2}{c@{\ }}*{2}{c@{\,\cdot\,}}c@{\ }c@{\,}c} \pi&=&4&9&16&5&10&11&13&14&17&18&19&15&12&1&8&20&6&3&7&\cdot2\\ \bij(\pi)&=&1&5&16&4&7&10&11&13&14&17&18&15&12&2&9&19&6&3&8 \end{array}$$ where the descents have been marked with dots. \end{example} \subsection{The map $\inv$} Given $\sigma\in\S_{n}$, write it in cycle form with the largest element of each cycle first, ordering the cycles by increasing first element, say $$\sigma=(c_1,\dots,d_1)(c_2,\dots,d_2)\cdots(c_r,\dots,d_r).$$ Removing the internal parentheses and appending $n+1$, we obtain an $(n+1)$-cycle $$\pim=(c_1,\dots,d_1;c_2,\dots,d_2;\dots;c_r\dots,d_r;n+1).$$ For convenience we write semicolons in order to keep track of the places from where parentheses were removed. We call {\em blocks} the entries between consecutive semicolons. Similarly to the description of $\bij$, we will obtain $\inv(\sigma)$ by making some switches to the cycle form of $\pim$. 
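This canonical cycle form of $\sigma$ is easy to compute. The following Python sketch (the name \texttt{blocks\_of} is an illustrative choice) returns the blocks of $\pim$ from the one-line notation of $\sigma$:

```python
def blocks_of(sigma):
    """Given sigma in S_n in one-line notation (a list of length n), return
    its cycles written with the largest element first, ordered by increasing
    first element.  These are the blocks of pi~."""
    n = len(sigma)
    seen, cycles = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        c, v = [], start
        while v not in seen:          # trace the cycle containing start
            seen.add(v)
            c.append(v)
            v = sigma[v - 1]
        m = c.index(max(c))
        cycles.append(c[m:] + c[:m])  # rotate so the largest element is first
    cycles.sort(key=lambda c: c[0])   # order cycles by increasing first element
    return cycles
```

Concatenating the blocks and appending $n+1$ then gives the cycle $\pim$.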
At each stage of the algorithm, we denote by $\Delta_i$ the $i$-th block of $\pim$. For $1\le x,y\le n$, let $Q(x,y)$ be the condition $$\pim(x)>\pim(y) \ \textrm{ and }\ \sigma(x)<\sigma(y).$$ Repeat the following steps for $i=r-1,r-2,\dots,1$: \bit \item Let $z$ be the rightmost entry of $\Delta_i$. If $Q(z,z+1)$ or $Q(z,z-1)$ holds, let $\eps\in\{-1,1\}$ be such that $Q(z,z+\eps)$ holds and $\pim(z+\eps)$ is smallest. \item Repeat for as long as $Q(z,z+\eps)$ holds: \ben\renewcommand{\labelenumi}{\Roman{enumi}'.} \item Switch $z$ and $z+\eps$ in the cycle form of $\pim$. \item If the last switch did not involve the leftmost entry of $\Delta_i$, let $x$ and $y$ be the elements preceding the switched entries. If $|x-y|=1$, switch $x$ and $y$ in the cycle form of $\pim$, and repeat step~II'. \item Let $z:=z+\eps$ (the new rightmost entry of $\Delta_i$). \een \eit Define $\inv(\sigma)=\pim$. \ms \begin{example} Let $$\sigma=(11,4,9,3,5)(16,10,1,7,15)(20,2,6,13,18,8,12,19,14,17)\in\S_{20}.$$ Removing the parentheses and appending $n+1$ we get $$\pim=(11,4,9,3,5;16,10,1,7,15;20,2,6,13,18,8,12,19,14,17;21).$$ We start looking at $\Delta_2$, so $z=d_2=15$. Only $Q(15,14)$ holds, so we switch $15$ and $14$: $$\pim=(11,4,9,3,5;16,10,1,7,{\bf 14};20,2,6,13,18,8,12,19,{\bf 15},17;21).$$ Now $z=14$ and $Q(14,13)$ holds, so we switch $14$ and $13$. The entries to their left are $7$ and $6$, and the entries preceding these are $1$ and $2$, so we make the corresponding switches: $$\pim=(11,4,9,3,5;16,10,{\bf 2},{\bf 6},{\bf 13};20,{\bf 1},{\bf 7},{\bf 14},18,8,12,19,15,17;21).$$ Now $z=13$ and $Q(13,12)$ holds, so we switch $13$ and $12$: $$\pim=(11,4,9,3,5;16,10,2,6,{\bf 12};20,1,7,14,18,8,{\bf 13},19,15,17;21).$$ Looking at $\Delta_1$, we have $z=d_1=5$.
Only $Q(5,6)$ holds, so we switch $5$ and $6$, and also the preceding entries $3$ and $2$, and $9$ and $10$: $$\pim=(11,4,{\bf 10},{\bf 2},{\bf 6};16,{\bf 9},{\bf 3},{\bf 5},12;20,1,7,14,18,8,13,19,15,17;21).$$ Now $z=6$ and $Q(6,7)$ holds, so we switch $6$ and $7$, and also the preceding entries $2$ and $1$: $$\pim=(11,4,10,{\bf 1},{\bf 7};16,9,3,5,12;20,{\bf 2},{\bf 6},14,18,8,13,19,15,17;21).$$ Since $Q(7,8)$ is false, the algorithm ends here, so $$\inv(\sigma)=(11,4,10,1,7,16,9,3,5,12,20,2,6,14,18,8,13,19,15,17,21)\in\C_{21}.$$ \end{example} \begin{example} Let $$\sigma=(1)(4)(17,6,10)(19,8,13,12,15,9,14,2,5,7,11,18,3,16)\in\S_{19}.$$ After removing the parentheses, $$\pim=(1;4;17,6,10;19,8,13,12,15,9,14,2,5,7,11,18,3,16;20).$$ In $\Delta_3$, $z=d_3=10$ and $Q(10,11)$ holds, so we switch $10$ and $11$, and also $6$ and $7$: $$\pim=(1;4;17,{\bf 7},{\bf 11};19,8,13,12,15,9,14,2,5,{\bf 6},{\bf 10},18,3,16;20).$$ Since $Q(11,12)$ is false, we look at $\Delta_2$, so $z=d_2=4$. We see that both $Q(4,3)$ and $Q(4,5)$ hold, but $\pim(3)=16>6=\pim(5)$, so we switch $4$ and $5$. Now $z=5$ and $Q(5,6)$ holds, so we switch $5$ and $6$. Similarly, we switch $6$ and $7$, next $7$ and $8$, and then $8$ and $9$: $$\pim=(1;{\bf 9};17,{\bf 6},11;19,{\bf 7},13,12,15,{\bf 8},14,2,{\bf 4},{\bf 5},10,18,3,16;20).$$ Finally, in the first block we switch $1$ and $2$, ending with $$\inv(\sigma)=\pim=(2,9,17,6,11,19,7,13,12,15,8,14,1,4,5,10,18,3,16,20)\in\C_{20}.$$ \end{example} \subsection{Properties of $\bij$ and $\inv$} The following five lemmas give more insight into the computation of $\bij(\pi)$. They are valid for each $1\le i\le r-1$. The $i$-th iteration of the main loop of the algorithm will be sometimes referred to as {\em fixing $\Gamma_i$}. \begin{lemma}\label{lem:decrpi} Suppose that in the process of fixing $\Gamma_i$, the elements that occupy the last position of $\Gamma_i$ are $b,b+\eps,b+2\eps,\dots,b+k\eps$ in this order. 
Then $$\pi(b)>\pi(b+\eps)>\pi(b+2\eps)>\dots>\pi(b+k\eps).$$ \end{lemma} \begin{proof} For each $1\le j\le k$, the switch between $b+(j-1)\eps$ and $b+j\eps$ only takes place if $P(b+(j-1)\eps,b+j\eps)$ holds, which implies that $\pi(b+(j-1)\eps)>\pi(b+j\eps)$. \end{proof} \begin{lemma}\label{lem:relative} \ben \item\label{partt1} No switch ever takes place between two entries of the same $\Gamma_i$. \item\label{partt2} The relative order of the entries within $\Gamma_i$ always stays the same. In particular, the first entry of $\Gamma_i$ is always the largest. \een \end{lemma} \begin{proof} Since the switches always involve consecutive values, the relative order of the entries in $\Gamma_i$ never changes when a switch is between an entry of $\Gamma_i$ and an entry of another cycle. Thus, part~(\ref{partt2}) follows from part~(\ref{partt1}). To prove part~(\ref{partt1}), assume for contradiction that a switch takes place between two entries of $\Gamma_i$. Consider the first such switch, which must necessarily be between the last element $z$ and another element $z+\eps$ in $\Gamma_i$. For $P(z,z+\eps)$ to hold, we would need $\sigm(z)<\sigm(z+\eps)$. But this cannot happen because $\sigm(z)$ is the first entry of $\Gamma_i$, and hence the largest since by assumption this is the first switch between two entries of $\Gamma_i$. \end{proof} \begin{lemma}\label{lem:notmoved} While fixing $\Gamma_i$, \ben \item\label{part1} neither the first nor the last entry of $\Gamma_j$ are moved for any $j>i$; in particular, before iteration $j$, $\Gamma_j=(a_j,\dots,b_j)$; \item\label{part2} no entry $t$ with $t\ge a_{i+1}$ is moved; \item\label{part3} no entry preceding an entry $t\ge a_{i+1}$ is moved. \een \end{lemma} \begin{proof} We use induction on $i$. Our induction hypothesis is that all three parts of the lemma hold for smaller values of $i$. Assume we have fixed $\Gamma_1,\Gamma_2,\dots,\Gamma_{i-1}$, and neither $a_j$ nor $b_j$ for $j\ge i$ have moved.
In particular, $\sigm(b_i)=a_i$ at this point. Suppose that during the process of fixing $\Gamma_i$ we move some $b_j$ with $j>i$. Consider the first time this happens, and let $z$ be the rightmost entry of $\Gamma_i$ right before the switch. Since $b_i$ was the rightmost entry of $\Gamma_i$ before iteration $i$, we have by Lemma~\ref{lem:decrpi} that $\pi(z)\le\pi(b_i)=a_{i+1}$. For the switch between $z$ and $b_j$ to happen, we must have $b_j=z\pm1$ and $P(z,b_j)$ must hold, which implies that $\pi(z)>\pi(b_j)$. But $\pi(z)\le a_{i+1}$ as we just showed, $\pi(b_j)=a_{j+1}$, and $a_{i+1}<a_{j+1}$ because the left-to-right maxima of a sequence are increasing, so this is a contradiction. Suppose now that during the process of fixing $\Gamma_i$ we move some $a_j$ with $j>i$. Consider the first time this happens. Since switches only take place between consecutive values and the sequence $a_1,a_2,\dots$ is increasing, we must have $j=i+1$, and there must be some element $x$ in $\Gamma_i$ with $|a_{i+1}-x|=1$. Now, the facts that $a_{i+1}$ is larger than all the elements of $\Gamma_i$ and that $\Gamma_i$ starts with its largest element, which we know from Lemma~\ref{lem:relative}(\ref{partt2}), imply that $x$ is the first entry of $\Gamma_i$ and $a_{i+1}=x+1$. However, we claim that in this case no switch takes place. Indeed, for any switch to take place, $P(z,z+\eps)$ must hold for some $\eps\in\{-1,1\}$, where $z$ is the last entry of $\Gamma_i$. This means that $\pi(z+\eps)<\pi(z)\le\pi(b_i)=a_{i+1}=x+1$ and $\sigm(z+\eps)>\sigm(z)=x$, so $\pi(z+\eps)<\sigm(z+\eps)$, which implies that $z+\eps$ or $\sigm(z+\eps)$ has been moved in a previous step of the algorithm. But the fact that $\sigm(z+\eps)\ge a_{i+1}$ makes this impossible, by the induction hypothesis on parts~(\ref{part2}) and~(\ref{part3}). Now we prove part~(\ref{part2}).
At the beginning of the computation of $\bij(\pi)$, when $\sigm$ is given by equation~(\ref{eq:sigm}), $a_{i+1}$ is larger than all the elements in $\Gamma_1,\dots,\Gamma_i$. Since all the switches involve consecutive values, no $t\ge a_{i+1}$ can be involved in a switch with elements of $\Gamma_1,\dots,\Gamma_i$ without $a_{i+1}$ being involved in a switch first. But we just proved that this cannot happen. To prove part~(\ref{part3}), assume without loss of generality that the entry $s$ preceding $t$ is moved for the first time while fixing $\Gamma_i$. Since $t$ has not been moved, the switch must happen in step I of the $i$-th iteration, and it must involve $s$ and the last entry of $\Gamma_i$ at the time, say $z$. For this switch to take place, we need $\pi(z)>\pi(s)$. But this can never happen, because by Lemma~\ref{lem:decrpi}, $\pi(z)\le\pi(b_i)=a_{i+1}$, and since neither $s$ nor $t$ have moved so far, $\pi(s)=t\ge a_{i+1}$. \end{proof} \begin{lemma}\label{lem:notback} While fixing $\Gamma_i$, no entries in cycles $\Gamma_j$ with $j<i$ are moved. \end{lemma} \begin{proof} Suppose this is false, and consider the first time that the last entry $z$ of $\Gamma_i$ is switched with an entry $z+\eps$ of $\Gamma_j$, where $j<i$. Then we must have $\sigm(z)<\sigm(z+\eps)$. But $\sigm(z)$ is the first entry of $\Gamma_i$, which is larger than any element in $\Gamma_1,\Gamma_2,\dots,\Gamma_{i-1}$ by Lemma~\ref{lem:notmoved}(\ref{part1}), in particular larger than $\sigm(z+\eps)$. \end{proof} \begin{lemma}\label{lem:bi} In the process of fixing $\Gamma_i$, the elements that occupy the last position of $\Gamma_i$ are $b_i,b_i+\eps,b_i+2\eps,\dots,b_i+k\eps$ for some $k$, in this order. Additionally, $$\pi(b_i)>\pi(b_i+\eps)>\pi(b_i+2\eps)>\dots>\pi(b_i+k\eps).$$ \end{lemma} \begin{proof} This follows easily from the description of $\bij$ and Lemmas~\ref{lem:decrpi} and~\ref{lem:notmoved}(\ref{part1}). 
\end{proof} Now we prove two properties of $\bij$, the main one being that it preserves the descent set if we forget $\pi(n+1)$. \begin{prop}\label{prop:descents} Let $\pi\in\C_{n+1}$ and $\sigma=\bij(\pi)$. Then \ben \item\label{desc} $D(\pi)\cap[n-1]=D(\sigma)$, \item\label{n} $\pi^{-1}(n+1)=\sigma^{-1}(n)$. \een \end{prop} \begin{proof} First observe that if $\sigm$ is the permutation in equation~(\ref{eq:sigm}), before any cycles are fixed, then $\sigm(x)=\pi(x)$ for all $x\notin\{b_1,\dots,b_r\}$, and $\sigm(b_i)<\pi(b_i)$ for $1\le i\le r$. Note also that $\pi(b_r)=n+1$ and $\sigm(b_r)=a_r=n$. By Lemma~\ref{lem:notmoved}(\ref{part1}), $a_r$ and $b_r$ are never moved when fixing the cycles $\Gamma_1,\dots,\Gamma_{r-1}$, so $\sigma(b_r)=n$, which proves part~(\ref{n}) of the proposition. To prove part~(\ref{desc}), let $W=(D(\pi)\cap[n-1])\bigtriangleup D(\sigm)$ be the set of indices where the descents of $\pi$ and $\sigm$ disagree ($\bigtriangleup$ denotes the symmetric difference). Before fixing any cycles, the only indices that may be in $W$ are $b_i-1$ and $b_i$ for $1\le i\le r-1$, by the previous observation. We claim that fixing cycle $\Gamma_i$ removes $b_i-1$ and $b_i$ from $W$ (if they were in it) without adding any other elements to $W$. Indeed, by Lemma~\ref{lem:bi}, the first step in iteration $i$ checks $P(b_i,b_i-1)$ and $P(b_i,b_i+1)$, which determine whether $b_i-1$ and $b_i$ are in $W$, respectively. If either of them is, the switch between $b_i$ and $b_i+\eps$ (with $\eps$ chosen so that $\sigm(b_i+\eps)$ is largest) performed by $\bij$ in step I of the $i$-th iteration guarantees that $b_i-1,b_i\notin W$ after the switch. 
However, two elements could now have been added to $W$:\bit \item If $b_i$ was not the first entry of $\Gamma_i$ (note that by Lemma~\ref{lem:notmoved}(\ref{part1}) we know that $b_i+\eps$ was not the first entry of its cycle) and the entries preceding $b_i$ and $b_i+\eps$ were consecutive, say $s$ and $s+1$, then the switch between $b_i$ and $b_i+\eps$ adds $s$ to $W$. Step II of the $i$-th iteration switches $s$ and $s+1$ so that $s$ is no longer in $W$, and next performs any necessary switches to prevent any other indices from being added to $W$. \item It is possible that since $\sigm(b_i+\eps)$ has changed in step I, the relative order of $\sigm(b_i+\eps)$ and $\sigm(b_i+2\eps)$ is now different from the relative order of $\pi(b_i+\eps)$ and $\pi(b_i+2\eps)$. The condition $P(b_i+\eps,b_i+2\eps)$ determines whether this is the case, and if so, the second repetition of step I switches $b_i+\eps$ and $b_i+2\eps$ in the cycle form of $\sigm$ to fix the problem. Again, step II prevents other elements from being added to $W$. \eit These steps are repeated until for some $k$, $P(b_i+k\eps,b_i+(k+1)\eps)$ is false, which means that either $b_i+(k+1)\eps\in\{0,n+1\}$ or the relative order of $\sigm(b_i+k\eps)$ and $\sigm(b_i+(k+1)\eps)$ agrees with the relative order of $\pi(b_i+k\eps)$ and $\pi(b_i+(k+1)\eps)$. Iteration $i$ ends here; at this time, the descent set of the sequence $\sigm(b_i-\eps)\sigm(b_i)\sigm(b_i+\eps)\dots\sigm(b_i+k\eps)\sigm(b_i+(k+1)\eps)$ agrees with the descent set of $\pi(b_i-\eps)\pi(b_i)\pi(b_i+\eps)\dots\pi(b_i+k\eps)\pi(b_i+(k+1)\eps)$, and the only elements that may remain in $W$ are $b_j-1$ and $b_j$ for $i< j\le r-1$. After iteration $r-1$, we have $W=\emptyset$, so part~(\ref{desc}) is proved. \end{proof} \begin{prop} The maps $\bij$ and $\inv$ are inverses of each other. \end{prop} \begin{proof} Let $\pi\in\C_{n+1}$ and $\sigma=\bij(\pi)$. 
We will prove that $\inv(\sigma)=\pi$ by showing that the switches done by the $i$-th iteration of $\bij$ on $\pi$ are the same as the switches done by the iteration of $\inv$ on $\sigma$ corresponding to the same $i$ (we will call this the $i$-th iteration of $\inv$, not being bothered by the fact that the iteration number decreases from $r-1$ to $1$). By Lemma~\ref{lem:bi}, the elements that occupy the last position of $\Gamma_i$ during iteration $i$ are $b_i,b_i+\eps,b_i+2\eps,\dots,b_i+k\eps$ for some $k\ge0$. Let $\sigm_0$ and $\sigm_k$ be the permutations obtained during the computation of $\bij(\pi)$ right before and right after the $i$-th iteration, respectively. More generally, for each $1\le j\le k$, let $\sigm_j$ be the permutation obtained from $\sigm_{j-1}$ after the switch between $b_i+(j-1)\eps$ and $b_i+j\eps$ from step I and the subsequent switches of the preceding entries from step II take place. For $0\le j\le k$, let $\pim_j\in\C_{n+1}$ be the permutation whose cycle notation is obtained by removing all but the first and last parentheses in the cycle form of $\sigm_j$ and appending $n+1$. We will show that if $\pim=\pim_k$ right before the $i$-th iteration of the computation of $\inv(\sigma)$, then $\pi=\pim_0$ right after the $i$-th iteration. This being true for $i=r-1,r-2,\dots,1$ will prove that $\inv(\bij(\pi))=\pi$ for all $\pi\in\C_{n+1}$, and since $|\C_{n+1}|=|\S_n|=n!$, the proposition will follow. We use the same notation as in the description of $\bij$, so $$\pi=(a_1,\dots,b_1,a_2,\dots,b_2,\dots,a_r,\dots,b_r,n+1).$$ The $i$-th cycle of $\sigm_0$ is $\Gamma_i=(a_i,\dots,b_i)$, since by Lemma~\ref{lem:notmoved}(\ref{part1}), $a_i$ and $b_i$ have not been moved before iteration $i$. For $1\le j\le k$, let $s_j=\sigm_0(b_i+j\eps)$. If $b_i+(k+1)\eps\notin\{0,n+1\}$, let $s_{k+1}=\sigm_0(b_i+(k+1)\eps)$, and if $b_i-\eps\notin\{0,n+1\}$, let $s_0=\sigm_0(b_i-\eps)$. 
Here are some useful remarks, where $1\le j\le k$: \ben\renewcommand{\labelenumi}{(\alph{enumi})} \item\setcounter{obsa}{\theenumi} For the switch between $b_i+(j-1)\eps$ and $b_i+j\eps$ to take place, $P(b_i+(j-1)\eps,b_i+j\eps)$ must hold, which is equivalent to $$\pi(b_i+(j-1)\eps)>\pi(b_i+j\eps)\ \textrm{ and }\ \sigm_{j-1}(b_i+(j-1)\eps)<\sigm_{j-1}(b_i+j\eps).$$ \item\setcounter{obsd}{\theenumi} In order for $s_j$ to be switched when fixing $\Gamma_i$, we must have $s_j=x-1$, where $x$ is the first element of $\Gamma_i$ right before the switch takes place, and the switch must be between $s_j$ and $x$. To see this, assume without loss of generality that $s_j$ is the first entry among $s_1,s_2,\dots,s_k$ that is switched while going from $\sigm_{\ell-1}$ to $\sigm_\ell$ for some $\ell$, and that $x$ is the first entry of $\Gamma_i$ in $\sigm_{\ell-1}$. After $s_j$ is switched, the element that takes its place must be larger than the first entry of $\Gamma_i$ in $\sigm_\ell$, otherwise the condition $\sigm_{j-1}(b_i+(j-1)\eps)<\sigm_{j-1}(b_i+j\eps)$ from remark~(\alph{obsa}) would not hold. However, if $s_j<x-1$ then, after $s_j$ is switched, the element that takes its place is $s_j\pm1<x$ and thus smaller than the first entry of $\Gamma_i$ in $\sigm_\ell$. If $s_j=x+1$ and it is switched with $x$, the same problem occurs. And if $s_j>x+1$, then there is no element in $\Gamma_i$ that $s_j$ can be switched with. So, the only possibility is that $s_j=x-1$ and it is switched with $x$. \item\setcounter{obse}{\theenumi} From remark (\alph{obsa}) it follows that in iteration $i$, each $b_i+j\eps$ is only moved when going from $\sigm_{j-1}$ to $\sigm_j$. Indeed, since $s_j$ can only be switched with the first entry of $\Gamma_i$, the preceding element $b_i+j\eps$ cannot be switched in step II of the algorithm.
\item\setcounter{obsb}{\theenumi} In $\sigm_0$, $b_i+j\eps$ is in a cycle $\Gamma_\ell$ with $\ell>i$, by Lemmas~\ref{lem:relative}(\ref{partt1}) and~\ref{lem:notback}, and remark~(\alph{obse}). \item\setcounter{obsc}{\theenumi} In $\sigm_0$, $b_i+j\eps$ is not the rightmost entry of a cycle, because by Lemma~\ref{lem:notmoved}(\ref{part1}), none of the $b_\ell$ with $\ell>i$ moves while fixing $\Gamma_i$. \item\setcounter{obsf}{\theenumi} Remarks (\alph{obse}) and (\alph{obsc}) imply that $\pim_\ell(b_i+j\eps)=\sigm_\ell(b_i+j\eps)$ for $\ell\neq j$. \een For simplicity, assume first that $a_i$ is not moved while fixing $\Gamma_i$. By remark~(\alph{obsd}) above, none of the $s_j$ is moved while fixing $\Gamma_i$ in this case. Thus, using remark~(\alph{obse}) we have $\sigm_\ell(b_i+j\eps)=s_j$ for $0\le\ell\le j-1$ and $\sigm_\ell(b_i+j\eps)=s_{j+1}$ for $j+1\le\ell\le k$. By remark~(\alph{obsa}), $a_i=\sigm_{j-1}(b_i+(j-1)\eps)<\sigm_{j-1}(b_i+j\eps)=s_j$. But then, by Lemma~\ref{lem:notmoved}(\ref{part2})(\ref{part3}), neither $b_i+j\eps$ nor $s_j$ for $1\le j\le k$ have been moved in the first $i-1$ iterations, so $\pi(b_i+j\eps)=s_j$, and by Lemma~\ref{lem:bi}, \beq\label{eq:orderai}a_{i+1}>s_1>s_2>\dots>s_k>a_i.\eeq Additionally, it is not the case that $a_{i+1}>s_0>s_1$. Indeed, if it were (assuming that $s_0$ is defined), then we would have $a_i=\sigm_0(b_i)<\sigm_0(b_i-\eps)=s_0$ and $a_{i+1}=\pi(b_i)>\pi(b_i-\eps)=s_0$, so $P(b_i,b_i-\eps)$ would hold, and since $s_0>s_1$, the algorithm would have switched $b_i$ with $b_i-\eps$ instead of with $b_i+\eps$. Similarly, it is not the case that $s_k>s_{k+1}>a_i$. If it were (assuming that $s_{k+1}$ is defined), then we would have $a_i=\sigm_k(b_i+k\eps)<\sigm_k(b_i+(k+1)\eps)=s_{k+1}$ and $s_k=\pi(b_i+k\eps)>\pi(b_i+(k+1)\eps)=s_{k+1}$, so $P(b_i+k\eps,b_i+(k+1)\eps)$ would hold, and the algorithm would switch $b_i+k\eps$ with $b_i+(k+1)\eps$. 
We now show that, under the assumption that $a_i$ is not moved, iteration $i$ of $\inv$ undoes precisely the switches performed by iteration $i$ of $\bij$. Given $\pim_k$, whose $i$-th block is $c_i,\dots,d_i$, where $c_i=a_i$ and $d_i=b_i+k\eps$, $Q(b_i+k\eps,b_i+(k-1)\eps)$ is the condition $$\pim_k(b_i+k\eps)>\pim_k(b_i+(k-1)\eps)\ \textrm{ and }\ \sigma(b_i+k\eps)<\sigma(b_i+(k-1)\eps).$$ The first inequality can be restated as $a_{i+1}>\sigm_k(b_i+(k-1)\eps)=s_k$ by remark (\alph{obsf}), and it holds by equation~(\ref{eq:orderai}). The second inequality is equivalent to $\pi(b_i+k\eps)<\pi(b_i+(k-1)\eps)$ by Proposition~\ref{prop:descents}, and it holds by Lemma~\ref{lem:bi}. Additionally, since $s_k>s_{k+1}>a_i$ does not hold, either $Q(b_i+k\eps,b_i+(k+1)\eps)$ does not hold or, if it does, then $s_k=\pim_k(b_i+(k-1)\eps)<\pim_k(b_i+(k+1)\eps)=s_{k+1}$, so $\inv$ starts iteration $i$ by switching $b_i+k\eps$ and $b_i+(k-1)\eps$ (as opposed to $b_i+(k+1)\eps$) in step I'. Next, the switches in step II' of $\inv$ undo the switches from step II of $\bij$. Afterwards, for each $j=k-1,k-2,\dots,1$, the computation of $\inv$ checks condition $Q(b_i+j\eps,b_i+(j-1)\eps)$, that is, whether $$\pim_j(b_i+j\eps)>\pim_j(b_i+(j-1)\eps)\ \textrm{ and }\ \sigma(b_i+j\eps)<\sigma(b_i+(j-1)\eps).$$ Again, the first inequality can be restated as $a_{i+1}>\sigm_j(b_i+(j-1)\eps)=s_j$ by remark (\alph{obsf}), and it holds by equation~(\ref{eq:orderai}). The second inequality is equivalent to $\pi(b_i+j\eps)<\pi(b_i+(j-1)\eps)$ by Proposition~\ref{prop:descents}, and it holds by Lemma~\ref{lem:bi}. Thus, $\inv$ performs the switch between $b_i+j\eps$ and $b_i+(j-1)\eps$ in step I', followed by the switches in step II' that undo the ones performed by $\bij$. Finally, $\inv$ checks condition $Q(b_i,b_i-\eps)$, that is, whether $$\pim_0(b_i)>\pim_0(b_i-\eps)\ \textrm{ and }\ \sigma(b_i)<\sigma(b_i-\eps),$$ assuming that $b_i-\eps\notin\{0,n+1\}$. 
The first inequality is $a_{i+1}>s_0$, and the second is equivalent to $s_1=\sigm_k(b_i)<\sigm_k(b_i-\eps)=s_0$. If both inequalities held, then $a_{i+1}>s_0>s_1$, which we know is false. So iteration $i$ of $\inv$ stops here. In general, it can happen that $a_i$ moves while fixing $\Gamma_i$, and if so it is possible that the $b_i+j\eps$ have moved in the first $i-1$ iterations of $\bij$. In this case one can still show that $a_{i+1}>s_1>s_2>\dots>s_k$, but it is not necessarily true that $s_k>a_i$ anymore. At the beginning of iteration $i$ of $\bij$, we have $a_i=\sigm_0(b_i)<\sigm_0(b_i+\eps)=s_1$. After switching $b_i$ and $b_i+\eps$ in step I and perhaps some preceding entries in step II, it is possible that $a_i$ has been switched with $s_\ell$ for some $\ell$. In this case, $s_\ell=a_i-1$ by remark~(\alph{obsd}), and after the switch the first entry of $\Gamma_i$ is $s_\ell=a_i-1$ and $\sigm_1(b_i+\ell\eps)=s_\ell+1=a_i$. Now we have that $\sigm_1(b_i)=s_1$, and for $j\notin\{0,1,\ell\}$, $\sigm_1(b_i+j\eps)=s_j$. In general, after the switch between $b_i+(j-1)\eps$ and $b_i+j\eps$ takes place in step I, the last switch in step II may involve the first entry $x$ of $\Gamma_i$ and some $s_t$ with $s_t=x-1$, making the first entry of $\Gamma_i$ go down by one and $\sigm_j(b_i+t\eps)=\sigm_{j-1}(b_i+t\eps)+1=s_t+1$. At the end of iteration $i$, $$\sigm_k(b_i)>\sigm_k(b_i+\eps)>\dots>\sigm_k(b_i+(k-1)\eps)>\sigm_k(b_i+k\eps)=a_i-m=c_i,$$ where $m$ is the number of times that the first entry of $\Gamma_i$ has been switched, and $\sigm_k(b_i+j\eps)=s_{j+1}+1$ for exactly $m$ of the values of $j\in\{0,1,\dots,k-1\}$ and $\sigm_k(b_i+j\eps)=s_{j+1}$ for the remaining ones. The above reasoning for the case where $a_i$ was not moved can be adapted to show that, also in this general case, iteration $i$ of $\inv$ reverses the switches done by iteration $i$ of $\bij$.
\end{proof} \section{Consequences}\label{sec:consequences} The following is an obvious consequence of Theorem~\ref{thm:bij}. We state it separately in order to refer to it later. \begin{corollary}\label{cor:cycles} For every $n$ and every $I\subseteq[n-1]$, $$|\{\pi\in\C_{n+1}\,:\,D(\pi)\cap[n-1]=I\}|=|\{\sigma\in\S_n\,:\,D(\sigma)=I\}|.$$ \end{corollary} This result has the following probabilistic interpretation. Choose a permutation $\pi\in\S_{n+1}$ uniformly at random. Then, for any given $I\subseteq[n-1]$, the event that $D(\pi)\cap[n-1]=I$ and the event that $\pi$ is a cyclic permutation are independent. To see this, note that the relative order of $\pi(1)\pi(2)\dots\pi(n)$ is given by a uniformly random permutation in $\S_n$. Thus, for any fixed $I\subseteq[n-1]$, the probability that $D(\pi)\cap[n-1]=I$ for a random $\pi\in\S_{n+1}$ is the same as the probability that $D(\sigma)=I$ for a random $\sigma\in\S_n$, which by Corollary~\ref{cor:cycles} is the same as the probability that $D(\pi)\cap[n-1]=I$ for a random $\pi\in\C_{n+1}$. Our next goal is to show that Conjecture~\ref{conj:Eli} follows from Theorem~\ref{thm:bij}. First, instead of the set $\T^0_n$, it will be more convenient for the sake of notation to consider the set $\U_n$ consisting of $n$-cycles in one-line notation in which one entry has been replaced with $n+1$. For example, $\U_3=\{431,241,234,412,342,314\}$. \begin{corollary}\label{cor:biju} For every $n$ there is a bijection $\biju$ between $\U_{n}$ and $\S_n$ such that if $\tau\in\U_n$ and $\sigma=\biju(\tau)$, then $$D(\tau)=D(\sigma).$$ Additionally, if $n+1$ is in position $k$ of $\tau$, then $\sigma(k)=n$. \end{corollary} \begin{proof} Let $\tau\in\U_n$ and suppose it has been obtained from an $n$-cycle $\pi$ by replacing $\pi(k)$ with $n+1$ in the one-line notation. Write $\pi$ in cycle form with $k$ at the end, say $\pi=(t_1,t_2,\dots,t_{n-1},k)$, and let $\pi'=(t_1,t_2,\dots,t_{n-1},k,n+1)\in\C_{n+1}$. 
Clearly, $D(\tau)=D(\pi')\cap[n-1]$, and the map $\tau\mapsto\pi'$ is a bijection between $\U_n$ and $\C_{n+1}$. Let $\sigma=\bij(\pi')$. By Theorem~\ref{thm:bij}, $D(\pi')\cap[n-1]=D(\sigma)$, and by Proposition~\ref{prop:descents}(\ref{n}), $\sigma(k)=n$. \end{proof} The following corollary proves Conjecture~\ref{conj:Eli}. \begin{corollary}\label{cor:Elishift} For every $n$ there is a bijection $\biju'$ between $\T^0_{n}$ and $\S_n$ such that if $\tau\in\T^0_n$ and $\sigma=\biju'(\tau)$, then $$D(\tau)=D(\sigma).$$ Additionally, if $0$ is in position $k$ of $\tau$, then $\sigma(k)=1$. \end{corollary} \begin{proof} Given $\tau\in\T^0_n$ obtained from an $n$-cycle $\pi$ by replacing $\pi(k)$ with $0$ in its one-line notation, let $\wt\tau\in\U_n$ be obtained from $\wt\pi$ (see the definition in the introduction) by replacing $\wt\pi(n+1-k)$ with $n+1$. It is clear that for $1\le i\le n-1$, $i\in D(\wt\tau)$ if and only if $n+1-i\notin D(\tau)$. Let $\sigma=\biju'(\tau)=\wt{\biju(\wt\tau)}$. Then, for $1\le i\le n-1$, $$i\in D(\sigma) \ \Leftrightarrow\ n+1-i\notin D(\biju(\wt\tau))=D(\wt\tau) \ \Leftrightarrow\ i\in D(\tau).$$ Also $\wt\sigma(n+1-k)=n$, so $\sigma(k)=1$. \end{proof} The final result of this section can be seen as a generalization of Corollary~\ref{cor:cycles}. We give a bijective proof of it. \begin{corollary}\label{cor:cyclesu} Fix $1\le m\le n$ and let $J=[n-1]\setminus\{m-1,m\}$. For any $I\subseteq J$, $$|\{\pi\in\C_{n}\,:\,D(\pi)\cap J=I\}|=|\{\sigma\in\S_{n}\,:\,\sigma(m)=1,\,D(\sigma)\cap J=I\}|.$$ \end{corollary} \begin{proof} Let $\pi\in\C_n$ with $D(\pi)\cap J=I$. Let $\tau\in\T^0_n$ be obtained by replacing $\pi(m)$ with $0$ in the one-line notation of $\pi$, and let $\sigma=\biju'(\tau)$. By Corollary~\ref{cor:Elishift}, $\sigma(m)=1$ and $D(\sigma)\cap J=D(\tau)\cap J=D(\pi)\cap J=I$.
\end{proof} \section{Related work and non-bijective proofs}\label{sec:related} In this section we introduce some related work of Gessel and Reutenauer~\cite{GR}, which will allow us to give non-bijective proofs of Corollaries~\ref{cor:cycles} and~\ref{cor:cyclesu}. We start with some definitions. Let $X=\{x_1,x_2,\dots\}_<$ be a linearly ordered alphabet. A {\em necklace} of length $\ell$ is a circular arrangement of $\ell$ beads which are labeled with elements of $X$. Two necklaces are considered the same if they are cyclic rotations of one another. The cycle structure of a multiset of necklaces is the partition whose parts are the lengths of the necklaces in the multiset. The {\em evaluation} of a multiset of necklaces is the monomial $x_1^{e_1}x_2^{e_2}\dots$ where $e_i$ is the number of beads with label $x_i$. The following result is equivalent to Corollary~2.2 from~\cite{GR}. \begin{theorem}[\cite{GR}]\label{thm:GR} Let $I=\{i_1,i_2,\dots,i_k\}_<\subseteq[n-1]$ and let $\lambda$ be a partition of~$n$. Then the number of permutations with cycle structure $\lambda$ and descent set contained in $I$ equals the number of multisets of necklaces with cycle structure $\lambda$ and evaluation $x_1^{i_1}x_2^{i_2-i_1}\dots x_k^{i_k-i_{k-1}}x_{k+1}^{n-i_k}$ . \end{theorem} We can now give direct, non-bijective proofs of Corollaries~\ref{cor:cycles} and~\ref{cor:cyclesu}. \begin{proof}[Alternate proof of Corollary~\ref{cor:cycles}] Suppose that $I=\{i_1,i_2,\dots,i_k\}_<$, and let $I'=I\cup\{n\}$. 
By Theorem~\ref{thm:GR}, the number of permutations $\pi\in\C_{n+1}$ with $D(\pi)\subseteq I'$ (equivalently, $D(\pi)\cap[n-1]\subseteq I$) equals the number of necklaces with evaluation $$x_1^{i_1}x_2^{i_2-i_1}\dots x_k^{i_k-i_{k-1}}x_{k+1}^{n-i_k}x_{k+2}.$$ By first choosing the bead labeled $x_{k+2}$, it is clear that the number of such necklaces is $$\binom{n}{i_1,i_2-i_1,\dots,i_k-i_{k-1},n-i_k}.$$ But this is precisely (see~\cite{EC1}) the number of permutations in $\S_n$ whose descent set is contained in $I$. Thus, we have shown that $$|\{\pi\in\C_{n+1}\,:\,D(\pi)\cap[n-1]\subseteq I\}|=|\{\sigma\in\S_n\,:\,D(\sigma)\subseteq I\}|.$$ Since this holds for all $I\subseteq[n-1]$, the statement now follows by inclusion-exclusion: \bmt |\{\pi\in\C_{n+1}\,:\,D(\pi)\cap[n-1]=I\}|=\sum_{J\subseteq I}(-1)^{|I|-|J|}|\{\pi\in\C_{n+1}\,:\,D(\pi)\cap[n-1]\subseteq J\}|\\ =\sum_{J\subseteq I}(-1)^{|I|-|J|}|\{\sigma\in\S_n\,:\,D(\sigma)\subseteq J\}|=|\{\sigma\in\S_n\,:\,D(\sigma)=I\}|. \end{multline*} \end{proof} Note that even though a bijective proof of Theorem~\ref{thm:GR} is implicit in~\cite{GR}, the last inclusion-exclusion step in the above proof of Corollary~\ref{cor:cycles} makes it non-bijective. \begin{proof}[Alternate proof of Corollary~\ref{cor:cyclesu}] Let $I=\{i_1,i_2,\dots,i_k\}_<$. 
Assume first that $1<m<n$, and let $$I'=I\cup\{m-1,m\}=\{i_1,i_2,\dots,i_j,m-1,m,i_{j+1},\dots,i_k\}_<.$$ By Theorem~\ref{thm:GR}, the number of permutations $\pi\in\C_{n}$ with $D(\pi)\subseteq I'$ (equivalently, $D(\pi)\cap J\subseteq I$) equals the number of necklaces with evaluation $$x_1^{i_1}x_2^{i_2-i_1}\dots x_j^{i_j-i_{j-1}}x_{j+1}^{m-1-i_j}x_{j+2}x_{j+3}^{i_{j+1}-m}\dots x_{k+2}^{i_k-i_{k-1}}x_{k+3}^{n-i_k}.$$ By first choosing the bead labeled $x_{j+2}$, it is clear that the number of such necklaces is $$\binom{n-1}{i_1,i_2-i_1,\dots,i_j-i_{j-1},m-1-i_j,i_{j+1}-m,\dots,i_k-i_{k-1},n-i_k}.$$ But this is precisely the number of permutations $\sigma\in\S_n$ with $\sigma(m)=1$ whose descent set satisfies $D(\sigma)\cap J\subseteq I$. Indeed, each partition of $\{2,3,\dots,n\}$ into blocks of sizes $$i_1,i_2-i_1,\dots,i_j-i_{j-1},m-1-i_j,i_{j+1}-m,\dots,i_k-i_{k-1},n-i_k$$ corresponds to the permutation whose first $i_1$ entries are the elements of the first block in increasing order, followed by the $i_2-i_1$ elements of the second block in increasing order, until we get to the $m$-th entry, which is $1$, after which the $i_{j+1}-m$ elements of the $(j+1)$-st block follow in increasing order, and so on. This proves that $$|\{\pi\in\C_{n}\,:\,D(\pi)\cap J\subseteq I\}|=|\{\sigma\in\S_n\,:\,\sigma(m)=1,\,D(\sigma)\cap J\subseteq I\}|.$$ As before, since this equality holds for all $I\subseteq J$, the main statement now follows by inclusion-exclusion. If $m=1$ or $m=n$, we let $I'=I\cup\{m\}=\{1,i_1,i_2,\dots,i_k\}$ or $I'=I\cup\{m-1\}=\{i_1,i_2,\dots,i_k,m-1\}$, respectively, and apply an analogous argument. \end{proof} \bs We end this section with another application of the work of Gessel and Reutenauer. We show that \cite[Lemma 3.4]{GR} can be used to provide an explicit bijection that solves a generalization of Problem~\ref{prob:EFW}. 
Indeed, since it preserves the cycle structure, the following bijection sends derangements to derangements. \begin{prop}\label{prop:subsets} For any two subsets $I,J\subseteq[n-1]$ with the same associated partition, there exists a bijection between $\{\pi\in\S_n\,:\,D(\pi)\subseteq I\}$ and $\{\sigma\in\S_n\,:\,D(\sigma)\subseteq J\}$ preserving the cycle structure. \end{prop} \begin{proof} Let $\pi\in\S_n$ with $D(\pi)\subseteq I$, where $I=\{i_1,i_2,\dots,i_k\}_<$, and let $\lambda$ be the cycle structure of $\pi$. For convenience, define $i_0=0$ and $i_{k+1}=n$, and let $$(r_1,r_2,\dots,r_{k+1})=(i_1,i_2-i_1,\dots,i_k-i_{k-1},n-i_k)$$ be the corresponding composition of $n$. Similarly, let $(s_1,s_2,\dots,s_{k+1})$ be the composition of $n$ corresponding to $J$. Since the associated partitions are the same, there is a permutation $\alpha$ of the indices such that $r_j=s_{\alpha(j)}$ for $1\le j\le k+1$. Write $\pi$ as a product of cycles and for each $1\le j\le k+1$, replace the entries $i_{j-1}+1,i_{j-1}+2,\dots,i_j$ with $x_{\alpha(j)}$, thus obtaining a multiset of necklaces. For each bead, consider the periodic sequence obtained by reading the necklace starting at that bead. Now, order these sequences lexicographically (if there are repeated necklaces, first choose an order among them), and label the beads with $1,2,\dots,n$ according to this order. This yields the cycle form of a permutation $\sigma$, which clearly has cycle structure $\lambda$. It follows from~\cite{GR} that $D(\sigma)\subseteq J$, and that the map $\pi\mapsto\sigma$ is a bijection. In fact, this map essentially amounts to applying the bijection $U$ from \cite[Lemma 3.4]{GR} to a word whose standard permutation is $\pi^{-1}$, then replacing each $x_j$ with $x_{\alpha(j)}$ in the necklaces, and finally applying the inverse of $U$. \end{proof} \begin{example} Let $n=12$, $I=\{2,8\}$ and $J=\{4,6\}$, so $(r_1,r_2,r_3)=(2,6,4)$ and $(s_1,s_2,s_3)=(4,2,6)$.
Let $$\pi=3\:4\:1\:2\:5\:9\:11\:12\:6\:7\:8\:10=(1,3)(2,4)(5)(6,9)(7,11,8,12,10),$$ with $D(\pi)=\{2,8\}=I$. After replacing $1,2$ with $x_{\alpha(1)}=x_2$, $3,4,5,6,7,8$ with $x_{\alpha(2)}=x_3$, and $9,10,11,12$ with $x_{\alpha(3)}=x_1$, we obtain the multiset of necklaces $$(x_2,x_3)(x_2,x_3)(x_3)(x_3,x_1)(x_3,x_1,x_3,x_1,x_1).$$ The corresponding periodic sequences are \bmt(x_2x_3x_2x_3\dots,x_3x_2x_3x_2\dots)(x_2x_3x_2x_3\dots,x_3x_2x_3x_2\dots)\\ (x_3x_3\dots)(x_3x_1x_3x_1\dots,x_1x_3x_1x_3\dots)\\ (x_3x_1x_3x_1x_1\dots,x_1x_3x_1x_1x_3\dots,x_3x_1x_1x_3x_1\dots,x_1x_1x_3x_1x_3\dots,x_1x_3x_1x_3x_1\dots), \end{multline*} and ordering them lexicographically we obtain the permutation $$\sigma=(5,10)(6,11)(12)(9,4)(8,2,7,1,3)=3\:7\:8\:9\:10\:11\:1\:2\:4\:5\:6\:12,$$ with $D(\sigma)=\{6\}\subseteq J$. If instead we had had $J'=\{2,6\}$, with composition $(2,4,6)$, the permutation corresponding to $\pi$ would have been $$\sigma'=(1,7)(2,8)(12)(11,6)(10,4,9,3,5)=7\:8\:5\:9\:10\:11\:1\:2\:3\:4\:6\:12,$$ with $D(\sigma')=\{2,6\}=J'$. \end{example} Note that for two arbitrary subsets $I,J\subseteq[n-1]$ with the same associated partition, there is in general no bijection between $\{\pi\in\S_n:D(\pi)=I\}$ and $\{\sigma\in\S_n:D(\sigma)=J\}$ preserving the cycle structure. For example, in the case of $5$-cycles, we see in Table~\ref{tab:n4} that, even though both $\{1,2\}$ and $\{1,4\}$ have the same associated partition $(3,1,1)$, $$\{\pi\in\C_5:D(\pi)=\{1,2\}\}=\{53124\}$$ but $$\{\sigma\in\C_5:D(\sigma)=\{1,4\}\}=\{31452,41253\}.$$
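Both equidistribution statements above are easy to verify by exhaustive enumeration for small $n$. The following Python sketch (our own check, not part of any proof; the helper names are ours) confirms Corollaries~\ref{cor:cycles} and~\ref{cor:cyclesu} for $n=5$:

```python
from itertools import permutations
from collections import Counter

def descents(p):
    """Descent set of a permutation given in one-line notation (positions are 1-indexed)."""
    return frozenset(i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def is_cycle(p):
    """True if the permutation of 1..n (one-line notation) is a single n-cycle."""
    length, j = 1, p[0]
    while j != 1:
        j, length = p[j - 1], length + 1
    return length == len(p)

n = 5

# Corollary (cor:cycles): descent sets of (n+1)-cycles, intersected with [n-1],
# are equidistributed with descent sets of permutations of [n].
lhs = Counter(descents(p) & frozenset(range(1, n))
              for p in permutations(range(1, n + 2)) if is_cycle(p))
rhs = Counter(descents(p) for p in permutations(range(1, n + 1)))
assert lhs == rhs

# Corollary (cor:cyclesu): with J = [n-1] \ {m-1, m}, descent sets of n-cycles
# restricted to J match those of permutations sigma with sigma(m) = 1.
for m in range(1, n + 1):
    J = frozenset(range(1, n)) - {m - 1, m}
    cyc = Counter(descents(p) & J
                  for p in permutations(range(1, n + 1)) if is_cycle(p))
    fix = Counter(descents(p) & J
                  for p in permutations(range(1, n + 1)) if p[m - 1] == 1)
    assert cyc == fix
```

The check runs over all $6!=720$ permutations, so it is feasible only for small $n$, but it exercises exactly the statements proved bijectively above.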
https://arxiv.org/abs/1310.5924
The symplectic geometry of closed equilateral random walks in 3-space
A closed equilateral random walk in 3-space is a selection of unit length vectors giving the steps of the walk conditioned on the assumption that the sum of the vectors is zero. The sample space of such walks with $n$ edges is the $(2n-3)$-dimensional Riemannian manifold of equilateral closed polygons in $\mathbb{R}^3$. We study closed random walks using the symplectic geometry of the $(2n-6)$-dimensional quotient of the manifold of polygons by the action of the rotation group $\operatorname {SO}(3)$. The basic objects of study are the moment maps on equilateral random polygon space given by the lengths of any $(n-3)$-tuple of nonintersecting diagonals. The Atiyah-Guillemin-Sternberg theorem shows that the image of such a moment map is a convex polytope in $(n-3)$-dimensional space, while the Duistermaat-Heckman theorem shows that the pushforward measure on this polytope is Lebesgue measure on $\mathbb{R}^{n-3}$. Together, these theorems allow us to define a measure-preserving set of "action-angle" coordinates on the space of closed equilateral polygons. The new coordinate system allows us to make explicit computations of exact expectations for total curvature and for some chord lengths of closed (and confined) equilateral random walks, to give statistical criteria for sampling algorithms on the space of polygons and to prove that the probability that a randomly chosen equilateral hexagon is unknotted is at least $\frac{1}{2}$. We then use our methods to construct a new Markov chain sampling algorithm for equilateral closed polygons, with a simple modification to sample (rooted) confined equilateral closed polygons. We prove rigorously that our algorithm converges geometrically to the standard measure on the space of closed random walks, give a theory of error estimators for Markov chain Monte Carlo integration using our method and analyze the performance of our method. 
Our methods also apply to open random walks in certain types of confinement, and in general to walks with arbitrary (fixed) edgelengths as well as equilateral walks.
\section{Introduction} In this paper, we consider the classical model of a random walk in $\mathbb{R}^3$ -- the walker chooses each step uniformly from the unit sphere. Some of the first results in the theory of these random walks are based on the observation that if a point is distributed uniformly on the surface of a sphere in $3$-space and we write its position in terms of the cylindrical coordinates $z$ and $\theta$, then $z$ and $\theta$ are independent, uniform random variates. This is usually called Archimedes' Theorem, and it is the underlying idea in the work of Lord Rayleigh~\cite{Rayleigh:1919do}, L.R.G.\,Treloar~\cite{TF9464200077}, and many others in the theory of random walks, starting at the beginning of the 20th century. In particular, it means that the vector of $z$-coordinates of the edges of a random walk is uniformly distributed on a hypercube and that the vector of $\theta$-coordinates of the edges is uniformly distributed on the $n$-torus. When we condition the walk on closure, it seems that this pleasant structure disappears: the individual steps in the walk are no longer independent random variates, and there are no obvious uniformly distributed random angles or distances in sight. This makes the study of closed random walks considerably more difficult than the study of general random walks. The main point of this paper is that the apparent disappearance of this structure in the case of closed random walks is only an illusion. In fact there is a very similar structure on the space of closed random walks if we are willing to pay the modest price of identifying walks related by translation and rigid rotation in $\mathbb{R}^3$. This structure is less obvious, but just as useful. As it turns out, Archimedes' Theorem was generalized in deep and interesting ways in the later years of the 20th century, being revealed as a special case of the Duistermaat-Heckman theorem~\cite{Duistermaat:1982hq} for toric symplectic manifolds.
Further, Kapovich and Millson~\cite{Kapovich:1996p2605} and Hausmann and Knutson~\cite{Knutson:2_iyExxE} revealed a toric symplectic structure on the quotient of the space of closed equilateral polygons by the action of the Euclidean group $E(3)$. Together, these theorems define a structure on closed random walk space which is remarkably similar to the structure on the space of open random walks: if we view an $n$-edge closed equilateral walk as the boundary of a triangulated surface, we will show below that the lengths of the $n-3$ diagonals of the triangulation are uniformly distributed on the polytope given by the triangle inequalities and that the $n-3$ dihedral angles at these diagonals of the triangulated surface are distributed uniformly and independently on the $(n-3)$-torus. This structure allows us to define a special set of ``action-angle'' coordinates which provide a measure-preserving map from the product of a convex polytope $P \subset \mathbb{R}^{n-3}$ and the $(n-3)$-torus (again, with their standard measures) to a full-measure subset of the Riemannian manifold of closed polygons of fixed edge lengths. Understanding this picture allows us to make some new explicit calculations and prove some new theorems about closed equilateral random walks. For instance, we are able to find an exact formula for the total curvature of closed equilateral polygons, to prove that the expected lengths of chords skipping various numbers of edges are equal to the coordinates of the center of mass of a certain polytope, to compute these moments explicitly for random walks with small numbers of edges, and to give a simple proof that at least $1/2$ of equilateral hexagons are unknotted. Further, we will be able to give a unified theory of several interesting problems about confined random walks, and to provide some explicit computations of chordlengths for confined walks.
We state upfront that all the methods we use from symplectic geometry are by now entirely standard; the new contribution of our paper lies in the application of these powerful tools to geometric probability. We will then turn to sampling for the second half of our paper. Our theory immediately suggests a new Markov chain sampling algorithm for confined and unconfined random walks. We will show that the theory of hit-and-run sampling on convex polytopes immediately yields a sampling algorithm which converges at a geometric rate to the usual probability measure on equilateral closed random walks (or equilateral closed random walks in confinement). Geometric convergence allows us to apply standard Markov Chain Monte Carlo theory to give error estimators for MCMC integration over the space of closed equilateral random walks (either confined or unconfined). Our sampling algorithm works for any toric symplectic manifold, so we state the results in general terms. We do this primarily because various interesting confinement models for random walks have a natural toric symplectic structure, though our results are presumably applicable far outside the theory of random walks. As with the tools we use from symplectic geometry, hit-and-run sampling and MCMC error estimators are entirely standard ways to integrate over convex polytopes. Again, our main contribution is to show that these powerful tools apply to closed and confined random walks with fixed edgelengths and to lay out some initial results which follow from their use. \section{Toric Symplectic Manifolds and Action-Angle Coordinates} \label{sec:moment polytope sampling} We begin with a capsule summary of some relevant ideas from symplectic geometry. A symplectic manifold $M$ is an even-dimensional manifold with a special nondegenerate $2$-form $\omega$ called the \emph{symplectic form}. 
The volume form $\, \mathrm{d}m = \frac{1}{n!}\omega^n$ on $M$ is called the \emph{symplectic volume} or \emph{Liouville volume} and the corresponding measure is called \emph{symplectic measure}. A diffeomorphism of a symplectic manifold which preserves the symplectic form is called a \emph{symplectomorphism}; it must preserve symplectic volume as well. An action of the torus $T^k$ on $M$ by symplectomorphisms defines a \emph{moment map} $\mu : M \rightarrow \mathbb{R}^k$, where the torus action preserves the fibers (inverse images of points) of the map. If the action obeys some mild additional technical hypotheses, it is called \emph{Hamiltonian}. Two powerful theorems apply in this case: first, the convexity theorem of Atiyah~\cite{Atiyah:1982ih} and Guillemin--Sternberg~\cite{Guillemin:1982gx} asserts that the image of the moment map is a convex polytope in $\mathbb{R}^k$ called the \emph{moment polytope}. The fibers over vertices of the moment polytope consist of fixed points of the torus action. Further, if the action is effective, the Duistermaat--Heckman theorem~\cite{Duistermaat:1982hq} asserts that the pushforward of symplectic measure to the moment polytope is a piecewise polynomial multiple of Lebesgue measure. If $k=n$ the symplectic manifold is called a \emph{toric symplectic manifold} and the pushforward measure on the moment polytope is a constant multiple of Lebesgue measure. If we can invert the moment map, we can construct a map $\alpha: P \times T^n \rightarrow M^{2n}$ compatible with $\mu$ which parametrizes a full-measure subset of $M^{2n}$ by the $n$ coordinates of points in $P$, which are called the ``action'' variables, and the $n$ angles in $T^n$, which are called the corresponding ``angle'' variables. By convention, we call the action variables $d_i$ and the angle variables $\theta_i$.
We have \begin{theorem}[Duistermaat-Heckman \cite{Duistermaat:1982hq}] Suppose $M^{2n}$ is a toric symplectic manifold with moment polytope $P$, $T^n$ is the $n$-torus ($n$ copies of the circle) and $\alpha$ inverts the moment map. If we take the standard measure on the $n$-torus and the uniform (or Lebesgue) measure on $\operatorname{int}(P)$, then the map $\alpha \colon\! \operatorname{int}(P) \times T^n \rightarrow M^{2n}$ parametrizing a full-measure subset of $M^{2n}$ in action-angle coordinates is measure-preserving. In particular, if $f \colon\! M^{2n} \rightarrow \mathbb{R}$ is any integrable function then \begin{equation} \int_M f(x) \, \mathrm{d}m = \int_{P \times T^n} f(d_1,\dots,d_n,\theta_1,\dots,\theta_n) \thinspace\operatorname{dVol}_{E^n} \wedge \,\mathrm{d}\theta_1 \wedge \dots \wedge \,\mathrm{d}\theta_n \label{eq:duistermaat heckman} \end{equation} and if $f(d_1,\dots,d_n,\theta_1,\dots,\theta_n) = f_d(d_1,\dots,d_n) f_\theta(\theta_1,\dots,\theta_n)$ then \begin{equation} \int_M f(x) \, \mathrm{d}m = \int_P f_d(d_1,\dots,d_n) \thinspace\operatorname{dVol}_{E^n} \int_{T^n} f_\theta(\theta_1,\dots,\theta_n) \,\mathrm{d}\theta_1 \wedge \dots \wedge \,\mathrm{d}\theta_n. \label{eq:duistermaat heckman product} \end{equation} \label{theorem:mp} \end{theorem} All this seems forbiddingly abstract, so we give a specific example which will prove important below. The 2-sphere is a symplectic manifold where the symplectic form $\omega$ is the ordinary area form, and the symplectic volume and the Riemannian volume are the same. Any area-preserving map of the sphere to itself is a symplectomorphism, but we are interested in the action of the circle (or $1$-torus) on the sphere given by rotation around the $z$-axis. This action is by area-preserving maps, and hence by symplectomorphisms, and in fact it is Hamiltonian. We can see that the action preserves the fibers of the moment map, which are just horizontal circles on the sphere.
Since the dimension of the torus (1) is half the dimension of the sphere (2), the sphere is then a toric symplectic manifold. The image of the moment map is the closed interval ${[-1,1] \subset \mathbb{R}}$, which is certainly a convex polytope. And as the Duistermaat-Heckman theorem claims, \emph{the pushforward of Lebesgue measure on the sphere to this interval is a constant multiple of the Lebesgue measure on the line}. This, of course, is just Archimedes' Theorem, restated in a very sophisticated form. In particular, it means that one can sample points on the sphere uniformly by choosing their $z$ and $\theta$ coordinates independently from uniform distributions on the interval and the circle. The Duistermaat-Heckman theorem extends a similar sampling strategy to any toric symplectic manifold. The best way to view this sampling strategy, we think, is as a useful technique in the theory of intrinsic statistics on Riemannian manifolds (cf.\,\cite{Pennec:2006tx}) which applies to a special class of manifolds. In principle, one can sample the entirety of any Riemannian manifold by choosing charts for the manifold explicitly and then sampling appropriate measures on a randomly chosen chart. Since the charts are maps from balls in Euclidean space to the manifold, this reduces the problem to sampling a ball in $\mathbb{R}^n$ with an appropriate measure. Of course, this point of view is so general as to be basically useless in practice: you rarely have explicit charts for a nontrivial manifold, and the resulting measures on Euclidean space could be very exotic and difficult to sample accurately. Action-angle coordinates, however, give a single ``chart'' with a simple measure to sample -- the product of Lebesgue measure on the convex moment polytope and the uniform measure on the torus. There is a small price to pay here. We cannot sample \emph{all} of the toric symplectic manifold this way.
The boundary of $P$ corresponds to a sort of skeleton inside the toric symplectic manifold $M$, and we cannot sample this skeleton in any very simple way using action-angle coordinates. Of course, if we are using the Riemannian (or symplectic) volume of $M$ to define the probability measure, this is a measure zero subset, so it is irrelevant to theorems in probability. The benefit is that by deleting this skeleton, we remove most of the topology of $M$, leaving us with the topologically very simple sample space $P \times T^n$. \section{Toric Symplectic Structure on Random Walks or Polygonal ``Arms''} We now consider the classical space of random walks of fixed step length in $\mathbb{R}^3$ and show that the arguments underlying the historical application of Archimedes' theorem (e.g. in Rayleigh~\cite{Rayleigh:1919do}) can be viewed as arguments about action-angle coordinates on this space as a toric symplectic manifold. We denote the space of open ``arm'' polygons with $n$ edges of lengths $\vec{r} = (r_1, \dots, r_n)$ in $\mathbb{R}^3$ by $\Arm_3(n;\vec{r})$. In particular, the space of equilateral $n$-edge arms (with unit edges) is denoted $\Arm_3(n;\vec{1})$. If we consider polygons related by a translation to be equivalent, the space $\Arm_3(n;\vec{r})$ is a product $S^2(r_1) \times \ldots \times S^2(r_n)$ of round 2-spheres with radii given by the $r_i$. The standard probability measure on this space is the product measure on these spheres; this corresponds to choosing $n$ independent points distributed according to the uniform measure on $S^2$ to be the edge vectors of the polygon. \begin{proposition} \label{prop:symplectic arms} The space of fixed edgelength open polygonal ``arms'' $\Arm_3(n;\vec{r})$ is the product of $n$ spheres of radii $\vec{r} = (r_1, \dots, r_n)$.
This is a $2n$-dimensional toric symplectic manifold where the Hamiltonian torus action is given by rotating each sphere about the $z$-axis, and the symplectic volume is the standard measure. The moment map $\mu \colon\! \Arm_3(n;\vec{r}) \rightarrow \mathbb{R}^n$ is given by the $z$-coordinate of each edge vector, and the image of this map (the moment polytope) is the hyperbox $\Pi_{i=1}^n [-r_i,r_i]$. There is a measure-preserving map \begin{equation*} \alpha: \Pi_{i=1}^n [-r_i,r_i] \times T^n \rightarrow \Arm_3(n;\vec{r}). \end{equation*} given explicitly by $\vec{e}_i = (\cos \theta_i \sqrt{r_i^2 - z_i^2}, \sin \theta_i \sqrt{r_i^2 - z_i^2}, z_i).$ \end{proposition} \begin{proof} Recall that the moment polytope is the convex hull of the images of the fixed points of the Hamiltonian torus action. The only polygonal arms fixed by the torus action are those where every edge is in the $\pm z$-direction, so the $z$-coordinates of the fixed points are indeed the vertices of the hyperbox $\Pi_{i=1}^n [-r_i,r_i]$ and the hyperbox itself is clearly the convex hull. The $z$-coordinates $z_1, \dots, z_n$ and rotation angles $\theta_1, \dots, \theta_n$ are the action-angle coordinates on $\Arm_3(n;\vec{r})$ and the fact that $\alpha$ is measure-preserving is an immediate consequence of Theorem~\ref{theorem:mp}. \end{proof} Since we can sample $\Pi_{i=1}^n [-r_i,r_i] \times T^n$ directly, this gives a direct sampling algorithm for (a full-measure subset of) $\Arm_3(n;\vec{r})$. Of course, direct sampling of fixed-edgelength arms is straightforward even without symplectic geometry, but this description of arm space has additional implications for confinement problems: if we can describe a confinement model by additional linear constraints on the action variables, this automatically yields a toric symplectic structure on the space of confined arms.
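The direct sampler just described is straightforward to implement. The following Python sketch (the function name is ours) draws each $z_i$ uniformly from $[-r_i,r_i]$ and each $\theta_i$ uniformly from $[0,2\pi)$, builds the edge vectors $(\cos \theta_i \sqrt{r_i^2 - z_i^2}, \sin \theta_i \sqrt{r_i^2 - z_i^2}, z_i)$, and checks that each edge has the prescribed length:

```python
import math
import random

def sample_arm(radii, rng):
    """Sample an open arm in Arm_3(n; r) via action-angle coordinates:
    z_i uniform on [-r_i, r_i], theta_i uniform on [0, 2*pi).
    Edge i lies on the sphere of radius r_i (Archimedes' theorem)."""
    edges = []
    for r in radii:
        z = rng.uniform(-r, r)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        rho = math.sqrt(r * r - z * z)  # distance of the edge vector from the z-axis
        edges.append((rho * math.cos(theta), rho * math.sin(theta), z))
    return edges

rng = random.Random(0)
radii = [1.0, 2.0, 0.5, 1.5]
edges = sample_arm(radii, rng)
for e, r in zip(edges, radii):
    assert abs(math.sqrt(e[0]**2 + e[1]**2 + e[2]**2) - r) < 1e-12
```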
We give examples in the next two sections, then in Section~\ref{subsec:symplectic Rayleigh} we use this machinery to provide a symplectic explanation for Rayleigh's formula for the probability density function (pdf) of the distance between the endpoints of a random equilateral arm. \subsection{Slab-Confined arms} \label{subsec:mps for arms} One system of linear constraints on the action variables of equilateral arms is the `slab' confinement model: \begin{definition} Given a polygon $p$ in $\mathbb{R}^3$ with vertices $v_1, \dots, v_n$, let $\operatorname{zWidth}(p)$ be the maximum absolute value of the difference between $z$-coordinates of any two vertices. We define the subspace $\operatorname{SlabArm}(n,h) \subset \Arm_3(n;\vec{1})$ to be the space of equilateral (open) space $n$-gons up to translation which obey the constraint $\operatorname{zWidth}(p) \leq h$. \label{def:slabarm} \end{definition} This is a slab constraint model where the endpoints of the walk are free (one could also have a model where one or both endpoints are on the walls of the slab). We now rephrase this slab constraint in action-angle variables. \begin{proposition} A polygon $p$ in $\Arm_3(n;\vec{1})$ given in action-angle coordinates by $(z_1,\dots,z_n,\theta_1,\dots,\theta_n)$ lies in $\operatorname{SlabArm}(n,h)$ if and only if the vector $\vec{z} = (z_1,\dots,z_n)$ of action variables lies in the parallelotope $P(n,h)$ given by the inequalities \begin{equation*} -1 \leq z_i \leq 1, \quad -h \leq \sum_{k=i}^j z_k \leq h \quad \text{for all } 1 \leq i \leq j \leq n. \end{equation*} Hence, there is a measure-preserving map \begin{equation*} \alpha: P(n,h) \times T^n \rightarrow \operatorname{SlabArm}(n,h) \end{equation*} given by restricting the action-angle map of Proposition~\ref{prop:symplectic arms}.
\label{prop:slab arm polytope} \end{proposition} \begin{proof} This follows directly from Definition~\ref{def:slabarm}: $\sum_{k=i}^j z_k$ is the difference in $z$-height between vertices $i$ and $j+1$, so this family of linear constraints encodes $\operatorname{zWidth}(p) \leq h$. The other constraints just restate the condition that $\vec{z}$ lies in the moment polytope $[-1,1]^n$ for $\Arm_3(n;\vec{1})$. \end{proof} \begin{corollary}\label{cor:slabArm} The probability that a random polygon $p \in \Arm_3(n;\vec{1})$ lies in $\operatorname{SlabArm}(n,h)$ is given by $\operatorname{Vol} P(n,h)/2^n$. \end{corollary} This probability function should be useful in computing the entropic force exerted by an ideal polymer on the walls of a confining slab. Figure~\ref{fig:three edge slab polytopes} shows a collection of these moment polytopes for different slab widths, and the corresponding volumes. \begin{figure}[ht] \hphantom{.} \hfill \begin{overpic}[width=1.25in]{slab2.pdf} \put(15,-15){$h = 2, \operatorname{Vol} = \frac{23}{3}$} \end{overpic} \hfill \begin{overpic}[width=1.25in]{slab1-5.pdf} \put(15,-15){$h = \frac{3}{2}, \operatorname{Vol} = \frac{155}{24}$} \end{overpic} \hfill \begin{overpic}[width=1.25in]{slab1.pdf} \put(15,-15){$h = 1, \operatorname{Vol} = 4$} \end{overpic} \hfill \begin{overpic}[width=1.25in]{slab0-5.pdf} \put(15,-15){$h = \frac{1}{2}, \operatorname{Vol} = \frac{1}{2}$} \end{overpic} \hfill \vspace{0.2in} \caption{This figure shows the moment polytopes corresponding to $3$-edge arms contained in slabs of width $h$ as subpolytopes of the cube with vertices $(\pm 1, \pm 1, \pm 1)$, which is the moment polytope for unconfined arms. In this case, we can compute the volume of these moment polytopes directly using \texttt{polymake}~\cite{Gawrilow:2000vl}.
We conclude, for instance, that the probability that a random $3$-edge arm is confined in a slab of width $\nicefrac{1}{2}$ is $\nicefrac{1}{16}$.} \label{fig:three edge slab polytopes} \end{figure} \subsection{Half-space confined arms} A similar problem is this: suppose we have a freely jointed chain which is attached at one end to a plane (which we assume for simplicity is the $xy$-plane), and must remain in the halfspace on one side of the plane. This models a polymer where one end of the molecule is bound to a surface (at an unknown site). The moment polytope is \begin{equation} \mathcal{H}_n = \{\vec{z} \in [-1,1]^n \,|\, z_1 \geq 0, z_1 + z_2 \geq 0, \dots, z_1 + \dots + z_n \geq 0, -1 \leq z_i \leq 1 \} \label{eq:half space polytope} \end{equation} and the analogue of Proposition~\ref{prop:slab arm polytope} holds in this case. We can understand this condition on arms in terms of a standard random walk problem: the $z_i$ are i.i.d. steps in a random walk, each selected from the uniform distribution on $[-1,1]$, and we are interested in conditioning on the event that all the partial sums are in $[0,\infty)$. A good deal is known about this problem: for instance, Caravenna gives an asymptotic pdf for the end of a random walk conditioned to stay positive, which is the height of the end of the chain above the plane~\cite{Caravenna:2005ja}. If we could find an explicit form for this pdf, we could analyze the stretching experiment where the free end of the polymer is raised to a known height above the plane using magnetic or optical tweezers (cf. \cite{Strick:2000vk}). We can directly compute the partition function for this problem; this is the volume of subpolytope~\eqref{eq:half space polytope} of the hypercube. This result is also stated in a paper of Bernardi, Duplantier, and Nadeau~\cite{Bernardi:2010ws}. The proof is a pleasant combinatorial argument which is tangential to the rest of the paper, so we relegate it to Appendix~\ref{sec:halfSpaceArms}. 
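As a numerical sanity check on Proposition~\ref{prop:half space volume} below: dividing by $\operatorname{Vol}[-1,1]^n = 2^n$, the probability that a uniform $\vec{z} \in [-1,1]^n$ has all partial sums nonnegative should be $\binom{2n}{n}/4^n$, which is $20/64 = 0.3125$ for $n=3$. The following Python sketch (ours) estimates this probability by Monte Carlo:

```python
import random

def all_partial_sums_nonneg(z):
    """Membership test for the half-space polytope H_n, given z already in [-1,1]^n:
    every partial sum z_1 + ... + z_k must be nonnegative."""
    s = 0.0
    for zi in z:
        s += zi
        if s < 0:
            return False
    return True

rng = random.Random(0)
n, N = 3, 400000
hits = sum(all_partial_sums_nonneg([rng.uniform(-1, 1) for _ in range(n)])
           for _ in range(N))
# Predicted probability: C(6,3)/4^3 = 20/64 = 0.3125
assert abs(hits / N - 0.3125) < 0.005
```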
\begin{proposition} The volume of the polytope~\eqref{eq:half space polytope} is $\frac{1}{2^n} \binom{2n}{n} = \frac{(2n-1)!!}{n!}$. \label{prop:half space volume} \end{proposition} \subsection{Distribution of failure to close lengths} \label{subsec:symplectic Rayleigh} We now apply the action-angle coordinates to give an alternate formula for the pdf of end-to-end distance in a random walk in $\mathbb{R}^3$ with fixed step lengths and show that it is equivalent to Rayleigh's $\operatorname{sinc}$ integral formula~\cite{Rayleigh:1919do}. This pdf is key to determining the Green's function for closed polygons, which in turn is fundamental to the Moore--Grosberg~\cite{Moore:2005fh} and Diao--Ernst--Montemayor--Ziegler~\cite{Diao:2011ie,Diao:wt,Diao:2012dza} sampling algorithms and to expected total curvature calculations~\cite{GrosbergExtra,cgks}. For mathematicians, we note that this pdf is required in order to estimate the entropic elastic force exerted by an ideal polymer whose ends are held at a fixed distance. Such experiments are actually done in biophysics -- Wuite et al.~\cite{Wuite:2000uu} (cf. \cite{Bustamante:2003tg}) made one of the first measurements of the elasticity of DNA by stretching a strand of DNA between a bead held in a micropipette and a bead held in an optical trap. We first establish some lemmas: \begin{lemma} \label{lem:sum pdf} The pdf of a sum of independent uniform random variates on $[-r_1,r_1], \dots, [-r_n,r_n]$ is given by the pushforward of the uniform (normalized Lebesgue) measure on $\Pi_{i=1}^n [-r_i,r_i]$ to $[-\sum r_i,\sum r_i]$ by the linear function $\sum x_i$. This is given by \begin{equation} f_n(x) = \frac{1}{\Pi_{i=1}^n 2 r_i} \frac{1}{\sqrt{n}} \operatorname{SA}(x,r_1,\dots,r_n), \end{equation} where $\operatorname{SA}(x,r_1,\dots,r_n)$ is the volume of the slice of the hyperbox $\Pi_{i=1}^n [-r_i,r_i]$ by the plane $\sum x_i = x$. The function $f_n$ is everywhere $n-2$ times differentiable for $n > 2$.
\end{lemma} \begin{proof} This is the coarea formula applied to the hyperbox $\Pi_{i=1}^n [-r_i,r_i]$ and the function $\sum x_i$, normalized by the volume of $\Pi_{i=1}^n [-r_i,r_i]$. To see that this function is $n-2$ times differentiable, note that the pdf $f_2(x)$ has a corner at $x=0$ (and is linear away from this corner) and that we can express $f_n(x)$ in terms of $f_{n-1}$ by the convolution integral \begin{equation*} f_n(x) = \int_{-r_n}^{r_n} f_{n-1}(x - y) \frac{1}{2r_n} \, \mathrm{d}y. \end{equation*} where $\nicefrac{1}{2r_n}$ is the pdf of the uniform random variate $x_{n}$ on $[-r_n,r_n]$. \end{proof} We have \begin{proposition} \label{prop:end-to-end} The pdf of the end-to-end distance $\ell \in [0,\sum r_i]$ over the space of polygonal arms $\Arm_3(n;\vec{r})$ is given by \begin{equation*} \phi_n(\ell) = \frac{\ell}{2^{n-1} R \sqrt{n-1}} \left( \operatorname{SA}(\ell-r_n,r_1,\dots,r_{n-1}) - \operatorname{SA}(\ell+r_n,r_1,\dots,r_{n-1})\right). \end{equation*} where $R = \Pi_{i=1}^n r_i$ is the product of the edgelengths and $\operatorname{SA}(x,r_1,\dots,r_{n-1})$ is the volume of the slice of the hyperbox $\Pi_{i=1}^{n-1} [-r_i,r_i]$ by the plane $\sum_{i=1}^{n-1} x_i = x$. \end{proposition} \begin{proof} From our moment polytope picture, we can see immediately that the sum $z$ of the $z$-coordinates of the edges of a random polygonal arm in $\Arm_3(n;\vec{r})$ has the pdf of a sum of uniform random variates in $[-r_1,r_1] \times \cdots \times [-r_n,r_n]$, or $f_n(z)$ in the notation of Lemma~\ref{lem:sum pdf}. Since this is a projection of the spherically symmetric distribution of end-to-end distance in $\mathbb{R}^3$ to the $z$-axis ($\mathbb{R}^1$), Equation~29 of~\cite{Lord:1954wh} applies\footnote{Lord's notation can be slightly confusing: in his formula for $p_3(r)$ in terms of $p_1(r)$, we have to remember that $p_3(r)$ is not itself a pdf on the line, it is a pdf on $\mathbb{R}^3$. 
It only becomes a pdf on the line when multiplied by the correction factor $4\pi r^2$ giving the area of the sphere at radius $r$ in $\mathbb{R}^3$.}, and tells us that the pdf of $\ell$ is given by \begin{equation*} \phi_n(\ell) = -2 \ell f_n'(\ell). \end{equation*} To differentiate $f_n(\ell)$, we use the following observation (cf. Buonocore~\cite{Buonocore:2009fk}): \begin{equation} f_n(x) = \int_{-r_n}^{r_n} f_{n-1}(x - y) \frac{1}{2r_n} \, \mathrm{d}y = \frac{F_{n-1}(x + r_n) - F_{n-1}(x - r_n)}{2 r_n}, \end{equation} where $F_{n-1}(x)$ is the cdf of a sum of uniform random variates on $[-r_1,r_1],\dots,[-r_{n-1},r_{n-1}]$. Differentiating and substituting in the results of Lemma~\ref{lem:sum pdf} yields the formula above. \end{proof} Since we will often be interested in equilateral polygons with edgelength 1, we observe \begin{corollary} \label{cor:end-to-end} The pdf of the end-to-end distance $\ell \in [0,n]$ over the space of equilateral arms $\Arm_3(n;\vec{1})$ is given by \begin{equation} \phi_n(\ell) = \frac{\ell}{2^{n-1} \sqrt{n-1}} \left( \operatorname{SA}(\ell-1,[-1,1]^{n-1}) - \operatorname{SA}(\ell+1,[-1,1]^{n-1})\right). \label{eq:phi ell} \end{equation} where $\operatorname{SA}(x,[-1,1]^{n-1})$ is the volume of the slice of the standard hypercube $[-1,1]^{n-1}$ by the plane $\sum_{i=1}^{n-1} x_i = x$. \end{corollary} The reader who is familiar with the theory of random walks may find the above corollary rather curious. As mentioned above, the standard formula for this pdf as an integral of $\operatorname{sinc}$ functions was given by Rayleigh in 1919 and it looks nothing like~\eqref{eq:phi ell}. The derivation given by Rayleigh of the $\operatorname{sinc}$ integral formula has no obvious connection to polyhedral volumes, but in fact by the time of Rayleigh's paper a connection between polyhedra and $\operatorname{sinc}$ integrals had already been given by George P\'olya in his thesis~\cite{polyathesis,polya} in 1912. 
This formula has been rediscovered many times~\cite{Borwein:2001bw, Marichal:2006tg}. First, we state the Rayleigh formula~\cite{Rayleigh:1919do,Diao:wt} in our notation: \begin{equation} \phi_n(\ell) = \frac{2 \ell}{\pi} \int_0^\infty y \sin \ell y \operatorname{sinc}^n y \, \mathrm{d}y, \label{eq:rayleigh form} \end{equation} where $\operatorname{sinc} x = \sin x/x$ as usual. Now P\'olya showed that the volume of the central slab of the hypercube $[-1,1]^{n-1}$ given by $-a_0 \leq \sum x_i \leq a_0$ is given by \begin{equation} \operatorname{Vol}(a_0) = \frac{2^n a_0}{\pi} \int_0^\infty \operatorname{sinc} a_0 y \, \operatorname{sinc}^{n-1} y \, \mathrm{d}y. \label{eq:polya sinc integral} \end{equation} Our $\operatorname{SA}(x,[-1,1]^{n-1})$ is the $(n-2)$-dimensional volume of a face of this slab; since it is this face (and its symmetric copy) which sweep out $(n-1)$-dimensional volume as $a_0$ increases, we can deduce that \begin{equation*} \operatorname{SA}(x,[-1,1]^{n-1}) = \frac{\sqrt{n-1}}{2} \operatorname{Vol}'(x), \end{equation*} and we can obtain a formula for $\operatorname{SA}(x,[-1,1]^{n-1})$ by differentiating~\eqref{eq:polya sinc integral}. After some simplifications, we get \begin{equation*} \operatorname{SA}(x,[-1,1]^{n-1}) = \frac{2^{n-1} \sqrt{n-1}}{\pi} \int_0^\infty \cos xy \operatorname{sinc}^{n-1} y \, \mathrm{d}y. \end{equation*} Using the angle addition formula for $\cos(a+b)$, this implies that \begin{align*} \operatorname{SA}(\ell-1,[-1,1]^{n-1}) - \operatorname{SA}(\ell+1,[-1,1]^{n-1}) &= \frac{2^{n-1} \sqrt{n-1}}{\pi} \int_0^\infty 2 \sin y \sin \ell y \operatorname{sinc}^{n-1} y \, \mathrm{d}y \\ &= \frac{2^{n} \sqrt{n-1}}{\pi} \int_0^\infty y \sin \ell y \operatorname{sinc}^{n} y \, \mathrm{d}y. \end{align*} Multiplying by the coefficient $\frac{\ell}{2^{n-1} \sqrt{n-1}}$ from~\eqref{eq:phi ell} recovers the Rayleigh form~\eqref{eq:rayleigh form} of the pdf $\phi_n$. 
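This equivalence is easy to confirm numerically by computing $\operatorname{SA}(x,[-1,1]^m)$ two independent ways: exactly, from the classical Irwin--Hall density rescaled to steps on $[-1,1]$, and approximately, by quadrature of the cosine integral above. A rough Python check, not part of the paper (truncation point and tolerances are ad hoc):

```python
from math import sin, cos, sqrt, pi, comb

def sa_exact(x, m):
    """SA(x, [-1,1]^m) from the Irwin-Hall-type density f_m of a sum of m
    uniform[-1,1] variates: SA = 2^m sqrt(m) f_m(x)."""
    fact = 1
    for j in range(1, m):
        fact *= j                                  # (m-1)!
    s = 0.0
    for k in range(m + 1):
        t = x + m - 2 * k
        if t > 0:
            s += (-1) ** k * comb(m, k) * t ** (m - 1)
    return sqrt(m) * s / fact                      # the 2^m factors cancel

def sa_sinc(x, m, Y=60.0, steps=120000):
    """The same quantity from the cosine integral
    (2^m sqrt(m) / pi) * int_0^Y cos(x y) sinc^m(y) dy, by the trapezoid rule.
    Truncating at Y costs on the order of Y^(1-m), so take m >= 3."""
    h = Y / steps
    total = 0.5                                    # integrand at y = 0 is 1
    for i in range(1, steps):
        y = i * h
        total += cos(x * y) * (sin(y) / y) ** m
    total += 0.5 * cos(x * Y) * (sin(Y) / Y) ** m
    return 2 ** m * sqrt(m) / pi * h * total
```

For moderate $m$ the two routines agree to a few decimal places; for $m=2$ the exact routine reproduces the diagonal slice $\operatorname{SA}(0,[-1,1]^2) = 2\sqrt{2}$.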
Given~\eqref{eq:phi ell} and~\eqref{eq:rayleigh form}, the pdf of the failure-to-close vector $\vec{\ell} = \sum \vec{e}_i$ with length $|\vec{\ell}\,| = \ell$ can be written in the following forms: \begin{align} \nonumber \Phi_n(\vec{\ell}\,) = \frac{1}{4\pi \ell^2}\phi_n(\ell) & = \frac{1}{2^{n+1} \pi \ell \sqrt{n-1}} \left( \operatorname{SA}(\ell-1,[-1,1]^{n-1}) - \operatorname{SA}(\ell+1,[-1,1]^{n-1})\right) \\ \label{eq:pdf failure-to-close vector} & = \frac{1}{2\pi^2 \ell} \int_0^\infty y \sin \ell y \operatorname{sinc}^n y \, \mathrm{d}y \end{align} The latter formula for the pdf appears in Grosberg and Moore~\cite{Moore:2005fh} as equation (B5). Since Grosberg and Moore then actually evaluate the integral for the pdf as a finite sum, one immediately suspects that there is a similar sum form for the slice volume terms in~\eqref{eq:phi ell}. In fact, we have several options to choose from, including using P\'olya's finite sum form to express~\eqref{eq:polya sinc integral} and then differentiating the sum formula with respect to the width of the slab. We instead rely on the following theorem, which we have translated to the current situation. \begin{theorem}[Marichal and Mossinghoff~\cite{Marichal:2006tg}] Suppose that $\vec{w} \in \mathbb{R}^n$ has all nonzero components and suppose $z$ is any real number. Then the $(n-1)$-dimensional volume of the intersection of the hyperplane $\left< \vec{w},\vec{v} \right> = z$ with the hypercube $[-1,1]^n$ is given by \begin{equation} \operatorname{Vol} = \frac{|\vec{w}|_2}{(n-1)! \, \Pi w_i} \sum_{A \subset \{1,\dots,n\}} (-1)^{|A|} \left(z + \sum_{i \not\in A} w_i - \sum_{i \in A} w_i \right)_+^{n-1}, \label{eq:marichal slice formula} \end{equation} where $|\vec{w}|_2$ is the usual ($L^2$) norm of the vector $\vec{w}$, and $x_+ = \max(x,0)$, and we use the convention $0^0 = 0$ when considering the $n=1$ case. 
\end{theorem} For our $\operatorname{SA}(x,[-1,1]^{n-1})$ function, the vector $\vec{w}$ consists of all 1's, and using the fact that the number of subsets of $\{1,\dots,n\}$ with cardinality $k$ is $\binom{n}{k}$ it follows that \begin{proposition} The $n-2$ dimensional volume $\operatorname{SA}(x,[-1,1]^{n-1})$ is given by the finite sum \begin{equation} \operatorname{SA}(x,[-1,1]^{n-1}) = \frac{\sqrt{n-1}}{(n-2)!} \sum_{k=0}^{n-1} (-1)^k \binom{n-1}{k} (x + n-1 - 2k)_+^{n-2}. \label{eq:finite sum formula} \end{equation} \label{prop:sa explicit sum} \end{proposition} We can combine this with~\eqref{eq:pdf failure-to-close vector} to obtain the explicit piecewise polynomial pdf for the failure-to-close vector (for $n \geq 2$): \begin{equation}\label{eq:ftc pdf v2} \Phi_n(\vec{\ell}) = \frac{n-1}{2^{n+1}\pi\ell}\sum_{k=0}^{n-1}\frac{(-1)^k }{k!(n-k-1)!} \left((n+\ell-2k-2)_+^{n-2}-(n+\ell-2k)_+^{n-2}\right) \end{equation} When $n=2$, recall that we use the convention $0^0 = 0$. When $n=1$ the formula does not make sense, but we can easily compute $\Phi_1(\vec{\ell}) = \frac{1}{4\pi}\delta(1 - \ell)$. This formula for $\Phi_n(\ell)$ is known classically, and given as (2.181) in Hughes~\cite{hughes1995random}. The polynomials are precisely those given in (B13) of Grosberg and Moore~\cite{Moore:2005fh}. \subsection{The Expected Total Curvature of Equilateral Polygons} \label{subsec:total curvature of closed polygons} In Section~\ref{sec:numerics} it will be useful to know exact values of the expected total curvature of equilateral polygons. Let $\Pol_3(n;\vec{1}) \subset \Arm_3(n;\vec{1})$ be the subspace of closed equilateral $n$-gons. 
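Since the total curvature computation below integrates $\Phi_{n-2}$ against a kernel, it is worth checking \eqref{eq:ftc pdf v2} numerically first: for $n=2$ the formula reduces to $\Phi_2(\vec{\ell}\,) = \nicefrac{1}{8\pi\ell}$ on $[0,2]$ (two unit edges give $\phi_2(\ell) = \ell/2$), and for every $n$ the integral $\int_0^n 4\pi\ell^2\,\Phi_n(\ell)\,\mathrm{d}\ell$ must equal $1$. A short Python check, not part of the paper:

```python
from math import pi, factorial

def Phi(n, l):
    """Pdf of the failure-to-close vector at |l| = l, per the piecewise
    polynomial formula above (n >= 2, l > 0); the t > 0 tests implement
    both x_+ and the 0^0 = 0 convention."""
    s = 0.0
    for k in range(n):
        a = n + l - 2 * k - 2
        b = n + l - 2 * k
        term = (a ** (n - 2) if a > 0 else 0.0) - (b ** (n - 2) if b > 0 else 0.0)
        s += (-1) ** k * term / (factorial(k) * factorial(n - k - 1))
    return (n - 1) * s / (2 ** (n + 1) * pi * l)

def check_normalization(n, steps=20000):
    """Trapezoid-rule value of int_0^n 4 pi l^2 Phi_n(l) dl; should be 1.
    The integrand vanishes at l = 0 and (for n >= 3) at l = n."""
    h = n / steps
    total = 0.0
    for i in range(1, steps):
        l = i * h
        total += 4 * pi * l * l * Phi(n, l)
    return h * total
```

This also exercises the finite sum~\eqref{eq:finite sum formula}, since \eqref{eq:ftc pdf v2} was obtained from it.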
Following the approach of~\cite{GrosbergExtra,cgks}, we can use the pdf above to find an integral formula for the expected total curvature of an element of $\Pol_3(n;\vec{1})$: \begin{theorem}\label{thm:expected total curvature} The expected total curvature of an equilateral $n$-gon is equal to \begin{equation}\label{eq:total curvature} E(\kappa; \Pol_3(n;\vec{1})) = \frac{n}{2C_n}\int_0^2 \arccos\left(\frac{\ell^2-2}{2}\right) \Phi_{n-2}(\ell)\ell\,\mathrm{d}\ell, \end{equation} where $C_n$ and $\Phi_{n-2}(\ell)$ are given explicitly in~\eqref{eq:cn} and~\eqref{eq:ftc pdf v2}, respectively, and Table~\ref{tab:expected total curvature} shows exact values of the integral for small $n$. \end{theorem} This integral can be evaluated easily by computer algebra using the fact that $\Phi_{n-2}(\ell)$ is a piecewise polynomial in $\ell$ in combination with the identity $\int_0^2 \arccos\left(\frac{\ell^2-2}{2}\right) \ell^k \,\mathrm{d}\ell = \frac{2^{k+1}\pi}{k+1} - \frac{2^{2k+1}k\operatorname{\mathrm{B}}(\nicefrac{k}{2}+1,\nicefrac{k}{2})}{(k+1)^2}$, valid for $k \geq 1$ (for $k = 0$ the integral is $2\pi - 4$), where $\operatorname{\mathrm{B}}$ is the Euler beta function. Of course, it would be very interesting to find a closed form. \begin{proof} The total curvature of a polygon is just the sum of the turning angles, so the expected total curvature of an $n$-gon is simply $n$ times the expected value of the turning angle $\theta(\vec{e}_i, \vec{e}_{i+1})$ between any pair $(\vec{e}_i,\vec{e}_{i+1})$ of consecutive edges. In other words, \begin{equation}\label{eq:expected total curvature general} E(\kappa; \Pol_3(n;\vec{1})) = n E(\theta;\Pol_3(n;\vec{1})) = n \int \theta(\vec{e}_i,\vec{e}_{i+1}) P(\vec{e}_i, \vec{e}_{i+1}) \thinspace\operatorname{dVol}_{\vec{e}_i}\thinspace\operatorname{dVol}_{\vec{e}_{i+1}}, \end{equation} where $P(\vec{e}_i,\vec{e}_{i+1}) \thinspace\operatorname{dVol}_{\vec{e}_i}\thinspace\operatorname{dVol}_{\vec{e}_{i+1}}$ is the joint distribution of the pair of edges. 
The edges $\vec{e}_i, \vec{e}_{i+1}$ are chosen uniformly from the unit sphere subject to the constraint that the remaining $n-2$ edges must connect the head of $\vec{e}_{i+1}$ to the tail of $\vec{e}_i$. In other words, \[ P(\vec{e}_i, \vec{e}_{i+1}) \thinspace\operatorname{dVol}_{\vec{e}_i}\thinspace\operatorname{dVol}_{\vec{e}_{i+1}} = \frac{1}{C_n} \Phi_1(\vec{e}_i) \Phi_1(\vec{e}_{i+1}) \Phi_{n-2}(-\vec{e}_i-\vec{e}_{i+1}) \thinspace\operatorname{dVol}_{\vec{e}_i} \thinspace\operatorname{dVol}_{\vec{e}_{i+1}}, \] where \begin{equation}\label{eq:cn} C_n = \Phi_n(\vec{0}) = \frac{1}{2^{n+1}\pi(n-3)!}\sum_{k=0}^{\lfloor\nicefrac{n}{2}\rfloor} (-1)^{k+1} \binom{n}{k}(n-2k)^{n-3} \end{equation} is the normalized $(2n-3)$-dimensional Hausdorff measure of the submanifold of closed $n$-gons. Notice that $\Phi_1(\vec{v}) = \frac{\delta(|\vec{v}|-1)}{4\pi}$ is the distribution of a point chosen uniformly on the unit sphere. In particular, we can re-write the integral~\eqref{eq:expected total curvature general} as \[ E(\kappa; \Pol_3(n;\vec{1})) = \frac{n}{C_n} \int_{\vec{e}_i\in S^2} \int_{\vec{e}_{i+1} \in S^2} \theta(\vec{e}_i,\vec{e}_{i+1}) \frac{1}{16\pi^2} \Phi_{n-2}(-\vec{e}_i-\vec{e}_{i+1}) \thinspace\operatorname{dVol}_{S^2} \thinspace\operatorname{dVol}_{S^2}. \] Moreover, at the cost of a constant factor $4\pi$ we can integrate out the $\vec{e}_i$ coordinate and assume $\vec{e}_i$ points in the direction of the north pole. Similarly, at the cost of an additional $2\pi$ factor we can integrate out the azimuth angle of $\vec{e}_{i+1}$ and reduce the above integral to a single integral over the polar angle of $\vec{e}_{i+1}$, which is now exactly the angle $\theta(\vec{e}_i,\vec{e}_{i+1})$: \[ E(\kappa; \Pol_3(n;\vec{1})) = \frac{n}{2C_n} \int_0^\pi \theta \Phi_{n-2}(\sqrt{2-2\cos\theta}) \sin \theta \,\mathrm{d}\theta \] since $\sqrt{2-2\cos\theta}$ is the length of the vector $\vec{\ell} = -\vec{e}_i - \vec{e}_{i+1}$. 
Changing coordinates to integrate with respect to $\ell = |\vec{\ell}| \in [0,2]$ completes the proof. \end{proof} \section{The (Almost) Toric Symplectic Structure on Closed Polygons} We are now ready to describe explicitly the toric symplectic structure on closed polygons of fixed edge lengths. We first need to fix a bit of notation. The space $\Pol_3(n;\vec{r})$ of closed polygons of fixed edge lengths $\vec{r} = (r_1, \dots, r_n)$, where polygons related by translation are considered equivalent, is a subspace of the Riemannian manifold $\Arm_3(n;\vec{r})$ (with the product metric on spheres of varying radii). It has a corresponding subspace metric and measure, which we refer to as the \emph{standard measure} on $\Pol_3(n;\vec{r})$. There is a measure-preserving action of $\operatorname{SO}(3)$ on $\Pol_3(n;\vec{r})$, and a corresponding quotient space $\widehat{\Pol}_3(n;\vec{r}) = \Pol_3(n;\vec{r})/\operatorname{SO}(3)$. This quotient space inherits a pushforward measure from the standard measure on $\Pol_3(n;\vec{r})$, and we call this the standard measure on $\widehat{\Pol}_3(n;\vec{r})$, which we will shortly see (almost) has a toric symplectic structure. There are many ways to triangulate a convex $n$-gon; each triangulation $T$ joins $n-3$ pairs of vertices of the $n$-gon with chords to create a total of $n-2$ triangles. We call these $n-3$ chords the \emph{diagonals} of $T$. The edgelengths and diagonal lengths of $T$ must obey a set of $3(n-2)$ triangle inequalities, which we call the \emph{triangulation inequalities}. 
Finally, given a diagonal of a space polygon, we can perform what the random polygons community calls a \emph{polygonal fold} or \emph{crankshaft move}~\cite{Anonymous:2010p2603} and the symplectic geometry community calls a \emph{bending flow}~\cite{Kapovich:1996p2605} by rotating half of the polygon rigidly with respect to the other half around the diagonal; the collection of such rotations around all of the $n-3$ diagonals of a given triangulation will be our Hamiltonian torus action. We can now summarize the existing literature as follows: \begin{theorem}[Kapovich and Millson~\cite{Kapovich:1996p2605}, Howard, Manon and Millson~\cite{Howard:2008uy}, Hitchin \cite{Hitchin:1987uk}] \label{thm:symplectic closed polygons} The following facts are known: \begin{itemize} \item $\widehat{\Pol}_3(n;\vec{r})$ is a possibly singular $(2n-6)$-dimensional symplectic manifold. The symplectic volume is equal to the standard measure. \item To any triangulation $T$ of the standard $n$-gon we can associate a Hamiltonian action of the torus $T^{n-3}$ on $\widehat{\Pol}_3(n;\vec{r})$, where the angle $\theta_i$ acts by folding the polygon around the $i$-th diagonal of the triangulation. \item The moment map $\mu: \widehat{\Pol}_3(n;\vec{r}) \rightarrow \mathbb{R}^{n-3}$ for a triangulation $T$ records the lengths $d_i$ of the $n-3$ diagonals of the triangulation. \item The moment polytope $P$ for a triangulation $T$ is defined by the triangulation inequalities. \item The action-angle map $\alpha$ for a triangulation $T$ is given by constructing the triangles using the diagonal and edgelength data to recover their side lengths, and assembling them in space with (oriented) dihedral angles given by the $\theta_i$. \item The inverse image $\mu^{-1}(\operatorname{interior} P) \subset \widehat{\Pol}_3(n;\vec{r})$ of the interior of the moment polytope $P$ is an (open) toric symplectic manifold. 
\end{itemize} \end{theorem} \begin{figure}[t] \begin{overpic}[width=2.3in]{triangles_constructed.pdf} \put(30,22){$d_1$} \put(58,24){$d_2$} \end{overpic} \hfill \begin{overpic}[width=1.6in]{dihedrals_set.pdf} \put(72,15){$\theta_1$} \put(82,84){$\theta_2$} \end{overpic} \hfill \begin{overpic}[width=1.6in]{finalpoly_edited.pdf} \end{overpic} \caption{This figure shows how to construct an equilateral pentagon in $\widehat{\operatorname{Pol}}(5;\vec{1})$ using the action-angle map. First, we pick a point in the moment polytope shown in Figure~\ref{fig:fan triangulation polytopes} at center. We have now specified diagonals $d_1$ and $d_2$ of the pentagon, so we may build the three triangles in the triangulation from their side lengths, as in the picture at left. We then choose dihedral angles $\theta_1$ and $\theta_2$ independently and uniformly, and join the triangles along the diagonals $d_1$ and $d_2$, as in the middle picture. The right hand picture shows the final space polygon, which is the boundary of this triangulated surface.} \label{fig:action map} \end{figure} Here is a very brief summary of how these results work. Just as for Hamiltonian torus actions, in general there is a moment map associated to every Hamiltonian Lie group action on a symplectic manifold. In particular, Kapovich and Millson~\cite{Kapovich:1996p2605} pointed out that the symplectic manifold $\Arm_3(n;\vec{r})$ admits a Hamiltonian action by the Lie group $\operatorname{SO}(3)$ given by rotating the polygonal arm in space (this is the diagonal $\operatorname{SO}(3)$ action on the product of spheres) whose moment map $\mu$ gives the vector joining the ends of the polygon. The closed polygons $\Pol_3(n;\vec{r})$ are the fiber $\mu^{-1}(\vec{0})$ of this map. 
While the group action does not generally preserve fibers of this moment map, it does preserve $\mu^{-1}(\vec{0}) = \Pol_3(n;\vec{r})$ and in this situation, we can perform what is known as a \emph{symplectic reduction} (or Marsden--Weinstein--Meyer reduction~\cite{Marsden:1974tg,Meyer:1973wu}) to produce a symplectic structure on the quotient of the fiber $\mu^{-1}(\vec{0})$ by the group action. This yields a symplectic structure on the $(2n-6)$-dimensional moduli space $\widehat{\Pol}_3(n;\vec{r})$. The symplectic measure induced by this symplectic structure is equal to the standard measure given by pushing forward the subspace measure on $\Pol_3(n;\vec{r})$ to $\widehat{\Pol}_3(n;\vec{r})$ because the ``parent'' symplectic manifold $\Arm_3(n;\vec{r})$ is a K\"ahler manifold~\cite{Hitchin:1987uk}. The polygon space $\widehat{\Pol}_3(n;\vec{r})$ is singular if \[ \varepsilon_I(\vec{r}) := \sum_{i\in I} r_i - \sum_{j \notin I} r_j \] is zero for some $I \subset \{1, \ldots , n\}$. Geometrically, this means it is possible to construct a linear polygon with edgelengths given by $\vec{r}$. Since linear polygons are fixed by rotations around the axis on which they lie, the action of $\operatorname{SO}(3)$ is not free in this case and the symplectic reduction develops singularities. Nonetheless, the reduction $\widehat{\Pol}_3(n;\vec{r})$ is a complex analytic space with isolated singularities; in particular, the complement of the singularities is a symplectic (in fact K\"ahler) manifold to which Theorem~\ref{thm:symplectic closed polygons} applies. Both the volume and the cohomology ring of $\widehat{\Pol}_3(n;\vec{r})$ are well-understood from this symplectic perspective~\cite{Brion:1991hz,Kirwan:1992by,Hausmann:1998vx,Kamiyama:1999tj,Takakura:2001ir,Khoi:2005ch,Mandini:2008wq}. 
For example: \begin{proposition}[Takakura~\cite{Takakura:2001ir}, Khoi~\cite{Khoi:2005ch}, Mandini~\cite{Mandini:2008wq}]\label{prop:polygon space volume} The volume of $\widehat{\Pol}_3(n;\vec{r})$ is given by \[ \operatorname{Vol}(\widehat{\Pol}_3(n;\vec{r})) = -\frac{(2\pi)^{n-3}}{2(n-3)!} \sum_I (-1)^{n-|I|} \varepsilon_I(\vec{r})^{n-3}, \] where the sum is over all $I \subset \{1, \ldots , n\}$ such that $\varepsilon_I(\vec{r}) > 0$. \end{proposition} \begin{corollary}\label{cor:equilateral volume} The volume of the space of equilateral $n$-gons is \[ \operatorname{Vol}(\widehat{\Pol}_3(n;\vec{1})) = -\frac{(2\pi)^{n-3}}{2(n-3)!} \sum_{k=0}^{\lfloor \nicefrac{n}{2}\rfloor} (-1)^{k} \binom{n}{k} (n-2k)^{n-3}. \] \end{corollary} \subsection{The knotting probability for equilateral hexagons} We immediately give an example application of this picture. In~\cite{cgks}, we used the F\'ary--Milnor theorem to show that at least $\nicefrac{1}{3}$ of hexagons of total length 2 are unknotted, since their total curvature is too small to form a knot. We could repeat the calculation using our explicit formula for the expectation of the total curvature for equilateral hexagons above, but the results would be disappointing; only about $27\%$ of the space is revealed to be unknotted by this method. On the other hand, action-angle coordinates, coupled with results of Calvo, immediately yield a much better bound: \begin{proposition} At least $\nicefrac{3}{4}$ of the space $\widehat{\operatorname{Pol}}_3(6;\vec{1})$ of equilateral hexagons consists of unknots. \end{proposition} \begin{proof} Consider the triangulation of the hexagon given by joining vertices $1$, $3$, and $5$ by diagonals and its corresponding action-angle coordinates $\alpha : \mathcal{P} \times T^3 \rightarrow \widehat{\operatorname{Pol}}_3(6;\vec{1})$. 
In \cite{Calvo:2001cp}, an impressively detailed analysis of hexagon space, Jorge Calvo defines a geometric\footnote{Interestingly, curl is independent of the topological invariant given by the handedness of the trefoil, so there are actually four different types of equilateral hexagonal trefoils.} invariant of hexagons called the curl, which is $0$ for unknots and $\pm 1$ for trefoils. In the proof of his Lemma 16, Calvo observes that any knotted equilateral hexagon with curl $+1$ has all three dihedral angles between $0$ and $\pi$. The rest of the proof is elementary, but we give all the steps here as this is the first of many such arguments below. Formally, the knot probability is the expected value of the characteristic function \begin{equation*} \chi_{\text{knot}}(p) = \begin{cases} 1 & \text{if $p$ is knotted,} \\ 0 & \text{if $p$ is unknotted.} \end{cases} \end{equation*} By Calvo's work, $\chi_{\text{knot}}$ is bounded above by the sum of characteristic functions $\chi_{\text{curl}=+1}+\chi_{\text{curl}=-1}$ and $\chi_{\text{curl}=+1}$ is bounded above by the simpler characteristic function \begin{equation*} \chi(d_1,d_2,d_3,\theta_1,\theta_2,\theta_3) = \begin{cases} 1 & \text{if $\theta_i \in [0,\pi]$ for $i \in \{1, 2, 3\}$,} \\ 0 & \text{otherwise.} \end{cases} \end{equation*} Now Theorem~\ref{thm:symplectic closed polygons} tells us that almost all of $\widehat{\operatorname{Pol}}_3(6;\vec{1})$ is a toric symplectic manifold, so \eqref{eq:duistermaat heckman product} of Theorem~\ref{theorem:mp} holds for integrals over this polygon space. In particular, $\chi$ doesn't depend on the $d_i$, so its expected value over $\widehat{\operatorname{Pol}}_3(6;\vec{1})$ is equal to its expected value over the torus $T^3$ of $\theta_i$. This expected value is clearly $\nicefrac{1}{8}$, and a similar argument holds for $\chi_{\text{curl}=-1}$. This means that the knot probability is no more than $\nicefrac{1}{4}$, as desired. 
\end{proof} Of course, this bound is still a substantial underestimate of the fraction of unknots. Over a 12 hour run of the ``PTSMCMC'' Markov chain sampler of Section~\ref{subsec:ptsmcmc}, we examined 1,318,001 equilateral hexagons and found 173 knots. Using the $95\%$ confidence level Geyer IPS error estimators of Section~\ref{subsec:geyer ips}, we estimate the knot probability among unconfined equilateral hexagons to be $1.3 \times 10^{-4} \pm 0.2 \times 10^{-4}$, or between $1.1$ and $1.5$ in $10,\!000$. \subsection{The Fan Triangulation and Chordlengths} While Theorem~\ref{thm:symplectic closed polygons} applies to any triangulation, we now work out the details for a particular triangulation. The ``fan'' triangulation is created by joining vertex $v_1$ of the polygon to vertices $v_3, \dots, v_{n-1}$. This gives rise to the following set of triangulation inequalities. As shown in Figure~\ref{fig:fan triangulation polytopes}, we number the diagonals $d_1, \dots, d_{n-3}$ so that the first triangle has edgelengths $d_1$, $r_1$, $r_2$, the last triangle has edgelengths $d_{n-3}$, $r_{n-1}$, $r_{n}$, and all the triangles in between have edgelengths in the form $d_i$, $d_{i+1}$, $r_{i+2}$. The corresponding triangulation inequalities, which we call the ``fan triangulation inequalities'' are: \begin{equation} |r_1 - r_2| \leq d_1 \leq r_1 + r_2 \qquad \begin{matrix} r_{i+2} \leq d_i + d_{i+1} \\ |d_i - d_{i+1}| \leq r_{i+2} \end{matrix} \qquad |r_n - r_{n-1}| \leq d_{n-3} \leq r_n + r_{n-1} \label{eq:fan polytope} \end{equation} \begin{definition}\label{def:fan polytope} The \emph{fan triangulation polytope} $P_n(\vec{r}) \subset \mathbb{R}^{n-3}$ is the moment polytope for $\widehat{\Pol}_3(n;\vec{r})$ corresponding to the fan triangulation and is determined by the fan triangulation inequalities~\eqref{eq:fan polytope}. The fan triangulation polytopes $P_5(\vec{1})$ and $P_6(\vec{1})$ are shown in Figure~\ref{fig:fan triangulation polytopes}. 
\end{definition} This description of the moment polytope follows directly from Theorem~\ref{thm:symplectic closed polygons}. \begin{figure}[t] \begin{tabular}{ccc} \begin{overpic}[height=1.5in]{FanTriangulation.pdf} \put(98,48){$v_1$} \put(78,90){$v_2$} \put(31,102){$v_3$} \put(-8,72){$v_4$} \put(-8,26){$v_5$} \put(30,-4){$v_6$} \put(79,7){$v_7$} \put(64,77){$d_1$} \put(41,65){$d_2$} \put(41,42.5){$d_3$} \put(49,23){$d_4$} \end{overpic} & \begin{overpic}[height=1.5in]{PentagonMoment.pdf} \put(0,-7){$0$} \put(-5,0){$0$} \put(-5,47){$1$} \put(-5,97){$2$} \put(48,-7){$1$} \put(97,-7){$2$} \end{overpic} & \begin{overpic}[height=1.5in]{HexagonMoment.pdf} \put(90,71){$(2,3,2)$} \put(-8,5){$(0,0,0)$} \put(47,-2){$(2,1,0)$} \end{overpic} \\ [1 em] Fan Triangulation & \begin{minipage}{2in} \begin{center} Fan Triangulation polytope\\ for $\widehat{\operatorname{Pol}}(5;\vec{1})$ \end{center} \end{minipage} & \begin{minipage}{2in} \begin{center} Fan Triangulation polytope\\ for $\widehat{\operatorname{Pol}}(6;\vec{1})$ \end{center} \end{minipage} \end{tabular} \vspace{0.2in} \caption{This figure shows the fan triangulation of a 7-gon on the left and the corresponding moment polytopes for equilateral space pentagons and equilateral space hexagons. For the pentagon moment polytope, we show the square with corners at $(0,0)$ and $(2,2)$ to help locate the figure, while for the hexagon moment polytope, we show the box with corners at $(0,0,0)$ and $(2,3,2)$ to help understand the geometry of the figure. The vertices of the polytopes correspond to polygons fixed by the torus action given by rotating around the diagonals. For instance, the $(2,2)$ point in the pentagon's moment polytope corresponds to the configuration given by an isosceles triangle with sides $2$, $2$, and $1$. 
The diagonals lie along the long sides; rotating around them is a rotation of the entire configuration in space, and is hence trivial because we are considering equivalence classes up to the action of $SO(3)$. The $(2,3,2)$ point in the hexagon's moment polytope corresponds to a completely flat (or ``lined'') configuration double-covering a line segment of length $3$. Here all the diagonals lie along the same line and rotation around the diagonals does nothing. } \label{fig:fan triangulation polytopes} \end{figure} Applying Theorem~\ref{theorem:mp} to this situation gives necessary and sufficient conditions for uniform sampling on $\widehat{\Pol}_3(n;\vec{r})$. These could be used to test proposed polygon sampling algorithms given statistical tests for uniformity on convex subsets of Euclidean space and on the $(n-3)$-torus. \begin{proposition}\label{prop:polygon sampling} A polygon in $\widehat{\Pol}_3(n;\vec{r})$ is sampled according to the standard measure if and only if its diagonal lengths $d_1 = |v_1 - v_3|$, $d_2 = |v_1 - v_4|$, \dots, $d_{n-3} = |v_1 - v_{n-1}|$ are uniformly sampled from the fan polytope $P_n(\vec{r})$ and its dihedral angles around these diagonals are sampled independently and uniformly in $[0,2\pi)$. \end{proposition} The fan triangulation polytope also gives us a natural way to understand the probability distribution of chord lengths of a closed random walk. To fix definitions, \begin{definition} Let $\operatorname{ChordLength}(k,n;\vec{r})$ be the length $|v_1 - v_{k+1}|$ of the chord skipping the first $k$ edges in a polygon sampled according to the standard measure on $\widehat{\Pol}_3(n;\vec{r})$. This is a random variable. 
\end{definition} The expected values of squared chordlengths for equilateral polygons have been computed by a rearrangement technique, and turn out to be quite simple: \begin{proposition}[Cantarella, Deguchi, Shonkwiler~\cite{CPA:CPA21480} and Millett, Zirbel~\cite{Zirbel:2012gg}] \label{prop:second moment} The second moment of the random variable $\operatorname{ChordLength}(k,n;\vec{1})$ is $\frac{k(n-k)}{n-1}$. \end{proposition} It is obviously interesting to know the other moments of these random variables, but this problem seems considerably harder. In particular, the techniques used in the proofs of~Proposition~\ref{prop:second moment} don't apply to other moments of chordlength. Here is an alternate form for the chordlength problem which allows us to make some explicit calculations: \begin{theorem} \label{thm:expected chord length} The expected value of the random variable $\operatorname{ChordLength}(k,n;\vec{1})$ is coordinate $d_{k-1}$ of the center of mass of the fan triangulation polytope~$P_n(\vec{1})$. For $n$ between $4$ and $8$, these expectations are given by the fractions \renewcommand\arraystretch{1.1} \begin{equation} \begin{array}{l|llllll} n \backslash \raisebox{0.25em}{k} & 2 & 3 & 4 & 5 & 6 \\ \hline 4 & 1 & & & & \\ 5 & 17/15 & 17/15 & & & \\ 6 & 14/12 & 15/12 & 14/12 & & \\ 7 & 461/385 & 506/385 & 506/385 & 461/385 & \\ 8 & 1,\!168/960 & 1,\!307/960 & 1,\!344/960 & 1,\!307/960 & 1,\!168/960 \\ \end{array} \end{equation} The $p$-th moment of $\operatorname{ChordLength}(k,n;\vec{1})$ is coordinate $d_{k-1}$ of the $p$-th center of mass of $P_n(\vec{1})$. \end{theorem} \begin{proof} Since the measure on $\widehat{\Pol}_3(n;\vec{1})$ is invariant under permutations of the edges, the pdf of chord length for any chord skipping $k$ edges must be the same as the pdf for the length of the chord joining $v_1$ and $v_{k+1}$. 
But this chord is a diagonal of the fan triangulation, so its length is the coordinate $d_{k-1}$ of the fan triangulation polytope $P_n(\vec{1})$. Since these chord lengths don't depend on dihedral angles, their expectations over polygon space are equal to their expectations over $P_n(\vec{1})$ by \eqref{eq:duistermaat heckman product} of Theorem~\ref{theorem:mp}, which applies by Theorem~\ref{thm:symplectic closed polygons}. But the expectation of a power of a coordinate over a region is simply a coordinate of the corresponding center of mass. We obtained the results in the table by a direct computer calculation using~\texttt{polymake}~\cite{Gawrilow:2000vl}, which decomposes the polytopes into simplices and computes the center of mass as a weighted sum of simplex centers of mass. \end{proof} It would be very interesting to get a general formula for these polytope centers of mass. \subsection{Closed polygons in (rooted) spherical confinement} Following Diao et al.~\cite{Diao:2011ie}, we say that a polygon $p$ is in rooted spherical confinement of radius $R$ if every vertex of the polygon is contained in a sphere of radius $R$ centered at the first vertex of the polygon. As a subspace of the space of closed polygons of fixed edgelengths, the space of confined closed polygons inherits a toric symplectic structure. In fact, the moment polytope for this structure is a very simple subpolytope of the fan triangulation polytope: \begin{definition} The \emph{confined fan polytope} $P_{n,R}(\vec{r}) \subset P_n(\vec{r})$ is determined by the fan triangulation inequalities~\eqref{eq:fan polytope} and the additional linear inequalities $d_i \leq R$. \label{def:confined fan triangulation polytope} \end{definition} As before, we immediately have action-angle coordinates $P_{n,R}(\vec{r}) \times T^{n-3}$ on the space of rooted confined polygons. 
We note that the vertices of the confined fan triangulation polytope corresponding to a space of confined polygons are \emph{not} all fixed points of the torus action since this is not the entire moment polytope; new vertices have been added by imposing the additional linear inequalities. As before, we get criteria for sampling confined polygons (directly analogous to Proposition~\ref{prop:polygon sampling} for unconfined polygons): \begin{proposition} A polygon in $\widehat{\Pol}_3(n;\vec{r})$ is sampled according to the standard measure on rooted sphere-confined closed space polygons in a sphere of radius $R$ if and only if its diagonal lengths $d_1 = |v_1 - v_3|$, $d_2 = |v_1 - v_4|$, \dots, $d_{n-3} = |v_1 - v_{n-1}|$ are uniformly sampled from the confined fan polytope $P_{n,R}(\vec{r})$ and its dihedral angles around these diagonals are sampled independently and uniformly in $[0,2\pi)$. \end{proposition} We can also compute expected values for chordlengths for confined polygons following the lead of Theorem~\ref{thm:expected chord length}, but here our results are weaker because the pdf of chordlength is no longer simply a function of the number of edges skipped: \begin{theorem} The expected length of the chord joining vertex $v_1$ to vertex $v_{k+1}$ in a polygon sampled according to the standard measure on polygons in rooted spherical confinement of radius $R$ is given by coordinate $d_{k-1}$ of the center of mass of the confined fan triangulation polytope $P_{n,R}(\vec{r})$. 
For $n$ between $4$ and $10$, $\vec{r} = \vec{1}$, and $R = 3/2$, these expectations are \begin{equation*} \begin{array}{l|llllllll} n\backslash \raisebox{0.25em}{k} & 2 & 3 & 4 & 5 & 6 & 7 & 8 & \!\!\!\!\!\!\!\!\!\!\textrm{(denominator)}\\ \hline 4 & 3/4 & & & & & & & \\ 5 & 8/9 & 8/9 & & & & & & \\ 6 & 293/336 & 316/336 & 293/336 & & & & & \\ 7 & 281/320 & 298/320 & 298/320 & 281/320 & & & & \\ 8 & 23,\!237 & 24,\!752 & 24,\!402 & 24,\!752 & 23,\!237 & & & 26,\!496 \\ 9 & 46,\!723 & 49,\!718 & 49,\!225 & 49,\!225 & 49,\!718 & 46,\!723 & & 53,\!256 \\ 10 & 1,\!145,\!123 & 1,\!218,\!844 & 1,\!205,\!645 & 1,\!210,\!696 & 1,\!205,\!645 & 1,\!218,\!844 & 1,\!145,\!123 & 1,\!305,\!344 \\ \end{array} \end{equation*} where for $n = 8$, $9$, and $10$ we moved the common denominator of all fractions in the row to the right-hand column. \end{theorem} The proof is just the same as the proof of Theorem~\ref{thm:expected chord length}, and again we use \texttt{polymake} for the computations. The data show an interesting pattern: for 8, 9, and 10 edge polygons, the confinement is tight enough that the data reveals small parity effects in the expectations. For 10-gons, for instance, vertex $v_5$ is on average closer to vertex $v_1$ than vertex $v_4$ is! We also calculated the exact expectation of chordlength for equilateral 10-gons confined to spheres of other radii. The results are shown in Figure~\ref{fig:confined chord lengths}. \begin{figure}[h] \begin{center} \begin{overpic}[width=2.5in]{confined_10gon_chords.pdf} \end{overpic} \end{center} \caption{Each line in this graph shows the expected length of the chord joining vertices $v_1$ and $v_k$ in a random equilateral 10-gon. The 10-gons are sampled from the standard measure on polygons in rooted spherical confinement. From bottom to top, the confinement radii are $1.25$, $1.5$, $1.75$, $2$, $2.5$, $3$, $4$, and $5$. Polygons confined in a sphere of radius $5$ are unconfined. 
Note the small parity effects which emerge in tighter confinement. These are exact expectations, not the result of sampling experiments. \label{fig:confined chord lengths}} \end{figure} \section{Markov Chain Monte Carlo for Closed and Confined Random Walks} \label{sec:MCMC methods} We have now constructed the action-angle coordinates on several spaces of random walks, including closed walks, closed walks in rooted spherical confinement, standard (open) random walks, and open random walks confined to half-spaces or slabs. In each case, the action-angle coordinates have allowed us to prove some theorems and make some interesting exact computations of probabilities on the spaces. To address more complicated (and physically interesting) questions, we will now turn to numerically sampling these spaces. Numerical sampling of closed polygons underlies a substantial body of work on the geometry and topology of polymers and biopolymers~(see the surveys of \cite{Orlandini:2007kn} and \cite{Benham:2005cl}, which contain more than 200 references), which is a topic of interest in statistical physics. Many of the physics questions at issue in these investigations seem to be best addressed by computation. For instance, while our methods above gave us simple (though not very tight) theoretical bounds on the fraction of unknots among equilateral 6-gons, a useful theoretical bound on, say, the fraction of unknots among 1,273-gons seems entirely out of reach. On the other hand, it is entirely reasonable to work on developing well-founded algorithms with verified convergence and statistically defensible error bars for experimental work on such questions, and that is precisely our aim in this part of the paper. \subsection{Current Sampling Algorithms for Random Polygons} A wide variety of sampling algorithms for random polygons have been proposed. 
They fall into two main categories: Markov chain algorithms such as polygonal folds~\cite{MR95g:57016} or crankshaft moves~\cite{Vologodskii:1979ik,Klenin:1988dt} (cf.~\cite{Anonymous:2010p2603} for a discussion of these methods) and direct sampling methods such as the ``triangle method''~\cite{Moore:2004ds} or the ``generalized hedgehog'' method~\cite{Varela:2009cda} and the methods of Moore and Grosberg~\cite{Moore:2005fh} and Diao, Ernst, Montemayor, and Ziegler \cite{Diao:2011ie,Diao:wt,Diao:2012dza} which are both based on the ``sinc integral formula'' \eqref{eq:rayleigh form}. Each of these approaches has some defects. No existing Markov chain method has been proved to converge to the standard measure, though it is generally conjectured that they do. It is unclear what measure the generalized hedgehog method samples, while the triangle method clearly samples a submanifold\footnote{It is hard to know whether this restriction is important in practice. The submanifold may be sufficiently ``well-distributed'' that most integrands of interest converge anyway. Or perhaps calculations performed with the triangle method are dramatically wrong for some integrands!} of polygon space. The Moore--Grosberg algorithm is known to sample the correct distribution, but faces certain practical problems. It is based on computing successive piecewise-polynomial distributions for diagonal lengths of a closed polygon and directly sampling from these one-dimensional distributions. There is no problem with the convergence of this method, but the difficulty is that the polynomials are high degree with large coefficients and many almost-cancellations, leading to significant numerical problems with accurately evaluating them\footnote{Hughes discusses these methods in Section 2.5.4 of his book on random walks~\cite{hughes1995random}, attributing the formula rederived by Moore and Grosberg~\cite{Moore:2005fh} to a 1946 paper of Treloar~\cite{TF9464200077}. 
The problems with evaluating these polynomials accurately were known by the 1970s, when Barakat~\cite{0301-0015-6-6-008} derived an alternate expression for this probability density based on Fourier transform methods.}. These problems are somewhat mitigated by the use of rational and multiple-precision arithmetic in~\cite{Moore:2005fh}, but the number of edges in polygons sampled with these methods is inherently limited. For instance, the text file giving the coefficients of the polynomials needed to sample a random closed 95-gon is over 25 megabytes in length. Diao et al.\ avoid this problem by approximating these distributions by normals, but this approximation means that they are not quite\footnote{Again, it is unclear what difference this makes in practice.} sampling the standard measure on polygon space. \subsection{The Toric Symplectic Markov Chain Monte Carlo Algorithm} We introduce a Markov Chain Monte Carlo algorithm for sampling toric symplectic manifolds with an adjustable parameter $\beta \in (0,1)$ explained below. We will call this the \proc{Toric-Symplectic-MCMC}$(\beta)$ algorithm or \proc{TSMCMC}$(\beta)$ for convenience. Though we intend to apply this algorithm to our random walk spaces, it works on any toric symplectic manifold, so we state the results in this section and the next for an arbitrary toric symplectic manifold $M^{2n}$ with moment map $\mu \colon\! M \to \mathbb{R}^n$, moment polytope $P$, and action-angle parametrization $\alpha \colon\! P \times T^n \to M$. The method is based on a classical Markov chain for sampling convex regions of $\mathbb{R}^n$ called the ``hit-and-run'' algorithm: choose a direction at random and sample along the intersection of the line in that direction with the region to find the next point in the chain. This method was introduced by Boneh and Golan~\cite{Boneh:1979uy} and independently by Smith~\cite{Smith:1980tu} as a means of generating random points in a high-dimensional polytope.
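If the polytope is presented in halfspace form $\{x : Ax \le b\}$, one hit-and-run move is only a few lines; the sketch below is a generic Python illustration (ours, not the implementation used in our experiments):

```python
import math
import random

def hit_and_run_step(A, b, x, rng=random):
    """One hit-and-run move in the polytope {y : A y <= b}, starting from
    an interior point x: pick a uniform direction, intersect the line
    x + t v with every facet, then sample t uniformly on the chord."""
    n = len(x)
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]   # uniform random direction
    s = math.sqrt(sum(c * c for c in v))
    v = [c / s for c in v]
    t0, t1 = -math.inf, math.inf
    for row, bi in zip(A, b):
        av = sum(r * c for r, c in zip(row, v))
        slack = bi - sum(r * c for r, c in zip(row, x))
        if abs(av) < 1e-13:
            continue                               # parallel to this facet
        t = slack / av
        if av > 0.0:
            t1 = min(t1, t)                        # facet ahead of x
        else:
            t0 = max(t0, t)                        # facet behind x
    t = rng.uniform(t0, t1)
    return [xi + t * vi for xi, vi in zip(x, v)]
```

Within TSMCMC$(\beta)$ this move updates the action variables; with the complementary probability the angle variables are simply redrawn uniformly.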
There is a well-developed theory around this method which we will be able to make use of below. The parameter $\beta$ controls the relative weight given to the action and angle variables. At each step of TSMCMC$(\beta)$, with probability $\beta$ we update the action variables by sampling the moment polytope $P$ using hit-and-run and with probability $1-\beta$ we update the angle variables by sampling the torus $T^n$ uniformly. When $\beta = \nicefrac{1}{2}$ this is analogous to the random scan Metropolis-within-Gibbs samplers discussed by Roberts and Rosenthal~\cite{Roberts:1997ir} (see also~\cite{Latuszynski:2013dz}). The adjustable parameter $\beta$ allows us to tune the speed and accuracy of Monte Carlo integration using TSMCMC$(\beta)$. As a trivial example, if we use TSMCMC$(\beta)$ to numerically integrate a functional which is almost constant in the angle variables, then we should let $\beta$ be very close to $1$. \begin{codebox} \Procname{$\proc{Toric-Symplectic-MCMC}(\vec{p},\vec{\theta},\beta)$} \zi $prob = \proc{Uniform-Random-Variate}(0,1)$ \zi \If $prob < \beta$ \zi \Then \> \Comment Generate a new point in $P$ using the hit-and-run algorithm. \zi $\vec{v} = \proc{Random-Direction}(n)$ \zi $(t_0,t_1) = \proc{Find-Intersection-Endpoints}(P,\vec{p},\vec{v})$ \zi $t = \proc{Uniform-Random-Variate}(t_0,t_1)$ \zi $\vec{p} = \vec{p} + t \vec{v}$ \zi \Else \> \Comment Generate a new point in $T^n$ uniformly. \zi \For $\id{ind} = 1$ \To $n$ \zi \Do $\theta_{\id{ind}} = \proc{Uniform-Random-Variate}(0,2\pi)$ \End \End \zi \Return ($\vec{p}$,$\vec{\theta}$) \End \label{tsmcmc} \end{codebox} We now prove that the distribution of samples produced by this Markov chain converges geometrically to the distribution generated by the symplectic volume on $M^{2n}$. First, we show that the symplectic measure on $M$ is invariant for TSMCMC. 
To do so, recall that for any Markov chain $\Phi$ on a state space $X$, we can define the $m$-step transition probability $\mathcal{P}^m(x,A)$ to be the probability that an $m$-step run of the chain starting at $x$ lands in the set $A$. This defines a measure $\mathcal{P}^m(x,\cdot)$ on $X$. The transition kernel $\mathcal{P} = \mathcal{P}^1$ is called \emph{reversible} with respect to a probability distribution $\pi$ if \begin{equation}\label{eq:reversibility} \int_A \pi(\!\, \mathrm{d}x) \mathcal{P}(x,B) = \int_B \pi(\!\, \mathrm{d}x) \mathcal{P}(x,A) \text{ for all measurable } A,B \subset X. \end{equation} In other words, the probability of moving from $A$ to $B$ is the same as the probability of moving from $B$ to $A$. If $\mathcal{P}$ is reversible with respect to $\pi$, then $\pi$ is invariant for $\mathcal{P}$: letting $A = X$ in \eqref{eq:reversibility}, we see that $\pi \mathcal{P} = \pi$. In TSMCMC$(\beta)$, the transition kernel $\mathcal{P} = \beta \mathcal{P}_1 + (1-\beta)\mathcal{P}_2$, where $\mathcal{P}_1$ is the hit-and-run kernel on the moment polytope and $\mathcal{P}_2(\vec{\theta},\cdot) = \tau$, where $\tau$ is the uniform measure on $T^n$. Since hit-and-run is reversible on the moment polytope~\cite{Smith:1984vz} and since $\mathcal{P}_2$ is obviously reversible with respect to $\tau$, we have: \begin{proposition}\label{prop:reversibility} TSMCMC$(\beta)$ is reversible with respect to the symplectic measure $\nu$ induced by symplectic volume on $M$. In particular, $\nu$ is invariant for TSMCMC$(\beta)$. \end{proposition} Recall that the total variation distance between two measures $\eta_1, \eta_2$ on a state space~$X$ is given by \begin{equation*} \tvnorm{\eta_1 - \eta_2} := \sup_{A \text{ any measurable set}} |\eta_1(A) - \eta_2(A)|. \end{equation*} We can now prove geometric convergence of the sample measure generated by TSMCMC($\beta$) to the symplectic measure in total variation distance. 
\begin{theorem}\label{prop:tsmcmc-convergence} Suppose that $M^{2n}$ is a toric symplectic manifold with moment polytope $P$ and action-angle coordinates $\alpha \colon\! P \times T^n \rightarrow M^{2n}$. Further, let $\mathcal{P}^m(\vec{p},\vec{\theta},\cdot)$ be the $m$-step transition probability of the Markov chain given by \proc{Toric-Symplectic-MCMC}$(\beta)$ and let $\nu$ be the symplectic measure on $M^{2n}$. There are constants $R < \infty$ and $\rho < 1$ so that for any $(\vec{p},\vec{\theta}) \in \operatorname{int}(P) \times T^n$, \begin{equation*} \tvnorm{\alpha_\star \mathcal{P}^m(\vec{p},\vec{\theta},\cdot) - \nu} < R \rho^m. \end{equation*} That is, for any choice of starting point, the pushforward by $\alpha$ of the probability measure generated by \proc{Toric-Symplectic-MCMC}$(\beta)$ on $P \times T^{n}$ converges geometrically (in the number of steps taken in the chain) to the symplectic measure on $M^{2n}$. \end{theorem} \begin{proof} Let $\lambda$ be Lebesgue measure on the moment polytope $P$ and, as above, let $\tau$ be uniform measure on the torus $T^n$. By Theorem~\ref{theorem:mp}, it suffices to show that \[ \tvnorm{\mathcal{P}^m(\vec{p},\vec{\theta}, \cdot) - \lambda \times \tau} < R \rho^m. \] Since the transition kernels $\mathcal{P}_1$ and $\mathcal{P}_2$ commute, for any non-negative integers $a$ and $b$ and partitions $i_1, \ldots , i_k$ of $a$ and $j_1, \ldots , j_\ell$ of $b$ we have \begin{equation}\label{eq:kernel product} \left(\mathcal{P}_1^{i_1}\mathcal{P}_2^{j_1} \cdots \mathcal{P}_1^{i_k}\mathcal{P}_2^{j_\ell}\right)(\vec{p},\vec{\theta},\cdot) = \left(\mathcal{P}_1^a \mathcal{P}_2^b\right)(\vec{p},\vec{\theta},\cdot) = \mathcal{P}_1^a(\vec{p},\cdot) \times \mathcal{P}_2^b(\vec{\theta},\cdot) = \mathcal{P}_1^a(\vec{p},\cdot) \times \tau, \end{equation} where the last equality follows from the fact that $\mathcal{P}_2(\vec{\theta},\cdot) = \tau$ for any $\vec{\theta} \in T^n$. 
The total variation distance between product measures is bounded above by the sum of the total variation distances of the factors (this goes back at least to Blum and Pathak~\cite{Blum:1972bt}; see Sendler~\cite{Sendler:1975vn} for a proof), so we have that \begin{align} \nonumber \tvnorm{\mathcal{P}_1^a(\vec{p},\cdot) \times \mathcal{P}_2^b(\vec{\theta},\cdot) - \lambda \times \tau} & = \tvnorm{\mathcal{P}_1^a(\vec{p},\cdot) \times \tau - \lambda \times \tau} \\ \label{eq:kernel triangle ineq} & \leq \tvnorm{\mathcal{P}_1^a(\vec{p}, \cdot) - \lambda} + \tvnorm{\tau - \tau} \\ \nonumber & = \tvnorm{\mathcal{P}_1^a(\vec{p}, \cdot) - \lambda}. \end{align} Using a result of Smith~\cite[Theorem~3]{Smith:1984vz}, the right hand side is bounded above by $\left(1-\frac{\xi}{n2^{n-1}}\right)^{a-1}$, where $\xi$ is the ratio of the volume of $P$ to the volume of the smallest round ball containing $P$. Let \[ \kappa := \left(1-\frac{\xi}{n2^{n-1}}\right). \] Then combining~\eqref{eq:kernel product},~\eqref{eq:kernel triangle ineq}, and the binomial theorem yields \begin{align*} \tvnorm{\mathcal{P}^m(\vec{p},\vec{\theta},\cdot) - \lambda \times \tau} & = \tvnorm{\left(\beta \mathcal{P}_1 + (1-\beta)\mathcal{P}_2\right)^m(\vec{p},\vec{\theta},\cdot) - \lambda \times \tau} \\ & = \tvnorm{ \sum_{i=0}^m \binom{m}{i} \beta^{m-i}(1-\beta)^i \left(\mathcal{P}_1^{m-i}(\vec{p},\cdot) \times \tau - \lambda \times \tau\right)} \\ & \leq \sum_{i=0}^m \binom{m}{i} \beta^{m-i}(1-\beta)^i \kappa^{m-i-1} \\ & = \frac{1}{\kappa}(1+\beta(\kappa-1))^m = \frac{1}{\kappa}\left(1 - \frac{\beta\xi}{n2^{n-1}}\right)^m. \end{align*} The ratio $\xi$ of the volume of $P$ to the volume of the smallest round ball containing $P$ is always a positive number less than $1$, so $0 < 1 - \beta\xi/(n2^{n-1}) < 1$. This completes the proof. \end{proof} This theorem provides a comforting theoretical guarantee that \proc{Toric-Symplectic-MCMC}$(\beta)$ will eventually work.
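To see how pessimistic the resulting guarantee is, one can solve the bound $\frac{1}{\kappa}\left(1 - \frac{\beta\xi}{n2^{n-1}}\right)^m \le \epsilon$ for $m$. In the sketch below (Python, illustrative only) the volume ratio $\xi = 1$ is a deliberately generous made-up value; the true ratio depends on the polytope and is smaller:

```python
import math

def steps_guaranteed(eps, n, xi=1.0, beta=0.5):
    """Smallest m with (1/kappa) * (1 - beta*xi/(n*2^(n-1)))^m <= eps,
    following the bound in the proof above. Even with the unrealistically
    generous choice xi = 1, the step counts are astronomical."""
    q = xi / (n * 2 ** (n - 1))
    kappa = 1.0 - q
    rate = 1.0 - beta * q
    return math.ceil(math.log(eps * kappa) / math.log(rate))
```

Even with $\xi = 1$, certifying two digits of total variation distance for a 20-dimensional moment polytope (a 23-gon) requires on the order of $10^8$ steps under this bound.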
The proof provides a way to estimate the constants $R$ and $\rho$. However, in practice, these upper bounds are far too large to be useful. Further, the rate of convergence for any given run will depend on the shape and dimension of the moment polytope $P$ and on the starting position $x$. There is quite a bit known about the performance of hit-and-run in general theoretical terms; we recommend the excellent survey article of Andersen and Diaconis~\cite{Andersen:2007vn}. To give one example, Lov\'asz and Vempala have shown~\cite{Lovasz:2006gs} (see also~\cite{Lovasz:1999cq}) that the number of steps of hit-and-run required to reduce the total variation distance between $\mathcal{P}^m(x,\cdot)$ and Lebesgue measure by an order of magnitude is proportional\footnote{The constant of proportionality is large and depends on the geometry of the polytope, and the amount of time required to reduce the total variation distance to a fixed amount from the start depends on the distance from the starting point to the boundary of the polytope.} to $n^3$, where $n$ is the dimension of the polytope. \subsection{The Markov Chain CLT and Geyer's IPS Error Bounds for TSMCMC integration} \label{subsec:geyer ips} We now know that the TSMCMC($\beta$) algorithm will eventually sample from the correct probability measure on any toric symplectic manifold, and in particular from the correct probability distributions on closed and confined random walks. We should pause to appreciate the significance of this result for a moment -- while many Markov chain samplers have been proposed for closed polygons, none have been proved to converge to the correct measure. Further, there has never been a Markov chain sampler for closed polygons in rooted spherical confinement (or, as far as we know, for slab-confined or half-space confined arms). However, the situation remains in some ways unsatisfactory.
If we wish to compute the probability of an event in one of these probability spaces of polygons, we must do an integral over the space by collecting sample values from a Markov chain. But since we do not have any explicit bounds on the rate of convergence of our Markov chains, we do not know how long to run the sampler, or how far the resulting sample mean might be from the integral over the space. To answer these questions, we need two standard tools: the Markov Chain Central Limit Theorem and Geyer's Initial Positive Sequence (IPS) error estimators for MCMC integration~\cite{Geyer:1992wm}. For the convenience of readers unfamiliar with these methods, we summarize the construction here. Since this is basically standard material, many readers may wish to skip ahead to the next section. Combining Proposition~\ref{prop:tsmcmc-convergence} with~\cite[Theorem~5]{Tierney:1994uk} (which is based on \cite[Corollary~4.2]{Cogburn:1972uv}) yields a central limit theorem for the \proc{Toric-Symplectic-MCMC}$(\beta)$ algorithm. To set notation, a run of the TSMCMC$(\beta)$ algorithm produces the sequence of points $((\vec{p}_0,\vec{\theta}_0),(\vec{p}_1,\vec{\theta}_1), \ldots)$, where the initial point $(\vec{p}_0,\vec{\theta}_0)$ is drawn from some initial distribution (for example, a delta distribution). For any run $R$, let \vspace{-0.2in} \[ \operatorname{SMean}(f;R,m) := \frac{1}{m} \sum_{k=1}^m f(\vec{p}_k,\vec{\theta}_k) \] be the sample mean of the values of a function $f\colon\! M^{2n} \to \mathbb{R}$ over the first $m$ steps in $R$. We will use ``$f$'' interchangeably to refer to the original function $f\colon\! M^{2n} \to \mathbb{R}$ or its expression in action-angle coordinates $f \circ \alpha \colon\! P \times T^n \to \mathbb{R}$. Let $E(f;\nu)$ be the expected value of $f$ with respect to the symplectic measure $\nu$ on $M$. 
Then for each positive integer $m$ the normalized sample error \[ \sqrt{m}(\operatorname{SMean}(f;R,m) - E(f;\nu)) \] is a random variable (as it depends on the various random choices in the run $R$). \begin{proposition}\label{prop:tsmcmc-CLT} Suppose $f$ is a square-integrable real-valued function on the toric symplectic manifold $M^{2n}$. Then regardless of the initial distribution, there exists a real number $\sigma(f)$ so that \begin{equation}\label{eq:tsmcmc-CLT} \sqrt{m}(\operatorname{SMean}(f;R,m) - E(f;\nu)) \stackrel{w}{\longrightarrow} \mathcal{N}(0,\sigma(f)^2), \end{equation} where $\mathcal{N}(0,\sigma(f)^2)$ is the Gaussian distribution with mean 0 and standard deviation $\sigma(f)$ and the superscript $w$ denotes weak convergence. \end{proposition} Given $\sigma(f)$ and a run $R$, the range $\operatorname{SMean}(f;R,m) \pm 1.96 \sigma(f)/\sqrt{m}$ is an approximate $95\%$ confidence interval for the true expected value $E(f;\nu)$. Abstractly, we can find $\sigma(f)$ as follows. The variance of the left hand side of \eqref{eq:tsmcmc-CLT} is \begin{multline*} m \operatorname{Var}(\operatorname{SMean}(f;R,m)) = \frac{1}{m} \sum_{i=1}^m \operatorname{Var}(f(\vec{p}_i,\vec{\theta}_i)) + \frac{1}{m} \sum_{i=1}^m \sum_{j \neq i} \operatorname{Cov}(f(\vec{p}_i,\vec{\theta}_i),f(\vec{p}_j,\vec{\theta}_j)). \end{multline*} Since the convergence in Proposition~\ref{prop:tsmcmc-CLT} is independent of the initial distribution, $\sigma(f)$ will be the limit of this quantity for \emph{any} initial distribution. Following Chan and Geyer~\cite{Chan:1994vh}, suppose the initial distribution is the stationary distribution. In that case, the quantities \[ \gamma_0(f) := \operatorname{Var}(f(\vec{p}_i,\vec{\theta}_i)) \] and \[ \gamma_k(f) := \operatorname{Cov}(f(\vec{p}_i,\vec{\theta}_i),f(\vec{p}_{i+k},\vec{\theta}_{i+k})) \] (the stationary variance and lag $k$ autocovariance, respectively) are independent of $i$.
Then \begin{equation*} \sigma(f)^2 = \lim_{m \to \infty} \left(\gamma_0(f) + 2 \sum_{k=1}^{m-1}\frac{m-k}{m}\gamma_k(f)\right) = \gamma_0(f) + 2 \sum_{k=1}^\infty \gamma_k(f) \end{equation*} provided the sum on the right hand side converges. In what follows it will be convenient to write the above as \begin{equation}\label{eq:sigma1} \sigma(f)^2 = \gamma_0(f) + 2\gamma_1(f) + 2 \sum_{k=1}^\infty \Gamma_k(f), \end{equation} where $\Gamma_k(f) := \gamma_{2k}(f) + \gamma_{2k+1}(f)$. Again, we emphasize that the quantities $\gamma_0(f), \gamma_k(f), \Gamma_k(f)$ are associated to the \emph{stationary} Markov chain. In practice, of course, these quantities and hence this expression for $\sigma(f)$ are not computable. After all, if we could sample directly from the symplectic measure on $M^{2n}$ there would be no need for TSMCMC. However, as pointed out by Geyer~\cite{Geyer:1992wm}, $\sigma(f)$ can be estimated from the sample data that produced $\operatorname{SMean}(f;R,m)$. Specifically, we will estimate the stationary lagged autocovariance $\gamma_k(f)$ by the following quantity: \begin{equation} \bar{\gamma}_k(f) = \frac{1}{m} \sum_{i=1}^{m-k}[f(\vec{p}_i,\vec{\theta}_i) - \operatorname{SMean}(f;R,m)][f(\vec{p}_{i+k},\vec{\theta}_{i+k})-\operatorname{SMean}(f;R,m)] \end{equation} Note that multiplication by $\nicefrac{1}{m}$ rather than $\nicefrac{1}{m-k}$ is not a typographical error (cf.~\cite{Geyer:1992wm}, Sec 3.1). Let $\bar{\Gamma}_k(f) = \bar{\gamma}_{2k}(f) + \bar{\gamma}_{2k+1}(f)$. Then for any $N > 0$ \begin{equation}\label{eq:sigma2} \bar{\sigma}_{m,N}(f)^2 := \bar{\gamma}_0(f) + 2\bar{\gamma}_1(f) + 2\sum_{k=1}^N \bar{\Gamma}_k(f) \end{equation} is an estimator for $\sigma(f)^2$. We expect the $\bar{\Gamma}_k$ to decrease to zero as $k \to \infty$ since very distant points in the run of the Markov chain should become statistically uncorrelated. 
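The estimators $\bar{\gamma}_k$, $\bar{\Gamma}_k$, and $\bar{\sigma}_{m,N}(f)^2$ are easy to compute from a run. The Python sketch below (ours, not part of the paper's implementation) truncates the sum at the first nonpositive $\bar{\Gamma}_k$, one natural stopping rule given the expected decay:

```python
def ips_variance(xs):
    """Initial-positive-sequence estimate of sigma(f)^2 from the sample
    sequence xs = [f(p_1, theta_1), ..., f(p_m, theta_m)]."""
    m = len(xs)
    mean = sum(xs) / m

    def gamma(k):
        # Lagged autocovariance; note the 1/m normalization, as in the text.
        return sum((xs[i] - mean) * (xs[i + k] - mean)
                   for i in range(m - k)) / m

    sigma2 = gamma(0) + 2.0 * gamma(1)
    k = 1
    while 2 * k + 1 < m:
        Gamma = gamma(2 * k) + gamma(2 * k + 1)
        if Gamma <= 0.0:
            break                 # terms past here are dominated by noise
        sigma2 += 2.0 * Gamma
        k += 1
    return sigma2
```

On a sequence with positive autocorrelation the estimate exceeds the naive variance $\bar{\gamma}_0$, widening the resulting error bars accordingly.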
Indeed, since TSMCMC is reversible, Geyer shows this is true for the stationary chain: \begin{theorem}[Geyer \cite{Geyer:1992wm}, Theorem 3.1] \label{prop:Gamma_k} $\Gamma_k$ is strictly positive, strictly decreasing, and strictly convex as a function of $k$. \end{theorem} We expect, then, that any non-positivity, non-monotonicity, or non-convexity of the $\bar{\Gamma}_k$ should be due to $k$ being sufficiently large that $\bar{\Gamma}_k$ is dominated by noise. In particular, this suggests that a reasonable choice for $N$ in \eqref{eq:sigma2} is the first $N$ such that $\bar{\Gamma}_N \leq 0$, since the terms past this point will be dominated by noise and hence tend to cancel each other. \begin{definition}\label{def:ipse} Given a function $f$ and a length-$m$ run of the TSMCMC algorithm as above, let $N$ be the largest integer so that $\bar{\Gamma}_1(f), \ldots , \bar{\Gamma}_N(f)$ are all strictly positive. Then the \emph{initial positive sequence estimator} for $\sigma(f)^2$ is \[ \bar{\sigma}_{m}(f)^2 := \bar{\sigma}_{m,N}(f)^2 = \bar{\gamma}_0(f) + 2\bar{\gamma}_1(f) + 2\sum_{k=1}^N \bar{\Gamma}_k(f). \] \end{definition} Slightly more refined initial sequence estimators which take into account the monotonicity and convexity from Theorem~\ref{prop:Gamma_k} are also possible; see~\cite{Geyer:1992wm} for details. The pleasant result of all this is that $\bar{\sigma}_{m}(f)^2$ is a statistically consistent overestimate of the actual variance $\sigma(f)^2$. \begin{theorem}[Geyer~\cite{Geyer:1992wm}, Theorem~3.2] For almost all sample paths of TSMCMC, \[ \liminf_{m \to \infty} \bar{\sigma}_{m}(f)^2 \geq \sigma(f)^2.
\] \end{theorem} Therefore, we propose the following procedure for \emph{Toric Symplectic Markov Chain Monte Carlo integration} which yields statistically consistent error bars on the estimate of the true value of the integral: \begin{tsmcmci}\label{tsmcmci} Let $f$ be a square-integrable function on a toric symplectic manifold $M^{2n}$ with moment map $\mu: M \to \mathbb{R}^n$. \begin{enumerate} \item Find the fixed points of the Hamiltonian torus action. The moment polytope $P$ is the convex hull of the images of these fixed points under $\mu$. \item Convert this vertex description of $P$ to a halfspace description. In other words, realize $P$ as the subset of points in $\mathbb{R}^n$ satisfying a collection of linear inequalities.\footnote{For small problems, this can be done algorithmically~\cite{Avis:1992kb,Chazelle:1993kn,Gawrilow:2000vl}. Generally, this will require an analysis of the moment polytope, such as the one performed above for the moment polytopes of polygon spaces.} \item Pick the parameter $\beta \in (0,1)$. We recommend repeating the entire procedure for several short runs with various $\beta$ values to decide on the best $\beta$ for a given application. The final error estimate is a good measure of how well the chain has converged after a given amount of runtime. \item Pick a point $(\vec{p}_0, \vec{\theta}_0) \in P \times T^n$. This will be the starting point of the Markov chain. Ideally, $\vec{p}_0$ should be as far as possible from the boundary of $P$. \item Using $(\vec{p}_0,\vec{\theta}_0)$ as the initial input, iterate the TSMCMC$(\beta)$ algorithm for $m$ steps ($m \gg 1$). This produces a finite sequence $((\vec{p}_1,\vec{\theta}_1),\ldots , (\vec{p}_m,\vec{\theta}_m))$ of points in $P \times T^n$. \item Let $\operatorname{SMean}(f;m) = \frac{1}{m} \sum_{i=1}^m f(\vec{p}_i, \vec{\theta}_i)$ be the average value of $f$ over the run of points produced in the previous step. 
\item Compute the initial positive sequence estimator $\bar{\sigma}_{m}(f)^2$. \item $\operatorname{SMean}(f;m)\pm 1.96\bar{\sigma}_m(f)/\sqrt{m}$ is an approximate $95\%$ confidence interval for the true expected value of the function $f$. \end{enumerate} \end{tsmcmci} \subsection{Tuning the TSMCMC Algorithm for Closed and Confined Polygons} \label{sec:numerics} The TSMCMC($\beta$) algorithm for polygons has several adjustable parameters. We must always choose a starting polygon. For unconfined polygons, we may choose any triangulation of the $n$-gon and get a corresponding moment polytope. Finally, we must make an appropriate choice of $\beta$. In this section, we report experimental results which address these questions. In our experiments, we always integrated total curvature and used equilateral closed polygons. At least for unconfined polygons, we know the exact value of the expectation from Theorem~\ref{thm:expected total curvature}. To measure convergence, we used the Geyer IPS error estimate as a measure of quality (lower is better). Since different step types take very different amounts of time to run, we ran different variations of the algorithm for a consistent amount of CPU time, even though this led to very different step counts. We discovered in our experiments that the rate of convergence of hit-and-run depends strongly on the start point. Our original choice of start point -- the regular planar equilateral $n$-gon -- turned out to be a very poor performer. While it seems like a natural choice mathematically, the regular $n$-gon is tucked away in a corner of the moment polytope and it takes hit-and-run quite a while to escape this trap. After a number of experiments, the most consistently desirable start point was obtained by folding the regular $n$-gon along the diagonals of the given triangulation and then randomly permuting the resulting edge set. We used this as a starting configuration in all of our unconfined experiments. 
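The corner-like position of the regular $n$-gon is easy to quantify: for the fan triangulation, its diagonals are $d_k = \sin((k+1)\pi/n)/\sin(\pi/n)$ (the chord lengths of the regular $n$-gon with unit edges), and a short computation (ours, illustrative) shows how little slack they leave in the fan triangle inequalities:

```python
import math

def regular_ngon_fan_slack(n):
    """Minimum slack of the planar regular n-gon's fan diagonals in the
    fan triangle inequalities |d_i - d_{i+1}| <= 1 <= d_i + d_{i+1},
    padding with the unit edges at either end of the fan."""
    d = [math.sin((k + 1) * math.pi / n) / math.sin(math.pi / n)
         for k in range(1, n - 2)]
    ds = [1.0] + d + [1.0]
    return min(min(1.0 - abs(ds[i] - ds[i + 1]),      # difference constraint
                   ds[i] + ds[i + 1] - 1.0)           # sum constraint
               for i in range(len(ds) - 1))
```

For $n = 23$ the minimum slack is $2 - 2\cos(\pi/23) \approx 0.019$, and it shrinks like $\pi^2/n^2$, so hit-and-run starts pressed against several facets of the moment polytope.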
We also discovered that hit-and-run can converge relatively slowly when sampling high-dimensional polytopes, leading to very long-range autocorrelations in the resulting Markov chain. Following a suggestion of Soteros~\cite{chrispc}, after considerable experimentation we settled on the convention that a single ``moment polytope'' step in our implementation of TSMCMC$(\beta)$ would represent ten iterations of hit-and-run on the moment polytope. This reduced autocorrelations greatly and led to better convergence overall. We used this convention for all our numerical experiments below. The $\proc{Toric-Symplectic-MCMC}(\beta)$ algorithm depends on a choice of triangulation $T$ for the $n$-gon to determine the moment polytope $P$. There is considerable freedom in this choice, since the number of triangulations of an $n$-gon is the Catalan number $C_{n-2} \sim 4^{n-2}/\bigl((n-2)^{3/2} \sqrt{\pi}\bigr)$. We have proved above that the $\proc{Toric-Symplectic-MCMC}(\beta)$ algorithm will converge for any of these triangulations, but the rate of convergence is expected to depend on the triangulation, which determines the geometry of the moment polytope. This geometry directly affects the rate of convergence of hit-and-run; ``long and skinny'' polytopes are harder to sample than ``round'' ones (see Lov\'asz~\cite{Lovasz:1999cq}). To get a sense of the effect of the triangulation on the performance of TSMCMC$(\beta)$ we set $\beta=0.5$ and $n = 23$ and ran the algorithm from 20 start points for 20,000 steps. We then took the average IPS error bar for expected total curvature over these 20 runs as a measure of convergence. We repeated this analysis for 300 random triangulations and 300 repeats of three triangulations that we called the ``fan'', ``teeth'' and ``spiral'' triangulations. The results are shown in Figure~\ref{fig:triangulations}.
The definition of the fan and teeth triangulations will be obvious from that figure; the spiral triangulation is generated by traversing the $n$-gon in order repeatedly, joining every other vertex along the traversal until the triangulation is complete. Our experiments showed that this spiral triangulation was the best-performing triangulation among our candidates, so we standardized on that triangulation for further numerical experiments. \begin{figure}[ht] \begin{tabular}{cccc} \includegraphics[width=1.15in]{winning_spiral23.pdf} & \includegraphics[width=1.15in]{winning_random23.pdf} & \includegraphics[width=1.15in]{winning_fan23.pdf} & \includegraphics[width=1.15in]{winning_teeth23.pdf} \\ $[0.117,0.127]$ & $0.123$ & $[0.153,0.181]$ & $[0.158,0.184]$ \\ spiral & random & fan & teeth \end{tabular} \caption{We tested the average IPS $95\%$ confidence error estimate for the expected value of total curvature over random equilateral 23-gons over 20 runs of the TSMCMC(0.5) algorithm. Each run had a starting point generated by folding and permuting a regular $n$-gon as described above, and ran for 20,000 steps. We tried 300 random triangulations and 300 repetitions of the same procedure for the ``spiral'', ``fan'', and ``teeth'' triangulations shown above. Below each triangulation is shown the range of average error bars observed over 300 repetitions of the 20-start-point trials; for the random triangulation we report the best average error bar over a single 20-start-point trial observed for any of the 300 random triangulations we computed. We can see that the algorithm based on the spiral triangulation generally outperforms algorithms based on even the best of the 300 random triangulations, while algorithms based on the fan and teeth triangulations converged more slowly.\label{fig:triangulations}} \end{figure} We then considered the effect of varying the parameter $\beta$ for the TSMCMC($\beta$) algorithm using the spiral triangulation.
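In code, one reading of this spiral construction is the sketch below; the wrap-around convention at each pass is our choice, and the paper's exact convention may differ:

```python
def spiral_triangulation(n):
    """Diagonals of a 'spiral' triangulation of the n-gon: traverse the
    polygon repeatedly, joining every other vertex, until only a triangle
    remains. The wrap-around handling is one possible convention."""
    verts = list(range(1, n + 1))
    diags = []
    while len(verts) > 3:
        L = len(verts)
        for i in range(0, L - 2, 2):
            diags.append((verts[i], verts[i + 2]))
        if L % 2 == 0 and L > 4:
            diags.append((verts[-2], verts[0]))   # close off the last triangle
        verts = verts[0::2]                       # surviving vertices
    return diags
```

For every $n$ this produces the required $n-3$ distinct diagonals; for $n = 8$, for instance, it gives $(1,3)$, $(3,5)$, $(5,7)$, $(7,1)$, $(1,5)$.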
We ran a series of trials computing expected total curvature for 64-gons, varying $\beta$ from $0.05$ (almost all dihedral steps) to $0.95$ (almost all moment polytope steps) over 10 minute runs. We repeated each run 50 times to get a sense of the variability in the Geyer IPS error estimators for different runs. Since dihedral steps are considerably faster than moment polytope steps, the step counts varied from about 1 to 9 million. The resulting Geyer IPS error estimators are shown in Figure~\ref{fig:spiral grid one}. Our recommendation from these experiments is to use the spiral triangulation and $\beta = 0.5$ for experiments with unconfined polygons. From the 50 runs using the recommended $\beta = 0.5$, the run with the median IPS error estimate produced an expected total curvature estimate of $101.724 \pm 0.142$ using about $4.6$ million samples; recall that we computed in Table~\ref{tab:expected total curvature} that the expected value of total curvature for equilateral, unconfined $64$-gons is a complicated fraction close to $101.7278$. \begin{figure}[ht] \begin{center} \begin{overpic}[width=2.5in]{tuning_tsmcmc_beta.pdf} \end{overpic} \end{center} \caption{The figure above shows a box-and-whisker plot for the IPS error estimators observed in computing expected total curvature over 50 runs of the TSMCMC($\beta$) algorithm for various values of $\beta$. The boxes show the $\nicefrac{1}{4}$ to $\nicefrac{3}{4}$ quantiles of the data, while the whiskers extend from the $0.05$ quantile to the $0.95$ quantile. While the whiskers show that there is plenty of variability in the data, the general trend is that the performance of the algorithm improves rapidly as $\beta$ varies from $0.05$ to $0.25$, modestly as $\beta$ varies from $0.25$ to $0.5$, and is basically constant for $\beta$ from $0.5$ to $0.95$. 
\label{fig:spiral grid one} } \end{figure} \subsection{Crankshafts, folds and permutation steps for unconfined equilateral polygons} \label{subsec:ptsmcmc} If we are dealing with unconfined equilateral polygons, we can define a transformation from $\widehat{\Pol}_3(n;\vec{1})$ to itself by permuting the edge set of the polygon. Since the symplectic measure on the space $\widehat{\Pol}_3(n;\vec{1}) = \Pol_3(n;\vec{1})/\operatorname{SO}(3)$ of unconfined equilateral polygons agrees with the standard measure, which is the pushforward of the subspace measure on ${\Pol_3(n;\vec{1}) \subset \Arm_3(n;\vec{1}) = S^2(1) \times \ldots \times S^2(1)}$, the action of the symmetric group $S_n$ which permutes edges is measure-preserving on $\widehat{\Pol}_3(n;\vec{1})$. As a consequence, we will see that we can mix permutation steps with standard TSMCMC steps without losing geometric convergence or the applicability of the Central Limit Theorem. Such a Markov chain is a mixture of dihedral angle steps, moment polytope steps, and permutation steps in some proportion. It is interesting to note that we can recover algorithms very similar to the standard ``crankshaft'' and ``fold'' Markov chains by allowing no moment polytope steps in the chain. Since previous authors have observed that adding permutation steps can significantly speed up convergence in polygon samplers~\cite{Anonymous:2010p2603}, we now experiment to see whether our algorithm, too, can be improved by mixing in some permutations. More precisely, we can define a new Markov chain \proc{Polygon-Permutation} on $\widehat{\Pol}_3(n;\vec{1})$ by permuting edges at each step: \begin{codebox} \Procname{\proc{Polygon-Permutation}(pol)} \zi $\sigma = \proc{Uniform-Permutation}(n)$ \zi pol $ = \proc{Permute-Edges}(\mathrm{pol},\sigma)$ \zi \Return pol \End \end{codebox} Since the symplectic measure on $\widehat{\Pol}_3(n;\vec{1})$ is permutation-invariant, the symplectic measure is stationary for $\proc{Polygon-Permutation}$. 
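In code, a \proc{Polygon-Permutation} step amounts to a uniform shuffle of the edge vectors; closure is automatic because vector addition commutes. A minimal Python sketch, assuming the polygon is stored as an $n \times 3$ array of edge vectors:

```python
import numpy as np

def polygon_permutation(edges, rng):
    """One Polygon-Permutation step on a closed polygon stored as an
    (n x 3) array of edge vectors.  A uniform random permutation of
    the rows preserves the closure condition sum(edges) = 0 and the
    multiset of edge vectors."""
    return edges[rng.permutation(len(edges))]
```

The permuted rows represent the same edge multiset, so the resulting polygon is still closed and equilateral.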
Now, we can mix TSMCMC$(\beta)$ with \proc{Polygon-Permutation} to get the following \proc{Permutation-Toric-Symplectic-MCMC}$(\beta,\delta)$ algorithm, where $\delta \in [0,1)$ gives the probability of doing a permutation step rather than a TSMCMC$(\beta)$ step. Recall that ${\alpha\colon\! P \times T^{n-3} \to \widehat{\Pol}_3(n;\vec{1})}$ is the action-angle parametrization, where $P$ is the moment polytope induced by the chosen triangulation. \begin{codebox} \Procname{$\proc{Permutation-Toric-Symplectic-MCMC}(\vec{p},\vec{\theta},\beta,\delta)$} \zi $prob = \proc{Uniform-Random-Variate}(0,1)$ \zi \If $prob < \delta$ \zi \Then $(\vec{p},\vec{\theta}\,) = \alpha^{-1}(\proc{Polygon-Permutation}(\alpha(\vec{p},\vec{\theta}\,)))$ \zi \Else $(\vec{p},\vec{\theta}\,) = \proc{Toric-Symplectic-MCMC}(\vec{p},\vec{\theta},\beta)$ \End \zi \Return $(\vec{p},\vec{\theta}\,)$ \End \end{codebox} Although \proc{Polygon-Permutation} is \emph{not} ergodic, the fact that it is stationary with respect to the symplectic measure is, by combining Proposition~\ref{prop:tsmcmc-convergence} and~\cite[Proposition~3]{Tierney:1994uk}, enough to imply that \proc{Permutation-Toric-Symplectic-MCMC}$(\beta, \delta)$ is (strongly) uniformly ergodic. \begin{proposition}\label{prop:ptsmcmc ergodicity} Let $\widehat{\mathcal{P}}$ be the transition kernel for PTSMCMC$(\beta,\delta)$ with $0 < \beta < 1$ and $\delta < 1$ and let $\nu$ be the symplectic measure on $\widehat{\Pol}_3(n;\vec{1})$. Then there exist constants $R < \infty$ and $\rho < 1$ so that for any ${(\vec{p}, \vec{\theta}\,) \in \operatorname{int}(P) \times T^{n-3}}$, \[ \tvnorm{\alpha_\star \widehat{\mathcal{P}}^m(\vec{p},\vec{\theta},\cdot) - \nu} < R \rho^m. \] \end{proposition} Just as in Proposition~\ref{prop:tsmcmc-CLT}, since PTSMCMC$(\beta,\delta)$ is uniformly ergodic and reversible with respect to symplectic measure, it satisfies a central limit theorem: \begin{proposition}\label{prop:ptmcmc CLT} Suppose $f\colon\! 
\widehat{\Pol}_3(n;\vec{1}) \to \mathbb{R}$ is square-integrable. For any run $R$ of PTSMCMC$(\beta,\delta)$, let $\operatorname{SMean}(f;R,m)$ be the sample mean of the value of $f$ over the first $m$ steps of $R$. Then there exists a real number $\sigma(f)$ so that \[ \sqrt{m}(\operatorname{SMean}(f;R,m) - E(f;\nu)) \stackrel{w}{\longrightarrow} \mathcal{N}(0,\sigma(f)^2). \] \end{proposition} The rest of the machinery of Section~\ref{subsec:geyer ips}, including the initial positive sequence estimator for $\sigma(f)^2$, also applies. As a consequence, we get a modified \textbf{Toric Symplectic Markov Chain Monte Carlo Integration} procedure adapted to unconfined, equilateral polygons. Note that the full symmetric group $S_n$ does not act on $\widehat{\Pol}_3(n;\vec{r})$ when not all $r_i$ are equal, so PTSMCMC$(\beta, \delta)$ cannot be used to sample non-equilateral polygons. However, when many edgelengths are equal, a subgroup of the symmetric group which permutes only those edges certainly acts on $\widehat{\Pol}_3(n;\vec{r})$. We recommend making use of this smaller set of permutations when possible. Permuting edges never preserves spherical confinement, so PTSMCMC$(\beta,\delta)$ is inapplicable to confined polygon sampling. Having defined PTSMCMC$(\beta,\delta)$ and settled on a canonical starting point (the folded, permuted regular $n$-gon) and triangulation (the spiral), it remains to decide on the best values of $\beta$ and $\delta$. The question is complicated by the fact that the three different types of steps -- permutations, folding steps, and moment-polytope hit-and-run steps -- take different amounts of CPU time. To attempt to evaluate the various possibilities fairly, we ran experiments computing the expected total curvature for 64-gons where each experiment ran for $10$ minutes of CPU time, completing between $2$ million and $15$ million steps depending on the mixture of step types. 
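The restricted permutations recommended above, which shuffle only within groups of equal edgelength, can be sketched in the same style (a Python sketch; the helper name and the rounding tolerance used to match lengths are ours):

```python
import numpy as np

def grouped_permutation(edges, rng, tol=12):
    """Permute edge vectors only within groups of equal length, so the
    edgelength vector r is fixed and the move stays on Pol_3(n; r).
    Lengths are matched after rounding to `tol` decimal places."""
    lengths = np.round(np.linalg.norm(edges, axis=1), tol)
    out = edges.copy()
    for L in np.unique(lengths):
        idx = np.flatnonzero(lengths == L)
        out[idx] = out[idx[rng.permutation(len(idx))]]
    return out
```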
We measured the $95\%$ confidence IPS error bars for each run, producing the data in Figure~\ref{fig:spiral grids}, and used the size of this error bar as a measure of convergence. \begin{figure}[ht] \hfill $\vcenter{\hbox{\begin{overpic}[height=1.6in]{spiral_grid_tlBoxedView.pdf} \put(90,12){$\beta$} \put(38,5){$\delta$} \put(0,55){IPS error} \end{overpic}}}$ \hfill $\vcenter{\hbox{\begin{overpic}[height=1.6in]{spiral_grid_tlBoxedViewNoDelta.pdf} \put(90,12){$\beta$} \put(38,5){$\delta$} \put(0,53){IPS error} \end{overpic}}}$ \hfill \hphantom{.} \caption{This plot shows the IPS error estimator for the average total curvature of unconfined equilateral 64-gons. The IPS error was computed for 10 minute runs of the PTSMCMC($\beta$,$\delta$) Markov chain algorithm. The values of $\delta$ (the fraction of permutations among all steps) ranged from $0$ to $0.95$ in steps of $0.05$, while the values of $\beta$ (the fraction of moment polytope steps among non-permutation steps) ranged from $0$ to $0.95$ in steps of $0.05$. When $\delta = 0$, this is just the TSMCMC($\beta$) chain; removing those runs yields the plot on the right. We observed that convergence was very sensitive to $\delta$, with error bars improving dramatically as soon as the fraction of permutation steps becomes positive: even the worst PTSMCMC($\beta$,$\delta$) run with $\delta > 0$ had error bars 3 times smaller than the error bars of the best TSMCMC($\beta$) run. From the view at right, we can see that the error bars continue to improve more modestly as $\delta$ increases. Varying $\beta$ has little effect on the error estimate when $\delta$ is large. \label{fig:spiral grids}} \end{figure} The data in Figure~\ref{fig:spiral grids} show that the fraction $\delta$ of permutation steps is the most important factor in determining the rate of convergence in the PTSMCMC($\beta$,$\delta$) algorithm. 
This shows that the extra complication in defining PTSMCMC($\beta$,$\delta$) for unconfined equilateral polygons is worth it: the error bars produced by PTSMCMC($\beta$,$\delta$) to compute the expected total curvature of unconfined equilateral $64$-gons are anywhere from 3 to 30 times smaller than the error bars for TSMCMC($\beta$). Larger values of $\beta$ produce smaller error bars when $\delta = 0$, meaning that a large fraction of moment polytope steps are needed to produce mixing when there are no permutation steps. On the other hand, as we can see in Figure~\ref{fig:delta_05_normalized_error}, even when $\delta = 0.05$ the permutation steps provide enough mixing that $\beta$ has virtually no effect on the IPS standard deviation estimator. In this case, the effect of $\beta$ on the size of the error bars is due to the fact that dihedral steps are faster than moment polytope steps, so runs with small $\beta$ produce more samples and hence smaller error bars. \begin{figure}[ht] \vspace{.1in} \hfill $\vcenter{\hbox{\begin{overpic}[height=1.4in]{delta05ErrorEstimate.pdf} \put(103,5){$\beta$} \put(-1,63){\text{IPS error}} \end{overpic}}}$ \hfill $\vcenter{\hbox{\begin{overpic}[height=1.4in]{delta05StandardDeviation.pdf} \put(103,5){$\beta$} \put(-15,63){\text{IPS standard deviation}} \end{overpic}}}$ \hfill \hphantom{.} \caption{These plots show the IPS error estimator and the IPS standard deviation estimator for the average total curvature of unconfined equilateral 64-gons using 10-minute runs of PTSMCMC($\beta$, $0.05$). Although the IPS error estimate decreases as $\beta$ decreases, the plot on the right demonstrates that the IPS standard deviation estimator is essentially constant -- and presumably close to the true standard deviation of the total curvature function -- across the different values of $\beta$. 
Since the IPS error estimate is proportional to the standard deviation estimate divided by the square root of the number of samples, we can see that the variation in IPS error bars for these runs is almost entirely due to the difference in the number of samples.} \label{fig:delta_05_normalized_error} \end{figure} Once $\delta$ is large, varying $\beta$ seems to have little effect on the convergence rate. In fact, though our theory above no longer proves convergence, we seem to get a very competitive algorithm by removing moment polytope steps altogether ($\beta = 0$) and performing only permutations and dihedral steps. This algorithm corresponds to the ``fold or crankshaft with permutations mixed in'' method. In practice, we make a preliminary recommendation of $\delta=0.9$ and $\beta=0.5$ for experimental work. These parameters guarantee convergence (by our work above) while optimizing the convergence rate. Using these recommended parameters, a 10 minute run of PTSMCMC($0.5$,$0.9$) for unconfined, equilateral 64-gons produced just under $7$ million samples and an expected total curvature of $101.7276 \pm 0.0044$, which compares quite favorably to the actual expected total curvature of $101.7278$. Taking out the $\beta=0$ runs, we observed that the absolute error in our computations of expected total curvature was less than our error estimate in $361$ of $380$ runs ($95\%$), which is exactly what we would expect from a $95\%$ confidence value estimator. We take this as solid evidence that the Markov chain is converging and the error estimators are working as expected. \subsection{Calculations on confined polygons} \label{sub:calculations_on_confined_polygons} Recall from~Definition~\ref{def:confined fan triangulation polytope} that a polygon is in spherical confinement in a sphere of radius $R$ centered at vertex $v_1$ of the polygon if the vector $\vec{d}$ of fan diagonals of the polygon lies in the confined fan polytope $P_{n,R}(\vec{r})$. 
This means that we can sample such polygons uniformly by restricting the hit-and-run steps in TSMCMC($\beta$) to the confined fan polytope $P_{n,R}(\vec{r})$. We again explored only the situation for equilateral polygons of edge length one. After some experimentation, we settled on the ``folded triangle'' as a start point. This polygon is constructed by setting each diagonal length $d_i$ to one and choosing dihedrals randomly; it lies in spherical confinement for every $R \geq 1$, so we could use it for all of our experiments. We investigated 23-gons confined to spheres of radius $2$, $4$, $6$, $8$, $10$, and $12$, measuring the Geyer IPS error estimate for values of $\beta$ selected from $0.05$ (almost all dihedral steps) to $0.95$ (almost all moment polytope steps) over 10 minute runs. Again, since dihedral steps are faster to run than moment polytope steps, the step counts varied over the course of the experiments. For instance, in the radius 2 experiments, we observed step counts as high as 35 million and as low as 7 million over runs with various $\beta$ values. Our integrand was again total curvature. Since we do not have an exact solution for the expected total curvature of a confined $n$-gon, we were unable to check whether the error bars predicted actual errors. However, it was comforting to note that the answers we got from runs with various parameters were very consistent. We ran each experiment 50 times to get a sense of the repeatability of the Geyer IPS error bar; the results are shown in Figure~\ref{fig:confined grids}. 
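In action-angle coordinates the folded-triangle start point has a one-line description (a Python sketch; the helper name is ours, and we use the fact that an equilateral $n$-gon has $n-3$ fan diagonals and $n-3$ dihedral angles):

```python
import numpy as np

def folded_triangle_start(n, rng):
    """Action-angle coordinates of the 'folded triangle' start point for
    confined equilateral n-gons: every fan diagonal has length 1 and the
    dihedral angles are chosen uniformly at random.  Every vertex is then
    within distance 1 of the root vertex, so the polygon lies in rooted
    spherical confinement for any radius R >= 1."""
    diagonals = np.ones(n - 3)
    dihedrals = rng.uniform(0.0, 2.0 * np.pi, n - 3)
    return diagonals, dihedrals
```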
\begin{figure}[ht] \begin{center} \begin{overpic}[width=2.9in]{confined_grid_2.pdf} \put(5,-5){Equilateral 23-gons confined in a sphere of radius 2} \end{overpic} \hfill \begin{overpic}[width=2.9in]{confined_grid_10.pdf} \put(5,-5){Equilateral 23-gons confined in a sphere of radius 10} \end{overpic} \end{center} \caption{These box-and-whisker plots show the results of computing the expected total curvature for confined equilateral $23$-gons with edgelength $1$. The confinement model here is ``rooted spherical confinement'', meaning that each vertex is within a specified distance of the first vertex. For each $\beta$ value, we repeated $10$ minute experiments $50$ times, computing $50$ values for the Geyer IPS estimator. The boxes show the second and third quartiles of these $50$ data points, while the whiskers show the $0.05$ to $0.95$ quantiles of the IPS estimators observed. \label{fig:confined grids}} \end{figure} We first observed that there is a clear trend in the error bar data. For the tightly confined runs, there was a noticeable preference for $\beta \sim 0.5$, while in less tight confinement the results generally continued to improve modestly as $\beta$ increased. Still, we think the data support a general recommendation of $\beta = 0.5$ for future confined experiments, with a possible decrease to $\beta = 0.4$ in very tight confinement. A very striking observation from Figure~\ref{fig:confined grids} is that the error bars for the tightly confined $23$-gons in a sphere of radius $2$ are about 10 times smaller than the error bars for the very loosely confined $23$-gons in a sphere of radius $10$. That is, our algorithm works better when the polygon is in tighter confinement! In some sense, this is to be expected, since the space being sampled is smaller. However, it flies in the face of the natural intuition that confined sampling should be numerically more difficult than unconfined sampling. 
Using TSMCMC($0.5$), we computed the expected total curvature of tightly confined equilateral 50- and 90-gons. Those expectations are shown in Table~\ref{tab:confined_total_curvature}. We can compare these data directly by looking at expected turning angles as in Figure~\ref{fig:confined_turning_angle}. In this very tight confinement regime, the effect of confinement radius on expected turning angle dominates the effect of the number of edges. \begin{table}[h] \begin{center} Expected Total Curvature of Tightly-Confined Equilateral 50- and 90-gons\\ \begin{tabular*}{.7\textwidth}{@{\extracolsep{\fill} }lll} \hline\hline Confinement Radius & {50-gons} & {90-gons} \\ \hline $1.1$ & $103.1120 \pm 0.0093$ & $185.701 \pm 0.028$ \\ $1.2$ & $100.1900 \pm 0.0089$ & $180.261 \pm 0.028$ \\ $1.3$ & $\hphantom{1}97.8369 \pm 0.0088$ & $175.947 \pm 0.028$ \\ $1.4$ & $\hphantom{1}95.8891 \pm 0.0090$ & $172.346 \pm 0.027$ \\ $1.5$ & $\hphantom{1}94.1979 \pm 0.0091$ & $169.271 \pm 0.028$ \\ $1.6$ & $\hphantom{1}92.7501 \pm 0.0094$ & $166.660 \pm 0.029$ \\ $\infty$ & $\hphantom{1}79.74197470$ & $142.5630093$\\ \hline\hline \end{tabular*} \end{center} \caption{This table shows the expected total curvature of equilateral 50- and 90-gons in rooted spherical confinement. We sampled equilateral 50- and 90-gons in confinement radii from $1.1$ to $1.6$ using 20-minute runs of TSMCMC($0.5$) and computed the average total curvature and IPS error bars for each run. Each 50-gon run yielded about 14.5 million samples, while each 90-gon run yielded about 8 million samples. The bottom line shows the exact expectation of total curvature for unconfined polygons given by Theorem~\ref{thm:expected total curvature}. 
More extensive information on expectations of confined total curvatures has been computed by Diao, Ernst, Montemayor, and Ziegler~\cite{clauspc}.} \label{tab:confined_total_curvature} \end{table} \begin{figure}[ht] \centering \begin{overpic}[width=2.9in]{confined_turning_angles.pdf} \end{overpic} \begin{overpic}[width=2.9in]{confined_turning_angle_difference.pdf} \end{overpic} \caption{The plot on the left shows the expected turning angles of equilateral 50-gons (blue) and equilateral 90-gons (red) in rooted spherical confinement of radii from $1.1$ to $1.6$. The horizontal lines show the expected turning angles for unconfined 50- and 90-gons computed using Theorem~\ref{thm:expected total curvature}, which are $\simeq 1.59484$ and $\simeq 1.58403$, respectively. The plot on the right shows the differences between the expected turning angles of equilateral 50-gons and the expected turning angles of equilateral 90-gons. The blue dots show this difference for various confinement radii, while the red line shows the corresponding difference for unconfined polygons. Without confinement, we expect polygons with more edges to have smaller expected turning angle, since each individual edge feels less pressure to get back to the starting point. These data provide evidence that this effect dissipates and even reverses in extremely tight confinement.} \label{fig:confined_turning_angle} \end{figure} \section{Comparison with existing work, conclusion and future directions} Now that we have laid out the symplectic theory of random walks and a few of its consequences, it is time to look back and see how we can reconcile this point of view with the existing understanding of closed random walks. In the methods of Moore and Grosberg~\cite{Moore:2005fh} and Diao et al.~\cite{Diao:2011ie}, closed random walks are generated incrementally, using distributions derived from the pdf $\Phi_n(\vec{\ell}\,)$ given in~\eqref{eq:ftc pdf v2} for the end-to-end distance of a random walk of $n$ steps. 
To review, the key idea is that if we have taken $m-1$ steps of an $n$-step closed walk and arrived at the $m$-th vertex~$\vec{v}_m$, the pdf of the next vertex $\vec{v}_{m+1}$ (conditioned on the steps we have already taken) is given by \begin{equation*} P(\vec{v}_{m+1}|\vec{v}_1, \dots, \vec{v}_m) = \frac{\Phi_{1}(\vec{v}_{m+1} - \vec{v}_m) \Phi_{n-m-1}(\vec{v}_{m+1} - \vec{v}_1)}{\Phi_{n-m}(\vec{v}_m - \vec{v}_1)}, \end{equation*} which is some complicated product of piecewise-polynomial $\Phi_k(\vec{\ell}\,)$ functions. We can sample $\vec{v}_{m+1}$ from this distribution and hence generate the rest of the walk iteratively. From the moment polytope point of view, the situation is considerably simpler. First, we observe that everything in the equation above can be expressed in terms of diagonal lengths in the fan triangulation polytope, since the length of the vector $\vec{\ell}$ is the only thing that matters in the formula for $\Phi_k(\vec{\ell})$. If we let $\vec{v}_1 = \vec{0}$ by convention, then conditioning on $\vec{v}_1, \dots, \vec{v}_m$ is simply restricting our attention to the slice of the moment polytope given by setting the diagonal lengths $d_1 = |\vec{v}_3|, d_2 = |\vec{v}_4|, \dots, d_{m-2} = |\vec{v}_m|$. The pdf $P(\vec{v}_{m+1}|\vec{v}_1, \dots, \vec{v}_m)$ is then the projection of the measure on this slice of the moment polytope to the coordinate $d_{m-1}$. This distribution is piecewise-polynomial precisely because it is the projection of Lebesgue measure on a convex polytope with a finite number of faces. Of course, projecting volume measures of successive slices to successive coordinates is a perfectly legitimate way to sample a convex polytope, which is another explanation for why these methods work; they are basically sampling successive marginals of the coordinate distributions on a succession of smaller convex polytopes. 
By contrast, our method generates the entire vector of diagonal lengths $d_1, \dots, d_{n-3}$ simultaneously according to their joint distribution by sampling the moment polytope directly. More importantly, it offers geometric insight into what this joint distribution \emph{is}, which seems very hard to develop by analyzing~\eqref{eq:ftc pdf v2}. In conclusion, the moment polytope picture offers a clarifying and useful perspective on closed and confined random walks. It is clear that we have only scratched the surface of this topic in this paper, and that many fascinating questions remain to be explored both theoretically and computationally. In the interest of continuing the conversation, we provide an unordered list of open questions suggested by the work above. \begin{itemize} \item Previous studies of the relative efficiency of polygon sampling algorithms have focused on minimizing pairwise correlations between edges as a measure of performance. Proposition~\ref{prop:polygon sampling} suggests a more subtle statistical approach to evaluating sample quality: measure the uniformity of the distribution of diagonal lengths over the moment polytope and of dihedral angles over the torus (cf. \cite{Mardia:2000um}). \item It remains open to extend these methods to prove that a chain consisting only of permutation and dihedral steps is still strongly geometrically convergent on unconfined equilateral polygon space. This would lead directly to a proof of convergence for the crankshaft and fold algorithms, and hence place many years of sampling experiments using these methods on a solid theoretical foundation. \item Can we use the moment polytope pictures above for confined polygons to prove theorems about polygons in confinement? For instance, it would be very interesting to show that the expectation of total curvature is monotonic in the radius of confinement. \item What is the corresponding picture for random planar polygons? 
Of course, we can see the planar polygons as a special slice of the action-angle coordinates where the angles are all zero or $\pi$. But is it true that sampling this slice according to Hausdorff measure in action-angle space corresponds to sampling planar polygons according to their Hausdorff measure inside space polygons\footnote{These questions are less obvious than they may appear at first glance: the cylindrical coordinates $\theta$ and $z$ are action-angle coordinates on the sphere, but it is not the case that the arclength measure on a curve in the $\theta$-$z$ cylinder pushes forward to the arclength measure on the image of the curve on the sphere, even though the area measure on the $\theta$-$z$ cylinder does push forward to the standard area measure on the sphere.}? If not, can we correct the measure somehow? Or is there another picture for planar polygons entirely? \item Can we understand the triangulation polytopes better? Can we compute their centers of mass explicitly, for example? It is well known that finding the center of mass of a high-dimensional polytope is algorithmically difficult, so we cannot hope for a purely mechanical solution to the problem. But a deeper understanding of these polytopes seems likely to result in interesting probability theorems. \item Why are permutation steps so effective in the PTSMCMC Markov chain? It seems easy to compute that the number of points in the permutation group orbit of an $n$-edge polygon grows much faster than the volume of equilateral polygon space computed by~\cite{Takakura:2001ir,Khoi:2005ch,Mandini:2008wq} and given above as Corollary~\ref{cor:equilateral volume}. Can we prove that the points in this orbit are usually well-distributed over polygon space? This would give an appealing proof of the effectiveness of Grosberg's triangle method for polygon sampling~\cite{Moore:2004ds,Moore:2004tw,Lua:2004wa}. 
\item There is a large theory of ``low-discrepancy'' or ``quasi-random'' sequences on the torus which can provide better results in numerical integration than uniform random sampling. Would it be helpful to choose our dihedrals from such a sequence in the integration method above? \item Now that we can sample confined polygons quickly, with solid error bars on our calculations, what frontiers does this open in the numerical study of confined polymers? We take our cues from the pioneering work of Diao, Ernst, Montemayor, and Ziegler \cite{Diao:2011ie,Diao:wt,Diao:2012dza,diaopreprint}, but are eager to explore this new experimental domain. For instance, sampling tightly confined $n$-gons might be a useful form of ``enriched sampling'' in the hunt for complicated knots of low equilateral stick number, since very entangled polygons are likely to be geometrically compact as well. \end{itemize} We introduced a related probabilistic theory of non-fixed edge length closed polygons in a previous paper~\cite{CPA:CPA21480} by relating closed polygons with given total length to Grassmann manifolds. It remains to explain the connection between that picture and this one, and we will take up that question shortly. \section{Acknowledgements} We are grateful to many more friends and colleagues for important discussions related to this project than we can possibly remember to name here. But to give it our best shot, Michael Usher taught us a great deal of symplectic geometry, Malcolm Adams introduced us to the Duistermaat-Heckman theorem, Margaret Symington provided valuable insight on moment maps, and Alexander Y Grosberg and Tetsuo Deguchi have been constant sources of insight and questions on polygon spaces in statistical physics. Yuanan Diao, Claus Ernst, and Uta Ziegler introduced us to the Rayleigh $\operatorname{sinc}$ integral form for the pdf of arm length (and to a great deal more). 
We were inspired by their insightful work on confined sampling to look at confinement models above. Ken Millett and Eric Rawdon have graciously endured our various doubts about the convergence of the crankshaft and fold algorithms for many years, and were the source of many pivotal conversations. Chris Soteros provided much-appreciated expert guidance on Markov chain sampling. And we are especially indebted to Alessia Mandini, Chris Manon, Angela Gibney, and Danny Krashen for explaining to us some of the elements of the algebraic geometry of polygon spaces. Shonkwiler was supported by the Simons Foundation and the UGA VIGRE II grant (DMS-07-38586), and Cantarella was supported by the Simons Foundation. We were supported by the Georgia Topology Conference grant (DMS-11-05699), which helped us organize a conference on polygon spaces in the summer of 2013. We are grateful to the Isaac Newton Institute for the Mathematical Sciences, Cambridge, for support and hospitality during the program ``Topological Dynamics in the Physical and Biological Sciences'' in Fall 2012, when much of this work was completed.
The DEIM procedure operates on the singular vector matrices ${\bf V}$ and ${\bf W}$ independently to select the row indices ${\bf p}$ and column indices ${\bf q}$. We explain the process for selecting ${\bf p}$; applying the same steps to ${\bf W}$ yields~${\bf q}$. To derive the method, we elaborate upon the interpolatory projectors introduced in the last section. \begin{definition} Given a full rank matrix ${\bf V} \in \mbox {\Bb R}^{m\times k}$ and a set of distinct indices ${\bf p}\in\mbox {\Bb N}^k$, the \emph{interpolatory projector} for ${\bf p}$ onto ${\rm Ran}({\bf V})$ is \begin{equation} \label{eq:intproj} \mbox{\boldmath$\cal P$} \equiv {\bf V}({\bf P}^T{\bf V})^{-1}{\bf P}^T, \end{equation} where ${\bf P} = {\bf I}(:,{\bf p})\in\mbox {\Bb R}^{m\times k}$, provided ${\bf P}^T{\bf V}$ is invertible. \end{definition} In general $\mbox{\boldmath$\cal P$}$ is an oblique projector, and it has an important property not generally enjoyed by orthogonal projectors: for any ${\bf x}\in\mbox {\Bb R}^m$, \[ (\mbox{\boldmath$\cal P$}{\bf x})({\bf p}) \,=\, {\bf P}^T\mbox{\boldmath$\cal P$}{\bf x} \,=\, {\bf P}^T{\bf V}({\bf P}^T{\bf V})^{-1}{\bf P}^T{\bf x} \,=\, {\bf P}^T{\bf x} \,=\, {\bf x}({\bf p}),\] so the projected vector $\mbox{\boldmath$\cal P$}{\bf x}$ matches ${\bf x}$ in the ${\bf p}$ entries, justifying the name ``interpolatory projector.'' The DEIM algorithm processes the columns of \[ {\bf V} = \left[\begin{array}{cccc}{\bf v}_1 & {\bf v}_2 & \cdots & {\bf v}_k\end{array}\right] \] one at a time, starting from the leading singular vector ${\bf v}_1$. Each step processes the next singular vector to produce the next index.
The first index $p_1$ corresponds to the largest magnitude entry in ${\bf v}_1$: \[ |{\bf v}_1(p_1)| = \|{\bf v}_1\|_\infty.\] Now define ${\bf p}_1 \equiv [p_1]$, and let \[ \mbox{\boldmath$\cal P$}_1 \equiv {\bf v}_1 ({\bf P}_1^T{\bf v}_1)^{-1}{\bf P}_1^T\] denote the interpolatory projector for ${\bf p}_1$ onto ${\rm Ran}({\bf v}_1)$. The second index $p_2$ corresponds to the largest magnitude entry in ${\bf v}_2$, after the interpolatory projection in the ${\bf v}_1$ direction has been removed: \begin{eqnarray*} {\bf r}_2\!\! &\equiv&\!\! {\bf v}_2 - \mbox{\boldmath$\cal P$}_1 {\bf v}_2\\[.25em] |{\bf r}_2(p_2)|\!\! &=&\!\! \|{\bf r}_2\|_\infty. \end{eqnarray*} Notice that ${\bf r}_2(p_1) = 0$, since $\mbox{\boldmath$\cal P$}_1{\bf v}_2$ matches ${\bf v}_2$ in the $p_1$ position, a consequence of interpolatory projection. This property ensures the process will never produce duplicate indices. Now suppose we have $j-1$ indices, with \[ {\bf p}_{j-1} \equiv \left[\begin{array}{c} p_1 \\ \vdots \\ p_{j-1}\end{array}\right], \quad {\bf P}_{j-1} \equiv {\bf I}(:,{\bf p}_{j-1}), \quad {\bf V}_{j-1} \equiv [\begin{array}{ccc} {\bf v}_1 & \cdots & {\bf v}_{j-1}\end{array}], \quad \mbox{\boldmath$\cal P$}_{j-1} \equiv {\bf V}_{j-1}^{}({\bf P}_{j-1}^T{\bf V}_{j-1}^{})^{-1}{\bf P}_{j-1}^T. \] To select $p_{j}$, remove from ${\bf v}_j$ its interpolatory projection onto indices ${\bf p}_{j-1}$ and take the largest magnitude remaining entry: \begin{eqnarray*} {\bf r}_{j} \!\! &\equiv&\!\! {\bf v}_j - \mbox{\boldmath$\cal P$}_{j-1} {\bf v}_j\\[.25em] |{\bf r}_{j}(p_{j})|\!\! &=&\!\! \|{\bf r}_{j}\|_\infty. \end{eqnarray*} Implementations should not explicitly construct these projectors; see the pseudocode in Algorithm~\ref{fig:Deim_Arnoldi} for details. Those familiar with partially pivoted LU decomposition will notice, on a moment's reflection, that this index selection scheme is exactly equivalent to the index selection of partial pivoting.
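For readers who wish to experiment, the selection loop just described can be sketched in NumPy. The `deim_indices` helper below is ours, not the authors' code; it follows the greedy residual recipe above, solving a small $j\times j$ system at each step rather than forming the projectors explicitly.

```python
import numpy as np

def deim_indices(V):
    """Greedy DEIM index selection (a sketch of the loop described above).

    V : (m, k) array whose columns are (approximate) singular vectors.
    Returns an array of k distinct row indices p.
    """
    m, k = V.shape
    # p_1: largest-magnitude entry of the leading singular vector v_1.
    p = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, k):
        v = V[:, j]
        # Interpolatory projection coefficients: solve V(p, 1:j-1) c = v(p).
        c = np.linalg.solve(V[p, :j], v[p])
        # Residual vanishes at previously chosen indices, so no duplicates.
        r = v - V[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)
```

Because the residual ${\bf r}_j$ is zero in the already-selected positions, the `argmax` in each pass lands on a fresh index, mirroring the no-duplicates property noted above.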
This arrangement is equivalent to the ``left looking'' variant of LU factorization~\cite[sect.~5.4]{DDSV98}, but with two important differences. First, there are no explicit row interchanges in DEIM, as there are in LU factorization. Second, the original basis vectors (columns of ${\bf V}$) are not replaced with the residual vectors, as happens in traditional LU decomposition. (In the context of model reduction, it is preferable to keep the nice orthogonal basis intact for use as a reduced basis.) We will exploit this connection with partially pivoted LU factorization to analyze the approximation properties of DEIM. Since the DEIM algorithm processes the singular vectors sequentially, from most to least significant, it introduces new singular vector information in a coherent manner as it successively selects the $k$~indices. Contrast this to index selection strategies based on leverage scores, where all singular vectors are incorporated at once via row norms of ${\bf V}$ and ${\bf W}$; to account for the fact that higher singular vectors are less significant, such approaches often instead compute leverage scores using only a few of the leading singular vectors.% \footnote{A potential limitation of the DEIM approach is that ${\bf r}_j$ could have multiple entries that have nearly the same magnitude, but only one index is selected at the $j$th step; if the other large-magnitude entries in ${\bf r}_j$ are not significant in subsequent ${\bf r}_\ell$ vectors, the corresponding indices will not be selected. One can imagine modifications of the selection algorithm to account for such situations, e.g., by processing multiple singular vectors at a time.} \begin{algorithm}[t!]
\begin{center} \fbox{ \begin{tabular}{p{10cm}} \ \\[-25pt] \begin{tabbing} defghasijkl\=bbb\=ccc\= \kill $\ \ \ $ {\bf Input:} \> $ {\bf V}$, an $m \times k$ matrix ($m\ge k$)\\[5pt] $\ $ {\bf Output:} \> ${\bf p}$, an integer vector with $k$ distinct entries in $ \{1,\ldots,m \}$ \\[7pt] $\ \ \ $ \> $ {\bf v} = {\bf V}(:,1)$ \\ $\ \ \ $ \> $[\sim , p_1] = {\rm max}( | {\bf v} |)$\\ $\ \ \ $ \>$ {\bf p} = [p_1] $ \\ $\ \ \ $ \>{\bf for} $j=2,3,\ldots, k$ \\ $\ \ \ $ \>\> $ {\bf v} = {\bf V}(:,j)$ \\ $\ \ \ $ \>\> $ {\bf c} = {\bf V}({\bf p},1:j-1)^{-1} {\bf v}({\bf p})$ \\ $\ \ \ $ \>\> $ {\bf r} = {\bf v} - {\bf V}(:,1:j-1) {\bf c}$ \\ $\ \ \ $ \>\>$[\sim , p_j] = {\rm max}(| {\bf r} |)$\\ $\ \ \ $ \>\>$ {\bf p} = [{\bf p};\ p_j] $ \\ $\ \ \ $ \>{\bf end}\\ \end{tabbing}\\[-20pt]\end{tabular}} \end{center} \vspace*{-7pt} \caption {DEIM point selection algorithm.} \label{fig:Deim_Arnoldi} \end{algorithm} For the interpolatory projector $\mbox{\boldmath$\cal P$}_j$ to exist at the $j$th step, ${\bf P}_{j-1}^T{\bf V}_{j-1}$ must be nonsingular. The linear independence of the columns of ${\bf V}$ assures this. In the following, ${\bf e}_j$ denotes the $j$th column of the identity matrix. \begin{lemma} \label{DEIM_inv} Let ${\bf P}_j = [ {\bf e}_{p_1}, {\bf e}_{p_2}, \ldots, {\bf e}_{p_j}]$ and let ${\bf V}_j = [ {\bf v}_1, {\bf v}_2, \ldots, {\bf v}_j]$ for $1 \le j \le k$. If $ { {{\rm rank }\,} } ({\bf V}) = k$, then ${\bf P}_j^T {\bf V}_j$ is nonsingular for $1 \le j \le k$. \end{lemma} \noindent{\it Proof:} Suppose ${\bf P}_{j-1}^T {\bf V}_{j-1} $ is nonsingular and let ${\bf r}_j = {\bf v}_j - {\bf V}_{j-1}({\bf P}_{j-1}^T {\bf V}_{j-1})^{-1} {\bf P}_{j-1}^T {\bf v}_j.$ Then $\|{\bf r}_j\|_{\infty} > 0$, for otherwise $ {\bf 0} = {\bf v}_j - {\bf V}_{j-1} {\bf c}_{j-1}$, in violation of the assumption that $ { {{\rm rank }\,} } ({\bf V}) = k$. 
Thus \begin{equation} \label{eta_formula} 0 < |{\bf e}_{p_j}^T {\bf r}_j| =| {\bf e}_{p_j}^T {\bf v}_j - {\bf e}_{p_j}^T {\bf V}_{j-1}({\bf P}_{j-1}^T {\bf V}_{j-1})^{-1} {\bf P}_{j-1}^T {\bf v}_j |, \end{equation} where $p_j$ is the $j$th DEIM interpolation point. Now factor \begin{equation} \label{PV_formula} {\bf P}_j^T {\bf V}_j = \left[ \begin{array}{cc} {\bf P}_{j-1}^T {\bf V}_{j-1} & {\bf P}_{j-1}^T{\bf v}_j \\ {\bf e}_{p_j}^T {\bf V}_{j-1} & {\bf e}_{p_j}^T {\bf v}_j \end{array} \right]= \left[ \begin{array}{cc} {\bf I}_{j-1} & {\bf 0} \\ {\bf e}_{p_j}^T {\bf V}_{j-1}({\bf P}_{j-1}^T {\bf V}_{j-1})^{-1} & 1 \end{array} \right] \left[ \begin{array}{cc} {\bf P}_{j-1}^T {\bf V}_{j-1} & {\bf P}_{j-1}^T{\bf v}_j \\ {\bf 0} & \nu_j \end{array} \right], \end{equation} where \[ \nu_j = {\bf e}_{p_j}^T {\bf v}_j - {\bf e}_{p_j}^T {\bf V}_{j-1}({\bf P}_{j-1}^T {\bf V}_{j-1})^{-1} {\bf P}_{j-1}^T {\bf v}_j . \] The inequality~(\ref{eta_formula}) implies $\nu_j \ne 0$ and hence equation~(\ref{PV_formula}) implies ${\bf P}_j^T {\bf V}_j$ is nonsingular. Since ${\bf e}_{p_1}^T{\bf v}_1 \ne 0$, this argument provides an inductive proof that ${\bf P}_j^T {\bf V}_j $ is nonsingular for $1 \le j \le k$. { { \hfill\rule{3mm}{3mm} } } \section{Computational Examples} \label{sec:examples} This section presents some computational evidence illustrating the excellent approximation properties of the DEIM-CUR factorization, consistent with the error analysis in Section~\ref{sec:theory}. For each of our three examples, we compare the accuracy of the DEIM-CUR factorization with several schemes based on leverage scores. To remove random variations from our experiments, in most cases we select columns and rows having the highest leverage scores; for the first example, we include results for random leverage score sampling. 
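To make the error comparison concrete before turning to the examples, here is a minimal NumPy sketch (helper names are ours, not the authors' code) that builds ${\bf C}$, ${\bf U}$, ${\bf R}$ from given index sets, with the optimal central factor ${\bf U}={\bf C}^I{\bf A}{\bf R}^I$ realized via pseudoinverses, and measures $\|{\bf A}-{\bf C}{\bf U}{\bf R}\|_2$. The demo selects indices by top leverage scores, one of the baseline strategies used in these experiments.

```python
import numpy as np

def cur_error(A, p, q):
    """Build C = A(:,q), R = A(p,:), U = C^+ A R^+ and return ||A - CUR||_2."""
    C = A[:, q]
    R = A[p, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return np.linalg.norm(A - C @ U @ R, 2)

# Demo: a small random rank-8 matrix; pick k rows/columns by the
# largest leverage scores (row norms of the leading k singular vectors).
rng = np.random.default_rng(1)
k = 5
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 40))
V, S, Wt = np.linalg.svd(A, full_matrices=False)
p = np.argsort(-np.sum(V[:, :k]**2, axis=1))[:k]   # row leverage scores
q = np.argsort(-np.sum(Wt[:k, :]**2, axis=0))[:k]  # column leverage scores
err = cur_error(A, p, q)
# Since CUR has rank at most k, err can never fall below sigma_{k+1}.
assert err >= S[k] - 1e-8
```

The final assertion is the Eckart--Young lower bound: any rank-$k$ CUR approximation has 2-norm error at least $\sigma_{k+1}$, which is why the plots below report errors relative to that optimal value.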
For Example~1 we also study the effect of inaccurate singular vectors on the DEIM selection, and compare the accuracy of DEIM-CUR to CUR approximations based on the column-pivoted QR algorithm. \newpage \noindent \textbf{Example 1}. \emph{Low-rank approximation of a sparse, nonnegative matrix} \medskip \noindent The first example builds a matrix ${\bf A}\in\mbox {\Bb R}^{300,000\times 300}$ of the form \begin{equation} \label{eq:sparsex2} {\bf A} = \sum_{j=1}^{10} {2\over j}\,{\bf x}_j^{}{\bf y}_j^T + \sum_{j=11}^{300} {1\over j}\,{\bf x}_j^{}{\bf y}_j^T, \end{equation} where ${\bf x}_j \in \mbox {\Bb R}^{300,000}$ and ${\bf y}_j \in \mbox {\Bb R}^{300}$ are sparse vectors with random nonnegative entries (in MATLAB, ${\bf x}_j = {\tt sprand(300000,1,0.025)}$ and ${\bf y}_j = {\tt sprand(300,1,0.025)}$). In this instantiation, ${\bf A}$ has 15,971,584 nonzeros, i.e., about 18\% of all entries are nonzero. The form~(\ref{eq:sparsex2}) is not a singular value decomposition, since $\{{\bf x}_j\}$ and $\{{\bf y}_j\}$ are not orthonormal sets; however, this decomposition suggests the structure of the SVD: the singular values decay like $1/j$, with the first ten weighted more heavily to give a notable drop between $\sigma_{10}$ and $\sigma_{11}$. We begin these experiments by computing ${\bf V}$ and ${\bf W}$ using MATLAB's economy-sized SVD routine (\verb|[V,S,W] = svd(A,0)|). \begin{figure}[b!] \begin{center} \includegraphics[scale=0.65]{dirty_sparse4_err_q0} \end{center} \vspace*{-20pt} \caption{\label{fig:sparse_ex4_cur} Accuracy of CUR approximations for the sparse, nonnegative matrix~(\ref{eq:sparsex2}) using $k$ columns and rows, constructed by DEIM-CUR and two leverage score strategies: ``LS (all)'' selects rows and columns with highest leverage scores computed using all 300~singular vectors; ``LS (10)'' only uses the leading ten singular vectors.
The ``DEIM($\widehat{{\bf V}},\widehat{{\bf W}}$)'' curve (nearly atop the ``DEIM'' curve) uses approximate singular vectors, described later.} \end{figure} Figure~\ref{fig:sparse_ex4_cur} compares the error $\|{\bf A}-{\bf C}{\bf U}{\bf R}\|$ for DEIM-CUR and methods that take ${\bf C}$ and ${\bf R}$ as the columns and rows of ${\bf A}$ with the highest leverage scores. These scores are computed using either all right and left singular vectors (300 of each), or using only the leading ten right and left singular vectors. Both approaches perform rather worse than DEIM-CUR, which closely tracks the optimal value $\sigma_{k+1}$. To gain insight into these results, we examine the interpolation constants $\eta_p$ and $\eta_q$ for all three approaches. Figure~\ref{fig:sparse_ex4_eta} shows that these constants are largest for leverage scores based on all the singular vectors; using only ten singular vectors improves both the interpolation constants and the accuracy of the approximation (as seen in Figure~\ref{fig:sparse_ex4_cur}). The DEIM-CUR method gives better interpolation constants and more accurate approximations. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.45]{sparse_ex4_eta_ls13}\quad \includegraphics[scale=0.45]{sparse_ex4_eta_d1} \end{center} \vspace*{-20pt} \caption{\label{fig:sparse_ex4_eta} Error constants $\eta_p = \|({\bf P}_k^T{\bf V}_k)^{-1}\|$ and $\eta_q = \|({\bf W}_k^T{\bf Q}_k)^{-1}\|$ for rows and columns selected using two leverage score strategies (left plot) and the DEIM algorithm (right plot), for the matrix~(\ref{eq:sparsex2}). } \end{figure} A CUR factorization can also be obtained by randomly sampling columns and rows of ${\bf A}$, with the probability of selection weighted by leverage scores~\cite{MD09}.
We apply this approach on the current example, selecting $k=30$ rows and columns of ${\bf A}$ with a probability given by the leverage scores computed from the leading ten singular vectors (normalized to give a probability distribution). Figure~\ref{fig:sparse_ex5_cur} gives the results of ten independent experiments, showing that while sampling can sometimes yield better results than the deterministic leverage score approach, overall the approximations are still inferior to those from DEIM-CUR. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.65]{sparse_ex5_cur} \end{center} \vspace*{-20pt} \caption{\label{fig:sparse_ex5_cur} Accuracy of CUR approximations for~(\ref{eq:sparsex2}) generated by randomly sampling rows and columns with probability weighted by leverage scores computed from the leading ten singular vectors. All ten~trials (gray lines) perform similarly to the deterministic ``LS (10)'' approach, and worse than the DEIM-CUR approximation.} \end{figure} How robust is the DEIM-CUR approximation to errors in the singular vectors? To investigate, we compute $\widehat{{\bf V}}\approx {\bf V}$ and $\widehat{{\bf W}}\approx {\bf W}$ using the Incremental QR algorithm detailed in Section~\ref{sec:qr} (with ${\it tol}=10^{-4}$) and the Randomized SVD algorithm described by Halko, Martinsson, and Tropp~\cite[p.~227]{HMT11}. To give extreme examples of the latter, we compute $\widehat{{\bf V}}$ and $\widehat{{\bf W}}$ through \emph{only one or two} applications each of ${\bf A}$ and ${\bf A}^T$.% \footnote{ This corresponds to $q=0$ and $q=1$ in the notation of~\cite[p.~227]{HMT11}. Let the columns of ${\bf Q}\in \mbox {\Bb R}^{m\times 2 k_{\rm max}}$ form an orthonormal basis for $({\bf A}\bA^T)^q{\bf A}\mbox{\boldmath$\Omega$}$, where $\mbox{\boldmath$\Omega$}\in \mbox {\Bb R}^{n\times 2 k_{\rm max}}$ is a random matrix with i.i.d.\ Gaussian entries and we take $k_{\rm max} = 30$. 
Then the leading $k_{\rm max}$~columns of ${\bf V}$ and ${\bf W}$ are approximated by taking the SVD of ${\bf Q}^*{\bf A} \in \mbox {\Bb R}^{2k_{\rm max} \times n}$. } As Figure~\ref{fig:dirty_sparse4_angle} illustrates, in both cases the angle between the exact and approximate leading singular subspaces is significant, particularly as $k$ grows. This drift in the subspaces has little effect on the accuracy of the DEIM approximations. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.45]{dirty_sparse4_angle_op}\quad \includegraphics[scale=0.45]{dirty_angles} \begin{picture}(0,0) \put(-202,40){\small \emph{SVD via Incremental QR, ${\it tol}=10^{-4}$}} \put(38,145){\rotatebox{10}{\small \emph{one application of ${\bf A}$ and ${\bf A}^{\kern-1pt T}$}}} \put(47,60){\rotatebox{33}{\small \emph{two applications of ${\bf A}$ and ${\bf A}^{\kern-1pt T}$}}} \put(45,40){\small \emph{Randomized SVD}} \end{picture} \end{center} \vspace*{-32pt} \caption{\label{fig:dirty_sparse4_angle} The angle between the leading $k$-dimensional exact singular subspaces ${\rm Ran}({\bf V}_k)$ and ${\rm Ran}({\bf W}_k)$ (generated by MATLAB's {\tt svd} command) and approximate singular subspaces ${\rm Ran}(\widehat{{\bf V}}_k)$ and ${\rm Ran}(\widehat{{\bf W}}_k)$ for the matrix~(\ref{eq:sparsex2}). On the left, $\widehat{{\bf V}}_k$ and $\widehat{{\bf W}}_k$ are generated using the Incremental QR algorithm described in Section~\ref{sec:qr}, with ${\it tol} = 10^{-4}$; on the right, $\widehat{{\bf V}}_k$ and $\widehat{{\bf W}}_k$ are generated using the randomized SVD algorithm~\cite{HMT11} with one and two applications of ${\bf A}$ and ${\bf A}^T$.} \end{figure} \begin{itemize} \item The DEIM approximation using the Incremental QR algorithm is quite robust, choosing at most 3~different row indices and 2~different column indices for $k=1,\ldots,30$, with a relative discrepancy in $\|{\bf A}-{\bf C}_k{\bf U}_k{\bf R}_k\|$ of at most 9.27\% (and this realized only at step $k=30$).
\item When ${\bf A}$ and ${\bf A}^T$ are applied once in the Randomized SVD algorithm, the DEIM indices differ considerably from those drawn from exact singular vectors (e.g., for $k=30$, 20 of 30 row indices and 3 of 30 column indices differ), yet the quality of the approximation $\|{\bf A}-{\bf C}_k{\bf U}_k{\bf R}_k\|$ remains almost the same (relative difference of at most 10.45\%); see the dashed line in Figure~\ref{fig:sparse_ex4_cur}. \item When ${\bf A}$ and ${\bf A}^T$ are applied twice, the DEIM indices are nearly identical (e.g., for $k=30$, 0 of 30 row indices and 2 of 30 column indices differ). On the scale of the plot in Figure~\ref{fig:sparse_ex4_cur}, $\|{\bf A}-{\bf C}_k{\bf U}_k{\bf R}_k\|$ could not be distinguished from the DEIM-CUR errors using exact singular vectors; the maximum relative discrepancy is 2.21\%. \end{itemize} \begin{figure}[b!] \begin{center} \includegraphics[scale=0.52]{test10_deimqrcomp} \begin{picture}(0,0) \put(-200,-13){\small ratio (DEIM-CUR error)/(QR-CUR error)} \put(-215,175){\small DEIM-CUR better} \put(-103,175){\small QR-CUR better} \put(0,82){\small \rotatebox{90}{rank, $k$}} \end{picture} \vspace*{2em} \includegraphics[scale=0.45]{test10_deim_p}\qquad \includegraphics[scale=0.45]{test10_qr_p} \begin{picture}(0,0) \put(-163,0){\small $\log_{10}(\eta_p)$ for DEIM-CUR} \put( 82,0){\small $\log_{10}(\eta_p)$ for QR-CUR} \put(-243,82){\small \rotatebox{90}{rank, $k$}} \put(0,82){\small \rotatebox{90}{rank, $k$}} \end{picture} \end{center} \vspace*{-10pt} \caption{\label{fig:test10} Comparison of DEIM-CUR and QR-CUR performance for 100~sparse random $300,000\times 300$ matrices of the form~(\ref{eq:sparsex2}). The top plot shows a histogram of the ratio of $\|{\bf A}-{\bf C}_k{\bf U}_k{\bf R}_k\|$ for DEIM-CUR and QR-CUR. 
The bottom plots compare the error constant $\eta_p = \|({\bf P}_k^T{\bf V}_k)^{-1}\|$ for DEIM-CUR (left) and QR-CUR (right); note the logarithmic scale of the horizontal axes in the lower plots.} \end{figure} Thus far we have only compared the DEIM-CUR approximations to CUR factorizations obtained from leverage scores, which also use singular vector information, thus illustrating how DEIM can use the same raw materials to better effect. Next we compare DEIM-CUR to approximations computed using a different approach based on QR factorization of ${\bf A}$; see, e.g., \cite{CGMR05,Ste99}. Begin by computing a column-pivoted QR factorization of ${\bf A}$; the first $k$ selected columns give the indices ${\bf q}$, from which we extract ${\bf C}_k = {\bf A}(:,{\bf q})$. Next, a column-pivoted QR factorization of ${\bf C}_k^T$ is performed; the first $k$ selected columns of ${\bf C}_k^T$ give the indices ${\bf p}$, from which we build ${\bf R}_k = {\bf A}({\bf p},:)$. We refer to this technique as ``QR-CUR.'' Figure~\ref{fig:test10} compares the results for 100~trials involving sparse random matrices of dimension $300,000\times 300$ having the form of our first experiment~(\ref{eq:sparsex2}). DEIM-CUR and QR-CUR produce factorizations with similar accuracy, which we illustrate with a histogram of the ratio of $\|{\bf A}-{\bf C}_k{\bf U}_k{\bf R}_k\|$ for DEIM-CUR to QR-CUR, for $k=1,\ldots, 100$. (DEIM-CUR produces a smaller error when the ratio is less than one.) While these errors are similar, the error constants $\eta_p$ and $\eta_q$ for the two methods are quite different. The bottom plots in Figure~\ref{fig:test10} compare histograms of $\log_{10} \eta_p$. For DEIM-CUR, the $\eta_p$ values are quite consistent across the 100~random ${\bf A}$, while for QR-CUR the $\eta_p$ values are both larger and rather less consistent. (The figures for $\eta_q$ are qualitatively identical, but about an order of magnitude smaller for both methods.)
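The two-stage QR-CUR selection described above can be sketched in pure NumPy. The greedy deflation loop below is a naive stand-in for a library column-pivoted QR (it reproduces the pivot order: at each step, pick the column of largest residual norm, then project it out); the helper names are ours, not the authors' code.

```python
import numpy as np

def pivoted_qr_indices(A, k):
    """First k pivot columns of a column-pivoted QR of A (greedy sketch)."""
    B = np.array(A, dtype=float, copy=True)
    idx = []
    for _ in range(k):
        norms = np.linalg.norm(B, axis=0)
        j = int(np.argmax(norms))          # column with largest residual norm
        idx.append(j)
        u = B[:, j] / norms[j]
        B = B - np.outer(u, u @ B)         # deflate remaining columns
    return np.array(idx)

def qr_cur_indices(A, k):
    """QR-CUR: pivoted QR of A gives columns q; pivoted QR of C^T gives rows p."""
    q = pivoted_qr_indices(A, k)
    p = pivoted_qr_indices(A[:, q].T, k)
    return p, q
```

After deflation the chosen column is numerically zero, so (for a full-rank first $k$ columns) the same index is never picked twice, just as in the DEIM loop.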
\medskip The advantage of DEIM-CUR over approximations based on leverage scores remains when the singular values decrease more sharply. Modify~(\ref{eq:sparsex2}) to give a more significant drop between $\sigma_{10}$ and $\sigma_{11}$: \begin{equation} \label{eq:sparsex1000} {\bf A} = \sum_{j=1}^{10} {1000\over j}\,{\bf x}_j^{}{\bf y}_j^T + \sum_{j=11}^{300} {1\over j}\,{\bf x}_j^{}{\bf y}_j^T. \end{equation} As seen in Figures~\ref{fig:sparse_ex3_cur} and~\ref{fig:sparse_ex3_eta}, the DEIM-CUR approach again delivers excellent approximations, while selecting the rows and columns with highest leverage scores does not perform nearly as well. (In Figure~\ref{fig:sparse_ex3_eta}, note the significant jump in the ``LS~(10)'' error constant $\eta_q$ corresponding to those $k$ values where $\|{\bf A}-{\bf C}_k{\bf U}_k{\bf R}_k\|/\sigma_{k+1}$ is large.) \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.65]{sparse_ex3_cur} \end{center} \vspace*{-20pt} \caption{\label{fig:sparse_ex3_cur} Accuracy of CUR approximations using $k$ rows and columns, for DEIM-CUR and two leverage score strategies for the sparse, nonnegative matrix~(\ref{eq:sparsex1000}). ``LS (all)'' selects rows and columns having the highest leverage scores computed using all 300~singular vectors; ``LS (10)'' uses the leading 10~singular vectors.} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.45]{sparse_ex3_eta_ls13}\quad \includegraphics[scale=0.45]{sparse_ex3_eta_d1} \end{center} \vspace*{-20pt} \caption{\label{fig:sparse_ex3_eta} Error constants $\eta_p = \|({\bf P}_k^T{\bf V}_k)^{-1}\|$ and $\eta_q = \|({\bf W}_k^T{\bf Q}_k)^{-1}\|$ for rows and columns selected using two leverage score strategies (left plot) and the DEIM algorithm (right plot), for the sparse matrix ${\bf A}$ given in~(\ref{eq:sparsex1000}). } \end{figure} \newpage \noindent \textbf{Example 2}. 
\emph{TechTC term document data} \medskip \noindent The second example, adapted from Mahoney and Drineas~\cite{MD09}, computes the CUR factorization of a term document matrix with data drawn from the Technion Repository of Text Categorization Datasets (TechTC)~\cite{GM04}. The rows of the data matrix correspond to websites (consolidated from multiple webpages), while the columns correspond to ``features'' (words from the text of the webpages). The $(j,k)$ entry of ${\bf A}$ reflects the importance of the feature text on the given website; most entries are zero. For this experiment we use TechTC-100 test set~26, which concatenates a data set relating to Evansville, Indiana (id~10567) with another for Miami, Florida (id~11346). Following Mahoney and Drineas~\cite{MD09}, we omit all features with four or fewer characters from the data set, leaving a matrix with 139~rows and 15,170~columns. Each row of ${\bf A}$ is then scaled to have unit 2-norm. Ideally a CUR factorization not only gives an accurate low-rank approximation to ${\bf A}$, but also selects rows corresponding to representative webpages from each geographic area, and columns corresponding to meaningful features. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.65]{techtc_rs_cur} \end{center} \vspace*{-20pt} \caption{\label{fig:techtc_rs_cur} Accuracy of CUR factorizations for the TechTC example, selecting rows and columns using top leverage scores for all singular vectors and the leading two singular vectors, and DEIM.} \end{figure} \begin{figure}[t!] 
\begin{center} \includegraphics[scale=0.65]{techtc_rs_comp_ls2} \end{center} \vspace*{-20pt} \caption{\label{fig:techtc_rs_comp_ls2} The columns selected by DEIM for the TechTC example, compared to leverage scores from the leading two singular vectors.} \vspace*{-10pt} \end{figure} Figure~\ref{fig:techtc_rs_cur} compares DEIM-CUR approximations to row and column selection based on highest leverage scores (from all singular vectors, or the two leading singular vectors). The DEIM-CUR approximations are typically more accurate than those based on leverage scores, but all approaches give errors roughly two times larger than the slowly-decaying optimal value of $\sigma_{k+1}$. How do the DEIM columns (features) compare to those with the highest leverage scores? Figure~\ref{fig:techtc_rs_comp_ls2} shows the leverage scores associated with each column of ${\bf A}$ (based on the two leading singular vectors), along with the first 30~columns selected by DEIM. While the columns with highest leverage scores were found by DEIM, there are DEIM columns with marginal leverage scores, and vice versa. This data is more easily parsed in Table~\ref{tbl:techtc}, which lists the features corresponding to the first 20~DEIM columns. (To ease comparison, we normalize leverage scores so that the maximum value is one.) The leading features identified by DEIM, including ``evansville'' (first DEIM point), ``florida'' (second), ``miami'' (sixth), and ``indiana'' (nineteenth), indeed reveal key geographic terms. These terms scored at least as high when ranked by leverage scores based on two leading singular vectors; when all singular vectors are used, the scores of these terms generally drop, relative to other features. Overall, one notes that DEIM selects a significantly different set of indices than those valued by leverage scores, and, as seen in Figure~\ref{fig:techtc_rs_cur}, tends to provide a somewhat better low-rank approximation. \begin{table}[t!] 
\caption{\label{tbl:techtc} The features selected by DEIM-CUR for the TechTC data set, compared to the (scaled) leverage scores using the leading two singular vectors, and all singular vectors.} \vspace*{-5pt} \begin{center}\small \begin{tabular}{cr|cc|cc|l} \hline \multicolumn{1}{c}{DEIM } & \multicolumn{1}{c}{index} & \multicolumn{2}{|c}{LS (2)} & \multicolumn{2}{|c}{LS (all) } & \multicolumn{1}{|c}{} \\ \multicolumn{1}{c}{rank } & \multicolumn{1}{c}{$q_j$} & \multicolumn{1}{|c}{rank } & \multicolumn{1}{c}{score} & \multicolumn{1}{|c}{rank } & \multicolumn{1}{c}{score} & \multicolumn{1}{|c}{feature} \\ \hline 1 & 10973 & 1 & 1.000 & 4 & 0.875 & evansville\\ 2 & 1 & 2 & 0.741 & 8 & 0.726 & florida\\ 3 & 1547 & 13 & 0.031 & 2 & 0.948 & spacer\\ 4 & 109 & 8 & 0.055 & 66 & 0.347 & contact\\ 5 & 209 & 12 & 0.040 & 32 & 0.458 & service\\ 6 & 50 & 4 & 0.116 & 6 & 0.739 & miami\\ 7 & 824 & 46 & 0.007 & 5 & 0.809 & chapter\\ 8 & 1841 & 33 & 0.010 & 20 & 0.537 & health\\ 9 & 171 & 5 & 0.113 & 13 & 0.617 & information\\ 10 & 234 & 16 & 0.026 & 37 & 0.436 & events\\ 11 & 595 & 84 & 0.004 & 15 & 0.576 & church\\ 12 & 60 & 15 & 0.026 & 67 & 0.347 & email\\ 13 & 945 & 10 & 0.047 & 30 & 0.474 & services\\ 14 & 1670 & 129 & 0.002 & 1 & 1.000 & bullet\\ 15 & 216 & 35 & 0.009 & 38 & 0.430 & music\\ 16 & 78 & 3 & 0.246 & 24 & 0.492 & south\\ 17 & 213 & 19 & 0.018 & 110 & 0.259 & their\\ 18 & 138 & 14 & 0.030 & 43 & 0.408 & please\\ 19 & 6110 & 7 & 0.060 & 95 & 0.280 & indiana\\ 20 & 1152 & 70 & 0.005 & 152 & 0.221 & member \end{tabular}\end{center} \vspace*{-2em} \end{table} \medskip \noindent \textbf{Example 3}. \emph{Tumor detection in genetics data} \medskip \noindent Our final example uses the GSE10072 cancer genetics data set from the National Institutes of Health, previously investigated by Kundu, Nambirijan, and Drineas~\cite{KND}. The matrix ${\bf A}\in\mbox {\Bb R}^{22,283\times 107}$ contains data for 22,283~probes applied to 107~patients. 
The $(j,k)$ entry of ${\bf A}$ reflects how strongly patient $k$ responded to probe $j$. This experiment seeks probes that segment the population into two clusters: the 58~patients with tumors, and the 49~without.% \footnote{The data is available from \url{http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE10072}.} To center the data, we subtract the mean of each row from all entries in that row. As shown in~\cite{KND}, the leading two principal vectors of this matrix segment the population very well. Like the TechTC data, the singular values of ${\bf A}$ decay slowly, as seen in Figure~\ref{fig:gene_cur}. Once again the DEIM-CUR procedure produces a more accurate low-rank approximation than obtained by selecting the rows and columns with highest leverage scores, whether those are computed using all the singular vectors, or just the leading two or ten. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.65]{gene_cur} \end{center} \vspace*{-17pt} \caption{\label{fig:gene_cur} Accuracy of CUR factorizations for a genetics data set. DEIM-CUR consistently outperforms factorizations derived by taking the rows and columns with largest leverage scores, regardless of whether these scores are drawn from all singular vectors, the leading ten singular vectors, or the leading two singular vectors. } \end{figure} Table~\ref{tbl:gene_deim} reports the first 15~rows selected by the DEIM-CUR process, along with the corresponding leverage scores based on two, ten, and all singular vectors. Do the probes selected by DEIM discriminate the patients with tumors (``sick'') from those without (``well'')? To investigate, for each selected probe we count the number of large positive entries corresponding to sick and well patients.% \footnote{In particular, we call an entry of the mean-centered matrix ${\bf A}$ large if its value exceeds one. 
Of the 22,283 probes, for only 23~probes do at least~30 of the~58 sick patients have such large entries; for only~95 probes do at least~30 of the~49 well patients have large entries. There is no overlap between the probes that are strongly expressed by the sick and well patients.} Some but not all of the DEIM-CUR probes effectively select only sick or well patients. Contrast these results with Table~\ref{tbl:gene_ls2}, which shows the probes with highest leverage scores (drawn from the leading two singular vectors). Only four of these probes were also selected by the DEIM procedure (even if we continue the DEIM procedure to select the maximum number, $n=107$, of indices). This discrepancy is quite different from the good agreement between DEIM and leverage score indices for the TechTC data in Table~\ref{tbl:techtc}, despite the similar dimensions and the comparably slow decay of the singular values. \begin{table}[t!] \caption{\label{tbl:gene_deim} Genetics example: the probes selected by DEIM-CUR, compared to the (scaled) leverage scores using the leading two singular vectors, ten singular vectors, and all singular vectors.} \begin{center}\small \begin{tabular}{cr|ll|cc|cc|cc|cc} \hline \multicolumn{1}{c}{DEIM } & \multicolumn{1}{c}{index} & \multicolumn{1}{|c}{probe} & \multicolumn{1}{c}{gene} & \multicolumn{1}{|c}{number} & \multicolumn{1}{c}{number} & \multicolumn{2}{|c}{LS (2) } & \multicolumn{2}{|c}{LS (10)} & \multicolumn{2}{|c}{LS (all) } \\ \multicolumn{1}{c}{rank } & \multicolumn{1}{c}{$q_j$} & \multicolumn{1}{|c}{set } & \multicolumn{1}{c}{name} & \multicolumn{1}{|c}{sick } & \multicolumn{1}{c}{well} & \multicolumn{1}{|c}{rank } & \multicolumn{1}{c}{score} & \multicolumn{1}{|c}{rank } & \multicolumn{1}{c}{score} & \multicolumn{1}{|c}{rank } & \multicolumn{1}{c}{score} \\ \hline 1 & 9565 & 210081\_at & AGER & 2 & 45 & 1 & 1.000 & 45 & 0.504 & 386 & 0.123 \\ 2 & 14270 & 214895\_s\_at & ADAM10 & 8 & 3 & 211 & 0.173 & 1171 & 0.108 & 3344 & 0.036 \\ 3 & 8650 & 
209156\_s\_at & COL6A2 & 5 & 6 & 15156 & 0.005 & 252 & 0.245 & 708 & 0.091 \\ 4 & 11057 & 211653\_x\_at & AKR1C2 & 18 & 1 & 6440 & 0.017 & 11 & 0.656 & 146 & 0.185 \\ 5 & 14153 & 214777\_at & IGKV4-1 & 27 & 3 & 281 & 0.148 & 19 & 0.607 & 106 & 0.209 \\ 6 & 18976 & 219612\_s\_at & FGG & 17 & 17 & 2591 & 0.039 & 2 & 0.956 & 4 & 0.825 \\ 7 & 3831 & 204304\_s\_at & PROM1 & 16 & 4 & 992 & 0.073 & 70 & 0.417 & 32 & 0.345 \\ 8 & 3351 & 203824\_at & TSPAN8 & 17 & 4 & 9687 & 0.011 & 21 & 0.582 & 31 & 0.355 \\ 9 & 4275 & 204748\_at & PTGS2 & 18 & 14 & 424 & 0.118 & 13 & 0.624 & 42 & 0.313 \\ 10 & 1437 & 201909\_at & RPS4Y1 & 21 & 34 & 8232 & 0.013 & 3 & 0.913 & 5 & 0.736 \\ 11 & 14150 & 214774\_x\_at & TOX3 & 34 & 0 & 95 & 0.262 & 49 & 0.492 & 102 & 0.210 \\ 12 & 10518 & 211074\_at & FOLR1 & 7 & 4 & 9482 & 0.011 & 926 & 0.124 & 213 & 0.159 \\ 13 & 9580 & 210096\_at & CYP4B1 & 6 & 44 & 8 & 0.797 & 65 & 0.431 & 54 & 0.284 \\ 14 & 4002 & 204475\_at & MMP1 & 27 & 0 & 34 & 0.406 & 24 & 0.564 & 21 & 0.465 \\ 15 & 13990 & 214612\_x\_at & MAGEA & 16 & 0 & 489 & 0.110 & 134 & 0.323 & 35 & 0.339 \end{tabular}\end{center} \vspace*{-18pt} \end{table} \begin{table}[t!] 
\caption{\label{tbl:gene_ls2} Genetics example: the probes with top (scaled) leverage scores, derived from the first two singular vectors.} \begin{center}\small \begin{tabular}{crr|ll|cc|c} \hline \multicolumn{1}{c}{LS (2) } & \multicolumn{1}{c}{index} & \multicolumn{1}{c}{LS (2)} & \multicolumn{1}{|c}{probe} & \multicolumn{1}{c}{gene} & \multicolumn{1}{|c}{number} & \multicolumn{1}{c}{number} & \multicolumn{1}{|c}{DEIM } \\ \multicolumn{1}{c}{rank } & \multicolumn{1}{c}{$q_j$} & \multicolumn{1}{c}{score} & \multicolumn{1}{|c}{set} & \multicolumn{1}{c}{name} & \multicolumn{1}{|c}{sick } & \multicolumn{1}{c}{well} & \multicolumn{1}{|c}{rank } \\ \hline 1 & 9565 & 1.000 & 210081\_at & AGER & 2 & 45 & 1 \\ 2 & 13766 & 0.922 & 214387\_x\_at & SFTPC& 6 & 48 & --- \\ 3 & 11135 & 0.907 & 211735\_x\_at & SFTPC& 5 & 48 & 73 \\ 4 & 9361 & 0.899 & 209875\_s\_at & SPP1& 50 & 2 & --- \\ 5 & 5509 & 0.896 & 205982\_x\_at & SFTPC& 5 & 48 & --- \\ 6 & 9103 & 0.835 & 209613\_s\_at & ADH1B& 2 & 47 & --- \\ 7 & 14827 & 0.834 & 215454\_x\_at & SFTPC& 0 & 46 & --- \\ 8 & 9580 & 0.797 & 210096\_at & CYP4B1& 6 & 44 & 13 \\ 9 & 4239 & 0.754 & 204712\_at & WIF1& 5 & 43 & 70 \\ 10 & 3507 & 0.724 & 203980\_at & FABP4& 2 & 44 & --- \\ 11 & 18594 & 0.717 & 219230\_at & TMEM100& 2 & 38 & --- \\ 12 & 9102 & 0.684 & 209612\_s\_at & ADH1B& 2 & 46 & --- \\ 13 & 13514 & 0.626 & 214135\_at & CLDN18& 3 & 47 & --- \\ 14 & 5393 & 0.626 & 205866\_at & FCN3& 0 & 39 & --- \\ 15 & 4727 & 0.614 & 205200\_at & CLEC3B& 0 & 39 & --- \\ \end{tabular}\end{center} \vspace*{-18pt} \end{table} While the rows selected from leverage scores did not produce as accurate an approximation, $\|{\bf A}-{\bf C}_k{\bf U}_k{\bf R}_k\|$, as DEIM, these probes do a much more effective job of discriminating patients with tumors from those without. Indeed, for~14 of the top~15 probes, the tumor-free patients express strongly, while the patients with tumors do not; in the remaining case, the opposite occurs. 
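The strong-expression counts reported in these tables follow the counting rule stated in the footnote above. As a minimal sketch (with synthetic data and hypothetical labels standing in for the mean-centered GSE10072 matrix and its patient classes):

```python
import numpy as np

# Synthetic stand-in for the mean-centered genetics matrix: rows are
# probes, columns are patients (58 "sick" followed by 49 "well").
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 107))
sick = np.arange(107) < 58

# An entry is "large" if it exceeds one; count, for each probe, how many
# sick and how many well patients express strongly.
large = A > 1.0
n_sick = large[:, sick].sum(axis=1)
n_well = large[:, ~sick].sum(axis=1)

# Probes that discriminate express strongly in one class but not the other.
discriminating = (n_sick >= 30) ^ (n_well >= 30)
```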
\section{CUR Factorization} \label{sec:factor} We are concerned with large matrices ${\bf A}\in\mbox {\Bb R}^{m\times n}$ that represent nearly low-rank data, which can therefore be expressed as \begin{equation} \label{eq:CURF} {\bf A} = {\bf C} {\bf U} {\bf R} + {\bf F}, \end{equation} with $\|{\bf F}\|$ small relative to $\|{\bf A}\|$.\ \ The matrix ${\bf C}\in\mbox {\Bb R}^{m\times k}$ is formed by extracting $k$ columns from ${\bf A}$, and ${\bf R}\in\mbox {\Bb R}^{k\times n}$ from $k$ rows of ${\bf A}$. The selected row and column indices are stored in the vectors ${\bf p}, {\bf q}\in\mbox {\Bb N}^k$, so that ${\bf C} = {\bf A}(:,{\bf q})$ and ${\bf R} = {\bf A}({\bf p},:)$. Our choice for ${\bf p}$ and ${\bf q}$ is guided by knowledge of the rank-$k$ SVD (or an approximation to it). Before detailing the method for selecting these indices, we discuss how, given ${\bf p}$ and ${\bf q}$, one should construct ${\bf U}$ so that ${\bf C}{\bf U}{\bf R}$ satisfies desirable approximation properties. As motivation, suppose for the moment that ${\bf A}$ has exact rank~$k$, and ${\bf C}$ and ${\bf R}$ are full-rank subsets of the columns and rows of ${\bf A}$.\ \ Now let ${\bf Y}\in\mbox {\Bb R}^{m\times k}$ and ${\bf Z}\in\mbox {\Bb R}^{n\times k}$ be any matrices that satisfy ${\bf Y}^T {\bf C} = {\bf R} {\bf Z} = {\bf I}\in\mbox {\Bb R}^{k\times k}$. Then ${\bf C}{\bf Y}^T$ is a projector onto ${\rm Ran}({\bf C})={\rm Ran}({\bf A})$ and $({\bf Z} {\bf R})^T$ is a projector onto ${\rm Ran}({\bf R}^T)={\rm Ran}({\bf A}^T)$, where ${\rm Ran}(\cdot)$ denotes the range (column space). 
It follows that ${\bf C}{\bf Y}^T {\bf A} = {\bf A}$ and $({\bf Z}{\bf R})^T{\bf A}^T = {\bf A}^T$.\ \ Putting ${\bf U} \equiv {\bf Y}^T{\bf A}{\bf Z}$ gives \[ {\bf C}{\bf U}{\bf R} = {\bf C}{\bf Y}^T{\bf A}{\bf Z}{\bf R} = {\bf A}{\bf Z}{\bf R} = {\bf A}.\] Thus, any choice of ${\bf Y}$ and ${\bf Z}$ that satisfies ${\bf Y}^T {\bf C} = {\bf R} {\bf Z} = {\bf I}$ gives a ${\bf U}$ such that ${\bf C}{\bf U}{\bf R}$ exactly recovers ${\bf A}$.\ \ In general different choices for ${\bf Y}$ and ${\bf Z}$ give different ${\bf U} = {\bf Y}^T {\bf A} {\bf Z}$. Now consider the general case~(\ref{eq:CURF}). Once ${\bf p}$, ${\bf q}$, ${\bf Y}$, and ${\bf Z}$ have been specified, then \[ {\bf U} = {\bf Y}^T {\bf A} {\bf Z} \quad {\rm and } \quad {\bf F} \equiv {\bf A} - {\bf C} {\bf U} {\bf R}. \] One might design ${\bf Y}$ and ${\bf Z}$ so that ${\bf C}{\bf U}{\bf R}$ matches the selected columns ${\bf C} = {\bf A}(:,{\bf q})$ and rows ${\bf R} = {\bf A}({\bf p},:)$ of ${\bf A}$ exactly. This can be accomplished with \emph{interpolatory projectors}, which we discuss in detail in the next section. For now, let ${\bf P} = {\bf I}(@:@,{\bf p})\in\mbox {\Bb R}^{m\times k}$ and ${\bf Q} = {\bf I}(@:@,{\bf q})\in\mbox {\Bb R}^{n\times k}$ be submatrices of the identity, so that ${\bf P}^T {\bf a} = {\bf a}({\bf p})$ and ${\bf b}^T {\bf Q} = {\bf b}({\bf q})^T$ for arbitrary vectors ${\bf a}$ and ${\bf b}$ of appropriate dimensions. Now define ${\bf Y}^T = ({\bf P}^T {\bf C})^{-1} {\bf P}^T$ and ${\bf Z} = {\bf Q} ({\bf R}{\bf Q})^{-1}$ (presuming ${\bf P}^T{\bf C}$ and ${\bf R}{\bf Q}$ are invertible). 
Then since ${\bf C} = {\bf A}(:,{\bf q})$ and ${\bf R} = {\bf A}({\bf p},:)$, \[ {\bf P}^T {\bf C} = {\bf C}({\bf p},:) = {\bf A}({\bf p},{\bf q}) \quad \mbox{and}\quad {\bf R} {\bf Q} = {\bf R}(:,{\bf q}) = {\bf A}({\bf p},{\bf q}),\] so \[ {\bf U} = {\bf Y}^T {\bf A} {\bf Z} = ({\bf P}^T{\bf C})^{-1}{\bf P}^T {\bf A}{\bf Q} ({\bf R}{\bf Q})^{-1} = {\bf A}({\bf p},{\bf q})^{-1}{\bf A}({\bf p},{\bf q}) {\bf A}({\bf p},{\bf q})^{-1} = {\bf A}({\bf p},{\bf q})^{-1}.\] This CUR approximation matches the ${\bf q}$ columns and ${\bf p}$ rows of ${\bf A}$, \[ {\bf A}(:,{\bf q}) = {\bf C} {\bf U} {\bf R}(:,{\bf q}) \ \ {\rm and} \ \ {\bf A}({\bf p},:) = {\bf C}({\bf p},:) {\bf U} {\bf R}, \] and, in our experiments, usually delivers a very good approximation. However, a CUR factorization with better theoretical approximation properties results from orthogonal projection, as originally suggested by Stewart~\cite[p.~320]{Ste99}; see also, e.g., Mahoney and Drineas \cite{MD09}. Given a selection of indices ${\bf p}$ and ${\bf q}$, again put \[ {\bf C} = {\bf A}(:,{\bf q}) \quad\mbox{and}\quad {\bf R} = {\bf A}({\bf p},:). \] Assume that ${\bf C}$ and ${\bf R}$ both have full rank $k$, and now let ${\bf Y}^T = {\bf C}^I \equiv ({\bf C}^T {\bf C})^{-1} {\bf C}^T $ and ${\bf Z} = {\bf R}^I \equiv {\bf R}^T ({\bf R} {\bf R}^T)^{-1} $ denote left and right inverses of ${\bf C}$ and ${\bf R}$. These choices also satisfy ${\bf Y}^T{\bf C} = {\bf I}$ and ${\bf R}{\bf Z}={\bf I}$, but now ${\bf C}{\bf Y}^T = {\bf C}\bC^I$ and ${\bf Z}{\bf R} = {\bf R}^I{\bf R}$ are \emph{orthogonal projectors}. We compute \[ {\bf U} = {\bf Y}^T {\bf A}{\bf Z} = {\bf C}^I {\bf A} {\bf R}^I, \] yielding a CUR factorization that can be viewed as a two step process: first the columns of ${\bf A}$ are projected onto Ran(${\bf C}$), then the result is projected onto the row space of ${\bf R}$: \[ 1) \ \ {\bf M} = {\bf C}\bC^I {\bf A}, \quad 2)\ \ {\bf C} {\bf U} {\bf R} = {\bf M} {\bf R}^I {\bf R}. 
\] Both steps are optimal with respect to the 2-norm error, which is the primary source of the excellent approximation properties of this approach. Several strategies for selecting ${\bf p}$ and ${\bf q}$ have been proposed.% \footnote{In the theoretical computer science literature, one often takes ${\bf C}$ and/or ${\bf R}$ to have rank larger than $k$, but then builds ${\bf U}$ with rank $k$. By selecting these extra columns and/or rows, one seeks to get within some factor $1+\varepsilon$ of the optimal approximation; see, e.g.,~\cite{BW}.} The approach presented in the next section is simple to implement and has complexity $O(mk)$ and $O(nk)$ to select the indices ${\bf p}$ and ${\bf q}$, provided the leading $k$ right and left singular vectors of ${\bf A}$ are available. Thus the overall complexity is dominated by the construction of the rank-$k$ SVD ${\bf A} \approx {\bf V} {\bf S} {\bf W}^T$, where ${\bf V}^T {\bf V} = {\bf W}^T {\bf W} = {\bf I}\in\mbox {\Bb R}^{k\times k}$ and ${\bf S} = {\rm diag}(\sigma_1, \sigma_2, \ldots , \sigma_{k})$ is the $k \times k$ matrix of dominant singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{k}$. \section{Introduction} This work presents a new CUR matrix factorization based upon the Discrete Empirical Interpolation Method (DEIM).\ \ A CUR factorization is a low rank approximation of a matrix ${\bf A} \in \mbox {\Bb R}^{m \times n}$ of the form ${\bf A} \approx {\bf C} {\bf U} {\bf R}$, where ${\bf C} = {\bf A}(:,{\bf q}) \in \mbox {\Bb R}^{m \times k}$ is a subset of the columns of ${\bf A}$ and ${\bf R} = {\bf A}({\bf p},:) \in \mbox {\Bb R}^{k \times n} $ is a subset of the rows of ${\bf A}$.\ \ (We generally assume $m\ge n$ throughout.) The $k \times k$ matrix ${\bf U}$ is constructed to assure that ${\bf C} {\bf U} {\bf R}$ is a good approximation to ${\bf A} $.
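Both standard constructions of ${\bf U}$ described in Section~\ref{sec:factor} (interpolatory and orthogonal-projection) take only a few lines in a matrix language. The sketch below is illustrative only, with arbitrary index choices standing in for the DEIM selection; it checks the exact-rank recovery property derived in Section~\ref{sec:factor}:

```python
import numpy as np

def cur(A, p, q, mode="orthogonal"):
    """Return C, U, R for row indices p and column indices q.

    mode="interpolatory": U = A(p,q)^{-1}, so CUR matches the selected
    rows and columns of A exactly.
    mode="orthogonal":    U = C^I A R^I, built from pseudoinverses, so
    C U R is a pair of orthogonal projections of A.
    """
    C, R = A[:, q], A[p, :]
    if mode == "interpolatory":
        U = np.linalg.inv(A[np.ix_(p, q)])
    else:
        U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4)) @ rng.standard_normal((4, 6))  # exact rank 4
p, q = [0, 2, 5, 7], [0, 1, 3, 4]
for mode in ("interpolatory", "orthogonal"):
    C, U, R = cur(A, p, q, mode)
    assert np.allclose(C @ U @ R, A)  # exact-rank case: CUR recovers A
```

In the generic full-rank, exact-rank-$k$ setting both variants recover ${\bf A}$; they differ only once ${\bf A}$ is merely approximately low rank.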
Assuming the best rank-$k$ singular value decomposition (SVD) ${\bf A} \approx {\bf V} {\bf S} {\bf W}^T$ is available, the algorithm uses the DEIM index selection procedure, ${\bf q} = {\rm DEIM}({\bf V})$ and ${\bf p} = {\rm DEIM}({\bf W})$, to determine ${\bf C}$ and ${\bf R}$. The resulting approximate factorization is nearly as accurate as the best rank-$k$ SVD, with \[ \| {\bf A} - {\bf C} {\bf U} {\bf R} \| \le (\eta_p + \eta_q)\,\sigma_{k+1}, \] where $\sigma_{k+1}$ is the first neglected singular value of ${\bf A}$, $\eta_p \equiv \| {\bf V}({\bf p},:\,)^{-1} \|$, and $\eta_q \equiv \| {\bf W}({\bf q},:\,)^{-1} \|$. Here and throughout, $\|\cdot\|$ denotes the vector 2-norm and the matrix norm it induces, and $\|\cdot\|_F$ is the Frobenius norm. We use MATLAB notation to index vectors and matrices, so that, e.g., ${\bf A}({\bf p},:)$ denotes the $k$ rows of ${\bf A}$ whose indices are specified by the entries of the vector ${\bf p}\in\mbox {\Bb N}^k$, while ${\bf A}(:,{\bf q})$ denotes the $k$ columns of ${\bf A}$ indexed by ${\bf q}\in\mbox {\Bb N}^k$. The CUR factorization is an important tool for handling large-scale data sets, offering two advantages over the SVD: when ${\bf A}$ is sparse, so too are ${\bf C}$ and ${\bf R}$, unlike the matrices ${\bf V}$ and ${\bf W}$ of singular vectors; and the columns and rows that comprise ${\bf C}$ and ${\bf R}$ are representative of the data (e.g., sparse, nonnegative, integer valued, etc.).\ \ The following simple example, adapted from Mahoney and Drineas~\cite[Fig.~1b]{MD09}, illustrates the latter advantage. 
Construct ${\bf A}\in\mbox {\Bb R}^{2\times n}$ so that its first $n/2$ columns have the form \[ \left[\begin{array}{c} x_1 \\ x_2 \end{array}\right]\] and the remaining $n/2$ columns have the form \[ {\sqrt{2}\over2} \left[\begin{array}{cc} -1 & 1 \\ 1 & 1\end{array}\right] \left[\begin{array}{c} x_1 \\ x_2 \end{array}\right], \] where in both cases $x_1 \sim N(0,1)$ and $x_2 \sim N(0,4^2)$ are independent samples of normal random variables, i.e., the columns of ${\bf A}$ are drawn from two different multivariate normal distributions. Figure~\ref{fig:mv_example} shows that the two left singular vectors, though orthogonal by construction, fail to represent the true nature of the data; in contrast, the first two columns selected by the DEIM-CUR procedure give a much better overall representation. While trivial in this two-dimensional case, one can imagine the utility of such approximations for high-dimensional data. We shall illustrate the advantages of CUR approximations with further computational examples in Section~\ref{sec:examples}. \begin{figure}[b!] \begin{center} \includegraphics[scale=0.475]{mv_example1}\qquad\quad \includegraphics[scale=0.475]{mv_example2} \end{center} \vspace*{-10pt} \caption{\label{fig:mv_example} Comparison of singular vectors (left, scaled, in red) and DEIM-CUR columns (right, in blue) for a data set drawn from two multivariate normal distributions having different principal axes.} \end{figure} CUR-type factorizations originated with ``pseudoskeleton'' approximations~\cite{GTZ97} and pivoted, truncated QR decompositions~\cite{Ste99}; in recent years many new algorithms have been proposed in the numerical linear algebra and theoretical computer science literatures. Some approaches seek to maximize the \emph{volume} of the decomposition~\cite{GTZ97,TKB12}. Numerous other algorithms instead use \emph{leverage scores}~\cite{BW,DMM08,MD09,WZ13}. 
These methods typically first compute a singular value decomposition% \footnote{We use the nonstandard notation ${\bf V} {\bf S} {\bf W}^T$ for the SVD to avoid conflicts with ${\bf U}$ in the standard CUR notation.} ${\bf A} = {\bf V}{\bf S}{\bf W}^T$ (or an approximation to it), with ${\bf V}\in\mbox {\Bb R}^{m\times n}$, ${\bf W}\in\mbox {\Bb R}^{n\times n}$.\ \ The leverage score for the $j$th row ($k$th column) of ${\bf A}$ is the squared two-norm of the $j$th row of ${\bf V}$ ($k$th row of ${\bf W}$). When scaled by the number of singular vectors, these leverage scores give probability distributions for randomly sampling the columns and rows to form ${\bf C}$ and ${\bf R}$. This approach leads to probabilistic bounds on $\|{\bf A} - {\bf C} {\bf U} {\bf R} \|_F$~\cite{DMM08,MD09}. In cases where ${\bf A}$ has small singular values (precisely the case where one would seek a low-rank factorization), the singular vectors can be sensitive to perturbations to ${\bf A}$, making the leverage scores unstable~\cite{IW}. Thus leverage scores are often computed using only the leading few singular vectors, but the choice of how many vectors to keep can be somewhat ad hoc. The algorithm described in Sections~\ref{sec:factor} and~\ref{sec:deim} is entirely deterministic and involves few (if any) parameters. The method is supported by an error analysis in Section~\ref{sec:theory} that also applies to a broad class of CUR factorizations. This section includes an improved bound on the error constants $\eta_p$ and $\eta_q$ for DEIM row and column selection, which also applies to the analysis of DEIM-based model order reduction~\cite{CS10}. In Section~\ref{sec:qr} we propose a novel incremental QR algorithm for approximating the SVD (and potentially also approximating leverage scores). Section~\ref{sec:examples} illustrates the performance of this new CUR factorization on several examples.
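For reference, the leverage scores defined above take only a few lines to compute. In the following sketch the helper name is ours, and the leading $k$ left singular vectors are kept (all of them if $k$ is unspecified); column scores follow by applying the same computation to the transpose:

```python
import numpy as np

def row_leverage_scores(A, k=None):
    """Leverage score of row j: the squared 2-norm of row j of V,
    using the leading k left singular vectors of A."""
    V, _, _ = np.linalg.svd(A, full_matrices=False)
    if k is not None:
        V = V[:, :k]
    return (V**2).sum(axis=1)

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 8))
ls_all = row_leverage_scores(A)       # all singular vectors
ls_two = row_leverage_scores(A, k=2)  # leading two, as in the examples
# Since V has orthonormal columns, the scores sum to the number of
# singular vectors used; dividing by that number gives a probability
# distribution for sampling rows.
```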
In many applications one cares primarily about key columns or rows of ${\bf A}$, rather than an explicit ${\bf A}={\bf C}{\bf U}{\bf R}$ factorization. The DEIM technique, which identifies rows and columns of ${\bf A}$ independently, can easily be used to select only columns or rows, leading to an ``interpolatory decomposition'' of the form ${\bf A} = {\bf C}\widehat{{\bf U}}$ or ${\bf A} = \widehat{{\bf U}}{\bf R}$; such factorizations have the advantage that $\widehat{{\bf U}}$ can be much better conditioned than the ${\bf U}$ matrix in the CUR factorization. For further details about general interpolatory decompositions, see~\cite[\S1]{CGMR05}. \section{Incremental QR Factorization} \label{sec:qr} The DEIM point selection process presumes access to the first $k$ left and right singular vectors of ${\bf A}\in\mbox {\Bb R}^{m\times n}$. If either $m$ or $n$ is of modest size (say ${}\le 1000$) and ${\bf A}$ can be stored as a dense matrix, \ library software for computing the ``economy sized'' SVD, e.g., \verb|[V,S,W] = svd(A,'econ')| in MATLAB, usually performs very well. For larger scale problems, the leading $k$ singular vectors can be computed using iterative methods, such as the Krylov subspace-based ARPACK software~\cite{LSY98} (used by MATLAB's {\tt svds} command), PROPACK~\cite{Lar05}, IRLBA~\cite{BR13}, or the Jacobi--Davidson algorithm~\cite{Hoc01}. Randomized SVD algorithms provide an appealing alternative with probabilistic error bounds~\cite{HMT11}. Here we describe another approach that satisfies a deterministic error bound (Lemma~\ref{lem:incqr}) and \emph{only requires one pass through the matrix ${\bf A}$}, a key property for massive data sets that cannot easily be stored in memory. \begin{algorithm}[h!] 
\begin{center} \fbox{\begin{tabular}{p{10cm}} \ \\[-25pt] \begin{tabbing} defghasijkl\=bbbb\=cccc\= \kill $ \ \ $ {\bf Input:} \> $ {\bf A}$, an $m \times n$ matrix \\ $ \ \ $ \> {\it tol}, a positive scalar controlling the accuracy of the factorization\\[5pt] $\ $ {\bf Output:} \> $\bQ$, an $m \times k$ matrix with orthonormal columns\\ $\ \ \ \ \ \ \ $ \> $\bR$, a $k \times n$ rectangular matrix \\ $\ \ \ \ \ \ \ $ \>\>\> with ${\bf A} \approx \bQ \bR$ \\[5pt] $\ \ \ $ Choose $k \ll \min(m,n)$\\ $\ \ \ $ Compute the QR factorization ${\bf A}(:,1:k) = \bQ \bR$, with $\bQ \in \mbox {\Bb R}^{m\times k}$ and $\bR\in\mbox {\Bb R}^{k\times k}$\\ $ \ \ \ $ rownorms($i$) = $\|\bR(i,:)\|^2$ for $i = 1, \ldots, k$ \\ $\ \ \ $ $j = k+1$\\ $\ \ \ $ {\bf while} $j \le n$\\ $ \ \ \ $ \> ${\bf a} = {\bf A}(@:@,j) ; \ \br = \bQ^T{\bf a}; \ \ {\bf f} = {\bf a} - \bQ \br; \ \ \rho = \|{\bf f} \|; \ \ \bq = {\bf f}/\rho$\\ $\ \ \ $ \> $\bQ = [\bQ, \ \bq]; \ \ \bR = \left[ \begin{array}{cc} \bR & \br\\ {\bf 0} & \rho \\ \end{array} \right]$ \\[5pt] $\ \ \ $ \> rownorms($i$) = rownorms($i$) + $\br(i)^2$ for $i = 1,\ldots, k$ \\ $\ \ \ $ \> rownorms($k+1$) ${}= \rho^2; \ \ $ \\ $ \ \ \ $ \> FnormR = {\rm sum}(rownorms); \\ $ \ \ \ $ \> $[\sigma, i_{\rm min}] = {\rm min}({\rm rownorms}(1:k+1));$ \\[5pt] $ \ \ \ $ \> {\bf if} $\sigma > ({\it tol}^2)*({\rm FnormR} - {\rm rownorms}(i_{\rm min}))$ \\ $ \ \ \ $ \>\> \% \emph{no deflation} \\ $ \ \ \ $ \>\> $ k = k + 1;$ \\ $ \ \ \ $ \> {\bf else} \% \emph{deflation required}\\ $ \ \ \ $ \>\> {\bf if} $i_{\rm min} < k+1$\\ $ \ \ \ $ \>\>\> $\bR(i_{\rm min},@:@) = \bR(k+1,@:@); \ \bQ(@:@,i_{\rm min}) = \bQ(@:@, k+1)$ \\ $ \ \ \ $ \>\>\> $\ {\rm rownorms}(i_{\rm min}) = {\rm rownorms}(k+1) $ \\ $ \ \ \ $ \>\> {\bf end} \\ $ \ \ \ $ \>\> \% \emph{delete the minimum norm row of $\bR$}\\ $ \ \ \ $ \>\> $\bQ = \bQ(@:@,1:k); \bR = \bR(1:k,@:@) $ \\ $ \ \ \ $ \> {\bf end} \\ $ \ \ \ $ \> $j = j+1$\\ $ \ \ \ $ {\bf end} \\
\end{tabbing}\\[-23pt]\end{tabular}} \end{center} \vspace*{-10pt} \caption{Incremental QR low rank approximate factorization} \label{fig:QRincremental} \end{algorithm} This approach is based on an {\it incremental} low rank ${\bf A}\approx{\bf Q} \bR $ approximation, where $\bQ\in\mbox {\Bb R}^{m\times k}$ has orthonormal columns and $\bR\in\mbox {\Bb R}^{k\times n}$ is upper triangular. (In this section only, $\bQ$ and $\bR$ denote different quantities from elsewhere in the paper.) Take the dense (economy sized) SVD $\bR = \widehat{{\bf V}} {\bf S} {\bf W}^T$, and put ${\bf V} = \bQ \widehat{{\bf V}} $ to get \begin{equation} \label{eq:AQRSVD} {\bf A} \approx \bQ \bR = {\bf V} {\bf S} {\bf W}^T. \end{equation} Incremental algorithms for building the QR factorization and SVD have been proposed by Stewart~\cite{Ste99}, Baker, Gallivan, and Van Dooren~\cite{BGV12}, and many others, as surveyed in~\cite{Bak04}; these ideas are also closely related to rank-revealing QR factorizations~\cite{GE96}. Algorithm~\ref{fig:QRincremental} differs from those of Stewart in its use of internal pivoting and threshold truncation in place of Stewart's column pivoting. This distinction enables a one-pass algorithm that is closely related to~\cite[Algorithm~1]{BGV12}. The proposed method is presented in Algorithm~\ref{fig:QRincremental}, which proceeds at each step by orthogonalizing a column of ${\bf A}$ against the previously orthogonalized columns. The rank of the resulting factors is controlled through an update-and-delete procedure that is illustrated in Figure~\ref{fig:QRupdate}. After orthogonalizing a column of ${\bf A}$, the algorithm checks if any row of $\bR$ has small relative norm; if such a row exists, the corresponding column of ${\bf Q}$ makes little contribution to the factorization, so that column of ${\bf Q}$ and row of $\bR$ can be deleted at only a small loss of accuracy in the factorization.
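In outline, the update-and-delete loop can be sketched as follows. This simplified rendition uses plain classical Gram--Schmidt with no re-orthogonalization or safeguards, so it assumes each incoming column is numerically independent of the current basis; it is not the full algorithm:

```python
import numpy as np

def incremental_qr(A, tol):
    """One-pass approximate QR: A ~= Q R with Q having orthonormal columns.

    After each column of A is orthogonalized, the minimum-norm row of R
    (and the matching column of Q) is deleted whenever that row is
    negligible relative to the rest of R.
    """
    m, n = A.shape
    Q, R = np.linalg.qr(A[:, :1])          # start from the first column
    for j in range(1, n):
        a = A[:, j]
        r = Q.T @ a
        f = a - Q @ r
        rho = np.linalg.norm(f)
        Q = np.column_stack([Q, f / rho])
        R = np.block([[R, r[:, None]],
                      [np.zeros((1, R.shape[1])), rho]])
        rownorms = (R**2).sum(axis=1)
        i = int(np.argmin(rownorms))
        if rownorms[i] <= tol**2 * (rownorms.sum() - rownorms[i]):
            Q = np.delete(Q, i, axis=1)    # negligible direction: deflate
            R = np.delete(R, i, axis=0)
    return Q, R

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 20))  # rank 3
Q, R = incremental_qr(A, tol=1e-8)
# The rank-3 structure is detected: Q retains 3 columns and A ~= Q R.
```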
(Future columns of ${\bf A}$ will not be orthogonalized against the vector deleted from ${\bf Q}$, so this direction can re-emerge if a later column in ${\bf A}$ warrants it.) Robust implementations of Algorithm~\ref{fig:QRincremental} should replace the classical Gram--Schmidt operations \[ \br = \bQ^T {\bf a}, \ \ {\bf f} = {\bf a} - \bQ \br \] with a re-orthogonalization step, as suggested by Daniel, Gragg, Kaufman, and Stewart~\cite{DGKS}: \begin{eqnarray} \br &=& \bQ^T {\bf a} \nonumber \\ {\bf f} &=& {\bf a} - \bQ \br \nonumber \\ &\ & {\bf c} = \bQ^T {\bf f} \label{eq:reorth1} \\ &\ & {\bf f} = {\bf f} - \bQ{\bf c} \label{eq:reorth2}\\ &\ & \br = \br + {\bf c} \label{eq:reorth3}\\ \rho &=& \| {\bf f} \| \nonumber \\ \bq &=& {\bf f}/\rho. \nonumber \end{eqnarray} The extra steps~(\ref{eq:reorth1})--(\ref{eq:reorth3}) generally provide a $\bQ$ that is numerically orthogonal to working precision. Pathological cases are easily overcome with some additional slight modifications; see~\cite{GIR05} for a complete analysis. Because this algorithm uses the classical Gram--Schmidt method, one can easily block it for parallel efficiency. \begin{figure}[t!]
\begin{center} \begin{picture}(300,50) \put(-40,-20){${\bf A}(@:@,1:j) = {}$} \put(-25,-80){ \begin{minipage}[t]{2.5in}\begin{center} \hspace*{30pt}\includegraphics[scale=0.35]{QRupdateA}\\ Partial ${\bf Q}\bR$ factorization\end{center} \end{minipage}} \put(160,-20){\small ${\bf A}(:,1\!:\!j\!+\!1) = {}$} \put(177,-80){\begin{minipage}[t]{2.5in}\begin{center}\hspace*{30pt} \includegraphics[scale=0.35]{QRupdateB}\\ Extend with Gram--Schmidt\end{center} \end{minipage}} \put(-95,-230){\begin{minipage}[t]{3in}\begin{center} \includegraphics[scale=0.35]{QRupdateC}\\ Find $\bq_i$ with $\|\bR(i,:)\|^2 < \tol^{@2}\,(\|\bR\|_F^2 - \|\bR(i,:)\|^2)$\end{center} \end{minipage}} \put(90,-230){\begin{minipage}[t]{2in}\begin{center} \includegraphics[scale=0.35]{QRupdateD}\\ Replace $\bq_i$, $\bR(i,:)$ \end{center} \end{minipage}} \put(255,-230){\begin{minipage}[t]{2in}\begin{center} \includegraphics[scale=0.35]{QRupdateE}\\ Truncate last column\\ of $\bQ$ and row of $\bR$\end{center} \end{minipage}} \end{picture} \vspace*{3.35in} \end{center} \caption{\label{fig:QRupdate} Diagram illustrating the QR update procedure.} \end{figure} \subsection{Incremental QR Error Bounds} At step $j$ the truncation criterion in Algorithm~\ref{fig:QRincremental} will delete row $\br_i^T = {\bf e}_i^T \bR_j \ $ if \[ \| \br_i \| \le \tol\, \| \widehat{\bR}_j \|_F, \] where $\br_i^T$ is the row of minimum norm and $\widehat{\bR}_j$ denotes $\bR_j$ with the $i$th row deleted. This strategy has a straightforward error analysis, which, in light of the approximation~(\ref{eq:AQRSVD}), also implies an error bound on the resulting SVD. \begin{lemma}\label{lem:incqr} Let $\bR_j$ be the triangular factor at step $j$ of Algorithm~\ref{fig:QRincremental}, and $\bQ_j $ the corresponding orthonormal columns in the approximate QR factorization ${\bf A}_j \approx \bQ_j \bR_j$, where ${\bf A}_j$ consists of the first $j$ columns of ${\bf A}$. 
Then \[ \| {\bf A}_j - \bQ_j \bR_j \|_F \le \tol \cdot d_j \cdot \| \bR_j \|_F, \] where $d_j$ is the number of column/row deletions that have been made up to and including step $j$. (Note that $\bQ_j \in \mbox {\Bb R}^{m\times (j-d_j)}$, $\bR_j\in\mbox {\Bb R}^{(j-d_j)\times j}$, and $d_n = \min\{m,n\} - k$, where $k = { {{\rm rank }\,} } (\bQ_n\bR_n)$.) \end{lemma} Before proving this lemma, we note that it gives a bound on the error in the resulting approximate SVD of ${\bf A}$. Suppose $d_n$ deletions are made when this algorithm computes the approximate factorization ${\bf A} \approx {\bf Q}{\bf R}$ with tolerance $\tol$. Given the SVD ${\bf R} = \widehat{{\bf V}}{\bf S}{\bf W}^*$, set ${\bf V} \equiv {\bf Q}\widehat{{\bf V}}$. Then \[ \|{\bf A} - {\bf V}{\bf S}{\bf W}^*\|_F \le \tol \cdot d_n \cdot \|\bR\|_F.\] {\it Proof of Lemma~\ref{lem:incqr}:} The proof is by induction. Let ${\bf E}_j = {\bf A}_j - \bQ_j \bR_j$ and assume \begin{equation} \label{eq:inductive} \| {\bf E}_j \|_F \le \tol \cdot d_j \cdot \| \bR_j \|_F. \end{equation} Orthogonalize column $j+1$ of ${\bf A}$ using Gram--Schmidt to obtain \[ {\bf A}_{j+1} = \bQ_{j+1}\bR_{j+1} + [{\bf E}_j, {\bf 0}]. \] If no deflation occurs at this step, the bound holds trivially since \[ \| {\bf E}_{j+1} \|_F = \|[{\bf E}_j, {\bf 0}]\|_F \le \tol\cdot d_j \cdot \|\bR_j\|_F \le \tol \cdot d_{j+1} \cdot \| \bR_{j+1} \|_F, \] because $d_{j+1}= d_j$ and $\|\bR_j \|_F \le \| \bR_{j+1}\|_F$. Suppose $\bR_j$ has dimension $k \times j$ (i.e., $k=j-d_j$). Let $i$ be the index of the row of minimum norm and let $\widehat{\bR}_{j+1}$ be obtained by deleting the $i$th row of $\bR_{j+1}$.\ \ If $\br_i^T = {\bf e}_i^T \bR_{j+1} $ satisfies $\|\br_i^T \| \le \tol\cdot \| \widehat{\bR}_{j+1} \|_F$, then deflation occurs. Deleting column $i$ of $\bQ_{j+1}$ and row $i$ of $\bR_{j+1}$ replaces $\bQ_{j+1}$ and $\bR_{j+1}$ with $\widehat{\bQ}_{j+1}$ and $\widehat{\bR}_{j+1}$.
Then \[ \widehat{\bQ}_{j+1} \widehat{\bR}_{j+1} = \bQ_{j+1} ( \bR_{j+1} - {\bf e}_i^{} \br_i^T), \] and \[ {\bf A}_{j+1} = \bQ_{j+1} (\bR_{j+1} - {\bf e}_i^{} \br_i^T) + [{\bf E}_j, {\bf 0}] + \bQ_{j+1}{\bf e}_i^{} \br_i^T. \] Hence the deletion gives the overall error \[ {\bf E}_{j+1} = {\bf A}_{j+1} - \widehat{\bQ}_{j+1} \widehat{\bR}_{j+1} = [{\bf E}_j, {\bf 0}] + \bQ_{j+1}{\bf e}_i^{}\br_i^T. \] Therefore, when $i < k+1$, the inductive assumption~(\ref{eq:inductive}) implies \[ \| {\bf E}_{j+1}\|_F \le \| {\bf E}_j \|_F + \|\br_i^T \| \le \tol \cdot ( d_j \cdot \| \bR_j \|_F + \|\widehat{\bR}_{j+1} \|_F) \le \tol \cdot (d_j+1) \cdot \| \widehat{\bR}_{j+1}\|_F, \] since $\widehat{\bR}_{j+1}$ contains row $k+1$ of $\bR_{j+1}$, which must have a norm larger than the row marked for deletion. Since row $k+1$ of $\widehat{\bR}_{j+1}$ consists of just one nonzero element, \[ \| \widehat{\bR}_{j+1} \|^2_F \ge \| \widehat{\bR}_j \|_F^2 + \rho_{k+1,j+1}^2 \ge \| \bR_j \|_F^2, \] where $\rho_{k+1,j+1}$ is the element $\bR_{j+1}(k+1,j+1) $ and $\widehat{\bR}_j$ is the matrix $\bR_j$ with $i$th row deleted. If $i=k+1$, then the last row of $\bR_{j+1} $ is deleted and the desired inequality must hold, since $\bR_j$ is a submatrix of $\widehat{\bR}_{j+1}$. At the end of this process, replace $\bR_{j+1}$ and $\bQ_{j+1}$ with $\widehat{\bR}_{j+1}$ and $\widehat{\bQ}_{j+1}$ to obtain the bound \[ \|{\bf E}_{j+1}\|_F \le \tol \cdot d_{j+1}\cdot\|\bR_{j+1}\|_F,\] since $d_{j+1} = d_j + 1$. The error bound for the base case $j=1$ clearly holds, completing the induction. { { \hfill\rule{3mm}{3mm} } } \medskip The approximate QR factorization that results from this algorithm could be used directly for the approximation of leverage scores. The perturbation theory of Ipsen and Wentworth~\cite{IW} describes how the tolerance in our algorithm will affect the accuracy of the resulting leverage scores.
We also note that for extra expediency this one-pass QR algorithm could be stopped when $\|\widehat{\bR}_j\|_F \approx \|{\bf A}\|_F$ (at the cost of an extra pass through ${\bf A}$ to compute $\|{\bf A}\|_F$), or applied to only a random sampling of $k$ columns of ${\bf A}$. (Drma\v{c} and Gugercin propose a different random approach to DEIM index selection, based on sampling rows of ${\bf V}$ to compute DEIM indices~\cite{DG}.) \section{CUR Approximation Properties} \label{sec:theory} While the theory presented in this section was designed to bound $\|{\bf A}-{\bf C}{\bf U}{\bf R}\|$ for the DEIM-CUR method, the analysis applies to \emph{any} CUR factorization with full rank ${\bf C} \in \mbox {\Bb R}^{m\times k}$ and ${\bf R}\in \mbox {\Bb R}^{k\times n}$, and ${\bf U} = {\bf C}^I{\bf A}{\bf R}^I$, regardless of the procedure used for selecting the columns and rows.% \footnote{We are grateful to Ilse Ipsen for noting the applicability of this analysis to all such ${\bf C}{\bf U}{\bf R}$ factorizations, and for also pointing out that, given knowledge of \emph{all} the singular values and vectors of ${\bf A}$, our Lemma~\ref{ProjErr} can be sharpened via application of~\cite[Thm.~9.1]{HMT11}. Indeed, Ipsen observes that the interpolatory projector proof of Lemma~\ref{ProjErr} can be adapted to simplify the multipage proof of \cite[Thm.~9.1]{HMT11}.} Consider a CUR factorization that uses row indices ${\bf p} \in \mbox {\Bb N}^k$ and column indices ${\bf q}\in\mbox {\Bb N}^k$, and set \[ {\bf P} = {\bf I}(@@:@@,{\bf p}) = [{\bf e}_{p_1}, \ldots, {\bf e}_{p_k}]\in\mbox {\Bb R}^{m\times k}, \qquad {\bf Q} = {\bf I}(@@:@@,{\bf q}) = [{\bf e}_{q_1}, \ldots, {\bf e}_{q_k}]\in\mbox {\Bb R}^{n\times k}. \] The first step in this analysis bounds the mismatch between ${\bf A}$ and its interpolatory projection $\mbox{\boldmath$\cal P$}{\bf A}$. 
\begin{lemma} \label{DEIM_bound} Assume ${\bf P}^T{\bf V}$ is invertible and let $\mbox{\boldmath$\cal P$} = {\bf V} ({\bf P}^T{\bf V})^{-1}{\bf P}^T$ be the interpolatory projector~$(\ref{eq:intproj})$. If ${\bf V}^T{\bf V}={\bf I}$, then any ${\bf A}\in\mbox {\Bb R}^{m\times n}$ satisfies \[ \| {\bf A} - \mbox{\boldmath$\cal P$}{\bf A} \| \le \|({\bf P}^T {\bf V})^{-1} \| \|({\bf I} - {\bf V} {\bf V}^T){\bf A} \|. \] Additionally, if ${\bf V}$ consists of the leading $k$ left singular vectors of ${\bf A}$, then \[ \| {\bf A} - \mbox{\boldmath$\cal P$}{\bf A} \| = \|({\bf I} - \mbox{\boldmath$\cal P$}) {\bf A}\| \le \|({\bf P}^T {\bf V})^{-1} \|\, \sigma_{k+1}. \] \end{lemma} \noindent{\it Proof:} First note that $\mbox{\boldmath$\cal P$} {\bf V} = {\bf V} ({\bf P}^T {\bf V})^{-1} {\bf P}^T {\bf V} = {\bf V}$, so that $({\bf I} - \mbox{\boldmath$\cal P$}){\bf V} = {\bf 0}$. Therefore \[ \| {\bf A} - \mbox{\boldmath$\cal P$}{\bf A} \| = \|({\bf I} - \mbox{\boldmath$\cal P$}) {\bf A}\| = \|({\bf I} - \mbox{\boldmath$\cal P$})({\bf I} - {\bf V} {\bf V}^T) {\bf A}\| \le \|({\bf I} - \mbox{\boldmath$\cal P$})\| \|({\bf I} - {\bf V} {\bf V}^T) {\bf A}\|. \] It is well known that \[ \|{\bf I} - \mbox{\boldmath$\cal P$}\| = \| \mbox{\boldmath$\cal P$} \| = \|({\bf P}^T {\bf V})^{-1} \| \] so long as $\mbox{\boldmath$\cal P$} \ne {\bf 0} \ {\rm or} \ {\bf I}$; see, e.g.,~\cite{Szy06}. This establishes the first result. The second follows from the fact that \[ \|({\bf I} - {\bf V} {\bf V}^T) {\bf A}\| = \| {\bf A} - {\bf V} {\bf S} {\bf W}^T\| = \sigma_{k+1} \] when ${\bf V}$ consists of the leading $k$ left singular vectors of ${\bf A}$. { { \hfill\rule{3mm}{3mm} } } \medskip Now let ${\bf V} {\bf S} {\bf W}^T \approx {\bf A}$ be a rank-$k$ SVD of ${\bf A}$. (The singular vectors play a crucial role in this analysis, even if ${\bf p}$ and ${\bf q}$ were selected using some scheme that did not reference them.) 
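The first inequality in Lemma~\ref{DEIM_bound} is easy to check numerically. The following sketch (ours, not from the paper; it uses NumPy, a random test matrix with known singular values, and an arbitrary choice of row indices) verifies the bound when ${\bf V}$ holds the leading $k$ left singular vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 60, 40, 5

# Test matrix with known, decaying singular values s[0] > s[1] > ...
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
Wfull, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = (U[:, :n] * s) @ Wfull.T

V = np.linalg.svd(A)[0][:, :k]            # leading k left singular vectors
p = rng.choice(m, size=k, replace=False)  # any rows with P^T V invertible

PtV_inv = np.linalg.inv(V[p, :])          # (P^T V)^{-1}
PA = V @ PtV_inv @ A[p, :]                # interpolatory projection of A
err = np.linalg.norm(A - PA, 2)
bound = np.linalg.norm(PtV_inv, 2) * s[k] # ||(P^T V)^{-1}|| * sigma_{k+1}
assert err <= bound * (1 + 1e-12)
```

Here the singular values are prescribed, so $\sigma_{k+1}$ is known exactly; the assertion holds for any row selection that leaves ${\bf P}^T{\bf V}$ invertible.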
In addition to the interpolatory projector $\mbox{\boldmath$\cal P$} = {\bf V}({\bf P}^T{\bf V})^{-1}{\bf P}^T$ that operates on the left of ${\bf A}$, we shall also use $\mbox{\boldmath$\cal Q$} = {\bf Q}({\bf W}^T {\bf Q})^{-1} {\bf W}^T$, which operates on the right of ${\bf A}$.\ \ Assuming that ${\bf P}^T{\bf V}$ and ${\bf W}^T{\bf Q}$ are invertible, define the error constants \[ \eta_p \equiv \|({\bf P}^T{\bf V})^{-1}\|, \qquad \eta_q \equiv \|({\bf W}^T{\bf Q})^{-1}\|. \] Lemma~\ref{DEIM_bound} implies \begin{equation} \label{eq:iebnd} \| {\bf A} ({\bf I} - \mbox{\boldmath$\cal Q$}) \| \le \eta_q \,\sigma_{k+1} \quad {\rm and } \quad \| ({\bf I} - \mbox{\boldmath$\cal P$}) {\bf A} \| \le \eta_p \,\sigma_{k+1}. \end{equation} The next lemma shows that these bounds on the error of the interpolatory projection of ${\bf A}$ onto the selected columns and rows also apply to the \emph{orthogonal} projections of ${\bf A}$ onto the same column and row spaces. \begin{lemma} \label{ProjErr} Suppose the row and column indices ${\bf p}$ and ${\bf q}$ give full rank matrices ${\bf C} = {\bf A}(\,:\,,{\bf q}) = {\bf A}{\bf Q}\in\mbox {\Bb R}^{m\times k}$ and ${\bf R} = {\bf A}({\bf p},\,:\,)={\bf P}^T{\bf A}\in\mbox {\Bb R}^{k\times n}$, with finite error constants $\eta_p$ and $\eta_q$, and suppose that $k<\min\{m,n\}$. Then \[ \| ({\bf I} - {\bf C} {\bf C}^I) {\bf A} \| \le \eta_q \,\sigma_{k+1} \quad {\rm and}\quad \| {\bf A}({\bf I} - {\bf R}^I {\bf R}) \| \le \eta_p \,\sigma_{k+1}. \] \end{lemma} {\it Proof:} Using the formula ${\bf C}= {\bf A}{\bf Q}$, we have ${\bf C}^I = ({\bf C}^T{\bf C})^{-1}{\bf C}^T = ({\bf Q}^T{\bf A}^T{\bf A}{\bf Q})^{-1}({\bf A}{\bf Q})^T$, so the orthogonal projection of ${\bf A}$ onto ${\rm Ran}({\bf C})$ is \[ {\bf C} {\bf C}^I {\bf A} = ({\bf A} {\bf Q}({\bf Q}^T {\bf A}^T {\bf A} {\bf Q})^{-1} {\bf Q}^T {\bf A}^T){\bf A} = {\bf A} ({\bf Q} ({\bf Q}^T {\bf A}^T {\bf A} {\bf Q})^{-1} {\bf Q}^T {\bf A}^T{\bf A}). 
\] Hence the error in the orthogonal projection of ${\bf A}$ is \begin{equation} \label{eq:lemid1} ({\bf I} - {\bf C} {\bf C}^I) {\bf A} = {\bf A}({\bf I} - \mbox{\boldmath$\Phi$}), \ \ {\rm where} \ \ \mbox{\boldmath$\Phi$} = {\bf Q}({\bf Q}^T {\bf A}^T {\bf A} {\bf Q})^{-1} {\bf Q}^T {\bf A}^T{\bf A}. \end{equation} Note that $\mbox{\boldmath$\Phi$}$ is an oblique projector onto ${\rm Ran}({\bf Q})$, so $\mbox{\boldmath$\Phi$} {\bf Q} = {\bf Q}$. Therefore, $\mbox{\boldmath$\Phi$} \mbox{\boldmath$\cal Q$} = \mbox{\boldmath$\cal Q$}$, since \[ \mbox{\boldmath$\Phi$} \mbox{\boldmath$\cal Q$} = \mbox{\boldmath$\Phi$} {\bf Q} ({\bf W}^T{\bf Q})^{-1} {\bf W}^T = {\bf Q} ({\bf W}^T{\bf Q})^{-1} {\bf W}^T = \mbox{\boldmath$\cal Q$}. \] This implies that \[ {\bf A}({\bf I} - \mbox{\boldmath$\Phi$}) = {\bf A}({\bf I} - \mbox{\boldmath$\Phi$})({\bf I} - \mbox{\boldmath$\cal Q$}) = ({\bf I} - {\bf C} {\bf C}^I) {\bf A} ({\bf I} - \mbox{\boldmath$\cal Q$}), \] and so from~(\ref{eq:lemid1}) we have \begin{eqnarray*} \|({\bf I} - {\bf C} {\bf C}^I) {\bf A} \| &=& \| {\bf A}({\bf I} - \mbox{\boldmath$\Phi$})\| \\ &=& \| ({\bf I} - {\bf C} {\bf C}^I) {\bf A} ({\bf I} - \mbox{\boldmath$\cal Q$})\| \\ &\le& \| {\bf I} - {\bf C} {\bf C}^I \| \|{\bf A} ({\bf I} - \mbox{\boldmath$\cal Q$})\| \\ &\le& \eta_q\,\sigma_{k+1}. \end{eqnarray*} The last line follows from the bound~(\ref{eq:iebnd}) and the fact that $ \| {\bf I} - {\bf C} {\bf C}^I \| = 1$, since ${\bf C}\bC^I$ is an orthogonal projector and $k<\min\{m,n\}$. 
A similar argument shows that \[ {\bf A}({\bf I} - {\bf R}^I {\bf R}) = ({\bf I} - \mbox{\boldmath$\Psi$}) {\bf A} \] where $\mbox{\boldmath$\Psi$} = {\bf A} {\bf A}^T {\bf P} ({\bf P}^T{\bf A}\bA^T{\bf P})^{-1}{\bf P}^T$, and also that \[ ({\bf I} - \mbox{\boldmath$\Psi$}){\bf A} = ({\bf I} - \mbox{\boldmath$\cal P$})({\bf I} - \mbox{\boldmath$\Psi$}){\bf A} = ({\bf I} - \mbox{\boldmath$\cal P$}) {\bf A} ({\bf I} - {\bf R}^I {\bf R}), \] from which follows the error bound \[ \|{\bf A}({\bf I} - {\bf R}^I {\bf R})\| \le \|({\bf I} - \mbox{\boldmath$\cal P$}) {\bf A} \| \|{\bf I} - {\bf R}^I {\bf R}\| \le \eta_p \,\sigma_{k+1}.\qquad { { \hfill\rule{3mm}{3mm} } } \] \medskip The main result on approximation of ${\bf A}$ by ${\bf C}{\bf U}{\bf R}$ readily follows from combining this last lemma with a basic CUR analysis technique used by Mahoney and Drineas~\cite[eq.~(6)]{MD09}. \begin{theorem} \label{basic_bound} Given ${\bf A}\in\mbox {\Bb R}^{m\times n}$ and $1\le k<\min\{m,n\}$, let ${\bf C} = {\bf A}(\,:\,,{\bf q})\in\mbox {\Bb R}^{m\times k}$ and ${\bf R} = {\bf A}({\bf p},\,:\,)\in\mbox {\Bb R}^{k\times n}$ with finite error constants $\eta_p$ and $\eta_q$, and set ${\bf U} = {\bf C}^I{\bf A}{\bf R}^I$. Then \[ \| {\bf A} - {\bf C} {\bf U} {\bf R} \| \le (\eta_p + \eta_q)\, \sigma_{k+1}. \] \end{theorem} \noindent{\it Proof:} From the definitions, \[ {\bf A} - {\bf C} {\bf U} {\bf R} = {\bf A} - {\bf C} {\bf C}^I {\bf A} {\bf R}^I {\bf R} = ({\bf I} - {\bf C} {\bf C}^I ){\bf A} + {\bf C}\bC^I {\bf A} ({\bf I} - {\bf R}^I {\bf R}). \] Applying Lemma~\ref{ProjErr}, \begin{eqnarray*} \|{\bf A} - {\bf C} {\bf U} {\bf R} \| &\le& \|({\bf I} - {\bf C} {\bf C}^I ){\bf A} \| + \|{\bf C}\bC^I \| \| {\bf A} ({\bf I} - {\bf R}^I {\bf R}) \| \\ &\le& \eta_q\,\sigma_{k+1} + \eta_p\, \sigma_{k+1}\\ &=& ( \eta_p + \eta_q )\,\sigma_{k+1}, \end{eqnarray*} since $\|{\bf C}\bC^I\| = 1$. 
{ { \hfill\rule{3mm}{3mm} } } \medskip Theorem~\ref{basic_bound} shows that ${\bf C}{\bf U}{\bf R}$ is within a factor of $\eta_p +\eta_q$ of the optimal rank-$k$ approximation, hence these error constants suggest a way to assess a wide variety of column/row selection schemes. The quality of the approximation is controlled by the conditioning of the selected $k$ rows of the dominant $k$ (exact) singular vectors. If those singular vectors are available as part of the column/row selection process, then Theorem~\ref{basic_bound} provides an \emph{a posteriori} bound requiring only the fast ($ {\cal O} (k^3)$) computation of $\eta_p$ and $\eta_q$, and thus could suggest methods for adjusting either $k$ or the point selection process to reduce the error constants. In this context, notice that if ${\bf V}{\bf S}{\bf W}^T$ is only an \emph{approximation} to the optimal rank-$k$ SVD with ${\bf V}$ and ${\bf W}$ having orthonormal columns (as computed, for example, using the incremental QR algorithm described in the next section), the preceding analysis gives \begin{eqnarray} \|{\bf A} - {\bf C}{\bf U}{\bf R}\| \!\!&\le&\!\! \|({\bf I}-{\bf C}\bC^I){\bf A}\| + \|{\bf A}({\bf I}-{\bf R}^I{\bf R})\| \nonumber \\ \!\!&=&\!\! \|{\bf A}({\bf I}-\mbox{\boldmath$\cal Q$})\| + \|({\bf I}-\mbox{\boldmath$\cal P$}){\bf A}\| \nonumber \\ \!\!&\le&\!\! \|({\bf W}^T{\bf Q})^{-1}\| \|{\bf A}({\bf I}-{\bf W}\bW^T)\| + \|({\bf P}^T{\bf V})^{-1}\| \|({\bf I}-{\bf V}\bV^T){\bf A}\|, \label{eq:approxbnd} \end{eqnarray} showing how $\sigma_{k+1}$ in Theorem~\ref{basic_bound} is replaced by the error in the approximate SVD through $\|{\bf A}({\bf I}-{\bf W}\bW^T)\|$ and $\|({\bf I}-{\bf V}\bV^T){\bf A}\|$. In this case $\|({\bf W}^T{\bf Q})^{-1}\|$ and $\|({\bf P}^T{\bf V})^{-1}\|$ are computed using the approximate singular vectors in ${\bf V}$ and ${\bf W}$, rather than the exact singular vectors in the theorem. 
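Since Theorem~\ref{basic_bound} applies to \emph{any} index choice with finite error constants, it is easy to test numerically. The sketch below (ours, not the authors'; NumPy, a random test matrix with prescribed singular values, and random index sets) forms ${\bf C}$, ${\bf U}={\bf C}^I{\bf A}{\bf R}^I$, and ${\bf R}$ and checks the bound with the exact singular vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20, 12, 4

# Test matrix with singular values 1, 1/2, 1/4, ...
U0, _ = np.linalg.qr(rng.standard_normal((m, m)))
W0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = (U0[:, :n] * s) @ W0.T

svdU, S, Vt = np.linalg.svd(A)
V, W = svdU[:, :k], Vt[:k, :].T           # exact leading singular vectors

p = rng.choice(m, size=k, replace=False)  # arbitrary row indices
q = rng.choice(n, size=k, replace=False)  # arbitrary column indices
C, R = A[:, q], A[p, :]
Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)   # U = C^I A R^I

eta_p = np.linalg.norm(np.linalg.inv(V[p, :]), 2)  # ||(P^T V)^{-1}||
eta_q = np.linalg.norm(np.linalg.inv(W[q, :]), 2)  # ||(W^T Q)^{-1}||
err = np.linalg.norm(A - C @ Umid @ R, 2)
assert err <= (eta_p + eta_q) * S[k] * (1 + 1e-10)
```

Random index sets generically give invertible ${\bf P}^T{\bf V}$ and ${\bf W}^T{\bf Q}$, so $\eta_p$ and $\eta_q$ are finite here; the assertion then follows from the theorem.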
Alternatively, if one has probabilistic bounds for $\eta_p$ and $\eta_q$, then Theorem~\ref{basic_bound} immediately gives a probabilistic bound for $\| {\bf A} - {\bf C} {\bf U} {\bf R} \|$. Numerical examples in Section~\ref{sec:examples} compare how the error constants evolve as $k$ increases for the DEIM-CUR factorization and several other factorizations based on leverage scores. \subsection{Interpretation of the bound for DEIM-CUR} For DEIM-CUR, we can ensure the hypotheses of Theorem~\ref{basic_bound} are satisfied and bound the error constants. Suppose the DEIM points are selected using the exact rank-$k$ SVD ${\bf A} \approx {\bf V}{\bf S}{\bf W}^T$.\ \ Lemma~\ref{DEIM_inv} ensures that the matrices ${\bf P}^T{\bf V}$ and ${\bf W}^T{\bf Q}$ are invertible, so $\eta_p$ and $\eta_q$ are finite. The DEIM strategy also gives full rank ${\bf C}$ and ${\bf R}$ matrices, presuming $k\le {\rm rank}({\bf A})$. To see this, note that for any unit vector ${\bf y}\in\mbox {\Bb R}^k$, \[ {\bf C}{\bf y} = {\bf A} {\bf Q}\,{\bf y} = {\bf V} {\bf S} {\bf W}^T {\bf Q}\,{\bf y} + {\bf E}{\bf Q}\, {\bf y}, \] where ${\bf E} = {\bf A} - {\bf V}{\bf S}{\bf W}^T$.\ \ Since ${\bf V}^T {\bf E} = {\bf 0},$ \[ \| {\bf C}{\bf y}\|^2 = \| {\bf A} {\bf Q}\,{\bf y}\|^2 =\| {\bf V} {\bf S} {\bf W}^T {\bf Q}{\bf y}\|^2 + \| {\bf E}{\bf Q}\,{\bf y} \|^2. \] Since $\| {\bf W}^T {\bf Q}{\bf y}\| \ge \|{\bf y}\|/\|({\bf W}^T{\bf Q})^{-1}\| = 1/\eta_q$, \[ \| {\bf C}{\bf y}\| \ge \| {\bf V} {\bf S} {\bf W}^T {\bf Q}{\bf y}\| \ge \sigma_k / \eta_q > 0. \] Thus ${\bf C}$ must be full rank. A similar argument shows ${\bf R}$ to be full rank as well. \medskip The examples in Section~\ref{sec:examples} illustrate that $\eta_p$ and $\eta_q$ are often quite modest for the DEIM-CUR approach, e.g., $ {\cal O} (100)$.\ \ However, worst-case bounds permit significant growth with $k$ that is generally not observed in practice. 
We begin by stating a bound on this growth developed by Chaturantabut and Sorensen~\cite[Lemma~3.2]{CS10}. \begin{lemma} For the DEIM selection scheme derived above, \[\eta_p \le {(1+\sqrt{2m})^{k-1}\over \|{\bf v}_1\|_\infty}, \qquad \eta_q \le {(1+\sqrt{2n})^{k-1} \over \|{\bf w}_1\|_\infty},\] where ${\bf v}_1$ and ${\bf w}_1$ denote the first columns of ${\bf V}$ and ${\bf W}$. \end{lemma} Motivated by recent work by Drma\v{c} and Gugercin~\cite{DG} on a modified DEIM-like algorithm for model reduction, we can improve this bound considerably. \begin{lemma} \label{lemma:new_eta_bound} For the DEIM selection scheme derived above, \[ \eta_p < \sqrt{mk\over 3} \ {2^k}, \qquad \eta_q < \sqrt{nk\over 3} \ {2^k}. \] \end{lemma} \noindent{\it Proof:} We shall prove the result for $\eta_p$; the result for $\eta_q$ follows similarly. As usual, let ${\bf V} \in \mbox {\Bb R}^{m \times k}$ have orthonormal columns, and let ${\bf p} = {\rm DEIM}({\bf V})$ denote the row index vector derived from the DEIM selection scheme described above. Let ${\bf P} = {\bf I}(\,:\,,{\bf p})$ so that ${\bf P}^T {\bf V} = {\bf V}({\bf p},\,:\,)$. Without loss of generality, assume the DEIM index selection gives ${\bf p} = [1, 2, \ldots, k]^T$. (Otherwise, introduce a permutation matrix to the argument that follows.) As described in section~\ref{sec:deim}, the DEIM index selection is precisely the index selection of LU decomposition with partial pivoting, so one can write \[ {\bf V} = {\bf L}\kern1pt {\bf T}, \] where the nonsingular matrix ${\bf T}\in\mbox {\Bb R}^{k\times k}$ is upper triangular and ${\bf L}\in\mbox {\Bb R}^{m\times k}$ is unit lower triangular with $ | {\bf L}(i,j) | \le 1$, ${\bf L}(j,j) = 1, \ 1 \le j \le k$ and ${\bf L}(i,j) = 0, \ j > i $. Let ${\bf L}_1 \equiv {\bf L}(1:k,1:k)$. Then ${\bf V}({\bf p},\,:\,) = {\bf L}_1 {\bf T}$ and thus \[ \eta_p \equiv \| ({\bf P}^T {\bf V})^{-1} \| = \|({\bf L}_1 {\bf T})^{-1} \| \le \| {\bf T}^{-1}\| \| {\bf L}_1^{-1}\|. 
\] (The linear independence of the columns of ${\bf V}$ ensures that ${\bf L}_1$ and ${\bf T}$ are invertible.) Upper bounds for $\|{\bf T}^{-1}\|$ and $\| {\bf L}_1^{-1}\|$ will give an upper bound for $\eta_p$. To bound $\| {\bf T}^{-1}\|$, let ${\bf y}\in\mbox {\Bb R}^k$ be a unit vector such that $\| {\bf T}^{-1} {\bf y} \| = \| {\bf T}^{-1}\|$. Then \[ \| {\bf T}^{-1} \| = \| {\bf T}^{-1} {\bf y} \| = \| {\bf V} {\bf T}^{-1} {\bf y} \| = \| {\bf L} {\bf y} \|. \] Now \[ \| {\bf L} {\bf y} \| \le \sqrt{m}\, \|{\bf L}{\bf y}\|_\infty = \sqrt{m}\, |{\bf e}_j^T {\bf L} {\bf y} |, \] for some index $j\in\{1,\ldots, m\}$.\ \ By the Cauchy--Schwarz inequality and the bound $|{\bf L}(i,j)|\le 1$, \[ |{\bf e}_j^T {\bf L} {\bf y} | \le \|{\bf L}^T {\bf e}_j\| \| {\bf y} \| \le \sqrt{k}\cdot 1, \] and so it follows that $\| {\bf T}^{-1} \| \le \sqrt{mk}$. The inverse of ${\bf L}_1$ can be bounded using forward substitution. Let ${\bf L}_1 {\bf z} = {\bf y}$, where $\|{\bf y}\| = 1$ and $\| {\bf z} \| = \| {\bf L}_1^{-1}\|$. Forward substitution provides \begin{eqnarray*} \zeta_1 &=& \gamma_1 \\ \zeta_i &=& \gamma_i - \sum_{j=1}^{i-1} \lambda_{ij} \zeta_j, \rlap{\qquad $i = 2,\ldots, k$,}\\ \end{eqnarray*} where $\zeta_i = {\bf z}(i)$, $\gamma_i = {\bf y}(i)$ and $\lambda_{ij} = {\bf L}(i,j)$. We now use induction to prove \[ | \zeta_i | \le 2^{i-1}, \rlap{\qquad $1 \le i \le k$.} \] First note that $\zeta_1 = \gamma_1$, so $|\zeta_1| \le |\gamma_1| \le 1 = 2^0$, establishing the base case. Assume for some $i \ge 1$ that \[ | \zeta_j | \le 2^{j-1}, \rlap{\qquad $1 \le j \le i$.} \] Then \begin{eqnarray*} | \zeta_{i+1} | = \bigg| \gamma_{i+1} - \sum_{j=1}^{i} \lambda_{i+1,j} \zeta_j \bigg| &\le& |\gamma_{i+1}| + \sum_{j=1}^{i} |\lambda_{i+1,j}| |\zeta_j | \\ &\le& 1 + \sum_{j=1}^{i} 1 \cdot 2^{j-1} = 1 + \sum_{j=0}^{i-1} 2^j = 1 + (2^i - 1) = 2^i \end{eqnarray*} to complete the induction. 
Now since $\|{\bf z}\| = \|{\bf L}_1^{-1}\|$, \[ \| {\bf L}_1^{-1} \|^2 = {\bf z}^T {\bf z} = \sum_{i=1}^k |\zeta_i|^2 \le \sum_{i=0}^{k-1} 4^i = (4^k - 1)/3. \] Thus $ \| {\bf L}_1^{-1} \| < 2^k/\sqrt{3}$, which, together with the bound on the inverse of ${\bf T}$, provides the final result for $m>k$: \[ \eta_p \equiv \| ({\bf P}^T {\bf V})^{-1} \| < \sqrt{mk\over 3}\, 2^k. \] If $m=k$, then $\eta_p=1$, and the result holds trivially. \quad { { \hfill\rule{3mm}{3mm} } } \medskip Note that this proof relies only on the orthonormality of the columns of ${\bf V}$ and ${\bf W}$, and hence it applies when the DEIM selection scheme is applied to approximate singular vectors, as in the CUR error bound~(\ref{eq:approxbnd}). Lemma~\ref{lemma:new_eta_bound} was inspired by the proof technique developed by Drma\v{c} and Gugercin~\cite{DG} to bound $\|({\bf P}^T {\bf V})^{-1} \|$, when ${\bf P}$ is selected by applying a pivoted rank-revealing QR factorization scheme to ${\bf V}$.\ \ Note that this new bound is of the same order of magnitude as that for the Drma\v{c}--Gugercin scheme. In practice, their scheme seems to give slightly smaller growth that is more consistent over a wide range of examples. Neither scheme exhibited exponential growth in very extensive testing. For the DEIM approach, this absence of exponential growth is closely related to decades of experience with Gaussian elimination with partial pivoting. Element growth in ${\bf T}$ is bounded by a factor of $2^{k-1}$ (for a $k \times k$ matrix), and there is an example that achieves this growth. Nevertheless, this algorithm is almost exclusively used to solve linear systems because such growth is essentially never experienced in practice.% \footnote{See, for example, the extensive numerical tests involving random matrices described in~\cite[lecture~22]{TB97} and~\cite{TS90}. 
Interestingly, in the experiments of Trefethen and Schreiber~\cite{TS90}, random matrices with orthonormal columns tend to have slightly larger growth factors than Gaussian matrices, though both cases are very far indeed from the exponential upper bound.} Indeed, a similar near worst case example can be constructed for DEIM, although this growth has not been observed in practice. \medskip \noindent{\bf A Growth Example:} We now construct an orthonormal matrix ${\bf V}$ with the property \begin{equation}\label{eq:growthex} {1\over \sqrt{8}}\ 2^{k} < \eta_p \equiv \|({\bf P}^T {\bf V})^{-1} \| < \sqrt{m k\over 3}\ 2^k \end{equation} where ${\bf P}^T {\bf V} = {\bf V}({\bf p},\,:\,)$ with ${\bf p} = {\rm DEIM}({\bf V})$. To construct ${\bf V}$, begin by defining \[ {\bf L} := \left[\begin{array}{cccc} 1 & & & \\ -1 & 1 & & \\ \vdots & \ddots & \ddots & \\ -1 & \cdots & -1 & 1 \\ -1 & \cdots & -1 & -1 \\ \vdots & & & \vdots \\ -1 & \cdots & -1 & -1 \end{array}\right]\in \mbox {\Bb R}^{m\times k}.\] Now construct ${\bf V}{\bf T}_1 \equiv {\bf L}$ as an economy-sized QR factorization of ${\bf L}$ (with no column pivoting). Since the columns of ${\bf L}$ are linearly independent by construction, ${\bf T}_1\in\mbox {\Bb R}^{k\times k}$ is invertible; define ${\bf T} \equiv {\bf T}_1^{-1}$, so that ${\bf V} = {\bf L}\kern1pt{\bf T}$. (Note that ${\bf T}$ plays the same role it does in the proof of Lemma~\ref{lemma:new_eta_bound}.) If the DEIM procedure is applied to ${\bf V}$, then by construction ${\bf p} = [1, 2, \ldots, k]$ (in exact arithmetic): during the DEIM procedure, the relations \[ \mbox{\boldmath$\ell$}_j\kern1pt\tau_{jj} = {\bf v}_j - {\bf V}_{j-1}^{} ( {\bf P}_{j-1}^T {\bf V}_{j-1}^{})^{-1} {\bf P}_{j-1}^T {\bf v}_j, \rlap{\qquad $j > 1$} \] hold, with $\mbox{\boldmath$\ell$}_j = {\bf L}(\,:\,,j)$, $\tau_{jj} = {\bf T}(j,j)$, ${\bf v}_j = {\bf V}(\,:\,,j)$ , ${\bf P}_{j-1} = {\bf I}(\,:\,,1:j-1)$ and ${\bf V}_{j-1} = {\bf V}(\,:\,,1:j-1)$. 
Thus ${\bf p}(j) = j , \ j>1 $ and it is easily seen that ${\bf p}(1) = 1$. Note that $ {\bf V} {\bf T}^{-1} = {\bf L} $ implies ${\bf T}^{-T} {\bf T}^{-1} = {\bf L}^T {\bf L}$ hence $\| {\bf T} \| = 1/\sigma_k$, where $\sigma_k $ is the smallest singular value of ${\bf L}$.\ \ Let ${\bf y}$ be the corresponding right singular vector, so that \[ \sigma_k^2 = {\bf y}^T {\bf L}^T {\bf L} {\bf y}. \] We claim that $\sigma_k \ge \sqrt{2}$.\ \ To see this, write ${\bf L}$ in the form \[ {\bf L} = \left[ \begin{array}{c} {\bf I}_k \\ {\bf 0} \end{array} \right] - \left[ \begin{array}{c} {\bf L}_0 \\ {\bf E} \end{array} \right], \] where \[ {\bf I}_k = \left[\begin{array}{cccc} 1 \\ & 1 \\ & & \ddots \\ & & & 1 \end{array}\right], \quad {\bf L}_0 = \left[\begin{array}{cccc} 0 \\ 1 & \ddots \\ \vdots & \ddots & \ddots \\ 1 & \cdots & 1 & 0\end{array}\right], \quad {\bf E} = \left[\begin{array}{cccc} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1\end{array}\right] = {\bf f}\kern1pt {\bf e}^T, \] for ${\bf f} = [1,\ldots, 1]^T \in \mbox {\Bb R}^{m-k}$.\ \ \begin{eqnarray*} {\bf L}^T {\bf L} &=& {\bf I}_k - {\bf L}_0 - {\bf L}_0^T + {\bf L}_0^T {\bf L}_0 + {\bf E}^T {\bf E} \\[.25em] &=& {\bf I}_k - ({\bf e} {\bf e}^T - {\bf I}_k) + {\bf L}_0^T{\bf L}_0 + (m-k){\bf e}\bfe^T \\[.25em] &=& 2\kern1pt {\bf I}_k + {\bf L}_0^T {\bf L}_0 + (m - k -1) {\bf e} {\bf e}^T \\[.25em] &=& 2\kern1pt {\bf I}_k + {\bf M}, \end{eqnarray*} where ${\bf M}:= {\bf L}_0^T {\bf L}_0 + (m - k -1) {\bf e} {\bf e}^T$ is symmetric and positive semidefinite whenever $m>k$. Thus \[ \sigma_k^2 = {\bf y}^T {\bf L}^T {\bf L} {\bf y} = {\bf y}^T (2{\bf I}_k + {\bf M}) {\bf y} \ge 2 \] and hence it follows that \[ \| {\bf T} \| \le 1/\sqrt{2}. \] This implies \[ \|({\bf P}^T {\bf V})^{-1} \| = \| {\bf T}^{-1} {\bf L}_1^{-1} \| \ge \frac{\| {\bf L}_1^{-1} \|}{\| {\bf T} \|} \ge \sqrt{2}\ \|{\bf L}_1^{-1} \|. 
\] To complete the lower bound, we must analyze $\|{\bf L}_1^{-1}\|$. Forward substitution gives ${\bf L}_1^{-1} {\bf e}_1 = [1, 1, 2, 4, \ldots , 2^{k-2}]^T$ and thus \[ \|{\bf L}_1^{-1} \| > \|{\bf L}_1^{-1} {\bf e}_1\| = \sqrt{1 + (4^{k-1} - 1)/3} \ > 2^{k-2}. \] We arrive at the lower bound \[ \eta_p \equiv \|({\bf P}^T {\bf V})^{-1} \| \ge \sqrt{2}\ \|{\bf L}_1^{-1} \| \ > \sqrt{2}\cdot 2^{k-2}, \] and thus for this choice of ${\bf V}$, the DEIM error constant satisfies \[ {1\over\sqrt{8}}\ 2^k < \eta_p < \sqrt{{m k\over 3}} \ 2^k. \] This example is interesting because it relies on the behavior of the classic example for growth in LU decomposition~\cite[lecture~22]{TB97}. However, in this case the pathological growth is caused by ${\bf L}$ and not by ${\bf T}$.
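The growth example is easy to reproduce numerically. The sketch below (ours; NumPy) builds ${\bf L}$, orthonormalizes it via an economy QR factorization, uses ${\bf p}=[1,\ldots,k]$ (the DEIM selection for this ${\bf V}$ in exact arithmetic, as argued above), and checks the sandwich~(\ref{eq:growthex}):

```python
import numpy as np

m, k = 30, 8

# Growth matrix L: 1 on the diagonal of the top k-by-k block, -1 strictly
# below the diagonal, and rows of all -1's beneath.
L = np.tril(-np.ones((m, k)), -1)
L[np.arange(k), np.arange(k)] = 1.0

V, _ = np.linalg.qr(L)      # economy QR: V spans Ran(L), orthonormal columns
p = np.arange(k)            # DEIM indices for this V (exact arithmetic)
eta_p = np.linalg.norm(np.linalg.inv(V[p, :]), 2)

lower = 2.0 ** k / np.sqrt(8.0)          # (1/sqrt(8)) 2^k
upper = np.sqrt(m * k / 3.0) * 2.0 ** k  # sqrt(mk/3) 2^k
assert lower < eta_p < upper
```

For $m=30$ and $k=8$ the lower bound is already about $90$, so the exponential character of $\eta_p$ is visible at quite small $k$.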
https://arxiv.org/abs/1712.06587
Solving satisfiability using inclusion-exclusion
Using Maple, we implement a SAT solver based on the principle of inclusion-exclusion and the Bonferroni inequalities. Using randomly generated input, we investigate the performance of our solver as a function of the number of variables and number of clauses. We also test it against Maple's built-in tautology procedure. Finally, we implement the Lovász local lemma with Maple and discuss its applicability to SAT.
\section{Introduction to SAT} First, some terminology. A \emph{Boolean variable} is a variable which can take on values in $\{true, false\}$, or, equivalently, $\{0,1\}$ (e.g. $x$). A \emph{literal} is a Boolean variable or its negation (e.g. $\lnot x$). \emph{Disjunction} means ``or'' ($\lor$) and \emph{conjunction} means ``and'' ($\land$). A \emph{disjunctive clause} is a disjunction of literals (e.g. $x\lor \lnot y \lor z$); similarly, we can define the \emph{conjunctive clause}. A \emph{conjunctive normal form} (CNF) is a conjunction of disjunctive clauses (e.g. $\lnot z \land (y \lor z)\land(x\lor \lnot y)$); similarly, we can define the \emph{disjunctive normal form} (DNF). We say that a CNF $S$ in the variables $x_1,\dots,x_n$ is \emph{satisfiable} iff there exists an assignment of truth values to $x_1,\dots,x_n$ that makes $S$ true. For example, the CNF in the previous paragraph is satisfiable: the first clause forces $z=false$; then the second forces $y=true$; and the third forces $x=true$, giving us a valid assignment. On the other hand, the CNF $(x \lor y) \land \lnot x \land \lnot y$ is, of course, not satisfiable. Given a CNF in $n$ variables, one obvious way to determine its satisfiability is to check all $2^n$ assignments to the variables. There is an ongoing effort to develop more efficient algorithms to determine satisfiability; we call these algorithms ``SAT solvers.'' Currently, even the most efficient SAT solvers run in exponential time; one can always construct worst-case scenarios that take the algorithm a long time to analyze. In fact, SAT has been shown to be NP-complete, so a polynomial-time SAT solver would indeed be breaking news. Here, we shall certainly not present a polynomial-time algorithm, or even one that is practically competitive with current solvers. Rather, we wish to outline a simple, novel approach to solving SAT, analyze its strengths and weaknesses, and discuss how it might be used as the basis for a more powerful solver. 
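The obvious brute-force approach mentioned above is worth stating in code, if only as a correctness baseline. Below is a minimal Python sketch (ours; the paper's own code is Maple), encoding the literal $x_i$ as the integer $i$ and $\lnot x_i$ as $-i$:

```python
from itertools import product

def satisfiable(cnf, n):
    """Brute-force satisfiability check of a CNF over n variables.

    cnf: a collection of clauses, each a set of nonzero ints, where the
    integer i encodes x_i and -i encodes (not x_i).
    """
    for bits in product([False, True], repeat=n):
        def val(lit):
            b = bits[abs(lit) - 1]
            return b if lit > 0 else not b
        # The CNF holds iff every clause contains a true literal.
        if all(any(val(l) for l in clause) for clause in cnf):
            return True
    return False

# The satisfiable example from above: (not z) and (y or z) and (x or not y),
# with (x, y, z) = (x_1, x_2, x_3).
assert satisfiable([{-3}, {2, 3}, {1, -2}], 3)
# The unsatisfiable example: (x or y) and (not x) and (not y).
assert not satisfiable([{1, 2}, {-1}, {-2}], 2)
```

Its $2^n$ running time is exactly what the inclusion-exclusion approach of the next section tries to beat.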
\section{SAT and inclusion-exclusion} Suppose $S=C_1\land\cdots\land C_N$ is a CNF with $N$ clauses and $n$ variables $x_1,\dots,x_n$. Then, $S$ is satisfiable iff $\lnot S=\lnot C_1 \lor \cdots \lor \lnot C_N$ is not a tautology. So SAT can be rephrased as ``given an arbitrary DNF, determine if it is a tautology.'' We shall use this formulation in our approach. Thus, let $S=C_1\lor\cdots\lor C_N$ be a DNF with $N$ clauses and $n$ variables $x_1,\dots,x_n$. We wish to determine if all $2^n$ possible assignments to the variables result in $S$ being true. We can interpret this probabilistically: If we pick a uniform random assignment, is $\Pr[S=true]=1$? Equivalently, letting $A_k$ be the event that $C_k$ is satisfied, is $\Pr[\cup_k A_k]=1$? Recall that we can compute the probability of such a union using the following: \begin{prop}[Principle of Inclusion-Exclusion] Let $A_1,\dots,A_N$ be events in a finite probability space. For $I \subset [N]$, define $$A_I=\bigcap_{j\in I}A_j.$$ Then, $$ \Pr[\cup_k A_k]=\sum_{i=1}^N (-1)^{i+1} \sum_{I \subset [N], |I|=i} \Pr[A_I]. $$ \end{prop} So our problem amounts to finding $\Pr[A_I]$ for arbitrary $I \subset [N]$, which is easy: Let $V$ be the set of literals appearing in the clauses $\{C_j : j \in I\}$; then, $\Pr[A_I]=0$ if $V$ contains a variable and its negation, and $\Pr[A_I]=2^{-|V|}$ otherwise. This idea is easily implemented to produce a simple SAT solver which always terminates with a correct answer. Such a solver, along with some test results, is briefly outlined in [GC]. However, notice that the sums in Proposition 2.1 grow with the number of clauses. Luckily, we have the \emph{Bonferroni inequalities}, which tell us that we can compute the outer sum partially and still get a bound on the probability we are after: \begin{prop}[Bonferroni Inequalities] With the notation of Proposition 2.1, let $1\leq k \leq N$. 
Then, $$ \Pr[\cup_k A_k] \bowtie \sum_{i=1}^k (-1)^{i+1} \sum_{I \subset [N], |I|=i} \Pr[A_I], $$ where $\bowtie$ means $\leq$ if $k$ is odd and $\geq$ if $k$ is even. \end{prop} Using the Bonferroni inequalities and looping over $k$, we get an alternating sequence of upper and lower bounds on $\Pr[\cup_k A_k]$. If at some point an upper bound shows $\Pr[\cup_k A_k]<1$, or a lower bound shows $\Pr[\cup_k A_k]\geq 1$, then we can exit the loop and conclude that $S$ is not a tautology or is a tautology, respectively. In the worst case, we have to go up to $k=N$, but (hopefully) we arrive at a decision after significantly fewer steps. \subsection{Details of the algorithm} The method outlined above is implemented in the Maple package \verb+sat.txt+; see Section 5 for instructions to obtain the package. We encode a DNF as a set of sets of integers: for example, \verb+{{1,-2},{3}}+ corresponds to $(x_1 \land \lnot x_2) \lor x_3$. The \verb+Merge+ procedure is the equivalent of conjunction: \verb+Merge({-1,2},{2,3})+ returns \verb+{-1,2,3}+, while \verb+Merge({1,2},{-2,3})+ returns \verb+false+ since these two clauses are ``incompatible,'' i.e., not simultaneously satisfiable. The main procedure is \verb+Taut+. It inputs a DNF \verb+S+ and a threshold \verb+K+. We initialize \verb+P=0+ and \verb+N=nops(S)+, the number of clauses. For \verb+k+ from 1 to \verb+K+, we compute the \verb+k+th term in the inclusion-exclusion sum and add it to \verb+P+. For the sake of efficiency, a table is used to keep track of all \emph{compatible} conjunctions of \verb+k+ clauses in \verb+S+, so that at the \verb+k+th stage, the table has at \emph{most} \verb+N+ choose \verb+k+ entries. If we obtain a conclusive bound at some point in the loop, we return \verb+[ans,k]+, where the first entry is \verb+true+ or \verb+false+, depending on whether we found \verb+S+ to be a tautology. If we complete the whole loop without coming to a conclusion, we return \verb+[P,k]+. 
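For illustration, here is a compact Python translation of this loop (a sketch of ours, not the Maple package itself; the names \verb+merge+ and \verb+taut+ mirror \verb+Merge+ and \verb+Taut+, and exact rational arithmetic keeps the comparisons with 1 reliable):

```python
from fractions import Fraction

def merge(c1, c2):
    """Conjunction of two conjunctive clauses; None if incompatible."""
    v = c1 | c2
    return None if any(-l in v for l in v) else v

def taut(dnf, K):
    """Bonferroni tautology test for a DNF (a list of frozensets of ints).

    Returns (True/False, k) if conclusive by stage k <= K, else (P, K).
    """
    N = len(dnf)
    P = Fraction(0)
    compatible = {(): frozenset()}   # index tuple -> merged literal set
    for k in range(1, min(K, N) + 1):
        prev, compatible = compatible, {}
        for I, lits in prev.items():
            for j in range(I[-1] + 1 if I else 0, N):
                m = merge(lits, dnf[j])
                if m is not None:
                    compatible[I + (j,)] = m
        term = sum(Fraction(1, 2 ** len(v)) for v in compatible.values())
        P += term if k % 2 == 1 else -term
        if k % 2 == 1 and P < 1:
            return (False, k)        # odd k: upper bound below 1
        if k % 2 == 0 and P >= 1:
            return (True, k)         # even k: lower bound at least 1
        if k == N:
            return (P == 1, k)       # full sum: P is now exact
    return (P, K)

assert taut([frozenset({1}), frozenset({-1})], 2) == (True, 2)   # x or (not x)
assert taut([frozenset({1}), frozenset({2})], 2) == (False, 2)   # x or y
```

The table of compatible conjunctions plays the same role as the Maple table: incompatible index sets are pruned as soon as they appear, so they never spawn larger ones.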
\section{Testing the solver} To test our solver, we use the procedure \verb+RandNF(n,N,M)+, which generates a random DNF with \verb+N+ clauses in \verb+n+ variables, each clause containing \verb+M+ uniform random literals. By default, \verb+M=3+, which we shall assume from now on. The procedure \verb+MetaTaut(n,N,K,M)+ runs \verb+Taut+ on \verb+M+ random DNFs with \verb+n+ variables and \verb+N+ clauses and threshold \verb+K+, and it records the run time and output of each trial. The procedure \verb+MapleTaut(n,N,K,M)+ does the same, but instead of our solver, it uses Maple's built-in \verb+tautology+ procedure. \subsection{Runtimes} As one would expect, our solver seems to perform most competently when there are lots of variables but not too many clauses. For example, Figure 1 shows a histogram of runtimes resulting from running \verb+Taut+ on 1000 random DNFs generated by \verb+RandNF(100,10)+. In all of these cases, our solver arrived at the correct answer by the third step of the loop, and the longest runtime was .006s. As Figure 1 shows, the Maple solver performed more slowly in this case. Further, we tested \verb+Taut+ on 10 random DNFs generated by \verb+RandNF(1000,20)+, and it decided each of them was not a tautology by the seventh inclusion-exclusion step. The runtimes ranged from 2 to 58 minutes, with an average of 19. In this case, using \verb+MapleTaut+ resulted in an overflow error. On the other hand, Figure 2 shows the results when 100 random DNFs generated by \verb+RandNF(100,20)+ are used. Already, the number of clauses is enough to make our solver slower than Maple. In fact, in this case, only 15 of the 100 random DNFs are solvable by \verb+Taut+ with threshold $k=6$. Also, we should point out that, in the situations where our method does seem promising, it almost always returns false. So, as it is, it probably has little practical use. 
Further, we are only testing it against a na\"ive built-in Maple tautology function, rather than a sophisticated SAT solver. \begin{figure} \centering \begin{subfigure}[l]{0.45\textwidth} \includegraphics[width=\textwidth]{satshort.eps} \end{subfigure} \begin{subfigure}[r]{0.45\textwidth} \includegraphics[width=\textwidth]{mapleshort.eps} \end{subfigure} \caption{As shown in these runtime frequency plots, when the variable-to-clause ratio is high enough, our solver (left) out-performs Maple (right).} \end{figure} \begin{figure} \centering \begin{subfigure}[l]{0.45\textwidth} \includegraphics[width=\textwidth]{satlong.eps} \end{subfigure} \begin{subfigure}[r]{0.45\textwidth} \includegraphics[width=\textwidth]{maplelong.eps} \end{subfigure} \caption{With a lower variable-to-clause ratio, our solver (left) loses to Maple (right).} \end{figure} \subsection{Thresholds} Recall that, in \verb+Taut(S,k)+, the argument \verb+k+ is the threshold, that is, the number of inclusion-exclusion summands computed before the procedure quits. Now, we investigate how the required threshold is related to the number of variables \verb+n+ and the number of clauses \verb+N+. The procedure \verb+HowManyFinished(n,N,k,M)+ runs \verb+Taut+ with threshold \verb+k+ on \verb+M+ random DNFs generated by \verb+RandNF(n,N)+, and it outputs the proportion of conclusive runs. In other words, it estimates the \emph{success probability}: the probability that a DNF generated by \verb+RandNF(n,N)+ is solvable by our algorithm with threshold \verb+k+. If we fix $n$ and $k$ and vary $N$, empirical evidence shows a phase shift behavior in the success probability. Namely, there appears to be a critical number of clauses $N_c(n,k)$ at which the graph of the success probability has an inflection point. Of course, we have $N_c>k$, with $N_c$ increasing in $k$. Some plots exhibiting this phase shift are shown in Figure 3. 
Note that this behavior is reminiscent of the satisfiability phase shift studied in [XW], which examines the probability that a random CNF is satisfiable as a function of the ratio of the number of clauses to the number of variables. \begin{figure} \centering \includegraphics[width=.7\textwidth]{phase.eps} \caption{Here, $n$ and $N$ denote the number of variables and clauses, respectively; $k$ is the threshold used in our solver; and $P$ is the proportion of times our solver was successful, based on 200 runs with random DNFs.} \end{figure} \section{SAT and the Lov\'asz local lemma} \subsection{Computerizing the local lemma} Given some ``bad events'' $\mathcal{A}=\{A_1,\dots,A_N\}$, the Lov\'asz local lemma can be used to verify that there is a positive probability that \emph{none} of them occurs. Suppose $G$ is a \emph{dependency graph} on the vertex set $\mathcal{A}$; that is, each event in $\mathcal{A}$ is mutually independent of its non-neighbors in $G$. Let $\Gamma(A)$ denote the neighborhood of $A$ in $G$. Then the following holds: \begin{prop}[Asymmetric Lov\'asz local lemma] Suppose there exists a weight function $x: \mathcal{A} \to [0,1)$ such that $$ \forall A \in \mathcal{A}, \,\,\,\, \Pr(A)\leq x(A) \prod_{B \in \Gamma(A)} (1-x(B)). $$ Then $\Pr(\bigcap_i A_i^c)>0$. \end{prop} In applications, the weight function $x(A)$ is usually found ad hoc. If we assume each vertex of the dependency graph has degree $\leq d$ and set $x\equiv 1/(d+1)$, we obtain the following: \begin{prop}[Symmetric Lov\'asz local lemma] Suppose each event $A_i$ satisfies $\Pr(A_i)\leq p$ and is independent of all but at most $d$ of the other events. If $$ ep(d+1)\leq 1, $$ then $\Pr(\bigcap_i A_i^c)>0$. \end{prop} The procedure \verb+LLLs(P,G)+ in the Maple package checks if the events $A_i$ satisfy the symmetric local lemma, where the lists $P$ and $G$ satisfy $P[i]=\Pr(A_i)$ and $G[i]=\{j : A_j \in \Gamma(A_i)\}$.
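In Python, the symmetric check might look as follows (a sketch: we take $p$ and $d$ to be the worst-case probability and degree, which is one natural reading of \verb+LLLs+):

```python
import math

def lll_symmetric(P, G):
    """Sketch of LLLs(P,G): worst-case symmetric local lemma test.
    P[i] = Pr(A_i); G[i] = indices of neighbors of A_i in the
    dependency graph.  True means e*p*(d+1) <= 1 holds, so with
    positive probability no bad event occurs."""
    p = max(P)
    d = max(len(g) for g in G)
    return math.e * p * (d + 1) <= 1
```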
Computerizing the asymmetric local lemma is harder, since, as far as we know, there is no systematic and efficient way to look for a valid weight function $x$. Somewhat arbitrarily, the procedure \verb+LLL(P,G)+ uses the weight function $x(A)=1/(|\Gamma(A)|+1)$. The motivation for this choice is that, when the dependency graph is regular, it reduces to the symmetric local lemma. \subsection{Applying the local lemma to SAT} The article [G] addresses a theoretical application of the local lemma to SAT, focusing on using it to derive combinatorial conditions for the satisfiability of CNFs. Here, we present a computer application of the local lemma to SAT. Let us return to the setup in Section 2. We have a DNF $S=C_1\lor\cdots\lor C_N$ with variables $x_1,\dots,x_n$, which are assigned true/false values uniformly at random. We let $A_k$ be the event that $C_k$ is true. Then $S$ is \emph{not} a tautology iff there is a positive probability that \emph{none} of the events $A_k$ occurs. So we can apply the local lemma. We form a dependency graph $G$ by joining $A_i$ and $A_j$ iff the clauses $C_i$ and $C_j$ have common variables. Also, $\Pr(A_i)=2^{-n_i}$, where $n_i$ is the number of literals in $C_i$; for example, for 3-SAT, $\Pr(A_i)=1/8$. The procedure \verb+DNFtoPG(S)+ converts the DNF $S$ to a pair $P,G$, which can be passed to one of the \verb+LLL+ procedures. If the procedure returns \emph{true}, then we can conclude that $S$ was \emph{not} a tautology; otherwise, this method is inconclusive. Unfortunately, \verb+LLLs+ rarely succeeds at detecting a non-tautology, and \verb+LLL+ is only slightly better. For example, out of 100 non-tautologies generated by \verb+RandNF(100,10)+, only 26 were detected by \verb+LLLs+ and 37 by \verb+LLL+. Out of 100 non-tautologies generated by \verb+RandNF(100,15)+, only 2 were detected by \verb+LLLs+ and 3 by \verb+LLL+. We expect that this is due to the structure of the dependency graph.
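A Python sketch of the conversion and of the asymmetric check with the ad hoc weights above (our reading of \verb+DNFtoPG+ and \verb+LLL+, not the package code):

```python
from math import prod

def dnf_to_pg(dnf):
    """Sketch of DNFtoPG(S): P[i] = Pr(A_i) = 2^(-n_i), and A_i, A_j
    are neighbors iff clauses C_i and C_j share a variable."""
    vs = [{abs(l) for l in c} for c in dnf]
    P = [2.0 ** -len(c) for c in dnf]
    G = [[j for j in range(len(dnf)) if j != i and vs[i] & vs[j]]
         for i in range(len(dnf))]
    return P, G

def lll_asym(P, G):
    """Sketch of LLL(P,G) with weights x_i = 1/(|Gamma(A_i)|+1);
    True certifies that the DNF is not a tautology."""
    x = [1.0 / (len(g) + 1) for g in G]
    return all(P[i] <= x[i] * prod(1 - x[j] for j in G[i])
               for i in range(len(P)))
```

For the small example $(\lnot x_3 \land x_1 \land x_4)\lor(\lnot x_2 \land \lnot x_1 \land x_4)$, the check succeeds, certifying a non-tautology.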
It would be interesting to develop a ``clever'' asymmetric local lemma algorithm that tailors the weight function to the given dependency graph. \section{Using the Maple package} The Maple package \verb+sat.txt+ accompanying this paper may be found at the following URL: \\ \url{http://www.math.rutgers.edu/~az202/Z}. To use the Maple package, place \verb+sat.txt+ in the working directory and execute \verb+read(`sat.txt`);+. To see the main procedures, execute \verb+Help();+. For help on a specific procedure, use \verb+Help(<procedure name>);+. Here are some things to try: \begin{itemize} \item \verb+Taut({{-3,1,4},{-2,-1,4}},2);+ determines if $(\lnot x_3 \land x_1 \land x_4)\lor (\lnot x_2 \land \lnot x_1 \land x_4)$ is a tautology using inclusion-exclusion with threshold 2. \item \verb+MapleTaut({{-3,1,4},{-2,-1,4}});+ determines if $(\lnot x_3 \land x_1 \land x_4)\lor (\lnot x_2 \land \lnot x_1 \land x_4)$ is a tautology using Maple's \verb+tautology+ procedure. \item \verb+RandNF(10,4);+ generates a random DNF with 4 clauses in (at most) 10 variables. \item \verb+MetaTaut(5,25,8,10);+ runs \verb+Taut(RandNF(5,25),8)+ 10 times. \item \verb+LLL(DNFtoPG({{-3,1,4},{-2,-1,4}}));+ determines if the LLL applies to the given DNF (if true is returned, then it is not a tautology). \item \verb+MetaLLL(100,10,100);+ applies the local lemma to 100 DNFs generated by \verb+RandNF(100,10)+ and outputs the proportion of non-tautologies detected by \verb+LLL+ and the actual proportion of non-tautologies. \end{itemize} \section*{Acknowledgement} The author thanks Dr.\ Doron Zeilberger for introducing this project to him and guiding his research in the right direction. \section{References} \begin{itemize} \item[{[G]}] Heidi Gebauer, Robin A. Moser, Dominik Scheder, and Emo Welzl, The Lov\'asz Local Lemma and Satisfiability, in: Efficient Algorithms, Part I, pp. 30--54, Susanne Albers, Helmut Alt, and Stefan N\"aher (eds.). Heidelberg: Springer-Verlag, 2009.
\item[{[GC]}] G\'abor Kusper and Csaba Bir\'o, Solving SAT by an Iterative Version of the Inclusion-Exclusion Principle, 2015 17th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Timisoara, 2015, pp. 189--190. \item[{[XW]}] Ke Xu and Wei Li, The SAT phase transition, \\ \url{http://www.math.ucsd.edu/~sbuss/CourseWeb/Math268_2007WS/satphase.pdf} \end{itemize} \end{document}
\begin{center} \section*{Regions of Stability for a Linear Differential Equation with Two Rationally Dependent Delays} \end{center} \begin{center} Joseph M. Mahaffy \\ \begin{small} Nonlinear Dynamical Systems Group \\ Department of Mathematics \\ San Diego State University \\ San Diego, CA 92182, USA \\ \end{small} \vspace{0.15in} Timothy C. Busken \\ \begin{small} Department of Mathematics \\ Grossmont College \\ El Cajon, CA 92020, USA \\ \end{small} \end{center} \begin{abstract} Stability analysis is performed for a linear differential equation with two delays. Geometric arguments show that when the two delays are rationally dependent, then the region of stability increases. When the ratio has the form $1/n$, this study finds the asymptotic shape and size of the stability region. For example, a delay ratio of $1/3$ asymptotically produces a stability region 44.3\% larger than any nearby delay ratios, showing extreme sensitivity in the delays. The study provides a systematic and geometric approach to finding the eigenvalues on the boundary of stability for this delay differential equation. A nonlinear model with two delays illustrates how our methods can be applied. \end{abstract} \noindent {\bf Keywords:} Delay differential equation; bifurcation; stability analysis; exponential polynomial; eigenvalue \vspace{0.1in} \begin{small} \noindent {\bf Submitted:} 7/24/2013 \end{small} \section{Introduction} Delay differential equations (DDEs) are used in a variety of applications, and understanding their stability properties is a complex and important problem. The addition of a second delay significantly increases the difficulty of the stability analysis. E.\ F.\ Infante \cite{Inf} stated that an economic model with two delays, which are rationally related, has a region of stability that is larger than one with nearby delays that are irrationally related.
This meta-theorem inspires much of the work below, where we examine the linear two-delay differential equation: \begin{equation} \dot{y}(t) + A\,y(t) + B\,y(t-1) + C\,y(t-R) = 0, \label{DDE2} \end{equation} as the parameters $A$, $B$, $C$, and $R \in (0,1)$ vary. (Note that time has been scaled so that one delay equals unit time.) Our efforts concentrate on the stability region near delays of the form $R = \frac{1}{n}$ with $n$ a small integer. The stability of Eqn.~(\ref{DDE2}) for the cases $R = \frac{1}{3}$ and $\frac{1}{4}$ was studied in some detail in Mahaffy {\it et al.} \cite{MZJa, MZJ} and Busken \cite{Busk}, and this work extends those ideas. Discrete time delays have been used in the mathematical modeling of many scientific applications to account for intrinsic lags in time in the physical or biological system. Often there are numerous stages in the process, such as maturation or transport, which utilize multiple discrete time delays. Some biological examples include physiological control \cite{BEL, BeCa, CamBel}, hematopoietic systems \cite{BEM, BMM, McDc, McDd}, neural networks \cite{BCvdD,GopMo,GCC}, epidemiology \cite{CoY}, and population models \cite{BRV,MNB}. Control loops in optics \cite{MiI} and robotics \cite{HaS} have been modeled with multiple delays. Economic models \cite{BEMb, HOR, MACe} include production and distribution time lags. Bifurcation analysis of these models is often quite complex. \begin{figure}[htb] \centerline{ \includegraphics[width=2.5in]{images/one_delay} } \caption{\small{Stability region for the one-delay equation, $\dot{y}(t) + A\,y(t) + C\,y(t-R) = 0$. The violet line gives the real root crossing, while the blue line gives the imaginary root crossing.}} \label{fig:one_delay} \end{figure} The bifurcation analysis of the one-delay version of (\ref{DDE2}) ($B = 0$) began with the work of Hayes \cite{HAY}.
The complete stability region in the $AC$-plane has been characterized by several authors \cite{BelC,bohay,Boese94,ELS,HIT}, with the stability boundary easily parameterized by the delay $R$ (which can be scaled out). Fig.~\ref{fig:one_delay} shows this region of stability. The boundary of the stability region for the two-delay equation~(\ref{DDE2}) has been studied by many researchers \cite{BEL,elsken,HAHw,HT,LRW,NUS,MZJ}. Several authors study the special case where $A = 0$ \cite{Hal1,LEV,LRW,NUS,PIOT,RM,RUC,saka,YTJ}. Hale and Huang \cite{HAHw} performed a stability analysis of the two-delay problem, \begin{equation} \dot{y}(t) + a\,y(t) + b\,y(t-r_1) + c\,y(t-r_2) = 0, \label{dde2h} \end{equation} where they fixed the parameters, $a$, $b$, and $c$, then constructed the boundary of stability in the $r_1r_2$ delay space. Braddock and van den Driessche \cite{BRV} completely determined the stability of (\ref{dde2h}) when $b = c$, and partially extended the results outside that special case. Most of these analyses have studied the 2D stability structure of either (\ref{DDE2}) or (\ref{dde2h}) with one parameter set to zero or with some of the parameters fixed. Often the 2D analyses result in observing disconnected stability regions for (\ref{dde2h}). Elsken \cite{elsken} proved that the stability region of (\ref{dde2h}) is connected in the $abc$-parameter space with fixed $r_1$ and $r_2$. Recently, Bortz \cite{bortz} developed an asymptotic expansion using Lambert W functions to efficiently compute roots of the characteristic equation for (\ref{dde2h}) with some restrictions. Mahaffy {\it et al.} \cite{MZJa, MZJ} studied (\ref{DDE2}) for specific values of $R$, examining the 2D cross-sections for fixed $A$ and developing 3D bifurcation surfaces in the $ABC$-parameter space. The work below shows why delays of the form $R = \frac{1}{n}$ have enlarged regions of stability.
\vspace{0.25in} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{figure}{0} \setcounter{table}{0} \section{Motivating Example} \vspace{.15in} In 1987, B{\'e}lair and Mackey \cite{BEM} developed a two-delay model for platelet production. The time delays resulted from a maturation delay and another delay representing the finite life-span of platelets. The resulting numerical simulations for certain parameters produced fairly complex dynamics. Here we examine a slight modification of their model and demonstrate the extreme sensitivity of the model behavior near rationally dependent delays. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{images/delay13} & \includegraphics[width=0.45\textwidth]{images/delay12} \end{tabular} \caption{Modified platelet model \cite{BEM}. Simulations show the model sensitivity to the delays $R = \frac{1}{3}$ and $\frac{1}{2}$, with nearby delays showing complex oscillations.} \label{fig:platelet} \end{center} \end{figure} The modified model that we consider is given by: \[ \frac{dP}{dt} = -\gamma P(t) + \beta(P(t-R)) - f\cdot\beta(P(t-1)), \] where $\beta(P) = \frac{\beta_0\theta^nP}{\theta^n + P^n}$. This model has a standard linear decay term and nonlinear delayed production and destruction terms. The total lifespan is normalized to one, while the maturation time is $R$. This model differs from the platelet model by choosing an arbitrary fractional multiplier, $f$, instead of having a delay-dependent fraction. For our simulations we fixed $\gamma = 100$, $\beta_0 = 168.6$, $n = 4$, $\theta = 10$, and $f = 0.35$. This gives the equilibria $P_e = 0$ and $P_e \approx 5.565$ with $\beta'(5.565) \approx 100$. Fig.~\ref{fig:platelet} shows six simulations near $R = \frac{1}{2}$ and $R = \frac{1}{3}$, where the model is asymptotically stable. However, fairly small perturbations of the delay away from these values result in unstable oscillatory solutions, as is readily seen in the figure.
The oscillating solutions are visibly complex. This paper will explain some of the results shown in Fig.~\ref{fig:platelet}. \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{figure}{0} \setcounter{table}{0} \section{Background} \vspace{.15in} \subsection{Definitions and Theorems} There are a number of key definitions and theorems that are needed to build the background for our study. Our analysis centers around finding the stability of (\ref{DDE2}). Stability analysis of a linear DDE begins with the characteristic equation, which is found in a manner similar to ordinary differential equations by seeking solutions of the form $y(t) = c\,e^{\lambda t}$. The characteristic equation for (\ref{DDE2}) is given by: \begin{equation} \lambda + A + B\,e^{-\lambda} + C\,e^{-\lambda\,R} = 0. \label{chareqn} \end{equation} This is an exponential polynomial, which has infinitely many solutions, as one would expect because a DDE is infinite-dimensional. Asymptotic stability occurs if all of the eigenvalues satisfy \[ {\textnormal{Re}} (\lambda) < 0, \qquad {\rm for\ all}\ \lambda. \] One can readily see from (\ref{chareqn}) that the $A + B + C = 0$ plane, $\Lambda_0$, provides one boundary where a real eigenvalue $\lambda$ crosses between positive and negative, and so it creates a bifurcation surface. To understand what is meant by rationally dependent delays resulting in larger regions of asymptotic stability, we need an important theorem about the minimum region of stability for (\ref{DDE2}). \begin{theorem} \textbf{Minimum Region of Stability (MRS)}\label{thm1} For $A>|B|+|C|$, \,all solutions $\lambda$ to Eqn.~(\ref{chareqn}) have ${\textnormal{Re}}(\lambda) < 0$, \,which implies that Eqn.~(\ref{DDE2}) is asymptotically stable inside the pyramidal-shaped region centered about the positive $A$-axis, independent of $R$. \end{theorem} The proof of the MRS Theorem can be found in both Zaron \cite{ZAR} and Boese \cite{BOE}.
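As a numerical illustration (ours, not from the original sources), linearizing the platelet model of Section 2 about its positive equilibrium puts it in the form of Eqn.~(\ref{DDE2}) with $A = \gamma$, $B = f\,\beta'(P_e)$, and $C = -\beta'(P_e)$; the parameter values used there fall outside the MRS, which is why stability can depend so delicately on the delays:

```python
# Platelet model parameters from Section 2; linearization about P_e
# gives Eqn. (DDE2) with A = gamma, B = f*beta'(P_e), C = -beta'(P_e).
gamma, beta0, n, theta, f = 100.0, 168.6, 4, 10.0, 0.35

def dbeta(P):
    """Derivative of beta(P) = beta0*theta^n*P/(theta^n + P^n)."""
    return beta0 * theta**n * (theta**n + (1 - n) * P**n) / (theta**n + P**n)**2

# Positive equilibrium: gamma*P_e = (1 - f)*beta(P_e).
Pe = ((1 - f) * beta0 * theta**n / gamma - theta**n) ** (1.0 / n)
A, B, C = gamma, f * dbeta(Pe), -dbeta(Pe)
inside_mrs = A > abs(B) + abs(C)      # False: 100 is not > 35 + 100
```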
Note that one face of this MRS is formed by the plane $A + B + C = 0$, $\Lambda_0$, where the zero root crossing occurs. The other way that (\ref{DDE2}) can lose stability is through roots passing through the imaginary axis, $\lambda = i\omega$. Substituting $\lambda = i\omega$ into (\ref{chareqn}) and setting the real and imaginary parts to zero, we obtain a parametric representation of the bifurcation curves, $B(\omega)$ and $C(\omega)$. These are given by the expressions: \begin{eqnarray} B(\omega) & = & \frac{A\sin(\omega\,R)+\omega\cos(\omega\,R)} {\sin(\omega(1-R))}, \label{bifB} \\ C(\omega) & = & -\frac{A\sin(\omega)+\omega\cos(\omega)} {\sin(\omega(1-R))}, \label{bifC} \end{eqnarray} where $\frac{(j-1)\pi}{1-R} < \omega < \frac{j\pi}{1-R}$, and $j\in\mathbb{Z^+}$. Clearly, there are singularities for $B(\omega)$ and $C(\omega)$ at $\omega = \frac{j\pi}{1-R}$. This leads to the following definition for bifurcation surfaces. \begin{definition} When a value of $R$ in the interval $(0,1)$ is chosen, \textit{\textbf{Bifurcation Surface j}}, $\Lambda_j$, is determined by Eqns.~(\ref{bifB}) and (\ref{bifC}), and is defined parametrically for $\frac{(j-1)\pi}{1-R}<\omega<\frac{j\pi}{1-R}$ and $A\in\mathbb{R}$. This creates, for each positive integer $j$, a separate parameterized surface representing solutions $\lambda = i\omega$ of the characteristic equation (\ref{chareqn}), which can be sketched in the $ABC$ coefficient-parameter space of (\ref{DDE2}). \end{definition} Because the MRS is centered on the $A$-axis, we often choose to fix $A$ and view the cross-section of the bifurcation surfaces. Thus, we have the related definition: \begin{definition} \textit{\textbf{Bifurcation Curve j}}, $\Gamma_j$, is determined by Eqns.~(\ref{bifB}) and (\ref{bifC}) and is defined parametrically for $\frac{(j-1)\pi}{1-R} < \omega < \frac{j\pi}{1-R}$ with the values of $R$ and $A$ fixed. This creates a parametric curve, which can be drawn in the $BC$-plane for each $j$.
\end{definition} \begin{figure}[htb] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{images/R_thirdx} & \includegraphics[width=0.3\textwidth]{images/R_45x} & \includegraphics[width=0.3\textwidth]{images/R_halfx} \\ $R = \frac{1}{3}$ & $R = 0.45$ & $R = \frac{1}{2}$ \end{tabular} \caption{Bifurcation curves: Shows the first 100 parametric curves in the $BC$-parameter space for $A = 1000$ and various delays. The dashed curve shows the boundary of the MRS.} \label{fig:bif_curves} \end{center} \end{figure} For most values of $A$, the bifurcation curves in the $BC$-plane generated by (\ref{bifB}) and (\ref{bifC}) tend to infinity parallel to the lines $B+C=0$ or $B-C=0$. When $A$ and $R$ are fixed, one can show using the partitioning method of El'sgol'ts \cite{ELS} that a finite number of bifurcation curves will intersect in the $BC$ parameter space to form the remainder of the boundary of the stability region not given by part of the real root-crossing plane. It is along this boundary that eigenvalues of (\ref{chareqn}) cross the imaginary axis in the complex plane. Fig.~\ref{fig:bif_curves} gives three examples showing the first 100 bifurcation curves for $A = 1000$ and delays of $R = \frac{1}{3}$, $R = 0.45$, and $R = \frac{1}{2}$. Assuming that 100 bifurcation curves give a good representation of the stability region, this figure shows how different the regions of stability are for the different delays. The stability region for $R = \frac{1}{2}$ is significantly greater than the others, and the stability region of $R = 0.45$ is very close to the MRS. As seen in Fig.~\ref{fig:bif_curves}, the bifurcation curves can intersect often along the boundary of the region of stability, which creates challenges in describing the evolution of the complete stability surface for (\ref{DDE2}) in the $ABC$-parameter space. We need to discuss how we construct the 3D bifurcation surface using a few more defined quantities.
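The parametrization (\ref{bifB})--(\ref{bifC}) is just the solution of the linear system obtained from the real and imaginary parts, namely $B\cos\omega + C\cos(\omega R) = -A$ and $B\sin\omega + C\sin(\omega R) = \omega$, whose determinant is $-\sin(\omega(1-R))$. A quick numerical check (Python, illustrative only) confirms that the resulting $(B, C)$ put $\lambda = i\omega$ on the imaginary axis:

```python
import cmath, math

def bif_BC(A, R, w):
    """B(omega), C(omega) from Eqns. (bifB) and (bifC)."""
    s = math.sin(w * (1 - R))
    B = (A * math.sin(w * R) + w * math.cos(w * R)) / s
    C = -(A * math.sin(w) + w * math.cos(w)) / s
    return B, C

def char_lhs(A, B, C, R, lam):
    """Left-hand side of the characteristic equation (chareqn)."""
    return lam + A + B * cmath.exp(-lam) + C * cmath.exp(-lam * R)

A, R, w = 1000.0, 0.45, 2.0
B, C = bif_BC(A, R, w)
residual = abs(char_lhs(A, B, C, R, 1j * w))   # essentially zero
```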
Mahaffy {\it et al.} \cite{MZJ} proved that if $R > R_0$ for $R_0 \approx 0.012$, then the stability surface comes to a point with a smallest value, $A_0$. \begin{theorem}[Starting Point] \label{Ao} If $R > R_{0}$, then the stability surface for Eqn.~(\ref{DDE2}) comes to a point at $(A_{0},B_{0},C_{0}) = \left(-\frac{R+1}{R}, \ \frac{R}{R-1}, \ \frac{1}{R(1-R)}\right)$, and Eqn.~(\ref{DDE2}) is unstable for $A < A_{0}$. \end{theorem} For some range of $A$ values with $A > A_0$, the stability surface is exclusively composed of $\Lambda_1$ and $\Lambda_0$. As $\omega \to 0^+$, $\Lambda_1$ intersects $\Lambda_0$. The surface $\Lambda_1$ bends back and intersects $\Lambda_0$ again, enclosing the stability region. As $A$ increases, $\Lambda_2$ approaches $\Lambda_1$, and at least for a range of $R$, $\Lambda_2$ self-intersects. In the 2D $BC$-parameter plane, this creates a disconnected stability region, which later joins the main bifurcation surface emanating from the starting point. The $A$ value where this self-intersecting bifurcation surface joins is the $1^{st}$ transition. Transitions are one of the most important occurrences that affect the shape of the bifurcation surface. \begin{figure}[htb] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{images/transition_bef1} & \includegraphics[width=0.3\textwidth]{images/transition_at} & \includegraphics[width=0.3\textwidth]{images/transition_af1} \end{tabular} \caption{Transition: Enclosed regions in the figures above are the regions of stability for $R = 0.25$ as $A$ moves through a transition and the stability spur joins the main stable surface. } \label{fig:spur} \end{center} \end{figure} \begin{definition}[Transition and Degeneracy Line] There are critical values of $A$ corresponding to where Eqns.~(\ref{bifB}) and (\ref{bifC}) become indeterminate at $\omega=\frac{j\pi}{1-R}$.
These \textbf{\textit{transitional}} values of $A$ are denoted by $A^{*}_{j}$, where \begin{equation} \label{atrans} A^{*}_{j} = \textstyle{-\left(\frac{j\pi}{1-R}\right) \cot\left(\frac{jR\pi}{1-R}\right)}, \ j = 1,2,\dots \ . \end{equation} At a \textbf{\textit{transition}}, Curves $j$ and $(j+1)$ coincide at the specific point $(B^{*}_{j}, C^{*}_{j})$, where \begin{equation*} B^{*}_{j} = (-1)^{j}\frac{(1-R)\cos\left(\frac{jR\pi}{1-R}\right) - jR\pi\csc\left(\frac{jR\pi}{1-R}\right)}{(1-R)^{2}}, \end{equation*} \begin{equation} \label{bctrans} C^{*}_{j} = -(-1)^{j}\frac{(1-R)\cos\left(\frac{j\pi}{1-R}\right) -j\pi\csc\left(\frac{j\pi}{1-R}\right)}{(1-R)^{2}} = \frac{j\pi\csc\left(\frac{jR\pi}{1-R}\right)- (1-R)\cos\left(\frac{jR\pi}{1-R}\right)}{(1-R)^{2}}. \end{equation} All along the \textbf{\textit{degeneracy line}}, $\Delta_j$, \begin{equation}\label{deg} (B-B^{*}_{j}) +(-1)^{j}(C-C^{*}_{j})=0, \hspace{0.2in} A = A^{*}_{j}, \end{equation} there are two roots of (\ref{chareqn}) on the imaginary axis with $\lambda=\frac{j\pi}{1-R}\,i$. \end{definition} If $\Lambda_j$ is on the boundary of the stability region for $A$ slightly less than $A^{*}_{j}$, then $\Delta_j$ becomes part of the stability region's boundary at Transition $j$. Subsequently, $\Lambda_{j+1}$ enters the boundary of the stability region. These transitions create the greatest distortion to the stability surface and attach stability spurs. It is important to note that many transitions occur outside the stability surface and only affect the organization of the bifurcation curves. Fig.~\ref{fig:spur} shows cross-sections in the $BC$-plane as $A$ goes through a transition.
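At $A = A^{*}_{j}$, Eqns.~(\ref{bifB}) and (\ref{bifC}) become $0/0$ at $\omega = \frac{j\pi}{1-R}$, and $(B^{*}_{j}, C^{*}_{j})$ are the corresponding limits. This can be checked numerically (Python, illustrative):

```python
import math

def transition(j, R):
    """A*_j, B*_j, C*_j from Eqns. (atrans) and (bctrans)."""
    th = j * R * math.pi / (1 - R)
    A = -(j * math.pi / (1 - R)) / math.tan(th)
    B = (-1)**j * ((1 - R) * math.cos(th) - j * R * math.pi / math.sin(th)) / (1 - R)**2
    C = (j * math.pi / math.sin(th) - (1 - R) * math.cos(th)) / (1 - R)**2
    return A, B, C

j, R = 1, 0.25
A, Bstar, Cstar = transition(j, R)
w = (j * math.pi / (1 - R)) * (1 - 1e-7)        # omega just below j*pi/(1-R)
s = math.sin(w * (1 - R))
B = (A * math.sin(w * R) + w * math.cos(w * R)) / s
C = -(A * math.sin(w) + w * math.cos(w)) / s     # (B, C) -> (B*_j, C*_j)
```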
\begin{definition}[Stability Spur] If $\Lambda_{j+1}$ self-intersects and encloses a region of stability for (\ref{DDE2}) as $A$ increases, with $A_j^p$ being the cusp or \textbf{the Starting Point of Spur $j$}, then this quasi-cone-shaped \textbf{stability spur} has its cross-sectional area monotonically increase with $A$ until $A$ reaches the transitional value, $A^*_j$. For $A = A^*_j$, the Stability Spur $j$, $Sp_j(R)$, connects with the larger stability surface via the degeneracy line, $\Delta_j$. \end{definition} There are a couple of other ways for bifurcation surfaces to enter (or leave) the boundary of the main stability region as $A$ increases. We define these means of altering the boundary as \textit{transferrals} and \textit{tangencies}, which relate to higher frequency eigenvalues becoming part of the boundary (or being lost) as $A$ increases. \begin{definition}[Transferral and Reverse Transferral] The \textbf{transferral} value $A_{i,j}^z$ \,is the value of $A$ corresponding to the intersection of $\Lambda_j$ (or $\Gamma_j$) with $\Lambda_i$ (or $\Gamma_i$) at $\Lambda_0$. $\Lambda_j$ (or $\Gamma_j$) enters the boundary of the stability region for $A > A_{i,j}^z$. For some values of $R$ the stability surface can undergo a \textbf{reverse transferral}, \,$\tilde{A}_{j,i}^z$, \,which is a transferral characterized by $\Lambda_j$ (or $\Gamma_j$) leaving the boundary, \,or transferring \textit{back over} to $\Lambda_i$ (or $\Gamma_i$) the portion of the boundary originally taken by $\Lambda_j$ (or $\Gamma_j$) at $A_{i,j}^z (<\tilde{A}^z_{j,i})$. \end{definition} \begin{figure}[htb] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{images/transferral_1} & \includegraphics[width=0.3\textwidth]{images/transferral_3} & \includegraphics[width=0.3\textwidth]{images/transferral_4} \end{tabular} \caption{Transferral 1, $A^z_{1,6}$.
The stability boundary for $R=0.25$ located in the $BC$-plane before, during, and after the first transferral is given by the closed region enveloping the MRS (bold dashed line). That closed region is formed by $\Lambda_0$ (violet), $\Gamma_1$ (blue), $\Gamma_2$ (green), $\Gamma_3$ (black), $\Gamma_5$ (dark green), and $\Gamma_6$ (orange). } \label{fig:transfer} \end{center} \end{figure} \begin{definition}[Tangency and Reverse Tangency] The value of $A$ corresponding to the \textbf{tangency} of two surfaces $i$ and $j$ is denoted $A_{i,j}^t$. $\Lambda_j$ (or $\Gamma_j$) becomes tangent to $\Lambda_i$ (or $\Gamma_i$), where $\Lambda_i$ (or $\Gamma_i$) is a part of the stability boundary prior to $A = A_{i,j}^t$. As $A$ increases from $A_{i,j}^t$, $\Lambda_j$ (or $\Gamma_j$) becomes part of the boundary of the stability region, separating segments of the bifurcation surface to which it was tangent. However, many times as $A$ is increased, the same surface (curve) $\Lambda_j$ (or $\Gamma_j$) that entered the boundary through the tangency at $A_{i,j}^t$ can be seen leaving the stability boundary via a \textbf{reverse tangency}, denoted $\tilde{A}_{j,i}^t$. \end{definition} Fig.~\ref{fig:transfer} shows an example of the transferral, $A^z_{1,6}$, where bifurcation curve $\Gamma_6$ enters the stability surface for $A > A^z_{1,6}$ when $R = \frac{1}{4}$. We can readily see this change in the stability region near where $\Lambda_0$ and $\Gamma_1$ intersect. Fig.~\ref{fig:tang} shows an example of a tangency, $A^t_{3,9}$, where bifurcation curve $\Gamma_9$ enters the stability surface for $A > A^t_{3,9}$ when $R = \frac{1}{4}$. In this case, $\Gamma_9$ becomes tangent to $\Gamma_3$ and for larger $A$ values becomes part of the stability boundary. We note that the majority of the changes to the stability surface come from tangencies (or reverse tangencies).
\begin{figure}[htb] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{images/tang_b4} & \includegraphics[width=0.3\textwidth]{images/tang_at} & \includegraphics[width=0.3\textwidth]{images/tang_after} \end{tabular} \caption{Tangency, $A^t_{3,9}$. The stability boundary in the $BC$ plane for $R=0.25$ is given before, during, and after $A_{3,9}^{t}\approx 49$. \,The color scheme for the curves is: $\Lambda_0$ (violet), $\Gamma_1$ (blue), $\Gamma_2$ (green), $\Gamma_3$ (black), $\Gamma_6$ (orange), and $\Gamma_9$ (gray). The boundary is the closed region composed of portions of various competing curves that enshroud the MRS (bold dashed line).} \label{fig:tang} \end{center} \end{figure} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{figure}{0} \setcounter{table}{0} \section{Examples from Numerical Studies} \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.4\textwidth,height=.4\linewidth]{images/timb1} & \includegraphics[width=0.5\textwidth]{images/q3} \end{tabular} \caption{Two views of the portion of the stability surface for $A\in[-6,21]$ with $R = \frac{1}{5}$.} \label{fig:Rfifth} \end{center} \end{figure} We have developed a number of tools in MATLAB to facilitate our stability studies of (\ref{DDE2}). The ability to rapidly generate bifurcation curves has led to significant insight into how the stability region evolves in $A$ and $R$ as viewed in the $BC$-parameter plane. In this section we begin with some 3D stability surfaces for $R = \frac{1}{5}$ to illustrate how the region of stability varies with $A$. We present a diagram, which was numerically generated, to illustrate the systematic ordering of transitions, transferrals, and tangencies for a range of $R$ values. Finally, we end this section with detailed numerical studies for specific values of $R$ as $R \to \frac{1}{4}$.
Fig.~\ref{fig:Rfifth} provides two views of the stability region of (\ref{DDE2}) with $R = \frac{1}{5}$ and $A \le 21$. For $R = \frac{1}{5}$ there are only three transitions, with a $4^{th}$ transition occurring at $A = +\infty$. Fig.~\ref{fig:Rfifth} shows the starting point of the stability surface at $\left(-6, -\frac{1}{4}, \frac{25}{4}\right)$. Initially, $\Lambda_1$ (blue) and $\Lambda_0$ (violet) bound the stability region. Next the first stability spur (green) enters and attaches $\Lambda_2$ to the stability region at the first transition, $A_1^*$. Subsequently, $\Lambda_3$ (black) and $\Lambda_4$ (red) adjoin the boundary of stability. At $A \approx 21$ a transferral occurs bringing $\Lambda_7$ onto the boundary, and around $A \approx 70$, there is a tangency of $\Lambda_{11}$ interrupting $\Lambda_4$. Since no other transitions affect the boundary of this stability surface, the only further changes for larger $A$ are additional tangencies, followed by reverse tangencies, in which bifurcation surfaces leave the boundary. Fig.~\ref{fig:Rfifth} shows the MRS (black) interior to the stability surface, and visually it is apparent how much the transitions and stability spurs distort the stability surface away from the MRS. It is worth noting that the stability spurs shrink in size as $A$ increases. \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth]{images/smalltree} \caption{The various transitions, transferrals, and tangencies are illustrated along with $A_0$ for $R\in[0.20, 0.26]$ and $A\leq200$.} \label{fig:Rall} \end{center} \end{figure} Fig.~\ref{fig:Rall} shows a diagram for the range $R \in [0.2,0.26]$ with all observed initial points $A_0$, transitions, transferrals, and tangencies. Following a vertical line, {\it i.e.}, fixing $R$, shows exactly which bifurcation curves enter or leave the boundary of the stability surface as $A$ increases.
The transition curves in the $RA$-plane increase monotonically. The $1^{st}$ transition is asymptotic with $A_1^* \to \infty$ at $R = \frac{1}{2}$, while $A_2^* \to \infty$ at $R = \frac{1}{3}$. When $R = \frac{1}{4}$, $A_3^* \to \infty$, with the other two transitions, their stability spurs, and one transferral all occurring before $A = 15$. Afterwards, when $R = \frac{1}{4}$, all remaining changes to the stability surface occur through tangencies, reverse tangencies, or a reverse transferral. None of these events dramatically change the shape of the boundary of the stability region. What is significant is that the $3^{rd}$ transition does not occur until $A_3^* = +\infty$, causing the distortion of this transition to persist, while nearby delays do not have this distortion for sufficiently large $A$. \subsection{Stability region near $R = \frac{1}{4}$} The characteristic equation (\ref{chareqn}) is an analytic function, so there is continuity of the stability surfaces as the parameters vary. To study what happens to the stability surface for $R = \frac{1}{4}$, we explore in some detail the stability surface for $R = 0.249$. Not surprisingly, there are many similarities between these surfaces until the singularity occurs at the transition, $A_3^* = 749.93$ for $R = 0.249$. We provide details of the evolution of stability surface for $R = 0.249$, which suggests why the region of stability for $R = \frac{1}{4}$ remains larger than the MRS as $A$ increases. Fig.~\ref{fig:Rall} provides key information on how to determine which bifurcation surfaces compose the boundary of the stability region. Critical changes to the stability region are determined by the intersection of the various curves with a vertical line from a given $R$ as $A$ increases. For $R = 0.249$, this figure shows that the stability region begins at the starting point, $(A_0, B_0, C_0) \approx (-5.016 ,-0.3316, 5.348)$. 
A stability spur begins at $A_1^p \approx -2.733$ and joins the main stability surface at $A_1^* \approx -2.446$. A second stability spur, which is significantly smaller in length, joins the main stability surface near $A_2^* \approx 4.71$. Continuing vertically in Fig.~\ref{fig:Rall} at $R = 0.249$, we see there is a transferral, $A_{1,6}^z \approx 13.3$. At this stage, the $BC$ cross-section of the stability surface is very similar to the images in Fig.~\ref{fig:transfer}. The boundary at the transferral is composed of $\Lambda_0$, $\Lambda_1$, $\Lambda_2$, and $\Lambda_3$. Subsequently, $\Lambda_6$ enters the boundary near the intersection of $\Lambda_0$ and $\Lambda_1$. The next change in the stability surface for $R = 0.249$ is a tangency, which occurs at $A^t_{3,9} \approx 49.4$. This tangency has little effect on the shape of the boundary of the stability region, but allows higher-frequency eigenvalues to participate in destabilizing (\ref{DDE2}). This tangency is very similar to the one depicted in Fig.~\ref{fig:tang}, occurring in the $1^{st}$ quadrant. As $A$ increases, a series of tangencies occurs, alternating between the $1^{st}$ and $4^{th}$ quadrants. There are a total of 11 tangencies that occur before $A_3^* \approx 749.93$, with the last one being $A^t_{33,39} \approx 462.063$. Following $A^t_{33,39}$, all of these tangencies undergo a reverse tangency (in the reverse sequential order) with higher-frequency eigenvalues leaving the boundary of the region of stability. Table~\ref{summarytable249} summarizes all of these events. The onset of reverse tangencies, which occur prior to $A_3^*$, creates significant expansion of the stability region, primarily in the $1^{st}$ and $4^{th}$ quadrants of the $BC$-plane. At the same time the stability surface becomes much larger than the MRS. We note that very rapidly after $A_3^*$, there are many tangencies, which occur for increasing $A$ and reduce the size of the region of stability for $R = 0.249$.
This results in the region of stability shrinking back to being near the MRS for large $A$. Since $R = 0.249$ is rational, we conjecture that the boundary of the stability region never asymptotically approaches the MRS, yet it is substantially closer than for $R = \frac{1}{4}$. \begin{table}[htb] \hspace*{0.01in} \parbox{4.9in}{ \caption{List of 2D boundary changes for $R=0.249$ and $A\in[A_0, 750]$.} } \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{|l|l|l|l|} \hline surface change & $A$ & surface change & $A$\\ \hline $A_0$ & $-5.016$ & reverse tangency & $\tilde{A}^t_{39,33}\approx559.216$\\ \hline spur 1 & $[A^p_1, A^*_1] \approx [-2.7326, -2.4464]$ & reverse tangency & $\tilde{A}^t_{36,30}\approx622.341$ \\[0.1in] \hline spur 2 & $[A^p_2, A^*_2] \approx [4.7067098, 4.70671]$&reverse tangency & $\tilde{A}^t_{33,27}\approx655.407$ \\[0.1in] \hline transferral & $A^z_{1,6}\approx13.3$& reverse tangency & $\tilde{A}^t_{30,24}\approx678.811$\\[0.1in] \hline tangency & $A^t_{3,9}\approx49.4$& reverse tangency & $\tilde{A}^t_{27,21}\approx696.727 $\\[0.1in] \hline tangency & $A^t_{6,12}\approx80.216$& reverse tangency & $\tilde{A}^t_{24,18}\approx 710.883 $\\[0.1in] \hline tangency & $A^t_{9,15}\approx108.4$& reverse tangency & $\tilde{A}^t_{21,15}\approx722.187$\\[0.1in] \hline tangency & $A^t_{12,18}\approx142.479$& reverse tangency & $\tilde{A}^t_{18,12}\approx731.176$\\[0.1in] \hline tangency & $A^t_{15,21}\approx174.915$& reverse tangency & $\tilde{A}^t_{15,9}\approx738.192 $\\[0.1in] \hline tangency & $A^t_{18,24}\approx 208.787 $& reverse tangency & $\tilde{A}^t_{12,6}\approx743.460 $\\[0.1in] \hline tangency & $A^t_{21,27}\approx244.699 $& reverse tangency & $\tilde{A}^t_{9,3}\approx747.134$\\[0.1in] \hline tangency & $A^t_{24,30}\approx283.613$& reverse transferral & $\tilde{A}^z_{6,1}\approx749.4$\\[0.1in] \hline tangency & $A^t_{27,33}\approx327.299$& spur~3 & $ A^*_3 \approx 749.93$\\[0.1in] \hline tangency & $A^t_{30,36}\approx379.973$& 
transferral & $A^z_{1,7}\approx749.94$\\[0.1in] \hline tangency & $A^t_{33,39}\approx462.063$& tangency & $A^t_{4,10}\approx749.953 $\\[0.1in] \hline && tangency &$A^t_{7,13}\approx750.044$\\[0.1in] \hline \end{tabular} \end{center} \label{summarytable249} \end{table} For $R = 0.249$ at $A_3^* = 749.93$, the boundary of the region of stability is reduced to only five curves. These include lines from $\Lambda_0$ (violet) and from the degeneracy line, $\Delta_3$, on which $\lambda = \pm\frac{3i\pi}{0.751}$ at $A_3^*$. The boundaries in the $1^{st}$ and $4^{th}$ quadrants are formed from $\Gamma_3$ (black) and $\Gamma_1$ (blue), respectively. Finally, there is a very small segment of the boundary formed by $\Gamma_2$ (green). This stability region is shown in Fig.~\ref{fig:R249_tran}. We see distinct bulges away from the MRS in the $1^{st}$ and $4^{th}$ quadrants, increasing the size of the region of stability. This simple boundary easily allows the computation of the area of the region of stability. A numerical integration shows that the part of the region of stability outside the MRS is approximately 26.85\% of the area of the MRS, a substantial increase in the region of stability. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=3.0in]{images/A750} & \includegraphics[width=3.0in]{images/A750a} \end{tabular} \end{center} \caption{\small{The five curves on the boundary of the stability region for $R = 0.249$ at $A_3^* = 749.93$ are $\Lambda_0$ (violet), $\Gamma_1$ (blue), $\Gamma_3$ (black), $\Delta_3$ (dashed red) and minimally $\Gamma_2$ (green) with the MRS (dashed black).}} \label{fig:R249_tran} \end{figure} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{figure}{0} \setcounter{table}{0} \section{Analysis} The objective of this section is to convince the reader that the asymptotic stability region for $R = \frac{1}{n}$ with $A \to +\infty$ reduces to a simple set of curves bounded away from the MRS.
As $R \to \frac{1}{n}$, there is a critical transition with $A^*_{n-1} \to +\infty$, which leaves the boundary of the stability region for (\ref{DDE2}) composed of only $\Lambda_0$, $\Gamma_1$, $\Gamma_{n-1}$, and the degeneracy line, $\Delta_{n-1}$. (There may be small segments of other bifurcation curves for any $R < \frac{1}{n}$ at $A^*_{n-1}$.) Mahaffy {\it et al.} \cite{MZJa, MZJ} showed that as $\omega \to 0^+$, $\Gamma_1$ intersects $\Lambda_0$ along the line \begin{equation} \frac{A + 1}{1 - R} = \frac{B - 1}{R} = -C. \label{plane_bif1} \end{equation} For $R = \frac{1}{n}$, it follows that $B = \frac{A + n}{n-1}$ and $C = -\frac{n(A +1)}{n-1}$. Thus, along $\Lambda_0$ the stability region asymptotically (for large $A$) extends past the MRS to $\Gamma_1$ by $\frac{1}{n - 1}$ times the length of an edge of the MRS. This will provide one measure of the extension of the stability region for (\ref{DDE2}). The cases for $R = \frac{1}{2n}$ and $\frac{1}{2n-1}$ give differently shaped regions, but ultimately as $A \to +\infty$, the regions are bounded by only four curves with two lying on the boundary of the MRS and two bulging away from this region. We prove analytically the position of some key points, then rely on the limited number of families of curves and their distinct ordering to give the simple structure. The observed orderly appearance and disappearance of the tangencies on the boundary of the region of stability provide our argument for the ultimate simple structure of the stability region and the continued bulge away from the MRS for these particular rational delays. From the monotonicity of the transition curves, we note that all limiting arguments require $R$ approaching $\frac{1}{n}$ from below. \subsection{Stability Region for $R = \frac{1}{2n}$} In this section we provide more details on the increased size of the region of stability for delays of the form $R = \frac{1}{2n}$ as $A \to +\infty$.
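The intersection coordinates derived above from Eq.~(\ref{plane_bif1}) can be confirmed exactly with a short computation. The sketch below is illustrative Python (independent of our MatLab tools; the function name is ours), using rational arithmetic for $R = \frac{1}{n}$:

```python
from fractions import Fraction as F

def gamma1_lambda0_intersection(n, A):
    """For R = 1/n, the intersection of Gamma_1 with Lambda_0 derived from
    (A+1)/(1-R) = (B-1)/R = -C:  B = (A+n)/(n-1), C = -n(A+1)/(n-1)."""
    return F(A + n, n - 1), -F(n * (A + 1), n - 1)

# Exact verification of the common value in the three-part equality
for n in range(2, 10):
    for A in (0, 1, 10, 750):
        R = F(1, n)
        B, C = gamma1_lambda0_intersection(n, A)
        t = F(A + 1) / (1 - R)
        assert t == (B - 1) / R == -C
```

The exact arithmetic confirms, in particular, that this point lies on $\Lambda_0$, since $B + C = -A$.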
Earlier we presented evidence that as $R \to \frac{1}{4}$, the third transition, $A_3^*$, tends to infinity, and at that transition the boundary of the stability region in the $BC$-plane reduces to just four curves. $\Lambda_0$ provides the lower left boundary along the MRS in the $3^{rd}$ quadrant. With $R < \frac{1}{4}$, $\Delta_3$, which occurs at $A_3^*$ with eigenvalues $\lambda = \pm\frac{3i\pi}{1-R}$, creates a boundary parallel to the MRS in the $2^{nd}$ quadrant. This line approaches the MRS as $R \to \frac{1}{4}$ from below. $\Gamma_1$ forms a boundary in the $4^{th}$ quadrant, which significantly bulges away from the MRS. Its intersection with $\Lambda_0$ extends $\frac{1}{3}$ times the length of an edge of the MRS along this line into the $4^{th}$~quadrant. $\Gamma_3$ creates an almost mirror image across the $C$-axis in the $1^{st}$ quadrant, bulging away from the MRS the same distance. Fig.~\ref{fig:R249_tran} illustrates this expanded stability region very well. Below we prove some results about the four curves on the boundary of the stability region and give additional information on why we believe the stability region has its enlarged character. The case $R = \frac{1}{4}$ extends generically to the case $R = \frac{1}{2n}$. $\Lambda_0$ is always one part of the stability boundary. Symmetric to this boundary across the $B$-axis is $\Delta_{2n-1}$ at $A_{2n-1}^*$, which occurs with $A_{2n-1}^* \to \infty$ as $R \to \frac{1}{2n}$. Below we demonstrate that $\Delta_{2n-1}$ approaches the line $C - B = A_{2n-1}^*$ in the $BC$-plane as $A_{2n-1}^* \to \infty$, which is one side of the MRS. The other two sides are composed of $\Gamma_1$ in the $4^{th}$ quadrant and, symmetric to this, $\Gamma_{2n-1}$ in the $1^{st}$ quadrant. To help obtain the symmetrical shape discussed above (and exclude the bifurcation curves that pass near the point $(B, C) = (-A_{2n-1}^*,0)$), there are alternate forms of Eqns.~(\ref{bctrans}).
We multiply and divide the cosecant terms by $\cos\left(\frac{jR\pi}{1-R}\right)$, then use the definition of $A_j^*$ to obtain \begin{eqnarray} B_j^*(R) & = & (-1)^j\left(\frac{1}{1-R}\cos\left(\textstyle{\frac{jR\pi}{1-R}}\right) + \frac{RA_j^*}{(1-R)\cos\left(\textstyle{\frac{jR\pi}{1-R}}\right)}\right) \nonumber \\ C_j^*(R) & = & \frac{-A_j^*}{(1-R)\cos\left(\textstyle{\frac{jR\pi}{1-R}}\right)} - \frac{1}{1-R}\cos\left(\textstyle{\frac{jR\pi}{1-R}}\right) \label{bc_alt} \end{eqnarray} \begin{lemma}\label{lem_lim_line_even} For $R = \frac{1}{2n}$, one boundary of the region of stability is the limiting line \[ C - B = A_{2n-1}^*, \] which lies on the MRS. \end{lemma} \noindent{\bf Proof:} \quad For $R < \frac{1}{2n}$ and $R \to \frac{1}{2n}$, we consider the transition $A_{2n-1}^*$. The degeneracy line, $\Delta_{2n-1}$, satisfies \[ A = A_{2n-1}^*, \quad C - B = C_{2n-1}^* - B_{2n-1}^*. \] Since $j = 2n-1$ and $A_{2n-1}^* = -\frac{(2n-1)\pi}{1-R} \cot\left(\frac{(2n-1)R\pi}{1-R}\right)$, Eqns.~(\ref{bc_alt}) give \[ C_{2n-1}^* - B_{2n-1}^* = \textstyle{\frac{(2n-1)\pi}{1-R}\csc\left(\frac{(2n-1)R\pi}{1-R}\right)}. \] Now consider \[ \lim_{R \to \frac{1}{2n}^-} \frac{C_{2n-1}^* - B_{2n-1}^*}{A_{2n-1}^*} = \lim_{R \to \frac{1}{2n}^-}\textstyle{-\sec\left(\frac{(2n-1)R\pi}{1-R}\right)} = 1. \] Thus, $C_{2n-1}^* - B_{2n-1}^* \to A_{2n-1}^*$ for $R < \frac{1}{2n}$ as $R \to \frac{1}{2n}$, so $\Delta_{2n-1}$ tends towards one edge of the MRS. We note that the small distance between $\Delta_{2n-1}$ and the MRS may allow a very small segment of $\Gamma_2$ to be part of the stability region for all $R < \frac{1}{2n}$. \hfill\mbox{\raggedright\rule{0.07in}{0.1in}}\vspace{0.1in} For $R = \frac{1}{2n}$, we showed that $\Gamma_1$ intersects $\Lambda_0$ at $(B,C) = \left(\frac{A+2n}{2n-1}, -\frac{2n(A+1)}{2n-1}\right)$. 
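The limit in Lemma~\ref{lem_lim_line_even} is easy to check numerically. The sketch below is illustrative Python (not our MatLab code; the function name is ours), evaluating $A^*_{2n-1}$, $B^*_{2n-1}$, and $C^*_{2n-1}$ from the cotangent formula for $A_j^*$ and Eqns.~(\ref{bc_alt}) with $n = 2$ and delays approaching $\frac{1}{4}$ from below:

```python
from math import pi, cos, tan

def transition(j, R):
    """(A_j*, B_j*, C_j*) from A_j* = -(j*pi/(1-R)) cot(j*R*pi/(1-R))
    and the alternate forms (bc_alt)."""
    th = j * R * pi / (1.0 - R)
    c = cos(th)
    A = -j * pi / ((1.0 - R) * tan(th))
    B = (-1) ** j * (c / (1.0 - R) + R * A / ((1.0 - R) * c))
    C = -A / ((1.0 - R) * c) - c / (1.0 - R)
    return A, B, C

# (C* - B*)/A* -> 1 as R -> 1/4 from below (n = 2, so j = 2n - 1 = 3)
ratios = []
for R in (0.24, 0.249, 0.2499):
    A, B, C = transition(3, R)
    ratios.append((C - B) / A)
assert abs(ratios[2] - 1) < abs(ratios[1] - 1) < abs(ratios[0] - 1) < 0.02
```

The monotone approach of the ratio to 1 reflects the convergence of $\Delta_3$ to the edge $C - B = A_3^*$ of the MRS.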
We now show that as $R \to \frac{1}{2n}$ from below and $A_{2n-1}^* \to +\infty$, the point $(B_{2n-1}^*, C_{2n-1}^*)$ tends to the point symmetric, across the $B$-axis, to the intersection of $\Gamma_1$ and $\Lambda_0$. \begin{lemma}\label{lem_even_pt} For $R < \frac{1}{2n}$ and $R \to \frac{1}{2n}$, the bifurcation curve $\Gamma_{2n-1}$ approaches the point \[ (B_{2n-1}^*, C_{2n-1}^*) = \left(\frac{A_{2n-1}^*+2n}{2n-1}, \frac{2n(A_{2n-1}^*+1)}{2n-1}\right) \] with $A_{2n-1}^* \to +\infty$. \end{lemma} \noindent{\bf Proof:} \quad For $R \to \frac{1}{2n}$ from below with $j = 2n - 1$, Eqns.~(\ref{bc_alt}) yield \begin{eqnarray*} \lim_{R \to \frac{1}{2n}^-} B_{2n-1}^*(R) & = & \lim_{R \to \frac{1}{2n}^-}\left(-\frac{\cos\left(\frac{(2n-1)R\pi}{1-R}\right)}{1-R} - \frac{RA_{2n-1}^*}{(1-R)\cos\left(\frac{(2n-1)R\pi}{1-R}\right)}\right) \\ & = & \frac{2n}{2n-1} + \frac{A_{2n-1}^*}{2n-1}, \end{eqnarray*} since $1 - R \to \frac{2n-1}{2n}$ and $\cos\left(\frac{(2n-1)R\pi}{1-R}\right) \to \cos(\pi) = -1$. Similarly, \begin{eqnarray*} \lim_{R \to \frac{1}{2n}^-} C_{2n-1}^*(R) & = & \lim_{R \to \frac{1}{2n}^-}\left(- \frac{A_{2n-1}^*} {(1-R)\cos\left(\frac{(2n-1)R\pi}{1-R}\right)} - \frac{\cos\left(\frac{(2n-1)R\pi}{1-R}\right)}{1-R} \right) \\ & = & \frac{2n\,A_{2n-1}^*}{2n-1} + \frac{2n}{2n-1}. \end{eqnarray*} ({\bf Note:} In the limit as $R \to \frac{1}{2n}^+$, we have $A^*_{2n-1} \to -\infty$, which is not of interest in our study.) \hfill\mbox{\raggedright\rule{0.07in}{0.1in}}\vspace{0.1in} The next step in our analysis is to show that the bifurcation curves, $\Gamma_1$ and $\Gamma_{2n-1}$, pass arbitrarily close to the point $(A_{2n-1}^*,0)$ in the $BC$-plane as $R \to \frac{1}{2n}$. Note that this is the point on the MRS opposite the point of intersection of $\Lambda_0$ and $\Delta_{2n-1}$ at $A^*_{2n-1}$.
\begin{lemma} \label{C_cross_even} For $R < \frac{1}{2n}$ and $R \to \frac{1}{2n}$, the bifurcation curves, $\Gamma_1$ and $\Gamma_{2n-1}$, pass arbitrarily close to the point $(A_{2n-1}^*,0)$ in the $BC$-plane with $A_{2n-1}^* \to +\infty$. \end{lemma} \noindent{\bf Proof:} \quad The bifurcation curves cross the $B$-axis whenever $C(\omega) = 0$. From (\ref{bifC}), $C(\omega) = 0$ implies \begin{equation}\label{Czero} \omega = -\frac{A \sin(\omega)}{\cos(\omega)} \qquad {\rm or} \qquad -\omega\cos(\omega) = A\sin(\omega). \end{equation} With this information it follows from (\ref{bifB}) that \[ B(\omega) = -\frac{A}{\cos(\omega)}. \] For $\Gamma_1$, $0 < \omega < \frac{\pi}{1-R}$, which tends to the interval $0 < \omega < \frac{2n\pi}{2n-1}$ as $R \to \frac{1}{2n}$. The expression $-\omega\cos(\omega)$ remains bounded near $\omega = \pi$, so for $C(\omega) = 0$ to hold as $A$ becomes arbitrarily large, we must have $\sin(\omega) \to 0$, that is, $\omega \to \pi^-$. Thus, as $A \to \infty$, $\omega \to \pi^-$ for $\Gamma_1$ to cross the $B$-axis, and \[ \lim_{\omega \to \pi^-} B(\omega) = A. \] It follows that as $R \to \frac{1}{2n}$ from below, $A_{2n-1}^* \to \infty$ and $\Gamma_1$ passes arbitrarily close to the point $(A_{2n-1}^*,0)$. From the definition of $\Gamma_{2n-1}$, $\frac{(2n-2)\pi}{1-R} < \omega < \frac{(2n-1)\pi}{1-R}$, which tends to the interval $\frac{2n(2n-2)\pi}{2n-1} < \omega < 2n\pi$ as $R \to \frac{1}{2n}$. It is easy to see that $\omega = (2n-1)\pi$ is inside this interval for $\Gamma_{2n-1}$. Since this $\omega$ is an odd multiple of $\pi$ and $n$ is fixed and finite, the same arguments as for $\Gamma_1$ hold, which implies \[ \lim_{\omega \to (2n-1)\pi^-} B(\omega) = A. \] It follows that as $R \to \frac{1}{2n}$ from below, $A_{2n-1}^* \to \infty$ and $\Gamma_{2n-1}$ passes arbitrarily close to the point $(A_{2n-1}^*,0)$.
\hfill\mbox{\raggedright\rule{0.07in}{0.1in}}\vspace{0.1in} The lemmas above prove some of the features of the stability region in Fig.~\ref{fig:R249_tran} for $R = \frac{1}{2n}$ with $A \to +\infty$. In particular, we see the left boundaries aligning with the MRS in the $2^{nd}$ and $3^{rd}$ quadrants. We also proved that $\Gamma_1$ and $\Gamma_{2n-1}$ intersect near $B = A^*_{2n-1}$ when $C = 0$. Finally, we proved that $\Gamma_1$ and $\Gamma_{2n-1}$ intersect $\Lambda_0$ and $\Delta_{2n-1}$, respectively, in a symmetric manner at \[ (B, C) = \left(\frac{A_{2n-1}^*+2n}{2n-1}, \pm\frac{2n(A_{2n-1}^*+1)}{2n-1}\right), \] as $R \to \frac{1}{2n}$ from below and $A^*_{2n-1} \to +\infty$. It remains to show that $\Gamma_1$ and $\Gamma_{2n-1}$ are the only bifurcation curves on the boundary as $R \to \frac{1}{2n}$ from below and $A \to +\infty$. In this work we will only present numerical evidence for this result, providing some ideas for how a rigorous proof might proceed. The numerical results will include how much larger the asymptotic region of stability is for rational delays of the form $R = \frac{1}{2n}$. As noted earlier, we developed a MatLab code for easily generating and observing bifurcation curves at various values of $A$ and $R$ in the $BC$-plane. Mahaffy {\it et al.} \cite{MZJ} showed that the rational delays, $R$, due to the periodic nature of the sinusoidal functions, result in the bifurcation curves ordering themselves into \textit{\textbf{families of curves}}. \begin{definition}\label{family_defn} For $A$ \,fixed, take $R=\frac{k}{n}$ and $j=n-k$. From Eqns.~(\ref{bifB}) and (\ref{bifC}), one can see that the singularities occur at $\frac{ni\pi}{j}, i=0,1,...$ . 
The bifurcation curve $i$, $\Gamma_i$, with $\frac{n(i-1)\pi}{j}<\omega<\frac{ni\pi}{j}$ satisfies: \vspace*{-0.15in} \begin{equation*} B_{i}(\omega)=\frac{A\sin(\frac{k\omega}{n})+\omega\cos(\frac{k\omega}{n})}{\sin(\frac{j\omega}{n})}, \hspace{0.3in} C_{i}(\omega)=-\frac{A\sin(\omega)+\omega\cos(\omega)}{\sin(\frac{j\omega}{n})} \end{equation*} \noindent Now consider $\Gamma_{i+2j}$ with $\mu=\omega+2n\pi$, then \begin{align*} B_{i+2j}(\mu)&=\frac{A\sin(\frac{k\mu}{n})+\mu\cos(\frac{k\mu}{n})}{\sin(\frac{j\mu}{n})}= \frac{A\sin(\frac{k\omega}{n})+(\omega+2n\pi)\cos(\frac{k\omega}{n})}{\sin(\frac{j\omega}{n})} \\[0.2in] C_{i+2j}(\mu)&= -\frac{A\sin(\omega)+(\omega+2n\pi)\cos(\omega)}{\sin(\frac{j\omega}{n})} \end{align*} These equations show that $B_{i+2j}(\mu)$ follows the same trajectory as $B_{i}(\omega)$ with a shift of $2n\pi\cos(\frac{k\omega}{n})/\sin(\frac{j\omega}{n})$ \,for $\omega\in\Big( \frac{(i-1)\pi}{1-R}, \frac{i\pi}{1-R}\Big)$, while $C_{i+2j}(\mu)$ \,follows the same trajectory as $C_{i}(\omega)$ with a shift of \,$2n\pi\cos(\omega)/\sin(\frac{j\omega}{n})$ \,over the same values of $\omega$. \ This related behavior of bifurcation curves separated by $\omega=2n\pi$ creates $2j$ \textbf{families of curves} in the $BC$ plane for fixed $A$. \,Thus, there is a quasi-periodicity among the bifurcation curves when $R$ is rational. \end{definition} The organization of these families of curves and systematic transitions allow one to observe how the different bifurcation curves enter and leave the boundary of the stability region. (See Fig.~\ref{fig:Rall}.) We provide more details following some analytic results for delays of the form $R = \frac{1}{2n+1}$.
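The quasi-periodicity in Definition~\ref{family_defn} is exact and provides a useful sanity check for any numerical implementation. A short illustrative Python sketch (not our MatLab code; names are ours), using $R = \frac{1}{5}$ so that $k = 1$, $n = 5$, and $j = 4$:

```python
from math import sin, cos, pi

def B_val(A, R, w):
    return (A * sin(R * w) + w * cos(R * w)) / sin((1.0 - R) * w)

def C_val(A, R, w):
    return -(A * sin(w) + w * cos(w)) / sin((1.0 - R) * w)

# For R = k/n, points separated by 2n*pi in omega differ by the stated
# shifts, so Gamma_{i+2j} tracks Gamma_i and there are 2j families.
A, k, n = 50.0, 1, 5                      # R = 1/5, j = 4, 2j = 8 families
R = k / n
for w in (1.0, 2.0, 3.5):                 # samples on Gamma_1's range
    shift_B = 2 * n * pi * cos(R * w) / sin((1 - R) * w)
    shift_C = -2 * n * pi * cos(w) / sin((1 - R) * w)   # signed version
    assert abs(B_val(A, R, w + 2 * n * pi) - B_val(A, R, w) - shift_B) < 1e-8
    assert abs(C_val(A, R, w + 2 * n * pi) - C_val(A, R, w) - shift_C) < 1e-8
```

The check passes because $\sin(\frac{k}{n}(\omega + 2n\pi)) = \sin(\frac{k\omega}{n})$ and $\sin(\frac{j}{n}(\omega + 2n\pi)) = \sin(\frac{j\omega}{n})$ exactly, as in the definition.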
\subsection{Stability region for $R = \frac{1}{2n+1}$} \begin{figure}[htb] \begin{minipage}[b]{0.55\linewidth} \begin{center} \includegraphics[width=\textwidth]{images/A800} \end{center} \end{minipage} \hspace{0.5cm} \begin{minipage}[b]{0.4\linewidth} \begin{center} \includegraphics[width=0.7\textwidth]{images/A800a} \\ \includegraphics[width=0.7\textwidth]{images/A800b} \end{center} \end{minipage} \caption{\small{Typical region for $R = \frac{1}{2n+1}$. Example shows $R = 0.199 \approx \frac{1}{5}$ with $A^*_4 = 799.9$. The boundary of the stability region consists of $\Lambda_0$ (violet), $\Gamma_1$ (blue), $\Delta_4$ (dashed red), $\Gamma_4$ (red), and very small segments of $\Gamma_2$ (green) and $\Gamma_3$ (black).}} \label{fig:R333_tran} \end{figure} When $R = \frac{1}{2n+1}$ and $A \to +\infty$, the stability region again bulges away from the MRS and asymptotically reduces to just four bifurcation curves. However, the regions of stability, which lie outside the MRS, now appear in the $2^{nd}$ and $4^{th}$ quadrants. Fig.~\ref{fig:R333_tran} shows $R = 0.199$ and $A^*_4 = 799.9$ with primarily four bifurcation surfaces comprising the boundary of the region of stability. This figure is quite representative of any $R \to \frac{1}{2n+1}$ from below as $A^*_{2n} \to +\infty$. Below we present several lemmas, which analytically show results similar to Lemmas~\ref{lem_lim_line_even}--\ref{C_cross_even} for $R = \frac{1}{2n}$. The next section will complete the study with numerical results. For any delay, $R$, $\Lambda_0$ is always one part of the stability boundary, appearing in the $3^{rd}$ quadrant of the $BC$-plane along the MRS with $B + C = -A$. When $R \to \frac{1}{2n+1}$ from below, there is a transition $A^*_{2n} \to \infty$, which results in a degeneracy line that approaches \[ B + C = A^*_{2n}, \] is parallel to $\Lambda_0$, and lies on the opposite side of the MRS.
As in the previous case ($R = \frac{1}{2n}$), $\Gamma_1$ in the $4^{th}$ quadrant creates another edge of the stability region. Finally, for $R = \frac{1}{2n+1}$, $\Gamma_{2n}$ asymptotically mirrors $\Gamma_1$ in the $2^{nd}$ quadrant, completing the simple enlarged stability region. Below we present a few lemmas to prove some of our claims. \begin{lemma} For $R = \frac{1}{2n+1}$, one boundary of the region of stability is the limiting line \[ B + C = A_{2n}^*, \] which lies on the MRS. \end{lemma} \noindent{\bf Proof:} \quad The proof of this lemma closely parallels the proof of Lemma~\ref{lem_lim_line_even}. With $R$ near $\frac{1}{2n+1}$, we consider the transition $A^*_{2n}$, and $\Delta_{2n}$ satisfies \[ A = A_{2n}^*, \quad B + C = B_{2n}^* + C_{2n}^*. \] Since $A_{2n}^* = -\frac{(2n)\pi}{1-R} \cot\left(\frac{(2n)R\pi}{1-R}\right)$, Eqns.~(\ref{bc_alt}) give \[ B_{2n}^* + C_{2n}^* = \textstyle{\frac{(2n)\pi}{1-R}\csc\left(\frac{(2n)R\pi}{1-R}\right)}. \] It follows that \[ \lim_{R \to \frac{1}{2n+1}^-} \frac{B_{2n}^*+C_{2n}^*}{A_{2n}^*} = \lim_{R \to \frac{1}{2n+1}^-}\textstyle{-\sec\left(\frac{(2n)R\pi}{1-R}\right)} = 1. \] Thus, $B_{2n}^* + C_{2n}^* \to A_{2n}^*$ for $R < \frac{1}{2n+1}$ as $R \to \frac{1}{2n+1}$. \hfill\mbox{\raggedright\rule{0.07in}{0.1in}}\vspace{0.1in} For $R = \frac{1}{2n+1}$, $\Gamma_1$ intersects $\Lambda_0$ at \[ (B,C) = \left(\frac{A+2n+1}{2n}, -\frac{(2n+1)(A+1)}{2n}\right). \] The next lemma shows that as $R \to \frac{1}{2n+1}$ from below and $A_{2n}^* \to +\infty$, the point $(B_{2n}^*, C_{2n}^*)$ tends to the point symmetric, through the origin, to the intersection of $\Gamma_1$ and $\Lambda_0$. \begin{lemma}\label{lem_odd_pt} For $R < \frac{1}{2n+1}$ and $R \to \frac{1}{2n+1}$, the bifurcation curve $\Gamma_{2n}$ approaches the point \[ (B_{2n}^*, C_{2n}^*) = \left(-\frac{A_{2n}^*+2n+1}{2n}, \frac{(2n+1)(A_{2n}^*+1)}{2n}\right) \] with $A_{2n}^* \to +\infty$.
\end{lemma} \noindent{\bf Proof:} \quad This proof is very similar to the proof of Lemma~\ref{lem_even_pt}, so we omit it here. \hfill\mbox{\raggedright\rule{0.07in}{0.1in}}\vspace{0.1in} Fig.~\ref{fig:R333_tran} shows that $\Gamma_1$ and $\Gamma_{2n}$ cross the $C$-axis near the corners of the MRS. The next lemma proves this asymptotic limit, giving more information about the symmetric shape of the region. \begin{lemma} \label{C_cross_odd} For $R < \frac{1}{2n+1}$ and $R \to \frac{1}{2n+1}$, the bifurcation curves, $\Gamma_1$ and $\Gamma_{2n}$, pass arbitrarily close to the points $\left(A_{2n}^*,0\right)$ and $\left(-A_{2n}^*,0\right)$, respectively, in the $BC$-plane with $A_{2n}^* \to +\infty$. \end{lemma} \noindent{\bf Proof:} \quad The argument for $\Gamma_1$ passing arbitrarily close to $(A_{2n}^*,0)$ is almost identical to the argument given in Lemma~\ref{C_cross_even}. Similarly, $\Gamma_{2n}$ has $\frac{(2n-1)\pi}{1-R} < \omega < \frac{(2n)\pi}{1-R}$, which contains $\omega = 2n\pi$. Since this is an even multiple of $\pi$, $\cos(\omega) \to 1$. Otherwise, the arguments parallel Lemma~\ref{C_cross_even}, and so $\Gamma_{2n}$ passes arbitrarily close to $\left(-A_{2n}^*,0\right)$. \hfill\mbox{\raggedright\rule{0.07in}{0.1in}}\vspace{0.1in} This completes our analytic results to date. The next section provides more numerical details to support our claims of increased stability regions for delays of the form $R = \frac{1}{n}$ with $A$ large. \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{figure}{0} \setcounter{table}{0} \section{Asymptotic Stability Region for $R = \frac{1}{n}$} The previous section presented the simple stability regions for $R$ near $\frac{1}{n}$, and we gave some details for $R = 0.249$ on how the stability region evolves as $A$ increases.
In this section we present more details from numerical studies to convince the reader of the enlarged stability region for these specific rational delays and provide some measure of the size increase compared to the MRS. Geometrically, for a fixed $A$ the MRS is a square region with one side bounded by $\Lambda_0$. Definition~\ref{family_defn} shows that rational delays, $R$, create families of smooth curves with similar properties and following similar trajectories. The limited number of family members for a fixed $R = \frac{1}{n}$ creates a type of harmonic, which prevents the complete collection of bifurcation curves from asymptotically approaching all sides of the MRS. Hence, the stability region for $R = \frac{1}{n}$ is enlarged with larger regions for smaller values of $n$. This section provides some details and numerical results for the structure and size of the asymptotic stability region. \subsection{$R$ near $\frac{1}{4}$} To illustrate our analysis, we concentrate on the case $R = \frac{1}{4}$ and note that the arguments generalize to other delays of the form $R = \frac{1}{n}$. For $R = \frac{1}{n}$, Definition~\ref{family_defn} gives $2(n-1)$ distinct families, which implies $R = \frac{1}{4}$ has six distinct families. As noted above, as $A^*_3 \to +\infty$ with $R \to \frac{1}{4}$ from below, the key bifurcation curves on the boundary of the stability region are $\Gamma_1$ and $\Gamma_3$, with $\Lambda_0$ and $\Delta_3$ creating the other two boundaries along the MRS. $\Gamma_1$ and $\Gamma_3$ are obviously the first bifurcation curves of the first and third families, while $\Delta_3$ arises from the singular point $\omega = \frac{3\pi}{1-R}$ between $\Gamma_3$ and $\Gamma_4$.
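A quick numerical illustration of Lemma~\ref{C_cross_even} for this case, as an illustrative Python sketch (not our MatLab code; the function name is ours): solving $A\sin(\omega) + \omega\cos(\omega) = 0$ on $(\frac{\pi}{2}, \pi)$ by bisection locates the $B$-axis crossing of $\Gamma_1$, and $B = -A/\cos(\omega) \to A$ as $A$ grows.

```python
from math import sin, cos, pi

def b_axis_crossing(A):
    """Root of A sin(w) + w cos(w) = 0 on (pi/2, pi) and the crossing
    B = -A/cos(w); f is strictly decreasing there, so bisection works."""
    f = lambda w: A * sin(w) + w * cos(w)
    lo, hi = pi / 2, pi - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    w = 0.5 * (lo + hi)
    return w, -A / cos(w)

# As A increases, w -> pi^- and B/A -> 1: Gamma_1 crosses near (A, 0)
for A in (10.0, 100.0, 1000.0):
    w, B = b_axis_crossing(A)
    assert 0 < pi - w < pi / 2
    assert abs(B / A - 1.0) < 4.0 / A      # crude bound, tight enough here
```

Since $A^*_3 \to +\infty$ as $R \to \frac{1}{4}$ from below, this is the mechanism by which $\Gamma_1$ (and, by the parallel argument at $\omega = 3\pi$, $\Gamma_3$) passes arbitrarily close to $(A^*_3, 0)$.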
\begin{figure}[htb] \begin{center} \includegraphics[width=4.0in]{images/249_full} \begin{tabular}{cccc} \includegraphics[width=0.23\textwidth]{images/fig_249_a} & \includegraphics[width=0.22\textwidth]{images/fig_249_b} & \includegraphics[width=0.22\textwidth]{images/249_fa} & \includegraphics[width=0.22\textwidth]{images/249_fb} \end{tabular} \caption{\small{Ten bifurcation curves for each of the six families for $R = 0.249$ at $A_3^* = 749.93$ with close-ups at the corners of the MRS.}} \label{fig:R249_fam} \end{center} \end{figure} Fig.~\ref{fig:R249_fam} shows 60 bifurcation curves for $R = 0.249$ at $A^*_3 = 749.93$. By continuity of the characteristic equation (\ref{chareqn}), we expect similarities between $R = \frac{1}{4}$ and $R = 0.249$, particularly for $A < A^*_3$. Fig.~\ref{fig:R249_fam} shows clearly the six family structure we predict for $R = \frac{1}{4}$. (Note that $R = 0.249$ is predicted to have 1502 families by Definition~\ref{family_defn}, which will ultimately result in a much closer approach of the stability region to the MRS as $A \to +\infty$.) The figure shows the distinct ordering of the family members within the six families and the characteristic pattern of each of the six families. The coloring pattern in Figure~\ref{fig:R249_fam} follows $\Lambda_0$ in violet and then the six successive families in blue, green, black, red, gray, and orange. Fig.~\ref{fig:R249_tran} shows the five curves on the boundary of the stability region. As noted before, $\Lambda_0$ is always on the boundary in the $3^{rd}$ quadrant. It connects to $\Gamma_1$, the $1^{st}$ member of the first family. The first close-up in Fig.~\ref{fig:R249_fam} shows the organization of the first family with all members lying outside the boundary of the region of stability with each successive member further away. 
This first close-up also shows the $6^{th}$ family, which parallels the first family along the boundary in the $4^{th}$ quadrant, then diverges opposite the first family below $\Lambda_0$. Thus, in the $4^{th}$ quadrant the boundary of the stability region consists only of the curves from $\Lambda_0$ and $\Gamma_1$. In the $2^{nd}$ quadrant, the primary boundary is $\Delta_3$. By Lemma~\ref{lem_lim_line_even}, as $R \to \frac{1}{4}$, $\Delta_3$ approaches the line $C - B = A^*_3$, which is visible in Fig.~\ref{fig:R249_fam}. Since $R = 0.249 < \frac{1}{4}$, there is a small gap between $\Delta_3$ and the boundary of the MRS, so we see a small segment of $\Gamma_2$ on the boundary of the stability region, which is visible in the final close-up of Fig.~\ref{fig:R249_fam}. The line $\Delta_3$ extends into the $1^{st}$ quadrant. We see that outside the tiny segment of $\Gamma_2$ (left of the MRS), the second and fifth families are outside the stability region running symmetrically about the $C$-axis through the $2^{nd}$ and $3^{rd}$ quadrants. In the $1^{st}$ quadrant, we see $\Gamma_3$ on the boundary of the stability region. The third close-up of Fig.~\ref{fig:R249_fam} shows the ordering of the third family with all members successively outside $\Gamma_3$. This figure also shows all members of the fourth family outside (and intertwined with) the third family, paralleling each other in the first quadrant. The fourth family diverges opposite the third family parallel to $\Delta_3$. Numerically, our MatLab programs allow us to observe the stability region for any delay $R$ slightly less than $\frac{1}{4}$ at $A^*_3$, and the boundary of the stability region is almost identical to Fig.~\ref{fig:R249_fam}, except that $A^*_3$ increases with $R \to \frac{1}{4}$, expanding the scales of the $B$ and $C$ axes. To further understand the evolution of this stability surface as $R \to \frac{1}{4}$ from below, we detail how Fig.~\ref{fig:Rall} can be used to explain the process. 
As noted earlier, the key elements causing the bulges in the stability region are the limited number of families of bifurcation curves and the transitions, which distort the boundary. At $R = \frac{1}{4}$, all of the transitions $A^*_{3n}$, $n = 1,2,...$, occur at $+\infty$, which is easily verified from Definition~\ref{atrans}. Furthermore, it can be shown using perturbation analysis that the transitions have a distinct ordering, \[ A^*_3 > A^*_6 > ... > A^*_{3n} > A^*_{3(n+1)} > ... \] for delays $R < \frac{1}{4}$. (Details of this proof, which depend on $n$ and $R$ as expected, are omitted.) This sequence allows one to readily determine changes to the boundary of the stability region. Table~\ref{summarytable249} provides the complete evolution of the stability surface for $R = 0.249$ from its beginning at $A_0 \approx -5.016$ until $A^*_3 \approx 749.93$. We call attention to the sequence of reverse tangencies and the reverse transferral for $A < A^*_3$. These events all occur just prior to one of the transitions, $A^*_{3n}$. The transition $A^*_3$ creates $\Delta_3$ with $C - B = C^*_3 - B^*_3$ at $\omega = \frac{3\pi}{1-R}$, which becomes part of the boundary of the stability region. The transition $A^*_6 \approx 749.72$ results in $\Delta_6$ passing through the point $(B^*_6,C^*_6) \approx (250.05, -1000.19)$, with the line parallel to and below $\Lambda_0$. At $A^*_6$, $\Gamma_1$ intersects $\Lambda_0$ at $(B, C) \approx (247.91, -999.63)$, which lies closer to the stability region than $(B^*_6,C^*_6)$. Thus, this transition pulls $\Gamma_6$ and $\Gamma_7$ outside the stability region, swapping the directions in which the curves go to infinity, giving $\Gamma_7$ its flow paralleling $\Gamma_1$ and $\Lambda_0$ and maintaining its position outside the stability region. This distortion from the transition $A^*_6$ results in the reverse transferral, $\tilde{A}^z_{6,1} \approx 749.4$, just prior to the transition.
In a similar fashion, $A^*_9 \approx 749.37$ creates $\Delta_{9}$, which passes through $(B^*_9, C^*_9) \approx (250.10, 1000.42)$ and is parallel to $\Delta_3$. We note that $\Delta_3$ passes through $(B^*_3, C^*_3) \approx (250.01, 1000.05)$. Again, $(B^*_9, C^*_9)$ is outside the stability region, pulling $\Gamma_9$ and $\Gamma_{10}$ to positions paralleling, but outside, $\Gamma_3$ in the $1^{st}$ quadrant. Subsequently, $\Gamma_{10}$ falls in order with the other members of the fourth family, which is seen with the red curves of Fig.~\ref{fig:R249_fam}. The distortion from $A^*_9$ and the reordering of the curves produce the reverse tangency, $\tilde{A}^t_{9,3} \approx 747.13$, simplifying the composition of the boundary of the stability region. Following the alternating pattern, we find $A^*_{12} \approx 748.88$, producing $\Delta_{12}$ passing through $(B^*_{12}, C^*_{12})$ $\approx (250.18, -1000.74)$, which is parallel to $\Lambda_0$. Now $(B^*_{12}, C^*_{12})$ is outside the stability region, pulling $\Gamma_{13}$ to a position paralleling, but outside, $\Gamma_7$. The distortion from $A^*_{12}$ and the reorganization of the curves create the reverse tangency, $\tilde{A}^t_{12,6} \approx 743.46$, losing the curve $\Gamma_{12}$ from the boundary of the stability region. Fig.~\ref{fig:R249_intermed} shows the boundary of the stability region at $A^*_{15}$, where $\Delta_{15}$ is formed. This pulls $\Gamma_{15}$ and $\Gamma_{16}$ outside the stability region, which earlier resulted in the reverse tangency, $\tilde{A}^t_{15,9} \approx 738.19$, and $\Gamma_{15}$ leaving the boundary of the stability region. Fig.~\ref{fig:R249_intermed} shows the $4^{th}$ family (red) lined sequentially outside the region of stability for $\Gamma_{22}$, $\Gamma_{28}$, ... with the $3^{rd}$ family (black) paralleling $\Gamma_3$ along the upper right boundary of the stability region, then moving away, except for $\Gamma_{3}$ and $\Gamma_{9}$.
Since $\tilde{A}^t_{9,3} \approx 747.13$, $\Gamma_9$ has just left the boundary of the stability region and soon transitions with $\Gamma_{10}$ at $A^*_9 \approx 749.37$, causing $\Gamma_{10}$ to follow the pattern of the other members of the $4^{th}$ family outside the region of stability. \begin{figure}[htb] \begin{center} \includegraphics[width=3.5in]{images/249_ot} \caption{\small{Nine and ten bifurcation curves for the third and fourth families, respectively, for $R = 0.249$ at $A_{15}^* \approx 748.25$ are shown, including $\Delta_{15}$. Note that $\Gamma_3$ and $\Gamma_9$ remain close to the boundary of the stability region with $\Gamma_3$ constructing this portion of the boundary. $\tilde{A}^t_{9,3} \approx 747.13$ has recently occurred, removing $\Gamma_9$ from the boundary of the stability region.}} \label{fig:R249_intermed} \end{center} \end{figure} As seen in Table~\ref{summarytable249}, there is an alternating pattern of reverse tangencies as we progress to lower values of $A$, and each reverse tangency, triggered by one of the $A^*_{3n}$ transitions, simplifies the boundary of the stability region and organizes the families into the pattern seen in Fig.~\ref{fig:R249_fam}. This same sequence of events occurs for each $R < \frac{1}{4}$ (sufficiently close), with more tangencies and reverse tangencies occurring before $A^*_3$ and with $A^*_3$ growing as $R \to \frac{1}{4}$ from below. Thus, the geometric orientation of the curves and the sequence of reverse tangencies and transferrals are virtually identical to the figures shown with $R = 0.249$, except that the $B$ and $C$ scales increase as $R \to \frac{1}{4}$. The continuity of the characteristic equation shows that all delays $R < \frac{1}{4}$, yet sufficiently close to $R = \frac{1}{4}$, will generate a simplified stability region very similar to Fig.~\ref{fig:R249_tran} at $A^*_3$. As $R \to \frac{1}{4}$ from below, $\Delta_3$ gets closer to the MRS and the contribution of $\Gamma_2$ on the boundary shrinks.
We have been unable to definitively prove whether $\Gamma_2$ persists on the boundary of the stability region for $R < \frac{1}{4}$ at $A_3^*$ or whether, for $R$ sufficiently close to $\frac{1}{4}$, $\Gamma_2$ exits the boundary of the stability region. This simple shape of the stability region allows easy numerical computation of how enlarged the stability region is relative to the MRS. Table~\ref{table:R1over4} gives the relative increase of this region of stability for several values of $R \to \frac{1}{4}$, indicating the predicted asymptotic increase in size of the stability region for $R = \frac{1}{4}$ at $A^*_3 = +\infty$. Since the transitions, $A^*_{3n}$, all occur at $+\infty$ when $R = \frac{1}{4}$, continuity of the characteristic equation suggests this enlarged stability region persists for $R = \frac{1}{4}$ and should be 26.86\% larger than the MRS. For any $R < \frac{1}{4}$ and $A > A^*_3$, the six family structure breaks down, leading to new tangencies and a new ordering of the larger families, which results in significantly smaller regions of stability. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c||c|c|} \hline $R$ & Area Ratio & $R$ & Area Ratio \\ \hline 0.249 & 1.2687437 & 0.24999 & 1.2686377 \\ \hline 0.2499 & 1.2686388 & 0.249999 & 1.2686377 \\ \hline \end{tabular} \caption{\small{Numerical computation of the region of stability at $A^*_3$ near $R = \frac{1}{4}$. Area given as the ratio of the stability region to the MRS.}} \label{table:R1over4} \end{center} \end{table} \subsection{Increased Area for $R = \frac{1}{n}$} Figs.~\ref{fig:R249_tran} and \ref{fig:R333_tran} show the simple bifurcation curve structure on the typical boundary of the stability region for $R$ near $\frac{1}{n}$ at $A^*_{n-1}$.
The symmetrical shape varies depending on whether $n$ is even or odd, but all these stability regions at $A^*_{n-1}$ reduce to having $\Lambda_0$ on one edge, $\Delta_{n-1}$ on another, and the bifurcation curves $\Gamma_1$ and $\Gamma_{n-1}$ comprising the majority of the remaining two edges of the boundary of the region of stability. (The gap between $\Delta_{n-1}$ and the MRS allows small segments of $\Gamma_2$ and possibly $\Gamma_{n-2}$ to remain on the boundary of the stability region at $A_{n-1}^*$, shrinking as $R \to \frac{1}{n}$.) $\Gamma_1$ and $\Gamma_{n-1}$ bulge out from the MRS, leaving an increased region of stability, which is readily computed. For $R = \frac{1}{n}$, Def.~\ref{family_defn} gives $2(n-1)$ families of bifurcation curves. These families organize in much the same way as shown in the previous section ($R$ near $\frac{1}{4}$) to help maintain the increased regions of stability for $R = \frac{1}{n}$ over the MRS. The orderly family structure allows one to study each $R = \frac{1}{n}$ much as we did in the previous section and observe similar sequences of transferrals, tangencies, reverse tangencies, and reverse transferrals, which ultimately lead to the simple structure of the stability region seen in Figs.~\ref{fig:R249_tran} and \ref{fig:R333_tran} at $A^*_{n-1}$ for $R < \frac{1}{n}$, sufficiently close. Using the continuity (pointwise) of the characteristic equation, we claim that the bulge in the region of stability persists for $R = \frac{1}{n}$.
\begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c||c|c|c|} \hline $R$ & Area Ratio & Linear Extension & $R$ & Area Ratio & Linear Extension\\ \hline $\frac{1}{2}\rule[-0.08in]{0in}{.25in}$ & 2.0000 & 1.0000 & $\frac{1}{7}\rule[-0.08in]{0in}{.25in}$ & 1.1084 & 0.1667\\ \hline $\frac{1}{3}\rule[-0.08in]{0in}{.25in}$ & 1.4431 & 0.5000 & $\frac{1}{8}\rule[-0.08in]{0in}{.25in}$ & 1.0878 & 0.1429 \\ \hline $\frac{1}{4}\rule[-0.08in]{0in}{.25in}$ & 1.2686 & 0.3333 & $\frac{1}{9}\rule[-0.08in]{0in}{.25in}$ & 1.0729 & 0.1250 \\ \hline $\frac{1}{5}\rule[-0.08in]{0in}{.25in}$ & 1.1859 & 0.2500 & $\frac{1}{10}\rule[-0.08in]{0in}{.25in}$ & 1.0617 & 0.1111 \\ \hline $\frac{1}{6}\rule[-0.08in]{0in}{.25in}$ & 1.1386 & 0.2000 & & &\\ \hline \end{tabular} \caption{\small{Increases in the Region of Stability for $R = \frac{1}{n}$. Area ratio is the ratio of the stability area to the MRS. The linear extension is the ratio of how far the stability region extends past the MRS.}} \label{table:area_bulge} \end{center} \end{table} We showed the region of stability extends linearly along the MRS by a factor of $\frac{1}{n-1}$ for $R = \frac{1}{n}$. The simple bifurcation curve structure allows easy numerical computation of this area bulging from the MRS. Table~\ref{table:area_bulge} gives the size of the increased region of stability for various $R = \frac{1}{n}$. Asymptotically, the region of stability for $R = \frac{1}{2}$ is triangular with only $\Lambda_0$, $\Gamma_1$, and $\Delta_{1}$. This results in a region that is twice the size of the MRS, asymptotically. As the denominator increases, the asymptotic region of stability decreases relative to the MRS, yet it is still over 10\% larger than the MRS when $R = \frac{1}{7}$. \subsection{Stability Spurs} The discussion above shows how transitions increase the area of the region of stability for $R = \frac{1}{n}$. 
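Since the linear extension for $R = \frac{1}{n}$ is simply $\frac{1}{n-1}$, the ``Linear Extension'' column of Table~\ref{table:area_bulge} can be checked directly. The following short Python sketch (an illustration, not part of our MatLab codes) reproduces the tabulated values to the quoted precision:

```python
# Check the "Linear Extension" column of the table: for R = 1/n the
# stability region extends past the MRS by a linear factor of 1/(n-1).
# Values below are the table entries, quoted to four decimal places.
quoted = {2: 1.0000, 3: 0.5000, 4: 0.3333, 5: 0.2500,
          6: 0.2000, 7: 0.1667, 8: 0.1429, 9: 0.1250, 10: 0.1111}

# Recompute 1/(n-1) and round to the table's precision.
extension = {n: round(1 / (n - 1), 4) for n in quoted}
```

Each recomputed value agrees with the corresponding table entry, confirming that the column is exactly $\frac{1}{n-1}$ rounded to four decimal places.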
Just before a transition, $A_j^*$, bifurcation curve $\Gamma_j$ extends out toward bifurcation curve $\Gamma_{j+1}$, expanding the main region of stability. Numerically, we observe that just prior to $A_j^*$, $\Gamma_{j+1}$ self-intersects, creating an island of stability that is disconnected in the $BC$-plane for a fixed $A$. Definition~3.6 describes the stability spurs, which connect to the main stability surface at the transitions, $A^*_j$. In this section we provide some details from our numerical simulations about the stability spurs. \begin{figure}[htb] \centering\includegraphics[width=0.55\textwidth]{images/spur} \caption{The spur lengths associated with $Sp_1(R)$ (green), $Sp_2(R)$ (black) and $Sp_3(R)$ (red) as a function of $R$.} \label{fig:spurlength} \end{figure} Numerically, we observe exactly $n-1$ stability spurs for $R \in \left(\frac{1}{n+1}, \frac{1}{n}\right)$. The largest stability spur, $Sp_1(R)$, corresponds to $A_1^*$, at least for a limited range of $R$. We performed an extensive study of the stability spurs for $R \in \left(\frac{1}{5},\frac{1}{4}\right)$. Over this range there appear to be exactly three stability spurs. Fig.~\ref{fig:spurlength} shows the variation in length of the three stability spurs, which are all monotonically decreasing in $R$. The length of the first spur for $R \in \left[\frac{1}{5},\frac{1}{4}\right]$ ranges from $|Sp_1(0.2)| = 0.4064$ adjoining the main stability surface at $A_1^*(0.2) = -3.927$ to $|Sp_1(0.25)| = 0.2840$ adjoining the main stability surface at $A_1^*(0.25) = -2.418$. Where this first stability spur adjoins the main stability surface, the cross-sectional area in the $BC$-plane of $Sp_1(0.2)$ is 17.65\% of the total stable region, while the area of $Sp_1(0.25)$ at $A_1^*(0.25)$ is 7.08\% of the total stable cross-section. (See Fig.~\ref{fig:spur}.) The sizes of the second and third stability spurs are significantly smaller.
The length of $Sp_2(0.2)$ is 0.1136, while the length of $Sp_2(0.25)$ is 0.0305. The cross-sectional area of $Sp_2(0.2)$ is 0.3003\% of the total stable region at $A_2^*(0.2) = 0$, while the area of $Sp_2(0.25)$ is 0.01483\% of the total stable region at $A_2^*(0.25) = 4.837$. The length of $Sp_3(0.2)$ is 0.0110, and this spur adjoins the main stability region at $A_3^*(0.2) = 11.78$. Clearly, no spur occurs at $R = \frac{1}{4}$, as $A_3^*(0.25) = +\infty$. The stability spurs are easy to observe in Fig.~\ref{fig:Rfifth} for $R = \frac{1}{5}$. From a numerical perspective, the length of a stability spur is difficult to compute accurately because of the cusp point, $A_j^p$, which is a singularity. Both the length and the self-intersection of $Sp_3(R)$ become impossible to compute as $R \to \frac{1}{4}$, and effectively, $Sp_3(R)$ vanishes as $R \to \frac{1}{4}$. However, as we have already seen, the main stability region bulges out in the $1^{st}$ quadrant because of $A_3^*(R)$, but we have been unable to confirm that $\Gamma_4$ always self-intersects as $A_3^*(R) \to + \infty$ with $R \to \frac{1}{4}$. \section{Example Continued} In Section~2, we examined an example motivated by work of B{\'e}lair and Mackey \cite{BEM}, and Fig.~\ref{fig:platelet} showed how rational delays stabilized the model. Here we use some of the information above to provide more details about the complex behavior observed in Fig.~\ref{fig:platelet}. With the parameters given in Section~2, the model is linearized about the nontrivial equilibrium, $P_e \approx 5.565$. If $y(t) = P(t) - P_e$, then the approximate linearized model becomes: \begin{equation} \frac{dy}{dt} = -100\,y(t) + 100\,y(t-R) - 35\,y(t-1). \label{lin_model} \end{equation} This is Eqn.~(\ref{DDE2}) with $(A, B, C) = (100, 35, -100)$. We use our MatLab program to generate plots of 40 bifurcation curves in the $BC$-plane with $A = 100$ for various values of $R$.
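For reference, substituting the ansatz $y(t) = e^{\lambda t}$ into Eqn.~(\ref{lin_model}) gives the characteristic equation of the linearized model,
\[
\lambda = -100 + 100\, e^{-\lambda R} - 35\, e^{-\lambda},
\]
whose roots with positive real part are the unstable eigenvalues computed for the various delays below.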
Fig.~\ref{fig:model_13} shows the distinct four family feature for the delay $R = \frac{1}{3}$ and the enlarged region of stability. The linearized model is in the region of stability, which agrees with the numerical simulation in Fig.~\ref{fig:platelet}, where the delay $R = \frac{1}{3}$ gives a stable solution. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.35\textwidth]{images/plat13bifa} & \includegraphics[width=0.35\textwidth]{images/plat13bifb} \end{tabular} \end{center} \caption{Bifurcation curves for the modified platelet model with $R = \frac{1}{3}$ and $A = 100$, showing the point where the model exists in the parameter space. This point is clearly in the stable region.} \label{fig:model_13} \end{figure} When the delay is decreased to $R = 0.318$, there is a transition at $A_2^* = 42.8$. The four family structure rapidly unravels, and the bifurcation curves begin approaching the MRS more closely. Fig.~\ref{fig:model_318} shows that Eqn.~(\ref{lin_model}) has its equilibrium outside the curves $\Gamma_9$, $\Gamma_{13}$, and $\Gamma_{17}$. With the help of Maple (using information from Fig.~\ref{fig:model_318}), the eigenvalues of Eqn.~(\ref{chareqn}) with positive real part are computed. These eigenvalues are: \[ \lambda_1 = 0.1056\pm 58.36\,i \qquad \lambda_2 = 0.06238\pm 77.43\,i \qquad \lambda_3 = 0.04914\pm 39.32\,i. \] The leading eigenvalue comes from the equilibrium point being furthest from $\Gamma_{13}$, and its frequency suggests a period of $2\pi/58.36 \approx 0.108$, which is close to the period of oscillation seen in Fig.~\ref{fig:platelet} for $R = 0.318$. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.35\textwidth]{images/plat13bifd} & \includegraphics[width=0.35\textwidth]{images/plat13bifc} \end{tabular} \end{center} \caption{Bifurcation curves for the modified platelet model with $R = 0.318$ and $A = 100$, showing the point where the model exists in the parameter space. 
This point is outside the region of stability.} \label{fig:model_318} \end{center} \end{figure} A similar analysis can be performed for $R = 0.34$. Since this delay is larger than $\frac{1}{3}$, the four family structure is absent. Fig.~\ref{fig:model_34} shows four bifurcation curves, $\Gamma_{8}$, $\Gamma_{12}$, $\Gamma_{16}$, and $\Gamma_{19}$, between the region of stability and the location of the linearized model. As before, it is easy to use this information to determine the eigenvalues: \[ \lambda_1 = 0.2424\pm 53.51\,i \qquad \lambda_2 = 0.1988\pm 35.21\,i \qquad \lambda_3 = 0.1625\pm 71.87\,i \qquad \lambda_4 = 0.002273\pm 90.31\,i. \] The equilibrium point is furthest from $\Gamma_{12}$, and the period from $\lambda_1$ is $2\pi/53.51 \approx 0.117$, which is similar to the period of oscillation seen in Fig.~\ref{fig:platelet} for $R = 0.34$. The next two eigenvalues are moderately large, resulting in the additional irregular structure observed in the simulation. Note that the last eigenvalue with positive real part has just barely crossed the imaginary axis, which again is apparent from Fig.~\ref{fig:model_34}. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.35\textwidth]{images/plat13biff} & \includegraphics[width=0.35\textwidth]{images/plat13bife} \end{tabular} \end{center} \caption{Bifurcation curves for the modified platelet model with $R = 0.34$ and $A = 100$, showing the point where the model exists in the parameter space. This point is outside the region of stability.} \label{fig:model_34} \end{figure} The modified platelet model was also examined for delays near $R = \frac{1}{2}$. The simulation in Fig.~\ref{fig:platelet} showed the stability of the solution at $R = \frac{1}{2}$. For $R = \frac{1}{2}$, there are only two families of bifurcation curves, leaving a fairly large region of stability.
This is apparent in the leftmost graph of Fig.~\ref{fig:model_12}, where the point for the model parameters is clearly inside the region of stability. When the delay is decreased to $R = 0.48$, there is a transition at $A_1^* = 24.5$. Thus, when $A = 100$, the two family structure has broken down, and the bifurcation curves form a very different pattern. The result is shown in the middle graph of Fig.~\ref{fig:model_12}, where the bifurcation curves $\Gamma_{9}$, $\Gamma_{11}$, and $\Gamma_{13}$ are visible between the model parameter point and the region of stability. Once again, it is easy to compute the eigenvalues with positive real part for this case, giving: \[ \lambda_1 = 0.07503\pm 64.71\,i \qquad \lambda_2 = 0.05662\pm 77.46\,i \qquad \lambda_3 = 0.04591\pm 51.96\,i. \] The leading eigenvalue, $\lambda_1$, has a frequency of 64.71, which suggests a period of 0.09710. This is consistent with the observed period of oscillation in Fig.~\ref{fig:platelet}. This oscillatory behavior is irregular, reflecting strong contributions from $\lambda_2$ and $\lambda_3$, which have higher and lower frequencies, respectively. For $R = 0.51$, the rightmost graph of Fig.~\ref{fig:model_12} shows the bifurcation curves $\Gamma_{10}$ and $\Gamma_{12}$ between the model point and the region of stability. When the eigenvalues are computed, we obtain: \[ \lambda_1 = 0.03415\pm 60.02\,i \qquad \lambda_2 = 0.02930\pm 72.28\,i. \] The frequency of the leading eigenvalue is 60.02, which yields a period of 0.1047. Again, this is consistent with the observed oscillations in Fig.~\ref{fig:platelet}.
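These eigenvalue computations are easy to reproduce numerically. The following Python sketch (an illustration, not our MatLab or Maple code) applies Newton's method to the characteristic function obtained by substituting $y(t) = e^{\lambda t}$ into the linearized model (\ref{lin_model}), recovering the leading eigenvalue quoted earlier for $R = 0.318$ and its period $2\pi/58.36 \approx 0.108$:

```python
import math
import cmath

# Characteristic function of the linearized platelet model (lin_model):
# substituting y(t) = exp(lambda*t) into
#   y'(t) = -100 y(t) + 100 y(t - R) - 35 y(t - 1)
# gives h(lambda) = 0, where
def h(lam, R):
    return lam + 100 - 100 * cmath.exp(-lam * R) + 35 * cmath.exp(-lam)

def dh(lam, R):
    # derivative of h with respect to lambda
    return 1 + 100 * R * cmath.exp(-lam * R) - 35 * cmath.exp(-lam)

def newton_root(seed, R, tol=1e-12, maxit=100):
    # Standard complex Newton iteration; converges to the root
    # nearest the seed for seeds this close.
    lam = seed
    for _ in range(maxit):
        step = h(lam, R) / dh(lam, R)
        lam -= step
        if abs(step) < tol:
            break
    return lam

# Seed near the leading eigenvalue reported for R = 0.318
lam1 = newton_root(0.1 + 58.4j, 0.318)
period = 2 * math.pi / lam1.imag  # period of the dominant oscillation
```

Seeding near each of the other quoted eigenvalues with the corresponding delay $R$ should recover them in the same way.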
\begin{figure}[htb] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth]{images/plat12bifa} & \includegraphics[width=0.32\textwidth]{images/plat12bifb} & \includegraphics[width=0.32\textwidth]{images/plat12bife} \end{tabular} \end{center} \caption{Bifurcation curves for the modified platelet model near $R = 0.5$ and $A = 100$, showing the point where the model exists in the parameter space. The left figure is at $R = \frac{1}{2}$ with only two families of bifurcation curves, showing the model parameters inside the stable region. The middle figure is for $R = 0.48$, which has $\Gamma_{9}$, $\Gamma_{11}$, and $\Gamma_{13}$ between the model point and the stable region. The right figure with $R = 0.51$ has only $\Gamma_{10}$ and $\Gamma_{12}$ between the model point and the stable region.} \label{fig:model_12} \end{figure} \setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{figure}{0} \setcounter{table}{0} \section{Discussion and Conclusion} Delay differential equations (DDEs) with multiple time delays are used in a wide array of applications. In studying the stability of two delay models, we found that the DDE is highly sensitive to certain (rationally dependent) delays over some ranges of the parameters. In particular, the DDE (\ref{DDE2}) shows unusually large regions of stability for $R = \frac{1}{n}$ when $n$ is a small integer. For example, when $R = \frac{1}{2}$, the region of stability is double the Minimum Region of Stability (MRS), which itself is independent of the delay. Our geometric approach allows a systematic method for visualizing the region of stability and provides a simplified understanding of how the stability region evolves. In our motivating example, we demonstrated how easily the leading eigenvalues could be found, which helped explain some of the observed behavior in the nonlinear problem. The characteristic equation for (\ref{DDE2}) is an exponential polynomial, which is deceptively complex to analyze.
For rational delays, $R$, the curves arising from this characteristic equation organize into families, which undergo only a few types of reorderings. The most significant changes occur at values of the parameter $A$, which we defined as transitions, $A^*_j$. One interesting phenomenon that can occur at a transition is a ``stability spur,'' where a region of stability outside the main region of stability joins it, enlarging and distorting the stability region. These ``stability spurs'' also lead to interesting disconnected regions of stability in the $BC$-cross sectional parameter space. More significantly, as $R \to \frac{1}{n}^-$, the transition $A_{n-1}^* \to +\infty$, moving the accompanying distortion further away and maintaining an increased region of stability. We showed that for any $R < \frac{1}{n}$, but close, the boundary of the stability region in the $BC$-cross section at $A_{n-1}^*$ reduces primarily to just four simple curves. The regions of stability, as depicted in Figs.~\ref{fig:R249_tran} and \ref{fig:R333_tran} with primarily only four curves, allowed us to easily compute the increased area of stability for delays $R = \frac{1}{n}$ as $A \to +\infty$. The evolution of the stability region changed in limited, yet very orderly, ways for $R$ near $\frac{1}{n}$, reflecting the quasi-periodicity of the families of bifurcation curves, which self-intersect mostly through tangencies. The organization of the curves, as seen in many of the figures, produced clear patterns that could be carefully analyzed for $R = \frac{1}{n}$, resulting in the observed larger regions of stability. In this paper we analytically proved some results to support our claims. However, more analytic work is needed around the singularities that occur in the characteristic equation for $R = \frac{1}{n}$.
Furthermore, other rational delays, like $R = \frac{2}{5}$, show similar increases in their regions of stability, but we have not investigated the details of how these rationally dependent delays produce larger regions of stability. We have produced a framework for future studies of the DDE (\ref{DDE2}) and have excellent MatLab programs for further geometric investigations. Understanding the stability properties of DDE (\ref{DDE2}) is very important for a number of applications with time delays. Our results show that selecting a delay of $R = \frac{1}{n}$ for small $n$ in a model can yield stability that is easily lost with only a very small change in the delay. This ultra-sensitivity in the model can be explained by our results. This two delay problem is very complex, but our geometric analysis provides a valuable tool for future stability analysis of delay models. \newpage